In this article we will go through the process of building a storage platform for your data needs. I have been contemplating updating my storage server for a while now, since it is becoming outdated and can no longer keep up with my demands. My current setup runs on an old Intel P35-based motherboard with a dual-core Intel E8400 processor, doing RAID 5 across 4*1TB hard drives via a Dell PERC 5/i RAID adapter. This setup was great back in the day, but it is now suffering in the performance department and lacking in space.

This meant that I needed to look at other solutions to find something that would suit my needs. From the research that I did, the Zettabyte File System, or ZFS for short, was a solution that kept coming up. What I like about ZFS is that it is a robust enterprise solution that is open source and has great community support. ZFS is implemented in a number of operating systems, some of the major ones being OmniOS, OpenIndiana, and Solaris. I decided to use OpenIndiana, as it is a fork of Solaris, which Oracle has turned into a closed-source system.

Build:

Now that the platform decision is out of the way, we will look at the hardware. If you want a trouble-free experience when it comes to compatibility, the majority of people will recommend Intel-based server-class hardware, as it tends to be well supported across the board. I knew that I was going to be building around the new Intel Haswell processor, so I decided to go for an enterprise Intel Xeon processor and a socket 1150 motherboard. The motherboard I was looking for needed to have a minimum of 8 SATA 6Gb/s ports, support for at least 16GB of DDR3 ECC RAM, Xeon E3-1200 v3 processor compatibility, dual NICs (for teaming/bonding), and a PCI Express slot for a small graphics card. I decided to pick up the ASRock C226 WS motherboard since it met all of my criteria.

For the processor I chose the Xeon E3-1220V3. It supports all of the virtualization options should I decide to virtualize later on, and it should also be able to handle encryption if you decide to enable that on your build, as well as compression and deduplication, both of which require good processing capabilities.

The RAM that I decided on was Kingston model KVR16E11/4. It is ECC DDR3-1600 RAM, and ECC is important to have since memory corruption could lead to corrupt data. I am starting out with 8GB for now and will add more depending on the ARC miss rate. The ARC is a cache in RAM where the most frequently used data is kept; if the data is not in the ARC, we fall back to a slow read from disk. A bigger ARC, which means more RAM, lets us serve more of our reads from memory, giving us better performance.
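
If you later want to check whether the ARC is large enough, illumos exposes its counters through kstat. Here is a minimal sketch (the exact statistic names can vary slightly between releases):

root@HOSTNAME:# kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses zfs:0:arcstats:size

A high miss count relative to hits suggests the working set does not fit in the ARC and that more RAM would help.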

For the power supply, make sure that you get something that will power the number of disks you are planning to have in the system. When it comes to picking out the hard disks, make sure that you get similar drives, i.e. they should match in terms of RPM, SATA 2 or 3, cache size, and storage size. This covers the majority of the components for this system; if you want some better performance, you can throw in an SSD or flash card for the operating system, which shouldn’t take more than 20GB. Here is a full list of the components in my build:

ASRock C226 WS

Intel Xeon E3-1220V3

2*Kingston Memory KVR16E11/4

2*Seagate 600 SSD 240GB (RAID 0 to host VMs)

2*Toshiba DT01ACA300 3TB

2*Seagate Barracuda 7200.14 ST3000DM001 3TB

ARK 3U390A 3U Rackmount Server Case

SeaSonic M12II 620 PSU (I had this lying around)

Note: The computer case above arrived with missing screws, and some of the parts didn’t line up properly. I recommend not purchasing from ARK, as I submitted a support case via their website and they never contacted me.

Installation:

The installation should be straightforward. Start out by going over to http://openindiana.org/ to grab the latest version. Once you have the image file, burn it to a DVD and boot from it. After booting from the DVD, you will have an icon on the desktop that gives you the option to install OpenIndiana.

After selecting the icon, go through the walkthrough and install the OS.

The motherboard comes with two Intel i210 Ethernet ports. The adapter is not recognized by OpenIndiana, which means that you have to install the drivers for it manually. The source for the latest driver can be found here:

https://github.com/illumos/illumos-gate/blob/master/usr/src/uts/common/io/igb/igb_main.c

And you can grab them from this link:

igb.tar.bz2

Once you have downloaded the files, you want to go ahead and back up the current drivers to a temporary location. The drivers are located in “/kernel/drv” and in “/kernel/drv/amd64”. The commands below will do this for you. Note that these are executed as the root user, since you will need elevated permissions to do so.

root@HOSTNAME:/# cd /kernel/drv

root@HOSTNAME:/kernel/drv# tar -cvf - igb.conf igb amd64/igb | bzip2 > /tmp/igb.tar.bz2

Note: The backup of the old drivers will be in “/tmp”.

While you are in “/kernel/drv”, you want to delete igb and igb.conf from this directory, as well as the igb driver from the “/kernel/drv/amd64” directory:

root@HOSTNAME:/kernel/drv# rm igb igb.conf amd64/igb

The next step is to extract the contents of igb.tar.bz2 to “/kernel/drv”; whatever is in the archive’s amd64 directory must go into “/kernel/drv/amd64”.
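
Assuming the new igb.tar.bz2 was downloaded to your home directory (not to be confused with the backup we placed in /tmp), the extraction could look something like this; if the archive contains an amd64 subdirectory, it will land in “/kernel/drv/amd64” automatically, otherwise move the 64-bit binary there by hand:

root@HOSTNAME:# cd /kernel/drv

root@HOSTNAME:/kernel/drv# bzcat ~/igb.tar.bz2 | tar -xvf -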

Now that you have the drivers in the correct location, we must find the Ethernet controller information using the following command:

root@HOSTNAME:# prtconf -pv


Note: The vendor-id and device-id identify our adapter.
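
If you do not want to scroll through the full output, the vendor/device pair also shows up in the “compatible” property, so a grep can narrow things down (8086 is Intel’s vendor-id):

root@HOSTNAME:# prtconf -pv | grep pciex8086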

If you ever have a vendor-id and device-id for an unknown device and want to find out what it is, you can use this site.

With the driver information from above, we want to go ahead and modify “/etc/driver_aliases”:

root@HOSTNAME:# nano /etc/driver_aliases

Add the following line for your driver, e.g. igb “pciex8086,1533”, where 8086 is the vendor-id and 1533 is the device-id.

The next step is to load the driver:

root@HOSTNAME:# modload /kernel/drv/igb

We then update the boot archive using the following command:

root@HOSTNAME:# bootadm update-archive

We can now go ahead and plumb the interface. The plumb command will activate the interface.

root@HOSTNAME:# ifconfig igb0 plumb

We will then configure the interface to get an address via DHCP:

root@HOSTNAME:# ifconfig igb0 dhcp

You can verify that it is up and running by using the following command:

root@HOSTNAME:# ifconfig igb0


I then disable the default network service:

root@HOSTNAME:# svcadm disable svc:/network/physical:default

and enable the network auto-configuration service:

root@HOSTNAME:# svcadm enable svc:/network/physical:nwam

I did a reboot afterwards and verified that everything was up and running without any issues. You can now set a static IP; I found using the GUI to be the best way to go about it. I spent a long time trying to modify configuration files via the command line to set a static IP address and ran into many problems. The easier way to set a static address is to go to System–>Administration–>Network.

Here you will see the current connection status for both of your NICs.

You can then select one from the drop-down at the top and modify the settings that you want it to have:

napp-it:

The next thing that we want to do is install napp-it, which will give us an easier way to manage our server. Installing napp-it is pretty simple and requires two commands:

USERNAME@HOSTNAME:# su

root@HOSTNAME:# wget -O - www.napp-it.org/nappit | perl

Wait for the installation to finish; once it is done, you can access the napp-it interface on port 81.

e.g. http://localhost:81 or http://IPADDRESS:81

The first time that you access napp-it you will be presented with the following screen:

Currently there is no password set on the administrator account, so hitting login will take us to the configuration page to set up the credentials.

Fill out the information and hit submit when done. When you log in again, you will be presented with some information about your system:

Head over to home–>about–>update and check to see if there are any new updates pending.

Note: napp-it has a change log that you can view here if you are wondering what fixes are in the newer versions.

Now that we have the latest version, we will head over to home–>disks–>initialize and initialize all of the disks that we will be grouping into a pool. Note that when you initialize a disk, it will get wiped.

Wait for the process to finish while your disks are being initialized.

Now that our disks have been initialized, we create a pool from them by going over to Pools–>Create pool.

Here you will select which disks you want to be part of the pool as well as the ZFS version of the pool. You will also choose the ZFS RAID level; I have set mine to raidz. There are other levels that you can choose from depending on what you are doing and the number of disks in your environment. This blog post has some really good design considerations that should be taken into account.
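
For reference, napp-it is doing the equivalent of a zpool create under the hood. Here is a sketch with hypothetical pool and disk names (run format to list your actual devices):

root@HOSTNAME:# zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 # pool/disk names are hypothetical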

Here is a table that lists the features provided by the different pool versions, courtesy of Oracle:

Version  Build  Description
1 snv_36 Initial ZFS version
2 snv_38 Ditto blocks (replicated metadata)
3 snv_42 Hot spares and double parity RAID-Z
4 snv_62 zpool history
5 snv_62 gzip compression algorithm
6 snv_62 bootfs pool property
7 snv_68 Separate intent log devices
8 snv_69 Delegated administration
9 snv_77 refquota and refreservation properties
10 snv_78 Cache devices
11 snv_94 Improved scrub performance
12 snv_96 Snapshot properties
13 snv_98 snapused property
14 snv_103 aclinherit passthrough-x property
15 snv_114 user and group space accounting
16 snv_116 stmf property
17 snv_120 Triple-parity RAID-Z
18 snv_121 Snapshot user holds
19 snv_125 Log device removal
20 snv_128 zle (zero-length encoding) compression algorithm
21 snv_128 Deduplication
22 snv_128 Received properties
23 snv_135 Slim ZIL
24 snv_137 System attributes
25 snv_140 Improved scrub stats
26 snv_141 Improved snapshot deletion performance
27 snv_145 Improved snapshot creation performance
28 snv_147 Multiple vdev replacements
29 snv_148 RAID-Z/mirror hybrid allocator
30 snv_149 Encryption
31 snv_150 Improved ‘zfs list’ performance
32 snv_151 One MB blocksize
33 snv_163 Improved share support

Now that you have created your pool, you can see that it is showing up in the pool overview:

The next step is to create a ZFS file system by heading over to ZFS Filesystems–>Create.

You will choose the pool where you will be creating the file system and select some basic parameters. I have enabled SMB on mine, since this will be shared to Windows clients. After you create your file system, you want to head over to ZFS Filesystems and enable compression.
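
For those who prefer a shell, the equivalent commands would look something like the following, assuming a hypothetical pool named tank and a filesystem named storage:

root@HOSTNAME:# zfs create tank/storage # names are hypothetical

root@HOSTNAME:# zfs set sharesmb=on tank/storage

root@HOSTNAME:# zfs set compression=on tank/storage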

Note that if you head over to pools–>features you can enable feature flags.

By enabling feature flags you can use LZ4 compression rather than LZJB.

A rough comparative analysis of LZ4 performance vs. LZJB:

  • Approximately 50% faster compression when operating on compressible data.
  • Approximately 80% faster on decompression.
  • Over three times faster on compression of incompressible data.
  • Higher compression ratio (up to 10% on the larger block sizes).
  • Performance on modern CPUs often exceeds 500 MB/s on compression and over 1.5 GB/s on decompression and incompressible data (per single CPU core).

Note: “Feature flags will make your pool unimportable on systems which lack support for feature flags (e.g. Solaris 10, Oracle Solaris 11, etc.).” See here for more information.
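
Once the pool has been upgraded to support feature flags, switching the filesystem over to LZ4 takes two commands (again using the hypothetical tank/storage names from above):

root@HOSTNAME:# zpool set feature@lz4_compress=enabled tank

root@HOSTNAME:# zfs set compression=lz4 tank/storage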

The next step is to verify that the SMB server service is enabled, which can be done over at Services–>SMB.
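
You can also verify this from a shell, since the SMB server runs as an SMF service:

root@HOSTNAME:# svcs svc:/network/smb/server:default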

We will now need to create user accounts on the system so that we can grant them access to the SMB shares.

Select the add local user option and fill in the information for your user account.

After your account has been created, you want to go ahead and add it to the local administrators group. Select the “administrators” option from SMB-Groups, and near the bottom you will see a list of the current members of this group. You can then select add member to add your newly created account.

DNS registration:

If you are running pfSense, it is a good idea to add an entry that maps the hostname of the file server to the static IP addresses of its interfaces. You need to head over to Services–>DNS Forwarder to do so.

This will allow us to use the hostname “HOST.YOURDOMAIN.TLD” to map the SMB share on different machines in your environment that are using your pfSense box as their DNS server. Make sure that you save your changes and restart DNS to verify that it works.
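
From any client that uses the pfSense box as its DNS server, a quick lookup confirms that the entry resolves:

nslookup HOST.YOURDOMAIN.TLD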

Note: Since we have two IP addresses configured on our OI (OpenIndiana) box, attaching both of them to the same hostname means that DNS will effectively load balance across both NICs whenever you reach the box via that hostname.

We can now map the share from any operating system; if you are using a Windows-based OS, it should be simple.
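
For example, on a Windows client you could map the share from a command prompt as follows (the share name storage is hypothetical; use whatever you configured earlier):

net use Z: \\HOST.YOURDOMAIN.TLD\storage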

Success

Email Notifications:

The last thing that we will do before finalizing this article is enable email notifications for napp-it. This is very useful because we will be notified in the event of a drive failure. If you head over to about–>Settings, you can fill in the email notification settings at the bottom.

I am going to be using Gmail as the SMTP mail server, and I have typed in a Gmail account in the SMTP user box that will be used for authentication. Don’t forget to put an email address in the mailto box.

After making these changes, the email notifications will still not work, because we need the TLS modules installed on our system so that we can connect to Google’s mail server. To install the modules on OI, we must open a shell and either log in as root or run the commands via su. The first package that we will install is a Perl extension for using OpenSSL, which can be done via the following command:

"pkg install net-ssleay"

We will now access CPAN, which is the Comprehensive Perl Archive Network:

"perl -MCPAN -e shell"

Note: answer yes to every prompt.

We will then install the SMTP client that supports TLS and AUTH:

"install Net::SMTP::TLS"

Note: answer yes to every prompt.

The next commands will upgrade CPAN and exit the CPAN shell:

"install CPAN"

"reload cpan"

"exit"

We now need to install IO::Socket::SSL, which is a module that provides an interface to SSL sockets; existing programs can use it to provide SSL encryption.

"wget http://search.cpan.org/CPAN/authors/id/S/SU/SULLR/IO-Socket-SSL-1.68.tar.gz"
"tar xzvf IO-Socket-SSL-1.68.tar.gz"
"cd IO-Socket-SSL-1.68"
"perl Makefile.PL"
"make"
"make install"

Once you are done with the above, you can head over to Jobs–>TLS Email–>Enable TLS mail.

We will then test it using the TLS-test option.

Success

Don’t forget to modify your alert and status settings under email.

This wraps up this article as it is getting quite long. I might post some performance statistics in a future article. Thank you for reading and see you next time.
