I will be detailing my newest OpenIndiana NAS build: open-source OS, ZFS file system.
Hardware Specs:
Intel DP55WG Motherboard
i3 540 Processor (stock cooling)
16GB (4x4GB) G.Skill DDR3 PC3-10600
IBM M1015 SAS controller (flashed to IT firmware)
3x 80GB SATA (80GB mirror + spare)
10x 750GB SATA (4x 750GB mirrors + 2 spares)
Intel Pro/1000 CT NIC (the only NIC for a home-built NAS imho)
Configuration
OpenIndiana oi_151 + napp-it plugin for web management
"SPEEDYDATA" pool of the 10x 750GB SATA drives (SAS1/SAS2, SAS3/SAS4, SAS5/SAS6, SAS7/SAS8 mirrored pairs - effectively RAID10) with 2x 750GB hot spares
Several ZFS (v28) file shares striped across the above pool
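For reference, a layout like SPEEDYDATA can also be built straight from the command line; napp-it essentially wraps these commands. This is a sketch only: the device names (c3t0d0 etc.) are placeholders for whatever your `format` listing shows, not my actual drives.

```shell
# Sketch of the SPEEDYDATA layout: 4 mirrored pairs striped together,
# plus 2 hot spares. Substitute your own device names from `format`.
zpool create SPEEDYDATA \
  mirror c3t0d0 c3t1d0 \
  mirror c3t2d0 c3t3d0 \
  mirror c3t4d0 c3t5d0 \
  mirror c3t6d0 c3t7d0 \
  spare c3t8d0 c3t9d0

# Verify the layout: you should see 4 mirror vdevs and 2 spares
zpool status SPEEDYDATA
```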
Boot off the DVD or USB; the install is self-explanatory. For this install I created a user called "admin". You'll notice that after the install, choosing "reboot" doesn't do a full power cycle, so you'll need to do a power-off at some point to remove the DVD or USB stick.
Login to openindiana using the user that you created when you installed OI.
Open a "Terminal" window from the top, and type:
sudo -s (hit enter)
type your password (hit enter)
Now type:
cd $HOME (hit enter)
wget -O - www.napp-it.org/nappit | perl (hit enter. That's a capital O, not a zero)
Before rebooting type:
passwd root
Type your new password twice. The napp-it install changes the root password, so we need to change it back.
Reboot using the System, Shutdown method at top. Good time to do a power-off and remove your DVD or USB stick.
Login with the same user as you used above to login to openindiana
Go up to the top and do a System, Management, Network
Note the IP that it is currently assigned.
On your OI box, or on any other PC on your network, navigate to:
http://<OI ip address>:81
If you're using Firefox on your OI box, just go to: http://127.0.0.1:81
Login using the account that you created, you can see in my screenshot the username is admin:
Click on System, then on Network
If you want to change your IP address, click on the IP address and enter your correct settings. If you're managing from a network PC, wait about a minute after clicking "set property", and then you can access the web UI via the new IP address.
Let's create a ZFS Pool
Click on the "Pools" option on the http management screen
You'll notice there is already one pool, the OS disk as shown. It is in a pool called "rpool"
Click the "Create" link. Or, you can hover over "Pools" and then click "Create" from the box that shows up when you hover
MIRRORED POOL Important choices:
1) Name of pool. I am calling my pool "ZFSMIRRORPOOL" because this will be a pool of mirrored disks, or essentially a pool of RAID1 arrays.
2) ZFS version, I always leave this at "default". I don't know enough about why you'd change this option, tbh
3) "mirror" = RAID1; "raidz" = RAID5; "raidz2" = RAID6; etc.
4) Select type of "mirror" as shown below
5) Overflow protection. Self-explanatory
Select the first two disks for the pool, choose the other options as detailed above, and click "Submit"
Now our two pools are listed, rpool (OS) and ZFSMIRRORPOOL that we just created
Let's add more mirrors to this pool, click on "Add vdev"
Two more disks, "mirror", Submit
Since I used 9 drives for this install, I added a spare as well using the "add vdev" screen
Here you can see our pool consisting of (4) mirrored vdevs.
This is where OI gets fast. Click on "ZFS Folder"
Type a name for your folder (ie movies, tv, porn (lol), etc...) I chose "FOLDER1"
enable or disable guest access. If this is for the average home user with decent network security, you can turn guest access on
Submit
Now we have one ZFS folder on ZFSMIRRORPOOL. When you write to this pool, your data is striped across all 4 mirrored vdevs in the pool, so each vdev takes a share of every write (a very fast layout).
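Under the hood, the napp-it "ZFS Folder" screen boils down to roughly these commands (pool and folder names from above; treating the SMB share setup as napp-it's default behavior is my assumption):

```shell
# Create the ZFS file system and share it over SMB/CIFS
zfs create ZFSMIRRORPOOL/FOLDER1
zfs set sharesmb=on ZFSMIRRORPOOL/FOLDER1

# Confirm the share property took effect
zfs get sharesmb ZFSMIRRORPOOL/FOLDER1
```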
RAID5 POOL
RAIDZ = RAID5
Select the first three disks and choose "raidz" option: I called this Pool "ZFSRAID5POOL"
Since I'm using 9 disks, I added two additional raidz vdevs of 3 drives each
RAID6 POOL
RAIDZ2 = RAID6
Select the first four disks and choose "raidz2" option: I called this Pool "ZFSRAID2POOL" (which I should've called ZFSRAID6POOL but whatever)
Since I'm using 9 disks, I added one additional raidz2 vdev of 4 drives, plus one spare
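The CLI equivalents for the raidz and raidz2 pools look like this. Again a sketch: the device names are placeholders, and each pool would of course use its own set of disks.

```shell
# RAID5-style pool: one 3-disk raidz vdev (repeat "raidz d1 d2 d3" to add more vdevs)
zpool create ZFSRAID5POOL raidz c3t0d0 c3t1d0 c3t2d0

# RAID6-style pool: one 4-disk raidz2 vdev plus a hot spare
zpool create ZFSRAID6POOL raidz2 c3t3d0 c3t4d0 c3t5d0 c3t6d0 spare c3t7d0
```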
Restart/Shutdown
If you need to reboot your system, go to the "System" tab and choose "Shutdown"
Adding users
Click on the "Users" tab, and select "add local user"
Once created you will see your user in the user list:
Jumbo frames on Intel NIC:
Open a Terminal window
sudo -s (hit enter, then put your password in)
nano -w /kernel/drv/e1000g.conf
Change "MaxFrameSize" as appropriate; a setting of "3" selects the largest jumbo-frame setting, which covers 9k frames. The values go in order of your NICs, so if you only have one NIC, you only need to change the first "0" to a "3" as shown:
Ctrl-O (to save)
Ctrl-X (to exit)
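For reference, the line you're editing in /kernel/drv/e1000g.conf looks something like this (the exact number of comma-separated entries depends on your driver file; only the first value matters if you have a single NIC):

```
# 0 = standard 1500-byte frames; 3 = largest jumbo-frame setting (covers 9k)
MaxFrameSize=3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;
```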
Mirror the root pool (RAID1 the OS drive)
Open a terminal window, elevate privileges via "sudo -s"
"zpool status rpool" (hit enter. a simple "zpool status" will list status of all pools)
"format" (hit enter. this will list all attached disks. You can then type "ctrl-C" to close out of the "specify disk...." option)
Two pieces of info are needed: the name of the drive currently in rpool, and the name of the drive to add. We saw from the prior screenshot that the drive in our pool "rpool" (our OS drive) is named: c3t0d0
The drive we want to add to that pool as a mirror (RAID1) is c3t11d0
Here are the commands given the 2 drives above. You will see them in the screenshot; remember to substitute your drive names as needed. This information is taken from (http://wiki.openindiana.org/oi/2.1+Post-installation)
pfexec fdisk -B c3t11d0p0 (hit enter. see above link for why we add "p0")
pfexec prtvtoc /dev/rdsk/c3t0d0s2 | pfexec fmthard -s /dev/rdsk/c3t11d0s2 (hit enter. see above link for why we add "s2")
zpool attach -f rpool c3t0d0s0 c3t11d0s0 (hit enter. see above link for why we add "s0")
zpool status rpool (hit enter. You can see the process of the "resilvering" as it creates the mirror; this is a pretty quick process, normally)
Wait until the resilver finishes
pfexec installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t11d0s0 (hit enter)
Good advice for how many drives in what type of ZFS pool:
Optimal sizes (http://mail.opensolaris.org/pipermail/zfs-discuss/2010-September/044701.html):
RAIDZ1 vdevs should have 3, 5, or 9 devices in each vdev
RAIDZ2 vdevs should have 4, 6, or 10 devices in each vdev
RAIDZ3 vdevs should have 5, 7, or 11 devices in each vdev
See also: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
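To see why vdev width matters for capacity, here's a quick back-of-the-envelope helper (plain shell arithmetic; sizes are raw, before ZFS metadata overhead):

```shell
# Usable capacity of a raidz vdev = (drives - parity) * drive size.
# parity is 1 for raidz, 2 for raidz2, 3 for raidz3.
raidz_usable_gb() {
  drives=$1; parity=$2; size_gb=$3
  echo $(( (drives - parity) * size_gb ))
}

raidz_usable_gb 5 1 750    # 5-disk raidz1 of 750GB drives: prints 3000
raidz_usable_gb 6 2 750    # 6-disk raidz2 of 750GB drives: also prints 3000
```

So a 6-disk raidz2 gives you the same usable space as a 5-disk raidz1, but survives two drive failures instead of one.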
How to set up OI in ESXi 5.1 (free version). The ESXi host is an i7 with 24GB RAM and 2x Realtek NICs, using one IBM M1015 (IT firmware) plus one IBM SAS expander (12 total ports). I actually took my existing standalone OpenIndiana NAS, turned it off, built the VM, configured pass-through for the M1015 controller, and was able to import my existing ZFS pool into the new VM install. Very cool.
Install ESX. Check the supported hardware list for Solaris 11, but I've had pretty good luck with anything Core iX based (i.e. i3, i5, i7).
You can do all of the work below via the vSphere mgmt utility.
Create the VM using the options shown below (choosing your own disk size; 10GB is plenty for the OS on OI)
Choose the datastore that you want to create the VM disk on.
You can change this later. I chose 2GB, but for OI I'd consider 8GB the minimum; throw as much memory at it as you can. I believe if you have less than 4GB allocated it will install the 32-bit version rather than the 64-bit.
E1000 is a generic NIC driver; if you want 10Gb NICs then you need to choose vmxnet3. Edit: I ended up going back to E1000 for best performance, as vmxnet3 is still buggy.
Go into ESX, go to the "Configuration" tab, and click on "Storage" at the right. Choose your main datastore, right-click, and select "Browse Datastore". Then you can upload the OI ISO for installing OI.
Select the VM you created and click on "Edit Settings". You'll see the window(s) below. We need to tell the CD-ROM to "Connect at power on" (if the VM tries to boot from the NIC and then fails to boot, you forgot this step) and choose the ISO that we uploaded to the datastore.
Power On the VM. You'll have a default language and keyboard option, for US just choose the defaults
After the install, you'll notice the mouse is a PITA. Let's fix that, and install the vmtools so we have the correct vga, network, and other drivers. Right-click on the OI CD on the desktop of your VM and choose "Eject". Now go into ESX and right-click the VM, choose "Guest", and "Install/Upgrade VMware Tools" option. It should then autorun in OI.
Right-click the "vmware-solaris-tools.tar.gz" and copy, then click on "File System", tmp folder, and copy the .tar.gz file into the tmp directory
Open a Terminal window from the top toolbar, and type "su -" and hit enter. You'll have to put in the root password you chose during the install; you'll then need to change that password (oddly enough)
Enter the following, hit enter after each new line as shown below
cd /tmp
tar zxvf vmware-solaris-tools.tar.gz
cd vmware-tools-distrib
./vmware-install.pl -default
Network setup
We want to add a separate vm network for the NFS share that we create from our OI NAS box. We will have one regular network (that you can connect to your physical network and other physical boxes on your network) then another vm network just for the NFS share.
First we need to create that 2nd network for NFS. The IP for that vmswitch will be 192.168.5.1. We will then configure our OI NIC to be 192.168.5.2
Go to the ESX Configuration tab, then click on "Networking" at the left.
on upper right of that window click on "Add Networking"
Click the add button as shown below
It should look something like this. Regular VMs connect to the primary vSwitch0; vSwitch1 is just for the OI NAS and the NFS share that we'll create.
Now let's add the 2nd NIC to the OI VM.
Again, be sure to choose "VMXNET3" and choose the 2nd vswitch that we created in previous steps.
Now we need to activate and set the IP of the 2nd vswitch that we just added
You'll notice the first NIC has grabbed an IP via DHCP. I will show how to change the settings for the 2nd NIC, and you can follow those same steps to set a static IP for your primary adapter (physical network connection) as well.
Double-click the NIC that is not showing connected status. Fill in the Address and Netmask fields; you can leave "Default Route" blank.
Reboot. Login, open a terminal window, and ping your vswitch IP address to verify that network is functioning correctly.
See the instructions above (other post) for creating a zpool in OI. Now we'll create a ZFS folder called "ESX_NFS" on pool "ESX". Turn the SMB share and nbmand off.
Under the NFS column, click on the "off" next to the ZFS folder that we created for ESX. Change it to "on" option which should be default when window comes up.
Now let's add that NFS share to ESX. From main ESX window go to Configuration Tab, on left choose Storage. At upper right choose "Add Storage". Select NFS (Network File System) option.
Enter the IP address for vswitch1 that we set in OI. Type the name of the pool and the ZFS folder as shown below (ZFSPOOL/ZFSFOLDERNAME). Then type a datastore name
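If you prefer the ESXi shell over the vSphere GUI, the same datastore can be added with esxcli (ESXi 5.x syntax; the IP, share path, and datastore name below match this walkthrough, but the datastore name itself is just my placeholder):

```shell
# Mount the OI NFS export (pool ESX, folder ESX_NFS) as an ESXi datastore,
# reaching OI over the NFS-only vswitch IP we configured (192.168.5.2)
esxcli storage nfs add -H 192.168.5.2 -s /ESX/ESX_NFS -v OI_NFS_DATASTORE

# Confirm the mount shows up
esxcli storage nfs list
```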
You should now see the NFS share that we created in OI in your ESX storage section. It should show the drive space. You can now create virtual machines and store them on that ESX share.
IF YOU ARE INTENDING TO USE LOCAL DRIVES ATTACHED TO A SATA CONTROLLER:
Your motherboard must support VT-d; VT-x alone does not support passthrough and will not let you pass through physical drives. I recommend the IBM M1015 controller (8 SAS ports, SATA III), and that is what is shown below:
Go to the Configuration tab, then Advanced Settings. Click on "Edit" and choose your SATA controller. Below is what it looks like for my M1015. You should not need a separate entry for a SAS expander if you are running one.
Reboot.
Once I did the above, when I went into OI I was able to do an import and pull in the ZFS pool that was previously created on my 9x 2TB drives.
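The import itself is only a couple of commands. Running `zpool import` with no arguments scans the passed-through disks and lists importable pools by name (I'm using the pool name from the first post here; yours will be whatever you named it):

```shell
# List pools found on the attached/passed-through disks
zpool import

# Import by name; add -f if the pool wasn't exported cleanly on the old box
zpool import SPEEDYDATA

# Verify all vdevs came back online
zpool status SPEEDYDATA
```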
Oh, I'm really looking forward to reading about this one. I was thinking about setting up an OpenIndiana NAS back in 2010, but I chickened out since it didn't seem quite ready for prime time back then.
Without the napp-it plugin I'd probably be lost, although I know most of the command-line ZFS functions. napp-it makes this thing sweet, that's for sure.
Think I'm gonna take my DL380 and do some freenas/ubuntu/openindiana benchmarks for ZFS, any other major "OS" I should test for NAS?
That Read bench of 3413.33 MB/s seems like a Cache benchmark to me - not an actual "From Disk" read. Even assuming all 8 disks were able to sustain 150MB/s each and were in RAID-0, you'd still only be at 1200MB/s (and your setup is obviously not an 8-Drive RAID-0).
Try using a larger file size for the test - something larger than whatever buffer the RAID Array is using. I use ATTO with the 256MB option, but I'm on W7.
Cool, I'm interested to see how this rolls out. I've been contemplating a NAS/TV box for a while. That's a lot of CPU horsepower for just a simple NAS though; how many systems are on your network?
The SAS expander I got in works perfectly. Only gives me an additional 4 ports but that's all I needed. Pictures when I have time, caught a cold when baby was born and haven't been feeling well.
The next few months will mean the following for my NAS.
Virtualize OpenIndiana on an i7 box with 24GB of RAM; perhaps upgrade to 6x8GB, as some X58 boards seem to support 48GB
Replace the 9x 2TB drives with 6x 3TB drives
Add a WHS to the mix for workstation backup using iscsi to store that image on the virtualized NAS
Step1 - install ESX5 - DONE
Step2 - install OpenIndiana in VM - DONE
Step3 - migrate existing OI pool to new VM - doh! current mobo doesn't support vt-d (new one ordered and will be here next week)
Already loving ESX5, especially the 10Gb link between VMs.