
OpenIndiana NAS Build Instructions


cw823

Honeybadger Moderator
Joined
Dec 18, 2000
Location
Earth
I will be detailing my newest OpenIndiana NAS build. Open-source OS, ZFS file system.

Hardware Specs:
Intel DP55WG Motherboard
i3 540 Processor (stock cooling)
16GB (4x4GB) G.Skill DDR3-1333 (PC3-10600)
IBM M1015 SAS controller (flashed to IT firmware)
3x 80GB SATA (80GB mirror + spare)
10x 750GB SATA (4x mirrored pairs + 2 spares)
Intel Pro/1000 CT NIC (the only NIC for a home-built NAS imho)

Configuration
OpenIndiana 151 + napp-it plugin for web management
"SPEEDYDATA" pool of the 10x 750GB SATA drives (SAS1/SAS2, SAS3/SAS4, SAS5/SAS6, SAS7/SAS8 mirrors - effective RAID10) with 2x 750GB HOTSPARES
Several ZFS (v28) file shares striped across the above pool

ddbench benchmark results:
158.27 MB/s write
3413.33 MB/s read
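(For context: napp-it's dd bench is essentially a big sequential dd write followed by a read-back. A minimal hand-rolled equivalent, with a hypothetical test file on my pool and example sizes:
dd if=/dev/zero of=/SPEEDYDATA/dd.tst bs=1024k count=8192    # sequential write, ~8GB
dd if=/SPEEDYDATA/dd.tst of=/dev/null bs=1024k               # sequential read; may be served from the ARC cache in RAM
If the test file fits in ARC, the read number benchmarks RAM rather than disks - see the discussion further down.)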
 
Download the DVD or USB installer from http://openindiana.org/download/

Boot off the DVD or USB; the install is self-explanatory. For this install I created a user called "admin". You'll notice that after install, "reboot" doesn't do a full reboot, so you'll need to do a power-off at some point to remove the DVD or USB.

Log in to OpenIndiana using the user that you created when you installed OI.
Open a "Terminal" window from the top, and type:
sudo -s (hit enter)
type your password (hit enter)
Now type:
cd $HOME (hit enter)
wget -O - www.napp-it.org/nappit | perl (hit enter. That is a capital O after wget, not a zero)
1.jpg
2.jpg
3.jpg
Before rebooting type:
passwd root
Type your new password twice. The napp-it install changes the root password, so we need to change it back.
4.jpg
Reboot using the System, Shutdown method at the top. This is a good time to do a power-off and remove your DVD or USB stick.
Log in with the same user as you used above to log in to OpenIndiana.
Go up to the top and do a System, Management, Network.
Note the IP that it is currently assigned.
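If you'd rather check from a terminal instead, standard Solaris tooling works:
ifconfig -a    # lists all interfaces and their currently assigned IP addresses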

On your OI box, or on any other PC on your network, navigate to:
http://<OI ip address>:81
If you're using Firefox on your OI box, just go to:
http://127.0.0.1:81

Log in using the account that you created; you can see in my screenshot the username is admin:
5.jpg
Click on System, then on Network
6.jpg
If you want to change your IP address, click on the IP address and put your correct settings in. If managing from a network PC, once you click "set property", wait about a minute and then you can access the box via http at the new IP address.
7.jpg

Let's create a ZFS Pool

Click on the "Pools" option on the http management screen
You'll notice there is already one pool: the OS disk, as shown, in a pool called "rpool".
10.jpg
Click the "Create" link. Or, you can hover over "Pools" and then click "Create" from the box that shows up when you hover

MIRRORED POOL
Important choices:
1) Name of pool. I am calling my pool "ZFSMIRRORPOOL" because this will be a pool of mirrored disks, or essentially a pool of RAID1 arrays.
2) ZFS version, I always leave this at "default". I don't know enough about why you'd change this option, tbh
3) "mirror" - RAID1 ; "RAIDz" - RAID5 ; "RAIDz2" - RAID6 ; etc....
4) Select type of "mirror" as shown below
5) Overflow protection. Self-explanatory

Select the first two disks for the pool, choose the other options as detailed above, and click "Submit".
11.jpg
Now our two pools are listed: rpool (OS) and the ZFSMIRRORPOOL that we just created.
12.jpg
Let's add more mirrors to this pool, click on "Add vdev"
Two more disks, "mirror", Submit
13.jpg
Since I used 9 drives for this install, I added a spare as well using the "add vdev" screen
14.jpg
Here you can see our pool consisting of (4) mirrored vdevs.
15.jpg
This is where OI gets fast. Click on "ZFS Folder"
Type a name for your folder (i.e. movies, tv, porn (lol), etc...). I chose "FOLDER1".
Enable or disable guest access. If this is for the average home user with decent network security, you can turn guest access on.
Submit
16.jpg
Now we have one ZFS folder on the ZFSMIRRORPOOL. When you write to this pool, ZFS stripes your data across all 4 mirrored vdevs in the pool (each vdev keeps a mirrored copy of its share of the data), which makes this a very fast option.
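For reference, the CLI equivalent of what napp-it just did looks roughly like this (disk names are placeholders - substitute your own from the "format" output):
zpool create ZFSMIRRORPOOL mirror c3t1d0 c3t2d0    # pool with the first mirrored vdev
zpool add ZFSMIRRORPOOL mirror c3t3d0 c3t4d0       # add another mirrored vdev (repeat as needed)
zpool add ZFSMIRRORPOOL spare c3t9d0               # add a hot spare
zfs create ZFSMIRRORPOOL/FOLDER1                   # the ZFS folder is just a ZFS filesystem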

RAID5 POOL
RAIDZ = RAID5
Select the first three disks and choose the "raidz" option. I called this pool "ZFSRAID5POOL".
19.jpg
Since I'm using 9 disks, I added two additional raidz vdevs, 3 drives per vdev.
20.jpg
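Rough CLI equivalent, again with placeholder disk names:
zpool create ZFSRAID5POOL raidz c3t1d0 c3t2d0 c3t3d0    # pool with the first 3-disk raidz vdev
zpool add ZFSRAID5POOL raidz c3t4d0 c3t5d0 c3t6d0       # second raidz vdev
zpool add ZFSRAID5POOL raidz c3t7d0 c3t8d0 c3t9d0       # third raidz vdev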

RAID6 POOL
RAIDZ2 = RAID6
Select the first four disks and choose the "raidz2" option. I called this pool "ZFSRAID2POOL" (which I should've called ZFSRAID6POOL, but whatever).
21.jpg
Since I'm using 9 disks, I added one additional raidz2 vdev of 4 drives; I also added one spare.
22.jpg
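Rough CLI equivalent with placeholder disk names:
zpool create ZFSRAID2POOL raidz2 c3t1d0 c3t2d0 c3t3d0 c3t4d0    # pool with the first 4-disk raidz2 vdev
zpool add ZFSRAID2POOL raidz2 c3t5d0 c3t6d0 c3t7d0 c3t8d0       # second raidz2 vdev
zpool add ZFSRAID2POOL spare c3t9d0                             # hot spare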

Restart/Shutdown
If you need to reboot your system, go to the "System" tab and choose "Shutdown"
23.jpg


Adding users
Click on the "Users" tab, and select "add local user"
Once created you will see your user in the user list:
18.jpg

Jumbo frames on Intel NIC:
Open a Terminal window
sudo -s (hit enter, then put your password in)
nano -w /kernel/drv/e1000g.conf
8.jpg
Change the "max frame size" as appropriate, a setting of "3" uses up to 9k. It goes in order of your NICs, so if you only have one NIC, you only need to change the first "0" to a "3" as shown:
9.jpg
Ctrl-O (to save)
Ctrl-X (to exit)
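For reference, the line you're editing looks something like this (typically one comma-separated value per potential NIC instance; the exact entry count varies by driver version):
MaxFrameSize=3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;
A reboot is needed before the new frame size takes effect.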

Mirror the root pool (RAID1 the OS drive)
Open a terminal window, elevate privileges via "sudo -s"
"zpool status rpool" (hit enter. a simple "zpool status" will list status of all pools)
24.jpg
"format" (hit enter. this will list all attached disks. You can then type "ctrl-C" to close out of the "specify disk...." option)
25.jpg
Two pieces of info are needed: the name of the drive currently in rpool, and the name of the drive to add. We saw from the prior screenshot that the drive in our pool "rpool" (our OS drive) is named c3t0d0.
The drive we want to add to that pool as a mirror (RAID1) is c3t11d0
Here are the commands given the 2 drives above. You will see them in the screenshot; remember to substitute your drive names as needed. This information is taken from (http://wiki.openindiana.org/oi/2.1+Post-installation)
pfexec fdisk -B c3t11d0p0 (hit enter. see above link for why we add "p0")
pfexec prtvtoc /dev/rdsk/c3t0d0s2 | pfexec fmthard -s /dev/rdsk/c3t11d0s2 (hit enter. see above link for why we add "s2")
zpool attach -f rpool c3t0d0s0 c3t11d0s0 (hit enter. see above link for why we add "s0")
zpool status rpool (hit enter. You can watch the progress of the "resilvering" as it creates the mirror; this is normally a pretty quick process)
26.jpg
Wait till the resilver finishes
27.jpg
pfexec installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t11d0s0 (hit enter)
28.jpg

Good advice for how many drives in what type of ZFS pool:
Optimal vdev sizes (http://mail.opensolaris.org/pipermail/zfs-discuss/2010-September/044701.html):
RAIDZ1 vdevs should have 3, 5, or 9 devices in each vdev
RAIDZ2 vdevs should have 4, 6, or 10 devices in each vdev
RAIDZ3 vdevs should have 5, 7, or 11 devices in each vdev
The pattern is a power-of-two number of data disks plus parity: 2^n+1 drives for RAIDZ1, 2^n+2 for RAIDZ2, 2^n+3 for RAIDZ3.
See also: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
 
How to set up OI in ESXi 5.1 (free version). ESXi host is an i7 w/ 24GB RAM and 2x Realtek NICs. Using one IBM M1015 (IT firmware) plus one IBM SAS expander (12 total ports). I actually took my existing standalone OpenIndiana NAS, turned it off, built the VM, configured pass-through for the M1015 controller, and was able to import my existing ZFS pool into the new VM install. Very cool.

Install ESX. Check the supported hardware list for Solaris 11, but I've had pretty good luck with anything Core iX based (i.e. i3, i5, i7).
You can do all of the work below via the vSphere mgmt utility.
Create the VM using the options shown below, choosing your own disk size (10GB is plenty for the OS on OI).
1_zpsead26011.jpg
2_zps56f04051.jpg

Choose the datastore that you want to create the VM disk on.
3_zps58fafb82.jpg
4_zps45d12c0a.jpg
5_zps3d2c230a.jpg
6_zpsa78f89ad.jpg

You can change this later. I chose 2GB, but for OI I'd consider 8GB the minimum. Throw as much memory at it as you can. I believe if you have less than 4GB allocated it will install the 32-bit version rather than the 64-bit.
7_zps6220a0c8.jpg

E1000 is a generic NIC driver; if you want 10Gb NICs then you need to choose VMXNET3.
Edit: I ended up going back to E1000 for best performance. VMXNET3 is still buggy.
8_zps9abc95f5.jpg
9_zps0f8b5585.jpg
10_zps2309343f.jpg
11_zpsbd8ebffa.jpg
12_zps4ba2e2fa.jpg
13_zps6d273f6c.jpg

Go into ESX, go to the "Configuration" tab, and then click on "Storage" at the right. Choose your main datastore, right-click, and select "Browse Datastore". Then you can upload the OI ISO for installing OI.
14_zps6851d3bc.jpg
15_zps068aa5e8.jpg

Select the VM you created and click on "Edit Settings". You'll see the window(s) below. We need to tell the CD-ROM to "Connect at power on" (if it tries to boot to the NIC and then fails to boot, you forgot this step) and choose the ISO that we uploaded to the datastore.
16_zps1f80e999.jpg
17_zps04ad508e.jpg
18_zpsd716cba1.jpg

Power on the VM. You'll get default language and keyboard options; for US just choose the defaults.
19_zps2197b3ba.jpg

After the install, you'll notice the mouse is a PITA. Let's fix that and install the VMware Tools so we have the correct VGA, network, and other drivers. Right-click on the OI CD on the desktop of your VM and choose "Eject". Now go into ESX, right-click the VM, choose "Guest", and pick the "Install/Upgrade VMware Tools" option. It should then autorun in OI.
20_zps3fb5344f.jpg
21_zps032818ff.jpg

Right-click the "vmware-solaris-tools.tar.gz" and copy, then click on "File System", tmp folder, and copy the .tar.gz file into the tmp directory
22_zps031f71c2.jpg

Open a Terminal window from the top toolbar, type "su -", and hit enter. You'll have to put in the root password you chose during the install; you'll then need to change that password (oddly enough).
23_zpsae6005f3.jpg
Enter the following, hit enter after each new line as shown below
cd /tmp
tar zxvf vmware-solaris-tools.tar.gz
cd vmware-tools-distrib
./vmware-install.pl -default


25_zps2f27d798.jpg
Network setup
We want to add a separate VM network for the NFS share that we'll create from our OI NAS box. We will have one regular network (which you can connect to your physical network and other physical boxes on your network), then another VM network just for the NFS share.
First we need to create that 2nd network for NFS. The IP for that vSwitch will be 192.168.5.1. We will then configure our OI NIC to be 192.168.5.2.
Go to the ESX Configuration tab, then click on "Networking" at the left.
On the upper right of that window, click on "Add Networking".

27c_zpsa6322454.jpg
27d_zps4c966ea3.jpg

Click the add button as shown below
27e_zps3ecc20b8.jpg
27f_zpsa2efa15e.jpg
27g_zpsd27e8ecc.jpg
27h_zps45f65895.jpg
28_zps15c8fa1e.jpg

It should look something like this. Regular VMs are connected to the primary vSwitch0; vSwitch1 is just for the OI NAS and the NFS share that we'll create.
27_zpsd43ce3cb.jpg

Now let's add the 2nd NIC to the OI VM.
30_zps8992aaac.jpg
31_zpsa069a529.jpg

Again, be sure to choose "VMXNET3" and choose the 2nd vswitch that we created in previous steps.
32_zps23fb51e8.jpg

Now we need to activate and set the IP of the 2nd NIC that we just added.
33_zps3d29b60f.jpg

You'll notice the first NIC has grabbed an IP via DHCP. I will show how to change the settings for the 2nd NIC, and you can follow those same steps to set a static IP for your primary adapter (physical network connection) as well.
34_zps986fe487.jpg

Double-click the NIC that is not showing connected status. Fill in the Address and Netmask fields; you can leave the "Default Route" blank.
35_zps4011f343.jpg

Reboot. Log in, open a terminal window, and ping your vSwitch IP address to verify that the network is functioning correctly.
36_zpse57187ad.jpg
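For example, with the vSwitch IP we set above:
ping 192.168.5.1    # Solaris ping prints "192.168.5.1 is alive" on success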

See the earlier post for instructions on creating a zpool in OI. Now we'll create a ZFS folder called "ESX_NFS" on pool "ESX". Turn the samba share and nbmand off.
37_zpsb333cb11.jpg
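If you prefer the terminal, a rough CLI equivalent of that folder setup (pool/folder names from above, assuming default mountpoints):
zfs create ESX/ESX_NFS
zfs set sharesmb=off ESX/ESX_NFS    # no SMB/samba share on this folder
zfs set nbmand=off ESX/ESX_NFS      # disable non-blocking mandatory locking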

Under the NFS column, click on the "off" next to the ZFS folder that we created for ESX. Change it to "on", which should be the default option when the window comes up.
38_zpsf290a8c3.jpg
39_zpseb10c47f.jpg
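The same toggle from the command line would be:
zfs set sharenfs=on ESX/ESX_NFS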

Now let's add that NFS share to ESX. From main ESX window go to Configuration Tab, on left choose Storage. At upper right choose "Add Storage". Select NFS (Network File System) option.
40_zps8d93879f.jpg

Enter the IP address for vSwitch1 that we set in OI. Type the name of the pool and the ZFS folder as shown below (ZFSPOOL/ZFSFOLDERNAME). Then type a datastore name.
41_zps0f51312f.jpg
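If you'd rather script this step, ESXi 5.x can also mount the share from its own CLI; a sketch matching the example above (the datastore name OI_NFS is just a placeholder, and the share path is the ZFS folder's default mountpoint):
esxcli storage nfs add --host=192.168.5.2 --share=/ESX/ESX_NFS --volume-name=OI_NFS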

You should now see the NFS share that we created in OI in your ESX storage section, and it should show the drive space. You can now create virtual machines and store them on that NFS datastore.
42_zpsd5ff3861.jpg

IF YOU ARE INTENDING TO USE LOCAL DRIVES ATTACHED TO A SATA CONTROLLER:
Your motherboard must support VT-d; VT-x alone does not support passthrough and will not let you pass physical drives through. I recommend the IBM M1015 controller (8 SAS ports, SATA III), and that is what is shown below:
Go to the Configuration tab, then Advanced Settings. Click on "Edit" and choose your SATA controller. Below is what it looks like for my M1015. You should not need a separate entry for a SAS expander if you are running one.
Reboot.

26_zpsbcbc0381.jpg

Once I did the above, when I went into OI I was able to do an import and bring in the ZFS pool that was previously created on my 9x 2TB drives.
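For anyone repeating this, the import itself is just (pool name is a placeholder):
zpool import              # with no arguments, lists pools available for import
zpool import YOURPOOL     # import by name; add -f if the pool wasn't cleanly exported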
 
I will be posting pictures and screenshots of setup.
 
Oh, I'm really looking forward to reading about this one. I was thinking about setting up an OpenIndiana NAS back in 2010, but I chickened out since it didn't seem quite ready for prime time back then.
 
Oh, I'm really looking forward to reading about this one. I was thinking about setting up an OpenIndiana NAS back in 2010, but I chickened out since it didn't seem quite ready for prime time back then.

Without the napp-it plugin I'd probably be lost, although I know most of the command-line ZFS functions. napp-it makes this thing sweet, that's for sure.

Think I'm gonna take my DL380 and do some FreeNAS/Ubuntu/OpenIndiana benchmarks for ZFS. Any other major "OS" I should test for NAS?
 
That Read bench of 3413.33 MB/s seems like a Cache benchmark to me - not an actual "From Disk" read. Even assuming all 8 disks were able to sustain 150MB/s each and were in RAID-0, you'd still only be at 1200MB/s (and your setup is obviously not an 8-Drive RAID-0).

Try using a larger file size for the test - something larger than whatever buffer the RAID Array is using. I use ATTO with the 256MB option, but I'm on W7.

:cool:
 
I took a look at openindiana and man I feel lost. I know my way around Linux pretty well but modern Solaris is definitely not what I am used to :)
 
That Read bench of 3413.33 MB/s seems like a Cache benchmark to me - not an actual "From Disk" read. Even assuming all 8 disks were able to sustain 150MB/s each and were in RAID-0, you'd still only be at 1200MB/s (and your setup is obviously not an 8-Drive RAID-0).

Try using a larger file size for the test - something larger than whatever buffer the RAID Array is using. I use ATTO with the 256MB option, but I'm on W7.

:cool:

Will do. ARC cache is at 13GB, so maybe I'll do 25GB.
 
I took a look at openindiana and man I feel lost. I know my way around Linux pretty well but modern Solaris is definitely not what I am used to :)

ZFS commands are straightforward, but the napp-it plugin REALLY shines for those of us who don't know Solaris
 
Cool, I'm interested to see how this rolls out. I've been contemplating a NAS/TV box for a while. That's a lot of CPU horsepower for just a simple NAS though; how many systems are on your network?
 
Cool, I'm interested to see how this rolls out. I've been contemplating a NAS/TV box for a while. That's a lot of CPU horsepower for just a simple NAS though; how many systems are on your network?

6. I'm actually upgrading it to a Xeon 5506 (or similar i5-ish processor) and 24GB of RAM here in the next 2 weeks, upgradeable to 48GB of RAM. :D
 
Wife had our second baby Sunday night, so I'm gonna be "offline" for a bit. I have the following inbound for my NAS upgrade:

4x more 750GB drives
IBM SAS Expander
EVGA SLi3
E5504 Xeon
 
The SAS expander I got in works perfectly. It only gives me an additional 4 ports, but that's all I needed. Pictures when I have time; I caught a cold when the baby was born and haven't been feeling well.
 
Added initial install/config instructions. With the napp-it plugin it's MUCH simpler to set up, do not be afraid! It's like Webmin for Linux. ;)
 
The next few months will mean the following for my NAS.

Virtualize OpenIndiana on an i7 box with 24GB of RAM; perhaps upgrade to 6x8GB, as some X58 boards seem to support 48GB
Replace the 9x 2TB drives with 6x 3TB drives
Add a WHS to the mix for workstation backup, using iSCSI to store that image on the virtualized NAS
 
Step1 - install ESX5 - DONE
Step2 - install OpenIndiana in VM - DONE
Step3 - migrate existing OI pool to new VM - doh! current mobo doesn't support VT-d (new one ordered and will be here next week)

Already loving ESX5, especially the 10Gb link between VMs.
 