
ZFS all-in-one build - dual i7, 48-96GB RAM


cw823

Just wanted to put up a note with some great recommendations for hardware that isn't bleeding edge but will still provide great performance in a ZFS all-in-one build.

A ZFS all-in-one is a machine on which you install ESX, passing through (via VT-d/IOMMU) a SATA adapter to a virtual machine and virtualizing your NAS. More info here:
http://www.napp-it.org/napp-it/all-in-one/index_en.html
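
If you want to see what the passthrough side involves, here's a minimal sketch assuming a recent ESXi build that exposes passthrough via esxcli (on older builds it's a checkbox in the vSphere Client under Advanced Settings > DirectPath I/O). The PCI address shown is just an example:

Code:
# find the SAS controller (an M1015 shows up as an LSI SAS2008 device)
esxcli hardware pci list | grep -i lsi
# enable passthrough for that device, then reboot the host
esxcli hardware pci pcipassthru set -d 0000:03:00.0 -e true

After the reboot, add the controller to the NAS VM as a PCI device; ESX requires the VM's memory to be fully reserved for passthrough to work.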

Here are a few builds you can do rather cheaply on server-grade hardware:

Build #1 - s1156 i7
Supermicro X8SIL-F motherboard (32GB RDIMM max, s1156). This motherboard will take a standard 1156 heatsink if you pry off the stock Supermicro retention hardware (which only accepts a more expensive Supermicro cooler that is NOT necessary). You should be able to find one of these on ebay for around $70. Grab one with IPMI if you can find one.
Xeon X3440 processor. You do not want the X3430, as it does not have HT; I would go with an X3440 at minimum. I believe you also do not want normal Core i3 parts, as they do not support VT-d (important for passing your SATA controller through). You should be able to find one of these on ebay for around $55.
You can use unbuffered (max 16GB) or registered ECC (max 32GB) memory; I would recommend the latter. The X8SIL is VERY picky about memory, so I only use Kingston KVR1333D3Q8R9S/8G DIMMs. You should be able to find these at around $60/DIMM (don't pay more).
IBM M1015 SATA controller. This is the controller you will pass through to your NAS VM, along with the SATA disks attached to it. You can find these for anywhere from $70-$100 on ebay or on about any decent-sized forum. It is a SATA3 controller; you just run IT firmware on it and let your NAS VM do any RAID calculations (see the flashing sketch after this parts list).
You do not need a special PSU; anything around 500W will be MORE than enough.
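
Since the M1015 comes up for every build here: the usual cross-flash to IT firmware looks roughly like this. This is a sketch only - firmware file names vary by LSI release, so follow a full flashing guide, and the SAS address placeholder below comes from the sticker on your own card. Booted to DOS:

Code:
rem wipe the stock IBM firmware (some boards also need the SBR cleared first
rem with: megarec -writesbr 0 sbrempty.bin)
megarec -cleanflash 0
rem flash the LSI 9211-8i IT-mode firmware; skip the boot ROM on a passthrough-only card
sas2flsh -o -f 2118it.bin
rem restore the SAS address from the card's sticker (placeholder shown)
sas2flsh -o -sasadd 500605bxxxxxxxxx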

Build #2 - dual s1366 i7
Supermicro X8DTL-i motherboard. I think you can use a standard s1366 heatsink, but I have always used the Supermicro heatsinks, the active-cooling ones. The only caveat with the active coolers is that you're limited on orientation: you can't have them both exhausting air in the same direction. But if you go with an Lxxxx series CPU, they won't be generating a ton of heat. This motherboard is one of very few dual s1366 motherboards still using the ATX form factor, which is a HUGE selling point: no server case needed! You can find these regularly on ebay for around $150, which is a steal. *** Be sure to ask the seller about the BIOS version. I just purchased an X8DTL that would not support the L5630 CPU until I ran a BIOS update. Luckily I had a Xeon 5504 sitting around, which I can send out if needed if you have the same issue. ***
Xeon L5630 processor. This is a 40W i7 w/ HT, and you can run two on this motherboard: 16 logical cores, more than enough for many VMs on an ESX box. You should be able to find these for <$40 per CPU on ebay. *** You can substitute the L5520, which is also an i7 w/ HT, for a few dollars less, but it is slightly older CPU technology than the L5630 and a 60W part. ***
Memory - this motherboard will take up to 96GB of RDIMM (if using 16GB DIMMs). I would recommend 8GB ECC registered DIMMs, which you can normally find for <$50 per DIMM on ebay. 6x8GB would give you 48GB of RAM.
Same M1015 controller as recommended previously
No special PSU requirement, but it is worth noting that only one of the two 8-pin CPU connectors on the motherboard needs to be populated in the setup described above.

Build #3 - dual s1356 Xeon E5
Tyan S7045GM4NR motherboard - EATX, dual s1356, 2x SATA II & 2x mini-SAS
(2) Supermicro SNK-P0035AP4 heatsink/fan assemblies
(2) Xeon E5-2418L w/ HT
Memory - 12x 8GB DDR3 ECC registered. The board supports UDIMM, RDIMM, and LRDIMM, so lots of options here.
Same M1015 controller as recommended previously
No special PSU requirement, but it is worth noting that only one of the two 8-pin CPU connectors on the motherboard needs to be populated in the setup described above.

Updated 3/1/15
 
I would also recommend the X8SIL hardware if you support any business clients. You can virtualize a few servers, including a WHS that lets you run workstation backups. I have restored approximately 5 workstations with bad hard drives over the past three years, and the WHS is a crucial part of that workstation restore process.
 
ESX Screenshot
 

Starting a dual E5 build today (s1356); should have the hardware next week. Hoping for less power draw than my dual i7 box, with better performance.

Kicking myself for not picking up some of those dirt-cheap M4 refurb drives the other day, though.
 
Jealous. I've always lusted after a Supermicro dual Xeon setup. Love the idea of virtualizing a NAS via ESX. BTW, what's a rough total hardware cost for the second, higher-end setup?
 
1356 or 1366?
 
I love your #2 build; I'm going to be following in the same footsteps with my homelab. I think I will go with the X8DTi instead of the L - EATX is, I think, the only big difference, and I have a full-tower case that will hold an EATX board without any issue.
 
It's a great line of boards, and with 2x L5639 at $70 each you can easily have 24 threads. The X8DTL is just handy, as you said, if you don't want an EATX board and thus a larger case.
 
Hey cw, one/two more questions:

I'm thinking about upgrading the CPUs to L5640s, as they can (as you said) give me 24 threads to play with for only a 60W TDP.
edit: never mind on the CPUs - I wouldn't be able to get the wife's approval for double the CPU cost at this point, plus the cost of the coolers. But that does still lead me to the question below:

With those chips, are there any good coolers that are cheap and easily accessible but also quiet? I think I've seen folders using Arctic Cooling HSFs and such, but I wanted to make sure before I buy out my cart on ebay that I can actually cool these guys. Obviously I will need to find HSFs compatible with s1366 mounting holes, but are there issues to expect when mounting a non-server heatsink?

http://www.amazon.com/Cooler-Master-Hyper-212-RR-212E-20PK-R2/dp/B005O65JXI

Looking at something like that, and I would still go with the L5630 CPU recommended in the OP. The motherboard I'm looking at specifies that it comes with heatsinks, but if they are loud I'd probably get in trouble.
 
Any thoughts on the X8DTi-F instead of the X8DTL-i? As far as I can tell the only real difference is the board size.
 
That's the board I decided to go with (X8DTI). I hope to have it built in the next few days. Check out my progress report of said build in the HTPC/Server subforum if you're interested.
 
I can vouch for the Arctic 7s; they fit my X8DT6-F, which is s1366, so I see no reason they won't fit yours.
 
That is good to know. On Supermicro 1156 boards, I know that if you separated the metal backplate you could just use a regular 1156 cooler, but not all Supermicro boards have a backplate that can be pulled off the motherboard.

I finally got the second E5-2418L, albeit at twice the cost of the first CPU (the first was an ES), so I'm back to dual E5-2418L + 48GB of RAM.

I'm using Supermicro coolers, and they have PWM fans so I rarely hear them.
 
Well, my current server setup finally died after running for about 8 years. Time to upgrade, I guess. I got an X8DTi-F, a single L5630, and 8GB of RAM from eBay for $200 - enough to replace what I was using before. I use ZFS a lot at work and have been dying to set it up at home, but I never wanted to have a ZFS box and then another head to control VMs. So I'm giving napp-it a try. I really just need 1 Windows VM and 1 Linux VM, so the initial setup will work nicely.
 
Currently I have the following VMs:

SPEEDYNAS
2x vCPU
32GB RAM
IBM M1015 passed through
SAS expander passed through
12x 1TB drives in 3 raidz vdevs (see the zpool sketch after this list)
1x Samsung PM830 250GB SSD - cache (L2ARC)
1x Intel 100GB S3700 - log device (SLOG)
1x Pro/1000 NIC - mgmt
2x vmxnet3 NICs (one for SMB, one for NFS to the ESX host)
30GB OS drive
runs OmniOS + napp-it
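
For anyone wondering what that pool layout looks like from the OmniOS side, a rough sketch - pool and device names here are made up; use format or napp-it to find your real ones:

Code:
# 12 disks as three 4-disk raidz vdevs
zpool create tank \
    raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
    raidz c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
    raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0
# SSDs for read cache (L2ARC) and the separate log device (SLOG)
zpool add tank cache c3t0d0
zpool add tank log c3t1d0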

SPEEDYAD1
50GB OS drive
2k8 R2
2x vCPU
4GB RAM
MediaCenterMaster + uTorrent
AD/DHCP/DNS
1x vmxnet3

SPEEDYAD2
50GB OS drive
2k8 R2
4x vCPU
4GB RAM
Plex Media Server
AD/DHCP/DNS
1x vmxnet3

SPEEDYvCENTER
deployed from VMware's OVF
2x vCPU
4GB RAM
1x vmxnet3


The ESX host has a 60GB SSD for host cache plus a RAID10 of 10k disks. The only VMs stored on the RAID10 are vCenter and SPEEDYAD1; the others live on NFS storage presented by the NAS.
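
Presenting that NFS storage back to the host is only a couple of commands if you want to do it by hand. A sketch with made-up names and IP - napp-it can handle the NAS side from its GUI, and the ESXi side can also be done in the vSphere client:

Code:
# on the NAS VM: create and share a filesystem over NFS (Solarish syntax)
zfs create tank/vmstore
zfs set sharenfs=on tank/vmstore
# on the ESXi host: mount it as a datastore over the internal vmxnet3 network
esxcli storage nfs add -H 192.168.1.10 -s /tank/vmstore -v nas_vmstore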
 
Finally got my all-in-one setup. However, instead of using napp-it I decided to go with XenServer and a FreeNAS VM to handle all the ZFS. In a nutshell, you give the FreeNAS VM control of the controller card via passthrough, and there is your ZFS storage. It is essentially the same thing napp-it does, except that the FreeNAS interface is much cleaner and more polished. I wanted to use ESXi for the virtualization, but LSI cards don't behave that well with FreeBSD, especially when virtualized. Now that I have my system configured the way I want it, I'll write a how-to or blog article in the next few days.
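
Until the write-up is done, the passthrough part boils down to a couple of commands on the XenServer host. A sketch - the PCI address and VM UUID below are placeholders for your own:

Code:
# find the PCI address of the HBA
lspci | grep -i lsi
# assign it to the FreeNAS VM (shut the VM down first)
xe vm-param-set other-config:pci=0/0000:03:00.0 uuid=<freenas-vm-uuid>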
 
funzie said:
Finally got my all-in-one setup. However instead of using napp-it I decided to go with Xenserver and a FreeNAS VM to handle all the ZFS.

I've used FreeNAS; the GUI is nice, but the performance wasn't good for NFS or even Samba shares. Although that has been a while... I also do not believe that FreeNAS in a VM is a supported configuration.

funzie said:
In a nutshell you give the FreeNAS VM the ability to handle the controller card via passthrough and there is your ZFS storage. Essentially the same thing napp-it does, just that the FreeNAS interface is much cleaner and polished. I wanted to use ESXi as the virtualization but LSI cards don't behave that well with FreeBSD, especially when virtualized.

The IBM M1015 works fine with FreeBSD and pretty much any other OS, which is why it's the most-recommended controller for a ZFS NAS. I know some other LSI cards can be flashed to IT mode as well, rather than running the original RAID firmware.

funzie said:
Now that I have my system configured the way I want it I'll write a how-to or blog article in the next few days.

Perfect! I need to add a few notes to my build anyway, some NFS & 10G tweaks.
 
Next upgrade I do will be an M1015.

There is no support for FreeNAS as a VM because of the nature of the beast: FreeNAS wants control of the hard drives to handle ZFS. Also, as a side note, XenServer does not officially support FreeBSD. The trick is giving the VM direct access to the hard drives so that it can manage ZFS. It's the same way napp-it works: a VM with direct access to the hard drives.

A lot of people advise against it, because if you lose the VM handling ZFS, or the virtualization host dies, then you are supposedly out of luck with your data. But you are not at all - far from it. All you have to do is import the zpool on another machine or configuration and you have all your data (see the sketch below). The thing that should NEVER be done is creating a bunch of virtual drives that span the entire size of your array and then installing FreeNAS or napp-it on top of them - essentially having a 10TB VM or some crazy thing.
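
That recovery really is a one-liner on whatever box can see the disks. A sketch, where "tank" is an example pool name:

Code:
# scan attached disks for importable pools
zpool import
# import by name (or by the numeric pool ID if names collide)
zpool import tank
# if the pool was not cleanly exported from the dead host, force it
zpool import -f tank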

I did notice the decrease in Samba speeds. I had my zpool running on a bare-metal install of FreeNAS and was averaging 90MB/s; now I average 25MB/s on transfers. I did not get much time to investigate why, but I'll take a look at it later today or tomorrow to see if I can improve it.
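
A quick way to split network from storage when you dig in - a sketch assuming iperf is installed on both ends, with an example IP and test file path. If the raw network tests near line rate, the bottleneck is Samba or the disks, not the vNIC:

Code:
# on the FreeNAS VM
iperf -s
# on a client: near line rate here means the network path is fine
iperf -c 192.168.1.10
# then test raw read speed locally on the pool
dd if=/mnt/tank/testfile of=/dev/null bs=1M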
 