
NVMe SSD vs SATA SSDs in RAID


NiHaoMike (dBa Member, joined Mar 2013):
I'm working on an HP ProLiant DL380p G8 (converting it into a budget AI development workstation). In its single-CPU configuration, there are 3 PCIe slots available: the x16 slot is used by the GPU, leaving an x8 and an x4 slot. One of those will be used for USB 3, leaving just one slot free.

Since I expect to try things like machine vision, I would like to keep that slot free to add something like a video capture card later on. For storage, the motherboard has 1x SATA and 8x SAS ports, with the SAS ports attached to a hardware RAID controller. If I put one HDD on a SAS port (for bulk storage, with NVRAM write cache) and use the remaining 7 ports for SSDs, how would that SATA SSD array compare to an NVMe SSD?
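For a rough sense of scale, here's a back-of-envelope bandwidth comparison. This is a sketch under assumed typical figures (~550 MB/s per SATA III SSD, ~985 MB/s usable per PCIe 3.0 lane), not measurements of this hardware, and it ignores controller and protocol overhead:

```python
# Back-of-envelope bandwidth comparison (assumed typical figures;
# real-world numbers will be lower due to overhead).

SATA3_MBPS = 550        # practical ceiling of one SATA III SSD (assumption)
PCIE3_LANE_MBPS = 985   # ~usable bandwidth per PCIe 3.0 lane (assumption)

# 7 SATA SSDs striped behind the SAS controller, assuming ideal scaling
sata_array = 7 * SATA3_MBPS

# A typical NVMe SSD in a PCIe 3.0 x4 slot
nvme_x4 = 4 * PCIE3_LANE_MBPS

print(f"7x SATA SSD array (ideal): ~{sata_array} MB/s")   # ~3850 MB/s
print(f"Single PCIe 3.0 x4 NVMe:   ~{nvme_x4} MB/s")      # ~3940 MB/s
```

On sequential throughput alone the two options land in the same ballpark; the bigger practical differences are in latency, IOPS, and what the RAID controller itself can sustain.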
 
On this server, you don't have many options. NVMe will perform better, but the DL380 G8 is already old, and no matter what you do, the platform will limit performance.
You need the remaining PCIe slots for other things, and I assume you still want RAID. To run RAID on NVMe, you need an additional controller/RAID card, which isn't cheap. For SATA/SAS, you already have it built in. Either way, it's not worth investing much in this machine. The cheapest route would probably be a regular M.2-to-PCIe adapter card with a single NVMe SSD. If you don't have SATA/SAS drive trays or SATA-SAS cables, then you need SATA SSDs designed for this server, which is also expensive.

In short, unless you go with the single-NVMe budget option (assuming the PCIe adapter card even works) or use SATA-SAS cables plus SATA SSDs, every other option feels like a waste of money on this server.
 
I went with this old platform since I found a cheap deal ($68, including everything needed to make it work except storage) and had 192GB of DDR3 LRDIMMs to use. I already installed an RTX 3060 Ti (with the help of a PCIe flex cable), and a cheap SAS breakout cable made it work with regular SATA HDDs and SSDs. (Going the "official" route would indeed add a lot of cost for drive caddies, but I don't need hot swapping, so directly connecting drives to cables like in a regular PC is fine. I plan to 3D print a holder for the drives once I have settled on a configuration.)

Keep in mind that I'm a beginner at AI development, so it wouldn't make sense to invest a whole lot in a setup quite yet.
 
There are quite cheap SAS-to-4x-SATA breakout cables. I would connect one of those and set up RAID10 on 4 SATA SSDs. That will give you roughly 800 MB/s sequential read/write, bump up IOPS, and keep the array running if any one SSD fails. On the other hand, I don't know if 4 SSDs, even SATA, are worth the additional cost. Maybe 2 in RAID1 will be good enough (if you care about having the OS/data protected in case one drive fails). If you make backups or drive images from time to time, then even a single SSD may be enough for your needs.
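The ~800 MB/s figure can be sanity-checked with simple arithmetic. This is a sketch assuming ~550 MB/s per SATA SSD and an arbitrary 75% efficiency factor for controller/protocol overhead; both numbers are guesses, not measurements:

```python
# Rough RAID10 throughput estimate for 4 SATA SSDs (all figures assumed).

DRIVE_MBPS = 550    # per-drive sequential throughput (assumption)
EFFICIENCY = 0.75   # controller/protocol overhead factor (assumption)

# RAID10 with 4 drives = 2 mirrored pairs striped together:
# sequential writes land on 2 independent pairs; reads can use all 4 disks.
seq_write = int(2 * DRIVE_MBPS * EFFICIENCY)
seq_read = int(4 * DRIVE_MBPS * EFFICIENCY)

print(f"RAID10 seq. write: ~{seq_write} MB/s")   # ~825 MB/s
print(f"RAID10 seq. read:  ~{seq_read} MB/s")    # ~1650 MB/s
```

Writes come out close to the ~800 MB/s mentioned above; reads can in principle go higher because both halves of each mirror can serve data.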

I have no idea if you have a version with an HBA or a controller with cache. If the SAS controller has cache, then it's already not so bad.
I wouldn't bother with NVMe, because it will be limited to PCIe 3.0 and may also get limited PCIe lanes. Since it's an old generation and the CPUs won't push storage performance, you can expect not much difference between a single NVMe SSD and 4x SATA in RAID10. Just a quick calculation in my head, so I could be wrong.

Older servers are cheap now, and barely anyone sells them with storage. Actually, storage is the most expensive part if you want a larger RAID.
 
It's a hardware controller with a 2GB NVRAM cache and RAID5/6 support. I think I'll set it to use the cache only for writes to the HDD, since that's what would benefit the most: the kernel already does read caching using otherwise unused RAM (write caching is generally limited to avoid data loss if the system crashes, though the NVRAM module has safeguards against that), and the SSDs are fast enough that the NVRAM only gives a marginal improvement.
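A quick way to check whether a caching change actually helps is to time a sequential write yourself. Here is a minimal sketch; the function name, file size, and block size are my own arbitrary choices, and `os.fsync` is used so the timing includes the data actually reaching the device rather than just the page cache:

```python
import os
import time


def seq_write_mbps(path, total_mb=256, block_kb=1024):
    """Write total_mb of zeros sequentially to path and return MB/s."""
    block = b"\0" * (block_kb * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb * 1024 // block_kb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # ensure data hit the device, not just RAM
    elapsed = time.perf_counter() - start
    os.remove(path)
    return total_mb / elapsed


# Example: run once against the cached HDD volume and once against the
# SSD array, before and after changing the controller's cache settings.
# print(seq_write_mbps("/mnt/hdd/bench.tmp"))
```

Comparing the numbers before and after flipping the controller's write-cache setting shows whether the NVRAM is really only a marginal win for the SSDs.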
 