This is an unusual build for the Overclockers site, but quite common for the environment in which I work. I have two other servers configured as archive retention servers with around 24 TB of (formatted) storage each. The case and build are more expensive than I would consider for anything other than business use, or for situations where nearly bulletproof builds and stability are needed. Less expensive cases and components are available.
In this case, the server will be mounted in a dedicated cage space in a colocation facility about 20 miles from the office. Since it is also about 80 miles from my house and I am on call for most nights and weekends, corporate standards and my sanity require this to be remotely manageable and exceptionally reliable.
The server will function as a backup solution for our environment, hosting approximately 20 TB of space for a virtual tape library (VTL) and handling a quad-drive LTO-4 library. It replaces our existing server, which has a 9.55 TB VTL on an FC-to-SATA bridge array and controllers for the LTO-4 library. A requirement for the design process is that the system be both highly available and remotely manageable.
This case has 24 drive bays with hot swap backplanes and 1200 W redundant power. The specifications below include 13 array drives, two OS drives and one global hot spare, with eight open slots for further storage expansion.
The motherboard was selected based on functionality and the availability of a dedicated network port providing IPMI with KVM-over-IP and remote management of power states. A primary requirement was four PCI Express slots running at x8 and at least one conventional PCI 2.0 slot. This server will have four connections to the tape library drives, one for the library controller, a dual-port FC connection to our meshed FC environment and bonded NICs for network connections.
Processor and RAM utilization are low; because I/O is offloaded to the RAID controller, an entry-level Nehalem with 6 GB of RAM will handle current and future loads.
The RAID controller has a relatively large cache, a quick processor, out-of-band management and a battery backup unit to protect cached data through a power loss.
Existing library controllers from the current server will be incorporated into the new server to control costs.
- 2 drive RAID-1 OS array
- 7 drive RAID-5 VTL array
- 6 drive RAID-5 VTL array
- 1 drive global hot spare
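As a sanity check on the planned layout, here is a quick sketch of usable capacity. The per-set arithmetic (one drive of parity per RAID-5 set, one drive usable in RAID-1) is standard; the 2 TB decimal drive size is from the parts list, and actual formatted capacity will be somewhat lower.

```python
# Rough usable-capacity estimate for the planned arrays.
# Assumes 2 TB (decimal) drives; RAID-1 yields one drive of space,
# RAID-5 yields n-1 drives. Formatted capacity will be somewhat lower.

DRIVE_TB = 2.0

def raid5_usable(drives, size_tb=DRIVE_TB):
    """Usable space of an n-drive RAID-5 set (one drive of parity)."""
    return (drives - 1) * size_tb

os_array = 1 * DRIVE_TB                    # 2-drive RAID-1
vtl = raid5_usable(7) + raid5_usable(6)    # the two VTL arrays

print(f"OS array: {os_array:.0f} TB, VTL space: {vtl:.0f} TB")
```

The two VTL arrays total 22 TB raw, which lines up with the roughly 20 TB of formatted space mentioned above.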
Parts list, to be ordered
- Supermicro X8ST3-F Mainboard
- Supermicro SC846A-R1200B Case and power supply
- Intel Xeon E5506 Nehalem-EP 2.13 GHz
- Kingston ValueRAM 6GB (3x2GB) DDR3 ECC Unbuffered
- Areca ARC-1261ML-2GB PCI Express x8 SATA II Controller Card
- Areca ARC-6120 Battery Backup Module
- 16 × 2 TB 7200 RPM HDDs
- Corsair A50 heatsink and fan – Changed to CoolerMaster Hyper N520 for clearance reasons
First, the less than fun part of the process. Supplies and parts were either out of stock, arrived damaged, or both. A couple of weeks after the initial orders, when all parts were supposed to have arrived, I had a case, hard drives, CPU, RAM and heatsink. The mainboard and Areca controller parts were on backorder. The Supermicro board was unavailable nationwide. A couple more weeks and I had all items, in theory. The mainboard’s northbridge heatsink had become dislodged in shipment. This should have been a simple reapplication of TIM and reinstallation of the heatsink. Unfortunately, the heatsink managed to crush a surface mount resistor pack and the RMA process began.
I wish I could say that I had better luck with the Areca controller. I ordered the 2GB cache version, but received two cards, both with 512MB of cache. Additionally, the battery backup was shipped for an external array rather than the internal version I needed. I could have fabricated the PCI slot bracket, but lacked the proper cable to hook it up. To date I have received the battery backup and a 2GB cache upgrade module. Unfortunately, the cache RAM is too warped to use and the controller will not initialize when installed. Back to the RMA process again…
These events were partially caused by using different suppliers than I normally do, due to corporate purchasing changes. The new supplier for the Areca parts simply didn’t have the experience with the part numbers and communication with their vendors left much to be desired. Bad parts can happen on any shipment and patience is a virtue.
Finally, I had to change the planned HSF. The Corsair sits about 1/3” too tall when installed. I can’t say that I was thrilled with using the CoolerMaster HSF that a local dealer had on hand, but it is working appropriately. Since this chassis will never be overclocked, I do not expect the CPU to ever get warm.
Now, on to the more interesting parts.
Supermicro does an excellent job of packaging. The outer box is mounted on feet and well protected from ground intrusion or drops. The cardboard is thickly corrugated and able to withstand large forces. As you can see from the first picture, the packaging can be abused in shipment. This was undoubtedly caused by something very heavy being placed or falling on the box. Despite this, the contents inside were completely unharmed.
Looking closer, there is a second internal box protecting the chassis. The internal foam is also thick and well designed to withstand shipment. The foam depth on all sides of the box exceeds three inches. The mounting rails and hardware box are positioned to avoid damage and prevented from moving within the packaging.
The chassis design is relatively standard for servers with high numbers of hot swappable disks. There are no optical drives or other devices on the front of the case. The power and reset controls and indicators are located on the bezel on the left side. They are clearly labeled, visible and accessible when the system is racked.
The metal work is done well with rolled edges on all exposed areas. The power supply latches work cleanly with no doubt of successful insertion. The drive bays and backplane are well constructed and have no issues on insertion or removal. The top cover opens easily once two tabs are pushed to release the latches.
The interior of the case is well laid out and provides sufficient room for EATX form factor mainboards. Included with the case are mounts for a slim, laptop form factor optical drive. I will not be using this, as I use a PXE boot OS loading process. Alternate brackets are available for mounting two additional internal hard drives.
The drive bays and backplane are well designed. The only issue with the design is that the cooling midplane sits very close to the frame supporting the backplane. This makes inserting the SFF-8087 cables into the backplane somewhat tricky. This is compounded by the small space between the backplane and its support. Some of the sockets load from the top and some from the bottom, resulting in a small amount of cable routing chaos. Given that there are only six cables when fully populated, it is not that much of an issue. Half-meter cables are recommended here, unless your controller is a very small PCBA with sockets near the back of the case.
An important note on the backplane design is the arrangement of the drives. While taking pictures and installing the drives, I forgot the loading order, despite having done this before. The backplane populates from the lower left slot going up, then moves to the next set of bays. Unfortunately, the manual has no reference to the loading order. A side effect is that when the drives are only partially populated, the case is off balance when racked and needs to be handled carefully.
Chassis cooling is provided by two different sets of fans. The midplane fans provide forced cooling for the drives, and the two rear fans provide exhaust from the chassis. The clear shroud, shown in the mainboard area picture, normally allows passive heatsinks to be used with specific motherboard and cooler layouts. I am not using an applicable configuration, so this shroud will be discarded. Additional exhaust paths run through the pair of 1200 W power supplies and passively through vented PCI slot covers and a larger mesh-covered vent over the PCI slots. Expect a fairly loud noise level from the fans; they are 92 mm and relatively aggressive. I would have preferred 120 mm fans, but the lower pressure they deliver can cause issues with passive processor cooling. The fans use a locking drop-in mount with cables attached to the mounts on the case. This makes assembly and maintenance much simpler, especially for the midplane fans. The cables are generally long enough for most configurations.
The power supplies are listed as 1+1 redundant 1200 W, Supermicro branded, and have worked well in previous installations. The specifications on the side and on the website are as clear as mud and not at all helpful. It is an 80+ Gold supply with PFC. According to the label on the side of each module, it should deliver at least 83 A on the 12 V rail. The data also seems to indicate that it is rated for 1000 W between 100 V and 140 V, while the 1200 W value applies between 180 V and 240 V. It is somewhat aggravating that the 120 V rating is derated, but even the 1000 W value is more than sufficient for my needs. This supply would suffice to power a chassis fully populated with 15K SAS drives, dual or quad processors, controllers and other devices. We have the same modules in other chassis powering two servers in a 1U form factor. The servers are each dual Opteron 2376s with 24 GB of RAM and Infiniband network adapters.
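The label figures above can be cross-checked with simple arithmetic. This is back-of-envelope math on the numbers printed on the module, not the vendor's official derating curve:

```python
# Back-of-envelope check of the power supply label figures.
amps_12v = 83                 # minimum 12 V rail current from the module label
watts_12v = amps_12v * 12     # power available on the 12 V rail

print(watts_12v)
```

That works out to 996 W on the 12 V rail, which is consistent with the roughly 1000 W low-line (100 V to 140 V) rating.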
Owing to the build quality of the case, the installation process is smooth and painless. The time consuming part is simply mounting the hard drives in the rails. I strongly recommend a quality cordless screwdriver with a very good torque clutch. I use an older Panasonic EY6225, which has lasted me for over five years now and has sufficient control that I can screw in laptop screws at high speed without stripping them. It was expensive at the time, but has saved me an incredible amount of time and effort.
Once constructed, you need to plan sufficient time to synchronize the arrays before you begin the operating system load. Depending on the array configuration, this process can take from several hours to several days. In this case, the controller is very fast and synchronization completed in about nine hours. This can seem interminable, but I have had smaller arrays take over 48 hours to complete, admittedly on an FC-to-SATA RAID bridge, but the process is similar.
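For planning purposes, initialization time scales roughly with per-drive capacity divided by the controller's sustained background rate, since members are typically written in parallel. The 60 MB/s rate below is my own assumption, back-solved from the nine-hour result, not a published Areca specification:

```python
# Rough array-initialization time estimate.
# Assumes the controller initializes all members in parallel, so the
# time is governed by one drive's capacity over the sustained rate.
# The 60 MB/s sustained rate is an assumed figure, not a controller spec.

drive_bytes = 2e12    # 2 TB drive (decimal)
rate_mb_s = 60        # assumed sustained background rate (MB/s)

hours = drive_bytes / (rate_mb_s * 1e6) / 3600
print(f"~{hours:.1f} hours")
```

At that rate the estimate comes out to a bit over nine hours, in the same ballpark as the observed time; a slower bridge-based controller at a fraction of that rate readily explains the 48-hour runs.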
I have used various Supermicro cases and components before and have yet to be disappointed. This case is no exception. The build quality and features are excellent and it meets all of my expectations for a multipurpose server case. Attention to detail shows throughout.
The only true complaints, aside from vendor issues, would be the power supply specifications and the pricing.
Supermicro does need more clarity in the documentation and specifications. 900 W and 1400 W versions are also available, as well as non-redundant models. There are also other variants of this case supporting up to 45 drives, in a sometimes confusing array of options, up to and including redundant backplane connection configurations.
The pricing initially seems relatively high for this unit. I purchased it online for around $1,200.00 shipped. For my purposes it easily fits the budget compared to OEM systems with similar capabilities, but it’s generally much too expensive for a casual user. It does compare well with other similarly featured chassis. Offsetting the cost a bit is a one year parts and three year labor warranty. I have not had to use this yet and hopefully will not, but have heard good things from others.
Overall, if you have the budget to absorb this case, I would strongly recommend it.