Supermicro SC846A-R1200B Review

This is an unusual build for the Overclockers site, but quite common for the environment in which I work. I have two other servers configured as archive retention servers with around 24 TB of (formatted) storage each. The case and build are more expensive than I would consider for anything other than business use, or for uses where nearly bulletproof builds and stability are needed. Less expensive cases and components are available.

In this case, the server will be mounted in a dedicated cage space in a colocation facility about 20 miles from the office. Since it is also about 80 miles from my house and I am on call for most nights and weekends, corporate standards and my sanity require this to be remotely manageable and exceptionally reliable.

The server will function as a backup solution for our environment, hosting approximately 20 TB of space for a virtual tape library (VTL) and also handling a quad-drive LTO-4 library. This is an update to our existing server with a 9.55 TB VTL on an FC-to-SATA bridge array and controllers for the LTO-4 library. A requirement for the design process is that the system be both highly available and remotely manageable.

This case has 24 drive bays with hot swap backplanes and 1200 W redundant power. The specifications below include 13 array drives, two OS drives and one global hot spare, with eight open slots for further storage expansion.

Supermicro SC846A-R1200B

The motherboard was selected based on functionality and the availability of a dedicated network port providing IPMI with KVM over LAN and remote management of power states. A primary requirement was four PCI Express slots running at x8 and at least one conventional PCI 2.0 slot. This server will have four connections to the tape library drives, one for the library controller, a dual-port FC connection to our meshed FC environment and bonded NICs for network connections.

Processor and RAM utilization are low, and an entry-level Nehalem with 6 GB of RAM will handle current and future loads, since IO is offloaded to the RAID controller.

The RAID controller has a relatively large cache, a quick processor, out-of-band management and a battery backup for additional configuration safety.

Existing library controllers from the current server will be incorporated into the new server to control costs.

RAID Configuration:

  • 2 drive RAID-1 OS array
  • 7 drive RAID-5 VTL array
  • 6 drive RAID-5 VTL array
  • 1 drive global hot spare
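
For reference, the usable space of this layout can be estimated from the RAID levels alone. The sketch below assumes the 2 TB drives from the parts list; real formatted capacity runs a few percent lower.

```python
# Rough usable-capacity estimate for the array layout above.
# Assumes 2 TB (decimal) drives as in the parts list; formatted
# capacity will come out a few percent lower in practice.

DRIVE_TB = 2.0

def raid1_usable(n_drives, size_tb=DRIVE_TB):
    """RAID-1 mirrors everything: usable space is one drive."""
    return size_tb

def raid5_usable(n_drives, size_tb=DRIVE_TB):
    """RAID-5 gives up one drive's worth of space to parity."""
    return (n_drives - 1) * size_tb

os_array = raid1_usable(2)                 # 2 TB for the OS mirror
vtl = raid5_usable(7) + raid5_usable(6)    # the two VTL arrays

print(f"OS array: {os_array:.0f} TB")
print(f"VTL arrays: {vtl:.0f} TB raw")     # 22 TB raw, ~20 TB formatted
```

That lines up with the roughly 20 TB of virtual tape library space quoted earlier.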

Parts list, to be ordered

  • Supermicro X8ST3-F Mainboard
  • Supermicro SC846A-R1200B Case and power supply
  • Intel Xeon E5506 Nehalem-EP 2.13 GHz
  • Kingston ValueRAM 6GB (3x2GB) DDR3 ECC Unbuffered
  • Areca ARC-1261ML-2GB PCI Express x8 SATA II Controller Card
  • Areca ARC-6120 Battery Backup Module
  • 16 × 2 TB 7200 RPM HDDs
  • Corsair A50 heatsink and fan – Changed to CoolerMaster Hyper N520 for clearance reasons

First, the less-than-fun part of the process. Supplies and parts were either out of stock, arrived damaged, or both. A couple of weeks after the initial orders, when all parts were supposed to have arrived, I had a case, hard drives, CPU, RAM and heatsink. The mainboard and Areca controller parts were on backorder. The Supermicro board was unavailable nationwide. A couple more weeks and I had all items, in theory. The mainboard’s northbridge heatsink had become dislodged in shipment. This should have been a simple matter of reapplying TIM and reinstalling the heatsink. Unfortunately, the heatsink had managed to crush a surface-mount resistor pack, and the RMA process began.

Not what I had hoped to see

What I wanted to see even less

I wish I could say that I had better luck with the Areca controller. I ordered the 2GB cache version, but received two cards, both with 512MB of cache. Additionally, the battery backup was shipped for an external array rather than the internal version I needed. I could have fabricated the PCI slot bracket, but lacked the proper cable to hook it up. To date I have received the battery backup and a 2GB cache upgrade module. Unfortunately, the cache RAM is too warped to use and the controller will not initialize when installed. Back to the RMA process again…

This didn't look good either

These events were partially caused by using different suppliers than I normally do, due to corporate purchasing changes. The new supplier for the Areca parts simply didn’t have the experience with the part numbers and communication with their vendors left much to be desired. Bad parts can happen on any shipment and patience is a virtue.

Finally, I had to change the planned HSF. The top of the Corsair is about 1/3” too tall when installed. I can’t say that I was thrilled with using the CoolerMaster HSF that a local dealer had on hand, but it is working appropriately. Since this chassis will never be overclocked, I do not expect the CPU to ever get warm.

Now, on to the more interesting parts.

Packaging

Supermicro does an excellent job of packaging. The outer box is mounted on feet and well protected from ground intrusion or drops. The cardboard is thickly corrugated and able to withstand large forces. As you can see from the first picture, the packaging can take abuse in shipment. This was undoubtedly caused by something very heavy being placed or falling on the box. Despite this, the contents inside were completely unharmed.

ISO view of packaging

Looking closer, there is a second internal box protecting the chassis. The internal foam is also thick and well designed to withstand shipment. The foam exceeds three inches in depth on all sides of the box. The mounting rails and hardware box are positioned to avoid damage and are prevented from moving within the packaging.

Inner box opened

Hardware box open

Rails and hardware removed

A view of how well the chassis is protected

Case

The chassis design is relatively standard for servers with high numbers of hot-swappable disks. There are no optical drives or other devices on the front of the case. The power and reset controls and indicators are located on the bezel on the left side. They are clearly labeled, visible and accessible when the system is racked.

Front

Back

The metal work is done well with rolled edges on all exposed areas. The power supply latches work cleanly with no doubt of successful insertion. The drive bays and backplane are well constructed and have no issues on insertion or removal. The top cover opens easily once two tabs are pushed to release the latches.

Upper power supply module removed

The interior of the case is well laid out and provides sufficient room for EATX form-factor mainboards. Included with the case are mounts for a slim, laptop-style optical drive. I will not be using these, as I load the OS via a PXE boot process. Alternate brackets are available for mounting two additional internal hard drives.

Mainboard area

The drive bays and backplane are well designed. The only issue with the design is that the cooling midplane sits very close to the frame supporting the backplane. This makes inserting the SFF-8087 cables into the backplane somewhat tricky. This is compounded by the small space between the backplane and its support. Some of the sockets load from the top and some from the bottom, resulting in a small amount of cable routing chaos. Given that there are only six cables when fully populated, it is not that much of an issue. Half-meter cables are recommended here, unless your controller is a very small PCBA and has sockets near the back of the case.

An important note on the design of the backplane is the arrangement of the drives. As I was taking pictures while installing the drives, I forgot the loading order, despite having done this before. The backplane populates from the lower-left slot going up, then moving to the next set of bays. Unfortunately, the manual has no reference to the loading order. A side effect of this is that when the case is racked with the drives only partially populated, it is off balance and needs to be handled carefully.

SAS/SATA Backplane and midplane area

More detailed view of SFF-8087 connectors on backplane

Drive bays populated

Drives during initialization

SAS/SATA connector layout

Chassis cooling is provided by two different sets of fans. The midplane fans provide forced cooling for the drives, while the two rear fans exhaust the chassis. The clear shroud, shown in the mainboard area picture, normally allows passive heatsinks to be used with specific motherboard and cooler layouts. I am not using an applicable configuration, so this shroud will be discarded. The other exhaust paths are the pair of 1200 W power supplies and, passively, the vented PCI slot covers and a larger mesh-covered vent over the PCI slots. Expect a fairly loud noise level from the fans; they are 92 mm and relatively aggressive. I would have preferred 120 mm fans, but the lower pressure they deliver can cause issues with passive cooling for processors. The fans use a locking drop-in mount, with cables attached to the mounts on the case. This makes assembly and maintenance much simpler, especially for the midplane fans. The cables are generally long enough for most configurations.

Fan mounting hardware and connectors

The power supplies are listed as 1+1 redundant 1200 W, Supermicro branded, and have worked well in previous installations. The specifications on the side and on the website are as clear as mud and not at all helpful. It is an 80 Plus Gold supply with PFC. According to the label on the side of each module, it should deliver at least 83 A on the 12 V rail. The data also seems to indicate that it is rated for 1000 W between 100 V and 140 V, while the 1200 W rating applies between 180 V and 240 V. It is somewhat aggravating that the 120 V values are downgraded, but even the 1000 W rating is more than sufficient for my needs. This supply would suffice to power a chassis fully populated with 15K SAS drives, dual or quad processors, controllers and other devices. We have the same modules in other chassis powering two servers in a 1U form factor; those servers are each dual Opteron 2376s with 24 GB of RAM and InfiniBand network adapters.
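
A quick bit of arithmetic shows the label figures are at least self-consistent: the 12 V rail rating alone nearly accounts for the entire low-line output rating.

```python
# Sanity check on the label figures quoted above: 83 A on the
# 12 V rail works out to essentially the whole 1000 W low-line
# (100-140 V input) rating.
rail_amps = 83
rail_watts = rail_amps * 12

print(rail_watts)                  # 996 W from the 12 V rail alone
print(rail_watts / 1000)           # 0.996 of the stated 1000 W rating
```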

Power supply module

Installation

Owing to the build quality of the case, the installation process is smooth and painless. The time-consuming part is simply mounting the hard drives in the rails. I strongly recommend a quality cordless screwdriver with a very good torque clutch. I use an older Panasonic EY6225, which has lasted me for over five years now and has sufficient control that I can screw in laptop screws at high speed without stripping them. It was expensive at the time, but has saved me an incredible amount of time and effort.

Components installed

Expansion slot area, one PCI slot will not be populated

Initialization

Once the build is complete, you need to plan sufficient time to synchronize the arrays before you begin the operating system load. Depending on the array configuration, this process can take from several hours to several days. In this case, the controller is very fast and synchronization completed in about nine hours. This could seem interminable, but I have had smaller arrays take over 48 hours to complete, admittedly on an FC-to-SATA RAID bridge, but the process is similar.
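
As a back-of-the-envelope check, initialization time scales with member-drive capacity divided by sustained throughput. The ~60 MB/s per-drive rate below is an assumed figure for illustration, not something measured on this controller.

```python
# Estimate full-array initialization time, assuming the controller
# touches every sector of each member drive once, with all members
# running in parallel. The sustained rate is an assumption.

drive_bytes = 2e12        # 2 TB member drives
rate_bps = 60e6           # assumed ~60 MB/s sustained per drive

hours = drive_bytes / rate_bps / 3600
print(f"{hours:.1f} hours")   # ~9.3 hours, in line with the ~9 observed
```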

Conclusion

I have used various Supermicro cases and components before and have yet to be disappointed. This case is no exception. The build quality and features are excellent and it meets all of my expectations for a multipurpose server case. Attention to detail shows throughout.

The only true complaints, aside from vendor issues, would be the power supply specifications and the pricing.

Supermicro does need more clarity in the documentation and specifications. 900 W and 1400 W versions are also available, as well as non-redundant models. There are also other variants of this case supporting up to 45 drives, in a sometimes confusing array of options, up to and including redundant backplane connection configurations.

The pricing initially seems relatively high for this unit. I purchased it online for around $1,200.00 shipped. For my purposes, this easily fits my budget compared to OEM systems with similar capabilities, but it’s generally much too expensive for a casual user. It does compare well with other similarly featured chassis. Offsetting the cost a bit is a one-year parts and three-year labor warranty. I have not had to use it yet and hopefully will not, but have heard good things from others.

Overall, if you have the budget to absorb this case, I would strongly recommend it.

Xaotic

About Xaotic
My name is Jim Owen. I have worked in various areas of hardware development and am currently a datacenter engineer and network administrator for a multinational corporation. I have a wife and three kids that occupy most of my time not consumed by work. I still chase performance, though most is in storage and virtualization for work.

hokiealumnus

Water Cooled Moderator

Excellent review Xaotic, very well written.

Wayward_Son

Member

I have nothing to add to this other than to say that my very first ground-up build centered on a Supermicro motherboard. Hell, I can still remember the model: Supermicro P6SBA. That thing got me into computer hardware more than any other piece of hardware I've ever touched. To this day I care more about hardware than I do about actually... you know... using my computer. :)

Adragontattoo

Trailer Chasing Senior

You neglected something!

How well does it play Crysis!

Oh and I'll take 2 please!

One will go to my company, the other will be stored "offsite" in a secured area.:p

Dooms101

Member

Must be fun to work in a $1200 case with god-knows how much money in drives. Very well written review.

To this day I care more about hardware than I do about actually... you know... using my computer.

Hahah +1... I am the same way.

Xaotic

Very kind Senior

How well does it play Crysis!

Oh and I'll take 2 please!

One will go to my company, the other will be stored "offsite" in a secured area.:p

No Crysis here. It's onboard video only. It could do this fairly easily, though the video card would run at 8X, due to the other controllers that I will be installing.

Let me guess the location. Rackmounted somewhere close to DC...

I want to build one for myself, but my wife would shoot me and already has the murder weapon.

Must be fun to work in a $1200 case with god-knows how much money in drives. Very well written review.

Hahah +1... I am the same way.

Thanks.

The original prototype was in a reused case from around 10 years ago. It had 16 drive bays and was originally a PATA 2TB server. I gutted it and replaced everything other than the drive bays and fans. Unfortunately, there are no backplanes for hotswap and decent wiring. Later chassis use the Supermicro case and have all of the goodies. The costs for the systems are now less than the original upgrade.

The drive costs have come down significantly. The Areca controllers remain expensive as sin. The ARC-1261ML 2GB models are around 900.00 and add another 120.00 for the BBU. Add another 800.00 for SCSI controllers and 800.00 for the fiber channel controller and the budget increases dramatically. Fortunately, I will be reusing the controllers for SCSI and FC, so the impact is much less.

Dooms101

Member

Thanks.

The original prototype was in a reused case from around 10 years ago. It had 16 drive bays and was originally a PATA 2TB server. I gutted it and replaced everything other than the drive bays and fans. Unfortunately, there are no backplanes for hotswap and decent wiring. Later chassis use the Supermicro case and have all of the goodies. The costs for the systems are now less than the original upgrade.

The drive costs have come down significantly. The Areca controllers remain expensive as sin. The ARC-1261ML 2GB models are around 900.00 and add another 120.00 for the BBU. Add another 800.00 for SCSI controllers and 800.00 for the fiber channel controller and the budget increases dramatically. Fortunately, I will be reusing the controllers for SCSI and FC, so the impact is much less.

If only I had a corporate budget for my personal projects :rolleyes: My dad works for a large international corporation and he's told me the amounts of money it costs to replace some of the parts on the machinery he designs, and it's hard to imagine that they make as much in profit as they do. For instance, a part has to be replaced on one of the furnace lathes every 4 months and costs around $40K :shock: then again each furnace lathe (something like 11 or 12 of them in the building) costs around $1.5M just in parts...

Xaotic

Very kind Senior

Tell me about it. We are running evals preparing for RFQs for a couple of Petabyte capacity, high performance SAN systems for the main locations. I have a small 60TB SAN locally and it was several hundred K.
