
Foxconn Purus server


mackerel

Member
Joined
Mar 7, 2008
server1.jpg
The server has arrived. After looking it over, it seems to be called Purus, and I can't find anything about it online. It was probably very specialised in its original distribution.

server2drives.jpg
The back area has four 3.5" HD cages, with SATA and power connectors running to them. Arguably you could fit another two smaller drives there, but the cooling fans protrude into the area.

server3psu.jpg
PSU label. I've not checked yet but it looks like it could be standard ATX.

server4mobo1.jpg
With the airflow cover off, here's the lower part of the mobo. There are unpopulated PCIe connector positions there... it would be nice if they were fitted, but not on this model it seems. The RAM is DDR3-1333 registered ECC, 4GB dual-rank sticks, and all are HP branded. Most sticks are Micron, some Nanya.

server5mobo2.jpg
The other end of the mobo has the SATA ports (6 total, 4 connected to drive bays), what looks like an ATX power connector, and 5 fan headers. Interestingly, they take 5-wire fans. 5???

server6port1.jpg
Front left has the serial/console port, VGA out, two USB ports and RJ45. Middle (not pictured) has the 10gig SFP+ port, and a blanking plate for an expansion card I can't fit.

server7port2.jpg
To the right are the power/reset switches, HD and power LEDs, and the power connector. It has a wire latch to hold the connector in place and prevent accidental removal, but it doesn't fit the random cable I pulled out to test it.

The first time I powered it up, my ears started bleeding. Imagine standing next to a jet taking off. I expected this, but the sound level still took me by surprise. The fans ramp to maximum on initial boot before dropping down to temperature-controlled running speed. Down is relative: it is still very loud.

Absent any clue as to what I'm doing, I powered down and checked the battery voltage. Seems normal. I juggled the RAM a bit and powered up again. Now I had life on the monitor. A quick look in the BIOS let me change a couple of settings to disable RAID and network boot, since I never use those. I will need to revisit some RAM settings later. For example, NUMA was off, and I wonder if it should be on. I have no idea how Windows handles this scenario: obviously, for maximum performance, I'd like data for tasks on each CPU to be in the RAM attached to that CPU, not the other. Is Windows smart enough to do that?
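As I understand it, Windows has been NUMA-aware since the Vista/7 era: the scheduler gives each thread an ideal processor on a home node and allocates new memory pages node-local by default, so it should mostly do the right thing unaided. A minimal Python sketch (Windows-only, calling the kernel32 NUMA APIs via ctypes) to confirm what topology the OS actually sees:

```python
# Minimal sketch: ask Windows how many NUMA nodes it sees and which
# logical CPUs belong to each. Windows-only; run with NUMA enabled in
# the BIOS to compare against it disabled.
import ctypes
from ctypes import wintypes

kernel32 = ctypes.windll.kernel32

class GROUP_AFFINITY(ctypes.Structure):
    _fields_ = [("Mask", ctypes.c_size_t),       # KAFFINITY bitmask of logical CPUs
                ("Group", wintypes.WORD),
                ("Reserved", wintypes.WORD * 3)]

highest = wintypes.ULONG(0)
if kernel32.GetNumaHighestNodeNumber(ctypes.byref(highest)):
    print(f"NUMA nodes: 0..{highest.value}")
    for node in range(highest.value + 1):
        aff = GROUP_AFFINITY()
        if kernel32.GetNumaNodeProcessorMaskEx(node, ctypes.byref(aff)):
            print(f"  node {node}: group {aff.Group}, CPU mask {aff.Mask:#x}")
```

With NUMA off in the BIOS, the board may interleave memory across both sockets, which evens things out at the cost of peak local bandwidth; with it on, this should report two nodes, one per CPU.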

I've got an old image of Win7 booted for testing. Right now I'm missing mobo-level drivers and a network connection, so it's going to be interesting getting that going, especially as there's no manufacturer download site I can go to for these! I hope generic Intel chipset drivers will get me through the first step; then I have to work out what network chipset it actually uses...
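One way to work that out without drivers: every PCI device still reports its vendor/device IDs, even with no driver bound. A rough Python sketch (assuming wmic, which ships with Win7) that pulls the IDs out so they can be looked up in a PCI ID database (VEN_8086 is Intel, for instance):

```python
# Rough sketch: list the PCI vendor/device IDs of everything Windows can
# see, driver or no driver, for identifying mystery hardware.
import re
import subprocess

out = subprocess.run(
    ["wmic", "path", "Win32_PnPEntity", "get", "Name,DeviceID", "/format:list"],
    capture_output=True, text=True).stdout

for ven, dev in sorted(set(re.findall(r"VEN_([0-9A-F]{4})&DEV_([0-9A-F]{4})", out))):
    print(f"PCI vendor {ven}, device {dev}")
```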
 
I got the chipset drivers installed OK, but I'm struggling on the networking side. It turns out that if I had paid attention to the eBay listing, I would have seen "1 x Integrated SFP+ 10GB Ports, 1 x Network Management Port". The RJ45 is for management only! I don't have any way to get from SFP+ to my network, and there's no expansion possibility other than USB. Right now I have a MiFi dongle connected by USB giving me net access, but that eats into my data allowance. As a next step I'll use a USB wifi adapter, but those need drivers too. My problem is... the server only has two USB ports and I don't have a hub. Keyboard, mouse, optical drive, wifi dongle... that's going to take some juggling to install!
 
Should be able to use any kb/mouse that networks into it, right? At least that iLo (management port)?
 
I got wifi connected, which makes things a little easier: I can VNC into it rather than juggle between keyboard and mouse until I get a USB hub. I've also got a USB network adapter sitting in my Amazon basket, along with a hub, in case I go that route.

Power measured at the wall sits around 148W idle at the Windows desktop, and 237W running Prime95 small FFT. With Prime95 still running, the temps only go up to around 60°C with an ambient of around 20°C (estimated, not measured). Note this generation of CPU doesn't have FMA, which may account for the relatively low power and heat.

2650x2cb15.jpg

And here's a quick Cinebench R15 run. I didn't realise the desktop was stuck at 800x600; I increased the resolution after that screenshot. I haven't managed to identify what GPU it uses yet, so this is running the Windows standard VGA driver and performance is a bit horrible.

- - - Updated - - -

Quote: "Should be able to use any kb/mouse that networks into it, right? At least that iLo (management port)?"

iLO is the HP one, isn't it? This server is a Foxconn... I have no idea what they use for management, and I have zero experience in this area. I've got control over wifi using VNC for now.

- - - Updated - - -

Edit: something isn't quite right; the CPU is stuck at 1.8 GHz.
 
I found the IP address of the management port, and sticking it in a web browser gives me a login screen. It seems to be powered by Avocent MergePoint Embedded Management Software. A search for the default login hasn't brought success yet, but I see there may be a way to reset it via the serial port, so I'll be looking for a serial cable later! I'm gonna give my ears a rest for a bit and see if I can change the fans next.
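In the meantime, it's easy to check what else the management controller answers on besides the web page; these BMCs often run SSH or telnet too. A quick sketch (the IP below is a placeholder for whatever the management port actually picked up):

```python
# Quick TCP port probe of the management controller. 192.168.1.50 is a
# placeholder; substitute the management port's real IP. Note the IPMI
# protocol itself rides on UDP 623, which a TCP connect won't detect.
import socket

BMC_IP = "192.168.1.50"   # placeholder address
PORTS = {22: "ssh", 23: "telnet", 80: "http", 443: "https", 5900: "remote kvm?"}

for port, name in sorted(PORTS.items()):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(1.0)
    state = "open" if s.connect_ex((BMC_IP, port)) == 0 else "closed/filtered"
    s.close()
    print(f"{port:>5}  {name:<11} {state}")
```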
 
2650x2cb15b.jpg

This is better... I found the CPU was fixed to 1.8 GHz in the power-saving part of the BIOS, so I've undone that and also enabled power-save states.
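To sanity-check that sort of fix without rebooting into the BIOS, something like this works (psutil is a third-party package, pip install psutil; its Windows frequency readings are coarse, but a chip pinned at 1.8 GHz versus one reaching its 2.0 GHz base and beyond is obvious):

```python
# Sketch: watch the CPU frequency Windows reports for a few samples.
# The E5-2650's base clock is 2.0 GHz, so anything stuck below that
# under load points at a power-management setting.
import time
import psutil

for _ in range(5):
    freq = psutil.cpu_freq()
    print(f"current {freq.current:.0f} MHz (max {freq.max:.0f} MHz)")
    time.sleep(2)
```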

I also found the graphics driver. It is from a company called ASPEED. I can't seem to find a way to allocate more than 8MB of VRAM though, and I'm now limited to 1024x768. With the standard Windows VGA driver I think I got the next step up, possibly at some cost of colour depth.

Fan-wise, I'm happy to report I can replace them with something more sane. For now I've fitted two 140mm PWM fans, as they were what I had spare with 4 pins. I haven't tried 3-pin fans on it, but the system seems designed with PWM control in mind and might run 3-pin fans at full speed. I assume the 5th pin on the connector is there because the originals are double-stacked fans, and it spreads the power delivery between them. I'm undecided at this time whether to get a pair of new coolers or just get cheap fans to somehow sit on top of the existing heatsinks; they're not really designed for that.

Peak temps when running Cinebench hit 67C.

302W running Prime95 small FFT. Peak temp hit 77°C, at which point the fans go into high-speed mode and it drops back below that.
 
servercooled.jpg

I had a think and managed to find some 2011-compatible coolers, which are now fitted as shown. This is not your typical server look, but at least it runs without causing ear pain.

The upper cooler is a Silverstone AIO. I never liked it, as it always ran too loud for me: the supplied fan was high-rpm and I could never tame it, and the screw length tolerance was so tight I couldn't get any other manufacturer's fan to stay on it. I remembered I had bought an Antec TrueQuiet 120 for another case and never got around to fitting it. It fitted perfectly, thanks in part to its generous rubber corner pieces.

The other cooler is the good old Hyper 212. Not much to say about it, is there? I did like fitting both to 2011: it's so much easier to use the socket's built-in mounting than to mess around with backplates as on 115x.

Temps running under Prime95 small FFT were around 55°C, and the AIO was the hotter of the two. With the original fan it was barely into the 40s under load, but that brings back other problems...

I've got it running on a PrimeGrid challenge now. A little late to the start but it should provide some nice throughput.

You may notice I took out half the RAM. This was partly due to temperatures: the sticks ran uncomfortably hot to the touch, and removing half of them seems to help a fair bit. I'll reintroduce them once I've had more of a think about what I want to do with this system longer term. I doubt I'll leave it running as it currently stands.

I've got a USB hub and network adapter in my Amazon basket too...
 
There are two E5-2650 Sandy Bridge CPUs in there. The supply of cheap 2670s seems to have dried up.

Oh, and the management port: I got the login thanks to another forum, so that's working too. They also pointed me at an SFP+ to RJ45 adapter so I can use the built-in port, and I've ordered one of those as well.
 
I borrowed the IR camera from work and had a look at the server. Keep in mind that by not running the stock arrangement, I might be keeping the CPUs cool, but the airflow over the rest of the mobo may be different.

FLIR0441.jpg FLIR0457.jpg
This is the region at the front of the mobo, near the SFP+ socket. There's a heatsink there, but a small device to the left of it is actually hotter. Putting a fan in the area reduced the temps quite a bit.

FLIR0443.jpg
This looks like some power conversion near the front CPU. I didn't try adding airflow to this.

FLIR0445.jpg FLIR0461.jpg
This looks like some power conversion near the rear CPU. A fan over this also helped a lot.

FLIR0449.jpg FLIR0459.jpg
This is another heatsink-covered device further back on the mobo. Once again, a fan (the same one as for the front heatsink device) helps temps a lot.

FLIR0453.jpg
Just for indication, you can see the temperature of the reservoir housing.

FLIR0455.jpg
Similarly, here is the surface temperature of the heatsink on the other CPU. It is cooler... but be cautious here: shiny metal surfaces reflect IR, so the camera may not be showing the surface's own temperature.
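Roughly, the physics behind that caveat: the camera converts received radiance into a temperature assuming an emissivity ε, but what actually arrives from a surface is a mix of emitted and reflected radiation, approximately

$$ W_{\mathrm{received}} \approx \varepsilon\,\sigma T_{\mathrm{surface}}^4 + (1-\varepsilon)\,\sigma T_{\mathrm{reflected}}^4 $$

Bare polished metal can have ε down around 0.1, so most of what the camera sees is the room (or a nearby hot component) reflected back. A strip of matte tape on the heatsink, with the camera's emissivity set to around 0.95, gives a far more trustworthy reading.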
 
Thinking more about the cooling: in the standard configuration, the shroud directs airflow only over the CPU/RAM/VRM area of the mobo. The heatsink areas on the right are not in the main airflow, although I need to check the shroud to see whether the far-right fan provides some flow over that area or is also directed into the main flow.

I could swap coolers around with other systems. I have a Noctua D9L and a D9DX i4 3U elsewhere, which I think are pretty much the same apart from the mount; both can be used on 2011. Putting those two on this system would give a good match, and I could redistribute the other two coolers to the other systems.

I'm also thinking about rehousing this system, as the rackmount case doesn't fit anywhere for me. I need to check the hole spacing and see if it matches any standards, although even if it does, the IO is certainly not ATX. Given that, how hard is it to make a case from scratch? I have no experience of this at all, but it may be the better option to minimise space and optimise airflow. This is only at the initial concept stage for now.
 
Finally getting around to doing more with the system today. I've fitted a 2nd Hyper 212 Evo in place of the watercooler. Temps running Prime95 small FFT are in the low 50s °C, so no worries there. Lesson of the day: I took off the existing 212 and was surprised to see the thermal compound covered perhaps half the area of the IHS! I thought I had been generous with its application, but it seems not in this case. It didn't seem to hurt performance in that state, but I was a lot more generous when refitting this time.

After some research, the mobo form factor and mounting hole positions seem to resemble SSI EEB (12" x 13") in EP configuration. It is close, but not quite the same, and I think the SFP+ connector position won't work with a standard case. Given that, I ended up buying the Aerocool DreamBox kit to build a frame of sorts to hold it. Once I have that worked out, the next step will be to skin it somehow.
 
nightmarecase1.jpg

I'm gonna start referring to the Aerocool Dream Box as the Nightmare Box now... the photo above is a work in progress. Note the "case" bottom is to the right; it is effectively lying on its back at the moment, as I haven't affixed the motherboard mounting rails to the frame yet. There seem to be some parts that could be used for that, but I can't figure out a nice way to do it short of cable ties.

I didn't realise just how long it would take to assemble. Every joint takes 4 screws, and there are a lot of joints. There were just enough parts for the design I settled on, which is essentially your regular rectangular arrangement. So the frame itself, while tedious, isn't particularly difficult; it still took me several hours to get this far.

For now I'm using the mobo's original front-panel controls (power and reset switches, HD and power LEDs). The Dream Box's supplied unit would be largely redundant, as I'm not sure the mobo has any USB headers for it anyway. I wouldn't rule out making a newer, nicer set, but it certainly isn't a priority. I need to decide where, and how, to mount the PSU and SSD. The PSU along the bottom seems the best bet, as it keeps all cable access to the near side as shown in the photo. If you imagine this as a standard tower-style case, the rear/IO side is nearest in the photo, and the area on top is what would normally be the window side. So I'm not doing anything radical with the layout.

My intention, and it may take some time to implement, is to clad the outside to turn it into a normal-ish case. The window side will probably be all acrylic. Maybe acrylic panels all around? I haven't had an all-acrylic case in a while. I also need to give airflow some thought. My current plan was to have airflow in an upward direction, but nicely sealing off the IO panel could be beyond my skill level, so a front-to-back flow might be easier to implement.
 
nightmarecase2.jpg

Finally getting there. It is now in a more or less runnable state. The motherboard is mounted, the front fan (Noctua) is mounted, and the PSU is half mounted - yay for cable ties! I'm going to need some more metalwork to hold the PSU down, as in its original case it had two screws on the back and two on the bottom; I can only use the bottom ones for now, nothing on the back. The SSD is free-floating. I'm undecided what to do with it, but it could go on the same rails as the PSU once I sort that out.

Why the silly top fan? I had a long moment of worry. After I had everything set up, I booted it, started it off on some work, and it locked up. Strange: reboot, nothing in the Windows logs, nothing in Computer Management. CPU temps were fine. I tried undoing all the changes I had made since it was last working. What was it? My best guess is the upper banks of RAM were overheating; they were hot to the touch, borderline painful. Since putting the fan on, it hasn't locked up again. The heatsinks have been rotated 90 degrees since last time, as I decided to go right-to-left on flow as displayed; before that, arguably the RAM got enough spill air to keep it cool.

Undecided what to do now... The intent is to put a skin around the case later on, with front intake and rear exhaust. Hopefully that airflow will be enough. I'm now thinking of putting an 80/92mm fan above the Noctua to give that RAM area extra air.

On the skinning... I have an unconventional idea that will be distinctive, if I have the artistic and mechanical skills to pull it off...
 
I don't know what that is, but imagine something similar with 4 legs and you're not far off...

The instability came back, and I now suspect a dodgy RAM contact more than temperature. I took out half the RAM and re-seated what remained, and it has been stable overnight at least.
 
Odd, but I don't have any heat issues with the RAM in my closed-up box, and all I have is the one intake and one exhaust fan that came with the case. For when/if you do close it in: I found moving the CPU2 fan to a pull configuration lowered the CPU2 temperature. I'm guessing the extra space between the heatsinks allows some of the exhaust from CPU1 to escape up and out the top of the case. The top of my Corsair Carbide Clear 400C is 100% vented.
 
I like the idea of making the (left) CPU fan suck instead of blow. I'm guessing it might be a spacing and airflow thing: the air coming out of the right CPU cooler might not be best fed straight into a fan. Now I'm debating getting an incense stick out just to smoke-test it...
 
Well, in my "case" (pun intended), there was a temperature drop of around 5°C in the CPU2 cores, and both CPUs run at roughly the same temperature now.
 