
Back to bring the PAIN!

The motherboard is a Supermicro X9DRX+-F. They also make an X10 version; it's basically the same board but uses the more expensive Xeon E5 v3/v4 CPUs and DDR4 RAM. I stick with the X9 boards and super cheap registered ECC DDR3.

It has a total of eleven PCIe x8 slots. Ten of them are PCIe 3.0, and the top (11th) slot is PCIe 2.0 (x4 electrically). You can run all slots at full lanes at the same time. I only limited myself to 8 cards because I was a little worried about power draw through the PCIe slots. I'm using an EVGA power injector on the PCIe 2.0 11th slot, and normal USB risers (which don't pull power from the slot) for slots 9 and 10.
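For a rough back-of-the-envelope on why I worried (assuming the worst case the spec allows): the PCIe spec lets a graphics card draw up to 75 W through its slot, so 8 slot-powered GPUs could in theory pull 8 x 75 W = 600 W through the board's traces and the 24-pin/EPS connectors. Cards with their own 6/8-pin plugs usually take well under that from the slot itself, but I didn't want to find the board's limit the hard way.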

Specs here: https://www.supermicro.com/products/motherboard/Xeon/C600/X9DRX_-F.cfm

I do not have it in a case. I'm using it on a mining rack (shown in pics). Cases for this board do exist from Supermicro, but they are expensive ($1000+), rarely show up on the used market, and are still expensive when they do. And if you want to fit more than 5 GPUs you'll need to do some custom stuff like risers or single-slot watercooled cards. There's no single elegant solution.

I keep hoping to find PCIe 3.0 (i.e., shielded) riser cables that have external power connections. But it seems no one wants that except me lol

Thanks!! You have given me much to digest and plot. One of my goals here is to consolidate systems. True, the board you linked isn't eco-friendly, but it's better than 3-4 separate systems. I've also looked around thanks to "you might also like" and found a gaggle of other multi-GPU boards. A couple even had a power connector for the PCIe bus, which I think is great when you start talking about mashing so many GPUs onto one board. Anyway, thanks again :thup:
 
I'm working on a new build (that will take a while). I want to have 7 GPUs, all single-slot watercooled, and all plugged directly into the motherboard with no risers.

I decided to go with the ASUS P9X79-E WS motherboard so that I can use my E5-2630L v2 6c/12t 60W CPU with some cheap DDR3 ECC memory. This board will not do registered ECC like the Supermicro boards, however, since it's an X79 board and you need the C602 chipset for registered ECC support. I have some ECC UDIMMs that I can steal from one of my other servers though lol (it will accept the RDIMMs in exchange). It bothers me how inflated prices are for these boards these days, but I was able to get one for $250 shipped on eBay. (The X9DRX Supermicro board will run you about $500+.)

The plus side is that this board also has a 6-pin PCIe connector to feed the PCIe slots, where the Supermicro boards do not. So I won't be afraid to power 7 GPUs from the motherboard.

I'll probably go with 7x RTX 2080s. I'll need the ASUS Turbo models, since those seem to be the only RTX cards that come with a single-slot I/O bracket.
 
What do you mean by "not eco-friendly" motherboard?

Damn, you guys! I'd pick up another 2080 Ti, but despite my solar I'd end up paying through the nose for power. Have fun passing me, Holdolin.
 
Well, I guess in the context of current high-end motherboards in general it means little. I was just comparing the price of the SM motherboard to my Threadripper board, which was about 400 bucks. Sorry if I did a poor job explaining myself; it's been a crazy week and my brain is just plain fried. Wait, y'all weren't thinking I was talking about power use, were ya? Anyway, I just got one of those open-air setups in and need to put it together. Happy crunching all :)
 
Nice. That's the same rack I use. There's enough space between the cards even with 2-slot cards, so I have all 10 GPUs on that rack.

How many are you planning?
 
/me waves to Holdolin as he passes me, slaughtering all in his path.

1.5M PPD (points per day) puts you in the top 10 SETI clients, a list otherwise filled only by members of two other teams. You'll take the second-place slot on our team in no time.

 
What x16 riser ribbon cables are you guys using? The basic ~$8 x16-to-x16 powered cables?
I would be happy with another (cheap) EVGA 1080 or 1080 Ti :thup:
 
I'm currently using eight of these on one of my systems.

https://www.amazon.com/gp/product/B07K9SRKCT/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1

I think I had to return 2 of them for being defective (unstable PCIe speeds that caused dropouts and hangups), but the 8 I have now have been rock stable through over a year of constant crunching.
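If anyone wants to sanity-check a riser before trusting it, this is roughly how I'd watch the link under Linux (assuming NVIDIA cards; the 01:00.0 bus address below is just an example, pull your real one from lspci):

# what link gen/width each GPU actually negotiated
nvidia-smi --query-gpu=index,pcie.link.gen.current,pcie.link.width.current --format=csv

# max vs. negotiated link for one slot (swap in your own bus address)
sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'

A flaky riser tends to show up as the gen or width bouncing below what the slot should do.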

Keep in mind that these are not externally powered. They feed power from the PCIe slot, through the riser, to the GPU. I have been unable to find ANY PCIe 3.0-capable risers that provide external power as well as x8 or x16 lanes. They just don't exist. You either get the grey ribbon cables (PCIe 2.0) with shoddily soldered Molex connections, or the USB-based risers that have good external power connections but are only x1-capable. You CAN use the USB risers for SETI with little or no penalty, but I'd really like to have more lanes available for other projects that might be more sensitive to PCIe bandwidth.

Just be aware that you're pulling power from the board. If you use 4+, be very conscious of the power delivery to the motherboard, and use the extra PCIe power connectors if you have them.
 
Looks like Teque5 added a 2080 Ti on a Threadripper. That's going to give me issues pretty quick.

That 3080 Ti can't come out fast enough.
 
So in Linux, I can get BOINC to run when I want it to, but I can't get the Manager to come up reliably. Sometimes a reboot will let me start the Manager, but not always. Not sure what gives here.
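Might be a long shot, but if it's the distro-packaged client (data directory /var/lib/boinc-client on Debian/Ubuntu), the Manager often fails to come up because it can't read the GUI RPC password from the client's data directory. Worth trying something like:

# confirm the client itself is running
systemctl status boinc-client

# launch the Manager pointed at the client's data dir so it can read gui_rpc_auth.cfg
# (path assumes the Debian/Ubuntu package layout)
boincmgr -d /var/lib/boinc-client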
 
New issue: I can't get BOINC to run and then load my user profile. There are no options to add a project or enter user info.
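If the Manager won't show the add-project dialog, you can attach straight from the command line with boinccmd (it ships with the client). A sketch, assuming the SETI@home URL and placeholder credentials:

# look up your account key using your project email/password (placeholders here)
boinccmd --lookup_account http://setiathome.berkeley.edu/ you@example.com your_password

# attach the client to the project using the key the lookup returns
boinccmd --project_attach http://setiathome.berkeley.edu/ <account_key>

Once attached, the Manager should pick the project up the next time it connects.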
 