
Back to bring the PAIN!

looks good!

Your host is currently in 60th place in terms of RAC. It will probably come up to around 30th-35th place once the RAC levels off.
 
After a driver update, I'm doing mid 175-185k now, and had my first 198k on the rig yesterday. All of my top-10 days are about to be from 2019. First 200k+ day yesterday.

Planning on maintaining full power until I get to 50 million credits and have my BOINCstats RAC intersect my Berkeley SETI RAC. Then I'll go back to dual-booting and getting other stuff done in the evenings, like gaming.
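For anyone curious how the "levels off" part works: as far as I understand it, BOINC's RAC is an exponentially-weighted average with a one-week half-life, so you can rough out both the climb to a steady-state RAC and the time to 50 million credits. A minimal Python sketch, where the 185k/day figure and the current-credit total are placeholders rather than anyone's real numbers:

```python
# Back-of-the-envelope BOINC arithmetic. Assumptions: ~185k credits/day of
# steady output, and RAC modeled as an exponentially-weighted average with
# a one-week half-life (my understanding of how the client computes it).
DAILY_CREDIT = 185_000        # assumed steady daily output (placeholder)
TARGET_CREDIT = 50_000_000    # the 50 million credit goal
CURRENT_CREDIT = 30_000_000   # placeholder - plug in the real total
HALF_LIFE_DAYS = 7.0          # assumed RAC averaging half-life (one week)

# Days remaining to hit the credit target at constant output.
days_left = (TARGET_CREDIT - CURRENT_CREDIT) / DAILY_CREDIT
print(f"~{days_left:.0f} days to reach {TARGET_CREDIT:,} credits")

# How RAC "levels off": each day it decays toward the daily output rate.
decay = 0.5 ** (1.0 / HALF_LIFE_DAYS)
rac = 0.0
for day in range(1, 43):
    rac = rac * decay + DAILY_CREDIT * (1.0 - decay)
    if day % 7 == 0:
        print(f"week {day // 7}: RAC ~ {rac:,.0f}")
# By week 5-6 the RAC sits within a few percent of the daily output.
```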
 
Ok, I kinda broke it.

Installed new NVIDIA drivers (430.50) from the Ubuntu auto-update, and now my GPU is no longer crunching.
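Not a fix, but a quick way to see whether it's the usual problem after an unattended Ubuntu driver upgrade (userspace libraries and kernel module out of sync until a reboot) is to ask nvidia-smi what it thinks is loaded. A minimal Python wrapper, assuming nvidia-smi is on the PATH:

```python
# Sanity check after a driver update: can the userspace tools still talk to
# the kernel module, and which driver version is actually loaded?
import subprocess

def check_gpu():
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,driver_version,utilization.gpu",
             "--format=csv,noheader"],
            capture_output=True, text=True, timeout=10)
    except FileNotFoundError:
        return "nvidia-smi not found - driver userspace package missing?"
    if out.returncode != 0:
        # A "Driver/library version mismatch" here usually just means the old
        # kernel module is still loaded; a reboot typically clears it.
        return f"nvidia-smi failed: {out.stderr.strip()}"
    return out.stdout.strip()

if __name__ == "__main__":
    print(check_gpu())
```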
 
I'm bringing it all back: 2 1080 Tis, 5 2070s, and 3 2080 Tis. I have plans for the 4th 2080 Ti, which will go in the newest build, Prometheus, which houses 1 right now. I debated cramming 4 cards per box, but between the power draw and heat generation I figured it was just easier to build dual-card boxes. Oh, for grins and giggles I have my Windows workstation crunching on its 1060. All I really do on it is write code, which is more CPU-intensive anyway, so why not.
 
Just curious, how do you affordably power all these rigs?

My power costs in California are absurd. The only reason I'm crunching is that I installed a 5 kW solar setup this spring that helps offset it.

Only have my single 2080Ti but I might add another.
 
It's not cheap; I'm spending ~$200/mo to power them, though this time of year the heat they're generating would be needed anyway. Next year I plan on setting up a 10 kW solar plant. Not looking forward to the heat come summer, though.
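To put rough numbers on the cost side (all placeholders, since actual wattage and rates vary a lot): a dual-GPU box pulling ~700 W at the wall around the clock is about 500 kWh a month, so the $/kWh rate is what really decides the bill. A minimal sketch:

```python
# Rough electricity cost for a crunching box running 24/7.
# Both numbers below are assumed placeholders - plug in your own.
WATTS_AT_WALL = 700      # assumed steady draw for a dual-GPU rig (W)
RATE_PER_KWH = 0.13      # assumed electricity rate ($/kWh)

kwh_per_month = WATTS_AT_WALL / 1000 * 24 * 30
cost_per_month = kwh_per_month * RATE_PER_KWH
print(f"{kwh_per_month:.0f} kWh/month -> ${cost_per_month:.0f}/month")
# At California-style $0.25-0.40/kWh the same draw lands around $125-200/month
# per box, which is why the solar offset matters so much.
```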
 
If you want 4+ cards, you either need water cooling or to space them out appropriately.

My 10-GPU system (all 2070s) has the cards a little close together, but airflow is enough to keep them all under 80°C, so that's good enough for me (quick way to check that sketched after this post).

My 3-GPU system (2080 Ti, 2080, 2080) is watercooled, as the cards are right next to each other.

I want to build a 7-GPU rig with all cards watercooled using one of those EK X7 block connectors. 7x 2070s/2080s in a nice tight package sounds cool. Hard to find single-slot I/O cards these days, though.
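Since the under-80°C check came up: a minimal Python loop for keeping an eye on a stack of air-cooled cards. It just polls nvidia-smi, so it works the same for 3 cards or 10; the 80°C threshold is only the number mentioned above, not anything official.

```python
# Poll GPU temperatures once a minute and flag any card over a threshold.
import subprocess
import time

THRESHOLD_C = 80   # the "good enough" ceiling mentioned above
INTERVAL_S = 60

def gpu_temps():
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=index,temperature.gpu",
         "--format=csv,noheader,nounits"], text=True)
    return [tuple(map(int, line.split(", "))) for line in out.strip().splitlines()]

while True:
    for idx, temp in gpu_temps():
        flag = "  <-- over threshold" if temp >= THRESHOLD_C else ""
        print(f"GPU {idx}: {temp} C{flag}")
    time.sleep(INTERVAL_S)
```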
 
Yeah, it's on a mining rack, but I'm using a crazy server MB to get x8 PCIe lanes to 8 of the GPUs. The other 2 are on x1 lanes; I was worried about power draw from the motherboard with more.

Pics:

This link is a little older, from when I had 1080 Tis, but it shows the motherboard and risers.

7-GPU system: (6 GPUs on 1x riser, pics taken before 7th GPU added)

3-GPU watercooled system: (server case, rack mounted, external pump/rad)
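For anyone wondering why x8 vs. x1 even matters here, the raw link numbers look roughly like this (~985 MB/s per PCIe 3.0 lane after encoding overhead, ~500 MB/s per PCIe 2.0 lane); whether a given BOINC app actually needs more than an x1 link is something you'd have to measure yourself.

```python
# Approximate usable PCIe bandwidth by generation and link width.
# PCIe 3.0: 8 GT/s, 128b/130b encoding -> ~0.985 GB/s per lane.
# PCIe 2.0: 5 GT/s, 8b/10b encoding    -> ~0.5 GB/s per lane.
PER_LANE_GBS = {"PCIe 2.0": 0.5, "PCIe 3.0": 0.985}

for gen, per_lane in PER_LANE_GBS.items():
    for lanes in (1, 4, 8, 16):
        print(f"{gen} x{lanes:<2}: ~{per_lane * lanes:5.1f} GB/s")
```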
 

Ok, so what MB are you using to get x8 to 8 GPU slots? I've been looking around the web and found some boards with that many slots, but none were clear on whether I could use all 8 slots (most said they could only run 4 cards at full x16). For that matter, what case? The more I think about your design, the more I'm thinking it would be better to just "consolidate" as many cards as I can into as few rigs as possible.
 
The motherboard is a Supermicro X9DRX+-F. They have an X10 version also; it's basically the same but uses the more expensive Xeon E5 V3/V4 and DDR4 RAM. I stick with X9 boards and super cheap registered ECC DDR3.

It has a total of eleven (11) PCIe x8 slots: 10 of them are PCIe 3.0, and the top (11th) slot is PCIe 2.0 (x4 electrically). You can use all slots at full lanes at the same time. I only limited myself to 8 cards because I was a little worried about power draw through the PCIe slots (rough numbers sketched after this post). I'm using an EVGA power injector on the 11th (PCIe 2.0) slot, and normal USB risers, which don't draw power from the slot, for slots 9 and 10.

Specs here: https://www.supermicro.com/products/motherboard/Xeon/C600/X9DRX_-F.cfm

I do not have it in a case; I'm using it on a mining rack (shown in the pics). Cases for this board do exist from Supermicro, but they are expensive ($1000+), rarely show up on the used market, and are still expensive when they do. If you want to run more than 5 GPUs you'll need to do some custom stuff like risers or single-slot watercooled cards. There's no single elegant solution.

I keep hoping to find PCIe 3.0 (i.e., shielded) riser cables that have external power connections, but it seems no one wants that except me lol.
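On the slot-power worry above, a rough budget sketch. The 75 W figure is the PCIe spec ceiling for a full-size graphics slot; how much a given card actually pulls from the slot versus its 6/8-pin connectors varies, so the per-card number below is an assumed placeholder.

```python
# Rough PCIe slot-power budget when stacking GPUs on one board.
SLOT_LIMIT_W = 75       # PCIe spec ceiling per graphics slot
SLOT_DRAW_W = 50        # assumed average per-card draw from the slot (placeholder)
CARDS_ON_SLOTS = 8      # cards fed (partly) through the motherboard
CARDS_ON_RISERS = 2     # powered USB risers feed these from the PSU instead

board_load = SLOT_DRAW_W * CARDS_ON_SLOTS
print(f"~{board_load} W pulled through the motherboard's PCIe slots "
      f"(spec max would be {SLOT_LIMIT_W * CARDS_ON_SLOTS} W)")
# All of that has to come in through the 24-pin and any auxiliary 12 V headers,
# which is why powered risers / power injectors start to make sense past a
# handful of cards.
```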
 