
Q: Load balancing on 12v Rails


m.standish
Registered, joined Jan 26, 2011
I've had a hunt and can't find an answer to this.

I have just bought an Odin GT 800W to power 2x overclocked 470s and an overclocked Q6600, after my BFG 680W blew.

My issue is that the 12v rails are split badly

12v1 - 18A (OCP 25A) CPU

12v2 - 18A (OCP 25A) PCIE 1

12v3 - 25A (OCP 30A) PCIE 2 + 3

12v4 - 25A (OCP 30A) PCIE 4 + molex

The problem is that my GPUs are each capable of pulling just over 30A (so ~15A on each plug) and trip the PSU. Across all of the PCIe rails there is enough capacity to keep me below the OCP limits, but no matter which way I split the PCIe cables over my GPUs, at max stress testing I can go over 30A on the middle rail whilst having 10-15A spare on the other two rails.

I NEED to distribute the power better.

I have one GPU that pulls a lot more power than the other; I'd say GPU1 ~35A, GPU2 ~30A.

Given that no PCIe plug pulls more than 18A, the first PCIe rail (12v2) is fine carrying one half of GPU1.

This leaves about 47A over the remaining two rails, which are rated at 25A each, BUT with the stock cabling it's actually going to split roughly 18/30 (back over OCP).
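Roughly, in numbers (a quick Python sketch of the estimates above; the even per-plug split and the stock plug-to-rail layout are assumptions, not measurements):

# Rough check of the stock wiring. Per-plug currents assume each card splits
# its draw evenly across its two PCIe plugs (figures estimated above).
ocp = {"12v2": 25, "12v3": 30, "12v4": 30}        # OCP limit per rail (A)
plug_amps = {"GPU1_a": 17.5, "GPU1_b": 17.5,      # ~35 A card
             "GPU2_a": 15.0, "GPU2_b": 15.0}      # ~30 A card

# Stock layout: PCIE1 -> 12v2, PCIE2+3 -> 12v3, PCIE4 -> 12v4
stock = {"GPU1_a": "12v2", "GPU1_b": "12v3",
         "GPU2_a": "12v3", "GPU2_b": "12v4"}

load = {}
for plug, rail in stock.items():
    load[rail] = load.get(rail, 0) + plug_amps[plug]

for rail, amps in sorted(load.items()):
    over = "  <-- over OCP" if amps > ocp[rail] else ""
    print(f"{rail}: {amps:.1f} A (OCP {ocp[rail]} A){over}")

Whichever two plugs end up sharing 12v3, that rail lands at or over its 30A OCP while 12v2 and 12v4 sit well under theirs.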

My question is: can I combine the remaining 3 PCIe cables (2 rails) into one and then send power to the cards, to essentially balance the load between them?

12v1 - 18A (OCP 25A) CPU

12v2 - 18A (OCP 25A) PCIE 1 -> GPU1

12v3 - 25A (OCP 30A) PCIE 2+3 \
                               > ALL combined, then split to GPU 1 + 2
12v4 - 25A (OCP 30A) PCIE 4   /


Physically:

GPU1 --- 18A ------------------------------------ 12v2 (18A) *rail 2*
(35A)
GPU1 --- 18A ---\                 /-------------- 12v3 (25A) *rail 3*
                 \               /
GPU2 --- 15A -----+--- 48A -----+------ *A* ----- 12v3 (25A) *rail 3*
(30A)            /               \
GPU2 --- 15A ---/                 \---- *B* ----- 12v4 (25A) *rail 4*

This way it will keep all my loads within the limits
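As a back-of-the-envelope check of that (a sketch only: it assumes the rails sit at the same voltage and simply compares two idealised ways the ~48A could divide between rails 3 and 4, per cable versus per rail, since the real split depends on the wire resistances):

# Combined node: GPU1's second plug plus both GPU2 plugs (~48 A total),
# spliced onto three PCIe cables: two from 12v3 and one from 12v4.
total_a = 18.0 + 15.0 + 15.0
cables = {"12v3": 2, "12v4": 1}        # cables feeding the splice, per rail
ocp = {"12v3": 30, "12v4": 30}

# Case 1: current divides evenly per cable (equal cable resistances)
per_cable = total_a / sum(cables.values())
print("even per-cable split:")
for rail, n in cables.items():
    print(f"  {rail}: {per_cable * n:.1f} A (OCP {ocp[rail]} A)")

# Case 2: current divides evenly per rail (the balanced split the plan hopes for)
per_rail = total_a / len(cables)
print("even per-rail split:")
for rail in cables:
    print(f"  {rail}: {per_rail:.1f} A (OCP {ocp[rail]} A)")

Which of those two cases you actually get is essentially the question about points *A* and *B* below.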

Can anyone tell me if I can actually just splice the 12v3 + 12v4 yellow/black wires together and then split that back over 3 lines to the cards?

And secondly, will the load be even at points *A* and *B*?

Please don't reply if you're going to say buy a bigger/better PSU; I just want some electronics advice.

I've created a pic that better shows what I'm after; I just need to know if it would work, and any pitfalls.
 
How are you measuring the 30 amps?

If you were to splice all the PCIe cables together and then apart again, you would have one big rail with the combined OCP, so tripping OCP on any of the wires would take ~100 amps.

If you're comfortable taking a soldering iron to your new PSU, have at it; in theory it should work.

I know you don't want to hear it, but I do recommend returning that PSU and buying a better one.
 
And combining the rails would work, if you know for sure that the rails are "virtual" and are actually generated by the same regulator.

If your power supply has "real" rails, shorting them together could make things literally explode (they will have slightly different voltages due to part tolerances, the higher-voltage rails would force current back into the lower-voltage ones, and PSUs are not designed for that).
 
First up, they are OCP-only rails.
Second up, it'd take a hell of an imbalance to cause issues. The PSU would be plastered in warnings to only use one rail per GPU if that were the case. If one GPU with two power plugs had two rails plugged into it, they'd be connected and off to the races you go.
In reality, it wouldn't be an issue. If it were, you wouldn't be able to pair PSUs up for multi-GPU benching. You can; I've seen as many as three PSUs used, all connected in parallel.
 
GPUs are actually required to limit current drawn from each rail, too. They are not simply shorted together on a video card.

Otherwise, for example, if a card has 6-pin (75W) and 8-pin (150W) connectors, there would be no way for NVIDIA to guarantee that it won't draw more than the maximum from each connector.

Also, by ATX specs, I don't believe the rails need to be balanced, as long as they are both within the voltage limits. I believe they are also not rated for current sinking. If a video card simply shorts the lines together, bad things could happen.
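For reference, at a nominal 12V those connector budgets work out to the per-source current limits below (a quick arithmetic sketch; the slot's 75W figure is the usual PCIe slot budget):

# Nominal PCIe power budgets per 12 V source, and the current they imply at 12 V.
budgets_w = {"PCIe slot": 75, "6-pin plug": 75, "8-pin plug": 150}
volts = 12.0

for source, watts in budgets_w.items():
    print(f"{source}: {watts} W -> {watts / volts:.2f} A at {volts:.0f} V")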
 
I wonder, then, how a GTX 580 for example can use more than 375W when extreme overclocking, if each connector is current-limited (8-pin + 8-pin + slot = 375W) and it cannot use more than that...
 
On Kepler it should be limited by the maximum power setting ("power target"). On Fermi I'm not sure how the current steering is done. Maybe it's just a ratio (e.g. 2/5 comes from plug 1, 2/5 comes from plug 2, 1/5 comes from the socket), so if you overclock it to use more than 375W, it overdraws from the rails proportionally.

That's just my guess.
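To put numbers on that guess (a toy model only; the 2/5 : 2/5 : 1/5 ratios are the hypothetical ones above, and the wattages are the nominal connector budgets, not measurements of any card):

# Fixed-ratio guess: the card always takes the same fraction of its total power
# from each 12 V source, so pushing past 375 W overdraws every source proportionally.
ratios = {"8-pin #1": 2 / 5, "8-pin #2": 2 / 5, "PCIe slot": 1 / 5}
budgets_w = {"8-pin #1": 150, "8-pin #2": 150, "PCIe slot": 75}

for total_w in (375, 500):
    print(f"total draw {total_w} W:")
    for source, ratio in ratios.items():
        draw = total_w * ratio
        flag = "  <-- over its budget" if draw > budgets_w[source] else ""
        print(f"  {source}: {draw:.0f} W (budget {budgets_w[source]} W){flag}")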
 
I thought you said the GPUs 'limit the current from each rail'? Now you're saying it just overdraws? I'm confused.

Sorry, a bit OT, but... I'm confused by the conflicting statements.
 
Ah, by current limiting I just meant the circuit is designed to draw less than the rated current from each connector at nominal conditions, even if the rail voltages are different. I don't know if there's actually a physical current-limiting device.

When overclocked, it's undefined behavior.
 
The circuit is designed to stay under the current limits; that is very different from actively limiting the current.
GTX 580s have been clocked drawing over 700W (briefly). That violates spec by just a touch :p
They don't do it at stock because they're carefully designed not to, but the moment you start OCing, all the stock limits and caps go out the window.

Out of curiosity I checked a couple of GPUs here; the connectors do indeed stay separate long enough for my multimeter to read infinite resistance. That's on a 580 and a 280.
It seems like they must be connected somewhere (probably post-MOSFET for core power), otherwise the poor 8-pin plug would be stuck with 200-odd watts of core power even at stock clocks on a 480. It also explains the motherboard death issues that have been seen with big OCs on SLI; I'd been wondering about that.
Thanks for that bit of info :D
 
Interesting discussion. I also just found a new line for my sig...

My pleasure :D.

I just looked at the GTX 680 reference schematics. There's a huge current-steering circuit in there between the various 12V sources and the global 12V net, so unpowered, I'm not surprised it reads as open. (I'm sure you know this already, but in case some Googler stumbles upon this: it's a very bad idea to probe resistance on a circuit while it's powered, because of the way ohmmeters work.)
 
It's more that you can't probe resistance with the circuit live; at best it simply doesn't work.
 
It also measures resistance by injecting current; if there's any voltage between the two points, the reading will be way off. It can also damage sensitive components (ICs).
 