Results 1 to 20 of 41
02-21-08, 01:35 AM #1
Single Rail vs Multiple Rail [Answer inside!]
I was wondering if someone can shed some light for me on the advantages of single vs. multiple +12V rails.
When you start looking at high end power supplies, it appears things go two separate routes. On the one hand, you have power supplies with four +12V@20A rails (OCZ and Fortron Source power supplies). On the other hand, you have power supplies with a single rail yet a huge amp rating, such as the PC Power & Cooling units, which have a single +12V@90A rail.
I heard that having multiple rails is better for distributed power, but can someone enlighten me as to the difference, and the advantages and disadvantages, of the two approaches please?
Greyspectre
02-21-08, 08:26 AM #2
Man.. we get this question daily. I really need to keep a response on my desktop I can copy and paste.
What is "multiple +12V rails", really?
In most cases, multiple +12V rails are actually just a single +12V source just split up into multiple +12V outputs each with a limited output capability.
There are a few units that actually have two +12V sources, but these are typically very high output power supplies. And in most cases these multiple +12V outputs are split up again to form a total of four, five or six +12V rails for even better safety. To be clear: These REAL multiple +12V rail units are very rare and are all 1000W+ units (Enermax Galaxy, Topower/Tagan "Dual Engine", Thermaltake Tough Power 1000W & 1200W, for example.)
In some cases, the two +12V rail outputs are actually combined to create one large +12V output (Ultra X3 1000W, PC Power & Cooling Turbo Cool 1000W, for example.)
So why do they split up +12V rails?
Short circuit protection only works if there's minimal to no resistance in the short (like two wires touching, or a hot lead touching a ground like the chassis wall.) If the short occurs on a PCB, in a motor, etc., the resistance in the circuit will typically NOT trip short circuit protection. What happens instead is that the short essentially becomes a load. Without OCP, the load just increases and increases until the wire heats up, the insulation melts off, and there's a molten pile of flaming plastic at the bottom of the chassis. This is why rails are split up and "capped off" in most power supplies: it's a safety concern.
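To make the above concrete, here's a back-of-the-envelope sketch. All the numbers (fault resistance, wire resistance, the 20A cap) are assumed illustrative values, not from any real unit: a moderately resistive fault draws a current that short-circuit protection never sees as a "short", but a per-rail OCP limit catches it.

```python
# Hypothetical figures showing why a resistive short looks like a big
# "legal" load to the PSU, and why a 20A per-rail OCP cap still trips.

RAIL_VOLTAGE = 12.0      # volts on the +12V rail
OCP_LIMIT = 20.0         # amps; typical per-rail cap (240W / 12V)
WIRE_RESISTANCE = 0.02   # ohms; rough guess for a run of 18AWG lead

def load_current(short_resistance_ohms):
    """Current drawn when a partial short acts like a load (Ohm's law)."""
    return RAIL_VOLTAGE / short_resistance_ohms

def wire_heat_watts(current_amps):
    """Power dissipated in the wire itself (I^2 * R) -- what melts insulation."""
    return current_amps ** 2 * WIRE_RESISTANCE

# A 0.3-ohm fault (shorted motor winding, damaged PCB trace) draws 40A:
i = load_current(0.3)
print(f"fault current: {i:.0f} A")                    # 40 A
print(f"heat in the wire: {wire_heat_watts(i):.0f} W")  # 32 W cooking the lead
print("OCP trips:", i > OCP_LIMIT)                    # True -- rail shuts down
```

Note that 40A is nowhere near a dead short, so short-circuit protection stays quiet; only the current limit on the split rail saves the wiring.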
Is it true that some PSU's that claim to be multiple +12V rails don't have the +12V rail split at all?
Yes, this is true, but it's the exception and not the norm. It's typically seen in Seasonic-built units (like the Corsair HX and Antec True Power Trio.) It's actually cheaper to make a single +12V rail PSU because you forgo all of the components used in splitting up and limiting each rail, and this may be one reason some OEM's will not split the rails but say they are split. Some system builders adhere very closely to the ATX12V specification for liability reasons, so a company that wants to get that business but also save money and reduce R&D costs will often "fib" and say the PSU has its +12V split when it does not.
Why don't those PSU companies get in trouble? Because Intel actually lifted the split +12V rail requirement from spec, but they didn't actually "announce" it. They just changed the verbiage from "required" to "recommended" leaving system builders a bit confused as to what the specification really is.
So does splitting the +12V rails provide "cleaner and more stable voltages" like I've been told in the past?
It is true that marketing folks have told us that multiple +12V rails provide "cleaner and more stable voltages", but this is usually a falsehood. Quite frankly, they use this explanation because "offers stability and cleaner power" sounds much more palatable than "won't necessarily catch fire". Like I said before, typically there is only one +12V source, and there is typically no additional filtering stage added when the rails are split off that makes the rails any more stable or cleaner than if they weren't split at all.
Why do some people FUD that single is better?
Because there are a few examples of companies that have produced power supplies with four +12V rails, something that in theory should provide MORE than ample power to a high end gaming rig, and screwed up. These PSU companies followed the EPS12V specification, which is for servers, not gamers: they put ALL of the PCIe connectors on one of the +12V rails instead of on a separate +12V rail. That +12V rail was easily overloaded and caused the PSU to shut down. Instead of correcting the problem, they just did away with the splitting of +12V rails altogether. Multiple +12V rail "enthusiast" PSU's today have a +12V rail just for PCIe connectors, or may even split four or six PCIe connectors up across two different +12V rails. The rails themselves are capable of far more power output than any PCIe graphics card would ever need. In fact, Nvidia SLI certification these days REQUIRES that the PCIe connectors be on their own +12V rail to avoid any problems from running high end graphics cards on split +12V rail PSU's.
There are fewer components and less engineering in a PSU that DOES NOT have the +12V rail split up, so it's cheaper to manufacture (about $1.50 less on the BOM, $2 to $3 at retail). Typically this cost savings is NOT handed down to the consumer, so it actually behooves marketing to convince you that you only need a single +12V rail.
But some people claim they can overclock better, etc. with a single +12V rail PSU
B.S. It's a placebo effect. The reality is that their previous PSU was defective or just wasn't as good as their current unit. If the old PSU was a cheap-o unit with four +12V rails and the new one is a PCP&C with one +12V rail, the new one isn't overclocking better because it's a single +12V rail unit. It's overclocking better because the old PSU was crap. It's only coincidental if the old PSU had multiple +12V rails and the current one has just one.
The only "problem" that occurs with multiple +12V rails is that when a +12V rail is overloaded (for example: more than 20A is being demanded from a rail set to deliver only up to 20A), the PSU shuts down. Since there are no "limits" on single +12V rail PSU's, you cannot overload the rails and cause them to shut down... unless you're using a too-small PSU in the first place. Single +12V rails do not have better voltage regulation, do not have better ripple filtering, etc. unless the PSU is better to begin with.
So there are no disadvantages to using a PSU with multiple +12V rails?
No! I wouldn't say that at all. To illustrate potential problems, I'll use these two examples:
An FSP Epsilon 700W has ample power for any SLI rig out there, right? But the unit only comes with two PCIe connectors. The two PCIe connectors on the unit are each on their own +12V rail. Each of these rails provides up to 18A which is almost three times more than what a 6-pin PCIe power connector is designed to deliver! What if I want to run a pair of GTX cards? It would have been ideal if they could put two PCIe connectors on each of those rails instead of just one, but instead those with GTX SLI are forced to use Molex to PCIe adapters. Here comes the problem: When you use the Molex to PCIe adapters, you have now added the load from graphics cards onto the rail that's also supplying power to all of your hard drives, optical drives, fans, CCFL's, water pump.. you name it. Suddenly, during a game, the PC shuts down completely.
Solution: To my knowledge, there aren't one-to-two PCIe adapters. Ideally, you'd want to open that PSU up and solder down another pair of PCIe connectors to the rails the existing PCIe connectors are on, but alas... that is not practical. So even if your PSU has MORE than ample power for your next graphics cards upgrade, if it doesn't come with all of the appropriate connectors, it's time to buy another power supply.
Thermoelectric coolers (TECs) take a lot of power and are typically powered by Molex power connectors. I, for one, prefer to run TEC's on their own power supply, but that's not always an option. If you had a power supply with split +12V rails and powered your TEC's with Molexes, you would be putting your TEC's on the same +12V rail as the hard drives, optical drives, fans, CCFL's, water pump... you name it, just as you did with the Molex to PCIe adapters. The power supply could, essentially, shut down on you in the middle of using it. A power supply with a single, non-split +12V rail would not have any kind of limit on how much power is delivered to any particular group of connectors, so one could essentially run several TEC's off of Molex power connectors and not experience any problems.
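The TEC scenario is easy to sanity-check with arithmetic. The loads below are hypothetical round numbers (the rail cap, drive and pump currents, and TEC draw are all assumptions for illustration), but they show how quickly a peripheral rail's 20A budget disappears once Peltiers land on it:

```python
# Hypothetical loads (amps at +12V) hanging off the peripheral/Molex rail
# of a split-rail PSU. The 20A cap mirrors the common per-rail limit.
peripheral_rail_loads = {
    "hard drives (x4)":   4 * 0.8,   # ~0.8A each at 12V, assumed
    "optical drive":      1.5,
    "fans + CCFLs":       1.0,
    "water pump":         1.5,
    "TECs (x2 via Molex)": 2 * 7.0,  # Peltiers are hungry; 7A each assumed
}

OCP_LIMIT = 20.0  # amps per rail on this hypothetical unit

total = sum(peripheral_rail_loads.values())
print(f"total on rail: {total:.1f} A")        # 21.2 A
print("rail shuts down:", total > OCP_LIMIT)  # True -- mid-game blackout
```

The same drives and pump alone sit comfortably around 7A; it's the two TECs stacked onto the one capped rail that push it over and trip the shutdown, exactly the failure mode described above.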
Typical multiple +12V rail configurations:
- 2 x 12V rails
- Original ATX12V specification's division of +12V rails.
- One rail to the CPU, one rail to everything else.
- VERY old school since it's very likely that "everything else" may include a graphics card that requires a PCIe connector.
- Typically only seen on PSU's < 600W.
- 3 x 12V rails
- A "modified" ATX12V specification that takes into consideration PCIe power connectors.
- One rail to the CPU, one rail to everything else but the PCIe connectors and a third rail just for PCIe connectors.
- Works perfectly for SLI, but not good for PC's requiring four PCIe connectors.
- 4 x 12V rails (EPS12V style)
- Originally implemented in EPS12V specification
- Because typical application meant deployment in dual processor machine, two +12V rails went to CPU cores via the 8-pin CPU power connector.
- "Everything else" is typically split up between two other +12V rails. Sometimes 24-pin's two +12V would share with SATA and Molex would go on fourth rail.
- Not really good for high end SLI because a graphics card always has to share with something.
- Currently Nvidia will NOT SLI certify PSU's using this layout because they now require PCIe connectors to get their own rail.
- In the non-server, enthusiast/gaming market we don't see this anymore. The "mistake" of implementing this layout was only done initially by two or three PSU companies in PSU's between 600W and 850W and only for about a year's time.
- 4 x 12V rails (Most common arrangement for "enthusiast" PC)
- A "modified" ATX12V, very much like 3 x 12V rails except the two, four or even six PCIe power connectors are split up across the additional two +12V rails.
- If the PSU supports 8-pin PCIe or has three PCIe power connectors on each of the +12V rails, it's not uncommon for their +12V rail to support a good deal more than just 20A.
- This is most common in 700W to 1000W power supplies, although for 800W and up power supplies it's not unusual to see +12V ratings greater than 20A per rail.
- 5 x 12V rails
- This is very much what one could call an EPS12V/ATX12V hybrid.
- Dual processors still each get their own rail, but so do the PCIe power connectors.
- This can typically be found in 850W to 1000W power supplies.
- 6 x 12V rails
- This is the mack daddy because it satisfies EPS12V specifications AND four or six PCIe power connectors without having to exceed 20A on any +12V rail
- Two +12V rails are dedicated to CPU cores just like an EPS12V power supply.
- 24-pin's +12V, SATA, Molex, etc. are split up across two more +12V rails.
- PCIe power connectors are split up across the last two +12V rails.
- This is typically only seen in 1000W and up power supplies.
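The rail layouts listed above can be sketched as a simple bookkeeping exercise. This models the 6-rail arrangement with hypothetical per-rail limits and loads (all amp figures are assumptions, not measurements from any real PSU), showing how the split keeps every rail under its cap:

```python
# Sketch of the 6 x 12V layout described above. Limits and loads are
# illustrative guesses; the point is the per-rail accounting.
rails = {
    "12V1 (CPU core A)": {"limit": 20, "loads": {"EPS 8-pin half": 11}},
    "12V2 (CPU core B)": {"limit": 20, "loads": {"EPS 8-pin half": 11}},
    "12V3 (24-pin)":     {"limit": 20, "loads": {"motherboard": 6, "SATA": 4}},
    "12V4 (Molex)":      {"limit": 20, "loads": {"drives/fans/pump": 8}},
    "12V5 (PCIe 1+2)":   {"limit": 20, "loads": {"GPU #1": 14}},
    "12V6 (PCIe 3+4)":   {"limit": 20, "loads": {"GPU #2": 14}},
}

for name, rail in rails.items():
    drawn = sum(rail["loads"].values())
    status = "OK" if drawn <= rail["limit"] else "OCP TRIP"
    print(f"{name}: {drawn} A / {rail['limit']} A  {status}")
```

Re-run the same check after moving both GPUs onto one rail (28A on a 20A cap) and you reproduce the EPS12V-style "mistake" layout that got PSU companies in trouble.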
Ok... What's the bottom line?
The bottom line is, for 99% of the folks out there, single vs. multiple +12V rails is a NON ISSUE. It's something that has been hyped up by marketing folks on BOTH SIDES of the fence. Too often we see mis-prioritized requests for PSU advice: asking "what single +12V rail PSU should I get?" when the person isn't even running SLI! Unless you're running a plethora of Peltiers in your machine, it should be a non-issue, assuming that the PSU has all of the connectors your machine requires and there is no need for "splitters" (see Example 1 in the previous bullet point).
The criteria for buying a PSU should be:
- Does the PSU provide enough power for my machine?
- Does the PSU have all of the connectors I require (6-pin for high end PCIe, two 6-pin, four 6-pin or even the newer 8-pin PCIe connector)?
- If using SLI or Crossfire, is the unit SLI or Crossfire certified (doesn't matter if a PSU is certified for one or the other as long as it has the correct connectors. If it passed certification for EITHER that means it's been real world tested with dual graphics cards in a worst case scenario).
Figure out if there are any variables that may affect the actual output capability of the PSU:
- What temperature is the PSU rated at? Room temperature (25°C to 30°C) or actual operating temperature (40°C to 50°C)?
- If room temperature, what's the derating curve? As a PSU runs hotter, its capability to put out power is diminished. If no derating data can be found, assume that a PSU rated at room temperature may only be able to put out around 75% of its rated capability once installed in a PC.
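Derating is easy to estimate. This sketch uses an assumed linear derate of about 1% of capacity per degree above the rating temperature (a made-up but plausible curve, not from any datasheet) alongside the 75% rule of thumb from the post:

```python
# Derating sketch: a PSU "rated" at room temperature loses output
# capability as intake air warms. The 1%/degree slope is an assumption.

def derated_output(rated_watts, temp_c, rated_at_c=25.0, derate_per_c=0.01):
    """Estimated sustainable output at a given intake temperature."""
    if temp_c <= rated_at_c:
        return rated_watts
    return rated_watts * (1.0 - derate_per_c * (temp_c - rated_at_c))

# A "600W" unit rated at 25C, running at 45C inside a warm case:
print(derated_output(600, 45))  # 480.0 -- only 80% of the label
# The post's rule of thumb when no derating curve is published:
print(600 * 0.75)               # 450.0
```

Either way, the takeaway matches the post: a room-temperature "600W" label may be worth closer to 450-480W in a real chassis, while a unit rated at 40-50°C delivers its label.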
After that, narrow selection down with finer details that may be more important to others than it may be to you....
- Does the unit have power factor correction?
- Is the unit efficient?
- Is the unit quiet?
- Is the unit modular?
- Am I paying extra for bling?
- Do I want bling?
Last edited by jonnyGURU; 03-27-08 at 12:11 PM. Reason: Updated and added to....
02-21-08, 09:55 AM #3
Well, looks like this should be stuck for future reference.
06-28-08, 05:18 PM #4
It's never a bad thing to bump an old thread that's really good stuff!
Thanks, saved me from endless googling.
04-21-09, 05:44 PM #6
jonnyguru is the man.
your website is also the best.
helped me choose my hx620.
too bad the Antec Signatures are too expensive and not available though :-p
09-15-09, 10:56 AM #7
FYI, when I was working at Intel, we had to obey the ATX 2.0 spec very closely, and it says explicitly that no single voltage supply (individual DC-DC converter output) can exceed a maximum output power of 240 watts; at 12V, that's 20A. So if you wanted 12V at 40A, you had to put in two DC-DC converters, each limited to 20A, and use load sharing.
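The arithmetic behind that limit is worth spelling out, since it's exactly where the familiar 20A-per-rail figure comes from (the numbers here just restate the post's 240W and 12V values):

```python
# The 240W-per-output safety limit described in the post: at +12V it
# works out to exactly the familiar 20A per-rail cap.
import math

MAX_WATTS_PER_OUTPUT = 240.0  # per individual converter output
RAIL_VOLTAGE = 12.0

max_amps_per_rail = MAX_WATTS_PER_OUTPUT / RAIL_VOLTAGE
print(max_amps_per_rail)  # 20.0

# So a 12V/40A requirement needs multiple current-limited converters
# sharing the load:
required_amps = 40.0
converters_needed = math.ceil(required_amps / max_amps_per_rail)
print(converters_needed)  # 2
```

This is the engineering-side view of the same story jonnyGURU tells above: the "rails" are current limits bolted onto one or more sources, there for safety compliance rather than for cleaner power.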
This spec is for safety and regulation of consumer products; it's strangely difficult to market >1kW supplies to the public without getting into some regulatory hassles.
11-14-09, 07:16 AM #8
Geez louise, this answered so many questions for me. Thanks a lot for this one.
11-14-09, 09:27 AM #9
Nothin' up my sleeve..
12-30-09, 02:21 PM #10
Now I finally know the differences, thanks!
06-11-10, 06:28 PM #12
Wow jonny, that was awesome... such an awesome read, especially for a newbie like myself. Actually, I wanted to buy an HP power supply, the 5188-2625, and landed at the following link: http://www.itwinkle.com/beathp3024ps.html. Interestingly, since they're out of stock on it, they tell me to buy the 5187-6114 along with two Y-splitters to replace it. I was wondering if that's true, and should I go for it? I mean, if I connect the splitters to the Molex connector, it will definitely give me the same connectors as the 5188-2625, but what about the amp rating?
06-11-10, 07:04 PM #13
^^ Are you seriously considering that PSU or just trolling?
I'd pray to God you're just trolling. If not, definitely look into a QUALITY PSU. JonnyGuru's site provides a wealth of information. That PSU would probably go up in flames under a 100W load. You can find cheap QUALITY 400W PSUs on Newegg.
06-12-10, 12:15 PM #15
Maybe educate your client....
06-12-10, 12:29 PM #16
06-12-10, 01:14 PM #17
Personally, the Delta is the only one of the two I would even consider buying. That's the one out of stock. The Hipro I'd pass on, though it's not terrible quality.
09-01-12, 12:30 PM #19
In theory, multiple rails are better for safety purposes, but I still see units up to 1200W with a single rail. What's the deal? All they want is to save money. And how many watts per rail can be considered "safe"? Isn't it dangerous when 850W+ is on a single rail?
I just don't get it... when a strong PSU already costs way too much, why do they still want to save on cost?
09-01-12, 12:53 PM #20
They want to save on costs because then they'd have to charge even more.
Plus the single/multiple rail marketing is still going strong.
Plus there are some situations where you really do need single-rail units, or to be very careful about which rails you put where. Pretty much only extreme benching, but we are part of the market for those huge units.
Personally, I'd rather see big units with a stack of 30A rails, much like the Antec HCP series.
30A still isn't enough for extreme OCing on some CPUs, though. Gulftowns have been clocked at well over 400W on LN2.
It all comes down to money as usual.