
Which one is better: Apogee or Storm?


Paapaa

We had an interesting discussion here about Storm and Apogee. :argue: Let me make a summary:

1. SystemCooling had a test showing that the Storm is clearly the better waterblock, with significantly lower thermal resistance than the Apogee. The tests were conducted using a 14mm x 14mm die simulator and later with a 36mm x 36mm simulator. (The C/W sketch after this list shows how those thermal resistance figures are computed.)

14mm
36mm

2. When the two waterblocks were tested in real setups using real processors and complete WC rigs, the differences disappeared and the Apogee even got slightly better results (lower CPU temperature) than the Storm. A few tests:

SystemCooling
Cooling-Masters
OverClock Intelligence Agency
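
For reference, here is a minimal sketch of the C/W calculation these die-sim tests are built on. The numbers are invented for illustration and are not from the reviews above.

```python
# Invented example numbers -- not from the actual reviews.
# Die-sim testing boils down to: apply a known heat load, measure the
# die-to-water temperature difference, and divide to get C/W.

heat_load_w = 100.0   # watts dissipated by the copper die simulator
t_die_c = 42.0        # temperature measured at the simulator die face
t_water_c = 30.0      # coolant temperature entering the block

c_per_w = (t_die_c - t_water_c) / heat_load_w
print(f"Thermal resistance: {c_per_w:.3f} C/W")  # -> 0.120 C/W
```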

The question is: why the big difference, and what should we believe? A few explanations were provided by me and Otter:

"The thermal diode inside the CPU might not be properly placed."

I tried to verify this but found no evidence or information about it for modern processors. The thermal diode is definitely located within the core, but is it located in the hottest part? If not, then the Storm could possibly cool the hottest parts better than the Apogee but still get the same temperature reading. On the other hand, the Cooling-Masters test indicates that the Apogee was able to heat the water more effectively (CPU-Eau column) with those two Intel Pentiums. Diode placement should not affect that. As the CPU-water temperature difference was smaller, it is very likely that more heat was transferred into the water.
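
To make the CPU-Eau reasoning concrete, here is a minimal sketch with invented numbers (the review only gives the deltas, so treat this as illustration): at equal heat load, a smaller CPU-to-water delta means a lower effective thermal resistance.

```python
# Invented example values -- the comparison is the point, not the numbers.
heat_load_w = 100.0  # assume both blocks dissipate the same CPU load

# "CPU-Eau" style deltas: CPU temperature minus water temperature, in C.
cpu_minus_water = {"Apogee": 11.5, "Storm": 12.5}

for block, delta_c in cpu_minus_water.items():
    print(f"{block}: {delta_c / heat_load_w:.3f} C/W effective")

# At the same load, the block with the smaller delta is moving heat into
# the water with less temperature penalty.
```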


"The Intel TTV simulator might not simulate reality very well."


Simulators most likely provide very repeatable results. But it is obvious that a copper block simulating a real CPU also involves some compromises. Can they affect the test results, and by how much? I found this very new article at Overclockers:

The Evolution of Aftermarket Heat Sink / Waterblock Testing

"Today, the copper slug with a thermocouple is no longer capable of yielding test results predictive of the cooling device's performance on a specific CPU."
According to Bill Adams there are some definite problems when testing waterblocks with a die simulator: measurement location differences, secondary heat path losses, wrong copper slug dimensions, and flatness of the slug face. He also mentions this Apogee/Storm dilemma and discusses the integrated heat spreader and thermal interface materials, which also contribute to the thermal characteristics of a CPU. All of them should be taken into account when using a simulator. Very interesting reading. :clap:

So which one is better? :shrug: In my opinion tests made with real systems give more meaningful data than tests made with simulators - despite the potential flaws in testing. It is also very common that testing individual components makes the differences look bigger than they really are: for most users the Apogee and Storm have very similar performance. The same holds true for the D5/DDC comparison in another thread.
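
As a rough illustration of why isolated differences shrink in a system (all numbers hypothetical): the block is only one resistance in a chain that also includes the TIM and the radiator's water-over-ambient delta.

```python
# Hypothetical loop budget at 100 W -- illustration only.
heat_load_w = 100.0

radiator_delta_c = 8.0   # water temperature over ambient
tim_delta_c = 3.0        # thermal interface material losses

# Die-sim style C/W for two made-up blocks:
block_c_per_w = {"Block A": 0.12, "Block B": 0.15}

for name, cw in block_c_per_w.items():
    total_c = radiator_delta_c + tim_delta_c + cw * heat_load_w
    print(f"{name}: CPU ~{total_c:.1f} C over ambient")

# A 0.03 C/W gap looks like a dramatic 25% difference between the blocks,
# but it is only ~3 C out of a ~23-26 C total -- easily blurred by TIM
# mounts, diode error, and ambient drift in a real-system test.
```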

Any comments on this?
 
what a blunder if the Apogee was engineered better than the Storm. maybe we're just reaching the point where any new waterblock technology has very VERY diminishing returns. we should just move to pure silver blocks or diamond!
 
sims are better. the reason is you are comparing hard data that comes directly and evenly across the platform, and your objective in a waterblock is to lower its thermal resistance and hydraulic power requirement. whether it's a sim or the real world, you are only trying to isolate those numbers, and it's easier to determine this with steady data. in the real world, as most people like to put it, the data gets slightly corrupted by other factors, so you are no longer taking into account just the block design but also other factors. this is where people almost ALWAYS forget that in the real world there are more factors, but they keep citing that xx block does this.

well, it's not "xx block does this" in the real world, it's "xx block with yzab factor does this". the blocks in and of themselves are NO LONGER the pure test subject because people have muddled them with their so-called real world data.
in short: real world data tests a lot more than the block design, while a die simulator is more accurate for just the block design, isolating that alone.
 
Paapaa
your understanding is correct

thorilan
test bench 'winners' are test bench specific
system performance, what is sometimes referred to as 'real world', is determined in a system
- the foolishness commences when bench results are elevated over system performance

end users are only concerned with their computer's performance,
hence that is the data of relevance

PennyBag
blunder ??
quite the opposite
 
thorilan said:
well, it's not "xx block does this" in the real world, it's "xx block with yzab factor does this".

You forget one important thing: that "yzab" factor affects the cooling performance! If you want to just compare thermal resistance at various GPMs, then a simulator is fine, as you can isolate one variable and test that. The problem is that when you switch from the Apogee to the Storm, other factors change besides C/W. That pretty much makes it pointless to compare just one variable.

What if there were a block with a C/W only 1/10th of the Storm's at 1GPM? Wouldn't that be c00l! What if it restricted the flow so much that you needed this to get 1GPM? That would mean such a WB couldn't be used in WC rigs. It also means that simply comparing C/W at 1GPM doesn't give the whole truth.
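
A small sketch of that flow argument, with all curves invented: a loop settles where the pump's pressure-flow curve meets the loop's restriction curve, so a very restrictive block drags the flow down for everything in the loop, no matter how good its C/W looked at 1GPM.

```python
# All coefficients invented for illustration -- not real pump/block data.

def pump_head_psi(gpm: float) -> float:
    # Simple falling pump curve: maximum head at zero flow.
    return 4.0 - 2.5 * gpm

def loop_drop_psi(gpm: float, block_k: float) -> float:
    # Quadratic restriction: radiator + tubing (k = 1.0) plus the block.
    return (1.0 + block_k) * gpm ** 2

def operating_gpm(block_k: float, step: float = 0.001) -> float:
    # Walk up the flow axis until the pump can no longer overcome the loop.
    gpm = 0.0
    while pump_head_psi(gpm) > loop_drop_psi(gpm, block_k):
        gpm += step
    return gpm

for name, k in [("mildly restrictive block", 0.8),
                ("very restrictive block", 6.0)]:
    print(f"{name}: ~{operating_gpm(k):.2f} GPM")

# The restrictive block never gets anywhere near 1 GPM in this loop, so
# its headline C/W-at-1GPM number is unreachable in practice.
```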

Cooling-Masters performed a test where only the block was changed. The Apogee was better in that setup. Could you please explain how their result is worse than a simulator result, and why that test would give users worse information than a simulator test? Why don't the simulator tests correlate with the real world tests?
 
Paapaa said:
"The thermal diode inside the CPU might not be properly placed."

I tried to verify this but found no evidence or information about it for modern processors. The thermal diode is definitely located within the core, but is it located in the hottest part? If not, then the Storm could possibly cool the hottest parts better than the Apogee but still get the same temperature reading. On the other hand, the Cooling-Masters test indicates that the Apogee was able to heat the water more effectively (CPU-Eau column) with those two Intel Pentiums. Diode placement should not affect that. As the CPU-water temperature difference was smaller, it is very likely that more heat was transferred into the water.
first of all, CPU diodes can be off by as much as 7 degrees centigrade. that's quite a large amount of error. second, it's well known that they are placed in parts of the core that are not necessarily producing the most heat. the purpose of this diode is not to display accurate temps; its purpose is to alert the mobo when the CPU is in danger. 7 degrees of error is plenty to keep your CPU alive (or functional at least) by automatically shutting down when the CPU reports a high enough temp. therefore, diode placement doesn't matter to manufacturers and, if possible, they will purposely place the diode in an area with less heat production.

Paapaa said:
"The Intel TTV simulator might not simulate reality very well."

Simulators most likely provide very repeatable results. But it is obvious that a copper block simulating a real CPU also involves some compromises. Can they affect the test results, and by how much? I found this very new article at Overclockers:

The Evolution of Aftermarket Heat Sink / Waterblock Testing
using real CPUs introduces a margin of error bigger than George Bush's ego. this is why the Apogee compares so closely to the Storm in the "reality" tests. if you were to compare the TDX and MCW6002 with those other two blocks on real CPUs, you would find the gap closing a lot more than it does with die sims. first, heat is evenly distributed over a die sim. with a CPU, heat sources are in different places depending on the kind of load experienced, and in some places on the die, heat isn't that intense regardless of load. second, it's impossible, no matter what the test bed is, to get accurate temps from a real-world test. every time you reproduce the test bed or change any component, you ruin the ability to produce repeatable results. if Lee were to set up that test bed a second time to include more blocks, you would see different results than you see here for the Storm and Apogee. die sims produce repeatable, reliable results.

of course, you can't expect the same temps on your CPU as on a die sim. so why use one? the die sim produces reliable results that we can easily repeat in testing. furthermore, because the heat on the sim is spread evenly, the results are more dramatic (in terms of deltaT) than what you would see on a CPU die. this allows you to more easily compare results.
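
One way to put rough numbers on that is heat flux density. Using the two simulator sizes from the first post and an invented 100W load: the same heat on a smaller, evenly heated face gives a much larger deltaT for block differences to act on.

```python
# 100 W is an invented load; the die sizes are the two simulators
# mentioned in the first post.
heat_load_w = 100.0

for side_mm in (14, 36):
    area_cm2 = (side_mm / 10.0) ** 2
    flux = heat_load_w / area_cm2
    print(f"{side_mm}x{side_mm} mm face: {flux:.1f} W/cm^2")

# 14x14 mm -> ~51 W/cm^2, 36x36 mm -> ~7.7 W/cm^2.  The concentrated
# case produces larger deltaTs, which separates blocks more clearly on
# a sim than on an IHS-spread CPU.
```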

for this reason and the reason stated above (imperfections in diode temp readings), the Apogee may show better temp results than a Storm but the Storm will produce a better OC than the Apogee. how can this be? the diode is useless. the Storm carries heat away from the most concentrated heat-producing areas (directly above the die), while the Apogee spreads its cooling evenly. arguably, the Apogee is supposed to be a better solution for IHS-capped CPUs, but this is not the case. your diode may read that temps drop with the Apogee but max OC will be better with the Storm. this has been shown to be true in many cases.


Paapaa said:
According to Bill Adams there are some definite problems when testing waterblocks with a die simulator: measurement location differences, secondary heat path losses, wrong copper slug dimensions, and flatness of the slug face. He also mentions this Apogee/Storm dilemma and discusses the integrated heat spreader and thermal interface materials, which also contribute to the thermal characteristics of a CPU. All of them should be taken into account when using a simulator. Very interesting reading. :clap:

So which one is better? :shrug: In my opinion tests made with real systems give more meaningful data than tests made with simulators - despite the potential flaws in testing. It is also very common that testing individual components makes the differences look bigger than they really are: for most users the Apogee and Storm have very similar performance. The same holds true for the D5/DDC comparison in another thread.

Any comments on this?
as stated above, tests with real CPUs are completely unreliable. those temps mean absolutely nothing as tangible data. the fact that the die sim and CPU results vary so wildly proves this.
 
it is never pointless to evaluate just one variable at a time. good science requires it.

the fact remains constant. for billa: i was not talking about winners or losers for the test bench, i was talking about using them to make a better block. maybe i should have specified that more clearly. as far as what the public sees in their systems, i for one (i build many systems a year) have seen on a consistent basis that the Storm has performed better (most customers request a removed IHS), so there will be no convincing me, having built more than a few systems using both, that my numbers are miraculously inconsistent every single time
 
moonlightcheese said:
your diode may read that temps drop with the Apogee but max OC will be better with the Storm. this has been shown to be true in many cases.

Could you post a link (or as many as possible) to such a test? I also thought that if diodes can't be trusted, an overclocking test could show more reliable results. But I still haven't found any such test with Google.

Could you also post some information about the diodes of modern processors being off? I couldn't find anything useful on this.
 
for finding out about diodes being off, go to any board maker's forums and read about it there. for instance, do a search at DFI Street or Abit and you will find tons of references, mostly related to BIOS updates
 
If you want to run CPU, chipset, and GPU all in one loop I would go with the Apogee; it's less restrictive. Those tests were interesting.
 
I plan on running the Storm with a GPU block... If it doesn't perform well, I can always sell the Storm and get something else...
 
Paapaa said:
Could you post a link (or as many as possible) to such a test? I also thought that if diodes can't be trusted, an overclocking test could show more reliable results. But I still haven't found any such test with Google.

Could you also post some information about the diodes of modern processors being off? I couldn't find anything useful on this.
certainly. here's a great article that shows two motherboards reporting the same CPU with a 14C temp difference. it also details the particulars of how the diode varies from chip to chip and gives sources of info. this article is a perfect example of how useless die sensors are.
http://www.legitreviews.com/article/79/1/
-aDaM^ said:
If you want to run CPU, chipset, and GPU all in one loop I would go with the Apogee; it's less restrictive. Those tests were interesting.
this isn't true either. the Storm is designed to perform the same regardless of flow rate. you will notice that the dT vs Q (delta temp vs flow rate) curve is relatively flat on the Storm compared to other waterblocks. chipset temps and GPU temps should be relatively unaffected by the Storm.

edit: ah, here's another photo showing diode location and major heat source:
http://www.silentpcreview.com/article191-page1.html
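
A sketch of what "relatively flat dT vs Q" means in practice. The coefficients are invented, in the spirit of the common empirical fit C/W = a + b*Q^-c; only the curve shapes matter here.

```python
# Invented coefficients for two hypothetical blocks -- shapes only.

def c_per_w(flow_gpm: float, a: float, b: float, c: float) -> float:
    # Empirical fit: fixed conduction term plus a convection term
    # that falls off as flow rises.
    return a + b * flow_gpm ** -c

blocks = {
    "flat-curve block": (0.10, 0.02, 0.8),
    "flow-sensitive block": (0.08, 0.06, 0.8),
}

for name, (a, b, c) in blocks.items():
    deltas = [c_per_w(q, a, b, c) * 100.0 for q in (0.5, 1.0, 1.5)]
    print(name, [f"{d:.1f} C" for d in deltas])  # at 0.5/1.0/1.5 GPM, 100 W

# The flat-curve block loses ~2 C across the whole flow range, the
# flow-sensitive one ~6 C -- so adding a GPU block (which cuts flow)
# hurts the flat-curve block much less.
```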
 
Last time I read one of those giant threads my head felt like it was going to explode.

I just came to the understanding that in the average watercooling loop the apogee is going to give you damn close to the performance that a storm will give you. It will probably even give you better temps on a GPU block if you have it in the same loop since the apogee is less restrictive.

But in a loop where you are trying to get maximum performance there isn't any better choice than the storm. If you are pumping 1gpm+ through a Storm, you aren't going to find a better solution. Well maybe a G7 :D
 
jamesavery22 said:
I just came to the understanding that in the average watercooling loop the apogee is going to give you damn close to the performance that a storm will give you.
not quite:
http://www.systemcooling.com/images/reviews/LiquidCooling/Swiftech_Apogee/image26big.gif
jamesavery22 said:
It will probably even give you better temps on a GPU block if you have it in the same loop since the apogee is less restrictive.
you're willing to sacrifice CPU OC for GPU OC? and not proportionally if i might add...
jamesavery22 said:
But in a loop where you are trying to get maximum performance there isn't any better choice than the storm. If you are pumping 1gpm+ through a Storm, you aren't going to find a better solution. Well maybe a G7 :D
if you are doing WC isn't the whole point max OC?

guys these arguments are like 6 months old. all of this has been laid to rest.
 
I asked Lee (robotech) to perform his "real world" cpu testing with a Maze 3. As you see in the graph HERE, the Maze 3 is quite the dog: horrible performance as tested on the die sim, about 7-8C worse than the Storm at 100W. When Lee tested the Maze 3 on the real world cpu test, guess what? The Maze was within .5C of all the other blocks: Apogee, Storm and 6002. I see this data has now been edited out, but you may reference HERE and HERE. The last link is the results compared to the other blocks. Seem odd to you? Does to me. With the current theory of using the TTV or IHS-based CPU for testing, all blocks will come out near identical in performance; I mean, a flat copper plate soldered to a copper pipe with water running through it would do the job the same.
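
A rough sketch of why testing through an IHS compresses the field, with all resistances invented: the IHS and its TIM add a series resistance that every block pays identically, so the same block-to-block gap becomes a smaller slice of the total (and heat spreading, secondary paths, and TIM variation tend to shrink the gap further).

```python
# Invented series resistances in C/W, at an invented 100 W load.
heat_load_w = 100.0
ihs_plus_tim_c_per_w = 0.15   # paid identically by every block tested

blocks = {"good block": 0.10, "mediocre block": 0.17}

for name, block_cw in blocks.items():
    bare_c = block_cw * heat_load_w
    capped_c = (block_cw + ihs_plus_tim_c_per_w) * heat_load_w
    print(f"{name}: bare die {bare_c:.0f} C, through IHS {capped_c:.0f} C")

# Bare die: 10 C vs 17 C, a 70% spread.  Through the IHS: 25 C vs 32 C --
# the same absolute gap, now a far smaller fraction of the total, and
# real-world noise erodes it further toward the "all blocks look the
# same" result above.
```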
 
If this gets out of hand and nasty, thread will be closed and vacations will be granted.

Fair warning, keep it CIVIL please.
 
hookem2oo7 said:
I plan on running the Storm with a GPU block... If it doesn't perform well, I can always sell the Storm and get something else...

I used to run Cathar's G4 and a DD Maze4, it worked great. So I'm guessing the Swiftech Storm is the same, you'll be fine :)
 
moonlightcheese said:
not quite:
http://www.systemcooling.com/images/reviews/LiquidCooling/Swiftech_Apogee/image26big.gif

you're willing to sacrifice CPU OC for GPU OC? and not proportionally if i might add...

if you are doing WC isn't the whole point max OC?

guys these arguments are like 6 months old. all of this has been laid to rest.


Yeah, I saw that test. As others have said, in a real setup temp differences are slim to none.

Am I willing to sacrifice CPU temps for GPU temps? No. I'd add another rad and/or pump if I was going to see a hit in temps when adding a block. But I'm not talking about me. I'm talking about the majority. Tons of people come in here asking what will happen when they add a GPU block. How many people here have a CPU + GPU + single pump + single rad setup? That's the majority. And if that single pump isn't an Iwaki then it applies.

And no, WC is definitely not always about the lowest temps possible. Quite often it's about quieting a box down.
 
jamesavery22 said:
How many people here have a CPU + GPU + single pump + single rad setup? That's the majority. And if that single pump isn't an Iwaki then it applies.
is it? i'm not so sure. maybe it is >.>

jamesavery22 said:
And no, WC is definitely not always about the lowest temps possible. Quite often it's about quieting a box down.
true XD
 