More CPU Heat

SUMMARY: CPU heat becoming a major cost issue.

I reported on the growing size of heatsinks I saw at Computex HERE. It didn’t take long for these cooling beasties to show up on mainstream systems – I received a flyer from Dell with this picture on the cover:

[Image: Dell heatsink pictured on the flyer cover]

A prediction: At the current rate of heatsink and CPU heat trends, heatsinks will be larger than the case in three years.

Now multiply those heat dissipation requirements by a few hundred PCs and you get a growing problem in computer data centers. I saw an article in the July 18th issue of eWeek entitled “Servers to keep their cool” that piqued my interest – the lead paragraph gives you an idea of what’s happening:

“Vendors are looking to hardware and software to help customers handle growing problems with heat in data centers. The focus comes at a time when the promise of greater computer power is being thwarted by the increased heat generated by faster processors and denser form factors.”

In other words, hotter CPUs packed into less space yield LOTS of waste heat. IBM introduced a product called “eServer Rear Door Heat Exchanger” (aka Cool Blue) which uses chilled water to cool the air exiting server racks. Frankly, this seems a bit inefficient – why not cool the CPUs directly with chilled water?
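
To put rough numbers on that multiplication, here is a minimal back-of-envelope sketch in Python; the box count and per-box wattage are illustrative assumptions, not figures from the article, while the tons-of-refrigeration conversion is the standard 3.517 kW per ton:

# Back-of-envelope heat load estimate (illustrative assumptions only).
# Essentially all electrical power drawn by IT equipment ends up as heat
# that the room's air conditioning must remove.

SERVERS = 300              # assumed "few hundred" boxes
WATTS_PER_SERVER = 250.0   # assumed average draw per box (CPUs, drives, PSU losses)
KW_PER_TON = 3.517         # 1 ton of refrigeration removes 3.517 kW of heat

heat_kw = SERVERS * WATTS_PER_SERVER / 1000.0
tons_of_cooling = heat_kw / KW_PER_TON

print(f"Heat load: {heat_kw:.1f} kW")
print(f"Cooling required: {tons_of_cooling:.1f} tons of refrigeration")
# -> roughly 75 kW of heat, or about 21 tons of cooling, before any
#    allowance for AC inefficiency or future growth.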

CONCLUSIONS

The point of all this is that dissipating CPU heat is becoming a limiting factor; the era of the $3 OEM heatsink is over, and alternative cooling technologies are becoming more viable as heat trends accelerate.

I received this comment from someone who is involved with IBM’s servers and wishes to remain anonymous:

“The purpose of the rear rack cooling is to give customers a certain level of comfort.

The cost, maintenance, and complexity of water-cooling CPUs, especially 4, 8, or 16 in a single server, is way too much for the average x86-based server/datacenter manager to swallow at this point. Is it inevitable? Perhaps. But IBM cannot be the first to come to market and say it’s “necessary.” They’d get persecuted with FUD from the competition.

It would also be a very hard sell.

The old mainframe customers would buy right up, but they aren’t the ones making x86-based technology purchasing decisions. It is also a solution that keeps the water away from the inside of the system, hence, away from the worry of the customer. It is also a lot easier to get a single water pipe to the rack, as opposed to trying to maintain reservoirs inside each system, or worse yet, maintain multiple pipes into every server.

The amount of cooling required in a datacenter environment is a lot more than a desktop or even a bunch of workstations sitting next to each other; recycling the same old water would not prove to be an extremely efficient solution. By cooling the hot air coming out the back of the rack, one essentially eliminates “hot spots” in the aisles and alleviates the major issue of physical positioning of racks and having to move them further and further apart. It’s an excellent solution for today and perhaps a good segue into internally water-cooled servers.
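
For a sense of why a single chilled-water line to the rack door is manageable while plumbing every CPU is not, the governing heat balance is just Q = m_dot * c_p * delta_T. Below is a rough sketch; the 30 kW rack load and 10 °C water temperature rise are illustrative assumptions, not IBM specifications:

# Rough chilled-water flow estimate for a rear-door heat exchanger.
# Illustrative assumptions: 30 kW per rack, 10 C water temperature rise.

RACK_HEAT_W = 30_000.0   # assumed heat load of one dense rack, in watts
WATER_CP = 4186.0        # specific heat of water, J/(kg*K)
DELTA_T = 10.0           # assumed rise in chilled-water temperature, K

mass_flow_kg_s = RACK_HEAT_W / (WATER_CP * DELTA_T)   # Q = m_dot * c_p * dT
litres_per_min = mass_flow_kg_s * 60.0                # 1 kg of water is ~1 litre

print(f"Water flow needed: {mass_flow_kg_s:.2f} kg/s "
      f"(~{litres_per_min:.0f} L/min) per rack")
# -> about 0.72 kg/s, i.e. roughly 43 L/min of chilled water per rack door,
#    which one supply pipe can handle comfortably.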

Email Joe


jcw122 (Member):

I heard a while back they're developing fiber optic processors, using fiber optic lines instead of silicon... heat=0 price=? awesomeness=10!


dropadrop (Member):

We've had problems in some server rooms at work when it gets very hot. Once we even had to turn off some not-so-crucial servers! The problem was solved by upgrading the AC rather than the servers, though...


TollhouseFrank (Senior Headphone Guru):

Heat's always been a problem for computers. However, I do believe we are soon approaching a critical point of heat vs. value in computers. It's soon going to get to where people will find more value AWAY from computers than near them, with all the heat they throw off nowadays.


ShadowPho (Member):

Future Mom:
"Lets Boil Eggs!
MIKE!!! Why did you close that PC case AGAIN?"


Voodoo Rufus (Powder Junkie Moderator):

I think the future is more efficiently designed CPUs like the Dothan and AMD's chips (to a lesser extent). The key is to keep voltage down, leakage current under control, and keep IPC high.
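
Keeping voltage down really is the big lever here: dynamic CMOS power scales roughly as C * V^2 * f, so a modest voltage cut pays off quadratically. A minimal sketch follows, with every figure illustrative rather than taken from any actual chip:

# Why "keep voltage down" matters: dynamic power scales with V squared
# (P_dyn ~ C * V^2 * f). All numbers below are illustrative assumptions.

def dynamic_power(capacitance_f, voltage_v, freq_hz):
    """Classic switching-power approximation for CMOS logic."""
    return capacitance_f * voltage_v**2 * freq_hz

BASE = dynamic_power(20e-9, 1.40, 3.0e9)         # hypothetical 1.40 V, 3.0 GHz part
UNDERVOLTED = dynamic_power(20e-9, 1.20, 3.0e9)  # same chip run at 1.20 V

print(f"Base: {BASE:.0f} W, undervolted: {UNDERVOLTED:.0f} W")
print(f"Dynamic power saved: {(1 - UNDERVOLTED / BASE) * 100:.0f}%")
# -> about 27% less dynamic power from a ~14% voltage cut, before even
#    counting the drop in leakage current at the lower voltage.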


drewmister (Member):

Well, yes, the eventual jump would have to be to light... it's the limit of the universe. But the time before they have completely developed that will be a very long time. Still, I can't say I don't see fiber optic motherboards in the future instead of the integrated circuit board. The technology has come for 100 Gb/s fiber optic Ethernet, so it can't be too much of a stretch to take that to the next level, just maybe not a stretch I will live to see.


JaY_III (Senior of BX):

It looks like more efficient processors like the Pentium-M are going to be needed sooner rather than later in the server market.

First week at a new job and we are already having this problem.
The P6 core, the best Intel has ever done, needs to make a comeback.
The P68 is too hot for its own good.


=ACID RAIN= (Member):

I had a similar problem at home with 4 computers in one spare bedroom. I ditched folding@home because of the heat, and the room was still too warm even with idle processors. In trying to cool that room off, my wife would get cold somewhere else in the house and complain. With the extreme AC bill and cold feet, we decided to just get an 80 dollar window unit one day. Best 80 bucks I've spent all summer ;)

Now I just leave the door cracked so the cats can go in and out, and the room stays nice and cool, and my wife isn't complaining any more ;)


jamesavery22 (Member):

Quoting the anonymous IBM comment from the article:

“The amount of cooling required in a datacenter environment is a lot more than a desktop or even a bunch of workstations sitting next to each other; recycling the same old water would not prove to be an extremely efficient solution. By cooling the hot air coming out the back of the rack, one essentially eliminates ‘hot spots’ in the aisles and alleviates the major issue of physical positioning of racks and having to move them further and further apart. It’s an excellent solution for today and perhaps a good segue into internally water-cooled servers.”

That make any sense to anyone?

Watercooling multiple PCs on the same loop wouldn't be efficient. OK makes sense.

Cooling the hot air exhausted out of the rear of a rack from multiple boxes (via some AC I assume :shrug: ) allows you to have boxes closer together and gets rid of "hot spots." OK that makes sense also.

But how does the latter make transitioning to water-cooling any easier? ----> :confused: <----

