SUMMARY: CPU heat becoming a major cost issue.
I reported on the growing size of heatsinks I saw at Computex HERE. It didn’t take long for these cooling beasties to show up on mainstream systems – I received a flyer from Dell with this picture on the cover:

A prediction: if heatsink sizes and CPU heat output keep growing at their current rates, heatsinks will be larger than the case within three years.
Now multiply those heat dissipation requirements by a few hundred PCs and you get a growing problem in computer data centers. An article in the July 18th issue of eWeek entitled “Servers to keep their cool” piqued my interest – the lead paragraph gives you an idea of what’s happening:
“Vendors are looking to hardware and software to help customers handle growing problems with heat in data centers. The focus comes at a time when the promise of greater computer power is being thwarted by the increased heat generated by faster processors and denser form factors.”
In other words, hotter CPUs packed into less space yield LOTS of waste heat. IBM introduced a product called the “eServer Rear Door Heat Exchanger” (aka Cool Blue), which uses chilled water to cool the air exiting server racks. Frankly, this seems a bit inefficient – why not cool the CPUs directly with chilled water?
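To put rough numbers on both points – a few hundred PCs’ worth of waste heat, and the chilled water a rear-door heat exchanger has to push – here is a minimal back-of-the-envelope sketch in Python. Every figure in it (watts per server, servers per rack, water temperature rise) is an assumption for illustration, not an IBM or Dell spec; the only physics is Q = m_dot * c_p * dT.

# Back-of-the-envelope rack heat math (assumed numbers, for illustration only)
WATTS_PER_SERVER = 400      # assumed draw of a loaded dual-CPU 1U server
SERVERS_PER_RACK = 40       # assumed nearly full 42U rack
CP_WATER = 4186.0           # J/(kg*K), specific heat of water
DELTA_T = 10.0              # K, assumed water temperature rise through the coil

rack_heat_w = WATTS_PER_SERVER * SERVERS_PER_RACK     # total heat the rack dumps
water_kg_per_s = rack_heat_w / (CP_WATER * DELTA_T)   # Q = m_dot * c_p * dT, solved for m_dot
water_l_per_min = water_kg_per_s * 60.0               # 1 kg of water is roughly 1 liter

print("Rack heat load: %.1f kW" % (rack_heat_w / 1000.0))
print("Chilled water needed: %.1f L/min at a %.0f K rise" % (water_l_per_min, DELTA_T))

At those assumed numbers a single rack is a 16 kW space heater needing on the order of 20 L/min of chilled water to carry the heat away, which is why the anonymous commenter below talks about hot spots in the aisles rather than about any one server.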
The point of all this is dissipating CPU heat is now becoming a limiting factor; the era of the $3 OEM heatsink is over, and alternative cooling technologies are becoming more viable as heat trends accelerate.
I received this comment from someone who is involved with IBM’s servers and wishes to remain anonymous:
“The purpose of the rear rack cooling is to give customers a certain level of comfort.
The cost, maintenance, and complexity of water-cooling CPUs, especially 4, 8, or 16 in a single server, is way too much for the average x86-based server/datacenter manager to swallow at this point. Is it inevitable? Perhaps. But IBM cannot be the first to come to market and say it’s “necessary.” They’d get persecuted with FUD from the competition.
It would also be a very hard sell.
The old mainframe customers would buy right up, but they aren’t the ones making x86-based technology purchasing decisions. It is also a solution that keeps the water away from the inside of the system, hence, away from the worry of the customer. It is also a lot easier to get a single water pipe to the rack, as opposed to trying to maintain reservoirs inside each system, or worse yet, maintain multiple pipes into every server.
The amount of cooling required in a datacenter environment is a lot more than a desktop or even a bunch of workstations sitting next to each other; recycling the same old water would not prove to be an extremely efficient solution. By cooling the hot air coming out the back of the rack, one essentially eliminates “hot spots” in the aisles and alleviates the major issue of physical positioning of racks and having to move them further and further apart. It’s an excellent solution for today and perhaps a good segue into internally water-cooled servers.