
Quad Core = Water Cooling?

Phrenetical said:
As a programmer I believe you're dead wrong.

Sure, most people code in a linear fashion because they can't understand OO methods of programming, just as a quick example.

??????

OO has nothing to do with the number of processors or any form of parallelism.

While the person you're responding to isn't entirely correct, the examples you give do not demonstrate any real understanding of programming for multiple processors.

There is very real and often very expensive (development-time-wise) coding that needs to be done to make efficient use of multiple cores. The OS doesn't automagically spread the instructions from a single program across multiple cores. Nor can a compiler automagically take a program and compile it to make use of multiple cores.
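To make that concrete, here is a minimal sketch (class and variable names are made up for illustration): the same summation done once on a single thread and once explicitly split into per-core chunks. Nothing in the single-threaded loop will ever touch a second core on its own; the split, the worker threads, and the merge all have to be written by hand.

```java
import java.util.concurrent.atomic.AtomicLong;

public class ExplicitThreadsDemo {
    public static void main(String[] args) throws InterruptedException {
        final int N = 10_000_000;

        // Single-threaded version: the OS runs this on one core,
        // no matter how many cores the machine has.
        long serialSum = 0;
        for (int i = 0; i < N; i++) serialSum += i;

        // Multi-core use has to be coded explicitly: split the range
        // into chunks and hand each chunk to its own thread.
        int threads = Runtime.getRuntime().availableProcessors();
        AtomicLong parallelSum = new AtomicLong();
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            final int start = t * (N / threads);
            final int end = (t == threads - 1) ? N : start + (N / threads);
            workers[t] = new Thread(() -> {
                long local = 0;
                for (int i = start; i < end; i++) local += i;
                parallelSum.addAndGet(local); // merge step, once per thread
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();

        System.out.println(serialSum + " == " + parallelSum.get());
    }
}
```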
 
Good aircooling is enough to cool an OC'd C2D.

I think watercooling will never become a necessity, as it will never appeal to the mainstream (OEM manufacturers etc.), which is Intel's and AMD's target market.
 
Phrenetical said:
As an even more basic example:
Now apply this to game data, let's say pixels. Instead of one core rendering your whole screen in, say, 1 second, you have two cores rendering half the screen each, so the same thing is done in half the time: 0.5 seconds. Now 4 cores, 1/4 screen each = 0.25 seconds (theoretically). Now think about CrossFire and SLI; that is not multicore but multi-GPU, and it's a tiny step from there to multicore gaming cards. But it proves it's more than possible to divide something up and put it back together.

Programming is only limited by the people writing the code; multithreading any app is possible, even things as linear as gaming.
Whoa, hold on there, cowboy. You have some basic ignorance of multithreaded/multicore programming. Your last statement is just plain false. While I understand your enthusiasm for multiple-core processors, you're not helping prove your case at all.

Do you have any experience with parallel/multicore programming?

Might I suggest some Intel documentation:

http://www.intel.com/cd/ids/developer/asmo-na/eng/219575.htm
 
Immortal_Hero said:
As a programmer I disagree also. In essence there won't be much change in the actual program. Take Java, for example: there is the JVM (Java Virtual Machine), and this guy makes the app run. The changes for dual-core/quad-core optimization will occur there and in the OS. The way compiled code is read and run will have to change. I am not saying there won't be language changes and enhancements, but I feel the JVM will be the biggest area of change.

Just-in-time languages like Java, and interpreted languages, can certainly make somewhat better use of multiple cores, even with legacy code. But a program must still be written specifically to take advantage of parallelism before it can make effective use of multiple cores.
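As a rough illustration of that point (the task and names below are invented for the example): java.util.concurrent will hand you a thread pool, but it is still the programmer who has to carve the work into independent tasks and merge the results; the JVM only schedules what it is given.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolDemo {
    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        // The JVM will happily schedule these tasks on different cores,
        // but only because the work was hand-carved into independent pieces.
        List<Future<Long>> results = new ArrayList<>();
        for (int chunk = 0; chunk < cores; chunk++) {
            final int id = chunk;
            Callable<Long> task = () -> cpuBoundWork(id);
            results.add(pool.submit(task));
        }
        long total = 0;
        for (Future<Long> f : results) total += f.get();   // merge step
        pool.shutdown();
        System.out.println("total = " + total);
    }

    // Stand-in for a CPU-bound task that touches no shared state.
    static long cpuBoundWork(int id) {
        long acc = 0;
        for (int i = 0; i < 5_000_000; i++) acc += (i ^ id);
        return acc;
    }
}
```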
 
^^^ The man knows what he speaks.

I wish multithreaded programming was as easy as having MS or Apple rewrite the OS to do it for me. Man I'd have a heck of a lot easier job lol.

Also, with some programs there are just limits to what you can do. Word processing, for example, is not going to gain much in itself from four cores. Neither is your email client or most of the programs people use. Even if they could gain performance, it's just not worth the money necessary to implement it. I doubt even MS would be able to justify it. MHz will still dominate these programs for the most part. Having 4+ cores will be meaningless to those who just surf the web and type reports unless the marketing department at Intel, AMD, and the PC companies earn their pay.

Anyways,

Water Cooling is a great option. Set yourself up with a nice setup built for low to medium noise and you'll be happy.
 
Moto7451 said:
^^^ The man knows what he speaks.

The number of occasions where I'm dead wrong far outweighs the number of occasions where I'm even partially correct. :beer:

Moto7451 said:
I wish multithreaded programming was as easy as having MS or Apple rewrite the OS to do it for me. Man I'd have a heck of a lot easier job lol.

To relate this back to the OP - water-cooling a quad-core CPU is probably much easier than learning to write code which effectively uses multiple cores. :)
 
I haven't programmed in a while, but for the OO example: if the program knows how many cores it's dealing with, it could split the objects up equally (or as equally as possible), send them to separate queues (or threads, I suppose) for each core, and execute them in parallel, no? Word processing and web surfing programs should never go multithreaded because it's not necessary for the little jobs they do. More intensive software for graphics, video, 3D rendering, that kind of stuff, could benefit a whole lot more, but at the expense of program size; Photoshop CS13 is gonna be a 27 gig install. You just wait.
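A minimal sketch of that idea, assuming a made-up GameObject class whose update() touches no other object: ask the runtime how many cores it sees, slice the list, and hand each slice to its own task.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class GameObject {
    void update() { /* per-object work that touches no other object */ }
}

public class PartitionDemo {
    public static void main(String[] args) throws InterruptedException {
        List<GameObject> objects = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) objects.add(new GameObject());

        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        // One task per slice, each owning a disjoint part of the list.
        List<Callable<Void>> tasks = new ArrayList<>();
        int slice = (objects.size() + cores - 1) / cores;
        for (int start = 0; start < objects.size(); start += slice) {
            final List<GameObject> part =
                objects.subList(start, Math.min(start + slice, objects.size()));
            tasks.add(() -> { part.forEach(GameObject::update); return null; });
        }
        pool.invokeAll(tasks);  // blocks until every slice is done
        pool.shutdown();
    }
}
```

The catch, as the next post points out, is that this only works when the objects really are independent of one another.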
 
Phrenetical - I'm not a programmer, but I'm not completely ignorant either. I might be wrong, but what you presented as a simplified picture of how more cores speed things up is flawed for two major reasons.

First. Class person(); // new person 1 - your idea is that every person is entirely independent of the others. That is the major flaw in your logic, since (let's assume these persons are bots in a game) the actions of persons 2, 3 and 4 can't be allowed to collide with person 1. Put as simply as possible: all four persons can't stand in the same spot just because that spot happens to be the most optimal place to play from right now, yep?
So you need to process person 1 first and update his position (we said simple, so we keep it simple). THEN (and not a picosecond sooner) can you process person 2 with the already-known position of person 1, which he can't use, so he goes for the second-best position to shoot at your a$$ ;)

That was the logical reason why your example is entirely wrong.
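To make that dependency concrete, here is a tiny sketch (the Bot class and its one-dimensional "spot" are invented for illustration): each bot picks its position only after seeing where the earlier bots already moved, so the updates form a chain that cannot simply be handed to separate cores.

```java
import java.util.ArrayList;
import java.util.List;

class Bot {
    int position;                      // simplified one-dimensional "spot"
    Bot(int start) { position = start; }

    // Moves to the lowest free spot. Depends on every bot updated before it.
    void update(List<Bot> alreadyUpdated) {
        int spot = 0;
        boolean taken;
        do {
            taken = false;
            for (Bot other : alreadyUpdated) {
                if (other.position == spot) { taken = true; spot++; break; }
            }
        } while (taken);
        position = spot;
    }
}

public class DependentUpdates {
    public static void main(String[] args) {
        List<Bot> bots = new ArrayList<>();
        for (int i = 0; i < 4; i++) bots.add(new Bot(0));

        // Must run sequentially: bot 2 needs bot 1's *new* position, and so on.
        // Running these four updates on four cores would be a race condition.
        List<Bot> done = new ArrayList<>();
        for (Bot b : bots) {
            b.update(done);
            done.add(b);
        }
        for (Bot b : bots) System.out.println("bot at spot " + b.position);
    }
}
```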

The second reason is highly technical. To do ANY kind of work, the processor needs to use the memory where the data lives. Even today's big caches don't help all that much, because if you have ever done low-level assembler coding you know that a lot of data has to be written back to memory, not only to cache, and the position of the soldier/person above is exactly that kind of data. What is the problem with dual/quad core there? Oh, very simple: there is only ONE memory bus. So with a dual or quad core you can't really say you have 2 or 4 processors, because they are all sharing one memory bus (it goes deeper than that, they also share the front-side bus for fetching instructions, but that is deeper than we want to go), and therefore they can't run fully in parallel at all, except on whatever they keep in their caches. (Hint: that is why the latest Intel models come with 8MB of L2.) But as we said already, a lot of information needs to be updated in memory, and increasing the number of cores only adds more reasons for that. And storing information in memory is time-consuming: the average CPU runs at 3,000 MHz while the average memory runs at 250 MHz, roughly 12x slower, and I haven't even gone into details like the RAM not being CL1 but CL2, CL3 or even worse, plus refreshes, bank timings... etc.
Hence adding cores only puts more stress on the memory bus, and it is no coincidence that many things which run on otherwise identical single-core and dual-core machines come out a little slower on the dual-core ones.
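A rough illustration of that contention point (a crude timing sketch, not a model of the front-side bus; thread and iteration counts are arbitrary): threads that all hammer the same memory location serialize on it, while threads working on private data scale freely.

```java
import java.util.concurrent.atomic.AtomicLong;

public class ContentionDemo {
    static final int THREADS = Runtime.getRuntime().availableProcessors();
    static final long ITERS = 20_000_000L;

    public static void main(String[] args) throws InterruptedException {
        AtomicLong shared = new AtomicLong();
        long tShared = run(() -> {
            for (long i = 0; i < ITERS; i++) shared.incrementAndGet(); // one hot location
        });

        long tPrivate = run(() -> {
            long local = 0;                        // lives in the thread's own cache
            for (long i = 0; i < ITERS; i++) local++;
            if (local == -1) System.out.print(""); // keep the JIT from dropping the loop
        });

        System.out.println("shared counter: " + tShared + " ms, private: " + tPrivate + " ms");
    }

    // Starts THREADS copies of the body and times them to completion.
    static long run(Runnable body) throws InterruptedException {
        Thread[] ts = new Thread[THREADS];
        long start = System.currentTimeMillis();
        for (int i = 0; i < THREADS; i++) { ts[i] = new Thread(body); ts[i].start(); }
        for (Thread t : ts) t.join();
        return System.currentTimeMillis() - start;
    }
}
```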


Funny part.
Your picture of the "CPU rendering pixels" is laughable at best. Yes, some demos/intros and other experimental stuff use real-time raytracing to produce 3D scenes, but games today use the graphics card to draw the pixels. The difference (the CPU commanding the GPU to draw) might not sound big at first, but we'll get there. 3D worlds on computers are built from geometric models made of polygons (no NURBS curves yet, eh). A very simplified explanation of a 3D engine goes like this. From the BSP (Binary Space Partition) tree, the player's position and the direction of his view are found. Given this, everything within his view distance is processed to prepare the screen draw (important: by this point everything has to be computed already, so no parallelizing there either, sorry). Then every polygon within view is taken; from its normal the Z-distance and possible overlap are determined, to sort out the polygons that are hidden and stay hidden from the player, so there is no need to bother with them anymore. After this, the geometric data is fed to the GPU, which knows that for each polygon it has to apply a certain texture, plus other textures, to create the desired effect...

The only way to parallelize that stuff is to process geometry on one core and AI + sound + memory management on another. Your picture of a quad core where each core draws (or prepares to draw) 1/4 of the screen is pretty vivid, and it can clearly catch some users, but it is completely false, because each step requires that the step before it is already done (!!!).
And even parallelizing the 3D screen setup across 4 cores would be insane, because many polygons overlap between the quarter-screens being rendered, and that data would have to be shared through main memory (slow), which would lead to a massive slow-down...
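A minimal sketch of that kind of split, with invented FrameData and method names: one thread runs the simulation for the next frame while another prepares the previous frame for the GPU, handing work over through a queue rather than splitting one frame across cores.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class SubsystemSplit {
    static class FrameData {                 // stand-in for geometry, visibility lists, etc.
        final int frameNumber;
        FrameData(int n) { frameNumber = n; }
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<FrameData> ready = new ArrayBlockingQueue<>(2);

        Thread logic = new Thread(() -> {    // core 1: simulation + AI for the next frame
            try {
                for (int f = 0; f < 100; f++) ready.put(runSimulation(f));
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread render = new Thread(() -> {   // core 2: prepare the finished frame for the GPU
            try {
                for (int f = 0; f < 100; f++) prepareForGpu(ready.take());
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        logic.start(); render.start();
        logic.join();  render.join();
    }

    static FrameData runSimulation(int frame) { return new FrameData(frame); }
    static void prepareForGpu(FrameData frame) { /* sort polygons, submit draw calls */ }
}
```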

it's more than possible to divide something up and put it back together

Certainly. But while you spend your time dividing it up and then putting it back together, the single core is already done with it, twice over ;)

need a way to detect if a machine can even do parallel processing, creating heaps of overhead in most current languages and slowing down the app

That is exactly why multicore support is next to none even in professional apps. It is one thing to parallelize a benchmark, and quite another to parallelize an application where the steps depend on each other. Something that obviously hasn't crossed your mind yet.

multicore software programming, they have proven you can run things like Windows on a CPU with 32 cores each running at 1.5 MHz

If you bother to look into computer history, you know that things better than lazy Winblows could be run on a 7.15 MHz CPU (the Amiga 1000 in 1985, and AmigaOS, which can do much more than Winblows can do today).


Immortal_Hero -
Do you really think that the CPU makers are just spending millions of R&D money to develop multiple cores because they can market them?

LOL. Your "argument" is laughable, mate. Listen to yourself. What you are saying is just "because they do it, it must be the only and best way to do it" :D
Because millions of people have killed themselves, it must be a great thing to do and you should do it ;) ASAP! :D
Nah. I have seen millions of bucks invested in far more stupid things, so I can't take this as anything more than a laughable taunt.
Besides, my point was not that dual/quad cores are useless. My point was that it is very hard to parallelize code, and especially games, so I see no point to the multicore stuff other than selling CPUs at high prices.
Sure, servers (massively parallel requests) and encoding benefit greatly from them. And you can even carve out some benchmarks to show a speed-up, so, bingo, let's sell them not only for servers but for the general public too, yay!


aaronjb -
There is very real and often very expensive (development-time-wise) coding that needs to be done to make efficient use of multiple cores.

Carve that into stone and bang people over the head with it 24/7 until they understand how hard it is to parallelize stuff. Meanwhile, correct me without mercy; I know I'm not entirely right, but the post length is already growing way too much for proper explanations... and for some people this is just a waste of time, since if they come with "arguments" like "they are doing it, so it must be good" - come on, that is not worth a reply.


Moto7451 -
Neither is your email client or most of the programs people use. Even if they could gain performance, it's just not worth the money necessary to implement it.

Wait, it is actually worse. If they implement it (and it is very costly and time-consuming, and the gain will be hardly noticeable), it causes a significant slow-down on a single core ;)

4+ cores will be meaningless to those who just surf the web and type reports unless the marketing department at Intel, AMD, and the PC companies earn their pay.

Wait, they have already convinced some people that it will be faster! :) :D So they earn their paychecks pretty well :)


BMac420 -
if the program knows how many cores it's dealing with, it could split the objects up equally

True. But how many objects do you have that:
1) need to execute in parallel, and
2) have absolutely NO relationship to the other objects?
The second point is the very essence of multiprocessing. You can only parallelize processes whose data is not altered in any way by the other processes you are distributing to the separate cores.
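To see why that second condition matters, here is a tiny sketch (the counter and loop counts are arbitrary): two threads bumping one plain, unsynchronized counter lose updates, because the increment is a read-modify-write on shared data.

```java
public class SharedStateRace {
    static int hits = 0;   // shared, unsynchronized on purpose

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 1_000_000; i++) hits++;   // read-modify-write race
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        // Expected 2,000,000; usually prints something smaller.
        System.out.println("hits = " + hits);
    }
}
```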

Word processing and web surfing programs should never go multithreaded because it's not necessary for the little jobs they do

It is not only unnecessary, it is pretty much impossible to do. All I can imagine is the Flash plugin running on one core while the browser runs on another... But is that needed on current CPUs? Nah, most of the time Flash just waits in loops, so...

More intensive software for graphics, video, 3D rendering, that kind of stuff, could benefit a whole lot more, but at the expense of program size; Photoshop CS13 is gonna be a 27 gig install. You just wait.

Clearly there is the problem: the size grows. So ask yourself, considering that size: does memory speed grow with it? No? Then how can things get faster when they only grow in size, while the RAM stays pretty slow, no matter the marketing BS we hear all over.

Yes, I'm very skeptical about the multi-core direction things are taking today. Yes, some things do speed up, but for the home user I see little gain. The big gain is for the servers, and for the companies convincing us that more cores = better... Eh, what a scam.

Anyway, one user mentioned (on an O/C forum!) that there is a "MHz wall". No, there is not. We all know these Conroes are not only damn fast, they are also damn well overclockable, so what wall?
Every wall in history was overcome by better technology, and we are clearly not at the limits yet. Nowhere near them. If someone can clock a Conroe at 6 GHz, then until a 6 GHz chip sells for 200 bucks I see no need for more cores - except for encoding and servers.

I think the PR machine's primary concern is to convince users that more cores = better, so that those users bear most of the cost of developing multicore server CPUs. That way the servers get their CPUs cheaper, because enough people who do not need them and cannot utilize them buy multicore CPUs anyway, even though the prices are way higher and unjustified, given the small slow-down in some applications and the small speed-up in others. Or tell me, how much will my FPS in BF2 increase on a dual core versus a single core at the same clock???
 
Bumping this with a quote from a recent (CES 07) interview with John Carmack ( http://www.gameinformer.com/News/Story/200701/N07.0109.1737.15034.htm?Page=3 ):

Microsoft has made some pretty nice tools that show you what you can make on the Xbox 360. I get a nice multi-frame graph, and I can label everything across six threads and three cores. They are nice tools for doing all of that, but the fundamental problem is that it’s still hard to do. If you want to utilize all of that unused performance, it’s going to become more of a risk to you and bring pain and suffering to the programming side. It already tends to be a long pole in the tent for getting a game out of the door. It’s no help to developers to be adding all of this extra stuff where we can spend more effort on this. We’re going to be incentivized, obviously, to take advantage of the system, because everybody’s going to be doing that. It’s not like anyone’s going to say that it’s impossible to do. People tend to look at it from the up side. It gives you this many more flops and it gives you this much more power to do that. But you have to recognize that there is another edge to that sword, and you will suffer in some ways for dealing with this. I don’t have any expectation that anytime soon, a massive breakthrough will occur that will make parallel programming much easier. It’s been an active research project for many years. Better tools will help and somewhat better programming methodologies will help. One of the big problems with modern game development with C/C++ languages is that your junior programmer who’s supposed to be over there working on how the pistol works can’t have one tiny little race condition that interacts with the background thread doing something. I do sweat about the fragility of what we do with the large-scale software stuff with multiple programmers developing on things, and adding multi-core development makes it much scarier and much worse in that regard.
 