Dual Musings

The world is abuzz with talk of dual-core and multi-core processors and what they could mean.

Salespeople see it as the next big thing to pitch at people to get them to buy new computers. Intel and AMD executives see it as a way to avoid completely redesigning their chips from scratch. And enthusiasts and overclockers are looking at it for a new performance boost.

But like many people, I find I'm left with a lot of questions. It has been shown that dual processors in a system do not equate to a 200% boost in performance, and there is talk that dual cores will be no different. Plus, as Ed has mentioned in the past, some tasks can't be divided up. But then again, maybe they won't have to be.

When we hear dual-core, we assume the cores are matched: two Barton cores, two Hammer cores, two Prescott cores, or two Dothan cores.

That assumption raises a few questions of its own.

If Intel puts two 3.0 GHz Prescott cores on a single chip, can they call it a 6 GHz processor without too many people getting up in arms? Will they finally achieve a 10 GHz chip by building a four-core processor from four 2.5 GHz cores? AMD could do the same and release a 5 GHz Hammer by putting two 2.5 GHz cores on a single chip.

It is the truth, in a way: the total clock speed of the chip is 5, 6, or 10 GHz if you add up the cores. And since Intel already uses its quad-pumped bus to advertise an 800 MHz front-side bus that really only runs at 200 MHz, I can see them doing this.

(Ed. note: So do I.)
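The marketing arithmetic here is trivial, which is exactly why it's tempting. A quick sketch of it (my own toy illustration; summed clocks say nothing about real performance):

```python
# Toy illustration of "add up the clocks" marketing math.
# Two 3.0 GHz cores do NOT perform like one 6.0 GHz core;
# this only shows how the advertised numbers could be derived.

def marketed_ghz(core_clocks):
    """Sum per-core clock speeds the way a salesperson might."""
    return sum(core_clocks)

def effective_bus_mhz(base_mhz, transfers_per_cycle):
    """Quad-pumped bus: 4 transfers per 200 MHz cycle = '800 MHz'."""
    return base_mhz * transfers_per_cycle

print(marketed_ghz([3.0, 3.0]))     # dual Prescott: 6.0
print(marketed_ghz([2.5] * 4))      # four cores: 10.0
print(effective_bus_mhz(200, 4))    # advertised FSB: 800
```

The same one-line multiplication is how a 200 MHz clock becomes an "800 MHz" bus on the spec sheet.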

The next matter to consider is overclocking. Does changing a setting in the BIOS affect only one of the CPU cores, or will it affect both? One would think you'd have to change both cores the same way to keep them operating synchronously.

But if a dual-core chip behaves like a dual-CPU system, chances are one core will overclock better than the other, just as one processor in a dual-processor system can perform better than its partner.

And if synchronous operation is required, you'll be limited to the slower core's maximum. One core might hit 3.2 GHz while the other only hits 3.0 GHz, so you're stuck at 3.0 GHz, or rather, a 6 GHz chip instead of a 6.4 GHz chip.

Maybe we will be able to change the settings on the cores separately and have a 3.0 GHz core and a 3.2 GHz core for a 6.2 GHz chip. Then perhaps one could assign the faster core the more speed-sensitive tasks and have the slower core handle the background tasks.

(Ed. note: This would probably create timing nightmares for tasks being shared between two processors. However, it’s quite possible to designate tasks to one processor or the other.)
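Designating tasks to one processor or the other can already be done in software today. A minimal sketch (Linux-specific; uses Python's `os.sched_setaffinity`, and assumes the machine has a CPU numbered 0):

```python
import os

# Pin the current process to CPU 0 only, e.g. to keep background
# work on the "slow, cool" core. Linux-specific system call wrapper.
original_mask = os.sched_getaffinity(0)   # save the current CPU mask
os.sched_setaffinity(0, {0})              # run on CPU 0 only
print(os.sched_getaffinity(0))            # now reports {0}

# Restore the saved mask so the process can use all CPUs again.
os.sched_setaffinity(0, original_mask)
```

A demanding task could be launched with its affinity set to the fast core in the same way, which is roughly what the editor's note means by designating tasks to one processor or the other.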

Other Possibilities?

Of course, there is still another option I haven't heard mentioned yet. No one has said both cores on the chip need to be the same, and no one has said both cores have to be running at the same time.

Perhaps we will soon see a hybrid dual-core chip with a Barton and a Hammer core sharing the same die. At boot, the 32-bit 1.5 GHz Barton core with 256 KB of cache could handle startup and light tasks such as word processing, picture viewing, listening to music, and web browsing, all while putting out very little heat.

Let's say a 1.5 GHz Barton running at 1.5 volts on a 90 nm process puts out 40-50 watts of heat. Compared to current processors, that is ice cold. A high-quality heatsink-and-fan combo could possibly stop its fan entirely and dissipate the heat through the heatsink alone.

As soon as more processing power was required, the Hammer core could be activated, instantly mirror the Barton's state, shut the Barton down, and take over. Suddenly the Hammer core is awake to play games or handle strenuous tasks like converting MPEG movies to DivX.

(Ed. note: This is unlikely to happen, not because it couldn't be done, but because there's no benefit to having two different types of core in such an arrangement that you couldn't get from having two of a kind and running one of them more slowly.)

Having the two cores awake at separate times will require some work, no doubt, but as computers have shown the world, nothing is impossible. If we can squeeze 222 million transistors into something smaller than a bottle cap, turning a core off and on should hardly be a challenge.

With the two cores active at different times, a different breed of overclocking becomes possible. There would be those who would leave the Barton core alone, or even underclock it to make it super quiet and cool, while pushing the Hammer core to its limits for when they need the performance boost. Then there would be those who say heat be damned and overclock both cores to the max.

Intel could do the same thing: a 64-bit-enabled Prescott waiting to be switched on, with a low-power Dothan doing the ordinary computing chores. It would keep the heat down while keeping the Prescott's power there for users who need it. As it is, most users do not use 100% of their CPU power except when gaming or converting video, and even then it isn't done all day long. Well, except by some gamers, but they are the exceptions.

AMD has already realized this, giving us Cool'n'Quiet, which automatically reduces a chip's voltage and clock speed to cut the heat and, with it, the noise of temperature-controlled fans. Maybe the next step is to switch from the fast, hot core to the slower, cooler one when the extra speed is no longer needed.

(Ed. note: Some CPU architects have already been speaking about doing just that.)
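The decision logic behind that kind of switching is simple in principle. A toy model (my own sketch; the load thresholds and speed/voltage steps below are invented for illustration, not AMD's actual Cool'n'Quiet tables):

```python
# Toy model of demand-based speed stepping in the spirit of
# Cool'n'Quiet. The (MHz, volts) steps and load thresholds are
# invented numbers, used only to illustrate the idea.
STEPS = [
    (1000, 1.1),   # idle and light background work
    (1800, 1.3),   # moderate load
    (2500, 1.5),   # full speed for games and encoding
]

def pick_step(load):
    """Choose a (MHz, volts) step for a CPU load between 0.0 and 1.0."""
    if load < 0.3:
        return STEPS[0]
    if load < 0.8:
        return STEPS[1]
    return STEPS[2]

print(pick_step(0.05))   # web browsing: lowest step
print(pick_step(0.95))   # video encode: highest step
```

Swapping between two physical cores instead of two voltage/frequency steps would just mean the bottom row of the table names the cool core and the top row names the hot one.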

The naming of these chips would be confusing, I'm sure. A 1.5 GHz Barton paired with a 2.5 GHz Hammer could in theory be called a 4 GHz chip, but AMD might later want to put out a 1 GHz Barton paired with a 3 GHz Hammer.

In the end, Intel and AMD may put the chips' ratios on the box: "4 GHz, 1.5:2.5 ratio chip!" But then, AMD and Intel seem rather fond of model numbers lately, so maybe we'll see a 4 GHz 330:650 chip to confuse us all to no end.

This would also make benchmarking a lot more interesting. With dual cores and four different processors, some tests would need to be run twice. The Dothan core may outperform the Barton core at certain Windows tasks, while the Hammer core outpaces the Prescott core at gaming. Deciding which chip to buy will become an even more harrowing experience.

I guess the only answer to these questions is time. And the ones in charge of that are AMD and Intel.

Christopher Aussant

I.M.O.G.:

I also thought this article raised some interesting questions from some different perspectives.

Thread rated a 5.


ogboot:

Great article; can't wait for a smidgen of this technology to become available.


Sjaak:

I like the idea of the hybrid chips. Too bad the cores would come from different manufacturers, but I believe a triple-core consisting of one 1 GHz VIA Nehemiah and two 2.5 GHz Dothan cores would kick *** with the same idea as in the article: the Nehemiah for Windows stuff, the Dothans for performance stuff. Extremely low heat output, and as we have seen, the Dothans offer an interesting overclocking perspective and are high-performance.

For AMDroids, replace the Dothans with A64s ;)


cV:

Bah humbug - to hell with the heat; mewants quad core now. :p

Actually, the idea of multipurpose dedicated CPUs in one package is quite interesting. The most obvious suggestion would be a Prescott core and an A64 working in tandem, the two canceling out each other's negatives (computational efficiency and high-bandwidth performance, respectively).

Also, a dedicated sub-CPU that could handle window drawing, text rasterizing, and antialiasing would really be sweet. It would require a new API, but the chip itself would consume next to no power and accelerate window drawing like never before.





Umm, cV, that's what a GPU is (used) for :)

A few comments on the article itself:

* Unmatched-speed CPUs: This can already be done on a dual-mobile-Athlon system, with the small problem that Windows stuffs up if you try it, and Linux stuffs up unless you disable RDTSC timekeeping. This is obviously just a simple software issue.

* Putting a Barton and an A64 on the same CPU doesn't help much. The only way for the two to work together (access the same RAM) would be to re-engineer the Barton to use HyperTransport for memory access, at which point you've more or less got a dual-core Hammer. Having a dual-core chip where you can shut down one core (which can *almost* be done with dual Athlon systems via halt disconnect) or step down its frequency/voltage (which can already be done on dual Athlon systems) is a much more practical/efficient way to go. On the Intel side, though, a Dothan/Prescott combination would be good (as long as Intel didn't throw out the MP capability when moving on from the P3 design), since the Dothan probably won't be as good at encoding as the Prescott.

I'd say that CPUs are going to drift back towards being general-purpose chips (i.e., more P3, less P4), and GPUs will become much more general DSPs (which the most recent ones already are, to a limited extent). In terms of pure FPU grunt, a P4 or Athlon gets absolutely crushed by the latest GPUs. We're talking an order of magnitude here (an X800 XT does ~200 GFLOPS peak, IIRC, compared to a 3.6 GHz P4's paltry 15 GFLOPS), but unfortunately that power just can't be harnessed in today's cards (for both hardware and political reasons).

