What To Know Now About Dual Core

It Starts In 2006

Unless you belong to the server or the Have Too Much Money niches in the computer market, dual-core processors are a 2006 technology.

For the desktop, the story in 2005, good, bad or indifferent, is going to be AMD and Hammer. Any advances that are going to occur are going to occur there, and it’s not going to be a whole lot over what’s possible today.

For many current socket A owners, shifting over to Hammer is going to be a tough call. If you expect double the speed of a socket A system, any reasonably-priced Hammer solution in 2005 is likely to fall somewhat short of that.

However, a good answer to that problem is not going to be “just wait for dual-core to show up.” Dual-core technology is no instant panacea. It will take time (and probably a long time) for it to become better for the average person, and this article will explain why.

Xbit Labs has an overview of dual-core technology, and it makes clear that neither AMD nor Intel has any intention of getting serious about dual-cores for the desktop until 2006 (and in AMD’s case, probably deep into 2006).

There are a number of good reasons for that. First, the minor ones:

Initial chips will be big and expensive: The first dual-core chips will essentially be two current 90nm processors cobbled together. That means each will be about the size of two 90nm processor cores, or about 200 sq.mm.

That in and of itself won’t make them particularly expensive to make in dollar terms (maybe these will cost $50-60 to make rather than today’s $20-30), but they are expensive in capacity terms. If dual-cores chew up at least as many resources as two single-core processors, it would be foolish for AMD/Intel to charge any less for them than they would for two single-core processors. So they won’t (and in all likelihood will charge considerably more than that).

It will take conversion to 65nm to get die sizes down to somewhat reasonable levels. For Intel, 65nm conversion ought to start at the end of 2005 and ramp up seriously in the first half of 2006. For AMD, the dates are probably six to nine months behind that.

Initial chips will be slow: Both AMD and Intel have said that the initial dual-cores will run at least three speed grades below the fastest current processors. For Intel, that will be 3.2GHz, max. For AMD, max speed will probably be 2.2GHz.

Both AMD and Intel have said that the speed of the initial dual-cores will be held down to meet the thermal envelopes of current platforms. For AMD, that means around 95W total, for Intel, 130W. So those speed limits are very likely to stick.

(Is this an overclocking opportunity? Well, don’t forget these chips will be expensive, and if that doesn’t bother you, for practical purposes, cranking up two processors occupying one die to any real degree is likely to take water or better.)

If your application or game is single-minded (and if it’s a typical desktop app, odds are it is), for all practical purposes, you don’t have an OMG fast dual-core system. You have a relatively slow single-core system. Just look at any benchmarking review of a dual-processor system to see what you get.

Which leads us to the big factor on dual-cores, and the new motto/acronym for the next few years.

ITSS: It’s The Software, Stupid

To an extent not seen before in the PC industry, dual-core means hardware will be held hostage to software development.

There are two ways to approach the increased potential of dual-core technology: increased speed and/or increased functionality. The big problem with dual-core is that the initial advocates will want the first, but will likely get the second.

Some tasks can easily be split up into two pieces to be handled by two processors. Some can’t. The situation for most programs will probably be “Yes, they can be split, but is it worth the bother?”
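
As a very rough illustration of what “splitting the work” means in practice, here is a minimal sketch in C using POSIX threads. The array-summing task, the sizes and the two-thread split are purely illustrative assumptions on our part, not anything AMD, Intel or any particular developer has announced:

    /* Minimal work-splitting sketch: sum an array on two threads. */
    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000
    static double data[N];

    struct chunk { int start, end; double sum; };

    static void *partial_sum(void *arg)
    {
        struct chunk *c = arg;
        c->sum = 0.0;
        for (int i = c->start; i < c->end; i++)
            c->sum += data[i];
        return NULL;
    }

    int main(void)
    {
        for (int i = 0; i < N; i++)
            data[i] = 1.0;

        /* One chunk per core: the "split it in two" case described above. */
        struct chunk halves[2] = { { 0, N / 2, 0.0 }, { N / 2, N, 0.0 } };
        pthread_t t[2];

        for (int i = 0; i < 2; i++)
            pthread_create(&t[i], NULL, partial_sum, &halves[i]);
        for (int i = 0; i < 2; i++)
            pthread_join(t[i], NULL);

        printf("total = %f\n", halves[0].sum + halves[1].sum);
        return 0;
    }

Even in a toy like this, the developer has to carve up the data, manage the threads and merge the results by hand; for a real application with shared state, that “is it worth the bother?” question gets a lot harder.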

If the task is already relatively quick to complete with single-cores, odds are developers won’t risk developing hernias to make their apps dual-core capable. For instance, it’s hard to see the makers of Winzip sweating bullets recoding their app to use two processing cores.

Let’s take gaming as an example. If anything, gaming is inherently more dual-core friendly than most other computer activities: it’s a computationally intense, lengthy process where the task can reasonably be split into two parts.

However, gaming programmers who would have to do a lot of work making sure the two CPUs play well together have to ask themselves, “How much will we gain pumping the results of two CPUs into one video card?” If the answer is “Not much,” the odds of those programmers doing all that work decrease.

You are likely saying, “But what about SLI?” Why yes, SLI does provide an answer to that particular question. The odds are that the performance improvement from using two CPUs to feed two video cards will be much greater than that from using two CPUs to feed one video card (or using one CPU to feed two video cards, which is why we wrote this a month ago). The time for SLI will be in the dual-core era.

However, that just leads the game developers to ask a different question: “How many people buying our game are going to use SLI?” If the answer is “Not many,” the odds of those programmers doing all that work decrease, too.

This isn’t to say you won’t see any games going seriously multi-core. It means there’s a question as to how many games will go that way, and how soon.

This is a debate every software developer out there is going to have: Is it worth it?

The Easy Way Out

Those who write most mundane desktop apps will probably conclude, “There’s no point in ripping our app apart, but it would be nice if our app got more room to run than it has now. We could do more if it did.” This would be particularly true for background processes and maintenance work.

They’ll then point the finger to Redmond, and say, “Let Bill do it.”

At the very least, any OS is going to have to have an intelligent traffic cop built into it to tell these single-minded apps which CPU to go to. As current dual-processor users will tell you, it’s a lot smoother having two CPUs handle two major tasks, or even letting one CPU do big work, and letting the other do the minor stuff.
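
The basic plumbing for that kind of traffic direction already exists; what’s missing is anything doing it intelligently and automatically. As a rough sketch only (the choice of affinity mask and the process being pinned are purely illustrative), this is the sort of Win32 call a “traffic cop” could use to keep a single-minded program on one CPU and leave the other core free:

    /* Illustrative only: confine the current process to CPU 0 via Win32. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Bit 0 set = CPU 0 only; a background/maintenance task could be
           handed mask 0x2 (CPU 1) instead. */
        if (!SetProcessAffinityMask(GetCurrentProcess(), 0x1)) {
            printf("SetProcessAffinityMask failed: %lu\n",
                   (unsigned long)GetLastError());
            return 1;
        }

        printf("This process will now run only on CPU 0.\n");
        return 0;
    }

Today, doing this is strictly a do-it-yourself affair (Task Manager’s “Set Affinity” or calls like the one above); the point of a Longhorn-era traffic cop would be to make those decisions for Joe automatically.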

Look beyond the immediate, though, and two CPUs offer the opportunity for computers to be much more self-maintaining than they are today. To oversimplify, for the average Joe Sixpack system, you’re likely to eventually see a “working” CPU and a “maintenance” CPU, with the latter handling all the tasks Joe never seems to get around to doing. (To probably more than a slight degree, this is what Intel’s Vanderpool virtual-machine technology will end up doing, too.)

Give the maintenance programs free run of a CPU rather than a few stolen cycles here and there, and you can keep the average Joe up and running problem-free a lot longer than is likely today. Imagine an anti-spyware program thoroughly checking out everything coming in and telling Joe, “You just downloaded spyware; do you really want to do this?” Yes, that can be done today with some effort on Joe’s part, but you can’t count on Joe. Multiply that by a few dozen little maintenance apps, and you’ll get an idea of what can be done.

Of course, Microsoft is going to have to start playing traffic cop, and the odds are they won’t even begin the task until we see Longhorn in 2006-2007 (bet on early 2007). There may be a patch to XP for rudimentary traffic direction (especially if Longhorn gets delayed some more), but that’s probably about it.

(A 2007 date to enable CPUs Intel will be pumping out a year before that will hardly sit well in Santa Clara. On the other hand, AMD, which basically had to wait for a Windows for x86-64 until Intel saw the light and caught up, will obviously see this as payback time, and will do all it can to let MS “be fair” and return the favor while it finishes up that 65nm fab. This ought to lead to some rather amusing statements from both parties starting about six months from now, a complete role reversal from what happened with x86-64.)

Even if this all works out wonderfully and quickly, all this will do is give you a better, smoother-running computer, not a faster one. This is likely to become the general direction of the computing industry, though, because Joe doesn’t especially want a faster computer, and probably won’t replace anything fairly current for additional speed until it breaks.

What Joe would want is a much more hassle-free computer than he has now, one that just works without a lot of help on his part. Promise and deliver on that, and he’ll buy a new one a lot sooner than he would for a box that’s merely quicker.

There’s certainly nothing wrong with that, but that’s not going to make speed demons any happier.

What’s A Poor Speed Demon To Do?

The first thing a speed demon needs to do is realize that the good old days are over, and that lots of free extra speed every year is no longer going to be the path the industry will tread.

That’s a hard, bitter lesson for many to learn, and I don’t think most enthusiasts accept it even now.

Unfortunately, if you’re looking for a doubling of effective speed from a relatively current system (a high-end Northwood/Prescott or Hammer system), an optimistic forecast for that on the desktop would be 2007/2008. It’s going to take that long for the software infrastructure to get into place.

Socket A owners are a bit further behind the curve, and in all likelihood, whatever AMD comes up with in the first half of next year is likely to be the best they’re going to get for quite some time to come. It may not be quite the speed boost you want, but it’s not like waiting another six months is going to help much.

We think socket A users will be better off moving to Hammer sometime next year, then sitting back and watching dual-core technology mature, which will take a while under the best of circumstances, and maybe longer than that because of . . . .

The Unasked and Unanswered Question

There is one big unasked (and unanswered) question in the midst of all this talk about dual-cores: what happens to single-core processors when the shift comes?

Intel is currently assuming that dual-cores will become the standard, and quickly. They think 70% of desktop CPUs will be dual-core in 2006. AMD hasn’t addressed this yet, but presumably thinks the same thing.

On the one hand, if one or both companies continue to make single-core processors, it’s by no means certain that the average person will pay more for dual-cores, especially if they don’t find any particular advantage to doing so (which may well be the case initially).

On the other hand, for the vast majority of the world’s population that doesn’t have any computer yet, cost is a paramount factor, and selling dual-cores only means a much smaller market here. If your idea of a future computer is to put a fully-functioning PC into a phone, dual-cores are hardly a high priority.

It’s hard to see how the world can quickly flip over to a dual-core-only world for these reasons. There will be a lot of people who want to pay less, or not pay at all, and it’s hard to imagine all the CPU companies forfeiting that business.

Yes, the CPU companies could do things like charge about the same for duals as for singles, but that boils down to the same thing: they’re hardly going to want to make less money with duals than with singles.

I don’t think dual-cores are going to be as easy a sell as Intel thinks. Not that some won’t want and need them, and that “some” will probably include most reading this, but what about the rest of the world?

I’m not predicting failure, but something more like this: dual-cores may inadvertently cause a split between complex computers for those who need them, and simple ones for those who don’t.

Even if that doesn’t happen, though, odds are dual-cores will take a few years to become the hardware AND software standard, much as it took years for 32-bit to supplant 16-bit as the standard.

Ed
