When The Speed Goes, Where Goes The Software?

Just got this interesting email:

While reading about the cMac and the “end of speed era,” something struck me.

The initial betas of Longhorn seemed rather sluggish and stuffed with useless fancy stuff. Could it be that MS expected the CPUs to catch up in 2004-2005 due to Moore’s law so it would present a fluid experience?

What will happen now that CPUs aren’t improving at the same speed? Will we see a sluggish Longhorn that uses all CPU resources to do pretty 3D animations on the windows, or will MS in fact do something that runs on a 1-2GHz system?

If you have an opinion on the matter, I would like you to share it with me 🙂

That’s a very good question, but let’s make it even better by expanding the horizon: What happens to all software when CPU speed improvements slow down to a crawl?

Postponing The Answer

The quick, easy answer to the question is burden sharing, or to put it more technically, multithreading over multiple processors.

So, for the example given in the email, one CPU could do the “pretty 3D animations” while the other could handle the nuts-and-bolts stuff.
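
To make that concrete, here’s a minimal sketch in Go (my choice of language; the workloads are stand-ins, not anything resembling Longhorn’s actual code): two workers run at once, and the runtime is free to put them on separate CPUs.

```go
// Minimal sketch (hypothetical workloads): one goroutine plays the
// "pretty 3D animations" role while another does the nuts-and-bolts
// work, and the Go runtime spreads them across available CPUs.
package main

import (
	"fmt"
	"sync"
	"time"
)

func animate(wg *sync.WaitGroup) {
	defer wg.Done()
	for frame := 0; frame < 3; frame++ {
		time.Sleep(50 * time.Millisecond) // stand-in for rendering a frame
		fmt.Println("rendered frame", frame)
	}
}

func crunch(wg *sync.WaitGroup) {
	defer wg.Done()
	sum := 0
	for i := 0; i < 1_000_000; i++ { // stand-in for the real work
		sum += i
	}
	fmt.Println("nuts-and-bolts result:", sum)
}

func main() {
	var wg sync.WaitGroup
	wg.Add(2)
	go animate(&wg) // CPU 1, roughly speaking
	go crunch(&wg)  // CPU 2, roughly speaking
	wg.Wait()
}
```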

However, as we go from the Era of Speed to what looks like the Era of Noah’s Ark (two CPUs, two GPUs), there’s one factor that can easily be overlooked. When you have multiple parties working on something, you can’t let everybody do their own thing.

You have to coordinate them, and coordination isn’t free.

Let’s say you’re given sole responsibility for a project. Having one head handling everything is efficient (provided the one head is good, of course). All aspects of the project are under the control of one mind that knows everything that is going on in that project.

However, most of the time, while one head may be efficient, it may not be effective. It does you no good to have someone able to write a hundred-page report in a week when you need it in two days. More often, tasks are too big and complicated for one head to handle. You’d probably not want just one person maintaining the Windows operating system; there’s just too much for one person to know to do it well.

So the task gets split among two or more parties. When that occurs, effectiveness goes up (more gets done quicker), but efficiency goes down because, since we are not telepaths, no one knows everything that is going on as it happens.

That requires coordination, and you’re not writing or coding when you’re having a meeting. When two or more heads are working on one thing, the chances of A doing one thing, and B doing something else which will prevent A’s thing from working right, if at all, increase exponentially. That requires more meetings, and more rework to redo the areas in conflict.
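
For the CPU version of that conflict, here’s a toy Go sketch with made-up work: two workers update the same counter with no coordination at all, and some of what each one does gets silently destroyed by the other.

```go
// Toy sketch of uncoordinated workers: A and B both bump the same
// counter with no lock, and updates silently collide. Running it
// with `go run -race` makes the race detector complain out loud.
package main

import (
	"fmt"
	"sync"
)

func main() {
	counter := 0
	var wg sync.WaitGroup
	for worker := 0; worker < 2; worker++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < 100_000; i++ {
				counter++ // unsynchronized read-modify-write
			}
		}()
	}
	wg.Wait()
	// Should be 200000, but almost never is: work got done, then undone.
	fmt.Println("counter:", counter)
}
```

Wrapping the counter in a lock fixes the answer, but then the two heads spend part of their time waiting for each other, which is the meeting, in code form.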

Indeed, much unproductive effort and time can be spent just deciding a simple task like who does what.

True, CPUs aren’t egotistical like people, but that’s simply because they do whatever they’re told. Unlike people, though, if the boss doesn’t say so, or didn’t consider all the possibilities, they do no more than what they’ve been told, and if the instructions aren’t enough, down they go.

This makes life considerably more complicated for the people giving the instructions, and takes up much more of their time and effort than if only one head were producing.

This doesn’t mean splitting a task isn’t usually worth doing. It does mean that, even at best, two heads won’t give you twice the work.

The nature of the task also has a big effect on how much of an extra bang you get from having extra heads on the job. If the “task” is really a series of mostly independent subtasks that require little interaction with each other, splitting up the burden is likely to be more successful than trying to split up one integrated task.

For instance, let’s take that hundred-page report. It could well be very effective and pretty efficient to have one person write the first half, and the other write the second half of the report. It would not be effective or efficient at all if one person were in charge of providing the consonants, and the other the vowels.

Some tasks are better left in one hand because they are simple. If the “task” were adding ten numbers together, you could have one person add five numbers and the other person add the other five, then get them together to add the two results, but would it be worth the bother?
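
Here’s that exact arithmetic committee as a Go sketch (the numbers and the split come straight from the example above): the answer comes out the same, but look how much machinery the two-worker version needs compared to the one-head loop.

```go
// The ten-numbers example: splitting the sum across two goroutines
// works, but the coordination machinery dwarfs the actual work.
package main

import "fmt"

func sum(nums []int, result chan<- int) {
	total := 0
	for _, n := range nums {
		total += n
	}
	result <- total
}

func main() {
	nums := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}

	// The committee version: two workers, a channel, and a final merge.
	result := make(chan int)
	go sum(nums[:5], result)
	go sum(nums[5:], result)
	parallel := <-result + <-result

	// The one-head version: a single loop, no coordination at all.
	serial := 0
	for _, n := range nums {
		serial += n
	}

	fmt.Println(parallel, serial) // same answer; one took far more bother
}
```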

None of these factors make using multiple CPUs a bad idea, but they’ll all serve as little anchors to reduce the increase in effectiveness you can expect from using two or four or whatever heads rather than one.

Some might say, “Well, what about servers?” Obviously, multi-CPU servers (and scientific computing) have been around a long time, and they must be OK since they’re still around.

If you look closely at what these computers do, though, they’re really either handling multiple instances of the same task, or handling huge tasks simply too big for one CPU which can at least be split decently into a lot of single-CPU-sized chunks.
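
In code, that server pattern looks something like this Go sketch (the “requests” are just placeholder strings): each chunk is independent, so the workers need almost no coordination beyond the shared queue itself.

```go
// Sketch of why servers parallelize so well: each request is an
// independent chunk, so N workers can chew through a queue of them
// with almost no coordination beyond the queue itself.
package main

import (
	"fmt"
	"sync"
)

func handle(id int, requests <-chan string, wg *sync.WaitGroup) {
	defer wg.Done()
	for req := range requests {
		// Stand-in for serving one request; no other worker cares.
		fmt.Printf("worker %d served %s\n", id, req)
	}
}

func main() {
	requests := make(chan string, 8)
	var wg sync.WaitGroup
	for id := 1; id <= 2; id++ { // two CPUs, two workers
		wg.Add(1)
		go handle(id, requests, &wg)
	}
	for i := 1; i <= 8; i++ {
		requests <- fmt.Sprintf("request-%d", i)
	}
	close(requests)
	wg.Wait()
}
```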

If you look at what a PC normally does, though, the need to have multiple heads at work is nowhere near as pressing or necessary as it is with servers or scientific computers to begin with. Most average PC buyers are going to need some convincing to believe that dual cores aren’t a solution in search of a problem, at least as far as they’re concerned.

Persuading people that two heads are better than one is probably doable, but it will be a much harder sell if the only way to advance in the years ahead is to go from two to four to eight to whatever heads to handle God knows what kind of bloated software that would actually benefit from computing by committee.

At some point, and probably sooner than later, people will say, “Enough.”

The Other Path: Leaner and Meaner

Have you ever gone to a restaurant where the waiter, without asking, served you everything on the menu, then handed you a check charging you for one of each?

No? That would be insane, right? Yet software developers essentially do that to you, giving you, and charging you for, everything they have even when you don’t want it.

Even worse, not only do they do that, they literally jam the food down your throat until you have to hire a hitman to shoot the waiter.

If you think the last part is a wild exaggeration, isn’t that exactly what a program like XP Lite is, a hitman hired to stop the Windows waiter from shoving bloat down your throat?

Well, yes, many programs do offer some flexibility as to what features go onto your machine. I can say “No” to having Serbo-Croatian display properly on mine.

However, people would like a little more flexibility than that.

At least in geekdom, the Firefox browser is spreading like, well, fire, and at least part of the reason is that it is leaner, meaner, and faster than IE, and offers friendlier flexibility.

If the machines won’t get faster, the software can, and the easiest way to do that is to simply allow people to remove what they don’t use.

There are other ways to increase speed that don’t require a faster CPU, too. Right now, I’m using Firefox contained completely in a RAMdrive. It’s not a completely braindead activity, but it’s as easy as peeing compared to even attempting the task with IE.
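
If you want to see the difference for yourself, something like this Go sketch would do it; the paths are placeholders for wherever your disk copy and RAMdisk copy happen to live, and a warm OS file cache will narrow the gap, so treat it as illustrative rather than a benchmark.

```go
// Rough timing sketch (the paths are placeholders, not a real setup):
// read the same file once from disk and once from a RAM-backed mount
// and compare the two times.
package main

import (
	"fmt"
	"os"
	"time"
)

func timeRead(path string) (time.Duration, error) {
	start := time.Now()
	if _, err := os.ReadFile(path); err != nil {
		return 0, err
	}
	return time.Since(start), nil
}

func main() {
	for _, path := range []string{
		"/home/me/firefox/libxul.so",     // hypothetical on-disk copy
		"/mnt/ramdisk/firefox/libxul.so", // hypothetical RAMdisk copy
	} {
		d, err := timeRead(path)
		if err != nil {
			fmt.Println(path, "error:", err)
			continue
		}
		fmt.Println(path, "read in", d)
	}
}
```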

I’ll talk more about what I’m finding in the future, but the argument against big RAMdrives is one of current cost-effectiveness, not performance improvement.

It’s not miraculous, but it does give a sizable speed kick. Eventually, some form of this is going to be the future, and since memory will cost more than hard drive space for the foreseeable future, bloat will be a four-letter word.

More Versus Better

Of course, it’s hard to persuade people to pay out more to get less. Especially when people don’t realize “more” isn’t free.

When I visit the machine of a Joe Sixpack, I take a look at what is starting up along with the rest of the machine. More often than not, I see a tribe of icons gradually clogging the lower right-hand corner of the screen, a sign the user has no idea he’s dealing with a machine with finite resources.

As I head over to msconfig, behead the spyware, and start looking around for additional victims, I ask them if they use these icons, and most of the time, they don’t even know they’re there. They get completely mystified when I tell them that having all these things start up with the machine slows it down. How can more be bad?

This illustrates the problem with “more.” It’s not that “more” is bad. It’s the indiscriminate addition of “more” to your plate, regardless of whether you want or need it, that’s bad.

Perhaps what the software manufacturers need to do is start giving people more choice as to what should be installed or not, touting it not as “giving you less” but instead “giving you more speed and freedom.”

One advantage any future Xbox/PlayStation PCs will have is that they’ll have a fresh beginning when it comes to software bloat, and a reason to keep the beast in check.

When ever-increasing CPU speed is no longer a given, and alternative means like RAMdisks move towards an expensive practicality, providing a lean and mean option will stop being an option and start becoming a necessity.

It may be a very difficult lesson for the Sixpacks to get, but if Firefox is any indication, the power users already have gotten it.

Ed
