
Intel may have canned Tejas project?

dustybyrd said:

so does this mean hyperthreading has just taken a step up to true SMP?

will using both processors on a single application require multithreaded programs?

is the cheap 760mpx dual AMD board going to benefit from the likely more multithreaded applications that will follow?


This was always their plan: HT as a stepping stone to full multi-core CPUs.

As for whether applications will need to be multithreaded? Definitely.

A change is coming in software design: parallel development will be required to ensure continued long-term improvements in performance.

Some problems like 3D graphics happily lend themselves to extreme parallelism, and GPUs take advantage of this. Other software like word processors, less so, but honestly, how fast does Word XP really need to run? :) On the server side most software will make use of multi-core CPUs either because it is inherently multi-threaded, or because it just follows the Unix forking model (one process per CPU).

In high-performance computing clusters we use SMP and HT by simply running multiple instances of a non-threaded app. So there is a lot of computing that will benefit significantly from multi-core CPUs from AMD and Intel.

As I mentioned, though, a lot of software (for games, the CPU-side code) will need to be redesigned to take advantage of the new architectures.

I would suspect that for non-threaded applications the new cores will be underwhelming: probably below the performance of the state-of-the-art monolithic CPUs when the multi-core parts first come out. The software is going to have to adapt just as much as the CPUs.

I am very much encouraged though by the use of the superb Pentium M core. One of the key reasons the Pentium M is so efficient is that instead of chasing more and more MHz, they solved problems like branch prediction much more effectively. The Pentium M simply makes fewer mistakes when predicting which instructions to pre-load and pre-execute, so even at lower clock rates it gets more work done in a given amount of time (fewer do-overs). Less clock and less power for the same amount of work.
 
doormat said:
I severely doubt the Tejas project is canned. Just a few weeks ago, I talked to a chip designer from Intel who was working on it... it just seems really weird to can it that quickly...

The people doing the actual work on any big project are generally the last ones to be told their baby is getting shot down.
 