
3 Core Phenom is real

This is how I see it... When AMD/ATI introduce their CPU/GPU combo chips, Intel, Gigabyte, eVGA, BFG, XFX, etc. will either have to start doing the same thing (which they have little or no experience with), or they will have to buy out or merge with other companies (which the Democrats may not allow if they are in office). If AMD's current chipsets with onboard video are any indication of what that product will look like, I say, "Intel, you'd better get your arse on the move!"
 
Considering Intel will release Nehalem sooner than AMD's Fusion, they are already on the move, since they'll be the first to come out with a GPU/CPU.

A CPU/GPU won't break the video card makers; there will still be a need for video cards. A CPU/GPU will be faster than what we're used to from integrated video, but it's going to be far from an RV670.
 
Intel's video motive has been the same as Microsoft's: if they can put it all in one box and make the add-ons unnecessary, then that's what they will do. I think if Intel took its graphics seriously, there would be no PCI-E slots on future Intel-brand boards, and I'm sure they could do it. We sit at a crossroads where AMD and Intel both have integrated graphics solutions that threaten to make ATI and Nvidia obsolete. ATI is already absorbed into AMD, which leaves only Nvidia to think quickly and jump the two rivals in setting the next standard (again).
 
http://www.vr-zone.com/articles/AMD_Triple-Core_Is_Phenom_7-Series/5327.html

"We can expect DVT samples to be available by January, production by February and launch by March 2008."

This is really upsetting. The performance gain isn't even ground-shattering clock for clock from Barcelona to Kentsfield as it is, so why keep extending this wait? It should have been out last summer. This is ridiculous for an inferior product that's 65nm at launch when Penryn is 45nm. I call it inferior because this "new" product is comparable to Intel's product from last year. To top it off, since it is still a 65nm die, the wattage is going to be just as bad as Intel's non-native 65nm quads compared to their revised 45nm parts. So really, what major advancement is AMD making that the consumer has to wait this long for?

If they even have the nerve to price these chips high like they are doing with the current X2 6000 and 6400, then I'll lose even more respect for them. The 90W version of the 6000 shouldn't cost any more than an E6550, since they are neck and neck in performance, and the E6550 is still more energy efficient. The 125W version should cost even less, but it's a mere $10 less at Newegg than the E6550, for 35W more power compared to the 90W version.
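To put that $10 gap in perspective, here's a rough back-of-envelope sketch of what an extra 35W costs in electricity over a year. The 35W figure is the TDP gap mentioned above; the usage hours and electricity rate are assumed placeholders, so plug in your own numbers.

```python
# Rough sketch: what an extra 35 W of TDP could cost in electricity per year.
# The 35 W gap is from the post above; the usage hours and electricity rate
# are assumed placeholders, not quoted figures.

extra_watts = 35          # 125 W part vs. 90 W part
hours_per_day = 8         # assumed daily usage
rate_per_kwh = 0.10       # assumed electricity price, $/kWh

extra_kwh_per_year = extra_watts / 1000 * hours_per_day * 365
extra_cost_per_year = extra_kwh_per_year * rate_per_kwh

print(f"Extra energy: {extra_kwh_per_year:.0f} kWh/year")   # ~102 kWh
print(f"Extra cost:   ${extra_cost_per_year:.2f}/year")     # ~$10, about the price gap
```

Under those assumptions the 125W part roughly eats its $10 discount within a year of use.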
 
Integrated graphics threatening ATI and NV? At best on the low end; just look at the size of even a mid-range GPU.
The QuickPath and direct HT buses might, but as I said, it would hurt if you had to tell people which video card to put in their rigs, unless those cards really kick ***. That won't happen with two buses and three GPU players unless they agree on a standard, which doesn't seem to be happening.

Intel is pushing hard to put ray tracing in gaming rigs and blow AMD and NV out of gaming with that move; a good CPU/GPU combo could grab future consoles as well.
Though it's really hard to catch up: so far they could only run it on engines that are a few years old, using two-way rigs that won't come out for another year.
They have to develop faster than ATI and NV, starting with a 3-4 year handicap. A big boost is coming from Nehalem, and who knows how much from hybrid SLI/CF.
I hope gaming won't become a battle of dual CPUs vs. dual-or-more GPUs.

Even if they pull it off, which would be really surprising, it will boil down to price/performance and developer support.

Obviously it's not their choice. Their output is whatever the yields allow, and they can't even make enough for the server market alone.
Today the first batches finally showed up here in Japan, sooner than the whispered November date. We have 550 brand-new servers from the old Opteron sales waiting for processors, plus the upgrades; this will top more than 1000 processors, and more as they push the clocks. Yes, the Opterons are better for servers, but they might not win the desktop over yet. On the other hand, just like the ATI X19x0 series, it can turn into a better future investment as multicore apps appear; the question will be two shared L2 caches and an FSB vs. a shared L3. In old games Intel probably wins, but they're more than playable on AMD; new games are going to be interesting.

So to put it simply: it's not out because they can sell it elsewhere, and it might disappoint people. With a few games like Crysis it will get more interesting, and Nehalem might spoil the party.

C2D beat K8 pretty easily, right?
This picture is just to show the real difference in NEW games; even if Intel wins against K10 by 5 FPS, so what?

[Attached image: 15804.png — UT3 benchmark chart]
 
The chart you linked doesn't do justice to any CPU comparison. I'll explain...

http://www.anandtech.com/video/showdoc.aspx?i=3127&p=6

The problem with judging CPU performance from any GPU-bound chart is that the GPU bottlenecks the processor and prevents it from contributing any meaningful amount of performance, which is why you don't use GPU-bound charts when comparing CPUs. What that chart was displaying was a "realistic situation" for UT3 gaming, since UT3 is more GPU-limited than CPU-limited compared to other "new" games like Supreme Commander and World in Conflict, which both stress the CPU and GPU heavily rather than concentrating on one over the other. In that chart Intel managed a 9% advantage over AMD clock for clock. However, given that this is a GPU-bound scenario, the percentage difference between Intel and AMD still shouldn't be that high. They even mentioned it...

"We suspect that there's something fishy going on as the test is quite GPU-bound, yet going from Intel to AMD yields a reasonable performance drop."

"We looked at a 3.0GHz Athlon 64 X2 and compared it to its closest Intel price competitor, the Core 2 Duo E6550 (2.33GHz) at our high res settings: The Intel performance advantage drops to 7% on average, but it's much larger than it should be given that we're dealing with a GPU-bound scenario."

Also, if you read the wording before looking at the GPU-bound charts, you'll see that they made a clear error...

"We then cranked up the resolution to 1920 x 1200, and increased the world detail slider up to 5 to give us a more realistic situation for this clock speed comparison. The results were a bit surprising:"

However, in the charts the resolution is displayed as 1024x768, which is clearly not a GPU-bound resolution, making them a bit misleading. They need to correct the resolution in these charts.
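To make the bottleneck point concrete, here's a tiny toy model where the delivered frame rate is capped by whichever of the CPU or GPU finishes its share of a frame last. The FPS numbers are invented purely to illustrate the effect; only the 9%/7% figures quoted above come from the article.

```python
# Toy model of why a GPU-bound chart hides CPU differences: the delivered
# frame rate is roughly capped by whichever stage finishes a frame last.
# All FPS numbers below are invented for illustration only.

def delivered_fps(cpu_fps: float, gpu_fps: float) -> float:
    """Frame rate is limited by the slower of the two stages."""
    return min(cpu_fps, gpu_fps)

fast_cpu, slow_cpu = 150.0, 110.0   # hypothetical CPU-limited frame rates

# Low resolution: the GPU can outrun both CPUs, so the CPU gap shows fully.
gpu_low = 300.0
print(delivered_fps(fast_cpu, gpu_low), delivered_fps(slow_cpu, gpu_low))    # 150.0 110.0

# High resolution: the GPU becomes the bottleneck and both CPUs look the same.
gpu_high = 60.0
print(delivered_fps(fast_cpu, gpu_high), delivered_fps(slow_cpu, gpu_high))  # 60.0 60.0
```

That's why a sizable Intel-vs-AMD gap in a supposedly GPU-bound test looked "fishy" to them.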

--------------------------------

http://www.anandtech.com/video/showdoc.aspx?i=3127&p=6
http://www.anandtech.com/video/showdoc.aspx?i=3127&p=7

Now if you look at the strictly CPU-bound comparisons in the two links above, there is a clear difference between Intel and AMD of a good 40-50 fps clock for clock. This is clearly not a fair comparison price-wise, but that's not the purpose of the chart, since they are only comparing clock vs. clock.

On a final note: I'm not a fanboy of either company, because I own both and love competition, but if they can't keep up with the pace then they need to get some help. The rumor is either Samsung or IBM. It's merely a rumor that I doubt will hold any ground, but if it did pull through, AMD would have enough money to catch up to Intel again.
 
I know it was a GPU-bound resolution, but that's what I think most of us actually play at; that's why I chose it.
The point is not who is faster but whether it really matters. We know Intel is faster; I just tried to show how much it matters in the newest engine we've seen so far.
Next month we're going to have plenty of games to compare the two, and I don't doubt the gap will widen, but I'm also quite sure AMD processors will be able to run the games just fine as well.

With all that said, what really matters to me is the price/performance of the whole platform and its upgradeability.

Regarding the triple core: Intel can either cut prices on quads or boost clock speeds on duals; either way is better for them than giving AMD the sales.
 
If you look into how these numbers were produced, you'll see why the test is really GPU-bound more than CPU-bound. They've done "flyby" benchmarking, which is strictly synthetic and shows nothing in the way of real-world performance, which they even said...

"Real world UT3 performance will be more strenuous than what these flybys show but it's the best we can muster for now."

The flybys in this game won't include random calculations since...

"Epic left in three scripted flybys that basically take a camera and fly around the levels in a set path, devoid of all characters."

Thus proving just how much more of a GPU-bound test this is.

What you should consider is real-world performance over tests like these. What is surprising, as I said above, is that even in a simplistic flyby test Intel got 7% more performance than AMD price for price when comparing the E6550 to the X2 6000, and that's a far lighter load on the CPU than real-world calculations. You can imagine how big the gap will be when the final version comes out with its newer test method. Every extra percentage gain adds up across hardware components.
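As a quick illustration of how those gains add up, here's a sketch where a few independent component speedups are applied to one workload. The percentages are arbitrary examples, not benchmark results, and the multiplicative model assumes each gain applies across the whole workload, which is a simplification.

```python
# Quick illustration of "every extra percentage gain adds up": a few
# independent speedups compound multiplicatively if each one applies to
# the whole workload (a simplifying assumption). Percentages are arbitrary.

gains = [1.07, 1.05, 1.03]   # hypothetical CPU, memory, and storage gains

combined = 1.0
for g in gains:
    combined *= g

print(f"Combined speedup: {(combined - 1) * 100:.1f}%")   # ~15.7%
```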

If I had to make the price/performance choice last year, it would be Intel, simply because you could take a mid- to low-end E6xxx at launch and overclock it 1000+ MHz beyond its stock speed on decent air cooling, whereas that kind of headroom is nonexistent on AM2. Even if you ignore the overclocking factor and look strictly at stock vs. stock, you have to consider that the E6600 came out in the middle of last year and AMD was only able to touch its performance with the X2 6000 that came out in January of this year. That's a six-month gap between parts. Add the fact that Intel already had quads out at the time. There are even more price/performance choices today, since it's been a year and a half since the launch of Core 2 Duo.

I currently only consider AMD the best choice for a low-budget build. If the tri-cores are in fact priced sensibly, considering the position AMD is in vs. Intel, then I'll definitely recommend them.
 
Tri-Core photoged next to a 90nm DC

[Attached image: tricore.jpg]
 
Image input equipment model: Canon EOS-1DS
Software used: Adobe Photoshop CS Macintosh
Person who created the image: Tomas Pantin


Hmmm.... Been in CS2 on a Mac, that has.
 
LMAO, that was fun.
Source: Internet, AMD
Edited: Gimp
By: AlabamaCajun

I think the core on the right was a mobile part, given that purple-grey carrier. The idea was that if AMD could run single cores with one shared NB and an HT-linked bus, this might be what one would look like. This arch could end up slower than the XBar due to the inter-die transfers. Intel might pull it off, but I doubt AMD would invest in that much chippery.
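For what it's worth, here's a back-of-envelope latency sketch of that idea: a core reaching the shared northbridge either through the on-die crossbar or over a die-to-die HT link. Every number is a made-up placeholder, not a measured spec; it's only meant to show why the extra inter-die hop could make this layout slower than the XBar.

```python
# Back-of-envelope latency model for the speculated layout: single cores
# sharing one northbridge, reached either through the on-die crossbar or
# over a die-to-die HyperTransport link. All latency values are made-up
# placeholders, not measured specs.

XBAR_HOP_NS = 10.0     # assumed on-die crossbar hop
HT_LINK_NS = 60.0      # assumed die-to-die HT hop
NB_ACCESS_NS = 40.0    # assumed shared northbridge / memory-controller access

def core_to_nb_latency(on_same_die: bool) -> float:
    """Latency for a core to reach the shared northbridge."""
    hop = XBAR_HOP_NS if on_same_die else HT_LINK_NS
    return hop + NB_ACCESS_NS

print("On-die core  -> NB:", core_to_nb_latency(True), "ns")    # 50.0 ns
print("Off-die core -> NB:", core_to_nb_latency(False), "ns")   # 100.0 ns
# Under these assumed numbers the off-die path is 2x slower, which is the
# intuition behind "slower than the XBar".
```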
 
Actually AC, the carrier on the right looks like one of the old ceramic ones that the Tbird Athlons and Spitfire Durons used. Nice chop job, man :cool:
 
AMD tri-core CPU codenames and features revealed

http://www.digitimes.com/mobos/a20071023PD216.html

Monica Chen, Taipei; Joseph Tsai, DIGITIMES [Wednesday 24 October 2007]


Codenames and features of the tri-core CPUs AMD is planning to launch next year and in 2009 have been revealed by sources at motherboard makers.

Toliman adopts a 65nm process, supports socket AM2+ motherboards, and supports HyperTransport 3.0. The CPUs will have 2MB L3 cache and will be launched by the end of March 2008, according to the sources.

The next-generation, 45nm-based CPUs will adopt socket AM3, support HyperTransport 3.0 and DDR2/DDR3 memory, and will launch in the first half of 2009, noted the sources.

The motherboard makers pointed out that whether AMD's tri-core CPUs are successful in the market will depend on their price/performance ratio compared to quad-core and dual-core CPUs.
 