
AMD going 20nm this year?

No, die shrinks just reduce the cost of a chip and what can be included in that chip. It's still beneficial, but not as much as it once was.

You know, I didn't think that Intel had to take that out of their compiler? I wasn't aware of that aspect of things. I thought the court case was purely based on their sales/rebate practices, where they'd withhold the rebates from anyone who sold AMD CPUs instead of just Intel chips.
 

Oops, I was wrong. It was in the settlement of the lawsuit from AMD that they're required to "fix" the compiler. The FTC just requires them to, quote, "disclose to software developers that Intel computer compilers discriminate between Intel chips and non-Intel chips, and that they may not register all the features of non-Intel chips. Intel also will have to reimburse all software vendors who want to recompile their software using a non-Intel compiler."
http://www.ftc.gov/opa/2010/08/intel.shtm
 
Yeah, basically they're allowed to continue with their dodgy compiler. The problem is that their compiler is better than most others out there for both Intel and AMD processors; otherwise everyone would be using GCC by now.
 
Instead of talking about importing things, this has now turned into a "bashing thread".
 
You guys talk about Intel compilers, yet the one used for SuperPi was not an Intel-specific compiler but a plain-jane one (from what I recall being discussed, since y'all are rehashing an OLD TOPIC on SuperPi). On top of that, even the benchmarking community as a whole said they weren't sure how or why the Linux setup was faster for both Intel and AMD versus Windows; as they said, the only way they could make sure the numbers were legit was with SuperPi. Second, Windows is way more bloated than Linux, and with Linux you can recompile the kernel to be more specific to a CPU arch/instruction set. So is it really bloatware on Windows, or is there something else about the program on Linux running a 32M Pi calculation that lets it somehow do it faster? A large TIME difference between two OSes and two different programs used for calculating Pi simply makes me ask more questions about how it is being calculated on both Intel and AMD, Windows vs Linux, SuperPi vs System Stability Tester. How did you guys end up going off on, yes, "Intel bashing"? I thought you guys didn't like fanboys, yet you're acting like them?

The only thing that has been shown by Frakk is not just an AMD thing: Intel chips are even faster on Linux with the same program, so you haven't shown anything proving it's an "Intel-only" compiler issue. Therefore what came after a few posts is now bashing "all" programs that are faster on Intel vs AMD, not just the ones that were actually tweaked for Intel-only CPUs. I have no clue which programs were tweaked for Intel-only CPUs, but there is no way SuperPi is one of them; that has been discussed to death (in the past).

This thread is not about compilers or how much faster my AMD is on Linux vs Windows. It is about AMD GOING TO 20nm THIS YEAR, yes or no, so let's stay on topic.

This whole "I don't like fanboys, but I see some acting like a fanboy" thing is OLD.
Why are you surprised how much the fire flamed up after you emptied the entire bottle of lighter fluid on it?
That is the direction this thread is going...
 
Let's just try to get this back on track about whether or not AMD is going 20nm this year :) There is a bit too much potential here for Intel vs AMD arguing.
 
Meh, I don't trust SuperPi, nor any other synthetic benchmarks. They're all irrelevant.

Yes, I do hope they can move to 20nm in the near future, though I believe that won't be until early next year at the soonest. If they are able to jump from 32nm to 20nm, this should get them to similar levels to what Intel is at, probably with lower power usage or perhaps sacrificing power for performance ;)
 
It is off topic, but then again it is mjw21a's thread, and if he does not mind, why should it be a problem?
Anyway, my thinking is the court case and everything related to it are a valid discussion. I did not know the details of it until now, and I'm grateful Bubba-Hotepp put it there; it's pretty relevant information, and one is not going to know about it unless the info is available.

It's not bashing.

To an impartial bencher, or an overclockers forum, benching software that might be holding one brand of CPU back from its full potential is pretty significant when benching and comparing different CPUs, would you not agree, Evilsizer? :)

It is an inconvenience to people who would like that information buried, fanbois for example. To everyone else... of course it matters.

Now I'm pretty interested to know, either way.

System Stability Tester is available for Windows; it does exactly the same thing as Super PI and uses the same Gauss–Legendre algorithm.

I had a 32M run of 2m 21.851s with it @ 4GHz. I'm not sure where that puts it relative to a 2500K / 3570K @ 4GHz, but I suspect it's probably closer in this than it is in Super PI.

For anyone wanting to run it, make sure it's the 64-bit one (as that matters), set it to 32M, and use the Gauss–Legendre algorithm.
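If anyone is curious what that Gauss–Legendre iteration actually looks like, here is a minimal sketch in C, just to illustrate the recurrence. It only uses double precision, so it converges to machine-precision pi within a handful of iterations; tools like Super PI and System Stability Tester run the same recurrence with arbitrary-precision arithmetic to reach millions of digits.

```c
/* Minimal Gauss–Legendre iteration for pi (double precision only). */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double a = 1.0, b = 1.0 / sqrt(2.0), t = 0.25, p = 1.0;

    for (int i = 1; i <= 5; i++) {
        double a_next = (a + b) / 2.0;          /* arithmetic mean  */
        b = sqrt(a * b);                        /* geometric mean   */
        t -= p * (a - a_next) * (a - a_next);   /* correction term  */
        a = a_next;
        p *= 2.0;
        printf("iteration %d: pi ~ %.15f\n", i, (a + b) * (a + b) / (4.0 * t));
    }
    return 0;
}
```

Each iteration roughly doubles the number of correct digits, which is why the big-digit runs are dominated by the cost of the arbitrary-precision multiplies and square roots rather than the number of iterations.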

Have fun. :)
 

Exactly, I don't mean to "bash" Intel at all. I think the 2500K is a fantastic CPU. All of their CPUs are very good, just IMHO not always the "best bang for the buck" depending on the price and the performance of competing CPUs at that price level (take the FX-6100 vs i3-2120 for example). There are always multiple factors that must be taken into account when choosing a CPU, and price is one of them. Personally I think the i3s would be a fantastic choice at around $90 - $100, give or take, but again that's just IMHO.

Now, in the month since I upgraded my CPU to an FX-6100 and joined this and another forum (mostly the "other" forum), I've heard everything from "you're stupid for not buying an i3" to "AMD sucks at gaming" and all sorts of other "AMD bashing". A lot of people just searching for information on performance can easily be misinformed, especially when they have no idea of things to consider, such as what happens when using programs compiled with Intel's compiler and how that can create a false measure of performance. It's never good to bury or ignore that kind of information, as it obscures the truth.

That being said, I'm not saying that, for example, the 8120 is "better" than the 2500K etc. IMHO neither is a bad choice. But I would think that the whole purpose of this forum and these discussions is to inform and be informed, and to cut through all of the "spin" and murky half-truths.
 
Yeah, Frakk knows me. I'm not worried about things going off topic. Often some of the most interesting reads can be found in derailed threads ;)

Funny though, I am an AMD fanboy, else I'd not still be running AMD. On the other hand, in the mid range AMD is still reasonably competitive, and that's where I always buy because I'm cheap ;)

Anyway, my point is that I believe I remain fairly impartial when recommending purchases to others. Most of the time my recommendation is the i5 2500K; below that, though, I'll recommend an FX-4100 over an i3 any day. They're unlocked, after all ;)
 
You want to get to brass tacks on the concern, yet look at programs made by people who don't care which one is faster; they're plainly coded so the better arch shows through. However, throwing out false facts, i.e. the Windows vs Linux 32M run with two different programs, is a huge red flag. That doesn't dig to find the truth, it just makes the water even more muddy. There might be some games or programs out there coded to favor Intel. If you're going to go down that road, then you would also need to figure out which games favor AMD vs NV hardware in gaming, as both tie into this exact same thing; both have two different ways of doing work. Some games are coded to take more advantage of one GPU than the other due to the resources it has; same thing with CPUs.

Before you can figure one thing out, you're going to have to sort through a whole mess of other factors. Yes, it was bashing, since your posts were making "false claims". Had there been an Intel-tweaked compiler at work, I'm sure someone would have figured it out, just like how some of the older video card hardware (I think it was NV) was using a trick in its drivers to make the 3DMark scores higher. People figured that out; unless someone has stepped up and found what you want to claim, then I think you're just grasping at straws...

Historically AMD has been better at FPU and Intel at ALU; I'm not sure how much, or if, that has changed at all. Right there is another factor too.

Just trying to point out some things I think y'all are missing.
 
You mean like "false" claims like these?

- disclose to software developers that Intel computer compilers discriminate between Intel chips and non-Intel chips, and that they may not register all the features of non-Intel chips. Intel also will have to reimburse all software vendors who want to recompile their software using a non-Intel compiler.
Source - http://www.ftc.gov/opa/2010/08/intel.shtm

TECHNICAL PRACTICES
Intel shall not include any Artificial Performance Impairment in any Intel product or require any Third Party to include an Artificial Performance Impairment in the Third Party's product. As used in this Section 2.3, "Artificial Performance Impairment" means an affirmative engineering or design action by Intel (but not a failure to act) that (i) degrades the performance or operation of a Specified AMD Product, (ii) is not a consequence of an Intel Product Benefit and (iii) is made intentionally to degrade the performance or operation of a Specified AMD Product. For purposes of this Section 2.3, "Product Benefit" shall mean any benefit, advantage, or improvement in terms of performance, operation, price, cost, manufacturability, reliability, compatibility, or ability to operate or enhance the operation of another product.
In no circumstances shall this Section 2.3 impose or be construed to impose any obligation on Intel to (i) take any act that would provide a Product Benefit to any AMD or other non-Intel product, either when such AMD or non-Intel product is used alone or in combination with any other product, (ii) optimize any products for Specified AMD Products, or (iii) provide any technical information, documents, or know-how to AMD.

Source - AMD/Intel Lawsuit Settlement Agreement http://www.amd.com/us/Documents/Intel 8K with Full Settlement Agreement.pdf

Pre Settlement news article -
http://betanews.com/2005/07/13/suit-intel-sabotaged-compiler-for-amd/

Here's another explaining what it does pre FTC settlement agreement -
http://www.theinquirer.net/inquirer/news/1567108/intel-compiler-cripples-code-amd-via-chips

Yep, definitely "false" claims, and the same thing as Nvidia or AMD optimizing their drivers.
 
Yes, historically AMD has had a strong FPU; however, I'm not sure that holds true with their current generation of chips. With BD, an 8-core, 4-module chip really is equivalent to a 4-core chip for FPU work, as the floating point unit is shared between the two cores on each module...

I'm not sure what you mean regarding the compiler? It was proven many years ago that Intel's compiler turns off most of the SSE instructions when a non-Intel chip is used. Then again, this is just a fact of life, so people really don't need to know about it; all that is relevant from an end user perspective is that a certain piece of software is faster on Intel than AMD, and the reason why isn't necessary. On the other hand, this is a discussion between some very knowledgeable people, and we're not recommending one product over another, just discussing technicalities. I think we all just need to chill a bit. ;)
 

So give this a run in Windows just as I have, http://systester.sourceforge.net/downloads.php and see how far you can get past my time, if at all. Seriously, I'm so interested to know.
 
Bubba-Hotepp,
The way things were being said came off as if all these companies that make software, even benching software, are out to hurt AMD; that is just lunacy. I don't have time right now to go digging in that link to find dates. I know that at one point Intel had their own benchmark setup that made them out to be faster than AMD CPUs. I just don't see how it could be hidden that 3rd party programs made by non-Intel people would work this way.

mjw21a,
The compiler is an important part of what has been brought up about this "cheating". If you write the code, it just doesn't run on its own; it has to be put into a package that something like Windows or Linux can work with, so it gets compiled, otherwise nothing happens. You have options to tell the compiler to use certain features if they are detected, or not. The programmer for SuperPi left things plain-jane, no use of any instruction sets at all. That program was brought into question when the Linux vs Windows, SuperPi vs System Stability Tester comparison was posted. Maybe System Stability Tester is in fact using those instruction sets and Super PI is not, and the CPU that's better at number crunching was simply faster. After all, Intel Core 2 CPUs are still king for SuperPi 1M to some degree; as you would notice, even at the same CPU speed, the one with more L2 cache was always faster, since more of the raw data could fit in there. My stance, looking at the i-series from Intel, is that they should have, and still need to have, at least 512K of L2 per core minimum; this is looking at it from a gaming POV.
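To illustrate that point about compilers and instruction sets (this is just an editorial sketch, not SuperPi's actual source): the same C code can be built "plain-jane" or with newer instruction sets enabled, and the choice is made by compiler flags rather than by the program itself. The flags below are real GCC flags; the file name is made up.

```c
/* sum.c - a simple reduction loop the compiler may auto-vectorize when allowed.
 * Example build lines:
 *   gcc -O2 sum.c                              -> baseline code for the default target
 *   gcc -O3 -ffast-math -march=native sum.c    -> may use SSE/AVX found on the build machine
 */
#include <stdio.h>

double sum(const double *x, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)   /* candidate loop for vectorization */
        s += x[i];
    return s;
}

int main(void)
{
    double data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    printf("%f\n", sum(data, 8));
    return 0;
}
```

Whether Super PI itself was built with anything beyond the plain baseline is exactly the open question here; the flags above only show the mechanism.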
I'm cool, so I'm not sure who is getting hot. Things are just getting thrown around willy-nilly, IMO.

Frakk,
I did try to run that program many times on my i7 rig; it keeps popping up with some MS C++ error, so I don't have a clue what to do, I'm not a debugger. Now my question to you is, why didn't you run that program in Windows vs Super PI? Hell, Linux is faster with Intel than Windows is; this is not some kind of surprise.
 
I did, I can't remember exactly but it was under 17 minutes for Super PI. But Super PI is 32-bit, or at least the one I have is, and the 32-bit version of this is also much slower.

I have found some 4M benches of this with a 2500K. My 4M bench was 12.34s @ 4GHz; there is a 2500K @ 4.3GHz with 10.87s and another two @ 3.3GHz with 13.9s and 14.5s.

That's 1.5s slower than the 4.3GHz 2500K and 1.6s to 2.1s faster than the ones @ stock.

I can't get mine to 4.3GHz, as for some reason it will not boot at volts over 1.41, yet it should.

But it looks to be about par with the 2500K.

http://forums.aria.co.uk/showthread.php/84267-System-Stability-Tester-Benchmark

Edit, actually there is one here @ 4GHz, http://forums.aria.co.uk/showthread...er-Benchmark?p=1537119&viewfull=1#post1537119 12.12s, that's 0.22s faster, about 2% I think
 

Attachments: 6.PNG, 4Ghz.PNG
Not "at one time", it's currently. Sysmark was written using that same compiler. Let me explain what they got caught doing, what's changed and how that affects all third parties using intel's software. Currently intel's compiler is the most widely used out there. Here's the problem. The compiler does a CPUID check. If it detects a genuineintel ID then it optimizes the code to use "cpu optimizers" like SSE, SSE2, SSE3 etc. If it detects any other geniuneXXX CPUID it specifically does not use those same optimizers even if the CPU in question has flags that identify that it is compatible with them. It was proven by a guy who used a VIA CPU since they have the ability to change the CPUID on a VIA CPU to whatever they want. He changed the VIA's CPUID to "geniuneintel" and the program ran faster. That became part of AMD's lawsuit as well as the Anti-trust lawsuit by the European Union (which resulted in the largest anti-trust fine in EU history to date). As part of the settlement agreement with AMD they "promised" to change that in the compiler (hasn't been done yet). The US Federal Trade Commision initiated an investigation shortly after the settlement and eventually brought a suit against Intel. Intel settled that as well and paid a large fine. As part of the settlement they are required to list on their website (which they currently do) that their compiler discriminates against non Intel CPU's (they found a way to say that without using the word "discriminate") as well as reimburse any 3rd party developer for costs associated with "recompiling" their software. And it's not over. States like New York still have anti-trust suits pending against intel over that as well as paying companies to either not use AMD processors or to not use AMD processors at all. You can argue it all you want but this is all a matter of public record in the courts. To this day the compiler hasn't been changed and still operates that way. That means any program written using Intel's compiler will optimize a program for Intel CPU's while refusing to optimize for any others.

Edit - Don't take all of this as a "bash" of Intel. It's merely information that people need to know about benchmarking programs. Using some benchmarking programs like Sysmark (and especially older ones) can give people a false impression of performance.
 
I really wish AMD gave us the option to change their CPUs to report "GenuineIntel"... It would be a sweet feature to include in the BIOS.
 