Ever heard of antitrust laws, and the many that Intel has recently violated?
AMD CAN'T show the market benchmarks without killing Intel and taking the whole pie.
You are wrong; companies strategize around the R&D they have developed.
For example, Nehalem was planned in 2002 for 2009. It seems Intel has had to push the Nehalem schedule back by half a year.
Ironically, AMD claims they are half a year behind. So again, how wrong am I?
I explained this on another thread, and on this one too (more mathematically); if you don't believe me, so be it.
It is a logarithmic decrease in the data being bottlenecked by the L3 subsystem.
The effects are noted through a scalar curve (or half-wave) technology (a data management system) over the time period the subsystem requires to handle a certain amount and type of data. This is achieved through virtualization.
The time frame limiting the subsystem's work is now given in Hz/s, i.e. cycles/s².
1)
Say 1 cycle is 32 Kbit of data processed per second.
Assuming 1 second = 2π, that gives 32 Kbit / x seconds.
For versors we can say x = 1 in the beginning.
Now, with scalar technology:
32 Kbit × (x cores) / (x s²) = maximum data to be handled.
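A minimal C sketch of the step-1 arithmetic, exactly as written above, with hypothetical numbers (32 Kbit per cycle, 4 cores, x = 1 second):

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical figures: 32 Kbit handled per cycle, 4 cores,
       x = 1 second to start, per the versor assumption above. */
    double bits_per_cycle = 32.0 * 1024.0;  /* 32 Kbit */
    int    cores          = 4;
    double seconds        = 1.0;

    /* Step-1 formula as written: 32 Kbit * cores / s^2 */
    double max_data = bits_per_cycle * cores / (seconds * seconds);
    printf("maximum data to be handled: %.0f bits\n", max_data);
    return 0;
}
```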
2) Now apply the Fibonacci sequence, whose ratio of consecutive terms converges to an irrational number, over time (I can't say "irrational proportion," now can I?).
The irrational number is obtained by taking the square root of the scalar system.
My scalar system says:
x (bits per cycle, i.e. the maximum data to be handled) × s² = required processing power.
This equation exposes itself as a scalar product:
scalar × [matrix²] = required processing power.
Now you may take the square root of time (since this is cycles per time squared), or the square root of the required processing power; the answer will be the same: AN IRRATIONAL INFINITESIMAL NUMBER, which you can approximate through FPU operations.
This is the major improvement of the core!
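To make the FPU point concrete, here's a small, purely illustrative C sketch showing two irrational limits being approximated by plain floating-point iteration: Newton-Raphson converging on √2, and the ratio of consecutive Fibonacci terms converging on the golden ratio (1 + √5)/2:

```c
#include <stdio.h>

int main(void) {
    /* Newton-Raphson: an FPU-style iteration approximating an
       irrational square root (here sqrt(2)) to machine precision. */
    double x = 1.0;
    for (int i = 0; i < 6; i++) {
        x = 0.5 * (x + 2.0 / x);
        printf("iteration %d: %.15f\n", i + 1, x);
    }

    /* Ratio of consecutive Fibonacci terms converging on the golden
       ratio (1 + sqrt(5)) / 2, another irrational limit. */
    unsigned long a = 1, b = 1;
    for (int i = 0; i < 20; i++) {
        unsigned long next = a + b;
        a = b;
        b = next;
    }
    printf("F(n+1)/F(n) after 20 steps: %.15f\n", (double)b / (double)a);
    return 0;
}
```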
3) Take this into account:
http://anandtech.com/cpuchipsets/showdoc.aspx?i=2939&p=10
"Translation Lookaside Buffers, TLBs for short, are used to cache what virtual addresses map to physical memory locations in a system. TLB hit rates are usually quite high but as programs get larger and more robust with their memory footprint, microprocessor designers generally have to tinker with TLB sizes to accommodate."
"Each Barcelona core gets its own set of data and instruction prefetchers, but the major improvement is that there's a new prefetcher in town - a DRAM prefetcher. Residing within the memory controller where AMD previously never had any such logic, the new DRAM prefetcher takes a look at overall memory requests and attempts to pull data it thinks will be used in the future. As this prefetcher has to contend with the needs of four separate cores, it really helps the entire chip improve performance and can do a good job of spotting trends that would positively impact all cores. "
Also worth considering: NUMA.
"AMD Virtualization Improvements
The performance-related improvement to Barcelona comes in the way of speeding up virtualized address translation. In a virtualized software stack where you have multiple guest OSes running on a hypervisor there's a new form of memory address translation that must be dealt with: guest OS to hypervisor address translation, as each guest OS has its own independent memory management. According to AMD, currently this new layer of address translation is handled in software through a technique called shadow paging. What Barcelona offers is a hardware accelerated alternative to shadow paging, which AMD is calling Nested Paging.
Supposedly up to 75% of the hypervisor's time can be spent dealing with shadow pages, which AMD eliminates by teaching the hardware about both guest and host page tables. The translated addresses are cached in Barcelona's new larger TLBs to further improve performance. AMD indicates that Barcelona's support for Nested Paging requires very little to implement; simply setting a mode bit should suffice, making the change easy for software vendors to implement."
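Here's a toy C sketch of the nested (two-level) translation the quote describes: guest virtual -> guest physical via the guest's table, then guest physical -> host physical via the host's table, with no shadow tables for software to maintain. The flat toy tables are hypothetical; real page tables are multi-level:

```c
#include <stdio.h>

#define PAGE_SHIFT 12
#define NPAGES     8

/* Toy page tables: index = page number, value = frame number.
   (Real page tables are multi-level; flat arrays keep the idea clear.) */
static unsigned long guest_pt[NPAGES] = {3, 1, 7, 0, 5, 2, 6, 4};
static unsigned long host_pt[NPAGES]  = {2, 6, 0, 4, 1, 7, 3, 5};

/* Nested translation: walk the guest's table (guest virtual ->
   guest physical), then the host's table (guest physical -> host
   physical), as the hardware does under nested paging. */
unsigned long nested_translate(unsigned long guest_vaddr) {
    unsigned long off       = guest_vaddr & ((1UL << PAGE_SHIFT) - 1);
    unsigned long guest_pfn = guest_pt[guest_vaddr >> PAGE_SHIFT];
    unsigned long host_pfn  = host_pt[guest_pfn];  /* second level */
    return (host_pfn << PAGE_SHIFT) | off;
}

int main(void) {
    unsigned long gva = (2UL << PAGE_SHIFT) | 0x123;  /* guest page 2 */
    printf("guest vaddr %#lx -> host paddr %#lx\n",
           gva, nested_translate(gva));
    return 0;
}
```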
Do you get it now?
Or must I go on?