
AMD Sitch

Status
Not open for further replies.
Gautam said:
What about performance per watt, heat output, cost of manufacturing, cost of running? Practicality in general?
What about the cost of setting it up? That is why people don't use it. Did you SEE the price tag on the 4x4?

Look, I am sick of bickering with you. If you want to get in bed with Intel, be my guest. I'm not going to argue my personal opinion, and the integrity I've come to find in a certain company, with you forever. If I can't post my positive opinion about AMD in the AMD section, then something's wrong.

Unsubscribed.
 
If all you are doing is playing games on an 8-core box, then you are wasting money. If, on the other hand, you work with graphics, CAD, or software development, then dual sockets is one hell of a platform, and I don't care what brand is in it. In my book, if you can't afford the electricity to run one, or don't have the funds, don't knock it; go back to your games. Sorry if that last line is a little harsh, but sometimes you have to knock some of them off the porch so they learn.
 
I don't think the cost of electricity has any bearing on this conversation. I could easily run ten quad-core machines 24/7 if I wanted to. My choice is not to run them.

I don't even run my dual core 24/7; no point in it being on when I am not using it.
 
CGR said:
I don't think the cost of electricity has any bearing on this conversation. I could easily run ten quad-core machines 24/7 if I wanted to. My choice is not to run them.

I don't even run my dual core 24/7; no point in it being on when I am not using it.

"I have run two CPUs and heat was an issue, and man did they pull a lot of power."
Sorry, I thought that "pull a lot of power" meant electricity. Not pointed directly at you, CGR, but my statement applies to others as well.

To clear that point up, I've not got onto that porch myself due to the MOBO power consumption, but with newer MOBOs on the horizon, I'll buy into it for my last post's stated reason.
 
Hmm. Looks like I am late to the party. Well, let me correct a few things here and there.


All things being equal, 2 CPUs > 2 cores. It's true in any sense.
No way. If you are working on a single application, then it is much faster to send data to the other core than to send it to RAM and then on to the second CPU.


So no wonder AMD isn't releasing benchmarks of their K10.
A scalable K10 at 3GHz would push Intel out of the market.
Oh yeah, and then why does AMD lose all that money? Why does it not finish Intel now? Why does it not release a decent CPU, then?

IF AMD truly wanted it, they could've already pushed Intel out of the market by just releasing a full line of K10s and ramping up clock speed.
Instead they are prolonging the battle and buying them time by releasing K8s with L3.
Why? Is AMD such a nice company that they decided not to finish Intel off? Or do they simply not want to earn money? Let me remind you: they are losing money each day. They are a company. Companies exist not to be nice, but to earn money. Therefore, AMD has no choice but to lose money, because they can't release a full line of K10s.

savageseb said:
do you people see it?
AMD improves performance logarithmically over time.
As more time passes by, efficiency increases.

Of course, a quick and dirty benchmark at 1.6GHz with DDR2-667 is all they need to prove the chip is in production and testing. Up the frequency by 50%, add a scalable benchmark to the solution over a time graph, and watch AMD efficiency rise.

It would be nice if they gave us 2 chips clocked differently so we could work out the increment ratios.
I repeat, y=mx+b doesn't work for scaling, lol. Core efficiency is no longer linear.

I am afraid I don't quite understand you. O(log n) has smaller growth than O(n). Therefore, as more time passes by, less efficiency increase will occur. So 2x 1.6GHz CPUs will do more than a single 3.2GHz one, 0_o?

And that is why I am giving you the "sir, you are gravely mistaken" hat. If we are given 2 chips clocked differently, the growth will depend on the benchmark and will certainly NOT be the same. If you want, go out and get an E4300. Then clock it at 1.8, 2.0, 2.4, 2.8, and 3.2 GHz and watch it not be logarithmic.
It is not all about the GHz. It is also about the number of registers, cache size, speed of cache, leakage, FSB, floating-point usage, and many other factors.
y=mx+b certainly works better for scaling than logarithmic growth does. Logarithmic growth is very slow.

Maybe you were talking about exponential growth? From the 386 era until now there has been a nice speed bump.
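To make the linear-versus-logarithmic point concrete, here is a small Python sketch. The score numbers are made up purely for illustration (they are not real benchmark results); the point is only how far a hypothetical logarithmic curve falls behind y = mx + b as clock speed rises:

```python
import math

def linear_score(clock_ghz, m=100.0, b=0.0):
    """Idealized y = mx + b scaling: score grows in proportion to clock."""
    return m * clock_ghz + b

def log_score(clock_ghz, k=100.0):
    """Hypothetical logarithmic scaling: each extra GHz buys less."""
    return k * math.log2(1.0 + clock_ghz)

# The same E4300-style clock ladder from the post above.
for ghz in (1.8, 2.0, 2.4, 2.8, 3.2):
    print(f"{ghz:.1f} GHz  linear={linear_score(ghz):6.1f}  log={log_score(ghz):6.1f}")
```

The gap between the two curves widens at every step up the ladder, which is exactly why "logarithmic growth is very slow" as a scaling story.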
 
Cheator said:
All things being equal, 2 CPUs > 2 cores. It's true in any sense. 2x Opteron 244s > 1x Opteron 165. It's simply the truth. And to me, that's what true SMP is. And if you want it to be an attempt at quad core, then that's fine with me. But don't tell me my emotions are clouding my thinking, and DON'T put words in my mouth.
Actually, it's not. Due to the nature of the Opteron platform causing a latency hit from the second socket, and most applications not being particularly bandwidth-sensitive, you'll find the 165 beats the 244s across the board. Like how the Opteron 175 is faster than dual Opteron 248s in this Techreport review:

http://www.techreport.com/reviews/2005q2/opteron-x75/index.x?pg=6

It's also the reason why the 3GHz FX-74 loses to a 2.8GHz FX-62 in most desktop applications.
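The second-socket latency hit is easy to model with a back-of-the-envelope sketch in Python. The latency numbers below are illustrative guesses, not measured Opteron figures: average memory latency rises with the fraction of accesses that land on the other socket's memory, which is why a single-socket chip can beat a dual-socket pair in latency-sensitive desktop code.

```python
def avg_latency_ns(local_ns, remote_ns, remote_fraction):
    """Weighted average memory latency on a two-socket NUMA box."""
    return local_ns * (1.0 - remote_fraction) + remote_ns * remote_fraction

# Illustrative numbers only: local DRAM ~60 ns, one extra
# HyperTransport hop adds ~50 ns for remote memory.
local, remote = 60.0, 110.0

single_socket = avg_latency_ns(local, remote, 0.0)  # all accesses local
dual_socket = avg_latency_ns(local, remote, 0.5)    # naive 50/50 spread

print(f"single socket: {single_socket:.0f} ns")  # 60 ns
print(f"dual socket:   {dual_socket:.0f} ns")    # 85 ns
```

With a NUMA-unaware OS spreading allocations evenly, the dual-socket box pays a stiff average-latency penalty on every cache miss, and no desktop benchmark is bandwidth-hungry enough to win that back.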
 
ShadowPho said:
Hmm. Looks like I am late to the party. Well, let me correct a few things here and there.



No way. If you are working on a single application, then it is much faster to send data to the other core than to send it to RAM and then on to the second CPU.



Oh yeah, and then why does AMD lose all that money? Why does it not finish Intel now? Why does it not release a decent CPU, then?
Ever heard of antitrust laws, and the many Intel has recently violated?
AMD CAN'T show the market benchmarks without killing Intel and getting all the pie.
ShadowPho said:
Why? Is AMD such a nice company that they decided not to finish Intel off? Or do they simply not want to earn money? Let me remind you: they are losing money each day. They are a company. Companies exist not to be nice, but to earn money. Therefore, AMD has no choice but to lose money, because they can't release a full line of K10s.
You are wrong; companies strategize given the R&D they have developed.
For example, Nehalem was planned in 2002 for 2009. It seems Intel has had to push back the agenda for Nehalem by half a year.
AMD, ironically, claims they are half a year behind. So again, how wrong am I?


ShadowPho said:
I am afraid I don't quite understand you. O(log n) has smaller growth than O(n). Therefore, as more time passes by, less efficiency increase will occur. So 2x 1.6GHz CPUs will do more than a single 3.2GHz one, 0_o?

And that is why I am giving you the "sir, you are gravely mistaken" hat. If we are given 2 chips clocked differently, the growth will depend on the benchmark and will certainly NOT be the same. If you want, go out and get an E4300. Then clock it at 1.8, 2.0, 2.4, 2.8, and 3.2 GHz and watch it not be logarithmic.
It is not all about the GHz. It is also about the number of registers, cache size, speed of cache, leakage, FSB, floating-point usage, and many other factors.
y=mx+b certainly works better for scaling than logarithmic growth does. Logarithmic growth is very slow.

Maybe you were talking about exponential growth? From the 386 era until now there has been a nice speed bump.

I explained this on another thread, and on this one too, and if you don't believe me (more mathematically), so be it.

It is a logarithmic decrease in the data being bottlenecked by the subsystem of the L3.

The effects are noted through a scalar-curve (or half-wave) technology (a data-management system) over the time period the subsystem requires to handle a certain amount and type of data. This is achieved through virtualization.
The time frame limiting the subsystem's work is now given by Hertz/s, or cycles/s^2.

1)
Say 1 cycle is 32Kb of data processed per second.
Assuming 1 second = 2π (π = pi):
32Kb/x seconds.
For versors we can say x = 1 in the beginning.

Now, with scalar technology:
32Kb (x cores) / x (seconds squared, s^2) = maximum data to be handled.


2) Now apply the Fibonacci sequence, which is an irrational-number sequence, of time (I can't say irrational proportion, now can I?).
The irrational number is obtained by taking the square root of the scalar system.

My scalar system says:
x (set of bits) per cycle (or maximum data to be handled) * s^2 = required processing power.
This equation exposes itself as a scalar product:
scalar * [matrix^2] = required processing power.
Now you may take the square root of time, since this is cycles/time squared,
or the square root of the required processing power; the answer will be the same, AN IRRATIONAL INFINITESIMAL NUMBER, which you can approximate through FPU operations.
This is the major improvement of the core!

3) Take this into account: http://anandtech.com/cpuchipsets/showdoc.aspx?i=2939&p=10

"Translation Lookaside Buffers, TLBs for short, are used to cache what virtual addresses map to physical memory locations in a system. TLB hit rates are usually quite high but as programs get larger and more robust with their memory footprint, microprocessor designers generally have to tinker with TLB sizes to accommodate."

"Each Barcelona core gets its own set of data and instruction prefetchers, but the major improvement is that there's a new prefetcher in town - a DRAM prefetcher. Residing within the memory controller where AMD previously never had any such logic, the new DRAM prefetcher takes a look at overall memory requests and attempts to pull data it thinks will be used in the future. As this prefetcher has to contend with the needs of four separate cores, it really helps the entire chip improve performance and can do a good job of spotting trends that would positively impact all cores. "
Also take NUMA into consideration.

"AMD Virtualization Improvements

The performance-related improvement to Barcelona comes in the way of speeding up virtualized address translation. In a virtualized software stack where you have multiple guest OSes running on a hypervisor there's a new form of memory address translation that must be dealt with: guest OS to hypervisor address translation, as each guest OS has its own independent memory management. According to AMD, currently this new layer of address translation is handled in software through a technique called shadow paging. What Barcelona offers is a hardware accelerated alternative to shadow paging, which AMD is calling Nested Paging.

Supposedly up to 75% of the hypervisor's time can be spent dealing with shadow pages, which AMD eliminates by teaching the hardware about both guest and host page tables. The translated addresses are cached in Barcelona's new larger TLBs to further improve performance. AMD indicates that Barcelona's support for Nested Paging requires very little to implement; simply setting a mode bit should suffice, making the change easy for software vendors to implement."

Do you get it now?

Or must I go on?
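The nested-paging passage quoted above is easier to see with a rough cost model. The g*h + g + h formula below is the standard textbook approximation for a two-dimensional page walk, not an AMD-published figure, and the sketch is only meant to show why caching translations in larger TLBs matters so much:

```python
def walk_cost(levels):
    """Memory references for a native page-table walk on a TLB miss."""
    return levels

def nested_walk_cost(guest_levels, host_levels):
    """Two-dimensional (nested) walk: each guest-level pointer is a
    guest-physical address that itself needs a host walk to resolve,
    giving roughly g*h + g + h references in total."""
    return guest_levels * host_levels + guest_levels + host_levels

# x86-64 uses 4-level page tables on both sides.
native = walk_cost(4)            # 4 memory references
nested = nested_walk_cost(4, 4)  # 24 memory references

print(f"native TLB miss: {native} references")
print(f"nested TLB miss: {nested} references")
```

A 4-reference miss becoming a 24-reference miss is why shadow paging eats so much hypervisor time, and why caching the final guest-virtual to host-physical translation in Barcelona's larger TLBs keeps the blow-up off the common path.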
 
savageseb said:
I can, because I read about the architecture differences. And if you did, you would understand what I am talking about.

Show me the chip and I'll believe it. If you want, I'll write you up a nice article that says AMD is going to use time-traveling nano-robot technology to make their new processors run faster than Intel's. Until it is out and on the market, it's not a valid point. Every company has something bigger and better on the drawing board, or in pre-production or testing, that is better than what is out now.
 
savageseb said:
Ever heard of antitrust laws, and the many Intel has recently violated?
AMD CAN'T show the market benchmarks without killing Intel and getting all the pie.

That is an interesting theory!

So, if AMD showed us benchmarks of a 3GHz Barcelona, the CPU market would immediately crash, putting Intel out of business and leaving only AMD with a full monopoly (ignoring for a moment VIA, IBM, Motorola, etc.)?

I've got some property for sale that you might be interested in... http://en.wikipedia.org/wiki/Golden_gate_bridge :p
 
savageseb said:
...... AMD will copy Intel in dielectrics and pipelining for better MHz in 2009.

Savage, this is all speculation, right? Or are you working real, real deep in ring 0 within AMD's CIA unit in there? :)

savageseb said:
IF AMD truly wanted it, they could've already pushed Intel out of the market
by just releasing a full line of K10s and ramping up clock speed.
Instead they are prolonging the battle and buying them time by releasing K8s with L3.
(Of course, I'm assuming that Cartwheel is a K8, given the name is Athlon and not Phenom.)

I really never understood that highlighted sentence.

Is that like in a cheap kung-fu movie, where the good-guy master sharpens his kung-fu in a cave, waiting... and with the final happy ending and sweet revenge, he blows the villain away with a leet kung-fu strike? :D

I think it's time to wake up and face reality.

These kinds of comments do more harm than good to the AMD camp, IMO.
 
It would seem you people are in need of some math lessons, then, and this **** is worth a book.
So if you people don't know how dimensional elements interact, then it is not my fault.
Here we call it vector spaces, a product of elemental reality, and the axioms are very well defined.

So if you don't "know" it, you are screwed in all your theories.

I have a long mathematical argument whose basics you have not grasped, not to mention historical accounting already documented (for those who don't believe that math is a language).
What do you have?

Let me start you off:
http://en.wikipedia.org/wiki/Real_numbers
http://en.wikipedia.org/wiki/Square_root
http://en.wikipedia.org/wiki/Logarithm

If you are in need of some bibliography, just ask... I guess I can name a few doctors of mathematics who are aware of this fact.
 
woot? looky here


http://www.theinquirer.net/default.aspx?article=41324

What you are seeing is part two of the reason AMD bought ATI: they needed a robust chipset division so they could tightly couple things into a platform offering. OEMs like this; no, they love and need it, so AMD is doing it. OEMs have a single product to wrap a case around and put out for a defined time period, and they can plot, plan and scheme without additional headaches. They are simple creatures, and this makes their offerings simple.

It also shifts a lot of engineering burden onto the shoulders of AMD, which is fine by the OEMs because they don't have to do it. This level of engineering bandwidth would have been impossible without ATI, as would the tight coupling between CPU and chipset. Intel uses this strategy quite effectively, and now AMD is looking to do the same.

One interesting side effect is openness; AMD is really trying to be open to all chipset and related silicon vendors. They are not going down the destructive path of shutting out partners like Intel does, but you have to wonder if the partners can compete in this new paradigm. I am specifically thinking of Nvidia, but SiS and Via will have the same headaches. Will they devote the resources to putting out platforms like Intel and now AMD, or will they be left by the wayside? AMD has stepped up to the plate at the table Intel made; will NV?

In the end, this is a fundamental shift for AMD. They are doing what they have to do to stay competitive, and have shown they have the will to compete. It will be quite interesting to see the battle move from chips to platforms over the next few years. µ




After The Inquirer has been bashing AMD for the past 1.5+ years,
we seem to finally hear good news? Are you ****ing joking me?
Anyhow...
The Inquirer article now sways over to AMD, proving even further that what I am saying is right!

Unless that article is based on what they read here...

However, I might point out that AMD decided on the shift in late 2006...
 
QuietIce said:
Ah, the price of greed ... :burn:

QuietIce said:
Ah, the price of greed ... :burn:

Probably $^2, hahahaha.

Sorry, but I can't stand bull**** talk; math isn't bull****.

A graph of what occurs as I increase the number of cores per die with L3:

http://s91.photobucket.com/albums/k317/savageseb/?action=view&current=Dibujo.gif

Proportional increments of data processed over time depend upon sub-system organization and processing speed.

A major change of rate is noticed through the change in the slope of the tangent at the half-time of the data chunk being processed (this assumes 0 < x < 100% load of the cores). This goes 500 years back...
 
Savage, that's a little rough on the brain in the morning :eek: Your math is more than likely correct, but the average person is not going to take the time to understand quantum mathematics or chaos theory. I think we might sum up some of that if we look at the queue for a four-track roller coaster. It has a single queue handling 3000 riders an hour, loading 64 people onto each car, times 4 cars per track, times 4 tracks. You have to keep the groups of people together, and you have to deal with flash passes and VIPs. This all changes the loading patterns, affecting the stream of people coming in. It's rarely the same and has many patterns. As for branch prediction, you might liken it to track three having a restraint failure, so you now have to move those folks to track four. Then a group of 64 VIP tourists shows up, and you have to dedicate the next car on track one and track two to accommodate them, possibly putting some folks out in the L3 holding area waiting for the next car. Time factors play in: with 4 cars on 4 tracks, in a given time frame each one is idle in the station, loading/unloading riders, on the lift, in the course, or sitting stuck somewhere.

How all this speeds up the simulation of this beast of a coaster is what I'm interested in, and if your math and all its exponents prove to work, my screen won't jump just as I'm on the inside loop of the "Cobra Roll" or at the top of the "Zero G" loop, causing the whole effect to suck!
 
It is a logarithmic decrease in the data being bottlenecked by the subsystem of the L3.
....snip....
Do you get it now?

Or must I go on?
Please go on, I like to see new ideas born. :beer:
While L3 cache is certainly... nice... it is still not necessary. L2 cache is faster (much faster), easier to access, and Intel is planning to have tons of it.
As the number of cores increases, so will the amount of L2 cache.

And this in no way will get AMD to Intel's level. You need faster CPUs, not better-managed memory. :rolleyes:
Lemme ask you this:
When was the last time YOU made an application that was bottlenecked by the L3 cache?
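To put the "when does L3 actually matter" question in numbers, here is a hedged average-memory-access-time (AMAT) sketch in Python. The latencies and hit rates are illustrative placeholders, not measured Barcelona or Core 2 figures; the point is that an L3 tier only pays off for workloads whose working set misses L2 often:

```python
def amat(l2_hit, l2_lat, l3_hit, l3_lat, mem_lat):
    """Average access time past L1, with an optional L3 tier.
    l3_hit is the fraction of L2 misses that the L3 catches;
    set it to 0.0 to model a chip with no L3 at all."""
    l2_miss = 1.0 - l2_hit
    return l2_lat + l2_miss * (l3_hit * l3_lat + (1.0 - l3_hit) * mem_lat)

# Illustrative latencies in cycles: L2 = 15, L3 = 45, DRAM = 200.
small_saved = (amat(0.98, 15, 0.0, 45, 200)
               - amat(0.98, 15, 0.6, 45, 200))  # small working set
large_saved = (amat(0.80, 15, 0.0, 45, 200)
               - amat(0.80, 15, 0.6, 45, 200))  # working set spills L2

print(f"small working set: {small_saved:.2f} cycles saved by L3")
print(f"large working set: {large_saved:.2f} cycles saved by L3")
```

With a 98% L2 hit rate the L3 barely moves the average, while a workload that misses L2 a fifth of the time sees roughly ten times the benefit, which is both sides of this argument in one formula.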

AlabamaCajun said:
Savage, that's a little rough on the brain in the morning :eek: Your math is more than likely correct, but the average person is not going to take the time to understand quantum mathematics or chaos theory. I think we might sum up some of that if we look at the queue for a four-track roller coaster. It has a single queue handling 3000 riders an hour, loading 64 people onto each car, times 4 cars per track, times 4 tracks. You have to keep the groups of people together, and you have to deal with flash passes and VIPs. This all changes the loading patterns, affecting the stream of people coming in. It's rarely the same and has many patterns. As for branch prediction, you might liken it to track three having a restraint failure, so you now have to move those folks to track four. Then a group of 64 VIP tourists shows up, and you have to dedicate the next car on track one and track two to accommodate them, possibly putting some folks out in the L3 holding area waiting for the next car. Time factors play in: with 4 cars on 4 tracks, in a given time frame each one is idle in the station, loading/unloading riders, on the lift, in the course, or sitting stuck somewhere.

How all this speeds up the simulation of this beast of a coaster is what I'm interested in, and if your math and all its exponents prove to work, my screen won't jump just as I'm on the inside loop of the "Cobra Roll" or at the top of the "Zero G" loop, causing the whole effect to suck!
Quite the contrary, I understand what Savage is trying to say, and also what you are saying. It is a nice idea for increasing FSB-to-RAM bandwidth... but... I hate to break it to you... it won't catch Intel.


I just hope that the Fusion thing AMD is doing will be good. Personally, I would really LOVE to try that Fusion thing out. It might be the next great thing, or it might... die... miserably.
That Fusion thing is a revolutionary idea, and not the L3 cache. L3 cache and better caching methods come and go, while getting rid of PCI Express is a big idea. It would make for a completely different design. Instead of reading models from the HDD, fiddling with them in the VPU, and then sending them to the monitor, you could do it all on the motherboard. Now we just have to see if the RAM will be fast enough.
 
Kuroimaho said:
Probably the side effect of seeing pics of a 3GHz K10.

You are a moron, period.




When you people stop thinking of time as an IMPLICIT factor, then you will understand TIME as a machine sees it. It is nothing but an extension of elemental reality. And this dimensional reality becomes irrational; in order for our feeble brains to understand it, we take shortcuts in R^2. Newton took shortcuts and assumptions at R^2 when explaining his laws. And truthful as his laws may be, this is where calculus and derivatives are born.
He took the work left to be done by Descartes in the 1600s and manipulated it to conform to his own understanding of vectors in R^2, but he never resolved the problem of irrationality left by the understanding of what happens at a certain point in time.

You can also see it through the simple eyes of Euclidean geometry.
Degrees are nothing but an approximation of time. So when programmers understand binary better, and how information is restructured under different ranges (dimensions) of a matrix, then and only then will you know what I am talking about.

A tangent line is that which explains what happens at a certain point through the work done by a function; it explains the curvature generated by x^2, or in Einstein's equation, E = M * (C^2).
In the AMD case the curvature is generated through time, and this evidence is irrefutable throughout mathematical history and analysis.

When taking the square root of x^2 you end up with 2 answers, X1 and X2, a - or a +, a truth and a false.
If both answers exist in the body of rational numbers, the matrix will have an inverse. Not only that, you can find the proportion a/b, and these will be, at most, whole numbers or coprime (no irrationality here).
But if the root of the quadratic or square numbers is undecipherable on either side (X1, X2), then the matrix will not have an inverse, and you have an irrational expression in the form of imaginary numbers, non-renormalizable.

Example:
X1/X2 = a/b = 1/(infinitesimal irrational number) = indeterminability, IF 0 < b < 1,
or
(infinitesimal irrational number)/1 = ever increasing = indeterminability, IF 0 < a < 1.

It is important to understand that irrational infinitesimal numbers follow this law:

0 < X < 1.


I am not a programmer...
However, math considers irrational numbers to be part of the BODY of real numbers. So when structuring your system you have to consider conformity at different infinitesimal levels: X * 10^n, the scalar product of exponential numbers.

So let's review sequencing.
Sequencing:
Sequences ASSUME the existence of natural numbers (part of the body, and the beginning of, Q, the rational numbers).
Out of combining the assumed existence of these natural numbers with rational numbers, infinitesimal numbers are born, and so are irrational numbers.

This is not theory; this is called algebra, and if you don't know how it works, then you don't know binary, and I'm wasting my time with you people.
You must review sequencing in order to understand AMD's train of thought.

A sequence will determine the vector component of a matrix along the rows or columns, and thus virtualization mapping is achieved.

The map is given by taking the cofactors and determinants (excuse my lame translation).
A linearly independent set of generators can compose linearly dependent generators, and both of these types of generators can compose linear combinations of these.
These are the vectors of a binary matrix.
(If you don't know what I'm talking about, you must study vector spaces.)

So, must I go on?
Haven't you people ever studied Gauss?
Because it seems I'm teaching you people how to read binary...
 