
apple dual g4 vs. dual amd

Mac isn't evil; it's a hardware manufacturer. Microsoft only handles software. If the Mac OS were suddenly released for standard PCs, Apple would be walking dead.

On the other hand, if Apple wanted to go x86, they could release it using AMD but with a proprietary motherboard type. That way they would still have hardware control. That's the only way it will ever be released on x86.

On the topic of the GUI: Apple stole it from Xerox and developed it, Microsoft stole the GUI from Apple, and then some Linux companies stole the GUI from Windows. Let's pretend I never made the point about the GUI :). FireWire, on the other hand.....
 
Motorola should stick with making radios, not chips.
I know this ain't supposed to be a Mac-bashing thread, but I think everybody should have a Mac to kick around. I have one in my shed, and I take it out once in a while, kick it a few times, smile, and go back to my PC with renewed enthusiasm.
Macs are just plain ugly, especially the iMac... it looks like a fancy doorstop.

When doing the test, keep it as fair as possible; if not, everybody will razz you for it. Photoshop or some video-rendering software should give you good benchies to compare... something that crunches a ton of numbers is what you should be after for the bench tests.

Did I mention that I hate Macs? I do... I really do.
 
Motorola is really screwed; Apple is going IBM soon enough.

I know this ain't supposed to be a Mac-bashing thread, but I think everybody should have a Mac to kick around. I have one in my shed, and I take it out once in a while, kick it a few times, smile, and go back to my PC with renewed enthusiasm.

That is just pathetic, just so you know.
 
I spy with my little eye one Mac fan, Fedorenko. It's OK,
some people have to be wrong and like dumb things...
Hope you didn't like the Cube, though...

Hopefully Apple will take the IBM/AMD route; anything is better than overclocking your own CPUs at manufacture because they can't compete with real computers...
(as they did with the 1.25GHz G4s)
 
There is no company named Mac. It annoys me to no end when people keep using the word Mac like it is a company or organization. It's the name of a product, like the word Pentium or Athlon.

Does it sound very silly when someone says that Athlon is in the toilet because they are losing money and firing people?
 
XWRed1 said:
There is no company named Mac. It annoys me to no end when people keep using the word Mac like it is a company or organization. It's the name of a product, like the word Pentium or Athlon.

Does it sound very silly when someone says that Athlon is in the toilet because they are losing money and firing people?


We know, we know: their name is Apple, and one of their models of computer is the Macintosh. It only sounds funny to someone who pays attention to Apple a lot. I mean, "Athlon" as a company name would sound normal to the average person, because that is the name touted all over the place. I think people using the name "Mac" for Apple is caused by laziness. There is no short term for "Apple"; I mean, it's two syllables, for god's sake, who would want to say, much less type, two whole syllables?!?!?!

Plus, what kind of name is Apple? At least "AMD" and "Intel" have some pizzazz. There is no object named "AMD" or "Intel", but "Apple", on the other hand, is a non-threatening ball of juicy goodness. An "Apple" just sits around and rolls when kicked. No fun. An "AMD", on the other hand... it... well... bursts into flame at the sight of light. How cool is that? Everybody loves fire :p. I don't know what an "Intel" does that's cool, but I'm sure they do something.

Seriously, though, I think it does have to do with laziness. I don't know how, but it does. Two more letters isn't hard.

Crap. My main thought was obscured by bad jokes and dumb analogies.

Here is the gem: I have many friends who, when asked what kind of comp they have, say "Windows", or "Windows 98", or my favorite, "Windows ME." This is no different from an Apple being called a Macintosh.
 
nealric said:
So, back to the original topic:
has this competition taken place yet? Who won?

What he said...

If this doesn't happen, this thread will either get deleted or moved to Debates (I hope it will be the former).

I've been following this thread for a little bit, but I'm a bit annoyed by the fact that nobody has posted any benchmarks...

Bah... whatever.
 
Results, results, we definitely need them. I've been looking on the web for results; it's getting that bad...

Each website/company normally limits its benchmarks to whatever platform it prefers, so they look better...
Gflops my arse; "supercomputer on a chip" it is not. Let's have some unbiased results...
I'll be happy to admit the AltiVec code helps some apps and that the huge backside cache helps for others...
 
I find it amusing that most of the replies in this thread don't even touch on the technical differences between these processors. :rolleyes:

Way back in the day, I started a thread on this, and it was quite a lot different. :(


The "clock-speed controversy"
That's the debate between the meaning of "MHz" and now "GHz."

All things being equal, the faster the clock speed, the more powerful the computer, because more instructions can be processed in the same amount of time when the clock runs faster.

However, all things are not equal.
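
A quick toy example (the numbers here are hypothetical, not measured): throughput is clock rate times instructions completed per cycle (IPC), so a slower clock can still win if each tick does more work.

/* toy model, hypothetical numbers: throughput = clock x IPC */
#include <stdio.h>

int main(void)
{
    double a_mhz = 800.0,  a_ipc = 2.0;  /* slower clock, more work per tick */
    double b_mhz = 1200.0, b_ipc = 1.0;  /* faster clock, less work per tick */

    printf("chip A: %.0f million instructions/sec\n", a_mhz * a_ipc);
    printf("chip B: %.0f million instructions/sec\n", b_mhz * b_ipc);
    return 0;
}

Chip A wins, 1600 to 1200, despite the slower clock. The rest of this post is about why the two design philosophies end up with different work-per-tick.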

Most people know that the Mac's processor follows a design philosophy called RISC, for Reduced Instruction Set Computing.

PC processors follow the traditional approach, called CISC, for Complex Instruction Set Computing.

RISC gets more done per clock tick by:

1) reducing the number of steps through which instructions must go in order to be executed, and

2) (via the G4's AltiVec unit) operating on data in 128-bit chunks, rather than the 32- or 64-bit chunks common in PCs (see the sketch just below).
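
That "128-bit chunks" point is really about the G4's AltiVec vector unit, which applies one instruction to four 32-bit floats at once. A minimal sketch of the idea, assuming GCC with -maltivec on a G4 (vec_ld wants 16-byte-aligned data):

/* AltiVec sketch: one vector add does four float adds at once */
#include <altivec.h>
#include <stdio.h>

int main(void)
{
    float a[4] __attribute__((aligned(16))) = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] __attribute__((aligned(16))) = {5.0f, 6.0f, 7.0f, 8.0f};
    float c[4] __attribute__((aligned(16)));

    vector float va = vec_ld(0, a);    /* load four floats in one go */
    vector float vb = vec_ld(0, b);
    vector float vc = vec_add(va, vb); /* one instruction, four adds */
    vec_st(vc, 0, c);                  /* store the four results     */

    printf("%.0f %.0f %.0f %.0f\n", c[0], c[1], c[2], c[3]);
    return 0;
}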

CISC is the traditional design philosophy: make lots of instructions based on what engineers think is cool, and then compiler writers use a small subset of those instructions while the rest sit around taking up space or are occasionally used by assembly programmers. But once added, these instructions cannot be removed (legacy), so most CISC designs are saddled with 20-year-old instruction sets, complex instruction designs, lots of gates (switches and space on the chip) to get work done, and are often forced into using microcode (an emulator for hardware, which slows down execution) to fit all the instructions on the chip within the time or space requirements.

RISC is the design concept of simplifying the instruction set's complexity (not necessarily the number of instructions) so that designers can use the freed-up gates (switches, which equal space on the chip) to do other things. Usually those gates are put to work making the whole chip faster (super-scalar execution, super-pipelining, caches, branch prediction, etc.). Simplifying the instruction set also means that you can bring a chip to market faster and take advantage of newer processes at less cost... or that you could design multiple specialized flavors of a chip for the same cost.

CISC can be fast, or CISC can be power-efficient, but it is hard to do both with CISC (compared to RISC). CISC machines just flat out require more gates to get the same work done, and that means more heat and power. RISC is better for laptops and low-end consumer machines where power consumption matters (which is why almost all PDAs and home-appliance computers are RISC).

RISC machines have less complex instructions and devote a lot more of their real estate to cache. Cache is easier to create (map out) than instruction logic, and less likely to have bugs. RISC machines are therefore less expensive to design and will have fewer bugs (for the same effort) than CISC.
Memory and chip capabilities are growing dramatically. As manufacturing keeps advancing, the chip that is easiest to design (RISC) CAN be the first to implement new technology (that does not mean it always will be, just that it could be, starting from the same point). This is part of the reason why most of the big jumps in process or performance are seen on RISC first, and will likely continue to be. Money can compensate for some of this, but time is continuous.
MMX is a way to make a processor MORE CISC-like and MORE proprietary. Intel's MMX philosophy is to add MORE complexity to the instruction set (which they will have to carry around forever), even though those instructions will only be executed a small fraction of the time.

A better design philosophy: instead of spending $50 of gates on your processor to do this work (and tying up your processor in the process), use a $50 dedicated chip (and evolve it separately) to offload the task, do it faster, and leave your processor free to do other work during that same time. In other words, an MMX-based Pentium will be slower at both processor and DSP functions than two chips specialized to do each. They will likely cost about the same, but the Intel approach ties each task's evolution to redesigns of the other (you can only scale the technologies together). It also makes it harder to split the tasks up and have multiple processors or DSPs. And when one of the units is working on a problem, it is more likely to have resource conflicts with the other, or to prevent the other unit from doing what it wants, in the Intel approach than with two separate processors.

more tech here: http://amiga.emugaming.com/riscisc.html

That's why a Mac G4 running at 800 MHz can, on some tasks, dramatically outperform a Dell running at 1.2 GHz while performing identical work.

and check this out too:
http://www.applieddata.net/design_riscCisc.asp


As seen here, though, it's not always the case that the Mac is hands-down faster: http://www.digitalvideoediting.com/2002/05_may/features/cw_aeshowdown.htm
"If Mac users are under the impression that their machines can render After Effects composites faster than any Windows-based workstation, our tests do not support that conclusion."

However, one must note that they were NOT using a 1.2GHz Athlon; it was a test based on duallies.

You may not be very familiar with the Apple platform (RISC architecture) and operating system, though, so here is a quick run-through.
Apple systems tend to run at higher prices than PCs, but because Apple makes both the operating system and the machines, they tend to have tighter integration between products.

This is for one simple reason: Microsoft doesn't manufacture the computers that run its operating systems, so it has the added task of creating drivers as well as an operating system that must support several different processors (Celerons, Pentiums, Athlons, Durons, etc.) plus the many different hardware configurations available.

This task is not on Apple’s shoulders so they don’t have to worry about all this extra work. Therefore they have been able to come up with an operating system that works seamlessly with their computer systems.

Unfortunately for Apple, they simply do not have the support that Microsoft has from the literally thousands of software companies out there. Therefore, yes, there is less software available for the Macintosh operating system known as Mac OS.

The latest version of the Mac OS, OS X (v10), takes a totally different approach from previous editions. Previous Mac OS releases, such as v9.2, were based on Apple's own operating system, but this is not the case with v10. It is based on the BSD Unix operating system, making it more stable than previous versions (Unix is renowned for its stability).

The new operating system also has support for dual processors and manages the system's RAM much more efficiently than previous editions. Recently, v10.2 was released as well.
This latest version of the operating system is also known as "Jaguar," and sports an X logo with jaguar fur.

Most data-intensive computing tasks (video, audio, graphics) involve floating-point calculations.
Apple's new dual-1GHz PowerPC G4 system handles them at speeds of up to 15 gigaflops -- that's 15 billion floating-point operations per second.
(You just found out why it is taking longer for games to be written for this environment; game-writers are very slowly getting used to having this amount of power at their disposal.)
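
For what it's worth, here's roughly where a peak number like that comes from. The assumption is mine, not Apple's published math: AltiVec's four-wide fused multiply-add counts as 8 flops per cycle.

/* back-of-envelope peak flops; assumes 4-wide FMA = 8 flops/cycle */
#include <stdio.h>

int main(void)
{
    double clock_hz       = 1.0e9;  /* 1 GHz G4                 */
    double flops_per_tick = 8.0;    /* 4 lanes x (multiply+add) */
    double cpus           = 2.0;    /* dual processors          */

    printf("theoretical peak: %.0f gigaflops\n",
           clock_hz * flops_per_tick * cpus / 1.0e9);
    return 0;
}

That prints 16; the quoted 15 is in the same ballpark, and either way it's a theoretical peak, not sustained throughput.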





some more tidbits:
http://www.geocities.com/imac_driver/conclusion.html


I hope this will bring to light the true differences between the platforms and end some of this silly argument.

The Mac is a great product for folks who need 'that' product.

The PC is for the rest of us who don't mind getting our hands dirty or messing with crappy software: hardcore gamers, enthusiasts, overclockers, and geek freaks
(and cheapskate do-it-yourselfers too).

Apple has a package called "iTools" which covers almost all the bases for the everyday Mac user's needs, and aside from Gaming... the Apple Macs leave little to be desired.

The Mac has a well-rounded software library and is perfect for beginners, students, audio/video professionals, and even light gamers.

The high cost of the Mac does include a DVD burner, and everything else a person really needs...
(placing a Mac's cost comparable to a Dell's)
in a very compatible, reasonably stable, and easy-to-use package.
It's non-geek for the most part.

And, all that said...
I must point out that even the PC world is arguing over whether, and how, clock speed really measures productivity: http://netscape.com.com/2100-1103-869796.html
 
Will win what?

Can't you see there's no true way to compare two totally different things??

From: http://forums.zdnet.com/group/zd.An...D-,D@ALL/@article@81158?EXP=ALL&VWM=hr&ROS=1&

The point of the P4 is to have an obscenely high MHz to compensate for its really long pipeline.

And the reverse: the long pipeline allows obscenely high clock rates. But higher clock rates burn far more energy than lower-clocked but more efficient processors (more efficient due to either fewer stages or overall better design), i.e. RISC.

For "predictable" code, as you said, the Intel processors are great. For branching code, however, you stand a very high chance of throwing out and rolling back the pipeline so much that the P4 can end up at about the same speed as a G4 at a quarter the cycle rate (ie, 2.4GHz P4 performing approximately as well as a 600MHz G4). In general, though, consumer apps are a good mix of predictable and branching code, and most would pit the top G4s against a P4 at about 2x the cycle rate (1.25GHz G4 vs 2.5 GHz P4).


Why don't you race Linux against Windoze, both on Athlons?


How come we never hear people ask "is a Sun SPARC better/faster than a PC"? Do ya know why?

Two Totally Different Animals!

just like Mac vs. PC!!

There can be no winner of a fair fight comparing apples to oranges, cuz it's NOT a fair fight.



apparently...

someone did run that test:

Subject: mhz myth is true. I kid you not.
Poster: PengLuber (7/19/2002, 12:12 pm EDT)

Despite what the article says, the MHz myth is true.

I am myself a large penguin lover, which means I use Linux. Now, I did some testing a while ago with Linux on my x86 machine and Linux on my PPC machine.

The x86 machine was an Intel Pentium III 800MHz, and the PPC was, like, a G4 667MHz I believe (I can't remember clearly). Both machines ran Mandrake 8.0 with the default kernel. I then wrote some benchmarking programs that did some serious calculations. I wrote the programs in ANSI C to maintain portability.

The results of the benchmarks showed the G4 beating the P3 by nearly 25%, which proves once and for all that the MHz myth does exist. Now I wanna do these tests with a DP 1GHz G4 and my AMD Athlon MP 2100+ DP system.

from: http://www.macobserver.com/comments/commentindivdisplay.shtml?id=17094
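
He didn't post the code, but a portable ANSI C number-cruncher in that spirit might look something like this (my own sketch, not PengLuber's actual program):

/* portable ANSI C floating-point benchmark sketch; prints seconds */
#include <stdio.h>
#include <time.h>

#define ITERATIONS 50000000L

int main(void)
{
    long i;
    double x = 0.0;
    clock_t start, end;

    start = clock();
    for (i = 0; i < ITERATIONS; i++)
        x += (double)i * 1.000001 / (double)(i + 1); /* keep the FPU busy */
    end = clock();

    printf("result %f, time %.2f seconds\n",
           x, (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}

Compile it with the same compiler and flags on both boxes (say, gcc -O2) or the comparison is meaningless.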


another good read here:
http://forum.pcvsconsole.com/viewthread.php?tid=146

and another article...
RISC vs. CISC: the Post-RISC Era

In this paper, I'll argue the following points:

RISC was not a specific technology as much as it was a design strategy that developed in reaction to a particular school of thought in computer design. It was a rebellion against prevailing norms--norms that no longer prevail in today's world. Norms that I'll talk about.


"CISC" was invented retroactively as a catch-all term for the type of thinking against which RISC was a reaction.


We now live in a "post-RISC" world, where the terms RISC and CISC have lost their relevance (except to marketing departments and platform advocates). In a post-RISC world, each architecture and implementation must be judged on its own merits, and not in terms of a narrow, bipolar, compartmentalized worldview that tries to cram all designs into one of two "camps."

http://www.arstechnica.com/cpu/4q99/risc-cisc/rvc-1.html
 
I don't see how it's not a fair fight. If one can't get it done as fast as the other, it loses. It doesn't matter if it loses because the software wasn't as optimized, blah blah blah; there's always the "I would have won if..."
Why do the differences in the processors have anything to do with it?
 
Because the processors are basically designed to do two different things (they handle different instructions), one does the job it was designed to do better, plain and simple.

That's why AltiVec, cache, and low MHz work:
each chip is optimized to do what it was designed for.
 
Hey now, genius can't be rushed ;) No one has helped with the subject of an actual benchmark!! Assuming I use Photoshop, where/how can I get or run a set of tests with it that will take a while and give some sort of result in terms of seconds or something? Come on, someone has to know something.
 