
Difference between 64 and 32 bit processors?


bterry13

Member · Joined Sep 20, 2002 · Naptown!
My friend brought up to me that the Mac G5 has a 64-bit processor, but neither of us knew the difference between that and a 32-bit processor.

I was thinking the Athlon MP(?) series was a server series that used 64-bit processors. Also, the G5 has two processors. They advertise the G5 as a personal computer, but it seems to be more of a server. Aren't two processors practically worthless on a personal computer?
 
32-bit CPUs handle 32 bits of data at a time; 64-bit CPUs handle 64 bits at a time. That gives a theoretical potential of doubling your throughput at a given clock speed, though in practice it should translate to a 10-20% performance gain.

As mentioned, this requires a 64-bit OS and 64-bit games/apps.

Itanium was out before the G5 or the Opteron/Athlon 64, and it was/is available in workstations, which in my mind is a desktop machine. Opteron is available in workstations too; again, a workstation is just a high-end desktop. Apple has neither the first 64-bit CPU, nor the first desktop 64-bit CPU, nor the fastest CPU... just ignore them, they are trying to drum up some sales by using marketing to confuse the common computer consumer.
 
DaddyB said:
32-bit CPUs handle 32 bits of data at a time; 64-bit CPUs handle 64 bits at a time. That gives a theoretical potential of doubling your throughput at a given clock speed, though in practice it should translate to a 10-20% performance gain.

That's not really the case. The difference between 32-bit and 64-bit designs can mean two things: either increased precision (and larger numbers), or increased address space.

For over a decade and a half now, desktop computers have had up to 80-bit floating-point numbers. And the SSE2 instruction set can handle 128-bit numbers. Most SIMD instruction sets (such as SSE/SSE2, 3DNow!, AltiVec, etc.) let you pack two 32-bit numbers into 64 bits, or four into 128 bits, which can be far more useful than one 64-bit number in common tasks. So saying a CPU can handle 64-bit data is nothing impressive.

When people talk about 64-bit CPUs today, they mean chips that have 64-bits of address space. That's the only difference. And currently, it's not a very important difference for the vast majority of desktop users.
 
Not to be anal (well, perhaps I am being anal), but 64-bit refers to the width of the General Purpose Registers (or GPR's), which are used to store integers and memory addresses (pointers). A "64-bit" MPU, by definition, will have 64-bit wide GPR's. It doesn't have to have 64 bits of address space (Opteron and Itanium 2 both "only" have 48 bits of virtual address space), and it doesn't necessarily need to have 64-bit wide ALU's, although most 64-bit implementations do use them.
This translates into a larger flat memory address space and the ability to use bigger integers without a performance hit. The former seems to be the more important feature (modern 32-bit MPU's can address up to 4 GB of flat memory, which is becoming a limitation in the workstation market). The latter would be useful in things such as cryptography. However, for the majority of home applications (which are becoming more and more floating-point intensive), it means very little, if anything at all.

Of course, most of the new "64-bit" MPU's also bring improvements other than just the "64-bitness". x86-64 (the ISA used by the Opteron and Athlon 64), for instance, brings 8 additional GPR's and 8 additional SSE registers. These, if utilized, should bring about a 10-15% performance improvement. The "64-bitness" has nothing to do with it.
 
I didn't exactly word my original post correctly; I was just trying to illustrate the difference between 32 and 64-bit. I will try to put together a more comprehensive post tomorrow (it's too late now), but...

If what you two are saying were correct, then there would be no need for an x86-64 OS; SSE (and SSE2) is supported in current OSs, and the amount of RAM is not going to affect speed that much.

When there is a 64-bit Windows for AMD CPUs, you will see a 10-20% performance improvement (over an identical system with the same amount of RAM), as is already the case with Linux. Where will the speed come from, if not what I said?
 
Simply put, a 32-bit processor is based on 32 bits... to handle 64 it requires 2 clock cycles. With this in mind, a 64-bit (true and totally 64-bit) system will outperform a 32-bit system at the same clock speed. ALSO, a 64-bit system adds 'accuracy', i.e. 32-bit graphics as opposed to 64. And this too is just ONE aspect... remember the 386 was a 16-bit chip but would do 32-bit virtual... as in it took 2 clock ticks to process 32 bits.
 
bterry13 said:
Aren't two processors practically worthless on a personal computer?
The G5 is a workstation... Apple just has its head on backwards. Duals can be useful, especially if you are encoding DivX or something else that's very demanding on a CPU and you want to do something else at the same time. And things like Photoshop love duals, since they are designed to use them. If you don't do stuff like that, then a dual is pretty useless.
 
Zuzzz said:
Simply put, a 32-bit processor is based on 32 bits... to handle 64 it requires 2 clock cycles. With this in mind, a 64-bit (true and totally 64-bit) system will outperform a 32-bit system at the same clock speed. ALSO, a 64-bit system adds 'accuracy', i.e. 32-bit graphics as opposed to 64. And this too is just ONE aspect... remember the 386 was a 16-bit chip but would do 32-bit virtual... as in it took 2 clock ticks to process 32 bits.

First of all, the 386 was a fully 32-bit processor. It had 32-bit registers and a 32-bit address space.

Second, pretty much any modern 32-bit processor can work with 64-bit numbers (be it integer or FP). These are of course functions of special instructions, but x86 has always been about extending the instruction set wherever needed.

And a 64-bit processor/OS really wouldn't have any impact on the accuracy of graphics. That's more a function of the GPU, most of which can do 128-bit vector processing already.
 
Well, if by "most" you mean NV30 and NV35. Those are the only GPUs I'm aware of that work on 128-bit vectors (which are much like SSE vectors: 4x 32-bit FP values). First and foremost, "64-bit", like I mentioned before, refers only to the GPR size. This has absolutely no bearing on FP precision (which is what graphics today heavily use). Secondly, the definition of a "64-bit MPU" is merely that its GPR's are 64 bits wide. It *does not* need to have a 64-bit wide ALU (one that can actually calculate a whole 64-bit number in one clock). It can have only 32-bit wide ALU's and merely execute the instructions over multiple clock cycles (much like how the simple ALU's on the P4 are only 16 bits wide, but double-pumped). It would still, by definition, be a "64-bit" MPU. So no, it has very little to do with execution speed.
 
As you were saying with the P4, it has "double-pumped" 16-bit ALU's, which gives it the same performance as 32-bit; if it had only a single 16-bit ALU, then it would take more clock cycles to execute the instructions. Same with 64-bit: it will be able to do 64-bit instructions in half the number of clock cycles compared to an identical CPU in 32-bit mode.

As I said previously, performance increases with a 64-bit OS. Take a look at these numbers: in some things you see a 20% (or more) performance improvement, and in some things there is very little performance gain. Since it is the same CPU in the same machine, it has the same amount of RAM and all the same instruction sets/registers (i.e. SSE2). The performance increase is purely due to being able to handle larger integers in fewer cycles, and without the use of extra registers, is it not?

There may not be a 'need' for 64-bit CPUs for the majority of us, but it will speed things up a bit, and as 64-bit CPUs become more commonplace, so will 64-bit code. It's like with video cards: who would have imagined a few years ago that the pixel pipelines would be used to apply effects while video editing? Thanks to DX9 it is now possible. Once 64-bit CPUs are in our systems, programmers will find ways to use them more effectively to boost performance.
 
As you were saying with the P4, it has "double-pumped" 16-bit ALU's, which gives it the same performance as 32-bit; if it had only a single 16-bit ALU, then it would take more clock cycles to execute the instructions. Same with 64-bit: it will be able to do 64-bit instructions in half the number of clock cycles compared to an identical CPU in 32-bit mode.

The point is, of course, that just because an MPU is "64-bit" does not mean it will necessarily have 64-bit execution units, merely that it has 64-bit GPR's.

As I said previously, performance increases with a 64-bit OS. Take a look at these numbers: in some things you see a 20% (or more) performance improvement, and in some things there is very little performance gain. Since it is the same CPU in the same machine, it has the same amount of RAM and all the same instruction sets/registers (i.e. SSE2). The performance increase is purely due to being able to handle larger integers in fewer cycles, and without the use of extra registers, is it not?

Not the same registers. In "64-bit mode", x86-64 allows access to 8 additional SSE registers and 8 additional GPR's. This is not a feature of "being 64-bit" but rather a feature of x86-64 specifically.

There may not be a 'need' for 64-bit CPUs for the majority of us, but it will speed things up a bit, and as 64-bit CPUs become more commonplace, so will 64-bit code. It's like with video cards: who would have imagined a few years ago that the pixel pipelines would be used to apply effects while video editing? Thanks to DX9 it is now possible. Once 64-bit CPUs are in our systems, programmers will find ways to use them more effectively to boost performance.

There's no "need" for 64-bit because most intensive applications nowadays use FP data, not integer. Like I said before, being "64-bit" applies only to the GPR's. Modern x86 MPU's (Athlon, P4, P3, etc.) already use 80-bit FP precision for FP data and they do have 80-bit wide execution units.
The only feasible improvement, in terms of consumer applications, would be the ability to address more than 4 GB of flat memory. The only applications that use integers that large (64-bit integers) are probably cryptography and scientific calculation. The bulk of computing has moved, or is moving, towards FP-intensive work. And being "64-bit" has absolutely no meaning in terms of FP.

I find it a common mistake that people confuse x86-64 with "64-bit" in general. x86-64 is *not just* a 64-bit extension, it brings more and *those* features will provide speed improvements in your typical consumer applications. AMD claims 10-15%.
 
DaddyB said:
Same with 64-bit: it will be able to do 64-bit instructions in half the number of clock cycles compared to an identical CPU in 32-bit mode.

But if the CPU that takes twice as many cycles runs at twice the clock rate as the other CPU, it doesn't matter.


As I said previously, performance increases with a 64-bit OS. Take a look at these numbers: in some things you see a 20% (or more) performance improvement, and in some things there is very little performance gain.

Running massive MySQL queries under Linux is not exactly representative of typical applications. It doesn't correspond to the kind of processing required for gaming, digital video, etc. Plus, those benchmarks are from just one program. It's likely that it has some specific optimizations for the Opteron in 64-bit mode.
 
Ironically, many video game systems have been running at 64 bits for some time now. It's amazing that desktops are only now catching up to what are essentially toys!
 
The Xbox is technically "32-bit" (it runs a P3), and it's more powerful than the N64 or the Jaguar (or was it the Saturn that Sega made that was "64-bit"?). This is exactly what I mean when I say that it means very little for graphics/gaming. Games have shifted towards floating point and no longer use integers to represent pixel values like they did in the N64 days. "32-bit" or "64-bit" means very little, as it refers only to integer and memory address size.
 