
IBM CPUs? "If IBM made such good CPUs, then why don't we see them dominating the CPU market?"


Blazing fire

Member
Joined
Sep 5, 2007

This is a continuation of this thread: xbox 360 cpu vs pc cpus. There are some questions I wish to follow up on, so I started this thread.

DavidJa said in this post

Maybe they think it's easier to make money from the software side than from hardware, only licensing their hardware R&D.
They also wouldn't have to invest billions in building and updating fabs.

It does make sense to me. The engineers at IBM who managed to create "ultra high performance, supercomputer chips. Chips that make Nehalem look like a Celeron." (Dapman02) should have gone to AMD or Intel by now. If they are that good at making CPUs, they have no reason to switch to software.
 
Basically, IBM's processors are so good because, as stated in the posts you linked, they are NOT x86-based CPUs. The problem is that current mainstream applications are coded for x86, and practically every programmer working on these applications is trained for x86-based programming, although many (especially open source) programs are compatible with other architectures as well.
Some very efficient architectures use their own programming languages, though, which are often very hard to learn.
The problem with introducing the mainstream market to other architectures is that many programs, and the Windows OS, will simply not run on them.

Compatibility will most certainly be possible, and in the long term a complete shift to a new architecture (or several) as well, but you would have to expect strong resistance from Intel, and most likely AMD too, since they are the only major competitors sharing licenses for the x86 market and would face extremely strong competition in other architecture markets.

That said, for the consumer a shift to a new architecture would have mostly positive consequences, apart from a likely very buggy period in the beginning.
 
"If IBM made such good CPUs, then why did Apple go through all the trouble of changing arch to Intel?"

Imagine how much work Apple had to do to go from PPC to x86, giving up on PPC even after investing big resources in the G5. Why would they go through all that trouble to get an inferior instruction set from an inferior CPU maker? I'm not buying the "IBM is so powerful and would crush Intel if they wanted to" line anymore.

Where are the benches that show IBM's superiority? There's plenty of Linux stuff for the PPC arch; it should be easy to cough up a few benches.
 
@Rinne: Since IBM is so good at making CPUs, they wouldn't have to worry about competition. They would beat Intel and AMD hollow!

@th3: That's true. No benchmarks whatsoever.
 
IBM makes CPUs that are very specialized for the application; consoles and HPC come to mind. IIRC the Cell processor does not use out-of-order execution, which means a big performance hit when doing multiple general-purpose processing tasks.

The x86 ISA is very good at dealing with a multitude of different types of applications, with new instructions improving execution latencies for certain workloads. I think many mainstream consumers would rather have this than pure data throughput.

For compute-bound specialized applications the Cell is excellent, but even then modern GPUs would destroy it in pure FLOPS.

As Rinne said, the other side of the story is programmability. The x86 architecture has been around for a long time, and many developers are used to the tried-and-true model of C++, Java, etc. OpenCL might change this for GPU, Cell, and ARM architectures by opening up a better programming model.
 
I just wish to clarify some things. Does HPC mean High Performance Computing, or Handheld PC?
What do IIRC and "out-of-order execution" mean?

Could somebody explain the various types of CPU, please? To understand why IBM loses out, I think I need to know something about this.

Thanks in advance! :D
 
HPC = High Performance Computing
IIRC = If I recall correctly

Out-of-order execution basically means the processor has an instruction "window" from which it chooses which instruction(s) to execute first (usually whichever have their operands ready). In-order execution means the processor can only execute instructions in the order in which they are fetched.
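As a rough illustration (a toy Python model, nothing like real hardware), in-order issue stalls behind the oldest not-ready instruction, while an out-of-order window can pick any ready one:

```python
# Toy model: each instruction is (name, ready_at_cycle). An in-order core
# must wait for the oldest instruction; an out-of-order core may issue any
# instruction in its window whose operands are ready.
def issue_order(instrs, out_of_order, window=4):
    issued, cycle = [], 0
    pending = list(instrs)
    while pending:
        win = pending[:window] if out_of_order else pending[:1]
        ready = [i for i in win if i[1] <= cycle]
        if ready:
            pick = ready[0]
            pending.remove(pick)
            issued.append(pick[0])
        else:
            cycle += 1          # stall until something becomes ready
    return issued

instrs = [("load", 3), ("add", 0), ("mul", 0)]   # load's data arrives late
print(issue_order(instrs, out_of_order=False))   # ['load', 'add', 'mul']
print(issue_order(instrs, out_of_order=True))    # ['add', 'mul', 'load']
```

The in-order core wastes three cycles stalled behind the slow load; the out-of-order core does useful work in the meantime.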

The only types of CPU I know of are RISC (reduced instruction set computing) and CISC (complex instruction set computing), but these are becoming more and more intertwined. Modern CISC architectures pack what amounts to multiple operations into one instruction and then break each instruction down into smaller micro-ops for execution. A CISC architecture puts less strain on the fetch phase, but the instructions have to go through a decode phase that breaks them into RISC-like micro-ops. There are trade-offs with both schemes.
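A toy decoder sketch of that cracking step (the mnemonics and micro-op format here are invented for illustration, not any real ISA): a CISC-style "add register into memory" instruction touches memory and does arithmetic in one instruction, and is internally split into load/ALU/store micro-ops.

```python
# Toy decoder: a CISC-style "add_mem" instruction (add a register into a
# memory location) is cracked into three RISC-like micro-ops that the
# core actually executes: load, ALU op, store.
def decode(instr):
    op, dst, src = instr
    if op == "add_mem":
        return [("load", "tmp", dst),        # tmp <- mem[dst]
                ("add", "tmp", "tmp", src),  # tmp <- tmp + src
                ("store", dst, "tmp")]       # mem[dst] <- tmp
    return [instr]                           # simple ops pass through as-is

print(decode(("add_mem", "0x1000", "eax")))
# [('load', 'tmp', '0x1000'), ('add', 'tmp', 'tmp', 'eax'), ('store', '0x1000', 'tmp')]
```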

Where IBM loses out, I think, is programmability. The hardware/architecture is there, with the FLOPS, I/O performance, interprocessor communication, cache coherence, and memory hierarchy, but the programming model is not.
 
@Rinne: Since IBM is so good at making CPUs, they wouldn't have to worry about competition. They would beat Intel and AMD hollow!

That doesn't really contradict what I said.

A list of different CPU architectures, each with its own explanation:

http://en.wikipedia.org/wiki/Notable_CPU_architectures

As Firestrider mentioned, each architecture has its own upsides and downsides.

Regarding x86 CPUs, you can say they are quite a good compromise: not as fast as other architectures, but very flexible and general purpose, while others, especially high-performance CPUs, are specialized and might execute some applications extremely slowly, if at all. They are often also designed for CPU clusters, and thus for extremely high parallelism, which is generally hard to code for.
 
1. Apple had already done the work for the x86 transition long before they left IBM. They'd been compiling OS X for x86 for a while, without optimization, as a sort of just-in-case scenario, which was smart. I believe there was an article in Macworld that said this, with an interview with someone.

2. No matter how good a CPU is, it won't be the best until people can use it. POWER6 is meant to be a high-end server CPU. It does lots of work per cycle at good performance per watt. But the biggest reason you can't buy a Dell laptop with one is that it will not run Windows.

Back in the day of Windows NT 4, MS compiled it for x86, PPC, MIPS, and Alpha. I actually ran Windows NT on a PPC and it worked! It's neat! But the problem was that every other Windows release was x86-only, so no devs made ANY programs for NT on the other arches. So even back then, when x86 was behind those other arches, it was obvious that x86 was the consumer winner.

Now, these days x86 still lags behind in performance in some areas. It has its uses, but it isn't even a full 64-bit design: current implementations only use 48 bits of the 64-bit virtual address space, and even that is an extension (x86-64) grafted onto x86 rather than a rework, so that it can run both 32-bit and 64-bit code.
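To make the 48-bit point concrete, here is a small Python sketch of the "canonical address" rule that current x86-64 implementations enforce (the bit widths are real; the helper function itself is just for illustration):

```python
# x86-64 currently implements 48-bit virtual addresses; the upper 16 bits
# of a 64-bit pointer must be a sign extension of bit 47 ("canonical
# form"), or the CPU raises a fault on access.
def is_canonical(addr: int, va_bits: int = 48) -> bool:
    """Check whether a 64-bit address is canonical for the given VA width."""
    top = addr >> (va_bits - 1)          # bit 47 and everything above it
    return top == 0 or top == (1 << (64 - va_bits + 1)) - 1

print(is_canonical(0x00007FFFFFFFFFFF))  # highest user-space address: True
print(is_canonical(0xFFFF800000000000))  # lowest kernel-space address: True
print(is_canonical(0x0000800000000000))  # inside the non-canonical hole: False
```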

So to answer your question, they don't compete with Intel or AMD because they're not x86. Unfortunately, we're stuck with 'em.
 
1. Apple had already done the work for the x86 transition long before they left IBM. They'd been compiling OS X for x86 for a while, without optimization, as a sort of just-in-case scenario, which was smart. I believe there was an article in Macworld that said this, with an interview with someone.

Steve Jobs mentioned it in his keynote address when he announced the transition.

But the biggest reason why you can't buy a Dell laptop with one.. is that it will not run windows.

If it is that much faster and better, why not use software emulation during the transition period? It can be quite fast if done right.
 
So even though Cell is the first to put a hybrid CPU/GPU-style design in one package with its PPE/SPE units, a discrete CPU from Intel/AMD is better for sequential/scalar task-based workloads, and a discrete GPU from ATI/Nvidia is better for parallel/vector data-based workloads.

I am still trying to learn the limitations and benefits of each route. But what I've gathered from AMD is that their GPUs cannot do heap memory allocation or general-purpose recursion (maybe a programmer can step in and tell me what this means). Many datatypes are supported on the GPU, such as integers, booleans, and double-precision floating point (though only at about 1/4 the speed of single precision). I would imagine the CPU is much better at conditionals and branch prediction, but nonetheless conditionals are supported on ATI GPUs (at least).
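A minimal Python sketch of the scalar-vs-vector split (the function names are my own, purely illustrative): the first function has the flat, data-parallel shape GPUs handle well, while the second is recursive with data-dependent control flow over heap-allocated nodes, which is exactly the shape that stays on the CPU:

```python
# GPU-friendly workload: a flat, data-parallel "kernel" -- the same
# side-effect-free arithmetic applied independently to every element.
def saxpy_kernel(a, x, y):
    # Each index could run on its own GPU thread; no recursion, no
    # dynamic allocation inside the kernel.
    return [a * xi + yi for xi, yi in zip(x, y)]

# CPU-style workload: general-purpose recursion over heap-allocated tree
# nodes, with data-dependent branching -- none of which maps onto GPU threads.
def tree_sum(node):
    if node is None:
        return 0
    value, left, right = node
    return value + tree_sum(left) + tree_sum(right)

print(saxpy_kernel(2.0, [1.0, 2.0], [10.0, 20.0]))  # [12.0, 24.0]
tree = (1, (2, None, None), (3, None, None))
print(tree_sum(tree))  # 6
```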
 
If it is that much faster and better, why not use software emulation during the transition time? It can be done quite fast if done right.

Because it's not as simple as that. It is complicated; heck, companies like Transmeta specialized in it, but it was never perfect and it was never easy.

It's the reason it's not prolific at all. OS X has it, but only one way (x86 binaries won't run on PPC Macs).
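For a sense of what software emulation involves, here is a toy Python interpreter for a made-up three-instruction guest ISA (the ISA and mnemonics are invented for illustration). Every guest instruction costs many host operations (fetch, decode, dispatch, execute), which is one reason pure interpretation is rarely fast:

```python
# Toy interpreter for a made-up 3-instruction "guest" ISA.
def emulate(program, regs=None):
    regs = regs or {"r0": 0, "r1": 0}
    pc = 0
    while pc < len(program):
        op, *args = program[pc]          # fetch + decode
        if op == "li":                   # load immediate into a register
            regs[args[0]] = args[1]
        elif op == "add":                # dst <- src1 + src2
            regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "halt":
            break
        pc += 1                          # advance to the next guest instruction
    return regs

result = emulate([("li", "r0", 40), ("li", "r1", 2),
                  ("add", "r0", "r0", "r1"), ("halt",)])
print(result["r0"])  # 42
```

Real emulators (Transmeta's code morphing, Apple's Rosetta) avoid this per-instruction overhead by translating blocks of guest code to host code, but the translation itself is the hard part.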
 
Slightly off topic, but I found more limitations of the GPU:

-Compared, for example, to traditional floating-point accelerators such as the 64-bit floating-point (FP64) CSX600 math processor from ClearSpeed used in today's supercomputers, current and older GPUs from ATI (and NVIDIA) run on 32-bit processors with only single-precision data capabilities.
-Unlike supercomputers with native 64-bit double-precision capability, only the second generation of stream processors (the AMD FireStream 9170) is able to handle double-precision data. This is a result of FP32 filtering support being part of the requirements of the DirectX 10.1 API. However, double-precision operations (frequently used in supercomputer benchmarks) can in theory achieve only half the performance of single-precision operations, and the actual figures may be lower, as the GPUs do not have full double-precision units implemented.
-Recursive functions are not supported.
-Only bilinear texture filtering is supported; mipmapped textures and anisotropic filtering are not supported at this time.
-There are various deviations from the IEEE 754 standard: denormal numbers and signaling NaNs are not supported, the rounding mode cannot be changed, and the precision of division/square root is slightly lower than single precision.
-Functions cannot have a variable number of arguments.
-Conversion of floating-point numbers to integers is done differently on GPUs than on x86 CPUs and is not fully IEEE 754 compliant.
-"Global synchronization" on the GPU is not very efficient, which forces the GPU to split the kernel and do synchronization on the CPU. Given the variable number of multiprocessors and other factors, there may not be a perfect solution to this problem.
-The bus bandwidth and latency between the CPU and the GPU may become a bottleneck, which may be alleviated in the future by interconnects with higher bandwidth.
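The denormal point in that list can be illustrated on any IEEE-754-compliant CPU. The `flush_to_zero` helper below is my own model of what hardware without denormal support effectively does, not a real API:

```python
import sys

# IEEE 754 defines "denormal" (subnormal) numbers between 0 and the
# smallest normal double, so results underflow gradually. Hardware
# without denormal support flushes them straight to zero instead.
smallest_normal = sys.float_info.min     # smallest normal double, ~2.2e-308
subnormal = smallest_normal / 4          # representable only as a denormal

def flush_to_zero(x):
    """Model a flush-to-zero mode: any denormal result becomes 0.0."""
    return 0.0 if 0 < abs(x) < sys.float_info.min else x

print(subnormal > 0)                 # True on an IEEE-compliant CPU
print(flush_to_zero(subnormal))      # 0.0 -- what FTZ hardware would return
print(flush_to_zero(1.0))            # 1.0 -- normal values pass through
```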
 

Excellent point, Rinne. You guys are assuming that the things you do with your desktop machine are the same things everyone does with all computers. IBM focuses on narrow segments of the market, which allows them to tailor their processors to specific duties.

Think of it this way: an SUV will let you commute to work, haul stuff to the dump, carry the kids to soccer practice, go off-road, etc. A Ferrari will go very fast around a road course and look fantastic doing it. Each has a purpose and each is well suited to what it's designed to do.
 
Where are the benches that show IBM's superiority? There's plenty of Linux stuff for the PPC arch; it should be easy to cough up a few benches.
PPC, like other RISC targets of Linux, got dropped from what I found a few years back. I was searching for RISC-based computers because I wanted to mess with something different. I found an IBM laptop with a 400 MHz RISC CPU in it, but I never bought it, simply because the version of Linux that would run on it was so old.

There is now some hope for RISC making a comeback; the problem is that Linux is the only OS that will run on it. It seems the Belco Alpha-400 is the first RISC-based laptop/computer for the consumer market in years. I'd like to find a retailer and get one to play with; I mean, at $149, why not?

@th3. That's true. No benchmarks whatsoever.
Yeah, finding people running Linux on PPC is going to be hard. As hard as finding a working prototype of the 3dfx Voodoo 5 6000.

On another note for you, BF: Intel's Atom CPU is an in-order CPU. Just FYI on the Atom, since I don't know if you ever read a few of my Atom posts. If you have and posted, then I forgot, LOL.
 
Evilsizer said:
There is now some hope for RISC making a comeback; the problem is that Linux is the only OS that will run on it.

No way! Many if not all smartphones use a RISC-based processor, and there are many OSes that can run on them: Symbian OS, iPhone OS, Windows Mobile, Android, Palm OS, RIM BlackBerry OS, etc.
 
No way! Many if not all smartphones use a RISC-based processor, and there are many OSes that can run on them: Symbian OS, iPhone OS, Windows Mobile, Android, Palm OS, RIM BlackBerry OS, etc.

Hmm, my memory slips; I thought smartphones used a non-RISC CPU. I remember the big deal about Transmeta CPUs being used back in the day. LOL, as you can tell I'm not a smartphone person... for me a phone NEEDS to be a phone, nothing more, LOL.

Hmm, Windows Mobile? Now where can I get a copy to buy and use on that ultralight! :eek:
 
Yeah, most use an ARM (Advanced RISC Machine) processor. I think Intel is getting into the system-on-a-chip market, though. I don't think Windows Mobile can be sold as a standalone product.
 