
AMD vs Intel

Small update to this thread. There are a couple of open questions being worked on, specifically the RISC vs CISC instruction set question asked by freakdiablo. I hope to be sitting down with Dr. Yale Patt to pick his brain about ISAs and where current micro-architecture is heading.
 
I'm under the impression that for years, Intel chips (and maybe AMD as well, I'm not sure) haven't been true CISC chips, but rather more RISC based with some sort of onboard converter. Is this done purely as a power/money saving move, or is there something more to it? Also, would they gain anything by manufacturing a RISC chip and competing with ARM on that front?

This question took longer than expected, but I learned a lot more than I would have imagined. I could write a lot more on this subject now, but here is a quick overview.

RISC & CISC
Most people are familiar with the basic stages of a CPU: fetch, decode, execute, memory. An instruction must first be fetched and decoded before the CPU can execute it and write the result back to memory. CPUs perform these tasks according to a pre-defined architecture: how the CPU computes, moves, and works with instructions is defined in a way that provides the highest degree of efficiency.
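To make those stages concrete, here is a minimal sketch in C of a toy fetch/decode/execute loop. The one-byte instruction format and the opcodes are made up purely for illustration; they don't correspond to any real ISA.

Code:
#include <stdint.h>
#include <stdio.h>

/* Toy machine: 4-bit opcode plus two 2-bit register fields per byte. */
enum { OP_ADD = 0x1, OP_HALT = 0xF };

int main(void) {
    uint8_t memory[] = { 0x10 | (0 << 2) | 1, 0xF0 }; /* ADD r0,r1 ; HALT */
    uint32_t regs[4] = { 2, 3, 0, 0 };
    unsigned pc = 0;

    for (;;) {
        uint8_t inst = memory[pc++];        /* fetch  */
        uint8_t op = inst >> 4;             /* decode */
        uint8_t rd = (inst >> 2) & 3;
        uint8_t rs = inst & 3;
        if (op == OP_HALT) break;
        if (op == OP_ADD)
            regs[rd] += regs[rs];           /* execute */
        /* a real pipeline would also have a memory/write-back stage */
    }
    printf("r0 = %u\n", regs[0]);           /* prints 5 */
    return 0;
}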

CPUs expose an instruction set that programmers use to tell the CPU what to do. For instance, there is an instruction that tells the CPU to add two numbers together and move the result to memory. When a CPU is designed, its instruction set is designed along with it. The efficiency and design of these instructions is part of what defines a CPU.

In the x86 CPU world, two design philosophies for instruction sets come up constantly: RISC and CISC. RISC (Reduced Instruction Set Computing) and CISC (Complex Instruction Set Computing) were also among the first instruction set designs for CPUs. The two take opposite approaches to increasing instruction throughput. RISC prioritizes minimizing the number of cycles per instruction, while CISC focuses on minimizing the number of instructions in a program. Both translate back to throughput: a program has a defined number of instructions that must be processed to complete, and how quickly it runs through those instructions determines the speed of the algorithm. RISC uses small, simple instructions that each complete quickly, even if more of them are needed. CISC, by contrast, packs more work into each instruction so that fewer are needed, even though each one may take more cycles than a RISC instruction to complete. Below I will go over each approach and summarize how it works with a program.
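This tradeoff is often summarized by the classic "iron law" of performance: execution time = (instructions per program) x (cycles per instruction) x (seconds per cycle). Here is a quick sketch of that math in C; the instruction counts and CPI values are hypothetical, not measurements of any real chip.

Code:
#include <stdio.h>

int main(void) {
    /* Hypothetical numbers for the same program on two styles of CPU.
       Iron law: time = instruction_count * CPI * clock_period         */
    double clock_period = 1.0 / 3.0e9;          /* both chips at 3 GHz  */

    double risc_insts = 1.5e9, risc_cpi = 1.0;  /* more, simpler insts  */
    double cisc_insts = 1.0e9, cisc_cpi = 1.6;  /* fewer, slower insts  */

    printf("RISC: %.3f s\n", risc_insts * risc_cpi * clock_period);
    printf("CISC: %.3f s\n", cisc_insts * cisc_cpi * clock_period);
    return 0;
}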

RISC Architecture:
RISC pretty much defines what a CPU is to most people. Like all CPUs, a RISC-based CPU has four stages: fetch, decode, execute, memory. The number of clock cycles it takes an instruction to complete each stage can be used to determine instruction throughput. Since RISC focuses on speed, its highest degree of efficiency is one clock cycle per stage. The downside of this approach is program size: an operation too complex for a single instruction has to be split into several smaller instructions, which increases the instruction count and the overhead of the program.
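For example, an add whose operands live in memory has to be broken into separate load, add, and store instructions on a load/store (RISC-style) machine. The assembly in the comments is simplified, ARM-like pseudo-asm rather than exact syntax.

Code:
#include <stdio.h>

int main(void) {
    int a = 2, b = 3, result;

    /* On a load/store (RISC-style) machine, one C statement becomes
       several simple, one-cycle instructions, roughly:
           LDR r1, [a]        ; load a from memory
           LDR r2, [b]        ; load b from memory
           ADD r0, r1, r2     ; register-to-register add
           STR r0, [result]   ; store the result back to memory        */
    result = a + b;

    printf("%d\n", result);   /* prints 5 */
    return 0;
}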

CISC Architecture:
In contrast to RISC, CISC uses variable-size instructions to increase the amount of work done per instruction. In this way, CISC ultimately reduces the number of instructions a program has to execute. The downside is that it sacrifices clock cycles per instruction: unlike RISC, an instruction will often take several clock cycles to execute. The theoretical ideal for a CISC-based CPU would be executing an entire program as a single instruction. CISC-based CPUs have the same stages as a RISC CPU, but the number of cycles an instruction spends in each stage varies.
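The same add from the RISC sketch above can collapse into far fewer instructions on a CISC-style machine, because operands can come straight from memory. Again, the comment shows simplified, x86-like pseudo-asm, not exact syntax.

Code:
#include <stdio.h>

int main(void) {
    int a = 2, result = 3;

    /* A CISC-style ISA allows memory operands, so the load, add, and
       write-back can collapse into fewer, multi-cycle instructions,
       roughly:
           mov eax, [a]       ; load a
           add [result], eax  ; read memory, add, write memory back    */
    result += a;

    printf("%d\n", result);   /* prints 5 */
    return 0;
}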

AMD, Intel: CISC, or RISC?
I have been working on this portion off and on for a couple of months now. I was able to meet with Dr. Yale Patt, a UT professor who has done significant work on the decode side of the x86 architecture. He has contributed a lot to x86 over the years, and it was a pleasure to sit down and pick his brain.

All in all, it's hard to break down the information I gathered and make it applicable to the question I'm trying to answer: do AMD and Intel use RISC or CISC? Generally speaking, neither company builds a pure RISC or CISC chip, and neither does ARM. The strict RISC-vs-CISC divide is pretty much dead. ARM, AMD, and Intel each use their own ISA that draws on both RISC and CISC ideas, but each company arrives at its own approach and its own answer. In the end, the answer is yes, no, and kinda.
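One way to picture the "onboard converter" from freakdiablo's question: the front end of a modern x86 chip decodes each complex instruction into simpler, RISC-like internal micro-ops before execution. The sketch below is purely conceptual; the micro-op names and format are invented for illustration and are not actual Intel or AMD internals.

Code:
#include <stdio.h>

int main(void) {
    /* Conceptual sketch of micro-op "cracking": a CISC-style x86
       instruction with a memory operand gets decoded into simpler,
       RISC-like internal operations. Names invented for illustration. */
    const char *x86_inst = "add [result], eax";
    const char *uops[] = {
        "load  tmp <- [result]",    /* memory read           */
        "add   tmp <- tmp + eax",   /* simple ALU operation  */
        "store [result] <- tmp",    /* memory write-back     */
    };

    printf("decode \"%s\" into:\n", x86_inst);
    for (size_t i = 0; i < sizeof uops / sizeof uops[0]; i++)
        printf("  uop %zu: %s\n", i, uops[i]);
    return 0;
}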

If anyone is curious about specific parts of each company's ISA/architecture, I can dive deeper into it. For now, this is the answer I'm going to give. There are several parts of these CPUs that could be considered more RISC than CISC and vice versa.
 
I have decided.
No matter what the hardware is, or capabilities........Intel vs AMD will continue on.
It's like determinism vs free will.

That said.......I take free will....aka gimmie a Quantum chip, the REAL "yes, no, and kinda" *yes, I know.........this is decades and decades away from being available to consumers.......if not centuries away lol*
 
Quote:
All in all, it's hard to break down the information I gathered and make it applicable to the question I'm trying to answer: do AMD and Intel use RISC or CISC? Generally speaking, neither company builds a pure RISC or CISC chip, and neither does ARM. The strict RISC-vs-CISC divide is pretty much dead. ARM, AMD, and Intel each use their own ISA that draws on both RISC and CISC ideas, but each company arrives at its own approach and its own answer. In the end, the answer is yes, no, and kinda.
What you have said is correct, as ISA designs evolve to balance speed, accuracy, and reliability with energy efficiency. But there do seem to be some fundamental RISC design elements in ARM versus fundamental CISC design elements in x86. Neither is pure, though, mixing in RISC and CISC techniques as appropriate. ISA evolution focuses on enabling specialization and is not concerned with adhering strictly to either the RISC or CISC approach.

Code:
            ARM/RISC-like characteristics         x86/CISC-like characteristics
Format
            Fixed-length instructions             Variable-length instructions
            Relatively simple encoding            Common instructions shorter/simpler
            ARM: 4B; THUMB: 2B (optional)         Special instructions longer/complex
                                                  x86: from 1B to 15B long
Operations
            Simple, single-function operations    Complex, multi-cycle instructions
            Single cycle                          Transcendentals
                                                  Encryption
                                                  String manipulation
Operands
            Operands: registers, immediates       Operands: memory, registers, immediates
            Few addressing modes                  Many addressing modes
            ARM: 16 general-purpose registers     x86: 8 32-bit GPRs & 6 16-bit segment registers
 
Probably the best way to show the difference between the different ISAs. Thanks for that DaveB. I'll be adding that to my giant list of notes and crib sheets :)
 