I'm under the impression that for years, Intel chips (and maybe AMD as well, I'm not sure) haven't been true CISC chips, but rather more RISC-based with some sort of onboard converter. Is this done purely as a power/money-saving move, or is there something more to it? Also, would they gain anything by manufacturing a RISC chip and competing with ARM on that front?
This question took longer than expected, but I learned a lot more than I would have imagined. I could write a lot more on this subject now, but here is a quick overview.
RISC & CISC
Most people are familiar with the basic stages of a CPU pipeline: fetch, decode, execute, memory. An instruction must first be fetched and decoded before the CPU can execute it and write the result back to memory. CPUs perform these tasks according to a pre-defined architecture. How the CPU computes, moves, and works with instructions is defined in a way that aims for the highest degree of efficiency.
CPUs implement an instruction set that programmers use to tell the CPU what to do. For instance, there is an instruction that tells the CPU to add two numbers together and move the result to memory. When a CPU is designed, its instruction set is designed along with it. The efficiency and design of these instructions is part of what defines a CPU.
In the x86 CPU world, two instruction-set design philosophies come up constantly: RISC and CISC. RISC (Reduced Instruction Set Computing) and CISC (Complex Instruction Set Computing) were among the earliest approaches to instruction-set design, and they took opposite routes to increasing instruction throughput. RISC prioritizes minimizing the number of cycles per instruction, while CISC focuses on minimizing the number of instructions in a program. This all translates back to throughput: a program has a defined amount of work that must be done to complete, and how that work is divided into instructions and cycles determines how quickly it runs. A RISC instruction set uses simple, uniform instructions that each complete quickly, whereas CISC packs more work into each instruction, so a program needs fewer instructions even though each one may take more cycles than a RISC instruction to complete. Below I will go over each architecture and summarize how it works with a program.
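This tradeoff is often summarized by the "iron law" of processor performance: execution time = instruction count x cycles per instruction (CPI) x clock period. Here is a small sketch of that arithmetic; the instruction counts and CPI values are made-up illustrations, not measurements of any real chip:

```python
# Iron law of processor performance:
#   time = instruction_count * cycles_per_instruction / clock_rate
# All numbers below are illustrative assumptions, not real measurements.

def execution_time(instruction_count, cpi, clock_hz):
    """Seconds to run a program under the iron law."""
    return instruction_count * cpi / clock_hz

# Hypothetical RISC-style design: more instructions, ~1 cycle each.
risc_time = execution_time(instruction_count=1_500_000, cpi=1.0,
                           clock_hz=1_000_000_000)

# Hypothetical CISC-style design: fewer instructions, more cycles each.
cisc_time = execution_time(instruction_count=1_000_000, cpi=1.6,
                           clock_hz=1_000_000_000)

print(f"RISC-ish: {risc_time * 1e3:.3f} ms")  # RISC-ish: 1.500 ms
print(f"CISC-ish: {cisc_time * 1e3:.3f} ms")  # CISC-ish: 1.600 ms
```

With these particular made-up numbers the two designs come out nearly even, which is the point: neither philosophy wins by definition; it depends on how the instruction count, CPI, and clock rate trade against each other.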
RISC Architecture:
RISC pretty much defines what a CPU is to most people. Like all CPUs, a RISC-based CPU has four stages: fetch, decode, execute, memory. The number of clock cycles it takes for an instruction to pass through each stage determines instruction throughput. Since RISC focuses on speed, its highest degree of efficiency is one clock cycle per stage. The downside to this architecture is the fixed, limited size of each instruction: if an operation is too complex to fit into a single instruction, it gets split into several simpler ones. This can cause a cascading effect that increases the overhead of the program.
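A concrete case of that splitting: a fixed 32-bit RISC instruction has no room for a full 32-bit constant next to its opcode, so loading one takes two instructions, each carrying a 16-bit half (ARM's MOVW/MOVT and MIPS's LUI/ORI work roughly this way). Here is a toy model of the idea; the instruction names are illustrative, not any real ISA's mnemonics:

```python
# A fixed-width RISC instruction can't carry a full 32-bit immediate,
# so a 32-bit constant load is split into two instructions that each
# carry a 16-bit half. Instruction names here are illustrative only.

def split_load(value):
    """Return the two-instruction sequence that loads a 32-bit constant."""
    low = value & 0xFFFF
    high = (value >> 16) & 0xFFFF
    return [("MOV_LOW", low), ("MOV_HIGH", high)]

def run(sequence):
    """Combine the halves the way the hardware would."""
    reg = 0
    for op, imm in sequence:
        if op == "MOV_LOW":
            reg = (reg & 0xFFFF0000) | imm
        elif op == "MOV_HIGH":
            reg = (reg & 0x0000FFFF) | (imm << 16)
    return reg

seq = split_load(0xDEADBEEF)
print(len(seq))       # 2  -- two instructions where CISC would use one
print(hex(run(seq)))  # 0xdeadbeef
```

Each of those two instructions still takes only a cycle per stage; the cost shows up as a longer program rather than a slower instruction.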
CISC Architecture:
In contrast to RISC, CISC uses variable-size instructions to increase the amount of work each instruction performs. This ultimately reduces the number of instructions a program needs. The downside is that it sacrifices cycles per instruction: unlike RISC, a complex instruction will often take several clock cycles to execute. Taken to its extreme, the CISC ideal would be executing an entire program as a single instruction. CISC-based CPUs have the same stages as a RISC CPU, but the number of cycles an instruction spends in each stage varies.
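To make the contrast concrete: adding a register to a value in memory is one instruction on a CISC machine (x86 has memory-destination `add`), but a load/add/store sequence on a load-store RISC machine. Here is a toy cycle count; the per-instruction costs are illustrative assumptions, not real figures for any chip:

```python
# Same operation -- add a register into a memory location -- in two
# styles. Cycle costs are illustrative assumptions, not real numbers.

# RISC (load-store machine): three simple instructions, one cycle each.
risc_program = [("LOAD", 1), ("ADD", 1), ("STORE", 1)]

# CISC: one memory-destination instruction that internally still has
# to load, add, and store, so it spends several cycles on that one
# instruction.
cisc_program = [("ADD_MEM", 3)]

risc_cycles = sum(cycles for _, cycles in risc_program)
cisc_cycles = sum(cycles for _, cycles in cisc_program)

print(len(risc_program), risc_cycles)  # 3 instructions, 3 cycles
print(len(cisc_program), cisc_cycles)  # 1 instruction, 3 cycles
```

The total work is the same; the two styles just slice it differently: RISC into more single-cycle instructions, CISC into fewer multi-cycle ones.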
AMD, Intel: CISC, or RISC?
I have been working on this portion off and on for a couple of months now. I was able to meet with Dr. Yale Patt, a UT professor who has contributed work toward the decoding portion of the x86 architecture. He has contributed a lot to x86 microarchitecture, and it was a pleasure to sit down and pick his brain.
All in all, it's hard to break down the information I gathered and make it applicable to the question I am trying to answer: do AMD and Intel use RISC or CISC? Generally speaking, neither company builds a pure RISC or pure CISC chip, and neither does ARM. The pure RISC-versus-CISC split is pretty much dead: ARM, AMD, and Intel each use their own ISA and microarchitecture that borrow ideas from both camps, and each company comes up with its own approach. In particular, modern Intel and AMD front ends decode the variable-length x86 (CISC) instructions into simpler, fixed-format micro-operations that the execution core then handles in a RISC-like fashion. So, in the end, the answer is yes, no, and kinda.
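The "kinda" can be made concrete. Since roughly the Pentium Pro (and AMD's K5), x86 front ends have decoded the variable-length CISC instructions into fixed-format micro-operations, and the out-of-order core executes those micro-ops in a RISC-like way. Here is a toy sketch of that decode step; the micro-op names and the tiny decode table are illustrative, not Intel's or AMD's actual encodings:

```python
# Toy model of an x86-style front end: complex CISC instructions are
# decoded into simple RISC-like micro-ops before execution.
# Micro-op names and mappings are illustrative, not real encodings.

DECODE_TABLE = {
    # A memory-destination add becomes load + add + store micro-ops.
    "ADD [mem], reg": ["uop_load  tmp, [mem]",
                       "uop_add   tmp, tmp, reg",
                       "uop_store [mem], tmp"],
    # A simple register-register add is already one micro-op.
    "ADD reg1, reg2": ["uop_add reg1, reg1, reg2"],
}

def decode(instruction):
    """Translate one CISC instruction into its micro-op sequence."""
    return DECODE_TABLE[instruction]

print(decode("ADD [mem], reg"))  # expands to three micro-ops
print(decode("ADD reg1, reg2"))  # stays a single micro-op
```

So the programmer-visible ISA stays CISC for compatibility, while the hardware that actually executes instructions looks much more like the RISC ideal: simple, uniform operations, ideally one per cycle per execution port.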
If anyone is curious about specific parts of each company's ISA/architecture, I can dive more into it. For now, this is the answer I'm going to give: there are several parts of the CPU that could be considered more RISC than CISC, and vice versa.