From what I've read (I could be wrong), current AMD and Intel x86 processors are part of the "post-RISC" generation. They break instructions down into RISC-like micro-ops/macro-ops to improve performance. A programmer could undoubtedly explain it much better than I can.
IMHO the whole "RISC is better" thing was mainly an Apple marketing strategy from the 90s to tout their Macs as equal or superior in performance to an x86 machine running at double the clock speed. When I worked at CompUSA, the guy who hung around the Mac corner of the store used to tell people to buy a $1300 iMac G3 233 MHz instead of a $2500-$3000 PII 450 MHz, claiming the iMac was faster because of its RISC processor.