Do x86/x64 chips still use microprogramming?

If I understand these two articles correctly, the Intel architecture has, at its lowest level, transitioned to using RISC instructions instead of the traditional CISC instruction set that Intel is known for.

If that's the case, then are x86/x64 chips still microprogrammed, or do they use hardwired control like traditional RISC chips? My guess is that they're still microprogrammed, but I wanted to verify.


Microcode has been around for a long time, if that's what you're referring to. So I don't know what the HardwareSecrets article is on about, unless Intel is now building RISC processors on top of CISC processors.

Even the HardwareSecrets article calls them Micro-Instructions. Potato, potahto.

On modern x86 processors, most instructions execute without microcode (*), but some complex or infrequently executed ones do use microcode.

(*) Not to be confused with micro-ops -- in x86 out-of-order processors, x86 instructions are typically decoded into one or more micro-ops, which are then queued for execution (sans microcode!) in the out-of-order execution pipeline.
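To make the decode step concrete, here is a toy sketch (my own illustration, not Intel's actual decoder, which is hardware and vastly more complex) of how a CISC-style read-modify-write instruction might split into load/ALU/store micro-ops while a register-only instruction maps to a single micro-op:

```python
# Toy model of x86 instruction -> micro-op decoding.
# Illustrative only: instruction and micro-op formats are invented here.

def decode(instruction):
    """Split a simplified CISC instruction into RISC-like micro-ops."""
    op, dst, src = instruction
    if op == "add" and dst.startswith("["):   # memory destination (read-modify-write)
        addr = dst.strip("[]")
        return [
            ("load", "tmp", addr),    # micro-op 1: read memory operand
            ("add", "tmp", src),      # micro-op 2: ALU operation
            ("store", addr, "tmp"),   # micro-op 3: write result back
        ]
    return [instruction]              # simple register-only op maps 1:1

print(decode(("add", "[rbx]", "rax")))  # three micro-ops
print(decode(("add", "rcx", "rax")))    # one micro-op
```

The point of the sketch is only that the decoder, not a microcode ROM, emits these micro-ops for common instructions; the microcode sequencer is reserved for the genuinely complex cases.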

It is also interesting to note that modern x86 processors have a facility to patch/update microcode in order to fix errata in the field.

Just found the answer. Reference: "Computer Systems Organization" by Andrew Tanenbaum, pages 54 to 59.

Intel chips are CISC based, and all CISC-based chips have an interpreter (microcode) to break the complex instructions into small steps. Early on, all chips contained a microprogram; the term CISC didn't even exist until the RISC concept was introduced by David Patterson and Carlo Sequin in 1980. RISC stands for Reduced Instruction Set Computer. Nowadays the size of the instruction set doesn't matter; what matters in a RISC design is the simplicity of the instructions, but the name "reduced" stuck. RISC design is about issuing many simple instructions quickly: how long an instruction took mattered less than how many could be started per second. Also, the advantage CISC designs had of using fast CPU ROM over slower main memory is gone now that main memory is just as fast.

RISC is definitely better than CISC performance-wise, so why didn't Intel move to RISC? For two reasons. First, there is the issue of backward compatibility and the billions of dollars companies have invested in software for the Intel line. Second, Intel managed to use the RISC idea inside its CISC chips: starting with the 486, Intel CPUs contain a RISC core that executes the simplest and most common instructions in a single data-path cycle, while interpreting the more complicated instructions in the usual CISC way. I guess Intel moved to this hybrid approach to keep its standing in the market in line with technology advances, but I would still consider Intel chips to be CISC based.

Current x86 CPUs still use microcode, at least for some instructions, because the x86 instruction set is very complex relative to typical RISC processors.

Internally, the complex instructions are broken into simple RISC-like instructions which are then processed by a sophisticated RISC-like core. The RISC-like instructions are sometimes re-ordered or executed in parallel.

Typical examples of microcoded instructions are division and multiplication, and this is true for both CISC and RISC. It is simply not worth implementing division fully in hardware considering how (relatively) seldom it is used. Multiplication is much simpler to implement, yet it is also microcoded, though of course not to the same degree. According to the document "Instruction Latencies and Throughput for ... x86 Processors", the latencies of mul and div on the K10 processor are 5 and 77 cycles respectively, a ratio of 15.4x. For Intel SBR(?) the corresponding values are 4 and 92, or 23x. Their respective throughput gives additional insight into their relative complexity: on the K10, a multiplication can be sustained every other clock cycle (5/2 = 2.5 in flight simultaneously), but only one division every 77th clock cycle (the same as the division latency).
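The ratios quoted above are just the division latency over the multiplication latency, and the in-flight count is latency divided by issue interval. A quick arithmetic check (using only the numbers cited above):

```python
# Latencies (in cycles) quoted from the cited instruction-latency document.
k10_mul, k10_div = 5, 77
sbr_mul, sbr_div = 4, 92

print(k10_div / k10_mul)  # div/mul latency ratio on K10: 15.4
print(sbr_div / sbr_mul)  # div/mul latency ratio on SBR: 23.0

# With one mul issued every 2 cycles and a 5-cycle latency,
# latency / issue_interval = multiplications in flight at once.
print(k10_mul / 2)        # 2.5
```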

Other examples are sh?d (shift double: shld/shrd) and bs? (bit scan: bsf/bsr).
