It may be useful to graphically represent the different Intel x86 microarchitectures, how they are related to each other, and the processor series and models that implement each of them. I’ve made such a graph and I’m sharing it here in case anyone else finds it useful as well. I’ve used the PDF format instead of an image format so that the graph remains searchable. I’ll try to keep it up-to-date. You can report errors, give suggestions, or ask for clarifications by posting a comment below.
The most recent version: IntelMap4.2.pdf.
One of the first things that anyone learns about hardware performance monitoring on modern Intel processors is that there are three fixed-function counters and four general-purpose counters per logical core. As long as there are enough counters for all the events that need to be measured simultaneously, each event can be assigned its own counter. However, if there are more events than counters, multiplexing occurs, where different events are measured in different time intervals. In the perf stat tool, if you see a column of percentages on the right-hand side of the output, it means that multiplexing has been used to measure the given events. But have you ever looked at the output and said, “WTF, why is there multiplexing?”
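When an event is multiplexed, perf extrapolates its final count from the fraction of time the event was actually scheduled on a hardware counter. Here is a minimal sketch of that scaling (my own illustration of the idea, not perf's actual source code; the function name and units are assumptions):

```python
def scaled_count(raw_count, time_enabled_ns, time_running_ns):
    """Extrapolate a multiplexed event count the way perf stat does:
    scale the raw count by the fraction of time the event actually
    occupied a hardware counter (time_enabled / time_running)."""
    if time_running_ns == 0:
        return 0  # the event never got a counter; no estimate is possible
    return round(raw_count * time_enabled_ns / time_running_ns)

# An event that was scheduled for only 25% of the run (shown as
# "25.0%" in the rightmost column of perf stat output) has its raw
# count scaled up by 4x.
print(scaled_count(1_000_000, 400_000_000, 100_000_000))  # -> 4000000
```

The percentage column in perf stat is exactly this time_running/time_enabled ratio, so a low percentage means the reported count is mostly extrapolation rather than measurement.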
I discussed in a previous article the exact meaning of the 0xD1 family of events and the ALL_LOADS event on Ivy Bridge, Haswell, Skylake, Kaby Lake, and Coffee Lake. The 0xD1 events include the data hit and miss events at each level of the cache hierarchy (except the L4 cache, which is available on a few processors). There is still a LOT more to say about how to correctly count cache hit and miss events. The purpose of this article is to extend the description of the events to cover the cases where a cache line is accessed by more than one physical core. This occurs when multiple threads from the same application or different applications access the same cache line from different physical cores. This can also occur when a thread running on one physical core accesses a cache line and then gets migrated to another physical core and accesses the same cache line. This article can also be useful for those who want to learn the basics of cache coherence on modern Intel processors.
It is generally important to analyze the cache access behavior of an application to determine whether some performance-critical pieces of code poorly utilize the cache hierarchy. Ivy Bridge and later microarchitectures offer a fairly rich set of performance monitoring events to count various cache-related events and estimate their impact on the overall execution time of the application. On Ivy Bridge, Haswell, and Broadwell, these events include the following: Continue reading
The SFENCE instruction was first introduced in the Intel Pentium III (1999), AMD Athlon XP (2001), and AMD Morgan (2001). On the early AMD processors, it was part of the AMD 3DNow! Extensions instruction set. Since then, any processor that supports SSE (as indicated by the corresponding CPUID bit) also supports SFENCE. That is, there isn’t a dedicated CPUID bit for SFENCE.
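Since SSE support implies SFENCE support, a program can check for SFENCE by testing the SSE feature flag. A minimal sketch, assuming Linux, where the CPUID feature flags are exposed through /proc/cpuinfo (the function name and path parameter are my own):

```python
def cpu_supports_sfence(cpuinfo_path="/proc/cpuinfo"):
    """Check for SFENCE support on Linux by looking for the 'sse' flag
    in /proc/cpuinfo. There is no dedicated SFENCE CPUID bit: a
    processor that reports SSE also supports SFENCE."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    # The flags line looks like "flags : fpu vme ... sse ..."
                    return "sse" in line.split(":", 1)[1].split()
    except OSError:
        pass  # not Linux, or /proc is unavailable
    return False
```

A native check would instead execute CPUID leaf 1 and test bit 25 of EDX, which is the SSE feature bit.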
Note: SFENCE is discussed in another blog post. This post is about LFENCE.
The x86 ISA currently offers three “fence” instructions: MFENCE, SFENCE, and LFENCE. Sometimes they are described as “memory fence” instructions. In some other architectures and in the literature on memory ordering models, terms such as memory fences, store fences, and load fences are used. The terms “memory fence” and “load fence” have not been used in the Intel Manual Volume 3, but they have appeared in the Intel Manual Volume 2 and in the AMD manuals a couple of times. In this article, I’ll focus on “load fences,” referring throughout to the latest Intel and AMD manuals available at the time of writing.
The fact that the term “load fence” has been used in different ISAs, textbooks, and research papers has resulted in a critical misunderstanding of the x86 LFENCE instruction and confusion regarding what it does and how to use it. Continue reading
Most compilers convert the input source code into one or more intermediate representations (IRs) to make it easier and faster to analyze and optimize the code. Static single assignment (SSA) is a property of IRs that not only simplifies the algorithms that analyze the code but also improves their results, leading to more effective and efficient optimizations. The definition of SSA according to Wikipedia is currently as follows: Continue reading
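The core of the SSA property, that every variable is assigned exactly once, can be illustrated with a toy renaming pass. This is my own sketch and it covers only straight-line code; real SSA construction must also insert phi functions where control-flow paths join:

```python
def to_ssa(stmts):
    """Rename variables in a straight-line sequence of assignments so
    that each variable is assigned exactly once (the SSA property).
    Each statement is (target, operands); an operand is either a
    variable name (str) or an integer constant."""
    version = {}  # variable name -> number of definitions seen so far
    current = {}  # variable name -> its current SSA name
    out = []
    for target, operands in stmts:
        # Uses are renamed to the most recent definition of each operand.
        renamed = [current.get(op, op) if isinstance(op, str) else op
                   for op in operands]
        # Each new definition gets a fresh version of the target.
        version[target] = version.get(target, 0) + 1
        ssa_name = f"{target}{version[target]}"
        current[target] = ssa_name
        out.append((ssa_name, renamed))
    return out

# x = 1; x = x + 2; y = x   becomes   x1 = 1; x2 = x1 + 2; y1 = x2
print(to_ssa([("x", [1]), ("x", ["x", 2]), ("y", ["x"])]))
# -> [('x1', [1]), ('x2', ['x1', 2]), ('y1', ['x2'])]
```

After renaming, each use refers unambiguously to a single definition, which is what lets analyses such as constant propagation and dead-code elimination become simpler and more precise.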