How does the `likely` macro affect branch prediction? - performance

I noticed that if we know a condition is very likely to be true or false, we can tell the compiler. For instance, the Linux kernel uses likely and unlikely a lot, which are actually implemented with GCC's __builtin_expect. I wanted to find out how it works, so I checked the assembly:
20:branch_prediction_victim.cpp **** if (array_aka[j] >= 128)
184 .loc 3 20 0 is_stmt 1
185 00f1 488B85D0 movq -131120(%rbp), %rax
185 FFFDFF
186 00f8 8B8485F0 movl -131088(%rbp,%rax,4), %eax
186 FFFDFF
187 00ff 83F87F cmpl $127, %eax
188 0102 7E17 jle .L13
Then for the __builtin_expect version:
20:branch_prediction_victim.cpp **** if (__builtin_expect((array_aka[j] >= 128), 1))
184 .loc 3 20 0 is_stmt 1
185 00f1 488B85D0 movq -131120(%rbp), %rax
185 FFFDFF
186 00f8 8B8485F0 movl -131088(%rbp,%rax,4), %eax
186 FFFDFF
187 00ff 83F87F cmpl $127, %eax
188 0102 0F9FC0 setg %al
189 0105 0FB6C0 movzbl %al, %eax
190 0108 4885C0 testq %rax, %rax
191 010b 7417 je .L13
188 - setg: set if greater, but greater than what here?
189 - movzbl: move with zero-extension from byte to long; I know this one moves %al into %eax
190 - testq: bitwise OR, then set the ZF and CF flags, is this right?
I want to know how these affect branch prediction and improve performance. Three extra instructions means more cycles are needed, right?

setcc reads FLAGS, in this case set by the cmp $127, %eax right before: setg sets AL to 1 if the "greater" condition holds, i.e. if the value was greater than 127. And test does a bitwise AND (not OR), setting ZF/SF/PF from the result and clearing CF/OF. Read the manual.
This looks like you forgot to enable optimization, so __builtin_expect is just creating a 0 / 1 boolean value in a register and branching on it being non-zero, instead of branching on the original FLAGS condition. Don't look at un-optimized code, it's always going to suck.
The clues are: the braindead booleanizing as part of likely, and loading j from the stack using RBP as a frame pointer with movq -131120(%rbp), %rax
likely generally doesn't improve runtime branch prediction; it improves code layout to minimize the number of taken branches when things go the way the source code said they would (i.e. the fast case). So it improves I-cache locality for the common case: e.g. the compiler will lay things out so the common case is a not-taken conditional branch, just falling through. This makes things easier for the front-end in superscalar pipelined CPUs that fetch/decode multiple instructions at once. Continuing to fetch in a straight line is easiest.
likely can actually get the compiler to use a branch instead of a cmov for cases that you know are predictable, even if compiler heuristics (without profile-guided optimization) would have gotten it wrong. Related: gcc optimization flag -O3 makes code slower than -O2
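For reference, the kernel-style hint macros are thin wrappers around __builtin_expect. A minimal C++ sketch of the same idea (the macro names are the conventional kernel ones; compile with optimization, e.g. -O2, or the hint buys you nothing, as noted above):
#include <cstdio>
// Branch hints built on GCC/Clang's __builtin_expect.
// The !!(x) normalizes any truthy value to exactly 0 or 1.
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)
long sum_large(const int* a, int n) {
    long sum = 0;
    for (int i = 0; i < n; ++i) {
        // Hint that the >= 128 case is the common one, so the compiler can
        // lay it out as the fall-through (not-taken) path.
        if (likely(a[i] >= 128))
            sum += a[i];
    }
    return sum;
}
int main() {
    int data[] = {200, 150, 3, 180};
    std::printf("%ld\n", sum_large(data, 4));
    return 0;
}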

Related

Handling Register Integer Value Wrap Around In x86 Assembly GNU on MacOSX

I am working with a simple example in x86 GNU GAS on MacOSX whereby an integer value of 300 is moved into the eax register. As expected, only 300 mod 256 (the value 44) is actually stored in %eax, as echo $? reveals in the Mac terminal:
.globl _main
_main:
pushq %rbp
movq %rsp, %rbp
subq $16, %rsp
movl $300, %eax;
leave
ret
However, I was under the impression that there is an overflow/wrap-around flag to denote that a wraparound occurred or a register storing the result of the integer division of 300 and 256, the result being 1. I have been unable to find any information detailing this process (if it exists) for x86 GNU. Does anyone know how the wraparound value or an overflow flag can be accessed?
There are a couple of misconceptions in your question.
First, eax can hold values from 0 to 4294967295, so mov $300, %eax does in fact store 300 into eax.
Second, a mov instruction cannot overflow or wrap around; the size of the source and the size of the destination are the same. The overflow flag is used for arithmetic operations.
The reason echo $? prints 44 is that the operating system reports the low byte of the exit status of the process to the shell.
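You can see both points from plain C++ without touching assembly (a minimal sketch; __builtin_add_overflow is a GCC/Clang builtin):
#include <cstdio>
int main() {
    // Wrap-around detection belongs to arithmetic, not to mov: the add below
    // is what would set CF/OF, and the builtin exposes that as a bool.
    unsigned a = 4294967295u;                          // UINT_MAX
    unsigned sum;
    bool wrapped = __builtin_add_overflow(a, 1u, &sum);
    std::printf("sum=%u wrapped=%d\n", sum, wrapped);  // prints sum=0 wrapped=1
    // The shell only ever sees the low 8 bits of the exit status:
    return 300;                                        // echo $? prints 44
}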

What methods can be used to efficiently extend instruction length on modern x86?

Imagine you want to align a series of x86 assembly instructions to certain boundaries. For example, you may want to align loops to a 16 or 32-byte boundary, or pack instructions so they are efficiently placed in the uop cache or whatever.
The simplest way to achieve this is single-byte NOP instructions, followed closely by multi-byte NOPs. Although the latter is generally more efficient, neither method is free: NOPs use front-end execution resources, and also count against your 4-wide1 rename limit on modern x86.
Another option is to somehow lengthen some instructions to get the alignment you want. If this is done without introducing new stalls, it seems better than the NOP approach. How can instructions be efficiently made longer on recent x86 CPUs?
In the ideal world lengthening techniques would simultaneously be:
Applicable to most instructions
Capable of lengthening the instruction by a variable amount
Does not stall or otherwise slow down the decoders
Efficiently represented in the uop cache
It isn't likely that there is a single method that satisfies all of the above points simultaneously, so good answers will probably address various tradeoffs.
1The limit is 5 or 6 on AMD Ryzen.
Consider mild code-golfing to shrink your code instead of expanding it, especially before a loop. e.g. xor eax,eax / cdq if you need two zeroed registers, or mov eax, 1 / lea ecx, [rax+1] to set registers to 1 and 2 in only 8 total bytes instead of 10. See Set all bits in CPU register to 1 efficiently for more about that, and Tips for golfing in x86/x64 machine code for more general ideas. Probably you still want to avoid false dependencies, though.
Or fill extra space by creating a vector constant on the fly instead of loading it from memory. (Adding more uop-cache pressure could be worse, though, for the larger loop that contains your setup + inner loop. But it avoids d-cache misses for constants, so it has an upside to compensate for running more uops.)
If you weren't already using them to load "compressed" constants, pmovsxbd, movddup, or vpbroadcastd are longer than movaps. dword / qword broadcast loads are free (no ALU uop, just a load).
If you're worried about code alignment at all, you're probably worried about how it sits in the L1I cache or where the uop-cache boundaries are, so just counting total uops is no longer sufficient, and a few extra uops in the block before the one you care about may not be a problem at all.
But in some situations, you might really want to optimize decode throughput / uop-cache usage / total uops for the instructions before the block you want aligned.
Padding instructions, like the question asked for:
Agner Fog has a whole section on this: "10.6 Making instructions longer for the sake of alignment" in his "Optimizing subroutines in assembly language" guide. (The lea, push r/m64, and SIB ideas are from there, and I copied a sentence / phrase or two, otherwise this answer is my own work, either different ideas or written before checking Agner's guide.)
It hasn't been updated for current CPUs, though: lea eax, [rbx + dword 0] has more downsides than it used to vs mov eax, ebx, because you miss out on zero-latency / no execution unit mov. If it's not on the critical path, go for it though. Simple lea has fairly good throughput, and an LEA with a large addressing mode (and maybe even some segment prefixes) can be better for decode / execute throughput than mov + nop.
Use the general form instead of the short form (no ModR/M) of instructions like push reg or mov reg,imm. e.g. use 2-byte push r/m64 for push rbx. Or use an equivalent instruction that is longer, like add dst, 1 instead of inc dst, in cases where there are no perf downsides to inc so you were already using inc.
Use SIB byte. You can get NASM to do that by using a single register as an index, like mov eax, [nosplit rbx*1] (see also), but that hurts the load-use latency vs. simply encoding mov eax, [rbx] with a SIB byte. Indexed addressing modes have other downsides on SnB-family, like un-lamination and not using port7 for stores.
So it's best to just encode base=rbx + disp0/8/32=0 using ModR/M + SIB with no index reg. (The SIB encoding for "no index" is the encoding that would otherwise mean idx=RSP). [rsp + x] addressing modes require a SIB already (base=RSP is the escape code that means there's a SIB), and that appears all the time in compiler-generated code. So there's very good reason to expect this to be fully efficient to decode and execute (even for base registers other than RSP) now and in the future. NASM syntax can't express this, so you'd have to encode manually. GNU gas Intel syntax from objdump -d says 8b 04 23 mov eax,DWORD PTR [rbx+riz*1] for Agner Fog's example 10.20. (riz is a fictional index-zero notation that means there's a SIB with no index). I haven't tested if GAS accepts that as input.
Use an imm32 and/or disp32 form of an instruction that only needed imm8 or disp0/disp32. Agner Fog's testing of Sandybridge's uop cache (microarch guide table 9.1) indicates that the actual value of an immediate / displacement is what matters, not the number of bytes used in the instruction encoding. I don't have any info on Ryzen's uop cache.
So NASM imul eax, [dword 4 + rdi], strict dword 13 (10 bytes: opcode + modrm + disp32 + imm32) would use the 32small, 32small category and take 1 entry in the uop cache, unlike if either the immediate or disp32 actually had more than 16 significant bits. (Then it would take 2 entries, and loading it from the uop cache would take an extra cycle.)
According to Agner's table, 8/16/32small are always equivalent for SnB. And addressing modes with a register are the same whether there's no displacement at all, or whether it's 32small, so mov dword [dword 0 + rdi], 123456 takes 2 entries, just like mov dword [rdi], 123456789. I hadn't realized [rdi] + full imm32 took 2 entries, but apparently that is the case on SnB.
Use jmp / jcc rel32 instead of rel8. Ideally try to expand instructions in places that don't require longer jump encodings outside the region you're expanding. Pad after jump targets for earlier forward jumps, pad before jump targets for later backward jumps, if they're close to needing a rel32 somewhere else. i.e. try to avoid padding between a branch and its target, unless you want that branch to use a rel32 anyway.
You might be tempted to encode mov eax, [symbol] as 6-byte a32 mov eax, [abs symbol] in 64-bit code, using an address-size prefix to use a 32-bit absolute address. But this does cause a Length-Changing-Prefix stall when it decodes on Intel CPUs. Fortunately, none of NASM/YASM / gas / clang do this code-size optimization by default if you don't explicitly specify a 32-bit address-size, instead using 7-byte mov r32, r/m32 with a ModR/M+SIB+disp32 absolute addressing mode for mov eax, [abs symbol].
In 64-bit position-dependent code, absolute addressing is a cheap way to use 1 extra byte vs. RIP-relative. But note that 32-bit absolute + immediate takes 2 cycles to fetch from the uop cache, unlike RIP-relative + imm8/16/32 which takes only 1 cycle even though it still uses 2 entries for the instruction (e.g. for a mov-store or a cmp). So cmp [abs symbol], 123 is slower to fetch from the uop cache than cmp [rel symbol], 123, even though both take 2 entries each. Without an immediate, there's no extra cost for an absolute address.
Note that PIE executables allow ASLR even for the executable, and are the default in many Linux distros, so if you can keep your code PIC without any perf downsides, then that's preferable.
Use a REX prefix when you don't need one, e.g. db 0x40 / add eax, ecx.
It's not in general safe to add prefixes like rep that current CPUs ignore, because they might mean something else in future ISA extensions.
Repeating the same prefix is sometimes possible (not with REX, though). For example, db 0x66, 0x66 / add ax, bx gives the instruction 3 operand-size prefixes, which I think is always strictly equivalent to one copy of the prefix. Up to 3 prefixes is the limit for efficient decoding on some CPUs. But this only works if you have a prefix you can use in the first place; you usually aren't using 16-bit operand-size, and generally don't want 32-bit address-size (although it's safe for accessing static data in position-dependent code).
A ds or ss prefix on an instruction that accesses memory is a no-op, and probably doesn't cause any slowdown on any current CPUs. (@prl suggested this in comments.)
In fact, Agner Fog's microarch guide uses a ds prefix on a movq [esi+ecx],mm0 in Example 7.1 (Arranging IFETCH blocks) to tune a loop for PII/PIII (no loop buffer or uop cache), speeding it up from 3 clocks per iteration to 2.
Some CPUs (like AMD) decode slowly when instructions have more than 3 prefixes. On some CPUs, this includes the mandatory prefixes in SSE2 and especially SSSE3 / SSE4.1 instructions. In Silvermont, even the 0F escape byte counts.
AVX instructions can use a 2- or 3-byte VEX prefix. Some instructions require the 3-byte VEX prefix (e.g. when the 2nd source is xmm8-15/ymm8-15, or for the 0F38/0F3A opcode maps used by SSSE3 and later instructions). But an instruction that could have used a 2-byte prefix can always be encoded with a 3-byte VEX: in NASM or GAS, {vex3} vxorps xmm0,xmm0,xmm0. If AVX-512 is available, you can use 4-byte EVEX as well.
Use 64-bit operand-size for mov even when you don't need it, for example mov rax, strict dword 1 forces the 7-byte sign-extended-imm32 encoding in NASM, which would normally optimize it to 5-byte mov eax, 1.
mov eax, 1 ; 5 bytes to encode (B8 imm32)
mov rax, strict dword 1 ; 7 bytes: REX mov r/m64, sign-extended-imm32.
mov rax, strict qword 1 ; 10 bytes to encode (REX B8 imm64). movabs mnemonic for AT&T.
You could even use mov reg, 0 instead of xor reg,reg.
mov r64, imm64 fits efficiently in the uop cache when the constant is actually small (fits in 32-bit sign extended.) 1 uop-cache entry, and load-time = 1, the same as for mov r32, imm32. Decoding a giant instruction means there's probably not room in a 16-byte decode block for 3 other instructions to decode in the same cycle, unless they're all 2-byte. Possibly lengthening multiple other instructions slightly can be better than having one long instruction.
Decode penalties for extra prefixes:
P5: prefixes prevent pairing, except for address/operand-size on PMMX only.
PPro to PIII: There is always a penalty if an instruction has more than one prefix. This penalty is usually one clock per extra prefix. (Agner's microarch guide, end of section 6.3)
Silvermont: it's probably the tightest constraint on which prefixes you can use, if you care about it. Decode stalls on more than 3 prefixes, counting mandatory prefixes + 0F escape byte. SSSE3 and SSE4 instructions already have 3 prefixes so even a REX makes them slow to decode.
some AMD: maybe a 3-prefix limit, not including escape bytes, and maybe not including mandatory prefixes for SSE instructions.
... TODO: finish this section. Until then, consult Agner Fog's microarch guide.
After hand-encoding stuff, always disassemble your binary to make sure you got it right. It's unfortunate that NASM and other assemblers don't have better support for choosing cheap padding over a region of instructions to reach a given alignment boundary.
Assembler syntax
NASM has some encoding override syntax: {vex3} and {evex} prefixes, NOSPLIT, and strict byte / dword, and forcing disp8/disp32 inside addressing modes. Note that [rdi + byte 0] isn't allowed, the byte keyword has to come first. [byte rdi + 0] is allowed, but I think that looks weird.
Listing from nasm -l/dev/stdout -felf64 padding.asm
line addr machine-code bytes source line
num
4 00000000 0F57C0 xorps xmm0,xmm0 ; SSE1 *ps instructions are 1-byte shorter
5 00000003 660FEFC0 pxor xmm0,xmm0
6
7 00000007 C5F058DA vaddps xmm3, xmm1,xmm2
8 0000000B C4E17058DA {vex3} vaddps xmm3, xmm1,xmm2
9 00000010 62F1740858DA {evex} vaddps xmm3, xmm1,xmm2
10
11
12 00000016 FFC0 inc eax
13 00000018 83C001 add eax, 1
14 0000001B 4883C001 add rax, 1
15 0000001F 678D4001 lea eax, [eax+1] ; runs on fewer ports and doesn't set flags
16 00000023 67488D4001 lea rax, [eax+1] ; address-size and REX.W
17 00000028 0501000000 add eax, strict dword 1 ; using the EAX-only encoding with no ModR/M
18 0000002D 81C001000000 db 0x81, 0xC0, 1,0,0,0 ; add eax,0x1 using the ModR/M imm32 encoding
19 00000033 81C101000000 add ecx, strict dword 1 ; non-eax must use the ModR/M encoding
20 00000039 4881C101000000 add rcx, strict qword 1 ; YASM requires strict dword for the immediate, because it's still 32b
21 00000040 67488D8001000000 lea rax, [dword eax+1]
22
23
24 00000048 8B07 mov eax, [rdi]
25 0000004A 8B4700 mov eax, [byte 0 + rdi]
26 0000004D 3E8B4700 mov eax, [ds: byte 0 + rdi]
26 ****************** warning: ds segment base generated, but will be ignored in 64-bit mode
27 00000051 8B8700000000 mov eax, [dword 0 + rdi]
28 00000057 8B043D00000000 mov eax, [NOSPLIT dword 0 + rdi*1] ; 1c extra latency on SnB-family for non-simple addressing mode
GAS has encoding-override pseudo-prefixes {vex3}, {evex}, {disp8}, and {disp32}. These replace the now-deprecated .s, .d8 and .d32 suffixes.
GAS doesn't have an override for immediate size, only for displacements.
GAS does let you add an explicit ds prefix, with ds mov src,dst
gcc -g -c padding.S && objdump -drwC padding.o -S, with hand-editing:
# no CPUs have separate ps vs. pd domains, so there's no penalty for mixing ps and pd loads/shuffles
0: 0f 28 07 movaps (%rdi),%xmm0
3: 66 0f 28 07 movapd (%rdi),%xmm0
7: 0f 58 c8 addps %xmm0,%xmm1 # not equivalent for SSE/AVX transitions, but sometimes safe to mix with AVX-128
a: c5 e8 58 d9 vaddps %xmm1,%xmm2, %xmm3 # default {vex2}
e: c4 e1 68 58 d9 {vex3} vaddps %xmm1,%xmm2, %xmm3
13: 62 f1 6c 08 58 d9 {evex} vaddps %xmm1,%xmm2, %xmm3
19: ff c0 inc %eax
1b: 83 c0 01 add $0x1,%eax
1e: 48 83 c0 01 add $0x1,%rax
22: 67 8d 40 01 lea 1(%eax), %eax # runs on fewer ports and doesn't set flags
26: 67 48 8d 40 01 lea 1(%eax), %rax # address-size and REX
# no equivalent for add eax, strict dword 1 # no-ModR/M
.byte 0x81, 0xC0; .long 1 # add eax,0x1 using the ModR/M imm32 encoding
2b: 81 c0 01 00 00 00 add $0x1,%eax # manually encoded
31: 81 c1 d2 04 00 00 add $0x4d2,%ecx # large immediate, can't get GAS to encode this way with $1 other than doing it manually
37: 67 8d 80 01 00 00 00 {disp32} lea 1(%eax), %eax
3e: 67 48 8d 80 01 00 00 00 {disp32} lea 1(%eax), %rax
mov 0(%rdi), %eax # the 0 optimizes away
46: 8b 07 mov (%rdi),%eax
{disp8} mov (%rdi), %eax # adds a disp8 even if you omit the 0
48: 8b 47 00 mov 0x0(%rdi),%eax
{disp8} ds mov (%rdi), %eax # with a DS prefix
4b: 3e 8b 47 00 mov %ds:0x0(%rdi),%eax
{disp32} mov (%rdi), %eax
4f: 8b 87 00 00 00 00 mov 0x0(%rdi),%eax
{disp32} mov 0(,%rdi,1), %eax # 1c extra latency on SnB-family for non-simple addressing mode
55: 8b 04 3d 00 00 00 00 mov 0x0(,%rdi,1),%eax
GAS is strictly less powerful than NASM for expressing longer-than-needed encodings.
Let's look at a specific piece of code:
cmp ebx,123456
mov al,0xFF
je .foo
For this code, none of the instructions can be replaced with anything else, so the only options are redundant prefixes and NOPs.
However, what if you change the instruction ordering?
You could convert the code into this:
mov al,0xFF
cmp ebx,123456
je .foo
After re-ordering the instructions, the mov al,0xFF could be replaced with or eax,0x000000FF or with or ax,0x00FF.
For the first instruction ordering there is only one possibility, and for the second instruction ordering there are 3 possibilities; so there's a total of 4 possible permutations to choose from without using any redundant prefixes or NOPs.
For each of those 4 permutations you can add variations with different amounts of redundant prefixes, and single and multi-byte NOPs, to make it end on a specific alignment/s. I'm too lazy to do the maths, so let's assume that maybe it expands to 100 possible permutations.
What if you gave each of these 100 permutations a score (based on things like how long it would take to execute, how well it aligns the instruction after this piece, if size or speed matters, ...). This can include micro-architectural targeting (e.g. maybe for some CPUs the original permutation breaks micro-op fusion and makes the code worse).
You could generate all the possible permutations and give them a score, and choose the permutation with the best score. Note that this may not be the permutation with the best alignment (if alignment is less important than other factors and just makes performance worse).
Of course you can break large programs into many small groups of linear instructions separated by control flow changes; and then do this "exhaustive search for the permutation with the best score" for each small group of linear instructions.
The problem is that instruction order and instruction selection are co-dependent.
For the example above, you couldn't replace mov al,0xFF until after we re-ordered the instructions; and it's easy to find cases where you can't re-order the instructions until after you've replaced (some) instructions. This makes it hard to do an exhaustive search for the best solution, for any definition of "best", even if you only care about alignment and don't care about performance at all.
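To make the "score every permutation" idea concrete, here is a toy C++ sketch. Everything in it is hypothetical (the Candidate struct, the score() heuristic, and the byte counts are only illustrative); a real tool would score candidates against a proper microarchitectural model:
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>
struct Candidate {
    std::string text;          // human-readable form of the sequence
    std::vector<int> lengths;  // encoded length of each instruction, in bytes
};
// Hypothetical score: prefer sequences whose total length lands the next
// instruction on a 16-byte boundary, then prefer fewer total bytes.
static int score(const Candidate& c, int start_offset) {
    int total = 0;
    for (int len : c.lengths) total += len;
    int misalignment = (start_offset + total) % 16;  // 0 is ideal
    return misalignment * 100 + total;               // lower is better
}
int main() {
    // Illustrative candidates for the cmp/mov/je example above, with rel32 jumps.
    std::vector<Candidate> candidates = {
        {"cmp ebx,123456 ; mov al,0xFF    ; je .foo", {6, 2, 6}},
        {"mov al,0xFF    ; cmp ebx,123456 ; je .foo", {2, 6, 6}},
        {"or eax,0xFF    ; cmp ebx,123456 ; je .foo", {5, 6, 6}},
        {"or ax,0xFF     ; cmp ebx,123456 ; je .foo", {4, 6, 6}},
    };
    const int start_offset = 3;  // hypothetical position of this block
    auto best = std::min_element(candidates.begin(), candidates.end(),
        [&](const Candidate& a, const Candidate& b) {
            return score(a, start_offset) < score(b, start_offset);
        });
    std::cout << "best: " << best->text << "\n";
}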
I can think of four ways off the top of my head:
First: Use alternate encodings for instructions (Peter Cordes mentioned something similar). There are a lot of ways to call the ADD operation for example, and some of them take up more bytes:
http://www.felixcloutier.com/x86/ADD.html
Usually an assembler will try to choose the "best" encoding for the situation whether that is optimizing for speed or length, but you can always use another one and get the same result.
Second: Use other instructions that mean the same thing and have different lengths. I'm sure you can think of countless examples where you could drop one instruction into the code to replace an existing one and get the same results. People that hand optimize code do it all the time:
shl eax, 1
add eax, eax
imul eax, eax, 2
etc etc
Third: Use the variety of NOPs available to pad out extra space:
nop
and eax, eax
sub eax, 0
etc etc
In an ideal world you'd probably have to use all these tricks to get code to be the exact byte length you want.
Fourth: Change your algorithm to get more options using the above methods.
One final note: Obviously targeting more modern processors will give you better results due to the number and complexity of instructions. Having access to MMX, XMM, SSE, SSE2, floating point, etc instructions could make your job easier.
Depends on the nature of the code.
Floating-point-heavy code
AVX prefix
One can resort to the longer AVX prefix for most SSE instructions.
Note that there is a fixed penalty when switching between SSE and AVX on Intel CPUs [1][2]. This requires vzeroupper, which can be interpreted as another NOP for SSE code, or for AVX code which doesn't require the upper 128 bits.
SSE/AVX NOPS
Typical NOPs I can think of are:
XORPS the same register with itself (use the integer SSE/AVX variants of this for integer code)
ANDPS the same register with itself (likewise)

Branch alignment for loops involving micro-coded instructions on Intel SnB-family CPUs

This is related, but not the same, as this question: Performance optimisations of x86-64 assembly - Alignment and branch prediction and is slightly related to my previous question: Unsigned 64-bit to double conversion: why this algorithm from g++
The following is not a real-world test case. This primality-testing algorithm is not sensible. I suspect any real-world algorithm would never execute such a small inner loop quite so many times (num is a prime of size about 2**50). In C++11:
#include <cmath>
using nt = unsigned long long;
bool is_prime_float(nt num)
{
for (nt n=2; n<=sqrt(num); ++n) {
if ( (num%n)==0 ) { return false; }
}
return true;
}
Then g++ -std=c++11 -O3 -S produces the following, with RCX containing n and XMM6 containing sqrt(num). See my previous post for the remaining code (which is never executed in this example, as RCX never becomes large enough to be treated as a signed negative).
jmp .L20
.p2align 4,,10
.L37:
pxor %xmm0, %xmm0
cvtsi2sdq %rcx, %xmm0
ucomisd %xmm0, %xmm6
jb .L36 // Exit the loop
.L20:
xorl %edx, %edx
movq %rbx, %rax
divq %rcx
testq %rdx, %rdx
je .L30 // Failed divisibility test
addq $1, %rcx
jns .L37
// Further code to deal with case when ucomisd can't be used
I time this using std::chrono::steady_clock. I kept getting weird performance changes from just adding or deleting other code. I eventually tracked this down to an alignment issue. The directive .p2align 4,,10 tries to align to a 2**4 = 16 byte boundary, but only uses at most 10 bytes of padding to do so, I guess to balance alignment against code size.
I wrote a Python script to replace .p2align 4,,10 with a manually controlled number of nop instructions. The following scatter plot shows the fastest 15 of 20 runs, with time in seconds plotted against the number of padding bytes on the x-axis:
From objdump with no padding, the pxor instruction will occur at offset 0x402f5f. Running on a laptop, Sandybridge i5-3210m, turboboost disabled, I found that
For 0 byte padding, slow performance (0.42 secs)
For 1-4 bytes padding (offset 0x402f60 to 0x402f63) get slightly better (0.41s, visible on the plot).
For 5-20 bytes padding (offset 0x402f64 to 0x402f73) get fast performance (0.37s)
For 21-32 bytes padding (offset 0x402f74 to 0x402f7f) slow performance (0.42 secs)
The pattern then repeats with a 32-byte period.
So 16-byte alignment doesn't give the best performance; it puts us in the slightly-better (or just lower-variation, judging from the scatter plot) region. Alignment of 32 plus 4 to 19 bytes gives the best performance.
Why am I seeing this performance difference? Why does this seem to violate the rule of aligning branch targets to a 16-byte boundary (see e.g. the Intel optimisation manual)?
I don't see any branch-prediction problems. Could this be a uop cache quirk??
By changing the C++ algorithm to cache sqrt(num) in a 64-bit integer and then make the loop purely integer based, I remove the problem: alignment now makes no difference at all.
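The integer-only rewrite described above would look something like this (a sketch of the idea, not the exact code the OP used):
#include <cmath>
using nt = unsigned long long;
bool is_prime_int(nt num)
{
    // Cache sqrt(num) as an integer once, so the loop body is pure integer
    // work (div/test/add) and the FP conversion no longer sits in the loop.
    nt limit = static_cast<nt>(std::sqrt(static_cast<double>(num)));
    for (nt n = 2; n <= limit; ++n) {
        if ((num % n) == 0) { return false; }
    }
    return true;
}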
Here's what I found on Skylake for the same loop. All the code to reproduce my tests on your hardware is on github.
I observe three different performance levels based on alignment, whereas the OP only really saw 2 primary ones. The levels are very distinct and repeatable2:
We see three distinct performance levels here (the pattern repeats starting from offset 32), which we'll call regions 1, 2 and 3, from left to right (region 2 is split into two parts straddling region 3). The fastest region (1) is from offset 0 to 8, the middle (2) region is from 9-18 and 28-31, and the slowest (3) is from 19-27. The difference between each region is close to or exactly 1 cycle/iteration.
Based on the performance counters, the fastest region is very different from the other two:
All the instructions are delivered from the legacy decoder, not from the DSB1.
There are exactly 2 decoder <-> microcode switches (idq_ms_switches) for every iteration of the loop.
On the other hand, the two slower regions are fairly similar:
All the instructions are delivered from the DSB (uop cache), and not from the legacy decoder.
There are exactly 3 decoder <-> microcode switches per iteration of the loop.
The transition from the fastest to the middle region, as the offset changes from 8 to 9, corresponds exactly to when the loop starts fitting in the uop buffer, because of alignment issues. You can count this out in exactly the same way as Peter did in his answer:
Offset 8:
LSD? <_start.L37>:
ab 1 4000a8: 66 0f ef c0 pxor xmm0,xmm0
ab 1 4000ac: f2 48 0f 2a c1 cvtsi2sd xmm0,rcx
ab 1 4000b1: 66 0f 2e f0 ucomisd xmm6,xmm0
ab 1 4000b5: 72 21 jb 4000d8 <_start.L36>
ab 2 4000b7: 31 d2 xor edx,edx
ab 2 4000b9: 48 89 d8 mov rax,rbx
ab 3 4000bc: 48 f7 f1 div rcx
!!!! 4000bf: 48 85 d2 test rdx,rdx
4000c2: 74 0d je 4000d1 <_start.L30>
4000c4: 48 83 c1 01 add rcx,0x1
4000c8: 79 de jns 4000a8 <_start.L37>
In the first column I've annotated how the uops for each instruction end up in the uop cache. "ab 1" means they go in the set associated with address like ...???a? or ...???b? (each set covers 32 bytes, aka 0x20), while 1 means way 1 (out of a max of 3).
At the point marked !!!! this busts out of the uop cache, because the test instruction has nowhere to go: all 3 ways are used up.
Let's look at offset 9 on the other hand:
00000000004000a9 <_start.L37>:
ab 1 4000a9: 66 0f ef c0 pxor xmm0,xmm0
ab 1 4000ad: f2 48 0f 2a c1 cvtsi2sd xmm0,rcx
ab 1 4000b2: 66 0f 2e f0 ucomisd xmm6,xmm0
ab 1 4000b6: 72 21 jb 4000d9 <_start.L36>
ab 2 4000b8: 31 d2 xor edx,edx
ab 2 4000ba: 48 89 d8 mov rax,rbx
ab 3 4000bd: 48 f7 f1 div rcx
cd 1 4000c0: 48 85 d2 test rdx,rdx
cd 1 4000c3: 74 0d je 4000d2 <_start.L30>
cd 1 4000c5: 48 83 c1 01 add rcx,0x1
cd 1 4000c9: 79 de jns 4000a9 <_start.L37>
Now there is no problem! The test instruction has slipped into the next 32B line (the cd line), so everything fits in the uop cache.
So that explains why stuff changes between the MITE and DSB at that point. It doesn't, however, explain why the MITE path is faster. I tried some simpler tests with div in a loop, and you can reproduce this with simpler loops without any of the floating point stuff. It's weird and sensitive to random other stuff you put in the loop.
For example this loop also executes faster out of the legacy decoder than the DSB:
ALIGN 32
<add some nops here to switch between DSB and MITE>
.top:
add r8, r9
xor eax, eax
div rbx
xor edx, edx
times 5 add eax, eax
dec rcx
jnz .top
In that loop, adding the pointless add r8, r9 instruction, which doesn't really interact with the rest of the loop, sped things up for the MITE version (but not the DSB version).
So I think the difference between region 1 and regions 2 and 3 is due to the former executing out of the legacy decoder (which, oddly, makes it faster).
Let's also take a look at the offset 18 to offset 19 transition (where region 2 ends and region 3 starts):
Offset 18:
00000000004000b2 <_start.L37>:
ab 1 4000b2: 66 0f ef c0 pxor xmm0,xmm0
ab 1 4000b6: f2 48 0f 2a c1 cvtsi2sd xmm0,rcx
ab 1 4000bb: 66 0f 2e f0 ucomisd xmm6,xmm0
ab 1 4000bf: 72 21 jb 4000e2 <_start.L36>
cd 1 4000c1: 31 d2 xor edx,edx
cd 1 4000c3: 48 89 d8 mov rax,rbx
cd 2 4000c6: 48 f7 f1 div rcx
cd 3 4000c9: 48 85 d2 test rdx,rdx
cd 3 4000cc: 74 0d je 4000db <_start.L30>
cd 3 4000ce: 48 83 c1 01 add rcx,0x1
cd 3 4000d2: 79 de jns 4000b2 <_start.L37>
Offset 19:
00000000004000b3 <_start.L37>:
ab 1 4000b3: 66 0f ef c0 pxor xmm0,xmm0
ab 1 4000b7: f2 48 0f 2a c1 cvtsi2sd xmm0,rcx
ab 1 4000bc: 66 0f 2e f0 ucomisd xmm6,xmm0
cd 1 4000c0: 72 21 jb 4000e3 <_start.L36>
cd 1 4000c2: 31 d2 xor edx,edx
cd 1 4000c4: 48 89 d8 mov rax,rbx
cd 2 4000c7: 48 f7 f1 div rcx
cd 3 4000ca: 48 85 d2 test rdx,rdx
cd 3 4000cd: 74 0d je 4000dc <_start.L30>
cd 3 4000cf: 48 83 c1 01 add rcx,0x1
cd 3 4000d3: 79 de jns 4000b3 <_start.L37>
The only difference I see here is that the first 4 instructions in the offset 18 case fit into the ab cache line, but only 3 in the offset 19 case. If we hypothesize that the DSB can only deliver uops to the IDQ from one cache set, this means that at some point one uop may be issued and executed a cycle earlier in the offset 18 scenario than in the 19 scenario (imagine, for example, that the IDQ is empty). Depending on exactly what port that uop goes to in the context of the surrounding uop flow, that may delay the loop by one cycle. Indeed, the difference between region 2 and 3 is ~1 cycle (within the margin of error).
So I think we can say that the difference between 2 and 3 is likely due to uop cache alignment - region 2 has a slightly better alignment than 3, in terms of issuing one additional uop one cycle earlier.
Some additional notes on things I checked that didn't pan out as possible causes of the slowdowns:
Despite the DSB modes (regions 2 and 3) having 3 microcode switches versus the 2 of the MITE path (region 1), that doesn't seem to directly cause the slowdown. In particular, simpler loops with div execute in identical cycle counts, but still show 3 and 2 switches for DSB and MITE paths respectively. So that's normal and doesn't directly imply the slowdown.
Both paths execute essentially identical number of uops and, in particular, have identical number of uops generated by the microcode sequencer. So it's not like there is more overall work being done in the different regions.
There wasn't really any difference in cache misses (very low, as expected) at various levels, branch mispredictions (essentially zero3), or any other types of penalties or unusual conditions I checked.
What did bear fruit is looking at the pattern of execution unit usage across the various regions. Here's a look at the distribution of uops executed per cycle and some stall metrics:
+----------------------------+----------+----------+----------+
| | Region 1 | Region 2 | Region 3 |
+----------------------------+----------+----------+----------+
| cycles: | 7.7e8 | 8.0e8 | 8.3e8 |
| uops_executed_stall_cycles | 18% | 24% | 23% |
| exe_activity_1_ports_util | 31% | 22% | 27% |
| exe_activity_2_ports_util | 29% | 31% | 28% |
| exe_activity_3_ports_util | 12% | 19% | 19% |
| exe_activity_4_ports_util | 10% | 4% | 3% |
+----------------------------+----------+----------+----------+
I sampled a few different offset values and the results were consistent within each region, yet between the regions you have quite different results. In particular, in region 1, you have fewer stall cycles (cycles where no uop is executed). You also have significant variation in the non-stall cycles, although no clear "better" or "worse" trend is evident. For example, region 1 has many more cycles (10% vs 3% or 4%) with 4 uops executed, but the other regions largely make up for it with more cycles with 3 uops executed, and few cycles with 1 uop executed.
The difference in UPC4 that the execution distribution above implies fully explains the difference in performance (this is probably a tautology since we already confirmed the uop count is the same between them).
Let's see what toplev.py has to say about it ... (results omitted).
Well, toplev suggests that the primary bottleneck is the front-end (50+%). I don't think you can trust this because the way it calculates FE-bound seems broken in the case of long strings of micro-coded instructions. FE-bound is based on frontend_retired.latency_ge_8, which is defined as:
Retired instructions that are fetched after an interval where the
front-end delivered no uops for a period of 8 cycles which was not
interrupted by a back-end stall. (Supports PEBS)
Normally that makes sense. You are counting instructions which were delayed because the front-end wasn't delivering uops. The "not interrupted by a back-end stall" condition ensures that this doesn't trigger when the front-end isn't delivering uops simply because the backend is not able to accept them (e.g., when the RS is full because the backend is performing some low-throughput instructions).
It kind of seems broken for div instructions, though; even a simple loop with pretty much just one div shows:
FE Frontend_Bound: 57.59 % [100.00%]
BAD Bad_Speculation: 0.01 %below [100.00%]
BE Backend_Bound: 0.11 %below [100.00%]
RET Retiring: 42.28 %below [100.00%]
That is, the only bottleneck is the front-end ("retiring" is not a bottleneck, it represents the useful work). Clearly, such a loop is trivially handled by the front-end and is instead limited by the backend's ability to chew through all the uops generated by the div operation. Toplev might get this really wrong because (1) it may be that the uops delivered by the microcode sequencer aren't counted in the frontend_retired.latency... counters, so that every div operation causes that event to count all the subsequent instructions (even though the CPU was busy during that period - there was no real stall), or (2) the microcode sequencer might deliver all its uops essentially "up front", slamming ~36 uops into the IDQ, at which point it doesn't deliver any more until the div is finished, or something like that.
Still, we can look at the lower levels of toplev for hints:
The main difference toplev calls out between region 1 and regions 2 and 3 is the increased penalty of ms_switches for the latter two regions (since they incur 3 every iteration vs 2 for the legacy path). Internally, toplev estimates a 2-cycle penalty in the frontend for such switches. Of course, whether these penalties actually slow anything down depends in a complex way on the instruction queue and other factors. As mentioned above, a simple loop with div doesn't show any difference between the DSB and MITE paths, but a loop with additional instructions does. So it could be that the extra switch bubble is absorbed in simpler loops (where the backend processing of all the uops generated by the div is the main factor), but once you add some other work in the loop, the switches become a factor, at least for the transition period between the div and non-div work.
So I guess my conclusion is that the way the div instruction interacts with the rest of the frontend uop flow, and backend execution, isn't completely well understood. We know it involves a flood of uops, delivered both from the MITE/DSB (seems like 4 uops per div) and from the microcode sequencer (seems like ~32 uops per div, although it changes with different input values to the div op) - but we don't know what those uops are (we can see their port distribution though). All that makes the behavior fairly opaque, but I think it is probably down to either the MS switches bottlenecking the front-end, or slight differences in the uop delivery flow resulting in different scheduling decisions which end up making the MITE ordering come out ahead.
1 Of course, most of the uops are not delivered from the legacy decoder or DSB at all, but by the microcode sequencer (ms). So we loosely talk about instructions delivered, not uops.
2 Note that the x axis here is "offset bytes from 32B alignment". That is, 0 means the top of the loop (label .L37) is aligned to a 32B boundary, and 5 means the loop starts five bytes after a 32B boundary (using nop for padding), and so on. So my padding bytes and offset are the same. The OP used a different meaning for offset, if I understand it correctly: his 1 byte of padding resulted in a 0 offset. So you would subtract 1 from the OP's padding values to get my offset values.
3 In fact, the branch prediction rate for a typical test with prime=1000000000000037 was ~99.999997%, reflecting only 3 mispredicted branches in the entire run (likely on the first pass through the loop, and the last iteration).
4 UPC, i.e., uops per cycle - a measure closely related to IPC for similar programs, and one that is a bit more precise when we are looking in detail at uop flows. In this case, we already know the uop counts are the same for all variations of alignment, so UPC and IPC will be directly proportional.
I don't have a specific answer, just a few different hypotheses that I'm unable to test (lack of hardware). I thought I'd found something conclusive, but I had the alignment off by one (because the question counts padding from 0x5F, not from an aligned boundary). Hopefully it's still useful to post this to describe the factors that are probably at play here.
The question also doesn't specify the encoding of the branches (short (2B) or near (6B)). This leaves too many possibilities to look at and theorize about exactly which instruction crossing a 32B boundary or not is causing the issue.
I think it's either a matter of the loop fitting in the uop cache or not, or else it's a matter of alignment mattering for whether it decodes fast with the legacy decoders.
Obviously that asm loop could be improved a lot (e.g. by hoisting the floating-point out of it, not to mention using a different algorithm entirely), but that's not the question. We just want to know why alignment matters for this exact loop.
You might expect that a loop that bottlenecks on division wouldn't bottleneck on the front-end or be affected by alignment, because division is slow and the loop runs very few instructions per clock. That's true, but 64-bit DIV is micro-coded as 35-57 micro-ops (uops) on IvyBridge, so it turns out there can be front-end issues.
The two main ways alignment can matter are:
Front-end bottlenecks (in the fetch/decode stages), leading to bubbles in keeping the out-of-order core supplied with work to do.
Branch prediction: if two branches have the same address modulo some large power of 2, they can alias each other in the branch prediction hardware. Code alignment in one object file is affecting the performance of a function in another object file scratches the surface of this issue, but much has been written about it.
I suspect this is a purely front-end issue, not branch prediction, since the code spends all its time in this loop, and isn't running other branches that might alias with the ones here.
Your Intel IvyBridge CPU is a die-shrink of SandyBridge. It has a few changes (like mov-elimination, and ERMSB), but the front-end is similar between SnB/IvB/Haswell. Agner Fog's microarch pdf has enough details to analyze what should happen when the CPU runs this code. See also David Kanter's SandyBridge writeup for a block diagram of the fetch/decode stages, but he splits the fetch/decode from the uop cache, microcode, and decoded-uop queue. At the end, there's a full block diagram of a whole core. His Haswell article has a block diagram including the whole front-end, up to the decoded-uop queue that feeds the issue stage. (IvyBridge, like Haswell, has a 56 uop queue / loopback buffer when not using Hyperthreading. Sandybridge statically partitions them into 2x28 uop queues even when HT is disabled.)
Image copied from David Kanter's also-excellent Haswell write-up, where he includes the decoders and uop-cache in one diagram.
Let's look at how the uop cache will probably cache this loop, once things settle down. (i.e. assuming that the loop entry with a jmp to the middle of the loop doesn't have any serious long-term effect on how the loop sits in the uop cache).
According to Intel's optimization manual (2.3.2.2 Decoded ICache):
All micro-ops in a Way (uop cache line) represent instructions which are statically contiguous in the code and have their EIPs within the same aligned 32-byte region. (I think this means an instruction that extends past the boundary goes in the uop cache for the block containing its start, rather than its end. Spanning instructions have to go somewhere, and the branch target address that would run the instruction is the start of the insn, so it's most useful to put it in a line for that block.)
A multi micro-op instruction cannot be split across Ways.
An instruction which turns on the MSROM consumes an entire Way. (i.e. any instruction that takes more than 4 uops (for the reg,reg form) is microcoded. For example, DPPD is not micro-coded (4 uops), but DPPS is (6 uops). DPPD with a memory operand that can't micro-fuse would be 5 total uops, but still wouldn't need to turn on the microcode sequencer (not tested).)
Up to two branches are allowed per Way.
A pair of macro-fused instructions is kept as one micro-op.
David Kanter's SnB writeup has some more great details about the uop cache.
Let's see how the actual code will go into the uop cache:
# let's consider the case where this is 32B-aligned, so it runs in 0.41s
# i.e. this is at 0x402f60, instead of 0 like this objdump -Mintel -d output on a .o
# branch displacements are all 00, and I forgot to put in dummy labels, so they're using the rel32 encoding not rel8.
0000000000000000 <.text>:
0: 66 0f ef c0 pxor xmm0,xmm0 # 1 uop
4: f2 48 0f 2a c1 cvtsi2sd xmm0,rcx # 2 uops
9: 66 0f 2e f0 ucomisd xmm6,xmm0 # 2 uops
d: 0f 82 00 00 00 00 jb 0x13 # 1 uop (end of one uop cache line of 6 uops)
13: 31 d2 xor edx,edx # 1 uop
15: 48 89 d8 mov rax,rbx # 1 uop (end of a uop cache line: next insn doesn't fit)
18: 48 f7 f1 div rcx # microcoded: fills a whole uop cache line. (And generates 35-57 uops)
1b: 48 85 d2 test rdx,rdx ### PROBLEM!! only 3 uop cache lines can map to the same 32-byte block of x86 instructions.
# So the whole block has to be re-decoded by the legacy decoders every time, because it doesn't fit in the uop-cache
1e: 0f 84 00 00 00 00 je 0x24 ## spans a 32B boundary, so I think it goes with TEST in the line that includes the first byte. Should actually macro-fuse.
24: 48 83 c1 01 add rcx,0x1 # 1 uop
28: 79 d6 jns 0x0 # 1 uop
So with 32B alignment for the start of the loop, it has to run from the legacy decoders, which is potentially slower than running from the uop cache. There could even be some overhead in switching from uop cache to legacy decoders.
@Iwill's testing (see comments on the question) reveals that any microcoded instruction prevents a loop from running from the loopback buffer. (LSD = Loop Stream Detector = loop buffer; physically the same structure as the IDQ (instruction decode queue). DSB = Decode Stream Buffer = the uop cache. MITE = legacy decoders.)
Busting the uop cache will hurt performance even if the loop is small enough to run from the LSD (which holds up to 28 uops, or 56 without hyperthreading, on IvB and Haswell).
Intel's optimization manual (section 2.3.2.4) says the LSD requirements include
All micro-ops are also resident in the Decoded ICache.
So this explains why microcode doesn't qualify: in that case the uop cache only holds a pointer into the microcode, not the uops themselves. Also note that this means that busting the uop cache for any other reason (e.g. lots of single-byte NOP instructions) means a loop can't run from the LSD.
With the minimum padding to go fast, according to the OP's testing:
# branch displacements are still 32-bit, except the loop branch.
# This may not be accurate, since the question didn't give raw instruction dumps.
# the version with short jumps looks even more unlikely
0000000000000000 <loop_start-0x64>:
...
5c: 00 00 add BYTE PTR [rax],al
5e: 90 nop
5f: 90 nop
60: 90 nop # 4NOPs of padding is just enough to bust the uop cache before (instead of after) div, if they have to go in the uop cache.
# But that makes little sense, because looking backward should be impossible (insn start ambiguity), and we jump into the loop so the NOPs don't even run once.
61: 90 nop
62: 90 nop
63: 90 nop
0000000000000064 <loop_start>: #uops #decode in cycle A..E
64: 66 0f ef c0 pxor xmm0,xmm0 #1 A
68: f2 48 0f 2a c1 cvtsi2sd xmm0,rcx #2 B
6d: 66 0f 2e f0 ucomisd xmm6,xmm0 #2 C (crosses 16B boundary)
71: 0f 82 db 00 00 00 jb 152 #1 C
77: 31 d2 xor edx,edx #1 C
79: 48 89 d8 mov rax,rbx #1 C
7c: 48 f7 f1 div rcx #line D
# 64B boundary after the REX in next insn
7f: 48 85 d2 test rdx,rdx #1 E
82: 74 06 je 8a <loop_start+0x26>#1 E
84: 48 83 c1 01 add rcx,0x1 #1 E
88: 79 da jns 64 <loop_start>#1 E
The REX prefix of test rdx,rdx is in the same block as the DIV, so this should bust the uop cache. One more byte of padding would put it into the next 32B block, which would make perfect sense. Perhaps the OP's results are wrong, or perhaps prefixes don't count and it's the position of the opcode byte that matters. Or perhaps a macro-fused test+branch is pulled into the next block?
Macro-fusion does still happen across the 64B L1I-cache line boundary here, since the boundary doesn't fall exactly between the two instructions:
Macro fusion does not happen if the first instruction ends on byte 63 of a cache line, and the second instruction is a conditional branch that starts at byte 0 of the next cache line. -- Intel's optimization manual, 2.3.2.1
Or maybe with a short encoding for one jump or the other, things are different?
Or maybe busting the uop cache has nothing to do with it, and that's fine as long as it decodes fast, which this alignment makes happen. This amount of padding just barely puts the end of UCOMISD into a new 16B block, so maybe that actually improves efficiency by letting it decode with the other instructions in the next aligned 16B block. However, I'm not sure that a 16B pre-decode (instruction-length finding) or 32B decode block have to be aligned.
I also wondered if the CPU ends up switching from uop cache to legacy decode frequently. That can be worse than running from legacy decode all the time.
Switching from the decoders to the uop cache or vice versa takes a cycle, according to Agner Fog's microarch guide. Intel says:
When micro-ops cannot be stored in the Decoded ICache due to these restrictions, they are delivered from the legacy decode pipeline. Once micro-ops are delivered from the legacy pipeline, fetching micro-ops from the Decoded ICache can resume only after the next branch micro-op. Frequent switches can incur a penalty.
The source that I assembled + disassembled:
.skip 0x5e
nop
# this is 0x5F
#nop # OP needed 1B of padding to reach a 32B boundary
.skip 5, 0x90
.globl loop_start
loop_start:
.L37:
pxor %xmm0, %xmm0
cvtsi2sdq %rcx, %xmm0
ucomisd %xmm0, %xmm6
jb .Loop_exit // Exit the loop
.L20:
xorl %edx, %edx
movq %rbx, %rax
divq %rcx
testq %rdx, %rdx
je .Lnot_prime // Failed divisibility test
addq $1, %rcx
jns .L37
.skip 200 # comment this to make the jumps rel8 instead of rel32
.Lnot_prime:
.Loop_exit:
From what I can see in your algorithm, there is certainly not much you can do to improve it.
The problem you are hitting is probably not so much the branch to an aligned position (although that can still help); your current problem is much more likely the pipeline mechanism.
When you write two instructions one after another such as:
mov %eax, %ebx
add $1, %ebx
In order to execute the second instruction, the first one has to be complete. For that reason compilers tend to mix instructions. Say you also need to set %ecx to zero; you could do this:
mov %eax, %ebx
xor %ecx, %ecx
add $1, %ebx
In this case, the mov and the xor can both be executed in parallel. This makes things go faster... The number of instructions that can be handled in parallel varies a lot between processors (Xeons are generally better at that).
The branch adds another parameter where the best processors may start executing both sides of the branch (the true and the false...) simultaneously. But really most processors will make a guess and hope they are right.
Finally, it is obvious that converting the sqrt() result to an integer will make things a lot faster, since you will avoid all that nonsense with SSE2 code, which is definitely slower when used only for a conversion + compare that could be done with integer instructions.
Now... you are probably still wondering why the alignment does not matter with the integers. The fact is that if your code fits in the L1 instruction cache, then the alignment is not important. If you lose the L1 cache, then it has to reload the code and that's where the alignment becomes quite important since on each loop it could otherwise be loading useless code (possibly 15 bytes of useless code...) and memory access is still dead slow.
The performance difference can be explained by the different ways the instruction encoding mechanism "sees" the instructions. A CPU reads instructions in chunks (16 bytes on Core 2, I believe) and tries to feed micro-ops to the different superscalar units. If instructions straddle boundaries or are ordered unfavorably, the units in one core can starve quite easily.

Performance of modern processor

Being executed on a modern processor (AMD Phenom II 1090T), how many clock ticks is the following code more likely to consume: 3 or 11?
label: mov (%rsi), %rax
adc %rax, (%rdx)
lea 8(%rdx), %rdx
lea 8(%rsi), %rsi
dec %ecx
jnz label
The problem is, when I execute many iterations of such code, results vary near 3 OR 11 ticks per iteration from time to time. And I can't decide "who is who".
UPD
According to the Table of instruction latencies (PDF), my piece of code takes at least 10 clock cycles on the AMD K10 microarchitecture. Therefore, the seemingly impossible 3 ticks per iteration must be caused by bugs in measurement.
SOLVED
@Atom noticed that cycle frequency isn't constant in modern processors. When I disabled three options in the BIOS - Core Performance Boost, AMD C1E Support and AMD K8 Cool&Quiet Control - the runtime of my "six instructions" stabilized at 3 clock ticks :-)
I won't try to answer with certainty how many cycles (3 or 10) it will take to run each iteration, but I'll explain how it might be possible to get 3 cycles per iteration.
(Note that this is for processors in general and I make no references specific to AMD processors.)
Key Concepts:
Out of Order Execution
Register Renaming
Most modern (non-embedded) processors today are both super-scalar and out-of-order. Not only can they execute multiple (independent) instructions in parallel, but they can also re-order instructions to break dependencies and such.
Let's break down your example:
label:
mov (%rsi), %rax
adc %rax, (%rdx)
lea 8(%rdx), %rdx
lea 8(%rsi), %rsi
dec %ecx
jnz label
The first thing to notice is that the last 3 instructions before the branch are all independent:
lea 8(%rdx), %rdx
lea 8(%rsi), %rsi
dec %ecx
So it's possible for a processor to execute all 3 of these in parallel.
Another thing is this:
adc %rax, (%rdx)
lea 8(%rdx), %rdx
There seems to be a dependency on rdx that prevents the two from running in parallel. But in reality, this is a false dependency, because the second instruction doesn't actually depend on the output of the first instruction. Modern processors are able to rename the rdx register to allow these two instructions to be re-ordered or done in parallel.
Same applies to the rsi register between:
mov (%rsi), %rax
lea 8(%rsi), %rsi
So in the end, 3 cycles is (potentially) achievable as follows (this is just one of several possible orderings):
1: mov (%rsi), %rax lea 8(%rdx), %rdx lea 8(%rsi), %rsi
2: adc %rax, (%rdx) dec %ecx
3: jnz label
*Of course, I'm over-simplifying things here. In reality the latencies are probably longer and there's overlap between different iterations of the loop.
In any case, this could explain how it's possible to get 3 cycles. As for why you sometimes get 10 cycles, there could be a ton of reasons for that: branch misprediction, some random pipeline bubble...
Dr. David Levinthal's "Performance Analysis Guide" (written at Intel) investigates the answers to such questions in great detail.

Relative performance of x86 inc vs. add instruction

Quick question, assuming beforehand
mov eax, 0
which is more efficient?
inc eax
inc eax
or
add eax, 2
Also, in case the two incs are faster, do compilers (say, the GCC) commonly (i.e. w/o aggressive optimization flags) optimize var += 2 to it?
PS: Don't bother to answer with a variation of "don't prematurely optimize", this is merely academic interest.
Two inc instructions on the same register (or, more generally speaking, two read-modify-write instructions) always have a dependency chain of at least two cycles. This assumes a one-clock latency for an inc, which has been the case since the 486. That means if the surrounding instructions can't be interleaved with the two inc instructions to hide those latencies, the code will execute more slowly.
But no compiler will emit the instruction sequence you propose anyway (mov eax,0 will be replaced by xor eax,eax, see What is the purpose of XORing a register with itself?)
mov eax,0
inc eax
inc eax
it will be optimized to
mov eax,2
If you ever want to know the raw performance stats of x86 instructions, see Agner Fog's listings (volume 4 to be exact). As for the part about compilers, that's dependent on the compiler's code generator, and not something you should rely on too much.
On a side note: I find it funny/ironic that in a question about performance, you used MOV EAX,0 to zero a register instead of XOR EAX,EAX :P (and if MOV EAX,0 was done beforehand, the fastest variant would be to remove the incs and adds and just MOV EAX,2).
For all practical purposes, it probably doesn't matter. But take into account that inc uses fewer bytes.
Consider the following code:
int x = 0;
x += 2;
Without using any optimization flags, GCC compiles this code into:
80483ed: c7 44 24 1c 00 00 00 movl $0x0,0x1c(%esp)
80483f4: 00
80483f5: 83 44 24 1c 02 addl $0x2,0x1c(%esp)
Using -O1 and -O2, it becomes:
c7 44 24 08 02 00 00 movl $0x2,0x8(%esp)
Funny, isn't it?
From the Intel manual that you can find here, it looks like the ADD/SUB instructions are half a cycle cheaper on one particular architecture. But remember that Intel uses an out-of-order execution model for its (recent) processors. This primarily means that performance bottlenecks show up wherever the processor has to wait for data to come in (e.g. it ran out of things to do during the L1/L2/L3/RAM data fetch). So if your profiler tells you INC might be the problem, look at it from a data-throughput point of view instead of looking at raw cycle counts.
Instruction   Latency1               Throughput             Execution Unit2
CPUID         0F_3H     0F_2H        0F_3H     0F_2H        0F_2H
ADD/SUB       1         0.5          0.5       0.5          ALU
[...]
DEC/INC       1         1            0.5       0.5          ALU
