Why doesn't jnz count a cycle?

I found in an online resource that IvyBridge has 3 ALUs, so I wrote a small program to test it:
global _start
_start:
mov rcx, 10000000
.for_loop: ; do {
inc rax
inc rbx
dec rcx
jnz .for_loop ; } while (--rcx)
xor rdi, rdi
mov rax, 60 ; _exit(0)
syscall
I compile and run it with perf:
$ nasm -felf64 cycle.asm && ld cycle.o && sudo perf stat ./a.out
The output shows:
10,491,664 cycles
which seems to make sense at first glance: there are 3 independent ALU instructions in the loop (2 inc and 1 dec), so together they can take 1 cycle.
But what I don't understand is why the whole loop takes only 1 cycle. jnz depends on the result of dec rcx, so it should take a cycle of its own, making the whole loop 2 cycles. I would expect the output to be close to 20,000,000 cycles.
I also tried changing the second inc from inc rbx to inc rax, which makes it dependent on the first inc. The result does become close to 20,000,000 cycles, which shows that a dependency delays an instruction so the two can't run in the same cycle. So why is jnz special?
What am I missing here?

First of all, dec/jnz will macro-fuse into a single uop on Intel Sandybridge-family. You could defeat that by putting a non-flag-setting instruction between the dec and jnz.
.for_loop: ; do {
inc rax
dec rcx
lea rbx, [rbx+1] ; doesn't touch flags, defeats macro-fusion
jnz .for_loop ; } while (--rcx)
This will still run at 1 iter per cycle on Haswell and later and Ryzen because they have 4 integer execution ports to keep up with 4 uops per iteration. (Your loop with macro-fusion is only 3 fused-domain uops on Intel CPUs, so SnB/IvB can run it at 1 per clock, too.)
See Agner Fog's optimization guide and especially his microarch guide. Also other links in https://stackoverflow.com/tags/x86/info.
Control dependencies are hidden by branch prediction + speculative execution, unlike data dependencies.
Out-of-order execution and branch prediction + speculative execution hide the "latency" of the control dependency. i.e. the next iteration can start running before the CPU verifies that jnz should really be taken.
So each jnz has an input dependency on the previous dec rcx before it can verify the prediction, but later instructions don't have to wait for it to be checked before they can execute. In-order retirement makes sure that mis-speculation is caught before anything can "see" it happen (except for microarchitectural effects leading to the Spectre attack...)
10M iterations is not a lot. I'd normally use at least 100M for something that runs at only 1c per iter. Having a simple microbenchmark run for 0.1 to 1 second is normally good to get very high precision and hide startup overhead.
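For example, bumping the trip count in the loop above to 1 billion keeps a 1 c/iter loop busy for roughly a third of a second on a ~3 GHz CPU (a rough sketch; pick whatever count gives you 0.1 to 1 s on your machine):
mov rcx, 1000000000 ; ~1e9 cycles at 1 c/iter is about 0.33 s at 3 GHz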
And BTW, you don't need sudo perf if you set kernel.perf_event_paranoid = 0 with sysctl. It's almost certainly better to do that than to use sudo all the time.
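For example (a one-shot setting; it resets on reboot unless you also put it in /etc/sysctl.conf or /etc/sysctl.d/):
$ sudo sysctl -w kernel.perf_event_paranoid=0
$ perf stat ./a.out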

Related

Small branches in modern CPUs

How do modern CPUs like Kaby Lake handle small branches? (In the code below it is the jump to label LBB1_67.) From what I know, the branch should not be harmful because the jump distance is smaller than the 16-byte block size, which is the size of the decoding window.
Or is it possible that due to some macro op fusion the branch will be completely elided?
sbb rdx, qword ptr [rbx - 8]
setb r8b
setl r9b
mov rdi, qword ptr [rbx]
mov rsi, qword ptr [rbx + 8]
vmovdqu xmm0, xmmword ptr [rbx + 16]
cmp cl, 18
je .LBB1_67
mov r9d, r8d
.LBB1_67: # in Loop: Header=BB1_63 Depth=1
vpcmpeqb xmm0, xmm0, xmmword ptr [rbx - 16]
vpmovmskb ecx, xmm0
cmp ecx, 65535
sete cl
cmp rdi, qword ptr [rbx - 32]
sbb rsi, qword ptr [rbx - 24]
setb dl
and dl, cl
or dl, r9b
There are no special cases for short branch distances in any x86 CPUs. Even unconditional jmp to the next instruction (architecturally a nop) needs correct branch prediction to be handled efficiently; if you put enough of those in a row you run out of BTB entries and performance falls off a cliff. Slow jmp-instruction
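For example, something like this NASM sketch (repeat count chosen arbitrarily) is nothing but jumps to the next instruction, yet every one of them still needs a BTB entry and a correct prediction to keep the front-end fed:
times 4096 jmp $+2 ; 4096 back-to-back 2-byte short jmps, each targeting the very next instruction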
Fetch/decode is only a minor problem; yes a very short branch within the same cache line will still hit in L1i and probably uop cache. But it's unlikely that the decoders would special-case a predicted-taken forward jump and make use of pre-decode instruction-boundary finding from one block that included both the branch and the target.
When the instruction is being decoded to uops and fed into the front-end, register values aren't available; those are only available in the out-of-order execution back-end.
The major problem is that when the instructions after .LBB1_67: execute, the architectural state is different depending on whether the branch was taken or not.
And so is the micro-architectural state (RAT = Register Allocation Table).
Either:
r9 depends on the sbb/setl result (mov r9d, r8d didn't run)
r9 depends on the sbb/setb result (mov r9d, r8d did run)
Conditional branches are called "control dependencies" in computer-architecture terminology. Branch-prediction + speculative execution avoids turning control dependencies into data dependencies. If the je was predicted not taken, the setl result (the old value of r9) is overwritten by mov and is no longer available anywhere.
There's no way to recover from this after detecting a misprediction in the je (when it actually should have been taken), especially in the general case. Current x86 CPUs don't try to look for the fall-through path rejoining the taken path, or to figure out anything about what it does.
If cl wasn't ready for a long time, so a mispredict wasn't discovered for a long time, many instructions after the or dl, r9b could have executed using the wrong inputs. In the general case the only way to reliably + efficiently recover is to discard all work done on instructions from the "wrong" path. Detecting that vpcmpeqb xmm0, [rbx - 16] for example still runs either way is hard, and not looked for. (Modern Intel, since Sandybridge, has a Branch Order Buffer (BOB) that snapshots the RAT on branches, allowing efficient rollback to the branch miss as soon as execution detects it while still allowing out-of-order execution on earlier instructions to continue during the rollback. Before that a branch miss had to roll back to the retirement state.)
Some CPUs for some non-x86 ISAs (e.g. PowerPC, I think) have experimented with turning forward branches that skip exactly 1 instruction into predication (a data dependency) instead of speculating past them. e.g. Dynamic Hammock Predication for Non-predicated Instruction Set Architectures discusses this idea, and even deciding whether to predicate or not on a per-branch basis. If your branch-prediction history says this branch predicts poorly, predicating it instead could be good. (A hammock branch is one that jumps forward over one or a couple of instructions. Detecting the exactly-1-instruction case is trivial on an ISA with fixed-width instruction words, like a RISC, but hard on x86.)
In this case, x86 has the cmovcc instruction, an ALU select operation that produces one of its two inputs depending on a flag condition. cmp cl, 18 / cmovne r9d, r8d instead of cmp/je would make this immune to branch mispredictions, but at the cost of introducing a data dependency on cl, r8d, and the old r9d for instructions that use r9d. Intel CPUs don't do this conversion for you.
(On Broadwell and later Intel, cmov is only 1 uop, down from 2. cmp/jcc is 1 uop, and the mov itself is also 1 uop, so in the not-taken case cmov is also fewer uops for the front-end. And in the taken case, a taken branch can introduce bubbles in the pipeline even if predicted correctly, depending on how high throughput the code is: Whether queues between stages can absorb it.)
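A minimal sketch of that branchless transformation, keeping the register roles from the snippet above (note the condition becomes ne, because the mov originally executes on the not-equal fall-through path):
cmp cl, 18
cmovne r9d, r8d ; r9 = r8 only when cl != 18; no branch to mispredict, but r9 now depends on the flags, r8d, and the old r9d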
See gcc optimization flag -O3 makes code slower than -O2 for a case where CMOV is slower than a branch because introducing a data dependency is bad.

Why a dependency within a loop iteration can't be executed together with the previous one

I use this code to test the impact of a dependency within a loop iteration on IvyBridge:
global _start
_start:
mov rcx, 1000000000
.for_loop:
inc rax ; uop A
inc rax ; uop B
dec rcx ; uop C
jnz .for_loop
xor rdi, rdi
mov rax, 60 ; _exit(0)
syscall
Since dec and jnz will be macro-fused into a single uop, there are 3 uops in my loop; they are labeled in the comments.
uop B depends on uop A, so I think the execution would be like this:
A C
B A C ; the previous B and current A can be in the same cycle
B A C
...
B A C
B
Therefore the loop should be able to execute at 1 cycle per iteration.
However, the perf tool shows:
2,009,704,779 cycles
1,008,054,984 stalled-cycles-frontend # 50.16% frontend cycles idle
So it's 2 cycles per iteration, and there are 50% front-end idle cycles.
What causes the 50% front-end idle? Why can't the hypothetical execution schedule be realized?
B and A form a loop-carried dependency chain. A in the next iteration can't run until it has the result of B in the previous.
Any given B can never run in the same cycle as an A: what input would the later one use, if the earlier one hasn't produced a result yet?
This chain is 2 cycles long (per iteration), because the latency of inc is 1 cycle. This creates a latency bottleneck in the back-end that out-of-order execution can't hide. (Except for very low iteration counts where it can overlap it with code after the loop).
Just like if you fully unrolled a huge chain of times 102400 inc eax, there's no instruction-level parallelism for the CPU to find between a chain of instructions that each depend on the previous.
The macro-fused dec rcx/jnz uop is independent of the RAX chain, and is a shorter chain (only 1 cycle per iteration, being only 1 dec&branch uop with 1c latency). So it can run in parallel with B or A uops.
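So the steady-state schedule looks roughly like this (a sketch; the independent C uop can land in either cycle of each pair):
A C ; A must wait for the previous iteration's B
B   ; B waits 1 cycle for A's result
A C
B
...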
See my answer on another question for more about the concept of instruction-level parallelism and dependency chains, and how CPUs exploit that parallelism to run instructions in parallel when they're independent.
Agner Fog's microarch PDF shows this with examples in an early chapter: Chapter 2: Out-of-order execution (All processors except P1, PMMX).
If you started a new 2-cycle dep chain every iteration, it would run as you expect. A new chain forking off every iteration would expose instruction-level parallelism for the CPU to keep A and B from different iterations in flight at the same time.
.for_loop:
xor eax,eax ; dependency-breaking for RAX
inc rax ; uop A
inc rax ; uop B
dec rcx ; uop C
jnz .for_loop
Sandybridge-family handles xor-zeroing without an execution unit, so this is still only 3 unfused-domain uops in the loop, so IvyBridge has enough ALU execution ports to run all 3 in a single cycle. This also maxes out the front-end at 4 fused-domain uops per clock.
Or if you changed A to start a new dep chain in RAX with any instruction that unconditionally overwrites RAX without depending on the result of the inc, you'd be fine.
lea rax, [rdx + rdx] ; no dependency on B from last iter
inc rax ; uop B
Except for a couple instructions with an unfortunate output dependency: Why does breaking the "output dependency" of LZCNT matter?
popcnt rax, rdx ; false dependency on RAX, 3 cycle latency
inc rax ; uop B
On Intel CPUs, only popcnt and lzcnt/tzcnt have an output dependency for no reason. It's because they use the same execution unit as bsf/bsr, which leave the destination unmodified if the input is zero, on Intel and AMD CPUs. Intel still only documents the BSF/BSR output on paper as undefined when the input is zero, but they build hardware that implements stronger guarantees. (AMD even documents this BSF/BSR behaviour.) Anyway, Intel's BSF/BSR are like CMOV and need the destination as an input in case the source reg is 0. popcnt (and lzcnt/tzcnt on pre-Skylake) suffers from this, too.
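If you did want popcnt there, a common workaround (just a sketch) is to break the false output dependency yourself with xor-zeroing, which Sandybridge-family handles without an execution unit, as noted above:
xor eax, eax ; dependency-breaking: RAX no longer depends on the previous iteration
popcnt rax, rdx ; the false output dependency now reads the already-ready zeroed RAX
inc rax ; uop B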
If you made the loop more than 5 fused-domain uops, SnB/IvB could issue it at best 1 per 2 cycles from the front-end. Haswell and later "unroll" in the loop buffer or something so a 5 uop loop can run at ~1.25 c per iteration, but SnB/IvB don't. Is performance reduced when executing loops whose uop count is not a multiple of processor width?
The front-end issue/rename stage is 4 fused-domain uops wide in Intel CPUs since Core 2.

How does RIP-relative addressing perform compared to mov reg, imm64?

It is a known fact that x86-64 instructions do not support 64-bit immediate values (except for mov). Hence, when migrating code from 32 to 64 bits, an instruction like this:
cmp rax, addr32
cannot be replaced with the following:
cmp rax, addr64
Under these circumstances, I'm considering two alternatives: (a) using a scratch register for loading the constant or (b) using rip-relative addressing. The two approaches look like this:
mov r11, addr64 ; scratch register
cmp rax, r11
ptr64: dq addr64
...
cmp rax, [rel ptr64] ; encoded as cmp rax, [rip+offset]
I wrote a very simple loop to compare the performance of both approaches (which I paste below). While (b) uses an indirect pointer, (a) has the immediate encoded in the instruction (which could lead to worse usage of the i-cache). Surprisingly, I found that (b) runs ~10% faster than (a). Is this result something to be expected in more common real-world code?
true: dq 0xFFFF0000FFFF0000
false: dq 0xAAAABBBBAAAABBBB
main:
or rax, 1 ; rax is odd and constant "true" is even
mov rcx, 0x1
shl rcx, 30
branch:
mov r11, 0xFFFF0000FFFF0000 ; not present in (b)
cmp rax, r11 ; vs cmp rax, [rel true]
je next
add rax, 2
loop branch
next:
mov rax, 0
ret
Surprisingly, I found that (b) runs ~10% faster than (a)
You probably tested on a CPU other than AMD Bulldozer-family or Ryzen, which have a fast loop instruction. On other CPUs, loop is very slow, mostly on purpose for historical reasons, so you bottleneck on it. e.g. 7 uops, one per 5c throughput on Haswell.
mov r64, imm64 is bad for uop-cache throughput because the large immediate takes 2 slots in Intel's uop cache. (See the Sandybridge uop cache section in Agner Fog's microarch pdf, and Which is faster, imm64 or m64 for x86-64? where I listed the details.)
Even apart from that, it's not too surprising that 1 extra uop in the loop makes it run slower. You're probably not on an AMD CPU (with a single-uop / 1-per-2-clock loop), because the extra mov in such a tiny loop would make more than a 10% difference. Or no difference at all, since it's just 3 vs. 4 uops per 2 clocks, if it's correct that even tiny loops are limited to one taken jump per 2 clocks.
On Intel, loop is 7 uops, one per 5 clocks throughput on most CPUs, so the 4-per-clock issue/rename bottleneck won't be what you're hitting. loop is micro-coded, so the front-end can't run from the loop buffer. (And Skylake CPUs have their LSD disabled by a microcode update to fix the partial-register erratum anyway.) So the mov r64,imm64 uop has to be re-read from the uop cache every time through the loop.
A load that hits in cache has very good throughput (2 loads per clock, and in this case micro-fusion means no extra uops to use a memory operand instead of register for cmp). So the main penalty in using a constant from memory is the extra cache footprint and cache misses, but your microbenchmark won't reveal that at all. It also has no other pressure on the load ports.
In the general case:
If possible, use a RIP-relative lea to generate 64-bit address constants.
e.g. lea rax, [rel addr64]. Yes, this takes an extra instruction to get the constant into a register. (BTW, just use default rel. You can use [abs fs:0] if you need it.)
You can avoid the extra instruction if you build position-dependent code with the default (small) code model, so static addresses fit in the low 32 bits of virtual address space and can be used as immediates. (Actually the low 2 GiB, so sign- or zero-extending both work.) See 32-bit absolute addresses no longer allowed in x86-64 Linux? if gcc complains about absolute addressing; -pie is enabled by default on most distros. This of course doesn't work in Linux shared libraries, which only support text relocations for 64-bit addresses. But you should avoid relocations whenever possible by using lea to make position-independent code.
Most integer build-time constants fit in 32 bits, so you can use cmp r64, imm32 or cmp r32, imm32 even in PIC code.
If you do need a 64-bit non-address constant, try to hoist the mov r64, imm64 out of a loop. Your cmp loop would have been fine if the mov wasn't inside the loop. x86-64 has enough registers that you (or the compiler) can usually avoid reloads inside inner-most loops in integer code.
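For example, a sketch of the question's loop with the 64-bit constant hoisted (rax set up as in the question, and the slow loop instruction replaced with dec/jnz):
mov r11, 0xFFFF0000FFFF0000 ; load the constant once, outside the loop
mov rcx, 0x1
shl rcx, 30
branch:
cmp rax, r11 ; no mov r64, imm64 inside the loop any more
je next
add rax, 2
dec rcx ; dec/jnz macro-fuses and is much cheaper than loop on Intel
jnz branch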

Unexpected slowdown from inserting a nop in a loop, and from reading near a movnti store

I cannot understand why the first code runs at ~1 cycle per iteration and the second at 2 cycles per iteration. I measured with Agner's tools and perf. According to IACA it should take 1 cycle, and my theoretical computations agree.
This takes 1 cycle per iteration.
; array is array defined in section data
%define n 1000000
xor rcx, rcx
.begin:
movnti [array], eax
add rcx, 1
cmp rcx, n
jle .begin
And this takes 2 cycles per iteration. But why?
; array is array defined in section data
%define n 1000000
xor rcx, rcx
.begin:
movnti [array], eax
nop
add rcx, 1
cmp rcx, n
jle .begin
This final version takes ~27 cycles per iteration. But why? After all, there is no dependency chain.
.begin:
movnti [array], eax
mov rbx, [array+16]
add rcx, 1
cmp rcx, n
jle .begin
My CPU is IvyBridge.
movnti is 2 uops, and can't micro-fuse, according to Agner Fog's tables for IvyBridge.
So your first loop is 4 fused-domain uops, and can issue at one iteration per clock.
The nop is a 5th fused-domain uop (even though it doesn't need any execution port, so it's 0 unfused-domain uops). This means the front-end can only issue the loop at one iteration per 2 clocks.
See also the x86 tag wiki for more links to how CPUs work.
The 3rd loop is probably slow because mov rbx, [array+16] is loading from the same cache line that movnti evicts. This happens every time the fill buffer it's storing into is flushed. (Not on every movnti; apparently it can rewrite some bytes within the same fill buffer.)
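If that's the cause, loading from a different cache line should avoid most of the penalty (an untested sketch, assuming array is 64-byte aligned so array+64 is in the next line):
.begin:
movnti [array], eax
mov rbx, [array+64] ; a different 64-byte line than the one the NT store's fill buffer holds
add rcx, 1
cmp rcx, n
jle .begin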

Why is a memory round-trip faster than not performing the round-trip?

I've got some simple 32-bit code which computes the product of an array of 32-bit integers. The inner loop looks like this:
##loop:
mov esi,[ebx]
mov [esp],esi
imul eax,[esp]
add ebx, 4
dec edx
jnz ##loop
What I'm trying to understand is why the above code is 6% faster than these two versions of the code, which do not perform the redundant memory round-trip:
##loop:
mov esi,[ebx]
imul eax,esi
add ebx, 4
dec edx
jnz ##loop
and
##loop:
imul eax,[ebx]
add ebx, 4
dec edx
jnz ##loop
The two latter pieces of code execute in virtually the same time, and as mentioned both are 6% slower than the first piece (165ms vs 155ms, 200 million elements).
I've tried manually aligning the jump target to a 16 byte boundary, but it makes no difference.
I'm running this on an Intel i7 4770k, Windows 10 x64.
Note: I know the code could be improved by doing all sorts of optimizations, however I'm only interested in the performance difference between the above pieces of code.
I suspect but can't be sure that you are preventing a stall on a data dependency:
The code looks like this:
##loop:
mov esi,[ebx]   # (1) load the memory location into esi
(mov [esp],esi) # (1) optionally store that value on the stack
imul eax,[esp]  # (3) perform the multiplication
add ebx, 4      # (1) advance the pointer
dec edx         # (1) decrement the counter
jnz ##loop      # (0**) loop
Those numbers in brackets are the latencies of the instructions ... that jump is 0 if the branch predictor guesses correctly (which, since it will mostly loop, it will be most of the time).
So: while the multiplication is still in flight (3 cycles), we get back to the top of the loop after 2, try to issue the next load, and have to stall. Or we could do a store, which we can do at the same time as our multiplication, and then not stall at all.
What about the dummy store, you ask? Why does that work? Notice that you are storing to memory the critical value that we are multiplying with. Thus the processor can use this value while it is being stored to memory, and clobber the register.
So why can't the processor do this anyway? The processor can't produce more memory accesses than you ask it to, or it could interfere with multi-processor programs (imagine that the cache line you are writing to is shared and you have to invalidate it on the other CPUs every loop iteration by writing to it ... ouch!).
All of this is pure speculation, but it seems to match all the evidence (your code and my knowledge of the Intel architecture ... and x86 assembly). Hopefully someone can point out if I have something wrong.
