I've always been curious about the cost of jumps in assembly.
cmp ecx, edx
je SOME_LOCATION # What's the cost of this jump?
Does it need to do a search in a lookup table for each jump, or how does it work?
No, a jump doesn't do a search. The assembler resolves the label to an address, which in most cases is then converted to an offset from the current instruction. The address or offset is encoded in the instruction. At run time, the processor loads the address into the IP register or adds the offset to the current value of the IP register (along with all the other effects discussed by @Brendan).
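For illustration (the displacement bytes here are made up), the je above assembles to nothing more than an opcode plus a signed displacement relative to the next instruction, so there is nothing to look up at run time:
74 12                 ; je rel8  - opcode 0x74 plus a signed 8-bit displacement from the next instruction
0F 84 34 12 00 00     ; je rel32 - the longer form, used when the target is out of rel8 range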
There is a type of jump instruction that can be used to get the destination from a table. The jump instruction reads the address from a memory location. (The instruction specifies a single location, so there still is no “search”.) This instruction could look something like this:
jmp table[eax*4]
where eax is the index of the entry in the table containing the address to jump to.
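As a minimal sketch (the handler labels and the index value are hypothetical), such a table can be laid out as 32-bit code addresses and indexed directly:
table:
    dd handler0          ; address of the code for case 0
    dd handler1          ; address of the code for case 1
    dd handler2          ; address of the code for case 2

    ; eax is assumed to already hold a valid index (0, 1, or 2)
    jmp [table + eax*4]  ; one load of table[eax], then an indirect jump - no search
In 64-bit code the entries would be 8-byte addresses (dq) and the scale factor 8.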
Originally (e.g. 8086) the cost of a jump wasn't much different to the cost of a mov.
Later CPUs added caches, which meant some jumps were faster (because the code they jump to is in the cache) and some jumps were slower (because the code they jump to isn't in the cache).
Even later CPUs added "out of order" execution, where conditional branches (e.g. je SOME_LOCATION) would have to wait until the flags from "previous instructions that happen to be executed in parallel" became known.
This means that a sequence like
mov esi, edi
cmp ecx, edx
je SOME_LOCATION
can be slower than rearranging it to
cmp ecx, edx
mov esi, edi
je SOME_LOCATION
to increase the chance that the flags would be known.
Even later CPUs added speculative execution. In this case, for conditional branches the CPU just takes a guess at where it will branch to before it actually knows (e.g. before the flags are known), and if it guesses wrong it'll just pretend that it didn't execute the wrong instructions. More specifically, the speculatively executed instructions are tagged at the start of the pipeline and held at the end of the pipeline (at retirement) until the CPU knows if they can be committed to visible state or if they have to be discarded.
After that things just got more complicated, with fancier methods of doing branch prediction, additional "branch target" buffers, etc.
Far jumps that change the code segment are more expensive. In real mode it's not so bad because the CPU mostly only does "CS.base = value * 16" when CS is changed. For protected mode it's a table lookup (to find the GDT or LDT entry), decoding the entry, deciding what to do based on what kind of entry it is, then a pile of protection checks. For long mode it's vaguely similar. All of this adds more uncertainty (e.g. will the table entry be in cache?).
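For illustration only (the 0x08 selector assumes a code descriptor in GDT slot 1), a protected-mode far jump looks like this, and it is the reload of CS that triggers the descriptor lookup and protection checks described above:
    jmp 0x08:pm_entry   ; far jump: reloads CS from the GDT entry selected by 0x08
pm_entry:
    ; execution continues here with the new code segment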
On top of all of this there are things like TLB misses. For example, call [indirectAddress] can cause a TLB miss at indirectAddress, then a TLB miss at the top of the stack (for the pushed return address), then a TLB miss at the new instruction pointer; where each TLB miss can cost a few hundred cycles.
In short, the cost of a jump can be anything from 0 cycles (for a correctly predicted jump) to maybe 1000 cycles, depending on which CPU it is, what kind of jump it is, what is in the caches, what branch prediction predicts, cache/TLB misses, how fast/slow RAM is, and anything I may have forgotten.
For many years x86 CPUs supported the rdtsc instruction, which reads the "time stamp counter" of the current CPU. The exact definition of this counter has changed over time, but on recent CPUs it is a counter that increments at a fixed frequency with respect to wall clock time, so it is very useful as building block for a fast, accurate clock or measuring the time taken by small segments of code.
One important fact about the rdtsc instruction is that it isn't ordered in any special way with the surrounding code. Like most instructions, it can be freely reordered with respect to other instructions it isn't in a dependency relationship with. This is actually "normal", and for most instructions it's just a mostly invisible way of making the CPU faster (that's just a long-winded way of saying out-of-order execution).
For rdtsc it is important because it means you might not be timing the code you expect to be timing. For example, given the following sequence1:
rdtsc
mov ecx, eax
mov rdi, [rdi]
mov rdi, [rdi]
rdtsc
You might expect the rdtsc pair to measure the latency of the two pointer-chasing loads mov rdi, [rdi]. In practice, however, even if both of these loads take a long time (100s of cycles if they miss in the cache), you'll get a fairly small reading for the rdtsc pair. The problem is that the second rdtsc doesn't wait for the loads to finish; it just executes out of order, so you aren't timing the interval you think you are. Perhaps both rdtsc instructions actually execute before the first load even starts, depending on how rdi was calculated in the code prior to this example.
So far, this is sounding more like an answer to a question nobody asked than a real question, but I'm getting there.
You have two basic use-cases for rdtsc:
As a quick timestamp, in which case you usually don't care exactly how it reorders with the surrounding code, since you probably don't have an instruction-level concept of where the timestamp should be taken anyway.
As a precise timing mechanism, e.g., in a micro-benchmark. In this case you'll usually protect your rdtsc from re-ordering with the lfence instruction. For the example above, you might do something like:
lfence
rdtsc
lfence
mov ecx, eax
...
lfence
rdtsc
This ensures that the timed instructions (...) don't escape outside of the timed region, and also that instructions from outside the timed region don't come in (probably less of a problem, but they may compete for resources with the code you want to measure).
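Putting it together, here is a minimal 64-bit sketch of the same idea (r8 is an arbitrary scratch register, assumed not to be clobbered by the timed code; the bookkeeping instructions add a couple of cycles of overhead to the measurement):
    lfence
    rdtsc               ; counter returned in EDX:EAX
    lfence
    shl rdx, 32
    or  rax, rdx        ; combine into a 64-bit start timestamp
    mov r8, rax

    ; ... timed code ...

    lfence
    rdtsc
    shl rdx, 32
    or  rax, rdx
    sub rax, r8         ; rax = elapsed TSC ticks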
Years later, Intel looked down upon us poor programmers and came up with a new instruction: rdtscp. Like rdtsc it returns a reading of the time stamp counter, and this guy does something more: it reads a core-specific MSR value atomically with the timestamp reading. On most OSes this contains a core ID value. I think the idea is that this value can be used to properly adjust the returned value to real time on CPUs that may have different TSC offsets per core.
Great.
The other thing rdtscp introduced was half-fencing in terms of out-of-order execution:
From the manual:
The RDTSCP instruction is not a serializing instruction, but it does wait until all previous instructions have executed and all previous loads are globally visible. But it does not wait for previous stores to be globally visible, and subsequent instructions may begin execution before the read operation is performed.
So it's like putting an lfence before the rdtscp, but not after. What is the point of this half-fencing behavior? If you want a general timestamp and don't care about instruction ordering, the unfenced behavior is what you want. If you want to use this for timing short code sections, the half-fencing behavior is useful only for the second (final) reading, but not for the initial reading, since the fence is on the "wrong" side (in practice you want fences on both sides, but having them on the inside is probably the most important).
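Concretely, a common arrangement (a sketch, not the only possibility) keeps the explicit fences on the inside of the region and lets rdtscp provide the "wait for previous instructions" part at the end:
    rdtsc               ; start: EDX:EAX = TSC
    lfence              ; keep the timed code from starting before the read
    ; ... timed code ...
    rdtscp              ; end: waits for the timed code above to execute before reading;
                        ; also returns IA32_TSC_AUX (usually a core ID) in ECX
    lfence              ; keep later instructions from creeping back into the timed region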
What purpose does such half-fencing serve?
1 I'm ignoring the upper 32-bits of the counter in this case.
I'm confused about where to use cmov instructions and where to use jump instructions in assembly.
From performance point of view:
What is the difference in both of them?
Which one is better?
If possible, please explain their difference with an example.
movcc is a so-called predicated instruction. That's fancy-speak for "this instruction executes under a condition (predicate)".
Many processors, including the x86, set the condition code bits after doing an arithmetic operation (especially a compare instruction) to indicate the status of the result of the operation.
A conditional jump instruction checks the condition code bits for a status, and if true, jumps to a designated target.
Because the jump is conditional, and the processor typically has a deep pipeline, the condition code bits may literally not be ready for the jmp instruction to process when the CPU encounters the jmp instruction. The chip designers could simply wait for the pipeline to drain (often many clock cycles), and then execute the jmp, but that would make the processor slow.
Instead, most of them choose to have a branch prediction algorithm, which predicts which way a conditional jump will go. The processor can then fetch, decode, and execute the predicted branch (or not), and continue fast execution, with the proviso that if the condition code bits that finally arrive turn out to be wrong for the conditional (a branch mispredict), the processor undoes all work it did after the branch, and re-executes the program going down the other path.
Conditional jumps are harder for pipelined execution than normal data dependencies, because they can change which instruction should be next in the stream of instructions flowing through the pipeline. This is called a control dependency, as opposed to a data dependency (like an add where both inputs are outputs of other recent instructions).
The branch predictors turn out to be very good, because most branches tend to be biased in their direction. (The branch at the end of a loop typically branches back to the top.) So most of the time the processor doesn't have to back out of wrongly predicted work.
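For example, the backward branch that closes a counted loop is taken on every iteration except the last, so the predictor gets it right almost every time (the iteration count here is arbitrary):
    mov ecx, 1000
loop_top:
    ; ... loop body ...
    dec ecx
    jnz loop_top        ; taken 999 times, not taken once: trivially predictable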
If the direction of the branch is highly unpredictable, then the processor will guess wrong about 50% of the time, thus have to back out work. That's expensive.
OK, now, one often finds code like this:
cmp ...
jcc skip      ; if the condition holds, skip the move
mov register1, register2
skip: ; continue here
...
; use register1
If the branch predictor guesses right, this code is fast, no matter which way the branch goes. If it guesses wrong a lot... ouch.
Thus the conditional move instruction. This is a move that conditionally moves data, based on the condition code bits. (Note that the sense of the condition is inverted relative to the jump version above, which skipped the move when its condition held.) We can rewrite the above:
cmp ...
movcc register1, register2
; continue here (no label needed any more)
...
; use register1
Now we have no branch instructions, and thus no mispredicts that make the processor undo all the work. Since there is no control dependency, the following instructions need to be fetched and decoded regardless of whether the movcc acts like a mov or nop. The pipeline can stay full without predicting the condition and speculatively executing instructions that use register1. (You could build a CPU that way, but it would defeat the purpose of movcc.)
movcc converts a control dependency into a data dependency. The CPU treats it exactly like a 3-input math instruction, with the inputs being EFLAGS and its two "regular" inputs (dest register and source register-or-memory). On x86, adc is identical to cmovae (mov if CF==0) as far as how out-of-order execution tracks the dependencies: inputs are CF, and both GP registers. Output is the destination register.
For the x86, there are cmovcc, jcc, and setcc instructions for every condition combination cc. (setcc sets the destination to 0 or 1, according to the condition. So it has a data dependency on the flags, and no other input dependencies.)
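For example, a comparison result can be materialized into a register without any branch (the registers and the signed-less-than condition are chosen arbitrarily):
    cmp   ecx, edx
    setl  al            ; al = 1 if ecx < edx (signed), else 0
    movzx eax, al       ; zero-extend so the rest of eax is well defined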
When profiling code at the assembly instruction level, what does the position of the instruction pointer really mean, given that modern CPUs don't execute instructions serially or in order? For example, assume the following x64 assembly code:
mov RAX, [RBX]; // Assume a cache miss here.
mov RSI, [RBX + RCX]; // Another cache miss.
xor R8, R8;
add RDX, RAX; // Dependent on the load into RAX.
add RDI, RSI; // Dependent on the load into RSI.
Which instruction will the instruction pointer spend most of its time on? I can think of good arguments for all of them:
mov RAX, [RBX] is taking probably 100s of cycles because it's a cache miss.
mov RSI, [RBX + RCX] also takes 100s of cycles, but probably executes in parallel with the previous instruction. What does it even mean for the instruction pointer to be on one or the other of these?
xor R8, R8 probably executes out-of-order and finishes before the memory loads finish, but the instruction pointer might stay here until all previous instructions are also finished.
add RDX, RAX generates a pipeline stall because it's the instruction where the value of RAX is actually used after a slow cache-miss load into it.
add RDI, RSI also stalls because it's dependent on the load into RSI.
CPUs maintain the fiction that there are only the architectural registers (RAX, RBX, etc.) and that there is a single, specific instruction pointer (IP). Programmers and compilers target this fiction.
Yet as you noted, modern CPUs don't execute serially or in order. Until you, the programmer / user, request the IP, it is like quantum physics: the IP is a wave of instructions being executed, all so that the processor can run the program as fast as possible. When you request the current IP (for example, via a debugger breakpoint or profiler interrupt), the processor must recreate the fiction that you expect, so it collapses this wave form (all "in flight" instructions), gathers the register values back into architectural names, and builds a context for executing the debugger routine, etc.
In this context, there is an IP that indicates the instruction where the processor should resume execution. During the out-of-order execution, this instruction was the oldest instruction yet to complete, even though at the time of the interrupt the processor was perhaps fetching instructions well past that point.
For example, perhaps the interrupt indicates mov RSI, [RBX + RCX]; as the IP, even though the xor had already executed and completed; when the processor resumes execution after the interrupt, it will re-execute the xor.
It's a good question, but in the kind of performance tuning I do, it doesn't matter.
It doesn't really matter because what you're looking for is speed-bugs.
These are things that the code is doing that take clock time and that could be done better or not at all. Examples:
- Spending I/O time looking in DLLs for resources that don't, actually, need to be looked for.
- Spending time in memory-allocation routines making and freeing objects that could simply be re-used.
- Re-calculating things in functions that could be memo-ized.
... these are just a few off the top of my head.
Your biggest enemy is a self-congratulatory tendency to say "I wouldn't consciously write any bugs. Why would I?" Of course, you know that's why you test software. But the same goes for speed-bugs, and if you don't know how to find those you assume there are none, which is a way of saying "My code has no possible speedups, except maybe a profiler can show me how to shave a few cycles."
In my half-century of experience, there is no code that, as first written, contains no speed-bugs. What's more, there's an enormous multiplier effect, where every speed-bug you remove makes the remaining ones more obvious. As a contrived example, suppose bug A accounts for 90% of clock time, and bug B accounts for 9%. If you only fix B, big deal - the code is about 10% faster. If you only fix A, that's good - it's 10x faster. But if you fix both, that's really good - it's 100x faster. Fixing A made B big.
So the thing you need most in performance tuning is to find the speed-bugs, and not miss any. When you've done all that, then you can get down to cycle-shaving.
After reading this post (answer on StackOverflow) (at the optimization section), I was wondering why conditional moves are not vulnerable to Branch Prediction Failure. I found an article on cond moves here (PDF by AMD). There they also claim the performance advantage of cond. moves. But why is this? I don't see it. At the moment that ASM instruction is evaluated, the result of the preceding CMP instruction is not known yet.
Mis-predicted branches are expensive
A modern processor generally executes between one and three instructions each cycle if things go well (if it does not stall waiting for data dependencies for these instructions to arrive from previous instructions or from memory).
The statement above holds surprisingly well for tight loops, but this shouldn't blind you to one additional dependency that can prevent an instruction from being executed when its cycle comes:
for an instruction to be executed, the processor must have started to fetch and decode it 15-20 cycles before.
What should the processor do when it encounters a branch? Fetching and decoding both targets does not scale (if more branches follow, an exponential number of paths would have to be fetched in parallel). So the processor only fetches and decodes one of the two branches, speculatively.
This is why mis-predicted branches are expensive: they cost the 15-20 cycles that are usually invisible because of an efficient instruction pipeline.
Conditional move is never very expensive
Conditional move does not require prediction, so it can never have this penalty. It has data dependencies, same as ordinary instructions. In fact, a conditional move has more data dependencies than ordinary instructions, because the data dependencies include both the "condition true" and "condition false" cases. After an instruction that conditionally moves r1 to r2, the contents of r2 seem to depend on both the previous value of r2 and on r1. A well-predicted conditional branch allows the processor to infer more accurate dependencies. But data dependencies typically take one or two cycles to arrive, if they need time to arrive at all.
Note that a conditional move from memory to register would sometimes be a dangerous bet: if the condition is such that the value read from memory is not assigned to the register, you have waited on memory for nothing. But the conditional move instructions offered in instruction sets are typically register to register, preventing this mistake on the part of the programmer.
It is all about the instruction pipeline. Remember, modern CPUs run their instructions in a pipeline, which yields a significant performance boost when the execution flow is predictable by the CPU.
cmov
add eax, ebx
cmp eax, 0x10
cmovne ebx, ecx
add eax, ecx
At the moment that ASM instruction is evaluated, the result of the preceding CMP instruction is not known yet.
Perhaps, but the CPU still knows that the instruction following the cmov will be executed right after, regardless of the result of the cmp and cmov instructions. The next instruction may thus safely be fetched/decoded ahead of time, which is not the case with branches.
The next instruction could even execute before the cmov does (in my example this would be safe).
branch
add eax, ebx
cmp eax, 0x10
je .skip
mov ebx, ecx
.skip:
add eax, ecx
In this case, when the CPU's decoder sees je .skip it will have to choose whether to continue prefetching/decoding instructions either 1) from the next instruction, or 2) from the jump target. The CPU will guess that this forward conditional branch won't happen, so the next instruction mov ebx, ecx will go into the pipeline.
A couple of cycles later, the je .skip is executed and the branch is taken. Oh crap! Our pipeline now holds some random junk that should never be executed. The CPU has to flush the wrongly fetched instructions and start fresh from .skip:.
That is the performance penalty of mispredicted branches, which can never happen with cmov since it doesn't alter the execution flow.
Indeed the result may not yet be known, but if other circumstances permit (in particular, the dependency chain) the cpu can reorder and execute instructions following the cmov. Since there is no branching involved, those instructions need to be evaluated in any case.
Consider this example:
cmoveq edx, eax
add ecx, ebx
mov eax, [ecx]
The two instructions following the cmov do not depend on the result of the cmov, so they can be executed even while the cmov itself is pending (this is called out of order execution). Even if they can't be executed, they can still be fetched and decoded.
A branching version could be:
jne skip
mov edx, eax
skip:
add ecx, ebx
mov eax, [ecx]
The problem here is that control flow is changing and the cpu isn't clever enough to see that it could just "insert" the skipped mov instruction if the branch was mispredicted as taken - instead it throws away everything it did after the branch, and restarts from scratch. This is where the penalty comes from.
You should read these. With Fog+Intel, just search for CMOV.
Linus Torvalds' critique of CMOV circa 2007
Agner Fog's comparison of microarchitectures
Intel® 64 and IA-32 Architectures Optimization Reference Manual
Short answer: correct predictions are 'free', while conditional branch mispredicts can cost 14-20 cycles on Haswell. However, CMOV is never free. Still, I think CMOV is a LOT better now than when Torvalds ranted. There is no single answer that is correct for all time on all processors.
I have this illustration from a [Peter Puschner et al.] slide which explains how it transforms into single-path code and speeds up the execution.