As we know from C11 memory_order: http://en.cppreference.com/w/c/atomic/memory_order
And the same from C++11-std::memory_order: http://en.cppreference.com/w/cpp/atomic/memory_order
On strongly-ordered systems (x86, SPARC, IBM mainframe),
release-acquire ordering is automatic. No additional CPU instructions
are issued for this synchronization mode, only certain compiler
optimizations are affected (e.g. the compiler is prohibited from
moving non-atomic stores past the atomic store-release or perform
non-atomic loads earlier than the atomic load-acquire)
But is this true for x86 SSE instructions (other than the non-temporal [NT] ones, where we must always use L/S/MFENCE)?
It is said here that for "sse instructions ... [there] is no requirement on backwards compatibility and memory order is undefined". The belief is that strict ordering was kept for compatibility with older x86 processors, where it was needed, but that the newer instructions, namely SSE (other than [NT]), are automatically deprived of release-acquire ordering. Is that so?
Here is an excerpt from Intel's Software Developer's Manual, Volume 3, Section 8.2.2 (edition 325384-052US of September 2014):
Reads are not reordered with other reads.
Writes are not reordered with older reads.
Writes to memory are not reordered with other writes, with the following exceptions:
writes executed with the CLFLUSH instruction;
streaming stores (writes) executed with the non-temporal move instructions (MOVNTI, MOVNTQ, MOVNTDQ, MOVNTPS, and MOVNTPD); and
string operations (see Section 8.2.4.1).
Reads may be reordered with older writes to different locations but not with older writes to the same location.
Reads or writes cannot be reordered with I/O instructions, locked instructions, or serializing instructions.
Reads cannot pass earlier LFENCE and MFENCE instructions.
Writes cannot pass earlier LFENCE, SFENCE, and MFENCE instructions.
LFENCE instructions cannot pass earlier reads.
SFENCE instructions cannot pass earlier writes.
MFENCE instructions cannot pass earlier reads or writes.
The first three bullets describe the release-acquire ordering, and the exceptions are explicitly listed there. As you can see, only the cacheability-control instructions (MOVNT*) are in the exception list, while the rest of the SSE/SSE2 and other vector instructions obey the general memory-ordering rules and do not require use of [LSM]FENCE.
It is true that normal1 SSE load and store instructions, as well as the implied load when using a memory source operand, have the same acquire and release behavior in terms of ordering as normal loads and stores of GP registers.
They are not, however, generally useful directly to implement std::memory_order_acquire or std::memory_order_release operations on std::atomic objects larger than 8 bytes because there is no guarantee of atomicity for SSE or AVX loads and stores of larger than 8 bytes. The missing guarantee isn't just theoretical: there are several implementations (including brand new ones like AMD's Ryzen) that split large loads or stores up into two smaller ones.
1 I.e., those not listed in the exception list in the accepted answer: NT stores, clflush and string operations.
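To make the atomicity caveat concrete, here is a minimal sketch (assuming GCC or Clang on x86-64; the struct and variable names are illustrative). An 8-byte std::atomic is lock-free, while a 16-byte payload typically is not, precisely because plain SSE loads/stores give no architectural atomicity guarantee at that width:

// Minimal sketch (assumptions: GCC/Clang on x86-64, illustrative names).
#include <atomic>
#include <cstdio>

struct Pair { long a, b; };   // 16 bytes

int main() {
    std::atomic<long> small{0};
    std::atomic<Pair> big{Pair{0, 0}};
    std::printf("8-byte lock-free:  %d\n", (int)small.is_lock_free());  // expect 1
    std::printf("16-byte lock-free: %d\n", (int)big.is_lock_free());    // often 0 (falls back to a lock or cmpxchg16b)
    return 0;
}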
Related
I came across this code emission for x64 where "Atomic Load" is using a simple movq whereas "Atomic Store" is using xchgq.
This link explains that Atomic Load/Stores on aligned addresses are atomic by default. I'm assuming that's why Atomic Load in the above link is using a simple movq.
I have the following questions:
Is Atomic Store using an xchgq (which implies LOCK by default) to fix any issues with cache lines? Essentially, is it making sure all cache lines are updated properly? If cache lines weren't an issue, could it have just used movq?
Does it also mean cache coherency is only an issue when storing, given that the load above is not using a locked instruction?
No, seq_cst stores use xchg (or mov + mfence but that's slower on recent CPUs) for ordering wrt. other operations. release or relaxed atomic stores can just use mov and will still be promptly visible to other cores. (Not before later loads in this thread might have executed, though.)
Cache coherence isn't the cause of memory-reordering, that's local to each core. (For x86, the memory model is program order + a store buffer with store-forwarding. It's the store buffer that causes stores to not become visible until after the store instruction has retired from out-of-order exec.)
The answer you linked, which says "if I set this to true (or false), no other thread will read a different value after I've set it" (that's not quite such a certainty - you need a "lock" prefix to guarantee that), is somewhat misleading. They mean that (implicit-lock) xchg includes a full memory barrier, so no code in the storing thread can access memory until after the store is actually committed to cache, globally visible.
A clearer way to state that is that it makes this thread wait without doing anything until the store is visible. i.e. stall this thread until the store buffer has finished committing all previous stores. That would eventually happen on its own. So it's really about ordering of this thread relative to store visibility, not other threads. Other threads (cores) can locally do their own early loading / late storing, although on x86 all loads happen in program order. That's why I commented on that answer you linked to disagree with the way it was presenting things.
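As a hedged illustration of the difference being discussed (assuming x86-64 and GCC/Clang; typical code generation is noted in the comments, so check your own compiler's output):

// Sketch of the two store flavours discussed above (assumed x86-64, GCC/Clang).
#include <atomic>

std::atomic<int> flag{0};

void store_seq_cst(int v) {
    flag.store(v, std::memory_order_seq_cst);  // usually: xchg eax, [flag]   (full barrier)
}

void store_release(int v) {
    flag.store(v, std::memory_order_release);  // usually: mov [flag], eax    (no extra fence on x86)
}

int load_acquire() {
    return flag.load(std::memory_order_acquire);  // usually: mov eax, [flag]
}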
Can a speculatively executed CPU branch contain opcodes that access RAM? (What a store buffer does)
C++ How is release-and-acquire achieved on x86 only using MOV? discusses cache-coherency and limits on local reordering being enough to give release/acquire synchronization.
Why does a std::atomic store with sequential consistency use XCHG?
Acquire-release on x86
Which is a better write barrier on x86: lock+addl or xchgl? - shows in more detail why we need xchg or a separate memory barrier for a seq_cst store.
https://software.rajivprab.com/2018/04/29/myths-programmers-believe-about-cpu-caches/ (It's talking about Java volatile, which is like C++ std::atomic with memory_order_seq_cst.)
How does memory reordering help processors and compilers?
Does atomic read guarantees reading of the latest value? - people often get hung up on "latest value" guarantees. Don't. Acquire/release just works, and stronger orders or memory barriers don't make stores visible to other cores sooner in any significant way.
I have the following scenario and am looking for suggestions, please:
I need to share data between two threads, A and B, each running on a different core of the same processor, where thread A writes to an instance of data structure S and thread B reads it. I need the sharing of S to be as consistent and as fast as possible.
struct alignas(64) S
{
char cacheline [64];
};
I plan to leverage the consistency of a cache line, which becomes visible to other cores as an atomic update. Thread A therefore writes to S as fast as possible (*1) so the update is consistent (atomic from a visibility perspective), and then demotes the cache line (CLDEMOTE instruction) to the shared cache so that thread B can read it as fast as possible.
Note 1: The reason it needs to happen fast is that when the core running thread A starts writing to the cache line, it should update all of its contents completely before the core makes it visible in L1 (updates occur in the core's store buffer). Otherwise, if it takes too long, a "mid-state" of the cache line may be pushed to L1, incurring unnecessary invalidation (MESI) penalties (as the rest has to be done again) and, worse, an inconsistent state in thread B.
Are there better ways to achieve this?
Thanks!
Yes, store then cldemote is a good plan. It runs as a nop on CPUs that don't support it, so you can use it optimistically. (Test that it actually helps your program on CPUs where it's not a nop, though, in case you accidentally demote before reading the line some more.)
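A hypothetical writer-side sketch of that plan, assuming a compiler that exposes the _cldemote intrinsic in <immintrin.h> (e.g. GCC/Clang built with -mcldemote); the publish function and the memcpy payload are illustrative only:

// Hypothetical writer-side sketch: fill the line, then demote it toward the
// shared cache. On CPUs without CLDEMOTE the instruction executes as a NOP.
#include <immintrin.h>
#include <cstring>

struct alignas(64) S { char cacheline[64]; };   // same layout as in the question

void publish(S* shared, const char* src) {
    std::memcpy(shared->cacheline, src, 64);    // write the whole line quickly
    _cldemote(shared);                          // hint: push the line out of L1 toward the shared cache
}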
Do you actually need atomicity, or is that just nice to have some of the time? If you need atomicity, you can't use separate store instructions. Coalescing in the store buffer isn't guaranteed; for L1d hits it may only sometimes happen on Ice Lake. And an interrupt can happen at any point (unless interrupts are disabled, but SMI and NMI can't be disabled). Including between two stores you were hoping would commit together.
32-byte AVX aligned loads and stores aren't guaranteed atomic, but in practice they probably are on Haswell and later (where the load/store units are 32 bytes wide).
Similarly, 64-byte AVX-512 loads and stores aren't guaranteed atomic, and very likely won't be in practice on Zen4 where they're done in two 32-byte halves. But they probably are on Intel CPUs with AVX-512, if you want to do some testing and find some "works in practice" functionality that doesn't show any tearing on the actual machine you care about.
16-byte loads/stores are guaranteed atomic on Intel CPUs that have the AVX feature flag. (Finally documented after being true for years, fortunately retroactive with an existing feature bit.) AMD doesn't document this yet, but it's probably true of AMD CPUs with AVX, too.
Related: https://rigtorp.se/isatomic/ / SSE instructions: which CPUs can do atomic 16B memory operations?
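A sketch of what relying on that AVX-era guarantee looks like (platform-specific, not portable C++; names are illustrative). A single aligned 16-byte load/store instruction addresses only hardware atomicity - the compiler is still free to reorder or elide plain accesses, so real code would additionally need atomics, fences, or inline asm to pin down ordering:

// Sketch: one aligned 16-byte SSE load/store per access, relying on the
// guarantee documented for Intel CPUs with the AVX feature flag (and probably
// true on AMD AVX CPUs). Verify the CPU before depending on this.
#include <immintrin.h>

alignas(16) __m128i shared_16B;                     // assumed 16-byte-aligned shared storage

void store_16B(__m128i v) {
    _mm_store_si128(&shared_16B, v);                // single aligned 16-byte store instruction
}

__m128i load_16B() {
    return _mm_load_si128(&shared_16B);             // single aligned 16-byte load instruction
}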
movdir64b will provide guaranteed 64-byte write atomicity, but only with NT semantics: evicting the cache line all the way to DRAM. It also doesn't provide 64-byte atomic read, so the read side would need to check sequence numbers or something, like a SeqLock.
Intel TSX (transactional memory) can let you commit changes to a whole cache line (or more) as a single atomic transaction. But Intel keeps disabling it with microcode updates. The HLE part (optimistic lock add handling) is fully gone, but the RTM part (xbegin / xend) can still be enabled on some CPUs, I think.
For a use case like this where one thread is only writing, you might consider a SeqLock, using 4 bytes of the cache line as a sequence number. Optimal way to pass a few variables between 2 threads pinning different CPUs / how to implement a seqlock lock using c++11 atomic library
The writer can load the sequence number, store seq+1 (with a plain mov store, no lock inc needed), store the payload with regular stores, or SIMD if convenient, then store seq+2.
Unfortunately without guarantees of vector load/store atomicity, or of ordering between parts of it, you can't have the reader just load the whole cache line at once, you do need 3 separate loads. (Seq number, whole line, then seq number again.)
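A minimal SeqLock sketch following exactly that scheme (illustrative names; single writer, any number of readers). Note that the payload copy in the reader is formally a data race in the C++ abstract machine - the sequence-number check is what makes it benign in practice:

// Minimal SeqLock sketch: seq+1, payload, seq+2 in the writer; seq, payload,
// seq re-check in the reader.
#include <atomic>
#include <cstring>

struct alignas(64) Shared {
    std::atomic<unsigned> seq{0};   // even = stable, odd = write in progress
    char payload[60];
};

void write(Shared& s, const char* src) {
    unsigned q = s.seq.load(std::memory_order_relaxed);      // single writer assumed
    s.seq.store(q + 1, std::memory_order_relaxed);           // mark "write in progress"
    std::atomic_thread_fence(std::memory_order_release);     // order the seq+1 store before the payload stores
    std::memcpy(s.payload, src, sizeof(s.payload));
    s.seq.store(q + 2, std::memory_order_release);           // publish
}

bool try_read(const Shared& s, char* dst) {
    unsigned q1 = s.seq.load(std::memory_order_acquire);
    if (q1 & 1) return false;                                 // writer currently active
    std::memcpy(dst, s.payload, sizeof(s.payload));
    std::atomic_thread_fence(std::memory_order_acquire);      // order the payload loads before the re-check
    return q1 == s.seq.load(std::memory_order_relaxed);       // false means torn read: retry
}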
But if you want to use 32-byte atomicity which appears to be true in practice on Haswell and Zen2 and later, maybe put a sequence number in each 32-byte half of a cache line, so the reader can check with vpcmpeqd / vpmovmskps / test al,1 to check that the first dword element (sequence number) matched between halves. Or maybe put them somewhere else within the vector to make reassembling the payload cheaper.
This spends space for two sequence numbers to save loads in the reader, but might cost more overhead in shuffling data into / out of vectors. I guess maybe storing with vmovdqu [rdi+28], ymm1 / vmovdqu [rdi], ymm0 could leave you with 60 useful bytes starting at rdi+4, overwriting the 4-byte sequence number at the start of ymm1. Store-forwarding to a 32-byte load from [rdi+4] would stall, but narrower loads that don't span the boundary between the two earlier stores would be fine.
Related Q&As about solving the same problem of pushing data for other cores to be able to read cheaply:
CPU cache inhibition
x86 MESI invalidate cache line latency issue
Why didn't x86 implement direct core-to-core messaging assembly/cpu instructions? - Sapphire Rapids has UIPI for user-space interrupt handling of special inter-processor interrupts. So that's fun if you want low latency notification. If you just want to read whatever the current state of a shared data structure is, SeqLock or RCU are good.
On the one hand, Wikipedia writes about the steps of out-of-order execution:
Instruction fetch.
Instruction dispatch to an instruction queue (also called instruction buffer or reservation stations).
The instruction waits in the queue until its input operands are available. The instruction is then allowed to leave the queue before earlier, older instructions.
The instruction is issued to the appropriate functional unit and executed by that unit.
The results are queued.
Only after all older instructions have their results written back to the register file, then this result is written back to the register file. This is called the graduation or retire stage.
Similar information can be found in the "Computer Organization and Design" book:
To make programs behave as if they were running on a simple in-order
pipeline, the instruction fetch and decode unit is required to issue
instructions in order, which allows dependences to be tracked, and the
commit unit is required to write results to registers and memory in
program fetch order. This conservative mode is called in-order
commit... Today, all dynamically scheduled pipelines use in-order commit.
So, as far as I understand, even if the instructions execution is done in the out-of-order manner, the results of their executions are preserved in the reorder buffer and then committed to the memory/registers in a deterministic order.
On the other hand, there is a known fact that modern CPUs can reorder memory operations for performance reasons (for example, two adjacent independent load instructions can be reordered). Wikipedia writes about it here.
Could you please shed some light on this discrepancy?
TL:DR: memory ordering is not the same thing as out of order execution. It happens even on in-order pipelined CPUs.
In-order commit is necessary1 for precise exceptions that can roll back to exactly the instruction that faulted, without any instructions after it having already retired. The cardinal rule of out-of-order execution is don't break single-threaded code. If you allowed out-of-order commit (retirement) without any other kind of mechanism, you could have a page fault happen while some later instructions had already executed once, and/or while some earlier instructions hadn't executed yet. This would make restarting execution after handling a page fault impossible the normal way.
(In-order issue/rename and dependency-tracking takes care of correct execution in the normal case of no exceptions.)
Memory ordering is all about what other cores see. Also notice that what you quoted is only talking about committing results to the register file, not to memory.
(Footnote 1: Kilo-instruction Processors: Overcoming the Memory Wall is a theoretical paper about checkpointing state to allow rollback to a consistent machine state at some point before an exception, allowing much larger out-of-order windows without a gigantic ROB of that size. AFAIK, no mainstream commercial designs have used that, but it shows that there are in theory approaches other than strictly in-order retirement to building a usable CPU.
Apple's M1 reportedly has a significantly larger out-of-order window than its x86 contemporaries, but I haven't seen any definite info that it uses anything other than a very large ROB.)
Since each core's private L1 cache is coherent with all the other data caches in the system, memory ordering is a question of when instructions read or write cache. This is separate from when they retire from the out-of-order core.
Loads become globally visible when they read their data from cache. This is more or less when they "execute", and definitely way before they retire (aka commit).
Stores become globally visible when their data is committed to cache. This has to wait until they're known to be non-speculative, i.e. that no exceptions or interrupts will cause a roll-back that has to "undo" the store. So a store can commit to L1 cache as early as when it retires from the out-of-order core.
But even in-order CPUs use a store queue or store buffer to hide the latency of stores that miss in L1 cache. The out-of-order machinery doesn't need to keep tracking a store once it's known that it will definitely happen, so a store insn/uop can retire even before it commits to L1 cache. The store buffer holds onto it until L1 cache is ready to accept it. i.e. when it owns the cache line (Exclusive or Modified state of the MESI cache coherency protocol), and the memory-ordering rules allow the store to become globally visible now.
See also my answer on Write Allocate / Fetch on Write Cache Policy
As I understand it, a store's data is added to the store queue when it "executes" in the out-of-order core, and that's what a store execution unit does. (Store-address writing the address, and store-data writing the data into the store-buffer entry reserved for it at allocation/rename time, so either of those parts can execute first on CPUs where those parts are scheduled separately, e.g. Intel.)
Loads have to probe the store queue so that they see recently-stored data.
For an ISA like x86, with strong ordering, the store queue has to preserve the memory-ordering semantics of the ISA. i.e. stores can't reorder with other stores, and stores can't become globally visible before earlier loads. (LoadStore reordering isn't allowed (nor is StoreStore or LoadLoad), only StoreLoad reordering).
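The classic store-buffer litmus test demonstrates that one allowed reordering (a sketch; run it many times to have a chance of observing the effect). With a plain mov store + mov load, which is what release/acquire compile to on x86, both threads can read 0 in the same run; only seq_cst (xchg or mfence) forbids that outcome:

// Store-buffer litmus test for the StoreLoad reordering mentioned above.
#include <atomic>
#include <thread>
#include <cstdio>

std::atomic<int> x{0}, y{0};
int r1, r2;

int main() {
    for (int i = 0; i < 100000; ++i) {
        x.store(0, std::memory_order_relaxed);
        y.store(0, std::memory_order_relaxed);
        std::thread t1([] {
            x.store(1, std::memory_order_release);    // plain mov on x86
            r1 = y.load(std::memory_order_acquire);   // may read before the store commits to cache
        });
        std::thread t2([] {
            y.store(1, std::memory_order_release);
            r2 = x.load(std::memory_order_acquire);
        });
        t1.join(); t2.join();
        if (r1 == 0 && r2 == 0)
            std::printf("StoreLoad reordering observed on iteration %d\n", i);
    }
    return 0;
}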
David Kanter's article on how TSX (transactional memory) could be implemented in different ways than what Haswell does provides some insight into the Memory Order Buffer, and how it's a separate structure from the ReOrder Buffer (ROB) that tracks instruction/uop reordering. He starts by describing how things currently work, before getting into how it could be modified to track a transaction that can commit or abort as a group.
On x86/x64, non-temporal store instructions such as MOVNTI and MOVNTPS make weaker memory ordering guarantees than "regular" stores. I understand fences (e.g. SFENCE) are necessary when sharing memory that will be written to non-temporally across threads. However, are fence instructions ever necessary for thread-local memory? If I write to a location via MOVNTPS, is the write guaranteed to be visible to subsequent instructions in the same thread without any fence instruction?
Yes, they will be visible without fences. See section 8.2.2 Memory Ordering in P6 and More Recent Processor Families in the Intel® 64 and IA-32 Architectures Software Developer's Manual Volume 3A: System Programming Guide, Part 1, which says, among other things:
for memory regions defined as write-back cacheable, [...]
Reads may be reordered with older writes to different locations but
not with older writes to the same location.
and
Writes to memory are not reordered with other writes, with the
following exceptions:
-- streaming stores (writes) executed with the non-temporal move instructions
(MOVNTI, MOVNTQ, MOVNTDQ, MOVNTPS, and MOVNTPD);
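A short sketch of both cases using the SSE2 intrinsics (buffer and flag names are illustrative): reading the streamed data back in the same thread needs no fence, while publishing it to another thread needs an sfence before the flag store:

// Same-thread readback vs. cross-thread publication of a non-temporal store.
#include <immintrin.h>
#include <atomic>

int buffer[16];
std::atomic<bool> ready{false};

int same_thread() {
    _mm_stream_si32(&buffer[0], 42);   // non-temporal store
    return buffer[0];                  // OK: this thread reads 42 with no fence
}

void publish_to_other_thread() {
    _mm_stream_si32(&buffer[0], 42);
    _mm_sfence();                                   // drain write-combining buffers first
    ready.store(true, std::memory_order_release);   // then publish the flag
}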
Among the atomic operations provided by BSD (as given on the atomic(9) man page), there are atomic_load_acq_int() and atomic_store_rel_int(). In looking for the equivalent for other OSs (for example, by reading the atomic(3) man page for Mac OS X, the atomic_ops(3C) man page for Solaris, and the Interlocked*() functions for Windows), there don't seem to be any (obvious) equivalents for just atomically reading/writing an int.
Is this because it's implied for those OSs that reads/writes of an int are guaranteed to be atomic by default? (Or must you declare them volatile in C/C++?)
If not, then how does one do atomic reads/writes of an int on those OSs?
(Atomic reads can be simulated by returning the result of an atomic add of 0, but there's no equivalent for doing atomic writes.)
I think you are mixing together atomic memory access with cache coherence. The former is the hardware support required for building synchronization primitives in software (spin-locks, semaphores, and mutexes), while the latter is the hardware support for multiple chips (several CPUs and peripheral devices) working over the same bus and having a consistent view of main memory.
Different compilers/libraries provide different utilities for the first. Here are, for example, the GCC intrinsics for atomic memory access. They all boil down to generating either compare-and-swap or load-linked/store-conditional based instruction blocks depending on the platform support. Compile your source with, say, -S for GCC and look at the generated assembler.
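For example, a compare-and-swap loop built from the legacy __sync builtins (newer code would use the __atomic builtins or C11/C++11 atomics); compile with -S to see the lock cmpxchg GCC emits on x86:

// Atomic increment implemented with a compare-and-swap loop.
int atomic_increment(int *p) {
    int old, desired;
    do {
        old = *p;                 // snapshot the current value
        desired = old + 1;
    } while (__sync_val_compare_and_swap(p, old, desired) != old);  // retry if someone else changed *p
    return desired;
}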
You don't have to do anything explicitly for cache coherency - it's all handled in hardware - but it definitely helps to understand how it works to avoid things like cache line ping-pong.
With all that, aligned single-word reads and writes are atomic on all commodity platforms (somebody correct me if I'm wrong here). Since ints are less than or equal to the processor word in size, you are covered (see the GCC builtins link above).
It's the order of reads and writes that is important. This is where the architecture's memory model matters. It dictates what operations can and cannot be re-ordered by the hardware. An example would be updating a linked list - you don't want other CPUs to see a new item linked in until the item itself is in a consistent state. Explicit memory barriers (also often called "memory fences") might be required. An acquire barrier ensures that subsequent operations are not re-ordered before the barrier (say you read the linked-list item pointer before the content of the item), and a release barrier ensures that previous operations are not re-ordered after the barrier (you write the item content before writing the new link pointer).
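Sketching that linked-list example with C++11 atomics (illustrative types; single writer assumed): the release store publishes the fully-initialized item, and the acquire load guarantees the reader sees the item's contents, not just the pointer:

// Publish a new list node with release; read it with acquire.
#include <atomic>

struct Node {
    int payload;
    Node* next;
};

std::atomic<Node*> head{nullptr};

void push(int value) {
    Node* n = new Node{value, head.load(std::memory_order_relaxed)};  // fully initialize first
    head.store(n, std::memory_order_release);    // release: item contents are written before the link
}

int peek_front() {
    Node* n = head.load(std::memory_order_acquire);   // acquire: contents are visible after this load
    return n ? n->payload : -1;
}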
volatile is often misunderstood as being related to all of the above. In fact it is just an instruction to the compiler not to cache the variable's value in a register, but to read it from memory on each access. Many argue that it's "almost useless" for concurrent programming.
Apologies for lengthy reply. Hope this clears it a bit.
Edit:
The upcoming C++0x standard finally addresses concurrency; see Hans Boehm's C++ memory model papers for many details.