Cache Performance in Hash Tables with Chaining vs Open Addressing - performance

The following statement is from an article on geeksforgeeks.org:
Cache performance of chaining is not good as keys are stored using linked list. Open addressing provides better cache performance as everything is stored in same table.
So my questions are:
What causes chaining to have a bad cache performance?
Where is the cache being used?
Why would open addressing provide better cache performance as I cannot see how the cache comes into this?
Also, what considerations would you take into account when deciding between chaining, linear-probing open addressing, and quadratic-probing open addressing?

Sorry, since the questions are quite broad, the answers will also be quite generic, with some links for more detailed information.
It is better to start with the question:
Where is the cache being used?
On modern CPUs, cache is used everywhere: to read program instructions and read/write data in memory. On most CPUs cache is transparent, i.e. there is no need to explicitly manage the cache.
Cache is much faster than the main memory (DRAM). Just to give you some perspective, accessing data in the Level 1 cache takes ~4 CPU cycles, while accessing DRAM on the same CPU takes ~200 CPU cycles, i.e. the cache is roughly 50 times faster.
Caches operate on small blocks called cache lines, which are usually 64 bytes long.
More info: https://en.wikipedia.org/wiki/CPU_cache
What causes chaining to have a bad cache performance?
Basically, chaining is not cache friendly. This is not specific to hash tables; "classical" linked lists have the same issue.
Hash keys (or list nodes) are far away from each other in memory, so each key access can generate a "cache miss", i.e. a slow DRAM access. Checking 10 keys in a chain therefore takes 10 DRAM accesses, i.e. 200 x 10 = 2000 cycles on our generic CPU.
The address of the next key is not known until the next pointer is read from the current key, so there is not much room for optimization...
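To make the pointer chasing concrete, here is a minimal sketch (the node layout and names are made up for illustration, not taken from the article): every step of the loop is a dependent load whose address is unknown until the previous node has arrived from memory.

#include <cstdint>

struct Node {
    uint64_t key;
    uint64_t value;
    Node*    next;   // usually points to an unrelated heap address: likely a cache miss
};

// Walk one chain; each iteration must wait for the previous load to complete
// before the address of the next node is even known.
bool chain_lookup(const Node* head, uint64_t key, uint64_t* out) {
    for (const Node* n = head; n != nullptr; n = n->next) {
        if (n->key == key) { *out = n->value; return true; }
    }
    return false;
}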
Why would open addressing provide better cache performance as I cannot see how the cache comes into this?
Linear probing is cache friendly. Keys are "clustered" together, so once we have accessed the first key (a slow DRAM access), the next key is most probably already in the cache, since the cache line is 64 bytes. Accessing the same 10 keys with open addressing therefore takes 1 DRAM access and 9 cache accesses, i.e. 200 x 1 + 9 x 4 = 236 cycles on our generic CPU, much faster than the 2000 cycles for chained keys.
Also, since we access memory in a predictable manner, there is room for optimizations like cache prefetching: https://en.wikipedia.org/wiki/Cache_prefetching
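A linear-probing lookup, by contrast, scans consecutive slots of a single array, so after the first miss the following probes usually hit the same cache line. Again a rough sketch with made-up names, not code from the article:

#include <cstdint>
#include <vector>

struct Slot { uint64_t key; uint64_t value; bool used; };

// Probe consecutive slots; the hardware prefetcher also sees a simple
// sequential pattern that it can predict.
bool probe_lookup(const std::vector<Slot>& table, uint64_t hash, uint64_t key, uint64_t* out) {
    const size_t n = table.size();
    for (size_t step = 0; step < n; ++step) {
        const Slot& s = table[(hash + step) % n];
        if (!s.used) return false;            // empty slot: key is absent
        if (s.key == key) { *out = s.value; return true; }
    }
    return false;
}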
Also, what considerations would you take into account when deciding between chaining, linear-probing open addressing, and quadratic-probing open addressing?
Needing long chains or long probe sequences is not a good sign anyway. So the first thing I would consider is keeping the probability of collisions to a minimum, by using a good hash function and a reasonable hash table size.
The second thing I would consider is a ready-to-use solution. Sure, there might still be some rare cases when you need your own implementation...
Not sure about the language, but here is blazingly fast hash table implementation with BSD license: http://dpdk.org/browse/dpdk/tree/lib/librte_hash/rte_cuckoo_hash.h
So, if you still need your own hash table implementation and you do care about performance, the next fairly easy thing to implement would be to use cache-aligned buckets instead of plain hash elements. It will waste a few bytes per element (i.e. each hash table element will be 64 bytes long), but in case of a collision there will be fast storage for at least a few keys. The code to manage those buckets will also be a bit more complicated, so you have to decide whether it is worth the bother...
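A minimal sketch of the cache-aligned bucket idea (the exact sizes and field names here are assumptions, not from any particular library): each bucket occupies exactly one 64-byte cache line, so a collision of up to four keys can still be resolved with a single DRAM access.

#include <cstdint>

// One bucket fills exactly one 64-byte cache line.
struct alignas(64) Bucket {
    uint64_t keys[4];     // 32 bytes
    uint32_t values[4];   // 16 bytes
    uint32_t count;       //  4 bytes; the remainder is padding up to 64
};
static_assert(sizeof(Bucket) == 64, "bucket should fill exactly one cache line");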

Related

How can I better understand the impact of modern caching on algorithm performance?

I'm reading the following paper: http://www-db.in.tum.de/~leis/papers/ART.pdf and in it, they say in the abstract:
Main memory capacities have grown up to a point where most databases
fit into RAM. For main-memory database systems, index structure
performance is a critical bottleneck. Traditional in-memory data
structures like balanced binary search trees are not efficient on
modern hardware, because they do not optimally utilize on-CPU caches.
Hash tables, also often used for main-memory indexes, are fast but
only support point queries.
How can I better understand this utilization of on-CPU caches and how it impacts the performance of particular data structures/algorithms?
Just somewhere to get started would be great because this sort of analysis is really opaque to me and I don't know where to go to start understanding.
This is going to be a really basic answer, as it would otherwise be extremely broad. I'm also not an expert on the subject (picking up bits and pieces to help understand how to optimize my hotspots better). But it might help you get started investigating this subject.
The topic reminds me of my university days when computer architecture
courses only taught about registers, DRAM, and disk, while glossing
over the CPU cache in between. The CPU cache is one of the most
dominant factors these days in performance.
The memory of the computer is divided into a hierarchy ranging from the absolute biggest but slowest (disk) to absolute smallest but fastest (registers).
Below disk is DRAM which is still pretty slow. And above registers is the CPU cache which is pretty damned fast (especially the smallest L1 cache).
Accessing One Node
Now let's say you request to access memory in some form from some data structure, say a linked structure like a tree or linked list and we're just accessing one node.
Note, I'm inverting the view of memory access for simplicity. Typically it begins with an instruction to load something into a register with the process working backwards and forwards, rather than merely forwards.
Virtual to Physical (DRAM)
In this case, unless the memory is already mapped to physical memory, the operating system has to map a page from virtual memory to a physical address in DRAM (this is freaking slow, especially in the worst-case scenario where the page fault involves a disk access). This is often done in pretty hefty chunks (the machine grabs memory by the handful), like aligned 4-kilobyte chunks. So we end up grabbing a big old 4-kilobyte aligned chunk of memory just for this one node.
DRAM to CPU Cache
Now that this 4-kilobyte page is physically mapped, we still want to do something with the node (most instructions have to operate at the register level) so the computer moves it down through the CPU cache hierarchy (this is pretty slow). Typically all levels of CPU cache have the same cache-line size, like 64-byte cache lines on Intel.
To move the memory from DRAM into these CPU caches, we have to grab a chunk of cache-line-sized-and-aligned memory from DRAM and move it into the CPU cache. We might also have to evict some data already in various levels of the CPU cache hierarchy on the way, like the least recently used memory. So now we're grabbing a 64-byte aligned handful of memory for this node.
Maybe at this point, the cache line holds our node's data, say the value 42, surrounded by irrelevant memory (call it ???) that is not part of our linked data structure.
CPU Cache to Register
Now we move the memory from CPU cache into a register (this occurs very quickly). And here we're still grabbing memory in sort of a handful, but a pretty small one. For example, we might grab a 64-bit aligned chunk of memory and move it into a general-purpose register. So we grab the memory around "42" here and move it into a register.
Finally we do some operations on the register and store the results, and the results often kind of work their way back up the memory hierarchy.
Accessing One Other Node
When we access the next node in the linked structure, we end up having to potentially do this all over again, just to read one little node's data. The newly loaded cache line again holds one small piece of relevant data (say 22, the node data of interest) surrounded by irrelevant memory.
We can see potentially how much wasted effort the hardware and operating system are applying, moving big, aligned chunks of data from slower memory to faster memory only in order to access one little teeny bit of it prior to eviction.
And that's why little objects all allocated separately, as in the case of linked nodes or languages which can't represent user-defined types contiguously, aren't very cache- or page-friendly. They tend to invoke a lot of page faults and cache misses as we traverse them, accessing their data. That is, unless they have help from a memory allocator which allocates these nodes in a more contiguous fashion (in which case the data of two or more nodes might be right next to each other and accessed together).
Contiguity and Spatial Locality
The most cache-friendly data structures tend to be based on contiguous arrays (it doesn't have to be one gigantic array, but perhaps arrays linked together, e.g., as is the case of an unrolled list). When we iterate through an array and access the first element, we might have to go through the motions described above, yet once the memory is moved into a cache line, that line now holds a whole run of consecutive array elements (starting with 17 in this example).
Now we can iterate through the array and access all the elements while it's in the second-fastest form of memory on the machine, the L1 cache, simply moving data from L1 cache to register after the initial compulsory cache miss/page fault. If we start at 17, we have the initial compulsory cache miss but all the subsequent elements in this cache line can then be accessed without repeating the motions above. This is extremely fast, and the computer can blaze through such data.
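As a rough illustration of the two access patterns just described (a sketch, not a benchmark from the paper), here is the same summation done over a contiguous array versus over individually allocated linked nodes:

#include <cstdint>
#include <vector>

struct ListNode { int value; ListNode* next; };

// Contiguous: consecutive elements share cache lines, and the access
// pattern is trivially prefetchable.
long long sum_array(const std::vector<int>& a) {
    long long s = 0;
    for (int v : a) s += v;
    return s;
}

// Linked: each node may sit on its own cache line (or page), and the next
// address is unknown until the current node has been loaded.
long long sum_list(const ListNode* head) {
    long long s = 0;
    for (const ListNode* n = head; n != nullptr; n = n->next) s += n->value;
    return s;
}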
So that was what was meant by this part:
Traditional in-memory data structures like balanced binary search
trees are not efficient on modern hardware, because they do not
optimally utilize on-CPU caches.
Note that it is possible to make linked structures like trees and linked lists substantially more cache-friendly than they naturally are by using a custom memory allocator, but they lack this inherent cache-friendliness at the basic data structure level.
Hash tables, on the other hand, tend to be contiguous table structures based on arrays. They might use chaining and linked bucket structures, but those are also easier to make cache-efficient with a little teeny bit of help from the custom allocator (far less than the tree due to the simpler, sequential access patterns within a hash bucket).
So anyway, that's a little brief overview on the subject, a bit oversimplified, but hopefully enough to help get started. If you want to understand this subject at a deeper level, keywords would be cache/memory efficiency/optimization and locality of reference.

Understanding Cache line invalidation and striped locks for Concurrent HashMap Implementation

If we put the striped locks very close to each other in memory for a concurrent hashmap, the cache line size can affect performance because we would have to invalidate caches unnecessarily. If you add padding to the array of striped locks, it will improve performance.
Can someone explain this?
To start with a non-concurrent hashmap, the basic principle is this:
Have an indexed structure (most often an array or set of arrays) for the keys and values.
Get a hash for the key.
Reduce this to be within the size of the arrays. (Modulo does this simply enough, so if the hash value is 123439281 and there are 31 slots available, then we use 123439281 % 31 which is 9 and use that as our index).
See if there's a key there, and if so if it matches (equals).
Store the key if it's new, and the value.
The same approach can be used to find the value for a given key (or to find that there is none).
Of course the above doesn't work if there's a key in the same slot that isn't equal to the key you're concerned with, and there are different approaches to dealing with this, mainly either continuing to look in different slots until one is free, or having slots actually act as a linked list of equal-indexed pairs. I won't go into the details of this.
If you are probing other slots, it won't work once you've filled the arrays (and will be slow before that point); if you are using linked lists to handle collisions, you will be very slow when many keys land at the same index (the O(1) you want gets closer and closer to O(n) as this gets worse). Either way you're going to want a mechanism to resize the internal store when the amount stored gets too large.
Okay. That's a very high-level description of a hashmap. What if we want to make it threadsafe?
It won't be threadsafe by default: if, e.g., two different threads write different keys whose hashes modulo down to the same index, one thread might stomp over the other.
The simplest way to make a hashmap threadsafe is to simply have a lock that we use on all operations. That means that every thread will wait on every other thread though, so it won't have very good concurrent behaviour. With a bit of clever structuring it's not too hard to have it that we can have multiple reading threads or a single writing thread (but not both), but that still isn't great.
It's possible to create hashmaps that are safely concurrent without using locks at all (see http://www.communicraft.com/blog/details/a-lock-free-dictionary for a description of one I wrote in C#, or http://www.azulsystems.com/blog/cliff/2007-03-26-non-blocking-hashtable for one in Java by Cliff Click whose basic approach I used in my C# version).
But another approach is striped locks.
Because the basis of the map is either an array for key-value pairs (and likely a cached copy of the hashcode) or a set of arrays for them, and because it is generally safe to have two threads writing and/or reading to different parts of an array at a time (there are caveats, but I'll ignore them for now) the only problems are when either two threads want the same slot, or when a resize is necessary.
And therefore the different slots could have different locks, and then only threads that are operating on the same slot would need to wait on each other.
There'd still be the problem of resizing, but that isn't insurmountable; if you need to resize obtain every one of the locks (in a set order, so that you can prevent deadlocks from happening) and then do the resize, having first checked that no other thread did the same resize in the meantime.
However, if you've a hashmap with 10,000 slots this would mean 10,000 lock objects. That's a lot of memory used, and also a resize would mean obtaining every one of those 10,000 locks.
Striped locks are somewhere in-between the single-lock approach and the lock-per-slot approach. Have an array of a certain number of locks, say 16 as a nice (binary) round number. When you need to act on a slot then obtain lock number slotIndex % 16, and do your operation. Now while threads may still end up blocking on threads doing operations on completely different slots (slot 5 and slot 21 have the same lock) they can still act concurrently to many other operations, so it's a middle-ground between the two extremes.
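A minimal sketch of the lock-selection step described above (the names and the choice of std::mutex are illustrative assumptions, not from any particular library):

#include <array>
#include <cstddef>
#include <mutex>

constexpr std::size_t kStripes = 16;
std::array<std::mutex, kStripes> stripes;

// Every slot index maps onto one of 16 locks; slot 5 and slot 21 share stripes[5].
std::mutex& lock_for_slot(std::size_t slotIndex) {
    return stripes[slotIndex % kStripes];
}

// Usage: guard the operation on one slot with its stripe.
// std::lock_guard<std::mutex> guard(lock_for_slot(slotIndex));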
So that's how striped locking works, at a high level.
Now, modern-day memory access is not uniform, in that it does not take the same time to access arbitrary pieces of memory, because there are levels of caching (generally at least 2) in the CPU. This caching has both good and bad effects.
Obviously the good effects normally outweigh the bad, or chip manufacturers wouldn't use it. If you access a piece of memory, and then access a piece of memory very close to it, the chances are that second access will be very fast because it will have been loaded into the cache on the first read. Writes are also improved.
It's already natural enough that a given piece of code is likely to want to do several operations on blocks of memory close to each other (e.g. reading two fields in the same object, or two locals in a method), which is why this sort of caching worked in the first place. And programmers further work to take advantage of this fact as much as possible in how they design their code, and collections such as hashmaps are a classic example. E.g. we might have stored keys and stored hashes in the same array so that reading one brings the other into the cache to be quickly read, and so on.
There are though times when this caching has a negative effect. In particular if two threads are going to deal with bits of memory that are close to each other at around the same time.
This doesn't come up that often, because threads are most often dealing with their own stacks or bits of heap memory pointed to by their own stacks, and only occasionally heap memory that is visible to other threads. That in itself is a big part of why CPU caches are normally a big win for performance.
However, the use of concurrent hashmaps is inherently a case where multiple threads hit neighbouring blocks of memory.
CPU caches work on the basis of "cache lines". These are blocks of memory that are loaded into the cache from the RAM, or written from the cache to the RAM, as a unit. (Again, while we're about to discuss a case where this is a bad thing, this is an efficient model most of the time).
Now, consider a 64-bit processor with 64-byte cache-lines. Every pointer or reference to an object is going to take up 8 bytes. If a piece of code tries to access such a reference it will mean that 64 bytes are loaded into the cache, then 8 bytes of that dealt with by the CPU. If the CPU writes to that memory, then those 8 bytes are changed on the cache, and the cache written back to the RAM. As said, this is generally good, because the odds are high that we'll also want to do the same with other bits of RAM nearby, and hence in the same cache line.
But what if another thread wants to hit the same block of memory?
If CPU0 goes to read a value that is in the same cache line that CPU1 has just written to, its copy of that cache line is stale and has been invalidated, so it has to read it again. If CPU0 was trying to write to it, it may well not only have to read it again, but redo the operation that gave it the result to write.
Now, if that other thread had wanted to hit the exact same bit of memory, there'd have been a conflict even without caching, so things aren't that much worse than they would have been (but they are worse). But if the other thread was going to hit nearby memory it will still suffer.
This is obviously bad for our concurrent map's slots, but it's even worse for its striped locks. We'd said we might have 16 locks. With 64-byte cache lines and 64-bit references that's 2 cache lines for all the locks. The odds that a lock is in the same cache line as the one wanted by the other thread are 50%. With 128-byte cache lines (Itanium has those) or 32-bit references (all 32-bit code uses those) it's 100%. With lots of threads it's effectively certain that you're going to be waiting. And waiting again if there's yet another hit. And waiting.
Our attempt to prevent threads waiting on the same lock has turned into them waiting on the same cacheline.
Worse, the more cores you have using the locks, the worse this becomes. Each extra core slows down the total throughput roughly exponentially. 8 cores might take over 200 times as long to execute as 1 core would!
If however we pad out our striped locks with blank space so that there is a 56-byte gap between each one, then this doesn't happen; the locks all land on different cache lines, and operations on neighbouring locks don't affect each other any more. This costs memory, and makes normal reading and writing slower (the point of caches is that they make things faster most of the time, after all), but it is appropriate in cases where particularly frequent concurrent access is expected and we're not likely to want to hit the next lock (we aren't, except for resize operations). (Another example would be striped counters; have different threads increment different integers and sum them when you want to get the tally).
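A sketch of that padding idea (64-byte cache lines are assumed here; on another platform you would pad to that platform's line size instead):

#include <array>
#include <mutex>

// Each stripe is forced onto its own 64-byte cache line, so two threads
// taking neighbouring stripes no longer invalidate each other's line.
struct alignas(64) PaddedLock {
    std::mutex m;
};

std::array<PaddedLock, 16> stripes;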
This problem of threads hitting neighbouring pieces of memory (called "false sharing", because it has a performance impact caused by shared access to the same memory even though they are actually accessing neighbouring memory rather than the same memory) will also affect the internal storage of the hashmap itself, but not as much, because the map itself is likely larger and so the odds of two accesses hitting the same cache line are lower. It would also be more expensive to use padding here for the same reason; being larger, the amount of padding involved could be huge.

CUDA: When to use shared memory and when to rely on L1 caching?

After Compute Capability 2.0 (Fermi) was released, I've wondered if there are any use cases left for shared memory. That is, when is it better to use shared memory than just let L1 perform its magic in the background?
Is shared memory simply there to let algorithms designed for CC < 2.0 run efficiently without modifications?
To collaborate via shared memory, threads in a block write to shared memory and synchronize with __syncthreads(). Why not simply write to global memory (through L1), and synchronize with __threadfence_block()? The latter option should be easier to implement since it doesn't have to relate to two different locations of values, and it should be faster because there is no explicit copying from global to shared memory. Since the data gets cached in L1, threads don't have to wait for data to actually make it all the way out to global memory.
With shared memory, one is guaranteed that a value that was put there remains there throughout the duration of the block. This is as opposed to values in L1, which get evicted if they are not used often enough. Are there any cases where it's better to cache such rarely used data in shared memory than to let the L1 manage them based on the usage pattern that the algorithm actually has?
2 big reasons why automatic caching is less efficient than manual scratch pad memory (applies to CPUs as well)
parallel accesses to random addresses are more efficient. Example: histogramming. Say you want to increment N bins, each more than 256 bytes apart. Then, due to coalescing rules, that will result in N serial reads/writes, since global and cache memory are organized in large ~256-byte blocks. Shared memory doesn't have that problem.
Also, to access global memory, you have to do virtual-to-physical address translation. Having a TLB that can do lots of translations in parallel would be quite expensive. I haven't seen any SIMD architecture that actually does vector loads/stores in parallel, and I believe this is the reason why.
avoids writing back dead values to memory, which wastes bandwidth & power. Example: in an image processing pipeline, you don't want your intermediate images to get flushed to memory.
Also, according to an NVIDIA employee, current L1 caches are write-through (immediately writes to L2 cache), which will slow down your program.
So basically, the caches get in the way if you really want performance.
As far as I know, the L1 cache in a GPU behaves much like the cache in a CPU. So your comment that "This is as opposed to values in L1, which get evicted if they are not used often enough" doesn't make much sense to me.
Data in the L1 cache isn't evicted when it isn't used often enough. Usually it is evicted when a request is made for a memory region that wasn't previously in cache, and whose address resolves to a cache slot that is already in use. I don't know the exact caching algorithm employed by NVidia, but assuming a regular n-way set-associative cache, each memory entry can only be cached in a small subset of the entire cache, based on its address.
I suppose this may also answer your question. With shared memory, you get full control as to what gets stored where, while with cache, everything is done automatically. Even though the compiler and the GPU can still be very clever in optimizing memory accesses, you can sometimes still find a better way, since you're the one who knows what input will be given, and what threads will do what (to a certain extent of course)
Caching data through several memory layers always needs to follow a cache-coherency protocol. There are several such protocols and the decision on which one is the most suitable is always a trade off.
You can have a look at some examples:
Related to GPUs
Generally for computing units
I don't want to go into many details, because it is a huge domain and I am not an expert. What I want to point out is that in a shared-memory system (here the term "shared" does not refer to the so-called shared memory of GPUs) where many compute units (CUs) need data concurrently, there is a memory protocol that attempts to keep the data close to the units so that they can fetch them as fast as possible. In the example of a GPU, when many threads in the same SM (streaming multiprocessor) access the same data, there should be coherency in the sense that if thread 1 reads a chunk of bytes from global memory and in the next cycle thread 2 is going to access these data, then an efficient implementation would be such that thread 2 is aware that the data are already in the L1 cache and can access them quickly. This is what the cache coherency protocol attempts to achieve: to let all compute units be up to date with what data exist in caches L1, L2 and so on.
However, keeping threads up to date, or else, keeping threads in coherent states, comes at some cost which is essentially missing cycles.
In CUDA by defining the memory as shared rather than L1-cache you free it from that coherency protocol. So access to that memory (which is physically the same piece of whatever material it is) is direct and does not implicitly call the functionality of coherency protocol.
I don't know how fast this should be, I didn't perform any such benchmark, but the idea is that since you no longer pay for this protocol, the access should be faster!
Of course, the shared memory on NVIDIA GPUs is split into banks, and anyone who wants to use it for performance improvement should have a look at this first. The reason is bank conflicts, which occur when two threads access the same bank and cause the accesses to be serialized..., but that's another topic.

LRU vs FIFO vs Random

When there is a page fault or a cache miss we can use either the Least Recently Used (LRU), First In First Out (FIFO) or Random replacement algorithms. I was wondering, which one provides the best performance, i.e. the fewest possible future cache misses/page faults?
Architecture: Coldfire processor
The expression "There are no stupid questions" fits this so well. This was such a good question I had to create an account and post on it and share my views as somebody who has modelled caches on a couple of CPUs.
You specify the architecture: a 68000-family part, which is a CPU rather than a GPU or USB controller or other piece of hardware which may access a cache, however...
Therefore the code you run on the 68000 will make a huge difference to the part of the question "least possible future cache miss'/page faults".
In this you differentiate between cache misses and page faults. I'm not sure exactly which Coldfire architecture you are referring to, but I'm assuming it doesn't have hardware TLB replacement; it uses a software mechanism (so the cache will be shared with the data of applications).
In the replacement policy the most important factor is the number of associations (or ways).
A direct-mapped cache (1 way) directly correlates (almost always) with the low-order bits of the address (the number of bits specifies the size of the cache), so a 32k cache would use the lower 15 bits. In this case the replacement algorithms LRU, FIFO or Random would be useless, as there is only one possible choice.
However, writeback or writethrough selection of the cache has more effect. For a write that misses, writethrough means the cache line isn't allocated, as opposed to a writeback cache where the line currently in the cache which shares the same lower 15 bits is ejected from the cache, read back in and then modified, for use IF the code running on the CPU uses this data later.
For operations that write the data once and don't perform multiple operations on it, writethrough is usually much better. On modern processors (and I don't know if this architecture supports it), writethrough or writeback can also be selected on a TLB/page basis. This can have a much greater effect on the cache than the policy; you can programme the system to match the type of data in each page, especially in a direct-mapped cache ;-)
So a direct-mapped cache is pretty easy to understand, and it's also easy to understand the basis of a cache's worst case, best case and average case.
Imagine a memcpy routine which copies data which is aligned to the cache size. For example a 32k direct-mapped cache, with two 32k buffers aligned on a 32k boundary....
0x0000 -> read
0x8000 -> write
0x0004 -> read
0x8004 -> write
...
0x7ffc -> read
0xfffc -> write
Here you see the reads and writes as they copy each word of data; notice the lower 15 bits are the same for each read and write operation.
A direct-mapped cache using writeback (remember, writeback allocates lines) does the following:
0x0000 -> read
cache performs: (miss)
0x0000:0x001f -> READ from main memory (ie. read 32 bytes of the source)
0x8000 -> write
cache performs: (miss)
invalidate 0x0000:0x001f (line 0)
0x8000:0x801f -> READ from main memory (ie. read 32 bytes of the destination)
0x8000 (modify this location in the cache with the read source data)
<loop>
0x0004 -> read
cache performs: (miss)
writeback 0x8000:0x801f -> WRITE to main memory (ie. write 32 bytes to the destination)
0x0000:0x001f -> READ from main memory (ie. read 32 bytes of the source, the same as we did just before)
0x8004 -> write
cache performs: (miss)
invalidate 0x0000:0x001f (line 0)
0x8000:0x801f -> READ from main memory (ie. read 32 bytes of the destination)
0x8004 (modify this location in the cache with the read source data)
</loop> <--- (side note XML is not a language but we use it as such)
As you see, a lot of memory operations go on; this in fact is called "thrashing" and is the best example of a worst-case scenario.
Now imagine that we use a writethrough cache, these are the operations:
<loop>
0x0000 -> read
cache performs: (miss)
0x0000:0x001f -> READ from main memory (ie. read 32 bytes of the source)
0x8000 -> write
cache performs: (not a miss)
(not a lot, the write is "posted" to main memory) (posted is like a letter you just place it in the mailbox and you don't care if it takes a week to get there).
<loop>
0x0004 -> read
cache performs: (hit)
(not a lot, it just pulls the data it fetched last time, which it has in its memory, so it goes very quickly to the CPU)
0x8004 -> write
cache performs: (not a miss)
(not a lot, the write is "posted" to main memory)
</loop until next 32 bytes>
</loop until end of buffer>
As you can see, there is a huge difference: we now don't thrash; in fact we are best case in this example.
Ok so that's the simple case of write through vs. write back.
Direct-mapped caches however are not very common now; most people use 2-, 4- or 8-way caches, that is, there are 2, 4 or 8 different possible allocations for a line. So we could store 0x0000, 0x8000, 0x1000, 0x1800 all in the cache at the same time in a 4-way or 8-way cache (obviously an 8-way could also store 0x2000, 0x2800, 0x3000, 0x3800 as well).
This would avoid this thrashing issue.
Just to clarify the line number in a 32k direct mapped cache is the bottom 15 bits of the address.
In a 32k 2 way it's the bottom 14 bits.
In a 32k 4 way it's the bottom 13 bits.
In a 32k 8 way it's the bottom 12 bits.
And in a fully associative cache it's the line size (or the bottom 5 bits with a 32-byte line). You can't have less than a line. 32 bytes is typically the most optimal transfer in a DDR memory system (there are other reasons as well; sometimes 16 or sometimes 64 bytes can be better, and 1 byte would be optimal in the algorithmic case; let's use 32 as that's very common).
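That index arithmetic can be written out directly. A small sketch (assuming 32-byte lines and power-of-two sizes) that computes which set an address falls into for a given associativity:

#include <cstdint>

// For a cache of cacheSize bytes with `ways` ways and 32-byte lines,
// the set index comes from the address bits just above the line offset.
uint32_t set_index(uint32_t addr, uint32_t cacheSize, uint32_t ways) {
    const uint32_t lineSize = 32;
    const uint32_t numSets  = cacheSize / (lineSize * ways);
    return (addr / lineSize) % numSets;
}
// 32k direct-mapped: set_index(a, 32*1024, 1) uses address bits [5..14]
// 32k 2-way: bits [5..13];  4-way: bits [5..12];  8-way: bits [5..11]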
To help understand LRU, FIFO and Random, consider a cache that is fully associative; in a 32k, 32-byte-line cache this is 1024 lines.
A random replacement policy would, on average, cause a worst-case eviction once every 1024 replacements (ie. a 99.9% hit rate), whereas with either LRU or FIFO I could always write a programme which would "thrash", ie. always cause worst-case behaviour (ie. a 0% hit rate).
Clearly, if you had a fully associative cache, you would only choose LRU or FIFO if the programme was known and its exact behaviour was known.
For ANYTHING which wasn't 99.9% predictable you would choose RANDOM, it's simply the best at not being worst, and one of the best for being average, but how about best case (where I get the best performance)...
Well it depends basically on the number of ways...
With 2 ways I can optimise things like memcpy and other algorithms to do a good job. Random would get it wrong half the time.
With 4 ways, when I switch between other tasks I might not damage the cache so much that their data is still local. Random would get it wrong a quarter of the time.
With 8 ways statistics can now take effect: a 7/8 hit rate on a memcpy isn't as good as 1023/1024 (fully associative or optimised code), but for non-optimised code it makes a difference.
So why don't people make fully associative caches with random replacement policies?
Well, it's not because they can't generate good random numbers; in fact a pseudo-random number generator is just as good. And yes, I can write a programme to get a 100% miss rate, but that isn't the point: I couldn't write a useful programme that would have a 100% miss rate, which I could with an LRU or FIFO algorithm.
A 32k, 32-byte-line, fully associative cache requires you to compare 1024 values. In hardware this is done through a CAM, but this is an expensive piece of hardware, and it's just not possible to compare this many values in a "FAST" processing time; I wonder if a quantum computer could....
Anyway, to answer your question, which one is better:
Consider if writethrough might be better than writeback.
With a large number of ways, RANDOM is better.
For unknown code, RANDOM is better at 4 ways or above.
If it's a single function, or you want the most speed from something you're willing to optimise, and/or you don't care about the worst case, then LRU is probably what you want.
If you have very few ways, LRU is likely what you want, unless you have a very specific scenario, in which case FIFO might be OK.
References:
http://www.freescale.com/files/training/doc/APF_ENT_T0801_5225_COLDFIRE_MCU.pdf
http://en.wikipedia.org/wiki/Cache_algorithms
No perfect caching policy exists because it would require knowledge of the future (how a program will access memory).
But, some are measurably better than others in the common memory access pattern cases. This is the case with LRU. LRU has historically given very good performance in overall use.
But, for what you are trying to do, another policy may be better. There is always some memory access pattern that will cause a caching policy to perform poorly.
You may find this thread helpful (and more elaborate!)
Why is LRU better than FIFO?
Many of the architectures I have studied use LRU, as it generally provides not only efficiency in implementation, but also is pretty good on average at preventing misses. However, in the latest x86 architectures, I think there are some more complicated things going on. LRU is sort of a basic model.
It really depends on what kind of operations you are performing on your device. Depending on the types of operations, different eviction policies will work better. For example, FIFO works well with traversing memory sequentially.
Hope this helps, I'm not really an architecture guy.
Between the three, I'd recommend LRU. First, it's a good approximation to optimal scheduling when locality is assumed (this turns out to be a good assumption). Random scheduling cannot benefit from locality. Second, it doesn't suffer from Belady's anomaly (like FIFO); that is, bigger caches mean better performance, which isn't necessarily true with FIFO.
Unless your specific problem domain strongly suggests using something else, LRU is going to be hard to beat in the general case.
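For reference, the classic O(1) LRU bookkeeping (a hash map plus a doubly linked list) looks roughly like this; a sketch of the policy itself, not of how a hardware cache would implement it:

#include <cstddef>
#include <cstdint>
#include <list>
#include <unordered_map>

class LruCache {
    std::size_t capacity_;
    std::list<std::pair<uint64_t, uint64_t>> order_;   // front = most recently used
    std::unordered_map<uint64_t, std::list<std::pair<uint64_t, uint64_t>>::iterator> map_;
public:
    explicit LruCache(std::size_t capacity) : capacity_(capacity) {}

    bool get(uint64_t key, uint64_t* value) {
        auto it = map_.find(key);
        if (it == map_.end()) return false;             // miss
        order_.splice(order_.begin(), order_, it->second);   // mark as most recent
        *value = it->second->second;
        return true;
    }

    void put(uint64_t key, uint64_t value) {
        uint64_t ignored;
        if (get(key, &ignored)) { order_.front().second = value; return; }
        if (order_.size() == capacity_) {               // evict the least recently used entry
            map_.erase(order_.back().first);
            order_.pop_back();
        }
        order_.emplace_front(key, value);
        map_[key] = order_.begin();
    }
};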
Of the three, LRU is generally the best while FIFO is the worst and random comes in somewhere between. You can construct access patterns where any of the three is superior to either of the others, but it is somewhat tricky. Interestingly enough, this order is also roughly how expensive they are to implement -- LRU is the most expensive and FIFO is the cheapest. Just goes to show, there's no free lunch
If you want the best of both worlds, consider an adaptive approach that alters its strategy based on actual usage patterns. For example, look at IBM's Adaptive Replacement Cache algorithm: http://code.activestate.com/recipes/576532-adaptive-replacement-cache-in-python/

How does one write code that best utilizes the CPU cache to improve performance?

This could sound like a subjective question, but what I am looking for are specific instances, which you could have encountered related to this.
How to make code, cache effective/cache friendly (more cache hits, as few cache misses as possible)? From both perspectives, data cache & program cache (instruction cache),
i.e. what things in one's code, related to data structures and code constructs, should one take care of to make it cache effective.
Are there any particular data structures one must use/avoid, or is there a particular way of accessing the members of that structure etc... to make code cache effective.
Are there any program constructs (if, for, switch, break, goto,...), code-flow (for inside an if, if inside a for, etc ...) one should follow/avoid in this matter?
I am looking forward to hearing individual experiences related to making cache-efficient code in general. It can be any programming language (C, C++, Assembly, ...), any hardware target (ARM, Intel, PowerPC, ...), any OS (Windows, Linux, Symbian, ...), etc..
The variety will help to better to understand it deeply.
The cache is there to reduce the number of times the CPU would stall waiting for a memory request to be fulfilled (avoiding the memory latency), and as a second effect, possibly to reduce the overall amount of data that needs to be transferred (preserving memory bandwidth).
Techniques for avoiding memory fetch latency are typically the first thing to consider, and sometimes help a long way. The limited memory bandwidth is also a limiting factor, particularly for multicore and multithreaded applications where many threads want to use the memory bus. A different set of techniques helps address the latter issue.
Improving spatial locality means that you ensure that each cache line is used in full once it has been mapped to the cache. When we have looked at various standard benchmarks, we have seen that a surprisingly large fraction of those fail to use 100% of the fetched cache lines before the cache lines are evicted.
Improving cache line utilization helps in three respects:
It tends to fit more useful data in the cache, essentially increasing the effective cache size.
It tends to fit more useful data in the same cache line, increasing the likelihood that requested data can be found in the cache.
It reduces the memory bandwidth requirements, as there will be fewer fetches.
Common techniques are:
Use smaller data types
Organize your data to avoid alignment holes (sorting your struct members by decreasing size is one way; see the sketch after this list).
Beware of the standard dynamic memory allocator, which may introduce holes and spread your data around in memory as it warms up.
Make sure all adjacent data is actually used in the hot loops. Otherwise, consider breaking up data structures into hot and cold components, so that the hot loops use hot data.
Avoid algorithms and data structures that exhibit irregular access patterns, and favor linear data structures.
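As a small illustration of the struct-ordering and hot/cold points above (the field names are made up for the example):

#include <cstdint>

// Padded by the compiler to 24 bytes: holes after each char member.
struct Loose  { char a; double d; char b; int i; };

// The same fields sorted by decreasing size: 16 bytes instead of 24.
struct Packed { double d; int i; char a; char b; };

// Hot/cold split: the fields touched in the hot loop stay small and dense;
// everything else sits behind a pointer and stays out of the cache.
struct ParticleHot  { float x, y, z, vx, vy, vz; };
struct ParticleCold { char name[64]; uint64_t spawnTimeNs; };
struct Particle     { ParticleHot hot; ParticleCold* cold; };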
We should also note that there are other ways to hide memory latency than using caches.
Modern CPUs often have one or more hardware prefetchers. They train on misses in a cache and try to spot regularities. For instance, after a few misses to subsequent cache lines, the hardware prefetcher will start fetching cache lines into the cache, anticipating the application's needs. If you have a regular access pattern, the hardware prefetcher usually does a very good job. And if your program doesn't display regular access patterns, you may improve things by adding prefetch instructions yourself.
By regrouping instructions in such a way that those that always miss in the cache occur close to each other, the CPU can sometimes overlap these fetches so that the application only sustains one latency hit (memory-level parallelism).
To reduce the overall memory bus pressure, you have to start addressing what is called temporal locality. This means that you have to reuse data while it still hasn't been evicted from the cache.
Merging loops that touch the same data (loop fusion), and employing rewriting techniques known as tiling or blocking all strive to avoid those extra memory fetches.
While there are some rules of thumb for this rewrite exercise, you typically have to carefully consider loop carried data dependencies, to ensure that you don't affect the semantics of the program.
These things are what really pay off in the multicore world, where you typically won't see much throughput improvement after adding the second thread.
I can't believe there aren't more answers to this. Anyway, one classic example is to iterate a multidimensional array "inside out":
for (int i = 0; i < size; ++i)
    for (int j = 0; j < size; ++j)
        do_something(ary[j][i]);
The reason this is cache-inefficient is that modern CPUs load a cache line of "nearby" memory addresses from main memory when you access a single memory address. In the inner loop we are iterating through the "j" rows of the array, so each trip through the inner loop lands in a different row, forcing a new cache line to be loaded with addresses near the [j][i] entry. If this is changed to the equivalent:
for (int i = 0; i < size; ++i)
    for (int j = 0; j < size; ++j)
        do_something(ary[i][j]);
It will run much faster.
The basic rules are actually fairly simple. Where it gets tricky is in how they apply to your code.
The cache works on two principles: Temporal locality and spatial locality.
The former is the idea that if you recently used a certain chunk of data, you'll probably need it again soon. The latter means that if you recently used the data at address X, you'll probably soon need address X+1.
The cache tries to accommodate this by remembering the most recently used chunks of data. It operates with cache lines, typically sized 128 bytes or so, so even if you only need a single byte, the entire cache line that contains it gets pulled into the cache. So if you need the following byte afterwards, it'll already be in the cache.
And this means that you'll always want your own code to exploit these two forms of locality as much as possible. Don't jump all over memory. Do as much work as you can on one small area, and then move on to the next, and do as much work there as you can.
A simple example is the 2D array traversal that 1800's answer showed. If you traverse it a row at a time, you're reading the memory sequentially. If you do it column-wise, you'll read one entry, then jump to a completely different location (the start of the next row), read one entry, and jump again. And when you finally get back to the first row, it will no longer be in the cache.
The same applies to code. Jumps or branches mean less efficient cache usage (because you're not reading the instructions sequentially, but jumping to a different address). Of course, small if-statements probably won't change anything (you're only skipping a few bytes, so you'll still end up inside the cached region), but function calls typically imply that you're jumping to a completely different address that may not be cached. Unless it was called recently.
Instruction cache usage is usually far less of an issue though. What you usually need to worry about is the data cache.
In a struct or class, all members are laid out contiguously, which is good. In an array, all entries are laid out contiguously as well. In linked lists, each node is allocated at a completely different location, which is bad. Pointers in general tend to point to unrelated addresses, which will probably result in a cache miss if you dereference it.
And if you want to exploit multiple cores, it can get really interesting, as usually, only one CPU may have any given address in its L1 cache at a time. So if both cores constantly access the same address, it will result in constant cache misses, as they're fighting over the address.
I recommend reading the 9-part article What every programmer should know about memory by Ulrich Drepper if you're interested in how memory and software interact. It's also available as a 104-page PDF.
Sections especially relevant to this question might be Part 2 (CPU caches) and Part 5 (What programmers can do - cache optimization).
Apart from data access patterns, a major factor in cache-friendly code is data size. Less data means more of it fits into the cache.
This is mainly a factor with memory-aligned data structures. "Conventional" wisdom says data structures must be aligned at word boundaries because the CPU can only access entire words, and if a word contains more than one value, you have to do extra work (read-modify-write instead of a simple write). But caches can completely invalidate this argument.
Similarly, a Java boolean array uses an entire byte for each value in order to allow operating on individual values directly. You can reduce the data size by a factor of 8 if you use actual bits, but then access to individual values becomes much more complex, requiring bit shift and mask operations (the BitSet class does this for you). However, due to cache effects, this can still be considerably faster than using a boolean[] when the array is large. IIRC I once achieved a speedup by a factor of 2 or 3 this way.
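The same trade-off exists in C++ (a sketch, not the Java code the paragraph refers to): std::vector<bool> packs one bit per value at the cost of shift-and-mask work on every access, while a byte-per-flag vector has cheap accesses but eight times the memory traffic.

#include <vector>

// One byte per flag: simple accesses, 8x the memory traffic.
std::vector<char> flags_byte(1000000, 0);

// One bit per flag: every access does shift+mask, but 8x more flags fit in
// each cache line, which often wins once the array outgrows the cache.
std::vector<bool> flags_bit(1000000, false);

long long count_set(const std::vector<bool>& v) {
    long long n = 0;
    for (bool b : v) n += b;
    return n;
}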
The most effective data structure for a cache is an array. Caches work best, if your data structure is laid out sequentially as CPUs read entire cache lines (usually 32 bytes or more) at once from main memory.
Any algorithm which accesses memory in random order thrashes the caches, because it always needs new cache lines to accommodate the randomly accessed memory. On the other hand, an algorithm which runs sequentially through an array is best because:
It gives the CPU a chance to read-ahead, e.g. speculatively put more memory into the cache, which will be accessed later. This read-ahead gives a huge performance boost.
Running a tight loop over a large array also allows the CPU to cache the code executing in the loop and in most cases allows you to execute an algorithm entirely from cache memory without having to block for external memory access.
One example I saw used in a game engine was to move data out of objects and into their own arrays. A game object that was subject to physics might have a lot of other data attached to it as well. But during the physics update loop all the engine cared about was data about position, speed, mass, bounding box, etc. So all of that was placed into its own arrays and optimized as much as possible for SSE.
So during the physics loop the physics data was processed in array order using vector math. The game objects used their object ID as the index into the various arrays. It was not a pointer because pointers could become invalidated if the arrays had to be relocated.
In many ways this violated object-oriented design patterns but it made the code a lot faster by placing data close together that needed to be operated on in the same loops.
This example is probably out of date because I expect most modern games use a prebuilt physics engine like Havok.
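A condensed sketch of that layout (structure-of-arrays, with made-up field names), contrasted with keeping everything inside the game object:

#include <cstddef>
#include <vector>

// Object-centric: position, velocity and unrelated gameplay data are
// interleaved, so the physics loop drags cold data through the cache.
struct GameObject { float pos[3]; float vel[3]; float mass; /* AI, audio, ... */ };

// Structure-of-arrays: the physics loop touches only dense, contiguous data,
// indexed by object ID rather than by pointer.
struct PhysicsState {
    std::vector<float> posX, posY, posZ;
    std::vector<float> velX, velY, velZ;
};

void integrate(PhysicsState& s, float dt) {
    for (std::size_t i = 0; i < s.posX.size(); ++i) {
        s.posX[i] += s.velX[i] * dt;
        s.posY[i] += s.velY[i] * dt;
        s.posZ[i] += s.velZ[i] * dt;
    }
}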
A remark to the "classic example" by user 1800 INFORMATION (too long for a comment)
I wanted to check the time differences for the two iteration orders ("outer" and "inner"), so I made a simple experiment with a large 2D array:
measure::start();
for ( int y = 0; y < N; ++y )
    for ( int x = 0; x < N; ++x )
        sum += A[ x + y*N ];
measure::stop();
and the second case with the for loops swapped.
The slower version ("x first") was 0.88 sec and the faster one was 0.06 sec. That's the power of caching :)
I used gcc -O2 and still the loops were not optimized out. The comment by Ricardo that "most of the modern compilers can figure this out by themselves" does not hold.
Only one post touched on it, but a big issue comes up when sharing data between processes. You want to avoid having multiple processes attempting to modify the same cache line simultaneously. Something to look out for here is "false" sharing, where two adjacent data structures share a cache line and modifications to one invalidates the cache line for the other. This can cause cache lines to unnecessarily move back and forth between processor caches sharing the data on a multiprocessor system. A way to avoid it is to align and pad data structures to put them on different lines.
I can answer (2) by saying that in the C++ world, linked lists can easily kill the CPU cache. Arrays are a better solution where possible. No experience on whether the same applies to other languages, but it's easy to imagine the same issues would arise.
Cache is arranged in "cache lines" and (real) memory is read from and written to in chunks of this size.
Data structures that are contained within a single cache-line are therefore more efficient.
Similarly, algorithms which access contiguous memory blocks will be more efficient than algorithms which jump through memory in a random order.
Unfortunately the cache line size varies dramatically between processors, so there's no way to guarantee that a data structure that's optimal on one processor will be efficient on any other.
To ask how to make code cache-effective/cache-friendly, like most of the other questions here, is usually to ask how to optimize a program; that's because the cache has such a huge impact on performance that any optimized program is one that is cache-effective/cache-friendly.
I suggest reading about Optimization, there are some good answers on this site.
In terms of books, I recommend Computer Systems: A Programmer's Perspective, which has some fine text about the proper usage of the cache.
(b.t.w - as bad as a cache-miss can be, there is worse - if a program is paging from the hard-drive...)
There have been a lot of answers with general advice on data structure selection, access patterns, etc. Here I would like to add another code design pattern called software pipelining that makes use of active cache management.
The idea is borrowed from other pipelining techniques, e.g. CPU instruction pipelining.
This type of pattern best applies to procedures that
can be broken down into multiple reasonable sub-steps, S[1], S[2], S[3], ... whose execution time is roughly comparable with RAM access time (~60-70 ns), and
take a batch of inputs and perform the aforementioned multiple steps on them to get the results.
Let's take a simple case where there is only one sub-procedure.
Normally the code would look like:
def proc(input):
    return sub_step(input)
To have better performance, you might want to pass multiple inputs to the function in a batch so you amortize function call overhead and also increase code cache locality.
def batch_proc(inputs):
    results = []
    for i in inputs:
        # avoids code cache misses, but still suffers data (inputs) cache misses
        results.append(sub_step(i))
    return results
However, as said earlier, if the execution of the step is roughly the same as RAM access time you can further improve the code to something like this:
def batch_pipelined_proc(inputs):
    results = []
    for i in range(0, len(inputs) - 1):
        prefetch(inputs[i + 1])
        # work on the current item while inputs[i+1] is flying back from RAM
        results.append(sub_step(inputs[i]))
    results.append(sub_step(inputs[-1]))
    return results
The execution flow would look like:
prefetch(1) asks the CPU to prefetch inputs[1] into the cache; the prefetch instruction itself takes P cycles and returns, and in the background inputs[1] arrives in the cache after R cycles.
works_on(0) takes a cold miss on inputs[0] and works on it, which takes M cycles.
prefetch(2) issues another fetch.
works_on(1): if P + R <= M, then inputs[1] should already be in the cache before this step, thus avoiding a data cache miss.
works_on(2) ...
There could be more steps involved; you can then design a multi-stage pipeline, and as long as the timing of the steps and the memory access latency match, you will suffer few code/data cache misses. However, this process needs to be tuned with many experiments to find the right grouping of steps and prefetch timing. Due to the required effort, it sees more adoption in high-performance data/packet stream processing. A good production code example can be found in the DPDK QoS Enqueue pipeline design:
http://dpdk.org/doc/guides/prog_guide/qos_framework.html Chapter 21.2.4.3. Enqueue Pipeline.
More information could be found:
https://software.intel.com/en-us/articles/memory-management-for-optimal-performance-on-intel-xeon-phi-coprocessor-alignment-and
http://infolab.stanford.edu/~ullman/dragon/w06/lectures/cs243-lec13-wei.pdf
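For reference, here is roughly what the pipelined loop above looks like in C++ using the GCC/Clang __builtin_prefetch intrinsic (a sketch under the same assumptions; process() is a stand-in for the sub-step, not a real API):

#include <cstddef>
#include <vector>

inline int process(int item) { return item * 2; }   // stand-in for the RAM-latency-sized sub-step

void batch_pipelined_proc(const std::vector<int>& inputs, std::vector<int>& results) {
    results.reserve(inputs.size());
    for (std::size_t i = 0; i + 1 < inputs.size(); ++i) {
        __builtin_prefetch(&inputs[i + 1]);          // start loading the next item
        results.push_back(process(inputs[i]));       // work while it is in flight
    }
    if (!inputs.empty())
        results.push_back(process(inputs.back()));   // last item, nothing left to prefetch
}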
Besides aligning your structure and fields, if your structure is heap allocated you may want to use allocators that support aligned allocations, like _aligned_malloc(sizeof(DATA), SYSTEM_CACHE_LINE_SIZE); otherwise you may have random false sharing. Remember that in Windows, the default heap has a 16-byte alignment.
Write your program to take a minimal size. That is why it is not always a good idea to use -O3 optimisations for GCC. It takes up a larger size. Often, -Os is just as good as -O2. It all depends on the processor used though. YMMV.
Work with small chunks of data at a time. That is why a less efficient sorting algorithm can run faster than quicksort if the data set is large. Find ways to break up your larger data sets into smaller ones. Others have suggested this.
In order to help you better exploit instruction temporal/spatial locality, you may want to study how your code gets converted into assembly. For example:
for(i = 0; i < MAX; ++i)
for(i = MAX; i > 0; --i)
The two loops produce different code even though they merely traverse an array. In any case, your question is very architecture-specific. So your only way to tightly control cache use is by understanding how the hardware works and optimising your code for it.
