Problem (think of the mark phase of a GC)
I have a graph of “objects” that I need to walk, visiting all objects.
I can store in each object if it has been visited.
All the objects are stored in memory and linked together using normal pointers.
The objects are not all the same size.
Sometimes there is not enough RAM in the system to hold all the objects in memory at the same time, and I wish to avoid "page thrashing".
I also wish to avoid TLB misses.
Other times, there is more than enough ram.
I do not mind writing low-level code.
I do not mind different code for windows and linux.
The code must run in "user space" without needing non-standard permissions.
I don't care about the order in which I visit the nodes.
I am going to ask more detailed questions about possible solutions, linking back to this question.
Page faults aren't necessarily bad, as long as they're not stalling your progress.
This means that if you have a node Node* p with two candidate successors p->left and p->right, it can be useful to visit the nearer one first (nearer in terms of address distance from p, e.g. (char*)p->left - (char*)p) and prefetch the other (e.g. with PrefetchVirtualMemory on Windows).
How efficient this will be cannot be predicted; it greatly depends on your graph topology. But the prefetch is virtually free when you have enough RAM.
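On Windows, a rough sketch of that idea might look like the following (Node, its left/right fields and the queuing order are hypothetical; PrefetchVirtualMemory is advisory and needs Windows 8 or later):

    // Sketch only: visit the nearer successor first and ask the OS to start
    // paging in the farther one in the background.
    #include <windows.h>
    #include <cstddef>
    #include <utility>

    struct Node {           // made-up node layout for illustration
        Node* left;
        Node* right;
        bool  visited;
    };

    void queue_children(Node* p)
    {
        auto dist = [p](Node* q) {
            std::ptrdiff_t d = (char*)q - (char*)p;
            return d < 0 ? -d : d;
        };
        Node* near_child = p->left;
        Node* far_child  = p->right;
        if (!near_child || (far_child && dist(far_child) < dist(near_child)))
            std::swap(near_child, far_child);

        if (far_child) {
            WIN32_MEMORY_RANGE_ENTRY range;
            range.VirtualAddress = far_child;
            range.NumberOfBytes  = sizeof(Node);
            // Purely a hint; failure (e.g. on older Windows) is ignored.
            PrefetchVirtualMemory(GetCurrentProcess(), 1, &range, 0);
        }
        // ... push near_child onto the worklist first, far_child later ...
    }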
Closer to the CPU, there's cache prefetching. Same idea, different level of the storage hierarchy.
Use 2M hugepages for address ranges that are full of "hot" data that the kernel can't usefully swap out any / many 4k chunks of. This will reduce TLB misses, but costs extra physical memory if there are any 4k chunks of a hugepage that aren't hot.
Linux does this transparently for anonymous pages (https://www.kernel.org/doc/Documentation/vm/transhuge.txt), but you can use madvise(MADV_HUGEPAGE) on pages you know are worth it, to encourage the kernel to defrag physical memory even if that's not the default in /sys/kernel/mm/transparent_hugepage/defrag. (You can look at /proc/PID/smaps to see how many transparent hugepages are in use for any given mapping.)
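A minimal Linux sketch of that, assuming you carve your hot objects out of one anonymous arena (alloc_hot_arena and the rounding are mine, not anything standard):

    // Reserve an anonymous arena for hot data and advise the kernel to back it
    // with transparent hugepages. madvise() is only advisory.
    #include <sys/mman.h>
    #include <cstddef>

    void* alloc_hot_arena(std::size_t bytes)
    {
        const std::size_t huge = 2 * 1024 * 1024;
        std::size_t len = (bytes + huge - 1) / huge * huge;   // round up to 2 MiB

        void* p = mmap(nullptr, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return nullptr;

        madvise(p, len, MADV_HUGEPAGE);   // encourage 2M pages for this range
        return p;
    }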
Based on what you posted in your answer: An ordered set of nodesToVisit would give you the most locality, but might be too expensive to maintain. Multiple accesses within the same 64-byte cache line are much cheaper than coming back to it later after it's been evicted from L3 cache and has to come from DRAM again.
If you have lots of addresses to visit in your Set, doing one pass of a radix-sort into 2M buckets would give you locality within one hugepage. 2M is also smaller than L3 cache size, so you'll probably get some cache hits when visiting multiple objects in the same cache line, even if you don't hit them back to back.
Depending on how big your Set is, throwing around that many pointers even to partial-sort them might not be worth the memory traffic that takes. But there's probably some sweet spot of taking a window of data and at least partially sorting it. Using the pointers before they are evicted from cache is nice.
SW prefetch can trigger a page-walk to avoid a TLB miss, so you could _mm_prefetch(_MM_HINT_T2) one address from the next 2M bucket before starting on the current bucket. See also Prefetching Examples?. I haven't tested this, but it might work well. It won't help with page faults: prefetch from an unmapped page won't cause a page fault, and you don't want to trigger an actual PF until you're ready to touch the page.
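Here is an untested sketch of the bucketing-plus-prefetch idea; nodesToVisit, visit() and the std::map are placeholders for whatever your real set looks like:

    // Group pending pointers by their 2 MiB region, then drain one region at a
    // time, prefetching one address from the next region before starting.
    #include <xmmintrin.h>   // _mm_prefetch
    #include <cstdint>
    #include <iterator>
    #include <map>
    #include <vector>

    void visit(void* node);  // assumed to exist elsewhere

    void drain(std::vector<void*>& nodesToVisit)
    {
        // Bucket key = address >> 21, i.e. which 2 MiB (hugepage-sized) region.
        std::map<std::uintptr_t, std::vector<void*>> buckets;
        for (void* p : nodesToVisit)
            buckets[reinterpret_cast<std::uintptr_t>(p) >> 21].push_back(p);
        nodesToVisit.clear();

        for (auto it = buckets.begin(); it != buckets.end(); ++it) {
            auto next = std::next(it);
            if (next != buckets.end())
                _mm_prefetch(reinterpret_cast<const char*>(next->second.front()),
                             _MM_HINT_T2);   // start the page walk / L3 fill early
            for (void* p : it->second)
                visit(p);                    // may push newly discovered nodes elsewhere
        }
    }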
MSalters' suggestion to ask the OS to prefetch and wire the next page is interesting (I think madvise(MADV_WILLNEED) is the Linux equivalent), but a system call will be slow for no benefit if the page was already mapped+wired into the HW page table. There's no x86 asm instruction that just asks if a page is mapped without faulting if it isn't, so I can't think of a way to efficiently choose not to call it. And BTW, I think Linux breaks up transparent hugepages into 4k regular pages for paging in/out. But don't write a big loop that just does _mm_prefetch() or madvise on all the 4k pages in a 2M block; that probably sucks. The prefetcht2 part would probably just result in excess prefetch requests being dropped.
Use perf counters to look at cache hit/miss rates. On Intel CPUs, the mem_load_retired.l1_miss and/or .l2_miss event should show you whether you're getting cache hits on accessing the Set itself, as well as on dereferencing those pointers. Those counters are precise events, so they should map accurately to asm load instructions. (e.g. perf record -e mem_load_retired.l2_miss ./my_program / perf report on Linux).
We remove one item at a time from nodesToVisit
I don't know much about GC design, but can't you use a sequence number or tagged-pointer or something to avoid modifying the Set data structure itself every GC pass? If your minimum object alignment is 4 bytes, you have 2 bits to play with at the bottom of every pointer. ANDing them off before dereferencing is very cheap.
x86-64 with full 64-bit pointers currently requires the top 16 to be the sign-extension of the low 48. So you could use bits there (16 bits, or maybe just the top byte) if you re-canonicalize pointers. (redo sign extension, or just zero the high 16 bits if you want to assume user-space pointers; Linux uses a high-half kernel VM layout so user-space addresses are always in the low half of virtual address space. IDK what Windows does.)
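For what it's worth, a bare-bones version of the low-bit tagging trick (assuming at least 4-byte object alignment; the helper names are invented):

    // Keep the mark/visited flag in the low bit of the pointer itself.
    #include <cstdint>

    struct Node;   // whatever your object type is

    inline Node* strip_tag(Node* p) {
        return reinterpret_cast<Node*>(
            reinterpret_cast<std::uintptr_t>(p) & ~std::uintptr_t{3});
    }
    inline Node* with_mark(Node* p) {
        return reinterpret_cast<Node*>(
            reinterpret_cast<std::uintptr_t>(p) | std::uintptr_t{1});
    }
    inline bool is_marked(Node* p) {
        return (reinterpret_cast<std::uintptr_t>(p) & 1) != 0;
    }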
On x86-64, you might consider using the x32 ABI (32-bit pointers in long mode) if 4GiB of address space is enough, especially if you're hitting physical memory limits and swapping. Smaller pointers mean smaller data structures, thus half the cache footprint.
Some Linux systems are built without kernel support for x32, though, only classic x86-64 and usually 32-bit mode. But if it works on your systems, consider gcc -mx32.
These are my first thoughts about a possible solution; they are clearly not optimal. I will delete this answer if someone posts a better one.
The basic method:
Assume we have a Set<NodePointer> nodesToVisit that contains all nodes we have not yet visited.
We remove one item at a time from nodesToVisit,
and if it has not been visited before we add all “pointers to other nodes” to nodesToVisit.
Improvements:
But we can clearly do better, by ordering nodesToVisit based on address, so that we are more likely to visit nodes that are contained in pages we have recently accessed. This could be as simple as having a second Set<NodePointer> nodesToVisitLater, and putting any node that has an address a long way from the current node into it.
Or we could skip over any nodes that are contained in pages that are not resident in memory, visiting these nodes after we have visited all nodes that are currently in memory.
(The "set" could just be a stack, since visiting a node more than once is a no-op.)
https://patents.google.com/patent/US7653797B1/en seems to be related, but I have not read it yet.
https://hosking.github.io/links/Cher+2004ASPLOS.pdf
https://people.cs.umass.edu/~emery/pubs/cramm.pdf
https://people.cs.umass.edu/~emery/pubs/f034-hertz.pdf
https://people.cs.umass.edu/~emery/pubs/04-16.pdf
Related
I'm writing and running software on versions of Windows and Linux from the last five years, say, on machines of similar vintage.
I believe the running processes will never get close to the amount of memory we have (we're typically using like 3GB out of 32GB) but I'm certain we'll be filling up the cache to overflowing.
I have a data structure modification which makes the data structure run a bit slower (that is, do quite a bit more pointer math) but reduces both the cache line count and page count (working set) of the application.
My understanding is that if processes' memory is being paged out to disk, reducing the page count and thus disk I/O is a huge win in terms of wall clock time and latency (e.g. to respond to network events, user events, timers, etc.) However with memory so cheap nowadays, I don't think processes using my data structure will ever page out of memory. So is there still an advantage in improving this? For example if I'm using 10,000 cache lines, does it matter whether they're spread over 157 4k pages (=643k) or 10,000 4k pages (=40MB)? Subquestion: even if you're nowhere near filling up RAM, are there cases a modern OS will page a running process out anyways? (I think Linux of the 90s might have done so to increase file caching, for instance.)
In contrast, I can also reduce cache lines, and I suspect this will probably improve wall clock time quite a bit. Just to be clear on what I'm counting, I'm counting the number of 64-byte-aligned areas of memory in which I'm touching at least one byte as the total cache lines used by this structure. Example: if hypothetically I have 40,000 16-byte structures, a computer whose processes are constantly referring to more memory than it has L1 cache will probably run everything faster if those structures are packed into 10,000 64-byte cache lines instead of each straddling two cache lines apiece for 80,000 cache lines?
The general recommendation is to minimize the number of cache lines and virtual pages that exhibit high temporal locality on the same core during the same time interval of the lifetime of a given application.
When the total number of needed physical pages is about to exceed the capacity of main memory, the OS will attempt to free up some physical memory by paging out some of the resident pages to secondary storage. This situation may result in major page faults, which can significantly impact performance. Even if you know with certainty that the system will not reach that point, there are other performance issues that may occur when unnecessarily using more virtual pages (i.e., there is wasted space in the used pages).
On Intel and AMD processors, page table entries and other paging structures are cached in hardware caches in the memory management unit to efficiently translate virtual addresses to physical addresses. These include the L1 and L2 TLBs. On an L2 TLB miss, a hardware component called the page walker is engaged to perform the required address translation. More pages means more misses. On pre-Broadwell1 and pre-Zen microarchitectures, there can only be one outstanding page walk at any time. On later microarchitectures, there can only be two. In addition, on Intel Ivy Bridge and later, it may be more difficult for the TLB prefetcher to keep up with the misses.
On Intel and AMD processors, the L1D and L2 caches are designed so that all cache lines within the same 4K page are guaranteed to be mapped to different cache sets. So if all of the cache lines of a page are used, in contrast to, for example, spreading out the cache lines in 10 different pages, conflict misses in these cache levels may be reduced. That said, on all AMD processors and on Intel pre-Haswell microarchitectures, bank conflicts are more likely to occur when accesses are more spread out across cache sets.
On Intel and AMD processors, hardware data prefetchers don't work across 4K boundaries. An access pattern that could be detected by one or more prefetchers but has the accesses spread out across many pages would benefit less from hardware prefetching.
On Windows/Intel, the accessed bits of the page table entries of all present pages are reset every second by the working set manager. So when accesses are unnecessarily spread out in the virtual address space, the number of page walks that require microcode assists (to set the accessed bit) per memory access may become larger.
The same applies to minor page faults as well (on Intel and AMD). Both Windows and Linux use demand paging, i.e., an allocated virtual page is only mapped to a physical page on demand. When a virtual page that is not yet mapped to a physical page is accessed, a minor page fault occurs. Just like with the accessed bit, the number of minor page faults per access may be larger.
On Intel processors, with more pages accessed, it becomes more likely for nearby accesses on the same logical core to 4K-alias. For more information on 4K aliasing, see: L1 memory bandwidth: 50% drop in efficiency using addresses which differ by 4096+64 bytes.
There are probably other potential issues.
Subquestion: even if you're nowhere near filling up RAM, are there cases a modern OS will page a running process out anyways? (I think Linux of the 90s might have done so to increase file caching, for instance.)
On both Windows and Linux, there is a maximum working set size limit on each process. On Linux, this is called RLIMIT_RSS and is not enforced. On Windows, this limit can be either soft (the default) or hard. The limit is only enforced if it is hard (which can be specified by calling the SetProcessWorkingSetSizeEx function and passing the QUOTA_LIMITS_HARDWS_MIN_ENABLE flag). When a process reaches its hard working set limit, further memory allocation requests will be satisfied by paging out some of its pages to the page file, even if free physical memory is available.
I don't know about Linux of the 90s.
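Coming back to the Windows working-set limits: if you want to experiment with a hard limit, a minimal sketch looks something like this (the sizes are arbitrary; QUOTA_LIMITS_HARDWS_MAX_ENABLE is the companion flag that makes the maximum limit hard):

    // Windows-only sketch: turn the soft working-set limits into hard ones.
    #include <windows.h>

    bool limit_working_set(SIZE_T min_bytes, SIZE_T max_bytes)
    {
        return SetProcessWorkingSetSizeEx(
                   GetCurrentProcess(), min_bytes, max_bytes,
                   QUOTA_LIMITS_HARDWS_MIN_ENABLE |   // working set won't drop below min
                   QUOTA_LIMITS_HARDWS_MAX_ENABLE)    // working set can't grow above max
               != FALSE;
    }

    // e.g. limit_working_set(16u * 1024 * 1024, 256u * 1024 * 1024);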
Footnotes:
(1) The Intel optimization manual mentions in Section 2.2.3 that Skylake can do two page walks in parallel compared to one in Haswell and earlier microarchitectures. To my knowledge, the manual does not clearly mention whether Broadwell can do one or two page walks in parallel. However, according to slide 10 of these Intel slides (entitled "Intel Xeon Processor D: The First Xeon processor optimized for dense solutions"), Xeon Broadwell supports two page walks. I think this also applies to all Broadwell models.
I know that there are a lot of questions regarding the issue I'm raising here, but I couldn't find any comprehensive answer (neither on StackOverflow nor in other sources).
I would like to ask about heap (RAM) fragmentation problem.
As I understood there are two kind of fragmentation:
internal - related to the difference between the allocation unit size (AU) and the size of the allocated memory AM (the wasted memory is AU - (AM % AU) whenever AM is not a multiple of AU),
external - related to non-contiguous areas of free memory, so even if the sum of the free areas could satisfy a new allocation request, it fails if there is no single contiguous area large enough to handle it.
This is quite clear. The problems start when the "paging" appears.
Sometimes I find information saying that paging solves the external fragmentation issue.
Indeed I agree that thanks to paging the OS is able to create virtually contiguous areas of memory, assigned to the process, even if physically the parts of the memory are scattered.
But how exactly does it help with the external fragmentation?
I mean, assuming that the size of a page is 4 kB and we want to allocate 16 kB, then of course we just need to find four empty page frames, even if physically the frames are not part of one contiguous area.
But what about the case of smaller allocations?
I believe the page itself can still be fragmented and (in worst case) the OS still needs to provide a new frame if the old one cannot be used to allocate the requested memory.
So is it that (assuming the worst case) sooner or later, with paging or without, a long-running application that allocates and releases heap memory (of different sizes) will fall into a low-memory condition because of external fragmentation?
So the question is how to deal with the external fragmentation?
Our own implementation of an allocation algorithm? Paging (as I wrote, I'm not sure it helps)? What else? Does the OS (Windows, Linux) provide some defragmentation methods?
The most radical solution is to forbid use of the heap, but is it really necessary for platforms with paging, virtual address spaces, virtual memory, etc. ... when the only issue is that the applications need to run non-stop for years?
One more issue: is internal fragmentation an ambiguous term?
Somewhere I have spotted the definition that internal fragmentation refers to the part of a page frame that is wasted because the process does not need more memory, but a single frame cannot be owned by more than a single process.
I have bolded the questions, so the people who are in hurry could find the question without reading everything.
Regards!
"Fragmentation" is indeed not a very precise term. But we can say for sure that when a running application needs a block of n bytes and there are n or more bytes not in use, yet we can't get the required block, then "memory is too fragmented."
But how exactly does it [paging] help with the external allocation [I assume you mean fragmentation] ?
There's really nothing complicated here. External fragmentation is free memory between allocated blocks that's "too small" to satisfy any application requirement. This is a general concept. The definition of "too small" is application-dependent. Nonetheless, if allocated blocks can fall on any boundary, then it's easy, after many allocations and deallocations, for lots of such fragments to occur. Paging helps with external fragmentation in two ways.
First, it subdivides memory into fixed-size adjacent chunks - the pages - that are "large enough" so they're never useless. Again the definition of "large enough" is not precise. But most applications will have lots of requirements satisfiable by a single 4k page. Since no external fragmentation problem can occur for allocations of a page or less, the problem has been mitigated.
Second, the paging hardware provides a level of indirection between application pages and physical memory pages. Therefore any free physical memory page can be used to help satisfy any application request, no matter how large. For example, suppose you have 100 physical pages with every other physical page (50 of them) allocated. Without page-mapping hardware, the biggest request for contiguous memory that can be satisfied is 1 page. With mapping, it's 50 pages. (I'm disregarding virtual pages allocated initially with no mapped physical page. That's another discussion.)
But what about the case of smaller allocations?
Again it's pretty simple. If the unit of allocation is a page, then any allocation smaller than a page yields an unused portion. This is internal fragmentation: unusable memory within an allocated block. The bigger you make allocation units (they don't have to be a single page), the more memory will be unusable due to internal fragmentation. On average, this will tend toward half of an allocation unit. Consequently, though OS's tend to allocate in units of pages, most application-side memory allocators request a very small number (often one) of big blocks (of pages) from the OS. They use much smaller allocation units internally: 4-16 bytes is pretty common.
So the question is how to deal with the external allocation [I assume you mean fragmentation]? So is it that (assuming the worst case) sooner or later, with paging or without, a long-running application that allocates and releases heap memory (of different sizes) will fall into a low-memory condition because of external fragmentation?
If I understand you correctly, you're asking if fragmentation is inevitable. Except under very special conditions (e.g. the application only needs blocks of one size), the answer is yes. But that doesn't mean it's necessarily a problem.
Memory allocators use smart algorithms that limit fragmentation pretty effectively. For example, they may maintain "pools" with different block sizes, using the pool with block size most closely matching a given request. This tends to limit both internal and external fragmentation. A real world example that's very well documented is dlmalloc. The source code is also very clear.
Of course any general purpose allocator can fail under specific conditions. For this reason, modern languages (C++ and Ada are two I know) let you supply special-purpose allocators for objects of a given type. Typically - for a fixed-size object - these might simply maintain a pre-allocated free list, so fragmentation for that particular case is zero, and allocation/deallocation are very fast.
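As a concrete illustration of that last point, here is a toy fixed-size free-list pool (the class name and capacity are invented; real special-purpose allocators are more elaborate):

    // Fixed-capacity pool: O(1) allocate/deallocate and zero external
    // fragmentation for objects of one size. Not thread-safe; illustration only.
    #include <cstddef>
    #include <new>

    template <class T, std::size_t N = 1024>
    class FixedPool {
        union Slot {
            Slot* next;
            alignas(T) unsigned char storage[sizeof(T)];
        };
        Slot  slots_[N];
        Slot* free_ = nullptr;

    public:
        FixedPool() {
            for (std::size_t i = 0; i < N; ++i) {   // thread every slot onto the free list
                slots_[i].next = free_;
                free_ = &slots_[i];
            }
        }
        void* allocate() {
            if (!free_) throw std::bad_alloc{};
            Slot* s = free_;
            free_ = s->next;
            return s;                               // caller placement-news a T here
        }
        void deallocate(void* p) {
            Slot* s = static_cast<Slot*>(p);        // must be a pointer from allocate()
            s->next = free_;
            free_ = s;
        }
    };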
One more note: It's possible to totally eliminate fragmentation with copying/compacting garbage collection. Of course this requires underlying language support, and there's a performance bill to pay. A copying garbage collector compacts the heap by moving objects to eliminate unused space completely whenever it runs to reclaim storage. To do this it must update every pointer in the running program to the corresponding object's new location. While this may sound complex, I've implemented a copying garbage collector, and it's not so bad. The algorithms are extremely cool. Unfortunately, the semantics of many languages (e.g. C and C++) don't allow finding every pointer in the running program, which is required.
The most radical solution is to forbid use of the heap, but is it really necessary for platforms with paging, virtual address spaces, virtual memory, etc. ... when the only issue is that the applications need to run non-stop for years?
Though general purpose allocators are good, they're not guaranteed. It's not unusual for safety-critical or hard real time constrained systems to avoid heap use completely. On the other hand, when no absolute guarantee is needed, a general purpose allocator is often fine. There are many systems that run perfectly with tough loads for extended periods using general purpose allocators: fragmentation reaches an acceptable steady state and doesn't cause a problem.
One more issue.. is internal fragmentation an ambiguous term ?
The term isn't ambiguous, but is used in different contexts. The invariant is that it's referring to unused memory inside allocated blocks.
OS literature tends to assume the allocation unit is pages. For example, Linux sbrk lets you request the end of the data segment be set anywhere, but Linux allocates pages, not bytes, so the unused part of the last page is internal fragmentation from the OS's point of view.
Application-oriented discussions tend to assume allocation is in "blocks" or "chunks" of arbitrary size. dlmalloc uses about 128 discrete chunk sizes, each maintained in its own free list. Plus, it will custom allocate very large blocks using OS memory mapping system calls, so there's at most a page size (minus 1 byte) of mismatch between request and actual allocation. Clearly it's going to a lot of trouble to minimize internal fragmentation. The fragmentation caused by a given allocation is the difference between the request and the chunk actually allocated. Since there are so many chunk sizes, that difference is strictly limited. On the other hand, the many chunk sizes increase chances of external fragmentation problems: free memory may consist entirely of chunks that are well-managed by dlmalloc, yet too small to honor an application requirement.
I'm reading about how the TLB works and I came across this:
Lots of workloads (though certainly not all) approach 100% TLB hit rate.
What kind of workloads? Any example would really help.
Here's a good little paper I found through DuckDuckGo that explains quite well how the TLB works (part of a Comp Sci course from the University of Wisconsin-Madison, by the looks of it): https://pages.cs.wisc.edu/~remzi/OSTEP/vm-tlbs.pdf. That should give you enough to start reasoning about how your data layout and algorithms might affect your TLB hit rate.
Essentially, it comes down to keeping things close together in memory, so you minimize the number of different pages you have to access when accessing your data and instructions. With 4K pages or larger being typical, it turns out that most programs typically will have a very high TLB hit rate. In fact, the computer systems we use today have been specifically designed this way, based on extensive research into what most computing workloads typically did before those systems were designed.
If you think about a function running without any branches, you can picture the sequential fetching and executing of each instruction, beginning at some point in a page and continuing down the page. The first instruction may result in a TLB miss, but every instruction after that will be a TLB hit, until the end of the page is reached. When the first instruction on the next page is accessed, there may be another TLB miss, but then the rest of the page of instructions will all be hits. It gets even better if the function contains a loop, because the CPU will jump back to something it just executed. It may not even leave the same page for a long time.
Programs also tend to access data on pages that they have just recently accessed. For example, all the automatic variables within a function will typically reside on one or two stack pages, and in many cases, all the stack frames for all the functions in the call stack will often fit into a small number of pages. For heap-allocated or statically-allocated memory, there is usually some locality too. The paper I mention above gives a good example of how array traversals result in high TLB hit rates.
Workloads that tend to have relatively low TLB hit rates are those that spread their frequently-accessed data and/or instructions over a large number of pages, with infrequently-accessed data/instructions in between; and those that do things to invalidate their TLB entries (e.g., on some systems a process context switch results in the TLB being dumped).
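To make that concrete, here is a contrived pair of loops (the names and the 4 KiB stride are mine); both read exactly the same data, but the second touches a different page on nearly every access, so on typical hardware it will miss in the TLB far more often:

    #include <cstddef>
    #include <vector>

    long long sum_sequential(const std::vector<int>& v)
    {
        long long s = 0;
        for (int x : v)
            s += x;                  // ~1024 ints per 4 KiB page: TLB misses are rare
        return s;
    }

    long long sum_page_strided(const std::vector<int>& v)
    {
        const std::size_t stride = 4096 / sizeof(int);   // one element per page
        long long s = 0;
        for (std::size_t start = 0; start < stride; ++start)
            for (std::size_t i = start; i < v.size(); i += stride)
                s += v[i];           // jumps to a different page each iteration
        return s;
    }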
I'm reading the following paper: http://www-db.in.tum.de/~leis/papers/ART.pdf and in it, they say in the abstract:
Main memory capacities have grown up to a point where most databases fit into RAM. For main-memory database systems, index structure performance is a critical bottleneck. Traditional in-memory data structures like balanced binary search trees are not efficient on modern hardware, because they do not optimally utilize on-CPU caches. Hash tables, also often used for main-memory indexes, are fast but only support point queries.
How can I better understand this utilization of on-CPU caches and how it impacts the performance of particular data structures/algorithms?
Just somewhere to get started would be great because this sort of analysis is really opaque to me and I don't know where to go to start understanding.
This is going to be a really basic answer, as it would otherwise be extremely broad. I'm also not an expert on the subject (picking up bits and pieces to help understand how to optimize my hotspots better). But it might help you get started investigating this subject.
The topic reminds me of my university days when computer architecture courses only taught about registers, DRAM, and disk, while glossing over the CPU cache in between. The CPU cache is one of the most dominant factors these days in performance.
The memory of the computer is divided into a hierarchy ranging from the absolute biggest but slowest (disk) to absolute smallest but fastest (registers).
Below disk is DRAM, which is still pretty slow. Between DRAM and the registers sits the CPU cache, which is pretty damned fast (especially the smallest L1 cache).
Accessing One Node
Now let's say you request to access memory in some form from some data structure, say a linked structure like a tree or linked list and we're just accessing one node.
Note, I'm inverting the view of memory access for simplicity. Typically it begins with an instruction to load something into a register with the process working backwards and forwards, rather than merely forwards.
Virtual to Physical (DRAM)
In this case, unless the memory is already mapped to physical memory, the operating system has to map a page from virtual memory to a physical address in DRAM (this is freaking slow, especially in the worst-case scenario where the page fault involves a disk access). This is often done in pretty hefty chunks (the machine grabs memory by the handful), like aligned 4-kilobyte chunks. So we end up grabbing a big old 4-kilobyte aligned chunk of memory just for this one node.
DRAM to CPU Cache
Now that this 4-kilobyte page is physically mapped, we still want to do something with the node (most instructions have to operate at the register level) so the computer moves it down through the CPU cache hierarchy (this is pretty slow). Typically all levels of CPU cache have the same cache-line size, like 64-byte cache lines on Intel.
To move the memory from DRAM into these CPU caches, we have to grab a chunk of cache-line-sized-and-aligned memory from DRAM and move it into the CPU cache. We might also have to evict some data already in various levels of the CPU cache hierarchy on the way, like the least recently used memory. So now we're grabbing a 64-byte aligned handful of memory for this node.
Maybe at this point the cache line's contents look something like this, where 42 is the relevant node data and ??? marks irrelevant surrounding memory that isn't part of our linked data structure:

    [ ??? | ??? | 42 | ??? | ??? | ??? | ??? | ??? ]
CPU Cache to Register
Now we move the memory from CPU cache into a register (this occurs very quickly). And here we're still grabbing memory in sort of a handful, but a pretty small one. For example, we might grab a 64-bit aligned chunk of memory and move it into a general-purpose register. So we grab the memory around "42" here and move it into a register.
Finally we do some operations on the register and store the results, and the results often kind of work their way back up the memory hierarchy.
Accessing One Other Node
When we access the next node in the linked structure, we end up having to potentially do this all over again, just to read one little node's data. The contents of that cache line might look like this, with 22 being the node data of interest:

    [ ??? | 22 | ??? | ??? | ??? | ??? | ??? | ??? ]
We can see potentially how much wasted effort the hardware and operating system are applying, moving big, aligned chunks of data from slower memory to faster memory only in order to access one little teeny bit of it prior to eviction.
And that's why little objects all allocated separately, as in the case of linked nodes or languages which can't represent user-defined types contiguously, aren't very cache or page-friendly. They tend to invoke a lot of page faults and cache misses as we traverse them, accessing their data. That is, unless they have help from a memory allocator which allocates these nodes in a more contiguous fashion (in which case the data or two or more nodes might be right next to each other and accessed together).
Contiguity and Spatial Locality
The most cache-friendly data structures tend to be based on contiguous arrays (it doesn't have to be one gigantic array, but perhaps arrays linked together, e.g., as is the case of an unrolled list). When we iterate through an array and access the first element, we might have to go through the motions described above, but once the memory is moved into a cache line we might get something like this, with 17 being the first element and the rest of the line holding the elements that follow it:

    [ 17 | ... | ... | ... | ... | ... | ... | ... ]

Now we can iterate through the array and access all of these elements while they sit in the second-fastest form of memory on the machine, the L1 cache, simply moving data from L1 cache to register after the initial compulsory cache miss/page fault. If we start at 17, we pay that initial compulsory cache miss, but all the subsequent elements in this cache line can then be accessed without repeating the motions above. This is extremely fast, and the computer can blaze through such data.
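A tiny side-by-side of the two layouts being contrasted (the types and names are made up): the array version walks memory linearly, sixteen 4-byte ints per 64-byte line, while the list version chases pointers that may each land on a cold cache line or page.

    #include <vector>

    struct ListNode {
        int       value;
        ListNode* next;
    };

    long long sum_linked(const ListNode* head)
    {
        long long s = 0;
        for (const ListNode* n = head; n; n = n->next)
            s += n->value;           // each hop can miss in cache and TLB
        return s;
    }

    long long sum_array(const std::vector<int>& values)
    {
        long long s = 0;
        for (int v : values)
            s += v;                  // sequential, prefetch-friendly, few misses
        return s;
    }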
So that was what was meant by this part:
Traditional in-memory data structures like balanced binary search trees are not efficient on modern hardware, because they do not optimally utilize on-CPU caches.
Note that it is possible to make linked structures like trees and linked lists substantially more cache-friendly than they would naturally be using a custom memory allocator, but they lack this inherent cache-friendliness at the basic data structure level.
Hash tables, on the other hand, tend to be contiguous table structures based on arrays. They might use chaining and linked bucket structures, but those are also easier to make cache-efficient with a little teeny bit of help from the custom allocator (far less than the tree due to the simpler, sequential access patterns within a hash bucket).
So anyway, that's a little brief overview on the subject, a bit oversimplified, but hopefully enough to help get started. If you want to understand this subject at a deeper level, keywords would be cache/memory efficiency/optimization and locality of reference.
After Compute Capability 2.0 (Fermi) was released, I've wondered if there are any use cases left for shared memory. That is, when is it better to use shared memory than just let L1 perform its magic in the background?
Is shared memory simply there to let algorithms designed for CC < 2.0 run efficiently without modifications?
To collaborate via shared memory, threads in a block write to shared memory and synchronize with __syncthreads(). Why not simply write to global memory (through L1), and synchronize with __threadfence_block()? The latter option should be easier to implement since it doesn't have to relate to two different locations of values, and it should be faster because there is no explicit copying from global to shared memory. Since the data gets cached in L1, threads don't have to wait for data to actually make it all the way out to global memory.
With shared memory, one is guaranteed that a value put there remains there throughout the duration of the block. This is as opposed to values in L1, which get evicted if they are not used often enough. Are there any cases where it's better to cache such rarely used data in shared memory than to let L1 manage it based on the usage pattern that the algorithm actually has?
There are two big reasons why automatic caching is less efficient than manually managed scratchpad memory (this applies to CPUs as well):
parallel accesses to random addresses are more efficient. Example: histogramming. Let's say you want to increment N bins, each more than 256 bytes apart. Then, due to coalescing rules, that will result in N serialized reads/writes, since global and cache memory are organized in large ~256-byte blocks. Shared memory doesn't have that problem (see the kernel sketch at the end of this answer).
Also, to access global memory you have to do virtual-to-physical address translation. Having a TLB that can do lots of translations in parallel would be quite expensive. I haven't seen any SIMD architecture that actually does vector loads/stores in parallel, and I believe this is the reason why.
avoids writing back dead values to memory, which wastes bandwidth & power. Example: in an image processing pipeline, you don't want your intermediate images to get flushed to memory.
Also, according to an NVIDIA employee, current L1 caches are write-through (immediately writes to L2 cache), which will slow down your program.
So basically, the caches get in the way if you really want performance.
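As a sketch of the histogram case from the first bullet (written from memory, not benchmarked): each block accumulates into a block-private __shared__ histogram, so the scattered atomic increments stay in the on-chip scratchpad, and only 256 atomics per block touch global memory at the end.

    // 256-bin histogram using shared memory as a manually managed scratchpad.
    __global__ void histogram256(const unsigned char* data, size_t n,
                                 unsigned int* bins)
    {
        __shared__ unsigned int local[256];
        for (int i = threadIdx.x; i < 256; i += blockDim.x)
            local[i] = 0;                       // cooperatively zero the block-local bins
        __syncthreads();

        for (size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
             i < n;
             i += (size_t)blockDim.x * gridDim.x)
            atomicAdd(&local[data[i]], 1u);     // scattered updates stay on-chip
        __syncthreads();

        for (int i = threadIdx.x; i < 256; i += blockDim.x)
            atomicAdd(&bins[i], local[i]);      // merge into the global histogram
    }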
As far as I know, L1 cache in a GPU behaves much like the cache in a CPU. So your comment that "This is as opposed to values in L1, which get evicted if they are not used often enough" doesn't make much sense to me.
Data in the L1 cache isn't evicted when it isn't used often enough. Usually it is evicted when a request is made for a memory region that wasn't previously in the cache, and whose address resolves to one that is already in use. I don't know the exact caching algorithm employed by NVIDIA, but assuming a regular n-way set-associative cache, each memory entry can only be cached in a small subset of the entire cache, based on its address.
I suppose this may also answer your question. With shared memory, you get full control as to what gets stored where, while with cache, everything is done automatically. Even though the compiler and the GPU can still be very clever in optimizing memory accesses, you can sometimes still find a better way, since you're the one who knows what input will be given, and what threads will do what (to a certain extent of course)
Caching data through several memory layers always needs to follow a cache-coherency protocol. There are several such protocols and the decision on which one is the most suitable is always a trade off.
You can have a look at some examples:
Related to GPUs
Generally for computing units
I don't want to get into many details, because it is a huge domain and I am not an expert. What I want to point out is that in a shared-memory system (here the term shared does not refer to the so-called shared memory of GPUs) where many compute units (CUs) need data concurrently, there is a memory protocol that attempts to keep the data close to the units so that they can fetch it as fast as possible. In the example of a GPU, when many threads in the same SM (streaming multiprocessor) access the same data, there should be coherency in the sense that if thread 1 reads a chunk of bytes from global memory and in the next cycle thread 2 is going to access that data, then an efficient implementation would be such that thread 2 knows the data is already in the L1 cache and can access it fast. This is what the cache coherency protocol attempts to achieve: to let all compute units be up to date with what data exist in caches L1, L2 and so on.
However, keeping threads up to date, or else keeping threads in coherent states, comes at some cost, which is essentially wasted cycles.
In CUDA, by declaring memory as shared rather than relying on the L1 cache, you free it from that coherency protocol. So access to that memory (which is physically the same piece of whatever material it is) is direct and does not implicitly invoke the machinery of the coherency protocol.
I don't know how fast this should be; I didn't run any such benchmark, but the idea is that since you no longer pay for this protocol, access should be faster!
Of course, the shared memory on NVIDIA GPUs is split into banks, and anyone who wants to use it for performance improvement should have a look at this first. The reason is bank conflicts, which occur when two threads access different addresses in the same bank and cause the accesses to be serialized... but that's another topic.