Could someone help me understand VkPhysicalDeviceMemoryProperties? - memory-management

I'm trying to figure it out, but I'm getting a little stuck.
The way the types and heaps are related is simple, if a bit strange. (why not just give VkMemoryHeap a VkMemoryType member?)
I think I understand what all the VkMemoryPropertyFlags mean; they seem fairly straightforward.
But what's with the VkMemoryHeap.flags member? It apparently has only one valid non-zero value, VkMemoryHeapFlagBits.VK_MEMORY_HEAP_DEVICE_LOCAL_BIT, and though that wouldn't be too odd on its own, there's also a VkMemoryPropertyFlagBits.VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT that could be present on the memory types of that heap.
What does the VkMemoryHeap.flags member mean and how does it relate to the VkMemoryType.flags member?

Vulkan recognizes two distinct concepts when it comes to memory. There are the actual physical pieces of RAM that the device can talk to. Then there are ways to allocate memory from one of those pools of RAM.
A heap represents a specific piece of RAM. VkMemoryHeap is the object that describes one of the available heaps of RAM that the device can talk to. There really aren't that many things that define a particular heap. Just two: the number of bytes of that RAM's storage, and the storage's location relative to the Vulkan device (local vs. non-local).
A memory type is a particular means of allocating memory from a specific heap. VkMemoryType is the object that describes a particular way of allocating memory. And there are a lot more descriptive flags for how you can allocate memory from a heap.
For a more concrete example, consider a standard PC setup with a discrete GPU. The device has its own local RAM, but the discrete GPU can also access CPU memory. So a Vulkan device will have two heaps: one of them will be local, the other non-local.
However, there will usually be more than two memory types. You usually have one memory type that represents local memory, which does not have the VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT set. That means you can't map the memory; you can only access it via transfer operations from some other memory type (or from rendering operations or whatever).
But you will often have two memory types that both use the same non-local heap. Both will have VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT set, thus allowing mapping. However, one of them will likely have the VK_MEMORY_PROPERTY_HOST_CACHED_BIT flag set, while the other will have VK_MEMORY_PROPERTY_HOST_COHERENT_BIT. This lets you choose between cached CPU access (thus requiring an explicit flush of ranges of modified memory) and uncached CPU access.
But while they are two separate memory types, they both allocate from the same heap. Which is why VkMemoryType has an index that refers to the heap whose memory it allocates from.
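To make that concrete, here's a minimal sketch of how an application typically walks VkPhysicalDeviceMemoryProperties to pick a memory type for an allocation. The findMemoryType name and the requiredFlags parameter are just illustrative; only the structures and the vkGetPhysicalDeviceMemoryProperties call come from the Vulkan API:

    #include <vulkan/vulkan.h>
    #include <cstdint>

    // Illustrative helper: find a memory type that (a) is allowed by a
    // resource's memoryTypeBits and (b) has all of the requested property flags.
    uint32_t findMemoryType(VkPhysicalDevice physicalDevice,
                            uint32_t memoryTypeBits,
                            VkMemoryPropertyFlags requiredFlags)
    {
        VkPhysicalDeviceMemoryProperties memProps;
        vkGetPhysicalDeviceMemoryProperties(physicalDevice, &memProps);

        for (uint32_t i = 0; i < memProps.memoryTypeCount; ++i) {
            bool allowed  = (memoryTypeBits & (1u << i)) != 0;
            bool hasFlags = (memProps.memoryTypes[i].propertyFlags & requiredFlags)
                            == requiredFlags;
            if (allowed && hasFlags)
                return i; // memoryTypes[i].heapIndex says which heap this type draws from
        }
        return UINT32_MAX; // nothing suitable; a real app would retry with weaker flags
    }

You would pass the returned index as memoryTypeIndex in VkMemoryAllocateInfo; the heap only enters the picture indirectly, through that heapIndex field.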
Only thing I'm not getting is how the two DEVICE_LOCAL flags interact.
Did you look at the specification? It's not exactly hiding how this works:
if propertyFlags has the VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT bit set, memory allocated with this type is the most efficient for device access. This property will only be set for memory types belonging to heaps with the VK_MEMORY_HEAP_DEVICE_LOCAL_BIT set.
Is it saying that if the memory is local then all types corresponding to that memory are local, or that they can be local?
You seem to be trying to impose the wrong meaning on these things. Just look at what the specification says and take it at face value.
PROPERTY_DEVICE_LOCAL denotes a memory type which will achieve the best device access performance. The only connection between this and MEMORY_DEVICE_LOCAL is that memory types with PROPERTY_DEVICE_LOCAL will only be associated with memory heaps that use MEMORY_DEVICE_LOCAL.
That's the only relevant meaning here.
If you want an example of when a memory heap would be device local, yet have memory types that aren't, consider a GPU that has no memory of its own. There's only one heap, which is therefore MEMORY_DEVICE_LOCAL.
However, allocating memory from that pool in a way that makes it host-visible may decrease the performance of device access to that memory. Therefore, for such hardware, the host-visible memory types for the same heap will not use PROPERTY_DEVICE_LOCAL.
Then again, other hardware doesn't lose performance from making memory host-visible. So they only have one memory type, which has all of the available properties. For Intel, their on-chip GPUs apparently have access to some level of the CPU's caches.
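If it helps to see the relationship on real hardware, here is a small sketch (the dumpMemoryLayout name is mine) that prints each memory type's DEVICE_LOCAL property next to its heap's DEVICE_LOCAL flag. Per the spec text quoted above, the type flag implies the heap flag, never the other way around:

    #include <vulkan/vulkan.h>
    #include <cstdio>

    void dumpMemoryLayout(VkPhysicalDevice physicalDevice)
    {
        VkPhysicalDeviceMemoryProperties props;
        vkGetPhysicalDeviceMemoryProperties(physicalDevice, &props);

        for (uint32_t t = 0; t < props.memoryTypeCount; ++t) {
            uint32_t heap = props.memoryTypes[t].heapIndex;
            bool heapLocal = (props.memoryHeaps[heap].flags &
                              VK_MEMORY_HEAP_DEVICE_LOCAL_BIT) != 0;
            bool typeLocal = (props.memoryTypes[t].propertyFlags &
                              VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT) != 0;
            // Expect: typeLocal == 1 only on rows where heapLocal == 1.
            std::printf("type %u -> heap %u: heap DEVICE_LOCAL=%d, type DEVICE_LOCAL=%d\n",
                        t, heap, (int)heapLocal, (int)typeLocal);
        }
    }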

Related

Why should userspace applications lock eBPF maps?

When you create eBPF maps, memory is allocated in kernel space, and kernel memory never gets swapped out. Why, then, does the userspace application need to call setrlimit() with RLIMIT_MEMLOCK?
RLIMIT_MEMLOCK refers to the amount of memory that may be locked into RAM, it is not specific to allocations in the user space memory address range. On pre-5.11 kernels, memory used for eBPF objects (programs, maps, BTF objects etc.) is accounted against this resource, meaning that if you create too many of them at a given time, or even in a short interval of time (given that there is a small delay after the deletion of an object before the kernel reclaims the corresponding memory area), you may hit the resource limit and get a -EPERM as an answer. For privileged users, calling setrlimit() to lift the restrictions on this resource is the usual workaround indeed.
Note that in Linux 5.11, rlimit-based accounting was dropped in favour of cgroup-based memory accounting. The rlimit had a number of downsides (refer to the cover letter linked above for details), and the cgroup-based accounting is more flexible, allowing better control and offering easier ways to retrieve the current amount of memory used. It should also reflect the actual memory consumption, which was not necessarily the case with the rlimit.
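For pre-5.11 kernels, the usual workaround looks something like this minimal sketch (plain libc calls, no libbpf helpers; the bump_memlock_rlimit name is just illustrative, and it needs sufficient privileges such as root or CAP_SYS_RESOURCE):

    #include <sys/resource.h>
    #include <stdio.h>

    // Lift the RLIMIT_MEMLOCK cap before creating eBPF objects (pre-5.11 kernels).
    int bump_memlock_rlimit(void)
    {
        struct rlimit r = { RLIM_INFINITY, RLIM_INFINITY };
        if (setrlimit(RLIMIT_MEMLOCK, &r) != 0) {
            perror("setrlimit(RLIMIT_MEMLOCK)");
            return -1;
        }
        return 0;
    }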

Variable allocation and tracking

I started searching and reading about ALDS and memory management recently, after a question about memory allocation came to mind. After a couple of days of study I learnt a lot about memory management, but the original question remains unanswered.
The question is this: while allocating memory to a variable, how exactly does the system know which blocks of memory are in use and which are free? Similarly, when we destroy an object, set a variable to null, or the GC frees up some memory, what exactly happens to that block of memory? As I understand it, the actual data is never erased on deletion; the block just gets marked as free somewhere in some table. But does that table keep track of every single bit of memory? If so, wouldn't that table itself become a lot of data to store?
For example, if I declare a linked list, a block will be allocated on the heap with its next pointer set to null, since there is no other node to reference. As I keep adding nodes, the system keeps allocating more blocks, each containing a reference to the next one. These blocks can end up at arbitrary locations depending on what memory is available at allocation time, and can only be reached through the preceding nodes.
So, for any given block of memory, how does the system know whether it is free and just holds garbage, or is actually a node of some linked list?
On a modern operating system the process has a logical, linear address space. Part of that address space is reserved for the system and is common to all processes. Some of the address space may be reserved but most of the remainder is available to the process.
The address space is defined by PAGE TABLES. The structure of the page table is defined by the processor but the operating system maintains a table for each process. Memory is allocated to a process in PAGES. The smallest I am aware of is 512 bytes but the size can go up to a megabyte or even larger in some processors and some processor configurations. The size is always a power of 2.
The page table defines:
Whether a page has actually been mapped to the process
Whether the page has a corresponding physical memory location
If so, the mapping to that physical location.
The operating system only knows about pages.
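As a small illustration of that page granularity (assuming 4 KiB pages purely for the example), a virtual address is just a page number, which the page table maps, plus an offset that passes through unchanged:

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096u  // example page size; real sizes vary by processor and configuration

    int main(void)
    {
        uint64_t vaddr  = 0x7f3a12345678ull;
        uint64_t page   = vaddr / PAGE_SIZE; // index the page table translates
        uint64_t offset = vaddr % PAGE_SIZE; // carried over to the physical page unchanged
        printf("page number 0x%llx, offset 0x%llx\n",
               (unsigned long long)page, (unsigned long long)offset);
        return 0;
    }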
At the next level down there are memory managers. These are not part of the operating system. Memory managers manage heaps that consist of pages allocated by the operating system. The memory manager has to keep track of the heap size and what memory has been allocated within it.
Memory managers operate in a huge number of different ways. There are malloc/free implementations galore that you can link into your code to get different behaviors.
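As a rough illustration of the bookkeeping such a manager might do, here is a heavily simplified first-fit free-list sketch. The header layout and names are made up for the example, and a real malloc implementation is far more sophisticated:

    #include <stddef.h>

    // Each block carries a small header; the allocator only tracks the blocks
    // it handed out or freed, not every bit of memory.
    struct BlockHeader {
        size_t size;              // usable bytes that follow this header
        int    free;              // 1 if the block is on the free list
        struct BlockHeader *next; // next block in the heap region
    };

    static struct BlockHeader *heap_head = NULL;

    // First-fit search: reuse the first free block that is large enough.
    static struct BlockHeader *find_free_block(size_t size)
    {
        for (struct BlockHeader *b = heap_head; b != NULL; b = b->next) {
            if (b->free && b->size >= size)
                return b;
        }
        return NULL; // caller would then grow the heap (e.g. via sbrk/mmap)
    }

    // "Freeing" just marks the header; the payload bytes are left untouched,
    // which is why deleted data is not actually erased.
    static void mark_free(void *payload)
    {
        struct BlockHeader *b = (struct BlockHeader *)payload - 1;
        b->free = 1;
    }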

How to deal with external fragmentation, how paging helps with external fragmentation?

I know there are a lot of questions regarding the issue I'm raising here, but I couldn't find a comprehensive answer (neither on StackOverflow nor in other sources).
I would like to ask about heap (RAM) fragmentation problem.
As I understand it there are two kinds of fragmentation:
internal - related to the difference between the allocation unit size (AU) and the size of the allocated memory AM (the wasted memory is the unused part of the last allocation unit, i.e. AU - (AM mod AU) when AM is not a multiple of AU),
external - related to non-contiguous areas of free memory: even if the sum of the free areas could satisfy a new allocation request, the request fails if there is no single contiguous area large enough.
This is quite clear. The problems start when the "paging" appears.
I sometimes find statements that paging solves the external fragmentation issue.
Indeed, I agree that thanks to paging the OS is able to create virtually contiguous areas of memory assigned to a process, even if the underlying physical memory is scattered.
But how exactly does it help with the external fragmentation?
I mean, assuming that the page size is 4 kB and we want to allocate 16 kB, then of course we just need to find four empty page frames, even if physically the frames are not part of a contiguous area.
But what about smaller allocations?
I believe a page itself can still be fragmented, and (in the worst case) the OS still needs to provide a new frame if the old one cannot be used for the requested allocation.
So is it the case that (assuming the worst), sooner or later, with or without paging, a long-running application that allocates and releases heap memory of different sizes will fall into a low-memory condition because of external fragmentation?
So the question is: how do we deal with external fragmentation?
Our own implementation of an allocation algorithm? Paging (as I wrote, I'm not sure it helps)? What else? Do OSes (Windows, Linux) provide any defragmentation methods?
The most radical solution is to forbid use of the heap altogether, but is that really necessary on platforms with paging, virtual address spaces, virtual memory, etc., when the only issue is that the application needs to run non-stop for years?
One more issue: is internal fragmentation an ambiguous term?
Somewhere I spotted a definition saying that internal fragmentation refers to the part of a page frame that is wasted because the process does not need more memory, yet a single frame cannot be owned by more than one process.
"Fragmentation" is indeed not a very precise term. But we can say for sure that when a running application needs a block of n bytes and there are n or more bytes not in use, yet we can't get the required block, then "memory is too fragmented."
But how exactly does it [paging] help with external fragmentation?
There's really nothing complicated here. External fragmentation is free memory between allocated blocks that's "too small" to satisfy any application requirement. This is a general concept. The definition of "too small" is application-dependent. Nonetheless, if allocated blocks can fall on any boundary, then it's easy, after many allocations and deallocations, for lots of such fragments to occur. Paging helps with external fragmentation in two ways.
First, it subdivides memory into fixed-size adjacent chunks - the pages - that are "large enough" so they're never useless. Again the definition of "large enough" is not precise. But most applications will have lots of requirements satisfiable by a single 4k page. Since no external fragmentation problem can occur for allocations of a page or less, the problem has been mitigated.
Second, the paging hardware provides a level of indirection between application pages and physical memory pages. Therefore any free physical memory page can be used to help satisfy any application request, no matter how large. For example, suppose you have 100 physical pages with every other physical page (50 of them) allocated. Without page-mapping hardware, the biggest request for contiguous memory that can be satisfied is 1 page. With mapping, it's 50 pages. (I'm disregarding virtual pages allocated initially with no mapped physical page. That's another discussion.)
But what about smaller allocations?
Again it's pretty simple. If the unit of allocation is a page, then any allocation smaller than a page yields an unused portion. This is internal fragmentation: unusable memory within an allocated block. The bigger you make allocation units (they don't have to be a single page), the more memory will be unusable due to internal fragmentation. On average, this will tend toward half of an allocation unit. Consequently, though OS's tend to allocate in units of pages, most application-side memory allocators request a very small number (often one) of big blocks (of pages) from the OS. They use much smaller allocation units internally: 4-16 bytes is pretty common.
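As a tiny worked example of that arithmetic (the request size and units are only illustrative):

    #include <stdio.h>
    #include <stddef.h>

    // Internal fragmentation = space reserved minus space requested,
    // when requests are rounded up to a multiple of the allocation unit.
    static size_t internal_frag(size_t request, size_t unit)
    {
        size_t reserved = ((request + unit - 1) / unit) * unit; // round up
        return reserved - request;
    }

    int main(void)
    {
        printf("%zu\n", internal_frag(100, 4096)); // 3996 bytes wasted with page-sized units
        printf("%zu\n", internal_frag(100, 16));   // 12 bytes wasted with 16-byte units
        return 0;
    }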
So the question is: how do we deal with external fragmentation? So is it the case that (assuming the worst), sooner or later, with or without paging, a long-running application that allocates and releases heap memory of different sizes will fall into a low-memory condition because of external fragmentation?
If I understand you correctly, you're asking if fragmentation is inevitable. Except under very special conditions (e.g. the application only needs blocks of one size), the answer is yes. But that doesn't mean it's necessarily a problem.
Memory allocators use smart algorithms that limit fragmentation pretty effectively. For example, they may maintain "pools" with different block sizes, using the pool with block size most closely matching a given request. This tends to limit both internal and external fragmentation. A real world example that's very well documented is dlmalloc. The source code is also very clear.
Of course any general purpose allocator can fail under specific conditions. For this reason, modern languages (C++ and Ada are two I know) let you supply special-purpose allocators for objects of a given type. Typically, for a fixed-size object, these might simply maintain a pre-allocated free list, so fragmentation for that particular case is zero, and allocation/deallocation are very fast.
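A minimal sketch of such a fixed-size pool (the class name and interface are illustrative, not any particular language's allocator API, and error handling is omitted):

    #include <cstddef>
    #include <cstdlib>

    // Fixed-size pool: one big upfront allocation, then a free list of slots.
    // Allocation and deallocation are O(1), and the pool itself never suffers
    // external fragmentation because every slot is interchangeable.
    class FixedPool {
    public:
        FixedPool(std::size_t slotSize, std::size_t slotCount)
            : slotSize_(slotSize < sizeof(void*) ? sizeof(void*) : slotSize),
              storage_(static_cast<char*>(std::malloc(slotSize_ * slotCount))),
              freeList_(nullptr)
        {
            for (std::size_t i = 0; i < slotCount; ++i)  // thread every slot onto the free list
                release(storage_ + i * slotSize_);
        }
        ~FixedPool() { std::free(storage_); }

        void* acquire() {                 // pop one slot, or null if the pool is exhausted
            if (!freeList_) return nullptr;
            void* slot = freeList_;
            freeList_ = *static_cast<void**>(freeList_);
            return slot;
        }

        void release(void* slot) {        // push the slot back onto the free list
            *static_cast<void**>(slot) = freeList_;
            freeList_ = slot;
        }

    private:
        std::size_t slotSize_;
        char*       storage_;
        void*       freeList_;
    };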
One more note: It's possible to totally eliminate fragmentation with copying/compacting garbage collection. Of course this requires underlying language support, and there's a performance bill to pay. A copying garbage collector compacts the heap by moving objects to eliminate unused space completely whenever it runs to reclaim storage. To do this it must update every pointer in the running program to the corresponding object's new location. While this may sound complex, I've implemented a copying garbage collector, and it's not so bad. The algorithms are extremely cool. Unfortunately, the semantics of many languages (e.g. C and C++) don't allow finding every pointer in the running program, which is required.
The most radical solution is to forbid use of the heap altogether, but is that really necessary on platforms with paging, virtual address spaces, virtual memory, etc., when the only issue is that the application needs to run non-stop for years?
Though general purpose allocators are good, they're not guaranteed. It's not unusual for safety-critical or hard real time constrained systems to avoid heap use completely. On the other hand, when no absolute guarantee is needed, a general purpose allocator is often fine. There are many systems that run perfectly with tough loads for extended periods using general purpose allocators: fragmentation reaches an acceptable steady state and doesn't cause a problem.
One more issue: is internal fragmentation an ambiguous term?
The term isn't ambiguous, but is used in different contexts. The invariant is that it's referring to unused memory inside allocated blocks.
OS literature tends to assume the allocation unit is pages. For example, Linux sbrk lets you request the end of the data segment be set anywhere, but Linux allocates pages, not bytes, so the unused part of the last page is internal fragmentation from the OS's point of view.
Application-oriented discussions tend to assume allocation is in "blocks" or "chunks" of arbitrary size. dlmalloc uses about 128 discrete chunk sizes, each maintained in its own free list. Plus, it will custom allocate very large blocks using OS memory mapping system calls, so there's at most a page size (minus 1 byte) of mismatch between request and actual allocation. Clearly it's going to a lot of trouble to minimize internal fragmentation. The fragmentation caused by a given allocation is the difference between the request and the chunk actually allocated. Since there are so many chunk sizes, that difference is strictly limited. On the other hand, the many chunk sizes increase the chances of external fragmentation problems: free memory may consist entirely of chunks that are well-managed by dlmalloc, yet too small to honor an application requirement.

Virtual memory- don't fully-understand why we need it beyond security reasons?

In several books and on websites, a reason given for virtual memory management is that it allows only part of a program to be loaded into RAM and therefore more efficient use of RAM is made.
1) Why do we need virtual memory management to only load part of a program? Why could we not load part of a program using physical addresses?
2) Beyond the security reasons for separating the different parts (stack, heap etc.) of a process's memory into various physical locations, I really don't see what other benefits there are to virtual memory?
3) Why is it important that the process thinks the addresses are contiguous (courtesy of virtual addresses) when in reality they are non-contiguous?
EDIT: I know the obvious reason that virtual memory allows more memory to be treated as if it were RAM.
There are a number of advantages to using virtual memory over strictly physical memory, some of which you've already listed. Basically it allows your programs to just use memory without having to worry about where it comes from or what else might be competing for it. It makes memory appear to be flat and contiguous, even if it's spread out across various sections of physical memory and to disk.
1) Why do we need virtual memory management to only load part of a program? Why could we not load part of a program using physical addresses?
You can try that with purely physical addresses, but what if there's not a large enough single block available? With virtual addresses you can bridge sections of physical RAM and make them appear as one large block. You can also move things around in memory without interrupting processes that would be surprised to have that happen.
2) Beyond the security reasons for separating the different parts (stack, heap etc.) of a process's memory into various physical locations, I really don't see what other benefits there are to virtual memory?
It also helps keep memory from getting overly fragmented. Makes it easier to segregate memory in use by one process from memory in use by another.
3) Why is it important that the process thinks the addresses are contiguous (courtesy of virtual addresses) when in reality they are non-contiguous?
Try iterating over an array that's split between two discontinuous sections of memory and then come ask that again. Or allocating a buffer for some serial communications, or any number of times that your software expects a single chunk of memory.
1) Why do we need virtual memory management to only load part of a program? Why could we not load part of a program using physical addresses?
Some of us are old enough to remember 32-bit systems with 8MB of memory. Even compressing a small image file would exceed the physical memory of the system.
It's likely that the paging aspect of virtual memory will vanish in the future as system memory and storage merge.
2) Beyond the security reasons for separating the different parts (stack, heap etc.) of a process's memory into various physical locations, I really don't see what other benefits there are to virtual memory?
See #1. The amount of memory required by a program may exceed the physical memory available.
That said, the main security reasons are to separate the various processes and the system address space. Any separation of stack, heap, code, are usually for convenience and error detection.
Advantages then include:
Process memory in excess of physical memory
Separation of processes
Manages access to the kernel (and in some systems other modes as well)
Prevents executing non-executable pages.
Prevents writing to read only pages (code, data).
Ease of programming
3) Why is it important that the process thinks the addresses are contiguous (courtesy of virtual addresses) when in reality they are non-contiguous?
I presume you are referring to virtual addresses. This is simply a matter of convenience. It would make no sense to make them non-contiguous.
1) Why do we need virtual memory management to only load part of a program? Why could we not load part of a program using physical addresses?
Well, you are obviously aware that program sizes can range from a few KB to several GB or even more. But since we have a limit on main memory, a.k.a. RAM (largely because of cost), a program bigger than RAM can't be loaded as a whole at once. To get around this, computer scientists developed virtual memory. It helps to achieve:
a) an address space backed partly by disk (not all of it, but enough to accommodate the parts of running programs). If a running program's size exceeds the size of RAM, the program is effectively cut into pieces: only the part that fits into memory and is currently needed is brought in, and subsequent parts are brought in as their addresses or instructions are referenced.
b) less pressure on physical memory, thereby enabling other programs in main memory to keep running. There are several more reasons as well!
2) Beyond the security reasons for separating the different parts (stack, heap etc.) of a process's memory into various physical locations, I really don't see what other benefits there are to virtual memory?
Separating the stack, heap, etc. reflects the different kinds of data a running program keeps. They are different data structures with different roles, possibly holding values from different programs, or, for the same program, addresses of distinct instruction sequences. For instance, the stack stores a recursive call's return address (the calling address), whereas the heap holds dynamically allocated data used by the currently executing code.
Also, it is not virtual memory itself that imposes this layout; it is how the process's memory is actually arranged in main memory. The heap itself has several regions that serve entirely different functions. And, as already mentioned, virtual memory brings other benefits: it helps run several programs simultaneously, improves the use of caching, and supports addressing via paging, segmentation, etc.
3) Why is it important that the process thinks the addresses are contiguous (courtesy of virtual addresses) when in reality they are non-contiguous?
Would the world be better if counting went 1, 2, 3, 4, 5, the pattern we are familiar with, or 1, 5, 2, 4, 3, even if we knew the underlying pattern and deliberately made it irregular? At least I would choose the regular pattern for any task. It is much the same with physical (main) memory: physical addresses are fetched in a scattered, non-contiguous manner.
But we have a mechanism, virtual memory, that maps those scattered physical locations onto a regular, contiguous-looking address space. Using paging and segmentation, virtual memory does the same work but makes it easier for us to reason about: thanks to relative indexing within a page and to segment base addresses, the address space appears contiguous even though the actual physical address is always determined by the paging scheme or the segment's starting address. So virtual memory makes it look as if we are working with contiguous memory locations. Isn't that better?

CUDA: When to use shared memory and when to rely on L1 caching?

Since Compute Capability 2.0 (Fermi) was released, I've wondered whether there are any use cases left for shared memory. That is, when is it better to use shared memory than to just let L1 perform its magic in the background?
Is shared memory simply there to let algorithms designed for CC < 2.0 run efficiently without modifications?
To collaborate via shared memory, threads in a block write to shared memory and synchronize with __syncthreads(). Why not simply write to global memory (through L1) and synchronize with __threadfence_block()? The latter option should be easier to implement, since it doesn't require keeping values in two different locations, and it should be faster because there is no explicit copying from global to shared memory. Since the data gets cached in L1, threads don't have to wait for data to actually make it all the way out to global memory.
With shared memory, one is guaranteed that a value that was put there remains there throughout the duration of the block. This is as opposed to values in L1, which get evicted if they are not used often enough. Are there any cases where it's better to cache such rarely used data in shared memory than to let the L1 manage it based on the usage pattern that the algorithm actually has?
Two big reasons why automatic caching is less efficient than manual scratchpad memory (this applies to CPUs as well):
parallel accesses to random addresses are more efficient. Example: histogramming. Let's say you want to increment N bins, each more than 256 bytes apart. Then due to coalescing rules, that will result in N serialized reads/writes, since global and cache memory are organized in large ~256-byte blocks. Shared memory doesn't have that problem (a sketch follows at the end of this answer).
Also, to access global memory, you have to do virtual-to-physical address translation. Having a TLB that can do lots of translations in parallel would be quite expensive. I haven't seen any SIMD architecture that actually does vector loads/stores in parallel, and I believe this is the reason why.
avoiding write-back of dead values to memory, which wastes bandwidth and power. Example: in an image processing pipeline, you don't want your intermediate images to get flushed to memory.
Also, according to an NVIDIA employee, current L1 caches are write-through (immediately writes to L2 cache), which will slow down your program.
So basically, the caches get in the way if you really want performance.
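As a rough sketch of the histogram case mentioned in the first point above (a simplified per-block privatized histogram; NUM_BINS and the kernel name are assumptions for the example):

    #include <cuda_runtime.h>

    #define NUM_BINS 256

    // Each block builds a private histogram in shared memory, then merges it
    // into the global histogram once. The scattered, conflicting increments hit
    // fast shared memory instead of going through the global-memory path.
    __global__ void histogramShared(const unsigned char* data, int n, unsigned int* bins)
    {
        __shared__ unsigned int localBins[NUM_BINS];

        for (int i = threadIdx.x; i < NUM_BINS; i += blockDim.x)  // zero the per-block bins
            localBins[i] = 0;
        __syncthreads();

        // Grid-stride loop: scatter increments into shared memory.
        for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
             i += gridDim.x * blockDim.x)
            atomicAdd(&localBins[data[i]], 1u);
        __syncthreads();

        // One round of global atomics per block instead of one per element.
        for (int i = threadIdx.x; i < NUM_BINS; i += blockDim.x)
            atomicAdd(&bins[i], localBins[i]);
    }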
As far as I know, the L1 cache in a GPU behaves much like the cache in a CPU, so your comment that "This is as opposed to values in L1, which get evicted if they are not used often enough" doesn't make much sense to me.
Data in the L1 cache isn't evicted when it isn't used often enough. It is usually evicted when a request is made for a memory region that wasn't previously in the cache and whose address resolves to a cache location that is already in use. I don't know the exact caching algorithm employed by NVIDIA, but assuming a regular n-way set-associative cache, each memory entry can only be cached in a small subset of the entire cache, based on its address.
I suppose this may also answer your question. With shared memory, you get full control as to what gets stored where, while with cache, everything is done automatically. Even though the compiler and the GPU can still be very clever in optimizing memory accesses, you can sometimes still find a better way, since you're the one who knows what input will be given, and what threads will do what (to a certain extent of course)
Caching data through several memory layers always needs to follow a cache-coherency protocol. There are several such protocols, and deciding which one is the most suitable is always a trade-off.
You can have a look at some examples:
Related to GPUs
Generally for computing units
I don't want to go into much detail, because it is a huge domain and I am not an expert. What I want to point out is that in a shared-memory system (here "shared" does not refer to the so-called shared memory of GPUs), where many compute units (CUs) need data concurrently, there is a memory protocol that attempts to keep the data close to the units so that they can fetch it as fast as possible. In the example of a GPU, when many threads in the same SM (streaming multiprocessor) access the same data, there should be coherency in the sense that if thread 1 reads a chunk of bytes from global memory and in the next cycle thread 2 is going to access that data, then an efficient implementation is one in which thread 2 knows that the data is already in the L1 cache and can access it quickly. This is what the cache coherency protocol attempts to achieve: to keep all compute units up to date about what data exist in the L1 and L2 caches and so on.
However, keeping threads up to date, or in other words keeping them in coherent states, comes at a cost, which is essentially lost cycles.
In CUDA, by declaring memory as shared rather than relying on the L1 cache, you free it from that coherency protocol. So access to that memory (which is physically the same piece of hardware) is direct and does not implicitly invoke the coherency machinery.
I don't know exactly how fast this is, since I haven't run such a benchmark, but the idea is that since you no longer pay for this protocol, access should be faster!
Of course, the shared memory on NVIDIA GPUs is split into banks, and anyone who wants to use it for a performance improvement should look into that first. The reason is bank conflicts, which occur when two threads access the same bank and cause the accesses to be serialized... but that's another topic.
