Restricting virtual address range of a process? - virtual-memory

On Linux x86_64, I have a simple application for which I track all memory accesses with Intel's PIN. The program uses only "a bit" of memory, most of it for dynamically allocated matrices (I've bisected the right value with ulimit). However, the memory accesses span the whole range of the VM address space: low addresses for what I presume are global variables in the code, high addresses for the malloc()ed arrays.
There's a huge gap in the middle, and even the high addresses span from 0x7fea3af99000 to 0x7fff4e1a37f4, which is much more than I would assume my application to use in total.
The post-processing that I need to do on the memory accesses deals very badly with these sparse accesses, so I'm looking for a way to restrict the virtual address range available to the process so that "it just fits", and accesses will show addresses between 0 and some more reasonable value for dynamically allocated memory (somewhere around the 40 MB that I've discovered through ulimit).
Q: Is there an easy way to limit the available address space (and hence implicitly, available memory) to an individual process on Linux, ideally from the command line on a per-process basis?
Further notes:
I can link my application statically.
Even if I limit the memory with ulimit, the process still uses the full VM address range (not entirely unexpected).
I know about /proc/${pid}/maps, but would like to avoid creating wrappers to deal with this, and I'm not sure how I would actually use the data in there.
I've heard about prelink (which may not apply to my static binary, but only to libraries?) and can imagine that there are more intrusive ways to interfere with malloc(), but these solutions are too far outside my expertise to evaluate their usefulness (Limiting the heap area's Virtual address range, https://stackoverflow.com/a/29960208/60462)
If there's no simple command-line solution, then instead of going for an elaborate hack I'll probably just wing it in the post-processing and "normalize" the addresses (e.g., via a few lines of Perl).

ld.so(8) lists LD_PREFER_MAP_32BIT_EXEC as a way to restrict the address space to the lower 2 GiB, which is significantly less than the normal 64-bit address space, but possibly not small enough for your purposes.
It may also be possible to use the prctl(2) PR_SET_MM options to control the addresses of different parts of the program to solve your problem.
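For illustration, a minimal launcher could export that variable before exec'ing the instrumented program (a sketch, not a tested solution; note that the variable is interpreted by the dynamic loader, so it likely won't help a statically linked binary):

/* sketch: set LD_PREFER_MAP_32BIT_EXEC for a child process, then exec it */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s program [args...]\n", argv[0]);
        return 1;
    }
    /* ask ld.so to prefer mappings in the low 2 GiB */
    setenv("LD_PREFER_MAP_32BIT_EXEC", "1", 1);
    execvp(argv[1], &argv[1]);
    perror("execvp");   /* only reached if the exec itself fails */
    return 1;
}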

Related

virtual memory effects and relations between paging and segmentation

This is my first post. I want to ask how virtual memory is related to paging and segmentation. I have been searching the internet for a few days, but still can't manage to put the information into the right order. Here is what I know so far:
We can talk about addresses (we could say they are levels of memory abstraction) in memory:
physical level - the CPU talks to the memory controller ("hey, give me the contents of address 0xFFEABCD"). These addresses are the addresses of cells in RAM, so cell 0xABCD has physical address 0xABCD. The memory controller can only use physical addresses, so if an address is not physical it must be translated to a physical one.
logical level - this is an abstraction over physical addresses. Here, when a process asks for memory (assume the allocation succeeds), it is given an address that has no direct relation to cells in RAM. We could say these addresses come from a different pool (world?) than physical addresses. As I said before, the memory controller only understands physical addresses, so to use logical addresses we need to convert them to physical addresses. There are two ways for the OS to be able to create logical addresses:
paging - physical memory (RAM) is divided into contiguous blocks of memory of equal length (called frames), and logical memory (that other world) is divided into blocks of the same length (called pages). The OS keeps a data structure in RAM called the page table; it is an associative array (map) whose primary purpose is to translate logical-level addresses to physical-level addresses (see the short sketch after this list). A consequence of paging is that the memory a process occupies in RAM (i.e., its frames) need not be contiguous, so there may be holes in between.
segmentation - the program is divided into parts called segments. Segment sizes are not fixed, so different segments may have different sizes. The program is divided into a few segments, and each segment gets its own place in physical RAM, so one segment (call it segmentA) and another (call it segmentB) need not be near each other; segmentA does not have to have segmentB as a neighbour.
internal fragmentation - when memory that belongs to a process isn't used 100%. The OS hands memory to processes in whole pages, typically 4 KB each, so it cannot give out less than 4 KB: if the process wants only 2 bytes, the OS still has to allocate pages whose total size is greater than or equal to the amount requested, and 4 KB - 2 B = 4094 bytes are wasted (the page is associated with our process, so other processes can't use it, yet we only needed 2 bytes).
external fragmentation - when allocated blocks of memory sit near one another with little holes between them. The holes are free, so other programs could use them, but that is unlikely because they are very small; with high probability those holes are wasted. More holes means more wasted memory.
Paging may cause internal fragmentation. Segmentation may cause external fragmentation.
virtual level - addresses used in virtual memory. This is an extension of the logical memory level. Now a program doesn't even need to have all of its allocated pages in RAM to start executing. It can be implemented with the following techniques:
paged segmentation - a method in which segments are divided into pages.
segmented paging - a less used method, but also possible.
Combining them takes the positive aspects of both solutions.
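To make the paging translation above concrete, here is a toy sketch (assuming 4 KiB pages and a flat array standing in for the page table; the numbers are made up for illustration):

/* sketch: splitting a logical address into page number and offset,
   assuming 4 KiB pages and a toy page table that maps pages to frames */
#include <stdio.h>

#define PAGE_SIZE 4096u

/* hypothetical page table: index = page number, value = frame number */
static const unsigned page_table[4] = { 7, 2, 9, 0 };

int main(void)
{
    unsigned logical  = 0x1A2C;                      /* some logical address */
    unsigned page     = logical / PAGE_SIZE;         /* which page */
    unsigned offset   = logical % PAGE_SIZE;         /* where inside the page */
    unsigned frame    = page_table[page];            /* page table lookup */
    unsigned physical = frame * PAGE_SIZE + offset;  /* physical address */

    printf("logical 0x%X -> page %u, offset 0x%X -> physical 0x%X\n",
           logical, page, offset, physical);
    return 0;
}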
What I have read about the pros and cons of virtual memory:
PROS:
processes have their own address space, which means that if we have two processes A and B and both of them have a pointer to, e.g., address 17, process A's pointer will refer to a different frame than process B's pointer. This results in greater process isolation: processes are protected from each other (one process can't touch another process's memory unless it is shared memory, because no such entry exists in its mapping), and the OS is better protected from processes.
you can have more memory than your physical first-level memory (RAM), thanks to swapping to second-level (secondary) memory.
better use of memory due to:
swapping unused parts of programs out to secondary memory.
making page sharing possible, which also makes "copy on write" possible.
improved multiprogramming capability (when unneeded parts of programs are swapped out to secondary memory, they free space in RAM that can be used for new processes).
improved CPU utilisation (if you can have more processes loaded into memory, there is a higher probability that some program currently needs the CPU rather than I/O, so the CPU can be utilised better).
CONS:
virtual memory has its overhead, because we need to access memory twice (but a lot of that can be recovered using the TLB)
it makes the memory-management part of the OS more complicated.
So here we come to the parts I don't really understand:
Why do some sources describe logical addresses and virtual addresses as synonyms? Am I getting something wrong?
Does virtual memory really provide protection for processes? I mean, segmentation, for example, also checked that a process does not access other memory (resulting in a segfault if it does), and paging also has a protection bit in the page table, so doesn't the protection come simply from extending the abstraction of logical-level addresses? If VM (Virtual Memory) brings extended protection features, what are they and how do they work? In other words: does creating a separate address space for each process bring extended memory protection? If so, what can't be achieved with paging alone, without VM?
How exactly does paged segmentation differ from segmented paging? I know the difference will be in how an address is constructed (a page number, a segment number, that stuff...), but I suppose that alone isn't enough to justify two strategies. I read that segmented paging is less flexible, and that's the reason why it is rarely used. But why is it less flexible? Is the reason that in a program you can have only a few segments instead of a lot of pages? If that's the case, paging indeed allows better "granularity".
If VM makes a separate address space for each process, does that mean paging without VM uses logical addresses from "one pool" (is every logical address globally unique in that case)?
Any help on that topic would be appreciated.
Edit: #1
OK. I finally understood that paging which is not on demand is also virtual memory; some clarification was all I needed to understand the topic. Below is a link to an image I made to visualise the differences. Thanks for the help.
differences between paging, demand paging and swapping
Why do some sources describe logical addresses and virtual addresses as synonyms? Am I getting something wrong?
Many sources conflate logical and virtual memory translation. In ye olde days, logical address translation never took place without virtual address translation, so processor documentation treated them as the same thing.
Now we have large memory systems that use logical memory translation without virtual memory.
Does virtual memory really provide protection for processes?
It is the logical memory translation that implements page protections.
How exactly does paged segmentation differ from segmented paging?
You can really ignore segments. No rationally designed processor architecture created after 1970 has used segments, and they are finally dying out.
If VM makes a separate address space for each process, does that mean paging without VM uses logical addresses from "one pool"?
It is logical memory that creates the separate address space for each process. Paging is virtual memory. You cannot have one without the other.

Virtual memory - don't fully understand why we need it beyond security reasons?

In several books and on websites, a reason given for virtual memory management is that it allows only part of a program to be loaded into RAM and therefore makes more efficient use of RAM.
1) Why do we need virtual memory management to only load part of a program? Why could we not load part of a program using physical addresses?
2) Beyond the security reasons for separating the different parts (stack, heap etc) of a process' memory into various physical locations, I really don't see what other benefits there are to virtual memory?
3) Why is it important the process thinks the addresses are continuous (courtesy of virtual addresses) when in reality they are discontinuous?
EDIT: I know the obvious reason that virtual memory allows more memory to be treated as if it were RAM.
There are a number of advantages to using virtual memory over strictly physical memory, some of which you've already listed. Basically it allows your programs to just use memory without having to worry about where it comes from or what else might be competing for it. It makes memory appear to be flat and contiguous, even if it's spread out across various sections of physical memory and to disk.
1) Why do we need virtual memory management to only load part of a program? Why could we not load part of a program using physical addresses?
You can try that with purely physical addresses, but what if there's not a large enough single block available? With virtual addresses you can bridge sections of physical RAM and make them appear as one large block. You can also move things around in memory without interrupting processes that would be surprised to have that happen.
2) Beyond the security reasons for separating the different parts (stack, heap etc) of a process' memory into various physical locations, I really don't see what other benefits there are to virtual memory?
It also helps keep memory from getting overly fragmented, and makes it easier to segregate memory in use by one process from memory in use by another.
3) Why is it important the process thinks the addresses are continuous (courtesy of virtual addresses) when in reality they are discontinuous?
Try iterating over an array that's split between two discontinuous sections of memory and then come ask that again. Or allocating a buffer for some serial communications, or any of the other countless cases where your software expects a single chunk of memory.
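To make that concrete, here is a small sketch (with made-up function names) contrasting code that can assume one contiguous buffer with code that has to stitch two separate chunks together:

/* sketch: summing bytes in one contiguous buffer vs. in two separate chunks */
#include <stddef.h>

long sum_contiguous(const unsigned char *buf, size_t len)
{
    long sum = 0;
    for (size_t i = 0; i < len; i++)      /* one simple loop */
        sum += buf[i];
    return sum;
}

long sum_split(const unsigned char *part1, size_t len1,
               const unsigned char *part2, size_t len2)
{
    long sum = 0;
    for (size_t i = 0; i < len1; i++)     /* every consumer must now know */
        sum += part1[i];                  /* about both pieces and their sizes */
    for (size_t i = 0; i < len2; i++)
        sum += part2[i];
    return sum;
}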
1) Why do we need virtual memory management to only load part of a program? Why could we not load part of a program using physical addresses?
Some of us are old enough to remember 32-bit systems with 8MB of memory. Even compressing a small image file would exceed the physical memory of the system.
It's likely that the paging aspect of virtual memory will vanish in the future as system memory and storage merge.
2) Beyond the security reasons for separating the different parts (stack, heap etc) of a process' memory into various physical locations, I really don't see what other benefits there are to virtual memory?
See #1. The amount of memory required by a program may exceed the physical memory available.
That said, the main security reasons are to separate the various processes and the system address space. Any separation of stack, heap, and code is usually for convenience and error detection.
Advantages then include:
Process memory in excess of physical memory
Separation of processes
Manages access to the kernel (and in some systems other modes as well)
Prevents executing non-executable pages.
Prevents writing to read-only pages (code, data).
Ease of programming
3) Why is it important the process thinks the addresses are continuous (courtesy of virtual addresses) when in reality they are discontinuous?
I presume you are referring to virtual addresses. This is simply a matter of convenience. It would make no sense to make them non-contiguous.
1) Why do we need virtual memory management to only load part of a program? Why could we not load part of a program using physical addresses?
Well, you are obviously aware that program sizes can range from a few KB to several GB or even more. But we have a limit on our main memory, aka RAM (because of cost), so a program bigger than RAM cannot be loaded in its entirety at once. To achieve the desired result, computer scientists developed a method called virtual memory. It helps to achieve:
a) an address space backed by a portion of the hard disk (not all of it), but a portion large enough to easily accommodate the parts of a running program. If the running program's size exceeds the size of RAM, then the program is, in a sense, cut into pieces (not literally), only the relevant part that easily fits into memory is brought in, and subsequent code is brought in address by address, in sequence or as the instructions call for it.
b) less burden on physical memory, thereby letting the other programs in main memory keep running. There are several more reasons as well!
2) Beyond the security reasons for separating the different parts (stack, heap etc) of a process' memory into various physical locations, I really don't see what other benefits there are to virtual memory?
Separation of heap, stack, etc. exists to serve the several kinds of operations running at any time. They are different data structures, and hence they store values with different roles, or at least distinct addresses within the instruction sequence: the stack, say, stores a recursive call's return address (the calling address), whereas the heap holds the program's dynamically allocated data.
Also, it is not virtual memory itself that imposes this storage scheme; it is how things are actually laid out in main memory. There are also several portions of the heap that perform entirely different functions. And I already mentioned the benefits of virtual memory: it helps run several programs simultaneously, improves caching, and provides addressing via paging, segmentation, etc.
3) Why is it important the process thinks the addresses are continuous (courtesy of virtual addresses) when in reality they are discontinuous?
Would the world be better if we counted 1, 2, 3, 4, 5, ... as we are used to, or if counting went 1, 5, 2, 4, 3, ..., deliberately rejecting the familiar pattern and rendering it discontinuous? Well, at least I would have chosen the pattern option for any task. It is similar with physical (main) memory: physical memory works with exact addresses, and it clearly fetches them in a discontinuous, rather jumbled manner.
But we have a mechanism, virtual memory, that presents those actually discontinuous memory locations as a fixed, regular, continuous range. Virtual memory, using paging and segmentation, does the same work but makes it easier for us to understand. Thanks to relative indexing within pages and to segment base addresses, a memory location appears continuous even though the actual address is always determined by the paging scheme or the segment's starting address. Hence virtual memory makes it look as if we are working with a continuous memory range. Isn't that better?

Does virtual address matching matter in shared mem IPC?

I'm implementing IPC between two processes on the same machine (Linux x86_64 shmget and friends), and I'm trying to maximize the throughput of the data between the processes: for example I have restricted the two processes to only run on the same CPU, so as to take advantage of hardware caching.
My question is, does it matter where in the virtual address space each process puts the shared object? For example would it be advantageous to map the object to the same location in both processes? Why or why not?
It doesn't matter as far as the OS is concerned. It would have been advantageous to use the same base address in both processes if the TLB cache weren't flushed between context switches. The Translation Lookaside Buffer (TLB) cache is a small buffer that caches virtual-to-physical address translations for individual pages in order to reduce the number of expensive memory reads from the process page table. Whenever a context switch occurs, the TLB cache is flushed - you don't want processes to be able to read a small portion of the memory of other processes just because their page table entries are still cached in the TLB.
A context switch does not occur between processes running on different cores, but then each core has its own TLB cache, and its content is completely uncorrelated with the content of the TLB cache of the other core. A TLB flush does not occur when switching between threads of the same process, but threads share their whole virtual address space anyway.
It only makes sense to attach the shared memory segment at the same virtual address if you pass around absolute pointers to areas inside it. Imagine, for example, a linked-list structure in shared memory. The usual practice is to use offsets from the beginning of the block instead of absolute pointers, but this is slower as it involves additional pointer arithmetic. That's why you might get better performance with absolute pointers, though finding a suitable place in the virtual address space of both processes might not be an easy task (at least not in a portable way), even on platforms with vast VA spaces like x86-64.
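Here is a small sketch of the offset-based approach (the node layout is made up for illustration); each hop costs one extra addition compared to following a raw pointer, which is the overhead mentioned above:

/* sketch: a linked-list node that stores offsets instead of absolute pointers,
   so the segment may be attached at different addresses in each process */
#include <stddef.h>
#include <stdint.h>

struct node {
    int32_t value;
    int64_t next_off;   /* offset of next node from the segment base, -1 = end */
};

struct node *node_at(void *seg_base, int64_t off)
{
    return off < 0 ? NULL : (struct node *)((char *)seg_base + off);
}

int64_t sum_list(void *seg_base, int64_t head_off)
{
    int64_t sum = 0;
    for (struct node *n = node_at(seg_base, head_off); n != NULL;
         n = node_at(seg_base, n->next_off))
        sum += n->value;
    return sum;
}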
I'm not an expert here, but seeing as there are no other answers I will give it a go. I don't think it will really make a difference, because the virtual address does not necessarily correspond to the physical address. Said another way, the underlying physical address the OS maps your virtual address to is not dependent on the virtual address the OS gives you.
Again, I'm not a memory master. Sorry if I am way off here.

MapViewOfFileEx - valid lpBaseAddress

In answer to a question about mapping non-contiguous blocks of files into contiguous memory, here, it was suggested by one respondent that I should use VirtualAllocEx() with MEM_RESERVE in order to establish a 'safe' value for the final (lpBaseAddress) parameter for MapViewOfFileEx().
Further investigation revealed that this approach causes MapViewOfFileEx() to fail with error 487: "Attempt to access invalid address." The MSDN page says:
"No other memory allocation can take place in the region that is used for mapping, including the use of the VirtualAlloc or VirtualAllocEx function to reserve memory."
While the documentation might be considered ambiguous with respect to valid sequences of calls, experimentation suggests that it is not valid to reserve memory for MapViewOfFileEx() using VirtualAllocEx().
On the web, I've found examples with hard-coded values - example:
#define BASE_MEM (VOID*)0x01000000
...
hMap = MapViewOfFileEx( hFile, FILE_MAP_WRITE, 0, 0, 0, BASE_MEM );
To me, this seems inadequate and unreliable... It is far from clear to me why this address is safe, or how many blocks can safely be mapped there. It seems even more shaky given that I need my solution to work in the context of other allocations... and that I need my source to compile and work in both 32- and 64-bit contexts.
What I'd like to know is if there is any way to reliably reserve a pool of address space in order that - subsequently - it can be reliably used by MapViewOfFileEx to map blocks to explicit memory addresses.
You almost got to the solution by yourself but fell short of the last small step.
As you figured, use VirtualAlloc (with MEM_RESERVE) to find room in your address space, but after that (and before MapViewOfFileEx) use VirtualFree (with MEM_RELEASE). Now the address range will be free again. Then use the same memory address (returned by VirtualAlloc) with MapViewOfFileEx.
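A minimal sketch of that sequence (error handling trimmed; hMapping stands for an existing file-mapping handle from CreateFileMapping). Note the small window between VirtualFree and MapViewOfFileEx in which another thread could grab the range, so be prepared for the mapping call to fail:

/* sketch: reserve a range to find room, release it, then map the view there */
#include <windows.h>

void *map_at_reserved_address(HANDLE hMapping, SIZE_T size)
{
    /* 1. let the OS find a free range of the right size */
    void *addr = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);
    if (addr == NULL)
        return NULL;

    /* 2. release it so the range is free again */
    VirtualFree(addr, 0, MEM_RELEASE);

    /* 3. immediately map the view at that same address; this can still fail
          if another thread allocated something there in the meantime */
    return MapViewOfFileEx(hMapping, FILE_MAP_WRITE, 0, 0, size, addr);
}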
What you are trying to do is impossible.
From the MapViewOfFileEx docs, the pointer you supply is "A pointer to the memory address in the calling process address space where mapping begins. This must be a multiple of the system's memory allocation granularity, or the function fails."
The memory allocation granularity is 64K, so you cannot map disparate 4K pages from the file into adjacent 4K pages in virtual memory.
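Rather than hard-coding 64K, you can query the granularity at run time and round offsets or addresses accordingly (a small sketch):

/* sketch: query the allocation granularity instead of assuming 64K, and
   round a desired mapping offset down to a valid boundary */
#include <windows.h>

DWORD allocation_granularity(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    return si.dwAllocationGranularity;   /* typically 65536 */
}

ULONGLONG round_down_to_granularity(ULONGLONG offset)
{
    DWORD gran = allocation_granularity();
    return offset - (offset % gran);     /* nearest valid boundary below */
}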
If you provide a base address, the function will try to map your file at that address. If it cannot use that base address (because something is already using all or part of the requested memory region), then the call will fail.
For most applications, there's no real point trying to fix the address yourself. If you're a sophisticated database process and you're trying to carefully manage your own memory layout on a machine with a known configuration for efficiency reasons, then it might be reasonable. But you'd have to be prepared for failure.
In 64-bit processes, the virtual address space is pretty wide open, so it might be possible to select a base address with some certainty, but I don't think I'd bother.
From MSDN:
While it is possible to specify an address that is safe now (not used by the operating system), there is no guarantee that the address will remain safe over time. Therefore, it is better to let the operating system choose the address.
I believe "over time" refers to future versions of the OS and whatever run-time libraries you're using (e.g., for memory allocation), which might take a different approach to memory layout.
Also:
If the lpBaseAddress parameter specifies a base offset, the function succeeds if the specified memory region is not already in use by the calling process. The system does not ensure that the same memory region is available for the memory mapped file in other 32-bit processes.
So basically, your instinct is right: specifying a base address is not reliable. You can try, but you must be prepared for failure.
So to directly answer your question:
What I'd like to know is if there is any way to reliably reserve a pool of address space in order that - subsequently - it can be reliably used by MapViewOfFileEx to map blocks to explicit memory addresses.
No, there isn't. Not without applying many constraints on the runtime environment (e.g., limiting to a specific version of the OS, setting base addresses for all of your DLLs, disallowing DLL injection, etc.).

Getting the lowest free virtual memory address in windows

The title pretty much says it all: is there a way to get the lowest free virtual memory address under Windows? I should add that I am interested in this information at the beginning of the program (before any dynamic memory allocation has been done).
Why I need it: I'm trying to build a malloc implementation under Windows. If it is not possible, I would have to rely on whatever VirtualAlloc() returns when given NULL as the first parameter. While you would expect it to do something sensible, like allocating memory at the bottom of what is available, there are no guarantees.
You can implement this yourself by using VirtualQuery to look for regions that are marked as free. It would be relatively slow, though. (You will also need to consider the allocation granularity, which is different from the page size.)
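A scan of the address space might look roughly like this (a sketch; it returns the first, i.e. lowest, free region of at least the requested size, and the caller still has to respect the allocation granularity when allocating there):

/* sketch: walk the address space with VirtualQuery and return the base of the
   lowest free region that is at least 'size' bytes long */
#include <windows.h>

void *lowest_free_region(SIZE_T size)
{
    SYSTEM_INFO si;
    MEMORY_BASIC_INFORMATION mbi;
    unsigned char *addr;

    GetSystemInfo(&si);
    addr = (unsigned char *)si.lpMinimumApplicationAddress;

    while (addr < (unsigned char *)si.lpMaximumApplicationAddress &&
           VirtualQuery(addr, &mbi, sizeof(mbi)) == sizeof(mbi)) {
        if (mbi.State == MEM_FREE && mbi.RegionSize >= size)
            return mbi.BaseAddress;                      /* first (lowest) fit */
        addr = (unsigned char *)mbi.BaseAddress + mbi.RegionSize;
    }
    return NULL;                                         /* nothing big enough */
}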
I will say that unless you need contiguous blocks of memory, trying to keep everything close together is mostly meaningless, since even if two pages of virtual memory are next to each other in the address space, there is no reason to assume they are close to each other in physical memory. In fact, even if they are close to each other at some point in time, if those pages get moved to backing store and then faulted back into memory, a page will not necessarily be faulted back to the same physical page.
The OS uses more complicated criteria than just the "lowest" available memory address. Specifically, VirtualAlloc allocates pages of memory, so depending on how much you're asking for, at least one page's worth of unused address space has to be available at the starting address. So even if you think there's a "lower" address it should have used, that address might not have been compatible with the operation you asked for.
