Windows stack and heap address ranges - winapi

Having worked with Linux until now, where stack addresses are very high and heap addresses are fairly low (as seen by printing heap and stack addresses from a C program), I'm confused by the Win32 process memory layout. MSDN says that stack addresses are higher than heap addresses, but from what I have seen in practice, stack addresses are lower than heap addresses. So I am confused. Can someone please explain?

Hm, "stack addresses are higher than heap addresses" - this is simply not true. Both the stack and the heap can reside anywhere in the address space of a Windows process.
If you start a lot of threads, make huge heap allocations and load hundreds of DLLs, you will find that all these objects are spread fairly evenly around the address space.
A map of the virtual allocations in a typical 32-bit Windows process shows free areas (green) interleaved with allocated ones (blue): activity is concentrated at the beginning of the address space, but it is present in other address ranges as well.
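A minimal C sketch of the kind of probe the question describes - the variable names are illustrative only, and the relative ordering of stack and heap addresses it prints will vary from run to run and from system to system:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int on_stack;                      /* lives on the current thread's stack */
    void *on_heap = malloc(64);        /* lives somewhere in the process heap */

    printf("stack variable at %p\n", (void *)&on_stack);
    printf("heap block at     %p\n", on_heap);

    free(on_heap);
    return 0;
}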

Related

How come the allocation of virtual address spaces doesn't rob you of all virtual memory?

On a 32-bit computer, a virtual memory address is represented as an integer between 0 and 2^32 - 1. By virtue of being a 32-bit system, no address lower than 0 or higher than 2^32 - 1 can be represented, and we therefore have a total of 4 GiB (2^32 bytes) of virtual memory to use. We also know that address spaces are memory protected; they cannot overlap, because otherwise one process would be able to "step on the toes" of another. So, if all that I've said is correct, let me now ask this: if we grant that, by Microsoft's own documentation, 2 GiB of virtual address space are used to operate the system and 2 GiB of virtual address space are provided to a single user-mode process, have we not exhausted every possible virtual memory address on a 32-bit system? Would this not mean that we have to resort to disk-swapping just to run 2 processes? Surely, this is too ridiculous to be true, and I just want someone to clarify where my thinking has gone astray...
I have looked at the following questions but none of them seem to give satisfying/consistent/not hand-wavy answers. Or maybe I just don't understand them:
What is the maximum addressable space of virtual memory? - Stack Overflow
Virtual address space in windows - Stack Overflow
What happens when the number of possible virtual addresses are exceeded - Stack Overflow
Thanks! :)
TL;DR: Virtual memory is per-process and the address space changes when the OS switches execution from one process to another.
Remember that we are talking about virtual memory here and virtual memory is a trick that works because of the cooperation between the OS and the CPU hardware.
The split is not always at 2 GB (there is a boot switch for 3 GB, etc.), but for this discussion let's assume it is always 2 GB and that we are on an x86 machine.
When the CPU needs to access an address in virtual memory, it needs to translate the memory page from virtual to physical. The exact mechanics of how this works is too big a topic to cover here, but suffice it to say that the translation involves a page directory that holds information about present/swapped, modified, copy-on-write and so on, plus a way to map the address to physical RAM (and if the page is not present, the CPU will ask the OS to swap it in from the page file).
The upper 2 GB is where the kernel and drivers live, and that mapping is the same in all processes (but it can only be accessed in kernel mode, CPU ring 0). The lower 2 GB, however, is per-process: each process has its own set of mappings. When the OS switches execution from one process to another (a context switch), the page directory for the CPU the thread is about to run on is changed. This way each process has its own virtual address space.
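A small C sketch that makes the per-process point concrete, assuming it is built as an ordinary user-mode program and launched twice at the same time. On Windows the global usually lands at the same virtual address in both instances (image bases are typically randomised per boot, not per launch), yet each instance sees only its own value, because the same virtual address maps to different physical pages in each process:

#include <stdio.h>
#include <stdlib.h>

int private_value = 0;   /* same virtual address in each instance, but private storage */

int main(int argc, char **argv)
{
    private_value = (argc > 1) ? atoi(argv[1]) : 0;

    /* Run e.g. "a.exe 1" and "a.exe 2" concurrently: the printed address is
       usually identical, the printed value is not. */
    printf("&private_value = %p, value = %d\n",
           (void *)&private_value, private_value);

    getchar();           /* keep the process alive so both instances can be compared */
    return 0;
}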

virtual memory effects and relations between paging and segmentation

This is my first post. I want to ask how virtual memory is related to paging and segmentation. I have been searching the internet for a few days, but still can't manage to put the information into the right order. Here is what I know so far:
We can talk about addresses (we could say they are levels of memory abstraction) in memory:
physical level - the CPU talks to the memory controller: "hey, give me the contents of address 0xFFEABCD". These addresses are addresses of cells in RAM, so cell 0xABCD has physical address 0xABCD. The memory controller can only use physical addresses, so if an address is not physical it must be translated to a physical one.
logical level - this is an abstraction over physical addresses. Here, processes that ask for memory (assume successful allocation) are given an address which has no direct relation to cells in RAM. We can say these addresses come from a different pool (world?) than physical addresses. As said before, the memory controller only understands physical addresses, so to use logical addresses we need to convert them to physical addresses. There are two ways for the OS to create logical addresses:
paging - physical memory (RAM) is divided into contiguous blocks of memory (called frames), and logical memory (this other world) is divided into blocks of the same length (called pages). The OS keeps a data structure in RAM called the page table. It is an associative array (map) whose primary purpose is to translate logical-level addresses to physical-level addresses. Paging has the following effect: the memory a process has allocated in RAM (i.e. the frames belonging to the program) need not be contiguous (there may be holes in between).
segmentation - the program is divided into parts called segments. Segment sizes are not fixed, so different segments may have different sizes. The program is divided into a few segments and each segment gets its own place in (physical) RAM, so one segment (call it segmentA) and another (call it segmentB) need not be near each other. In other words, segmentA does not have to have segmentB as a neighbour.
internal fragmentation - when memory that belongs to a process isn't 100% used. If a process wants 2 bytes for its own use, the OS needs to allocate a page (or pages) whose total size is greater than or equal to the amount requested. A typical page size is 4 KB, and pages are the unit in which the OS hands memory to a process, so it can't give less than 4 KB. If we use 2 bytes, 4 KB - 2 B = 4094 bytes are wasted (the memory is associated with our process, so other processes can't use it; only we can, but we only need 2 B). A small sketch of this arithmetic follows after this list.
external fragmentation - when allocated blocks of memory sit next to one another but there are small holes between them. The holes are free, so other programs could use them, but that is unlikely because they are very small. Those holes will most probably be wasted; more holes mean more wasted memory.
Paging may cause internal fragmentation; segmentation may cause external fragmentation.
virtual level - the addresses used in virtual memory. This is an extension of the logical memory level. Now a program doesn't even need to have all of its allocated pages in RAM to start execution. It can be implemented with the following techniques:
paged segmentation - a method in which segments are divided into pages.
segmented paging - a less used method, but also possible.
Combining them takes the positive aspects of both solutions.
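The sketch promised above: a tiny C program working through the internal-fragmentation arithmetic. The 4 KB page size and the 2-byte request are simply the example values from the text.

#include <stdio.h>

int main(void)
{
    const unsigned page_size = 4096;   /* typical page size, 4 KB */
    const unsigned requested = 2;      /* the 2-byte request from the example */

    /* round the request up to whole pages */
    unsigned pages  = (requested + page_size - 1) / page_size;
    unsigned wasted = pages * page_size - requested;

    printf("%u page(s) allocated, %u bytes wasted to internal fragmentation\n",
           pages, wasted);             /* prints: 1 page(s), 4094 bytes wasted */
    return 0;
}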
What I have read about the pros and cons of virtual memory:
PROS:
processes have their own address space, which means that if we have two processes A and B and both of them hold a pointer to, say, address 17, the pointer in process A will point to a different frame than the pointer in process B. This results in greater process isolation: processes are protected from each other (one process can't touch another process's memory, unless it is shared memory, because no such entry exists in its mapping), and the OS is better protected from processes.
you can have more memory than your physical first-level memory (RAM), thanks to swapping to secondary storage.
better use of memory due to:
swapping unused parts of programs out to secondary memory.
making page sharing possible, which in turn makes "copy on write" possible.
improved multiprogramming capability (when unneeded parts of programs are swapped out to secondary memory, they free up space in RAM which can be used for new processes).
improved CPU utilisation (if you can have more processes loaded into memory, there is a higher probability that some program currently needs the CPU rather than I/O; in such cases you can utilise the CPU better).
CONS:
virtual memory has its overhead, because we need to access memory twice (though a lot of this can be recovered by using TLBs).
it makes the memory-management part of the OS more complicated.
So here we come to the parts which I don't really understand:
Why do some sources describe logical addresses and virtual addresses as synonyms? Am I getting something wrong?
Does virtual memory really provide protection for processes? I mean, segmentation for example also checks whether a process accesses memory that isn't its own (resulting in a segfault if it does), and paging also has a protection bit in the page table, so doesn't the protection simply come from extending the abstraction of logical-level addresses? If VM (virtual memory) brings extra protection features, what are they and how do they work? In other words: does creating a separate address space for each process bring extra memory protection, and if so, what can't be achieved by paging without VM?
How does paged segmentation really differ from segmented paging? I know that the difference is in how an address is constructed (a page number, a segment number, that stuff...), but I suppose that alone isn't enough to justify two strategies. I read that segmented paging is less flexible, and that's why it is rarely used. But why is it less flexible? Is the reason that a program can have only a few segments instead of a lot of pages? If that's the case, paging indeed allows better "granularity".
If VM creates a separate address space for each process, does that mean paging without VM uses logical addresses from "one pool" (is every logical address globally unique in that case)?
Any help on that topic would be appreciated.
Edit: #1
OK, I finally understood that paging which is not on-demand is also virtual memory. Some clarification helped me understand the topic. Below is a link to an image I made to visualise the differences. Thanks for the help.
differences between paging, demand paging and swapping
Why do some sources describe logical addresses and virtual addresses as synonyms? Am I getting something wrong?
Many sources conflate logical and virtual memory translation. In ye olde days, logical address translation never took place without virtual address translation, so processor documentation referred to them as the same thing.
Now we have large memory systems that use logical memory translation without virtual memory.
Does virtual memory really provide protection for processes?
It is the logical memory translation that implements page protections.
How does paged segmentation really differ from segmented paging?
You can really ignore segments. No rationally designed processor architecture created after 1970 uses segments, and they are finally dying out.
If VM creates a separate address space for each process, does that mean paging without VM uses logical addresses from "one pool"?
It is logical memory that creates the separate address space for each process. Paging is virtual memory; you cannot have one without the other.

Does memory layout of a program depend on address binding technique?

I have learned that with run-time address binding, a program can be allocated frames in physical memory non-contiguously. Also, as described here and here, every segment of the program in the logical address space is contiguous, but not all segments are placed side by side. The text, data, BSS and heap segments are placed together, but the stack segment is not. In other words, there are pages between the heap and the stack segments (between the program break and the top of the stack) in the logical address space that are not mapped to any frames in the physical address space, implying that the logical address space is non-contiguous in the case of run-time address binding.
But what about the memory layout in the case of compile-time or load-time binding? Now that the logical address space is not an abstract address space but the actual physical address space, how is a program laid out in physical memory? More specifically, how is the stack segment placed in the physical address space of a program? Is it placed together with the rest of the segments, or separately, as in the case of run-time binding?
To answer your questions, I first have to explain a bit about stack and heap allocation in modern operating systems.
The stack, as the name suggests, is a contiguous memory allocation, where the CPU uses push and pop instructions to add/remove data at the top of the stack. I assume that you already know how a stack works. A process stores return addresses, function arguments and local variables on the stack. Every time a function is called, more data is pushed (which can ultimately lead to a stack overflow if no data is ever popped - infinite recursion?). The stack size is fixed for a program when it is loaded into memory. Most programming languages let you decide the stack size at compile time; if not, they pick a default. On Linux, the maximum stack size (the hard limit) is governed by ulimit; you can check and set it with ulimit -s.
Heap space, however, has no fixed upper limit on *nix systems (it depends; confirm it using ulimit -v). Every program starts with a default/configured amount of heap and can grow it as needed. Heap space in a process is actually managed as two linked lists, of free and used blocks. Whenever an allocation is requested from the heap, one or more free blocks are combined into a bigger block and moved to the used list as a single block. Freeing means moving a block from the used list back to the free list. After blocks are freed, the heap can suffer external fragmentation. If the free blocks can't hold the requested data, the process asks the OS for more memory; generally the newer blocks are handed out at higher addresses, which is why diagrams show the heap growing upward. To rephrase: the heap does not allocate memory contiguously in the upward direction.
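A short C sketch illustrating the growth pattern described above: successive allocations often land at increasing addresses, but nothing guarantees that they are adjacent. The 4096-byte block size is arbitrary.

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

int main(void)
{
    void *prev = NULL;

    /* Allocate a handful of blocks and watch where the allocator puts them. */
    for (int i = 0; i < 8; i++) {
        void *p = malloc(4096);
        if (p == NULL)
            break;
        printf("block %d at %p%s\n", i, p,
               (prev && (uintptr_t)p > (uintptr_t)prev) ? "  (higher than previous)" : "");
        prev = p;
    }
    return 0;   /* blocks intentionally leaked; the process is about to exit */
}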
Now to answer your questions.
With compile-time or load-time address binding, how are the stack and the heap segments placed in the physical address space of a program?
A fixed stack is allocated at compile time, together with some heap memory. How they are placed has been explained above.
Is the space between the heap and the stack reserved for the program, or is it available for the OS to use for other programs?
Yes, it is reserved for the program. The process can, however, request more memory to add free blocks to its heap; that is different from sharing its own heap.
Note: there are lots of topics that could be covered here, as the question is broad; some of them are garbage collection, block selection, shared memory, etc. References are listed below.
References:
Memory Management in JVM
Stack vs Heap
Heap memory allocation strategies

Is contiguous memory easier to get in a 64-bit address space? If so why?

A comment in this blog states:
We know how to make chunked heaps, but there would be some overhead to using them. We have more requests for faster storage management than we do for larger heaps in the 32-bit JVM. If you really want large heaps, switch to the 64-bit JVM. We still need contiguous memory, but it's much easier to get in a 64-bit address space.
The implication of the above statement is that it is easier to get contiguous memory in a 64-bit address space. Is this true? If so, why?
That's very true. A process must allocate memory from its virtual address space, which holds both code and data and whose size is restricted by the addressing capability of the architecture. You can never address more than 2^32 bytes in a 32-bit process, not counting bank-switching tricks; that's 4 gigabytes. The operating system typically takes a big chunk out of that as well; on 32-bit Windows, for example, that cuts the addressable VM size down to 2 gigabytes.
Ideally, allocations are made so that they fit snugly together. That very rarely works out in practice. Shared libraries or DLLs in particular need to pick a preferred load address and that has to be guessed up front when the library is built.
So in practice, allocations are made from the holes in between existing ones, and the largest possible contiguous allocation you can get is restricted by the size of the largest hole. It is usually much smaller than the addressable VM size; on Windows it is typically around 650 megabytes. It tends to go downhill from there as the available address space gets fragmented by allocations, particularly by native code that can't afford to have allocations moved by a compacting garbage collector. If you use Windows, you can get insight into the VM allocations with SysInternals' VMMap utility.
This problem completely disappears in a 64-bit process. The theoretical addressable virtual memory size is 2^64, an enormous number, so large that current processors don't implement it; they can go up to 2^48. It is further restricted by the operating system version you have and its willingness to keep page-mapping tables for that much VM; eight terabytes is a typical limit. By implication, the holes between allocations are huge. Your program will keel over from paging-file thrashing before it dies from OOM.
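A hedged C sketch of how one might measure the largest free hole in the current process on Windows, along the lines of what VMMap reports. It walks the address space with VirtualQuery and is only an illustration, not a substitute for the real tool:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORY_BASIC_INFORMATION mbi;
    unsigned char *addr = NULL;
    SIZE_T largest_free = 0;

    /* Walk the whole user-mode address space region by region. */
    while (VirtualQuery(addr, &mbi, sizeof(mbi)) == sizeof(mbi)) {
        if (mbi.State == MEM_FREE && mbi.RegionSize > largest_free)
            largest_free = mbi.RegionSize;
        addr = (unsigned char *)mbi.BaseAddress + mbi.RegionSize;
    }

    printf("largest free region: %zu MB\n", largest_free / (1024 * 1024));
    return 0;
}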
I can't speak for how the JVM is implemented, obviously, but from a purely theoretical viewpoint, if you have a significantly larger virtual address space (e.g. 64-bit as compared with 32-bit) it should be significantly easier to find a large block of contiguous memory that is available for allocation (going to extremes: you have no chance of finding a contiguous 4 GB of free memory in a 32-bit address space, but a significant chance of finding that space in a full 64-bit address space).
It should be noted that whatever the virtual address space size, the allocation is still going to be backed by (probably) non-contiguous physical memory pages, particularly if the requested allocation is large; the larger virtual address space just means there are likely to be a lot more contiguous virtual addresses available for use.

Process Explorer: What does the Commit History graph show?

In the Memory graph available in Process Explorer, the top graph shows Commit History. What does this actually indicate at the OS level?
To test whether this is the memory allocated on the heap by a process, I wrote a small program that incrementally malloc-ed 100 MB many times. The Commit History graph increased for a while (up to 1.7 GB of memory allocation) and did not grow after that, despite the program malloc-ing more memory.
So, what is this graph indicating? How can this information be used to understand/analyse the state of Windows?
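For reference, a sketch of the kind of probe the question describes, with illustrative sizes (100 MB chunks). The point at which allocation starts failing depends on whether the process is 32-bit or 64-bit and on how fragmented its address space is:

#include <stdio.h>
#include <stdlib.h>

#define CHUNK (100u * 1024u * 1024u)   /* 100 MB per iteration, as in the question */

int main(void)
{
    size_t total = 0;

    /* Keep allocating until malloc gives up; watch Commit History meanwhile. */
    for (;;) {
        void *p = malloc(CHUNK);
        if (p == NULL) {
            printf("allocation failed after %zu MB\n", total / (1024 * 1024));
            break;
        }
        total += CHUNK;
        printf("allocated so far: %zu MB\n", total / (1024 * 1024));
    }
    getchar();   /* pause so the graph can be inspected before exit */
    return 0;
}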
The Commit level is the amount of anonymous virtual address space allocated to all processes in the system. (It does not include any file-backed virtual address space, e.g., from an mmap'd file.) In process explorer, the 'Commit History' graph shows the size of this value over time.
Because of the way virtual memory is allocated and parceled out (the actual RAM backing a page of virtual address space isn't necessarily allocated until it is first touched), this current 'commit' level represents the worst case (at the moment) of memory that the system may have to come up with. Unlike Linux, Windows will not hand out promises (address space) for RAM it cannot come up with or fake (via the paging file). So, once the commit level reaches the limit for the system (roughly RAM + paging file size), new address space allocations will fail (but new uses of existing virtual address space regions will not fail).
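A small C sketch of the distinction this relies on, using the Win32 VirtualAlloc API: reserving address space does not add to the commit charge, committing it does. The 1 GB size is an arbitrary example and may fail in a 32-bit process; pick something smaller there.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    const SIZE_T size = (SIZE_T)1 << 30;   /* 1 GB, arbitrary example size */

    /* Reserving only claims address space; it does not raise the commit charge. */
    void *reserved = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);
    if (reserved == NULL)
        return 1;
    printf("reserved 1 GB at %p (commit charge unchanged)\n", reserved);

    /* Committing makes the system promise backing store, so the commit charge
       rises, even though no physical page is assigned until the memory is touched. */
    void *committed = VirtualAlloc(reserved, size, MEM_COMMIT, PAGE_READWRITE);
    printf("committed 1 GB at %p (commit charge goes up)\n", committed);

    getchar();   /* pause here and compare the numbers in Process Explorer */
    VirtualFree(reserved, 0, MEM_RELEASE);
    return 0;
}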
Some conclusions about your system that you can draw from this value:
If this value is less than your current RAM (excluding the kernel and system overhead), then your system is very unlikely to swap (use the paging file), since in the worst case everything should fit in memory.
If this value is much larger than physical memory usage, then some program is allocating a lot of virtual address space but isn't yet using it.
Exiting an application should reduce the committed memory usage, as all of its virtual address space will be cleaned up.
Your experiment validated this. I suspect you ran into address space limitations (32-bit processes on Windows are limited to 2 GB... maybe 300 MB disappeared to fragmentation, libraries and text?).

Resources