Memory management and Process - memory-management

Is the heap local to a process? In other words, we have the stack, which is always local to a process and separate for each process. Does the same apply to the heap? Also, if the heap is local, I believe the heap size should change during runtime as we request more and more memory from the OS, so who puts a top limit on how much memory can be requested?

Heaps are indeed local to the process. Limits are placed by the operating system. Memory can also be limited by the number of bits used for addressing (e.g. 32 bits can only address 4 GB of memory at a time).

Yes, on modern operating systems there exists a separate heap for each process. By the way, there is not just a separate stack for every process; there is a separate stack for every thread in the process. Thus a process can have quite a number of independent stacks.
But not all operating systems and not all hardware platforms offer this feature. You need a memory management unit (in hardware) for that to work. But desktop computers have had that feature since... well... a while back... the 386 CPU? (leave a comment if you know better). You may, though, find yourself on some kind of microprocessor that does not have that feature.
Anyway: the heap size is mainly limited by the operating system and the hardware. The hardware limits it chiefly through the amount of address space it allows. For example, a 32-bit CPU will not address more than 4 GB (2^32 bytes). A CPU that features Physical Address Extension (PAE), which current CPUs do support, can address up to 64 GB of physical memory, but that works at the paging level, and a single process cannot make use of it: each process still sees at most 4 GB of virtual address space.
Additionally, the operating system can limit the memory as it sees fit. On Linux you can see and set limits using the ulimit command. If you are running code not natively but, for example, in an interpreter or virtual machine (such as Java or PHP), then that environment can additionally limit the heap size.

The heap is local to a process, but it is shared among that process's threads, while the stack is not: it is per-thread.
Regarding the limit: on Linux, for example, it is set by ulimit (see the man page).
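For illustration, here is a minimal C sketch (POSIX, Linux assumed) that reads the same limits the ulimit command manipulates, via getrlimit: RLIMIT_DATA caps the data segment (the classic heap) and RLIMIT_AS caps the total virtual address space of the process.

```c
/* Minimal sketch (POSIX/Linux): query the per-process limits that
 * the `ulimit` shell command shows and sets. */
#include <stdio.h>
#include <sys/resource.h>

static void print_limit(const char *name, int resource)
{
    struct rlimit rl;
    if (getrlimit(resource, &rl) != 0) {
        perror(name);
        return;
    }
    if (rl.rlim_cur == RLIM_INFINITY)
        printf("%s: unlimited\n", name);
    else
        printf("%s: %llu bytes\n", name, (unsigned long long)rl.rlim_cur);
}

int main(void)
{
    print_limit("RLIMIT_DATA (data segment / heap)", RLIMIT_DATA);
    print_limit("RLIMIT_AS   (total address space)", RLIMIT_AS);
    return 0;
}
```

A process can also lower these limits for itself with setrlimit, which is what the shell does under the hood when you run ulimit.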

On a modern, preemptively multi-tasking OS, each process gets its own address space. The set of memory pages that it can see are separate from the pages that other processes can see. As a result, yes, each process sees its own stack and heap, because the stack and heap are just areas of memory.
On an older, cooperatively multi-tasking OS, each process shared the same address space, so the heap was effectively shared among all processes.
The heap is defined by the collection of things in it, so the heap size only changes as memory is allocated and freed. This is true regardless of how the OS manages memory.
The top limit of how much memory can be requested is determined by the memory manager. In a machine without virtual memory, the top limit is simply how much memory is installed in the computer. With virtual memory, the top limit is defined by physical memory plus the size of the swap file on the disk.
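To make that last point concrete, here is a small Linux-specific sketch (an assumption on my part; other systems expose this differently) that prints physical RAM plus swap, i.e. the rough upper bound described above.

```c
/* Linux-only sketch: with virtual memory, the rough ceiling on what can
 * be committed is physical RAM plus swap. */
#include <stdio.h>
#include <sys/sysinfo.h>

int main(void)
{
    struct sysinfo si;
    if (sysinfo(&si) != 0) {
        perror("sysinfo");
        return 1;
    }
    unsigned long long unit = si.mem_unit ? si.mem_unit : 1;
    unsigned long long ram  = (unsigned long long)si.totalram  * unit;
    unsigned long long swap = (unsigned long long)si.totalswap * unit;

    printf("RAM:                        %llu MiB\n", ram >> 20);
    printf("Swap:                       %llu MiB\n", swap >> 20);
    printf("Rough ceiling (RAM + swap): %llu MiB\n", (ram + swap) >> 20);
    return 0;
}
```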

Related

Maximum memory that can be allocated to a process on Windows 8.1

I'm a fresher and was asked this question in the Microsoft recruitment process.
I'd read somewhere that the maximum memory allocated to a process can be the maximum physical memory available. So is it that if the RAM is 4GB, that's the answer? If yes, then how? Because some part of the RAM is always occupied by the Operating System, right? If no, then could you tell me the answer and what are the factors it really depends on?
First of all, the basis of your question is virtual memory, which has already been pointed out by Chris O!
Now, proceeding to your questions step by step:
I'd read somewhere that the maximum memory allocated to a process can be the maximum physical memory available. So is it that if the RAM is 4GB, that's the answer?
No, the maximum memory your process can use depends on the virtual memory assigned, i.e. also on the swap size. Swap space is commonly sized at about twice the physical memory, though it can always be more or less depending on the requirements!
Also, PAE (Physical Address Extension) allows more memory to be allocated. PAE allows a 32-bit OS to use more RAM, that is, more physical memory. This has nothing whatsoever to do with the 4GB virtual address space limitation that 32-bit OSes have.
A 32-bit OS uses 32-bit virtual addresses. That limits it to 4GB of addressable virtual memory at any one time. If a 32-bit OS also uses 32-bit physical addresses, it is limited to 4GB of physical memory as well. PAE allows a 32-bit OS to use 36-bit physical addresses, which raises the limit to 64GB.
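As a quick illustration of the virtual-address side of this (not of PAE itself), the width of a pointer is what bounds the address space a single process can see, regardless of how much physical RAM the machine has; a trivial C check:

```c
/* Trivial sketch: the pointer width bounds a process's virtual address
 * space (2^32 bytes = 4 GB on a 32-bit build, 2^64 on a 64-bit build). */
#include <stdio.h>

int main(void)
{
    unsigned bits = (unsigned)(sizeof(void *) * 8);
    printf("Pointer width: %u bits -> 2^%u addressable bytes\n", bits, bits);
    return 0;
}
```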
Next, the point you mentioned is only relevant for atomic processes which can't be broken further into threads or so. One would rarely face a situation in which the size of such an atomic process is more than that of the physical memory...
If yes, then how? Because some part of the RAM is always occupied by the Operating System, right?
No, it's not, as I have already mentioned above!
If no, then could you tell me the answer and what are the factors it really depends on?
The memory requirement of a process is not defined beforehand. But you might have seen that many programs state a minimum amount of memory required to run; that is the minimal requirement of the process, without which it won't even run properly, because it needs enough physical memory to handle its work. Next, the term swapping comes into the picture whenever we talk about virtual memory: processes which are currently not running are sent out to disk, and the processes that are to be executed are brought into physical memory for execution. So more than one process can be requested and executed through continuous swapping.
Some other continuous processes which are maintained in main memory are:
System processes or daemons
Cache memory or cache maintenance

Will Windows be still able to allocate memory when free space in physical memory is very low?

On a 32-bit Windows system, an application is being developed using Visual Studio:
Let's say lots of other applications are running on my machine and they have occupied almost all of the physical memory, with only 1 MB left free. If my application (which has not yet allocated any memory) tries to allocate, say, 2 MB, will the call be successful?
My guess: In theory, each Windows application has 2GB of virtual memory available.
So I believe this call should be successful (regardless of how much physical memory is available). But I am not sure about this; that's why I'm asking here.
Windows gives a rock-hard guarantee that this will always work. A process can only allocate virtual memory when Windows can commit space in the paging file for the allocation. If necessary, it will grow the paging file to make the space available. If that fails, for example when the paging file grows beyond the preset limit, then the allocation fails as well. Windows doesn't have the equivalent of the Linux "OOM killer", it does not support over-committing that may require an operating system to start randomly killing processes to find RAM.
Do note that the "always works" clause does have a sting. There is no guarantee on how long this will take. In very extreme circumstances the machine can start thrashing where just about every memory access in the running processes causes a page fault. Code execution slows down to a crawl, you can lose control with the mouse pointer frozen when Explorer or the mouse or video driver start thrashing as well. You are well past the point of shopping for RAM when that happens. Windows applies quotas to processes to prevent them from hogging the machine, but if you have enough processes running then that doesn't necessarily avoid the problem.
Of course. It would be lousy design if memory had to be wasted now in order to be used later. Operating systems constantly re-purpose memory to its most advantageous use at any moment. They don't have to waste memory by keeping it free just so that it can be used later.
This is one of the benefits of virtual memory with a page file. Because the memory is virtual, the system can allocate more virtual memory than physical memory. Virtual memory that cannot fit in physical memory, is pushed out to the page file.
So the fact that your system may be using all of the physical memory does not mean that your program will not be able to allocate memory. In the scenario that you describe, your 2MB memory allocation will succeed. If you then access that memory, the virtual memory will be paged in to physical memory and very likely some other pages (maybe in your process, maybe in another process) will be pushed out to the page file.
Well, it will succeed as long as there's some memory for it - apart from physical memory, there's also the page file.
However, once you reach the limit of both RAM and the page file, you're done for and that's when the out of memory situation really starts being fun.
Now, systems like Windows Vista will try to use all of your available RAM, pretty much for caching. That's a good thing, and when there's a request for memory from an application, the cache will be thrown away as needed.
As for virtual memory, you can request much more than you have available, regardless of your RAM or page file size. Only when you commit the memory does it actually need some backing - either RAM or the page file. On 64-bit, you can easily request terabytes of virtual memory - that doesn't mean you'll get it when you try to commit it, though :P
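The reserve/commit distinction described above can be seen directly with the Win32 VirtualAlloc API; a minimal hedged sketch (the sizes are chosen only for illustration):

```c
/* Win32 sketch: reserving address space costs no RAM or page file;
 * only committing does. Compile as a Windows console program. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SIZE_T size = (SIZE_T)1 << 30; /* reserve 1 GB of address space */

    /* Step 1: reserve only -- no physical memory or page file is charged. */
    void *base = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);
    if (base == NULL) {
        printf("reserve failed: %lu\n", GetLastError());
        return 1;
    }

    /* Step 2: commit a small part -- this is when backing must exist. */
    void *page = VirtualAlloc(base, 4096, MEM_COMMIT, PAGE_READWRITE);
    if (page == NULL) {
        printf("commit failed: %lu\n", GetLastError());
    } else {
        ((char *)page)[0] = 42;  /* touching the page makes it really get backed */
    }

    VirtualFree(base, 0, MEM_RELEASE);
    return 0;
}
```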
If your application is unable to allocate a physical memory (RAM) block to store information, the operating system takes over and 'pages' or stores sections that are in RAM on disk to free up physical memory so that your program is able to perform the allocation. This is done automatically and is completely invisible to your applications.
So, in your example, on a system that has 1MB RAM free, if your application tries to allocate memory, the operating system will page certain contents of physical memory to disk and free up RAM for your application. Your application will not crash in this case.
Obviously, it is much more complicated than that.
There are several ways to configure a page file on Windows (fixed size, variable size, and on which disk). If you run out of physical memory and out of hard drive space (because your page file has grown very large due to excessive paging), or you reach the limit of your paging file (if it has a static limit), then your applications will fail due to an out-of-memory condition. With today's systems with large local storage, however, this is a rare event.
Be sure to read about paging for the full picture. Check out:
http://en.wikipedia.org/wiki/Paging
In certain cases, you will notice that you have sufficient free physical memory, say 100 MB, and yet your program fails when it tries to allocate a 10 MB block to store a large object. This is caused by fragmentation of the process's address space: although the total free memory is 100 MB, there is no single contiguous 10 MB region that can be used to store your object. This will result in an exception that needs to be handled in your code. If you allocate large objects in your code, you may want to separate the allocation into smaller blocks to facilitate allocation, and then aggregate them back in your code logic. For example, instead of having a single 10 MB vector, you can declare 10 x 1 MB vectors in an array and allocate memory for each individual one.
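A hedged sketch of that workaround in C (chunk count and size are illustrative values only): allocate an array of smaller chunks instead of one large contiguous block, and index into the logical buffer through chunk + offset.

```c
/* Sketch: split one big contiguous allocation into smaller chunks so
 * address-space fragmentation is less likely to make it fail.
 * CHUNK_COUNT and CHUNK_SIZE are illustrative values. */
#include <stdio.h>
#include <stdlib.h>

#define CHUNK_COUNT 10
#define CHUNK_SIZE  (1u << 20)   /* 1 MB per chunk, ~10 MB total */

int main(void)
{
    char *chunks[CHUNK_COUNT] = {0};

    for (int i = 0; i < CHUNK_COUNT; i++) {
        chunks[i] = malloc(CHUNK_SIZE);
        if (chunks[i] == NULL) {           /* one chunk failed: clean up */
            for (int j = 0; j < i; j++)
                free(chunks[j]);
            fprintf(stderr, "allocation of chunk %d failed\n", i);
            return 1;
        }
    }

    /* Index into the logical 10 MB buffer via chunk + offset. */
    size_t logical = 5 * CHUNK_SIZE + 123;
    chunks[logical / CHUNK_SIZE][logical % CHUNK_SIZE] = 7;

    for (int i = 0; i < CHUNK_COUNT; i++)
        free(chunks[i]);
    return 0;
}
```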

memory allocation vs. swapping (under Windows)

sorry for my rather general question, but I could not find a definite answer to it:
Given that I have free swap memory left and I allocate memory in reasonable chunks (~1MB) -> can memory allocation still fail for any reason?
The smartass answer would be "yes, memory allocation can fail for any reason". That may not be what you are looking for.
Generally, whether your system has free memory left is not related to whether allocations succeed. Rather, the question is whether your process address space has free virtual address space.
The allocator (malloc, operator new, ...) first looks if there is free address space in the current process that is already mapped, that is, the kernel is aware that the addresses should be usable. If there is, that address space is reserved in the allocator and returned.
Otherwise, the kernel is asked to map new address space to the process. This may fail, but generally doesn't, as mapping does not imply using physical memory yet -- it is just a promise that, should someone try to access this address, the kernel will try to find physical memory and set up the MMU tables so the virtual->physical translation finds it.
When the system is out of memory, meaning there is no physical memory left, the process is suspended and the kernel attempts to free physical memory by moving other processes' memory to disk. The application does not notice this, except that executing a single assembler instruction apparently took a long time.
Memory allocations in the process fail if there is no mapped free region large enough and the kernel refuses to establish a mapping. For example, not all virtual addresses are usable, as most operating systems map the kernel at some address (typically 0x80000000, 0xc0000000, 0xe0000000 or some such on 32-bit architectures), so there is a per-process limit that may be lower than the system limit (for example, a 32-bit process on Windows can only allocate 2 GB, even if the system is 64-bit). File mappings (such as the program itself and DLLs) further reduce the available space.
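A small POSIX sketch (Linux assumed) of the point above: asking the kernel to map new, anonymous address space usually succeeds and costs no physical memory; pages are only backed when first touched.

```c
/* POSIX/Linux sketch: an anonymous mapping hands out address space; the
 * kernel only supplies physical pages when they are first touched. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t size = (size_t)1 << 30;  /* ask for 1 GB of address space */

    char *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {          /* the mapping itself can fail */
        perror("mmap");
        return 1;
    }

    /* Nothing physical has been used yet. Touching a page is what forces
     * the kernel to find memory for it (and may trigger swapping). */
    memset(p, 0xAB, 4096);          /* fault in just the first page */

    munmap(p, size);
    return 0;
}
```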
A very general and theoretical answer would be no, it cannot. One of the reasons it could possibly fail, under very peculiar circumstances, is some weird fragmentation of your available / allocatable memory. I wonder whether you're trying to get a (probably very minor) performance boost (skipping the if pointer == NULL kind of thing) or you're just wondering and want to discuss it, in which case you should probably use chat.
Yes, memory allocation often fails when you run out of address space in a 32-bit application (which can be 2, 3 or 4 GB depending on OS version and settings), for example because of a memory leak. It can also fail if your OS runs out of space in your swap file.

Process Explorer: What does the Commit History graph show?

In the Memory graph available in Process Explorer, the top graph shows Commit History. What does this actually indicate at the OS level?
To experiment whether this is the memory allocated on the heap by a process, I wrote a small program that incrementally malloc-ed 100 MB many times. The Commit History graph increased for a while (up to 1.7 GB of memory allocation) and did not grow after that, despite the program malloc-ing more memory.
So, what is this graph indicating? How can this information be used to understand/analyse the state of Windows?
The Commit level is the amount of anonymous virtual address space allocated to all processes in the system. (It does not include any file-backed virtual address space, e.g., from an mmap'd file.) In process explorer, the 'Commit History' graph shows the size of this value over time.
Because of the way virtual memory is allocated and parceled out (the actual RAM backing a page of virtual address space isn't necessarily allocated until it's first touched), this current 'commit' level represents the worst case (at the moment) of memory that the system may have to come up with. Unlike Linux, Windows will not hand out promises (address space) for RAM it cannot come up with or fake (via the paging file). So, once the commit level reaches the limit for the system (roughly RAM + paging file size), new address space allocations will fail (but new uses of existing virtual address space regions will not fail).
Some conclusions about your system that you can draw from this value:
If this value is less than your current RAM (excepting the kernel and system overhead), then your system is very unlikely to swap (use the paging file) since in the worst case, everything should fit in memory.
If this value is much larger than physical memory usage, then some program is allocating a lot of virtual address space, but isn't yet using it.
Exiting an application should reduce the committed memory usage as all of its virtual address space will be cleaned up.
Your experiment validated this. I suspect you ran into address space limitations (32-bit processes in Windows are limited to 2 GB ... maybe 300 MB disappeared to fragmentation, libraries and text?).
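If you want the same numbers programmatically, GetPerformanceInfo from the Win32 API reports the system-wide commit charge and commit limit that Process Explorer graphs; a hedged sketch (the counts come back in pages, so they are scaled by PageSize):

```c
/* Win32 sketch: read the system-wide commit charge and commit limit,
 * the values behind Process Explorer's Commit History graph.
 * Link against psapi.lib (or -lpsapi with MinGW). */
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    PERFORMANCE_INFORMATION pi;
    pi.cb = sizeof(pi);
    if (!GetPerformanceInfo(&pi, sizeof(pi))) {
        printf("GetPerformanceInfo failed: %lu\n", GetLastError());
        return 1;
    }

    /* Counts are in pages; multiply by the page size for bytes. */
    unsigned long long page = (unsigned long long)pi.PageSize;
    printf("Commit total: %llu MiB\n",
           (unsigned long long)pi.CommitTotal * page >> 20);
    printf("Commit limit: %llu MiB (roughly RAM + paging file)\n",
           (unsigned long long)pi.CommitLimit * page >> 20);
    return 0;
}
```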

How does the operating system know how much memory my app is using? (And why doesn't it do garbage collection?)

When my task manager (top, ps, taskmgr.exe, or Finder) says that a process is using XXX KB of memory, what exactly is it counting, and how does it get updated?
In terms of memory allocation, does an application written in C++ "appear" different to an operating system from an application that runs as a virtual machine (managed code like .NET or Java)?
And finally, if memory is so transparent - why is garbage collection not a function-of or service-provided-by the operating system?
As it turns out, what I was really interested in asking is WHY the operating system could not do garbage collection and defrag memory space - which I see as a step above "simply" allocating address space to processes.
These answers help a lot! Thanks!
This is a big topic that I can't hope to adequately answer in a single answer here. I recommend picking up a copy of Windows Internals, it's an invaluable resource. Eric Lippert had a recent blog post that is a good description of how you can view memory allocated by the OS.
Memory that a process is using is basically just address space that is reserved by the operating system that may be backed by physical memory, the page file, or a file. This is the same whether it is a managed application or a native application. When the process exits, the operating system deletes the memory that it had allocated for it - the virtual address space is simply deleted and the page file or physical memory backings are free for other processes to use. This is all the OS really maintains - mappings of address space to some physical resource. The mappings can shift as processes demand more memory or are idle - physical memory contents can be shifted to disk and vice versa by the OS to meet demand.
What a process is using according to those tools can actually mean one of several things - it can be total address space allocated, total memory allocated (page file + physical memory) or memory a process is actually using that is resident in memory. Task Manager has a separate column for each of these possibilities.
The OS can't do garbage collection since it has no insight into what that memory actually contains - it just sees allocated pages of memory, it doesn't see objects which may or may not be referenced.
Whereas the OS handles allocations at the virtual address level, in the process itself there are other memory managers which take these large, page-sized chunks and break them up into something useful for the application to use. Windows will return memory allocated on 64k boundaries, but then the heap manager breaks it up into smaller chunks for use by each individual allocation done by the program via new. In .Net applications, the CLR will hand out new objects from the garbage-collected heap and, when that heap reaches its limits, will perform garbage collection.
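That layering can be made visible with two Win32 calls: VirtualAlloc is the OS-level, page-granular allocation (regions start on 64 KB boundaries), while HeapAlloc returns small blocks carved out of such regions by the in-process heap manager. A minimal sketch:

```c
/* Win32 sketch: VirtualAlloc works at page/region granularity (regions
 * start on 64 KB boundaries); HeapAlloc hands out small blocks carved
 * from such regions by the in-process heap manager. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* OS-level allocation: one committed region, 64 KB-aligned start. */
    void *region = VirtualAlloc(NULL, 4096,
                                MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    printf("VirtualAlloc region at %p\n", region);

    /* Heap-manager allocation: a small block from the default process heap. */
    HANDLE heap = GetProcessHeap();
    void *small = HeapAlloc(heap, 0, 24);   /* 24-byte object */
    printf("HeapAlloc block at    %p\n", small);

    HeapFree(heap, 0, small);
    VirtualFree(region, 0, MEM_RELEASE);
    return 0;
}
```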
I can't speak to your question about the differences in how the memory appears in C++ vs. virtual machines, etc, but I can say that applications are typically given a certain memory range to use upon initialization. Then, if the application ever requires more, it will request it from the operating system, and the operating system will (generally) grant it. There are many implementations of this - in some operating systems, other applications' memory is moved away so as to give yours a larger contiguous block, and in others, your application gets various chunks of memory. There may even be some virtual memory handling involved. It's all up to an abstracted implementation. In any case, the memory is still treated as contiguous from within your program - the operating system will handle that much at least.
With regard to garbage collection, the operating system knows the bounds of your memory, but not what is inside. Furthermore, even if it did look at the memory used by your application, it would have no idea what blocks are used by out-of-scope variables and such that the GC is looking for.
The primary difference is application management. Microsoft distinguishes this as Managed and Unmanaged. When objects are allocated in memory they are stored at a specific address. This is true for both managed and unmanaged applications. In the managed world the "address" is wrapped in an object handle or "reference".
When memory is garbage collected by a VM, it can safely suspend the application, move objects around in memory and update all the "references" to point to their new locations in memory.
In a Win32 style app, pointers are pointers. There's no way for the OS to know if it's an object, an arbitrary block of data, a plain-old 32-bit value, etc. So it can't make any inferences about pointers between objects, so it can't move them around in memory or update pointers between objects.
Because of the way references are handled, the OS can't take over the GC process; instead it's left up to the VM to manage the memory used by the application. For that reason, VM applications appear exactly the same to the OS. They simply request blocks of memory for use and the OS gives it to them. When the VM performs GC and compacts its memory, it's able to free memory back to the OS for use by another app.
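To make the "pointers are pointers" point above concrete, here is a tiny C sketch: the only reference to an object is stashed in a plain integer, so no external collector could find it, update it, or safely move the object.

```c
/* Sketch: in native code a "reference" is just a number. If the OS moved
 * the object, it could not find and patch every place the address is
 * stored -- here it is even hidden inside a plain integer. */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

int main(void)
{
    int *object = malloc(sizeof *object);
    *object = 42;

    /* The only "reference" is stashed in an ordinary integer. */
    uintptr_t hidden = (uintptr_t)object;
    object = NULL;

    /* Later, the address is recovered and used. No runtime metadata
     * records that `hidden` was ever a pointer. */
    int *again = (int *)hidden;
    printf("%d\n", *again);

    free(again);
    return 0;
}
```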

Resources