What is the maximum amount of memory any single process on Windows can address? Is this different than the maximum virtual memory for the system? How would this affect a system design?
On 32-bit versions of Windows, a single process can map and address no more than 2GB of virtual memory at a time, or 3GB if the system is booted with the /3GB option and the executable is linked with /LARGEADDRESSAWARE. In 64-bit versions of Windows, a 32-bit process can map and address no more than 4GB of virtual memory at a time, and only gets that much if it is linked with /LARGEADDRESSAWARE; otherwise it gets 2GB.
For 64-bit processes, the amount is difficult to calculate as there are numerous overlapping limits that could apply depending on all kinds of factors. The user-mode address space is roughly 8TB on 64-bit versions before Windows 8.1 and 128TB on Windows 8.1 and later.
The maximum amount of virtual memory for the system is difficult to calculate and not a very meaningful number. Also, the limits on physical memory are not related to these limits on virtual memory.
You can read more details on Microsoft's page, Memory Limits for Windows Releases.
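If you want to see the range your own process can actually address, here is a minimal Win32/C++ sketch using GetSystemInfo; the reported upper bound reflects the build (32-bit vs. 64-bit, large-address-aware or not) and the Windows version:

```cpp
// Minimal sketch (Win32/C++): ask the OS for the address range this process
// may use. The upper bound depends on whether the build is 32- or 64-bit,
// whether it is large-address-aware, and the Windows version.
#include <windows.h>
#include <cstdio>

int main()
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);   // fills in, among other things, the application address range

    std::printf("Lowest usable address : %p\n", si.lpMinimumApplicationAddress);
    std::printf("Highest usable address: %p\n", si.lpMaximumApplicationAddress);
    return 0;
}
```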
Related
Is there a size limit to the file mapping object? The reason I'm asking is that there is a mention of a 2GB limit somewhere in MSDN (I've lost track of where), and I also checked this sample, which also expects a 2GB size limit:
https://cpp.hotexamples.com/examples/-/-/CreateFileMapping/cpp-createfilemapping-function-examples.html
But I tried it on a 40GB file with no problems on the newest Windows 10, so I'm a bit worried that there might be some limitation on older Windows, for example.
There is no 2GB limit for file mappings. You can boot 32-bit Windows with the /3GB option, or, for a 32-bit process running on a 64-bit system, you get the full 4GB if the correct PE flag (/LARGEADDRESSAWARE) is set. All these limits are theoretical and you will never reach them in practice.
How large a view you can map depends on two things:
The contiguous range of free addresses in your process's address space.
Available kernel memory for keeping track of the memory pages.
The first one is the big limit on 32-bit systems since the address space of your process is shared with system libraries, 3rd-party libraries (anti-virus, injected "tweaking" tools etc.), the PEB and TEBs, the system region, thread stacks and memory reserved by hardware. This will often put you well below 2GB. Any design requiring more than 500MB should probably be changed to only map in specific smaller ranges as needed.
For a 64-bit process on 64-bit Windows, the virtual address space is the 128-terabyte range 0x00000000'00000000 through 0x00007FFF'FFFFFFFF (KB889654 claims 8TB, but that only applies to versions before Windows 8.1). Any usable range is going to be smaller, but you can assume a couple of terabytes at least. 40GB is no problem, and not enough to run into problems with low system resources either.
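As a concrete illustration of the "map smaller views as needed" advice above, here is a minimal read-only sketch; the file name big.dat and the 64MB window size are just placeholders, and error handling is reduced to early returns:

```cpp
// Minimal read-only sketch of mapping a huge file in smaller windows.
// "big.dat" and the window size are placeholders; errors just return early.
#include <windows.h>
#include <cstdio>

int main()
{
    HANDLE file = CreateFileW(L"big.dat", GENERIC_READ, FILE_SHARE_READ,
                              nullptr, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (file == INVALID_HANDLE_VALUE) return 1;

    LARGE_INTEGER size;
    if (!GetFileSizeEx(file, &size)) return 1;

    // The mapping object itself can cover the whole file, even tens of GB.
    HANDLE mapping = CreateFileMappingW(file, nullptr, PAGE_READONLY, 0, 0, nullptr);
    if (!mapping) return 1;

    // View offsets must be multiples of the allocation granularity (usually 64KB),
    // so derive the window size from it: 1024 * 64KB = 64MB per view here.
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    const ULONGLONG window = 1024ull * si.dwAllocationGranularity;

    for (ULONGLONG offset = 0; offset < (ULONGLONG)size.QuadPart; offset += window)
    {
        ULONGLONG remaining = (ULONGLONG)size.QuadPart - offset;
        SIZE_T bytes = (SIZE_T)(remaining < window ? remaining : window);

        void* view = MapViewOfFile(mapping, FILE_MAP_READ,
                                   (DWORD)(offset >> 32), (DWORD)offset, bytes);
        if (!view) return 1;

        // ... process the bytes in [view, view + bytes) here ...

        UnmapViewOfFile(view);   // free the address range before mapping the next window
    }

    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}
```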
Generally, as we know, virtual memory is larger than physical memory. But when is it advantageous to define virtual memory smaller than physical memory?
If you have pointer-heavy code, you can save memory by choosing a smaller address space. For example, a pointer on a 32-bit platform occupies 4 bytes versus 8 bytes on 64-bit. The same goes for integer types like size_t.
This only works and makes sense if:
Your code/application/server uses multiple processes and all processes together need more memory than the amount of virtual memory (otherwise you wouldn't need more physical than virtual memory).
Your platform supports more physical than virtual memory (for example, Intel PAE).
The smaller amount of virtual memory is enough for each single process.
Imagine a large server system supporting multiple users. You don't want users to hog memory, so you restrict the size of the logical (virtual) address space by limiting page table size.
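As a small illustration of the pointer-size point above, compiling the following as a 32-bit and then a 64-bit build shows how much a pointer-heavy node type shrinks:

```cpp
// Tiny illustration: the same node type shrinks on a 32-bit build because
// pointers and size_t are 4 bytes instead of 8.
#include <cstddef>
#include <cstdio>

struct Node {
    Node*       next;    // 4 bytes on a 32-bit build, 8 on a 64-bit build
    std::size_t length;  // likewise 4 vs. 8 bytes
    int         value;
};

int main()
{
    std::printf("sizeof(void*)  = %zu\n", sizeof(void*));
    std::printf("sizeof(size_t) = %zu\n", sizeof(std::size_t));
    std::printf("sizeof(Node)   = %zu\n", sizeof(Node));   // typically 12 vs. 24 (with padding)
    return 0;
}
```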
I'm a fresher and was asked this question in the Microsoft recruitment process.
I'd read somewhere that the maximum memory allocated to a process can be the maximum physical memory available. So is it that if the RAM is 4GB, that's the answer? If yes, then how? Because some part of the RAM is always occupied by the Operating System, right? If no, then could you tell me the answer and what are the factors it really depends on?
First of all, the basis of your question is virtual memory, which has already been pointed out by Chris O!
Now, proceeding to your questions step by step:
I'd read somewhere that the maximum memory allocated to a process can be the maximum physical memory available. So is it that if the RAM is 4GB, that's the answer?
No, the maximum memory which your process can use can be anything, depending on the virtual memory assigned or the swap size. Swap space is generally set to twice the physical memory, though it can always be more or less depending on the requirements!
Also, PAE (Physical Address Extension) allows more memory to be allocated. PAE allows a 32-bit OS to use more RAM, that is, more physical memory. This has nothing whatsoever to do with the 4GB virtual address space limitation that 32-bit OSes have.
A 32-bit OS uses 32-bit virtual addresses. That limits it to 4GB of addressable virtual memory at any one time. If a 32-bit OS also uses 32-bit physical addresses, it is limited to 4GB of physical memory as well. PAE allows a 32-bit OS to use 36-bit physical addresses, which raises the limit to 64GB.
Next, the point you mentioned is valid only for atomic processes which can't be broken further into threads or the like. One would rarely face the situation in which the size of such an atomic process is more than that of the physical memory...
If yes, then how? Because some part of the RAM is always occupied by the Operating System, right?
No, it's not, as I already mentioned above!
If no, then could you tell me the answer and what are the factors it really depends on?
The memory requirement of a process is not defined in advance. But you might have heard that many programs recommend at least a certain amount of memory to run; this is the minimal requirement of the process, without which it won't even run properly, because it must have suitable physical memory to handle its work. Next, the term swapping comes into the picture whenever we are talking about virtual memory: processes which are currently not running are swapped out to disk, and the processes which are to be executed are brought into physical memory for execution. So more than one process can be requested and executed through continuous swapping!
Some other continuous processes which are maintained in main memory are:
System processes OR daemons
cache memory or cache maintenance
Is the heap local to a process? In other words, we have the stack, which is always local to a process and separate for each process. Does the same apply to the heap? Also, if the heap is local, I believe the heap size should change during runtime as we request more and more memory from the operating system, so who puts a top limit on how much memory can be requested?
Heaps are indeed local to the process. Limits are placed by the operating system. Memory can also be limited by the number of bits used for addressing (i.e., 32 bits can only address 4GB of memory at a time).
Yes, on modern operating systems there exists a separate heap for each process. There is by the way not just a separate stack for every process, but there is a separate stack for every thread in the process. Thus a process can have quite a number of independent stacks.
But not all operating systems and not all hardware platforms offer this feature. You need a memory management unit (in hardware) for that to work. But desktop computers have had that feature since... well... a while back... the 386 CPU? (leave a comment if you know better). You may, though, find yourself on some kind of microprocessor that does not have that feature.
Anyway: the limit to the heap size is mainly set by the operating system and the hardware. The hardware limits it especially through the limited amount of address space it allows. For example, a 32-bit CPU will not address more than 4GB (2^32 bytes). A CPU that features Physical Address Extension (PAE), which current CPUs do support, can address up to 64GB, but that is done through an extended page-table format, and a single process still cannot make use of more; it will always see 4GB max.
Additionally the operating system can limit the memory as it sees fit. On Linux you can see and set limits using the ulimit command. If you are running some code not natively, but for example in an interpreter/virtual machine (such as Java, or PHP), then that environment can additionally limit the heap size.
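On Linux, the ulimit command typically corresponds to the getrlimit/setrlimit API underneath; a minimal sketch of querying and then lowering the address-space limit (RLIMIT_AS), assuming a Linux/C++ build:

```cpp
// Minimal sketch (Linux/C++): what `ulimit -v` corresponds to programmatically,
// namely querying and lowering the address-space limit via getrlimit/setrlimit.
#include <sys/resource.h>
#include <cstdio>

int main()
{
    rlimit lim{};
    if (getrlimit(RLIMIT_AS, &lim) != 0) return 1;
    std::printf("current address-space soft limit: %llu bytes (RLIM_INFINITY = unlimited)\n",
                (unsigned long long)lim.rlim_cur);

    // Lower the soft limit to 1GB for this process and its children; allocations
    // beyond that will then fail (malloc returns NULL, mmap fails with ENOMEM).
    lim.rlim_cur = 1ull * 1024 * 1024 * 1024;
    if (setrlimit(RLIMIT_AS, &lim) != 0) return 1;
    return 0;
}
```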
The 'heap' is local to a process, but it is shared among threads, while the stack is not; it is per-thread.
Regarding the limit, for example on Linux it is set by ulimit (see the manpage).
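To make the heap-vs-stack point concrete, here is a small sketch (using std::thread) that prints where a shared heap object and each thread's own stack variable live; the heap address is the same for both threads, the stack addresses are not:

```cpp
// Small illustration of the point above: two threads in one process share the
// heap, but each has its own stack.
#include <cstdio>
#include <thread>

int* shared = new int(42);      // heap allocation, visible to every thread

void worker(const char* name)
{
    int local = 0;              // lives on this thread's private stack
    std::printf("%s: heap object at %p, stack variable at %p\n",
                name, (void*)shared, (void*)&local);
}

int main()
{
    std::thread a(worker, "thread A");
    std::thread b(worker, "thread B");
    a.join();
    b.join();
    delete shared;
    return 0;
}
```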
On a modern, preemptively multi-tasking OS, each process gets its own address space. The set of memory pages that it can see are separate from the pages that other processes can see. As a result, yes, each process sees its own stack and heap, because the stack and heap are just areas of memory.
On an older, cooperatively multi-tasking OS, each process shared the same address space, so the heap was effectively shared among all processes.
The heap is defined by the collection of things in it, so the heap size only changes as memory is allocated and freed. This is true regardless of how the OS is managing memory.
The top limit of how much memory can be requested is determined by the memory manager. In a machine without virtual memory, the top limit is simply how much memory is installed in the computer. With virtual memory, the top limit is defined by physical memory plus the size of the swap file on the disk.
I am playing with an MSDN sample to do memory stress testing (see: http://msdn.microsoft.com/en-us/magazine/cc163613.aspx) and an extension of that tool that specifically eats physical memory (see http://www.donationcoder.com/Forums/bb/index.php?topic=14895.0;prev_next=next). I am obviously confused, though, about the differences between virtual and physical memory. I thought each process has 2GB of virtual memory (although I also read 1.5GB because of "overhead"). My understanding was that some/all/none of this virtual memory could be physical memory, and the amount of physical memory used by a process could change over time (memory could be swapped out to disc, etc.). I further thought that, in general, when you allocate memory, the operating system could use physical memory or virtual memory.

From this, I conclude that dwAvailVirtual should always be equal to or greater than dwAvailPhys in the call to GlobalMemoryStatus. However, I often (always?) see the opposite. What am I missing?
I apologize in advance if my question is not well formed. I'm still trying to get my head around the whole memory management system in Windows. Tutorials/Explanations/Book recs are most welcome!
Andrew
That was only true in the olden days, back when RAM was expensive. The operating system maps pages of virtual memory to RAM as needed. If there isn't enough RAM to satisfy a program's request, it starts unmapping pages to make room. If such a page contains data instead of code, it gets written to the paging file. Whenever the program accesses that page again, it generates a paging fault, letting the operating system read the page back from disk.
If the machine has little RAM and lots of processes consuming virtual memory pages, that can cause a very unpleasant effect called "thrashing". The operating system is constantly accessing the disk and machine performance slows down to a crawl.
More RAM means less disk access. There's very little reason not to use 3 or 4GB of RAM on a 32-bit operating system; it's cheap. Even so, you may not get to use all 4GB, since not all of it will be addressable due to hardware devices taking space on the address bus (video, mostly). But that won't change the size of the virtual memory accessible by user code; it is still 2 gigabytes.
Windows Internals is a good book.
The amount of virtual memory is limited by size of the address space - which is 4GB per process on a 32-bit system. And you have to subtract from this the size of regions reserved for system use and the amount of VM used already by your process (including all the libraries mapped to its address space).
On the other hand, the total amount of physical memory may be higher than the amount of virtual memory space the system has left free for your process to use (and these days it often is).
This means that if you have more than ~2GB of RAM, you can't use all your physical memory in one process (since there's not enough virtual memory space to map it to), but it can be used by many processes. Note that this limitation is removed in a 64-bit system.
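A rough way to see this limit from inside a process is to reserve address space until the call fails; the sketch below is meant to be built as a 32-bit executable (a 64-bit build would loop for a very long time), and MEM_RESERVE consumes no physical memory, only address space:

```cpp
// Rough sketch: reserve address space in 1MB chunks until VirtualAlloc fails,
// to see how much virtual address space the process actually has left.
// Intended to be built as a 32-bit executable; MEM_RESERVE consumes no RAM.
#include <windows.h>
#include <cstdio>

int main()
{
    const SIZE_T chunk = 1 << 20;   // 1MB, a multiple of the 64KB allocation granularity
    SIZE_T reserved = 0;

    while (VirtualAlloc(nullptr, chunk, MEM_RESERVE, PAGE_NOACCESS) != nullptr)
        reserved += chunk;          // reservations are released when the process exits

    std::printf("Reserved roughly %llu MB of virtual address space\n",
                (unsigned long long)(reserved >> 20));
    return 0;
}
```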
I don't know if this is your issue, but the MSDN page for the GlobalMemoryStatus function contains the following warning:
On computers with more than 4 GB of memory, the GlobalMemoryStatus function can return incorrect information, reporting a value of –1 to indicate an overflow. For this reason, applications should use the GlobalMemoryStatusEx function instead.
Additionally, that page says:
On Intel x86 computers with more than 2 GB and less than 4 GB of memory, the GlobalMemoryStatus function will always return 2 GB in the dwTotalPhys member of the MEMORYSTATUS structure. Similarly, if the total available memory is between 2 and 4 GB, the dwAvailPhys member of the MEMORYSTATUS structure will be rounded down to 2 GB. If the executable is linked using the /LARGEADDRESSAWARE linker option, then the GlobalMemoryStatus function will return the correct amount of physical memory in both members.
Since you're referring to members like dwAvailPhys instead of ullAvailPhys, it sounds like you're using a MEMORYSTATUS structure instead of a MEMORYSTATUSEX structure. I don't know the consequences of that on a 64-bit platform, but on a 32-bit platform that definitely could cause incorrect memory sizes to be reported.
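For reference, here is a minimal sketch of the suggested GlobalMemoryStatusEx call with the 64-bit ull* members; note that dwLength must be set before the call or the function fails:

```cpp
// Minimal sketch of the suggested fix: GlobalMemoryStatusEx with MEMORYSTATUSEX.
// dwLength must be set before the call, or the function fails.
#include <windows.h>
#include <cstdio>

int main()
{
    MEMORYSTATUSEX ms = {};
    ms.dwLength = sizeof(ms);
    if (!GlobalMemoryStatusEx(&ms)) return 1;

    std::printf("Physical  total/avail: %llu / %llu MB\n",
                ms.ullTotalPhys >> 20, ms.ullAvailPhys >> 20);
    std::printf("Virtual   total/avail: %llu / %llu MB\n",
                ms.ullTotalVirtual >> 20, ms.ullAvailVirtual >> 20);
    std::printf("Page file total/avail: %llu / %llu MB\n",
                ms.ullTotalPageFile >> 20, ms.ullAvailPageFile >> 20);
    return 0;
}
```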