I have some conceptions I will lay out first. A 32-bit Windows application can't address more than 2 GB of memory by default (3 GB with the /3GB boot switch, or 4 GB as a large-address-aware process on 64-bit Windows), if I have that right. However, when the system runs low on physical memory, the Windows memory manager will trim pages from the application's working set to make room for whatever is needed. Trimmed pages may stay in physical RAM on the standby list, outside the application's working set, or be written to the page file. When the 32-bit application touches that data again, the memory manager can respond swiftly if the page is still in RAM, and much more slowly if it has to be read back from disk.
I can imagine there is overhead to this. How much overhead are we talking about? A 64-bit application could map a virtual address space large enough for everything it needs, but how much more efficient is that than shuffling data around via the memory manager?
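One way to put a rough number on the overhead is to measure it on your own machine: time a pass over a region that fits in RAM against a pass over a region that forces paging. A minimal C sketch; the 1 GB and 6 GB sizes are assumptions, so adjust them to sit below and above your physical RAM, build it as a 64-bit binary, and make sure your swap/page file is large enough to commit the big region:

```c
/* Paging-overhead probe (a sketch, not a rigorous benchmark).
 * SMALL is assumed to fit in physical RAM; LARGE is assumed to exceed it,
 * so the second timed pass forces the OS to page to and from disk. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double now_sec(void)
{
    struct timespec ts;
    timespec_get(&ts, TIME_UTC);             /* C11 wall clock */
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

static double touch_all(unsigned char *buf, size_t size)
{
    double start = now_sec();
    for (size_t i = 0; i < size; i += 4096)  /* write one byte per page */
        buf[i]++;
    return now_sec() - start;
}

int main(void)
{
    const size_t small = (size_t)1 << 30;    /* 1 GB: assumed to fit in RAM  */
    const size_t large = (size_t)6 << 30;    /* 6 GB: assumed to exceed RAM  */
    unsigned char *a = calloc(small, 1);
    unsigned char *b = calloc(large, 1);
    if (!a || !b) { fprintf(stderr, "allocation failed\n"); return 1; }

    touch_all(b, large);                     /* warm-up: fault everything in */
    touch_all(a, small);                     /* touch a last, so it is resident */
    printf("fits in RAM: %.2f s\n", touch_all(a, small));
    printf("paged:       %.2f s\n", touch_all(b, large));
    return 0;
}
```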
I am curious about the difference between memory management in Windows and Linux. Does Windows support paging or segmentation?
I am trying to understand this: if the processes on a Windows machine cumulatively use up all the RAM, every user is prevented even from logging in to the system, but that is not the case on Linux systems.
So how is this achieved on a Linux system?
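For what it's worth, the usual explanation is Linux memory overcommit: malloc hands out address space without reserving physical pages, a page is only backed when it is first touched, and when physical memory truly runs out the OOM killer terminates a victim process instead of blocking the whole system. A small sketch to observe it; the 32 GB figure is an assumption, pick something larger than your free RAM, and note that the exact behaviour depends on /proc/sys/vm/overcommit_memory:

```c
/* Linux overcommit demo: the malloc below will often succeed even though
 * the machine has far less free RAM, because pages are only backed when
 * first written. Behaviour depends on /proc/sys/vm/overcommit_memory. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t size = (size_t)32 << 30;          /* 32 GB: assumption, pick > your RAM */
    char *p = malloc(size);
    printf("malloc(32 GB) %s\n", p ? "succeeded (overcommitted)" : "failed");

    /* Writing to every page would force the kernel to actually back them;
     * once RAM and swap run out, the OOM killer picks a victim process.
     * Uncomment at your own risk:
     * for (size_t i = 0; p && i < size; i += 4096) p[i] = 1;
     */
    free(p);
    return 0;
}
```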
In addition to the recent answers: Windows 10 also supports RAM compression. That means that before Windows tries to swap memory out to the hard drive, it will try to compress it in RAM.
Given a 32-bit/64-bit processor, can a 4GB process run on 2GB of RAM? Will it use virtual memory, or will it not run at all?
This is HIGHLY platform dependent. On many 32-bit OSes, no single process can ever use more than 2GB of memory, regardless of the physical memory installed or the virtual memory allocated.
For example, my work computers run 32-bit Linux with PAE (Physical Address Extension) to allow 16GB of RAM to be installed. The per-process limit still applies, however (2GB, or 3GB with the usual Linux kernel split); having the extra RAM simply allows me to have more individual processes running. 32-bit Windows works the same way.
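If you want to see where the ceiling sits on a given machine, a crude probe is to keep allocating fixed-size chunks until allocation fails. A sketch; the 256 MB chunk size is arbitrary, and note that on 64-bit Linux, overcommit can let untouched allocations run far past physical RAM:

```c
/* Address-space probe: allocate 256 MB chunks until allocation fails.
 * On a 32-bit OS this typically stops somewhere around 2-3 GB; on 64-bit
 * it can run until the commit limit (or overcommit heuristic) says stop. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const size_t chunk = (size_t)256 << 20;  /* 256 MB: arbitrary step size */
    size_t total = 0;
    while (malloc(chunk) != NULL)            /* leaked on purpose: we only probe */
        total += chunk;
    printf("allocation failed after %zu MB\n", total >> 20);
    return 0;
}
```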
64-bit OSes are more of a mixed bag. 64-bit Linux will allow individual processes to map memory well in excess of 32GB (though, again, this varies from kernel to kernel); you will be limited only by your RAM plus swap (Linux's virtual memory). 64-bit Windows is a complete crap shoot: certain configurations (e.g., 32-bit processes that are not large-address-aware) still cap out at 2GB, but most will allow >32GB, limited only by the size of the page file the user has allocated.
Microsoft provides a useful table breaking down the various memory limits by OS version/edition. Unfortunately, I can't find an equivalent table for Linux with cursory searching, since support there is so fragmented.
Short answer: Depends on the system.
Most 32-bit systems have a limit of 2GB per process. If your system allows more than 2GB per process, then we can move on to the next part of your question.
Most modern systems use virtual memory. Yet there are some constrained (and various old) systems that would just run out of space and make you cry; I believe uClinux supports both MMU and MMU-less architectures. Most 32-bit processors have an MMU (a few don't; see the ARM Cortex-M0), and a handful of 16-bit or 8-bit parts have one as well (see Atmel ATtiny13A-MMU and Atari MMU).
Any process that needs more memory than is physically available will require some form of swap space (e.g., a swap partition or file).
Virtual memory is divided into pages. At any given time, a page resides either in RAM or in swap. Any attempt to access a page that is not loaded in RAM triggers an interrupt called a page fault, which is handled by the kernel.
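On Linux you can watch demand paging happen with mincore(2), which reports whether each page of a mapping is currently resident in RAM; a minimal sketch:

```c
/* Demand-paging demo (Linux): map anonymous memory, then use mincore()
 * to show that pages only become resident after they are first touched
 * (each first touch raises a page fault that the kernel handles). */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define NPAGES 16

static size_t resident_pages(void *addr, size_t len)
{
    unsigned char vec[NPAGES] = {0};
    size_t count = 0;
    mincore(addr, len, vec);                 /* one status byte per page */
    for (size_t i = 0; i < NPAGES; i++)
        count += vec[i] & 1;                 /* bit 0 = page is in RAM */
    return count;
}

int main(void)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    size_t len  = NPAGES * page;
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return 1;

    printf("resident before touch: %zu/%d pages\n", resident_pages(p, len), NPAGES);
    for (size_t i = 0; i < len; i += page)
        p[i] = 1;                            /* first write faults each page in */
    printf("resident after touch:  %zu/%d pages\n", resident_pages(p, len), NPAGES);

    munmap(p, len);
    return 0;
}
```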
A 64-bit process needing 4GB on a 64-bit OS can generally run in 2GB of physical RAM, by using virtual memory, assuming disk swap space is available, but performance will be severely impacted if all of that memory is frequently accessed.
A 32-bit process can't address a full 4GB of memory in practice (the operating system reserves part of the address space for itself), so a process that truly needs 4GB won't run. Depending on the OS, it can probably run a process that needs more than 2GB but less than 3-4GB.
There is a utility, consume.exe, that comes with the Windows Server 2003 Resource Kit Tools and can be used to max out CPU utilization, fill up physical memory or disk space, or even fill up the page file. When run in page-file mode, the tool does not seem to consume physical memory first (there is no change in free bytes).
If I were to implement a similar tool, how would I use up the page file before using up physical memory? (Let's say I'm using C#/.NET.)
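One plausible mechanism (an assumption about how consume.exe works, but it matches the behavior you describe) is to commit memory and never touch it: committed pages are charged against the system commit limit, which is backed by RAM plus the page file, but a page takes no physical RAM until it is first written. A Win32 sketch in C; from C#/.NET you could reach VirtualAlloc the same way via P/Invoke:

```c
/* Sketch: consume page-file-backed commit charge without consuming RAM.
 * VirtualAlloc(MEM_COMMIT) charges pages against the commit limit
 * immediately, but physical pages are only assigned on first access,
 * so free bytes barely move while committed bytes climb. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    const SIZE_T chunk = (SIZE_T)256 << 20;  /* 256 MB per step: arbitrary */
    SIZE_T total = 0;

    for (;;) {
        void *p = VirtualAlloc(NULL, chunk, MEM_RESERVE | MEM_COMMIT,
                               PAGE_READWRITE);
        if (p == NULL)                       /* commit limit reached */
            break;
        /* Deliberately never write to *p: no page faults, no RAM used. */
        total += chunk;
    }
    printf("committed %llu MB; sleeping so you can watch Task Manager\n",
           (unsigned long long)(total >> 20));
    Sleep(60 * 1000);                        /* hold the commit charge */
    return 0;
}
```

The charge is released again as soon as the process frees the memory or exits, which is why a tool like this has to keep running to hold the page file full.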
My system uses a third-party kernel built as native libraries (C++) with a J2EE upper layer running on Tomcat 6. The vendor stipulates a 32-bit JDK, and overall the application is very memory hungry. We are presently running a 32-bit JVM on Windows x64. Essentially, the JVM will hang once its Virtual Size gets close to the 2GB 32-bit addressing limit.
Question: from time to time, the third-party frameworks make large requests for memory, and this pushes up the Virtual Size allocated to the server process. The Virtual Size never recovers, even though the native kernel appears to reduce its memory needs afterwards. In a typical Tomcat deployment, does the Virtual Size ever recover automatically, or does it always act as a high-water mark that keeps rising? Is there a way to tell the JVM to lower its Virtual Size dynamically?
I suspect that the third-party native kernel is to blame here, but I need to investigate all our options.
FYI: AWE on Windows is not a clear option, as the vendor does not officially support any JVMs with AWE support. Migration to Linux is not an easy path either, but it is being considered.