My system uses a third-party kernel built on native (C++) libraries, with a J2EE upper layer running on Tomcat 6. The vendor stipulates a 32-bit JDK, and overall the application is very memory-hungry. We are presently running on Windows x64 with a 32-bit JVM. Essentially, the JVM will hang once the Virtual Size gets close to the 2GB 32-bit addressing limit.
Question: From time to time, the third-party frameworks make large requests for memory, and this pushes up the Virtual Size allocated to the process. The Virtual Size never recovers, even though the kernel appears to reduce its memory needs afterwards. In a typical Tomcat deployment, does the Virtual Size ever recover automatically, or does it always act as a high-water mark that keeps on rising? Is there a way to tell the JVM to lower the Virtual Size dynamically?
I suspect that the third-party native kernel is to blame here, but I need to investigate all our options.
FYI - AWE in Windows is not a clear option, as the vendor does not officially support any JVMs that have AWE support. Migration to Linux is not an easy path either, but it is being considered.
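One way to see where the address space is actually going (and whether reserved regions ever shrink) is to walk the JVM process's address space with VirtualQueryEx and total the reserved versus committed regions; the sum of non-free regions is roughly what Task Manager and Process Explorer report as Virtual Size. A minimal diagnostic sketch (the tool name and PID argument are mine, not from any vendor tooling):

    // vsize.cpp - walk a process's address space and total reserved vs.
    // committed bytes. Run it against the Tomcat PID before and after one
    // of the kernel's large allocations to see if reservations are released.
    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char** argv) {
        if (argc < 2) { printf("usage: vsize <pid>\n"); return 1; }
        HANDLE h = OpenProcess(PROCESS_QUERY_INFORMATION, FALSE, atoi(argv[1]));
        if (!h) { printf("OpenProcess failed: %lu\n", GetLastError()); return 1; }

        MEMORY_BASIC_INFORMATION mbi;
        SIZE_T reserved = 0, committed = 0;
        char* addr = 0;
        while (VirtualQueryEx(h, addr, &mbi, sizeof(mbi)) == sizeof(mbi)) {
            if (mbi.State == MEM_RESERVE) reserved  += mbi.RegionSize;
            if (mbi.State == MEM_COMMIT)  committed += mbi.RegionSize;
            addr += mbi.RegionSize;   // step to the next region
        }
        printf("reserved:  %llu MB\n", (unsigned long long)(reserved >> 20));
        printf("committed: %llu MB\n", (unsigned long long)(committed >> 20));
        CloseHandle(h);
        return 0;
    }

If the reserved total stays high after the kernel frees its memory, the native code is holding reserved address space rather than returning it, which would confirm the high-water-mark behaviour.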
I have some preconceptions that I will lay out first. A 32-bit Windows application can't address more than ~3 GB of memory, if I've got it right. However, when it runs out of room in its virtual address space, it will hand data over to the Windows virtual memory manager to make space for whatever it needs. The virtual memory manager will then write this data to a location in physical RAM that lies outside the application's virtual address space. When the 32-bit application needs this data again, the virtual memory manager can respond swiftly by loading it back from physical RAM.
I can imagine there is overhead to this. How much overhead are we talking about? A 64-bit application could simply create a virtual address space large enough for everything it needs, but how much more efficient is that than shuffling data around with the memory manager?
I want to split the RAM in my PC into two parts: half for the Windows OS and the other half for an image buffer for my application. For example, my desktop has 32GB of memory, and I want to assign 16GB to Windows and reserve the other 16GB as an image buffer that only my application can access. Windows shouldn't touch that 16GB; my application alone should use it. I know how to do this in Linux, but I need to do it on Windows. I think I would have to configure the BIOS and implement a Windows driver that remaps the image buffer's pages for my application to access.
Is there any good way to do this?
You can do this with the Address Windowing Extensions (AWE) API. Although this was originally designed for 32-bit applications, it is still available to 64-bit applications, and memory allocated this way is not managed by the virtual memory system (it is never paged out).
However, you should note that in most cases allowing the virtual memory manager to do its job will result in better overall performance than explicitly locking down memory will.
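For illustration, here is a minimal AWE sketch (error handling trimmed, sizes arbitrary). Note that the account running the process must hold the "Lock pages in memory" privilege (SeLockMemoryPrivilege), or AllocateUserPhysicalPages will fail:

    // Allocate physical pages that the pager never touches, then map them
    // into a reserved window of address space.
    #include <windows.h>
    #include <stdio.h>

    int main() {
        SYSTEM_INFO si;
        GetSystemInfo(&si);
        SIZE_T bytes = 64 * 1024 * 1024;                  // 64 MB demo
        ULONG_PTR pages = bytes / si.dwPageSize;

        ULONG_PTR* pfns = (ULONG_PTR*)HeapAlloc(
            GetProcessHeap(), 0, pages * sizeof(ULONG_PTR));
        if (!AllocateUserPhysicalPages(GetCurrentProcess(), &pages, pfns)) {
            printf("AllocateUserPhysicalPages failed: %lu\n", GetLastError());
            return 1;   // usually means the privilege is missing
        }

        // Reserve a window of address space and map the physical pages in.
        void* window = VirtualAlloc(NULL, bytes,
                                    MEM_RESERVE | MEM_PHYSICAL, PAGE_READWRITE);
        MapUserPhysicalPages(window, pages, pfns);

        ((char*)window)[0] = 42;                          // use the memory

        // Unmap, then release the physical pages and the window.
        MapUserPhysicalPages(window, pages, NULL);
        FreeUserPhysicalPages(GetCurrentProcess(), &pages, pfns);
        VirtualFree(window, 0, MEM_RELEASE);
        HeapFree(GetProcessHeap(), 0, pfns);
        return 0;
    }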
Given a 32-bit or 64-bit processor, can a 4GB process run on 2GB of RAM? Will it use virtual memory, or will it not run at all?
This is HIGHLY platform-dependent. On many 32-bit OSes, no single process can ever use more than 2GB of memory, regardless of the physical memory installed or virtual memory allocated.
For example, my work computers use 32-bit Linux with PAE (Physical Address Extension) to allow up to 16GB of installed RAM. The 2GB-per-process limit still applies, however; having the extra RAM simply allows me to run more individual processes. 32-bit Windows is the same way.
64-bit OSes are more of a mixed bag. 64-bit Linux will allow individual processes to map memory well in excess of 32GB (though this varies from kernel to kernel); you will be limited only by the amount of swap space (the disk-backed part of virtual memory) you have. 64-bit Windows is a complete crap shoot: certain versions will only allow 2GB per process, but most will allow >32GB, limited only by the size of the page file the user has allocated.
Microsoft provides a useful table breaking down the various memory limits by OS version/edition. Unfortunately, I can't find a comparable table for Linux with a cursory search, since the ecosystem is so fragmented.
Short answer: Depends on the system.
Most 32-bit systems have a limit of 2GB per process. If your system allows more than 2GB per process, then we can move on to the next part of your question.
Most modern systems use virtual memory. Yet there are some constrained (and often old) systems that would just run out of space and make you cry. I believe uClinux supports both MMU and MMU-less architectures. Most 32-bit processors have an MMU (a few don't; see the ARM Cortex-M0), and a handful of 16-bit or 8-bit processors have one as well (see the Atmel ATtiny13A-MMU and the Atari MMU).
Any process that needs more memory than is physically available requires some form of swap space (e.g., a swap partition or file).
Virtual memory is divided into pages. At any given time, a page resides either in RAM or in swap. Any attempt to access a page that is not currently in RAM triggers an interrupt called a page fault, which is handled by the kernel.
A 64-bit process needing 4GB on a 64-bit OS can generally run in 2GB of physical RAM by using virtual memory, assuming disk swap space is available, but performance will suffer badly if all of that memory is accessed frequently.
A 32-bit process can't address a full 4GB of memory in practice (some of the address space is reserved by the operating system), so a 4GB process won't run at all. Depending on the OS, it can probably run a process that needs more than 2GB but less than 3-4GB.
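The distinction between address space and physical RAM is easy to demonstrate. A minimal sketch: reserving address space costs no physical memory, so a 64-bit build of the program below succeeds even on a 2GB machine, while in a 32-bit build the 4GB request cannot even be expressed in a SIZE_T:

    #include <windows.h>
    #include <stdio.h>

    int main() {
        unsigned long long want = 4ULL * 1024 * 1024 * 1024;   // 4GB
        if (want > (SIZE_T)-1) {
            // 32-bit build: SIZE_T tops out at 4GB - 1
            printf("4GB exceeds this process's entire address range\n");
            return 1;
        }
        void* p = VirtualAlloc(NULL, (SIZE_T)want, MEM_RESERVE, PAGE_NOACCESS);
        if (p) {
            printf("reserved 4GB of address space at %p\n", p);
            VirtualFree(p, 0, MEM_RELEASE);
        } else {
            printf("VirtualAlloc failed: %lu\n", GetLastError());
        }
        return 0;
    }

Committing and actually touching all 4GB is a different matter: that is when the pager starts writing pages out to the swap file on a 2GB machine.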
I am investigating a strange problem with my application, where the behaviour differs between two versions of Windows:
Windows XP (32-bit)
Windows Server 2008 (64-bit)
My findings are as follows.
Windows XP (32-bit)
When running my test scenario, the XML parser fails at a certain point during the parsing of a very large configuration file (see this question for more information).
At the time of failure, the process size is approximately 2.3GB. Note that a registry key has been set to allow the process to exceed the default maximum process size of 2GB (on 32-bit operating systems).
The symptom of the failure is a call to IXMLDOMDocument::load() failing, as described in the question linked above.
Windows Server 2008 (64-bit)
I run exactly the same test scenario on Windows Server 2008 -- the only variable is the operating system. When I look at my process in Task Manager, it has a *32 next to it, which I assume means it is running in 32-bit compatibility mode.
What I notice is that at the point where the XML parsing fails on Windows XP, the process size on Windows Server 2008 is only about 1GB (IOW, approximately half the process size seen on Windows XP).
The XML parsing does not fail on Windows Server 2008; it all works as it should.
My questions are:
Why would a 32-bit application (running in 32-bit mode) consume half the amount of memory on a 64-bit operating system? Is it really using half the memory, is it using virtual memory differently, or is it something else?
Acknowledging that my application seems to be using half the amount of memory on Windows Server 2008, does anyone have any ideas as to why the XML parsing would fail on Windows XP? Every time I run the test case, the error reported via IXMLDOMParseError (see this answer) is different. Because this appears to be non-deterministic, it suggests to me that I am running into a memory-usage problem rather than dealing with malformed XML.
You didn't say how you observed the process, so I'll assume you used Taskmgr.exe. Beware that its default view gives very misleading values in the Memory column: it shows the working set size, i.e. the amount of RAM the process is using. That has nothing to do with the source of your problem, which is running out of virtual memory address space. There is also not much reason to assume that Windows 2008 would show the same value as XP; it has a significantly different memory manager.
You can see the virtual memory size as well, use View + Columns.
The reason your program doesn't bomb on a 64-bit operating system is that there a 32-bit process gets close to 4 gigabytes of addressable virtual memory. On a 32-bit operating system, it has to share the address space with the operating system and gets only 2 gigabytes (more if you use the /3GB boot option).
Use the SAX parser to avoid consuming so much memory.
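In MSXML terms, that means switching from IXMLDOMDocument to the SAX2 reader, which streams the document through callbacks instead of building an in-memory tree, so memory use stays flat regardless of file size. A hedged sketch (the file name is a placeholder, and a real program would register its own ISAXContentHandler):

    #include <windows.h>
    #include <msxml6.h>
    #include <stdio.h>
    #pragma comment(lib, "ole32.lib")

    int main() {
        CoInitialize(NULL);
        ISAXXMLReader* reader = NULL;
        HRESULT hr = CoCreateInstance(__uuidof(SAXXMLReader60), NULL,
                                      CLSCTX_INPROC_SERVER,
                                      __uuidof(ISAXXMLReader), (void**)&reader);
        if (SUCCEEDED(hr)) {
            // reader->putContentHandler(&myHandler);  // your ISAXContentHandler
            hr = reader->parseURL(L"huge-config.xml"); // placeholder file name
            printf("parse %s (hr=0x%08lx)\n",
                   SUCCEEDED(hr) ? "succeeded" : "failed", (long)hr);
            reader->Release();
        }
        CoUninitialize();
        return 0;
    }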
Not only are there differences in available memory between 32-bit and 64-bit (as discussed in previous answers), but it's the availability of contiguous memory that may be killing your app on 32-bit.
On a 32-bit machine, your app's DLLs will be littering the memory landscape across the first 2GB of address space (the app at 0x00400000, OS DLLs up at 0x7xxx0000, other DLLs elsewhere). Most likely the largest contiguous block you have available is about 1.1GB.
On a 64-bit machine (which gives you the 4GB address space with /LARGEADDRESSAWARE), you'll have at least one block in that 4GB space that is 2GB or more in size.
So there is your difference. If your XML parser relies on one large blob of memory rather than many small blocks, it may be running out of contiguous usable space on 32-bit while never hitting that limit on 64-bit.
If you want to visualize this on the 32-bit OS, grab a copy of VMValidator (free) and look at the Virtual view for a visualization of your memory, and at the Pages and Paragraphs views to see the data for each memory page/paragraph.
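You can also measure the largest free block directly. A small sketch that walks the address space with VirtualQuery and reports the biggest free region; a plain 32-bit build typically prints something near 1GB, while a /LARGEADDRESSAWARE build on 64-bit Windows prints far more:

    #include <windows.h>
    #include <stdio.h>

    int main() {
        MEMORY_BASIC_INFORMATION mbi;
        SIZE_T largest = 0;
        char* addr = 0;
        while (VirtualQuery(addr, &mbi, sizeof(mbi)) == sizeof(mbi)) {
            if (mbi.State == MEM_FREE && mbi.RegionSize > largest)
                largest = mbi.RegionSize;   // track the biggest free gap
            addr += mbi.RegionSize;
        }
        printf("largest free block: %llu MB\n",
               (unsigned long long)(largest >> 20));
        return 0;
    }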
From what I understand, a 32-bit process can only access 2GB of memory on 32-bit Windows without the /3GB switch, and some of that memory is taken up by the OS for its own diabolical reasons. This seems to mesh with my experience: we have an app that crashes when it reaches around 1.2-1.5GB of RAM without any memory exceptions, even though there is still plenty of memory available.
Would moving this 32-bit application to 64-bit Windows allow it to access more than the 1.5GB it can now? Would the application itself have to be upgraded to 64-bit?
Newer versions of Visual Studio have a linker flag that makes 32-bit apps large-address aware. Basically, it marks the executable so that, if it's loaded on a 64-bit version of Windows, it gets 4GB (the limit of 32-bit pointers). This is certainly better than the 2 or 3GB you get on 32-bit versions of Windows. See http://msdn.microsoft.com/en-us/library/aa366778.aspx:
Most notably it says:
    Limits on memory and address space vary by platform, operating system, and by whether the IMAGE_FILE_LARGE_ADDRESS_AWARE value of the LOADED_IMAGE structure and 4-gigabyte tuning (4GT) are in use. IMAGE_FILE_LARGE_ADDRESS_AWARE is set or cleared by using the /LARGEADDRESSAWARE linker option.
Also see: http://msdn.microsoft.com/en-us/library/wz223b1z.aspx
Yes, under the right circumstances, a 32-bit process on Windows can access a full 4GB of memory rather than the 2GB it's normally limited to.
For this to work, you need the following:
The app must be running on a 64-bit OS
The app must be compiled with the /LARGEADDRESSAWARE linker flag (a way to check an EXE for this flag is sketched after this list).
The app should be tested to make sure it actually works properly in this case. ;) (Specifically, code that relies on all pointers pointing to addresses below the 2GB boundary will obviously not work here.)
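For that check, a minimal sketch: the linker flag ends up as the IMAGE_FILE_LARGE_ADDRESS_AWARE bit in the PE header, which a program can read from its own image at run time:

    #include <windows.h>
    #include <stdio.h>

    int main() {
        // GetModuleHandle(NULL) is the base address of the running EXE.
        BYTE* base = (BYTE*)GetModuleHandle(NULL);
        IMAGE_DOS_HEADER* dos = (IMAGE_DOS_HEADER*)base;
        IMAGE_NT_HEADERS* nt = (IMAGE_NT_HEADERS*)(base + dos->e_lfanew);
        BOOL laa = (nt->FileHeader.Characteristics
                    & IMAGE_FILE_LARGE_ADDRESS_AWARE) != 0;
        printf("large address aware: %s\n", laa ? "yes" : "no");
        return 0;
    }

(dumpbin /headers on the EXE shows the same information as "Application can handle large (>2GB) addresses".)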
Your app will be limited by the pointer size - in your example, 32 bits.
If your app needed to access more memory than that, you would need some sort of segmented memory architecture, like we had in the 16-bit days, when apps combined 16-bit segment registers with offsets to address more memory than a single 16-bit pointer could reach.
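The arithmetic behind that limit fits in a few lines; a flat pointer can only distinguish 2^(8 * sizeof(void*)) addresses:

    #include <stdio.h>
    #include <math.h>

    int main() {
        int bits = (int)(sizeof(void*) * 8);
        // 2^bits bytes of addressable space, printed in GB (2^30 bytes)
        printf("%d-bit pointers -> %.0f GB addressable\n",
               bits, ldexp(1.0, bits - 30));
        return 0;
    }

A 32-bit build prints 4 GB; a 64-bit build prints an astronomically larger figure.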
WOW64 allows 32-bit Windows applications to run on 64-bit Windows by translating 32-bit pointers to real 64-bit pointers. And 32-bit addressing does, in principle, allow access to a full 4GB of memory.