Relation between RAM size, virtual memory, and JVM heap size - memory-management

For performance testing I need 2 GB of heap memory, so I am setting the parameter in the Java settings via "-Xmx2048m" and also increasing the virtual memory. But while running the application it gives errors like "the java run time environment cannot be loaded" and "Several JVM running in the same process caused an error" (in fact, it does not give the same error for every value above 1 GB).
So is it possible to set the heap memory to 2 GB, or can it be at most 1 GB? If it is possible, how do I do it?
I'm using Windows 7 64-bit with 8 GB of RAM, and Java 1.6.

Since you are running a 32-bit JVM, there is a limit on how much memory the process can use. Due to how virtual memory is laid out, a 32-bit process can only access 2 GB of memory (or up to 3-4 GB with special settings). Since Java needs some memory for its own bookkeeping, which is not part of the heap available to your application, the actual usable limit for -Xmx must be somewhere below 2 GB. According to this answer, the limit for a 32-bit JVM on Windows is -Xmx1500m (I'm not sure whether it has changed in newer releases, but due to the limitations outlined above it must stay below 2 GB, so it has likely remained around 1500 MB).
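If you are unsure which JVM is being picked up, or whether a given -Xmx value can be satisfied on your machine, a quick check (a sketch; the exact version output will differ) is:
java -version
(a 64-bit JVM reports something like "64-Bit Server VM" in its output)
java -Xmx2048m -version
If the second command prints the version normally, the JVM was able to reserve a 2 GB heap; if it fails at startup, you are hitting the address-space limit described above.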

Related

setting up heap memory in jmeter for more than one concurrent script execution

Below is my scenario.
I have 2 test scripts: one might use 5 GB to 15 GB of heap memory and the other might use 5 GB to 12 GB.
I have a machine with 32 GB of memory.
While executing the first script, can I assign Xms 1 GB and Xmx 22 GB (though my script needs 15 GB), and for the second script Xms 1 GB and Xmx 12 GB,
even though the sum of the maximums goes beyond 32 GB (the total memory)?
In the second case I would assign like this:
for script 1: Xms 22 GB, Xmx 22 GB
for script 2: Xms 12 GB, Xmx 12 GB
Sum of maximums: 34 GB.
Does it by any chance work like one of the following?
If 12 GB is assigned to the first script, is this memory blocked for that process/script, so that I cannot use the unused memory for other processes?
or
If 12 GB is assigned to the first script, does it use only as much as it actually requires, so that any other process can use the rest of the memory? If it works this way, I don't have to specifically assign heap for the two scripts separately.
If you set the minimum heap memory via the Xms parameter, the JVM will reserve this memory and it will not be available to other processes.
If you're able to allocate more JVM heap than you have total physical RAM, your OS will go for swapping: using the hard drive to hold dumped memory pages, which extends your computer's memory at the cost of speed, because memory operations are fast while disk operations are very slow.
For example, look at my laptop, which has 8 GB of total physical RAM:
It has 8 GB of physical memory, of which 1.2 GB is free. That means I can safely allocate 1 GB of RAM to Java.
However when I give 8 GB to Java as:
java -Xms8G
it still works
15 GB - still works
and when I try to allocate 20 GB it fails because it doesn't fit into physical + virtual memory.
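(When the requested heap cannot be reserved, the JVM typically refuses to start with a message along the lines of the following; the exact wording depends on the JVM version:)
java -Xms20G
Error occurred during initialization of VM
Could not reserve enough space for object heap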
You must avoid swapping, because it means that JMeter will not be able to send requests fast enough even if the system under test supports it. So measure how much available physical RAM you have and make sure your test does not exceed it. If you cannot achieve that on one machine, you will have to go for distributed testing.
Also, "concurrently" running 2 different scripts is not something you should be doing, because it's not only about the memory: a single CPU core can execute only one command at a time; the other commands wait in the queue and are served via context switching, which is a rather expensive and slow operation.
And last but not least, allocating the maximum heap is not the best idea, because that way garbage collections will be less frequent but will last much longer, resulting in throughput drop-downs. Keep heap usage between 30% and 80%, as in the Optimal Heap Size article.
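If you do decide to raise JMeter's heap, recent JMeter versions read it from the HEAP variable in the startup scripts (bin/jmeter on Linux, bin/jmeter.bat on Windows); the 4 GB below is only an illustrative value, not a recommendation for your scripts:
HEAP="-Xms1g -Xmx4g"
(or, in jmeter.bat: set HEAP=-Xms1g -Xmx4g)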

Neo4j Heap and pagecache configuration

I'm running Neo4j 3.2.1 on a Linux machine with 16 GB of RAM, yet I'm having issues with the heap memory: it shows an error every time.
In the documentation, we have:
Actual OS allocation = available RAM - (page cache + heap size)
Does that mean that if we configure it that way for my machine (for example 16g for the heap and 16g for the page cache), the OS allocation would be 0, causing the problem?
Can anyone tell me how to choose a configuration that makes better use of the machine's capacity without running into that heap error again?
In your example, you are trying to give all the RAM to the page cache, and also all the RAM to the heap. That is never possible. The available RAM has to be divided between the OS, the page cache, and the heap.
The performance documentation shows how to divide up the RAM.
As a first pass, you could try this allocation (given your 16 GB of RAM):
7 GB for the page cache
8 GB for the heap
That will leave (16GB - (7GB + 8GB)), or 1 GB for the OS.
But you should read the documentation to fine tune your allocations.
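For Neo4j 3.2, these values normally go in conf/neo4j.conf; a minimal sketch of the split suggested above (tune the numbers after reading the performance documentation):
dbms.memory.heap.initial_size=8g
dbms.memory.heap.max_size=8g
dbms.memory.pagecache.size=7g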

Why would the JVM suddenly not allocate up to the maximum heap setting, even when running low on memory and with plenty of free OS memory?

My JVM is set to have a maximum heap size of 2GB. It is currently running slowly due to being low on memory, but it will not allocate beyond 1841MB (even though it has done so before on this run). I have over 16GB memory free.
Why would this suddenly happen to a running JVM? Could it be because it is "fenced in" - it cannot get a larger contiguous range of physical memory?
This is for java 1.8.0_73 (64bit) on Windows 10. But I have seen this now and then for other java versions and on Windows 7 and XP too.
32-bit JVMs usually struggle to use more than about 1800 MB. Exactly how much they can allocate depends on your operating system and how it lays out the 32-bit address space (which can vary between runs).
Use a 64 bit JVM to get more.
Start JVM with
java -Xmx2048m -Xms2048m
This will preallocate 2GB at JVM startup (even if not needed).
You cannot make a program run faster just by increasing the heap memory. A program may be slow for various reasons.
In your case, it may not be because of memory usage: increasing the heap does not cause the program to use that memory to the fullest, or to run faster. The heap gets used up if you create a lot of objects which are still in use and cannot be garbage collected.
Another reason for the slowness could be that some parts of the program use up the processing power (poorly performing algorithms?).
It could also be due to slow I/O operations (file reads/writes?).
These are only a few possible reasons; pinpointing the slowness requires knowing more about your program.
You could look for slow-running parts of your code by going through its logs (if any) or by using profiling tools like jconsole (shipped with the JDK), VisualVM, etc.
You could also tune the JVM by passing various parameters to customize garbage collection, the sizes of the various parts of the heap, the thread stack size, and so on.
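As an illustration of that last point only (the values below are placeholders, not recommendations, and MyApp is a hypothetical main class), a typical HotSpot command line combining heap sizing, thread stack size, a garbage collector choice, and GC logging might look like:
java -Xms2048m -Xmx2048m -Xss512k -XX:+UseG1GC -verbose:gc -XX:+PrintGCDetails -Xloggc:gc.log MyApp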

Yarn container out of memory when using large memory mapped file

I am using Hadoop 2.4. The reducer uses several large memory-mapped files (about 8 GB total). The reducer itself uses very little memory. To my knowledge, memory-mapped files (FileChannel.map(readonly)) also use little memory (they are managed by the OS instead of the JVM).
I got this error:
Container [pid=26783,containerID=container_1389136889967_0009_01_000002]
is running beyond physical memory limits.
Current usage: 4.2 GB of 4 GB physical memory used;
5.2 GB of 8.4 GB virtual memory used. Killing container
Here were my settings:
mapreduce.reduce.java.opts=-Xmx2048m
mapreduce.reduce.memory.mb=4096
So I adjusted the parameters to this, and it works:
mapreduce.reduce.java.opts=-Xmx10240m
mapreduce.reduce.memory.mb=12288
I further adjusted the parameters and got it to work like this:
mapreduce.reduce.java.opts=-Xmx2048m
mapreduce.reduce.memory.mb=10240
My question is: why do I need the YARN container to have about 8 GB more memory than the JVM size? The culprit seems to be the large Java memory-mapped files I use (each about 1.5 GB, summing to about 8 GB). Aren't memory-mapped files managed by the OS, and aren't they supposed to be sharable by multiple processes (e.g. reducers)?
I use an AWS m2.4xlarge instance (67 GB of memory) and it has about 8 GB unused, so the OS should have sufficient memory. With the current settings, there are only about 5 reducers available per instance, and each reducer gets an extra 8 GB of memory. This just looks very wasteful.
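For context, a read-only mapping of the kind the question describes looks roughly like this (a minimal sketch with a placeholder file name, not the asker's actual code):
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

class MmapSketch {
    // Map a large file read-only; the mapped pages live in the OS page cache,
    // not on the JVM heap, but resident mapped pages still show up in the
    // process RSS that YARN's physical-memory check looks at.
    static MappedByteBuffer mapReadOnly(String path) throws IOException {
        try (FileChannel ch = FileChannel.open(Paths.get(path), StandardOpenOption.READ)) {
            return ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
        }
    }
}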
From the logs, it seems that you have the yarn.nodemanager.pmem-check-enabled and yarn.nodemanager.vmem-check-enabled properties enabled in yarn-site.xml. If these checks are enabled, the NodeManager may kill a container when it detects that the container has exceeded its resource limits. In your case, physical memory exceeded the configured value (4 GB), so the NodeManager killed the task (running within the container).
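For reference, those checks (and the ratio that produced the 8.4 GB virtual limit in the log, i.e. 4 GB * 2.1) are governed by the following yarn-site.xml properties, shown here as name=value pairs for brevity; both checks default to true:
yarn.nodemanager.pmem-check-enabled=true
yarn.nodemanager.vmem-check-enabled=true
yarn.nodemanager.vmem-pmem-ratio=2.1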
In normal cases, the heap memory (defined using the -Xmx setting in the mapreduce.reduce.java.opts and mapreduce.map.java.opts configurations) is set to 75-80% of the total memory (defined using the mapreduce.reduce.memory.mb and mapreduce.map.memory.mb configurations). However, in your case, because of the Java memory-mapped files, the non-heap memory requirements are higher than the heap requirements, and that is why you had to keep quite a large gap between the total and heap memory.
Please check the link below; you may need to tune the mapreduce.reduce.shuffle.input.buffer.percent property:
Out of memory error in Mapreduce shuffle phase

Why can't Windows Server 2008 R2 x64 allocate more than 1.2 GB for a 32-bit process?

I am seeing very strange behavior from Windows regarding memory allocation for 32-bit processes. The problem is that a 32-bit process seemingly can't allocate more than roughly 1.2 GB; the limit floats between 1.1 GB and 1.3 GB depending on conditions I have no idea about.
My environment is:
Windows Server 2008 R2
Total physical RAM - 16GB
RAM in use at the moment of experiment - 6GB (i.e. around 9+GB is free)
I have a self-written C++ tool which bounces memory allocation via malloc() in blocks of 32 MB until it is denied by the OS (the upper limit right now is 1070 MB, but as I mentioned, the limit floats; the maximum I remember was around 1.3 GB) and then releases all these blocks in reverse order. This is done in a cycle.
So the questions I have are:
Why can't I get close to the real 2 GB limit for a 32-bit process? 1.3 GB is only 60% of the theoretical limit. The inability to get at least another 0.5 GB seems very strange to me; I have 9 GB of unused RAM.
What (at runtime) can influence the upper limit? It changes over time and I have no idea why. I'd like to control it somehow - perhaps there is some magic command to optimize the OS address space :)
In reply to comments:
"Show the loop where you allocate the memory": here it is: https://code.google.com/p/membounce/source/browse/
Note: after looking deeper into the code, I noticed a bug which incorrectly calculated the upper limit (when the allocation page size is changed dynamically while the tool is running :() - the actual limit I'm getting right now is around 1.6 GB. This is better than 1.2 GB, but still not close to 2 GB.
"Build for x64 instead": in this case I'm interested specifically in 32-bit. The original cause of this topic is that sometimes I'm not able to run Java (specifically 32-bit NetBeans) with Xmx900M (it says JVM creation failed), and sometimes it DOES run with Xmx1200M (Xmx is the JVM parameter for the maximum RAM the JVM may use). The reason I want to use non-x64 NetBeans is that Java (at least Oracle's JVM) effectively consumes twice as much memory when switched to x64, i.e. I have to give 2 GB of RAM to the JVM to get the same effect as with 1 GB on a 32-bit JVM. I have a number of Java processes and want to fit more of them into my 16 GB of RAM :) (this is somewhat off-topic background)

Resources