I added heap size in JMeter but still get OOM error - jmeter

I ran a peak test in JMeter and got an OOM error. Even after increasing the heap size in the jmeter.bat file, I still get the OOM error. My PC has 16 GB of RAM, and I updated the max heap to 8 GB. Below is my updated line from jmeter.bat:
set HEAP=-Xms3g -Xmx8g -XX:MaxMetaspaceSize=5120m

OutOfMemoryError has many faces; it is not necessarily a lack of heap space. Other causes include:
GC Overhead Limit Exceeded - the garbage collector is running almost constantly while the program itself makes little or no progress
Requested array size exceeds VM limit - an attempt to allocate an array larger than the VM allows
Unable to Create New Native Thread - a new thread cannot be created due to underlying operating system limits
etc.
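For illustration, here is a minimal sketch (my own example, not from the original post) that provokes two of these variants on a HotSpot JVM when run with a small heap such as -Xmx64m; the exact messages vary by JVM and version:

public class OomDemo {
    public static void main(String[] args) {
        // "Requested array size exceeds VM limit": the array length itself
        // is beyond what the VM can address, regardless of free heap.
        try {
            long[] huge = new long[Integer.MAX_VALUE];
        } catch (OutOfMemoryError e) {
            System.err.println(e.getMessage());
        }
        // "Java heap space": keep every allocation reachable until the heap fills.
        java.util.List<byte[]> hog = new java.util.ArrayList<>();
        try {
            while (true) {
                hog.add(new byte[1024 * 1024]); // 1 MB per iteration, never released
            }
        } catch (OutOfMemoryError e) {
            System.err.println(e.getMessage());
        }
    }
}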
Not knowing the details of your test plan and not seeing the full jmeter.log file (preferably with debug logging enabled), it is not possible to come up with a comprehensive answer. Going forward, consider including more details, as "OOM" on its own doesn't tell us anything.

Related

Recommended Java Heap Size for Commercial JMeter Project

Depending on the nature of the automated workflow and the number of active threads at any given time, the heap size requirement for JMeter can vary, and in the testing I am doing there is some ambiguity with respect to the effect of heap size on the test results. The initial and maximum heap sizes of the server hosting JMeter are shown in the attached screenshot.
When executing the test with a large number of concurrent users (e.g. 100), the built-in JMeter report does not render; however, the results can be seen in the CSV output. Will increasing the heap size solve this issue, and if so, to how much should we increase it? Note that this issue does not happen for a small user count such as 10 or 15.
What is the recommended industry-standard value for heap size and other system variables for a server used for commercial performance testing with JMeter?
There is no "recommended industry standard".
Each test is individual and you need to tune JMeter appropriately.
As of JMeter 5.5 the default heap size is 1 GB, which is sufficient for test development and debugging but might not be sufficient for the load you're trying to conduct.
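For reference, and as an assumption based on recent JMeter 5.x distributions, the relevant default line in jmeter.bat looks like this:

set HEAP=-Xms1g -Xmx1g -XX:MaxMetaspaceSize=256m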
According to this article:
"If the occupancy of the Java heap is too high, garbage collection occurs frequently. If the occupancy is low, garbage collection is infrequent but lasts longer... Try to keep the memory occupancy of the Java heap between 40% and 70% of the Java heap size... The highest point of occupancy of the Java heap is preferably not above 70% of the maximum heap size, and the average occupancy is between 40% and 70% occupancy. If the occupancy goes over 70%, resize the Java heap."
So I would recommend checking what's going on with your heap using JVisualVM or equivalent and adjusting it up or down as needed.
If your test runs fine and you're experiencing OOM issues only during dashboard generation you can increase it temporarily by setting the relevant HEAP environment variable value.
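On Windows that could look like the following sketch, where result.jtl and dashboard_folder are placeholders and 4g is just an illustrative value for the dashboard-generation run:

set HEAP=-Xms1g -Xmx4g -XX:MaxMetaspaceSize=256m
jmeter -g result.jtl -o dashboard_folder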

Getting out of memory issue in JMeter non-GUI mode

I created a script with this configuration:
Number of threads (users) - 200 users
Duration assertion - 3 seconds
Ramp-up period - 10 seconds
Think time - 2 seconds
JMeter version 5.2.1
Java 8
Laptop configuration:
Microsoft Windows 10
Intel i7 processor
RAM 16 GB
I also increased the memory size to 1024m in the batch file:
: "${HEAP:="-Xms1g -Xmx1g -XX:MaxMetaspaceSize=1024m"}"
I am still facing the out of memory issue. Can you please help me out?
As per the Java command-line arguments documentation:
-XX:MaxMetaspaceSize=size
Sets the maximum amount of native memory that can be allocated for class metadata. By default, the size is not limited. The amount of metadata for an application depends on the application itself, other running applications, and the amount of memory available on the system.
So if you're getting a java.lang.OutOfMemoryError: Metaspace error, you need to either increase the metaspace size further or just remove this parameter from the JVM arguments so it will be unlimited.
Most probably you're getting java.lang.OutOfMemoryError: Java heap space; if this is the case, you need to increase -Xmx1g to something bigger, e.g. -Xmx4g.
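For example, on a 16 GB machine the line from the startup script above could become something like this (a sketch; 4g is illustrative, and you should leave headroom for the OS and other processes):

: "${HEAP:="-Xms4g -Xmx4g -XX:MaxMetaspaceSize=256m"}"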
Actually, the OutOfMemoryError exception has many faces, so it's not possible to provide exact steps without seeing the full output from stdout/stderr, the jmeter.log file, or the .hprof file.
See 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure article for more JMeter tuning tips.

Gradle heap error on uploadArchives

I am trying to upload an archive that's 600MB in size.
I get this error:
Execution failed for task ':uploadArchives'.
> Java heap space
...
...
Caused by: java.lang.OutOfMemoryError: Java heap space
I have tried setting the GRADLE_OPTS, JVM_OPTS and MAVEN_OPTS variables to set the max heap size, for example:
export GRADLE_OPTS=-Xmx1024m
gradle uploadArchives
But I am still getting the same error.
What am I missing here?
Ultimately you always have a finite maximum heap available, no matter what platform you are running on. On 32-bit Windows this is around 2 GB (not specifically heap, but the total amount of memory per process). It just happens that Java makes the default smaller (presumably so that the programmer can't create programs with runaway memory allocation without running into this problem and having to examine exactly what they are doing).
Given this, there are several approaches you could take: either determine how much memory you need, or reduce the amount of memory you are using. One common mistake with garbage-collected languages such as Java or C# is to keep around references to objects that you are no longer using, or to allocate many objects when you could reuse them instead. As long as objects have a reference to them, they will continue to use heap space, as the garbage collector will not delete them (a minimal sketch of this mistake follows).
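Here is a hypothetical Java sketch of that mistake, assuming a cache that is only ever added to:

import java.util.ArrayList;
import java.util.List;

class LeakyCache {
    // Every element stays reachable from this static list, so the
    // garbage collector can never reclaim it and the heap only grows,
    // no matter how large -Xmx is.
    private static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        CACHE.add(new byte[1024 * 1024]); // 1 MB retained forever
    }
}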
In this case you can use a Java memory profiler to determine what methods in your program are allocating large numbers of objects, and then determine if there is a way to make sure they are no longer referenced, or to not allocate them in the first place. One option which I have used in the past is "JMP" http://www.khelekore.org/jmp/.
If you determine that you are allocating these objects for a reason and you need to keep references around (depending on what you are doing, this might be the case), then you will just need to increase the max heap size when you start the program. However, once you have done the memory profiling and understand how your objects are being allocated, you should have a better idea about how much memory you need.
In general, if you can't guarantee that your program will run in some finite amount of memory (perhaps depending on input size), you will always run into this problem. Only after exhausting all of this will you need to look into caching objects out to disk, etc. At this point you should have a very good reason to say "I need X GB of memory" for something, and you can't work around it by improving your algorithms or memory allocation patterns. Generally this will only be the case for algorithms operating on large datasets (like a database or some scientific analysis program), where techniques like caching and memory-mapped IO become useful.
Run Java with the command-line option -Xmx, which sets the maximum size of the heap.
http://docs.oracle.com/javase/7/docs/technotes/tools/windows/java.html#nonstandard
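For example (myapp.jar is a placeholder; 2g is an illustrative value):

java -Xmx2g -jar myapp.jar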
The problem was that the actual size of the package was much larger because Gradle was unable to handle symlinks.
When I handled the symlinks manually, the problem went away.
The JVM heap size can be set in the gradle.properties file, in the root directory of your gradle project. Like this:
org.gradle.jvmargs=-Xms256m -Xmx1024m

Java heap bottleneck - how to identify the cause?

I have a J2EE project running on JBoss with a maximum heap size of 2048m, which is giving strange results under load testing. I've benchmarked the heap and CPU usage and received the following results (series 1 is heap usage, series 2 is CPU usage):
It seems as if the heap is being used and garbage collected properly around A. When it gets to B, however, there appears to be some kind of bottleneck: there is heap space available, but usage never breaks that imaginary line. At the same time, at C, the CPU usage drops dramatically. During this period we also receive an "OutOfMemoryError (GC overhead limit exceeded)", which does not make much sense to me, as there is heap space available.
My guess is that there is some kind of bottleneck, but what exactly it is I can't even imagine. How would you suggest going about finding the cause of the issue? I've profiled the memory usage and noticed that there are quite a few instances of one class (around a million), but the total size of these instances is fairly small (around 50 MB, if I remember correctly).
Edit: The server is dedicated to this application and the CPU usage given is only for the JVM (there should not be any significant CPU usage outside of the JVM). The memory usage is only for the heap; it does not include the permgen space. This problem is reproducible. My main concern is the limit encountered around B, for which I have not found a plausible explanation yet.
Conclusion: Turns out this was caused by a bunch of long running SQL queries being called concurrently. The returned ResultSets were also very large, possibly explaining the OOME. I still have no reasonable explanation for why there appears to be some limit at B.
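As a side note on the large ResultSets: JDBC lets you hint at a row fetch size so the driver can stream rows in batches instead of buffering everything in the heap. A hedged sketch follows; driver support varies, and the URL and table name here are placeholders:

import java.sql.*;

class StreamingQuery {
    // jdbcUrl and big_table are placeholders for illustration.
    static void scan(String jdbcUrl) throws SQLException {
        try (Connection conn = DriverManager.getConnection(jdbcUrl);
             PreparedStatement ps = conn.prepareStatement("SELECT * FROM big_table")) {
            // Hint to the driver to fetch rows in batches rather than
            // materializing the entire ResultSet in memory (support varies).
            ps.setFetchSize(1000);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // process one row at a time without retaining it
                }
            }
        }
    }
}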
From the error message it appears that the JVM is using the parallel scavenge garbage collector. The message is dumped along with the OOME when a lot of time is spent on GC but very little of the heap is recovered.
The document from Sun does not specify whether the 98% of total time consumed is to be read as 98% of the CPU time of the process or of the CPU itself. In either case, I have to draw the following inferences (with limited information):
The garbage collector or the JVM process is not getting enough CPU time, most likely because other processes are consuming CPU at the same time.
The garbage collector is not getting enough CPU time because it is a low-priority thread, while another memory-intensive (but not CPU-intensive) thread in the JVM is working at the same time, which results in the failure to de-allocate memory.
Based on the above inferences (all, one or none of them could be true), it would be worthwhile to correlate the graph that you've obtained with the runtime behavior of the application as far as users are concerned. In other words, you might find it useful to determine whether other processes are kicked off when your problem occurs, or which part of the application is in operation at that time.
In any case, the page referenced above does give an option to disable the GC overhead limit used by the GC algorithm.
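For reference, on HotSpot JVMs that option is passed like this; note that disabling the check only hides the symptom rather than fixing the underlying allocation pressure (your-app.jar is a placeholder):

java -XX:-UseGCOverheadLimit -jar your-app.jar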
EDIT: If the problem occurs periodically and can be reproduced, it might turn out to be a memory leak; otherwise (i.e. it occurs sporadically), you are better off tuning the GC algorithm or even changing it.
If I want to know where the "bottlenecks" are, I just get a few stackshots. There's no need to wonder and guess and play detective. They will just tell you.
Usually memory problems and performance problems go hand in hand, so if you fix the performance problems, you will also fix the memory problems (not for certain, though).

Why does perfmon fail to give available memory and what are the alternatives?

I am trying to get reliable information on when my C# application (Windows XP) will run out of memory. I did some research and tests on my machine and picked the most reliable perfmon counters:
Memory\Pages Output/sec
Memory\Available Bytes
I use thresholds combined with an AND operator, and it works quite well, but on the client machine (also Windows XP) both counters are useless: available memory never drops below 1 GB and pages output is constantly zero. After reading some logs I still don't see any useful counter.
Counters like committed memory give correct values, but the program runs out of memory (with paging killing the performance) after crossing 50%-60% of the 5 GB available.
Any alternatives? I wouldn't like to be forced to try to allocate memory and catch OutOfMemory exceptions during the computations.
See When does a Windows process run out of memory?
Long story short, you want to be checking the Private Bytes, Virtual Bytes and/or Working Set for your process(es).
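A quick way to watch these from the command line is the built-in typeperf tool, sketched here with MyApp as a placeholder for the process name and a 5-second sampling interval:

typeperf "\Process(MyApp)\Private Bytes" "\Process(MyApp)\Virtual Bytes" "\Process(MyApp)\Working Set" -si 5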