Depending on the nature of the automated workflow and the number of active threads at any given time, the heap size requirement for JMeter can vary, and in the testing I am doing there is some ambiguity with respect to the effect of heap size on the test results. The initial heap size and the maximum heap size of the server hosting JMeter are shown in the attached screenshot.
Upon executing the test for a large number of concurrent users (e.g. 100), the built-in JMeter report does not render, although the results can be seen in the CSV output. Will increasing the heap size solve this issue, and if so, by how much should we increase it? Note that this issue does not happen for a small user count such as 10 or 15.
What is the recommended industry-standard value for heap size and other system variables on a server used for commercial performance testing with JMeter?
There is no "recommended industry standard".
Each test is individual and you need to tune JMeter appropriately.
As of JMeter 5.5, the default heap size is 1 GB, which is sufficient for test development and debugging but might not be sufficient for the load you're trying to conduct.
According to this article:
"If the occupancy of the Java heap is too high, garbage collection occurs frequently. If the occupancy is low, garbage collection is infrequent but lasts longer... Try to keep the memory occupancy of the Java heap between 40% and 70% of the Java heap size... The highest point of occupancy of the Java heap is preferably not above 70% of the maximum heap size, and the average occupancy is between 40% and 70% occupancy. If the occupancy goes over 70%, resize the Java heap."
So I would recommend checking what's going on with your heap using JVisualVM or equivalent and adjusting it up or down as needed.
If your test runs fine and you're experiencing OOM issues only during dashboard generation you can increase it temporarily by setting the relevant HEAP environment variable value.
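For example (a sketch only, with illustrative values; recent JMeter versions honour a HEAP variable set in the environment, while older ones may require editing jmeter.bat or the jmeter shell script directly), you could raise the heap just for the dashboard-generation step, where result.csv and report-folder are placeholders:

rem Windows
set HEAP=-Xms1g -Xmx4g -XX:MaxMetaspaceSize=256m
jmeter -g result.csv -o report-folder

# Linux/macOS
export HEAP="-Xms1g -Xmx4g -XX:MaxMetaspaceSize=256m"
./jmeter -g result.csv -o report-folder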
I ran a peak test in JMeter and got an OOM error, and even after increasing the heap size in the jmeter.bat file I still get the OOM error. My PC has 16 GB of RAM, and I updated the max heap to 8 GB. Below is the relevant line from my updated jmeter.bat file:
set HEAP=-Xms3g -Xmx8g -XX:MaxMetaspaceSize=5120m
OutOfMemoryError can have many faces; it is not necessarily a lack of heap space. Other reasons include:
GC Overhead Limit Exceeded, when the garbage collector is running most of the time and the program itself is barely making progress
Requested array size exceeds VM limit, when you try to allocate objects that are too large
Unable to create new native thread, when it is not possible to create a thread due to underlying operating system limits
etc.
Without knowing the details of your test plan and without seeing the full jmeter.log file (preferably with debug logging enabled), it is not possible to come up with a comprehensive answer. Going forward, consider including more details, as "OOM" alone doesn't tell us anything.
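One cheap way to gather those details (a sketch; the extra flags are standard HotSpot options and the dump path is a placeholder) is to have the JVM write a heap dump at the moment the error occurs, keeping your current sizes:

rem jmeter.bat - illustrative only
set HEAP=-Xms3g -Xmx8g -XX:MaxMetaspaceSize=5120m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=C:\dumps\jmeter.hprof

The resulting .hprof file can then be opened in VisualVM or Eclipse MAT to see what was actually filling the memory.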
My Spring Data JPA/Hibernate application consumes over 2 GB of memory at start without a single user hitting it. I am using Hazelcast as the second-level cache, but I had the same issue when I used Ehcache as well, so that is probably not the cause of the issue.
I ran a profile with a heap dump in VisualVM and I see that the bulk of the memory is being consumed by JpaMetamodelMappingContext and, secondarily, a ton of Map objects. I just need help deciphering what I am seeing and whether this is actually a problem. I do have a hundred classes in the model, so this may be normal, but I have no point of reference. It just seems a bit excessive.
Once I get a load of 100 concurrent users, my memory consumption increases to 6-7 GB. That is quite normal for the amount of data I push around and cache, but I feel like if I could reduce the initial memory, I'd have a lot more room for growth.
I don't think you have a problem here.
Instead, I think you are misinterpreting the data you are looking at.
Note that the heap space diagram displays two numbers: Heap size and Used heap
Heap size (orange) is the amount of memory available to the JVM for the heap.
This means it is the amount that the JVM requested at some point from the OS.
Used heap is the part of the Heap size that is actually used.
Ignoring the startup phase, it repeatedly grows roughly linearly and then drops, over and over again.
This is typical behavior of an idling application.
Some part of the application generates a moderate amount of garbage (rising part of the curve) which from time to time gets collected.
The low points of that curve are the amount of memory you are actually really using.
It seems to be about 250 MB, which doesn't sound like much to me, especially when you say that a total consumption of 6-7 GB under actual load sounds reasonable to you.
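If you want to double-check the committed-versus-used distinction outside of VisualVM, the JDK's jstat tool reports both (a sketch; <pid> is the application's process id and 5000 is just a 5-second sampling interval):

# capacity columns (EC, OC, ...) vs. used columns (EU, OU, ...), values in KB
jstat -gc <pid> 5000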
Some other observations:
Both CPU load and heap usage grow quickly and fluctuate a lot at startup.
This is to be expected, because the analysis of repositories and entities happens at that time.
JpaMetamodelMappingContext's retained size is about 23 MB.
Again, a good chunk of memory, but not that huge.
This includes the stuff it references, which is almost exclusively metadata from the JPA implementation as you can easily see when you take a look at its source.
1) Our application: Spring Boot, Java 8
2) Parameters we use: -Xms = 256 MB, -Xmx = 2 GB
We have been seeing that the used heap size of our Java 8 applications is not shrinking back down when it should.
Are there any other parameters we should be using along with #2 above when launching our Spring Boot/Java 8 application, so that GC can do a better job?
Thanks for your help!
These options have the following effects:
-Xms, -Xmx: Place boundaries on the heap size to increase the predictability of garbage collection. The heap size is limited in replica servers so that even full GCs do not trigger SIP retransmissions. -Xms sets the starting size to prevent pauses caused by heap expansion.
-XX:+UseG1GC: Use the Garbage First (G1) Collector.
-XX:MaxGCPauseMillis: Sets a target for the maximum GC pause time. This is a soft goal, and the JVM will make its best effort to achieve it.
-XX:ParallelGCThreads: Sets the number of threads used during parallel phases of the garbage collectors. The default value varies with the platform on which the JVM is running.
-XX:ConcGCThreads: Number of threads concurrent garbage collectors will use. The default value varies with the platform on which the JVM is running.
-XX:InitiatingHeapOccupancyPercent: Percentage of the (entire) heap occupancy to start a concurrent GC cycle. GCs that trigger a concurrent GC cycle based on the occupancy of the entire heap and not just one of the generations, including G1, use this option. A value of 0 denotes 'do constant GC cycles'. The default value is 45.
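Putting those flags together, a launch command for a Spring Boot jar could look roughly like this (a sketch only; my-app.jar and all numbers are placeholders to be tuned for your workload, not recommendations):

java -Xms256m -Xmx2g \
     -XX:+UseG1GC \
     -XX:MaxGCPauseMillis=200 \
     -XX:ParallelGCThreads=4 \
     -XX:ConcGCThreads=2 \
     -XX:InitiatingHeapOccupancyPercent=45 \
     -jar my-app.jar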
Oracle JDK provides the built-in Java VisualVM tool to analyze and tune GC behavior.
I am trying to build a large Maven-GWT project on a virtual host, which has a limited amount of RAM and cannot use swap space.
The GWT compile stage (where it computes permutations) uses a massive amount of the CPU and memory, and I was wondering if there was any way I could impose a limit on how much of each it uses, even if it takes much longer to compile.
Thanks
If you are using more than one worker thread, decrease it to just one worker thread; that will reduce the memory required, although the compile will be correspondingly slower. Otherwise, there isn't much you can do to decrease the memory requirements. Setting -Xmx to a lower number will work too, but it will cause an OOME if it is too low. I think about 256m is the minimum, though 128m works for most small to medium-sized projects.
Add -Dgwt-plugin.localWorkers="1" and -Dgwt-plugin.extraJvmArgs="-Xmx128m -Xms16m" to your MAVEN_OPTS, and tweak those numbers until it works nicely.
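For example (a sketch; the property names are the ones from the answer above, and whether they actually reach the GWT compiler depends on how your pom passes them to the gwt-maven-plugin), the same values can be supplied on the Maven command line instead of MAVEN_OPTS:

mvn clean install "-Dgwt-plugin.localWorkers=1" "-Dgwt-plugin.extraJvmArgs=-Xmx128m -Xms16m"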
I have a J2EE project running on JBoss, with a maximum heap size of 2048m, which is giving strange results under load testing. I've benchmarked the heap and cpu usage and received the following results (series 1 is heap usage, series 2 is cpu usage):
It seems as if the heap is being used properly and getting garbage collected properly around A. When it gets to B however, there appears to be some kind of a bottleneck as there is heap space available, but it never breaks that imaginary line. At the same time, at C, the cpu usage drops dramatically. During this period we also receive an "OutOfMemoryError (GC overhead limit exceeded)," which does not make much sense to me as there is heap space available.
My guess is that there is some kind of bottleneck, but what exactly I can't even imagine. How would you suggest going about finding the cause of the issue? I've profiled the memory usage and noticed that there are quite a few instances of the one class (around a million), but the total size of these instances is fairly small (around 50MB if I remember correctly).
Edit: The server is dedicated to this application and the CPU usage given is only for the JVM (there should not be any significant CPU usage outside of the JVM). The memory usage is only for the heap; it does not include the permgen space. This problem is reproducible. My main concern is the limit encountered around B, for which I have not found a plausible explanation yet.
Conclusion: Turns out this was caused by a bunch of long running SQL queries being called concurrently. The returned ResultSets were also very large, possibly explaining the OOME. I still have no reasonable explanation for why there appears to be some limit at B.
From the error message it appears that the JVM is using the parallel scavenger algorithm for garbage collection. The message is dumped along with an OOME error when a lot of time is spent on GC, but not a lot of the heap is recovered.
The document from Sun does not specify if the 98% of the total time consumed is to be read as 98% of the CPU utilization of the process or that of the CPU itself. In either case, I have to draw the following inferences (with limited information):
The garbage collector or the JVM process does not have enough CPU utilization, most likely due to other processes consuming CPU at the same time.
The garbage collector does not have enough CPU utilization since it is a low priority thread, and another memory intensive (but not CPU intensive) thread in the JVM is doing work at the same time, which results in the failure to de-allocate memory.
Based on the above inferences (all, one, or none of them could be true), it would be worthwhile to correlate the graph that you've obtained with the runtime behavior of the application as far as users are concerned. In other words, you might find it useful to determine whether other processes are kicked off (when your problem occurs), or which part of the application is in operation (again, when the problem occurs).
In any case, the page referenced above does give an option to disable the GC overhead limit used by the GC algorithm.
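For reference (a sketch; the config file name varies by JBoss version, and disabling the limit only hides the symptom rather than curing the underlying memory pressure), the flag would typically be appended to the server's JAVA_OPTS:

# bin/run.conf (older JBoss AS) or bin/standalone.conf (newer versions) - illustrative only
JAVA_OPTS="$JAVA_OPTS -XX:-UseGCOverheadLimit"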
EDIT: If the problem occurs periodically, and can be reproduced, it might turn out to be a memory leak, otherwise (i.e. it occurs sporadically), you are better off tuning the GC algorithm or even changing it.
If I want to know where the "bottlenecks" are, I just get a few stackshots. There's no need to wonder and guess and play detective. They will just tell you.
Usually memory problems and performance problems go hand in hand, so if you fix the performance problems, you will also fix the memory problems (not for certain, though).
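A quick way to take such snapshots on a running JVM is jstack (a sketch; <pid> is the Java process id, and you would capture a handful of dumps a few seconds apart and look for the stacks that keep reappearing):

jstack -l <pid> > stack-1.txt
sleep 5
jstack -l <pid> > stack-2.txt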