1) Our application: Spring Boot, Java 8
2) Parameters we use: -Xms = 256 MB, -Xmx = 2 GB
We have been seeing that the used heap size of our Java 8 applications is not shrinking back down when it should.
Are there any other parameters we should be using along with #2 above when launching our Spring Boot/Java 8 application, so that GC can do a better job?
Thanks for your help!
The following options are worth considering; their effects are described below (a sample launch command combining them is sketched after the list):
-Xms, -Xmx: Place boundaries on the heap size to increase the predictability of garbage collection and to keep even Full GC pauses within acceptable bounds. -Xms sets the starting size to prevent pauses caused by heap expansion.
-XX:+UseG1GC: Use the Garbage First (G1) Collector.
-XX:MaxGCPauseMillis: Sets a target for the maximum GC pause time. This is a soft goal, and the JVM will make its best effort to achieve it.
-XX:ParallelGCThreads: Sets the number of threads used during parallel phases of the garbage collectors. The default value varies with the platform on which the JVM is running.
-XX:ConcGCThreads: Number of threads concurrent garbage collectors will use. The default value varies with the platform on which the JVM is running.
-XX:InitiatingHeapOccupancyPercent: Percentage of the (entire) heap occupancy to start a concurrent GC cycle. GCs that trigger a concurrent GC cycle based on the occupancy of the entire heap and not just one of the generations, including G1, use this option. A value of 0 denotes 'do constant GC cycles'. The default value is 45.
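Putting these together, a launch command might look like the sketch below. The pause target, thread counts, and occupancy percentage are illustrative assumptions rather than tuned recommendations, and the JAR name is a placeholder:
# Illustrative sketch only - verify the values against your own GC logs
java -Xms256m -Xmx2g \
     -XX:+UseG1GC \
     -XX:MaxGCPauseMillis=200 \
     -XX:ParallelGCThreads=4 \
     -XX:ConcGCThreads=2 \
     -XX:InitiatingHeapOccupancyPercent=45 \
     -jar your-spring-boot-app.jar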
Oracle JDK also provides the built-in Java VisualVM tool to analyze and tune GC behavior.
Depending on the nature of the automated workflow and the number of active threads at any given time, the heap size requirement for JMeter can vary, and in the testing I am doing there is some ambiguity about the effect of heap size on the test results. The initial and maximum heap sizes of the server hosting JMeter are shown in the attached screenshot.
Upon executing the test for a large number of concurrent users (e.g. 100), the built-in JMeter report does not render; however, the results can be seen in the CSV output. Will increasing the heap size solve this issue, and if so, by how much should we increase it? Note that this issue does not happen for a small user count such as 10 or 15.
What is the recommended industry-standard value for heap size and other system variables for a server used for commercial performance testing with JMeter?
There is no "recommended industry standard".
Each test is individual and you need to tune JMeter appropriately.
As of JMeter 5.5 the default heap size is 1 GB, which is sufficient for test development and debugging but might not be sufficient for the load you're trying to generate.
According to this article:
"If the occupancy of the Java heap is too high, garbage collection occurs frequently. If the occupancy is low, garbage collection is infrequent but lasts longer... Try to keep the memory occupancy of the Java heap between 40% and 70% of the Java heap size... The highest point of occupancy of the Java heap is preferably not above 70% of the maximum heap size, and the average occupancy is between 40% and 70% occupancy. If the occupancy goes over 70%, resize the Java heap."
So I would recommend checking what's going on with your heap using JVisualVM or equivalent and adjusting it up or down as needed.
If your test runs fine and you're experiencing OOM issues only during dashboard generation, you can increase the heap temporarily by setting the relevant HEAP environment variable.
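For example, the dashboard can be re-generated from an existing CSV in a separate JVM with a larger heap via the HEAP environment variable; the 4g value and file names below are placeholders, not recommendations:
# Re-generate the HTML dashboard from an existing results file with a bigger heap
HEAP="-Xms1g -Xmx4g" ./jmeter -g results.csv -o dashboard-output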
The Java buildpack memory calculator, used with a Spring Boot application inside a Docker container with 1 GB of memory, calculates memory as described in the documentation: it takes the entire available memory, and these are the calculated JVM options:
Calculated JVM Memory Configuration: -XX:MaxDirectMemorySize=10M -Xmx747490K -XX:MaxMetaspaceSize=157725K -Xss1M (Total Memory: 1G, Thread Count: 50, Loaded Class Count: 25433, Headroom: 0%)
The question is: why does it take the entire available memory and give it to the JVM? It should leave some memory for the Java process outside of the JVM. This can lead to OOM because the JVM thinks it has 1 GB for itself (747490K for heap), while in reality it has less, because some of its memory is used by native memory outside of the JVM.
Should I not use this calculator and set the JVM configuration myself, or can I reconfigure it somehow?
Why does it take the entire available memory and give it to the JVM?
The assumption is that the only thing running in your container is your Java application, thus it assigns all of the available memory to be used.
If you do things like shell out to run other processes, or run additional processes in the container alongside your app, you need to tell the memory calculator so it can take that into account.
This can lead to OOM because the JVM thinks it has 1 GB for itself (747490K for heap), while in reality it has less, because some of its memory is used by native memory outside of the JVM.
The memory calculator takes into consideration the major memory regions within a Java process. Not just heap. That said, it cannot 100% guarantee that you will never go over your memory limit. That's impossible with a Java app.
There are things you can do as an application developer, like creating 10,000 threads or using JNI, that cannot be restricted and could potentially consume a whole ton of memory. If you do that, your app will go over its container memory limit and crash.
The memory calculator attempts to give you a reasonable memory configuration for most common Java workloads. Running a web app, running a microservice, running some batch jobs, etc...
If you are doing something that doesn't fit within that pattern, then you can simply tell the memory calculator and it'll adjust things accordingly.
Should I not use this calculator and set the JVM configuration myself, or can I reconfigure it somehow?
Even if you need to customize what the calculator is doing, it can still be helpful. It's additional toil to calculate these values manually, especially when it's so easy to change the memory limits. If your ops team increases the memory limit of the container, you want your application to automatically adjust to that configuration (as well as it can).
Beyond that, memory calculator is also good at detecting problems early. If you configure the JVM manually and you mess it up, let's say you over-allocate memory, the JVM won't necessarily care until it tries to get more memory and can't. At some point down the road, you're going to have a problem but it's not clear when (probably at 3am on a Sat, lol).
With memory calculator, it's doing the math when your container first starts to make sure that memory settings are sane. If there's something off with the configuration, it'll fail and let you know.
TIPS:
You can override a memory calculator-defined value by simply setting that JVM option in the JAVA_TOOL_OPTIONS env variable. For example, if I want to allow for more direct memory, I would set JAVA_TOOL_OPTIONS='-XX:MaxDirectMemorySize=50M'. Then when you restart the container, the memory calculator will shift memory around to accommodate that.
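For instance, with a plain docker run (the image name is hypothetical and the 1 GB limit just mirrors the example above), that could look like:
# Override only direct memory; the calculator re-balances the remaining regions on restart
docker run -e JAVA_TOOL_OPTIONS='-XX:MaxDirectMemorySize=50M' -m 1g my-app-image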
The one thing you don't want to set is -Xmx. The memory calculator should always set this because it will set it to whatever is left after other regions have been accounted for. You can think of it like HEAP = CONTAINER_MEMORY_LIMIT - (all static memory regions).
If you were to set -Xmx, you have to get it exactly right. If it's too low then you're wasting memory. If it's too high then you could exceed the container memory limit and get crashes.
In short, if you think you want to set -Xmx, you should either increase the container memory limit or decrease one of the static memory regions.
If you run other things in the container, you need to set the headroom. This is done with the BPL_JVM_HEAD_ROOM env variable. Give it a percent of the total container memory limit. Ex: BPL_JVM_HEAD_ROOM=20 would use 80% of the container's memory limit for Java and 20% for other stuff.
Setting some headroom can be useful in other cases as well, like if you're troubleshooting a container crash and you want a little extra room, or if you don't like operating at 100% the memory limit. You can leave 5 or 10% unused to match your comfort level.
If you have an application that uses a lot of threads, you'll need to adjust this as well. The default is 250 threads, which works well for many web/servlet-based applications (thread per request model). We do automatically lower to 50 threads if you're specifically using Spring Webflux which does not need so many threads.
For other cases, it's up to you to configure this. For example, if you have a batch application that only needs a thread pool of 10, then you could set this to 40 or 50. 40-50 seems weird in this example, but the JVM creates a number of its own threads, and you need to account for those in addition to application-specific threads; when in doubt, look at a thread dump.
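As a sketch, if you're on the Paketo/Cloud Foundry Java buildpack these knobs are exposed as environment variables; the names and values below (BPL_JVM_THREAD_COUNT, a 10% headroom, the image name) are assumptions to illustrate the idea and should be checked against the buildpack you actually use:
# Batch-style app: small thread budget plus a little headroom for native memory
docker run -e BPL_JVM_THREAD_COUNT=50 -e BPL_JVM_HEAD_ROOM=10 -m 1g my-app-image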
As per this IBM link (https://www.ibm.com/support/knowledgecenter/SSYKE2_8.0.0/openj9/xgcpolicy/index.html), the GC policy can be specified with -Xgcpolicy. The default policy is gencon (-Xgcpolicy:gencon). WAS is 9.0 and the JVM is IBM J9 (Java version 1.8).
Next, from the IBM link below, it seems that setting the GC algorithm is also possible using -XX flags, as in other JVMs, e.g. -XX:+UseG1GC:
https://www.ibm.com/support/knowledgecenter/en/SS3KLZ/com.ibm.java.diagnostics.visualizer.doc/verbosegc.html
My intention is to get GC behavior like that of UseG1GC. The heap size is -Xms16G to -Xmx20G, so I wish to go for Garbage First, which is concurrent, i.e. UseG1GC. -Xgcpolicy:gencon does something broadly similar, but it causes stop-the-world pauses: while GC is running, the application is suspended.
I'm a little confused: even if I set -XX:+UseG1GC, will the JVM actually follow UseG1GC behavior, or will it follow the mechanism of -Xgcpolicy:gencon? Or are the GC policy and the GC algorithm two different things?
There is no effect of using -XX:+UseG1GC on the IBM JVM; it will just be silently swallowed, and the JVM will default to the gencon GC policy.
You can verify that by running with -verbose:gc, which will report the GC policy being used.
The closest IBM GC policy to HotSpot's G1GC is balanced; the main shared characteristic is that both are region based (unlike gencon, which has two distinct areas of heap for old and new objects).
As far as concurrency goes, all three (G1GC, balanced, gencon) are similar: global GCs are mostly concurrent and partial/local GCs are STW (stop-the-world).
The reason to use a region-based GC policy is to reduce worst-case pause times. Region-based collectors can do some global operations incrementally in partial GCs; most notably, they can de-fragment the heap incrementally, unlike gencon, which does it in a global GC via an optional STW compact operation. Most applications will not require such a global compact, hence gencon is the default. But if long pauses due to global compaction are observed in a gencon run, balanced should be tried. Balanced GC will, however, slightly compromise application throughput.
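If you do want to try it, a rough sketch of the launch line would be the following (the application JAR name is a placeholder):
# Switch from the default gencon policy to the region-based balanced policy
# and print GC activity so you can confirm which policy is actually in effect
java -Xms16G -Xmx20G -Xgcpolicy:balanced -verbose:gc -jar your-app.jar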
Below is my scenario.
I have 2 test scripts: one might use 5 GB to 15 GB of heap memory, and the other might use 5 GB to 12 GB.
If I have a machine with 32 GB of memory,
while executing the first script can I assign -Xms 1GB / -Xmx 22GB (though my script needs 15 GB), and for the second script -Xms 1GB / -Xmx 12GB,
even though the sum of the maximums goes beyond 32 GB (the total memory)?
In the second case I would assign:
for script 1: -Xms 22GB, -Xmx 22GB
for script 2: -Xms 12GB, -Xmx 12GB
Sum of maximums: 34 GB.
Does it by any chance work like this:
If 12 GB is assigned to the first script, is that memory blocked for that process/script, so I cannot use the unused portion for other processes?
or
If 12 GB is assigned to the first script, does it use only as much as it actually requires, so any other process can use the rest of the memory? If it works this way, I don't have to specifically assign heap for the two scripts separately.
If you set the minimum heap memory via the -Xms parameter, the JVM will reserve this memory and it will not be available for other processes.
If you allocate more JVM heap than you have total physical RAM, your OS will resort to swapping: using the hard drive to hold memory pages, which extends your computer's memory at the cost of speed, because memory operations are fast and disk operations are very slow.
For example look at my laptop which has 8 GB of total physical RAM:
It has 8 GB of physical memory, of which 1.2 GB is free. That means I can safely allocate 1 GB of RAM to Java.
However, when I give 8 GB to Java as:
java -Xms8G -version
it still works
15 GB - still works
and when I try to allocate 20 GB it fails because it doesn't fit into physical + virtual memory.
You must avoid swapping, because it means JMeter will not be able to send requests fast enough even if the system under test supports it. So measure how much available physical RAM you have, and make sure your test does not exceed it. If you cannot achieve that on one machine, you will have to go for distributed testing.
Also, "concurrently" running 2 different scripts is not something you should be doing, because it's not only about the memory: a single CPU core can execute only one instruction at a time; other instructions wait in a queue and are served via context switching, which is a fairly expensive and slow operation.
And last but not least, allocating the maximum heap is not the best idea, because that way garbage collections will be less frequent but will last much longer, resulting in throughput drops. Keep heap usage between 30% and 80%, as in the Optimal Heap Size article.
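As a rough sketch (Linux commands; the file names and heap values are placeholders), you would check the available physical memory first and then size the JMeter heap to fit below it:
# Check how much physical RAM is actually free before sizing the heap
free -h
# Run one script at a time in non-GUI mode with a heap that fits in physical RAM
HEAP="-Xms1g -Xmx12g" ./jmeter -n -t script1.jmx -l results1.csv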
The Prometheus exporter in my Spring Boot app is sending tons of data, and I can't find any explanation of the meaning of what I get from it:
1) What does "jvm_gc_memory_allocated_bytes_total" mean?
2) What does "jvm_gc_memory_promoted_bytes_total" mean?
What I need is the actual memory usage of the Java Garbage Collector, so I'm expecting a value that is always below 2 GB (the max memory size), but at the moment it is 8 GB and still rising.
"jvm_gc_memory_allocated_bytes_total"
and
"jvm_gc_memory_promoted_bytes_total"
are the only two garbage-collector-related metrics delivered by the exporter.
To answer your questions: there is a help text provided with each exposed metric in the Prometheus exposition format:
# HELP jvm_gc_memory_allocated_bytes_total Incremented for an increase in the size of the young generation memory pool after one GC to before the next
# HELP jvm_gc_memory_promoted_bytes_total Count of positive increases in the size of the old generation memory pool before GC to after GC
These metrics accumulate the bytes allocated in the young generation and the bytes that survived a garbage collection and were therefore promoted to the old generation (very simplified).
From your question, I think you are not actually looking for the "memory usage of the Java Garbage Collector" but rather for the managed memory usage of the JVM. This managed memory is divided into "heap" and "non-heap" (the area tag) at the first level and can be drilled down further via the id tag.
Here's the metrics you are likely looking for:
jvm_memory_used_bytes{area="heap|nonheap", id="<depends-on-gc-and-jvm>"}
jvm_memory_committed_bytes{area="heap|nonheap", id="<depends-on-gc-and-jvm>"}
jvm_memory_max_bytes{area="heap|nonheap", id="<depends-on-gc-and-jvm>"}
So if you want to get ahold of the currently used heap, you need to sum together the heap area metrics with the following PromQL:
sum(jvm_memory_used_bytes{job="myjob", instance="myhost", area="heap"})
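If you want to sanity-check the raw values outside Prometheus, you can also scrape the endpoint directly; the sketch below assumes the default Spring Boot Actuator path and port, which may differ in your setup:
# Dump the heap-area memory metrics straight from the actuator endpoint
curl -s http://localhost:8080/actuator/prometheus | grep 'jvm_memory_used_bytes{area="heap"'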