Pod sizing with Actuator metrics jvm.memory.max - spring-boot

I am trying to size our pods using the Actuator metrics info, with the below K8s resource configuration:
resources:
  requests:
    memory: "512Mi"
  limits:
    memory: "512Mi"
We are observing that jvm.memory.max returns ~1455 MB. I understand that this value includes heap and non-heap. Further drilling into the API (jvm.memory.max?tag=area:nonheap and jvm.memory.max?tag=area:heap) returns ~1325 MB and ~129 MB respectively.
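For reference, those drill-downs map onto the Actuator metrics endpoint like this (assuming the metrics endpoint is exposed and the app listens on port 8080):
curl "http://localhost:8080/actuator/metrics/jvm.memory.max?tag=area:heap"
curl "http://localhost:8080/actuator/metrics/jvm.memory.max?tag=area:nonheap"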
Obviously, with the non-heap allowed to max out at a value greater than the K8s limit, the container is bound to get killed eventually. But why is the JVM (non-heap memory) not bounded by the memory configuration of the container (configured in K8s)?
The above observations hold with Java 8 and Java 11. The blog below discusses the experimental options with Java 8, where CPU and heap configurations are covered, but it makes no mention of non-heap. What are some suggestions to consider in sizing the pods?
-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
Source

Java 8 has a few flags that can help the runtime operate in a more container-aware manner:
java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -jar app.jar
Why do you get a maximum JVM heap of ~129 MB when you set the container memory limit to 512 MB? The answer is that memory consumption in the JVM includes both heap and non-heap memory. The memory required for class metadata, JIT-compiled code, thread stacks, GC, and other processes is taken from the non-heap memory. Therefore, based on the cgroup resource restrictions, the JVM reserves a portion of the memory for non-heap use to ensure system stability.
The exact amount of non-heap memory can vary widely, but a safe bet if you're doing resource planning is that the heap is about 80% of the JVM's total memory. So if you set the maximum heap to 1000 MB, you can expect the whole JVM to need around 1250 MB.
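On Java 10+ (and Java 8u191+) the experimental cgroup flag is superseded by built-in container support, so an alternative worth considering (a sketch; the percentages are illustrative) is to size the heap as a fraction of the container limit:
java -XX:MaxRAMPercentage=75.0 -XX:InitialRAMPercentage=50.0 -jar app.jar
With a 512 Mi limit, MaxRAMPercentage=75.0 caps the heap at roughly 384 Mi, leaving the remainder for metaspace, threads and other non-heap consumers.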
The JVM read that the container is limited to 512M and created a JVM with a maximum heap size of ~129 MB, exactly 1/4 of the container memory, as defined on the JDK ergonomics page.
If you dig into the JVM Tuning guide you will see the following.
Unless the initial and maximum heap sizes are specified on the command line, they're calculated based on the amount of memory on the machine. The default maximum heap size is one-fourth of the physical memory while the initial heap size is 1/64th of physical memory. The maximum amount of space allocated to the young generation is one third of the total heap size.
You can find more information about it here.
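To confirm what ergonomics actually picked inside the running container, one option (a sketch; <pod> is a placeholder for your pod name) is:
kubectl exec -it <pod> -- java -XX:+PrintFlagsFinal -version | grep -i maxheapsize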

Related

How to determine correct heap size for ElasticSearch?

How can I determine the heap size required for 1 GB logs having 1 day retention period?
If I take a machine with a 32 GB heap (64 GB RAM), how many GB of logs can I keep in this for 1 day?
It depends on various factors like the number of indexing requests, search requests, cache utilization, the size of search and indexing requests, the number of shards/segments, etc. Heap usage should also follow a sawtooth pattern, and instead of guessing it, you should start measuring it.
The good thing is that you can start right by assigning 50% of RAM as the ES heap size, without crossing 32 GB.
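For example, on a 64 GB machine you could pin both the minimum and maximum heap to the same value just under that threshold (a sketch; whether you use jvm.options or the environment variable depends on your setup):
ES_JAVA_OPTS="-Xms30g -Xmx30g" ./bin/elasticsearch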

How do I increase the Heap Size for FreeRTOS in Zynq702 SoC?

I am using a Zynq 702 SoC. It has 2 CPUs. CPU0 is loaded with PetaLinux and CPU1 with FreeRTOS, and my current FreeRTOS heap size is 6 MB.
The actual size of the RAM is 1 GB; of this, 512 MB is assigned to the PetaLinux kernel, and the rest is not used, which I want to use completely for CPU1. I am using OpenAMP for communication between the 2 cores.
I want to increase the heap size of FreeRTOS so that this new heap size will help us in developing some more features.
Has anybody tried to include OpenAMP for loading CPU1, and can the stack be expanded to > 16 MB?
FreeRTOS has more than one heap implementation (and in fact can be used with no heap at all); how the heap is increased depends on which implementation is in use. See https://www.freertos.org/a00111.html for details.
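If one of the static-array schemes (heap_1.c through heap_4.c) is in use, the size comes from configTOTAL_HEAP_SIZE in FreeRTOSConfig.h, so a sketch of raising it would be the following (the linker script must still place the heap array in a memory region large enough to hold it):
#define configTOTAL_HEAP_SIZE    ( ( size_t ) ( 64 * 1024 * 1024 ) )
With heap_5.c the memory regions are instead passed to vPortDefineHeapRegions() at start-up.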

What is the Spring Boot default memory settings?

For example, if I run/debug a simple Spring Boot app from the IDE without any definitions, what initial heap size, max heap size and stack size (-Xms, -Xmx, -Xss) will be set?
By default, a Spring Boot app will use the JVM's default memory settings.
Default heap size
If your physical memory size is up to 192 megabytes (MB), then the default maximum heap size is half of the physical memory.
If your physical memory size is more than 192 megabytes, then the default maximum heap size is one fourth of the physical memory.
For example, if your computer has 128 MB of physical memory, then the maximum heap size is 64 MB, and greater than or equal to 1 GB of physical memory results in a maximum heap size of 256 MB.
The maximum heap size is not actually used by the JVM unless your program creates enough objects to require it. A much smaller amount, called the initial heap size, is allocated during JVM initialization. This amount is at least 8 MB and otherwise 1/64th of physical memory up to a physical memory size of 1 GB.
The maximum amount of space allocated to the young generation is one third of the total heap size.
You can check the default values specific to your machine with the following command
Linux:
java -XX:+PrintFlagsFinal -version | grep HeapSize
Windows:
java -XX:+PrintFlagsFinal -version | findstr HeapSize
Reference: https://docs.oracle.com/javase/8/docs/technotes/guides/vm/gctuning/parallel.html#default_heap_size
Default thread stack size
The default thread stack size varies with JVM, OS and environment variables.
To find out what your default thread stack size is on your platform, use
In Linux:
java -XX:+PrintFlagsFinal -version | grep ThreadStackSize
In Windows:
java -XX:+PrintFlagsFinal -version | findstr ThreadStackSize
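If you want to pin all three explicitly rather than rely on the ergonomics described above, you can pass them at start-up (the values here are purely illustrative):
java -Xms256m -Xmx1g -Xss512k -jar app.jar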
Usually it's 25% of your total physical memory if no -Xmx option is provided when starting Java.
On a Unix/Linux system, you can do
java -XX:+PrintFlagsFinal -version | grep HeapSize
On Windows, use the following command to find out the defaults
java -XX:+PrintFlagsFinal -version | findstr HeapSize
Look for the options MaxHeapSize (for -Xmx) and InitialHeapSize (for -Xms).
The resulting output is in bytes.
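The relevant lines will look roughly like this (flag types and numbers are machine- and JDK-version-specific, so treat these values as placeholders):
uintx InitialHeapSize := 268435456 {product}
uintx MaxHeapSize := 4290772992 {product}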

WebSphere min heap size

There's a lot of different advice regarding heap sizes, and I was interested to see what a general min setting should be when the max is currently set to 2048.
I thought the min should be set to about half of the max.
What could the implications of having the min set too low be?
There are no golden rules for setting the min and max heap size of your WebSphere.
It depends on how much memory your app needs and how much load the specific server will have.
My suggestion is to set the min heap size to a low value and the max heap size to a high enough value.
Then expose the server to production-like load and read the verbose GC log (native_stderr.log) of the JVM to see how it performs.
Generally speaking, you should keep your heap usage between 40% and 70% of your max heap size.
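As a purely illustrative starting point for the generic JVM arguments (enable verbose GC so native_stderr.log actually has data to read, then tune from there):
-Xms512m -Xmx2048m -verbose:gc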

managed heap fragmentation

I am trying to understand how heap fragmentation works. What does the following output tell me?
Is this heap overly fragmented?
I have 243010 "free objects" with a total of 53304764 bytes. Are those "free objects" spaces in the heap that once contained objects but that have now been garbage collected?
How can I force a fragmented heap to clean up?
!dumpheap -type Free -stat
total 243233 objects
Statistics:
MT Count TotalSize Class Name
0017d8b0 243010 53304764 Free
It depends on how your heap is organized. You should have a look at how much memory in Gen 0, 1, 2 is allocated and how much free memory you have there compared to the total used memory.
If you have 500 MB of managed heap used and 50 MB is free, then you are doing pretty well. If you do memory-intensive operations like creating many WPF controls and releasing them, you need a lot more memory for a short time, but .NET does not give the memory back to the OS once it has allocated it. The GC tries to recognize allocation patterns and tends to keep your memory footprint high, even though your current heap size is way too big, until your machine runs low on physical memory.
I found it much easier to use psscor2 for .NET 3.5, which has some cool commands like ListNearObj, with which you can find out which objects are around your memory holes (pinned objects?). With the commands from psscor2 you have much better chances of finding out what is really going on in your heaps. Most commands are also available in SOS.dll in .NET 4.
To answer your original question: yes, free objects are gaps on the managed heap, which can simply be the free memory block after your last allocated object on a GC segment. Or, if you do !DumpHeap with the start address of a GC segment, you see the objects allocated in that managed heap segment along with your free objects, which are GC-collected objects.
These memory holes normally happen in Gen 2. The object addresses before and after the free object tell you what potentially pinned objects are around your hole. From this you should be able to determine your allocation history and optimize it if you need to.
You can find the addresses of the GC Heaps with
0:021> !EEHeap -gc
Number of GC Heaps: 1
generation 0 starts at 0x101da9cc
generation 1 starts at 0x10061000
generation 2 starts at 0x02aa1000
ephemeral segment allocation context: none
segment begin allocated size
02aa0000 02aa1000 03836a30 0xd95a30(14244400)
10060000 10061000 103b8ff4 0x357ff4(3506164)
Large object heap starts at 0x03aa1000
segment begin allocated size
03aa0000 03aa1000 03b096f8 0x686f8(427768)
Total Size: Size: 0x115611c (18178332) bytes.
------------------------------
GC Heap Size: Size: 0x115611c (18178332) bytes.
There you see that you have heaps at 02aa1000 and 10061000.
With !DumpHeap 02aa1000 03836a30 you can dump the GC Heap segment.
!DumpHeap 02aa1000 03836a30
Address MT Size
...
037b7b88 5b408350 56
037b7bc0 60876d60 32
037b7be0 5b40838c 20
037b7bf4 5b408350 56
037b7c2c 5b408728 20
037b7c40 5fe4506c 16
037b7c50 60876d60 32
037b7c70 5b408728 20
037b7c84 5fe4506c 16
037b7c94 00135de8 519112 Free
0383685c 5b408728 20
03836870 5fe4506c 16
03836880 608c55b4 96
....
There you find your free memory blocks, which were objects that have already been GCed. You can dump the surrounding objects (the output is sorted address-wise) to find out if they are pinned or have other unusual properties.
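To see what sits around the big free block in the listing above, you could, for example, point ListNearObj at its address (address taken from the dump shown here; psscor2/SOS command):
!ListNearObj 037b7c94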
You have 50 MB of RAM as free space. This is not good.
With .NET allocating blocks of 16 MB from the process, we do indeed have a fragmentation issue.
There are plenty of reasons for fragmentation to occur in .NET.
Have a look here and here.
In your case it is possibly pinning, as 53304764 / 243010 makes about 219.35 bytes per object - much lower than LOH objects.
