Java: how to track and tune Compressed Class Space - java-8

We're using JDK 8 and some of our processes are throwing OutOfMemoryError with "Compressed class space". We're logging GC activity, and our JVM statistics.log file currently contains entries of the following type:
2017-06-30 03:57:07,944 INFO - HEAP - [USAGE: 1678.7, FREE: 986.7, TOTAL: 2665.5, MAX: 2665.5]; PERM - [USAGE: N/A, FREE: N/A, MAX: N/A]; CLASSES - [Loaded: 1832624, Unloaded: 637, Left: 1831987]; THREADS - [Count: 92]
We're wondering whether adding the flags -XX:+TraceClassUnloading -XX:+TraceClassLoading would tell us what value to set for the compressed class space (-XX:CompressedClassSpaceSize). If so, how do we determine the size from the trace logs?

You can use -XX:-UseCompressedClassPointers to disable the compressed class space, which allows the JVM to load as many classes as fit into memory instead of being limited to the compressed class virtual memory region. The drawback is larger class pointers in object headers.
But as @Holger mentioned in the comments, you should make sure that your application isn't leaking classes over time, otherwise memory consumption will keep growing.
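For illustration, a hedged sketch of what the startup command could look like (the heap size and jar name are placeholders, not values from your setup):
java -Xmx2g -XX:-UseCompressedClassPointers -jar app.jar
or, keeping compressed class pointers but enlarging the dedicated region:
java -Xmx2g -XX:CompressedClassSpaceSize=2g -jar app.jar
If you keep the compressed class space, adding -XX:NativeMemoryTracking=summary and running jcmd <pid> VM.native_memory summary shows how much class metadata (including the compressed class space) is reserved and committed, which gives a concrete basis for choosing -XX:CompressedClassSpaceSize. The -XX:+TraceClassLoading / -XX:+TraceClassUnloading flags show which classes are loaded and unloaded, but not how much compressed class space they consume.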

Related

How to disable the memory calculator in a Docker image generated by a buildpack

When I set the memory limits (-Xmx512m -Xms512m) in the deployment.yml for a Spring Boot application whose Docker image was generated with mvn spring-boot:build-image, I receive the following error:
Setting Active Processor Count to 4
Adding $JAVA_OPTS to $JAVA_TOOL_OPTIONS
unable to calculate memory configuration
all memory regions require 1130933K which is greater than 956052K available for allocation:
-Xmx512M, 0 headroom, -XX:MaxDirectMemorySize=10M, -XX:MaxMetaspaceSize=94645K, -XX:ReservedCodeCacheSize=240M,
-Xss1M * 250 threads
ERROR: failed to launch: exec.d: failed to execute exec.d file
at path '/layers/paketo-buildpacks_bellsoft-liberica/helper/exec.d/memory-calculator': exit status 1
Current deployment.yml config:
env:
  - name: SPRING_PROFILES_ACTIVE
    value: prod
  - name: JAVA_OPTS
    value: >-
      -XX:+PrintGCDetails
      -Xlog:gc
      -XX:+UseParallelGC
      -XX:+PrintFlagsFinal
      -Xmx512m
      -Xms512m
resources:
  requests:
    cpu: 1554m
    memory: 979M
  limits:
    cpu: 1554m
    memory: 979M
How to set the memory limits properly or disable the buildpack memory calculator?
NOTE: I'm using Java 11.
UPDATE:
Thank you for your answer.
I applied the second option, and I understand your point of view; here is a summary of the approach.
Iteration 1: No limits
env:
  - name: SPRING_PROFILES_ACTIVE
    value: prod
  - name: JAVA_OPTS
    value: >-
      -Xlog:gc
      -XX:+UseParallelGC
(Grafana visualization, iteration 1)
Memory Calculator:
Calculating JVM memory based on 13287832K available memory
Calculated JVM Memory Configuration: -XX:MaxDirectMemorySize=10M -Xmx12681764K -XX:MaxMetaspaceSize=94067K -XX:ReservedCodeCacheSize=240M -Xss1M (Total Memory: 13287832K, Thread Count: 250, Loaded Class Count: 14194, Headroom: 0%)
Enabling Java Native Memory Tracking
Adding 128 container CA certificates to JVM truststore
Spring Cloud Bindings Enabled
Picked up JAVA_TOOL_OPTIONS: -Djava.security.properties=/layers/paketo-buildpacks_bellsoft-liberica/java-security-properties/java-security.properties -XX:+ExitOnOutOfMemoryError -XX:ActiveProcessorCount=4 -Xlog:gc -XX:+UseParallelGC -XX:MaxDirectMemorySize=10M -Xmx12681764K -XX:MaxMetaspaceSize=94067K -XX:ReservedCodeCacheSize=240M -Xss1M -XX:+UnlockDiagnosticVMOptions -XX:NativeMemoryTracking=summary -XX:+PrintNMTStatistics -Dorg.springframework.cloud.bindings.boot.enable=true
-Xmx12681764K ≈ 12384 MB (about 12.1 GiB)
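That value is essentially what is left of the node's memory after the fixed regions are subtracted: 13287832K total − 10240K direct memory − 94067K metaspace − 245760K code cache − 250 × 1024K thread stacks ≈ 12681765K.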
In this case Grafana shows the JVM sizing itself against all of the node's resources, which is not the ideal configuration, so it is necessary to define the upper boundary at the pod level.
Iteration 2: with limits defined at the Kubernetes level
env:
  - name: SPRING_PROFILES_ACTIVE
    value: prod
  - name: JAVA_OPTS
    value: >-
      -Xlog:gc
      -XX:+UseParallelGC
resources:
  requests:
    cpu: 1554m
    memory: 979M
  limits:
    cpu: 1554m
    memory: 979M
(Grafana visualization, iteration 2)
Memory calculator:
Calculated JVM Memory Configuration: -XX:MaxDirectMemorySize=10M -Xmx349984K -XX:MaxMetaspaceSize=94067K -XX:ReservedCodeCacheSize=240M -Xss1M (Total Memory: 956052K, Thread Count: 250, Loaded Class Count: 14194, Headroom: 0%)
Enabling Java Native Memory Tracking
Adding 128 container CA certificates to JVM truststore
Spring Cloud Bindings Enabled
Picked up JAVA_TOOL_OPTIONS: -Djava.security.properties=/layers/paketo-buildpacks_bellsoft-liberica/java-security-properties/java-security.properties -XX:+ExitOnOutOfMemoryError -XX:ActiveProcessorCount=4 -Xlog:gc -XX:+UseParallelGC -XX:MaxDirectMemorySize=10M -Xmx349984K -XX:MaxMetaspaceSize=94067K -XX:ReservedCodeCacheSize=240M -Xss1M -XX:+UnlockDiagnosticVMOptions -XX:NativeMemoryTracking=summary -XX:+PrintNMTStatistics -Dorg.springframework.cloud.bindings.boot.enable=true
In this case the memory calculator determines the memory configuration the application needs to operate, but it does not set the upper boundary itself, because that is limited by the configuration at the Kubernetes level. My doubt came from the delay in the Grafana visualization.
As you say, the memory calculator is there to help.
Many thanks in advance,
Alberto.
You're getting the error:
all memory regions require 1130933K which is greater than 956052K available for allocation:
-Xmx512M, 0 headroom, -XX:MaxDirectMemorySize=10M, -XX:MaxMetaspaceSize=94645K, -XX:ReservedCodeCacheSize=240M,
-Xss1M * 250 threads
which is telling you that you have an invalid memory configuration. The amount of memory that you want to assign to the JVM does not fit within the limits you have put on your container.
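That requirement is simply the sum of the regions listed in the message: 524288K (-Xmx512M) + 10240K (direct memory) + 94645K (metaspace) + 245760K (code cache) + 250 × 1024K (thread stacks) = 1130933K, which is more than the 956052K your container limit leaves available for allocation.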
There are a number of ways you can fix this:
Increase the container memory limit so that the JVM fits. The error message tells you how much you'd need, 1130933K.
Reduce the amount of memory you're assigning to the JVM so that it fits within the container memory limit. The error message tells you which JVM memory settings are being used; you can override them in JAVA_TOOL_OPTIONS (or JAVA_OPTS; anything added there is included in JAVA_TOOL_OPTIONS) to reduce them.
Remove -Xmx512m -Xms512m from JAVA_OPTS and just let the memory calculator generate the largest JVM memory configuration that will fit within the container memory limit you've assigned. You won't get that much heap, but you'll get as large of a heap as possible within the given container memory limit.
Some notes on these options:
If you try option #2, be careful in terms of what you reduce. The JVM is memory-hungry, but that's also how it's so fast. Make sure you are performance testing before and after any changes you make to ensure that you're not hurting your application performance (or to confirm that you're still performing to required levels).
With the Paketo Java buildpack, you should really never set -Xmx and -Xms. What you want to do instead is to adjust the other memory settings, like -Xss, -XX:ReservedCodeCacheSize=240M, -XX:MaxDirectMemorySize=10M, thread count, etc...
The memory calculator will adjust the -Xmx and -Xms settings dynamically so that they consume the remainder of the memory in the container. If you manually set these values, what's likely to happen is that you will either cause an error because the values are too large (what happened here) or that you set them too low and the JVM is not using all of the memory available to it. Let the memory calculator do its job and you'll get the optimal settings.
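As a rough sketch of that approach (the numbers are illustrative only, and BPL_JVM_THREAD_COUNT is the Paketo helper's variable for the thread count the calculator assumes), the deployment.yml from the question could drop the heap flags and shrink the other regions instead:
env:
  - name: SPRING_PROFILES_ACTIVE
    value: prod
  - name: JAVA_OPTS
    value: >-
      -Xlog:gc
      -XX:+UseParallelGC
      -Xss512k
      -XX:ReservedCodeCacheSize=64M
  - name: BPL_JVM_THREAD_COUNT
    value: "100"
With the heap flags removed, the calculator assigns whatever is left of the 979M container limit to -Xmx, and the smaller stacks, code cache, and thread count leave more of that limit for the heap. Performance-test any such reduction, as noted above.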
There is no option to disable the memory calculator and I would strongly caution against attempting to do that. The memory calculator is your friend here.
It's like a compiler for JVM memory settings. It is checking and validating the settings you enter, so it can tell you in advance if there is a problem with your memory configuration. It might be annoying that it complains, but this is far, far better than having your container crash in the middle of the night because it runs out of memory. If it complains, adjust your memory configuration and then rest easy knowing that everything is properly sized to fit in your container.
The memory calculator will by default size your application for a production deployment, optimizing for performance rather than low memory consumption. Again, Java trades higher memory consumption for speed. In practice this means your container needs at least 1G of RAM.
There is a Paketo RFC to add a low-memory mode to the memory calculator. This would make it easier to run PoC applications and other low-traffic apps that are willing to accept potentially lower performance in exchange for reducing memory consumption (and thereby cost). This RFC has not been implemented as of this post, but we hope to have it implemented in the near future.

memory usage grows until VM crashes while running Wildfly 9 with Java 8

We are having an issue with virtual servers (VMs) running out of native memory. These VMs are running:
Linux 7.2 (Maipo)
Wildfly 9.0.1
Java 1.8.0_151 (different JVMs have different heap sizes, ranging from 0.5G to 2G)
The JVM args are:
-XX:+UseG1GC
-XX:SurvivorRatio=1
-XX:NewRatio=2
-XX:MaxTenuringThreshold=15
-XX:-UseAdaptiveSizePolicy
-XX:G1HeapRegionSize=16m
-XX:MaxMetaspaceSize=256m
-XX:CompressedClassSpaceSize=64m
-javaagent:/<path to new relic.jar>
After about a month, sometimes longer, the VMs start to use all of their swap space and then eventually the OOM-Killer notices that java is using too much memory and kills one of our JVMs.
The amount of memory being used by the java process is larger than heap + metaspace + compressed class space, as revealed by -XX:NativeMemoryTracking=detail.
Are there tools that could tell me what is in this native memory (like a heap dump, but not for the heap)?
Are there any tools, other than jemalloc, that can map Java heap usage to native memory usage outside the heap? I have used jemalloc to try to achieve this, but the resulting graph contains only hex values and no human-readable class names, so I can't really get anything out of it. Maybe I'm doing something wrong, or perhaps I need another tool.
Any suggestions would be greatly appreciated.
You can use jcmd.
Start the application with -XX:NativeMemoryTracking=summary or -XX:NativeMemoryTracking=detail.
Use jcmd to monitor the NMT (Native Memory Tracking) data:
jcmd <pid> VM.native_memory baseline      // take the baseline
jcmd <pid> VM.native_memory detail.diff   // show how native memory has changed relative to the baseline

systemd MemoryLimit not enforced

I am running systemd version 219.
root@EVOvPTX1_RE0-re0:/var/log# systemctl --version
systemd 219
+PAM -AUDIT -SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP -LIBCRYPTSETUP -GCRYPT +GNUTLS +ACL +XZ -LZ4 -SECCOMP +BLKID -ELFUTILS +KMOD -IDN
I have a service, let's call it foo.service, which has the following:
[Service]
MemoryLimit=1G
I have deliberately added code that allocates 1M of memory 4096 times, which causes a 4G memory allocation when a certain event is received. The idea is that after the process consumes 1G of address space, memory allocation should start failing. However, this does not seem to be the case: I am able to allocate 4G of memory without any issues, which tells me that the memory limit specified in the service file is not enforced.
Can anyone tell me what I am missing?
I also looked at the proc file system, at the file named limits. It shows that the max address space is unlimited, which likewise suggests that the memory limit is not being enforced.
The distinction is that you have allocated memory, but you haven't actually used it. In the output of top, this is the difference between the "VIRT" column (allocated) and the "RES" column (actually used).
Try modifying your experiment to assign values to elements of a large array instead of just allocating memory and see if you hit the memory limit that way.
Reference: Resident and Virtual memory on Linux: A short example
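A minimal sketch of that experiment in Java (purely illustrative; the same allocate-then-write idea applies to a C program doing malloc followed by memset). Writing a non-zero value into every element forces the pages to be backed by physical memory, so RES grows along with VIRT:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class TouchMemory {
    public static void main(String[] args) {
        List<byte[]> blocks = new ArrayList<>();
        for (int i = 0; i < 4096; i++) {
            byte[] block = new byte[1024 * 1024]; // allocate 1 MiB
            Arrays.fill(block, (byte) 1);         // write every byte so its pages become resident (RES in top)
            blocks.add(block);                    // keep a reference so the GC cannot reclaim it
            System.out.println("touched " + (i + 1) + " MiB");
        }
    }
}

Run it inside foo.service with a heap large enough to hold the arrays (for example java -Xmx5g TouchMemory); once resident usage crosses the 1G MemoryLimit you should see the cgroup react (reclaim and swap pressure, and an OOM kill if it cannot keep the group under the limit), instead of the allocations sailing through untouched.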

Unexpected Heap Dumps for Hello World Android APP

I am learning about memory utilization using MAT in Eclipse, but I have run into a strange problem. Leaving the heavy apps aside, I began with the most benign one: the "Hello World" app. This is what I get as heap stats on a Nexus 5, ART runtime, Lollipop 5.0.1.
ID: 1
Heap Size: 25.429 MB
Allocated: 15.257 MB
Free: 10.172 MB
% Used: 60%
# Objects: 43487
My heap dump gives me three memory leak suspects:
Overview
(Pie chart not included: "Can't post the Pie Chart because of low reputation.")
Problem Suspect 1
The class "android.content.res.Resources", loaded by "", occupies 10,166,936 (38.00%) bytes. The memory is accumulated in one instance of "android.util.LongSparseArray[]" loaded by "".
Keywords: android.util.LongSparseArray[], android.content.res.Resources
Problem Suspect 2
209 instances of "android.graphics.NinePatch", loaded by "", occupy 5,679,088 (21.22%) bytes. These instances are referenced from one instance of "java.lang.Object[]", loaded by "".
Keywords: java.lang.Object[], android.graphics.NinePatch
Problem Suspect 3
8 instances of "java.lang.reflect.ArtMethod[]", loaded by "", occupy 3,630,376 (13.57%) bytes. Biggest instances:
java.lang.reflect.ArtMethod[62114] @ 0x70b19178 - 1,888,776 (7.06%) bytes
java.lang.reflect.ArtMethod[21798] @ 0x706f5a78 - 782,800 (2.93%) bytes
java.lang.reflect.ArtMethod[24079] @ 0x70a9db88 - 546,976 (2.04%) bytes
Keywords: java.lang.reflect.ArtMethod[]
This all comes from the following simple code:
import android.app.Activity;
import android.os.Bundle;

public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
    }
}
Questions
Why are the heap numbers so big? As a side note, the app was consuming 52 MB of RAM in the system.
Where are these 209 instances of NinePatch coming from? I merely created the project via "Create a new Project" in Eclipse.
The first leak suspect, Resources, comes up all the time in my analysis of apps. Is it really a suspect?
What is ArtMethod? Does it have something to do with the ART runtime?
In Lollipop the default runtime is ART, i.e. the Android Runtime, which replaces the old Dalvik runtime (DRT) used in older Android versions.
In KitKat, Google released an experimental version of ART to get feedback from users.
Dalvik uses JIT (just-in-time) compilation, which means the DEX code is converted to object code only when you open the application.
In ART, however, the DEX code is converted to object code during installation itself (AOT, ahead-of-time compilation). This object code is bigger than the DEX code, so ART needs more RAM than DRT. The advantage is that ART apps have better response times than DRT apps.
Yesterday I faced this problem too. The key word in your log is "NinePatch". In my case the cause was a "fake" shadow: a tiny picture with an alpha channel that triggered a resource leak. It cost me about 60 MB of leaked memory.

Which property of Hyper-V WMI classes to access memory info

I want to get the memory of a virtual machine with the Hyper-V WMI classes.
There are four memory classes, but I could not find any property of them that gives the memory value.
The Msvm_Memory class has BlockSize and NumberOfBlocks properties, but when I multiply them I do not get the correct memory.
According to https://msdn.microsoft.com/en-us/library/hh850175(v=vs.85).aspx this is the wrong approach anyway:
BlockSize
Data type: uint64
Access type: Read-only
The size, in bytes, of the blocks that form the storage extent. If variable block size, then the maximum block size, in bytes, should be specified. If the block size is unknown, or if a block concept is not valid (for example, for aggregate extents, memory, or logical disks), enter a 1 (one). This property is inherited from CIM_StorageExtent, and it is always set to 1048576.
Which class and property should I use?
You can use the Msvm_MemorySettingData class to access the defined memory properties of an instance. You may filter the results by InstanceID and parse AllocationUnits together with Limit to get the configured maximum memory amount.
In the following case there is 1 TB of memory that can be allocated for the specific instance "4764334E-E001-4176-82EE-5594EC9B530E".
Example InstanceID: "Microsoft:Definition\\4764334E-E001-4176-82EE-5594EC9B530E\\Default"
AllocationUnits: "bytes * 2^20"
Limit: 1048576
Msvm_MemorySettingData: https://msdn.microsoft.com/en-us/library/hh850176(v=vs.85).aspx
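To make the arithmetic explicit: AllocationUnits of "bytes * 2^20" means each unit is 1 MiB, so Limit = 1048576 corresponds to 1048576 × 2^20 bytes = 2^40 bytes, i.e. the 1 TB mentioned above. As a hedged sketch, the lookup could be a WQL query like the following, run against the root\virtualization\v2 namespace (quoting and backslash escaping may need adjusting to your client):
SELECT InstanceID, AllocationUnits, Limit FROM Msvm_MemorySettingData WHERE InstanceID LIKE 'Microsoft:Definition\\4764334E-E001-4176-82EE-5594EC9B530E%'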
