I am trying to measure my application's memory footprint programmatically.
I am using the java.lang.management API to calculate this:
val heap = ManagementFactory.getMemoryMXBean.getHeapMemoryUsage
val nonHeap = ManagementFactory.getMemoryMXBean.getNonHeapMemoryUsage
val total = heap.getUsed + nonHeap.getUsed + (?)
I assumed the sum of the two would give me the total amount of memory used by the application, but this is not the case: the actual size reported by the top command is greater.
So what am I missing? What else do I need to add to this equation in order to get the total memory usage of my application?
To find the memory usage as reported by top, check the OS-level statistics for the process.
On Linux you can do this by reading /proc/self/stat or /proc/self/status.
See the proc(5) man page for more about the proc pseudo-filesystem.
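For example, here is a minimal sketch of reading the process-level numbers from /proc/self/status inside the JVM (Linux-only; the field names come from the proc(5) documentation, and the parsing is deliberately simplistic):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ProcStatus {
    public static void main(String[] args) throws IOException {
        // VmRSS is the resident set size, VmSize the total virtual size.
        // These are roughly what top shows in the RES and VIRT columns.
        for (String line : Files.readAllLines(Paths.get("/proc/self/status"))) {
            if (line.startsWith("VmRSS:") || line.startsWith("VmSize:")) {
                System.out.println(line.trim());
            }
        }
    }
}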
Note that the application footprint is a different concept. From the JVM's point of view, the Java application footprint is roughly the amount of space occupied by Java objects (heap) and Java classes (non-heap). From the OS's point of view there are many more things to count, including the JVM itself and all the components of the Java Runtime that make your application work.
The memory used by the whole Java process includes (see the sketch after this list):
Java Heap;
Metaspace (for class metadata);
Code Cache (the place for JIT-compiled methods and all the generated code);
Direct ByteBuffers;
Memory-mapped files, including files mapped by JVM, e.g. all JAR files on the classpath;
Thread stacks;
JVM code itself and all the dynamic libraries loaded by Java Runtime;
Many other internal JVM structures.
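Some of these regions can be queried from inside the JVM itself. A minimal sketch using the standard management beans; note that it only covers heap pools, non-heap pools and NIO buffer pools, while thread stacks, the JVM's own code and its internal structures are not visible through these APIs, which is why the sum still falls short of top's figure:

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class FootprintSketch {
    public static void main(String[] args) {
        long pools = 0;
        // Heap pools (Eden, Survivor, Old Gen) and non-heap pools (Metaspace, Code Cache, ...)
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getUsage() == null) continue;
            System.out.printf("%-30s %,d bytes used%n", pool.getName(), pool.getUsage().getUsed());
            pools += pool.getUsage().getUsed();
        }
        long buffers = 0;
        // Direct and mapped ByteBuffers live outside the Java heap
        for (BufferPoolMXBean buf : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("%-30s %,d bytes used%n", buf.getName(), buf.getMemoryUsed());
            buffers += buf.getMemoryUsed();
        }
        System.out.printf("pools + buffers: %,d bytes (still less than the RSS top reports)%n",
                pools + buffers);
    }
}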
Related
The Java buildpack memory calculator, with a Spring Boot application inside a Docker container with 1GB of memory, calculates memory as described in the documentation: it takes the entire available memory, and these are the calculated JVM options:
Calculated JVM Memory Configuration: -XX:MaxDirectMemorySize=10M -Xmx747490K -XX:MaxMetaspaceSize=157725K -Xss1M (Total Memory: 1G, Thread Count: 50, Loaded Class Count: 25433, Headroom: 0%)
The question is: why does it take the entire available memory and give it to the JVM? It should leave some memory for the Java process outside of the JVM. This can lead to OOM, because the JVM thinks it has 1GB for itself (747490K for heap), while in reality it has less, since some of its memory is used by native memory outside of the JVM.
Should I not use this calculator and set the JVM configuration myself, or can I reconfigure it somehow?
Why does it take the entire available memory and give it to the JVM?
The assumption is that the only thing running in your container is your Java application, thus it assigns all of the available memory to be used.
If you do things like shell out to other programs, or run additional processes in the container, you need to tell the memory calculator so it can take that into account.
This can lead to OOM because the JVM thinks it has 1GB for itself (747490K for heap), and in reality it has less because some of its memory is used by native memory, outside of the JVM.
The memory calculator takes into consideration the major memory regions within a Java process. Not just heap. That said, it cannot 100% guarantee that you will never go over your memory limit. That's impossible with a Java app.
There are things you can do as an application developer, like creating 10,000 threads or using JNI, that cannot be restricted and could potentially consume a whole ton of memory. If you do that, your app will go over its container memory limit and crash.
The memory calculator attempts to give you a reasonable memory configuration for most common Java workloads. Running a web app, running a microservice, running some batch jobs, etc...
If you are doing something that doesn't fit within that pattern, then you can simply tell the memory calculator and it'll adjust things accordingly.
Should I not use this calculator and set the JVM configuration myself, or can I reconfigure it somehow?
Even if you need to customize what the calculator is doing, it can still be helpful. It's additional toil to calculate these values manually, especially when it's so easy to change the memory limits. If your ops team increases the memory limit of the container, you want your application to automatically adjust to that configuration (as well as it can).
Beyond that, memory calculator is also good at detecting problems early. If you configure the JVM manually and you mess it up, let's say you over-allocate memory, the JVM won't necessarily care until it tries to get more memory and can't. At some point down the road, you're going to have a problem but it's not clear when (probably at 3am on a Sat, lol).
With the memory calculator, the math is done when your container first starts, to make sure the memory settings are sane. If there's something off with the configuration, it'll fail and let you know.
TIPS:
You can override a memory calculator-defined value by simply setting that JVM option in the JAVA_TOOL_OPTIONS env variable. For example, if I want to allow for more direct memory, I would set JAVA_TOOL_OPTIONS='-XX:MaxDirectMemorySize=50M'. Then when you restart the container, the memory calculator will shift memory around to accommodate that.
The one thing you don't want to set is -Xmx. The memory calculator should always set this, because it sets it to whatever is left after the other regions have been accounted for. You can think of it as HEAP = CONTAINER_MEMORY_LIMIT - (all static memory regions); see the sketch after these tips.
If you were to set -Xmx yourself, you would have to get it exactly right. If it's too low, you're wasting memory. If it's too high, you could exceed the container memory limit and get crashes.
In short, if you think you want to set -Xmx, you should instead either increase the container memory limit or decrease one of the static memory regions.
If you run other things in the container, you need to set the headroom. This is done with the BPL_JVM_HEAD_ROOM env variable. Give it a percentage of the total container memory limit. For example, BPL_JVM_HEAD_ROOM=20 would use 80% of the container's memory limit for Java and leave 20% for other stuff.
Setting some headroom can be useful in other cases as well, for example if you're troubleshooting a container crash and want a little extra room, or if you simply don't like operating at 100% of the memory limit. You can leave 5 or 10% unused to match your comfort level.
If you have an application that uses a lot of threads, you'll need to adjust the thread count as well. The default is 250 threads, which works well for many web/servlet-based applications (thread-per-request model). We do automatically lower this to 50 threads if you're specifically using Spring WebFlux, which does not need as many threads.
For other cases, it's up to you to configure this. For example, if you have a batch application that only needs a thread pool of 10, you could set this to 40 or 50. That may seem odd, but the JVM creates a number of its own threads, and you need to account for those in addition to the application-specific ones. When in doubt, look at a thread dump.
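To make the HEAP = CONTAINER_MEMORY_LIMIT - (all static memory regions) idea from the -Xmx tip above concrete, here is a rough, hypothetical sketch of the arithmetic. The values are placeholders, not the buildpack's actual defaults, and the real calculator accounts for more detail (for example, it derives the metaspace size from the loaded class count):

public class HeapBudgetSketch {
    public static void main(String[] args) {
        // All values are illustrative placeholders, not the calculator's real defaults.
        long containerLimitKb = 1_048_576;   // 1G container memory limit
        long headroomKb       = 0;           // BPL_JVM_HEAD_ROOM percentage, here 0%
        long directMemoryKb   = 10 * 1024;   // -XX:MaxDirectMemorySize
        long metaspaceKb      = 157_725;     // -XX:MaxMetaspaceSize (derived from class count)
        long codeCacheKb      = 240 * 1024;  // assumed code cache reservation
        long threadCount      = 50;          // configured thread count
        long stackKb          = 1024;        // -Xss1M per thread

        long staticRegionsKb = headroomKb + directMemoryKb + metaspaceKb
                + codeCacheKb + threadCount * stackKb;
        // Whatever is left over becomes the heap (-Xmx); everything else is "static".
        long heapKb = containerLimitKb - staticRegionsKb;
        System.out.printf("-Xmx%dK (static regions: %dK)%n", heapKb, staticRegionsKb);
    }
}

The printed figure won't match the 747490K from the log above exactly, since the real calculator uses its own region defaults, but the shape of the calculation is the same.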
I'm looking to understand the relationship between
container_memory_working_set_bytes vs process_resident_memory_bytes vs total_rss (container_memory_rss) + file_mapped, so as to be better equipped to alert on the possibility of OOM.
It goes against my understanding (which is puzzling me right now), given that the container/pod is running a single process executing a compiled program written in Go.
Why is container_memory_working_set_bytes so much bigger (nearly 10 times) than process_resident_memory_bytes?
Also, the relationship between container_memory_working_set_bytes and container_memory_rss + file_mapped is not what I expected, after reading here:
The total amount of anonymous and swap cache memory (it includes transparent hugepages), and it equals to the value of total_rss from memory.status file. This should not be confused with the true resident set size or the amount of physical memory used by the cgroup. rss + file_mapped will give you the resident set size of cgroup. It does not include memory that is swapped out. It does include memory from shared libraries as long as the pages from those libraries are actually in memory. It does include all stack and heap memory.
So if the cgroup's total resident set size is rss + file_mapped, how can this value be less than container_memory_working_set_bytes for a container running in that cgroup?
It makes me feel that something about these stats is not what I think it is.
The following PromQL queries were used to build the graph above:
process_resident_memory_bytes{container="sftp-downloader"}
container_memory_working_set_bytes{container="sftp-downloader"}
go_memstats_heap_alloc_bytes{container="sftp-downloader"}
container_memory_mapped_file{container="sftp-downloader"} + container_memory_rss{container="sftp-downloader"}
So the relationship is like this:
container_memory_working_set_bytes = container_memory_usage_bytes - total_inactive_file
container_memory_usage_bytes, as its name implies, is the total memory used by the container. Since it also includes file cache (i.e. inactive_file, which the OS can release under memory pressure), subtracting inactive_file gives container_memory_working_set_bytes.
The relationship between container_memory_rss and container_memory_working_set_bytes can be summed up with the following expression:
container_memory_usage_bytes = container_memory_cache + container_memory_rss
where cache reflects data stored on disk that is currently cached in memory; it contains active_file + inactive_file (mentioned above).
This explains why container_memory_working_set_bytes was higher. Does this help to make sense of the chart?
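As a concrete check, the same relationships can be verified directly against the cgroup files these metrics come from. A minimal sketch (the paths below assume cgroup v1 mounted at /sys/fs/cgroup/memory; under cgroup v2 the file names differ):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class CgroupWorkingSet {
    public static void main(String[] args) throws IOException {
        String base = "/sys/fs/cgroup/memory/";
        long usage = Long.parseLong(
                Files.readAllLines(Paths.get(base + "memory.usage_in_bytes")).get(0).trim());

        long inactiveFile = 0, rss = 0, cache = 0;
        for (String line : Files.readAllLines(Paths.get(base + "memory.stat"))) {
            String[] parts = line.split(" ");
            switch (parts[0]) {
                case "total_inactive_file": inactiveFile = Long.parseLong(parts[1]); break;
                case "total_rss":           rss          = Long.parseLong(parts[1]); break;
                case "total_cache":         cache        = Long.parseLong(parts[1]); break;
            }
        }
        // working_set = usage - inactive_file; usage is roughly rss + cache
        System.out.printf("usage=%d rss=%d cache=%d%n", usage, rss, cache);
        System.out.printf("working_set=%d (usage - total_inactive_file)%n", usage - inactiveFile);
    }
}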
Not really an answer, but still two assorted points.
Here at my $dayjob, we have faced various issues with how different tools external to the Go runtime count and display the memory usage of a process executing a program written in Go.
Coupled with the fact that Go's GC on Linux does not actually release freed memory pages to the kernel but merely madvise(2)s that such pages are MADV_FREE, a GC cycle which has freed quite a hefty amount of memory does not result in any noticeable change in the "process RSS" readings taken by external tooling (usually cgroups stats).
Hence we're exporting our own metrics, obtained by periodically calling runtime.ReadMemStats (and runtime/debug.ReadGCStats) in any major service written in Go, with the help of a simple package written specifically for that. These readings reflect the Go runtime's true idea of the memory under its control.
By the way, the NextGC field of the memory stats is super useful to watch if you have memory limits set for your containers, because once that reading reaches or surpasses your memory limit, the process in the container is surely doomed to eventually be shot down by the oom_killer.
Prior to Java 8 we had 5 major runtime data areas:
Method Area
Heap
JVM Stacks
PC registers
Native method stacks
With Java 8 there is no PermGen, which means there is no more
"java.lang.OutOfMemoryError: PermGen"
which is great, but I have also read that
the Method Area is part of the space in the Perm Gen
yet I can't seem to find anything which explicitly says that the Method Area is gone in Java 8.
So was the Perm Gen removed along with the Method Area, or was only the Perm Gen removed, with the Method Area still present in the old generation?
Please attach any good source material you may have seen related to the Java 8 memory model.
Since Method Area is a logical concept described in the specification, every JVM has a Method Area, though that doesn’t imply that it has to be reflected in the implementation code. Likewise, the Java Heap Space is specified as a concept in the specification, to be the storage of all Java objects, therefore all Java objects are stored in the Heap, per definition, regardless of how it is actually implemented.
Unlike the Perm Gen, which contained Java objects and JVM data structures other than Java objects, the memory layout of the HotSpot JVM for Java 8 has a clear separation. The Old Gen still only contains Java objects, whereas the Metaspace only contains JVM specific data and no Java objects. So Java objects formerly stored in the Perm Gen have been moved to the Old Gen. Since the Method Area contains artifacts “such as the run-time constant pool, field and method data, and the code for methods and constructors…”, in other words non-Java-objects (the pool may contain references to heap objects though), it is part of the Metaspace now.
You could now discuss whether the Metaspace is an implementation of Method Area or may contain more than the Method Area, but this has no practical relevance. Practically, the JVM contains code to manage the Metaspace and its contained artifacts and does not need to care whether these artifacts do logically belong to what the specification describes as “Method Area” or not.
Here is the runtime data storage for the HotSpot VM in Java 8 (a small sketch that lists these regions at runtime follows this overview):
Heap
Has all your objects created using new, including the String constant pool
Contains your fields/instance variables
Metaspace (Method Area)
Contains static data (class variables and static methods)
Data here is accessible from the Heap and the JVM stacks
Unlike the <= Java 7 PermGen, which sits in the limited JVM process memory and can't be expanded at runtime, Metaspace uses native memory
JVM Stack
Tracks the current execution of your program.
Contains local variables
There is one per thread
Native Stack
Used for native method execution, as the Java core language has some native parts
There is also one per thread
PC Register / Instruction Sets
Holds the JVM memory addresses (not native addresses) for each JVM instruction in your stack
Generally, each entry in the JVM/native stack refers to PC registers for the addresses needed to get the actual data from the Heap/Metaspace
Each stack is associated with a PC register
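A small sketch that lists these regions as the running JVM reports them; on Java 8 you should see heap pools such as Eden/Old Gen alongside non-heap pools such as Metaspace, Compressed Class Space and Code Cache, and no PermGen pool (the exact pool names depend on the JVM and the garbage collector in use):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class ListMemoryPools {
    public static void main(String[] args) {
        // Prints each pool's name and whether it is heap or non-heap memory.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.println(pool.getName() + " (" + pool.getType() + ")");
        }
    }
}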
I am trying to upload an archive that's 600MB in size.
I get this error:
Execution failed for task ':uploadArchives'.
> Java heap space
...
...
Caused by: java.lang.OutOfMemoryError: Java heap space
I have tried setting the GRADLE_OPTS, JVM_OPTS and MAVEN_OPTS variables to raise the max heap size, for example:
export GRADLE_OPTS=-Xmx1024m
gradle uploadArchives
But I am still getting the same error.
What am I missing here?
Ultimately you always have a finite maximum of heap to use, no matter what platform you are running on. On 32-bit Windows this is around 2 GB (not specifically heap, but the total amount of memory per process). Java just happens to make the default smaller (presumably so that the programmer can't create programs that have runaway memory allocation without running into this problem and having to examine exactly what they are doing).
Given this, there are several approaches you can take: either determine how much memory you actually need, or reduce the amount of memory you are using. One common mistake with garbage-collected languages such as Java or C# is to keep references to objects that you no longer use, or to allocate many objects when you could reuse them instead. As long as objects have a reference to them, they will continue to use heap space, because the garbage collector will not delete them.
In this case you can use a Java memory profiler to determine what methods in your program are allocating large numbers of objects, and then determine whether there is a way to make sure they are no longer referenced, or to not allocate them in the first place. One option which I have used in the past is JMP (http://www.khelekore.org/jmp/).
If you determine that you are allocating these objects for a reason and you need to keep references around (depending on what you are doing, this might be the case), then you will just need to increase the max heap size when you start the program. However, once you have done the memory profiling and understand how your objects are being allocated, you should have a better idea of how much memory you need.
In general, if you can't guarantee that your program will run in some finite amount of memory (perhaps depending on input size), you will always run into this problem. Only after exhausting all of this do you need to look into caching objects out to disk, etc. At that point you should have a very good reason to say "I need X GB of memory" for something, and you can't work around it by improving your algorithms or memory-allocation patterns. Generally this will only be the case for algorithms operating on large datasets (like a database or some scientific analysis program), and then techniques like caching and memory-mapped IO become useful.
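To illustrate the "keeping references you no longer need" point above, a tiny hypothetical example: the cache below grows without bound because nothing ever removes entries, so the garbage collector can never reclaim them and the heap eventually fills up.

import java.util.ArrayList;
import java.util.List;

public class LeakyCache {
    // Static collection: everything added here stays reachable for the life of the JVM.
    private static final List<byte[]> CACHE = new ArrayList<>();

    public static void main(String[] args) {
        while (true) {
            // Each iteration retains another 1 MB; with -Xmx32m this throws
            // java.lang.OutOfMemoryError: Java heap space within a few dozen iterations.
            CACHE.add(new byte[1024 * 1024]);
        }
    }
}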
Run Java with the command-line option -Xmx, which sets the maximum size of the heap.
http://docs.oracle.com/javase/7/docs/technotes/tools/windows/java.html#nonstandard
The problem was that the actual size of the package was much larger, because Gradle was not able to handle symlinks.
When I handled the symlinks manually, the problem went away.
The JVM heap size can be set in the gradle.properties file, in the root directory of your gradle project. Like this:
org.gradle.jvmargs=-Xms256m -Xmx1024m
I am starting the Java process with the following command:
java -Xmx32m -jar winstone-lite.jar --warfile=myWarFile.war
Instead of using the amount of memory I specified, it is still allocating 144m.
EDIT:
When I say allocate, I mean that when I look at top I see 144m as the amount of memory being used by the process.
I am using http://www.oracle.com/technetwork/java/embedded/documentation/index.html current version.
I would have figured that if my application required more memory than I am allocating, the JVM would crash.
-Xmx just tells the JVM how much memory it may use for its internal heap.
The JVM needs memory for other purposes (permanent generation, temporary space, etc.), plus, like every binary, it needs space for its own code and for any libraries/DLLs/.so files it loads.
The 144 MiB you quote probably contains at least some of these other memory uses.
How did you measure the memory usage? On a modern OS using virtual memory, measuring the memory usage of a process is not trivial and cannot be expressed as a single value.
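As a quick way to see the gap, here is a small sketch comparing what the JVM itself reports about its heap with what top shows for the process; the heap cap from -Xmx is only one part of what top counts as RES:

public class HeapVsProcess {
    public static void main(String[] args) throws Exception {
        Runtime rt = Runtime.getRuntime();
        // With -Xmx32m, maxMemory() is roughly 32 MB, yet top's RES column for this
        // process will be noticeably larger: it also counts the JVM's own code,
        // loaded libraries, thread stacks, class metadata, and so on.
        System.out.printf("max heap:       %,d bytes%n", rt.maxMemory());
        System.out.printf("committed heap: %,d bytes%n", rt.totalMemory());
        System.out.printf("used heap:      %,d bytes%n", rt.totalMemory() - rt.freeMemory());
        Thread.sleep(60_000); // keep the process alive so you can inspect it with top
    }
}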