Is Method Area still present in Java 8?

Prior to Java 8 we had 5 major runtime data areas:
Method Area
Heap
JVM Stacks
PC registers
Native method stacks
With Java 8 there is no Perm Gen, which means there is no more
“java.lang.OutOfMemoryError: PermGen”
which is great, but I have also read that
Method Area is part of space in the Perm Gen
yet I can't seem to find anything which explicitly says the Method Area is gone in Java 8.
So was the Method Area removed along with the Perm Gen, or was only the Perm Gen removed and the Method Area is still present, now in the old generation?
Please link any good source material you may have seen related to the Java 8 memory model.

Since the Method Area is a logical concept described in the specification, every JVM has a Method Area, though that doesn’t imply that it has to be reflected in the implementation code. Likewise, the Java Heap Space is specified as a concept in the specification, to be the storage of all Java objects, therefore all Java objects are stored in the Heap, by definition, regardless of how it is actually implemented.
Unlike the Perm Gen, which contained Java objects and JVM data structures other than Java objects, the memory layout of the HotSpot JVM for Java 8 has a clear separation. The Old Gen still only contains Java objects, whereas the Metaspace only contains JVM specific data and no Java objects. So Java objects formerly stored in the Perm Gen have been moved to the Old Gen. Since the Method Area contains artifacts “such as the run-time constant pool, field and method data, and the code for methods and constructors…”, in other words non-Java-objects (the pool may contain references to heap objects though), it is part of the Metaspace now.
You could now discuss whether the Metaspace is an implementation of Method Area or may contain more than the Method Area, but this has no practical relevance. Practically, the JVM contains code to manage the Metaspace and its contained artifacts and does not need to care whether these artifacts do logically belong to what the specification describes as “Method Area” or not.
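As a rough way to see this separation at runtime, you can list the JVM's memory pools; on a HotSpot JVM for Java 8 the non-heap pools typically include a “Metaspace” pool. A minimal sketch (pool names are implementation-specific, so treat the output as illustrative):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class ListMemoryPools {
    public static void main(String[] args) {
        // Each pool reports whether it belongs to the heap or to non-heap memory.
        // On HotSpot for Java 8 the non-heap pools typically include "Metaspace"
        // and "Code Cache".
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-30s %-8s used=%d bytes%n",
                    pool.getName(), pool.getType(), pool.getMemoryUsage().getUsed());
        }
    }
}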

Here is the runtime data storage for the HotSpot VM in Java 8 (a small annotated example follows the list):
Heap
Holds all objects created with new, including the String constant pool (which has lived in the heap since Java 7)
Contains instance fields, as part of their objects
Metaspace (Method Area)
Contains class metadata: the run-time constant pool, field and method data, and the bytecode of methods and constructors (in HotSpot 8 the values of static fields are held with the java.lang.Class object on the heap)
Data here is referenced from the heap and from the JVM stacks
Unlike the PermGen of Java 7 and earlier, which was part of the JVM's fixed-size process memory and could not be expanded at runtime, Metaspace is allocated from native memory and can grow (capped by -XX:MaxMetaspaceSize)
JVM Stack
Tracks the current execution of your program, one frame per method invocation
Contains local variables and partial results
There is one JVM stack per thread
Native Stack
Used for native method execution, since parts of the core Java runtime are implemented natively
Also one per thread
PC register
Holds the address of the JVM instruction currently being executed (not a native address); its value is undefined while a native method runs
Each thread has its own PC register
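To make the mapping concrete, here is a tiny annotated sketch (the class and field names are made up for illustration; the placement comments assume the HotSpot 8 layout described above):

public class WhereThingsLive {
    static int counter;        // the class's metadata lives in Metaspace; in HotSpot 8 the
                               // static field's value is held with the java.lang.Class object on the heap
    int instanceField;         // stored inside each object, i.e. on the heap

    void run() {
        int local = 42;                               // primitive local: lives in this thread's JVM stack frame
        WhereThingsLive obj = new WhereThingsLive();  // the object goes to the heap; 'obj' is just a
                                                      // reference held in the stack frame
        String s = "hello";                           // string literal: interned in the String pool (on the heap since Java 7)
    }
}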

Related

Memory Management in Java - Metaspace/native memory

Does the native memory (Metaspace) for a Java application come out of heap memory, or is there a completely different set of memory dedicated to it?
I think it uses the memory the OS uses to manage all its applications, but I'm not sure.
Java Heap Space
Java Heap space is used by the Java runtime to allocate memory to objects and JRE classes. Whenever we create any object, it's always created in the heap space. Garbage collection runs on the heap memory to free the memory used by objects that don't have any references. Any object created in the heap space has global access and can be referenced from anywhere in the application.
Java Stack Memory
Java Stack memory is used for the execution of a thread. It contains method-specific values that are short-lived and references to objects in the heap that are referred to from the method. Stack memory is always referenced in LIFO (last-in-first-out) order. Whenever a method is invoked, a new block is created in the stack memory for the method to hold local primitive values and references to other objects in the method. As soon as the method ends, the block becomes unused and is available for the next method. Stack memory is much smaller than heap memory.
Difference between Java Heap Space and Stack Memory
Heap memory is used by all the parts of the application whereas stack memory is used only by one thread of execution.
Whenever an object is created, it’s always stored in the Heap space and stack memory contains the reference to it. Stack memory only contains local primitive variables and reference variables to objects in heap space.
Objects stored in the heap are globally accessible whereas stack memory can’t be accessed by other threads.
Memory management in the stack is done in LIFO manner whereas it's more complex in heap memory because it's used globally. Heap memory is divided into Young Generation, Old Generation, etc.; more details at Java Garbage Collection.
Stack memory is short-lived whereas heap memory lives from the start till the end of application execution.
We can use the -Xms and -Xmx JVM options to define the startup size and maximum size of the heap memory, and -Xss to define the stack size per thread.
When stack memory is full, the Java runtime throws java.lang.StackOverflowError, whereas when heap memory is full it throws java.lang.OutOfMemoryError: Java heap space (see the sketch after this list).
Stack memory is much smaller than heap memory. Because of the simplicity of its allocation (LIFO), stack memory is also much faster than heap memory.
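A minimal sketch of the two failure modes mentioned above; the class name and flag values are just examples, chosen so the errors appear quickly (e.g. run with -Xss256k -Xmx16m):

public class StackVsHeapErrors {

    // Unbounded recursion keeps pushing frames onto the thread's stack
    // until java.lang.StackOverflowError is thrown.
    static long recurse(long depth) {
        return recurse(depth + 1);
    }

    public static void main(String[] args) {
        try {
            recurse(0);
        } catch (StackOverflowError e) {
            System.out.println("stack exhausted");
        }

        // Keep allocating and retaining objects until the heap fills up and
        // java.lang.OutOfMemoryError: Java heap space is thrown.
        java.util.List<byte[]> retained = new java.util.ArrayList<>();
        try {
            while (true) {
                retained.add(new byte[1_000_000]);
            }
        } catch (OutOfMemoryError e) {
            retained.clear();
            System.out.println("heap exhausted");
        }
    }
}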
Reference:
Java Heap Space vs Stack – Memory Allocation in Java
Java (JVM) Memory Model – Memory Management in Java
These are the basics of Java memory management, but going through the reference material should give you a comprehensive idea.
Edit
Thanks to #rajvineet for pointing out this great article on how the JVM uses native memory on Windows and Linux. Especially the section on how the Java runtime uses native memory describes everything clearly.

Find my application's memory footprint programmatically

I am trying to measure my application's memory footprint programmatically.
I am using the java.lang.management classes to calculate this:
import java.lang.management.ManagementFactory

val heap    = ManagementFactory.getMemoryMXBean.getHeapMemoryUsage.getUsed
val nonHeap = ManagementFactory.getMemoryMXBean.getNonHeapMemoryUsage.getUsed
val total   = heap + nonHeap + (?)
I assumed the sum of both would give me the total amount of memory used by the application, but this is not the case: the actual size reported by the top command is greater.
So I am trying to understand what I am missing. What else do I need to add to this equation in order to get the total memory usage of my application?
To find the memory usage as provided by top, check the OS-level statistics for the process.
On Linux you can do this by reading /proc/self/stat or /proc/self/status.
More about the proc pseudo-file system.
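For example, a minimal Java sketch (Linux only) that reads the resident set size the way top sees it; the VmRSS field roughly corresponds to top's RES column:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ProcessRss {
    public static void main(String[] args) throws IOException {
        // /proc/self/status holds human-readable per-process statistics.
        // VmRSS is the resident set size of the whole JVM process.
        Files.lines(Paths.get("/proc/self/status"))
             .filter(line -> line.startsWith("VmRSS"))
             .forEach(System.out::println);   // e.g. "VmRSS:    123456 kB"
    }
}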
Note that application footprint is a different concept. From the JVM's point of view, the Java application footprint is roughly the amount of space occupied by Java objects (heap) and Java classes (non-heap). From the OS's point of view there are many more things to count, including the JVM itself and all the components of the Java Runtime that make your application work.
The memory used by the whole Java process includes the following (see the sketch after this list for the parts you can query from inside the JVM):
Java Heap;
Metaspace (for class metadata);
Code Cache (the place for JIT-compiled methods and all the generated code);
Direct ByteBuffers;
Memory-mapped files, including files mapped by JVM, e.g. all JAR files on the classpath;
Thread stacks;
JVM code itself and all the dynamic libraries loaded by Java Runtime;
Many other internal JVM structures.
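A minimal sketch of what is visible from inside the JVM via the standard management beans; note that it covers only part of the list above (heap, Metaspace/code cache as non-heap, and direct/mapped buffers):

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class JvmMemoryBreakdown {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        // Heap = Java objects; non-heap = Metaspace, code cache, etc.
        System.out.println("heap used:     " + mem.getHeapMemoryUsage().getUsed());
        System.out.println("non-heap used: " + mem.getNonHeapMemoryUsage().getUsed());

        // Direct and memory-mapped ByteBuffers appear as separate buffer pools.
        for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.println(pool.getName() + " buffers: " + pool.getMemoryUsed());
        }
        // Thread stacks, the JVM's own code and other native structures are not
        // exposed here; Native Memory Tracking (-XX:NativeMemoryTracking=summary,
        // then `jcmd <pid> VM.native_memory summary`) gives a fuller breakdown.
    }
}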

Java Card memory leak in for loop?

I know that Java Card VMs don't have a garbage collector, but what happens with a for loop:
for(short x=0;x<10;x++)
{}
Does the x variable get reused after the for loop, or does it turn into garbage?
Just in case, I have a transient byte array called index of size 2 (instead of the loop variable) and I use the array in for loops:
for(index[0]=0;index[0]<10;index[0]++)
{}
But it is a little slower than the first version. If I use a normal variable for the index in a for loop then it gets really slow.
So, what happens with the x variable in the first for loop? Is it safe to use for loops like that, or not?
The x variable does not really exist in byte code. There are operations on a location in the Java stack that represents x (be it Java byte code or the code after conversion by the Java Card converter). Now the Java stack is a virtual stack. This stack is implemented on a CPU that has registers and a non-virtual stack. In general, if there are enough registers available, then the variable x is simply put in a register until it is out of scope. The register may of course be reused. The CPU stack itself is a LIFO (last in first out) queue in transient memory (RAM). The stack continuously grows and shrinks during the execution of the byte code that makes up your Applet. Like registers, the stack memory is reused over and over again. All the local variables (those defined inside code blocks as well as method arguments) are treated this way.
If you put your variable in a transient array then you are putting the variable on a RAM based heap. The Java Card RAM heap will never go out of scope. That means that if you update the value, the change needs to be written to transient memory. That is of course slower than a localized update of a CPU register, as you found by experimentation. Usually memory in the transient heap is never freed. That said, you can of course reuse the memory for other purposes, as long as you have a reference to the array. Note that the references themselves (the index in index[0]) may be either in persistent memory (EEPROM or flash) or transient memory.
It's unclear what you mean by "normal variable". If this is something that has been created with new or if it is a field in an object instance then it persists in the heap in persistent memory (EEPROM or flash). EEPROM and flash have a limited number of write cycles, and writing to EEPROM or flash is much, much slower than writing to RAM.
Java Card contains two kinds of transient memory: CLEAR_ON_RESET and CLEAR_ON_DESELECT. The difference between the two is that the CLEAR_ON_RESET allows memory to be shared between Applet instances while the CLEAR_ON_DESELECT allows memory to be reused by different Applets.
Java Card classic doesn't contain a garbage collector that runs during Applet execution; you can usually only request garbage collection during startup using JCSystem.requestObjectDeletion(), which will clean up the memory that is not referenced anymore, both on the heap in transient memory as well as in persistent memory. Cleaning up the memory means scanning all the memory, marking all the unreferenced blocks and then compacting the memory. This is similar to defragmenting a hard disk; it can take an uncomfortably long time.
ROM is filled during the manufacturing phase. It may contain the operating system, the Java Card API implementation, byte code (including constants) of pre-loaded applets etc. It can only be read in the field, so it isn't of any consequence to the question asked.
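To make the RAM-vs-persistent distinction concrete, here is a minimal Java Card sketch (the applet name is made up; it assumes the standard javacard.framework API): the transient index array is allocated once in RAM, while a plain local loop variable lives on the stack and costs nothing after the loop ends.

import javacard.framework.APDU;
import javacard.framework.Applet;
import javacard.framework.JCSystem;

public class LoopDemoApplet extends Applet {

    // Allocated once: the array contents live in transient RAM (cleared on reset),
    // while the reference itself is a persistent field of the applet instance.
    private byte[] index;

    private LoopDemoApplet() {
        index = JCSystem.makeTransientByteArray((short) 2, JCSystem.CLEAR_ON_RESET);
        register();
    }

    public static void install(byte[] bArray, short bOffset, byte bLength) {
        new LoopDemoApplet();
    }

    public void process(APDU apdu) {
        // A local loop counter lives in the current stack frame / a register
        // and is simply discarded when the loop (and method) ends.
        for (short x = 0; x < 10; x++) {
            // ... per-iteration work ...
        }
    }
}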
Let us have a short introduction to the memory. In brief, there are 3 types of memory in smart cards, as below:
ROM (And sometimes FLASH)
EEPROM
RAM
ROM:
The card's OS, the Java Card API and some factory proprietary packages are stored here. The contents of this memory are fixed and you can't modify them. Writing to this memory happens only once, during chip production, and the process is named masking.
EEPROM:
This is modifiable memory that your applets are loaded into, and it consists of 4 sections, named as below:
Text: also known as the code segment, contains the machine instructions of the program. The code can be thought of like the text of a novel: it tells the story of what the program does.
Data: contains the static data of the program, i.e. the variables that exist throughout program execution. Global variables in a C or C++ program are static, as are variables declared as static in C, C++, or Java.
Heap: a pool of memory used for dynamically allocated memory, such as with malloc() in C or new in C++ and Java.
Stack: contains the system stack, which is used as temporary storage.
A power loss (card tearing, for example) doesn't have any effect on the contents of this memory.
RAM:
This is also a modifiable type of memory. There are three main differences between RAM and EEPROM:
RAM is much faster than EEPROM (roughly 1000 times faster).
The contents of RAM are destroyed on power loss.
The number of write cycles in EEPROM is limited (typically 100,000), while RAM supports a far higher number.
And what now?
When you write for(short x=0; x<10; x++), you define x as a local variable. Local variables are stored on the stack. The stack pointer resets on power loss and the used part of the stack is reclaimed, so the main problem with the absence of a garbage collector is the heap.
That is, when you create an object using the new keyword, you dedicate that part of the heap to it forever. When the runtime environment finishes that method, the object becomes unreachable, but that section of the heap is not reclaimed, so you lose that part of the heap. The pattern you used in your for loop seems fine and doesn't cause any problem, because you didn't use the new keyword.
Note that in newer versions of Java Card (2.2.2 and higher) there is a manual garbage collector (see the JCSystem.requestObjectDeletion documentation). But consider that it is really slow and even dangerous in some situations (see the “Java Card power loss during garbage collection” question).

Default heap for a process

I read this article about managing heap memory, written by Randy Kath.
I would like to ask about this segment:
Every process in Windows has one heap called the default heap.
Processes can also have as many other dynamic heaps as they wish,
simply by creating and destroying them on the fly. The system uses the
default heap for all global and local memory management functions, and
the C run-time library uses the default heap for supporting malloc
functions.
I did not understand: what is the function or benefit of the default heap?
Also, I have another question: the author always refers to reserved and committed space; what is meant by committed space?
Applications need heaps to allocate dynamic memory. Windows automatically creates one heap for each process. This is the default heap. Most apps just use this single default heap.
Committing is the act of backing reserved virtual addresses with actual storage (RAM or pagefile) so that the memory is available for use by the process. I suggest you read this article on MSDN: Managing Virtual Memory.

JVM and CLR allocation optimization

Do the JVM and the .NET VM allocate objects on the stack when it is obvious to the runtime that an object's lifetime is limited to a certain scope?
The JVM does this. It can actually remove the allocation totally, under the right circumstances.
Quoting this article.
The Java language does not offer any way to explicitly allocate an object on the stack, but this fact doesn't prevent JVMs from still using stack allocation where appropriate. JVMs can use a technique called escape analysis, by which they can tell that certain objects remain confined to a single thread for their entire lifetime, and that lifetime is bounded by the lifetime of a given stack frame. Such objects can be safely allocated on the stack instead of the heap. Even better, for small objects, the JVM can optimize away the allocation entirely and simply hoist the object's fields into registers.
More information on Escape analysis from Wikipedia.
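As an illustration of the kind of allocation this can remove, here is a small sketch (the class and method names are made up; whether the allocation is actually eliminated depends on the JIT, and can be compared by running with the HotSpot flag -XX:-DoEscapeAnalysis):

public class EscapeAnalysisDemo {

    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    // 'p' never escapes this method: it is not stored in a field, not returned,
    // and not passed to code the JIT can't see. After escape analysis the JIT
    // may scalar-replace it, keeping x and y in registers with no heap allocation.
    static long sumOfSquares(int a, int b) {
        Point p = new Point(a, b);
        return (long) p.x * p.x + (long) p.y * p.y;
    }

    public static void main(String[] args) {
        long total = 0;
        for (int i = 0; i < 10_000_000; i++) {
            total += sumOfSquares(i, i + 1);
        }
        System.out.println(total);  // compare allocation rates with and without -XX:-DoEscapeAnalysis
    }
}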

Resources