What algorithm does the G1 GC use to resize the Eden region? - g1gc

I found a GC problem in my application: young GCs suddenly become about 10 times as frequent as normal, as a result of a small Eden region. From the GC log, the Eden size is usually 5000M-7000M, but it stays at 528M when the problem occurs. I don't know what caused this.
I have read a lot of blog posts about the algorithm that adjusts the Eden region size, and most of them just say that the Eden region size changes automatically according to the previous GC pause time. The only somewhat concrete answer I found is in https://product.hubspot.com/blog/g1gc-fundamentals-lessons-from-taming-garbage-collection:
if (recent_STW_time < MaxGCPauseMillis)
    eden = min(100% - G1ReservePercent - Tenured, G1MaxNewSizePercent)
else
    eden = min(100% - Tenured, G1NewSizePercent)
But according to this, the Eden region size should be 4000-6500M in my application.
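For reference, the quoted heuristic can be written out as a small sketch (this paraphrases the blog's pseudocode using G1's default percentages; it is not HotSpot's actual resizing code, which also uses its pause-prediction model and survivor sizing):

package main

import (
    "fmt"
    "math"
)

// edenPercent mirrors the pseudocode above: if the recent STW time met the
// pause goal, Eden may grow up to G1MaxNewSizePercent; otherwise it shrinks
// toward G1NewSizePercent. All values are percentages of the whole heap.
func edenPercent(recentSTWMillis, maxGCPauseMillis,
    tenuredPct, reservePct, newSizePct, maxNewSizePct float64) float64 {
    if recentSTWMillis < maxGCPauseMillis {
        return math.Min(100-reservePct-tenuredPct, maxNewSizePct)
    }
    return math.Min(100-tenuredPct, newSizePct)
}

func main() {
    // Hypothetical numbers: defaults G1ReservePercent=10, G1NewSizePercent=5,
    // G1MaxNewSizePercent=60, and a tenured set at 30% of the heap.
    fmt.Println(edenPercent(150, 200, 30, 10, 5, 60)) // pause goal met -> 60
    fmt.Println(edenPercent(250, 200, 30, 10, 5, 60)) // pause goal missed -> 5
}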

Related

How to determine correct heap size for ElasticSearch?

How can I determine the heap size required for 1 GB of logs with a 1-day retention period?
If I take a machine with a 32 GB heap (64 GB RAM), how many GB of logs can I keep on it for 1 day?
It depends on various factors, like the number of indexing requests, search requests, cache utilization, the size of search and indexing requests, the number of shards/segments, and so on. Heap usage should also follow a sawtooth pattern, and instead of guessing the size, you should start measuring it.
The good news is that you can start out right by assigning 50% of RAM as the ES heap size, without crossing 32 GB.
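For example (illustrative values only, assuming the 64 GB machine from the question): on recent Elasticsearch versions this goes in config/jvm.options, on older ones it can be set through the ES_HEAP_SIZE environment variable; either way, roughly half the RAM, kept safely below the ~32 GB compressed-oops threshold:
-Xms30g
-Xmx30g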

CMS class unloading took too much time

Under heavy load we noticed a large GC pause (400 ms) in our application. During investigation it turned out that the pause happens during CMS Final Remark, and the class unloading phase takes far more time than the other phases (10x-100x):
(CMS Final Remark)
[YG occupancy: 142247 K (294912 K)]
2019-03-13T07:38:30.656-0700: 24252.576: [Rescan (parallel) , 0.0216770 secs]
2019-03-13T07:38:30.677-0700: 24252.598: [weak refs processing, 0.0028353 secs]
2019-03-13T07:38:30.680-0700: 24252.601: [class unloading, 0.3232543 secs]
2019-03-13T07:38:31.004-0700: 24252.924: [scrub symbol table, 0.0371301 secs]
2019-03-13T07:38:31.041-0700: 24252.961: [scrub string table, 0.0126352 secs]
[1 CMS-remark: 2062947K(4792320K)] 2205195K(5087232K), 0.3986822 secs]
[Times: user=0.63 sys=0.01, real=0.40 secs]
Total time for which application threads were stopped: 0.4156259 seconds, Stopping threads took: 0.0014133 seconds
This pause always happens within the first seconds of a performance test, and its duration varies from 300 ms to 400+ ms.
Unfortunately, I have no access to the server (it's under maintenance) and only have logs from several test runs. I want to be prepared for further investigation when the server becomes available again, but I have no idea what causes such behavior.
My first thought was about Linux Huge pages, but we don't use them.
After spending more time in the logs, I found the following:
Heap after GC invocations=7969 (full 511):
par new generation total 294912K, used 23686K [0x0000000687800000, 0x000000069b800000, 0x000000069b800000)
eden space 262144K, 0% used [0x0000000687800000, 0x0000000687800000, 0x0000000697800000)
from space 32768K, 72% used [0x0000000699800000, 0x000000069af219b8, 0x000000069b800000)
to space 32768K, 0% used [0x0000000697800000, 0x0000000697800000, 0x0000000699800000)
concurrent mark-sweep generation total 4792320K, used 2062947K [0x000000069b800000, 0x00000007c0000000, 0x00000007c0000000)
Metaspace used 282286K, capacity 297017K, committed 309256K, reserved 1320960K
class space used 33038K, capacity 36852K, committed 38872K, reserved 1048576K
}
Heap after GC invocations=7970 (full 511):
par new generation total 294912K, used 27099K [0x0000000687800000, 0x000000069b800000, 0x000000069b800000)
eden space 262144K, 0% used [0x0000000687800000, 0x0000000687800000, 0x0000000697800000)
from space 32768K, 82% used [0x0000000697800000, 0x0000000699276df0, 0x0000000699800000)
to space 32768K, 0% used [0x0000000699800000, 0x0000000699800000, 0x000000069b800000)
concurrent mark-sweep generation total 4792320K, used 2066069K [0x000000069b800000, 0x00000007c0000000, 0x00000007c0000000)
Metaspace used 282303K, capacity 297017K, committed 309256K, reserved 1320960K
class space used 33038K, capacity 36852K, committed 38872K, reserved 1048576K
}
The GC pause under investigation happens between GC invocations 7969 and 7970, and the amount of used Metaspace is almost the same (it actually increased slightly).
So it looks like it's not stale classes that are no longer used (since no space was cleared), and it's not a safepoint-reaching issue, since stopping the threads took very little time (0.0014133 s).
How should I investigate such a case, and what diagnostic information should I collect to be properly prepared?
Technical details
CentOS 5 + JDK 8 + CMS GC with args:
-XX:+CMSClassUnloadingEnabled
-XX:CMSIncrementalDutyCycleMin=10
-XX:+CMSIncrementalPacing
-XX:CMSInitiatingOccupancyFraction=50
-XX:+CMSParallelRemarkEnabled
-XX:+DisableExplicitGC
-XX:InitialHeapSize=5242880000
-XX:MaxHeapSize=5242880000
-XX:MaxNewSize=335544320
-XX:MaxTenuringThreshold=6
-XX:NewSize=335544320
-XX:OldPLABSize=16
-XX:+UseCompressedClassPointers
-XX:+UseCompressedOops
-XX:+UseConcMarkSweepGC
-XX:+UseParNewGC
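As preparation for the next test run (a suggestion, not part of the configuration above), these standard JDK 8 diagnostic flags could help show whether the time really goes into unloading large numbers of classes, and rule out safepoint effects:
-XX:+PrintSafepointStatistics
-XX:PrintSafepointStatisticsCount=1
-XX:+TraceClassLoading
-XX:+TraceClassUnloading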

Discrepancy between htop and golang ReadMemStats

My program loads a lot of data at startup and then calls debug.FreeOSMemory() so that any extra space is given back immediately.
loadDataIntoMem()
debug.FreeOSMemory()
After loading into memory, htop shows me the following for the process:
VIRT RES SHR
11.6G 7629M 8000
But a call to runtime.ReadMemStats shows me the following:
Alloc 5593336608 5.3G
BuckHashSys 1574016 1.6M
HeapAlloc 5593336610 5.3G
HeapIdle 2607980544 2.5G
HeapInuse 7062446080 6.6G
HeapReleased 2607980544 2.5G
HeapSys 9670426624 9.1G
MCacheInuse 9600 9.4K
MCacheSys 16384 16K
MSpanInuse 106776176 102M
MSpanSys 115785728 111M
OtherSys 25638523 25M
StackInuse 589824 576K
StackSys 589824 576K
Sys 10426738360 9.8G
TotalAlloc 50754542056 48G
Alloc is the amount obtained from the system and not yet freed (this is resident memory, right?), but there is a big difference between the two.
I rely on HeapIdle to kill my program, i.e. if HeapIdle is more than 2 GB, restart. In this case it is 2.5 GB and isn't going down even after a while. Go should allocate from the idle heap when it needs more memory in the future, thus reducing HeapIdle, right?
If assumption 1 is wrong, which stat can accurately tell me what the RES value in htop is?
What can I do to reduce the value of HeapIdle?
This was tried on go 1.4.2, 1.5.2 and 1.6.beta1
The effective memory consumption of your program will be Sys-HeapReleased. This still won't be exactly what the OS reports, because the OS can choose to allocate memory how it sees fit based on the requests of the program.
If your program runs for any appreciable amount of time, the excess memory will be offered back to the OS so there's no need to call debug.FreeOSMemory(). It's also not the job of the garbage collector to keep memory as low as possible; the goal is to use memory as efficiently as possible. This requires some overhead, and room for future allocations.
If you're having trouble with memory usage, it would be a lot more productive to profile your program and see why you're allocating more than expected, instead of killing your process based on incorrect assumptions about memory.
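A minimal sketch of checking that figure from inside the process, using the runtime.ReadMemStats call already mentioned in the question (Sys and HeapReleased are fields of runtime.MemStats):

package main

import (
    "fmt"
    "runtime"
)

func main() {
    var m runtime.MemStats
    runtime.ReadMemStats(&m)
    // Sys is everything obtained from the OS; HeapReleased has already been
    // handed back, so the difference approximates the effective consumption.
    effective := m.Sys - m.HeapReleased
    fmt.Printf("effective memory: %.1f GiB\n", float64(effective)/(1<<30))
}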

Windows program has big native heap, much larger than all allocations

We are running a mixed mode process (managed + unmanaged) on Win 7 64 bit.
Our process is using up too much memory (especially VM). Based on our analysis, the majority of the memory is used by a big native heap. Our theory is that the LFH is saving too many free blocks in committed memory, for future allocations. They sum to about 1.2 GB while our actual allocated native memory is only at most 0.6 GB.
These numbers are from a test run of the process. In production it sometimes exceeded 10 GB of VM - with maybe 6 GB unaccounted for by known allocations.
We'd like to know if this theory of excessive committed-but-free-for-allocation segments is true, and how this waste can be reduced.
Here's the details of our analysis.
First we needed to figure out what's allocated and rule out memory leaks. We ran the excellent Heap Inspector by Jelle van der Beek and we ruled out a leak and established that the known allocations are at most 0.6 deci-GB.
We took a full memory dump and opened in WinDbg.
Ran !heap -stat
It reports a big native heap with 1.83 deci-GB committed memory. Much more than the sum of our allocations!
_HEAP 000000001b480000
Segments 00000078
Reserved bytes 0000000072980000
Committed bytes 000000006d597000
VirtAllocBlocks 0000001e
VirtAlloc bytes 0000000eb7a60118
Then we ran !heap -stat -h 0000001b480000
heap # 000000001b480000
group-by: TOTSIZE max-display: 20
size #blocks total ( %) (percent of total busy bytes)
c0000 12 - d80000 (10.54)
b0000 d - 8f0000 (6.98)
e0000 a - 8c0000 (6.83)
...
If we add up all the 20 reported items, they add up to 85 deci-MB - much less than the 1.79 deci-GB we're looking for.
We ran !heap -h 1b480000
...
Flags: 00001002
ForceFlags: 00000000
Granularity: 16 bytes
Segment Reserve: 72a70000
Segment Commit: 00002000
DeCommit Block Thres: 00000400
DeCommit Total Thres: 00001000
Total Free Size: 013b60f1
Max. Allocation Size: 000007fffffdefff
Lock Variable at: 000000001b480208
Next TagIndex: 0000
Maximum TagIndex: 0000
Tag Entries: 00000000
PsuedoTag Entries: 00000000
Virtual Alloc List: 1b480118
Unable to read nt!_HEAP_VIRTUAL_ALLOC_ENTRY structure at 000000002acf0000
Uncommitted ranges: 1b4800f8
FreeList[ 00 ] at 000000001b480158: 00000000be940080 . 0000000085828c60 (9451 blocks)
...
When adding up all the segment sizes in the report, we get:
Total Size = 1.83 deci-GB
Segments Marked Busy Size = 1.50 deci-GB
Segments Marked Busy and Internal Size = 1.37 deci-GB
So all the committed bytes in this report do add up to the total commit size. We grouped on block size and the most heavy allocations come from blocks of size 0x3fff0. These don't correspond to allocations that we know of. There were also mystery blocks of other sizes.
We ran !heap -p -all. This reports the LFH internal segments but we don't understand it fully. Those 3fff0 sized blocks in the previous report appear in the LFH report with an asterisk mark and are sometimes Busy and sometimes Free. Then inside them we see many smaller free blocks.
We guess these free blocks are legitimate. They are committed VM that the LFH reserves for future allocations. But why is their total size so much greater than sum of memory allocations, and can this be reduced?
Well, I can sort of answer my own question.
We had been doing lots and lots of tiny allocations and deallocations in our program. There was no leak, but it seems this somehow created some sort of fragmentation. After consolidating and eliminating most of these allocations, our software runs much better and uses less peak memory. It is still a mystery why the peak committed memory was so much higher than the peak actually-used memory.

Managed heap fragmentation

I am trying to understand how heap fragmentation works. What does the following output tell me?
Is this heap overly fragmented?
I have 243010 "free objects" with a total of 53304764 bytes. Are those "free objects" spaces in the heap that once contained objects but have now been garbage collected?
How can I force a fragmented heap to clean up?
!dumpheap -type Free -stat
total 243233 objects
Statistics:
MT Count TotalSize Class Name
0017d8b0 243010 53304764 Free
It depends on how your heap is organized. You should have a look at how much memory is allocated in Gen 0, 1, and 2, and how much free memory you have there compared to the total used memory.
If you have 500 MB of managed heap used and 50 MB of it is free, then you are doing pretty well. If you do memory-intensive operations, like creating many WPF controls and releasing them, you need a lot more memory for a short time, but .NET does not give the memory back to the OS once it has been allocated. The GC tries to recognize allocation patterns and tends to keep your memory footprint high, even though your current heap size is way too big, until your machine runs low on physical memory.
I found it much easier to use psscor2 for .NET 3.5, which has some cool commands like ListNearObj, with which you can find out which objects are around your memory holes (pinned objects?). With the commands from psscor2 you have a much better chance of finding out what is really going on in your heaps. Most of the commands are also available in SOS.dll for .NET 4.
To answer your original question: yes, free objects are gaps on the managed heap, which can simply be the free memory block after your last allocated object on a GC segment. Or, if you run !DumpHeap with the start address of a GC segment, you see the objects allocated in that managed heap segment along with your free objects, which are GC-collected objects.
These memory holes normally happen in Gen 2. The object addresses before and after a free object tell you which potentially pinned objects are around the hole. From this you should be able to determine your allocation history and optimize it if you need to.
You can find the addresses of the GC Heaps with
0:021> !EEHeap -gc
Number of GC Heaps: 1
generation 0 starts at 0x101da9cc
generation 1 starts at 0x10061000
generation 2 starts at 0x02aa1000
ephemeral segment allocation context: none
segment begin allocated size
02aa0000 02aa1000** 03836a30 0xd95a30(14244400)
10060000 10061000** 103b8ff4 0x357ff4(3506164)
Large object heap starts at 0x03aa1000
segment begin allocated size
03aa0000 03aa1000 03b096f8 0x686f8(427768)
Total Size: Size: 0x115611c (18178332) bytes.
------------------------------
GC Heap Size: Size: 0x115611c (18178332) bytes.
There you see that you have heaps at 02aa1000 and 10061000.
With !DumpHeap 02aa1000 03836a30 you can dump the GC Heap segment.
!DumpHeap 02aa1000 03836a30
Address MT Size
...
037b7b88 5b408350 56
037b7bc0 60876d60 32
037b7be0 5b40838c 20
037b7bf4 5b408350 56
037b7c2c 5b408728 20
037b7c40 5fe4506c 16
037b7c50 60876d60 32
037b7c70 5b408728 20
037b7c84 5fe4506c 16
037b7c94 00135de8 519112 Free
0383685c 5b408728 20
03836870 5fe4506c 16
03836880 608c55b4 96
....
There you find your free memory blocks, which were objects that have already been GCed. You can dump the surrounding objects (the output is sorted by address) to find out whether they are pinned or have other unusual properties.
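For example, for the large free block at 037b7c94 in the dump above, the ListNearObj command mentioned earlier can be pointed at that address to show its neighbors (output omitted, since it depends on your dump):
0:021> !ListNearObj 037b7c94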
You have 50 MB of RAM as free space. This is not good.
Since .NET allocates blocks of 16 MB from the process, we do indeed have a fragmentation issue.
There are plenty of reasons for fragmentation to occur in .NET.
Have a look here and here.
In your case it is possibly pinning, as 53304764 / 243010 works out to about 219.35 bytes per object - much smaller than LOH objects.

Resources