Garbage Collection tuning in Java 8 using G1GC

We are trying to improve our application's performance and are running performance tests.
We have a Linux VM with 4 cores and 16 GB of memory. The application has more than 100 users, who are complaining of slowness.
Here is the performance tuning we have done so far:
<heap size="8192m" max-size="8192m"/>
<jvm-options>
<option value="-XX:+UseG1GC"/>
<option value="-XX:+UseStringDeduplication"/>
<option value="-verbose:gc"/>
<option value="-XX:+PrintGCDetails"/>
<option value="-XX:+PrintGCDateStamps"/>
<option value="-XX:+PrintGCTimeStamps"/>
<option value="-XX:+PrintGCApplicationStoppedTime"/>
<option value="-XX:+UseGCLogFileRotation"/>
<option value="-XX:NumberOfGCLogFiles=5"/>
<option value="-XX:GCLogFileSize=3M"/>
<option value="-XX:-TraceClassUnloading"/>
<option value="-XX:+HeapDumpOnOutOfMemoryError"/>***
We are seeing an 11-second garbage collection time, as shown below:
2021-09-14T18:43:27.186+1000: 14806.057: [GC pause (G1 Evacuation Pause) (young), 0.1189389 secs]
[Parallel Time: 71.8 ms, GC Workers: 4]
[GC Worker Start (ms): Min: 14806057.7, Avg: 14806057.7, Max: 14806057.7, Diff: 0.1]
[Ext Root Scanning (ms): Min: 5.4, Avg: 6.9, Max: 8.9, Diff: 3.5, Sum: 27.5]
[Update RS (ms): Min: 7.2, Avg: 9.0, Max: 9.7, Diff: 2.6, Sum: 35.8]
[Processed Buffers: Min: 30, Avg: 33.2, Max: 38, Diff: 8, Sum: 133]
[Scan RS (ms): Min: 0.3, Avg: 0.3, Max: 0.3, Diff: 0.1, Sum: 1.2]
[Code Root Scanning (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.1]
[Object Copy (ms): Min: 54.7, Avg: 55.3, Max: 56.0, Diff: 1.4, Sum: 221.3]
[Termination (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
[Termination Attempts: Min: 1, Avg: 1.0, Max: 1, Diff: 0, Sum: 4]
[GC Worker Other (ms): Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.3]
[GC Worker Total (ms): Min: 71.5, Avg: 71.6, Max: 71.6, Diff: 0.1, Sum: 286.2]
[GC Worker End (ms): Min: 14806129.2, Avg: 14806129.3, Max: 14806129.3, Diff: 0.1]
[Code Root Fixup: 0.0 ms]
[Code Root Purge: 0.0 ms]
[String Dedup Fixup: 16.8 ms, GC Workers: 4]
[Queue Fixup (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.1]
[Table Fixup (ms): Min: 16.0, Avg: 16.3, Max: 16.6, Diff: 0.6, Sum: 65.3]
[Clear CT: 0.5 ms]
[Other: 29.8 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 26.7 ms]
[Ref Enq: 0.9 ms]
[Redirty Cards: 0.1 ms]
[Humongous Register: 0.2 ms]
[Humongous Reclaim: 0.1 ms]
[Free CSet: 1.3 ms]
[Eden: 4072.0M(4072.0M)->0.0B(4096.0M) Survivors: 98304.0K->86016.0K Heap: 7267.0M(8192.0M)->3187.9M(8192.0M)]
[Times: user=0.38 sys=0.00, real=0.12 secs]
How do we bring it down? We are planning to add 32 GB of memory to the server and set the heap size to 20 GB (min and max).

The total GC pause time in that log entry is 0.1189389 secs, i.e. about 118.9 ms, not 11 seconds.
The -XX:MaxGCPauseMillis parameter in G1 controls the pause-time target; the default is 200 ms. If you want to reduce pause times, you can consider setting MaxGCPauseMillis to the desired value (note that it is a goal, not a hard guarantee).
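For example, using the same <jvm-options> format as the configuration above, a lower pause target could be added like this (100 ms is only an illustrative value; the right target has to be validated under your actual load):
<option value="-XX:MaxGCPauseMillis=100"/> <!-- illustrative pause target; tune under load -->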

Related

How to convert sparse matrix to dense matrix in Julia

How do you convert a sparse matrix to a dense matrix in Julia? According to this I should be able to use full or Matrix; however, full is evidently not available in the SparseArrays module, and when I try to use Matrix:
using SparseArrays

I = []
J = []
A = []
for i in 1:3
push!(I, i)
push!(J, i^2)
push!(A, sqrt(i))
end
sarr = sparse(I, J, A, 10, 10)
arr = Matrix(sarr)
I get this error:
Exception has occurred: MethodError
MethodError: no method matching zero(::Type{Any})
It is enough to do collect(sarr) or Matrix(sarr).
Note, however, that your code uses untyped containers, which is not recommended. Indices in arrays are Ints, so it should be:
I = Int[]
J = Int[]
A = Float64[]
for i in 1:3
push!(I, i)
push!(J, i^2)
push!(A, sqrt(i))
end
sarr = sparse(I, J, A, 10, 10)
Now you can do:
julia> collect(sarr)
10×10 Matrix{Float64}:
1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 1.41421 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.73205 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
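With the typed containers, Matrix(sarr) should also work now, because zero(Float64) is defined (the missing zero(::Type{Any}) method is what caused the original MethodError):
julia> Matrix(sarr) == collect(sarr)
true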

Why is heap usage smaller than the sum of eden and survivor space after a young GC?

My jvm options:
-verbose:gc -Xmx200M -Xmn40M -XX:+PrintTenuringDistribution -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseG1GC -XX:NewSize=40m -XX:MaxTenuringThreshold=1 -XX:-UseAdaptiveSizePolicy
The following is gc log:
Desired survivor size 2621440 bytes, new threshold 1 (max 1)
- age 1: 792 bytes, 792 total
, 0.0012861 secs]
[Parallel Time: 0.4 ms, GC Workers: 8]
[GC Worker Start (ms): Min: 120214.2, Avg: 120214.3, Max: 120214.3, Diff: 0.2]
[Ext Root Scanning (ms): Min: 0.1, Avg: 0.2, Max: 0.3, Diff: 0.2, Sum: 1.3]
[Update RS (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.3]
[Processed Buffers: Min: 0, Avg: 0.6, Max: 1, Diff: 1, Sum: 5]
[Scan RS (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
[Code Root Scanning (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
[Object Copy (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.1]
[Termination (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.4]
[Termination Attempts: Min: 1, Avg: 1.0, Max: 1, Diff: 0, Sum: 8]
[GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.1]
[GC Worker Total (ms): Min: 0.2, Avg: 0.3, Max: 0.4, Diff: 0.2, Sum: 2.2]
[GC Worker End (ms): Min: 120214.5, Avg: 120214.5, Max: 120214.6, Diff: 0.0]
[Code Root Fixup: 0.0 ms]
[Code Root Purge: 0.0 ms]
[Clear CT: 0.1 ms]
[Other: 0.7 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 0.4 ms]
[Ref Enq: 0.0 ms]
[Redirty Cards: 0.2 ms]
[Humongous Register: 0.0 ms]
[Humongous Reclaim: 0.0 ms]
[Free CSet: 0.0 ms]
[Eden: 39.0M(39.0M)->0.0B(39.0M) Survivors: 1024.0K->1024.0K Heap: 39.9M(200.0M)->670.6K(200.0M)]
[Times: user=0.00 sys=0.01, real=0.00 secs]
My question is: why is the heap usage after the collection (670.6K) smaller than the sum of eden and survivors (0 + 1024K = 1024K) in the GC log line [Eden: 39.0M(39.0M)->0.0B(39.0M) Survivors: 1024.0K->1024.0K Heap: 39.9M(200.0M)->670.6K(200.0M)]?

Gimp index-colors vs ImageMagick in terms of image size?

I made a small image using ImageMagick. The image has only 2 colors (alpha plus one other color), so I want to turn it into indexed-color mode.
Once I finish the image, I run the command
convert file.png +dither -colors 2 file.png
The resulting image is 169 bytes. Now, when I open the image with GIMP, I make no change and just export it as .png over the old image; in the "Export Image as PNG" dialog I untick everything and keep the compression level at 9.
The resulting image is 109 bytes. What makes this difference? At first I thought it would be metadata or something like that, so I tried ImageMagick's -strip option, but the image stayed at 169 bytes. So, is it because of the compression algorithm? What compression algorithm does GIMP use? How can I replicate that with ImageMagick?
The reason I want to do it with ImageMagick is so I can automate the process, as I plan to do this on 100+ images.
EDIT:
Output of identify -verbose imagemagickfile.png
Image: ic_stage.png
Format: PNG (Portable Network Graphics)
Mime type: image/png
Class: DirectClass
Geometry: 176x176+0+0
Units: Undefined
Type: PaletteAlpha
Endianess: Undefined
Colorspace: sRGB
Depth: 8-bit
Channel depth:
red: 4-bit
green: 8-bit
blue: 1-bit
alpha: 1-bit
Channel statistics:
Pixels: 30976
Red:
min: 0 (0)
max: 102 (0.4)
mean: 60.0126 (0.235343)
standard deviation: 50.1973 (0.196852)
kurtosis: -1.87106
skewness: -0.359086
entropy: 0.977354
Green:
min: 0 (0)
max: 150 (0.588235)
mean: 88.2538 (0.346093)
standard deviation: 73.8196 (0.289489)
kurtosis: -1.87106
skewness: -0.359086
entropy: 0.977354
Blue:
min: 0 (0)
max: 255 (1)
mean: 150.031 (0.588359)
standard deviation: 125.493 (0.492131)
kurtosis: -1.87106
skewness: -0.359086
entropy: 0.977354
Alpha:
min: 0 (0)
max: 255 (1)
mean: 150.031 (0.588359)
standard deviation: 125.493 (0.492131)
kurtosis: -1.87106
skewness: 0.359086
entropy: 0.977354
Image statistics:
Overall:
min: 0 (0)
max: 255 (1)
mean: 100.817 (0.395359)
standard deviation: 99.3306 (0.389532)
kurtosis: -1.05614
skewness: 0.476254
entropy: 0.977354
Alpha: none #00000000
Colors: 2
Histogram:
12751: ( 0, 0, 0, 0) #00000000 none
18225: (102,150,255,255) #6696FFFF srgba(102,150,255,1)
Rendering intent: Perceptual
Gamma: 0.454545
Chromaticity:
red primary: (0.64,0.33)
green primary: (0.3,0.6)
blue primary: (0.15,0.06)
white point: (0.3127,0.329)
Background color: srgba(255,255,255,1)
Border color: srgba(223,223,223,1)
Matte color: grey74
Transparent color: none
Interlace: None
Intensity: Undefined
Compose: Over
Page geometry: 176x176+0+0
Dispose: Undefined
Iterations: 0
Compression: Zip
Orientation: Undefined
Properties:
date:create: 2015-10-26T11:27:21+02:00
date:modify: 2015-10-26T11:27:21+02:00
png:bKGD: chunk was found (see Background color, above)
png:cHRM: chunk was found (see Chromaticity, above)
png:IHDR.bit-depth-orig: 2
png:IHDR.bit_depth: 2
png:IHDR.color-type-orig: 3
png:IHDR.color_type: 3 (Indexed)
png:IHDR.interlace_method: 0 (Not interlaced)
png:IHDR.width,height: 176, 176
png:PLTE.number_colors: 3
png:sRGB: intent=0 (Perceptual Intent)
png:tRNS: chunk was found
signature: 219cc9a10e56ed2940dc1d92e37bec98d49d12c1bd5f2adfd2bcc91fd7b56f85
Artifacts:
filename: ic_stage.png
verbose: true
Tainted: False
Filesize: 218B
Number pixels: 31K
Pixels per second: 0B
User time: 0.000u
Elapsed time: 0:01.000
Version: ImageMagick 6.9.2-0 Q16 x86_64 2015-08-18 http://www.imagemagick.org
Output of identify -verbose gimpfile.png
Image: ic_stage_2.png
Format: PNG (Portable Network Graphics)
Mime type: image/png
Class: DirectClass
Geometry: 176x176+0+0
Units: Undefined
Type: PaletteAlpha
Endianess: Undefined
Colorspace: sRGB
Depth: 8-bit
Channel depth:
red: 4-bit
green: 8-bit
blue: 1-bit
alpha: 1-bit
Channel statistics:
Pixels: 30976
Red:
min: 102 (0.4)
max: 255 (1)
mean: 164.981 (0.646985)
standard deviation: 75.296 (0.295278)
kurtosis: -1.87106
skewness: 0.359086
entropy: 0.977354
Green:
min: 150 (0.588235)
max: 255 (1)
mean: 193.222 (0.757735)
standard deviation: 51.6737 (0.202642)
kurtosis: -1.87106
skewness: 0.359086
entropy: 0.977354
Blue:
min: 255 (1)
max: 255 (1)
mean: 255 (1)
standard deviation: 0 (0)
kurtosis: 0
skewness: 0
entropy: -nan
Alpha:
min: 0 (0)
max: 255 (1)
mean: 150.031 (0.588359)
standard deviation: 125.493 (0.492131)
kurtosis: -1.87106
skewness: 0.359086
entropy: 0.977354
Image statistics:
Overall:
min: 0 (0)
max: 255 (1)
mean: 179.543 (0.70409)
standard deviation: 77.6019 (0.304321)
kurtosis: 1.86389
skewness: -1.46287
entropy: -nan
Alpha: srgba(255,255,255,0) #FFFFFF00
Colors: 2
Histogram:
18225: (102,150,255,255) #6696FFFF srgba(102,150,255,1)
12751: (255,255,255, 0) #FFFFFF00 srgba(255,255,255,0)
Rendering intent: Perceptual
Gamma: 0.454545
Chromaticity:
red primary: (0.64,0.33)
green primary: (0.3,0.6)
blue primary: (0.15,0.06)
white point: (0.3127,0.329)
Background color: white
Border color: srgba(223,223,223,1)
Matte color: grey74
Transparent color: none
Interlace: None
Intensity: Undefined
Compose: Over
Page geometry: 176x176+0+0
Dispose: Undefined
Iterations: 0
Compression: Zip
Orientation: Undefined
Properties:
date:create: 2015-10-26T11:27:52+02:00
date:modify: 2015-10-26T11:27:52+02:00
png:IHDR.bit-depth-orig: 1
png:IHDR.bit_depth: 1
png:IHDR.color-type-orig: 3
png:IHDR.color_type: 3 (Indexed)
png:IHDR.interlace_method: 0 (Not interlaced)
png:IHDR.width,height: 176, 176
png:PLTE.number_colors: 2
png:sRGB: intent=0 (Perceptual Intent)
png:tRNS: chunk was found
signature: a22844c38c7ef3a612d94c2d3b9d1be29bb9d5e2f897f87c92947946fe6bc868
Artifacts:
filename: ic_stage_2.png
verbose: true
Tainted: False
Filesize: 138B
Number pixels: 31K
Pixels per second: 30.976GB
User time: 0.000u
Elapsed time: 0:01.000
Version: ImageMagick 6.9.2-0 Q16 x86_64 2015-08-18 http://www.imagemagick.org
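Looking at the two identify dumps, the structural difference is in the PNG encoding rather than the pixel data: GIMP writes a 2-entry palette at 1-bit depth (png:PLTE.number_colors: 2, png:IHDR.bit_depth: 1), while ImageMagick keeps a 3-entry palette at 2-bit depth. A rough sketch of asking ImageMagick for the same encoding, assuming the png:color-type and png:bit-depth defines behave as documented in this ImageMagick build, would be:
# force 1-bit indexed PNG output; verify the result with identify -verbose
convert file.png -strip +dither -colors 2 \
        -define png:color-type=3 -define png:bit-depth=1 file.png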

Create Image from Text File RGB Data in Matlab

I have a text file with RGB data in the form of:
[Pixel 0,0] [Pixel 1,0] [Pixel 2,0]...
[Pixel 0,1] [Pixel 1,1] [Pixel 2,2]...
...
With an input of:
0.0 0.0 0.0 <-- this would be Pixel 0,0
1.0 0.0 0.0
1.0 0.9 0.0
I can create the flag of Germany in size 3x1 with:
%load the data to myData
Germany = reshape(myData,3,1,3);
image(Germany)
The 1px-wide pattern works well, as shown in the picture. However, the goal is to be able to create multiple patterns, e.g. the Germany flag in 3x3 followed by the Romania flag in 3x3, or any other pattern of any length, and that is where I cannot find the proper way to reshape the matrix.
The input that should create the second example shown in picture is this:
|========= Germany Flag ==========| [ Blue ] [ Yellow ] [ Red ]
Black -> 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 0.9 0.0 1.0 0.0 0.0
Red -> 1.0 0.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 1.0 1.0 0.9 0.0 1.0 0.0 0.0
Yellow-> 1.0 0.9 0.0 1.0 0.9 0.0 1.0 0.9 0.0 0.0 0.0 1.0 1.0 0.9 0.0 1.0 0.0 0.0
Any help is appreciated
Update: As asked by Marcin, the input files are literally as I explained above.
This is the content of the GermanyRomania.txt file:
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 0.9 0.0 1.0 0.0 0.0
1.0 0.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 1.0 1.0 0.9 0.0 1.0 0.0 0.0
1.0 0.9 0.0 1.0 0.9 0.0 1.0 0.9 0.0 0.0 0.0 1.0 1.0 0.9 0.0 1.0 0.0 0.0
With that file I must create the 2nd pattern in the picture (Germany + Romania flag); it contains ALL the RGB info required to do it.
I don't think you can achieve what you want by simply using the reshape function.
We must take into account that Matlab stores matrices in column-major order (you can read more about it here).
Therefore, before we can use the reshape function, we must have the data matrix in the following format:
[Pixel 0,0]
[Pixel 0,1]
...
[Pixel 1,0]
[Pixel 1,1]
...
[Pixel n,n]
Here's a possible solution:
% data stores the input
height = size(data, 1);
width = size(data, 2);
vertical_data_cell = mat2cell(data, height, 3 * ones(1, width / 3))';
vertical_data = cell2mat(vertical_data_cell);
flags = reshape(vertical_data, height, width / 3, 3);
image(flags)
Note that we make the matrix transformation on lines 4 and 5.
And here is the result for the input you provided:
It also works with different heights.
Here's the input for the flags of Germany, Argentina and Portugal.
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.46 0.66 0.85 0.46 0.66 0.85 0.46 0.66 0.85
1.0 0.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0 1.0 1.0 1.0 0.98 0.75 0.29 1.0 1.0 1.0
1.0 0.9 0.0 1.0 0.9 0.0 1.0 0.9 0.0 0.46 0.66 0.85 0.46 0.66 0.85 0.46 0.66 0.85
0.0 1.0 0.0 0.0 1.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0
0.0 1.0 0.0 1.0 0.9 0.0 1.0 0.9 0.0 1.0 0.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0
0.0 1.0 0.0 0.0 1.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0
And this is the result:
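As a usage note, the data matrix in the snippet above can be read directly from the posted text file; dlmread and the file name below are just one possible way to load it (an assumption, not part of the original answer):
% Read the whitespace-separated RGB triplets posted in the question
data = dlmread('GermanyRomania.txt');   % height x (3 * number of pixel columns)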

Efficiently process all possible 2D array combinations in Perl

I have a 2D array containing numbers. I am attempting to work out the product of multiplying a single number from each sub-array with one from each of the other sub-arrays; I then need to do this for all possible combinations.
The aim is that I input a file of the frequency of individual events, and get an output of the probability of a particular series of these events happening, with one event from each set.
I have fudged together some code with the help of a previous question:
for my $aref ( getCartesian(@freq) ) {
    my $p = 1;
    foreach my $n (@$aref) {
        $p = $p * $n;
    }
    print "$p\n";
}

sub getCartesian {
    my @input = @_;
    my @ret = map [$_], @{ shift @input };
    for my $a2 (@input) {
        @ret = map {
            my $v = $_;
            map [@$v, $_], @$a2;
        } @ret;
    }
    return @ret;
}
where @freq is an array of arrays, such as:
@freq = (
    [0.1, 0.2, 0.3],
    [0.4, 0.5, 0.6],
    [0.7, 0.8, 0.9],
    # ... and ~20 more sub-arrays
);
This works fine for a small test file, but when I give it my required input of 24 sub-arrays with 3 items each, the generation of combinations is clearly far too intensive, with 3^24 possibilities.
I have run it on a machine with 22 GB RAM, and it maxed out after 4 minutes before any output.
My question is, how could I modify the code so that I can print out $p for each combination, without having to hold the whole set of combinations in memory, which kills it. I presume that time would be the only limiting factor for computation then, not resources.
Edit: a method in base Perl without packages would be great; I don't have admin rights on the HPC facility, sadly.
Set::CrossProduct lets you iterate through the Cartesian product so you don't have to store everything in memory:
use feature qw(say);   # say() needs this (or use v5.10)
use List::Util qw(reduce);
use Set::CrossProduct;

my @array = (
    [0.1, 0.2, 0.3],
    [0.4, 0.5, 0.6],
    [0.7, 0.8, 0.9],
);

my $iterator = Set::CrossProduct->new(\@array);

while (my $tuple = $iterator->get) {
    say '(', join(', ', @$tuple), '): ', reduce { $a * $b } @$tuple;
}
Outputs:
(0.1, 0.4, 0.7): 0.028
(0.1, 0.4, 0.8): 0.032
(0.1, 0.4, 0.9): 0.036
(0.1, 0.5, 0.7): 0.035
(0.1, 0.5, 0.8): 0.04
(0.1, 0.5, 0.9): 0.045
(0.1, 0.6, 0.7): 0.042
(0.1, 0.6, 0.8): 0.048
(0.1, 0.6, 0.9): 0.054
(0.2, 0.4, 0.7): 0.056
(0.2, 0.4, 0.8): 0.064
(0.2, 0.4, 0.9): 0.072
(0.2, 0.5, 0.7): 0.07
(0.2, 0.5, 0.8): 0.08
(0.2, 0.5, 0.9): 0.09
(0.2, 0.6, 0.7): 0.084
(0.2, 0.6, 0.8): 0.096
(0.2, 0.6, 0.9): 0.108
(0.3, 0.4, 0.7): 0.084
(0.3, 0.4, 0.8): 0.096
(0.3, 0.4, 0.9): 0.108
(0.3, 0.5, 0.7): 0.105
(0.3, 0.5, 0.8): 0.12
(0.3, 0.5, 0.9): 0.135
(0.3, 0.6, 0.7): 0.126
(0.3, 0.6, 0.8): 0.144
(0.3, 0.6, 0.9): 0.162
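Since the edit above asks for a method in base Perl without extra packages, here is a minimal sketch of the same constant-memory iteration using an odometer-style index counter (illustrative only; the variable names and the three-sub-array @freq example are assumptions, not part of the original post):
use strict;
use warnings;

# @freq holds the array of array references, as in the question.
my @freq = (
    [0.1, 0.2, 0.3],
    [0.4, 0.5, 0.6],
    [0.7, 0.8, 0.9],
);

my @idx = (0) x @freq;   # one index per sub-array, i.e. an "odometer"

COMBO: while (1) {
    # Product of one element taken from each sub-array at the current indexes.
    my $p = 1;
    $p *= $freq[$_][ $idx[$_] ] for 0 .. $#freq;
    print "$p\n";

    # Advance the odometer from the rightmost position; stop once it wraps fully.
    my $i = $#idx;
    while (1) {
        next COMBO if ++$idx[$i] <= $#{ $freq[$i] };
        $idx[$i] = 0;
        last COMBO if --$i < 0;
    }
}
Only the @idx array is held in memory, so the 3**24 combinations are streamed out one at a time rather than stored.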
