Grok for jrockit gc logging - elasticsearch

I have the JRockit GC log line below, which I want to index in Elasticsearch; I'm trying to write a grok pattern for it.
[memory ][Thu Feb 4 14:23:21 2016][01888] [OC#1119] 199979.563-199981.320: OC 1875383KB->1445390KB (2097152KB), 1.757 s, sum of pauses 1731.731 ms, longest pause 1731.731 ms.
The information I want is:
date : Thu Feb 4 14:23:21 2016
CurrentHeap: 1875383
Heap: 1445390
TotalHeap: 2097152
SumofPause: 1731.731
LongestPause: 1731.731
I started writing something like this:
[memory ][%{DATA:wls_timestamp}][%{DATA:discard1}][%{DATA:discard2}]
But I couldn't get any further. Can someone guide me on how to extract specific pieces of information from data like this?
Thanks
SR
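A pattern along these lines should get you the rest of the way. This is a sketch, not tested against a live Logstash instance; note that grok patterns are regular expressions, so the literal brackets and parentheses in the log line have to be escaped, and the field names below simply follow the ones you listed:

```
\[memory \]\[%{DATA:wls_timestamp}\]\[%{DATA:discard1}\] \[%{DATA:discard2}\] %{NUMBER}-%{NUMBER}: OC %{NUMBER:CurrentHeap}KB->%{NUMBER:Heap}KB \(%{NUMBER:TotalHeap}KB\), %{NUMBER:duration} s, sum of pauses %{NUMBER:SumofPause} ms, longest pause %{NUMBER:LongestPause} ms\.
```

From there a date filter (pattern "EEE MMM d HH:mm:ss yyyy") can turn wls_timestamp into @timestamp.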

Related

Why pprof heap inuse_space less than container_working_set_size?

I found in Grafana that my pod <***-qkcdl> occupies about 1.0G of container_memory_working_set_bytes and 1.4G of container_memory_rss:
pods memory usage in grafana
container_memory_rss of pod(max avg current)
and my query of container_memory_working_set_bytes and container_memory_rss is:
container_memory_working_set_bytes{k8s_cluster="$cluster", namespace="$dept", pod=~'$pod', container=~"$container"}
container_memory_cache{k8s_cluster="$cluster", namespace="$dept", pod=~'$pod', container=~"$container"}
then when I track the pprof heap inuse_space, it shows:
go tool pprof --inuse_space pprof http://{pod_ip}:8899/debug/pprof/heap
Fetching profile over HTTP from http://{pod_ip}:8899/debug/pprof/heap
pprof: read pprof: is a directory
Fetched 1 source profiles out of 2
Saved profile in {local_path}
File: {app}
Type: inuse_space
Time: Oct 15, 2021 at 6:38pm (CST)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof)
(pprof) top10
Showing nodes accounting for 335.36MB, 91.58% of 366.19MB total
Dropped 195 nodes (cum <= 1.83MB)
Showing top 10 nodes out of 77
...
So why does my Golang application use only 335.36MB of heap space while Grafana shows about 1.0G of working_set_size and 1.4G of RSS? What do "335.36MB", "1.0G" and "1.4G" each mean, and why do they differ?
PS: I know what the metrics mean, but the definitions alone don't explain the gap:
container_memory_rss: The amount of anonymous and swap cache memory (includes transparent hugepages).
container_memory_working_set_bytes: The amount of working set memory; this includes recently accessed memory, dirty memory, and kernel memory. Working set is <= "usage".

Why is a different timezone being printed with ParseInLocation?

I'm trying to parse a string into time with a user-specific timezone location -
// error handling skipped for brevity
loc, _ := time.LoadLocation("Asia/Kolkata")
now, _ := time.ParseInLocation("15:04", "10:10", loc)
fmt.Println("Location : ", loc, " Time : ", now)
The output I get on my system is - Location : Asia/Kolkata Time : 0000-01-01 10:10:00 +0553 HMT
Where did this HMT time zone come from?
If instead of parsing the time I use now := time.Now().In(loc), the timezone printed is correct - IST. Am I doing something wrong with timezone parsing, or is my system's timezone database faulty?
This may be a relic of the fact that your year for now is 0000, while time.Now() returns the current time. Timezones are weird, and certain locations haven't always used the same timezone. This is an excerpt from the IANA Time Zone Database:
# Zone NAME GMTOFF RULES FORMAT [UNTIL]
Zone Asia/Kolkata 5:53:28 - LMT 1854 Jun 28 # Kolkata
5:53:20 - HMT 1870 # Howrah Mean Time?
5:21:10 - MMT 1906 Jan 1 # Madras local time
5:30 - IST 1941 Oct
5:30 1:00 +0630 1942 May 15
5:30 - IST 1942 Sep
5:30 1:00 +0630 1945 Oct 15
5:30 - IST
If I am interpreting this correctly, it seems HMT was used from 1854 until 1870—I'm not exactly sure why this would cause it to be used for year 0000, which would seem to fall under LMT, but it's possible the Go database is slightly different (or it's possible that I'm misinterpreting the database). If you're concerned about the correct timezone being used for historical dates (like 0000) I'm not sure I can give a great answer, however for anything recent IST should be correctly used.

How to get exact NTP drift in OS X

I'm trying to get actual NTP drift on Macs connected to a local NTP server.
When reading the /var/db/ntp.drift file I get -37.521, which, converting PPM to milliseconds per day, gives about -3241ms/day of drift.
When using ntpq -c lpeer I get something like this:
remote refid st t when poll reach delay offset jitter
==============================================================================
*172-1-1-5.light 164.67.62.212 2 u 57 64 377 199.438 38.322 29.012
which means 38.322ms of drift.
Finally, sntp 172.1.1.5 outputs this:
2016 Jan 21 18:41:45.248591 +0.019244 +/- 0.022507 secs
which means 19.244ms of drift.
I'm confused which one of the approaches gives accurate NTP drift?
Have a look at ntpq -pcrv; that should give you all the info and more. If you need any of the output explained, edit your question and we will try to help you out.
Remember drift is specific to your box. It looks like your NTP server is either far away or you have a poor network link (based on your delay time). You might want to try a closer ntp server.
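A side note on the arithmetic: the three numbers measure different things. The drift file holds a frequency error in PPM (microseconds gained or lost per second of real time), so it converts to an offset accumulated per day, while ntpq's offset column and sntp report the current clock offset in ms/s respectively. A one-liner confirms the conversion used in the question:

```go
package main

import "fmt"

func main() {
	// PPM is a frequency error: microseconds per second of real time.
	ppm := -37.521
	// µs/s × 86400 s/day ÷ 1000 → milliseconds per day.
	msPerDay := ppm * 86400 / 1000
	fmt.Printf("%.1f ms/day\n", msPerDay) // -3241.8 ms/day
}
```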

Trying to convert time 19 digits

I have some 19-digit timestamps and am wondering what their format is. Any advice would be much appreciated. They come from the IBM application called Tealeaf:
4682158116698062848 = 12:00:00 AM
4682162239866667008 = 12:01:00 AM
4682166363035271168 = 12:02:00 AM
4682405506814312448 = 01:00:00 AM
If I have to use an application to convert it, then the choice would be PHP
This looks like a Microsoft OLE Automation timestamp. Here is Microsoft's page about it. It represents the number of days (24-hour periods) since midnight, 30 December 1899.
Looks like 64-bit (or larger) stamps. The most significant 28 or so bits, read as seconds, land about 788 days after some epoch (Jan 1, 1970?), which would make it Feb 28, 1972; or possibly it is some other encoding based on seconds. The least significant 36 bits are all 0. I would expect the values could reach pow(2,72), i.e. 22 decimal digits.

PDF::API2 image_png

I have a problem with PDF::API2. I need to edit an existing PDF and put in some images. The problem is that inserting 4 images takes around 20 seconds per image, so the whole process takes up to a minute and a half. Is there some magic I can do to speed up the process? The images are 1920 × 1080 and need to stay that size, because I need quality PDFs... So without further ado, here is the code:
#!/usr/bin/perl
use PDF::API2;
print "start ".(localtime)."\n";
$pdf = PDF::API2->open("sample.pdf");
$page = $pdf->openpage(1);
$page->mediabox(840,600);
$gfx=$page->gfx;
print "first image ".(localtime)."\n";
$first=$pdf->image_png("first.png");
print "inserting first image ".(localtime)."\n";
$gfx->image($first,134,106,510,281);
print "saving ".(localtime)." \n";
$pdf->saveas('new_file.pdf');
print "done ".(localtime)." \n";
The output i get:
start Mon Jun 3 10:46:31 2013
first image Mon Jun 3 10:46:31 2013
inserting first image Mon Jun 3 10:46:53 2013
saving Mon Jun 3 10:46:53 2013
done Mon Jun 3 10:46:57 2013
So the most time-consuming step is image_png, which takes 22 seconds in this example... Any help would be appreciated. Thanks
Update: if I use the same image converted to JPEG, it works flawlessly, in under a second. The problem is that I need the transparency of the PNG files.
The documentation for PDF::API2 explicitly says that operations on transparent .png files are slow, and recommends installing PDF::API2::XS or (IIRC) Image::PNG::libpng to speed it up.
