Earlier I had only one type of log for an index, but recently I changed the log pattern. Now my grok pattern looks like
grok {
match => { "message" => "%{DATA:created_timestamp},%{DATA:request_id},%{DATA:tenant},%{DATA:username},%{DATA:job_code},%{DATA:stepname},%{DATA:quartz_trigger_timestamp},%{DATA:execution_level},%{DATA:facility_name},%{DATA:channel_code},%{DATA:status},%{DATA:current_step_time_ms},%{DATA:total_time_ms},\'%{DATA:error_message}\',%{DATA:tenant_mode},%{GREEDYDATA:channel_src_code},\'%{GREEDYDATA:jobSpecificMetaData}\'" }
match => { "message" => "%{DATA:created_timestamp},%{DATA:request_id},%{DATA:tenant},%{DATA:username},%{DATA:job_code},%{DATA:stepname},%{DATA:quartz_trigger_timestamp},%{DATA:execution_level},%{DATA:facility_name},%{DATA:channel_code},%{DATA:status},%{DATA:current_step_time_ms},%{DATA:total_time_ms},%{DATA:error_message},%{DATA:tenant_mode},%{GREEDYDATA:channel_src_code}" }
}
and sample logs are
2023-01-11 15:16:20.932,edc71ada-62f5-46be-99a4-3c8b882a6ef0,geocommerce,null,UpdateInventoryTask,MQ_TO_EVENTHANDLER,Wed Jan 11 15:16:13 IST 2023,TENANT,null,AMAZON_URBAN_BASICS,SUCCESSFUL,5903,7932,'',LIVE,AMAZON_IN,'{"totalCITCount":0}'
2023-01-11 15:16:29.368,fedca039-e834-4393-bbaa-e1903c3c92e6,bellacasa,null,UpdateInventoryTask,MQ_TO_EVENTHANDLER,Wed Jan 11 15:16:03 IST 2023,TENANT,null,FLIPKART_SMART,SUCCESSFUL,24005,26368,'',LIVE,FLIPKART_SMART,'{"totalCITCount":0}'
2023-01-11 15:16:31.684,762b8b46-2d21-437b-83fc-a1cc40737c84,ishitaknitfab,null,UpdateInventoryTask,MQ_TO_EVENTHANDLER,Wed Jan 11 15:15:48 IST 2023,TENANT,null,FLIPKART_SMART,SUCCESSFUL,41442,43684,'',LIVE,FLIPKART_SMART,'{"totalCITCount":0}'
2023-01-11 15:15:58.739,1416f5f2-a67b-416a-8e38-6bd7de457f6a,kapiva,null,PickingReplanner,MQ_TO_JOBSERVICE,Wed Jan 11 15:15:56 IST 2023,FACILITY,Non Sellable Bengaluru Return,null,SUCCESSFUL,393,2739,Task completed successfully,LIVE,null
2023-01-11 15:15:58.743,1416f5f2-a67b-416a-8e38-6bd7de457f6a,kapiva,null,PickingReplanner,MQ_TO_JOBSERVICE,Wed Jan 11 15:15:56 IST 2023,FACILITY,Delhi Main,null,SUCCESSFUL,371,2743,Task completed successfully,LIVE,null
2023-01-11 15:15:58.744,1416f5f2-a67b-416a-8e38-6bd7de457f6a,kapiva,null,PickingReplanner,MQ_TO_JOBSERVICE,Wed Jan 11 15:15:56 IST 2023,FACILITY,Bengaluru D2C,null,SUCCESSFUL,388,2744,Task completed successfully,LIVE,null
Logstash has to process approximately 150000 events in 5 minutes for this index and approx. 400000 events for the other index.
Now whenever I try to change the grok, the CPU usage of the Logstash server reaches 100%.
I don't know how to optimize my grok. Can anyone help me with this?
The first step to improve grok would be to anchor the patterns. grok is slow when it fails to match, not when it matches. More details on how much anchoring matters can be found in this blog post from Elastic.
The second step would be to define a custom pattern to use instead of DATA, such as
pattern_definitions => { "NOTCOMMA" => "[^,]*" }
which will prevent DATA from attempting to consume more than one field when the overall match is going to fail.
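For example, a minimal sketch of the reworked filter (keeping your field names, assuming none of the unquoted fields ever contains a comma, and trying the longer pattern first) could look like this:
filter {
  grok {
    pattern_definitions => { "NOTCOMMA" => "[^,]*" }
    match => { "message" => [
      "^%{NOTCOMMA:created_timestamp},%{NOTCOMMA:request_id},%{NOTCOMMA:tenant},%{NOTCOMMA:username},%{NOTCOMMA:job_code},%{NOTCOMMA:stepname},%{NOTCOMMA:quartz_trigger_timestamp},%{NOTCOMMA:execution_level},%{NOTCOMMA:facility_name},%{NOTCOMMA:channel_code},%{NOTCOMMA:status},%{NOTCOMMA:current_step_time_ms},%{NOTCOMMA:total_time_ms},\'%{DATA:error_message}\',%{NOTCOMMA:tenant_mode},%{NOTCOMMA:channel_src_code},\'%{GREEDYDATA:jobSpecificMetaData}\'$",
      "^%{NOTCOMMA:created_timestamp},%{NOTCOMMA:request_id},%{NOTCOMMA:tenant},%{NOTCOMMA:username},%{NOTCOMMA:job_code},%{NOTCOMMA:stepname},%{NOTCOMMA:quartz_trigger_timestamp},%{NOTCOMMA:execution_level},%{NOTCOMMA:facility_name},%{NOTCOMMA:channel_code},%{NOTCOMMA:status},%{NOTCOMMA:current_step_time_ms},%{NOTCOMMA:total_time_ms},%{NOTCOMMA:error_message},%{NOTCOMMA:tenant_mode},%{GREEDYDATA:channel_src_code}$"
    ] }
  }
}
With NOTCOMMA and the ^/$ anchors, a line that does not fit the first pattern is rejected after a handful of comparisons instead of triggering the expensive backtracking that DATA causes.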
I have generated a DXF file, but when I open it with AutoCAD, it crashes AutoCAD and shows the message "ID 11 incorrect: already used".
the dxf content: https://github.com/tarikjabiri/dxf/blob/dev/examples/latest.dxf
I can't spot the problem; I have been trying to solve it for 3 days.
I think something is wrong with the APPID, because it holds the ID 11 (the handle, in DXF terms).
I have a working DXF: https://github.com/tarikjabiri/dxf/blob/dev/examples/Minimal_DXF_AC1021.dxf
Thanks in advance.
There are two minor issues:
DIMSTYLE table
0
TABLE
2
DIMSTYLE
105 <<< handle group code of the table "head" is 5 as usual
8
100
AcDbSymbolTable
100
AcDbDimStyleTable
70
1
0
DIMSTYLE
5 <<< handle group code of the table entry is 105
12
330
8
100
AcDbSymbolTableRecord
100
AcDbDimStyleTableRecord
2
STANDARD
70
0
40
1
BLOCK_RECORD table entries for *MODEL_SPACE and *PAPER_SPACE
0
TABLE
2
BLOCK_RECORD
5
9
330
0
100
AcDbSymbolTable
70
2
0
BLOCK_RECORD
5
14
330
9
100
AcDbSymbolTableRecord
100
AcDbRegAppTableRecord <<< subclass marker string "AcDbBlockTableRecord"
2
*MODEL_SPACE
70
0
70
0
280
After these changes the file opens in Autodesk DWG TrueView 2022.
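For reference, a sketch of the corrected DIMSTYLE section (keeping the handle values from the file above, with the two group codes swapped) would be:
0
TABLE
2
DIMSTYLE
5
8
100
AcDbSymbolTable
100
AcDbDimStyleTable
70
1
0
DIMSTYLE
105
12
330
8
100
AcDbSymbolTableRecord
100
AcDbDimStyleTableRecord
2
STANDARD
Likewise, the subclass marker of the BLOCK_RECORD entries should read AcDbBlockTableRecord instead of AcDbRegAppTableRecord.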
I have been reading a post already written on this subject, but I cannot plot my data because an error appears.
%data.txt:
"Hf" 2233 13.31
"Ir" 2466 22.56
"B_4C" 2763 2.52
"Y_2O_3" 2425 5.03
"Nb" 2477 8.57
"NbN" 2573 8.47
"SrZrO_3" 2700 5.1
"SiC" 2830 3.16
"ZrO_2" 2715 5.68
"Mo" 2623 10.28
"VC" 2810 5.77
"TiB_2" 3230 4.52
"HfO_2" 2758 9.68
"UO_2" 2867 10.97
"TiN" 2930 5.22
"TiC" 3160 4.93
"ZrB_2" 3246 6.085
"ZrN" 2952 7.09
"TaB_2" 3140 11.15
"C" 3549 2.27
"ZrC" 3540 6.73
"ThO_2" 3390 10
"HfB_2" 3250 10.5
"HfN" 3305 13.8
"NbC" 3608 7.82
"Re" 3186 21.02
"W" 3422 19.25
"Ta" 3017 16.65
"WC" 2830 15.63
"TaC" 3880 14.6
"HfC" 3890 12.2
%code:
set terminal postscript enhanced color "Times-Roman" 20
set output "TemperatureVsDensity.eps"
set xlabel "Temperature [degrees]}"
set ylabel " Density [g/cc]"
plot "data.txt" using 2:3 , "" u 2:3:1 w labels rotate offset 1
Can someone help me with this?
Thanks in advance :D
It would be helpful if you could post the error message you got when trying to run your code. I copied your data and code and ran gnuplot in the terminal. gnuplot gives the following warning (not an error):
"code", line 6: warning: enhanced text mode parser - ignoring spurious }
which tells you that you should remove the spurious } in your set xlabel statement (as pointed out in the comments already). However, that does not prevent the figure from being generated.
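For completeness, a cleaned-up version of the script (with the stray brace removed; I have also put a space between color and the font name) would be:
set terminal postscript enhanced color "Times-Roman" 20
set output "TemperatureVsDensity.eps"
set xlabel "Temperature [degrees]"
set ylabel "Density [g/cc]"
plot "data.txt" using 2:3, "" u 2:3:1 with labels rotate offset 1
This plots the points from columns 2 and 3 and, in the second plot element, uses column 1 as rotated labels next to each point.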
Our IIS server generates logs in the following format:
Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) cs(Referer) sc-status sc-substatus sc-win32-status time-taken
2018-09-13 08:47:52 ::1 GET / - 80 U:papl ::1 Mozilla/5.0+(Windows+NT+10.0;+Win64;+x64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/68.0.3440.106+Safari/537.36 - 200 0 0 453
2018-09-13 08:47:52 ::1 GET /api/captcha.aspx rnd=R43YM 80 U:papl ::1 Mozilla/5.0+(Windows+NT+10.0;+Win64;+x64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/68.0.3440.106+Safari/537.36 http://localhost/ 200 0 0 36
Now I want to configure Logstash so that it creates separate columns for the IP, the request method (GET or POST), and the page name (here /api/captcha.aspx).
But it is creating a single field named "message" in Elasticsearch and storing the whole value in it.
So what changes should I make in Logstash to create separate fields in Elasticsearch for IP, RequestMethod (POST/GET), and PageName?
Currently, I am using the following filter:
match => {"message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration} %{URIPATH:uriStem} %{NOTSPACE:uriQuery} %{NUMBER:port} %{NOTSPACE:username} %{IPORHOST:clientIP} %{NOTSPACE:protocolVersion} %{NOTSPACE:userAgent} %{NOTSPACE:cookie} %{NOTSPACE:referer} %{NOTSPACE:requestHost} %{NUMBER:response} %{NUMBER:subresponse} %{NUMBER:win32response} %{NUMBER:bytesSent} %{NUMBER:bytesReceived} %{NUMBER:timetaken}"
With this, it creates only the message field and stores all the values in that single field.
Please help me.
NB: to test your pattern, you can use this site, which saves a lot of time when working with patterns.
The pattern you're using is too long. If you just want the IP, the request method and the page name, you should only extract what you need. In addition, a shorter pattern will be quicker to execute and more resilient to change.
This filter correctly extracts what you asked for:
match => {"message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:PageName}"}
You can check this pattern against the logs you provided on the site linked above. I also tested the filter with Logstash:
filter {
grok { match => {"message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:PageName}"} }
}
output {
stdout { codec => json }
}
With this input:
2018-09-16 04:11:52 W3SVC10 webserver 107.6.166.194 GET /axestrack/homepagedata/ uname=satish34&pwd=3445&panelid=1 80 - 223.188.235.131 HTTP/1.1 Dalvik/1.6.0+(Linux;+U;+Android+4.4.4;+2014818+MIUI/V7.5.2.0.KHJMIDE) - - vehicletrack.biz 200 0 0 730 229 413
I'm getting this result:
{
"client":"107.6.166.194",
"method":"GET",
"#version":"1",
"host":"frsred-0077",
"message":"2018-09-16 04:11:52 W3SVC10 webserver 107.6.166.194 GET /axestrack/homepagedata/ uname=satish34&pwd=3445&panelid=1 80 - 223.188.235.131 HTTP/1.1 Dalvik/1.6.0+(Linux;+U;+Android+4.4.4;+2014818+MIUI/V7.5.2.0.KHJMIDE) - - vehicletrack.biz 200 0 0 730 229 413\r",
"#timestamp":"2018-09-18T08:13:23.539Z",
"PageName":"/axestrack/homepagedata/"
}
I'm using the Elastic stack. A lot of messages are parsed by my Logstash, and I've decided to add some additional rules.
I've installed the syslog_pri plugin in my Logstash, because I want to create a mapping for my syslog severity levels.
All of my messages have syslog_pri values according to RFC 3164, where error messages have syslog_pri values of (3, 11, 19, ..., 187).
Well, I have two problems:
1) It's not very convenient, because querying via Kibana is clumsy. When I want to filter errors, it looks like:
syslog_pri: (3 OR 11 OR 19 OR 27 OR 35 OR 43 OR 51 OR 59 OR 67 OR 75 OR 83 OR 91 OR 99 OR 107 OR 115 OR 123 OR 131 OR 139 OR 147 OR 155 OR 163 OR 171 OR 179 OR 187)
but it would be much easier with the syslog_pri plugin. I expect to have something like this:
syslog_pri: "error"
Is it possible to create this mapping somehow?
2) I want to change this syslog_pri value for some specific messages.
For example, I'm catching a message like "Hello world" and want to change the severity from 14 (info message) to 11 (error message).
I'm doing something like this:
filter {
grok {
match => { "message" => "..." }
}
syslog_pri { }
if "Hello world" in [message]
{
mutate { syslog_pri => 11 }
}
}
But this failed with an error:
logstash.filters.mutate - Unknown setting 'syslog_pri' for mutate
Suggestions?
To use the syslog_pri filter, you simply need to have a field containing the priority value, which will then be decoded by the filter. If you already have a field named syslog_pri, then using it is as simple as putting
syslog_pri { }
in your logstash configuration.
This plugin will create 4 additional fields which will contain the decoded syslog_pri information:
syslog_facility
syslog_severity
syslog_facility_code
syslog_severity_code
As for mutating a field, the syntax is as follows:
mutate {
replace => { "syslog_pri" => "11"}
}
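Putting the two parts together, a minimal sketch of the filter could look like the following (the grok pattern here is only an illustration, assuming the raw lines start with the usual RFC 3164 <PRI> prefix; adapt it to however you already extract syslog_pri):
filter {
  grok {
    # hypothetical pattern: extract the numeric priority from "<14>..." lines
    match => { "message" => "<%{NONNEGINT:syslog_pri}>%{GREEDYDATA:syslog_message}" }
  }
  # override the priority for specific messages *before* decoding it,
  # so the decoded severity/facility fields reflect the new value
  if "Hello world" in [message] {
    mutate { replace => { "syslog_pri" => "11" } }
  }
  syslog_pri { }
}
Since syslog_severity holds the textual label, the Kibana query from your first point should then become as simple as syslog_severity: "error".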