Is it possible to index only the matched log lines of grok in Logstash? - elasticsearch

I have a log file that contains both INFO and ERROR lines, and I tried to match only the INFO lines I need using the grok filter. This is what a few of the log lines from the file look like.
And this is what my grok looks like in my Logstash conf:
grok {
    patterns_dir => ["D:/elk_stack_for_ideabiz/elk_from_chamith/ELK_stack/logstash-5.1.1/bin/patterns"]
    match => {
        "message" => [
            "^TID\: \[0\] \[AM\] \[%{LOGTIMESTAMPTWO:logtimestamp}]%{REQUIREDDATAFORAPP:app_message}",
            "^TID\: \[0\] \[AM\] \[%{LOGTIMESTAMPTWO:logtimestamp}]%{REQUIREDDATAFORRESPONSESTATUS:response_message}"
        ]
    }
}
The pattern seems to be working fine. I could provide the pattern if required.
I've got two questions. First, I want only the grok-matched lines to be sent to the index, and to prevent Logstash from indexing the non-matched ones. Second, I want to prevent Logstash from including the message field in every single ES record.
I tried using overwrite under the match, but still no luck:
overwrite => [ "message" ]
All in all, what I need to see in my index are the messages (app_message and response_message from the match above), which should satisfy the two conditions above. Whereas now, all the lines are getting indexed.
Is it possible to do something like this? Or does Logstash index all of them by default?
Where am I going wrong? Any help would be appreciated.

Indexing only the lines you want is pretty straightforward.
By default, if grok fails to match anything it will add the tag _grokparsefailure.
Now what you want to do is check for that tag and then use the drop filter.
if "_grokparsefailure" in [tags] {
drop { }
}
sysadmin1138 mentioned that you can tell different grok filters apart by adding a custom failure tag to each one, e.g. tag_on_failure => ["_INFOFailure"].
For your second question, you can simply remove the field. Every filter has a remove_field option, but I use the mutate filter to make it explicit that I am manipulating the message.
mutate {
    remove_field => ["message"]
}

Related

Logstash - Breaking apart a field filtered with Grok into further fields

We have log messages that look like the following:
<TE CT="20:33:57.258102" Sv="N" As="CTWare.PerimeterService" T="PerimeterService" M="GetWallboard" TID="1" TN="" MID="" ID="" MSG="Exit method 'GetWallboard' took 00:00:00.0781247" />
Right now, we use the following Grok filter:
match => { "message" => "<TE CT=\"%{DATESTAMP:log_timestamp}\" Sv=%{QS:severity} As=%{QS:assembly} T=%{QS:T} M=%{QS:M} TID=%{QS:TID} TN=%{QS:TN} MID=%{QS:MID} ID=%{QS:ID} MSG=%{QS:log_raw} />" }
Inside the "MSG" / "log_raw" field, however, I want to try and extract the timestamp after "...took" into its own field. I was hoping to accomplish it by using a custom regex to extract "MSG" / "log_raw" up to a specific point, then another regex to capture the "took" timestamp and make a new field. But, when I test with online Grok debuggers I'm not having any luck. Is it even possible to do something like this?
Your CT field does not match DATESTAMP. It will match TIME. You can then use a second grok to pull the time from the [log_raw] field.
grok { match => { "[log_raw]" => "took %{TIME:someField}" } }
to get
"someField" => "00:00:00.0781247",
I would be tempted to use an xml filter to parse that [message], since it will adjust if there are ever additional or missing fields.
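A minimal sketch of that xml-filter approach (the target name te is arbitrary, and the nested field path assumes the filter stores the XML attributes under the target field):
filter {
    xml {
        source => "message"
        target => "te"    # attributes such as CT, Sv, MSG should end up as [te][CT], [te][Sv], [te][MSG]
    }
    grok {
        # pull the duration out of the parsed MSG attribute
        match => { "[te][MSG]" => "took %{TIME:someField}" }
    }
}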

Parsing log data through grok filter (logstash)

I'm pretty new to ELK, and I'm trying to parse my logs through Logstash. The logs are sent by Filebeat.
The logs look like:
2019.12.02 16:21:54.330536 [ 1 ] {} <Information> Application: starting up
2020.03.21 13:14:54.941405 [ 28 ] {xxx23xx-xxx23xx-4f0e-a3c6-rge3gu1} <Debug> executeQuery: (from [::ffff:192.0.0.0]:9999) blahblahblah
2020.03.21 13:14:54.941469 [ 28 ] {xxx23xx-xxx23xx-4f0e-a3c6-rge3gu0} <Error> executeQuery: Code: 62, e.displayText() = DB::Exception: Syntax error: failed at position 1
My default logstash configuration is:
input {
    beats {
        port => 5044
    }
}
output {
    elasticsearch {
        hosts => ["http://localhost:9200"]
        index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
}
In my log example, I extract fields like this:
timestamp
code
pipelineId
logLevel
program
message.
But I have several problems with my grok pattern. First, the timestamp in the log is quite different from a classic timestamp. How can I get it recognized?
I also have problems because the {} part can be empty or not. Can you give me some advice on what the correct grok pattern should be, please?
Also, in Kibana I see a lot of information, such as hostname, OS details, agent details, source, etc. I've read that these fields are ES metadata, so it's not possible to remove them. It's still a lot of information though; is there any way to "hide" these fields?
Grok pattern
Below is the pattern I constructed for your example log (tested in the Grok Debugger).
Is this the result you're looking for?
Logstash config
# logstash.conf
…
filter {
    grok {
        patterns_dir => ["./patterns"]
        match => {
            "message" => "%{CUSTOM_DATE:timestamp}\s\[\s%{BASE10NUM:code}\s\]\s\{%{GREEDYDATA:pipeline_id}\}\s\<%{GREEDYDATA:log_level}\>\s%{GREEDYDATA:program_message}"
        }
    }
}
…
Custom pattern
As you can see, I told grok to look for my custom patterns in the patterns directory which I put in the same location as my logstash.conf file. In this directory I created the custom.txt file with the following content:
# patterns/custom.txt
CUSTOM_DATE (?>\d\d){1,2}\.(?:0?[1-9]|1[0-2])\.(?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9])\s(?!<[0-9])(?:2[0123]|[01]?[0-9]):(?:[0-5][0-9])(?::(?:(?:[0-5][0-9]|60)(?:[:.,][0-9]+)?))(?![0-9])
I didn't write this long pattern on my own. I started with this line:
CUSTOM_DATE %{YEAR}\.%{MONTHNUM}\.%{MONTHDAY}\s%{TIME}
Then I replaced every predefined pattern with the corresponding regular expression (one by one, directly in the Grok Debugger). You can use %{YEAR}\.%{MONTHNUM}\.%{MONTHDAY}\s%{TIME} in your application, but the Grok Debugger interface will print every part separately.
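If you also want that value to drive the event's @timestamp, a date filter could follow the grok. This is only a sketch, assuming the field is called timestamp as above; the sub-second part may need adjusting, since the date filter keeps millisecond precision:
date {
    # on a successful parse, the value replaces @timestamp
    match => [ "timestamp", "yyyy.MM.dd HH:mm:ss.SSSSSS" ]
}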
Do you want to remove empty fields?
I don't know what you want to do in case the pipeline_id field is empty. If you want to remove it completely you can try adding the following lines to your config:
# logstash.conf
…
filter {
    grok {
        …
    }
    if [pipeline_id] == "" {
        mutate {
            remove_field => ["pipeline_id"]
        }
    }
}
…
Useful resources
Available patterns that I used in my pattern
What to do when part of one field got caught in a different pattern

Add extra value to field before sending to elasticsearch

I'm using Logstash, Filebeat and grok to send data from logs to my Elasticsearch instance. This is the grok configuration in the pipeline:
filter {
    grok {
        match => {
            "message" => "%{SYSLOGTIMESTAMP:messageDate} %{GREEDYDATA:messagge}"
        }
    }
}
This works fine; the issue is that messageDate is in the format Jan 15 11:18:25 and doesn't have a year entry.
Now, I actually know the year these files were created in, and I was wondering if it is possible to add that value to the field during processing, that is, somehow turn Jan 15 11:18:25 into 2016 Jan 15 11:18:25 before sending it to Elasticsearch (obviously without editing the files, which I could easily do, but that would be a temporary fix rather than a definitive solution).
I have tried googling whether this is possible, but no luck...
Valepu,
One way to modify the data in a field is using the ruby filter:
filter {
    ruby {
        code => "#your code here#"
    }
}
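For this particular case, a minimal sketch of such a ruby filter might look like this (assuming the grok above put the date into messageDate, and using the event get/set API of Logstash 5+):
filter {
    ruby {
        # prepend the known year to the parsed date (hypothetical example)
        code => "event.set('messageDate', '2016 ' + event.get('messageDate'))"
    }
}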
For more information, like how to get and set field values, here is the link:
https://www.elastic.co/guide/en/logstash/current/plugins-filters-ruby.html
If you have a separate field for the date as a string, you can use the Logstash date plugin:
https://www.elastic.co/guide/en/logstash/current/plugins-filters-date.html
If you don't have it as a separate field (as in this case), use this site to construct your own grok pattern:
http://grokconstructor.appspot.com/do/match
I made this to preprocess the values:
%{YEAR:yearVal} %{MONTH:monthVal} %{NUMBER:dayVal} %{TIME:timeVal} %{GREEDYDATA:message}
Not the most elegant, I guess, but you get the values in different fields. Using this, you can create your own date field and parse it with the date filter so you get a comparable value, or you can use these fields by themselves. I'm sure there is a better solution; for example, you could make your own grok pattern and use that, but I'm going to leave some exploration for you too. :)
By reading the grok documentation thoroughly, I found what Google couldn't find for me and which I apparently missed the first time I read that page:
https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#plugins-filters-grok-add_field
Using the add_field and remove_field options, I managed to add the year to my date; then I used the date plugin to parse it as a timestamp. My filter configuration now looks like this:
filter {
    grok {
        match => {
            "message" => "%{SYSLOGTIMESTAMP:tMessageDate} %{GREEDYDATA:messagge}"
        }
        add_field => { "messageDate" => "2016 %{tMessageDate}" }
        remove_field => ["tMessageDate"]
    }
    date {
        match => [ "messageDate", "YYYY MMM dd HH:mm:ss" ]
    }
}
And it worked fine

How to assign a variable in logstash config?

I'm trying to fetch the host name from the events that Logstash processes, and if an event matches the criteria, I want the host name to be sent to a separate file. Meanwhile, the event should still be sent to the Elasticsearch output.
The idea I have is to assign the host name to a variable and send the variable's value to a file if the "if" condition is satisfied.
Is this possible with Logstash?
Regards,
Gaurav
Yes, what you want is possible in Logstash. The Logstash site has documentation for the config format and all the available plugins, which can be found at http://logstash.net/docs/1.4.0/. You will probably want to use the grok filter to extract the host name and the file output to write the data.
Here is an example config, which does what you want:
input {
    # some input
}
filter {
    grok {
        match => ["message", "%{HOSTNAME:host} rest of message line"]
        add_tag => ["has_hostname"]
    }
}
output {
    elasticsearch {}
    if "has_hostname" in [tags] {
        file {
            message_format => "%{host}"
            path => "path/to/file"
        }
    }
}
The grok pattern will need to be altered to match your data; the Logstash docs include a link to the default pattern set that you can use.

Logstash not parsing multiple named capture groups

I have just started playing around with Logstash, ElasticSearch and Kibana for visualisation of logs and am currently experiencing some problems.
I have a log file that is being gathered by logstash and I want to extract fields from log entries before writing these into ElasticSearch.
I have defined a filter with a number of named capture groups in my Logstash config file, but at this point only the first of those named capture groups is matching.
My log file looks something like the following:
[2014-01-31 12:00:00] [FIELD1:SOMEVALUE] [FIELD2:SOMEVALUE]
and my Logstash filter looks like the following:
if [type] == "mytype" { grok { match => [ "message", "(?<TIMESTAMP>regex)", "message", "(?<FIELD1>regex)", "message", "(?<FIELD2>regex)" ] } }
I have verified that the regexes for all my fields are correct, but when I go to the Kibana dashboard, FIELD1 and FIELD2 are not appearing.
If anyone could shed some light on this I would be grateful.
Thanks
Kevin
grok's default behavior is to stop processing after the first match.
You can change this by setting break_on_match to false:
if[type] == "mytype {
grok
{
match => [
"message", "(?<TIMESTAMP>regex)",
"message", "(?<FIELD1>regex)",
"message", "(?<FIELD2>regex)"
]
break_on_match => false
}
}
After learning a bit more about parsing with grok, I've found that a lot of the time it isn't necessary to write my own regexes. There are a number of predefined grok patterns I can use, and I can extend these to create my own custom patterns when parsing logs with Logstash.
A useful link on the grok patterns supported by Logstash: https://github.com/elastic/logstash/blob/v1.4.2/patterns/grok-patterns.
Using this new-found knowledge, I was able to change my match configuration to the one below.
if[type] == "mytype" {
grok {
match => ["\[%{TIMESTAMP_ISO8601:dateTime}\]%{SPACE}\[%{WORD}\:%{FLOATINGPOINT:cpu}\]%{SPACE}\[%{WORD}\:%{FLOATINGPOINT:memory}\]"]
}
}
This uses the built-in grok pattern TIMESTAMP_ISO8601 to pick out the date in my logs, and I have created a very simple custom pattern, FLOATINGPOINT, to pick out the floating-point values for memory and CPU in my example. The FLOATINGPOINT pattern looks like:
FLOATINGPOINT %{INT}\.%{INT}
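One way to make that custom pattern visible to grok is the patterns_dir approach shown earlier on this page. A sketch, assuming the pattern lives in a file such as patterns/custom.txt (the file name and directory are arbitrary):
# patterns/custom.txt
FLOATINGPOINT %{INT}\.%{INT}

# in the Logstash config
grok {
    patterns_dir => ["./patterns"]
    match => ["message", "\[%{TIMESTAMP_ISO8601:dateTime}\]%{SPACE}\[%{WORD}\:%{FLOATINGPOINT:cpu}\]%{SPACE}\[%{WORD}\:%{FLOATINGPOINT:memory}\]"]
}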
