Logstash filter - half json line parse - elasticsearch

I'm using Filebeat as a shipper on the client, which sends the logs to Redis; Logstash reads from Redis and sends to ES.
I'm trying to parse the following example line:
09:24:01.969 watchdog - INFO - 100.140.2 PASSED: Mobile:Mobile[].popover["mc1814"].select(2,) :706<<<<<<<<<<<<<<<<<<< {"actionDuration":613}
In the end I want to have a field named "actionDuration" with the value 613.
As you can see, the line is only partially JSON.
I've tried to use the grok filter with add_field and match, and I've tried changing a few configurations on both the Filebeat and Logstash sides.
I'm using the basic configurations:
filebeat.conf:
filebeat.prospectors:
- input_type: log
  paths:
    - /sketch/workspace/sanity-dev-kennel/out/*.log
  fields:
    type: watchdog
    BUILD_ID: 82161
If it's possible to do this on the Filebeat side I'd prefer that, but doing it on the Logstash side is fine too.
Thanks a lot,
Moshe

This sort of partially formatted line is best handled on the Logstash side, not the shipper. The filters/transforms available in Filebeat aren't up to it; a Logstash filter pipeline is.
filter {
  grok {
    match => {
      "message" => [ "(?<plain_prefix>^.*?) (?<json_segment>{.*$)" ]
    }
  }
  json {
    source => "json_segment"
  }
  mutate {
    remove_field => [ "json_segment" ]
  }
}
This basic example will split your incoming message into two fields: a plain_prefix and a json_segment. The json {} filter then parses the JSON data into the event. Finally, a mutate {} filter removes the json_segment field from the event, since its contents have already been parsed and merged in.
Note: the .*? in the plain_prefix is critical in this filter. Constructed this way, everything from the first { onward is considered part of the JSON segment. If you use .* instead, the JSON segment starts at the last {, which will be a problem with complex JSON data structures.
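For the sample line in the question, the event produced by this filter should look roughly like the following (rubydebug-style sketch; @timestamp, message, and other metadata fields omitted):
{
    "plain_prefix"   => "09:24:01.969 watchdog - INFO - 100.140.2 PASSED: Mobile:Mobile[].popover[\"mc1814\"].select(2,) :706<<<<<<<<<<<<<<<<<<<",
    "actionDuration" => 613
}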

Related

Parsing log data through grok filter (logstash)

I'm pretty new to ELK, and I'm trying to parse my logs through Logstash. The logs are sent by Filebeat.
The logs look like:
2019.12.02 16:21:54.330536 [ 1 ] {} <Information> Application: starting up
2020.03.21 13:14:54.941405 [ 28 ] {xxx23xx-xxx23xx-4f0e-a3c6-rge3gu1} <Debug> executeQuery: (from [::ffff:192.0.0.0]:9999) blahblahblah
2020.03.21 13:14:54.941469 [ 28 ] {xxx23xx-xxx23xx-4f0e-a3c6-rge3gu0} <Error> executeQuery: Code: 62, e.displayText() = DB::Exception: Syntax error: failed at position 1
My default logstash configuration is:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
From my log example, I want to extract fields like these:
timestamp
code
pipelineId
logLevel
program
message
But I have several problems with my grok pattern. First, the timestamp in the log is quite different from a classic timestamp. How can I get it recognized?
I also have problems with the {} part, which can be empty or not. Can you give me some advice on what the correct grok pattern should be, please?
Also, in Kibana I have A LOT of information, such as hostname, OS details, agent details, source, etc. I've read that these fields are ES metadata, so it's not possible to remove them. I find it a lot of information though; is there any way to "hide" these?
Grok pattern
In the screenshot below (from the Grok Debugger) you can see the pattern I constructed for your example log:
Is this the result you're looking for?
Logstash config
# logstash.conf
…
filter {
  grok {
    patterns_dir => ["./patterns"]
    match => {
      "message" => "%{CUSTOM_DATE:timestamp}\s\[\s%{BASE10NUM:code}\s\]\s\{%{GREEDYDATA:pipeline_id}\}\s\<%{GREEDYDATA:log_level}\>\s%{GREEDYDATA:program_message}"
    }
  }
}
…
Custom pattern
As you can see, I told grok to look for my custom patterns in the patterns directory which I put in the same location as my logstash.conf file. In this directory I created the custom.txt file with the following content:
# patterns/custom.txt
CUSTOM_DATE (?>\d\d){1,2}\.(?:0?[1-9]|1[0-2])\.(?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9])\s(?!<[0-9])(?:2[0123]|[01]?[0-9]):(?:[0-5][0-9])(?::(?:(?:[0-5][0-9]|60)(?:[:.,][0-9]+)?))(?![0-9])
I didn't write this long pattern on my own. I started with this line:
CUSTOM_DATE %{YEAR}\.%{MONTHNUM}\.%{MONTHDAY}\s%{TIME}
Then, I replaced every predefined pattern with the corresponding regular expression (one by one, directly in the Grok Debugger). You can use %{YEAR}\.%{MONTHNUM}\.%{MONTHDAY}\s%{TIME} in your application, but the Grok Debugger interface will print every part separately.
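If you also want this value used as the event's @timestamp, you could follow the grok with a date filter. A minimal sketch (not part of the original answer), assuming the Joda-style layout yyyy.MM.dd HH:mm:ss.SSSSSS matches your logs; precision beyond milliseconds is truncated:
filter {
  date {
    # parse the extracted "timestamp" field and use it as the event time
    match  => [ "timestamp", "yyyy.MM.dd HH:mm:ss.SSSSSS" ]
    target => "@timestamp"
  }
}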
Do you want to remove empty fields?
I don't know what you want to do in case the pipeline_id field is empty. If you want to remove it completely you can try adding the following lines to your config:
# logstash.conf
…
filter {
  grok {
    …
  }
  if [pipeline_id] == "" {
    mutate {
      remove_field => ["pipeline_id"]
    }
  }
}
…
Useful resources
Available patterns that I used in my pattern
What to do when part of one field got caught in a different pattern

logstash add_field conversion issue

I am using Logstash version 5.0.2, parsing a file that holds a filename as one of the fields extracted by the Logstash grok filter. For visualization I needed a file number to identify each file, so I added a new field through the mutate filter's add_field, checking the filename in [message].
if 'filename_1' in [message] {
  mutate { add_field => { "file_no" => "13" } }
  mutate { convert => [ "file_no", "float" ] }
}
If I check the parsing through stdin/stdout (rubydebug codec), it shows the file_no field is converted properly, but if I send the Logstash output to Elasticsearch, Kibana shows a conflict in the data type of that field.
There I am able to see file_no.keyword (as string) and file_no (as conflict), with the error:
Mapping conflict! A field is defined as several types (string, integer,
etc) across the indices that match this pattern. You may still be able to use
these conflict fields in parts of Kibana, but they will be unavailable for
functions that require Kibana to know their type. Correcting this issue will
require reindexing your data
I have converted the added field, so I'm not sure why it is still being sent to Elasticsearch as a string.
Any help would be great.
When I tried converting the field in Kibana, there is no number option. The source log file being monitored doesn't contain this number, so I can't parse it directly as an integer with %{PATTERN_FOR_NUMBER:number_variable:int}; otherwise this would have been easier.

JSON parser in logstash ignoring data?

I've been at this a while now, and I feel like the JSON filter in logstash is removing data for me. I originally followed the tutorial from https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-ubuntu-14-04
I've made some changes, but it's mostly the same. My filter section looks like this:
uuid {          # uuid and fingerprint to avoid duplicates
  target => "@uuid"
  overwrite => true
}
fingerprint {
  key => "78787878"
  concatenate_sources => true
}
grok {          # Get device name from the name of the log
  match => { "source" => "%{GREEDYDATA}%{IPV4:DEVICENAME}%{GREEDYDATA}" }
}
grok {          # Get all the other data from the log
  match => { "message" => "%{NUMBER:unixTime}..." }
}
date {          # Set the unix times to proper times
  match => [ "unixTime", "UNIX" ]
  target => "TIMESTAMP"
}
grok {          # Split up the message if it can
  match => { "MSG_FULL" => "%{WORD:MSG_START}%{SPACE}%{GREEDYDATA:MSG_END}" }
}
json {
  source => "MSG_END"
  target => "JSON"
}
So the bit causing problems is at the bottom, I think. My grok stuff should all be correct. When I run this config, I see everything displayed correctly in Kibana except for the logs that have JSON in them (not all of the logs have JSON). When I run it again without the json filter it displays everything.
I've tried to use an if statement so that it only runs the json filter if the message contains JSON, but that didn't solve anything.
However, when I added an if statement to only run it for a specific JSON format (so, if MSG_START = x, y or z then MSG_END will have a different JSON format; in this case let's say I'm only parsing the z format), then in Kibana I would see all the logs that contain the x and y JSON formats (not parsed, though), but it won't show z. So I'm sure it must be something to do with how I'm using the JSON filter.
Also, whenever I want to test with new data I start by clearing the old data in Elasticsearch, so that if it works I know it's my Logstash that's working and not just leftover data in Elasticsearch. I've done this using XDELETE 'http://localhost:9200/logstash-*/'. But Logstash won't create new indexes in Elasticsearch unless I feed Filebeat new logs. I don't know if this is another problem or not, just thought I should mention it.
I hope that all makes sense.
EDIT: I just checked the logstash.stdout file, and it turns out it is parsing the JSON, but Kibana only shows the entries tagged with "_jsonparsefailure", so something must be going wrong with Elasticsearch. Maybe. I don't know, just brainstorming :)
SAMPLE LOGS:
1452470936.88 1448975468.00 1 7 mfd_status 000E91DCB5A2 load {"up":[38,1.66,0.40,0.13],"mem":[967364,584900,3596,116772],"cpu":[1299,812,1791,3157,480,144],"cpu_dvfs":[996,1589,792,871,396,1320],"cpu_op":[996,50]}
MSG_START is load, MSG_END is everything after in the above example, so MSG_END is valid JSON that I want to parse.
The log below has no JSON in it, but my Logstash will try to parse everything after "Inf:" and emit a "_jsonparsefailure".
1452470931.56 1448975463.00 1 6 rc.app 02:11:03.301 Inf: NOSApp: UpdateSplashScreen not implemented on this platform
Also this is my output in logstash, since I feel like that is important now:
elasticsearch {
  hosts => ["localhost:9200"]
  document_id => "%{fingerprint}"
}
stdout { codec => rubydebug }
I experienced a similar issue and found that some of my logs were using a UTC date/time stamp and others were not. Fixing the code to use UTC exclusively sorted the issue for me.
I later asked this question: Logstash output from json parser not being sent to elasticsearch.
It has more relevant information and maybe a better answer, so if anyone ever has a similar problem to mine you can check out that link.
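As a side note, if the goal is simply to keep non-JSON lines (like the rc.app line in the question) from being tagged with _jsonparsefailure, one option is to guard the json filter with a conditional. A rough sketch, using the MSG_END field from the question's config:
if [MSG_END] =~ /^\{/ {
  json {
    source => "MSG_END"
    target => "JSON"
  }
}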

How to assign a variable in logstash config?

I'm trying to fetch the host name from the events that Logstash processes, and if an event matches the criteria, I want its host name to be written to another file. Meanwhile, the event should still be sent to the elasticsearch output.
The idea I have is to assign the host name to a variable and write the variable's value to a file if the "if" condition is satisfied.
Will this be possible with logstash?
Regards,
Gaurav
Yes, what you want is possible in Logstash. The Logstash site has documentation for the config format and all the available plugins at http://logstash.net/docs/1.4.0/. You will probably want to use the grok filter to extract the host name and the file output to write the data.
Here is an example config which does what you want:
input {
  # some input
}
filter {
  grok {
    match   => ["message", "%{HOSTNAME:host} rest of message line"]
    add_tag => ["has_hostname"]
  }
}
output {
  elasticsearch {}
  if "has_hostname" in [tags] {
    file {
      message_format => "%{host}"
      path           => "path/to/file"
    }
  }
}
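On newer Logstash versions, where the file output no longer supports message_format, a roughly equivalent sketch uses the line codec instead:
file {
  path  => "path/to/file"
  codec => line { format => "%{host}" }
}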
The grok pattern will need to be altered to match your data; the Logstash docs include a link to the default pattern set that you can use.

Logstash not parsing multiple named capture groups

I have just started playing around with Logstash, ElasticSearch and Kibana for visualisation of logs and am currently experiencing some problems.
I have a log file that is being gathered by logstash and I want to extract fields from log entries before writing these into ElasticSearch.
I have defined a filter with a number of named capture groups in my Logstash config file, but at this point only the first of those named capture groups is matching.
My log file looks something like the following:
[2014-01-31 12:00:00] [FIELD1:SOMEVALUE] [FIELD2:SOMEVALUE]
and my logstash filter looks like the following:
if [type] == "mytype" { grok { match => [ "message", "(?<TIMESTAMP>regex)", "message", "(?<FIELD1>regex)", "message", "(?<FIELD2>regex)" ] } }
I have verified that the regexes for all my fields are correct, but when I go to the Kibana dashboard FIELD1 and FIELD2 are not appearing.
If anyone could shed some light on this I would be grateful.
Thanks
Kevin
grok's default behavior is to stop processing after the first match.
You can change this by setting break_on_match to false:
if[type] == "mytype {
grok
{
match => [
"message", "(?<TIMESTAMP>regex)",
"message", "(?<FIELD1>regex)",
"message", "(?<FIELD2>regex)"
]
break_on_match => false
}
}
After learning a bit more about parsing with grok, I've found that a lot of the time it isn't necessary to write my own regexes. There are a number of predefined grok patterns I can use, and I can extend these to create my own custom patterns when parsing logs with Logstash.
A useful link on the grok patterns supported by logstash: https://github.com/elastic/logstash/blob/v1.4.2/patterns/grok-patterns.
Using this newfound knowledge I was able to change my match configuration to the one below.
if[type] == "mytype" {
grok {
match => ["\[%{TIMESTAMP_ISO8601:dateTime}\]%{SPACE}\[%{WORD}\:%{FLOATINGPOINT:cpu}\]%{SPACE}\[%{WORD}\:%{FLOATINGPOINT:memory}\]"]
}
}
This uses the built-in grok pattern TIMESTAMP_ISO8601 to pick out the date in my logs, and I have created a very simple custom pattern, FLOATINGPOINT, to pick out the floating point values for memory and cpu in my example. The FLOATINGPOINT pattern looks like:
FLOATINGPOINT %{INT}\.%{INT}
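One place to put this definition (a sketch, not from the original answer) is inline in the grok filter itself via the pattern_definitions option that newer versions of the grok filter support; a patterns_dir file, as shown in an earlier answer above, works just as well:
grok {
  # define FLOATINGPOINT inline instead of in a separate patterns file
  pattern_definitions => { "FLOATINGPOINT" => "%{INT}\.%{INT}" }
  match => ["message", "\[%{TIMESTAMP_ISO8601:dateTime}\]%{SPACE}\[%{WORD}\:%{FLOATINGPOINT:cpu}\]%{SPACE}\[%{WORD}\:%{FLOATINGPOINT:memory}\]"]
}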
