Querying Kibana using grok pattern - elasticsearch

We have configured the ELK stack over our daily logs and use the Kibana UI to perform basic search/query operations on the set of logs.
Some of our logs have a certain field in the message while others don't. Therefore we have not configured it as a separate field while configuring Logstash.
I have logs like:
[28/Jun/2016:23:59:56 +0530] 192.168.xxx.xxx [API:Profile]get_data_login: Project password success: 9xxxxxxxxx0
[28/Jun/2016:23:59:56 +0530] 192.168.xxx.xxx [API:Profile]session_end: logout success: 9xxxxxxxxx0 TotalTime:1.1234
In these two logs, I wish to extract TotalTime for all session_end logs and visualize it.
How should I do it?
I can search for all the logs that contain session_end; however, I am not able to apply grok to that set of logs.

Inside your filter in Logstash you can have something like:
filter {
  ...
  if [message] =~ /session_end/ {
    grok {
      # write a grok pattern specifically for the second log format here
    }
  }
  else if [message] =~ /get_data_login/ {
    grok {
      # write a grok pattern specifically for the first log format here
    }
  }
  ...
}
Grok patterns cannot be used for querying in Kibana.
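A rough sketch of that conditional approach, assuming the two message formats shown in the question (the field names timestamp, client, session, tt and detail are just illustrative, not from the original posts):
filter {
  if [message] =~ /session_end/ {
    grok {
      # second log format: ends with "TotalTime:<number>"
      match => { "message" => '\[%{HTTPDATE:timestamp}\] %{IP:client} \[API:Profile\]session_end: %{GREEDYDATA:session} TotalTime:%{NUMBER:tt}' }
    }
  }
  else if [message] =~ /get_data_login/ {
    grok {
      # first log format: no TotalTime field
      match => { "message" => '\[%{HTTPDATE:timestamp}\] %{IP:client} \[API:Profile\]get_data_login: %{GREEDYDATA:detail}' }
    }
  }
}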

You can use two different grok patterns in the same filter:
grok {
  match => {
    "message" => ['\[%{HTTPDATE}\] %{IP} \[API:Profile\]session_end: %{GREEDYDATA:session} TotalTime:%{GREEDYDATA:tt}',
                  '\[%{HTTPDATE}\] %{IP} \[API:Profile\]%{GREEDYDATA:data}']
  }
}
The messages will be tested against the first pattern; if they contain session_end: and TotalTime:, you'll get an Elasticsearch document with the two fields, and you'll then be able to do aggregations and visualisations on them.
The other messages (without session_end: and TotalTime:) will be parsed by the second pattern.
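One extra step that may help (an assumption on top of the answer above, not something it states): grok captures are strings, so converting the TotalTime capture to a number with a mutate filter makes numeric aggregations and visualisations in Kibana easier.
filter {
  mutate {
    # "tt" is the field name used in the first grok pattern above
    convert => { "tt" => "float" }
  }
}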

Related

How to parse non-json messages in logstash with grok filter

I am trying to send log messages from all containers to Elasticsearch, but a lot of them are not in JSON format. I am trying to parse them with a simple grok filter, but I see a lot of container names in the final message and a _grokparsefailure tag.
if [type] == "filebeat-docker-logs" {
  grok {
    match => {
      "message" => "\[%{WORD:containerName}\] %{GREEDYDATA:message_remainder}"
    }
  }
}
Use the following grok pattern:
(?<containerName>[a-zA-Z0-9._-]+).*?(?<timestamp>%{YEAR}\/%{MONTHNUM}\/%{MONTHDAY} %{TIME}) %{GREEDYDATA:message}
it works
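Dropped into the filter from the question, a rough sketch would look like the following (the if condition is the one from the question; the overwrite option is an extra assumption, so the parsed remainder replaces the original message instead of being appended to it):
filter {
  if [type] == "filebeat-docker-logs" {
    grok {
      match => {
        "message" => "(?<containerName>[a-zA-Z0-9._-]+).*?(?<timestamp>%{YEAR}\/%{MONTHNUM}\/%{MONTHDAY} %{TIME}) %{GREEDYDATA:message}"
      }
      # without this, grok would append the new "message" capture to the existing field
      overwrite => [ "message" ]
    }
  }
}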

How to extract service name from document field in Logstash

I am stuck in the middle of an ELK stack configuration; any lead will be highly appreciated.
Case Study:
I am able to see the logs (parsed through Logstash without any filter), but I want to apply filters while parsing the logs.
For ex:
system.process.cmdline: "C:\example1\example.exe" -displayname "example.run" -servicename "example.run"
I can see the above logs in the Kibana dashboard, but I want only the -servicename key and its value.
Expected output in Kibana, where servicename is the key and example.run is the associated value:
servicename "example.run"
I am a newbie in ELK, so please help me out.
My environment:
Elasticsearch- 6.6
Kibana- 6.6
Logstash- 6.6
Filebeat- 6.6
Metricbeat- 6.6
Logs coming from- Windows server 2016
input {
  beats {
    port => "5044"
  }
}
filter {
  grok {
    match => { "message" => "%{NOSPACE:hostname} " }
  }
}
output {
  file {
    path => "/var/log/logstash/out.log"
  }
}
I have tried the above Logstash pipeline, but I am not successful in getting the required result. I assume I have to add more lines to the filter, but I don't know exactly what.
Use this in your filter:
grok {
  match => { "message" => "%{GREEDYDATA:ignore}-servicename \"%{DATA:serviceName}\"" }
}
Your service name should now be in the serviceName field.
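Combined with the beats input and file output already in the question, a minimal end-to-end sketch might look like this (the mutate that drops the throw-away ignore field is an optional extra, not part of the answer above):
input {
  beats {
    port => "5044"
  }
}
filter {
  grok {
    # capture everything up to -servicename, then the quoted value
    match => { "message" => "%{GREEDYDATA:ignore}-servicename \"%{DATA:serviceName}\"" }
  }
  mutate {
    # optional clean-up of the helper field
    remove_field => [ "ignore" ]
  }
}
output {
  file {
    path => "/var/log/logstash/out.log"
  }
}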

Filter for my Custom Logs in Logstash

I am new to the ELK stack. I want to use it to push my logs to Elasticsearch so that I can use Kibana on them. Below is the format of my custom log:
Date Time INFO - searchinfo#username#searchQuery#latitude#longitude#client_ip#responseTime
The below is an example of a log that follows the format.
2017-07-04 11:16:10 INFO - searchinfo#null#gate#0.0#0.0#180.179.209.54#598
Now I am using Filebeat to push my .log files to Logstash, and Logstash pushes that data into Elasticsearch.
I need help writing a filter config for Logstash that would simply split the message on # and put the data into the respective fields in the Elasticsearch index.
How can I do this?
Try using the grok plugin to parse your logs into structured data:
filter {
  grok {
    match => { "message" => "\A%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{WORD:var0}%{SPACE}%{NOTSPACE}%{SPACE}(?<searchinfo>[^#]*)#(?<username>[^#]*)#(?<searchQuery>[^#]*)#(?<latitude>[^#]*)#(?<longitude>[^#]*)#(?<client_ip>[^#]*)#(?<responseTime>[^#]*)" }
  }
}
You can debug it online with a grok debugger.
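For reference, applying that pattern to the example line from the question should produce roughly these fields (deduced from the pattern, not verified output):
timestamp    => 2017-07-04 11:16:10
var0         => INFO
searchinfo   => searchinfo
username     => null
searchQuery  => gate
latitude     => 0.0
longitude    => 0.0
client_ip    => 180.179.209.54
responseTime => 598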
You need to use a grok filter to parse your log.
You can try with this:
filter {
  grok {
    match => { "message" => "\A%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{WORD:var0}%{SPACE}%{NOTSPACE}%{SPACE}(?<var1>[^#]*)#(?<var2>[^#]*)#(?<var3>[^#]*)#(?<var4>[^#]*)#(?<var5>[^#]*)#(?<var6>[^#]*)#(?<var7>[^#]*)" }
  }
}
This will parse your log and add fields named var0, var1, etc. to the parsed document. You can rename these fields as you prefer.
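For example, a small sketch using the mutate filter's rename option could follow the grok (the target names below are just suggestions based on the log format in the question):
filter {
  mutate {
    rename => {
      "var2" => "username"
      "var3" => "searchQuery"
      "var7" => "responseTime"
    }
  }
}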

How to generate reports on existing dump of logs using ELK?

Using the ELK stack, is it possible to generate reports on an existing dump of logs?
For example:
I have some 2 GB of Apache access logs and I want to have the dashboard reports showing:
All requests, with status code 400
All requests, with pattern like "GET http://example.com/abc/.*"
I would appreciate any example links.
Yes, it is possible. You should:
Install and set up the ELK stack.
Install Filebeat and configure it to harvest your logs and forward the data to Logstash.
In Logstash, listen for the Filebeat input, use grok to process/break up your data, and forward it to Elasticsearch, with something like:
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "%{COMMONAPACHELOG}" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "filebeat-logstash-%{+YYYY.MM.dd}"
  }
}
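Since these are historical logs, one extra filter worth considering (an assumption, not part of the answer above) is a date filter, so each event is indexed at its original Apache timestamp rather than at ingestion time:
filter {
  date {
    # "timestamp" is the field produced by %{COMMONAPACHELOG}
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}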
In Kibana, set up your index patterns and query for the data, e.g.
response: 400
verb: GET AND message: "http://example.com/abc/"

Grok filter not working even though it works in grok debugger

I'm using logstash-1.4.0 with elasticsearch 1.3.4 and kibana 3.1.1 (I know I'm outdated, that's the best I can do right now).
Log Example:
2016-05-31 16:05:33 RequestManager [INFO] The manual flag LOLROFLin TRALALA 123456Was changed to true
My grok filter:
filter {
  grok {
    match => { "message" => "%{DATESTAMP:timestamp} %{WORD:clazz} %{NOTSPACE:level} %{GREEDYDATA:content}" }
  }
  if (!([stack_trace])) and (!([clazz] == "RequestAsset")) {
    drop {}
  }
}
My questions are:
Why do I not see the grok fields in Kibana? I only see the default fields, but not mine. The Grok Debugger shows success, but Kibana does not.
My goal is to drop any log message that does not have a stack trace OR is not from the class (called clazz in my grok filter) "RequestAsset". Should this work? Can I use the fields created by the grok filter in a separate if filter?
EDIT: I realised what went wrong: I was using the log4j plugin, which already separates the log into its fields, and the message field was already just the message itself.
I tested your grok filter in a grok debugger and it failed, so I have rewritten it.
Here is the corrected grok filter:
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{WORD:clazz} %{NOTSPACE:level} %{GREEDYDATA:content}" }
  }
  if (!([stack_trace])) and (!([clazz] == "RequestAsset")) {
    drop {}
  }
}
For reference, TIMESTAMP_ISO8601 is defined as %{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})?%{ISO8601_TIMEZONE}?
If you see "_grokparsefailure" in Kibana, you know that your grok filter failed.
On your second question: shouldn't you use the or operator?
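A rough sketch of that, assuming the goal stated in the question (drop anything that has no stack trace or is not from clazz "RequestAsset"):
if !([stack_trace]) or [clazz] != "RequestAsset" {
  drop {}
}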
