Uncooperative ELK Docker Instance

I have ELK 5.5.1 running in a Docker container, and it'll parse most of my logs, except for the ones that originate from my Spring application. I'm kinda running out of ideas.
I've traced it down to the logstash->elasticsearch pipeline. Filebeat is doing its job, and Logstash is receiving logs from the application in question, based on tailing Logstash's stdout log.
I wiped the docker volume that stores my ELK data clean, and started fresh with filebeat just forwarding the logs in question.
Take a log line like this:
FINEST|8384/0|Service tsoft_spring|17-08-31 14:12:01|2017-08-31 14:12:01.260 INFO 8384 --- [ taskExecutor-2] c.t.s.c.s.a.ConfirmationService : Will not persist empty response notes
Using a very minimal Logstash configuration, the line winds up being persisted in Elasticsearch:
input {
  beats {
    port => 5044
    ssl => false
  }
}
filter {
  if [message] =~ /tsoft_spring/ {
    grok {
      match => [ "message", "%{GREEDYDATA:logmessage}" ]
    }
  }
}
output {
  stdout { }
  elasticsearch { hosts => ["localhost:9200"] }
}
Using a more complete configuration, the log is just ignored by Elasticsearch, with no _grokparsefailure and no _dateparsefailure tags:
input {
  beats {
    port => 5044
    ssl => false
  }
}
filter {
  if [message] =~ /tsoft_spring/ {
    grok {
      match => [ "message", "%{WORD}\|%{NUMBER}/%{NUMBER}\|%{WORD}%{SPACE}%{WORD}\|%{TIMESTAMP_ISO8601:timestamp}\|%{TIMESTAMP_ISO8601}%{SPACE}%{LOGLEVEL:loglevel}%{SPACE}%{NUMBER:pid}%{SPACE}---%{SPACE}%{SYSLOG5424SD:threadname}%{SPACE}%{JAVACLASS:classname}%{SPACE}:%{SPACE}%{GREEDYDATA:logmessage}" ]
    }
    date {
      match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss" ]
    }
  }
}
output {
  stdout { }
  elasticsearch { hosts => ["localhost:9200"] }
}
I've checked that this pattern will parse that line, using http://grokconstructor.appspot.com/do/match#result, and I could've sworn it was working last weekend, but that could be my imagination.

Maybe the problem here is not in your grok filter, but in the date match. Your timestamp field carries a two-digit year (17-08-31), so the yyyy pattern parses it as year 0017 instead of 2017, and the event gets indexed far outside the time range Kibana searches by default. Maybe that's why you can't find the event in ES? Can you try this:
date {
  match => [ "timestamp" , "yy-MM-dd HH:mm:ss" ]
}
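If both two- and four-digit years can show up in that field, note that the date filter accepts several patterns and tries them in order; a minimal sketch:
date {
  # tried in order until one matches: yy handles "17-08-31", yyyy handles "2017-08-31"
  match => [ "timestamp", "yy-MM-dd HH:mm:ss", "yyyy-MM-dd HH:mm:ss" ]
}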

Related

Match missing LogStash logs to the correct date in Kibana dashboard

TL;DR: Map missing logs from LogStash in the Kibana dashboard to their correct date and time.
I have an Amazon ElasticSearch domain (which includes both the ElasticSearch service and a Kibana dashboard) configured. My source of logs is a Beanstalk environment. I have installed FileBeat inside that environment, which sends the logs to an EC2 instance that I have configured with LogStash. The LogStash server then sends the logs to the ES domain endpoint.
This happened for a while without an issue, but when I checked yesterday, I saw logs had not been sent to ES for about 4 days. I got it fixed, and the current logs are being transferred alright. The missing logs are still stacked on the LogStash server.
I modified my logstash.conf file to include the missing log files, but they all appear as one single bar at the current date in my Kibana graph. What I want to do is make sure each missing set of logs is shown in Kibana at its respective date and time.
Example date and time part:
2021-05-20 19:44:34.700+0000
The following is my logstash.conf configuration. (Please let me know if I should post my FileBeat config too).
input {
  beats {
    port => 5044
  }
}
filter {
  date {
    match => [ "logdate", "yyyy-MM-dd HH:mm:ss.SSS+Z" ]
  }
  mutate {
    split => { "message" => "|" }
    add_field => { "messageDate" => "%{[message][0]}" }
    add_field => { "messageLevel" => "%{[message][1]}" }
    add_field => { "messageContextName" => "%{[message][2]}" }
    add_field => { "messagePID" => "%{[message][3]}" }
    add_field => { "messageThread" => "%{[message][4]}" }
    add_field => { "messageLogger" => "%{[message][5]}" }
    add_field => { "messageMessage" => "%{[message][6]}" }
  }
}
output {
  amazon_es {
    hosts => ["hostname"]
    index => "dev-%{+YYYY.MM.dd}"
    region => "region"
    aws_access_key_id => "ackey_id"
    aws_secret_access_key => "ackey"
  }
}
Use a date filter to parse the date contained in the message. This will set [@timestamp], and the event will then appear in the right bucket in Kibana.
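In this config that means the date filter should run after the mutate/split (above it runs first, against a [logdate] field that does not exist at that point) and should point at the split-out date field. A sketch, assuming the field names and pipe-delimited layout from the question:
filter {
  mutate {
    split => { "message" => "|" }
    add_field => { "messageDate" => "%{[message][0]}" }
    # ... remaining add_field entries as in the question ...
  }
  date {
    # parses e.g. "2021-05-20 19:44:34.700+0000"; Z matches the +0000 offset,
    # and the result lands in [@timestamp], which Kibana buckets on
    match => [ "messageDate", "yyyy-MM-dd HH:mm:ss.SSSZ" ]
  }
}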

How to send logs from multiple servers to ELK server

I have a server on which ELK is installed. On the other end I have 2 source servers which send logs to the ELK server through Filebeat. The issue is that both servers' logs show up on the same page in Kibana, which makes it too complicated to identify which log is coming from which server. How can each server's logs be shown separately in Kibana?
The following is my logstash.conf:
input {
  beats {
    port => 5044
  }
}
# Used to parse syslog messages and send them to Elasticsearch for storage
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
    date {
      match => [ "timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
# Specify an Elasticsearch instance
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
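One way to keep the servers apart is to key the index (or a Kibana filter) off the host Filebeat reports. A sketch, assuming a Filebeat 5.x/6.x setup, where each event carries a [beat][hostname] field:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # one index per source host per day; alternatively keep a single index
    # and simply filter on beat.hostname in Kibana
    index => "%{[beat][hostname]}-%{+YYYY.MM.dd}"
  }
}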

Elasticsearch monitoring search queries

For more than a week I have been struggling to log, into an Elasticsearch index, information regarding the queries I run, so that I can compare performance between different types of queries. I have configured this config file in the Logstash home directory:
input {
  beats {
    port => 5044
  }
}
filter {
  if "search" in [request] {
    grok {
      match => { "request" => ".*\n\{(?<query_body>.*)" }
    }
    grok {
      match => { "path" => "\/(?<index>.*)\/_search" }
    }
    if ![index] {
      mutate {
        add_field => { "index" => "All" }
      }
    }
    mutate {
      update => { "query_body" => "{%{query_body}" }
    }
  }
}
output {
  if "search" in [request] and "ignore_unmapped" not in [query_body] {
    elasticsearch {
      hosts => "http://localhost:9200"
    }
  }
}
I have also installed Packetbeat and configured packetbeat.yml, with the logstash hosts set to http://localhost:9200.
The tutorial I followed also mentions that after starting Packetbeat it will listen for packets on 9200 and send them to Logstash, and from there to the monitoring Elasticsearch cluster, where they will be indexed in indices like logstash-2016.05.24. But these indices do not exist.
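For what it's worth, when Packetbeat ships through Logstash, its output normally points at the Logstash beats port (5044 in the config above), not at Elasticsearch's 9200, which it should only sniff. A sketch of the relevant packetbeat.yml fragment, assuming Packetbeat 5.x:
# packetbeat.yml (fragment)
packetbeat.protocols.http:
  ports: [9200]               # sniff the HTTP traffic going to Elasticsearch

output.logstash:
  hosts: ["localhost:5044"]   # the beats input defined in the Logstash config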

Elasticsearch Logstash Filebeat mapping

I'm having a problem with the ELK Stack + Filebeat.
Filebeat is sending apache-like logs to Logstash, which should be parsing the lines. Elasticsearch should be storing the split data in fields so I can visualize them using Kibana.
Problem:
Elasticsearch receives the logs but stores them in a single "message" field.
Desired solution:
Input:
10.0.0.1 some.hostname.at - [27/Jun/2017:23:59:59 +0200]
ES:
"ip":"10.0.0.1"
"hostname":"some.hostname.at"
"timestamp":"27/Jun/2017:23:59:59 +0200"
My logstash configuration:
input {
  beats {
    port => 5044
  }
}
filter {
  if [type] == "web-apache" {
    grok {
      patterns_dir => ["./patterns"]
      match => { "message" => "IP: %{IPV4:client_ip}, Hostname: %{HOSTNAME:hostname}, - \[timestamp: %{HTTPDATE:timestamp}\]" }
      break_on_match => false
      remove_field => [ "message" ]
    }
    date {
      locale => "en"
      timezone => "Europe/Vienna"
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
    useragent {
      source => "agent"
      prefix => "browser_"
    }
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "test1"
    document_type => "accessAPI"
  }
}
My Elasticsearch discover output:
I hope there are some ELK experts around who can help me.
Thank you in advance,
Matthias
The grok filter you stated will not work here.
Try using:
%{IPV4:client_ip} %{HOSTNAME:hostname} - \[%{HTTPDATE:timestamp}\]
There is no need to specify the desired names separately in front of the field names (you're not trying to format the message here, but to extract separate fields); just stating the field name after the ':' inside the braces will lead to the result you want.
Also, use the overwrite option instead of remove_field for message.
More information here:
https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#plugins-filters-grok-options
It will look similar to that in the end:
filter {
  grok {
    match => { "message" => "%{IPV4:client_ip} %{HOSTNAME:hostname} - \[%{HTTPDATE:timestamp}\]" }
    overwrite => [ "message" ]
  }
}
You can test grok filters here:
http://grokconstructor.appspot.com/do/match
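Combined with the date filter from the question, the whole filter block might look like this (a sketch; the date part just maps the captured timestamp onto @timestamp):
filter {
  if [type] == "web-apache" {
    grok {
      match => { "message" => "%{IPV4:client_ip} %{HOSTNAME:hostname} - \[%{HTTPDATE:timestamp}\]" }
      overwrite => [ "message" ]
    }
    date {
      # same date settings as in the question; sets @timestamp from the log line
      locale => "en"
      timezone => "Europe/Vienna"
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  }
}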

ELK - GROK Pattern for Winston logs

I have set up a local ELK stack. All works fine, but before trying to write my own GROK pattern I wonder: is there already one for Winston-style logs?
The stock patterns work great for Apache-style logs, but I would need something that works for Winston-style ones. I think a JSON filter would do the trick, but I am not sure.
This is my Winston JSON:
{"level":"warn","message":"my message","timestamp":"2017-03-31T11:00:27.347Z"}
This is my Logstash configuration file example:
input {
  beats {
    port => "5043"
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
  }
}
For some reason it is not getting parsed. No error.
Try like this instead:
input {
  beats {
    port => "5043"
    codec => json
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
  }
}
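Either way, the timestamp field from the Winston JSON still needs a date filter if events should be bucketed at their log time rather than at ingest time; a sketch:
filter {
  date {
    # Winston's timestamp is ISO8601, e.g. "2017-03-31T11:00:27.347Z"
    match => [ "timestamp", "ISO8601" ]
  }
}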
