Logstash parses logfiles to wrong index - elasticsearch

I have an Elasticsearch instance that parses my logfiles according to a grok pattern that extracts the date from each log line. That date should then determine the index the event is written to, and this is where it goes wrong. My Logstash pipeline config file looks as follows:
input {
  beats {
    port => "5044"
  }
}
filter {
  grok {
    match => { "message" => "%{LOGGERLEVEL:log}%{PIPE:k}%{TIMESTAMP_ISO8601:datetime}%{GREEDYDATA:data}" }
  }
  date {
    match => ["datetime", "ISO8601"]
    timezone => "Europe/Helsinki"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
LOGGERLEVEL and PIPE are user-defined patterns. This version parses the logs as it should, but it indexes the first two hours of each day under the previous day's index. If I change the config file as follows, Elasticsearch ignores the first two hours altogether:
input {
  beats {
    port => "5044"
  }
}
filter {
  grok {
    match => { "message" => "%{LOGGERLEVEL:log}%{PIPE:k}%{TIMESTAMP_ISO8601:datetime}%{GREEDYDATA:data}" }
  }
  date {
    match => ["datetime", "ISO8601"]
    timezone => "Europe/London"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
How should I configure my pipeline so that one day is indexed as a whole, under the index named after the date found in the logfile?

Solved the problem. In Kibana, go to Management -> Advanced Settings and change dateFormat:tz to the desired timezone. In my case I have to use the second configuration and select Europe/London in the Kibana settings.

Related

Logstash delay of log sending

I'm forwarding application logs to Elasticsearch, applying some grok filters along the way.
The application has its own timestamp field, and there is also Logstash's own timestamp field.
We regularly check the difference between those timestamps, and in many cases the delay is very large, meaning the log took a very long time to be shipped to Elasticsearch.
I'm wondering how I can isolate the issue to determine whether the delay comes from Logstash or from Elasticsearch.
Example Logstash config:
input {
  file {
    path => "/app/app-core/_logs/app-core.log"
    codec => multiline {
      pattern => "(^[a-zA-Z.]+(?:Error|Exception).+)|(^\s+at .+)|(^\s+... \d+ more)|(^\t+)|(^\s*Caused by:.+)"
      what => "previous"
    }
  }
}
filter {
  if "multiline" not in [tags] {
    json {
      source => "message"
      remove_field => ["[request][body]","[response][body][response][items]"]
    }
  }
  else {
    grok {
      pattern_definitions => { APPJSON => "{.*}" }
      match => { "message" => "%{APPJSON:appjson} %{GREEDYDATA:stack_trace}" }
      remove_field => ["message"]
    }
    json {
      source => "appjson"
      remove_field => ["appjson"]
    }
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch-logs.internal.app.io:9200"]
    index => "logstash-core-%{+YYYY.MM.dd}"
    document_type => "logs"
  }
}
We tried adjusting the number of workers and the batch size, but no value we tried reduced the delay:
pipeline.workers: 9
pipeline.output.workers: 9
pipeline.batch.size: 600
pipeline.batch.delay: 5
Nothing was changed on the Elasticsearch side because I think the issue is with Logstash, but I'm not sure.
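One way to narrow this down (a sketch, not from the original question; the field name logstash_processed_at is invented here) is to stamp each event with the wall-clock time at the end of the filter stage. The gap between the application timestamp and this stamp is time spent reaching and passing through Logstash; the remaining gap until the document becomes searchable is on the Elasticsearch side:

```
filter {
  ruby {
    # Hypothetical field: when Logstash finished filtering this event
    code => "event.set('logstash_processed_at', Time.now.utc.strftime('%Y-%m-%dT%H:%M:%S.%LZ'))"
  }
}
```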

Read a CSV in Logstash level and filter on basis of the extracted data

I am using Metricbeat to collect process-level data and push it to Elasticsearch through Logstash.
The aim is to categorize the processes into two tags, i.e. the running process is either a browser or it is something else.
I am able to do that statically using this block of code:
input {
  beats {
    port => 5044
  }
}
filter {
  if [process][name] == "firefox.exe" or [process][name] == "chrome.exe" {
    mutate {
      add_field => { "process.type" => "browsers" }
      convert => {
        "process.type" => "string"
      }
    }
  }
  else {
    mutate {
      add_field => { "process.type" => "other" }
    }
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    # manage_template => false
    index => "metricbeatlogstash"
  }
}
But when I try to make that if condition dynamic by reading the process list from a CSV, I get no valid results in Kibana, nor any error at the Logstash level.
The CSV config file code is as follows:
input {
  beats {
    port => 5044
  }
  file {
    path => "filePath"
    start_position => "beginning"
    sincedb_path => "NULL"
  }
}
filter {
  csv {
    separator => ","
    columns => ["processList","IT"]
  }
  if [process][name] in [processList] {
    mutate {
      add_field => { "process.type" => "browsers" }
      convert => {
        "process.type" => "string"
      }
    }
  }
  else {
    mutate {
      add_field => { "process.type" => "other" }
    }
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    # manage_template => false
    index => "metricbeatlogstash2"
  }
}
What you are trying to do does not work that way in Logstash; the events in a Logstash pipeline are independent of each other.
The events received by your beats input have no knowledge of the events received by your csv input, so you can't use fields from different events in a conditional.
To do what you want, you can use the translate filter with the following config:
translate {
  field => "[process][name]"
  destination => "[process][type]"
  dictionary_path => "process.csv"
  fallback => "others"
  refresh_interval => 300
}
This filter checks the value of the field [process][name] against a dictionary loaded into memory from the file process.csv. The dictionary is a .csv file with two columns: the first is the name of the browser process, and the second is always browser.
chrome.exe,browser
firefox.exe,browser
If the filter gets a match, it populates the field [process][type] (not process.type) with the value from the second column, in this case always browser. If there is no match, it populates [process][type] with the value of the fallback option, in this case others. It also reloads the content of process.csv every 300 seconds (5 minutes).
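The lookup semantics amount to a dictionary fetch with a default. A minimal plain-Ruby illustration (not Logstash code; the names below are invented for the example):

```ruby
# The dictionary as translate would load it from process.csv:
# key column first, value column ("browser") second
dictionary = {
  "chrome.exe"  => "browser",
  "firefox.exe" => "browser"
}

# fetch with a default mirrors the filter's fallback => "others"
lookup = ->(name) { dictionary.fetch(name, "others") }

puts lookup.call("chrome.exe")   # browser
puts lookup.call("explorer.exe") # others
```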

Elasticsearch Logstash Filebeat mapping

I'm having a problem with the ELK stack + Filebeat.
Filebeat is sending apache-like logs to Logstash, which should be parsing the lines. Elasticsearch should store the split data in fields so I can visualize them using Kibana.
Problem:
Elasticsearch receives the logs but stores them in a single "message" field.
Desired solution:
Input:
10.0.0.1 some.hostname.at - [27/Jun/2017:23:59:59 +0200]
ES:
"ip":"10.0.0.1"
"hostname":"some.hostname.at"
"timestamp":"27/Jun/2017:23:59:59 +0200"
My logstash configuration:
input {
  beats {
    port => 5044
  }
}
filter {
  if [type] == "web-apache" {
    grok {
      patterns_dir => ["./patterns"]
      match => { "message" => "IP: %{IPV4:client_ip}, Hostname: %{HOSTNAME:hostname}, - \[timestamp: %{HTTPDATE:timestamp}\]" }
      break_on_match => false
      remove_field => [ "message" ]
    }
    date {
      locale => "en"
      timezone => "Europe/Vienna"
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
    useragent {
      source => "agent"
      prefix => "browser_"
    }
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "test1"
    document_type => "accessAPI"
  }
}
My Elasticsearch Discover output: (screenshot not included)
I hope there are any ELK experts around that can help me.
Thank you in advance,
Matthias
The grok filter you stated will not work here.
Try using:
%{IPV4:client_ip} %{HOSTNAME:hostname} - \[%{HTTPDATE:timestamp}\]
There is no need to specify the desired names separately in front of the field names (you're not trying to format the message here, but to extract separate fields); just stating the field name after the ':' inside the braces leads to the result you want.
Also, use the overwrite option instead of remove_field for message.
More information here:
https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#plugins-filters-grok-options
It will look similar to that in the end:
filter {
  grok {
    match => { "message" => "%{IPV4:client_ip} %{HOSTNAME:hostname} - \[%{HTTPDATE:timestamp}\]" }
    overwrite => [ "message" ]
  }
}
You can test grok filters here:
http://grokconstructor.appspot.com/do/match
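As a quick sanity check outside Logstash, the suggested pattern can be approximated with a plain Ruby regex against the sample line from the question (the character classes below are rough stand-ins for the real IPV4, HOSTNAME and HTTPDATE grok patterns, not their exact definitions):

```ruby
line = "10.0.0.1 some.hostname.at - [27/Jun/2017:23:59:59 +0200]"

# Rough equivalents of %{IPV4}, %{HOSTNAME} and %{HTTPDATE}
re = /(?<client_ip>\d{1,3}(?:\.\d{1,3}){3}) (?<hostname>[\w.-]+) - \[(?<timestamp>[^\]]+)\]/

m = re.match(line)
puts m[:client_ip]  # 10.0.0.1
puts m[:hostname]   # some.hostname.at
puts m[:timestamp]  # 27/Jun/2017:23:59:59 +0200
```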

ElasticSearch 2, Logstash and Kibana : grok match can't create fields

I am trying to parse a message field to generate different fields. After some research, the solution is to use grok with match. But in Kibana I can't see the new fields (even after refreshing or recreating the fields from the Logstash indexes).
I tried this in my filter config:
grok {
  match => {
    "message" => "\[32m%{LOGLEVEL:loglevel}\[39m: memory: %{NOTSPACE:memory}, uptime \(seconds\): %{NUMBER:uptime}, load: %{NUMBER:load1},%{NUMBER:load5},%{NUMBER:load15}"
  }
}
mutate {
  rename => { "docker.id" => "container_id" }
  rename => { "docker.name" => "container_name" }
  rename => { "docker.image" => "docker_image" }
  rename => { "docker.hostname" => "docker_hostname" }
}
To transform this type of message:
[32minfo[39m: memory: 76Mb, uptime (seconds): 5529.927, load: 0.05322265625,0.1298828125,0.19384765625
Into these variables:
load15 0.19384765625
uptime 5529.927
load1 0.05322265625
load5 0.1298828125
memory 76Mb
loglevel info
I tested the pattern at http://grokconstructor.appspot.com/do/match and my matches work fine, but in Kibana I can't retrieve these fields.
Thanks
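For what it's worth, the pattern does extract the expected values when approximated as a plain Ruby regex over the sample message (LOGLEVEL, NOTSPACE and NUMBER replaced by rough equivalents), which suggests the problem lies on the Kibana/index side rather than in the pattern itself:

```ruby
msg = "[32minfo[39m: memory: 76Mb, uptime (seconds): 5529.927, " \
      "load: 0.05322265625,0.1298828125,0.19384765625"

# Rough equivalents of the LOGLEVEL, NOTSPACE and NUMBER grok patterns
re = /\[32m(?<loglevel>\w+)\[39m: memory: (?<memory>\S+), uptime \(seconds\): (?<uptime>[\d.]+), load: (?<load1>[\d.]+),(?<load5>[\d.]+),(?<load15>[\d.]+)/

m = re.match(msg)
puts m[:loglevel]  # info
puts m[:memory]    # 76Mb
puts m[:uptime]    # 5529.927
```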

Convert timestamp timezone in Logstash for output index name

In my scenario, the "timestamp" of the syslog lines Logstash receives is in UTC and we use the event "timestamp" in the Elasticsearch output:
output {
  elasticsearch {
    embedded => false
    host => localhost
    port => 9200
    protocol => http
    cluster => 'elasticsearch'
    index => "syslog-%{+YYYY.MM.dd}"
  }
}
My problem is that at UTC midnight, Logstash starts sending logs to a different index before the end of the day in our timezone (GMT-4 => America/Montreal), so the index has no logs after 20:00 (8 PM) local time because the "timestamp" is UTC.
We've done a workaround to convert the timezone, but we experience a significant performance degradation:
filter {
  mutate {
    add_field => {
      # Create a new field with string value of the UTC event date
      "timestamp_zoned" => "%{@timestamp}"
    }
  }
  date {
    # Parse UTC string value and convert it to my timezone into a new field
    match => [ "timestamp_zoned", "yyyy-MM-dd HH:mm:ss Z" ]
    timezone => "America/Montreal"
    locale => "en"
    remove_field => [ "timestamp_zoned" ]
    target => "timestamp_zoned_obj"
  }
  ruby {
    # Output the zoned date to a new field
    code => "event['index_day'] = event['timestamp_zoned_obj'].strftime('%Y.%m.%d')"
    remove_field => [ "timestamp_zoned_obj" ]
  }
}
output {
  elasticsearch {
    embedded => false
    host => localhost
    port => 9200
    protocol => http
    cluster => 'elasticsearch'
    # Use of the string value
    index => "syslog-%{index_day}"
  }
}
Is there a way to optimize this config?
Here is an optimized config; please give it a try and test the performance.
You don't need the mutate and date plugins; use the ruby plugin directly.
input {
  stdin {
  }
}
filter {
  ruby {
    code => "
      event['index_day'] = event['@timestamp'].localtime.strftime('%Y.%m.%d')
    "
  }
}
output {
  stdout { codec => rubydebug }
}
Example output:
{
       "message" => "test",
      "@version" => "1",
    "@timestamp" => "2015-03-30T05:27:06.310Z",
          "host" => "BEN_LIM",
     "index_day" => "2015.03.29"
}
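The day rollover this solves can be reproduced in plain Ruby (the -06:00 offset below is illustrative, not tied to any particular host):

```ruby
t = Time.utc(2015, 3, 30, 5, 27, 6)  # the @timestamp from the example output

# In UTC the event belongs to March 30...
puts t.strftime('%Y.%m.%d')                      # 2015.03.30

# ...but viewed from a zone behind UTC it is still March 29
puts t.localtime('-06:00').strftime('%Y.%m.%d')  # 2015.03.29
```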
In Logstash version 5.0 and later, you can use this:
filter {
  ruby {
    code => "event.set('index_day', event.get('[@timestamp]').time.localtime.strftime('%Y%m%d'))"
  }
}
In version 1.5.0, we can convert the timestamp to the local timezone for the index name. Here is my configuration:
filter {
  ruby {
    code => "event['index_day'] = event.timestamp.time.localtime.strftime('%Y.%m.%d')"
  }
}
output {
  elasticsearch {
    host => localhost
    index => "thrall-%{index_day}"
  }
}
In Logstash version 5.0.2 the API was modified. We can convert the timestamp to the local timezone for the index name. Here is my configuration:
filter {
  ruby {
    code => "event.set('index_day', event.timestamp.time.localtime.strftime('%Y.%m.%d'))"
  }
}
Similar use case, but using the Logstash file output plugin and writing files dated by the local time of the event's arrival.
Verified on Logstash version 7.12.
Adapted from discuss.elastic.co, mainly zero-padding the offset hours. NB: if your offset includes half hours, you will need to adjust accordingly.
filter {
  ruby {
    code => "
      require 'tzinfo'
      tz = 'Europe/Oslo'
      offset = TZInfo::Timezone.get(tz).current_period.utc_total_offset / (60*60)
      event.set('[@metadata][local_date]',
        event.get('@timestamp').time.localtime(
          sprintf('%+03i:00', offset)
        ).strftime('%Y%m%d'))
    "
  }
  if ([agent][type] == "filebeat") {
    mutate {
      add_field => ["file_path", "%{[host][name]}_%{[log][file][path]}.%{[@metadata][local_date]}"]
    }
  } else {
    mutate {
      add_field => ["file_path", "%{[agent][hostname]}_%{[agent][type]}.%{[@metadata][local_date]}"]
    }
  }
}
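One detail worth checking when adapting this to zones behind UTC: a format like '+%02i' would render an offset of -4 as '+-4', which is not a valid offset string. A signed, zero-padded directive handles both hemispheres; a quick Ruby check:

```ruby
# '%+03i' forces a sign and zero-pads the hours, so both positive
# and negative offsets come out as valid '+HH:00' / '-HH:00' strings
puts sprintf('%+03i:00', 2)   # +02:00
puts sprintf('%+03i:00', -4)  # -04:00
```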
