Writing to @timestamp in Logstash - elasticsearch

I need to write the value of a UNIX timestamp field to @timestamp so that I can correctly index data flowing through Logstash; I have this part working. However, I also have the requirement that @timestamp's value should be the insertion time. To this end I have made a temporary field that holds @timestamp's original value.
Here is what I am working with:
filter {
  csv {
    separator => " " # <- this white space is actually a tab, don't change it, it's already perfect
    skip_empty_columns => true
    columns => ["timestamp", ...]
  }
  # works just fine
  mutate {
    add_field => {
      "tmp" => "%{@timestamp}"
    }
  }
  # works just fine
  date {
    match => ["timestamp", "UNIX"]
    target => "@timestamp"
  }
  # this works too
  mutate {
    add_field => {
      "[@metadata][indexDate]" => "%{+YYYY-MM-dd}"
    }
  }
  # @timestamp is not being set back to its original value
  date {
    match => ["tmp", "UNIX"]
    target => "@timestamp"
  }
  # works just fine
  mutate {
    remove_field => ["tmp"]
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    # this works
    index => "indexname-%{[@metadata][indexDate]}"
  }
}
The problem is here:
date {
  match => ["tmp", "UNIX"]
  target => "@timestamp"
}
@timestamp is not being set back to its original value. When I check the data, it has the same value as the timestamp field.

When you copy @timestamp into tmp, the value is stored as an ISO8601 string, so you need to match it as ISO8601:
date {
  match => ["tmp", "ISO8601"]
  target => "@timestamp"
}
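For reference, a minimal sketch of the whole round trip with that one change applied (csv parsing omitted, field names as in the question; treat it as a sketch rather than a tested drop-in config):

filter {
  # preserve the original @timestamp (insertion time) as an ISO8601 string
  mutate {
    add_field => { "tmp" => "%{@timestamp}" }
  }
  # overwrite @timestamp with the UNIX timestamp from the csv column
  date {
    match => ["timestamp", "UNIX"]
    target => "@timestamp"
  }
  # derive the index date from the event time
  mutate {
    add_field => { "[@metadata][indexDate]" => "%{+YYYY-MM-dd}" }
  }
  # restore the insertion time; tmp now holds an ISO8601 string, not a UNIX value
  date {
    match => ["tmp", "ISO8601"]
    target => "@timestamp"
  }
  mutate {
    remove_field => ["tmp"]
  }
}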

Related

Removing grok matched field after using it

I use Filebeat to ship log files to Logstash and then filter out unnecessary fields. Everything works fine and I output the events to Elasticsearch, but there is a field which I use for the Elasticsearch index name. I define this variable in my grok match, but I couldn't find a way to remove it once it has served its purpose. I'll share my Logstash config below:
input {
  beats {
    port => "5044"
  }
}
filter {
  grok {
    match => { "[log][file][path]" => ".*(\\|\/)(?<myIndex>.*)(\\|\/).*.*(\\|\/).*(\\|\/).*(\\|\/).*(\\|\/)" }
  }
  json {
    source => "message"
  }
  mutate {
    remove_field => ["agent"]
    remove_field => ["input"]
    remove_field => ["@metadata"]
    remove_field => ["log"]
    remove_field => ["tags"]
    remove_field => ["host"]
    remove_field => ["@version"]
    remove_field => ["message"]
    remove_field => ["event"]
    remove_field => ["ecs"]
  }
  date {
    match => ["t","yyyy-MM-dd HH:mm:ss.SSS"]
    remove_field => ["t"]
  }
  mutate {
    rename => ["l","log_level"]
    rename => ["mt","msg_template"]
    rename => ["p","log_props"]
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9222" ]
    index => "%{myIndex}"
  }
  stdout { codec => rubydebug { metadata => true } }
}
I just want to remove the "myIndex" field from my index. With this config file I still see the field in Elasticsearch, and if possible I want to remove it. I've tried removing it together with the other fields, but that gave an error; I guess that's because it was removed before Logstash could pass it to the elasticsearch output.
Create the field under [@metadata]. Those fields are available for use inside Logstash but are ignored by outputs unless they use a rubydebug codec.
Adjust your grok filter:
match => { "[log][file][path]" => ".*(\\|\/)(?<[@metadata][myIndex]>.*)(\\|\/).*.*(\\|\/).*(\\|\/).*(\\|\/).*(\\|\/)" }
Delete [@metadata] from the mutate's remove_field and change the output configuration to use
index => "%{[@metadata][myIndex]}"
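Put together, the relevant pieces look roughly like this (only the grok and output sections are shown; the other filters stay as they are in the question, minus the remove_field => ["@metadata"] entry):

filter {
  grok {
    # capture the index name into @metadata so outputs never see it as a document field
    match => { "[log][file][path]" => ".*(\\|\/)(?<[@metadata][myIndex]>.*)(\\|\/).*.*(\\|\/).*(\\|\/).*(\\|\/).*(\\|\/)" }
  }
  # ... json, mutate, date and rename filters as in the question ...
}
output {
  elasticsearch {
    hosts => [ "localhost:9222" ]
    index => "%{[@metadata][myIndex]}"
  }
}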

Logstash: issues with replacing logfile time with @timestamp in Kibana

We have a Logstash script to parse our logfiles. We are facing an issue when trying to replace @timestamp with the logfile time. Below is the filter that we have used:
filter {
  json {
    source => "message"
    target => "doc"
  }
  mutate {
    copy => { "[doc][message]" => "mesg" }
    copy => { "[doc][log][file][path]" => "logpath" }
    remove_field => [ "[doc]" ]
  }
  if ( "/prodlogsfs/" not in [logpath] ) {
    drop { }
  }
  if [logpath] {
    dissect {
      mapping => {
        "logpath" => "%{deployment}deployment-%{?id}-%{?extra}"
      }
    }
  }
  grok {
    match => {
      "mesg" => [
        "^\s?\[%{DATA:loglevel}\] %{TIMESTAMP_ISO8601:logts} \[%{DATA:threadname}\] %{DATA:podname} %{DATA:filler1} \[%{DATA:classname}\] %{GREEDYDATA:fullmesg}",
        "(\s)+(?<exception>%{DATA}Exception)[:\s]+(?<trace>(%{DATA}at)+)"
      ]
    }
  }
  # date filter used to replace @timestamp with the logfile time
  if [logts] {
    date {
      match => [ "logts", "ISO8601" ]
      timezone => "Asia/Kolkata"
      target => "@timestamp"
    }
  }
}
With the above code, when we check the values of @timestamp and logts in Kibana, @timestamp shows the current time, whereas the logts time appears to be in the future (+5:30). We need help on how to make @timestamp match logts. Any help on this is much appreciated. Thanks in advance.
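No fix appears in the thread, but as a general debugging aid (a sketch, not taken from the original post) it helps to tag date-parse failures and dump events with a rubydebug stdout, so you can see whether the date filter actually ran and what logts contained; the tag name below is made up for illustration:

# inside the filter section
if [logts] {
  date {
    match => [ "logts", "ISO8601" ]
    timezone => "Asia/Kolkata"
    target => "@timestamp"
    tag_on_failure => ["logts_parse_failure"] # shows up in [tags] when parsing fails
  }
}
# inside the output section
stdout { codec => rubydebug }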

ELK - date defined in Logstash shows as string in Kibana

I have the following config file for logstash:
input {
  file {
    path => "/home/elk/data/visits.csv"
    start_position => "beginning"
    sincedb_path => "NUL"
  }
}
filter {
  csv {
    separator => ","
    columns => ["estado","tiempo_demora","poblacion","id_poblacion","edad_valor","cp","latitude_corregida","longitud_corregida","patologia","Fecha","id_tipo","id_personal","nasistencias","menor","Geopoint_corregido"]
  }
  date {
    match => ["Fecha","dd-MM-YYYY HH:mm"]
    target => "Fecha"
  }
  mutate { convert => ["nasistencias", "integer"] }
  mutate { convert => ["id_poblacion", "integer"] }
  mutate { convert => ["id_personal", "integer"] }
  mutate { convert => ["id_tipo", "integer"] }
  mutate { convert => ["cp", "integer"] }
  mutate { convert => ["edad_valor", "integer"] }
  mutate {
    convert => { "longitud_corregida" => "float" }
    convert => { "latitude_corregida" => "float" }
  }
  mutate {
    rename => {
      "longitud_corregida" => "[location][lon]"
      "latitude_corregida" => "[location][lat]"
    }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "medicalvisits-%{+dd.MM.YYYY}"
  }
  stdout {
    codec => json_lines
    codec => rubydebug
  }
}
From there, Fecha should have been sent to Elasticsearch as a date, but in Kibana, when I try to set it as the timestamp field, it doesn't appear, and it shows up as a string.
Any idea what I am doing wrong here?
The types in your index pattern are not the same as the types in your index template (the information actually being stored).
I would suggest that you overwrite the timestamp with the information sent by your Logstash. After all, what matters to you in most cases is the timestamp of the event, not the time the event was sent to your Elasticsearch.
With that said, why don't you just save your "Fecha" directly into @timestamp by means of the "date" filter in Logstash? Like this:
date {
  match => ["Fecha","dd-MM-YYYY HH:mm"]
  target => "@timestamp"
  tag_on_failure => ["fallo_filtro_fecha"]
}
Another option, if you really need that "Fecha" alongside @timestamp (not the best idea) and need Fecha to be of type "date", is to modify your index mapping to change that field's type to date. Like this (adjust as necessary):
PUT /nombre_de_tu_indice/_mapping
{
  "properties": {
    "Fecha": {
      "type": "date"
    }
  }
}
Of course, this change will only take effect for newly created indices or a reindexed one.
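If you go down that route, you can confirm the stored type afterwards by asking Elasticsearch for the field mapping (same placeholder index name as above):

GET /nombre_de_tu_indice/_mapping/field/Fecha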

Custom date-time is the same but not matching in the grok/date filter (Logstash)

The input is comma separated values:
"2010-08-19","09:12:55","56095675"
I created a custom date_time field, which appears to be in the right format (2010-08-19;09:12:55), but it is not matching.
filter {
  grok {
    match => { "message" => '"(%{GREEDYDATA:cust_date})","(%{TIME:cust_time})","(%{NUMBER:author})"'}
    add_field => {
      "date_time" => "%{cust_date};%{cust_time}"
    }
  }
}
date {
  match => ["date_time", "yyyy-MM-dd;hh:mm:ss"]
  target => "@timestamp"
  add_field => { "debug" => "timestampMatched"}
}
Output on Kibana:
cust_date August 18th 2010, 20:00:00.000
cust_time 09:12:55
date_time 2010-08-19;09:12:55
message "2010-08-19","09:12:55","56095675"
tags beats_input_codec_plain_applied, _dateparsefailure
It gives _dateparsefailure, yet the field appears to match the pattern exactly.
I tried different time formats such as YYYY-MM-dd;hh:mm:ss and YYYY-MM-dd;HH:mm:ss.
What am I doing wrong?
Help!
You should put the date plugin inside the filter section, right under grok.
filter {
  grok {
    match => { "message" => '"(%{GREEDYDATA:cust_date})","(%{TIME:cust_time})","(%{NUMBER:author})"'}
    add_field => {
      "date_time" => "%{cust_date};%{cust_time}"
    }
  }
  date {
    match => ["date_time", "yyyy-MM-dd;hh:mm:ss"]
    target => "@timestamp"
    add_field => { "debug" => "timestampMatched"}
  }
}
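One extra detail worth checking, beyond what the answer covers: in the date filter's pattern syntax, hh is the 1-12 clock hour, so values such as 13:05:00 would not parse; HH (hour of day, 0-23) is the safer token for these timestamps. The same block with only that change:

date {
  match => ["date_time", "yyyy-MM-dd;HH:mm:ss"] # HH = hour of day (0-23)
  target => "@timestamp"
  add_field => { "debug" => "timestampMatched" }
}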

How to set field types for input data in Logstash

I'm trying to load a CSV file into Elasticsearch through Logstash.
This is my configuration file:
input {
  file {
    codec => plain {
      charset => "ISO-8859-1"
    }
    path => ["PATH/*.csv"]
    sincedb_path => "PATH/.sincedb_path"
    start_position => "beginning"
  }
}
filter {
  if [message] =~ /^"ID","DATE"/ {
    drop { }
  }
  date {
    match => [ "DATE","yyyy-MM-dd HH:mm:ss" ]
    target => "DATE"
  }
  csv {
    columns => ["ID","DATE",...]
    separator => ","
    source => "message"
    remove_field => ["message","host","path","@version","@timestamp"]
  }
}
output {
  elasticsearch {
    embedded => false
    host => "localhost"
    cluster => "elasticsearch"
    node_name => "localhost"
    index => "index"
    index_type => "type"
  }
}
Now, the mapping produced in Elasticsearch types the DATE field as a string. I would like it to be typed as a date field.
In the filter section I tried to convert the field to a date, but it doesn't work.
How can I fix that?
Regards,
Alexandre
You have your filter chain set up in the wrong order: the date {} block needs to come after the csv {} block, because the DATE field only exists once the csv filter has parsed the message.
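Roughly like this (a sketch of the reordered filter section only, column list abbreviated exactly as in the question; @timestamp is left out of remove_field here since the event still needs it):

filter {
  if [message] =~ /^"ID","DATE"/ {
    drop { }
  }
  # parse the CSV first so the DATE field exists
  csv {
    columns => ["ID","DATE",...]
    separator => ","
    source => "message"
    remove_field => ["message","host","path","@version"]
  }
  # now the date filter has a DATE field to match against
  date {
    match => [ "DATE","yyyy-MM-dd HH:mm:ss" ]
    target => "DATE"
  }
}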
