Logstash date format grok pattern - elasticsearch

I have a log file which has a date-time in 'yyyyMMdd_HHmmss_SSS' format. I have managed to parse it with _ as the delimiter, which gives me 3 different text fields in ES. But I need this converted to ISO 8601 so I can query and visualize the data by date, hour, or minute.

If you don't specifically need ISO-8601, but care more about the events getting a queryable timestamp, the date filter sounds like a better fit for you.
filter {
  date {
    # "logdate" is the field holding the raw yyyyMMdd_HHmmss_SSS value
    match => [ "logdate", "yyyyMMdd_HHmmss_SSS" ]
  }
}
This will set the @timestamp field to be a date-searchable field.
However, if you really do need grok to do the work, you'll probably be best served by custom regex captures.
(?<logyear>\d{4,})(?<logmonth>\d\d)(?<logday>\d\d)_(and so on)
This uses named captures to pull each component out of the date string.
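If the goal is just to get a proper @timestamp, the custom capture can also grab the whole value in one go and hand it to the date filter. A minimal sketch, assuming the raw line is in the message field and the timestamp lands in a hypothetical logdate field:
filter {
  grok {
    # capture e.g. "20230115_134502_123" as a single field
    match => { "message" => "(?<logdate>\d{8}_\d{6}_\d{3})" }
  }
  date {
    # parse it into @timestamp, which Elasticsearch indexes as a date
    match => [ "logdate", "yyyyMMdd_HHmmss_SSS" ]
  }
}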

Related

Convert HTTPDATE timestamp to correct format in Graylog

I have Squid writing logs with a timestamp as dd/MMM/yyyy:HH:mm:ss ZZZZ
"27/Jul/2022:11:55:40 +0100"
I'm sending these logs into Graylog using Filebeat, then parsing the timestamp into individual fields using HTTPDATE in a Grok extractor, so I can get separate Month, Monthday, Year etc fields.
I need to replace the "message received" @timestamp field with the actual "event occurred" timestamp when the event is indexed in Elasticsearch.
How can I convert the Squid timestamp from HTTPDATE into yyyy-MM-dd HH:mm:ss format?
"2022-07-27 11:55:40"
Thanks
EDIT:
Actually I think I have this now. In case it helps anyone else, this was done with a Regex Replacement Extractor:
(screenshots of the three parts of the Regex Replacement Extractor configuration)
This is an excellent question for the community forum; try asking it there.
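For reference, the same conversion expressed as a Logstash grok + date filter (not the original extractor; the field name squid_timestamp is illustrative) would look roughly like this. It stores an ISO 8601 date in @timestamp rather than the exact "yyyy-MM-dd HH:mm:ss" string, but it makes the event-occurred time the indexed timestamp:
filter {
  grok {
    # pull the Squid timestamp out of the raw line
    match => { "message" => "%{HTTPDATE:squid_timestamp}" }
  }
  date {
    # parse "27/Jul/2022:11:55:40 +0100" and overwrite @timestamp with it
    match  => [ "squid_timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    target => "@timestamp"
  }
}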

How to parse a CSV file where a field contains the separator (comma) as part of its value

sample message - 111,222,333,444,555,val1in6th,val2in6th,777
The sixth column contains a value that itself contains a comma (val1in6th,val2in6th is a sample value of the 6th column).
When I use a simple csv filter this message gets converted to 8 fields. I want to be able to tell the filter that val1in6th,val2in6th should be treated as a single value and placed in the 6th column (it's okay not to have the comma between val1in6th and val2in6th in the output for the 6th column).
Switch plugins: instead of the csv filter, use the grok filter (docs here).
Then use a grok debugger to build a pattern for your lines, for example this one: https://grokdebug.herokuapp.com/
For your lines you could use this grok expression:
%{WORD:FIELD1},%{WORD:FIELD2},%{WORD:FIELD3},%{WORD:FIELD4},%{WORD:FIELD5},%{GREEDYDATA:FIELD6}
or:
%{INT:FIELD1},%{INT:FIELD2},%{INT:FIELD3},%{INT:FIELD4},%{INT:FIELD5},%{GREEDYDATA:FIELD6}
The second version only matches integers in the first five positions; note that to actually index them as numeric types in Elasticsearch you would still need a conversion, for example the :int suffix (%{INT:FIELD1:int}) or a mutate/convert.
For more on parsing CSV with the grok filter in Elastic, see the official Elastic blog guide; it explains grok in the context of an ingest pipeline, but the same patterns apply in Logstash.
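Dropped into a Logstash pipeline, the second expression could look roughly like this; the :int suffix shown here is one way to index the first five columns as integers (field names are the placeholders from above):
filter {
  grok {
    # everything after the fifth comma lands in FIELD6
    match => {
      "message" => "%{INT:FIELD1:int},%{INT:FIELD2:int},%{INT:FIELD3:int},%{INT:FIELD4:int},%{INT:FIELD5:int},%{GREEDYDATA:FIELD6}"
    }
  }
}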

Keyword search using Elasticsearch

I am new to Elasticsearch and I am trying to achieve a Text Search functionality using Elasticsearch. I have over 100 documents and every document has lines starting with timestamp notations.
Eg.
00:00:00 - 00:01:00 This is the first line
00:01:01 - 00:02:30 This is the second line
00:02:30 - 00:03:45 This is the third line
00:03:46 - 00:05:00 This is the fourth line
00:05:01 - 00:06:00 This is fifth line
...
And so on.
I am splitting each of these lines into different paragraphs and performing a text search over the documents.
Now, I want to search by keyword, where 1 or more keywords would be defined for, say, the lines between timestamps 00:00:00 - 00:05:00. Based on the keyword search, the entire data from 00:00:00 - 00:05:00 should be returned, i.e. all the lines between these timestamps.
Can you please help me understand how to achieve this functionality using Elasticsearch?
Thanks in advance!!
As far as I understand it, here is my opinion:
It is better to create an additional field (of a date/timestamp type) in your schema and run a range query on that field, since it will be used very frequently and your data is stored in a time-series manner.
[Not recommended] If your field type is "keyword" and you have stored the whole string in it, you would need a wildcard query, much like a LIKE '%yourstring%' query in SQL. This returns only partial data and is of course costly and slow.
[Not recommended] If your field type is "text", you would have to check whether the analyzer actually produces date/time terms; this also returns only partial data.
It is best to design your schema around your search queries. The first option will serve you better and will help you scale in the future.
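For the first option, a rough sketch of the query, assuming each line is indexed as its own document with hypothetical fields start_time (a date field with format HH:mm:ss) and line_text holding the text:
{
  "query": {
    "bool": {
      "must": [
        { "match": { "line_text": "your keywords" } },
        { "range": { "start_time": { "gte": "00:00:00", "lte": "00:05:00" } } }
      ]
    }
  }
}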

ELK most appropriate timestamp name: _ or @

What is the most appropriate name for the timestamp when utilizing Logstash to parse logs into Elasticsearch, then visualizing with Kibana?
I am defining the timestamp using date in a filter:
date {
  match => [ "logtime", "yy-MM-dd HH:mm:ss" ]
}
Logstash automatically puts this into the @timestamp field. Kibana can be configured to use any correctly formatted field as the timestamp, but it seems to be correct to use _timestamp in Elasticsearch. To do that, you have to mutate and rename the datestamp field.
mutate {
  rename => { "@timestamp" => "_timestamp" }
}
Which is slightly annoying.
This question could be entirely semantic - but is it most correct to use _timestamp, or is it just fine to use @timestamp? Are there any other considerations which should influence the naming of the timestamp field?
Elasticsearch allows you to define fields starting with an underscore; however, Kibana (since v4) will only show the ones declared outside of the _source document.
You should definitely stick with @timestamp, which is the standard name for the timestamp field in Logstash. Kibana will not allow you to use _timestamp.
Please note that _timestamp is a reserved and deprecated special field name. In fact, any field name starting with an underscore is reserved for Elasticsearch's future internal use. AFAIK the Logstash documentation examples use @timestamp as the field name, without any renaming.
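If you do want the parsed date in a field other than @timestamp, note that the date filter has a target option, so no mutate/rename is needed; a small sketch (logtime_parsed is just an illustrative name):
date {
  # @timestamp is the default target; override it only if you need a separate field
  match  => [ "logtime", "yy-MM-dd HH:mm:ss" ]
  target => "logtime_parsed"
}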

Elasticsearch date field: epoch millis input, string output?

Steps:
1. Define a date field in a mapping.
2. Insert an epoch-millisecond (long) value into that field.
Can Elasticsearch return a string value (yyyy-MM-ddTHH:mm:SS) of that field in a search?
From what I understand of the date-format documentation of Elasticsearch, it will always accept milliseconds-since-epoch input in addition to input in the format you configure, and it will produce String output using the (first) format given. If you don't provide a format, the "date_optional_time" format is used (yyyy-MM-dd'T'HH:mm:ss.SSSZZ).
If the time zone in there is a problem for you, you'd need to give ElasticSearch your intended format.
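One way to make the output format explicit is to declare it in the mapping. A minimal sketch of the mapping body for the field (the tweet type layout and the built-in date_hour_minute_second format, i.e. yyyy-MM-dd'T'HH:mm:ss, are assumptions about your setup; on newer Elasticsearch versions you may need to append ||epoch_millis to the format to keep accepting long input):
{
  "tweet": {
    "properties": {
      "date_field": {
        "type": "date",
        "format": "date_hour_minute_second"
      }
    }
  }
}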
I don't have the code to hand, but in my testing I believe I managed to do the following:
I used the date format on the field together with the fields parameter in the query to do this:
curl -XGET 'http://localhost:9200/twitter/tweet/1?fields=title,date_field.date_time'
using the date formats specified here: http://www.elasticsearch.org/guide/reference/mapping/date-format/
If you want the full document returned, this may be onerous. In that case, is it possible to use an alias 'view' mapping to have the result returned differently from your primary mapping? Possibly this has become a half-answer.
