ElasticSearch - not setting the date type - elasticsearch

I am trying out the ELK stack, and so far so good :)
I have run into a strange situation regarding parsing the date field and sending it to Elasticsearch. I manage to parse the field, and it really does get created in Elasticsearch, but it always ends up as a string.
I have tried many different combinations, and many of the things that people have suggested, but I still fail.
This is my setup:
The strings that come from Filebeat:
[2017-04-26 09:40:33] security.DEBUG: Stored the security token in the session. {"key":"securitysecured_area"} []
[2017-04-26 09:50:42] request.INFO: Matched route "home_logged_in". {"route_parameters":{"controller":"AppBundle\Controller\HomeLoggedInController::showAction","locale":"de","route":"homelogged_in"},"request_uri":"https://qa.someserver.de/de/home"} []
The logstash parsing section:
if [@metadata][type] == "feprod" or [@metadata][type] == "feqa" {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:logdate}" }
  }
  date {
    #timezone => "Europe/Berlin"
    match => [ "logdate", "yyyy-MM-dd HH:mm:ss" ]
  }
}
According to the documentation, my @timestamp field should be overwritten with the logdate value, but it is not happening.
In Elasticsearch I can see that the logdate field is being created and it has the value 2017-04-26 09:40:33, but its type is string.
I always create the index from scratch: I delete it first and let Logstash populate it.
I need either @timestamp overwritten with the actual date (not the date when it was indexed), or the logdate field created with the date type. Either is fine.
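A quick way to confirm what type the field actually ended up with is to ask Elasticsearch for its mapping (a sketch; replace your-index with whatever index Logstash writes to):
GET your-index/_mapping/field/logdate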

Unless you are explicitly setting [@metadata][type] somewhere that you aren't showing, that is your problem. It's not set by default; [type] is set by default from the 'type =>' parameter on your input.
You can validate this with a minimal complete example:
input {
  stdin {
    type => 'feprod'
  }
}
filter {
  if [@metadata][type] == "feprod" or [@metadata][type] == "feqa" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:logdate}" }
    }
    date {
      match => [ "logdate", "yyyy-MM-dd HH:mm:ss" ]
    }
  }
}
output {
  stdout { codec => "rubydebug" }
}
And running it:
echo '[2017-04-26 09:40:33] security.DEBUG: Stored the security token in the session. {"key":"securitysecured_area"} []' | bin/logstash -f test.conf
And getting the output:
{
    "@timestamp" => 2017-05-02T15:15:05.875Z,
    "@version" => "1",
    "host" => "xxxxxxxxx",
    "message" => "[2017-04-26 09:40:33] security.DEBUG: Stored the security token in the session. {\"key\":\"securitysecured_area\"} []",
    "type" => "feprod",
    "tags" => []
}
If you use just if [type] == ... it will work fine:
{
    "@timestamp" => 2017-04-26T14:40:33.000Z,
    "logdate" => "2017-04-26 09:40:33",
    "@version" => "1",
    "host" => "xxxxxxxxx",
    "message" => "[2017-04-26 09:40:33] security.DEBUG: Stored the security token in the session. {\"key\":\"securitysecured_area\"} []",
    "type" => "feprod",
    "tags" => []
}
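So a minimal fix for the original filter is to condition on [type] instead (a sketch that simply reuses the grok and date blocks from the question):
filter {
  if [type] == "feprod" or [type] == "feqa" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:logdate}" }
    }
    date {
      # overwrites @timestamp with the parsed value, converted to UTC
      match => [ "logdate", "yyyy-MM-dd HH:mm:ss" ]
    }
  }
}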

Related

Automatically parse log fields with Logstash

Let's say I have this kind of log:
Jun 2 00:00:00 192.168.14.4 date=2016-06-01 time=23:56:05
devname=POPB-FW-01 devid=FG1K2D3I14800220 logid=1059028704 type=utm
subtype=app-ctrl eventtype=app-ctrl-all level=information vd="root"
appid=40568 user="" srcip=10.20.4.35 srcport=52438
srcintf="VRF-PUBLIC" dstip=125.209.230.238 dstport=443 dstintf="OUT"
proto=6 service="HTTPS" sessionid=424666004 applist="Monitor-all"
appcat="Web.Others" app="HTTPS.BROWSER" action=pass
hostname="lcs.naver.com" url="/" msg="Web.Others: HTTPS.BROWSER,"
apprisk=medium
So with the code below, I can extract the timestamp and the IP into future Elasticsearch fields:
filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{client}" }
  }
}
Now, how do I automatically get fields for the rest of the log? Is there a simple way to say:
the thing before the "=" is the field name and the thing after is the value?
So I can obtain a JSON for elastic index with many fields for each log line :
{
    "path" => "C:/Users/yoyo/Documents/yuyu/temp.txt",
    "@timestamp" => 2017-11-29T10:50:18.947Z,
    "@version" => "1",
    "client" => "192.168.14.4",
    "timestamp" => "Jun 2 00:00:00",
    "date" => "2016-06-01",
    "time" => "23:56:05",
    "devname" => "POPB-FW-01 ",
    "devid" => "FG1K2D3I14800220",
    etc,...
}
Thanks in advance
Okay, I am really dumb.
It was easy: rather than searching on Google for how to match an equals sign, I just had to search for key-value matching with Logstash.
So I just have to write:
filter {
  kv {
  }
}
And it's done!
Sorry
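For a log like the one above, where some values are double-quoted, the kv options can also be spelled out explicitly (a sketch; the splitters shown are just the plugin defaults, and trim_value is an assumption about how you want the quotes handled):
filter {
  kv {
    # pairs are separated by spaces, keys from values by "="
    field_split => " "
    value_split => "="
    # strip surrounding double quotes from values such as vd="root"
    trim_value => "\""
  }
}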

Drop log messages containing a specific string

So I have log messages of the format:
[INFO] <blah.blah> 2016-06-27 21:41:38,263 some text
[INFO] <blah.blah> 2016-06-28 18:41:38,262 some other text
Now I want to drop all logs that do not contain the specific string "xyz" and keep all the rest. I also want to index the timestamp.
grokdebug is not helping much.
This is my attempt :
input {
  file {
    path => "/Users/username/Desktop/validateLogconf/logs/*"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => {
      "message" => '%{SYSLOG5424SD:loglevel} <%{JAVACLASS:job}> %{GREEDYDATA:content}'
    }
  }
  date {
    match => [ "Date", "YYYY-mm-dd HH:mm:ss" ]
    locale => en
  }
}
output {
  stdout {
    codec => plain {
      charset => "ISO-8859-1"
    }
  }
  elasticsearch {
    hosts => "http://localhost:9201"
    index => "hello"
  }
}
I am new to grok, so the patterns above might not make sense. Please help.
To drop messages that do not contain the string xyz:
if ([message] !~ "xyz") {
  drop { }
}
Your grok pattern is not grabbing the date part of your logs.
Once you have a field from your grok pattern containing the date, you can invoke the date filter on this field.
So your grok filter should look like this:
grok {
  match => {
    "message" => '%{SYSLOG5424SD:loglevel} <%{JAVACLASS:job}> %{TIMESTAMP_ISO8601:Date} %{GREEDYDATA:content}'
  }
}
I added a part to grab the date, which will be in the field Date. Then you can use the date filter:
date {
  match => [ "Date", "YYYY-MM-dd HH:mm:ss,SSS" ]
  locale => en
}
I added the ,SSS so that the format matches the one from the Date field (and note that the month is MM; lowercase mm would mean minutes).
The parsed date will be stored in the @timestamp field, unless specified differently with the target parameter.
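Putting the pieces of this answer together, the whole filter section could look roughly like this (a sketch assembled from the snippets above, not a tested config):
filter {
  # keep only events that mention "xyz"
  if ([message] !~ "xyz") {
    drop { }
  }
  grok {
    match => {
      "message" => '%{SYSLOG5424SD:loglevel} <%{JAVACLASS:job}> %{TIMESTAMP_ISO8601:Date} %{GREEDYDATA:content}'
    }
  }
  date {
    # parses Date into @timestamp
    match => [ "Date", "YYYY-MM-dd HH:mm:ss,SSS" ]
    locale => "en"
  }
}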
To check if your message contains a substring, you can do:
if [message] =~ "a" {
  mutate {
    add_field => { "hello" => "world" }
  }
}
So in your case you can use the if to invoke the drop{} filter, or you can wrap your output plugin in it.
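Wrapping the output would look roughly like this (a sketch that reuses the elasticsearch output from the question):
output {
  if [message] =~ "xyz" {
    elasticsearch {
      hosts => "http://localhost:9201"
      index => "hello"
    }
  }
}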
To parse a date and write it back to your timestamp field, you can use something like this:
date {
  locale => "en"
  match => ["timestamp", "ISO8601"]
  timezone => "UTC"
  target => "@timestamp"
  add_field => { "debug" => "timestampMatched" }
}
This matches my timestamp in:
Source field: "timestamp" (see match)
Format is "ISO...", you can use a custom format that matches your timestamp
timezone - self explanatory
target - write it back into the event's "#timestamp" field
Add a debug field to check that it has been matched correctly
Hope that helps,
Artur

Convert log message timestamp to UTC before storing it in Elasticsearch

I am collecting and parsing Tomcat access-log messages using Logstash, and am storing the parsed messages in Elasticsearch.
I am using Kibana to display the log messages in Elasticsearch.
Currently I am using Elasticsearch 2.0.0, Logstash 2.0.0, and Kibana 4.2.1.
An access-log line looks something like the following:
02-08-2016 19:49:30.669 ip=11.22.333.444 status=200 tenant=908663983 user=0a4ac75477ed42cfb37dbc4e3f51b4d2 correlationId=RID-54082b02-4955-4ce9-866a-a92058297d81 request="GET /pwa/rest/908663983/rms/SampleDataDeployment HTTP/1.1" userType=Apache-HttpClient requestInfo=- duration=4 bytes=2548 thread=http-nio-8080-exec-5 service=rms itemType=SampleDataDeployment itemOperation=READ dataLayer=MongoDB incomingItemCnt=0 outgoingItemCnt=7
The time displayed in the log file (ex. 02-08-2016 19:49:30.669) is in local time (not UTC!)
Here is how I parse the message line:
filter {
  grok {
    match => { "message" => "%{DATESTAMP:logTimestamp}\s+" }
  }
  kv {}
  mutate {
    convert => { "duration" => "integer" }
    convert => { "bytes" => "integer" }
    convert => { "status" => "integer" }
    convert => { "incomingItemCnt" => "integer" }
    convert => { "outgoingItemCnt" => "integer" }
    gsub => [ "message", "\r", "" ]
  }
  grok {
    match => { "request" => [ "(?:%{WORD:method} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpVersion})?)" ] }
    overwrite => [ "request" ]
  }
}
I would like Logstash to convert the time read from the log message ('logTimestamp' field) into UTC before storing it in Elasticsearch.
Can someone assist me with that please?
--
I have added the date filter to my processing, but I had to add a timezone.
filter {
  grok {
    match => { "message" => "%{DATESTAMP:logTimestamp}\s+" }
  }
  date {
    match => [ "logTimestamp", "MM-dd-yyyy HH:mm:ss.SSS" ]
    timezone => "Asia/Jerusalem"
    target => "logTimestamp"
  }
  ...
}
Is there a way to convert the date to UTC without supplying the local timezone, such that Logstash takes the timezone of the machine it is running on?
The motivation behind this question is I would like to use the same configuration file in all my deployments, in various timezones.
That's what the date{} filter is for: to parse a string field containing a date and replace the [@timestamp] field with that value, in UTC.
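A minimal sketch of that, using the logTimestamp field from the question (the remark about the default zone is my reading of the date filter documentation, not something stated in this thread):
filter {
  date {
    # parse the local-time string and store the result in @timestamp as UTC;
    # with no timezone option the filter should fall back to the platform
    # default of the machine Logstash runs on, which is what lets the same
    # config be reused across deployments
    match => [ "logTimestamp", "MM-dd-yyyy HH:mm:ss.SSS" ]
  }
}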
This can also be done in an ingest processor as follows:
PUT _ingest/pipeline/change_local_time_to_iso
{
  "processors": [
    {
      "date": {
        "field": "my_time",
        "target_field": "my_time",
        "formats": ["dd/MM/yyyy HH:mm:ss"],
        "timezone": "Europe/Madrid"
      }
    }
  ]
}
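To check the processor before wiring it into an index, the pipeline can be exercised with the simulate API (a sketch; the sample document and its my_time value are made up for illustration):
POST _ingest/pipeline/change_local_time_to_iso/_simulate
{
  "docs": [
    { "_source": { "my_time": "26/04/2017 09:40:33" } }
  ]
}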

logstash, syslog and grok

I am working on an ELK-stack configuration. logstash-forwarder is used as a log shipper, and each type of log is tagged with a type tag:
{
  "network": {
    "servers": [ "___:___" ],
    "ssl ca": "___",
    "timeout": 15
  },
  "files": [
    {
      "paths": [
        "/var/log/secure"
      ],
      "fields": {
        "type": "syslog"
      }
    }
  ]
}
That part works fine... Now, I want Logstash to split the message string into its parts; luckily, that is already implemented in the default grok patterns, so the logstash.conf remains simple so far:
input {
  lumberjack {
    port => 6782
    ssl_certificate => "___"
    ssl_key => "___"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => [ "message", "%{SYSLOGLINE}" ]
    }
  }
}
output {
  elasticsearch {
    cluster => "___"
    template => "___"
    template_overwrite => true
    node_name => "logstash-___"
    bind_host => "___"
  }
}
The issue I have here is that the document received by Elasticsearch still holds the whole line (including the timestamp etc.) in the message field. Also, @timestamp still shows the date of when Logstash received the message, which makes it bad to search, since Kibana queries @timestamp in order to filter by date... Any idea what I'm doing wrong?
Thanks, Daniel
The reason your "message" field contains the original log line (including timestamps etc) is that the grok filter by default won't allow existing fields to be overwritten. In other words, even though the SYSLOGLINE pattern,
SYSLOGLINE %{SYSLOGBASE2} %{GREEDYDATA:message}
captures the message into a "message" field, it won't overwrite the current field value. The solution is to set the grok filter's "overwrite" parameter.
grok {
  match => [ "message", "%{SYSLOGLINE}" ]
  overwrite => [ "message" ]
}
To populate the "@timestamp" field, use the date filter. This will probably work for you:
date {
  match => [ "timestamp", "MMM dd HH:mm:ss", "MMM d HH:mm:ss" ]
}
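Combined, the syslog branch of the filter would then look something like this (a sketch built from the two snippets above):
filter {
  if [type] == "syslog" {
    grok {
      match => [ "message", "%{SYSLOGLINE}" ]
      # let SYSLOGLINE's own message capture replace the raw line
      overwrite => [ "message" ]
    }
    date {
      # SYSLOGBASE2 puts the syslog timestamp into the "timestamp" field
      match => [ "timestamp", "MMM dd HH:mm:ss", "MMM d HH:mm:ss" ]
    }
  }
}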
It is hard to know where the problem is without seeing an example event that causes it. I suggest you try the grok debugger in order to verify the pattern is correct, and to adjust it to your needs once you see the problem.

Logstash date parsing as timestamp using the date filter

Well, after looking around quite a lot, I could not find a solution to my problem, as it "should" work but obviously doesn't.
I'm using Logstash 1.4.2-1-2-2c0f5a1 on an Ubuntu 14.04 LTS machine, and I am receiving messages such as the following one:
2014-08-05 10:21:13,618 [17] INFO Class.Type - This is a log message from the class:
BTW, I am also multiline
In the input configuration, I do have a multiline codec and the event is parsed correctly. I also separate the event text in several parts so that it is easier to read.
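For reference, such an input could look roughly like this (a sketch, since the actual input section is not shown; the path and type are taken from the Kibana document below):
input {
  file {
    path => "/mnt/folder/thisIsTheLogFile.log"
    type => "utg-su"
    codec => multiline {
      # lines that do not start with a timestamp belong to the previous event
      pattern => "^%{TIMESTAMP_ISO8601} "
      negate => true
      what => "previous"
    }
  }
}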
In the end, I obtain, as seen in Kibana, something like the following (JSON view):
{
  "_index": "logstash-2014.08.06",
  "_type": "customType",
  "_id": "PRtj-EiUTZK3HWAm5RiMwA",
  "_score": null,
  "_source": {
    "@timestamp": "2014-08-06T08:51:21.160Z",
    "@version": "1",
    "tags": [
      "multiline"
    ],
    "type": "utg-su",
    "host": "ubuntu-14",
    "path": "/mnt/folder/thisIsTheLogFile.log",
    "logTimestamp": "2014-08-05;10:21:13.618",
    "logThreadId": "17",
    "logLevel": "INFO",
    "logMessage": "Class.Type - This is a log message from the class:\r\n BTW, I am also multiline\r"
  },
  "sort": [
    "21",
    1407315081160
  ]
}
You may have noticed that I put a ";" in the timestamp. The reason is that I want to be able to sort the logs using the timestamp string, and apparently logstash is not that good at that (e.g.: http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/multi-fields.html).
I have unsuccessfully tried to use the date filter in multiple ways, and it apparently did not work.
date {
  locale => "en"
  match => ["logTimestamp", "YYYY-MM-dd;HH:mm:ss.SSS", "ISO8601"]
  timezone => "Europe/Vienna"
  target => "@timestamp"
  add_field => { "debug" => "timestampMatched" }
}
Since I read that the Joda library may have problems if the string is not strictly ISO 8601-compliant (very picky and expects a T, see https://logstash.jira.com/browse/LOGSTASH-180), I also tried to use mutate to convert the string to something like 2014-08-05T10:21:13.618 and then use "YYYY-MM-dd'T'HH:mm:ss.SSS". That also did not work.
I do not want to have to manually put a +02:00 on the time because that would give problems with daylight saving.
In any of these cases, the event goes to Elasticsearch, but the date filter apparently does nothing, as @timestamp and logTimestamp are different and no debug field is added.
Any idea how I could make the logTime strings properly sortable? I focused on converting them to a proper timestamp, but any other solution would also be welcome.
When sorting over @timestamp, Elasticsearch can do it properly, but since this is not the "real" log timestamp, but rather when the Logstash event was read, I (obviously) also need to be able to sort over logTimestamp. Sorting over that string field is then obviously not that useful.
Any help is welcome! Just let me know if I forgot some information that may be useful.
Update:
Here is the filter config file that finally worked:
# Filters messages like this:
# 2014-08-05 10:21:13,618 [17] INFO Class.Type - This is a log message from the class:
#   BTW, I am also multiline
# Take only type- events (type-componentA, type-componentB, etc)
filter {
  # You cannot write an "if" outside of the filter!
  if "type-" in [type] {
    grok {
      # Parse timestamp data. We need the "(?m)" so that grok (Oniguruma internally) correctly parses multi-line events
      patterns_dir => "./patterns"
      match => [ "message", "(?m)%{TIMESTAMP_ISO8601:logTimestampString}[ ;]\[%{DATA:logThreadId}\][ ;]%{LOGLEVEL:logLevel}[ ;]*%{GREEDYDATA:logMessage}" ]
    }
    # The timestamp may have commas instead of dots. Convert so as to store everything in the same way
    mutate {
      gsub => [
        # replace all commas with dots
        "logTimestampString", ",", "."
      ]
    }
    mutate {
      gsub => [
        # make the logTimestamp sortable. With a space, it is not! This does not work that well, in the end
        # but somehow apparently makes things easier for the date filter
        "logTimestampString", " ", ";"
      ]
    }
    date {
      locale => "en"
      match => ["logTimestampString", "YYYY-MM-dd;HH:mm:ss.SSS"]
      timezone => "Europe/Vienna"
      target => "logTimestamp"
    }
  }
}
filter {
  if "type-" in [type] {
    # Remove already-parsed data
    mutate {
      remove_field => [ "message" ]
    }
  }
}
I have tested your date filter. It works for me!
Here is my configuration
input {
  stdin {}
}
filter {
  date {
    locale => "en"
    match => ["message", "YYYY-MM-dd;HH:mm:ss.SSS"]
    timezone => "Europe/Vienna"
    target => "@timestamp"
    add_field => { "debug" => "timestampMatched" }
  }
}
output {
  stdout {
    codec => "rubydebug"
  }
}
And I use this input:
2014-08-01;11:00:22.123
The output is:
{
    "message" => "2014-08-01;11:00:22.123",
    "@version" => "1",
    "@timestamp" => "2014-08-01T09:00:22.123Z",
    "host" => "ABCDE",
    "debug" => "timestampMatched"
}
So, please make sure that your logTimestamp has the correct value.
It is probably some other problem. Or could you provide your log event and Logstash configuration for more discussion? Thank you.
This worked for me - with a slightly different datetime format:
# 2017-11-22 13:00:01,621 INFO [AtlassianEvent::0-BAM::EVENTS:pool-2-thread-2] [BuildQueueManagerImpl] Sent ExecutableQueueUpdate: addToQueue, agents known to be affected: []
input {
  file {
    path => "/data/atlassian-bamboo.log"
    start_position => "beginning"
    type => "logs"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601} "
      charset => "ISO-8859-1"
      negate => true
      what => "previous"
    }
  }
}
filter {
  grok {
    match => [ "message", "(?m)^%{TIMESTAMP_ISO8601:logtime}%{SPACE}%{LOGLEVEL:loglevel}%{SPACE}\[%{DATA:thread_id}\]%{SPACE}\[%{WORD:classname}\]%{SPACE}%{GREEDYDATA:logmessage}" ]
  }
  date {
    match => ["logtime", "yyyy-MM-dd HH:mm:ss,SSS", "yyyy-MM-dd HH:mm:ss,SSS Z", "MMM dd, yyyy HH:mm:ss a" ]
    timezone => "Europe/Berlin"
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
