“rename” filter didn’t rename the field in events - logstash-configuration

I used the mutate filter's "rename" option to rename a field in my events. Logstash didn't show any error on restart, but there doesn't seem to be any change to the field name (checked in the Sense plugin). I added "rename" after indexing the file. How can I rename a field after indexing? I'm using LS 2.0 and ES 2.0. This is a piece of my logstash.conf file:
filter {
  grok {
    match => { "message" => "^(?<EventTime>[0-9 \-\:]*)\s(?<MovieName>[\w.\-\']*)\s(?<Rating>[\d.]+)\s(?<NoOfDownloads>\d+)\s(?<NoOfViews>\d+)" }
  }
  mutate {
    convert => { "Rating" => "float" }
  }
  mutate {
    convert => { "NoOfDownloads" => "integer" }
  }
  mutate {
    convert => { "NoOfViews" => "integer" }
  }
  mutate {
    rename => { "NoOfViews" => "TotalViews" }
  }
}

You need to either reindex the data or create an Elasticsearch alias for it. Directly renaming a field on already indexed data is not possible.
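If you go the reindex route, one way that stays inside Logstash is to read the old index back with the elasticsearch input plugin, rename the field in flight, and write the documents to a new index. A minimal sketch, assuming the old index is called movies and the new one movies-v2 (both placeholder names), and an input plugin version whose docinfo metadata lands under [@metadata]:

input {
  elasticsearch {
    hosts => "localhost:9200"
    index => "movies"          # existing index (placeholder name)
    docinfo => true            # expose _id so documents keep their identity
  }
}
filter {
  mutate {
    rename => { "NoOfViews" => "TotalViews" }
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "movies-v2"       # new index (placeholder name)
    document_id => "%{[@metadata][_id]}"   # metadata path depends on the input plugin version
  }
}

Once the new index is verified you can delete the old one and, if needed, point an index alias with the old name at movies-v2 so existing queries keep working.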

Related

Elasticsearch change field type from filter dissect

I use logstash-logback-encoder to send Java log files to Logstash, and then to Elasticsearch. To parse the message in the Java log, I use the following filter to dissect the message:
input {
  file {
    path => "/Users/MacBook-201965/Work/java/logs/oauth-logstash.log"
    start_position => "beginning"
    codec => "json"
  }
}
filter {
  if "EXECUTION_TIME" in [tags] {
    dissect {
      mapping => {
        "message" => "%{endpoint} timeMillis:[%{execution_time_millis}] data:%{additional_data}"
      }
    }
    mutate {
      convert => { "execution_time_millis" => "integer" }
    }
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "elk-%{+YYYY}"
    document_type => "log"
  }
  stdout {
    codec => json
  }
}
It dissects the message so I can get the value of execution_time_millis. However, the data type is string. I created the index using a Kibana index pattern. How can I change the data type of execution_time_millis to long?
Here is a sample JSON message from Logback:
{
  "message": "/tests/{id} timeMillis:[142] data:2282||0:0:0:0:0:0:0:1",
  "logger_name": "com.timpamungkas.oauth.client.controller.ElkController",
  "level_value": 20000,
  "endpoint": "/tests/{id}",
  "execution_time_millis": "142",
  "@version": 1,
  "host": "macbook201965s-MacBook-Air.local",
  "thread_name": "http-nio-8080-exec-7",
  "path": "/Users/MacBook-201965/Work/java/logs/oauth-logstash.log",
  "@timestamp": "2018-01-04T11:20:20.100Z",
  "level": "INFO",
  "tags": [
    "EXECUTION_TIME"
  ],
  "additional_data": "2282||0:0:0:0:0:0:0:1"
}
{
  "message": "/tests/{id} timeMillis:[110] data:2280||0:0:0:0:0:0:0:1",
  "logger_name": "com.timpamungkas.oauth.client.controller.ElkController",
  "level_value": 20000,
  "endpoint": "/tests/{id}",
  "execution_time_millis": "110",
  "@version": 1,
  "host": "macbook201965s-MacBook-Air.local",
  "thread_name": "http-nio-8080-exec-5",
  "path": "/Users/MacBook-201965/Work/java/logs/oauth-logstash.log",
  "@timestamp": "2018-01-04T11:20:19.780Z",
  "level": "INFO",
  "tags": [
    "EXECUTION_TIME"
  ],
  "additional_data": "2280||0:0:0:0:0:0:0:1"
}
Thank you
If you have already indexed the documents, you'll have to reindex the data after changing the datatype of any field.
However, for events indexed from now on, you can use the mutate filter's convert option to change the type of the millis field from string to integer (convert does not support long):
https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html#plugins-filters-mutate-convert
Also, if you are going to add multiple indices whose names follow a pattern (here elk-*), define an Elasticsearch index template before creating them. Otherwise, you can define your index mapping up front and then start indexing.
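For example, a template along these lines would map execution_time_millis as long for every index matching elk-*. This is a sketch using Elasticsearch 6.x syntax and the log type from the question's output section; on 5.x and earlier, use "template": "elk-*" instead of index_patterns. It only affects indices created after the template is installed, so the existing index still needs to be reindexed or recreated:

PUT _template/elk
{
  "index_patterns": ["elk-*"],
  "mappings": {
    "log": {
      "properties": {
        "execution_time_millis": { "type": "long" }
      }
    }
  }
}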

Logstash Mutate Filter to Convert Field type is not working

I have a field traceinfo.duration in my webapp log. ES maps it as a string, but I want to change its field type to integer. My logstash.conf contains the following filter section:
filter {
  if "webapp-log" in [tags] {
    json { source => "message" }
    mutate {
      convert => { "[traceinfo][duration]" => "integer" }
    }
    mutate {
      remove_field => ["[beat][hostname]","[beat][name]"]
    }
  }
}
I am creating a new index with this configuration to test it, but my field type in Kibana is still string for the traceinfo.duration field. My Logstash version is 5.3.0. Please help.

How to filter {"foo":"bar", "bar": "foo"} with grok to get only the foo field?

I copied
{"name":"myapp","hostname":"banana.local","pid":40161,"level":30,"msg":"hi","time":"2013-01-04T18:46:23.851Z","v":0}
from https://github.com/trentm/node-bunyan and saved it as my logs.json. I am trying to import only two fields (name and msg) into Elasticsearch via Logstash. The problem is that I depend on a filter that I have not been able to write. I have successfully imported the whole line as a single message, but that is of little use in my real case.
That said, how can I import only name and msg into Elasticsearch? I tested several alternatives using http://grokdebug.herokuapp.com/ to reach a useful filter, with no success at all.
For instance, %{GREEDYDATA:message} will bring in the entire line as a single message, but how do I split it and ignore everything other than the name and msg fields?
In the end, I am planning to use this here:
input {
  file {
    type => "my_type"
    path => [ "/home/logs/logs.log" ]
    codec => "json"
  }
}
filter {
  grok {
    match => { "message" => "data=%{GREEDYDATA:request}" }
  }
  #### some extra lines here probably
}
output {
  elasticsearch {
    codec => json
    hosts => "http://127.0.0.1:9200"
    index => "indextest"
  }
  stdout { codec => rubydebug }
}
I have just gone through the list of available Logstash filters. The prune filter should match your need.
Assuming you have installed the prune filter, your config file should look like this:
input {
  file {
    type => "my_type"
    path => [ "/home/logs/logs.log" ]
    codec => "json"
  }
}
filter {
  prune {
    whitelist_names => [
      "@timestamp",
      "type",
      "name",
      "msg"
    ]
  }
}
output {
  elasticsearch {
    codec => json
    hosts => "http://127.0.0.1:9200"
    index => "indextest"
  }
  stdout { codec => rubydebug }
}
Please note that you will want to keep type so that Elasticsearch indexes the document into the correct type. @timestamp is required if you want to view the data in Kibana.

Plot a Tile map with the ELK stack

I'm trying to create a tile map with Kibana. My Logstash conf file works correctly and generates everything Kibana needs to plot a tile map. This is my Logstash conf:
input {
  file {
    path => "/home/ec2-user/part.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  csv {
    separator => ","
    columns => ["kilo_bytes_total","ip","session_number","request_number_total","duration_minutes_total","referer_list","filter_match_count_avg","request_number_avg","duration_minutes_avg","kilo_bytes_avg","segment_duration_avg","req_by_minute_avg","segment_mix_rank_avg","offset_avg_avg","offset_std_avg","extrem_interval_count_avg","pf0_avg","pf1_avg","pf2_avg","pf3_avg","pf4_avg","code_0_avg","code_1_avg","code_2_avg","code_3_avg","code_4_avg","code_5_avg","volume_classification_filter_avg","code_classification_filter_avg","profiles_classification_filter_avg","strange_classification_filter_avg"]
  }
  geoip {
    source => "ip"
    database => "/home/ec2-user/logstash-5.2.0/GeoLite2-City.mmdb"
    target => "geoip"
    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    add_tag => "geoip"
  }
  mutate {
    convert => [ "[geoip][coordinates]", "float" ]
  }
}
output {
  elasticsearch {
    index => "geotrafficip"
  }
}
And this is what it generates: (screenshot of the indexed documents)
It looks cool. But when I try to create my tile map, I get this message: (screenshot of the Kibana error)
What should I do?
It seems that I must add the possibility to use dynamic templates somewhere. Should I create a template and add it to my Logstash conf file?
Can anybody give me some feedback? Thanks!
If you look in the Kibana settings for your index, you'll need at least one field to show up with a type of geo_point to be able to get anything on a map.
If you don't already have a geo_point field, you'll need to re-index your data after setting up an appropriate mapping for the geoip.coordinates field. For example: https://stackoverflow.com/a/42004303/2785358
If you are using a relatively new version of Elasticsearch (2.3 or later), re-indexing your data is fairly easy: create a new index with the correct mapping, use the reindex API to copy the data to the new index, delete the original index, and then reindex back to the original name.
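A minimal sketch of those steps, assuming the documents were indexed with the mapping type logs and using geotrafficip_v2 as a scratch index name (both are assumptions; check the actual type with GET geotrafficip/_mapping first):

PUT geotrafficip_v2
{
  "mappings": {
    "logs": {
      "properties": {
        "geoip": {
          "properties": {
            "coordinates": { "type": "geo_point" }
          }
        }
      }
    }
  }
}

POST _reindex
{
  "source": { "index": "geotrafficip" },
  "dest": { "index": "geotrafficip_v2" }
}

After verifying the copy, delete geotrafficip, recreate it with the same mapping and reindex back, or simply point your Kibana index pattern at geotrafficip_v2.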
You are using the geoip filter incorrectly and are trying to convert the longitude and latitude to float yourself. Get rid of your mutate filter and change the geoip filter to this:
geoip {
  source => "ip"
  fields => ["latitude","longitude"]
  add_tag => "geoip"
}
This will create the appropriate fields and the required GeoJSON object.
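Whichever way the coordinates are produced, keep in mind that Kibana only offers a tile map when the field is mapped as geo_point, and the template that ships with the Logstash Elasticsearch output only applies to indices named logstash-*. Since the index here is geotrafficip, a custom template is probably needed as well. A sketch, assuming the lon/lat pair ends up in [geoip][coordinates] as in the original config (adjust the field name to whatever your filter actually emits; the _default_ mapping syntax applies to Elasticsearch 5.x):

PUT _template/geotrafficip
{
  "template": "geotrafficip*",
  "mappings": {
    "_default_": {
      "properties": {
        "geoip": {
          "properties": {
            "coordinates": { "type": "geo_point" }
          }
        }
      }
    }
  }
}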

Data type conversion using logstash grok

Basic is a float field. The mentioned index is not present in Elasticsearch. When running the config file with logstash -f, I get no exception, yet the data entered in Elasticsearch shows the mapping of Basic as string. How do I rectify this? And how do I do this for multiple fields?
input {
  file {
    path => "/home/sagnik/work/logstash-1.4.2/bin/promosms_dec15.csv"
    type => "promosms_dec15"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  grok {
    match => [
      "Basic", " %{NUMBER:Basic:float}"
    ]
  }
  csv {
    columns => ["Generation_Date","Basic"]
    separator => ","
  }
  ruby {
    code => "event['Generation_Date'] = Date.parse(event['Generation_Date']);"
  }
}
output {
  elasticsearch {
    action => "index"
    host => "localhost"
    index => "promosms-%{+dd.MM.YYYY}"
    workers => 1
  }
}
You have two problems. First, your grok filter is listed prior to the csv filter, and because filters are applied in order, there won't be a "Basic" field to convert when the grok filter is applied.
Secondly, unless you explicitly allow it, grok won't overwrite existing fields. In other words,
grok {
  match => [
    "Basic", " %{NUMBER:Basic:float}"
  ]
}
will always be a no-op. Either specify overwrite => ["Basic"] or, preferably, use mutate's type conversion feature:
mutate {
  convert => ["Basic", "float"]
}
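Putting that together, a sketch of the reworked filter block from the question (the grok filter is dropped entirely, since the csv filter already extracts Basic):

filter {
  csv {
    columns => ["Generation_Date","Basic"]
    separator => ","
  }
  mutate {
    convert => ["Basic", "float"]   # runs after csv has created the field
  }
  ruby {
    code => "event['Generation_Date'] = Date.parse(event['Generation_Date']);"
  }
}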
