My Logstash config is something like the following:
if "user" in [tags] {
elasticsearch {
hosts => ["localhost:9200"]
action => "index"
index => "user-%{+YYYY.MM.dd}"
template => '/path/to/elastic-template.json'
flush_size => 50
}
}
And the JSON template contains the lines:
"fields" : {
"{name}" : {"type": "string", "index" : "analyzed", "omit_norms" : true, "index_options" : "docs"},
"{name}.raw" : {"type": "string", "index" : "not_analyzed", "ignore_above" : 256}
}
So I assume the .raw field can be used when searching or generating visualizations.
However, after I removed the existing index and rebuilt it, I can see the data, but I still cannot find the .raw field in Kibana's settings, Discover, or Visualize.
How do I use the .raw field?
The template fragment you posted isn't valid JSON on its own. If you want a raw sub-field that is not_analyzed, you have to define it as a multi-field like this:
"action" : {
"type" : "string",
"fields" : {
"raw" : {
"index" : "not_analyzed",
"type" : "string"
}
}
}
This will create an action.raw field.
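Once the index is rebuilt with that mapping, the sub-field can be queried with an exact-match term query. A minimal sketch (the user-* index pattern and the "login" value are only illustrative):

curl -XGET 'localhost:9200/user-*/_search?pretty' -d '{
  "query" : {
    "term" : { "action.raw" : "login" }
  }
}'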
I encountered the same issue.
I'm using ES 5.5.1 and Logstash 5.5.1; below is my template file:
{
  "template": "access_log",
  "settings": {
    "index.refresh_interval" : "5s"
  },
  "mappings": {
    "log": {
      "properties": {
        "geoip": {
          "properties": {
            "location" : {
              "type" : "geo_point",
              "index": "false"
            }
          }
        }
      }
    }
  }
}
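If the mapping still doesn't look right, a quick way to confirm whether the template was actually applied is to inspect the mapping of the resulting index (a sketch, assuming the index is literally named access_log, as in the template pattern):

curl -XGET 'localhost:9200/access_log/_mapping?pretty'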
Edit: See below for the solution
I'm currently having an issue with the templating in Grafana - trying to get a dropdown of hostnames from some data I'm feeding into Elasticsearch via Logstash's Graphite plugin, so I can build a dynamic template in Grafana.
The versions are:
Grafana 4.1.2 + Elasticsearch/Logstash 5.2.1
The terms query I'm trying to use in Grafana is as follows, per the docs on the Grafana website (http://docs.grafana.org/features/datasources/elasticsearch/):
{"find": "terms", "field": "host_name"}
This works fine if the field is a numeric type - e.g. I get results in the template for metric_value - but it doesn't seem to work for text/string fields. I'm wondering if this is due to the way I'm constructing or ingesting the fields; you can see below how I'm trying to achieve this. Note: I've tried both "keyword" and "text" types for these fields, and neither seems to work.
This is the Logstash configuration I'm using - basically trying to split the Graphite-style metric into separate fields:
input {
  graphite {
    type => "graphite"
    port => 2003
    id => "graphite_input"
  }
}

filter {
  if [type] == "graphite" {
    grok {
      match => [ "message", "\Aicinga2\.%{MONGO_WORDDASH:host_name:keyword}\.%{WORD:metric_type:keyword}\.%{NOTSPACE:metric_name:keyword}\.value%{SPACE}%{NUMBER:metric_value:float}%{SPACE}%{POSINT:timestamp:date}" ]
    }
  }
}

output {
  if [type] == "graphite" {
    elasticsearch {
      index => "graphite-%{+YYYY.MM}"
      hosts => ["localhost"]
    }
  }
}
And here's an example document I'm indexing (taken from Kibana):
{
  "_index": "graphite-2017.02",
  "_type": "graphite",
  "_id": "XYZdflksdf",
  "_score": null,
  "_source": {
    "@timestamp": "2017-02-21T00:17:16.000Z",
    "metric_name": "interface-eth0.snmp-interface.perfdata.eth0_in_discard",
    "port": 37694,
    "icinga2.XXXYYY.services.interface-eth0.snmp-interface.perfdata.eth0_in_discard.value": 357237,
    "@version": "1",
    "host": "192.168.1.1",
    "metric_type": "services",
    "metric_value": 357237,
    "message": "icinga2.XXXYYY.services.interface-eth0.snmp-interface.perfdata.eth0_in_discard.value 357237 1487636236",
    "type": "graphite",
    "host_name": "XXXYYY",
    "timestamp": "1487636236"
  },
  "fields": {
    "@timestamp": [
      1487636236000
    ]
  },
  "sort": [
    1487636236000
  ]
}
I have now solved this problem myself: the string fields need to be defined as not_analyzed in order to appear in the Grafana dashboard.
Here's an example template you can use.
Note: you'll have to install it manually; it seems like Logstash won't install it into Elasticsearch for some reason (maybe a bug?).
Install it like so (assuming the path is /etc/logstash/graphite-new.json):
curl -XPUT 'http://localhost:9200/_template/graphite-*' -d @/etc/logstash/graphite-new.json
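To confirm the template actually made it into Elasticsearch, you can fetch it back by name (the same name as in the PUT above):

curl -XGET 'http://localhost:9200/_template/graphite-*?pretty'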
Template:
{
  "template" : "graphite-*",
  "settings" : { "index.refresh_interval" : "60s" },
  "mappings" : {
    "_default_" : {
      "_all" : { "enabled" : false },
      "dynamic_templates" : [{
        "message_field" : {
          "match" : "message",
          "match_mapping_type" : "string",
          "mapping" : { "type" : "string", "index" : "not_analyzed" }
        }
      }, {
        "string_fields" : {
          "match" : "*",
          "match_mapping_type" : "string",
          "mapping" : { "type" : "string", "index" : "not_analyzed" }
        }
      }],
      "properties" : {
        "@timestamp" : { "type" : "date", "format" : "dateOptionalTime" },
        "@version" : { "type" : "integer", "index" : "not_analyzed" },
        "metric_name" : { "type" : "string", "index" : "not_analyzed" },
        "host" : { "type" : "string", "index" : "not_analyzed" },
        "host_name" : { "type" : "string", "index" : "not_analyzed" },
        "metric_type" : { "type" : "string", "index" : "not_analyzed" }
      }
    }
  }
}
I've still got this defined in the Logstash output as well:
output {
  if [type] == "graphite" {
    elasticsearch {
      index => "graphite-%{+YYYY.MM}"
      hosts => ["localhost"]
      template => "/etc/logstash/graphite-new.json"
    }
  }
}
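To verify outside Grafana that a field is now usable for the dropdown (the terms template query needs an aggregatable field), a plain terms aggregation does the trick; a sketch against the index above:

curl -XGET 'localhost:9200/graphite-*/_search?pretty' -d '{
  "size": 0,
  "aggs": { "hostnames": { "terms": { "field": "host_name" } } }
}'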
I am trying to process my logs with a custom template, svlogs, but the index is not getting created on the fly based on my template. I am facing the below error:
"error"=>{"type"=>"index_not_found_exception", "reason"=>"no such index", "resource.type"=>"index_expression", "resource.id"=>"svlogs-2016.12.29", "index_uuid"=>"_na_", "index"=>"svlogs-2016.12.29"}
My output is:
output {
  elasticsearch {
    hosts => [ "192.168.254.129:9200" ]
    user => "logstash"
    password => "selva123"
    template_name => "svlogs"
    index => "svlogs-%{+YYYY.MM.dd}"
  }
}
My template is:
curl -XPUT '192.168.254.129:9200/_template/svlogs?pretty' -d'
{
  "template": "svlogs*",
  "settings": {
    "number_of_shards": 1
  },
  "mappings" : {
    "_default_" : {
      "properties" : {
        "MSGID" : { "type": "integer" },
        "debug" : { "type": "string", "index" : "not_analyzed" },
        "Error" : { "type" : "string", "index" : "not_analyzed" },
        "client" : { "type" : "string" },
        "eno" : { "type" : "integer" },
        "login" : { "type" : "string" },
        "message" : { "type" : "string" },
        "pid" : { "type" : "integer" },
        "process" : { "type" : "string" },
        "sv_date" : { "type": "date", "format": "EEE MMM dd HH:mm:ss yyyy" },
        "type" : { "type" : "string" }
      }
    }
  }
}'
I was expecting Logstash to create the index based on the template given.
Actually, it was working until I installed X-Pack. I have since resolved all my privilege-related issues, but now I need to create the index manually to make Logstash work. I tried setting manage_template to "false", but that didn't help either.
Please guide me. Thanks in advance.
The issue got resolved after I commented out the line below in elasticsearch.yml:
#action.auto_create_index: .security,.monitoring*,.watches,.triggered_watches,.watcher-history*
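A less drastic alternative, if you'd rather keep that safeguard, should be to append your own index pattern to the whitelist instead of removing it (a sketch; adjust the pattern to your index names):

action.auto_create_index: .security,.monitoring*,.watches,.triggered_watches,.watcher-history*,svlogs-*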
In our existing design we are using Logstash to fetch data from Kafka (JSON) and put it into Elasticsearch.
We are also applying an index template mapping while inserting data from Logstash into ES, which is done by setting the 'template' property of the Logstash ES output plugin, e.g.:
output {
  elasticsearch {
    template => "elasticsearch-template.json"   # template file path
    hosts => "localhost:9200"
    template_overwrite => true
    manage_template => true
    codec => plain
  }
}
elasticsearch-template.json looks like the following:
{
  "template" : "logstash-*",
  "settings" : {
    "index.refresh_interval" : "3s"
  },
  "mappings" : {
    "_default_" : {
      "_all" : { "enabled" : true },
      "dynamic_templates" : [ {
        "string_fields" : {
          "match" : "*",
          "match_mapping_type" : "string",
          "mapping" : {
            "type" : "string", "index" : "analyzed", "omit_norms" : true,
            "fields" : {
              "raw" : { "type": "string", "index" : "not_analyzed", "ignore_above" : 256, "doc_values": true }
            }
          }
        }
      } ],
      "properties" : {
        "@version": { "type": "string", "index": "not_analyzed" },
        "geoip" : {
          "type" : "object",
          "dynamic": true,
          "properties" : {
            "location" : { "type" : "geo_point" }
          }
        }
      }
    }
  }
}
Now we are going to replace Logstash with Apache Spark, and I want to apply the same kind of index template from Spark while inserting data into ES.
I am using the elasticsearch-spark_2.11 library for this implementation.
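As far as I can tell, the elasticsearch-spark connector does not manage index templates itself, so one approach I'm considering is to install the template once through the REST API before the Spark job writes; since Elasticsearch applies templates at index-creation time, any new logstash-* index created by the job would then pick up the mappings. A sketch, reusing the template file above:

curl -XPUT 'localhost:9200/_template/logstash' -d @elasticsearch-template.json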
Thanks.
I have written a Logstash conf file for reading logs. If I use the default index, that is logstash-*, I can see the .raw fields in Kibana. However, if I create a new index in the Logstash conf file like this:
output {
  elasticsearch {
    hosts => "localhost"
    index => "batchjob-*"
  }
}
then the new index doesn't get the .raw fields. Is there any way to solve this? Many thanks.
The raw fields are created by a specific index template that the Logstash elasticsearch output creates in Elasticsearch.
What you can do is simply copy that template to a file named batchjob.json and change the template name to batchjob-* (see below).
{
  "template" : "batchjob-*",
  "settings" : {
    "index.refresh_interval" : "5s"
  },
  "mappings" : {
    "_default_" : {
      "_all" : { "enabled" : true, "omit_norms" : true },
      "dynamic_templates" : [ {
        "message_field" : {
          "match" : "message",
          "match_mapping_type" : "string",
          "mapping" : {
            "type" : "string", "index" : "analyzed", "omit_norms" : true,
            "fielddata" : { "format" : "disabled" }
          }
        }
      }, {
        "string_fields" : {
          "match" : "*",
          "match_mapping_type" : "string",
          "mapping" : {
            "type" : "string", "index" : "analyzed", "omit_norms" : true,
            "fielddata" : { "format" : "disabled" },
            "fields" : {
              "raw" : { "type": "string", "index" : "not_analyzed", "ignore_above" : 256 }
            }
          }
        }
      } ],
      "properties" : {
        "@timestamp": { "type": "date" },
        "@version": { "type": "string", "index": "not_analyzed" },
        "geoip" : {
          "dynamic": true,
          "properties" : {
            "ip": { "type": "ip" },
            "location" : { "type" : "geo_point" },
            "latitude" : { "type" : "float" },
            "longitude" : { "type" : "float" }
          }
        }
      }
    }
  }
}
Then you can modify your elasticsearch output like this:
output {
  elasticsearch {
    hosts => "localhost"
    index => "batchjob-*"
    template_name => "batchjob"
    template => "/path/to/batchjob.json"
  }
}
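Once data is indexed through this template, the .raw sub-fields become usable for exact-value bucketing in Kibana or in a plain terms aggregation; a sketch, where host stands in for any string field in your events:

curl -XGET 'localhost:9200/batchjob-*/_search?pretty' -d '{
  "size": 0,
  "aggs": { "hosts": { "terms": { "field": "host.raw" } } }
}'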
I would like to disable all the "raw" fields that are created in Elasticsearch by logstash-forwarder, so that for a field such as "host", logstash-forwarder won't create a "host.raw" field. But I need a general solution that covers all string fields.
I have my string fields set to "not_analyzed", so the raw fields are pointless and just duplicate the data.
I tried to remove the "fields" part of the mapping below, but it gets added back after the first log message. The closest I could get was the following mapping, but that still creates empty raw fields:
curl -XPUT 'localhost:9200/myindex/' -d '{
  "mappings": {
    "_default_": {
      "dynamic_templates" : [ {
        "string_fields" : {
          "match" : "*",
          "match_mapping_type" : "string",
          "mapping" : {
            "index" : "not_analyzed",
            "type" : "string",
            "fields" : {
              "raw" : {
                "ignore_above" : 0,
                "index" : "not_analyzed",
                "type" : "string"
              }
            }
          }
        }
      } ],
      "_all": { "enabled": false }
    }
  }
}'
So how can I disable these fields?
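For what it's worth, the raw sub-fields come from the dynamic template in the index template that Logstash's elasticsearch output installs, and any matching index inherits those dynamic mappings at creation time, which is why the "fields" part keeps coming back. A direction that should work (an untested sketch): override that template so the string_fields dynamic template has no "fields" block at all, e.g.:

curl -XPUT 'localhost:9200/_template/logstash' -d '{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "_all": { "enabled": false },
      "dynamic_templates": [ {
        "string_fields": {
          "match": "*",
          "match_mapping_type": "string",
          "mapping": { "type": "string", "index": "not_analyzed" }
        }
      } ]
    }
  }
}'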