I am using the ELK stack with Filebeat.
I am using the default template for mapping, but I am not getting all the fields I need indexed.
Here is my mapping file:
{
"mappings": {
"_default_": {
"_all": {
"enabled": true,
"norms": {
"enabled": false
}
},
"dynamic_templates": [
{
"template1": {
"mapping": {
"doc_values": true,
"ignore_above": 1024,
"index": "not_analyzed",
"type": "{dynamic_type}"
},
"match": "*"
}
}
],
"properties": {
"#timestamp": {
"type": "date"
},
"offset": {
"type": "long",
"doc_values": "true"
}
}
}
},
"settings": {
"index.refresh_interval": "5s"
},
"template": "filebeat-*"
}
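(As a sanity check, you can inspect what Elasticsearch has actually stored; the template name filebeat below is an assumption based on Filebeat's default, so adjust it to whatever name the template was loaded under:)

GET /_template/filebeat?pretty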
Here is my config file for output.
output {
elasticsearch {
hosts => ["localhost:9200"]
sniffing => true
manage_template => false
index => "%{[#metadata][beat]}-%{+YYYY.MM.dd}"
document_type => "%{[#metadata][type]}"
}
}
Let's say I want a field named channel as an indexed field. How do I modify the template?
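A minimal sketch of one way to do that, assuming channel is a plain string field and using the same ES 2.x syntax as the rest of the template: add an explicit entry next to offset under properties:

"channel": {
  "type": "string",
  "index": "not_analyzed",
  "doc_values": true
}

Note that the match: * dynamic template above should already index new string fields as not_analyzed, so if channel is still not indexed, the template may not be getting applied at all; templates only take effect when a new index is created.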
Related question:
Running Kibana version 5.5.2.
My current setup: Logstash takes the logs from Docker containers and runs grok filters on them before sending the logs to Elasticsearch. The specific values that I need to show up as long and float are two timings from AWS calls to ECS and EC2, which a grok filter currently pulls out. Here is the custom pattern that extracts the ECS timings:

ECS_DESCRIBE_CONTAINER_INSTANCES (AWS)(%{SPACE})(ecs)(%{SPACE})(%{POSINT})(%{SPACE})(?<ECS_DURATION>(%{NUMBER}))(s)(%{SPACE})(?<ECS_RETRIES>(%{NONNEGINT}))(%{SPACE})(retries)

So I need ECS_DURATION to be a float and ECS_RETRIES to be a long. In the Docker log handler I have the following:
if [ECS_DURATION] {
mutate {
convert => ["ECS_DURATION", "float"]
}
}
if [ECS_RETRIES] {
mutate {
convert => ["ECS_RETRIES", "integer"]
}
}
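(Aside: a hedged sketch of how a custom pattern like ECS_DESCRIBE_CONTAINER_INSTANCES is typically wired into a grok filter; the patterns_dir path here is an assumption:)

grok {
  # directory containing the file that defines ECS_DESCRIBE_CONTAINER_INSTANCES
  patterns_dir => ["/etc/logstash/patterns"]
  match => { "message" => "%{ECS_DESCRIBE_CONTAINER_INSTANCES}" }
}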
When I look at the fields in Kibana, they still show as text, but when I make the following request to Elasticsearch for the mappings, it shows those fields as long and float.
GET /logstash-2020.12.18/_mapping
{
"logstash-2020.12.18": {
"mappings": {
"log": {
"_all": {
"enabled": true,
"norms": false
},
"dynamic_templates": [
{
"message_field": {
"path_match": "message",
"match_mapping_type": "string",
"mapping": {
"norms": false,
"type": "text"
}
}
},
{
"string_fields": {
"match": "*",
"match_mapping_type": "string",
"mapping": {
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
},
"norms": false,
"type": "text"
}
}
}
],
"properties": {
"#timestamp": {
"type": "date",
"include_in_all": false
},
"#version": {
"type": "keyword",
"include_in_all": false
},
"EC2_DURATION": {
"type": "float"
},
"EC2_RETRIES": {
"type": "long"
},
"ECS_DURATION": {
"type": "float"
},
"ECS_RETRIES": {
"type": "long"
},
I even created a custom mapping template in Elasticsearch with the following call:
PUT /_template/aws_durations?pretty
{
"template": "logstash*",
"mappings": {
"type1": {
"_source": {
"enabled": true
},
"properties": {
"ECS_DURATION": {
"type": "half_float"
},
"ECS_RETRIES": {
"type": "byte"
},
"EC2_DURATION": {
"type": "half_float"
},
"EC2_RETRIES": {
"type": "byte"
}
}
}
}
}
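(For what it's worth, the stored template can be checked with the corresponding GET:)

GET /_template/aws_durations?pretty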
Have you checked that it's actually going into the if [ECS_DURATION] and if [ECS_RETRIES] conditions? (I wasn't able to comment.)
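A minimal way to check that is to dump the events Logstash actually emits; this sketch uses only the stock stdout output with the rubydebug codec:

output {
  # prints each event with its field types visible; numbers appear unquoted
  stdout { codec => rubydebug }
}

If ECS_DURATION is printed with quotes around the value, the convert never ran. Also note that an Elasticsearch mapping can be correct while Kibana still reports the old type until the index pattern is refreshed in Kibana's management screen.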
I'm sending my data to Elasticsearch with an index_number for each document; it's a unique identifier. When I try to sort on it from the Python client, I get the consistency problem you see in the picture.
This is my query DSL:
"size": 1,
"query": {
"match_all": {}
},
"sort": [
{
"index_number.keyword": {
"order": "asc",
"missing": "_last",
"unmapped_type": "String"
}
}
]
In my Logstash output:
output{
elasticsearch {
hosts => ["localhost:9200"]
index => "logstash_%{+yyyy-MM-dd}"
manage_template => true
template_name => "logstash_template"
template => "..../logstash_template.json"
http_compression => true
}
}
And in my logstash_template.json:
...
{
"index_patterns": ["logstash_*"],
"template": {
"settings":{
"number_of_shards": 1,
"number_of_replicas": 0,
"index": {
"sort.field": "index_number",
"sort.order": "asc"
}
},
"mappings": {
"dynamic_templates":{
"string_fields": {
"match": "*",
"match_mapping_type": "string",
"mapping": {"type":"keyword"}
}
},
"properties": {
"index_number": {
"type": "keyword",
"fields": {
"numeric": {
"type": "double"
}
}
}
}
}
}
}
....
The resulting mapping in Elasticsearch:
{
"logstash_2020-03-12" : {
"mappings" : {
"properties" : {
.....
"index_number" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"city" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"country" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
-----
}
}
}
}
How can I solve it? Thanks for answering.
You need to add template_overwrite to your Logstash output configuration, otherwise logstash_template is not overwritten if it already exists:
output{
elasticsearch {
hosts => ["localhost:9200"]
index => "logstash_%{+yyyy-MM-dd}"
manage_template => true
template_overwrite => true # <-- add this
template_name => "logstash_template"
template => "..../logstash_template.json"
http_compression => true
}
}
Make sure that your logstash_template.json file has the following format:
{
"index_patterns": [
"logstash_*"
],
"settings": {
"number_of_shards": 1,
"number_of_replicas": 0,
"index": {
"sort.field": "index_number",
"sort.order": "asc"
}
},
"mappings": {
"dynamic_templates": {
"string_fields": {
"match": "*",
"match_mapping_type": "string",
"mapping": {
"type": "keyword"
}
}
},
"properties": {
"index_number": {
"type": "keyword",
"fields": {
"numeric": {
"type": "double"
}
}
}
}
}
}
You had mappings and settings enclosed within a template section, but that layout is only for the new composable index templates, which the elasticsearch Logstash output doesn't support yet. You need to use the legacy index template format.
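If you want to check what actually got installed, the legacy template endpoints can be used directly (logstash_template matches the template_name in the output above):

GET /_template/logstash_template

Deleting it first with DELETE /_template/logstash_template and restarting Logstash is another way to force the template to be re-created from the file.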
Using ELK 6.x, it seems I cannot plot points because geoip.location is not populated.
I also added a template, which I hope is correct. I'm not an expert, but I am pretty sure my points aren't rendered because the data is missing there.
Kibana 6.4.2
Logstash 6.4.2-1
Elasticsearch 6.4.2
Here are my configs:
input {
udp {
port => 9996
codec => netflow {
versions => [5, 7, 9, 10]
}
type => netflow
}
}
filter {
geoip {
source => "[netflow][ipv4_src_addr]"
target => "src_geoip"
database => "/usr/share/GeoIP/GeoLite2-City.mmdb"
}
geoip {
source => "[netflow][ipv4_dst_addr]"
target => "dst_geoip"
database => "/usr/share/GeoIP/GeoLite2-City.mmdb"
}
}
And the output:
output {
if [type] == "netflow" {
elasticsearch {
hosts => ["localhost:9200"]
index => "logstash-%{+YYYY.MM.dd}"
}
} else {
elasticsearch {
hosts => ["localhost:9200"]
sniffing => true
manage_template => false
index => "%{[#metadata][beat]}-%{+YYYY.MM.dd}"
document_type => "%{[#metadata][type]}"
}
}
}
The mapping looks like this:
"geoip": {
"dynamic": "true",
"properties": {
"ip": {
"type": "ip"
},
"latitude": {
"type": "half_float"
},
"location": {
"type": "geo_point"
},
"longitude": {
"type": "half_float"
}
}
},
And the template:
{
"logstash": {
"order": 0,
"version": 60001,
"index_patterns": [
"logstash-*"
],
"settings": {
"index": {
"refresh_interval": "5s"
}
},
"mappings": {
"_default_": {
"dynamic_templates": [
{
"message_field": {
"path_match": "message",
"match_mapping_type": "string",
"mapping": {
"type": "text",
"norms": false
}
}
},
{
"string_fields": {
"match": "*",
"match_mapping_type": "string",
"mapping": {
"type": "text",
"norms": false,
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
}
],
"properties": {
"#timestamp": {
"type": "date"
},
"#version": {
"type": "keyword"
},
"geoip": {
"dynamic": true,
"properties": {
"ip": {
"type": "ip"
},
"location": {
"type": "geo_point"
},
"latitude": {
"type": "half_float"
},
"longitude": {
"type": "half_float"
}
}
}
}
}
},
"aliases": {}
}
}
My indexes come back with src or dst fields, but only the ones below:
# dst_geoip.latitude 26.097
# dst_geoip.location.lat 26.097
# dst_geoip.location.lon -80.181
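One thing worth noting: the template above maps a field literally called geoip, while the geoip filters write to src_geoip and dst_geoip, so their location subfields get dynamically mapped and never become geo_point. A hedged sketch of the extra entries that would cover both targets, in the same syntax as the existing template:

"src_geoip": {
  "dynamic": true,
  "properties": {
    "location": { "type": "geo_point" }
  }
},
"dst_geoip": {
  "dynamic": true,
  "properties": {
    "location": { "type": "geo_point" }
  }
}

Since templates only apply at index creation, the change would only show up after the daily logstash-* index rolls over (or after a reindex).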
I am trying to override a mapping for a field.
There is a default index template (which I can't change) and I am overriding it with a custom one.
The default index has a mapping for "message" field as text, but I need to make it treated like an object and make its fields indexable/searchable.
This is the default index template, with order 10.
{
"mappings": {
"_default_": {
"dynamic_templates": [
{
"message_field": {
"mapping": {
"index": true,
"norms": false,
"type": "text"
},
"match": "message",
"match_mapping_type": "string"
}
},
...
],
"properties": {
"message": {
"doc_values": false,
"index": true,
"norms": false,
"type": "text"
},
...
}
}
},
"order": 10,
"template": "project.*"
}
And here's my override:
{
"template" : "project.*",
"order" : 100,
"dynamic_templates": [
{
"message_field": {
"mapping": {
"type": "object"
},
"match": "message"
}
}
],
"mappings": {
"message": {
"enabled": true,
"properties": {
"tag": {"type": "string", "index": "not_analyzed"},
"requestId": {"type": "integer"},
...
}
}
}
}
This works nicely, but I end up having to define every field (tag, requestId, ...) inside the message object.
Is there a way to make all the fields in the message object indexable/searchable without listing each one?
Here's a sample document:
{
"level": "30",
...
"kubernetes": {
"container_name": "data-sync-server",
"namespace_name": "alitest03",
...
},
"message": {
"tag": "AUDIT",
"requestId": 1234,
...
},
}
...
}
I've tried lots of things, but I can't make it work.
I am using Elasticsearch version 2.4.4.
You can use the path_match property in your dynamic mapping.
Something like:
{
"template": "project.*",
"order": 100,
"mappings": {
"<your document type here>": {
"dynamic_templates": [
{
"message_field": {
"mapping": {
"type": "object"
},
"match": "message"
}
},
{
"message_properties": {
"path_match": "message.*",
"mapping": {
"type": "string",
"index": "not_analyzed"
}
}
}
]
}
}
}
But you may have to distinguish between string and numeric fields with match_mapping_type.
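For example (a sketch, keeping the ES 2.x syntax used above; the template names are arbitrary):

{
  "message_strings": {
    "path_match": "message.*",
    "match_mapping_type": "string",
    "mapping": {
      "type": "string",
      "index": "not_analyzed"
    }
  }
},
{
  "message_longs": {
    "path_match": "message.*",
    "match_mapping_type": "long",
    "mapping": {
      "type": "long"
    }
  }
}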
I am using Logstash to push logs from DynamoDB to ES:
filter {
json {
source => "message"
target => "doc"
}
mutate {
convert => {
"[doc][dynamodb][keys][customerId][N]" => "integer"
"[doc][dynamodb][newImage][callDate][N]" => "integer"
"[doc][dynamodb][newImage][price][S]" => "float"
}
}
date {
match => [ "[doc][dynamodb][newImage][callDate][N]", "UNIX" ]
target => "#timestamp"
}
}
output {
elasticsearch {
hosts => ["localhost"]
codec => "json"
index => "cdr-%{+YYYY.MM.dd}"
document_type => "cdr"
document_id => "%{[doc][dynamodb][keys][uniqueId][S]}"
template_name => "cdr"
template => "/opt/logstash/templates/logstash_dynamodb_template.json"
template_overwrite => true
}
stdout { }
}
The mutate convert does not seem to make any difference; the result is the same whether it is present or removed. Here is my template file (logstash_dynamodb_template.json):
{
"order": 0,
"template": "cdr*",
"settings": {
"index.refresh_interval": "5s"
},
"mappings": {
"cdr": {
"dynamic": "false",
"_all": {
"enabled": false
},
"properties": {
"doc": {
"properties": {
"dynamodb": {
"properties": {
"keys": {
"properties": {
"customerId": {
"properties": {
"N": {
"type": "long"
}
}
},
"uniqueId": {
"properties": {
"S": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
},
"newImage": {
"properties": {
"callDate": {
"properties": {
"N": {
"type": "date",
"format": "epoch_second"
}
}
},
"direction": {
"properties": {
"S": {
"type": "string",
"index": "not_analyzed"
}
}
},
"disposition": {
"properties": {
"S": {
"type": "string",
"index": "not_analyzed"
}
}
},
"price": {
"properties": {
"S": {
"type": "double"
}
}
},
"uniqueId": {
"properties": {
"S": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
}
}
}
}
}
}
}
}
}
Yes, doc.message contains all of the described fields, but they are not mapped. Here is the screenshot from ES:
As you can see, only the string mappings work as expected.
The error while querying says: No mapping found for [doc.dynamodb.newImage.callDate.N] in order to sort on
Does anyone know the reason for this behavior?
Tip: a Logstash debug run (bin/logstash -f filters/logstash-dynamodb.conf --debug) does not show any errors.
Thanks in advance for any ideas.
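One hedged way to narrow this down is to compare the template Elasticsearch has stored with the mapping of the live index; both endpoints are standard:

GET /_template/cdr?pretty
GET /cdr-*/_mapping?pretty

If the stored template has the expected types but the live index does not, the index was probably created before the template existed; templates only apply to indices created after them, so the mismatch persists until the index is re-created or reindexed.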