I would like to index the SMTP receive log of my Exchange Server with Elasticsearch. I created a Logstash config file and it works very well, but all of my fields are strings instead of, for example, type ip for the source and target servers. So I tried to change the default mapping in the Logstash template:
I ran the command curl -XGET http://localhost:9200/_template/logstash?pretty > C:\temp\logstashTemplate.txt
Edited the text file and added my 'SourceIP' field:
{
"template": "logstash-*",
"settings": {
"index": {
"refresh_interval": "5s"
}
},
"mappings": {
"_default_": {
"dynamic_templates": [{
"message_field": {
"mapping": {
"fielddata": {
"format": "disabled"
},
"index": "analyzed",
"omit_norms": true,
"type": "string"
},
"match_mapping_type": "string",
"match": "message"
}
}, {
"string_fields": {
"mapping": {
"fielddata": {
"format": "disabled"
},
"index": "analyzed",
"omit_norms": true,
"type": "string",
"fields": {
"raw": {
"ignore_above": 256,
"index": "not_analyzed",
"type": "string"
}
}
},
"match_mapping_type": "string",
"match": "*"
}
}],
"_all": {
"omit_norms": true,
"enabled": true
},
"properties": {
"@timestamp": {
"type": "date"
},
"geoip": {
"dynamic": true,
"properties": {
"ip": {
"type": "ip"
},
"latitude": {
"type": "float"
},
"location": {
"type": "geo_point"
},
"longitude": {
"type": "float"
}
}
},
"@version": {
"index": "not_analyzed",
"type": "string"
},
"SourceIP": {
"type": "ip"
}
}
}
},
"aliases": {}
}
I uploaded the edited template with the command curl -XPUT http://localhost:9200/_template/logstash -d@C:\temp\logstash.template
Then I restarted the Elasticsearch server and deleted/re-created the index.
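For reference, I check the mapping of the re-created index with a command like this (the logstash-* pattern is an assumption based on the template above):

curl -XGET "http://localhost:9200/logstash-*/_mapping?pretty"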
The 'SourceIP' field did not change to type ip. What am I doing wrong? Can you please give me a hint? Thanks!
I made a template based on https://github.com/vanthome/winston-elasticsearch/blob/master/index-template-mapping.json:
{
"index_patterns": ["applogs-*"],
"settings": {
"number_of_shards": 1
},
"mappings": {
"_source": { "enabled": true },
"properties": {
"@timestamp": { "type": "date" },
"@version": { "type": "keyword" },
"message": { "type": "text", "index": true },
"severity": { "type": "keyword", "index": true },
"geohash":{ "type": "geo-point", "index": true},
"location":{ "type": "geo-point", "index": true},
}
}
}
but I get an error:
[mapper_parsing_exception] Root mapping definition has unsupported parameters: [severity : {index=true, type=keyword}] [@timestamp : {type=date}] [@version : {type=keyword}] [message : {index=true, type=text}] [fields : {dynamic=true, properties={}}]
Probably some obsolete version? What should I update?
Based on the docs:
PUT _template/template_1
{
"index_patterns": [
"applogs-*"
],
"settings": {
"number_of_shards": 1
},
"mappings": {
"_source": {
"enabled": true
},
"properties": {
"@timestamp": {
"type": "date"
},
"@version": {
"type": "keyword"
},
"message": {
"type": "text",
"index": true
},
"severity": {
"type": "keyword",
"index": true
},
"geohash": {
"type": "geo_point",
"index": true
},
"location": {
"type": "geo_point",
"index": true
}
}
}
}
Your JSON was invalid (one comma too many) and also geo-point --> geo_point.
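To verify that the template is actually picked up, you can retrieve it and create a test index matching the pattern (a quick sketch; the index name is just an example):

GET _template/template_1

PUT applogs-test

GET applogs-test/_mapping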
I'm working with Elasticsearch/Kibana and trying to search on a field in a nested object. However, it does not seem to work. Here's the mapping that I use in a template:
{
"order": 0,
"template": "ss7_signaling*",
"settings": {
"index": {
"mapping.total_fields.limit": 3000,
"number_of_shards": "5",
"refresh_interval": "30s"
}
},
"mappings": {
"_default_": {
"dynamic_templates": [
{
"string_fields": {
"mapping": {
"fielddata": {
"format": "disabled"
},
"index": "no",
"type": "string"
},
"match_mapping_type": "string",
"match": "*"
}
}
],
"properties": {
"message": {
"index": "not_analyzed",
"type": "string"
},
"Protocol": {
"index": "not_analyzed",
"type": "string"
},
"IMSI": {
"index": "not_analyzed",
"type": "string"
},
"nested": {
"type": "nested",
"properties": {
"name": {
"type": "string",
"index": "not_analyzed"
}
}
},
"Timestamp": {
"format": "strict_date_optional_time||epoch_millis",
"type": "date"
},
"@timestamp": {
"type": "date"
},
"@version": {
"index": "not_analyzed",
"type": "string"
}
},
"_all": {
"norms": false,
"enabled": false
}
}
},
"aliases": {
"signaling": {}
}
}
When I search in Kibana on single fields, everything works fine. However, I cannot search on nested fields like 'nested.name'.
Example of my query in kibana: nested.name:hi
Thanks.
Kibana uses the query_string query underneath, and the latter does not support querying on nested fields.
It's still being worked on, but in the meantime you need to proceed differently.
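For instance, you can hit the search API directly with a nested query (a minimal sketch based on the field names in your template):

POST ss7_signaling*/_search
{
  "query": {
    "nested": {
      "path": "nested",
      "query": {
        "match": { "nested.name": "hi" }
      }
    }
  }
}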
UPDATE:
As of ES 7.6, it is now possible to search on nested fields.
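In Kibana's KQL this looks something like the following (a sketch reusing the field names from the question):

nested:{ name: hi }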
I'm trying to figure out how the mapping works but can't get it right. I copied the Logstash template to use for my custom index name. However, I'm getting the following issue:
MapperParsingException[failed to parse [data]]; nested: IllegalArgumentException[unknown property [customerId]];
at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:329)
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:311)
at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:328)
at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:254)
at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:124)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:309)
at org.elasticsearch.index.shard.IndexShard.prepareCreate(IndexShard.java:533)
at org.elasticsearch.index.shard.IndexShard.prepareCreateOnPrimary(IndexShard.java:510)
at org.elasticsearch.action.index.TransportIndexAction.prepareIndexOperationOnPrimary(TransportIndexAction.java:214)
at org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnPrimary(TransportIndexAction.java:223)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:327)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:120)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:68)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.doRun(TransportReplicationAction.java:657)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:287)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:279)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:77)
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:378)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: unknown property [customerId]
at org.elasticsearch.index.mapper.core.StringFieldMapper.parseCreateFieldForString(StringFieldMapper.java:371)
at org.elasticsearch.index.mapper.core.StringFieldMapper.parseCreateField(StringFieldMapper.java:320)
at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:321)
... 22 more
I tried to ignore the data field, which is actually an object, so that it gets processed and saved as a raw string. Below is the mapping template that I'm attempting to use.
{
"order": 0,
"template": "sl-prod-*",
"settings": {
"index": {
"refresh_interval": "5s"
}
},
"mappings": {
"_default_": {
"dynamic_templates": [
{
"message_field": {
"mapping": {
"fielddata": {
"format": "disabled"
},
"index": "analyzed",
"omit_norms": true,
"type": "string"
},
"match_mapping_type": "string",
"match": "message"
}
},
{
"string_fields": {
"mapping": {
"fielddata": {
"format": "disabled"
},
"index": "analyzed",
"omit_norms": true,
"type": "string",
"fields": {
"raw": {
"ignore_above": 256,
"index": "not_analyzed",
"type": "string"
},
"data": {
"ignore_above": 256,
"index": "not_analyzed",
"type": "string"
}
}
},
"match_mapping_type": "string",
"match": "*"
}
}
],
"_all": {
"omit_norms": true,
"enabled": true
},
"properties": {
"msg": {
"index": "not_analyzed",
"type": "string"
},
"@timestamp": {
"type": "date"
},
"geoip": {
"dynamic": true,
"properties": {
"ip": {
"type": "ip"
},
"latitude": {
"type": "float"
},
"location": {
"type": "geo_point"
},
"longitude": {
"type": "float"
}
}
},
"data": {
"index": "not_analyzed",
"type": "string"
},
"@version": {
"index": "not_analyzed",
"type": "string"
}
}
}
},
"aliases": {}
}
Any help will be appreciated ...
In your mapping, 'data' is mapped as a string, so it cannot hold an object with a customerId property.
See here for a similar issue: https://github.com/elastic/elasticsearch/issues/5084
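If the goal is just to keep data in the _source without indexing its contents, one option (a sketch, not verified against your exact version) is to map it as a disabled object instead of a string:

"data": {
  "type": "object",
  "enabled": false
}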
I have created my index on Elasticsearch (and through Kibana as well) and have uploaded data. Now I want to change the mapping for the index and make some fields not_analyzed. Below is the mapping with which I want to replace the existing one, but when I run the command below it gives me this error:
{"error":{"root_cause":[{"type":"index_already_exists_exception","reason":"already exists","index":"rettrmt"}],"type":"index_already_exists_exception","reason":"already exists","index":"rettrmt"},"status":400}
Kindly help me get this resolved.
curl -XPUT 'http://10.56.139.61:9200/rettrmt' -d '{
"rettrmt": {
"aliases": {},
"mappings": {
"RETTRMT": {
"properties": {
"@timestamp": {
"type": "date",
"format": "strict_date_optional_time||epoch_millis"
},
"@version": {
"type": "string"
},
"acid": {
"type": "string"
},
"actor_id": {
"type": "string",
"index": "not_analyzed"
},
"actor_type": {
"type": "string",
"index": "not_analyzed"
},
"channel_id": {
"type": "string",
"index": "not_analyzed"
},
"circle": {
"type": "string",
"index": "not_analyzed"
},
"cr_dr_indicator": {
"type": "string",
"index": "not_analyzed"
},
"host": {
"type": "string"
},
"message": {
"type": "string"
},
"orig_input_amt": {
"type": "double"
},
"path": {
"type": "string"
},
"r_cre_id": {
"type": "string"
},
"sub_use_case": {
"type": "string",
"index": "not_analyzed"
},
"tran_amt": {
"type": "double"
},
"tran_id": {
"type": "string"
},
"tran_particulars": {
"type": "string"
},
"tran_particulars_2": {
"type": "string"
},
"tran_remarks": {
"type": "string"
},
"tran_sub_type": {
"type": "string"
},
"tran_timestamp": {
"type": "date",
"format": "strict_date_optional_time||epoch_millis"
},
"tran_type": {
"type": "string"
},
"type": {
"type": "string"
},
"use_case": {
"type": "string",
"index": "not_analyzed"
}
}
}
},
"settings": {
"index": {
"creation_date": "1457331693603",
"uuid": "2bR0yOQtSqqVUb8lVE2dUA",
"number_of_replicas": "1",
"number_of_shards": "5",
"version": {
"created": "2000099"
}
}
},
"warmers": {}
}
}'
You first need to delete your index and then recreate it with the proper mapping. Here you're getting an index_already_exists_exception error because you're trying to create an index while the older one still exists, hence the conflict.
Run this first:
curl -XDELETE 'http://10.56.139.61:9200/rettrmt'
And then you can run your command again. Note that this will erase your data, so you will have to repopulate your index.
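Once the index is re-created and repopulated, you can check that the new mapping took effect, for example with:

curl -XGET 'http://10.56.139.61:9200/rettrmt/_mapping?pretty'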
Did you try something like this?
curl -XPUT 'http://10.56.139.61:9200/rettrmt/_mapping/RETTRMT' -d '
{
"properties": {
"actor_id": { // or whichever properties you want to add
"type": "string",
"index": "not_analyzed"
}
}
}
works for me
I want to create an index and here is my mapping. I want to create a multi-field on the field 'findings': one with the default mapping (analyzed) and the other one, 'orig', not_analyzed.
PUT nto
{
"mappings": {
"_default_": {
"properties": {
"date": {
"type": "string",
"index": "not_analyzed"
},
"bo": {
"type": "string",
"index": "not_analyzed"
},
"pg": {
"type": "string"
},
"rate": {
"type": "float"
},
"findings": {
"type": "multi_field",
"fields": {
"findings": {
"type": "string",
"index": "analyzed"
},
"orig": {
"type": "string",
"index":"not_analyzed"
}
}
}
}
}
}
}
Once I create the mapping, I don't see the orig field being created. Here is the mapping that I see:
{
"ccdn": {
"aliases": {},
"mappings": {
"test": {
"properties": {
"bo": {
"type": "string",
"index": "not_analyzed"
},
"date": {
"type": "string",
"index": "not_analyzed"
},
"findings": {
"type": "string",
"fields": {
"orig": {
"type": "string",
"index": "not_analyzed"
}
}
},
"pg": {
"type": "string"
},
"rate": {
"type": "float"
}
}
},
"_default_": {
"properties": {
"bo": {
"type": "string",
"index": "not_analyzed"
},
"date": {
"type": "string",
"index": "not_analyzed"
},
"findings": {
"type": "string",
"fields": {
"orig": {
"type": "string",
"index": "not_analyzed"
}
}
},
"pg": {
"type": "string"
},
"rate": {
"type": "float"
}
}
}
},
"settings": {
"index": {
"creation_date": "1454893575663",
"uuid": "wJndGz1aSVSFjtidywsRPg",
"number_of_replicas": "1",
"number_of_shards": "5",
"version": {
"created": "2020099"
}
}
},
"warmers": {}
}
}
I don't see the default 'findings' field (analyzed) being created.
The Elasticsearch multi_field type has been removed. See here.
You might need something like this for your findings field:
"findings": {
"type": "string",
"index": "analyzed",
"fields": {
"orig": { "type": "string", "index": "not_analyzed" }
}
}
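Embedded in the original create-index request, that would look like this (a sketch that keeps only the findings field):

PUT nto
{
  "mappings": {
    "_default_": {
      "properties": {
        "findings": {
          "type": "string",
          "index": "analyzed",
          "fields": {
            "orig": { "type": "string", "index": "not_analyzed" }
          }
        }
      }
    }
  }
}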