How to update field format in Opensearch/Elasticsearch?

I am trying to change the format of a string field in OpenSearch:
PUT my_index/_mapping
{
  "mappings": {
    "properties": {
      "timestamp": {
        "type": "date",
        "format": "YYYY-MM-DD HH:mm:ss.SSS"
      }
    }
  }
}
The response is:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "mapper_parsing_exception",
        "reason" : "Root mapping definition has unsupported parameters: [mappings : {properties={timestamp={format=YYYY-MM-DD HH:mm:ss.SSS, type=date}}}]"
      }
    ],
    "type" : "mapper_parsing_exception",
    "reason" : "Root mapping definition has unsupported parameters: [mappings : {properties={timestamp={format=YYYY-MM-DD HH:mm:ss.SSS, type=date}}}]"
  },
  "status" : 400
}
I've spent days trying to figure this out; OpenSearch seems unnecessarily complex to me.

There are two problems here. First, the PUT _mapping API does not take a top-level "mappings" wrapper; the "properties" object goes directly in the request body, which is what triggers the mapper_parsing_exception. Second, even with the correct syntax, you cannot change the mapping of an existing field once it's been created. You need to reindex the index with the wrong mapping into a new index with the right mapping.
First, create the new index. Note the pattern letters: in date formats, yyyy is the year and dd the day of month, while YYYY and DD mean week-based year and day-of-year, so the pattern from the question should be corrected as well:
PUT new_index
{
  "mappings": {
    "properties": {
      "timestamp": {
        "type": "date",
        "format": "yyyy-MM-dd HH:mm:ss.SSS"
      }
    }
  }
}
Then, reindex the old index into the new one:
POST _reindex
{
  "source": {
    "index": "old_index"
  },
  "dest": {
    "index": "new_index"
  }
}
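Once the reindex completes, clients can be switched over without code changes by pointing an alias at the new index. A minimal sketch, assuming the old_index/new_index names from above; note that the remove_index action deletes old_index, so verify the reindexed data first:

```
POST _aliases
{
  "actions": [
    { "remove_index": { "index": "old_index" } },
    { "add": { "index": "new_index", "alias": "old_index" } }
  ]
}
```

After this swap, requests sent to old_index transparently hit new_index.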

Related

Null value mapping in Elastic

I have previously created an index to which I want to add a null_value property:
PUT /check_test-1/_mapping
{
  "properties": {
    "name": {
      "type": "keyword",
      "null_value": "N/A"
    }
  }
}
EXISTING INDEX:
{
  "check_test-1" : {
    "mappings" : {
      "properties" : {
        "name" : {
          "type" : "keyword"
        },
        "status_code" : {
          "type" : "keyword",
          "null_value" : "N/A"
        }
      }
    }
  }
}
When I run the above query, it gives this error:
"type" : "illegal_argument_exception",
"reason" : "Mapper for [name] conflicts with existing mapper:\n\tCannot update parameter [null_value] from [null] to [N/A]"
The index was created using the query below:
PUT /check_test-1
{
  "mappings": {
    "properties": {
      "name": {
        "type": "keyword"
      },
      "status_code": {
        "type": "keyword",
        "null_value": "N/A"
      }
    }
  }
}
Elastic version 7.10.0
Like the message says, you cannot change the existing mapping of a field.
Check the Elasticsearch documentation about mapping updates, especially
If you need to change the mapping of a field in other indices, create a new index with the correct mapping and reindex your data into that index.
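Concretely, the reindex route for this example could look like the sketch below, assuming a hypothetical new index name check_test-2; it adds null_value to name while keeping the existing status_code mapping:

```
PUT /check_test-2
{
  "mappings": {
    "properties": {
      "name": {
        "type": "keyword",
        "null_value": "N/A"
      },
      "status_code": {
        "type": "keyword",
        "null_value": "N/A"
      }
    }
  }
}

POST _reindex
{
  "source": { "index": "check_test-1" },
  "dest": { "index": "check_test-2" }
}
```

Reindexing the old documents through the new mapping means null_value also applies to any existing documents with an explicit null in the name field.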

Rejecting mapping update to [] as the final mapping would have more than 1 type

I have created an index with an explicit mapping:
PUT http://192.168.1.71:9200/items
{
  "mappings": {
    "properties": {
      "name": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword"
          }
        }
      },
      "num": {
        "type": "long"
      }
    }
  }
}
And I am trying to add a document:
POST http://192.168.1.71:9200/items/1
{
  "num" : 1.898,
  "name" : "aaa"
}
But get the error:
{
  "error": {
    "root_cause": [
      {
        "type": "illegal_argument_exception",
        "reason": "Rejecting mapping update to [items] as the final mapping would have more than 1 type: [_doc, 1]"
      }
    ],
    "type": "illegal_argument_exception",
    "reason": "Rejecting mapping update to [items] as the final mapping would have more than 1 type: [_doc, 1]"
  },
  "status": 400
}
Why does this happen, and how can I fix it?
You need to specify the document type, _doc, before the id:
POST http://192.168.1.71:9200/items/_doc/1
{
  "num" : 1.898,
  "name" : "aaa"
}
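One more thing worth checking: the mapping declares num as long, so a value like 1.898 is coerced at index time and its fraction truncated. If the fractional part matters, the field should be mapped as a floating-point type instead; a sketch using a hypothetical items2 index:

```
PUT http://192.168.1.71:9200/items2
{
  "mappings": {
    "properties": {
      "name": {
        "type": "text",
        "fields": {
          "keyword": { "type": "keyword" }
        }
      },
      "num": { "type": "double" }
    }
  }
}
```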

Elastic Search GeoIp location not of type geo_point

I'm running ElasticSearch, Logstash and Kibana using Docker Compose based on the solution: https://github.com/deviantony/docker-elk.
I'm following this tutorial trying to add geoip information when processing my web logs: https://www.elastic.co/blog/geoip-in-the-elastic-stack.
In logstash I'm processing files from FileBeat and I've added geoip to my filter:
filter {
...
geoip {
source => "client_ip"
}
}
When I view the documents in Kibana they do contain additional information like geoip.country_name, geoip.city_name, etc., but I expected the geoip.location field to be of type geo_point in my index.
Here is an example of how some of the geoip fields are mapped:
Instead of geo_point I see location.lat and location.lon. Why is my location field not of type geo_point? Do I need some kind of mapping?
The ingest-common, ingest-geoip, ingest-user-agent and x-pack plugins are all loaded when Elasticsearch starts up. I've refreshed the field list for my index in Kibana.
EDIT 1:
Based on the answer from @Val, I'm trying to change the mapping of my index:
PUT iis-log-*/_mapping/log
{
  "properties": {
    "geoip": {
      "dynamic": true,
      "properties": {
        "ip": {
          "type": "ip"
        },
        "location": {
          "type": "geo_point"
        },
        "latitude": {
          "type": "half_float"
        },
        "longitude": {
          "type": "half_float"
        }
      }
    }
  }
}
But that gives me this error:
{
  "error": {
    "root_cause": [
      {
        "type": "illegal_argument_exception",
        "reason": "mapper [geoip.ip] of different type, current_type [text], merged_type [ip]"
      }
    ],
    "type": "illegal_argument_exception",
    "reason": "mapper [geoip.ip] of different type, current_type [text], merged_type [ip]"
  },
  "status": 400
}
In the article you referred to, they do explain that you need to put a specific mapping for the geo_point field in the "Mapping, for Maps" section.
If you're using the default index names (i.e. logstash-*) and the default mapping type (i.e. log), then the mapping is taken care of for you by Logstash. But if not, you need to install it yourself using:
PUT your_index
{
  "mappings" : {
    "_default_" : {
      "_all" : { "enabled" : true, "norms" : false },
      "dynamic_templates" : [
        {
          "message_field" : {
            "path_match" : "message",
            "match_mapping_type" : "string",
            "mapping" : {
              "type" : "text",
              "norms" : false
            }
          }
        },
        {
          "string_fields" : {
            "match" : "*",
            "match_mapping_type" : "string",
            "mapping" : {
              "type" : "text",
              "norms" : false,
              "fields" : {
                "keyword" : { "type" : "keyword", "ignore_above" : 256 }
              }
            }
          }
        }
      ],
      "properties" : {
        "@timestamp" : { "type" : "date", "include_in_all" : false },
        "@version" : { "type" : "keyword", "include_in_all" : false },
        "geoip" : {
          "dynamic" : true,
          "properties" : {
            "ip" : { "type" : "ip" },
            "location" : { "type" : "geo_point" },
            "latitude" : { "type" : "half_float" },
            "longitude" : { "type" : "half_float" }
          }
        }
      }
    }
  }
}
In the above mappings, you see the geoip.location field being treated as a geo_point.
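Because the indices here follow the iis-log-* pattern rather than logstash-*, one way to make this mapping stick for every new daily index is an index template instead of a per-index PUT. A sketch, reusing the iis-log-* pattern and the log mapping type from the question (the template name iis-log is arbitrary):

```
PUT _template/iis-log
{
  "template": "iis-log-*",
  "mappings": {
    "log": {
      "properties": {
        "geoip": {
          "dynamic": true,
          "properties": {
            "ip": { "type": "ip" },
            "location": { "type": "geo_point" },
            "latitude": { "type": "half_float" },
            "longitude": { "type": "half_float" }
          }
        }
      }
    }
  }
}
```

A template only applies to newly created indices, so existing indices with geoip.ip mapped as text still need to be reindexed or allowed to roll over.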

Can not create Elasticsearch Index (logstash-2015.05.18)

I'm using Elasticsearch 2.4. Following the instructions from the official Elasticsearch Kibana documentation here, when I create the index logstash-2015.05.18, the error below is emitted.
# curl -XPUT http://10.15.0.70:9200/logstash-2015.05.18 -d '
{
  "mappings": {
    "log": {
      "properties": {
        "geo": {
          "properties": {
            "coordinates": {
              "type": "geo_point"
            }
          }
        }
      }
    }
  }
}
';
{"error":{"root_cause":[{"type":"mapper_parsing_exception","reason":"Root mapping definition has unsupported parameters: [“store” : true]"}],"type":"mapper_parsing_exception","reason":"Failed to parse mapping [“date”]: Root mapping definition has unsupported parameters: [“store” : true]","caused_by":{"type":"mapper_parsing_exception","reason":"Root mapping definition has unsupported parameters: [“store” : true]"}},"status":400}
Using the Sense plugin of Kibana to create the index gives me the same error:
PUT logstash-2015.05.18
{
  "mappings": {
    "log": {
      "properties": {
        "geo": {
          "properties": {
            "coordinates": {
              "type": "geo_point"
            }
          }
        }
      }
    }
  }
}
{
  "error": {
    "root_cause": [
      {
        "type": "mapper_parsing_exception",
        "reason": "Root mapping definition has unsupported parameters: [“store” : true]"
      }
    ],
    "type": "mapper_parsing_exception",
    "reason": "Failed to parse mapping [“date”]: Root mapping definition has unsupported parameters: [“store” : true]",
    "caused_by": {
      "type": "mapper_parsing_exception",
      "reason": "Root mapping definition has unsupported parameters: [“store” : true]"
    }
  },
  "status": 400
}
Can someone tell me whether I did something wrong when creating the index?
I had the same trouble. Removing the Elasticsearch data directory ("/usr/local/var/elasticsearch", if you installed it with Homebrew) fixed it for me. Note the typographic quotes around “store” and “date” in the error: the request itself is fine, and the failure most likely comes from a previously stored mapping or template that was pasted from a formatted document.
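Wiping the data directory deletes every index, so it may be worth first checking whether a stored index template is the culprit before resorting to that. A less destructive sketch (the template name broken_template is hypothetical; pick the offending one from the GET output):

```
GET _template

DELETE _template/broken_template
```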

Elasticsearch : "birthday" exception

This is my document:
{
  "user" : {
    "name" : "test",
    "birthday" : "123"
  }
}
When I POST this to Elasticsearch, it goes wrong:
"type" : "mapper_parsing_exception",
"reason" : "object mapping for [user.birthday] tried to parse field [birthday] as object, but found a concrete value"
But if I change it to this:
{
  "user" : {
    "name" : "test",
    "birthay" : "123"
  }
}
it works fine.
Is birthday a keyword? What can I do about it?
It's a problem with your mapping. I suppose your birthday is meant to be a date, like below:
{
  "properties": {
    "name": {
      "type": "string",
      "index": "not_analyzed"
    },
    "birthday": {
      "type": "date",
      "format": "yyyy-MM-dd"
    }
  }
}
I imagine your mapping looks something like this:
{
  "properties": {
    "name": {
      "type": "string",
      "index": "not_analyzed"
    },
    "birthday": {
      "type": "object",
      "properties" : {
        "date" : { "type" : "string" }
      },
      "index": "not_analyzed"
    }
  }
}
Or at least something similar that sets the birthday field to an object type. Your mapping actually needs to be as follows:
{
  "properties": {
    "name": {
      "type": "string",
      "index": "not_analyzed"
    },
    "birthday": {
      "type": "date",
      "format": "yyyy-MM-dd",
      "index": "not_analyzed"
    }
  }
}
And the reason that changing the document field name to 'birthay' instead of 'birthday' worked is that, when no type mapping is set for a field, Elasticsearch dynamically determines the type that fits best.
It's also worth noting that if you don't have a mapping defined and you're getting this error, it might be because a document indexed before the failing one had something other than a string-format date as the birthday. This would cause Elasticsearch to infer a different field type and then fail on later documents.
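When dynamic mapping has already guessed wrong like this, it helps to inspect what Elasticsearch inferred and then recreate the index with an explicit mapping before indexing any documents. A sketch with hypothetical index and type names myindex and user, in the same pre-5.x style as the mappings above:

```
GET myindex/_mapping

DELETE myindex

PUT myindex
{
  "mappings": {
    "user": {
      "properties": {
        "name": { "type": "string", "index": "not_analyzed" },
        "birthday": { "type": "date", "format": "yyyy-MM-dd" }
      }
    }
  }
}
```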
