Elasticsearch update mappings - elasticsearch

I have mappings that were created wrongly for an object in Elasticsearch. Is there a way to update the mappings? The mapping was created with the wrong type for the object (string instead of double).

In general, the mapping for existing fields cannot be updated. There are some exceptions to this rule. For instance:
- new properties can be added to Object datatype fields.
- new multi-fields can be added to existing fields.
- doc_values can be disabled, but not enabled.
- the ignore_above parameter can be updated.
Source: https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-mapping.html
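As an illustration of one of these permitted changes, the sketch below raises ignore_above on an existing keyword field; the index my_index and field title are hypothetical, and the typeless endpoint assumes Elasticsearch 7+ (older versions include the type in the path):

PUT /my_index/_mapping
{
  "properties": {
    "title": {
      "type": "keyword",
      "ignore_above": 512
    }
  }
}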

That's entirely possible by PUTting the new mapping over the existing one; here are some examples.
Please note that you will probably need to reindex all your data after you have done this, because I don't think that ES can convert string indexes to double indexes. (What will happen instead is that you won't find any documents when you search on that field.)
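For the reindexing step, the Reindex API can copy documents from the old index into a new one created with the corrected mapping; a minimal sketch with placeholder index names:

POST /_reindex
{
  "source": { "index": "old_index" },
  "dest": { "index": "new_index" }
}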

The PUT mapping API allows you to add fields to, or adjust certain datatype details in, an existing index. (Note that the lowercase_normalizer used below is a custom normalizer that must already be defined in the index settings.)
PUT /assets/asset/_mapping
{
  "properties": {
    "common_attributes.asset_id": {
      "type": "text",
      "fields": {
        "keyword": {
          "type": "keyword",
          "doc_values": true,
          "normalizer": "lowercase_normalizer"
        }
      }
    }
  }
}
After updating the mapping, update the existing documents using the Bulk API.
POST /_bulk
{"update":{"_id":"59519","_type":"asset","_index":"assets"}}
{"doc":{"facility_id":491},"detect_noop":false}
Note: detect_noop (true by default) skips updates that would not change the document; setting it to false forces the document to be reindexed even when nothing changes.
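If the goal is just to have existing documents pick up a newly added multi-field, Update By Query with no script reindexes every document in place; a minimal sketch against the assets index above:

POST /assets/_update_by_query?conflicts=proceed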

Related

Elastic Beats - Changing the Field Type of Default Fields in Beats Documents?

I'm still fairly new to the Elastic Stack and I'm still not seeing the entire picture from what I'm reading on this topic.
Let's say I'm using the latest versions of Filebeat or Metricbeat, for example, pushing that data to the Logstash output (which is then configured to push to ES). I want an "out of the box" field from one of these Beats to have its field type changed (example: change beat.hostname from its current default "text" type to "keyword"). What is the best place/practice for configuring this? This kind of change is something I would want consistent across multiple hosts running the same Beat.
I wouldn't change any existing fields, since Kibana builds a lot of visualizations, dashboards, SIEM,... on the expected fields and data types.
Instead, extend (add, don't change) the default mapping if needed. On top of the default index template, you can add your own and they will be merged. Adding more fields will require some more disk space (and probably memory when loading), but it should be manageable and avoids a lot of the drawbacks of other approaches.
Agreed with #xeraa. It is not advised to change the default template, since those fields might be used in the default visualizations.
Create a new template; you can have multiple templates for the same index pattern. All the mappings will be merged. The order of the merging can be controlled using the order parameter, with lower orders being applied first and higher orders overriding them.
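A sketch of such an add-on template using the legacy _template API (the template name, index pattern, field, and order value are all assumptions; the typeless mappings form assumes Elasticsearch 7+):

PUT /_template/beats_custom_fields
{
  "index_patterns": ["filebeat-*"],
  "order": 1,
  "mappings": {
    "properties": {
      "my_custom_field": {
        "type": "keyword"
      }
    }
  }
}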
For your case, probably create a multi-field for any field that needs to be changed. E.g., as shown below, create a new keyword multi-field; then you can refer to the new field as fieldname.raw.
"properties": {
"city": {
"type": "text",
"fields": {
"raw": {
"type": "keyword"
}
}
}
}
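After the multi-field is added and documents are (re)indexed, the keyword variant supports exact matching; a quick sketch assuming an index named my_index:

GET /my_index/_search
{
  "query": {
    "term": {
      "city.raw": "New York"
    }
  }
}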
The other answers are correct, but I did the below in the Dev console to update the message field from text to text & keyword:
PUT /index_name/_mapping
{
  "properties": {
    "message": {
      "type": "match_only_text",
      "fields": {
        "keyword": {
          "type": "keyword",
          "ignore_above": 10000
        }
      }
    }
  }
}

ElasticSearch 6, copy_to with dynamic index mappings

Maybe I'm missing something simple, but I still could not figure out the following:
As of ES 6.x the _all field is deprecated, and instead it's suggested to use the copy_to instruction (https://www.elastic.co/guide/en/elasticsearch/reference/current/copy-to.html).
However, I got the impression that you need to explicitly specify the fields which you want to copy to the custom _all field. But if I use dynamic mappings, I don't know the fields in advance, and therefore cannot use copy_to?
Is there any way I can tell ES to copy all encountered fields to the custom _all field so that I can search across all fields?
Thanks in advance!
You could use dynamic templates. Basically, create an index, add the custom catch_all field, and then set that copy_to property for all fields that are strings. (I haven't done this before, but I believe this is the only way now. Since the catch_all field will already be present when you put the dynamic template, the template should not match it, meaning catch_all will not copy to itself, but check it yourself to make sure.)
PUT my_index
{
  "mappings": {
    "_doc": {
      "dynamic_templates": [
        {
          "strings": {
            "match_mapping_type": "string",
            "mapping": {
              "type": "text",
              "copy_to": "catch_all"
            }
          }
        }
      ]
    }
  }
}
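To sanity-check the template, index a document with a string field and search the catch_all field (the document and values are illustrative):

PUT my_index/_doc/1
{
  "title": "quick brown fox"
}

GET my_index/_search
{
  "query": {
    "match": {
      "catch_all": "fox"
    }
  }
}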

Scripted Field Kibana Not Working

I am trying to get scripted fields in Kibana to work.
I have two fields in my documents, customer and site
I'd like to create a new scripted field called friendly_name which is customer+" "+site
I've tried
return doc["customer"].value + " " + doc["site"].value
and it doesn't yield any results.
I've even tried just return 1 to see if I can get anything to return.
How can I get this to work?
Scripted fields work with doc_values only, and I am guessing that, since this doesn't work for you, your customer and site fields are text fields.
From https://www.elastic.co/blog/using-painless-kibana-scripted-fields:
Both Painless and Lucene expressions operate on fields stored in doc_values. So for string data, you will need to have the string to be stored in data type keyword.
So, you either define your two fields to be keyword, or you add a subfield to them and use customer.keyword and site.keyword in your script. The changed mapping should be:
"customer": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
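With that mapping in place (and an equivalent keyword subfield on site), the scripted field then reads from the keyword subfields instead:

return doc["customer.keyword"].value + " " + doc["site.keyword"].value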

Is it possible to update an existing field in an index through mapping in Elasticsearch?

I've already created an index, and it contains data from my MySQL database. I've got a few fields which are strings in my table, but I need them as different types (integer & double) in Elasticsearch.
So I'm aware that I could do it through mapping as follows:
{
  "mappings": {
    "my_type": {
      "properties": {
        "userid": {
          "type": "text",
          "fielddata": true
        },
        "responsecode": {
          "type": "integer"
        },
        "chargeamount": {
          "type": "double"
        }
      }
    }
  }
}
But I've only tried this when creating the index as a new one. What I wanted to know is: how can I update an existing field (i.e. chargeamount in this scenario) using a mapping PUT?
Is this possible? Any help could be appreciated.
Once a mapping type has been created, you're very constrained in what you can update. According to the official documentation, the only changes you can make to an existing mapping after it's been created are the following, and changing a field's type is not one of them:
In general, the mapping for existing fields cannot be updated. There are some exceptions to this rule. For instance:
- new properties can be added to Object datatype fields.
- new multi-fields can be added to existing fields.
- doc_values can be disabled, but not enabled.
- the ignore_above parameter can be updated.
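Since a type change is not permitted in place, the usual route is to create a new index with the corrected mapping and reindex into it. A minimal sketch, where my_index and my_index_v2 are placeholder names:

PUT /my_index_v2
{
  "mappings": {
    "my_type": {
      "properties": {
        "chargeamount": { "type": "double" }
      }
    }
  }
}

POST /_reindex
{
  "source": { "index": "my_index" },
  "dest": { "index": "my_index_v2" }
}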

How to set an existing Elasticsearch mapping from index: no to index: analyzed

I am new to Elasticsearch, and I want to update the existing mapping under my index. My existing mapping looks like:
"load":{
"mappings": {
"load": {
"properties":{
"customerReferenceNumbers": {
"type": "string",
"index": "no"
}
}
}
}
}
I would like to update this field in my mapping to be analyzed, so that my customerReferenceNumbers field will be available for search.
I am trying to run the following query in Sense plugin to do so,
PUT /load/load/_mapping { "load": {
"properties": {
"customerReferenceNumbers": {
"type": "string",
"index": "analyzed"
}
}
}}
but I am getting the following error with this command:
MergeMappingException[Merge failed with failures {[mapper customerReferenceNumbers] has different index values]
Though there is data associated with these mappings, I am unable to understand why Elasticsearch is not allowing me to update the mapping from non-indexed to indexed.
Thanks in advance!!
ElasticSearch doesn't allow this kind of change.
And even if it were possible, since you would have to reindex your data for the new mapping to be used anyway, it is faster to create a new index with the new mapping and reindex your data into it.
If you can't afford any downtime, take a look at the alias feature which is designed for these use cases.
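A sketch of the atomic alias swap (the index names load_v1 and load_v2 and the alias load are assumptions); clients querying the alias never see a gap, because both actions are applied atomically:

POST /_aliases
{
  "actions": [
    { "remove": { "index": "load_v1", "alias": "load" } },
    { "add": { "index": "load_v2", "alias": "load" } }
  ]
}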
This is by design. You cannot change the mapping of an existing field in this way. Read more about this at https://www.elastic.co/blog/changing-mapping-with-zero-downtime and https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-mapping.html.