Adding default value to existing mapping in elastic search

I have an index with a mapping. I decided to add a new field to the existing mapping:
{
  "properties": {
    "sexifield": {
      "type": "keyword",
      "null_value": "NULL"
    }
  }
}
As far as I understand, the field should appear in existing documents when I reindex. So when I use the reindex API:
{
  "source": {
    "index": "index_v1"
  },
  "dest": {
    "index": "index_v2",
    "version_type": "external"
  }
}
I see that the mapping for index_v2 does not contain sexifield, and the documents do not contain it either. Also, this operation took less than 60 ms.
Please point out what I am not understanding here.
Adding new documents to the first index (via the Java API, for an entity that does not have the sexifield field, so I expected Elasticsearch to fill in the default) also does not create the additional field.
Thanks in advance for any tips.

Great question, +1 (I learned something while solving your problem).
I don't know how to make the reindex operation itself honor default values defined in the mapping of the second (destination) index, and I am still researching whether that is possible. For now, here is how I would update all the documents in the destination index once the reindexing from the original index is done; see if this solution helps:
POST /index_v2/_update_by_query
{
  "script": {
    "lang": "painless",
    "inline": "ctx._source.sexifield = params.null_value",
    "params": {
      "null_value": "NULL"
    }
  }
}
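For background, a mapping's null_value only affects how explicit null values are indexed; it never rewrites _source and it is ignored for documents that simply omit the field, which matches what you observed. As a rough, untested sketch, the same default could also be applied during the reindex itself by attaching a script to the _reindex request, so the separate _update_by_query pass would not be needed:
POST _reindex
{
  "source": {
    "index": "index_v1"
  },
  "dest": {
    "index": "index_v2",
    "version_type": "external"
  },
  "script": {
    "lang": "painless",
    "source": "if (ctx._source.sexifield == null) { ctx._source.sexifield = 'NULL' }"
  }
}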

Related

How to reindex and change _type

We need to migrate a number of indexes from Elasticsearch 6.8 to Elasticsearch 7.x. To be able to do this, we now need to go back and fix a large number of documents because the _type field of these documents isn't _doc as required. We fixed this for newer indexes, but some of the older data, which we still need, has other values in there.
How do we reindex these indexes and also change the _type field?
POST /_reindex
{
  "source": {
    "index": "my-index-2021-11"
  },
  "dest": {
    "index": "my-index-2021-11-n"
  },
  "script": {
    "source": "ctx._type = '_doc';"
  }
}
I saw a post indicating the above might work, but on execution, the value for _type in the new index was still the existing value from my-index.
The one option I can think of is to iterate through each document in the index and add it to the new index again, which should create the correct _type, but that would take days to complete, so I am not keen on doing that.
I think the below should work. Please test it out before running it on actual data:
POST /_reindex
{
  "source": {
    "index": "my-index-2021-11"
  },
  "dest": {
    "index": "my-index-2021-11-n",
    "type": "_doc"
  }
}
Docs to help with the upgrade:
https://www.elastic.co/guide/en/elasticsearch/reference/7.17/reindex-upgrade-inplace.html

elasticsearch reindex doesn't use index name specified in script

I tried reindexing daily indices from a remote cluster, following the reindex-daily-indices example:
POST _reindex
{
  "source": {
    "remote": {
      "host": "http://remote_es:9200"
    },
    "index": "telemetry-*"
  },
  "dest": {
    "index": "dummy"
  },
  "script": {
    "lang": "painless",
    "source": """
      ctx._index = 'telemetry-' + (ctx._index.substring('telemetry-'.length(), ctx._index.length()));
    """
  }
}
It looks like if the new ctx._index is exactly the same as the original ctx._index, reindex will use dest.index instead. It reindexes all the records into the "dummy" index.
Is this a bug or intended behaviour? I could not find any explanation for this behaviour.
Is there a way to reindex (multiple indices) from remote and still preserve the original names?
It's because, according to your logic, the destination index name is the same as the source index name. In the documentation you linked to, they append '-1' at the end of the index name.
In your case, the following line just sets the destination index name to the same value as the source index name, and reindex doesn't allow that, so it uses the destination index name specified in dest.index:
ctx._index = 'telemetry-' + (ctx._index.substring('telemetry-'.length(), ctx._index.length()));
Also worth noting that this case has been reported here and here.
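As a minimal sketch following the pattern from the linked docs (assuming the goal is simply names that differ from the source), appending a suffix such as '-1' makes the scripted name different from the original, so dest.index is no longer used as a fallback:
POST _reindex
{
  "source": {
    "remote": {
      "host": "http://remote_es:9200"
    },
    "index": "telemetry-*"
  },
  "dest": {
    "index": "dummy"
  },
  "script": {
    "lang": "painless",
    "source": "ctx._index = ctx._index + '-1'"
  }
}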

How to update data type of a field in elasticsearch

I am publishing data to Elasticsearch using fluentd. It has a field Data.CPU which is currently set to string. The index name is health_gateway.
I have made some changes in the Python code which generates the data, so this field Data.CPU has now become an integer. But Elasticsearch is still showing it as a string. How can I update its data type?
I tried running the below command in the Kibana dev tools:
PUT health_gateway/doc/_mapping
{
  "doc": {
    "properties": {
      "Data.CPU": { "type": "integer" }
    }
  }
}
But it gave me the below error:
{
  "error": {
    "root_cause": [
      {
        "type": "illegal_argument_exception",
        "reason": "Types cannot be provided in put mapping requests, unless the include_type_name parameter is set to true."
      }
    ],
    "type": "illegal_argument_exception",
    "reason": "Types cannot be provided in put mapping requests, unless the include_type_name parameter is set to true."
  },
  "status": 400
}
There is also this document which says we can convert the data type using mutate, but I am not able to understand it properly.
I do not want to delete the index and recreate it, as I have created a visualization based on this index and it would also be deleted. Can anyone please help with this?
The short answer is that you can't change the mapping of a field that already exists in a given index, as explained in the official docs.
The specific error you got is because you included /doc/ in your request path (you probably wanted /<index>/_mapping), but fixing this alone won't be sufficient.
Finally, I'm not sure you really have a dot in the field name there. Last I heard it wasn't possible to use dots in field names.
Nevertheless, there are several ways forward in your situation... here are a couple of them:
Use a scripted field
You can add a scripted field to the Kibana index-pattern. It's quick to implement, but has major performance implications. You can read more about them on the Elastic blog here (especially under the heading "Match a number and return that match").
Add a new multi-field
You could add a new multifield. The example below assumes that CPU is a nested field under Data, rather than really being called Data.CPU with a literal .:
PUT health_gateway/_mapping
{
  "properties": {
    "Data": {
      "properties": {
        "CPU": {
          "type": "keyword",
          "fields": {
            "int": {
              "type": "short"
            }
          }
        }
      }
    }
  }
}
Reindex your data within ES
Use the Reindex API. Be sure to set the correct mapping on the target index.
Delete and reindex everything from source
If you are able to regenerate the data from source in a timely manner, without disrupting users, you can simply delete the index and reingest all your data with an updated mapping.
You can update the mapping by indexing the same field in multiple ways, i.e. by using multi-fields.
Using the below mapping, Data.CPU.raw will be of integer type:
{
  "mappings": {
    "properties": {
      "Data": {
        "properties": {
          "CPU": {
            "type": "text",
            "fields": {
              "raw": {
                "type": "integer"
              }
            }
          }
        }
      }
    }
  }
}
Or you can create a new index with the correct mapping and reindex the data into it using the reindex API.
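As a rough sketch of that second option (the index name health_gateway_v2 is just a placeholder), you could create a new index with Data.CPU mapped as an integer and then reindex into it; by default Elasticsearch coerces numeric strings like "30" into integers when indexing:
PUT health_gateway_v2
{
  "mappings": {
    "properties": {
      "Data": {
        "properties": {
          "CPU": { "type": "integer" }
        }
      }
    }
  }
}

POST _reindex
{
  "source": { "index": "health_gateway" },
  "dest": { "index": "health_gateway_v2" }
}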

How to change the field type in an ElasticSearch Index?

I have index_A, which includes a number field "foo".
I copy the mapping for index_A, and make a dev tools call PUT /index_B with the field foo changed to text, so the mapping portion of that is:
"foo": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword"
}
}
I then reindex index_A to index_B with:
POST _reindex
{
  "source": {
    "index": "index_A"
  },
  "dest": {
    "index": "index_B"
  }
}
When I go to view any document for index_B, the entry for the "foo" field is still a number. (I was expecting for example: "foo": 30 to become "foo" : "30" in the new document's source).
As much as I've read on Mappings and reindexing, I'm still at a loss on how to accomplish this. What specifically do I need to run in order to get this new index with "foo" as a text field, and all number entries for foo in the original index changed to text entries in the new index?
There's a distinction between how a field is stored vs indexed in ES. What you see inside of _source is stored and it's the "original" document that you've ingested. But there's no explicit casting based on the mapping type -- ES stores what it receives but then proceeds to index it as defined in the mapping.
In order to verify how a field was indexed, you can inspect the script stack returned in:
GET index_b/_search
{
  "script_fields": {
    "debugging_foo": {
      "script": {
        "source": "Debug.explain(doc['foo'])"
      }
    }
  }
}
as opposed to how a field was stored:
GET index_b/_search
{
  "script_fields": {
    "debugging_foo": {
      "script": {
        "source": "Debug.explain(params._source['foo'])"
      }
    }
  }
}
So in other words, rest assured that foo was indeed indexed as text + keyword.
If you'd like to explicitly cast a field value into a different data type in the _source, you can apply a script along the lines of:
POST _reindex
{
  "source": {
    "index": "index_a"
  },
  "dest": {
    "index": "index_b"
  },
  "script": {
    "source": "ctx._source.foo = '' + ctx._source.foo"
  }
}
I'm not overly familiar with Java, but I think ... = ctx._source.foo.toString() would work too.
FYI there's a coerce mapping parameter which sounds like it could be of use here but it only works the other way around -- casting/parsing from strings to numerical types etc.
FYI#2 There's a pipeline processor called convert that does exactly what I did in the above script, and more. (A pipeline is a pre-processor that runs before the fields are indexed in ES.) The good thing about pipelines is that they can be run as part of the _reindex process too.
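As a small sketch of that pipeline approach (the pipeline name foo-to-string is just a placeholder), a convert processor can cast foo to a string and then be referenced from the reindex request:
PUT _ingest/pipeline/foo-to-string
{
  "description": "Cast foo to a string before indexing",
  "processors": [
    {
      "convert": {
        "field": "foo",
        "type": "string"
      }
    }
  ]
}

POST _reindex
{
  "source": { "index": "index_a" },
  "dest": {
    "index": "index_b",
    "pipeline": "foo-to-string"
  }
}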

How to set existing elastic search mapping from index: no to index: analyzed

I am new to Elasticsearch. I want to update the existing mapping under my index. My existing mapping looks like this:
"load":{
"mappings": {
"load": {
"properties":{
"customerReferenceNumbers": {
"type": "string",
"index": "no"
}
}
}
}
}
I would like to update this field in my mapping to be analyzed, so that my customerReferenceNumbers field will be available for search.
I am trying to run the following query in the Sense plugin to do so:
PUT /load/load/_mapping
{
  "load": {
    "properties": {
      "customerReferenceNumbers": {
        "type": "string",
        "index": "analyzed"
      }
    }
  }
}
but I am getting the following error with this command:
MergeMappingException[Merge failed with failures {[mapper customerReferenceNumbers] has different index values]
Though there is data associated with these mappings, I am unable to understand why Elasticsearch is not allowing me to update the mapping from non-indexed to indexed.
Thanks in advance!!
ElasticSearch doesn't allow this kind of change.
And even if it were possible, since you would have to reindex your data anyway for the new mapping to be used, it is faster to create a new index with the new mapping and reindex your data into it.
If you can't afford any downtime, take a look at the alias feature which is designed for these use cases.
This is by design. You cannot change the mapping of an existing field in this way. Read more about this at https://www.elastic.co/blog/changing-mapping-with-zero-downtime and https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-mapping.html.
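As a minimal sketch of what both answers suggest (load_v2 and load_search are placeholder names, and this assumes a cluster version that still uses string/analyzed mappings but already has the _reindex API), you would create a new index with the desired mapping, reindex into it, and then point an alias at it so searches can switch over without downtime:
PUT /load_v2
{
  "mappings": {
    "load": {
      "properties": {
        "customerReferenceNumbers": {
          "type": "string",
          "index": "analyzed"
        }
      }
    }
  }
}

POST /_reindex
{
  "source": { "index": "load" },
  "dest": { "index": "load_v2" }
}

POST /_aliases
{
  "actions": [
    { "add": { "index": "load_v2", "alias": "load_search" } }
  ]
}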
