Overwrite/Update Existing Elasticsearch Index Mapping (geo_point) using Kibana

I am trying to update the mapping for a geo_point field in my Elasticsearch index but am running into issues. I am using the Dev Tools console in Kibana.
The data for the geo_point field arrives as a double array. I am using Spark with elasticsearch-hadoop-5.3.1.jar, and the data is coming into Elasticsearch/Kibana but remains in a number format, while I need to convert it to a geo_point.
It seems that I am unable to update the index mapping once it is defined. I've tried the request below:
PUT my_index
{
  "mappings": {
    "my_type": {
      "properties": {
        "my_location": {
          "type": "geo_point"
        }
      }
    }
  }
}
but this results in an "index already exists exception" error.
Thanks for any suggestions.

The command you used just tries to create a new index with the mappings you specified. For more information, read the footnotes in the first example here.
As per Elasticsearch documentation, updating mappings of an existing field is not possible.
Updating Field Mappings
In general, the mapping for existing fields cannot be updated. There are some exceptions to this rule. For instance:
- new properties can be added to Object datatype fields.
- new multi-fields can be added to existing fields.
- the ignore_above parameter can be updated.
As geo_point doesn't fall into any case mentioned above, you cannot modify mappings of that field.
You might need to reindex the data.
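Since existing mappings cannot be changed in place, the usual path is to create a new index with the desired mapping and copy the documents over with the _reindex API (available since Elasticsearch 2.3, so it works on a 5.3 cluster). A minimal sketch, reusing the names from the question (my_index_v2 is a hypothetical target name):

PUT my_index_v2
{
  "mappings": {
    "my_type": {
      "properties": {
        "my_location": {
          "type": "geo_point"
        }
      }
    }
  }
}

POST _reindex
{
  "source": { "index": "my_index" },
  "dest": { "index": "my_index_v2" }
}

Since geo_point accepts a [lon, lat] double array as input, the existing values should be reinterpreted as points during the reindex without any transformation.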

Related

How to update the mapping in Elasticsearch to change the field datatype and change the type of analyzers in string

While trying to update the mapping I get the following error:
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"mapper [timestamp] of different type, current_type [string], merged_type [date]"}],"type":"illegal_argument_exception","reason":"
mapper [timestamp] of different type, current_type [string], merged_type [date]"},"status":400}
I'm trying to run the following command on Windows:
curl -XPUT localhost:9200/logstash-*/_mapping/log?update_all_types -d "{
  "properties": {
    "timestamp": {
      "type": "date",
      "format": "MM-dd-yyyy HH:mm:ss",
      "fielddata": { "loading": "lazy" }
    }
  }
}"
How can I change the datatype of the timestamp field from string to date with a particular format?
I also tried to change the mapping of a string field from analyzed to not_analyzed with eager fielddata loading, but it gives the following error:
{"root_cause":[{"type":"illegal_argument_exception","reason":"Mapper for [AppName] conflicts with existing mapping in other types:\n[mapper [AppName] has different [index] values, mapper [App
different [doc_values] values, cannot change from disabled to enabled, mapper [AppName] has different [analyzer]]"}],"type":"illegal_argument_exception","reason":"Mapper for [AppName] conflict with
existing mapping in other types:\n[mapper [AppName] has different [index] values, mapper [AppName] has different [doc_values] values, cannot change from disabled to enabled, mapper [AppName]
rent [analyzer]]"},"status":400}
Here is my query for the same:
curl -XPUT localhost:9200/logstash-*/_mapping/log?update_all_types -d "{
  "properties": {
    "AppName": {
      "type": "string",
      "index": "not_analyzed",
      "fielddata": { "loading": "eager" }
    }
  }
}"
However, if I change it from not_analyzed to analyzed, it gives an acknowledged=true message. How can I change the analyzer?
You cannot change the mapping of existing fields. As the Elastic docs say:
Although you can add to an existing mapping, you can’t change existing field mappings. If a mapping already exists for a field, data from that field has probably been indexed. If you were to change the field mapping, the indexed data would be wrong and would not be properly searchable.
We can update a mapping to add a new field, but we can’t change an
existing field from analyzed to not_analyzed.
Your only option is to create a new index with the new mapping and reindex the data from the old index to the new one.
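Concretely, for the timestamp case above, that would mean creating a new index whose mapping declares the field as a date up front and then copying the data over. A sketch, assuming Elasticsearch 2.3+ for _reindex (logstash-new and logstash-old are hypothetical names; with daily logstash-* indices you would repeat the copy per index):

PUT logstash-new
{
  "mappings": {
    "log": {
      "properties": {
        "timestamp": {
          "type": "date",
          "format": "MM-dd-yyyy HH:mm:ss"
        }
      }
    }
  }
}

POST _reindex
{
  "source": { "index": "logstash-old" },
  "dest": { "index": "logstash-new" }
}

The existing string values are parsed against the declared format as they are reindexed, so documents whose timestamps do not match the pattern will be rejected.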
No, you cannot change a single field definition.
If you want to change the field definition for a single field in a single type, you have little option but to reindex all of the documents in your index.
Why can't you change mappings? This article Changing Mapping with Zero Downtime explains,
In order to make
your data searchable, your database needs to know what type of data
each field contains and how it should be indexed.
If you switch a
field type from e.g. a string to a date, all of the data for that
field that you already have indexed becomes useless. One way or
another, you need to reindex that field.
This applies not just to Elasticsearch, but to any database that uses
indices for searching. And if it isn't using indices then it is
sacrificing speed for flexibility.
What happens when you index a document with an incorrect field type?
A conversion will be attempted. If no valid conversion exists, an exception is thrown.
Elasticsearch: The Definitive Guide has a note about an example where a string is supplied but a long is expected: a conversion will be attempted, but an exception is still thrown if no valid conversion exists.
[...] if the field is already
mapped as type long, then ES will try to convert the string
into a long, and throw an exception if it can’t.
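For example, with a count field already mapped as long (the field name and values here are hypothetical), the first request below succeeds because the string parses as a number, while the second is rejected with a mapper_parsing_exception:

PUT my_index/my_type/1
{
  "count": "42"
}

PUT my_index/my_type/2
{
  "count": "foo"
}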
Can I have the document indexed anyway, ignoring the malformed fields?
Yes. ES5 provides an ignore_malformed mapping parameter. The Elasticsearch Reference explains that,
Trying to index the wrong datatype into a field throws an exception by
default, and rejects the whole document. The ignore_malformed
parameter, if set to true, allows the exception to be ignored. The
malformed field is not indexed, but other fields in the document are
processed normally.
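For instance, a mapping like the following (index, type, and field names are placeholders) keeps accepting documents whose age value cannot be parsed as a long; the malformed value is simply not indexed, while the rest of the document is:

PUT my_index
{
  "mappings": {
    "my_type": {
      "properties": {
        "age": {
          "type": "long",
          "ignore_malformed": true
        }
      }
    }
  }
}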

Enabling Elasticsearch _size field

I have an index with over 9,000,000 docs.
I have defined my own mapping and everything went fine.
The only problem is that I forgot to enable the _size field, and now I need it to locate documents with a large size.
From the documentation I found that it's just fine to use the PUT mapping API with these parameters:
{
  "my_index": {
    "_size": {
      "enabled": true
    }
  }
}
Will the new mapping be merged with the one already set?
Will the _size field be enabled for already stored documents?
I am a little worried about making changes to the mapping, because the last time I updated the settings with a new analyzer the service had problems due to shard relocation and everything got stuck.
The mappings will be merged OK and the _size field will be enabled for all your documents of type my_index.
Note that if you want to store the _size (in addition to just index its value), you also need to add "store": "yes" in your _size mapping.
Unfortunately, you'll need to re-index your data in order for the _size field to be properly indexed.
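Put together, the call would look roughly like this (a sketch: some_index is a placeholder index name and my_index is the type key from your snippet):

PUT some_index/_mapping/my_index
{
  "_size": {
    "enabled": true,
    "store": "yes"
  }
}

Note that from Elasticsearch 2.0 onward the _size field moved to the mapper-size plugin, which has to be installed separately and is configured with just the enabled flag.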

Will updating "_mappings" reflect any changes in Indexed data in Elastic search

I didn't find any change in my search results even after updating some fields in my index mapping. So I want to know: does updating _mappings re-index existing data in Elasticsearch, or will only data inserted after the update be affected by the new index parameters (settings and mappings)?
EX:
Initially I created my index fields as follows:
"fname":{
"type":"string",
"boost":5
}
"lname":{
"type":"string",
"boost":1
}
Then I inserted some data, and it worked fine.
After that I updated my index mapping as follows:
"fname":{
"type":"string",
"boost":1
}
"lname":{
"type":"string",
"boost":5
}
Even after updating the boost values in the index, I am still getting the same results. Why?
1. Does Elasticsearch re-index the data after each and every update of the index settings and mappings?
2. Can the same item type hold data indexed under different settings?
Please clarify.
While you can add fields to the mappings of an index, any other change to already existing fields will either only operate on new documents or fail.
As mentioned in the comments to the question, there is an interesting article about zero-downtime index switching and there is a whole section about index management in the definitive guide.
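As an illustration of the change that is allowed, adding a brand-new field to an existing type succeeds and applies to documents indexed from then on (the field name mname is hypothetical):

PUT my_index/_mapping/my_type
{
  "properties": {
    "mname": {
      "type": "string"
    }
  }
}

Re-putting the mapping with new boost values may well be acknowledged, but index-time boosts are baked in when a document is indexed, so documents indexed before the change keep their old boosts. That is why your existing search results look unchanged.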

Field not searchable in ES?

I created an index myindex in elasticsearch, loaded a few documents into it. When I visit:
localhost:9200/myindex/mytype/1023
I noticed that my particular index has the following metadata for mappings:
"mappings": {
  "mappinggroupname": {
    "properties": {
      "Aproperty": {
        "type": "string"
      },
      "Bproperty": {
        "type": "string"
      }
    }
  }
}
Is there some way to add "store": "yes" and "index": "analyzed" without having to reload/reindex all the documents?
Note that when I view a single document, i.e. localhost:9200/myindex/mytype/1023, I can see that the _source field contains all the fields of that document, and when I go to the "Browser" section of the head plugin all the columns are correct and correspond to my field names. So why is "stored" not showing up in the metadata? I can even perform a _search on them.
What is the difference between "store": "true" and the fact that I can see all my fields and values after indexing my documents as described above?
Nope, no way! That's how your documents got indexed in the underlying Lucene. The only way to change it is to reindex them all!
You see all those fields because you see the content of the special _source field in Lucene, which Elasticsearch stores by default. You are not storing all the fields separately, but you do have the original document you indexed in _source, a single field that contains the whole document.
Generally the _source field is just enough, you don't usually need to configure every field as stored.
Also, the default is "index":"analyzed" if not specified for all the string fields. That means those fields are indexed and analyzed using the standard analyzer if not specified in the mapping. Therefore, as far as I can see from your mapping those two fields should be indexed, thus searchable.
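To check this, a plain match query against one of those fields should return hits even though the fields are not individually stored (the search term here is hypothetical):

GET myindex/_search
{
  "query": {
    "match": {
      "Aproperty": "some value"
    }
  }
}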

How to delete document types in elasticsearch?

I created an index "myindex" with a specified document type "mytype". I am able to delete the index, but it appears that "mytype" still exists without being tied to the index.
How do I get rid of "mytype"?
If you really deleted the index, the mapping in this index should not exist anymore.
Do you have any other index in your cluster with a similar type name?
To answer the question "How to delete document types in elasticsearch?", use the Delete Mapping API:
curl -XDELETE http://localhost:9200/index/type
EDIT: From elasticsearch 2.0, this is no longer possible. See Mapping changes. You will have to install the Delete By Query plugin and run a query which will remove your documents, but the mapping will still exist. So it will most likely be better to reindex your documents into another index without the old type, as sketched below.
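A sketch of that reindex-without-the-old-type approach (old_index and new_index are placeholder names; the type query used here exists in Elasticsearch 2.x and 5.x, and _reindex itself is available from 2.3):

POST _reindex
{
  "source": {
    "index": "old_index",
    "query": {
      "bool": {
        "must_not": {
          "type": { "value": "mytype" }
        }
      }
    }
  },
  "dest": {
    "index": "new_index"
  }
}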
But as @mguillemin and @javanna said, when you delete an index, every mapping attached to this index is deleted as well:
curl -XDELETE http://localhost:9200/index
You can use the _delete_by_query endpoint to delete the documents of a type.
POST index-name/type-name/_delete_by_query
{
  "query": {
    "match": {
      "message": "some message"
    }
  }
}
For further reading, see the docs.
In the latest versions of Elasticsearch, deleting document types is no longer supported. It's mentioned in the documentation:
It is no longer possible to delete the mapping for a type. Instead you
should delete the index and recreate it with the new mappings.
To delete every document of the type, a match_all query will do (note that _delete_by_query does require a query in the request body):
curl -XPOST "http://<host>:9200/<index>/<type>/_delete_by_query" -d '{"query":{"match_all":{}}}'
