Remove field and add new field to the mapping in an Elasticsearch index

I made a mistake at the time of the first index creation for one field: instead of the "integer" data type, I mistakenly assigned "string" to the field "rating", even though the data stored in that field was always integers. When I try to calculate the average rating with an aggregation, it throws an error because of the string data type.
Is there a way to change the data type of the field without reindexing?
If it's not possible without a reindex, how can I remove the rating field and add a new rating field with the "integer" data type?
Please help me resolve this issue.
Update
I deleted the type in the index using the command below:
curl -XDELETE 'http://localhost:9300/feedbacks_16/responses'
I then recreated the type with the same name, changed the data type of my rating field, and reindexed all the data. Everything went fine up to and including reindexing, but the avg query still isn't working.
Below is the error I'm getting:
{ "error": "SearchPhaseExecutionException[Failed to execute phase [query], all shards failed; shardFailures {[2NsGsPYLR2eP9p2zYSnKGQ][feedbacks-16][0]: ClassCastException[org.elasticsearch.index.fielddata.plain.PagedBytesIndexFieldData cannot be cast to org.elasticsearch.index.fielddata.IndexNumericFieldData]}{[2NsGsPYLR2eP9p2zYSnKGQ][feedbacks_16][1]: ClassCastException[org.elasticsearch.index.fielddata.plain.PagedBytesIndexFieldData cannot be cast to org.elasticsearch.index.fielddata.IndexNumericFieldData]}{[pcwXn3X-TceUO0ub29ZFgA][feedbacks_16][2]: RemoteTransportException[[tsles02][inet[/localhost:9300]][indices:data/read/search[phase/query]]]; nested: ClassCastException; }{[pcwXn3X-TceUO0ub29ZFgA][feedbacks_16][3]: RemoteTransportException[[tsles02][inet[/localhost:9300]][indices:data/read/search[phase/query]]]; nested: ClassCastException; }{[2NsGsPYLR2eP9p2zYSnKGQ][feedbacks_16][4]: ClassCastException[org.elasticsearch.index.fielddata.plain.PagedBytesIndexFieldData cannot be cast to org.elasticsearch.index.fielddata.IndexNumericFieldData]}]", "status": 500 }

A mapping cannot be updated, aside from a few exceptions:
you can add new properties
you can promote a simple field to a multi-field
you can disable doc_values (but not enable them)
you can update the ignore_above parameter
So if you wish to transform your rating field from string to integer without creating a new index, the only solution is to create a sub-field (e.g. called rating.int) of type integer.
Note that you'll still have to reindex your data in order to populate that new sub-field. And if you're reindexing anyway, you'd be much better off simply re-creating a clean index from scratch and re-populating it.
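For the record, here's a sketch of how that sub-field could be added with the put mapping API, reusing the feedbacks_16 index and responses type from your error message (the multi-field "fields" syntax assumes ES 1.x or later):
curl -XPUT 'http://localhost:9200/feedbacks_16/_mapping/responses' -d '{
  "properties": {
    "rating": {
      "type": "string",
      "fields": {
        "int": { "type": "integer" }
      }
    }
  }
}'
Your avg aggregation would then have to target rating.int instead of rating.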

1) You can change the data type of a field without reindexing, but the problem is that it won't affect your existing data: documents are stored in immutable segments, so the rating field will remain a string in documents already indexed, and only newly indexed documents will get the integer type. So that alone won't solve your problem.
2) You could delete all documents from your current index and then change the mapping with the put mapping API like this:
$ curl -X PUT 'http://localhost:9200/your_index/your_type/_mapping?ignore_conflicts=true' -d '{
  "your_type": {
    "properties": {
      "rating": {
        "type": "integer"
      }
    }
  }
}'
and then reindex.
But a better approach would be to create a new index with the above mapping and reindex with zero downtime.
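To sketch the zero-downtime idea (the alias name feedbacks and the new index name feedbacks_16_v2 are purely illustrative): have your application query an alias rather than the index, create the new index with the corrected mapping, reindex into it, and then swap the alias atomically:
curl -XPOST 'http://localhost:9200/_aliases' -d '{
  "actions": [
    { "remove": { "index": "feedbacks_16", "alias": "feedbacks" } },
    { "add": { "index": "feedbacks_16_v2", "alias": "feedbacks" } }
  ]
}'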

Related

Kibana display number after comma on metric

I'm trying to display all the digits after the decimal point in my Kibana data table, but even with a JSON input format it doesn't display as expected...
Do you have an idea how to do this?
Here, for example, I have 2.521, but it can be 0.632, or 0.194...
I only see 0 in the Min, Max, and Avg columns.
In my C# code the value is a double, and it is indexed as a number in the Kibana index.
How can I do this, please?
Thanks a lot and best regards.
This usually means that your field has been mapped as integer or long. If that's the case, 0.632 is stored as 0 and 2.521 as 2.
You need to make sure that those fields are mapped as float or double in your mapping.
PS: you cannot change the mapping type once the index has been created; you need to create a new index and reindex your data.
You need to pre-create your index with the right mapping types before sending the first document:
PUT webapi-myworkspace-test
{
  "mappings": {
    "properties": {
      "GraphApiResponseTime": {
        "type": "double"
      }
    }
  }
}
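Then copy the existing documents over, for instance with the _reindex API (available since ES 2.3; webapi-myworkspace-test-old is a hypothetical name for your old index):
POST _reindex
{
  "source": { "index": "webapi-myworkspace-test-old" },
  "dest": { "index": "webapi-myworkspace-test" }
}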

Elasticsearch - check which all fields are indexed in an index?

I browse to curl -XGET 'http://localhost:9200/<index_name>/_mappings' and it returns the fields in an index.
This is one of the fields:
{"dob" : { "type": "string", "analyzer" : "my_custom_analyzer"}}
With the above response, does this mean the dob field is indexed by default, or does index: true have to be explicitly set for this field to be indexed?
It seems you're using a very old version of Elasticsearch, likely 2.x or earlier.
However, based on the mapping you've shared, string fields are indexed by default: in your case, dob is analyzed by the custom analyzer called my_custom_analyzer, and the resulting tokens are indexed automatically.
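In other words, on those old versions a string field is searchable unless the mapping explicitly opts out of indexing. Only a mapping like this sketch would make dob unsearchable:
{"dob" : { "type": "string", "index": "no" }}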

How to change the field data type in elasticsearch

"#version":{
"type":"string",
"index":"not_analyzed",
"ignore_above":1024
},
Here I have to change the type from string to long.
I have used curl -XPUT 'http://localhost:9200/' (this is just a sample).
Does anyone have any idea on this?
Assuming you are using dynamic mapping (which is the default), the type of a field depends on the type of data present in that field in the first indexed document.
So if the first indexed document has a field "version" of type string, the mapping will have a field "version" of type string.
See the documentation on dynamic mapping.
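A quick illustration (demo and logs are hypothetical index/type names): index a document whose field value is a JSON string, then fetch the mapping, and you'll see the field dynamically typed as string:
curl -XPOST 'http://localhost:9200/demo/logs' -d '{ "#version": "1.0" }'
curl -XGET 'http://localhost:9200/demo/_mapping'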
You can't update a mapping. As explained in the documentation, you need to create a new index and reindex your data.
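Concretely, that could look like the following sketch (new_index and your_type are placeholder names; after creating the index you'd reindex your documents into it):
curl -XPUT 'http://localhost:9200/new_index' -d '{
  "mappings": {
    "your_type": {
      "properties": {
        "#version": { "type": "long" }
      }
    }
  }
}'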

Timestamp not appearing in Kibana

I'm pretty new to Kibana and just set up an instance to look at some Elasticsearch data.
I have one index in Elastic Search, which has a few fields including _timestamp. When I go to the 'Discover' tab and look at my documents, each have the _timestamp field but with a yellow warning next to the field saying "No cached mapping for this field". As a result, I can't seem to sort/filter by time.
When I try and create a new index pattern and click on "Index contains time-based events", the 'Time-field name' dropdown doesn't contain anything.
Is there something else I need to do to get Kibana to recognise the _timestamp field?
I'm using Kibana 4.0.
You'll need to take these quick steps first:
Go to Settings → Advanced.
Edit the metaFields and add "_timestamp". Hit save.
Now go back to Settings → Indices and _timestamp will be available in the drop-down list for "Time-field name".
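(This assumes the _timestamp field is actually enabled in your index mapping on the Elasticsearch side; in ES 1.x that is set at index creation, roughly like this sketch with placeholder index/type names:)
curl -XPUT 'http://localhost:9200/myindex' -d '{
  "mappings": {
    "my_type": {
      "_timestamp": { "enabled": true }
    }
  }
}'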
In newer versions you are required to specify the date field before you send your data.
Your date field must be in a standard format such as milliseconds since epoch (a long number) or, just as suggested by MrE, ISO 8601.
See more info here: https://www.elastic.co/guide/en/elasticsearch/reference/current/date.html
Again, before you send your data to the index, you must specify the mapping for this field. In Python:
import requests

# create the index with an explicit date mapping before indexing any data;
# note: the key under "mappings" must be the type name, not the index name
mapping = '{"mappings": {"your_type": {"properties": {"your_timestamp_field": {"type": "date"}}}}}'
requests.put('http://yourserver/your_index', data=mapping)
...
send_data()
My ES version is 2.2.0.
You have to set the right schema. I followed the guide.
E.g.:
{
  "memory": "integer",
  "geo.coordinates": "geo_point",
  "#timestamp": "date"
}
If you have the #timestamp field mapped as a date, you will see it in the "Time-field name" drop-down.
PS: if your schema doesn't have a "date" field, do not check "Index contains time-based events".
The accepted answer is obsolete as of Kibana 2.0
You should use a simple date field in your data and set it explicitly using either a timestamp or a date string in ISO 8601 format.
https://en.wikipedia.org/wiki/ISO_8601
You also need to set the mapping to date BEFORE you start sending data.
curl -XPUT 'http://localhost:9200/myindex' -d '{
  "mappings": {
    "my_type": {
      "properties": {
        "date": {
          "type": "date"
        }
      }
    }
  }
}'
Go to Settings → Indices, select your index, and click the yellow "refresh" icon. That will get rid of the warning, and perhaps make the field available in your visualizations.

How to search fields with '-' characters in Elasticsearch

I am new to Elasticsearch. I have the following document, where the field "eventId" has a "-" in its value.
When I try to search with the complete value of eventId, I don't get any results.
Sample document app/event:
{
  "tags": {},
  "eventId": "cc98d57b-c6bc-424c-b54c-df1e3df0d942"
}
I haven't created any explicit settings for my index.
Thanks.
You should check whether the tokenizer splits your value into multiple tokens. Your value may be stored as five separate terms: "cc98d57b", "c6bc", "424c", "b54c" and "df1e3df0d942".
You can analyze that with the 'Kopf' Plugin (https://github.com/lmenezes/elasticsearch-kopf).
If that is your problem, you should change your field mapping so that the value is not analyzed ("index": "not_analyzed").
For an example of how to set that mapping, see here: Elasticsearch mapping settings 'not_analyzed' and grouping by field in Java
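For instance, here is a minimal sketch assuming app is your index and event your type (pre-5.x syntax, where string fields support not_analyzed):
curl -XPUT 'http://localhost:9200/app' -d '{
  "mappings": {
    "event": {
      "properties": {
        "eventId": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}'
Since the mapping of an existing field can't be changed in place, you'd recreate the index with this mapping and reindex your documents.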
After that, you should be able to search for your specific value.
