How to create a mapping with a JSON file in Elasticsearch 2.3?

I'm using ES 2.3.
Creating a new index with a mapping using the command below works:
curl -XPUT 'http://localhost:9200/megacorp' -d '
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  },
  "mappings": {
    "employee": {
      "properties": {
        "first_name": {
          "type": "string"
        },
        "last_name": {
          "type": "string"
        },
        "age": {
          "type": "integer"
        },
        "about": {
          "type": "string"
        },
        "interests": {
          "type": "string"
        },
        "join_time": {
          "type": "date",
          "format": "dateOptionalTime",
          "index": "not_analyzed"
        }
      }
    }
  }
}
'
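To confirm the index and mapping were created as expected, the standard _mapping endpoint can be queried:
curl -XGET 'http://localhost:9200/megacorp/_mapping?pretty'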
Now I'd like to use a JSON file to create the same index. My tmap.json file looks like this:
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  },
  "mappings": {
    "employee": {
      "properties": {
        "first_name": {
          "type": "string"
        },
        "last_name": {
          "type": "string"
        },
        "age": {
          "type": "integer"
        },
        "about": {
          "type": "string"
        },
        "interests": {
          "type": "string"
        },
        "join_time": {
          "type": "date",
          "format": "dateOptionalTime",
          "index": "not_analyzed"
        }
      }
    }
  },
  "aliases": [ "source" ]
}
Then I use curl to create it:
curl -s -XPOST 'localhost:9200/megacorp' --data-binary @tmap.json
and
curl -XPUT 'http://localhost:9200/megacorp' -d @tmap.json
Both commands fail with an error like the one below:
{"error":{"root_cause":[{"type":"class_cast_exception","reason":"java.util.ArrayList cannot be cast to java.util.Map"}],"type":"class_cast_exception","reason":"java.util.ArrayList cannot be cast to java.util.Map"},"status":500}%
How can I create the index with curl and my JSON file? This has confused me for a long time.
Can anybody help? Thanks.

The way you define the alias is wrong. It should be a map (object) instead of an array:
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  },
  "mappings": {
    "employee": {
      "properties": {
        "first_name": {
          "type": "string"
        },
        "last_name": {
          "type": "string"
        },
        "age": {
          "type": "integer"
        },
        "about": {
          "type": "string"
        },
        "interests": {
          "type": "string"
        },
        "join_time": {
          "type": "date",
          "format": "dateOptionalTime",
          "index": "not_analyzed"
        }
      }
    }
  },
  "aliases": { "source": {} }
}
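With the corrected tmap.json, either of the original commands should then work, e.g.:
curl -s -XPOST 'localhost:9200/megacorp' --data-binary @tmap.json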
More info about aliases in index creation:
https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html#create-index-aliases

Related

Elasticsearch + Kibana, sorting on uri yields no results. (uri isn't analyzed)

I have a log of HTTP requests; one of the fields is a URI field. I want to get the average duration in ms for each URI. I set the y-axis in Kibana to
"Aggregation: Average, Field: durationInMs".
For the x-axis I have
"Aggregation: Terms, Field: uri, Order by: metric: Average durationInMs, Order: Descending, Size: 5".
This gives me a result, but it doesn't use the entire URI; it splits the URI up and matches parts of it instead. After a quick Google search I found "multi-fields", so I added a uri.raw field to my index. The analyzed-field warning disappeared, but now I get no results at all.
Any hints or tips?
lsc-logs2 mapping:
{
  "lsc-logs2": {
    "mappings": {
      "httplogentry": {
        "properties": {
          "context": {
            "type": "string"
          },
          "durationInMs": {
            "type": "double"
          },
          "id": {
            "type": "long"
          },
          "method": {
            "type": "string"
          },
          "source": {
            "type": "string"
          },
          "startTime": {
            "type": "date",
            "format": "strict_date_optional_time||epoch_millis"
          },
          "status": {
            "type": "long"
          },
          "uri": {
            "type": "string",
            "fields": {
              "raw": {
                "type": "string",
                "index": "not_analyzed"
              }
            }
          },
          "username": {
            "type": "string"
          },
          "version": {
            "type": "long"
          }
        }
      }
    }
  }
}
An example document:
{
  "_index": "lsc-logs2",
  "_type": "httplogentry",
  "_id": "1148440",
  "_score": 1,
  "_source": {
    "startTime": "2016-08-22T10:30:57.2298086+02:00",
    "context": "contexturi",
    "method": "GET",
    "uri": "http://uri/plannings/unassigned?date=2016-08-22T03:58:57.168Z&page=1&pageSize=9999",
    "username": "user",
    "source": "192.168.1.82",
    "durationInMs": 171.83710000000002,
    "status": 200,
    "id": 1148440,
    "version": 1
  }
}
When reindexing, the httplogentry mapping doesn't get ported from lsc-logs to lsc-logs2; you need to create the destination index and mapping first, and only then reindex.
First, delete the current destination index:
curl -XDELETE localhost:9200/lsc-logs2
Then create it anew by specifying the proper mapping
curl -XPUT localhost:9200/lsc-logs2 -d '{
  "mappings": {
    "httplogentry": {
      "properties": {
        "context": {
          "type": "string"
        },
        "durationInMs": {
          "type": "double"
        },
        "id": {
          "type": "long"
        },
        "method": {
          "type": "string"
        },
        "source": {
          "type": "string"
        },
        "startTime": {
          "type": "date",
          "format": "strict_date_optional_time||epoch_millis"
        },
        "status": {
          "type": "long"
        },
        "uri": {
          "type": "string",
          "fields": {
            "raw": {
              "type": "string",
              "index": "not_analyzed"
            }
          }
        },
        "username": {
          "type": "string"
        },
        "version": {
          "type": "long"
        }
      }
    }
  }
}'
Then you can reindex your data:
curl -XPOST localhost:9200/_reindex -d '{
  "source": {
    "index": "lsc-logs"
  },
  "dest": {
    "index": "lsc-logs2"
  }
}'
Then refresh the fields in your index pattern in Kibana and it should work.
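To double-check outside Kibana, a terms aggregation on the not_analyzed uri.raw sub-field should now return whole URIs. A minimal sketch against the index above (size 5 mirrors the top-5 setting from the question):
# aggregate whole URIs via uri.raw, ordered by average duration
curl -XPOST 'localhost:9200/lsc-logs2/_search?pretty' -d '{
  "size": 0,
  "aggs": {
    "by_uri": {
      "terms": {
        "field": "uri.raw",
        "size": 5,
        "order": { "avg_duration": "desc" }
      },
      "aggs": {
        "avg_duration": {
          "avg": { "field": "durationInMs" }
        }
      }
    }
  }
}'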

dynamic mapping for nested type

I have an index/type of test1/all which looks as follows:
{
  "test1": {
    "mappings": {
      "all": {
        "properties": {
          "colors": {
            "properties": {
              "H": {"type": "double"},
              "S": {"type": "long"},
              "V": {"type": "long"},
              "color_percent": {"type": "long"}
            }
          },
          "file_name": {
            "type": "string"
          },
          "id": {
            "type": "string"
          },
          "no_of_colors": {
            "type": "long"
          }
        }
      }
    }
  }
}
I would like to make the colors field nested. I am trying the following:
PUT /test1/all/_mapping
{
  "mappings": {
    "all": {
      "properties": {
        "file_name": {
          "type": "string",
          "index": "not_analyzed"
        },
        "id": {
          "type": "string",
          "index": "not_analyzed"
        },
        "no_of_colors": {
          "type": "long",
          "index": "not_analyzed"
        },
        "colors": {
          "type": "nested",
          "properties": {
            "H": {"type": "double"},
            "S": {"type": "long"},
            "V": {"type": "long"},
            "color_percent": {"type": "integer"}
          }
        }
      }
    }
  }
}
But I get the following error:
{
  "error": "MapperParsingException[Root type mapping not empty after parsing! Remaining fields: [mappings : {all={properties={file_name={type=string, index=not_analyzed}, id={type=string, index=not_analyzed}, no_of_colors={type=integer, index=not_analyzed}, colors={type=nested, properties={H={type=double}, S={type=long}, V={type=long}, color_percent={type=integer}}}}}}]]",
  "status": 400
}
Any suggestions? Appreciate the help.
You're almost there; you simply need to remove the mappings section, like this:
PUT /test1/all/_mapping
{
  "properties": {
    "file_name": {
      "type": "string",
      "index": "not_analyzed"
    },
    "id": {
      "type": "string",
      "index": "not_analyzed"
    },
    "no_of_colors": {
      "type": "long",
      "index": "not_analyzed"
    },
    "colors": {
      "type": "nested",
      "properties": {
        "H": {
          "type": "double"
        },
        "S": {
          "type": "long"
        },
        "V": {
          "type": "long"
        },
        "color_percent": {
          "type": "integer"
        }
      }
    }
  }
}
However, note that this will not work either, because you cannot change the colors type from object to nested, nor the other string fields from analyzed to not_analyzed. You need to delete your index and re-create it from scratch, as sketched below.
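A minimal sketch of that delete-and-recreate flow, assuming the same index and type names (the DELETE wipes all data in test1, so it has to be reindexed afterwards):
# delete the existing index (this destroys its data)
curl -XDELETE 'localhost:9200/test1'
# recreate it with the desired mapping, wrapped in a top-level "mappings" section
curl -XPUT 'localhost:9200/test1' -d '{
  "mappings": {
    "all": {
      "properties": {
        "file_name": { "type": "string", "index": "not_analyzed" },
        "id": { "type": "string", "index": "not_analyzed" },
        "no_of_colors": { "type": "long", "index": "not_analyzed" },
        "colors": {
          "type": "nested",
          "properties": {
            "H": { "type": "double" },
            "S": { "type": "long" },
            "V": { "type": "long" },
            "color_percent": { "type": "integer" }
          }
        }
      }
    }
  }
}'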

Recreation of a mapping in Elasticsearch

I have created my index on Elasticsearch (and through Kibana as well) and have uploaded data. Now I want to change the mapping for the index and make some fields not_analyzed. Below is the mapping I want to replace the existing one with, but when I run the command below it gives me this error:
{"error":{"root_cause":[{"type":"index_already_exists_exception","reason":"already
exists","index":"rettrmt"}],"type":"index_already_exists_exception","reason":"already
exists","index":"rettrmt"},"status":400}
Kindly help me get this resolved.
curl -XPUT 'http://10.56.139.61:9200/rettrmt' -d '{
  "rettrmt": {
    "aliases": {},
    "mappings": {
      "RETTRMT": {
        "properties": {
          "@timestamp": {
            "type": "date",
            "format": "strict_date_optional_time||epoch_millis"
          },
          "@version": {
            "type": "string"
          },
          "acid": {
            "type": "string"
          },
          "actor_id": {
            "type": "string",
            "index": "not_analyzed"
          },
          "actor_type": {
            "type": "string",
            "index": "not_analyzed"
          },
          "channel_id": {
            "type": "string",
            "index": "not_analyzed"
          },
          "circle": {
            "type": "string",
            "index": "not_analyzed"
          },
          "cr_dr_indicator": {
            "type": "string",
            "index": "not_analyzed"
          },
          "host": {
            "type": "string"
          },
          "message": {
            "type": "string"
          },
          "orig_input_amt": {
            "type": "double"
          },
          "path": {
            "type": "string"
          },
          "r_cre_id": {
            "type": "string"
          },
          "sub_use_case": {
            "type": "string",
            "index": "not_analyzed"
          },
          "tran_amt": {
            "type": "double"
          },
          "tran_id": {
            "type": "string"
          },
          "tran_particulars": {
            "type": "string"
          },
          "tran_particulars_2": {
            "type": "string"
          },
          "tran_remarks": {
            "type": "string"
          },
          "tran_sub_type": {
            "type": "string"
          },
          "tran_timestamp": {
            "type": "date",
            "format": "strict_date_optional_time||epoch_millis"
          },
          "tran_type": {
            "type": "string"
          },
          "type": {
            "type": "string"
          },
          "use_case": {
            "type": "string",
            "index": "not_analyzed"
          }
        }
      }
    },
    "settings": {
      "index": {
        "creation_date": "1457331693603",
        "uuid": "2bR0yOQtSqqVUb8lVE2dUA",
        "number_of_replicas": "1",
        "number_of_shards": "5",
        "version": {
          "created": "2000099"
        }
      }
    },
    "warmers": {}
  }
}'
You first need to delete your index and then recreate it with the proper mapping. Here you're getting an index_already_exists_exception error because you're trying to create an index while the older index still exists, hence the conflict.
Run this first:
curl -XDELETE 'http://10.56.139.61:9200/rettrmt'
And then you can run your command again. Note that this will erase your data, so you will have to repopulate your index.
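After recreating the index, the standard _mapping endpoint can be used to verify that the new mapping took effect:
curl -XGET 'http://10.56.139.61:9200/rettrmt/_mapping?pretty'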
Did you try something like this?
curl -XPUT 'http://10.56.139.61:9200/rettrmt/_mapping/RETTRMT' -d '
{
  "properties": {
    "actor_id": { // or whichever properties you want to add
      "type": "string",
      "index": "not_analyzed"
    }
  }
}'
works for me

QueryParsingException[[mobapp] failed to find geo_point field [location.position]

I created the index using:
curl -XPUT localhost:9200/mobapp -d '{
  "mappings": {
    "publish_messages": {
      "properties": {
        "title": {
          "type": "string"
        },
        "location": {
          "type": "nested",
          "position": {
            "type": "geo_point"
          },
          "name": {
            "type": "string"
          },
          "state": {
            "type": "string"
          },
          "country": {
            "type": "string"
          },
          "city": {
            "type": "integer"
          }
        },
        "time": {
          "type": "date",
          "format": "dd-MM-YYYY"
        }
      }
    }
  }
}'
My index contains this document:
"hits": [
{
"_index": "mobapp",
"_type": "publish_messages",
"_id": "184123e0-6123-11e5-83d5-7bdc2a9aa3c7",
"_score": 1,
"_source": {
"title": "Kolkata rocka",
"tags": [
"Tag5",
"Tag4"
],
"date": "2015-09-22T12:11:46.335Z",
"location": {
"position": {
"lat": 11.81776,
"lon": 10.9376
},
"country": "India",
"locality": "Bengaluru",
"sublocality_level_1": "Koramangala"
}
}
}
]
I am trying to do this query:
FilterBuilder filter = geoDistanceFilter("location")
        .point(lat, lon)
        .distance(distanceRangeInkm, DistanceUnit.KILOMETERS)
        .optimizeBbox("memory")
        .geoDistance(GeoDistance.ARC);
FilterBuilder boolFilter = boolFilter()
        .must(termFilter("tags", tag))
        .must(filter);
GeoDistanceSortBuilder geoSort = SortBuilders.geoDistanceSort("location").point(lat, lon).order(SortOrder.ASC);
SearchResponse searchResponse = client.prepareSearch(AppConstants.ES_INDEX)
        .setTypes("publish_messages")
        .addSort("time", SortOrder.DESC)
        .addSort(geoSort)
        .setSearchType(SearchType.DFS_QUERY_THEN_FETCH)
        .setPostFilter(boolFilter)
        .setFrom(startPage).setSize(AppConstants.DEFAULT_PAGINATION_SIZE)
        .execute()
        .actionGet();
I am getting: QueryParsingException[[mobapp] failed to find geo_point field [location.position]]
If you only want to keep your location data together, you don't need the nested type; simply use a normal object type (i.e. the default), like this:
curl -XPUT localhost:9200/mobapp -d '{
  "mappings": {
    "publish_messages": {
      "properties": {
        "title": {
          "type": "string"
        },
        "location": {
          "type": "object",      <--- use object here
          "properties": {        <--- and don't forget properties here
            "position": {
              "type": "geo_point"
            },
            "name": {
              "type": "string"
            },
            "state": {
              "type": "string"
            },
            "country": {
              "type": "string"
            },
            "city": {
              "type": "integer"
            }
          }
        },
        "time": {
          "type": "date",
          "format": "dd-MM-YYYY"
        }
      }
    }
  }
}'
Note that you first need to wipe out your current index using curl -XDELETE localhost:9200/mobapp and then recreate it with the above command and reindex your data. Your query should work afterwards.
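For reference, once the mapping is fixed, any geo query has to address the field by its full path location.position (the same applies to geoDistanceFilter("location") and geoDistanceSort("location") in the Java code above). A minimal REST sketch using the coordinates from the sample document; the 10km distance is an arbitrary example value:
# geo_distance filter on the full path of the geo_point field
curl -XPOST 'localhost:9200/mobapp/publish_messages/_search?pretty' -d '{
  "query": {
    "bool": {
      "filter": {
        "geo_distance": {
          "distance": "10km",
          "location.position": {
            "lat": 11.81776,
            "lon": 10.9376
          }
        }
      }
    }
  }
}'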

Update Elasticsearch mapping type without deleting it

I have this mapping type on my index:
{
  "iotsens-summarizedmeasures": {
    "mappings": {
      "summarizedmeasure": {
        "properties": {
          "id": {
            "type": "long"
          },
          "location": {
            "type": "boolean"
          },
          "rawValue": {
            "type": "string"
          },
          "sensorId": {
            "type": "string"
          },
          "summaryTimeUnit": {
            "type": "string"
          },
          "timestamp": {
            "type": "date",
            "format": "dateOptionalTime"
          },
          "value": {
            "type": "string"
          },
          "variableName": {
            "type": "string"
          }
        }
      }
    }
  }
}
I want to update the sensorId field to:
"sensorId": {
"type": "string",
"index": "not_analyzed"
}
Is there any way to update the index without deleting and re-mapping it? I don't need to change the type of the field, only to set "index": "not_analyzed".
Thank you.
What you can do is turn your existing sensorId field into a multi-field with a sub-field called raw that is not_analyzed:
curl -XPUT localhost:9200/iotsens-summarizedmeasures/_mapping/summarizedmeasure -d '{
  "summarizedmeasure": {
    "properties": {
      "sensorId": {
        "type": "string",
        "fields": {
          "raw": {
            "type": "string",
            "index": "not_analyzed"
          }
        }
      }
    }
  }
}'
However, you still have to re-index your data to make sure all sensorId.raw sub-fields get created.
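Once reindexed, exact matches can target the new sub-field. A minimal sketch, where the sensor id value is a hypothetical example:
# exact (not_analyzed) match on the raw sub-field; "sensor-42" is a made-up id
curl -XPOST 'localhost:9200/iotsens-summarizedmeasures/summarizedmeasure/_search?pretty' -d '{
  "query": {
    "term": {
      "sensorId.raw": "sensor-42"
    }
  }
}'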
