I have an index in Elasticsearch 6.2.4, for which I created a new index in a new Elasticsearch 7.6.1 cluster.
I copied the mapping for this index from 6.2.4 to 7.6.1, but when I try to _reindex from 6.2.4 to 7.6.1,
I get the error below.
"failures" : [
{
"index" : "newindex",
"type" : "_doc",
"id" : "someid",
"cause" : {
"type" : "mapper_parsing_exception",
"reason" : "failed to parse field [UPDATES.when] of type [date] in document with id 'someid'. Preview of field's value: '1.528501444E9'",
"caused_by" : {
"type" : "illegal_argument_exception",
"reason" : "failed to parse date field [1.528501444E9] with format [epoch_second]",
"caused_by" : {
"type" : "date_time_parse_exception",
"reason" : "Failed to parse with all enclosed parsers"
}
}
},
"status" : 400
}
The _reindex call is made from Kibana on the 7.6.1 cluster:
POST _reindex/?pretty
{
"source": {
"remote": {
"host": "http://oldserver:9200"
},
"index": "oldindex",
"query": {
"match_all": {}
}
},
"dest": {
"index": "newindex"
}
}
The mapping of the UPDATES field is the same in both places:
"UPDATES" : {
"properties" : {
"key" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"when" : {
"type" : "date",
"format" : "epoch_second"
}
What am I missing here?
I guess the value 1.528501444E9 appearing in your date field is a UNIX timestamp written in scientific notation.
Elasticsearch fails because it can't parse 1.528501444E9: as your exception shows, the format you have given for this field is epoch_second, which does not accept scientific notation.
You can read further about date formats here: https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-date-format.html
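One way around this, as a minimal sketch only (it assumes UPDATES is a single object rather than an array, and that when always holds a numeric value), is to normalize the value to whole epoch seconds with a script while reindexing:
POST _reindex
{
"source": {
"remote": {
"host": "http://oldserver:9200"
},
"index": "oldindex"
},
"dest": {
"index": "newindex"
},
"script": {
"lang": "painless",
"source": "if (ctx._source.UPDATES != null && ctx._source.UPDATES.when != null) { ctx._source.UPDATES.when = (long) Double.parseDouble(ctx._source.UPDATES.when.toString()); } // assumes a single UPDATES object with a numeric 'when'"
}
}
Converting during the reindex keeps the epoch_second format on the new index unchanged.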
Related
I am working on Elasticsearch. I want to use a search suggester on multiple indices at a time. I have two indices, tags and pool_tags, each of which has a name field. How do I use a suggester across these two indices when the field is named the same in both?
I tried giving the suggester sub-field a different name in pool_tags (pool_tag_suggest). Here are the mappings:
tags:
{
"tags" : {
"mappings" : {
"properties" : {
"name" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword"
},
"suggest" : {
"type" : "completion",
"analyzer" : "simple",
"preserve_separators" : true,
"preserve_position_increments" : true,
"max_input_length" : 50
}
}
}
}
}
}
}
pool_tags:
{
"pool_tags" : {
"mappings" : {
"properties" : {
"name" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword"
},
"pool_tag_suggest" : {
"type" : "completion",
"analyzer" : "simple",
"preserve_separators" : true,
"preserve_position_increments" : true,
"max_input_length" : 50
}
}
}
}
}
}
}
WHAT I TRIED
POST pool_tags,tags/_search
{
"suggest": {
"tags_suggestor": {
"text": "ww",
"term": {
"field": "name.suggest"
}
},
"pooltags_suggestor": {
"text": "ww",
"term": {
"field": "name.pool_tag_suggest"
}
}
}
}
ERROR
{
"error" : {
"root_cause" : [
{
"type" : "illegal_argument_exception",
"reason" : "no mapping found for field [name.suggest]"
},
{
"type" : "illegal_argument_exception",
"reason" : "no mapping found for field [name.pool_tag_suggest]"
}
],
"type" : "search_phase_execution_exception",
"reason" : "all shards failed",
"phase" : "query",
"grouped" : true,
"failed_shards" : [
{
"shard" : 0,
"index" : "pool_tags",
"node" : "g2rCnS4PQMWyldWABVJawQ",
"reason" : {
"type" : "illegal_argument_exception",
"reason" : "no mapping found for field [name.suggest]"
}
},
{
"shard" : 0,
"index" : "tags",
"node" : "g2rCnS4PQMWyldWABVJawQ",
"reason" : {
"type" : "illegal_argument_exception",
"reason" : "no mapping found for field [name.pool_tag_suggest]"
}
}
],
"caused_by" : {
"type" : "illegal_argument_exception",
"reason" : "no mapping found for field [name.suggest]",
"caused_by" : {
"type" : "illegal_argument_exception",
"reason" : "no mapping found for field [name.suggest]"
}
}
},
"status" : 400
}
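Not an answer from the original thread, just a rough sketch: both sub-fields are mapped as completion fields, so the completion suggester (rather than the term suggester) is the intended way to query them, and each suggest field exists in only one of the two indices, which is what the "no mapping found" errors above point at. Querying each index separately, for example, avoids the cross-index mapping lookup:
POST tags/_search
{
"suggest": {
"tags_suggestor": {
"prefix": "ww",
"completion": {
"field": "name.suggest"
}
}
}
}
POST pool_tags/_search
{
"suggest": {
"pooltags_suggestor": {
"prefix": "ww",
"completion": {
"field": "name.pool_tag_suggest"
}
}
}
}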
I am trying to convert latitude and longitude values to geo_points in Elasticsearch. The problem is that I have already uploaded the latitude and longitude values to Elasticsearch, and I am having trouble converting them in place. I get the feeling there is a solution using Painless, but I haven't quite pinpointed it.
This is what the mapping looks like:
{
"temporary_index" : {
"mappings" : {
"handy" : {
"properties" : {
"CurrentLocationObj" : {
"properties" : {
"lat" : {
"type" : "float"
},
"lon" : {
"type" : "float"
}
}
},
"current_latitude" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"current_longitude" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"location" : {
"type" : "geo_point"
},
}
}
}
}
}
And this is what a sample doc looks like
"hits" : [
{
"_index" : "temporary_index",
"_type" : "handy",
"_id" : "9Q8ijmsBaU9mgS87_blD",
"_score" : 1.0,
"_source" : {
"current_longitude" : "139.7243101",
"current_latitude" : "35.6256271",
"CurrentLocationObj" : {
"lat" : 35.6256271,
"lon" : 139.7243101
}
There are obviously more fields, but I have removed them for the sake of clarity.
This is what I have tried.
POST temporary_index/_update_by_query
{
"query": {
"match_all": {}
},
"script": {
"inline": "ctx._source.location = [ctx._source.current_latitude, ctx._source.current_longitude]",
"lang": "painless"
}
}
However I get the following error:
"reason": "failed to parse field [location] of type [geo_point]",
"caused_by": {
"type": "parse_exception",
"reason": "unsupported symbol [.] in geohash [35.4428348]",
"caused_by": {
"type": "illegal_argument_exception",
"reason": "unsupported symbol [.] in geohash [35.4428348]"
}
}
I have used the following Stack Overflow question as a basis for my solution, but I am clearly doing something wrong. Any advice is helpful! Thanks.
Swapping coordinates of geopoints in elasticsearch index
Great start! You're almost there.
Just note that when specifying a geo_point as an array, the longitude must be the first element and the latitude comes next. However, I suggest you do it like this instead and it will work:
POST temporary_index/_update_by_query
{
"query": {
"match_all": {}
},
"script": {
"inline": "ctx._source.location = ['lat': ctx._source.current_latitude, 'lon': ctx._source.current_longitude]",
"lang": "painless"
}
}
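If you prefer the array form instead, a rough sketch (assuming the string fields always contain clean numeric values) would be to parse them and put the longitude first:
POST temporary_index/_update_by_query
{
"query": {
"match_all": {}
},
"script": {
"inline": "ctx._source.location = [Double.parseDouble(ctx._source.current_longitude), Double.parseDouble(ctx._source.current_latitude)] // lon first, then lat",
"lang": "painless"
}
}
The lat/lon object form above is usually the safer choice, though, since it leaves no room for ordering mistakes.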
I'm trying an example from https://www.elastic.co/blog/data-visualization-elasticsearch-aggregations
When I try to create indices and upload data, I get the following error:
rolf@PE:~/nfl/scripts/Elasticsearch-datasets-master/mappings$ curl -XPUT localhost:9200/nfl?pretty
{
"acknowledged" : true,
"shards_acknowledged" : true,
"index" : "nfl"
}
rolf@PE:~/nfl/scripts/Elasticsearch-datasets-master/mappings$ curl -XPUT localhost:9200/nfl/2013/_mapping?pretty -d @nfl_mapping.json
{
"error" : {
"root_cause" : [
{
"type" : "mapper_parsing_exception",
"reason" : "_index is not configurable"
}
],
"type" : "mapper_parsing_exception",
"reason" : "_index is not configurable"
},
"status" : 400
}
The start of the mapping file is as follows:
{
"2013" : {
"_index" : {
"enabled" : true
},
"_id" : {
"index" : "not_analyzed",
"store" : "yes"
},
"properties" : {
"gameid" : {
"type" : "string",
"index" : "not_analyzed",
"store" : "yes"
}, ...
Appreciate some hints. Thanks.
You're probably using a recent version of ES, while the nfl_mapping.json mapping was written for an older version. In recent versions, it is no longer possible to specify _index and _id in your mapping. Change it to this and it will work:
{
"2013" : {
"properties" : {
"gameid" : {
"type" : "keyword"
}, ...
Also change all occurrences of string to text, and string + not_analyzed to keyword.
After that you should be good to go.
Also note that "index" : "not_analyzed" is no longer supported.
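One more aside, and purely an assumption following from the "recent version of ES" guess above: since 6.0 Elasticsearch also rejects requests that do not declare their content type, so if the mapping upload complains about the content type, add the header explicitly (this sketch still assumes a cluster that accepts the 2013 mapping type):
curl -XPUT 'localhost:9200/nfl/2013/_mapping?pretty' -H 'Content-Type: application/json' -d @nfl_mapping.json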
I am trying to stream logs from Logstash to Elasticsearch (5.5.0). I am using Filebeat to send logs to Logstash.
I have not defined any index; it is defined automatically (say "test1") when data is pushed for the first time.
Now, I want to create another index ("test2") so that I can manage field data types. For that, I retrieved the mappings for test1, updated the index name, and made a PUT call for test2 with that data. However, it fails with the following result:
ubuntu@elasticsearch:~$ curl -XPUT 'localhost:9200/test2?pretty' -H 'Content-Type: application/json' -d'@/tmp/mappings_test.json'
{
"error" : {
"root_cause" : [
{
"type" : "illegal_argument_exception",
"reason" : "unknown setting [index.test2.mappings.log.properties.#timestamp.type] please check that any required plugins are installed, or check the breaking changes documentation for removed settings"
}
],
"type" : "illegal_argument_exception",
"reason" : "unknown setting [index.test2.mappings.log.properties.#timestamp.type] please check that any required plugins are installed, or check the breaking changes documentation for removed settings"
},
"status" : 400
}
The following is an excerpt of the JSON I am using:
{
"test2" : {
"mappings" : {
"log" : {
"properties" : {
"#timestamp" : {
"type" : "date"
},
"#version" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"accept_date" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
....
I modified the index name only; the rest of the content is the same as the mapping of the test1 index.
Any help on how to create this new index with updated types is appreciated.
You need to remove test2 on the second line and have only mappings:
PUT test2
{
"mappings" : { <---- this needs to be at the top level
"log" : {
"properties" : {
"#timestamp" : {
"type" : "date"
},
"#version" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"accept_date" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
....
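For reference, a minimal sketch of the same request via curl, with only the @timestamp property shown (the rest of your fields go under properties in the same way):
curl -XPUT 'localhost:9200/test2?pretty' -H 'Content-Type: application/json' -d '
{
"mappings" : {
"log" : {
"properties" : {
"@timestamp" : { "type" : "date" }
}
}
}
}'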
My error log in Elasticsearch looks like this:
[2015-09-04 10:59:49,531][DEBUG][action.bulk ] [baichebao-node-2] [questions][0] failed to execute bulk item (index) index {[questions][baichebao][AU-WS7qZwHwGnxdqIztg], source[_na_]}
org.elasticsearch.index.mapper.MapperParsingException: failed to parse
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:565)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:493)
at org.elasticsearch.index.shard.IndexShard.prepareCreate(IndexShard.java:466)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:418)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:148)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.performOnPrimary(TransportShardReplicationOperationAction.java:574)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase$1.doRun(TransportShardReplicationOperationAction.java:440)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.ElasticsearchParseException: Failed to derive xcontent
at org.elasticsearch.common.xcontent.XContentFactory.xContent(XContentFactory.java:195)
at org.elasticsearch.common.xcontent.XContentHelper.createParser(XContentHelper.java:75)
at org.elasticsearch.common.xcontent.XContentHelper.createParser(XContentHelper.java:53)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:507)
... 10 more
And my mapping looks like this:
{
"mappings" : {
"baichebao" : {
"dynamic" : false,
"_all" : { "enable" : false },
"_id" : {
"store" : true,
"path" : "id"
},
"properties" : {
"id" : {
"type" : "long"
},
"content" : {
"type" : "string",
"analyzer" : "ik_syno_smart"
},
"uid" : {
"type" : "integer"
},
"all_answer_count" : {
"type" : "integer"
},
"answer_users" : {
"type" : "integer"
},
"best_answer" : {
"type" : "long"
},
"status" : {
"type" : "short"
},
"created_at" : {
"type" : "long"
},
"distrust" : {
"type" : "short"
},
"is_expert" : {
"type" : "boolean"
},
"series_id" : {
"type" : "integer"
},
"is_closed" : {
"type" : "boolean"
},
"closed_at" : {
"type" : "long"
},
"tags" : {
"type" : "string"
},
"channel_type" : {
"type" : "integer"
},
"channel_sub_type" : {
"type" : "integer"
}
}
}
}
}
But I cannot find out which field failed to parse.
How can I resolve this problem?
This error typically indicates that the document sent to Elasticsearch cannot be identified as a JSON or SMILE document by checking its first 20 bytes. For example, you would get this error if you omit the leading "{" in a JSON document:
curl -XPUT localhost:9200/test/doc/1 -d 'I am not a json document'
or prepend valid JSON with 20+ whitespace characters:
curl -XPUT localhost:9200/test/doc/1 -d ' {"foo": "bar"}'
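By contrast, the same request goes through once the body is valid JSON starting at the first byte (same older single-type endpoint as above):
curl -XPUT localhost:9200/test/doc/1 -d '{"foo": "bar"}'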