Query shard exception error - elasticsearch

I’m using Elasticsearch, but I got this error:
error": ("root_cause": ('type" "query _shard _exception" "reason" wildcard queries on keyword, text and wildcard fields - not or
is of type [longl", " index _uuid", "hg_9QPN
IA", "index" "products")I "type" "search_phase _execution_excepti
shards failed" "phase" "query" "grouped": true,
K"shard": •, "index" "products"," node" "yFbQAuUSQHKL&AEUcj {'type" "query_shard _exception", "reason" "Can only use wild keyword, text and wildcard fields - not on [erp_id] v
[long]" "index _uuid".»hg_9QPN
{IA", "index" "products"
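The message means the search contains a wildcard query against erp_id, which is mapped as long; wildcard queries only work on keyword, text and wildcard fields. A minimal sketch of a workaround (the value 12345 is a made-up example): use an exact-match term query, which does work on numeric fields.
curl -X GET "localhost:9200/products/_search?pretty" -H 'Content-Type: application/json' -d'
{
  "query": {
    "term": {
      "erp_id": 12345
    }
  }
}'
If partial matching on erp_id is really needed, the alternative is to reindex with the field mapped as keyword.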

Related

QUERY ELASTICSEARCH (illegal_argument_exception)

I want to make a query in Elasticsearch that returns the duplicates, but the query returns a 400 error telling me to set fielddata=true.
I currently have a query:
{
"query": {
"match_all": {}
},
"aggs": {
"duplicateCount": {
"terms": {
"field": "hash_code_document",
"min_doc_count": 2
}
}
}
}
But when I run the query I get this 400 error:
{
"error" : {
"root_cause" : [
{
"type" : "illegal_argument_exception",
"reason" : "Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [hash_code_document] in order to load field data by uninverting the inverted index. Note that this can use significant memory."
}
],
"type" : "search_phase_execution_exception",
"reason" : "all shards failed",
"phase" : "query",
"grouped" : true,
"failed_shards" : [
{
"shard" : 0,
"index" : "curriculo-19",
"node" : "QOzYVehEQhezjq1TWxYvAA",
"reason" : {
"type" : "illegal_argument_exception",
"reason" : "Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [hash_code_document] in order to load field data by uninverting the inverted index. Note that this can use significant memory."
}
}
],
"caused_by" : {
"type" : "illegal_argument_exception",
"reason" : "Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [hash_code_document] in order to load field data by uninverting the inverted index. Note that this can use significant memory.",
"caused_by" : {
"type" : "illegal_argument_exception",
"reason" : "Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [hash_code_document] in order to load field data by uninverting the inverted index. Note that this can use significant memory."
}
}
},
"status" : 400
}
Do I need to change the mapping to make the query?
It looks like your field is defined as "text" in the mapping. Elasticsearch doesn't allow aggregations on this type of field by default.
You need to change the field type in the mapping to "keyword", like:
"mappings": {
"properties": {
"hash_code_document":{
"type": "keyword"
}
}
}
Or, if you already have a multi-field like hash_code_document.keyword, you can use it for the aggregation, as in the sketch below.
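A minimal sketch of the same aggregation pointed at the keyword sub-field (assuming the default dynamic mapping created hash_code_document.keyword):
{
  "query": {
    "match_all": {}
  },
  "aggs": {
    "duplicateCount": {
      "terms": {
        "field": "hash_code_document.keyword",
        "min_doc_count": 2
      }
    }
  }
}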

How to update index mapping to change a field's data type

I don't know how my field mapping changed from keyword to text, but it is now an issue and I need to change it back from text to keyword.
I have a huge amount of data, so re-indexing will take 2 to 3 days. We are looking for a way to update the index mapping instead, so that the issue is resolved.
In our lower environment the field is still keyword; in prod it has changed.
We are using AWS Elasticsearch 7.1.
Please help.
This is what we want:
{
"mappings":{
"properties":{
"objectID":{
"type":"keyword"
}
}
}
}
But this gives us the error:
"type": "resource_already_exists_exception",
This is our search query.
Finally, we have upgraded our ES cluster, so could that be the root cause of the issue?
It's a dynamic mapping for this field.
As stated in the question, the field now has a dynamic mapping. This means that your current index mapping for the objectId field is of text type, with a keyword multi-field:
{
"<index-name>" : {
"mappings" : {
"properties" : {
"objectId" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
}
}
}
}
}
You cannot change the mapping of the objectId field from text to keyword using the update mapping API. If you try to do so, you will get the error below:
{
"error" : {
"root_cause" : [
{
"type" : "illegal_argument_exception",
"reason" : "mapper [name] cannot be changed from type [text] to [keyword]"
}
],
"type" : "illegal_argument_exception",
"reason" : "mapper [objectId] cannot be changed from type [text] to [keyword]"
},
"status" : 400
}
So instead you can use the objectId.keyword field (already created by dynamic mapping, as stated in the question above), or you need to use the reindex API.
With the reindex API, you create a new index with the required mapping and then reindex the old data into the new index (based on the new mapping), as sketched below.
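A minimal sketch of the reindex route, with <old-index> and <new-index> as placeholder names:
curl -X PUT "localhost:9200/<new-index>?pretty" -H 'Content-Type: application/json' -d'
{
  "mappings": {
    "properties": {
      "objectId": {
        "type": "keyword"
      }
    }
  }
}'

curl -X POST "localhost:9200/_reindex?pretty" -H 'Content-Type: application/json' -d'
{
  "source": {
    "index": "<old-index>"
  },
  "dest": {
    "index": "<new-index>"
  }
}'
Once the reindex finishes, you can delete the old index and, if needed, add an alias with the old name so existing queries keep working.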

Unable to list snapshots in Elasticsearch repository

I have an Elasticsearch snapshot repository configured to a folder on an NFS share, and I'm unable to list the snapshots because of a strange error:
curl -X GET "10.0.1.159:9200/_cat/snapshots/temp_elastic_backup?v&s=id&pretty"
{
"error" : {
"root_cause" : [
{
"type" : "parse_exception",
"reason" : "start object expected"
}
],
"type" : "parse_exception",
"reason" : "start object expected"
},
"status" : 400
}

auditbeat failure in ELK : index_not_found_exception

I followed the guidelines to install auditbeat in ELK to send my auditd logs to ELK, but unfortunately I just can't seem to be able to make it work. I checked my config files multiple times, and I just can't wrap my head around it. When I look up the index "auditbeat-*" in Kibana, it finds no results at all.
When I check the state of the module itself, I get:
curl localhost:9200/auditbeat-6.2.1-2018.02.14/_search?pretty
{
"error" : {
"root_cause" : [ {
"type" : "index_not_found_exception",
"reason" : "no such index",
"resource.type" : "index_or_alias",
"resource.id" : "auditbeat-6.2.1-2018.02.14",
"index" : "auditbeat-6.2.1-2018.02.14"
} ],
"type" : "index_not_found_exception",
"reason" : "no such index",
"resource.type" : "index_or_alias",
"resource.id" : "auditbeat-6.2.1-2018.02.14",
"index" : "auditbeat-6.2.1-2018.02.14"
},
"status" : 404
}
so I am not sure where to take it from there. I tried sending the events via both Elasticsearch and Logstash, but I keep getting the same result no matter what.
Thanks,
It turns out this was happening because the port was bound to address 127.0.0.1 instead of 0.0.0.0.
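For reference, a minimal sketch of the relevant setting (assuming the bind address was the problem on the Elasticsearch side; the file path is the usual package location):
# /etc/elasticsearch/elasticsearch.yml
# Bind to all interfaces instead of loopback only, so that
# Beats running on other hosts can reach the HTTP port.
network.host: 0.0.0.0
Note that binding to a non-loopback address puts Elasticsearch into production mode, so its bootstrap checks must pass on startup.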

elasticsearch field mapping affects fields across different types in the same index

I was told that "Every type has its own mapping, or schema definition" at the official guide.
But what I've actually seen is that a mapping can affect other types within the same index. Here is the situation:
Mapping definition:
[root@localhost agent]# curl localhost:9200/agent*/_mapping?pretty
{
"agent_data" : {
"mappings" : {
"host" : {
"_all" : {
"enabled" : false
},
"properties" : {
"ip" : {
"type" : "ip"
},
"node" : {
"type" : "string",
"index" : "not_analyzed"
}
}
},
"vul" : {
"_all" : {
"enabled" : false
}
}
}
}
}
and then I index a record:
[root@localhost agent]# curl -XPOST 'http://localhost:9200/agent_data/vul?pretty' -d '{"ip": "1.1.1.1"}'
{
"error" : {
"root_cause" : [ {
"type" : "mapper_parsing_exception",
"reason" : "failed to parse [ip]"
} ],
"type" : "mapper_parsing_exception",
"reason" : "failed to parse [ip]",
"caused_by" : {
"type" : "number_format_exception",
"reason" : "For input string: \"1.1.1.1\""
}
},
"status" : 400
}
It seems that it tries to parse the ip as a number, so I put a number in this field:
[root@localhost agent]# curl -XPOST 'http://localhost:9200/agent_data/vul?pretty' -d '{"ip": "1123"}'
{
"error" : {
"root_cause" : [ {
"type" : "remote_transport_exception",
"reason" : "[Argus][127.0.0.1:9300][indices:data/write/index[p]]"
} ],
"type" : "illegal_argument_exception",
"reason" : "mapper [ip] cannot be changed from type [ip] to [long]"
},
"status" : 400
}
This problem goes away if I explicitly define the ip field of the vul type with the ip field-type.
I don't quite understand the behavior above. Am I missing something?
Thanks in advance.
The statement
Every type has its own mapping, or schema definition
is true, but it is not the complete picture.
There can be conflicts between different types that share a field within one index.
Mapping - field conflicts
Mapping types are used to group fields, but the fields in each
mapping type are not independent of each other. Fields with:
the same name
in the same index
in different mapping types
map to the same field internally, and must have the same mapping. If a
title field exists in both the user and blogpost mapping types, the
title fields must have exactly the same mapping in each type. The only
exceptions to this rule are the copy_to, dynamic, enabled,
ignore_above, include_in_all, and properties parameters, which may
have different settings per field.
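A minimal sketch of the fix mentioned in the question, using the pre-5.x mapping API with types (the version implied by the string type and not_analyzed settings in the mapping above): declare ip explicitly on the vul type so it matches the host type's mapping, before indexing any documents into vul.
curl -XPUT 'http://localhost:9200/agent_data/_mapping/vul' -d '{
  "properties": {
    "ip": {
      "type": "ip"
    }
  }
}'
Without this, the first document indexed into vul triggers dynamic mapping, which guesses a conflicting type for ip and fails against the existing ip mapping of the host type.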
