How to handle empty field names in Elasticsearch?

I'd like to log users' input to my RESTful API for debugging purposes, but whenever there's an empty field name in the JSON payload, an error is generated and the log is discarded.
For instance,
{
  "extra": {
    "request": {
      "body": {
        "": ""
      }
    }
  }
}
...will result in
{
  "error": {
    "root_cause": [
      {
        "type": "mapper_parsing_exception",
        "reason": "failed to parse"
      }
    ],
    "type": "mapper_parsing_exception",
    "reason": "failed to parse",
    "caused_by": {
      "type": "illegal_argument_exception",
      "reason": "field name cannot be an empty string"
    }
  },
  "status": 400
}
It seems to be caused by https://github.com/elastic/elasticsearch/blob/45e7e24736eeb4a157ac89bd16a374dbf917ae26/server/src/main/java/org/elasticsearch/index/mapper/DocumentParser.java#L191.
It's a bit tricky since it happens in the parsing phase... Is there any workaround to remove or rename such fields so that Elasticsearch can digest these logs?
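Since the rejection happens inside Elasticsearch's own document parser, one workaround is to sanitize the payload on the client before indexing. Below is a minimal sketch in Python; the rename_empty_keys helper and the "_empty_" placeholder are my own naming, not anything Elasticsearch provides:

def rename_empty_keys(value, placeholder="_empty_"):
    """Recursively replace empty-string keys so the mapper accepts the doc."""
    if isinstance(value, dict):
        return {
            (key if key else placeholder): rename_empty_keys(v, placeholder)
            for key, v in value.items()
        }
    if isinstance(value, list):
        return [rename_empty_keys(item, placeholder) for item in value]
    return value

doc = {"extra": {"request": {"body": {"": ""}}}}
print(rename_empty_keys(doc))
# {'extra': {'request': {'body': {'_empty_': ''}}}}

This keeps the offending data (the empty key is renamed rather than dropped) and sidesteps the mapper check entirely.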

Related

mapper_parsing_exception while importing a dashboard to Kibana

When I import a dashboard from Kibana 7.5.1 into Kibana 7.4.1, it fails with the error "the file could not be processed". I called the Kibana import objects API from the Dev Tools console instead, and it provided the following response:
POST /api/saved_objects/_import
{
  "file": "C:\Users\dashboards-kibana\EKC-Dashboard-Prod.ndjson"
}

{
  "error": {
    "root_cause": [
      {
        "type": "mapper_parsing_exception",
        "reason": "failed to parse"
      }
    ],
    "type": "mapper_parsing_exception",
    "reason": "failed to parse",
    "caused_by": {
      "type": "json_parse_exception",
      "reason": "Unrecognized character escape 'U' (code 85)\n at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper@33da7471; line: 2, column: 17]"
    }
  },
  "status": 400
}
This is my first time working with Kibana. Is there any way to solve the import error?
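Two things stand out here. First, \U in "C:\Users\..." is not a valid JSON escape, which is exactly what the json_parse_exception complains about. Second, the saved objects import API expects the file itself as a multipart/form-data upload, not a JSON body containing a path. A minimal sketch in Python with requests; the host and port are assumptions for illustration:

import requests

# The import API takes the .ndjson file as a multipart upload and
# requires the kbn-xsrf header on every write request to Kibana.
with open(r"C:\Users\dashboards-kibana\EKC-Dashboard-Prod.ndjson", "rb") as f:
    resp = requests.post(
        "http://localhost:5601/api/saved_objects/_import",
        headers={"kbn-xsrf": "true"},
        files={"file": ("EKC-Dashboard-Prod.ndjson", f)},
    )
print(resp.status_code, resp.json())

Also note that 7.5.1 to 7.4.1 is a downgrade; Kibana only migrates saved objects forward, so the import may still fail even once the upload itself is correct.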

Elasticsearch Azure plugin issue

I am trying to snapshot index data from on-prem Elasticsearch to Azure and keep getting the following exception:
{
  "error": {
    "root_cause": [
      {
        "type": "repository_verification_exception",
        "reason": "[qarepository] path is not accessible on master node"
      }
    ],
    "type": "repository_verification_exception",
    "reason": "[qarepository] path is not accessible on master node",
    "caused_by": {
      "type": "i_o_exception",
      "reason": "Can not write blob master.dat",
      "caused_by": {
        "type": "storage_exception",
        "reason": "storage_exception: ",
        "caused_by": {
          "type": "i_o_exception",
          "reason": "qaonpremesindex.blob.core.windows.net"
        }
      }
    }
  },
  "status": 500
}
Steps followed:

1. Created a storage account in Azure
2. Created a blob container
3. Added the keystore values (account name & key)
4. Created the repository:

PUT _snapshot/qarepository
{
  "type": "azure"
}
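The innermost i_o_exception whose reason is just the endpoint hostname usually means the master node cannot resolve or reach qaonpremesindex.blob.core.windows.net (a DNS, proxy, or firewall problem) rather than a credentials problem. Below is a quick, generic connectivity probe to run on the machine hosting that node; this is plain Python, not anything the plugin provides:

import socket
import ssl

host = "qaonpremesindex.blob.core.windows.net"  # endpoint from the error

# 1. Can this machine resolve the storage endpoint at all?
addrs = socket.getaddrinfo(host, 443)
print("resolved:", sorted({a[4][0] for a in addrs}))

# 2. Can it complete a TLS handshake on port 443?
ctx = ssl.create_default_context()
with socket.create_connection((host, 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print("TLS OK:", tls.version())

Also confirm the keystore entries were added on every node (not just one) and that the nodes were restarted, or the secure settings reloaded, before the repository was created.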

JSON array with strings and objects inside: how to set the mapping in Elasticsearch

Here are two JSON documents:
JSON 1:
{
  "organization": [
    "Univ Philippines",
    {
      "pref": "Y",
      "content": "University of the Philippines System"
    },
    {
      "pref": "Y",
      "content": "University of the Philippines Diliman"
    }
  ]
}

JSON 2:

{
  "organization": "Univ Philippines"
}
I need to index them into Elasticsearch. How should I set the organization field mapping?
I have tried both the string and object types, but both failed.
PUT sci_test
{
  "mappings": {
    "sci": {
      "properties": {
        "organization": {
          "type": "object"
        }
      }
    }
  }
}

PUT sci_test/sci/1
{
  "organization": [
    "Univ Philippines",
    {
      "pref": "Y",
      "content": "University of the Philippines System"
    },
    {
      "pref": "Y",
      "content": "University of the Philippines Diliman"
    }
  ]
}
Error info:

{
  "error": {
    "root_cause": [
      {
        "type": "mapper_parsing_exception",
        "reason": "object mapping for [organization] tried to parse field [null] as object, but found a concrete value"
      }
    ],
    "type": "mapper_parsing_exception",
    "reason": "object mapping for [organization] tried to parse field [null] as object, but found a concrete value"
  },
  "status": 400
}
All the elements of the array must be of the same type; you cannot mix strings with objects:

"Univ Philippines",  --> text
{                    --> object
  "pref": "Y",
  "content": "University of the Philippines System"
}

You need to turn "Univ Philippines" into an object as well, e.g. "university": "Univ Philippines" (wrap it under some key such as "university").
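If you cannot change whatever produces these documents, another option is to normalize them on the client before indexing, so that every array element is an object. A minimal sketch in Python; wrapping bare strings under a "content" key is my own choice here:

def normalize_organization(doc):
    """Wrap bare strings so every element of `organization` is an object."""
    org = doc.get("organization")
    if org is None:
        return doc
    if not isinstance(org, list):
        org = [org]  # JSON 2 has a single bare string
    doc["organization"] = [
        item if isinstance(item, dict) else {"content": item}
        for item in org
    ]
    return doc

print(normalize_organization({"organization": "Univ Philippines"}))
# {'organization': [{'content': 'Univ Philippines'}]}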

Elasticsearch reply for expired scroll context

When using the Elasticsearch scroll API to retrieve query results with many matches, you must provide a scroll timeout. Elasticsearch does not guarantee to keep the scroll context alive beyond that timeout (scrolls are processed as a kind of "session" that Elasticsearch remembers).
But what happens if you ask Elasticsearch for another "page" after that time-out expires? What response do you get from Elasticsearch? Does it have a distinctive HTTP status code? Or distinctive fields in a JSON response body?
The response status code is 404. You also get an error message explaining what happened.
{
  "error": {
    "caused_by": {
      "reason": "No search context found for id [35544152]",
      "type": "search_context_missing_exception"
    },
    "failed_shards": [
      {
        "index": null,
        "reason": {
          "reason": "No search context found for id [35544152]",
          "type": "search_context_missing_exception"
        },
        "shard": -1
      }
    ],
    "grouped": true,
    "phase": "query",
    "reason": "all shards failed",
    "root_cause": [
      {
        "reason": "No search context found for id [35544152]",
        "type": "search_context_missing_exception"
      }
    ],
    "type": "search_phase_execution_exception"
  },
  "status": 404
}
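In practice a client can detect the expired context by checking for the 404 status (or the search_context_missing_exception type in the body) and restarting the search. A minimal sketch in Python with requests; the host, index, and page size are placeholders:

import requests

ES = "http://localhost:9200"  # placeholder host

def scroll_all(index, query, keep_alive="1m"):
    """Yield every hit, surfacing an expired scroll context as an error."""
    resp = requests.post(f"{ES}/{index}/_search",
                         params={"scroll": keep_alive},
                         json={"size": 500, "query": query})
    while True:
        if resp.status_code == 404:
            # Body carries search_phase_execution_exception with
            # search_context_missing_exception as the root cause.
            raise RuntimeError("scroll expired, restart the search")
        body = resp.json()
        hits = body["hits"]["hits"]
        if not hits:
            return
        yield from hits
        resp = requests.post(f"{ES}/_search/scroll",
                             json={"scroll": keep_alive,
                                   "scroll_id": body["_scroll_id"]})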

Elasticsearch: script sometimes works OK, sometimes throws an exception

My Elasticsearch script sometimes works OK and sometimes throws an exception, such as:
{
  "error": {
    "root_cause": [
      {
        "type": "remote_transport_exception",
        "reason": "[es77][ip:9300] [indices:data/write/update[s]]"
      }
    ],
    "type": "illegal_argument_exception",
    "reason": "failed to execute script",
    "caused_by": {
      "type": "script_exception",
      "reason": "failed to run inline script [newArray = [];ctx._source.CILastCallResultRemark?.each{ obj->if(obj.id!=item.id){newArray=newArray+obj} }; (ctx._source.CILastCallResultRemark=newArray+item)] using lang [groovy]",
      "caused_by": {
        "type": "no_class_def_found_error",
        "reason": "sun/reflect/MethodAccessorImpl",
        "caused_by": {
          "type": "class_not_found_exception",
          "reason": "sun.reflect.MethodAccessorImpl"
        }
      }
    }
  },
  "status": 400
}
Here is the script:
{
  "script": {
    "inline": "newArray = [];ctx._source.CILastCallResultRemark?.each{ obj->if(obj.id!=item.id){newArray=newArray+obj}};(ctx._source.CILastCallResultRemark=newArray+item)",
    "params": {
      "item": {
        "id": "2",
        "remart": "x1"
      }
    }
  }
}
And here is the ES log:
Caused by: ScriptException[failed to run inline script [newArray = [];ctx._source.CILastCallResultRemark?.each{ obj->if(obj.id!=item.id){newArray=newArray+obj}};(ctx._source.CILastCallResultRemark=newArray+item)] using lang [groovy]]; nested: NoClassDefFoundError[sun/reflect/MethodAccessorImpl]; nested: ClassNotFoundException[sun.reflect.MethodAccessorImpl];
at org.elasticsearch.script.groovy.GroovyScriptEngineService$GroovyScript.run(GroovyScriptEngineService.java:318)
at org.elasticsearch.action.update.UpdateHelper.executeScript(UpdateHelper.java:251)
... 12 more
Caused by: java.lang.NoClassDefFoundError: sun/reflect/MethodAccessorImpl
I see the bug. I will update the ES version and try.
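For reference, this NoClassDefFoundError comes from the dynamic Groovy scripting in older Elasticsearch releases, so upgrading is the right call. On 5.x and later, where Painless replaced Groovy as the default script language, the same update can be written roughly as below; the host, index name, and document id are placeholders, and the _update/<id> URL shape follows 7.x:

import requests

# Remove any existing entry with the same id, then append the new item.
script = """
if (ctx._source.CILastCallResultRemark == null) {
  ctx._source.CILastCallResultRemark = [];
}
ctx._source.CILastCallResultRemark.removeIf(o -> o.id == params.item.id);
ctx._source.CILastCallResultRemark.add(params.item);
"""

resp = requests.post(
    "http://localhost:9200/my_index/_update/1",
    json={
        "script": {
            "lang": "painless",
            "source": script,
            # params copied verbatim from the question
            "params": {"item": {"id": "2", "remart": "x1"}},
        }
    },
)
print(resp.json())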
