Elasticsearch Azure Plugin Issue - elasticsearch

I am troubleshooting an issue with the Elasticsearch Azure repository plugin:
I am trying to snapshot index data from on-premises to Azure and keep getting the following exception:
{
  "error": {
    "root_cause": [
      {
        "type": "repository_verification_exception",
        "reason": "[qarepository] path is not accessible on master node"
      }
    ],
    "type": "repository_verification_exception",
    "reason": "[qarepository] path is not accessible on master node",
    "caused_by": {
      "type": "i_o_exception",
      "reason": "Can not write blob master.dat",
      "caused_by": {
        "type": "storage_exception",
        "reason": "storage_exception: ",
        "caused_by": {
          "type": "i_o_exception",
          "reason": "qaonpremesindex.blob.core.windows.net"
        }
      }
    }
  },
  "status": 500
}
Steps followed:
Created a storage account in Azure
Created a blob container
Added the keystore values (account name & key; see the sketch below)
PUT _snapshot/qarepository
{
  "type": "azure"
}
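For reference, a minimal sketch of the keystore and repository setup, assuming the repository-azure plugin is installed; the container name "qa-snapshots" and base_path "onprem" below are placeholders, not values from the original setup:

bin/elasticsearch-keystore add azure.client.default.account
bin/elasticsearch-keystore add azure.client.default.key

POST _nodes/reload_secure_settings

PUT _snapshot/qarepository
{
  "type": "azure",
  "settings": {
    "container": "qa-snapshots",
    "base_path": "onprem"
  }
}

On older versions that do not support _nodes/reload_secure_settings, the nodes need a restart after changing the keystore. Separately, the innermost i_o_exception naming qaonpremesindex.blob.core.windows.net suggests the cluster may simply be unable to resolve or reach the storage account endpoint, which is worth checking from the master node independently of the repository settings.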

Related

mapper_parsing_exception while importing a dashboard to Kibana

When I import a dashboard exported from Kibana 7.5.1 into Kibana 7.4.1, it fails with the error "The file could not be processed". I called the Kibana saved objects import API from the Kibana Dev Tools console instead, and it returned the following response:
POST /api/saved_objects/_import
{
  "file" : "C:\Users\dashboards-kibana\EKC-Dashboard-Prod.ndjson"
}
{
  "error": {
    "root_cause": [
      {
        "type": "mapper_parsing_exception",
        "reason": "failed to parse"
      }
    ],
    "type": "mapper_parsing_exception",
    "reason": "failed to parse",
    "caused_by": {
      "type": "json_parse_exception",
      "reason": "Unrecognized character escape 'U' (code 85)\n at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper#33da7471; line: 2, column: 17]"
    }
  },
  "status": 400
}
This is my first time working with Kibana. Is there any way to solve the import error?
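The json_parse_exception about the character escape 'U' is most likely caused by the unescaped backslashes in the Windows path (\Users) being read as JSON escape sequences. Beyond that, as far as I know the saved objects import API expects the file itself to be uploaded as multipart/form-data rather than as a path inside a JSON body, so it cannot be called meaningfully from the Dev Tools console. A sketch of the same import done with curl against Kibana directly, assuming Kibana runs on localhost:5601:

curl -X POST "http://localhost:5601/api/saved_objects/_import" \
  -H "kbn-xsrf: true" \
  --form file=@"C:\Users\dashboards-kibana\EKC-Dashboard-Prod.ndjson"

Note that importing objects exported from a newer Kibana (7.5.1) into an older one (7.4.1) is generally not supported, so the import may still be rejected even once the upload itself is correct.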

Elasticsearch snapshots to IBM Object Store using `repository-s3`

I am trying to set up an S3 repository backed by IBM Object Store.
In elasticsearch.yml I have configured:
s3.client.default.endpoint: "s3.ap.cloud-object-storage.appdomain.cloud"
And for the credentials I used:
apikey as the Access Key ID
resource_instance_id as the Secret Access Key
Error Log:
{
  "error": {
    "root_cause": [
      {
        "type": "repository_verification_exception",
        "reason": "[ibmdemo] path is not accessible on master node"
      }
    ],
    "type": "repository_verification_exception",
    "reason": "[ibmdemo] path is not accessible on master node",
    "caused_by": {
      "type": "i_o_exception",
      "reason": "Unable to upload object [tests-L6P1BQdzSl2MXXX8O-Mm4w/master.dat] using a single upload",
      "caused_by": {
        "type": "amazon_s3_exception",
        "reason": "The AWS Access Key ID you provided does not exist in our records. (Service: Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId; Request ID: dfcf5efb-cf35-4d19-b4b6-17cXXXX03a54; S3 Extended Request ID: null)"
      }
    }
  },
  "status": 500
}
Let me know if there are other details needed.
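The InvalidAccessKeyId response indicates the endpoint does not recognise the credentials, which is expected here: as far as I understand, the repository-s3 plugin needs HMAC-style credentials (an access key ID and secret access key generated for the COS service instance), not the IBM apikey and resource_instance_id, and they belong in the Elasticsearch keystore rather than elasticsearch.yml. A minimal sketch, assuming HMAC credentials exist and using a placeholder bucket name:

bin/elasticsearch-keystore add s3.client.default.access_key
bin/elasticsearch-keystore add s3.client.default.secret_key

POST _nodes/reload_secure_settings

PUT _snapshot/ibmdemo
{
  "type": "s3",
  "settings": {
    "bucket": "my-cos-bucket"
  }
}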

How to handle empty field names in Elasticsearch?

I'd like to log users' input to my RESTful API for debugging purposes, but whenever there is an empty field name in the JSON payload, an error is generated and the log entry is discarded.
For instance,
{
  "extra": {
    "request": {
      "body": {
        "": ""
      }
    }
  }
}
...will result in
{
  "error": {
    "root_cause": [
      {
        "type": "mapper_parsing_exception",
        "reason": "failed to parse"
      }
    ],
    "type": "mapper_parsing_exception",
    "reason": "failed to parse",
    "caused_by": {
      "type": "illegal_argument_exception",
      "reason": "field name cannot be an empty string"
    }
  },
  "status": 400
}
It seems to be caused by https://github.com/elastic/elasticsearch/blob/45e7e24736eeb4a157ac89bd16a374dbf917ae26/server/src/main/java/org/elasticsearch/index/mapper/DocumentParser.java#L191.
It's a bit tricky since it happens in the parsing phase. Is there any workaround to remove or rename such fields so that Elasticsearch can ingest these logs?
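One possible workaround, sketched here but not verified against this exact case, is to strip the offending key in an ingest pipeline before the document reaches the mapper: the ingest phase works on the parsed source map, where an empty key is still legal, and runs before the mapping-level parsing that rejects it. The pipeline name and the assumption that the empty key only appears under extra.request.body are mine:

PUT _ingest/pipeline/strip-empty-keys
{
  "processors": [
    {
      "script": {
        "lang": "painless",
        "source": "if (ctx.extra?.request?.body != null) { ctx.extra.request.body.remove(''); }"
      }
    }
  ]
}

The pipeline can then be attached to the log index via index.default_pipeline or passed per request with ?pipeline=strip-empty-keys, assuming a version recent enough to support those options.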

Restore of elasticsearch snapshot not working using cloud-aws plugin and s3 regions

I am trying to restore a snapshot from s3 using the elasticsearch cloud-aws plugin. Both elasticsearch and cloud-aws plugin are on version 2.2.0.
The weird thing is that on my local machine I can only restore the snapshot if I specify the region, like this:
{
  "type": "s3",
  "settings": {
    "bucket": "bucketname",
    "region": "us-west-1",
    "access_key": "XXXX",
    "secret_key": "XXXX",
    "base_path": "path/to/snapshot",
    "compress": "true"
  }
}
If I leave out the region, the snapshot restore will fail with the following error (names have been changed of course):
{
  "error": {
    "root_cause": [
      {
        "type": "repository_verification_exception",
        "reason": "[repositoryname] path [path][to][snapshot] is not accessible on master node"
      }
    ],
    "type": "repository_verification_exception",
    "reason": "[repositoryname] path [path][to][snapshot] is not accessible on master node",
    "caused_by": {
      "type": "i_o_exception",
      "reason": "Unable to upload object path/to/snapshot/tests-pJjA4cwNREu8RsFsXTn4Qg/master.dat-temp",
      "caused_by": {
        "type": "amazon_s3_exception",
        "reason": "The request signature we calculated does not match the signature you provided. Check your key and signing method. (Service: Amazon S3; Status Code: 403; Error Code: SignatureDoesNotMatch; Request ID: 7D8675025D7DB3ED)"
      }
    }
  },
  "status": 500
}
However, on my server, the snapshot restore will only succeed if I don't specify the region, like this:
{
  "type": "s3",
  "settings": {
    "bucket": "bucketname",
    "access_key": "XXXX",
    "secret_key": "XXXX",
    "base_path": "path/to/snapshot",
    "compress": "true"
  }
}
If I do specify the region, no matter which region I pick, then the snapshot restore will fail with the same error as shown above.
Since I am automating the snapshot restore, I want the behaviour to be predictable across all servers and localhost. What am I doing wrong? Or missing?
Any help is greatly appreciated, thanks!
Found the cause myself: I had updated my Java JRE/JDK and also updated the JAVA_HOME environment variable and the Elasticsearch Windows service, but for some reason I still needed to restart Windows to get the cloud-aws plugin working again.
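For anyone automating this, one way to keep the region handling predictable across machines is to pin the region once per node in elasticsearch.yml instead of in each repository definition, so the repository registration request can stay identical everywhere. This is a sketch based on the cloud-aws plugin's node-level settings as I remember them, not something re-verified against 2.2.0:

# elasticsearch.yml on every node
cloud.aws.region: us-west-1

With the region fixed at the node level, the repository registration only needs the bucket, credentials, and base_path.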

Elasticsearch: script sometimes works OK, sometimes throws an exception

My Elasticsearch script sometimes works OK and sometimes throws an exception, such as:
{
  "error": {
    "root_cause": [
      {
        "type": "remote_transport_exception",
        "reason": "[es77][ip:9300] [indices:data/write/update[s]]"
      }
    ],
    "type": "illegal_argument_exception",
    "reason": "failed to execute script",
    "caused_by": {
      "type": "script_exception",
      "reason": "failed to run inline script [newArray = [];ctx._source.CILastCallResultRemark?.each{ obj->if(obj.id!=item.id){newArray=newArray+obj} }; (ctx._source.CILastCallResultRemark=newArray+item)] using lang [groovy]",
      "caused_by": {
        "type": "no_class_def_found_error",
        "reason": "sun/reflect/MethodAccessorImpl",
        "caused_by": {
          "type": "class_not_found_exception",
          "reason": "sun.reflect.MethodAccessorImpl"
        }
      }
    }
  },
  "status": 400
}
Here is the script:
{
  "script": {
    "inline": "newArray = [];ctx._source.CILastCallResultRemark?.each{ obj->if(obj.id!=item.id){newArray=newArray+obj}};(ctx._source.CILastCallResultRemark=newArray+item)",
    "params": {
      "item": {
        "id": "2",
        "remart": "x1"
      }
    }
  }
}
And here is the ES log:
Caused by: ScriptException[failed to run inline script [newArray = [];ctx._source.CILastCallResultRemark?.each{ obj->if(obj.id!=item.id){newArray=newArray+obj}};(ctx._source.CILastCallResultRemark=newArray+item)] using lang [groovy]]; nested: NoClassDefFoundError[sun/reflect/MethodAccessorImpl]; nested: ClassNotFoundException[sun.reflect.MethodAccessorImpl];
at org.elasticsearch.script.groovy.GroovyScriptEngineService$GroovyScript.run(GroovyScriptEngineService.java:318)
at org.elasticsearch.action.update.UpdateHelper.executeScript(UpdateHelper.java:251)
... 12 more
Caused by: java.lang.NoClassDefFoundError: sun/reflect/MethodAccessorImpl
I see the bug. I will update the ES version and try.
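For completeness, a sketch of how the script above is sent through the update API on ES 2.x; the index name, type, and document id here are placeholders:

POST /contacts/call/1/_update
{
  "script": {
    "lang": "groovy",
    "inline": "newArray = [];ctx._source.CILastCallResultRemark?.each{ obj->if(obj.id!=item.id){newArray=newArray+obj}};(ctx._source.CILastCallResultRemark=newArray+item)",
    "params": {
      "item": {
        "id": "2",
        "remart": "x1"
      }
    }
  }
}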
