I am new to Elasticsearch and I am testing the starter commands. I created an index, added a document, and updated it with a simple update. Now I am trying to make a scripted update with an upsert tag in the POST request, but I receive a null pointer exception, as shown below.
POST products/_update/1
{
  "script": "ctx._source.price += 5",
  "upsert": {
    "price": 1
  }
}
Instead of a success response, I received the following:
{
"error": {
"root_cause": [
{
"type": "remote_transport_exception",
"reason": "[DESKTOP-IGOE2EN][127.0.0.1:9300][indices:data/write/update[s]]"
}
],
"type": "illegal_argument_exception",
"reason": "failed to execute script",
"caused_by": {
"type": "script_exception",
"reason": "runtime error",
"script_stack": [
"ctx._source.price += 5",
" ^---- HERE"
],
"script": "ctx._source.price += 5",
"lang": "painless",
"caused_by": {
"type": "null_pointer_exception",
"reason": null
}
}
},
"status": 400
}
I think you should add a condition that checks whether the field (price) exists before accessing it, using an if() condition:
POST products/_update/1
{
  "script": "if (ctx._source.price != null) { ctx._source.price += 5 }",
  "upsert": {
    "price": 1
  }
}
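A more compact variant of the same guard, assuming a Painless version that supports the Elvis operator, lets a missing price default to 0 (a sketch, not the only option):

POST products/_update/1
{
  "script": "ctx._source.price = (ctx._source.price ?: 0) + 5",
  "upsert": {
    "price": 1
  }
}

The `?:` operator substitutes the right-hand value whenever the left-hand expression evaluates to null, which is exactly the case that triggered the null_pointer_exception above.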
I'm trying to update a doc in Elasticsearch using:
POST /rcqmkg_eco_ugc_rec/_doc/aaa/_update
{
  "script": {
    "source": "ctx._source.flower_cnt_0 += params.flower_cnt_0",
    "lang": "painless",
    "params": {
      "flower_cnt_0": 1
    }
  }
}
But I got an illegal_argument_exception error. The response from Elasticsearch is:
{
"error": {
"root_cause": [
{
"type": "remote_transport_exception",
"reason": "[data1_xxx][xxx][indices:data/write/update[s]]"
}
],
"type": "illegal_argument_exception",
"reason": "failed to execute script",
"caused_by": {
"type": "script_exception",
"reason": "runtime error",
"script_stack": [
"ctx._source.flower_cnt_0 += params.flower_cnt_0",
" ^---- HERE"
],
"script": "ctx._source.flower_cnt_0 += params.flower_cnt_0",
"lang": "painless",
"caused_by": {
"type": "null_pointer_exception",
"reason": null
}
}
},
"status": 400
}
Where is the problem in my update request?
If the document you're trying to update might not contain a field called flower_cnt_0, you need to account for this in your script:
Try this instead:
POST /rcqmkg_eco_ugc_rec/_doc/aaa/_update
{
  "script": {
    "source": "ctx._source.flower_cnt_0 = (ctx._source.flower_cnt_0 ?: 0) + params.flower_cnt_0",
    "lang": "painless",
    "params": {
      "flower_cnt_0": 1
    }
  }
}
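If the document itself might not exist yet, the same script can be combined with an upsert block so the first request seeds the counter instead of failing with document_missing_exception (a sketch, reusing the field names from the question):

POST /rcqmkg_eco_ugc_rec/_doc/aaa/_update
{
  "script": {
    "source": "ctx._source.flower_cnt_0 = (ctx._source.flower_cnt_0 ?: 0) + params.flower_cnt_0",
    "lang": "painless",
    "params": {
      "flower_cnt_0": 1
    }
  },
  "upsert": {
    "flower_cnt_0": 0
  }
}

When the document is absent, Elasticsearch indexes the upsert body as-is; subsequent requests run the script against the existing document.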
I need to update an array value with another array and then remove the duplicates. I found the unique() method for Elasticsearch scripts, but it causes an error:
{
"error": {
"root_cause": [
{
"type": "remote_transport_exception",
"reason": "[probook][127.0.0.1:9300][indices:data/write/update[s]]"
}
],
"type": "illegal_argument_exception",
"reason": "failed to execute script",
"caused_by": {
"type": "script_exception",
"reason": "runtime error",
"script_stack": [
"ctx._source.arrFieldName = ctx._source.arrFieldName.unique();",
" ^---- HERE"
],
"script": "ctx._source.arrFieldName.addAll([111, 222, 333]);ctx._source.arrFieldName = ctx._source.arrFieldName.unique();ctx._source.arrFieldNameCount = ctx._source.arrFieldName.length",
"lang": "painless",
"caused_by": {
"type": "illegal_argument_exception",
"reason": "dynamic method [java.util.ArrayList, unique/0] not found"
}
}
},
"status": 400
}
You should use distinct() instead of unique(), which is not available in Painless, in your inline script:
{
  "script": {
    "inline": "ctx._source.arrFieldName = ctx._source.arrFieldName.stream().distinct().sorted().collect(Collectors.toList())"
  }
}
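Putting that together with the rest of the script from the question, the full request body might look like this (a sketch; note that after collect() the value is a List, so size() is used in place of the original length):

{
  "script": {
    "inline": "ctx._source.arrFieldName.addAll([111, 222, 333]); ctx._source.arrFieldName = ctx._source.arrFieldName.stream().distinct().sorted().collect(Collectors.toList()); ctx._source.arrFieldNameCount = ctx._source.arrFieldName.size()"
  }
}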
I incorrectly ingested lots of documents into Elasticsearch using the wrong #timestamp field. I already changed the affected Logstash pipeline to use the correct timestamps, but I cannot re-ingest the old data.
I do however have another document field that can be used as the timestamp (json.created_at). So I'd like to update the field. I've found that I can use the _update_by_query action to do that, but I've tried several versions that didn't work, including this:
POST logstash-rails_models-*/_update_by_query
{
  "script": {
    "lang": "painless",
    "source": "ctx._source.#timestamp = ctx._source.json.created_at"
  }
}
This complains about an unexpected character:
{
"error": {
"root_cause": [
{
"type": "script_exception",
"reason": "compile error",
"script_stack": [
"ctx._source.#timestamp = ctx._source. ...",
" ^---- HERE"
],
"script": "ctx._source.#timestamp = ctx._source.json.created_at",
"lang": "painless"
}
],
"type": "script_exception",
"reason": "compile error",
"script_stack": [
"ctx._source.#timestamp = ctx._source. ...",
" ^---- HERE"
],
"script": "ctx._source.#timestamp = ctx._source.json.created_at",
"lang": "painless",
"caused_by": {
"type": "illegal_argument_exception",
"reason": "unexpected character [#].",
"caused_by": {
"type": "lexer_no_viable_alt_exception",
"reason": null
}
}
},
"status": 500
}
What should I do?
The correct way to access this field is with bracket notation, with the field name wrapped in quotes:
POST logstash-rails_models-*/_update_by_query
{
  "script": {
    "lang": "painless",
    "source": "ctx._source['#timestamp'] = ctx._source.json.created_at"
  }
}
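On a large index it may also help to add conflicts=proceed, so version conflicts are skipped rather than aborting the run, and an exists query, so only documents that actually have the source field are touched (a sketch):

POST logstash-rails_models-*/_update_by_query?conflicts=proceed
{
  "script": {
    "lang": "painless",
    "source": "ctx._source['#timestamp'] = ctx._source.json.created_at"
  },
  "query": {
    "exists": {
      "field": "json.created_at"
    }
  }
}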
I have a query that uses a script to sort the results. For my business logic I need a cities parameter to calculate the sort value, but when I use Debug.explain to print params.cities, the order is not as expected; it has been reordered.
Does anyone know how to get the correct order in the script? Thanks in advance for your help.
Query:
GET tour/product/_search
{
  "sort": [
    {
      "_script": {
        "script": {
          "inline": "Debug.explain(params.cities);",
          "lang": "painless",
          "params": {
            "cities": {
              "2": 100.0,
              "17": 3.71,
              "12": 3.63,
              "13": 3.63,
              "14": 3.6
            }
          }
        },
        "type": "number",
        "order": "desc"
      }
    }
  ]
}
Print result:
{
"error": {
"root_cause": [
{
"type": "painless_explain_error",
"reason": "painless_explain_error: null"
}
],
"type": "search_phase_execution_exception",
"reason": "all shards failed",
"phase": "query",
"grouped": true,
"failed_shards": [
{
"shard": 0,
"index": "tour-1",
"node": "y5z7ah5cSoCk6n3Q95gbxw",
"reason": {
"type": "script_exception",
"reason": "runtime error",
"painless_class": "HashMap",
"to_string": "{12=3.63, 2=100.0, 13=3.63, 14=3.6, 17=3.71}",
"java_class": "java.util.HashMap",
"script_stack": [
"Debug.explain(params.cities);",
" ^---- HERE"
],
"script": "Debug.explain(params.cities);",
"lang": "painless",
"caused_by": {
"type": "painless_explain_error",
"reason": "painless_explain_error: null"
}
}
}
],
"caused_by": {
"type": "script_exception",
"reason": "runtime error",
"painless_class": "HashMap",
"to_string": "{12=3.63, 2=100.0, 13=3.63, 14=3.6, 17=3.71}",
"java_class": "java.util.HashMap",
"script_stack": [
"Debug.explain(params.cities);",
" ^---- HERE"
],
"script": "Debug.explain(params.cities);",
"lang": "painless",
"caused_by": {
"type": "painless_explain_error",
"reason": "painless_explain_error: null"
}
}
},
"status": 500
}
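The error response itself points at the cause: the cities object in params is deserialized into a java.util.HashMap (see the "java_class" field above), and HashMap does not preserve insertion order. If the order matters, one option (a sketch, not the only approach) is to pass the data as a JSON array of [id, weight] pairs, since arrays keep their order and arrive in the script as an ordered List:

"params": {
  "cities": [
    ["2", 100.0],
    ["17", 3.71],
    ["12", 3.63],
    ["13", 3.63],
    ["14", 3.6]
  ]
}

The script would then iterate the list in order instead of looking keys up in a map.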
I am trying to add an item to an array in one of my Elasticsearch documents. I can do this for simple items, such as strings, but cannot work out how to add objects. Here is my current code:
POST /user_profiles/user_profile/12345/_update
{
  "script": {
    "inline": "ctx._source.searches.add(params.search)",
    "lang": "painless",
    "params": {
      "search": {
        "test": "test2"
      }
    }
  }
}
I am getting the following error:
{
"error": {
"root_cause": [
{
"type": "mapper_parsing_exception",
"reason": "failed to parse [searches]"
}
],
"type": "mapper_parsing_exception",
"reason": "failed to parse [searches]",
"caused_by": {
"type": "illegal_state_exception",
"reason": "Can't get text on a START_OBJECT at 1:29"
}
},
"status": 400
}
I found the issue. It was unrelated to the query itself: the index mappings did not contain the additional fields.
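For reference, a minimal mapping sketch that would allow objects to be appended to searches (field names taken from the question; whether you want nested or plain object depends on how you query the data, and the type of test is an assumption):

PUT /user_profiles/_mapping/user_profile
{
  "properties": {
    "searches": {
      "type": "nested",
      "properties": {
        "test": {
          "type": "keyword"
        }
      }
    }
  }
}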