In Elasticsearch, you can update data in an index using a script in a POST request, and you can write a multiline script using triple quotation marks like this (in the OpenSearch Dev Tools console):
POST /item/_update/123
{
  "script": {
    "source": """
      ctx._source.a = params.a;
      ctx._source.b = params.b;
      ctx._source.c = params.c;
    """,
    "lang": "painless",
    "params": {
      "a": 3,
      "b": 3,
      "c": 3
    }
  }
}
However, I want to write the multiline script in an AppSync resolver, and I cannot get it to work because it keeps giving me a syntax error. Below is a mapping template example that fails with an error.
{
  "version": "2017-03-18",
  "operation": "POST",
  "path": "/item/_update/123",
  "params": {
    "body": {
      "script": {
        "source": """
          ctx._source.a = params.a;
          ctx._source.b = params.b;
          ctx._source.c = params.c;
        """,
        "lang": "painless",
        "params": {
          "a": $context.arguments.a,
          "b": $context.arguments.b,
          "c": $context.arguments.c
        }
      }
    }
  }
}
The typical error looks like this:
{
  "data": {
    "updateItem": null
  },
  "errors": [
    {
      "path": [
        "updateItem"
      ],
      "data": null,
      "errorType": "MappingTemplate",
      "errorInfo": null,
      "locations": [
        {
          "line": 2,
          "column": 3,
          "sourceName": null
        }
      ],
      "message": "Unexpected character (':' (code 58)): was expecting comma to separate Object entries...."
    }
  ]
}
The problem is mainly about the quotation marks: how do I properly write a triple-quoted string in an AppSync resolver so that it translates to the correct format for the Elasticsearch API request?
How can we do this?
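One workaround sketch (not from the original thread, and untested): triple quotes are a Dev Tools console convenience, not valid JSON, and an AppSync mapping template must evaluate to valid JSON. Since Painless statements are already separated by semicolons, the script can be collapsed into an ordinary single-line JSON string (or one with `\n` escapes). Note this sketch uses the standard AppSync template version `2017-02-28`:

```
{
  "version": "2017-02-28",
  "operation": "POST",
  "path": "/item/_update/123",
  "params": {
    "body": {
      "script": {
        "source": "ctx._source.a = params.a; ctx._source.b = params.b; ctx._source.c = params.c;",
        "lang": "painless",
        "params": {
          "a": $context.arguments.a,
          "b": $context.arguments.b,
          "c": $context.arguments.c
        }
      }
    }
  }
}
```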
For example, when indexing one document into Elasticsearch:
I want to analyze a field named description in the document with the uax_url_email tokenizer/analyzer;
if description contains any URLs, put them into another array field named urls;
then finish indexing the document.
Now I can check whether the urls field is empty to know whether description contains any URL.
Is this possible? Or does the analyzer only contribute to the inverted index, not to other fields?
You can use an Ingest Pipeline Script processor with a Painless script. I hope this helps.
POST _ingest/pipeline/_simulate?verbose
{
  "pipeline": {
    "processors": [
      {
        "script": {
          "description": "Extract 'tags' from 'env' field",
          "lang": "painless",
          "source": """
            def m = /(http|ftp|https):\/\/([\w_-]+(?:(?:\.[\w_-]+)+))([\w.,#?^=%&:\/~+#-]*[\w#?^=%&\/~+#-])/.matcher(ctx["content"]);
            ArrayList urls = new ArrayList();
            while (m.find()) {
              urls.add(m.group());
            }
            ctx['urls'] = urls;
          """,
          "params": {
            "delimiter": "-",
            "position": 1
          }
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "content": "My name is Sagar patel and i visit https://apple.com and https://google.com"
      }
    }
  ]
}
The above pipeline will generate a result like the one below:
{
  "docs": [
    {
      "processor_results": [
        {
          "processor_type": "script",
          "status": "success",
          "description": "Extract 'tags' from 'env' field",
          "doc": {
            "_index": "_index",
            "_id": "_id",
            "_source": {
              "urls": [
                "https://apple.com",
                "https://google.com"
              ],
              "content": "My name is Sagar patel and i visit https://apple.com and https://google.com"
            },
            "_ingest": {
              "pipeline": "_simulate_pipeline",
              "timestamp": "2022-07-13T12:45:00.3655307Z"
            }
          }
        }
      ]
    }
  ]
}
I need to update my mapping in Elasticsearch.
Here is an example.
Current mapping:
{
  "field1": 6,
  "field2": "some string"
}
I need to update it to this:
{
  "outer": {
    "field1": 6,
    "field2": "some string"
  }
}
I do it with the update_by_query API and this request:
{
  "script": {
    "source": "ctx._source.outer.field1 = ctx._source.field1; ctx._source.outer.field2 = ctx._source.field2;",
    "lang": "painless"
  }
}
but I get a null pointer exception because there is no outer object in the documents yet:
"type": "script_exception",
"reason": "compile error",
"script_stack": [
"... ctx._source.outer.fiel ...",
" ^---- HERE"
],
How should I change the request?
You need to do it this way:
"source": "ctx._source.outer = ['field1': ctx._source.remove('field1'), 'field2': ctx._source.remove('field2')];",
I have a document in elasticsearch that looks like this:
{
  "_index": "stats",
  "_type": "_doc",
  "_id": "1",
  "_score": 1.0,
  "_source": {
    "publishTime": {
      "lastUpdate": 1580991095131,
      "h0_4": 0,
      "h4_8": 0,
      "h8_12": 3,
      "h12_16": 5,
      "h16_20": 2,
      "h20_24": 1
    },
    "postCategories": {
      "lastUpdate": 1580991095131,
      "tech": 56,
      "lifestyle": 63,
      "healthcare": 49,
      "finances": 25
    }
  }
}
Updating/Incrementing existing property values by sending a POST request to /stats/_update/1 works great! However, if I try to upsert a non-existing property name under postCategories, I get a Bad Request (400) error of type remote_transport_exception/illegal_argument_exception:
"ctx._source.postCategories.relationships += params.postCategories.relationships",
^---- HERE"
Upsert
{
  "script": {
    "source": "ctx._source.postCategories.relationships += params.postCategories.relationships",
    "lang": "painless",
    "params": {
      "postCategories": {
        "relationships": 2
      }
    }
  },
  "upsert": {
    "postCategories": {
      "relationships": 2
    }
  }
}
I also tried the Scripted Upsert method by following the documentation from here, but the same error occurs:
Scripted Upsert
{
  "scripted_upsert": true,
  "script": {
    "source": "ctx._source.postCategories.relationships += params.postCategories.relationships",
    "params": {
      "postCategories": {
        "relationships": 2
      }
    }
  },
  "upsert": {}
}
Can anyone tell me how can I properly add/upsert new property names under postCategories object, please?
Thank You!
It's basically saying that you are trying to assign a value to a field that doesn't exist. I think the script below should work (not tested):
check whether the field exists and continue with the operation if it does;
otherwise add the new field and assign the value.
"if (ctx._source.postCategories.containsKey(\"relationships\")) { ctx._source.postCategories.relationships += params.postCategories.relationships} else { ctx._source.postCategories[\"relationships\"] = params.postCategories.relationships}",
I have the following document:
{
  "likes": {
    "data": [
      {
        "name": "a"
      },
      {
        "name": "b"
      },
      {
        "name": "c"
      }
    ]
  }
}
I'm trying to run an update_by_query that will add a field called 'like_count' with the number of array items inside likes.data
It's important to know that not all of my documents have the likes.data object.
I've tried this:
POST /facebook/post/_update_by_query
{
  "script": {
    "inline": "if (ctx._source.likes != '') { ctx._source.like_count = ctx._source.likes.data.length }",
    "lang": "painless"
  }
}
But getting this error message:
{
  "type": "script_exception",
  "reason": "runtime error",
  "script_stack": [
    "ctx._source.like_count = ctx._source.likes.data.length }",
    "                         ^---- HERE"
  ],
  "script": "if (ctx._source.likes != '') { ctx._source.like_count = ctx._source.likes.data.length }",
  "lang": "painless"
}
Try ctx._source['likes.data.name'].length
According to https://www.elastic.co/guide/en/elasticsearch/reference/current/nested.html, an object array in ES is flattened to:
{
  "likes.data.name": ["a", "b", "c"]
}
What we think of as an object array datatype is actually the nested datatype.
Try this
ctx._source['likes']['data'].size()
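A fuller version with a null guard, using the index path from the question (a sketch, not tested):

```
POST /facebook/post/_update_by_query
{
  "script": {
    "source": "if (ctx._source.likes != null && ctx._source.likes.data != null) { ctx._source.like_count = ctx._source['likes']['data'].size() }",
    "lang": "painless"
  }
}
```

Checking against null rather than '' skips documents without likes.data, and .size() works because Painless exposes the array as a List, whose element count comes from size() rather than a .length property.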
I have documents in this format:
"universities": {
"number": 1,
"state": [
{
"Name": "michigan",
"country": "us",
"code": 5696
},
{
"Name": "seatle",
"country": "us",
"code": 5695
}
]
}
I have to update the "Name" field from seatle to Denmark in all the documents in the index.
Is this possible using update_by_query?
I tried it with update_by_query, but it updates all the Name fields rather than only the ones with seatle.
Similarly, how can I delete the particular "Name" entry where seatle is present in the state array?
I tried deleting a particular field using:
"script": {
"inline": "ctx._source.universities.state.remove{ it.Name== findName}",
"params": {
"findName": "seatle"
}
}
}
It throws an error like this:
{
  "error": {
    "root_cause": [
      {
        "type": "invalid_type_name_exception",
        "reason": "Document mapping type name can't start with '_'"
      }
    ],
    "type": "invalid_type_name_exception",
    "reason": "Document mapping type name can't start with '_'"
  },
  "status": 400
}
You can do it like this:
"script": {
"inline": "ctx._source.universities.state.findAll{ it.Name == findName}.each{it.Name = newName}",
"params": {
"findName": "seatle",
"newName": "Denmark"
}
}
}
First we iterate over the list and find all the elements that have the desired name, then we iterate over that filtered list to update those elements with the new name.
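Note that the closure-style findAll{}/each{} syntax above comes from the older Groovy scripting language (as does the "inline" parameter, which newer releases rename to "source"). On versions where Painless is the script language, the same update can be written as a plain for loop (a sketch, not tested; `universities-index` is a placeholder index name):

```
POST /universities-index/_update_by_query
{
  "script": {
    "source": "for (def s : ctx._source.universities.state) { if (s.Name == params.findName) { s.Name = params.newName } }",
    "lang": "painless",
    "params": {
      "findName": "seatle",
      "newName": "Denmark"
    }
  }
}
```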