AppSync mapping response from Lambda getting result from Elasticsearch

I am trying to invoke a Lambda function from AppSync, passing a search query. The Lambda calls Elasticsearch, which returns the result set.
I am able to map the result set to different fields in the GraphQL schema:
#set($result = {
  "statusCode": "${context.result.statusCode}",
  "headers": "${context.result.headers}",
  "isBase64Encoded": "${context.result.isBase64Encoded}",
  "body": "${context.result.body}"
})
$util.toJson($result)
In body I get the search result set, which I then need to parse and map to the schema.
I am unable to extract the response ${context.result.body.hits.hits} to iterate through the _source objects and build the search result set.
Any suggestions and guidance will be very helpful.

AppSync has built-in support for Amazon Elasticsearch Service resolvers; see the AppSync documentation for more information.
However, if you wish to keep your current Lambda resolver, you could try the following response mapping template:
## Declare an empty array
#set( $result = [] )
## Loop through the search hits
#foreach($entry in $context.result.hits.hits)
  ## Add each item to the result array
  $util.qr($result.add(
    {
      'id' : $entry.get("_source")['id'],
      'title' : $entry.get("_source")['fields']['title'],
      'plot' : $entry.get("_source")['fields']['plot'],
      'year' : $entry.get("_source")['fields']['year'],
      'url' : $entry.get("_source")['fields']['image_url']
    }))
#end
## Serialize the result array to JSON
$util.toJson($result)

The issue was resolved by parsing the context result body as shown below. Once this was done, I was able to iterate over the result set.
[
#set($result = $context.result)
## Parse the response body back into JSON so it can be iterated
#set($result.resultSet = $util.parseJson($context.result.body))
#foreach($entry in $result.resultSet.hits.hits)
  ## $velocityCount starts at 1 and increments with each #foreach iteration
  #if( $velocityCount > 1 ) , #end
  $util.toJson(
    {
      'id' : $entry.get("_source")['id'],
      'title' : $entry.get("_source")['fields']['title'],
      'plot' : $entry.get("_source")['fields']['plot'],
      'year' : $entry.get("_source")['fields']['year'],
      'url' : $entry.get("_source")['fields']['image_url']
    }
  )
#end
]
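A variant that combines the two templates above is to parse the body once, collect the hits into a list, and serialize it in a single call. This is only a sketch and assumes the same _source field layout as the examples above:
#set( $parsed = $util.parseJson($context.result.body) )
#set( $items = [] )
#foreach( $entry in $parsed.hits.hits )
  ## Collect each hit's fields into the items array
  $util.qr($items.add({
    'id' : $entry.get("_source")['id'],
    'title' : $entry.get("_source")['fields']['title'],
    'plot' : $entry.get("_source")['fields']['plot'],
    'year' : $entry.get("_source")['fields']['year'],
    'url' : $entry.get("_source")['fields']['image_url']
  }))
#end
$util.toJson($items)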

Related

Elasticsearch bulk request does not import all data, but shows no error

I use GuzzleHttp to send data via "_bulk" to an Elasticsearch index. It is only a small dataset of 850 records. When I transfer the data record by record, I get an error message for 17 records. That's fine for me; I can fix those errors.
But when I use _bulk, I do not get any error message at all. The 17 incorrect records are just silently ignored and are missing from the index. How can I get an error message here? Are there any options that I can use? Any ideas?
The endpoint and my main code parts are shown below:
$jsonData = "xxxxx"; // the payload for the request
$elasticUrl = "https://xxxx.xx/xxxxx/_doc/_bulk";
$client = new Client([
"verify" => false, // disable ssl certificate verification
"timeout" => 600, // maximum timeout for requests
"http_errors" => false // disable exceptions
]);
$header = ["Content-Type" => "application/json"];
$result = $client->post($elasticUrl,
[
"headers" => $header,
"body" => $jsonData
]
);
if ($result->getStatusCode() != 200) {
$ret = "Error ".$result->getStatusCode()." with message: ".$result->getReasonPhrase();
}
A bulk request will always succeed with HTTP 200.
However, the bulk response indicates whether each item succeeded or not. If you see errors: true in the response, you know some of the items could not be indexed, and by looking into the items array you'll find the error for each failing item.
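For illustration, a trimmed sketch of what a bulk response with a failure can look like (the index name and error values here are made up; the shape follows the bulk API):
{
  "took": 30,
  "errors": true,
  "items": [
    { "create": { "_index": "myindex", "_id": "1", "status": 201 } },
    { "create": { "_index": "myindex", "_id": "2", "status": 400,
        "error": { "type": "mapper_parsing_exception", "reason": "failed to parse field [price]" } } }
  ]
}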
As @Val pointed out, using $result->getBody() gives the needed information:
$body = (string) $result->getBody();
$bodyArray = json_decode($body, true);

if ($bodyArray["errors"]) {
    $retArray = [];
    foreach ($bodyArray["items"] as $key => $item) {
        if (isset($item["create"]["error"])) {
            $retArray[] = $item["create"]["error"]["reason"].": ".json_encode($data[$key]);
        }
    }
    $ret = implode(", ", $retArray);
}
As a side note: in $data I keep the records as a PHP array before sending them to Elasticsearch.
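For completeness, a minimal sketch of how $jsonData could be built from such an array as newline-delimited JSON for the bulk API (this assumes $data is a plain list of associative arrays and that each record should be created):
$lines = [];
foreach ($data as $doc) {
    $lines[] = json_encode(["create" => new \stdClass()]); // action metadata line
    $lines[] = json_encode($doc);                          // document source line
}
// The bulk body is NDJSON and must end with a trailing newline
$jsonData = implode("\n", $lines) . "\n";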

Mocha and Chai: JSON contains/includes certain text

Using Mocha and Chai, I am trying to check whether a JSON array contains specific text. I tried multiple things suggested on this site, but none worked.
await validatePropertyIncludes(JSON.parse(response.body), 'scriptPrivacy');
async validatePropertyIncludes(item, propertyValue) {
expect(item).to.contain(propertyValue);
}
The error that I am getting:
AssertionError: expected [ Array(9) ] to include 'scriptPrivacy'
My response from API:
[
  {
    "scriptPrivacy": {
      "settings": "settings=\"foobar\";",
      "id": "foobar-notice-script",
      "src": "https://foobar.com/foobar-privacy-notice-scripts.js"
    }
  }
]
You can check whether the field is undefined: if the field exists in the JSON object it won't be undefined, otherwise it will be.
Using a filter() expression you can count how many documents have the field defined.
var filter = object.filter(item => item.scriptPrivacy != undefined).length
If the attribute exists in the JSON, the filter variable should be > 0.
var filter = object.filter(item => item.scriptPrivacy != undefined).length
// Comparison you want: equal(1), above(0), ...
expect(filter).to.equal(1)
Edit:
To use this from a method where you pass the attribute name as a parameter, you can use item[propertyValue], because object properties in Node can be accessed with bracket notation, like array elements.
So the code could be:
// Call the function
validatePropertyIncludes(object, 'scriptPrivacy')

function validatePropertyIncludes(object, propertyValue) {
    var filter = object.filter(item => item[propertyValue] != undefined).length
    // Comparison you want: equal(1), above(0), ...
    expect(filter).to.equal(1)
}
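As a usage sketch, the check could sit inside a Mocha test like this (callApi is a hypothetical helper standing in for however the response is obtained):
const { expect } = require('chai');

function validatePropertyIncludes(items, propertyValue) {
    // Count the array elements that define the given property
    const matches = items.filter(item => item[propertyValue] !== undefined).length;
    expect(matches).to.be.above(0);
}

it('response contains scriptPrivacy', async () => {
    const response = await callApi(); // hypothetical helper returning { body: '...' }
    validatePropertyIncludes(JSON.parse(response.body), 'scriptPrivacy');
});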

How to add a JSON object to a nested array of a MongoDB document using Spring?

Document stored in MongoDB:
{
  "CNF_SERVICE_ID": "1",
  "SERVICE_CATEGORY": "COMMON_SERVICE",
  "SERVICES": [{
    "SERVICE_NAME": "Authentication Service",
    "VERSIONS": [{
      "VERSION_NAME": "AuthenticationServiceV6_3",
      "VERSION_NUMBER": "2",
      "VERSION_NOTES": "test",
      "RELEASE_DATE": "21-02-2020",
      "OBSOLETE_DATE": "21-02-2020",
      "STATUS": "Y",
      "GROUPS": [{
        "GROUP_NAME": "TEST GROUP",
        "CREATED_DATE": "",
        "NODE_NAMES": [
          ""
        ],
        "CUSTOMERS": [{
          "CUSTOMER_CONFIG_ID": "4",
          "ACTIVATION_DATE": "21-02-2020",
          "DEACTIVATION_DATE": "21-02-2020",
          "STATUS": "Y"
        }]
      }]
    }]
  }]
}
Now, I need to add another customer JSON object to the "CUSTOMERS" array inside "GROUPS" in the same document above. The customer JSON would be like this:
{
  "CUSTOMER_CONFIG_ID": "10",
  "ACTIVATION_DATE": "16-03-2020",
  "DEACTIVATION_DATE": "16-03-2021",
  "STATUS": "Y"
}
I tried this:
Update update = new Update().push("SERVICES.$.VERSIONS.GROUPS.CUSTOMERS",customerdto);
mongoOperations.update(query, update, Myclass.class, "mycollection");
But, I am getting the exception: org.springframework.data.mongodb.UncategorizedMongoDbException: Command failed with error 28 (PathNotViable): 'Cannot create field 'GROUPS' in element
Edit:
I was able to update it using the filtered positional operator. Below is the query I used:
update(
{ "SERVICE_CATEGORY":"COMMON_SERVICE", "SERVICES.SERVICE_NAME":"Authentication Service", "SERVICES.VERSIONS.VERSION_NAME":"AuthenticationServiceV6_3"},
{ $push:{"SERVICES.$[].VERSIONS.$[].GROUPS.$[].CUSTOMERS": { "CUSTOMER_CONFIG_ID":"6", "ACTIVATION_DATE":"31-03-2020", "STATUS":"Y" } } }
);
Actually, this query updated all the fields irrespective of the filter conditions, so I tried the following, but I am facing a syntax exception. Please help.
update(
{"SERVICE_CATEGORY":"COMMON_SERVICE"},
{"SERVICES.SERVICE_NAME":"Authentication Service"},
{"SERVICES.VERSIONS.VERSION_NAME":"AuthenticationServiceV6_3"}
{
$push:{"SERVICES.$[service].VERSIONS.$[version].GROUPS.$[group].CUSTOMERS":{
"CUSTOMER_CONFIG_ID":"6",
"ACTIVATION_DATE":"31-03-2020",
"STATUS":"Y"
}
}
},
{
multi: true,
arrayFilters: [ { $and:[{ "version.VERSION_NAME": "AuthenticationServiceV6_3"},{"service.SERVICE_NAME":"Authentication Service"},{"group.GROUP_NAME":"TEST GROUP"}]} ]
}
);
Update: April 1, 2020
The code I tried:
validationquery.addCriteria(Criteria.where("SERVICE_CATEGORY").is(servicedto.getService_category())
        .and("SERVICES.SERVICE_NAME").is(servicedetail.getService_name()).and("SERVICES.VERSIONS.VERSION_NAME").is(version.getVersion_name()));
Update update = new Update().push("SERVICES.$[s].VERSIONS.$[v].GROUPS.$[].CUSTOMERS", customer)
        .filterArray(Criteria.where("SERVICE_CATEGORY").is(servicedto.getService_category())
                .and("s.SERVICE_NAME").is(servicedetail.getService_name()).and("v.VERSION_NAME").is(version.getVersion_name()));
mongoOperations.updateMulti(validationquery, update, ServiceRegistrationDTO.class, collection, key, env);
The below exception is thrown:
ERROR com.sample.amt.mongoTemplate.MongoOperations - Exception in count(query, collectionName,key,env) :: org.springframework.dao.DataIntegrityViolationException: Error parsing array filter :: caused by :: Expected a single top-level field name, found 'SERVICE_CATEGORY' and 's'; nested exception is com.mongodb.MongoWriteException: Error parsing array filter :: caused by :: Expected a single top-level field name, found 'SERVICE_CATEGORY' and 's'
This update query adds the JSON to the nested array, "SERVICES.VERSIONS.GROUPS.CUSTOMERS", based upon the specified filter conditions. Note that your filter conditions direct the update operation to the specific array (of the nested arrays).
// JSON document to be added to the CUSTOMERS array
new_cust = {
  "CUSTOMER_CONFIG_ID": "6",
  "ACTIVATION_DATE": "31-03-2020",
  "STATUS": "Y"
}

db.collection.update(
  {
    "SERVICE_CATEGORY": "COMMON_SERVICE",
    "SERVICES.SERVICE_NAME": "Authentication Service",
    "SERVICES.VERSIONS.VERSION_NAME": "AuthenticationServiceV6_3"
  },
  {
    $push: { "SERVICES.$[s].VERSIONS.$[v].GROUPS.$[g].CUSTOMERS": new_cust }
  },
  {
    multi: true,
    arrayFilters: [
      { "s.SERVICE_NAME": "Authentication Service" },
      { "v.VERSION_NAME": "AuthenticationServiceV6_3" },
      { "g.GROUP_NAME": "TEST GROUP" }
    ]
  }
);
A few things to note when updating documents with nested arrays of more than one level of nesting:
Use the all positional operator $[] and the filtered positional operator $[<identifier>], not the $ positional operator.
With the filtered positional operator, specify the array filter conditions using the arrayFilters parameter. Note that this will direct your update to target the specific nested array.
For the filtered positional operator $[<identifier>], the identifier must begin with a lowercase letter and contain only alphanumeric characters.
References: Array Update Operators; db.collection.update() with arrayFilters.
Thanks to @prasad_ for providing the query. I was eventually able to convert the query to code with Spring Data MongoTemplate's updateMulti method. I have posted the code below:
Query validationquery = new Query();
validationquery.addCriteria(Criteria.where("SERVICE_CATEGORY").is(servicedto.getService_category())
        .and("SERVICES.SERVICE_NAME").is(servicedetail.getService_name())
        .and("SERVICES.VERSIONS.VERSION_NAME").is(version.getVersion_name()));
Update update = new Update().push("SERVICES.$[s].VERSIONS.$[v].GROUPS.$[].CUSTOMERS", customer)
        .filterArray(Criteria.where("s.SERVICE_NAME").is(servicedetail.getService_name()))
        .filterArray(Criteria.where("v.VERSION_NAME").is(version.getVersion_name()));
mongoOperations.updateMulti(validationquery, update, ServiceRegistrationDTO.class, collection, key, env);

What kind of Java type to pass into Criteria.all()?

I am trying to find a document with an array of tags that matches a list of values,
using MongoDB's $all operator through the Spring Data MongoDB all() method.
Tag is an embedded document with two fields: type and value.
I am not sure what Java type to pass to the method, as it accepts an array of Objects. I tried passing an array of Criteria objects into the function, but the output is below:
Query: { "tags" : { "$all" : [ { "$java" : org.springframework.data.mongodb.core.query.Criteria#5542c4fe }, { "$java" : org.springframework.data.mongodb.core.query.Criteria#5542c4fe } ] } }, Fields: { }, Sort: { }
How should I proceed?
I want to achieve the following:
db.template.find( { tags: { $all: [ {type:"tagOne", value: "valueOne"}, {type:"tagTwo", value: "valueTwo"} ] } } )
Edited for clarity:
The code which I used is similar to:
Query query = new Query(baseCriteria.and("tags").all( criteriaList.toArray()))
The criteria list is formed by:
Criteria criteria = new Criteria();
criteria.and("type").is(tag.getType()).and("value").is(tag.getValue());
criteriaList.add(criteria);
OK, I've just found out: the Java type required is org.bson.Document, which you can get using:
criteria.getCriteriaObject()
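Putting the pieces together, a minimal sketch (assuming the tags come from a List<Tag> named tags and baseCriteria is built elsewhere, as in the question):
import java.util.ArrayList;
import java.util.List;
import org.bson.Document;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;

List<Document> criteriaDocs = new ArrayList<>();
for (Tag tag : tags) {
    Criteria criteria = new Criteria();
    criteria.and("type").is(tag.getType()).and("value").is(tag.getValue());
    // getCriteriaObject() yields the org.bson.Document that $all expects
    criteriaDocs.add(criteria.getCriteriaObject());
}
Query query = new Query(baseCriteria.and("tags").all(criteriaDocs.toArray()));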

Nest DeleteByQuery without the Object name

I want to send a NEST delete-by-query request to Elasticsearch without specifying the document class, which I don't have. I've seen solutions like:
var response = elasticClient.DeleteByQuery<MyClass>(q => q
.Match(m => m.OnField(f => f.Guid).Equals(someObject.Guid))
);
From: DeleteByQuery using NEST and ElasticSearch
As I'm just reading plain text from a queue, I don't have access to the MyClass object to use with the delete request. Basically I just want to delete all documents in an index (whose name I know) where a field matches a value, for example orgId = 1234. Something like:
var response = client.DeleteByQuery<string>( q => q
.Index(indexName)
.AllTypes()
.Routing(route)
.Query(rq => rq
.Term("orgId", "1234"))
);
I see that the NEST IElasticClient interface does have a DeleteByQuery method that doesn't require the mapping object, but I'm just not sure how to implement it.
You can just specify object as the document type T for DeleteByQuery<T> - just be sure to explicitly provide the index name and type name to target in this case. T is used to provide strongly typed access within the body of the request only. For example,
var client = new ElasticClient();
var deleteByQueryResponse = client.DeleteByQuery<object>(d => d
    .Index("index-name")
    .Type("type-name")
    .Query(q => q
        .Term("orgId", "1234")
    )
);
This will generate the following query:
POST http://localhost:9200/index-name/type-name/_delete_by_query
{
  "query": {
    "term": {
      "orgId": {
        "value": "1234"
      }
    }
  }
}
Replace _delete_by_query with _search in the URI first, to ensure you're targeting the expected documents :)
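As a sketch, the equivalent search with the same query (assuming the same index and type names as above) would be:
var searchResponse = client.Search<object>(s => s
    .Index("index-name")
    .Type("type-name")
    .Query(q => q
        .Term("orgId", "1234")
    )
);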
