Merge JSON arrays using a shell script

I need to merge two JSON objects based on the first object's keys.
object1 = {
"params" : {
"type": ["type1", "type2"],
"requeststate": []
}
}
object2 = {
"params" : {
"type": ["type2", "type3", "type4"],
"requeststate": ["Original", "Revised" ],
"responsestate": ["Approved" ]
}
}
I need to merge the two objects based on the first object's keys, and my output should look like this:
mergedobject = {
"params" : {
"type": ["type1", "type2", "type3", "type4"],
"requeststate": ["Original", "Revised"]
}
}
I searched for my case and didn't find much detail. Please let me know whether it is possible to do this with a shell script.
My case involves more than 15 keys under the params object, and I can't declare them all explicitly. The list may also grow in the future, and I need to handle that if possible.
Please comment if you need more details. Thanks for your support.
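One way to do this from a shell script is with jq, assuming that tool is available (the file names object1.json and object2.json below are just for illustration). The idea is to walk the keys of the first object's params, append the matching array from the second object (or an empty array if the key is absent there), and de-duplicate; keys that only exist in the second object, such as responsestate, are dropped. A minimal sketch, noting that jq's unique also sorts each merged array:

# keep only object1's params keys; union each array with object2's
jq -s '
  .[0].params as $a
  | .[1].params as $b
  | { params: ($a | with_entries(.value = ((.value + ($b[.key] // [])) | unique))) }
' object1.json object2.json

Because the merge is driven by whatever keys happen to exist under the first object's params, nothing has to be declared by hand, and it keeps working as the number of keys grows.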

Related

Match keys with a sibling object in JSONata

I have a JSON object with the structure below. When looping over key_two I want to create a new object that I will return. The returned object should contain a title with the value of key_one's name where the id of key_one matches the node currently being looped over in key_two.
Both objects contain other keys that will also be included, but the first step I can't figure out is how to grab data from a sibling object while looping and match it against the current value.
{
"key_one": [
{
"name": "some_cool_title",
"id": "value_one",
...
}
],
"key_two": [
{
"node": "value_one",
...
}
],
}
This is a good example of a 'join' operation (in SQL terms). JSONata supports this in a path expression. See https://docs.jsonata.org/path-operators#-context-variable-binding
So in your example, you could write:
key_one@$k1.key_two[node = $k1.id].{
"title": $k1.name
}
You can then add extra fields into the resulting object by referencing items from either of the original objects. E.g.:
key_one@$k1.key_two[node = $k1.id].{
"title": $k1.name,
"other_one": $k1.other_data,
"other_two": other_data
}
See https://try.jsonata.org/--2aRZvSL
I seem to have found a solution for this.
[key_two].$filter($$.key_one, function($v, $k){
$v.id = node
}).{"title": name ? name : id}
Gives:
[
{
"title": "value_one"
},
{
"title": "value_two"
},
{
"title": "value_three"
}
]
Leaving this here in case someone has a similar issue in the future.

How to create a HashMap with a custom object as a key?

In Elasticsearch, I have an object that contains an array of objects. Each object in the array has type, id, updateTime, and value fields.
My input parameter is an array that contains objects of the same type but different values and update times. I'd like to update the objects with the new value when they exist and create new ones when they don't.
I'd like to use a Painless script to update them but keep them distinct, as some of them may overlap. The issue is that I need to use both type and id to keep them unique. So far I've done it with a brute-force approach, a nested for loop comparing elements of both arrays, but I'm not too happy about that.
One of the ideas is to take the array from _source, build a temporary HashMap for fast lookup, process the input, and later store all objects back into _source.
Can I create a HashMap with a custom object (a class with type and id) as a key? If so, how do I do it? I can't add a class definition to the script.
Here's the mapping. All fields are 'disabled' as I use them only as intermediate state and query using other fields.
{
"properties": {
"arrayOfObjects": {
"properties": {
"typ": {
"enabled": false
},
"id": {
"enabled": false
},
"value": {
"enabled": false
},
"updated": {
"enabled": false
}
}
}
}
}
Example doc:
{
"arrayOfObjects": [
{
"typ": "a",
"id": "1",
"updated": "2020-01-02T10:10:10Z",
"value": "yes"
},
{
"typ": "a",
"id": "2",
"updated": "2020-01-02T11:11:11Z",
"value": "no"
},
{
"typ": "b",
"id": "1",
"updated": "2020-01-02T11:11:11Z"
}
]
}
And finally, part of the script in its current form. The script does some other things too, so I've stripped them out for brevity.
if (ctx._source.arrayOfObjects == null) {
  ctx._source.arrayOfObjects = new ArrayList();
}
for (obj in params.inputObjects) {
  def found = false;
  for (existingObj in ctx._source.arrayOfObjects) {
    if (obj.typ == existingObj.typ && obj.id == existingObj.id && isAfter(obj.updated, existingObj.updated)) {
      existingObj.updated = obj.updated;
      existingObj.value = obj.value;
      found = true;
      break;
    }
  }
  if (!found) {
    ctx._source.arrayOfObjects.add([
      "typ": obj.typ,
      "id": obj.id,
      "value": params.inputValue,
      "updated": obj.updated
    ]);
  }
}
There's technically nothing suboptimal about your approach.
A HashMap could potentially save some time, but since you're scripting, you're already bound to its innate inefficiencies. By the way, you can initialize and work with a HashMap directly in Painless.
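As a rough sketch of that idea (not the poster's actual update call): because a Painless script can't declare its own classes, a composite string key such as typ + "|" + id stands in for a custom key object. The index name, document id, endpoint shape (which varies with the Elasticsearch version), and input values below are made up, and the recency check is a plain lexical compareTo on the ISO-8601 strings instead of the original isAfter helper.

# update one document in place; the script keys the existing objects by "typ|id"
curl -s -X POST 'localhost:9200/my-index/_update/1' -H 'Content-Type: application/json' --data-binary @- <<'EOF'
{
  "script": {
    "lang": "painless",
    "source": "if (ctx._source.arrayOfObjects == null) { ctx._source.arrayOfObjects = new ArrayList(); } Map lookup = new HashMap(); for (o in ctx._source.arrayOfObjects) { lookup.put(o.typ + \"|\" + o.id, o); } for (obj in params.inputObjects) { def existing = lookup.get(obj.typ + \"|\" + obj.id); if (existing == null) { ctx._source.arrayOfObjects.add([\"typ\": obj.typ, \"id\": obj.id, \"value\": obj.value, \"updated\": obj.updated]); } else if (obj.updated.compareTo(existing.updated) > 0) { existing.updated = obj.updated; existing.value = obj.value; } }",
    "params": {
      "inputObjects": [
        { "typ": "a", "id": "1", "updated": "2020-03-03T09:00:00Z", "value": "maybe" }
      ]
    }
  }
}
EOF

With the map built once, each input object becomes a single lookup instead of another pass over the array.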
Another approach would be to rethink your data structure -- instead of arrays of objects use keyed objects or similar. Arrays of objects aren't great for frequent updates.
Finally, a tip: you said that these fields are only used to store some intermediate state. If that weren't the case (or won't be in the future), I'd recommend mapping the array as nested so that each object can be queried independently of the other objects in the array.
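To illustrate that last tip, a mapping sketch under assumed names and field types (Elasticsearch 7+ style): with "type": "nested", each element of arrayOfObjects is indexed as its own hidden document, so a query can match typ and id within the same element.

# create the index with arrayOfObjects mapped as nested
curl -s -X PUT 'localhost:9200/my-index' -H 'Content-Type: application/json' -d '
{
  "mappings": {
    "properties": {
      "arrayOfObjects": {
        "type": "nested",
        "properties": {
          "typ":     { "type": "keyword" },
          "id":      { "type": "keyword" },
          "value":   { "type": "keyword" },
          "updated": { "type": "date" }
        }
      }
    }
  }
}'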

How to access a parent node from a child node

How do I access a parent object from a child node? It seems that I can't access that scope.
This is the source JSON:
{
"content" : {
"date" : "2019-02-10T02:40:48Z",
"production" : {
"productionId" : "918",
}
}
}
This is my JSONata:
{
"productionType": "specificProducts",
"products": [
content.production.(
{"usedProducts" : {
"id" = productionId,
"productDate" = content.date // how do I access content
}
})
]
}
Do I have to save "content" in some kind of variable and pass it to the child?
The answer is $$.content.date.
Here is the documentation for it:
https://docs.jsonata.org/programming#built-in-variables
{
"productionType": "specificProducts",
"products": [
content.production.(
{"usedProducts" : {
"id" = productionId,
"productDate" = $$.content.date
}
})
]
}
Another solution is to not dive down into the production element until you want to access its 'productionId' property -- like this:
{
"productionType": "specificProducts",
"products": [
content.{
"usedProducts": {
"id": production.productionId,
"productDate": date
}
}
]
}
Then you can just access the 'date' property in the context of its parent content object.
Of course, these answers may or may not work as expected if the source object is more deeply nested or contains arrays of child objects...
But to answer your original question: no, in JSONata, elements cannot know what "path" was used to dereference them. IIRC, it was a conscious design decision to ensure maximum flexibility and speed.
Use the % symbol to access the parent node from the context of a child node. You can use %.% to access the grandparent node, and so on.
You can read more about it in the documentation here: https://docs.jsonata.org/path-operators
This might be what you were trying to accomplish. Since the query content.production returns an array, your query had to be adjusted slightly.
{
"productionType": "specificProducts",
"products": [
{
"usedProducts": [
content.production.{
"id": $.productionId,
"productDate": %.date
}
]
}
]
}

Elasticsearch: conditionally sort on 2 fields, 1 replaces the other if it exists

Without scripting, I need to sort records based on rating. The system rating exists for all records, but a user rating may or may not exist. If a user rating does exist, I want to use that value in the sort instead of the system rating, for that particular record and only for that record.
I tried looking into the missing setting, but it only allows _first, _last, or a custom value (which will be used as the sort value for missing docs):
{
"sort" : [
{ "user_rating" : {"missing" : "_last"} },
],
"query" : {
"term" : { "meal" : "cabbage" }
}
}
...but is there a way to specify that the custom value should be system_rating when user_rating is missing?
I can do the following:
query_hash[:sort] = []
if user_rating.exist?
  query_hash[:sort] << {
    "user_rating" => {
      "order": sort_direction,
      "unmapped_type": "long",
      "missing": "_last",
    }
  }
end
query_hash[:sort] << {
  "system_rating" => {
    "order": sort_direction,
    "unmapped_type": "long",
  }
}
...but that will always sort user-rated records on top, regardless of the user_rating value.
I know that scripting will allow me to do it but we cannot use scripting. Is it possible?
The only way is scripting, or building a custom field at indexing time that already contains the value you want to sort on.
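A rough sketch of the second option (index, field names, and values are illustrative): have the indexing code write an effective_rating that equals the user rating when one exists and the system rating otherwise, then sort on that single field with no script and no missing handling.

# index time: the application picks user_rating if present, else system_rating
curl -s -X PUT 'localhost:9200/meals/_doc/1' -H 'Content-Type: application/json' -d '
{ "meal": "cabbage", "system_rating": 3, "user_rating": 5, "effective_rating": 5 }'
curl -s -X PUT 'localhost:9200/meals/_doc/2' -H 'Content-Type: application/json' -d '
{ "meal": "cabbage", "system_rating": 4, "effective_rating": 4 }'

# query time: an ordinary sort on the precomputed field
curl -s -X GET 'localhost:9200/meals/_search' -H 'Content-Type: application/json' -d '
{
  "query": { "term": { "meal": "cabbage" } },
  "sort": [ { "effective_rating": { "order": "desc" } } ]
}'

The trade-off is that a change to either rating has to reindex the document so that effective_rating stays in sync.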

Which is the better design for this API response?

I'm trying to decide on the best response format for my API. I need to return a reports response which provides information on the report itself and the fields contained on it. Fields can be of differing types, so there can be SelectList, TextArea, Location, etc.
They each use different properties, so "SelectList" might use "Value" to store its string value and "Location" might use "ChildItems" to hold "Longitude", "Latitude", etc.
Here's what I mean:
"ReportList": [
{
"Fields": [
{
"Id": {},
"Label": "",
"Value": "",
"FieldType": "",
"FieldBankFieldId": {},
"ChildItems": [
{
"Item": "",
"Value": ""
}
]
}
]
}
The problem with this is that I'm expecting the users to know when a value is supposed to be null. So I'm expecting a person looking to extract the value of "Location" to extract it from "ChildItems" and not "Value". The benefit, however, is that it's much easier to query than the alternative, which is the following:
"ReportList": [
{
"Fields": [
{
"SelectList": [
{
"Id": {},
"Label": "",
"Value": "",
}
]
"Location": [
{
"Id": {},
"Label": "",
"Latitude": "",
"Longitude": "",
"etc": "",
}
]
}
]
}
So this one is a report list that contains a list of fields, which in turn contains a list for every field type I have (15 or so). This is opposed to just having a list of reports with a list of fields and a "FieldType" enum, which I think is fairly easy to manipulate.
So the question: which format is best for a response? Alternatives and comments appreciated.
EDIT:
To query all fields by field type in a report and get their values, the first way would go something like this:
foreach(field in fields)
{
    switch(field.fieldType)
    {
        case FieldType.Location:
            var locationValue = field.ChildItems;
            break;
        case FieldType.SelectList:
            var valueSelectList = field.Value;
            break;
    }
}
The second one would be like:
foreach(field in fields)
{
    foreach(location in field.Locations)
    {
        var latitude = location.Latitude;
    }
    foreach(selectList in field.SelectLists)
    {
        var value = selectList.Value;
    }
}
I think the right answer is the first one, with the switch statement. It makes it easier to query for things like "get me the value of the field with this GUID id"; it just means putting it through a big switch statement.
I went with the first one because it's easier to query for the most common use case. I expect the client code to map it into its own schema if it wants to change it.
