In Elasticsearch, I have an object that contains an array of objects. Each object in the array has type, id, updateTime, and value fields.
My input parameter is an array that contains objects of the same type but different values and update times. I'd like to update the existing objects with the new value when they exist and create new ones when they don't.
I'd like to use a Painless script to update them while keeping them distinct, as some of them may overlap. The issue is that I need to use both type and id to keep them unique. So far I've done it with a brute-force approach: a nested for loop comparing the elements of both arrays, but I'm not too happy with that.
One idea is to take the array from the source, build a temporary HashMap for fast lookups, process the input, and then store all objects back into the source.
Can I create a HashMap with a custom object (a class with type and id) as the key? If so, how do I do it? I can't add a class definition to the script.
Here's the mapping. All fields are 'disabled' as I use them only as intermediate state and query using other fields.
{
  "properties": {
    "arrayOfObjects": {
      "properties": {
        "typ": {
          "enabled": false
        },
        "id": {
          "enabled": false
        },
        "value": {
          "enabled": false
        },
        "updated": {
          "enabled": false
        }
      }
    }
  }
}
Example doc.
{
  "arrayOfObjects": [
    {
      "typ": "a",
      "id": "1",
      "updated": "2020-01-02T10:10:10Z",
      "value": "yes"
    },
    {
      "typ": "a",
      "id": "2",
      "updated": "2020-01-02T11:11:11Z",
      "value": "no"
    },
    {
      "typ": "b",
      "id": "1",
      "updated": "2020-01-02T11:11:11Z"
    }
  ]
}
And finally, part of the script in its current form. The script does some other things too, so I've stripped them out for brevity.
if (ctx._source.arrayOfObjects == null) {
  ctx._source.arrayOfObjects = new ArrayList();
}
for (def obj : params.inputObjects) {
  def found = false;
  for (def existingObj : ctx._source.arrayOfObjects) {
    // match on both typ and id; only overwrite when the incoming update is newer
    if (obj.typ == existingObj.typ && obj.id == existingObj.id) {
      if (isAfter(obj.updated, existingObj.updated)) {
        existingObj.updated = obj.updated;
        existingObj.value = obj.value;
      }
      found = true;
      break;
    }
  }
  if (!found) {
    ctx._source.arrayOfObjects.add([
      "typ": obj.typ,
      "id": obj.id,
      "value": params.inputValue,
      "updated": obj.updated
    ]);
  }
}
There's technically nothing suboptimal about your approach.
A HashMap could potentially save some time, but since you're scripting, you're already bound to scripting's innate inefficiencies. By the way, here's how you can initialize and work with a HashMap.
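For example, here's a rough sketch of a map-based version of your loop (it assumes your isAfter helper from the stripped-out part of the script, and uses a composite "typ|id" string key, since a Painless script can't define its own classes to use as map keys):
Map lookup = new HashMap();
for (def existing : ctx._source.arrayOfObjects) {
  // index the existing entries by a composite key
  lookup.put(existing.typ + "|" + existing.id, existing);
}
for (def obj : params.inputObjects) {
  def key = obj.typ + "|" + obj.id;
  def existing = lookup.get(key);
  if (existing == null) {
    // not seen before: append it and register it in the lookup map
    def added = ["typ": obj.typ, "id": obj.id, "value": obj.value, "updated": obj.updated];
    ctx._source.arrayOfObjects.add(added);
    lookup.put(key, added);
  } else if (isAfter(obj.updated, existing.updated)) {
    // seen before: only overwrite with newer data
    existing.updated = obj.updated;
    existing.value = obj.value;
  }
}
A two-element list like [obj.typ, obj.id] would also work as a key (Painless lists compare by value), but the string key keeps things simple.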
Another approach would be to rethink your data structure: instead of arrays of objects, use keyed objects or something similar. Arrays of objects aren't great for frequent updates.
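For illustration only (the objectsByKey field name is made up), the same data keyed by "typ|id" could look like this:
{
  "objectsByKey": {
    "a|1": { "updated": "2020-01-02T10:10:10Z", "value": "yes" },
    "a|2": { "updated": "2020-01-02T11:11:11Z", "value": "no" },
    "b|1": { "updated": "2020-01-02T11:11:11Z" }
  }
}
An update then becomes a single map lookup in the script instead of an array scan.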
Finally, a tip: you said these fields are only used to store some intermediate state. If that weren't the case (or won't be in the future), I'd recommend the nested field type so that each object in the array can be queried independently of the others.
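If it ever comes to that, a sketch of a nested mapping could look like this (the concrete field types are assumptions based on your sample values):
"arrayOfObjects": {
  "type": "nested",
  "properties": {
    "typ": { "type": "keyword" },
    "id": { "type": "keyword" },
    "value": { "type": "keyword" },
    "updated": { "type": "date" }
  }
}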
I have a JSON object with the structure below. When looping over key_two I want to create a new object that I will return. The returned object should contain a title with the value from key_one's name where the id of key_one matches the currently looped-over node from key_two.
Both objects contain other keys that will also be included, but the first step I can't figure out is how to grab data from a sibling object while looping and match it against the current value.
{
  "key_one": [
    {
      "name": "some_cool_title",
      "id": "value_one",
      ...
    }
  ],
  "key_two": [
    {
      "node": "value_one",
      ...
    }
  ]
}
This is a good example of a 'join' operation (in SQL terms). JSONata supports this with context variable binding in a path expression. See https://docs.jsonata.org/path-operators#-context-variable-binding
So in your example, you could write:
key_one@$k1.key_two[node = $k1.id].{
  "title": $k1.name
}
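For the sample input above (a single matching pair), this should return something like:
{
  "title": "some_cool_title"
}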
You can then add extra fields into the resulting object by referencing items from either of the original objects. E.g.:
key_one@$k1.key_two[node = $k1.id].{
  "title": $k1.name,
  "other_one": $k1.other_data,
  "other_two": other_data
}
See https://try.jsonata.org/--2aRZvSL
I seem to have found a solution for this.
[key_two].$filter($$.key_one, function($v, $k){
$v.id = node
}).{"title": name ? name : id}
Gives:
[
  { "title": "value_one" },
  { "title": "value_two" },
  { "title": "value_three" }
]
Leaving this here in case someone has a similar issue in the future.
I need to merge two JSON objects based on the first object's keys.
object1 = {
  "params": {
    "type": ["type1", "type2"],
    "requeststate": []
  }
}
object2 = {
  "params": {
    "type": ["type2", "type3", "type4"],
    "requeststate": ["Original", "Revised"],
    "responsestate": ["Approved"]
  }
}
I need to merge the two objects based on the first object's keys, and my output should look like the following:
mergedobject = {
  "params": {
    "type": ["type1", "type2", "type3", "type4"],
    "requeststate": ["Original", "Revised"]
  }
}
I searched for my case and didn't find much detail. Please let me know whether it is possible to do this with a shell script.
My case involves more than 15 keys under params and I can't declare them all explicitly. The list may also grow in the future, and I need to handle that if possible.
Please comment if you need more details. Thanks for your support.
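Not an authoritative answer, but one common way to do this from a shell is with jq. A minimal sketch, assuming jq is installed and the two objects are stored as plain JSON in object1.json and object2.json (file names chosen only for illustration):
jq -s '
  .[1].params as $p2
  | { params: (.[0].params
      | with_entries(.value = ((.value + ($p2[.key] // [])) | unique))) }
' object1.json object2.json
with_entries iterates only over the keys of the first object's params, so extra keys such as responsestate are dropped, keys added later are handled automatically, and unique de-duplicates the merged arrays (it also sorts them).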
All:
I am trying to understand the relationship between Entity, Array and Object:
Are they just different formats for describing different structures of data, or is Entity quite different from the other two?
The normalized result has a structure like {result: ..., entities: ...}. Is only data defined with schema.Entity put inside entities, or can schema.Array and schema.Object end up there too? When I define a schema using only Object and Array, nothing seems to be put into entities, and I am not sure whether my schema definition is at fault or whether this is just how normalizr works.
If only data defined with schema.Entity() can go into entities, how can I put an array of data into it, as something like {0: ..., 1: ..., 2: ...}?
For example, I have data like:
var data = [
  {
    id: "0",
    items: [
      {
        id: "0",
        data: { name: "data-0-0" }
      },
      {
        id: "1",
        data: { name: "data-0-1" }
      }
    ]
  },
  {
    id: "1",
    items: [
      {
        id: "0",
        data: { name: "data-1-0" }
      },
      {
        id: "1",
        data: { name: "data-1-1" }
      }
    ]
  }
]
const normalizedData = normalize(data, [{items:[{data:{}}]}]);
And the normalized data is like:
{
  "entities": {},
  "result": {
    "0": {
      "id": "0",
      "items": [
        {
          "id": "0",
          "data": {
            "name": "data-1-0"
          }
        }
      ]
    }
  }
}
Thanks
Question: Are they just different formats for describing different structures of data, or is Entity quite different from the other two?
Answer: An Entity is a singular object that has a unique identifier associated with it. Array and Object are more generic structures that can't be uniquely identified. In your case, it looks like you only need Array and Entity for the data you're describing.
Question: Is only data defined with schema.Entity put inside entities?
Answer: Yes.
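For the data variable in the question, a sketch of an Entity/Array-based schema might look like this (the entity names "items" and "parents" are my own, not from the question):
import { normalize, schema } from "normalizr";

// each inner item becomes an entity keyed by its id
const item = new schema.Entity("items");
// each top-level object is an entity whose "items" field is an array of item entities
const parent = new schema.Entity("parents", {
  items: [item] // shorthand for new schema.Array(item)
});

const normalizedData = normalize(data, [parent]);
// normalizedData.entities.parents and normalizedData.entities.items are keyed by id,
// and normalizedData.result is the list of top-level ids, e.g. ["0", "1"]
One caveat: entities of the same type are merged by id, so with the sample data (where both parents contain items with ids "0" and "1") the later items would overwrite the earlier ones. In practice the ids need to be unique per entity type, or you need an idAttribute that makes them unique.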
I'm trying to decide on the best format of response for my API. I need to return a reports response which provides information on the report itself and the fields contained on it. Fields can be of differing types, so there can be SelectList, TextArea, Location, etc.
They each use different properties, so "SelectList" might use "Value" to store its string value and "Location" might use "ChildItems" to hold "Longitude", "Latitude", etc.
Here's what I mean:
"ReportList": [
{
"Fields": [
{
"Id": {},
"Label": "",
"Value": "",
"FieldType": "",
"FieldBankFieldId": {},
"ChildItems": [
{
"Item": "",
"Value": ""
}
]
}
]
}
The problem with this is that I'm expecting the users to know when a value is supposed to be null. So I'm expecting a person looking to extract the value from "Location" to extract it from "ChildItems" and not "Value". The benefit, however, is that it's much easier to query than the alternative, which is the following:
"ReportList": [
{
"Fields": [
{
"SelectList": [
{
"Id": {},
"Label": "",
"Value": "",
}
]
"Location": [
{
"Id": {},
"Label": "",
"Latitude": "",
"Longitude": "",
"etc": "",
}
]
}
]
}
So this one is a report list that contains a list of fields, which in turn contains a list for every field type I have (15 or so). This is opposed to just having a list of reports with a list of fields and a "FieldType" enum, which I think is fairly easy to manipulate.
So the Question: Which format is best for a response? Any alternatives and comments appreciated.
EDIT:
To query all fields by field type in a report and get their values, the first way would go something like this:
foreach (field in fields)
{
  switch (field.fieldType)
  {
    case FieldType.Location:
      var locationValue = field.childitems;
      break;
    case FieldType.SelectList:
      var valueselectlist = field.Value;
      break;
  }
}
The second one would be like:
foreach (field in fields)
{
  foreach (location in field.Locations)
  {
    var latitude = location.Latitude;
  }
  foreach (selectList in field.SelectLists)
  {
    var value = selectList.Value;
  }
}
I think the right answer is the first one, with the switch statement. It makes it easier to query for things like "get me the value of the field with this GUID as its id"; it just means putting it through a big switch statement.
I went with the first one because it's easier to query for the most common use case. I expect the client code to map it into its own schema if it wants to change it.
I am trying to learn CouchDB and have a very, very newbie question. I have the following two documents:
{
  "type": "type1",
  "code": "10",
  "name": "ten"
},
{
  "type": "type2",
  "code": "20",
  "name": "twenty"
}
I have created a view as follows:
function(doc) {
  emit(doc.type, {"code": doc.code, "name": doc.name});
}
The above function works fine, but I would like to get the key names programmatically instead of writing them out, as in the following example, which doesn't work:
function(doc) {
  emit(doc.type, {key(doc.code): doc.code, key(doc.name): doc.name});
}
How do I do that?
Simple solution
I'm not sure this is what you're after but you can do this:
function(doc) {
  emit(doc.type, doc);
}
Then all the fields (including type but also _id, _rev…) are available without having to type them explicitly.
Full solution
key(doc.code):doc.code does not look better than "code":doc.code to me, but if you really want to avoid duplication, you can do:
function(doc) {
  var elem = {}, keys = ["code", "name"];
  for (var i in keys) {
    elem[keys[i]] = doc[keys[i]];
  }
  emit(doc.type, elem);
}
It seems overkill unless you have a long list of keys.