Write code in ESQL to print the following structure, getting data from a database - ibm-integration-bus

The output I am expecting is:
{ "a":1, "b": "string", "c":2, "d": "string", "e": 3, "f":[ { "g":4, "h": "string" } ] }
The problem is that the output has no root element at the start. When I loop with a FOR loop the values get overwritten, and when I use Item[itemCount] instead, the output wraps everything in an "Item" element, as shown below.
Code:
SET resultSet.rec[] = PASSTHRU(sqlQuery);
DECLARE itemCount INTEGER 1;
FOR dataref AS resultSet.rec[] DO
DECLARE inRef REFERENCE TO resultSet.rec[itemCount];
SET OutputRoot.JSON.Data.Item[itemCount].a= inRef.a;
SET OutputRoot.JSON.Data.Item[itemCount].b= inRef.b;
SET OutputRoot.JSON.Data.Item[itemCount].c= inRef.c;
SET OutputRoot.JSON.Data.Item[itemCount].d= inRef.d;
SET OutputRoot.JSON.Data.Item[itemCount].e= inRef.e; --year 5
CREATE FIELD OutputRoot.JSON.Data.f IDENTITY(JSON.Array)f;
SET OutputRoot.JSON.Data.f.Item[itemCount].g= inRef.g;
SET OutputRoot.JSON.Data.f.Item[itemCount].h= inRef.h;
SET itemCount = itemCount+1;
END FOR;
Then I am getting this result:
{ "Item": { "a":1, "b": "string", "c":2, "d": "string", "e": 3 }, "f":[ { "g":4, "h": "string" } ] }
My new code works for the first iteration but overwrites the values on the second iteration:
SET resultSet.rec[] = PASSTHRU(sqlQuery);
DECLARE itemCount INTEGER 1;
FOR dataref AS resultSet.rec[] DO
DECLARE inRef REFERENCE TO resultSet.rec[itemCount];
SET OutputRoot.JSON.Data.a= inRef.a;
SET OutputRoot.JSON.Data.b= inRef.b;
SET OutputRoot.JSON.Data.c= inRef.c;
SET OutputRoot.JSON.Data.d= inRef.d;
SET OutputRoot.JSON.Data.e= inRef.e; --year 5
CREATE FIELD OutputRoot.JSON.Data.f IDENTITY(JSON.Array)f;
SET OutputRoot.JSON.Data.f.Item.g= inRef.g;
SET OutputRoot.JSON.Data.f.Item.h= inRef.h;
SET itemCount = itemCount+1;
END FOR;

If I understand you correctly, you want a JSON array as the result.
You can do it like this:
CREATE FIELD OutputRoot.JSON.Data IDENTITY(JSON.Array)Data;
DECLARE outRef REFERENCE TO OutputRoot;
FOR inRef AS resultSet.rec[] DO
CREATE LASTCHILD OF OutputRoot.JSON.Data AS outRef NAME 'Item';
SET outRef.a = inRef.a;
SET outRef.b = inRef.b;
SET outRef.c = inRef.c;
SET outRef.d = inRef.d;
SET outRef.e = inRef.e;
CREATE FIELD outRef.f IDENTITY(JSON.Array)f;
SET outRef.f.Item.g= inRef.g;
SET outRef.f.Item.h= inRef.h;
END FOR;
This will produce the following JSON response:
[
{
"a": 1,
"b": "string",
"c": 2,
"d": "string",
"e": 3,
"f": [
{
"g": 4,
"h": "string"
}
]
},
{
"a": 5,
"b": "string",
"c": 6,
"d": "string",
"e": 7,
"f": [
{
"g": 8,
"h": "string"
}
]
}
]
Use CREATE LASTCHILD with a reference variable to avoid navigating with array subscripts ([]): each subscripted access such as rec[n] is expensive in terms of performance, because the message tree is walked from the first element on every lookup. See the ESQL array processing documentation.
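For readers less familiar with ESQL, the mapping the loop performs can be mirrored in plain JavaScript (illustrative only; the field names a-h and the two sample rows follow the question's expected output):

```javascript
// Simulated result-set rows, as the PASSTHRU query might return them.
const rows = [
  { a: 1, b: "string", c: 2, d: "string", e: 3, g: 4, h: "string" },
  { a: 5, b: "string", c: 6, d: "string", e: 7, g: 8, h: "string" },
];

// One output element per row; g and h are nested under the "f" array,
// mirroring CREATE LASTCHILD ... NAME 'Item' plus the inner JSON.Array field.
const data = rows.map(({ a, b, c, d, e, g, h }) => ({
  a, b, c, d, e,
  f: [{ g, h }],
}));

console.log(JSON.stringify(data));
```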

Related

JSONata array to array manipulation with mapping

I need to transform an array into another array with some extra logic:
Map each field name through the mapping object if it exists there; if not, process it as it is in the source
Sum up the values of objects with the same name
Remove objects with a zero value
For example, here is the source JSON:
{
"additive": [
{
"name": "field-1",
"volume": "10"
},
{
"name": "field-2",
"volume": "10"
},
{
"name": "field-3",
"volume": "0"
},
{
"name": "field-4",
"volume": "5"
}
]
}
Object with the mapping config (field-1 and field-2 are mapped to the same value):
{
"field-1": "field-1-mapped",
"field-2": "field-1-mapped",
"field-3": "field-3-mapped"
}
and this is the result that I need to have
{
"chemicals": [
{
"name": "field-1-mapped",
"value": 20
},
{
"name": "field-4",
"value": 5
}
]
}
As you can see, field-1 and field-2 are both mapped to field-1-mapped, so their values are summed up; field-3 has a value of 0, so it is removed; and field-4 is passed through as is because it is missing from the mapping.
so my question is: is it possible to make it with JSONata?
I have tried to make it work, but I am stuck: the $lookup function doesn't return a default value when the name is missing from the mapping:
{
"chemicals": additive # $additive #$.{
"name": $res := $lookup({
"field-1": "field-1-mapped",
"field-2": "field-1-mapped",
"field-3": "field-3-mapped"
}, $additive.name)[ $res ? $res : $additive.name],
"amount": $number($additive.volume),
} [amount>0]
}
Probably easiest to break it down into steps as follows:
(
/* define lookup table */
$table := {
"field-1": "field-1-mapped",
"field-2": "field-1-mapped",
"field-3": "field-3-mapped"
};
/* substitute the name; if it's not in the table, just use the name */
$mapped := additive.{
"name": [$lookup($table, name), name][0],
"volume": $number(volume)
};
/* group by name, and aggregate the volumes */
$grouped := $mapped[volume > 0]{name: $sum(volume)};
/* convert back to array */
{
"chemicals": $each($grouped, function($v, $n) {{
"name": $n,
"volume": $v
}})
}
)
See https://try.jsonata.org/0BWeRcRoZ
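The same three steps (substitute, group and sum, convert back) can also be sketched in plain JavaScript, which may help when debugging the JSONata version; the mapping table and field names are taken from the question:

```javascript
const table = {
  "field-1": "field-1-mapped",
  "field-2": "field-1-mapped",
  "field-3": "field-3-mapped",
};

const source = {
  additive: [
    { name: "field-1", volume: "10" },
    { name: "field-2", volume: "10" },
    { name: "field-3", volume: "0" },
    { name: "field-4", volume: "5" },
  ],
};

// Substitute the name, falling back to the original when unmapped,
// then group by the (mapped) name and sum the positive volumes.
const sums = {};
for (const { name, volume } of source.additive) {
  const mapped = table[name] ?? name; // lookup with a default
  const v = Number(volume);
  if (v > 0) sums[mapped] = (sums[mapped] ?? 0) + v;
}

// Convert the grouped object back to an array.
const result = {
  chemicals: Object.entries(sums).map(([name, value]) => ({ name, value })),
};
```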

Transform array of values to array of key value pair

I have JSON data in the form of a key with all of its values in an array, but I need to transform it into an array of key-value pairs. Here is the data:
Source data
{
"2022-08-30T06:58:56.573730Z": [
{ "tag": "AC 3 Phase/7957", "value": 161.37313113545272 },
{ "tag": "AC 3 Phase/7956", "value": 285.46869739695853 }
]
}
The transformation I am looking for:
[
{ "tag": "AC 3 Phase/7957",
"ts": "2022-08-30T06:58:56.573730Z",
"value": 161.37313113545272
},
{ "tag": "AC 3 Phase/7956",
"ts": "2022-08-30T06:58:56.573730Z",
"value": 285.46869739695853
}
]
I would do it like this:
$each($$, function($entries, $ts) {
$entries.{
"tag": tag,
"ts": $ts,
"value": value
}
}) ~> $reduce($append, [])
Feel free to play with this example on the playground: https://stedi.link/g6qJGcP
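An equivalent in plain JavaScript uses Object.entries plus flatMap over the top-level keys (the payload below is the sample from the question):

```javascript
const source = {
  "2022-08-30T06:58:56.573730Z": [
    { tag: "AC 3 Phase/7957", value: 161.37313113545272 },
    { tag: "AC 3 Phase/7956", value: 285.46869739695853 },
  ],
};

// For each timestamp key, spread its entries and inject the key as "ts".
const result = Object.entries(source).flatMap(([ts, entries]) =>
  entries.map(({ tag, value }) => ({ tag, ts, value }))
);
```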

How to access json fields with Jolt Transform?

How do I access json fields with Jolt transform?
For example I have this json:
{
"a": 110,
"b": 10
}
I would like to have:
{
"a": 110,
"b": 10,
"c": 100 // 110 - 10 (subtraction)
}
The following transformation will add a c variable which is set to a - b:
[
{
"operation": "shift",
"spec": {
"a": "a",
"b": "b"
}
},
{
"operation": "modify-default-beta",
"spec": {
"c": "=intSubtract(#(1,a), #(1,b))"
}
}
]
If you wish to test it, the Jolt demo website is an excellent resource. Put your original JSON into the "JSON Input" box:
{
"a": 110,
"b": 10
}
Then place the transformation spec from the top of this answer into the "JOLT Spec" box and hit the Transform button. The result should be as you desired:
{
"a" : 110,
"b" : 10,
"c" : 100
}
You can just use a single modify-overwrite-beta transformation along with the intSubtract function in order to add an extra element to the current JSON value, such as:
[
{
"operation": "modify-overwrite-beta",
"spec": {
"c": "=intSubtract(#(1,a),#(1,b))"
}
}
]
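If you want to sanity-check the arithmetic outside of Jolt, the operation the spec performs reduces to a one-liner; for example, in JavaScript:

```javascript
// The Jolt spec computes c = a - b while keeping a and b in place.
const input = { a: 110, b: 10 };
const output = { ...input, c: input.a - input.b };
```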

Ruby iterate over an array of hashes

I have the hash below.
I want to add a new key-value pair to each of the inner hashes whose keys are listed in the "all" array. Is there a better way of looping through than what I am doing currently?
stack = {
"all": [
"mango",
"apple",
"banana",
"grapes"
],
"mango": {
"TYPE": "test",
"MAX_SIZE": 50,
"REGION": "us-east-1"
},
"apple": {
"TYPE": "dev",
"MAX_SIZE": 55,
"REGION": "us-east-1"
},
"banana": {
"TYPE": "test",
"MAX_SIZE": 60,
"REGION": "us-east-1"
},
"grapes": {
"TYPE": "dev",
"MAX_SIZE": 80,
"REGION": "us-east-1"
},
"types": [
"dev",
"test"
]
}
My code:
stack['all'].each do |fruit|
stack[fruit].each do |fruit_name|
stack[fruit_name]['COUNT'] = stack[fruit_name]['MAX_SIZE'] * 2
end
end
Expected output:
stack = {
"all": [
"mango",
"apple",
"banana",
"grapes"
],
"mango": {
"TYPE": "test",
"MAX_SIZE": 50,
"REGION": "us-east-1",
"COUNT" : 100
},
"apple": {
"TYPE": "dev",
"MAX_SIZE": 55,
"REGION": "us-east-1",
"COUNT" : 110
},
"banana": {
"TYPE": "test",
"MAX_SIZE": 60,
"REGION": "us-east-1",
"COUNT" : 120
},
"grapes": {
"TYPE": "dev",
"MAX_SIZE": 80,
"REGION": "us-east-1",
"COUNT" : 160
},
"types": [
"dev",
"test"
]
}
There is no need for the second loop. The following does what you want:
keys = stack[:all].map(&:to_sym)
keys.each do |key|
stack[key][:COUNT] = stack[key][:MAX_SIZE] * 2
end
In the code block above, stack[:all] returns an array of keys as strings, and .map(&:to_sym) converts each string in the resulting array into a symbol.
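As a self-contained sketch (with the stack trimmed to two fruits for brevity; the COUNT values follow the MAX_SIZE * 2 rule from the question):

```ruby
stack = {
  all: ["mango", "apple"],
  mango: { TYPE: "test", MAX_SIZE: 50, REGION: "us-east-1" },
  apple: { TYPE: "dev",  MAX_SIZE: 55, REGION: "us-east-1" }
}

# The :all array holds strings, so convert each one to a symbol for lookup.
keys = stack[:all].map(&:to_sym)
keys.each do |key|
  stack[key][:COUNT] = stack[key][:MAX_SIZE] * 2
end
```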
Another way to achieve the same result is to use either fetch_values or values_at to retrieve an array of values belonging to the provided keys. The difference is that fetch_values raises an exception if a key is missing, while values_at returns nil for that key.
fruits = stack.fetch_values(*stack[:all].map(&:to_sym))
fruits.each do |fruit|
fruit[:COUNT] = fruit[:MAX_SIZE] * 2
end
If you are wondering why there is a * before stack[:all].map(&:to_sym), it is there to convert the array into individual arguments. In this context * is called the splat operator.
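A small example of the difference between the two methods (the hash and keys here are illustrative):

```ruby
h = { a: 1, b: 2 }

# values_at returns nil for missing keys
vals = h.values_at(:a, :missing)

# fetch_values raises KeyError for a missing key instead
err = begin
  h.fetch_values(:a, :missing)
  nil
rescue KeyError => e
  e
end
```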
You might write the code as follows.
stack[:all].each do |k|
h = stack[k.to_sym]
h[:COUNT] = 2*h[:MAX_SIZE] unless h.nil?
end
When, for example, k = "mango":
h #=> {:TYPE=>"test", :MAX_SIZE=>50, :REGION=>"us-east-1", :COUNT=>100}
I've defined the local variable h for three reasons:
it simplifies the code by avoiding multiple references to stack[k.to_sym]
when debugging it may be helpful to be able to examine h
it makes the code more readable
Note that h merely holds an existing hash; it does not create a copy of that hash, so it has a negligible effect on memory requirements.
The technique of defining local variables to hold objects that are parts of other objects is especially useful for more complex objects. Suppose, for example, we had the hash
hash = {
cat: { sound: "purr", lives: 9 },
dog: { sound: "woof", toys: ["ball", "rope"] }
}
Now suppose we wish to add a dog toy
new_toy = "frisbee"
if it is not already present in the array
hash[:dog][:toys]
We could write
hash[:dog][:toys] << new_toy unless hash[:dog][:toys].include?(new_toy)
#=> ["ball", "rope", "frisbee"]
hash
#=> {:cat=>{:sound=>"purr", :lives=>9},
# :dog=>{:sound=>"woof", :toys=>["ball", "rope", "frisbee"]}}
Alternatively, we could write
dog_hash = hash[:dog]
#=> {:sound=>"woof", :toys=>["ball", "rope"]}
dog_toys_arr = dog_hash[:toys]
#=> ["ball", "rope"]
dog_toys_arr << new_toy unless dog_toys_arr.include?(new_toy)
#=> ["ball", "rope", "frisbee"]
hash
#=> {:cat=>{:sound=>"purr", :lives=>9},
# :dog=>{:sound=>"woof", :toys=>["ball", "rope", "frisbee"]}}
Not only does the latter snippet display intermediate results, it probably is a wash with the first snippet in terms of execution speed and storage requirements and arguably is more readable. It also cuts down on careless mistakes such as
hash[:dog][:toys] << new_toy unless hash[:dog][:toy].include?(new_toy)
If one element of stack[:all] were, for example, "pineapple", stack[:pineapple] #=> nil since stack has no key :pineapple. If, however, stack contained the key-value pair
nil=>{ sound: "woof", toys: ["ball", "rope"] }
that would become a problem. Far-fetched? Maybe, but it is perhaps good practice--in part for readability--to avoid assuming that h[k] #=> nil means h has no key k; instead, test with h.key?(k). For example:
stack[:all].each do |k|
key = k.to_sym
if stack.key?(key)
h = stack[key]
h[:COUNT] = 2*h[:MAX_SIZE]
end
end

Algorithm for calculating position numbers for nested list elements

I am working with nested sets. I have elements with level, parent, left and right fields, and I have to calculate numbers representing each element's position in the whole set.
Something like this:
1. a
1.1. b
1.2. c
1.3. d
1.3.1. e
1.3.2. f
2. g
2.1. h
Any ideas for the algorithm?
I'll assume you have your items stored in a dictionary/hashmap, keyed by the item's id ("a", "b", ...). If you have parent and left as properties, then values for level and right are implied, so I'll limit myself to objects with only parent and left properties. These have as value an id ("a", "b", ...) with which the corresponding item can be looked up in the dictionary.
So the data could look like this (using JSON syntax):
{
"a": { "parent": null, "left": null },
"b": { "parent": "a", "left": null },
"c": { "parent": "a", "left": "b" },
"d": { "parent": "a", "left": "c" },
"e": { "parent": "d", "left": null },
"f": { "parent": "d", "left": "e" },
"g": { "parent": null, "left": "a" },
"h": { "parent": "g", "left": null }
}
The idea is to use recursion to resolve the numbering for the node at the left (if it exists), and then we can copy that numbering and increment the last part of it.
If there is no left node, then we use recursion to find the numbering of the parent node (if it has one), and then we can copy that numbering and append a 1 to it.
If there is neither a left nor a parent node, then we know that this item must have 1 as its numbering.
To ease the processing, I have not used a string representation (like "1.3.2"), but an array of numbers ([1, 3, 2]).
Here is a JavaScript implementation:
function resolve(node) {
if ("heading" in node) return; // already numbered
if (node.left != null) {
let left = data[node.left];
resolve(left);
node.heading = left.heading.slice(); // copy
node.heading[node.heading.length - 1]++;
} else if (node.parent != null) {
let parent = data[node.parent];
resolve(parent);
node.heading = parent.heading.slice(); // copy
node.heading.push(1);
} else {
node.heading = [1];
}
}
function addHeadings(data) {
for (let node of Object.values(data)) resolve(node);
}
// Sample data
let data = {
"a": { parent: null, left: null },
"b": { parent: "a", left: null },
"c": { parent: "a", left: "b" },
"d": { parent: "a", left: "c" },
"e": { parent: "d", left: null },
"f": { parent: "d", left: "e" },
"g": { parent: null, left: "a" },
"h": { parent: "g", left: null },
};
addHeadings(data); // add heading properties
console.log(data); // show result
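To turn the numeric heading arrays back into the dotted labels from the question ("1.", "1.3.2.", and so on), joining with "." is enough; here is a minimal sketch over pre-resolved nodes (the heading values below assume addHeadings has already run):

```javascript
// A tiny stand-in for `data` after addHeadings(): each node now carries
// a heading array such as [1, 3, 2].
const resolved = {
  a: { heading: [1] },
  d: { heading: [1, 3] },
  f: { heading: [1, 3, 2] },
};

// Join each heading array with dots to produce the position labels.
const labels = Object.fromEntries(
  Object.entries(resolved).map(([id, node]) => [id, node.heading.join(".")])
);
```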
