How can I change some properties using JSONata without describing all of the unchanged properties?

I have a JSON document which is quite big, and I want to keep it as-is while making just a few changes. How can I do this with JSONata without describing each property?
I would expect an operator which takes all the already existing properties, so that I only need to add my overrides.
For example
{
  $takeall,
  "name": title, /* changed */
  "area": width * height
}
Any way to do it?

I believe what you are looking for is the Transform Operator.
From the JSONata documentation:
... ~> | ... | ... | (Transform)
The object transform operator is used
to modify a copy of an object structure using a pattern/action syntax
to target specific modifications while keeping the rest of the
structure unchanged.
The syntax has the following structure:
head ~> | location | update [, delete] |
Assuming your object looks something like this:
{
  "width": 5,
  "height": 5,
  "someOtherProps": "something",
  "title": "MyOldTitle"
}
You can run this example query using the Transform Operator:
$ ~> | $ | {
  "area": width * height,
  "title": "myNewTitle"
}, ["height", "width"] |
To produce:
/** "someOtherProps" carried over from original object. **/
/** "height" and "width" removed after calculation. **/
{
  "someOtherProps": "something",
  "title": "myNewTitle",
  "area": 25
}
Working example: https://try.jsonata.org/dLm34sftc

Related

Filtering JSON based on sub array in a Power Automate Flow

I have some json data that I would like to filter in a Power Automate Flow.
A simplified version of the json is as follows:
[
  {
    "ItemId": "1",
    "Blah": "test1",
    "CustomFieldArray": [
      { "Name": "Code", "Value": "A" },
      { "Name": "Category", "Value": "Test" }
    ]
  },
  {
    "ItemId": "2",
    "Blah": "test2",
    "CustomFieldArray": [
      { "Name": "Code", "Value": "B" },
      { "Name": "Category", "Value": "Test" }
    ]
  }
]
For example, I wish to filter items based on Name = "Code" and Value = "A". I should be left with the item with ItemId 1 in that case.
I can't figure out how to do this in Power Automate. It would be nice to change the data structure, but this is the way the data is, and I'm trying to work out if this is possible in Power Automate without changing the data itself.
Firstly, I had to fix your JSON; it wasn't complete.
Secondly, filtering on sub-array information isn't what I'd call easy. However, to get around the limitations, you can perform a bit of trickery using a Filter array action.
Prior to that step, I created a variable of type Array (called Array) holding your data.
In the Filter array action, the left-hand side expression is ...
string(item()?['CustomFieldArray'])
... and the contains comparison on the right-hand side is simply a string with the appropriate filter value ...
{"Name":"Code","Value":"A"}
... it's not an expression or a proper object, just a string.
If you need it to cater for differences in case, just set everything to lower case using the toLower expression on the left.
That will produce your desired result: the output array contains only the items whose CustomFieldArray includes that name/value pair, so the item with ItemId 1 in this case.
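The reason this works: the Filter array action is effectively doing a substring check against the serialized sub-array. Here is a rough Ruby illustration of the same idea, using the data from the question (the script itself is mine, not Power Automate syntax):

```ruby
require 'json'

items = [
  { "ItemId" => "1", "CustomFieldArray" => [
    { "Name" => "Code", "Value" => "A" },
    { "Name" => "Category", "Value" => "Test" }
  ] },
  { "ItemId" => "2", "CustomFieldArray" => [
    { "Name" => "Code", "Value" => "B" },
    { "Name" => "Category", "Value" => "Test" }
  ] }
]

# Serialize each item's sub-array to a JSON string and keep items whose
# string contains the target name/value pair -- the same substring check
# the Filter array action performs on string(item()?['CustomFieldArray']).
target = '{"Name":"Code","Value":"A"}'
matches = items.select do |item|
  JSON.generate(item["CustomFieldArray"]).include?(target)
end

puts matches.map { |m| m["ItemId"] }.inspect
# => ["1"]
```

This also shows the trick's limitation: it only matches if the serialized key order and spacing line up exactly with the comparison string.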

Match keys with sibling object JSONATA

I have a JSON object with the structure below. When looping over key_two, I want to create a new object that I will return. The returned object should contain a title with the value of key_one's name, where the id of key_one matches the node of the current key_two item.
Both objects contain other keys that will also be included, but the first step I can't figure out is how to grab data from a sibling object while looping and match it against the current value.
{
  "key_one": [
    {
      "name": "some_cool_title",
      "id": "value_one",
      ...
    }
  ],
  "key_two": [
    {
      "node": "value_one",
      ...
    }
  ]
}
This is a good example of a 'join' operation (in SQL terms). JSONata supports this in a path expression. See https://docs.jsonata.org/path-operators#-context-variable-binding
So in your example, you could write:
key_one#$k1.key_two[node = $k1.id].{
  "title": $k1.name
}
You can then add extra fields into the resulting object by referencing items from either of the original objects. E.g.:
key_one#$k1.key_two[node = $k1.id].{
  "title": $k1.name,
  "other_one": $k1.other_data,
  "other_two": other_data
}
See https://try.jsonata.org/--2aRZvSL
I seem to have found a solution for this.
[key_two].$filter($$.key_one, function($v, $k){
  $v.id = node
}).{"title": name ? name : id}
Gives:
[
  {
    "title": "value_one"
  },
  {
    "title": "value_two"
  },
  {
    "title": "value_three"
  }
]
Leaving this here in case someone has a similar issue in the future.

How do I use FreeFormTextRecordSetWriter

In my NiFi controller I want to configure the FreeFormTextRecordSetWriter, but I have no idea what I should put in the "Text" field. I'm getting the text from my source (in my case GetSolr), and just want to write this out, period.
The documentation and mailing list do not seem to tell me how this is done; any help appreciated.
EDIT: Here is the sample input + output I want to achieve (as you can see: no transformation needed, plain text, no JSON input)
EDIT: I now realize that I can't tell GetSolr to return just CSV data - I have to use JSON.
So referencing with attributes seems to be fine. What the documentation omits is that the ${flowFile} attribute should contain the complete flowfile that is returned.
Sample input:
{
  "responseHeader": {
    "zkConnected": true,
    "status": 0,
    "QTime": 0,
    "params": {
      "q": "*:*",
      "_": "1553686715465"
    }
  },
  "response": {
    "numFound": 3194,
    "start": 0,
    "docs": [
      {
        "id": "{402EBE69-0000-CD1D-8FFF-D07756271B4E}",
        "MimeType": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
        "FileName": "Test.docx",
        "DateLastModified": "2019-03-27T08:05:00.103Z",
        "_version_": 1629145864291221504,
        "LAST_UPDATE": "2019-03-27T08:16:08.451Z"
      }
    ]
  }
}
Wanted output
{402EBE69-0000-CD1D-8FFF-D07756271B4E}
BTW: The documentation says this:
The text to use when writing the results. This property will evaluate the Expression Language using any of the fields available in a Record.
Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)
I want to use my source's text, so I'm confused
You need to use expression language as if the record's fields are the FlowFile's attributes.
Example:
Input:
{
  "t1": "test",
  "t2": "ttt",
  "hello": true,
  "testN": 1
}
Text property in FreeFormTextRecordSetWriter:
${t1} k!${t2} ${hello}:boolean
${testN}Num
Output(using ConvertRecord):
test k!ttt true:boolean
1Num
EDIT:
It seems what you needed was to read from Solr and write a single-column CSV. For that you need to use CSVRecordSetWriter.
First, you should consider upgrading to 1.9.1; starting from 1.9.0, the schema can be inferred for you.
Otherwise, you can set Schema Access Strategy to Use 'Schema Text' Property,
then use the following schema in Schema Text:
{
  "name": "MyClass",
  "type": "record",
  "namespace": "com.acme.avro",
  "fields": [
    {
      "name": "id",
      "type": "string"
    }
  ]
}
This should work.

Mongodb Java nested update

I have a MongoDB document structure like this
{
  "_id": "002",
  "list": [
    {
      "year": "2015",
      "entries": [{...}, {...}]
    },
    {
      "year": "2014",
      "entries": [{...}, {...}]
    }
  ]
}
I want to push a new element into "entries". I know it is possible using
collection.updateOne(
    Filters.eq("_id", "002"),
    new Document("$push", new Document("list.0.entries", "{...}"))
);
But this appends to "entries" of the 1st element of "list". I want to append to "entries" for the "year" 2015. How can I do this with MongoDB Java driver API (3.0)?
I think you should use something like
Filters.and(Filters.eq("_id", "002"), Filters.eq("list.year", "2015"))
P.S. As the Filters javadoc suggests, it's convenient to use a static import for it (to make the code less verbose by skipping the "Filters." prefix).
You can use
Bson filter = Filters.and(
    Filters.eq("_id", "002"),
    Filters.elemMatch("list", Filters.eq("year", "2015"))
);
Document doc = collection.find(filter).first();
Afterwards you can iterate through the list to find the year 2015, get its entries, and insert the new element in Java code. Keep the updated list in a local variable and write it back to MongoDB with an update command.

Cucumber: read a template with placeholders

I am writing a cucumber framework to test a set of API calls which use long JSON formatted parameters. I would like to hide the JSON in a template file in order to make the scenario easier for my users to read and DRYer, in that the templates may be used by other scenarios and feature files. The templates contain placeholders and I would like to rig the cucumber/ruby code to fill in the values defined in the table of examples. It appears that ERB is the closest thing to doing the replacement. However, I have not found a way to bind the definitions from the table of examples.
It may be that the only way around this is to run the feature file and template through a pre-processor which combines them and manufactures the final feature file. I am looking for a more elegant single step solution, if possible.
Example feature file code:
Feature: Create users

  Scenario Outline: Test create a merchant user
    Given I am logged in
    When I send a :post request to "createUser" using the "Merchant" template:
    Then the JSON response should have "$..status" with the text "success"

    Examples:
      | OrgName              | KeyName | KeyValue  |
      | CClient_RMBP_0_UNIQ_ | paramX  | TRUE      |
      | CClient_RMBP_1_UNIQ_ | paramY  | some text |
      | CClient_RMBP_2_UNIQ_ | paramZ  | 12345     |
Sample Merchant.json File:
{
  "Organization": {
    "parameters": [
      {
        "key": "orgName",
        "value": {
          "value": "<OrgName>"
        }
      },
      {
        "key": "<KeyName>",
        "value": {
          "value": "<KeyValue>"
        }
      }
    ]
  },
  "parentOrganizationId": "1",
  "User": {
    "firstName": "Mary",
    "lastName": "Smith",
    "id": "<OrgName>",
    "language": "en",
    "locale": "US",
    "primaryEmail": "primary@mailaddr.com",
    "cellPhone": "1-123-456-7890"
  },
  "active": "true"
}
I prefer to hide any notion of the response format from the Cucumber steps.
Instead, I'd prefer to have the steps at a higher level and have validate methods within the step definitions, e.g.:
When I attempt to create a user using the <type> template
Then the response is successful and contains the user's details
and in my step definition I would have a validate_response method which grabs the last response and checks the user details against the input.
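As for binding the Examples table values into the template: since the placeholders are plain <Name> tokens rather than ERB tags, a simple substitution helper in the step definition can do it without a pre-processor. A minimal sketch (fill_template is a hypothetical helper name, not part of Cucumber):

```ruby
# Hypothetical helper: replace <Name> placeholders in a template string
# with values from a hash, e.g. one row of the Examples table.
def fill_template(template, values)
  template.gsub(/<(\w+)>/) { values.fetch(Regexp.last_match(1)) }
end

template = '{ "orgName": "<OrgName>", "<KeyName>": "<KeyValue>" }'
row = {
  "OrgName"  => "CClient_RMBP_0_UNIQ_",
  "KeyName"  => "paramX",
  "KeyValue" => "TRUE"
}

puts fill_template(template, row)
# => { "orgName": "CClient_RMBP_0_UNIQ_", "paramX": "TRUE" }
```

In the step definition you would read the template file (e.g. Merchant.json), call the helper with the current row's values, and post the result as the request body. Using values.fetch means an unmatched placeholder raises an error, which catches column-name typos early.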
