Say I have an object like so:
{
"key1": "value1",
"key2": "value2",
"key3": "value3"
}
I want to use jq to convert this to:
{
"key1": {
"innerkey": "value1"
},
"key2": {
"innerkey": "value2"
},
"key3": {
"innerkey": "value3"
}
}
i.e. I want to apply a mapping to every value in the object that converts $value to {"innerkey": $value}. How can I achieve this with jq?
It's literally called map_values. Use it like this:
map_values({innerkey:.})
You could also use the fact that iterating over an object iterates its values. So you could update those values on the object.
.[] |= {innerkey:.}
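For reference, both filters can be run from the command line; assuming the input above is saved as input.json (the filename is just an assumption), either invocation produces the nested output shown in the question:
jq 'map_values({innerkey: .})' input.json
jq '.[] |= {innerkey: .}' input.json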
I have an object like:
{ "contact": { "value": 0 },
"temperature": { "value": 5}
}
That I would like converted to
{ "contact": 0,
"temperature": 5
}
And I would like to avoid a spread/map/merge
I believe this is what you're looking for:
$keys($){
$: $lookup($$, $).value
}
You can check out this expression in Stedi's JSONata Playground here: https://stedi.link/V67vnsh
I know you wanted to avoid $merge, but this solution would also work and is relatively short:
$each($, function($v, $k) {{ $k: $v.value }}) ~> $merge
Check it out here: https://stedi.link/3tOCJHb
How can I integrate the following JSON file
[source.json]
{
"test": "value",
"test1": "value2"
}
into this JSON file using jq?
[target.json]
{
"header": "stuff"
"values" :
{
"test": "value", //from source.json
"test1": "value2" //from source.json
}
}
It is possible that the "values" key in the target JSON file does not exist yet, or that it already contains entries. In both cases, the result should look like the target.json shown here.
Something like
$ jq --slurpfile source source.json '.values = $source[0]' target.json
{
"header": "stuff",
"values": {
"test": "value",
"test1": "value2"
}
}
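If entries already present under "values" should be kept rather than replaced (an assumption about the requirement, since the question only shows the final target), the update-assignment form merges the source object into whatever is there:
$ jq --slurpfile source source.json '.values += $source[0]' target.json
When "values" is missing, adding to null simply yields the source object, so this covers both cases as well.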
I have a pure Ruby hash like the following one:
"1875": {
"child1": {
"field1": 1875,
"field2": "Test1"
},
"child2": {
"field1": "value1",
"field2": "value2"
}
},
"1959": {
"child1": {
"field1": 1875,
"field2": "Test1"
},
"child2": {
"field1": "value1",
"field2": "value2"
}
}
I have so many keys that follow the above structure that I want to paginate it.
I have tried the following code:
@records = @records.to_a.paginate(page: params[:page], per_page: 5)
But it is returning all the elements as arrays, like this:
["1875", {
"child1": {
"field1": 1875,
"field2": "Test1"
},
"child2": {
"field1": "value1",
"field2": "value2"
}
}
]
["1959", {
"child1": {
"field1": 1875,
"field2": "Test1"
},
"child2": {
"field1": "value1",
"field2": "value2"
}
}
]
First of all, note that a Hash is a dictionary-like collection and order shouldn't matter. So if you need pagination, most likely a hash is the wrong data structure and you should use something like an array.
@records.to_a.paginate(page: params[:page], per_page: 5) returns an array because you are converting the hash to an array with to_a. Depending on what you are using the hash/pagination for, this may be enough. For example, to display the returned records, assuming there is a helper for printing a child:
<% @records.to_a.paginate(page: params[:page], per_page: 5).each do |key, value| %>
  <h1><%= key %></h1>
  <p><%= print_child(value) %></p>
<% end %>
If you really want a hash, you can convert the Array back to a hash:
Hash[@records.to_a.paginate(page: params[:page], per_page: 5)]
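Alternatively, a minimal sketch that keeps each page as a hash: paginate the keys and slice the hash with them. This assumes will_paginate's Array#paginate is loaded (require 'will_paginate/array') and Ruby 2.5+ (or ActiveSupport) for Hash#slice:
# Paginate the keys only, then build the hash for the current page
page_keys = @records.keys.paginate(page: params[:page], per_page: 5)
page_records = @records.slice(*page_keys)
Here page_keys still carries the pagination metadata (current_page, total_pages, and so on), while page_records is a plain hash containing only the current page's entries.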
In my Elasticsearch Index I have documents which contain an array of uniform elements, like this:
Document 1:
"listOfElements": {
"entries": [{
"key1": "value1",
"int1": 4,
"key2": "value2"
}, {
"key1": "value1",
"int1": 7,
"key2": "value2"
}
]
}
Document 2:
"listOfElements": {
"entries": [{
"key1": "value1",
"int1": 5,
"key2": "value2"
}, {
"key1": "value1",
"int1": 7,
"key2": "value2"
}
]
}
Now I want to create a query that returns all documents which have, e.g. key1:value1 AND int1:4 in the same entry element.
However, if I simply query for "key1:value1 AND int1:4", I get every document where key1:value1 and int1:4 appear anywhere in the array, not necessarily in the same entry, so I would get both documents from the above example.
Is there any way to query for multiple fields that have to be in the same array element?
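Not with the default object mapping: Elasticsearch flattens object arrays, so the association between fields of the same entry is lost. The usual fix is to map the entries as a nested type and use a nested query, which requires reindexing. A sketch, assuming the index name my-index is just a placeholder and that match queries suit your field mappings:
PUT my-index
{
  "mappings": {
    "properties": {
      "listOfElements": {
        "properties": {
          "entries": { "type": "nested" }
        }
      }
    }
  }
}
GET my-index/_search
{
  "query": {
    "nested": {
      "path": "listOfElements.entries",
      "query": {
        "bool": {
          "must": [
            { "match": { "listOfElements.entries.key1": "value1" } },
            { "match": { "listOfElements.entries.int1": 4 } }
          ]
        }
      }
    }
  }
}
With the nested mapping, both conditions have to hold within the same array element, so only Document 1 matches.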
I have a JSON payload like this:
{
"id": "",
"name": "",
"A": {...},
"B": {...},
"C": {...}
}
And I want to extract the A, B and C fields, each together with the id and name fields, as separate records, like this:
{
"id": "",
"name": "",
"A": {...}
}
{
"id": "",
"name": "",
"B": {...}
}
{
"id": "",
"name": "",
"C": {...}
}
I'm using record-based processors, but I don't know how to do this in NiFi with them.
The "EvaluateJsonPath" is probably what you're looking for. You can add JSONPath expressions, that will be converted to attributes, or written to the flowfile.
http://jsonpath.com/ is a handy web tool to test your expressions.
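As a rough sketch of that setup (the dynamic property names are just illustrative): set Destination to flowfile-attribute and add one JSONPath per field you need, e.g.
id   = $.id
name = $.name
A    = $.A
Note that this extracts values into attributes rather than producing three records, so additional routing/processing would be needed to emit the separate records.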
If you want to use record-based processors, then JoltTransformRecord will do the trick. Just set Jolt Transformation DSL to Chain and the Jolt Specification to:
[
  {
    "operation": "shift",
    "spec": {
      "id": "id",
      "name": "name",
      "*": {
        "#": "array.&"
      }
    }
  },
  {
    "operation": "shift",
    "spec": {
      "array": {
        "*": {
          "#(2,id)": "[#2].id",
          "#(2,name)": "[#2].name",
          "#": "[#2].&"
        }
      }
    }
  }
]
This will first put your unique elements into an array and separate the common keys from them; then it will copy the common keys into each element while promoting the array to the top level.
Then, if you want them as separate FlowFiles too, you can split the resulting array with SplitRecord and you've got it!
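A sketch of that final SplitRecord step, assuming JSON record readers/writers are already configured (property names are from a recent NiFi version and worth double-checking against yours):
Record Reader     = JsonTreeReader
Record Writer     = JsonRecordSetWriter
Records Per Split = 1
With Records Per Split set to 1, each element of the top-level array produced by the Jolt transform comes out as its own FlowFile.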