Adding one more parent attribute to JSON through the command line - bash

I want to edit the structure of a JSON file through the terminal, using terminal commands or scripts.
I have a JSON file with a structure like this:
{
"Helloo": [
{
"AlbumTitle": {
"S": "Famous"
},
"SongTitle": {
"S": "Call Me Today"
},
"Artist": {
"S": "No One You Know"
}
},
{
"AlbumTitle": {
"S": "Famous1"
},
"SongTitle": {
"S": "Call Me Today1"
},
"Artist": {
"S": "No One You Know11"
}
}
],
"Music": [
{
"Album": {
"S": "Pop Songs"
},
"Production": {
"S": "X-series"
},
"Song": {
"S": "Once upon
},
"Artist": {
"S": "XYZ"
}
}
]
}
So here I want to add "PutRequest" and "Item" attributes to each item of the arrays, so I want the output like this:
{
"Helloo": [
{
PutRequest":{
"Item":{
"AlbumTitle": {
"S": "Famous"
},
"SongTitle": {
"S": "Call Me Today"
},
"Artist": {
"S": "No One You Know"
}
}
}
},
{
PutRequest":{
"Item":{
"AlbumTitle": {
"S": "Famous1"
},
"SongTitle": {
"S": "Call Me Today1"
},
"Artist": {
"S": "No One You Know11"
}
}
}
}
],
"Music": [
{
PutRequest":{
"Item":{
"Album": {
"S": "Pop Songs"
},
"Production": {
"S": "X-series"
},
"Song": {
"S": "Once upon
},
"Artist": {
"S": "XYZ"
}
}
}
}
]
}
I tried to use jq for this but I'm still struggling. Please help me to add these attributes to the JSON using the command prompt or bash/shell scripting.
Thanks

Assuming you actually got valid JSON, the following jq expression might work for you:
map_values(map({"PutRequest": { "Item": .}}))
Usage:
jq 'map_values(map({"PutRequest": { "Item": .}}))' file.json
Breakdown:
map_values(       # map_values iterates over the values of an object and
                  # assigns the returned value back to each property
map(              # map iterates over an array and assigns the returned
                  # value to each index, producing a new array
{                 # Return an object
"PutRequest": {   # with PutRequest as a property
"Item": .         # and Item, which holds the original value (.)
}
}
)
)
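Note that jq does not edit files in place. As a minimal sketch (the temporary file name is just an example), the result can be written back like this:
jq 'map_values(map({"PutRequest": { "Item": .}}))' file.json > file.tmp && mv file.tmp file.json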

Related

How can I change a json structure into an object and rename keys with jq

Using jq I am trying to convert the raw JSON below into the desired JSON outcome.
Objectives:
name renamed to pathParameterName
type renamed to datasetParameter
Raw JSON I'm trying to convert:
{
"pathOptions": {
"parameters": {
"raw_date": {
"name": "raw_date",
"type": "Datetime",
"datetimeOptions": {
"localeCode": "en-GB"
},
"createColumn": true,
"filter": {
"expression": "(after :date1)",
"valuesMap": {
":date1": "2022-03-08T00:00:00.000Z"
}
}
}
}
}
}
Desired JSON outcome:
{
"pathOptions": {
"parameters": [
{
"pathParameterName": "raw_date",
"datasetParameter": {
"name": "raw_date",
"type": "Datetime",
"datetimeOptions": {
"localeCode": "en-GB"
},
"createColumn": true,
"filter": {
"expression": "(after :date1)",
"valuesMap": [
{
"valueReference": ":date1",
"value": "2022-03-08T00:00:00.000Z"
}
]
}
}
}
]
}
}
This is what I have so far:
map_values(if type == "object" then to_entries else . end)
This is what my code above currently produces. I'm struggling with the key renaming.
{
"pathOptions": [
{
"key": "parameters",
"value": [
{
"pathParameterName": "raw_date",
"datasetParameter": {
"name": "raw_date",
"type": "Datetime",
"datetimeOptions": {
"localeCode": "en-GB"
},
"createColumn": true,
"filter": {
"expression": "(after :date1)",
"valuesMap": [
{
"valueReference": ":date1",
"value": "2022-03-08T00:00:00.000Z"
}
]
}
}
}
]
}
]
}
The function to_entries "converts between an object and an array of key-value pairs" (see the manual). To rename the preset key and value fields, just reassign them to new names in a new object, as in {valueReference: .key, value}.
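For a quick illustration (a minimal sketch, not part of the original question), to_entries turns an object into key/value pairs that can then be reshaped:
echo '{":date1": "2022-03-08T00:00:00.000Z"}' | jq 'to_entries | map({valueReference: .key, value})'
# [ { "valueReference": ":date1", "value": "2022-03-08T00:00:00.000Z" } ]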
jq '
.pathOptions.parameters |= (
to_entries | map({
pathParameterName: .key,
datasetParameter: (
.value | .filter.valuesMap |= (
to_entries | map({valueReference: .key, value})
)
)
})
)
'
{
"pathOptions": {
"parameters": [
{
"pathParameterName": "raw_date",
"datasetParameter": {
"name": "raw_date",
"type": "Datetime",
"datetimeOptions": {
"localeCode": "en-GB"
},
"createColumn": true,
"filter": {
"expression": "(after :date1)",
"valuesMap": [
{
"valueReference": ":date1",
"value": "2022-03-08T00:00:00.000Z"
}
]
}
}
}
]
}
}

Get GraphQL output without second curly bracket

I have a problem: I do not want the extra nesting in my GraphQL output. I have a GraphQL schema with a person, and that person can have one or more interests, but the interest data always comes back nested.
What I mean by the nesting is the second pair of curly brackets:
{
...
{
...
}
}
Is there an option to get the id of the person plus the id of the interest and its status without the second pair of curly brackets?
GraphQL schema
Person
└── Interest
Query
query {
model {
allPersons{
id
name
interest {
id
status
}
}
}
}
[OUT]
{
"data": {
"model": {
"allPersons": [
{
"id": "01",
"name": "Max",
"interest ": {
"id": 4488448
"status": "active"
}
},
{
"id": "02",
"name": "Sophie",
"interest ": {
"id": 15445
"status": "deactivated"
}
},
What I want
{
{
"id": "01",
"id-interest": 4488448
"status": "active"
},
{
"id": "02",
"name": "Sophie",
"id-interest": 15445
"status": "deactivated"
},
}
What I tried, but it delivers the same result:
fragment InterestTask on Interest {
id
status
}
query {
model {
allPersons{
id
interest {
...InterestTask
}
}
}
}
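There is no GraphQL-level option for this: a response always mirrors the nesting of the query's selection set, so any flattening has to happen on the client. As a sketch in jq (assuming the response is saved to a file, here called response.json, and the field names match the output above):
jq '.data.model.allPersons | map({id, "id-interest": .interest.id, status: .interest.status})' response.json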

DynamoDB streams filter with nested fields not working

I have a Lambda hooked up to my DynamoDB stream. It is configured to trigger if both criteria are met:
eventName = "MODIFY"
status > 10
My filter looks as follows:
{"eventName": ["MODIFY"], "dynamodb": {"NewImage": {"status": [{"numeric": [">", 10]}]}}}
If the filter is configured to only trigger when the event name is MODIFY, it works; however, anything more complicated than that does not trigger my Lambda. The event looks as follows:
{
"eventID": "ba1cff0bb53fbd7605b7773fdb4320a8",
"eventName": "MODIFY",
"eventVersion": "1.1",
"eventSource": "aws:dynamodb",
"awsRegion": "us-east-1",
"dynamodb":
{
"ApproximateCreationDateTime": 1643637766,
"Keys":
{
"org":
{
"S": "test"
},
"id":
{
"S": "61f7ebff17afad170f98e046"
}
},
"NewImage":
{
"status":
{
"N": "20"
}
}
}
}
When using the test_event_pattern endpoint it confirms the filter is valid:
import json
import boto3

client = boto3.client("events")  # EventBridge client, which exposes test_event_pattern

filter = {
"eventName": ["MODIFY"],
"dynamodb": {
"NewImage": {
"status": [ { "numeric": [ ">", 10 ] } ]
}
}
}
response = client.test_event_pattern(
EventPattern=json.dumps(filter),
Event="{\"id\": \"e00c66cb-fe7a-4fcc-81ad-58eb60f5d96b\", \"eventName\": \"MODIFY\", \"dynamodb\": {\"NewImage\":{\"status\": 20}}, \"detail-type\": \"myDetailType\", \"source\": \"com.mycompany.myapp\", \"account\": \"123456789012\", \"time\": \"2016-01-10T01:29:23Z\", \"region\": \"us-east-1\"}"
)
print(response)
# {'Result': True, 'ResponseMetadata': {'RequestId':...}
Is there something that I'm overlooking? Do DynamoDB filters not work on the actual new image?
You have probably already found this out yourself, but for anyone else:
the filter is missing the DynamoDB-JSON type key ("N") around the numeric value:
{
"eventName": ["MODIFY"],
"dynamodb": {
"NewImage": {
"status": { "N": [{ "numeric": [">", 10] }] }
}
}
}
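As a usage sketch (not from the original answer; the mapping UUID is a placeholder), the corrected pattern can be attached to the stream's event source mapping with the AWS CLI:
aws lambda update-event-source-mapping \
--uuid <event-source-mapping-uuid> \
--filter-criteria '{"Filters":[{"Pattern":"{\"eventName\":[\"MODIFY\"],\"dynamodb\":{\"NewImage\":{\"status\":{\"N\":[{\"numeric\":[\">\",10]}]}}}}"}]}'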

How to mutate a list of objects in an array as an argument in GraphQL completely

I cannot mutate a list of objects completely, because only the last element of the array gets mutated.
What already works perfectly is putting each element ({play_positions_id: ...}) into the array manually, like here:
mutation CreateProfile {
__typename
create_profiles_item(data: {status: "draft", play_positions: [{play_positions_id: {id: "1"}}, {play_positions_id: {id: "2"}}]}) {
id
status
play_positions {
play_positions_id {
abbreviation
name
}
}
}
}
Output:
{
"data": {
"__typename": "Mutation",
"create_profiles_item": {
"id": "1337",
"status": "draft",
"play_positions": [
{
"play_positions_id": {
"id": "1",
"abbreviation": "RWB",
"name": "Right Wingback"
}
},
{
"play_positions_id": {
"id": "2",
"abbreviation": "CAM",
"name": "Central Attacking Midfielder"
}
}
]
}
}
}
Since you can add many of those elements, I defined a variable/argument like this:
mutation CreateProfile2($cpppi: [create_profiles_play_positions_input]) {
__typename
create_profiles_item(data: {status: "draft", play_positions: $cpppi}) {
id
status
play_positions {
play_positions_id {
id
abbreviation
name
}
}
}
}
Variable object for above:
"cpppi": {
"play_positions_id": {
"id": "1"
},
"play_positions_id": {
"id": "2
}
}
Output:
{
"data": {
"__typename": "Mutation",
"create_profiles_item": {
"id": "1338",
"play_positions": [
{
"play_positions_id": {
"id": "2",
"abbreviation": "CAM",
"name": "Central Attacking Midfielder"
}
}
]
}
}
}
Schema:
input create_profiles_input {
id: ID
status: String!
play_positions: [create_profiles_play_positions_input]
}
input create_profiles_play_positions_input {
id: ID
play_positions_id: create_play_positions_input
}
input create_play_positions_input {
id: ID
abbreviation: String
name: String
}
In the last two snippets, only the object with the id "2" gets mutated. I need this to work with the input type defined by my backend.
I figured it out. I got the brackets in the variable wrong: a JSON object cannot hold two play_positions_id keys, so only the last one survives, whereas the input expects an array of objects. Here is the solution:
"cpppi": [
{
"play_positions_id": {
"id": "1"
}
},
{
"play_positions_id": {
"id": "2"
}
}
]

How to sort Data Sources in Terraform based on arguments

I use the following Terraform code to get a list of available DB resources:
data "alicloud_db_instance_classes" "resources" {
instance_charge_type = "PostPaid"
engine = "PostgreSQL"
engine_version = "10.0"
category = "HighAvailability"
zone_id = "${data.alicloud_zones.rds_zones.ids.0}"
multi_zone = true
output_file = "./classes.txt"
}
And the output file looks like this:
[
{
"instance_class": "pg.x4.large.2",
"storage_range": {
"max": "500",
"min": "250",
"step": "250"
},
"zone_ids": [
{
"id": "cn-shanghai-MAZ1(b,c)",
"sub_zone_ids": [
"cn-shanghai-b",
"cn-shanghai-c"
]
}
]
},
{
"instance_class": "pg.x8.medium.2",
"storage_range": {
"max": "250",
"min": "250",
"step": "0"
},
"zone_ids": [
{
"id": "cn-shanghai-MAZ1(b,c)",
"sub_zone_ids": [
"cn-shanghai-b",
"cn-shanghai-c"
]
}
]
},
{
"instance_class": "rds.pg.c1.xlarge",
"storage_range": {
"max": "2000",
"min": "5",
"step": "5"
},
"zone_ids": [
{
"id": "cn-shanghai-MAZ1(b,c)",
"sub_zone_ids": [
"cn-shanghai-b",
"cn-shanghai-c"
]
}
]
},
{
"instance_class": "rds.pg.s1.small",
"storage_range": {
"max": "2000",
"min": "5",
"step": "5"
},
"zone_ids": [
{
"id": "cn-shanghai-MAZ1(b,c)",
"sub_zone_ids": [
"cn-shanghai-b",
"cn-shanghai-c"
]
}
]
}
]
And I want to get the one that's cheapest.
One way to do so is by sorting on storage_range.min, but how do I sort this list based on 'storage_range.min'?
Or I could filter by 'instance_class', but "alicloud_db_instance_classes" doesn't seem to support filter; it says: Error: data.alicloud_db_instance_classes.resources: : invalid or unknown key: filter
Any ideas?
The sort() function orders lexicographically, and you have no simple key here.
You can use filtering with some code like this (v0.12):
locals {
best_db_instance_class_key = "rds.pg.s1.small"
best_db_instance_class = element(
data.alicloud_db_instance_classes.resources.instance_classes,
index(data.alicloud_db_instance_classes.resources.instance_classes.*.instance_class, local.best_db_instance_class_key)
)
}
(Untested code)
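Alternatively, since the data source already writes the list to ./classes.txt, the sort can be done outside Terraform with jq; a minimal sketch (assuming the file contents shown above, and assuming the smallest storage minimum is a reasonable proxy for the cheapest class):
jq -r 'sort_by(.storage_range.min | tonumber) | .[0].instance_class' classes.txt
# prints the instance_class of the entry with the smallest minimum storage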
