Parsing JSON values and converting them to CSV using jq - shell

I need your help parsing the JSON below and converting it to CSV using the jq command.
{
  "id": 15,
  "description": "package",
  "active": true,
  "name": "linux",
  "project": [
    {
      "id": 1762,
      "description": "This Red Hat Server 7 is built from the Redhat Official",
      "path": "x86_24",
      "url": "some url"
    },
    {
      "id": 1663,
      "description": "This Ubuntu 20.04 is built from the Ubuntu Official",
      "path": "x86_24",
      "url": "some url"
    },
    {
      "id": 1557,
      "description": "This Centos 7 is built from the Centos Official",
      "path": "x86_24",
      "url": "some url"
    }
  ]
}
{
  "id": 22,
  "description": "exe",
  "active": true,
  "name": "windows",
  "project": []
}
{
  "id": 34,
  "description": "brew",
  "active": true,
  "name": "mac",
  "project": []
}
The values I need from this JSON are: id, description, project.id, project.description, project.url. I tried doing it with a jq command, but my CSV keeps getting messed up. Here, id holds project, and project has multiple ids; I need to separate them and generate my CSV as shown below. I'm stuck here. Any solution for this? Thanks in advance!

Is this what you are looking for?
jq -r '
  [.id, .description] + (.project[] | [.id, .description, .url])
  | @csv
'
15,"package",1762,"This Red Hat Server 7 is built from the Redhat Official","some url"
15,"package",1663,"This Ubuntu 20.04 is built from the Ubuntu Official","some url"
15,"package",1557,"This Centos 7 is built from the Centos Official","some url"
22,"exe",1332,"This Windows 7 is developed from the Windows Official","some url"
22,"exe",1563,"This Windows 11 is developed from the Windows Official","some url"
You can also adapt @ikegami's solution to a (technically) very similar problem:
jq -r '
  .project[] as $p
  | [.id, .description, $p.id, $p.description, $p.url]
  | @csv
'
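If you also want a header row and a CSV file on disk, here is a minimal sketch (input.json and output.csv are hypothetical file names; the objects are assumed to be concatenated in one file as shown in the question):
jq -rn '
  # emit the header once, then one row per project of each top-level object
  ["id", "description", "project_id", "project_description", "project_url"],
  (inputs | [.id, .description] + (.project[] | [.id, .description, .url]))
  | @csv
' input.json > output.csv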

Related

az webapp list pull all hostnames for all active webapps

I'm attempting to pull down all the enabledHostNames associated with all of my webapps.
I.e., if I had a basic webapp with the following configuration, I would want to print out test1.com and test2.com.
{
  "id": "foobar",
  "name": "foobar",
  "type": "Microsoft.Web/sites",
  "kind": "app",
  "location": "East US",
  "properties": {
    "name": "foobar",
    "state": "Running",
    "hostNames": [
      "test1.com",
      "test2.com"
    ],
    "webSpace": "kwiecom-EastUSwebspace",
    "selfLink": "foobar",
    "repositorySiteName": "foobar",
    "owner": null,
    "usageState": 0,
    "enabled": true,
    "adminEnabled": true,
    "enabledHostNames": [
      "test1.com",
      "test2.com"
    ]
  }
}
When I run the following, I just get the number of hostnames associated with each webapp.
az webapp list --resource-group resourcegroup1 --query "[?state=='Running']".{Name:enabledHostNames[*]} --output tsv
The output looks like the following
2
Appreciate any help
Removing --output tsv will result in the hostnames being displayed instead of just the total count, e.g.:
az webapp list --resource-group resourcegroup1 --query "[?state=='Running']".{Name:enabledHostNames[*]}
The output from this command is:
[
  {
    "Name": [
      "test1.com",
      "test2.com"
    ]
  }
]
Not sure if this is the exact output you are looking for. Apologies if you have already considered this.
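If you want a flat, one-hostname-per-line list instead of the nested JSON above, one option is to post-process the output with jq; a minimal sketch, assuming jq is installed and that state and enabledHostNames are top-level fields of each list element, as your query suggests:
az webapp list --resource-group resourcegroup1 --output json \
  | jq -r '.[] | select(.state == "Running") | .enabledHostNames[]'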

web app works locally and on app engine, but not on cloud run

So I've run into this issue with a web app I've made:
it gets a file path as input
if the file exists on a bucket, it uses a python client api to create a compute engine instance
it passes the file path to the instance in the startup script
When I ran it locally, I created a Python virtual environment and then ran the app. When I submit the input in the web browser, the virtual machine is created by the API call. I assumed it used my personal account. I switched to the service account on the command line with 'gcloud config set account', and it ran fine once more.
When I simply go to the source code directory and deploy it as is, the application can create the virtual machine instances as well.
When I use Google Cloud Build and deploy to Cloud Run, it doesn't create the VM instance.
The web app itself is not throwing any errors, but when I check Compute Engine's logs, there is an error:
{
  "protoPayload": {
    "@type": "type.googleapis.com/google.cloud.audit.AuditLog",
    "status": {
      "code": 3,
      "message": "INVALID_PARAMETER"
    },
    "authenticationInfo": {
      "principalEmail": "####"
    },
    "requestMetadata": {
      "callerIp": "#####",
      "callerSuppliedUserAgent": "(gzip),gzip(gfe)"
    },
    "serviceName": "compute.googleapis.com",
    "methodName": "v1.compute.instances.insert",
    "resourceName": "projects/someproject/zones/somezone/instances/nameofinstance",
    "request": {
      "@type": "type.googleapis.com/compute.instances.insert"
    }
  },
  "insertId": "######",
  "resource": {
    "type": "gce_instance",
    "labels": {
      "instance_id": "#####",
      "project_id": "someproject",
      "zone": "somezone"
    }
  },
  "timestamp": "2021-06-16T12:18:21.253551Z",
  "severity": "ERROR",
  "logName": "projects/someproject/logs/cloudaudit.googleapis.com%2Factivity",
  "operation": {
    "id": "operation-#####",
    "producer": "compute.googleapis.com",
    "last": true
  },
  "receiveTimestamp": "2021-06-16T12:18:21.253551Z"
}
In theory, it is the exact same code that worked from my laptop and on App Engine. I'm baffled why it only does this for Cloud Run.
App Engine's default service account was stripped of all its roles and given a custom role tailored to the web app's function.
Cloud Run is using a different service account, but it was given that exact same custom role.
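For reference, binding a custom role to a service account is done along these lines (my-project, web-app-sa, and customWebAppRole are hypothetical placeholders, not my real names):
# Grant a custom project-level role to the service account that Cloud Run runs as
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:web-app-sa@my-project.iam.gserviceaccount.com" \
  --role="projects/my-project/roles/customWebAppRole"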
Here is the method I use to call the API:
# Imports needed by this snippet
from datetime import date

import googleapiclient.discovery


def create_instance(path):
    compute = googleapiclient.discovery.build('compute', 'v1')
    vmname = "piinnuclei" + date.today().strftime("%Y%m%d%H%M%S")
    startup_script = "#! /bin/bash\napt update\npip3 install pg8000\nexport BUCKET_PATH=my-bucket/{}\ngsutil -m cp -r gs://$BUCKET_PATH /home/connor\ncd /home/connor\n./cloud_sql_proxy -dir=cloudsql -instances=sql-connection-name=unix:sql-connection-name &\npython3 run_analysis_upload.py\nexport ZONE=$(curl -X GET http://metadata.google.internal/computeMetadata/v1/instance/zone -H 'Metadata-Flavor: Google')\nexport NAME=$(curl -X GET http://metadata.google.internal/computeMetadata/v1/instance/name -H 'Metadata-Flavor: Google')\ngcloud --quiet compute instances delete $NAME --zone=$ZONE".format(path)
    config = {
        "kind": "compute#instance",
        "name": vmname,
        "zone": "projects/my-project/zones/northamerica-northeast1-a",
        "machineType": "projects/my-project/zones/northamerica-northeast1-a/machineTypes/e2-standard-4",
        "displayDevice": {
            "enableDisplay": False
        },
        "metadata": {
            "kind": "compute#metadata",
            "items": [
                {
                    "key": "startup-script",
                    "value": startup_script
                }
            ]
        },
        "tags": {
            "items": []
        },
        "disks": [
            {
                "kind": "compute#attachedDisk",
                "type": "PERSISTENT",
                "boot": True,
                "mode": "READ_WRITE",
                "autoDelete": True,
                "deviceName": vmname,
                "initializeParams": {
                    "sourceImage": "projects/my-project/global/images/my-image",
                    "diskType": "projects/my-project/zones/northamerica-northeast1-a/diskTypes/pd-balanced",
                    "diskSizeGb": "100"
                },
                "diskEncryptionKey": {}
            }
        ],
        "canIpForward": False,
        "networkInterfaces": [
            {
                "kind": "compute#networkInterface",
                "subnetwork": "projects/my-project/regions/northamerica-northeast1/subnetworks/default",
                "accessConfigs": [
                    {
                        "kind": "compute#accessConfig",
                        "name": "External NAT",
                        "type": "ONE_TO_ONE_NAT",
                        "networkTier": "PREMIUM"
                    }
                ],
                "aliasIpRanges": []
            }
        ],
        "description": "",
        "labels": {},
        "scheduling": {
            "preemptible": False,
            "onHostMaintenance": "MIGRATE",
            "automaticRestart": True,
            "nodeAffinities": []
        },
        "deletionProtection": False,
        "reservationAffinity": {
            "consumeReservationType": "ANY_RESERVATION"
        },
        "serviceAccounts": [
            {
                "email": "batch-service-accountg@my-project.iam.gserviceaccount.com",
                "scopes": [
                    "https://www.googleapis.com/auth/cloud-platform"
                ]
            }
        ],
        "shieldedInstanceConfig": {
            "enableSecureBoot": False,
            "enableVtpm": True,
            "enableIntegrityMonitoring": True
        },
        "confidentialInstanceConfig": {
            "enableConfidentialCompute": False
        }
    }
    return compute.instances().insert(
        project="my-project",
        zone="northamerica-northeast1",
        body=config).execute()
The issue was with the zone. For some reason, when it was run on Cloud Run, the code below was the culprit.
return compute.instances().insert(
    project="my-project",
    zone="northamerica-northeast1",
    body=config).execute()
"northamerica-northeast1" should have been "northamerica-northeast1-a"
EDIT:
I made a new virtual machine image and quickly ran into the same problem: it would work locally and break down in the Cloud Run environment. After letting it sit for some time, it began to work again. This leads me to the conclusion that there is also some sort of delay before it can be used by Cloud Run.

Google home actions.fulfillment.devices not getting enabled

I am using Google smart home Actions for IoT. I updated my action URL and account linking details. When I try to enable the Test in the simulator to deploy my TestAPP to the cloud, it fails with the error "GoogleFulfillment 'actions.fulfillment.devices' is not supported", and the linked app does not update the old URL. This worked a few days ago. Any changes from Google's side, or does anybody have a clue?
There is a manual workaround. Thanks to the Google Assistant forum:
Steps:
1 - Download the gactions cli at https://developers.google.com/actions/tools/gactions-cli
2 - Authenticate with any command:
./gactions list --project [YOUT_PROJECT_ID]
3 - Download the json representation of your action:
./gactions get --project [YOUR_PROJECT_ID] --version draft > action.json
4 - Edit the JSON: extract the only object from its array and remove the nested "googleFulfillments" object:
"googleFulfillments": [
{
"endpoint": {
"baseUrl": "[URL]"
},
"name": "actions.fulfillment.devices"
}
],
5 - Delete the brackets "[ ]" at the top and end of the file. Only one language can be activated at a time; delete any unnecessary data from the action.json file. The file should look like this, with its parameters:
{
  "accountLinking": {
    "accessTokenUrl": "xxxx",
    "assertionTypes": [
      "ID_TOKEN"
    ],
    "authenticationUrl": "xxx",
    "clientId": "xxx",
    "clientSecret": "xxxx",
    "grantType": "AUTH_CODE"
  },
  "actions": [
    {
      "description": "Smart home action for project xxxxxxx",
      "fulfillment": {
        "conversationName": "AoGSmartHomeConversation_xxxxxx"
      },
      "name": "actions.devices"
    }
  ],
  "conversations": {
    "AoGSmartHomeConversation_xxxxxxxx": {
      "name": "",
      "url": "xxxxxxx"
    }
  },
  "locale": "en",
  "manifest": {
    "category": "xxx",
    "companyName": "xxx",
    "contactEmail": "xxx",
    "displayName": "xxx",
    "largeLandscapeLogoUrl": "xxxxxx",
    "longDescription": "xxxx",
    "privacyUrl": "xxx",
    "shortDescription": "xxxx",
    "smallSquareLogoUrl": "xxxx",
    "termsOfServiceUrl": "xxxxx",
    "testingInstructions": "xxxxx"
  }
}
6 - If you have updated the URL of the fulfillment, authentication, or token, go to the Google Actions Console and update its entry there;
7 - Push your fixed action into test:
./gactions test --project [YOUR_PROJECT_ID] --action_package ./action.json
This replaces the step "Click Simulator under TEST" in the Google Assistant manual setup. It worked for me!
More help here: https://community.home-assistant.io/t/google-assistant-trouble-shooting/99223/142

Jelastic - using private repository in JPS

Is there a way to use private Docker repository images when launching a new environment using JPS?
From the Marketplace, I can add Docker containers from a private repository and launch them, no problem there. But even when the image has been added to the Marketplace, a new environment launched using JPS cannot find the image: "adding privateRepo/image:latest node to env-xxxx | Image not found. Please double-check your entries"
[edit]
Below is a simple example JPS to start from. The karppo/testing image is on hub.docker.com as a private repository, and I would like to launch it using JPS.
{
  "jpsType": "install",
  "description": {
    "text": "repo testing qwe",
    "short": "repo testing qwe"
  },
  "name": "repo testing",
  "success": {
    "text": "repo testing ok"
  },
  "nodes": [
    {
      "image": "karppo/testing",
      "count": 1,
      "cloudlets": 2,
      "nodeGroup": "purkka",
      "displayName": "purkka"
    }
  ]
}
Got a bit of help with this. The thing I was looking for is "registry".
{
  "jpsType": "install",
  "description": {
    "text": "repo testing qwe",
    "short": "repo testing qwe"
  },
  "name": "repo testing",
  "success": {
    "text": "repo testing ok"
  },
  "nodes": [
    {
      "image": "karppo/testing",
      "registry": {
        "user": "username",
        "password": "*******************",
        "url": "registry-1.docker.io"
      },
      "count": 1,
      "cloudlets": 2,
      "nodeGroup": "purkka",
      "displayName": "purkka"
    }
  ]
}
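Note that registry-1.docker.io is Docker Hub's registry endpoint. For a self-hosted private registry, the same block should work with url pointed at that registry instead (myregistry.example.com below is a hypothetical hostname):
"registry": {
  "user": "username",
  "password": "*******************",
  "url": "myregistry.example.com"
}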

Unable to create OpenWhisk trigger on Ubuntu local machine with /whisk.system/alarms/alarm feed

I was able to install the system package for alarms successfully, mostly following the link https://github.com/apache/incubator-openwhisk-package-alarms/issues/51#issuecomment-294010619
So, I get the following:
bin/wsk package get --summary /whisk.system/alarms --insecure
package /whisk.system/alarms: Alarms and periodic utility
(parameters: *apihost, *cron, *trigger_payload)
feed /whisk.system/alarms/alarm: Fire trigger when alarm occurs
(parameters: none defined)
Features like actions, triggers, and rules are working on my local OpenWhisk installation.
I am running the command to create a trigger as follows:
bin/wsk trigger create convertTriggerPeriodic --feed /whisk.system/alarms/alarm -p cron "*/9 * * * * *" -p trigger_payload "{\"name\":\"Odin\",\"place\":\"Asgard\"}" -p maxTriggers 6 --insecure
ok: invoked /whisk.system/alarms/alarm with id d5879ab1c97745c9879ab1c977c5c967
{
  "activationId": "d5879ab1c97745c9879ab1c977c5c967",
  "annotations": [
    {
      "key": "limits",
      "value": {
        "logs": 10,
        "memory": 256,
        "timeout": 60000
      }
    },
    {
      "key": "path",
      "value": "whisk.system/alarms/alarm"
    }
  ],
  "duration": 6402,
  "end": 1508984964595,
  "logs": [],
  "name": "alarm",
  "namespace": "guest",
  "publish": false,
  "response": {
    "result": {
      "error": {
        "code": 30810,
        "error": "There was an error processing your request."
      }
    },
    "status": "application error",
    "success": false
  },
  "start": 1508984958193,
  "subject": "guest",
  "version": "0.0.2"
}
ok: invoked /whisk.system/alarms/alarm with id 4fd67308821e4e0b967308821e4e0bdb
{
  "activationId": "4fd67308821e4e0b967308821e4e0bdb",
  "annotations": [
    {
      "key": "limits",
      "value": {
        "logs": 10,
        "memory": 256,
        "timeout": 60000
      }
    },
    {
      "key": "path",
      "value": "whisk.system/alarms/alarm"
    }
  ],
  "duration": 4432,
  "end": 1508984969257,
  "logs": [],
  "name": "alarm",
  "namespace": "guest",
  "publish": false,
  "response": {
    "result": {
      "error": {
        "code": 30822,
        "error": "There was an error processing your request."
      }
    },
    "status": "application error",
    "success": false
  },
  "start": 1508984964825,
  "subject": "guest",
  "version": "0.0.2"
}
ok: deleted trigger convertTriggerPeriodic
Run 'wsk --help' for usage.
It is running the trigger twice. Each time, it reports the error "There was an error processing your request." Then it deletes the trigger.
So there is no way I can associate a rule/action with the trigger.
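For context, once the trigger survives, I would wire it to an action with a rule along these lines (myAction is a hypothetical action name):
wsk rule create alarmRule convertTriggerPeriodic myAction --insecure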
It looks like the alarms action was not installed properly. The directions listed in https://github.com/apache/incubator-openwhisk-package-alarms/issues/51 still work for running the alarms Docker container, but are out of date for installing the action. Please see the comment I made on July 21 (https://github.com/apache/incubator-openwhisk-package-alarms/issues/51#issuecomment-317007147) in that issue: the parameters for installCatalog.sh have changed. If you are having trouble following the install steps in that issue, you can also check out the comment I left on August 9th (https://github.com/apache/incubator-openwhisk-package-alarms/issues/51#issuecomment-321320242). It contains a link to some Ansible code I wrote to handle the install for you.
