change variable inside json file using bash [duplicate]

This question already has answers here:
How do I use sed to change my configuration files, with flexible keys and values?
(8 answers)
Closed 4 years ago.
I have a total of 3 environments (dev, stag, and prod), and every environment has a config.json like this:
{
  "braintree": {
    "merchantid": "MERCHANTID",
    "publickey": "PUBLICKEY",
    "privatekey": "PRIVATEKEY"
  },
  "karix": {
    "url": "URL",
    "pass": "PASS",
    "user": "USER"
  },
  "fikarix": {
    "source": "SOURCE"
  },
  "mailgun": {
    "api_key": "API_KEY",
    "domain": "DOMAIN",
    "apikey": "APIKEY"
  },
  "paymentrails": {
    "key": "KEY",
    "environment": "ENVIRONMENT",
    "secret": "SECRET"
  }
}
Now I want to convert it into the following for every environment, using a shell script.
dev environment config.json:
{
  "braintree": {
    "merchantid": "dev_MERCHANTID",
    "publickey": "dev_PUBLICKEY",
    "privatekey": "dev_PRIVATEKEY"
  },
  "karix": {
    "url": "dev_URL",
    "pass": "dev_PASS",
    "user": "dev_USER"
  },
  "fikarix": {
    "source": "dev_SOURCE"
  },
  "mailgun": {
    "api_key": "dev_API_KEY",
    "domain": "dev_DOMAIN",
    "apikey": "dev_APIKEY"
  },
  "paymentrails": {
    "key": "dev_KEY",
    "environment": "dev_ENVIRONMENT",
    "secret": "dev_SECRET"
  }
}
How can I do this using sed or any other solution?

Using sed you can do:
sed 's/: "/: "dev_/g' config.json
Edit: to edit the file in place, pass -i:
sed -i 's/: "/: "dev_/g' config.json
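If jq is available, a less fragile option than text substitution is to rewrite every string value in the JSON itself. A minimal sketch covering all three environments (assuming jq 1.6+, where walk is built in; the output file names are illustrative):
for env in dev stag prod; do
  # prepend "<env>_" to every string value in config.json
  jq --arg p "${env}_" 'walk(if type == "string" then $p + . else . end)' \
    config.json > "config.${env}.json"
done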

Related

Pass Step Function variable to AWS Glue Job Not Working

I'm trying to pass an AWS Step Function variable to a Glue Job parameter, similar to this:
aws-passing-job-parameters-value-to-glue-job-from-step-function
However, this is not working for me. The Glue job error message indicates that it's getting the passed variable name, not the actual value of the variable. Here's my Step Function code:
{
  "Comment": "Converts CSV files to parquet for a date range.",
  "StartAt": "ConfigureCount",
  "States": {
    "ConfigureCount": {
      "Type": "Pass",
      "Result": {
        "start": 201601,
        "end": 201602,
        "index": 201601
      },
      "ResultPath": "$.iterator",
      "Next": "Iterator"
    },
    "Iterator": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:eu-west-1:123456789:function:date-iterator",
      "ResultPath": "$.iterator",
      "Next": "IsCountReached"
    },
    "IsCountReached": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.iterator.continue",
          "BooleanEquals": true,
          "Next": "ConvertToParquet"
        }
      ],
      "OutputPath": "$.iterator",
      "Default": "Done"
    },
    "ConvertToParquet": {
      "Comment": "Your application logic, to run a specific number of times",
      "Type": "Task",
      "Resource": "arn:aws:states:::glue:startJobRun.sync",
      "Parameters": {
        "JobName": "convert-to-parquet",
        "Arguments": {
          "--DATE_RANGE": "$.iterator.index"
        }
      },
      "ResultPath": "$.iterator.index",
      "Next": "Iterator"
    },
    "Done": {
      "Type": "Pass",
      "End": true
    }
  }
}
The "Iterator" step calls a Lambda named "date-iterator", which returns JSON similar to the following:
{
  "start": "201601",
  "end": "201602",
  "index": "201601"
}
This was based on this article, so that I can loop through values: Iterating a Loop Using Lambda
My Step Function fails, saying "$.iterator.index" is not a valid date.
How do I pass this value, and not the variable name?
From the Amazon States Language spec (https://states-language.net/spec.html):
If any field within the Payload Template (however deeply nested) has a name ending with the characters ".$", its value is transformed according to rules below and the field is renamed to strip the ".$" suffix.
Based on that, adding .$ should solve your issue:
"Parameters": {
  "JobName": "convert-to-parquet",
  "Arguments": {
    "--DATE_RANGE.$": "$.iterator.index"
  }
},

Update yaml via bash

I would like to update a config.yaml file by inserting some configuration parameters via bash.
The file to be updated looks like:
{
  "log": [
    {
      "format": "plain",
      "level": "info",
      "output": "stderr"
    }
  ],
  "p2p": {
    "topics_of_interest": {
      "blocks": "normal",
      "messages": "low"
    },
    "trusted_peers": [
      {
        "address": "/ip4/13.230.137.72/tcp/3000",
        "id": "fe3332044877b2034c8632a08f08ee47f3fbea6c64165b3b"
      }
    ]
  },
  "rest": {
    "listen": "127.0.0.1:3100"
  }
}
And it needs to look like:
{
  "log": [
    {
      "format": "plain",
      "level": "info",
      "output": "stderr"
    }
  ],
  "storage": "./storage",
  "p2p": {
    "listen_address": "/ip4/0.0.0.0/tcp/3000",
    "public_address": "/ip4/0.0.0.0/tcp/3000",
    "topics_of_interest": {
      "blocks": "normal",
      "messages": "low"
    },
    "trusted_peers": [
      {
        "address": "/ip4/13.230.137.72/tcp/3000",
        "id": "fe3332044877b2034c8632a08f08ee47f3fbea6c64165b3b"
      }
    ]
  },
  "rest": {
    "listen": "127.0.0.1:3100"
  }
}
So I am adding, on the first level, "storage": "./storage", and on the second level, in the p2p section, "listen_address": "/ip4/0.0.0.0/tcp/3000" and "public_address": "/ip4/0.0.0.0/tcp/3000".
How do I do this with sed?
If you are certain that your YAML file is written in the JSON subset of YAML, you can use jq:
jq --arg a "/ip4/0.0.0.0/tcp/3000" \
  '.storage = "./storage" |
   .p2p += {listen_address: $a, public_address: $a}' config.yaml > tmp &&
mv tmp config.yaml

Run powershell command on Azure Windows via ARM template

I am trying to mount a data disk to a Windows VM on Azure through an ARM template, which also creates the VM. This is my ARM resource:
{
  "name": "[parameters('virtualMachineName')]",
  "type": "Microsoft.Compute/virtualMachines",
  "apiVersion": "2016-04-30-preview",
  "location": "[parameters('location')]",
  "dependsOn": [
    "[concat('Microsoft.Network/networkInterfaces/', parameters('networkInterfaceName'))]"
  ],
  "tags": {
    "Busuness Group": "[parameters('busunessGroup')]",
    "Role": "[parameters('role')]"
  },
  "properties": {
    "osProfile": {
      "computerName": "[parameters('virtualMachineName')]",
      "adminUsername": "[parameters('adminUsername')]",
      "adminPassword": "[parameters('adminPassword')]",
      "windowsConfiguration": {
        "provisionVmAgent": "true"
      }
    },
    "hardwareProfile": {
      "vmSize": "[parameters('virtualMachineSize')]"
    },
    "storageProfile": {
      "imageReference": {
        "publisher": "microsoft-ads",
        "offer": "standard-data-science-vm",
        "sku": "standard-data-science-vm",
        "version": "latest"
      },
      "dataDisks": [
        {
          "lun": 0,
          "createOption": "Empty",
          "caching": "None",
          "managedDisk": {
            "storageAccountType": "Premium_LRS"
          },
          "diskSizeGB": "[parameters('dataDiskSizeGB')]"
        }
      ]
    },
    "networkProfile": {
      "networkInterfaces": [
        {
          "id": "[resourceId('Microsoft.Network/networkInterfaces', parameters('networkInterfaceName'))]"
        }
      ]
    }
  },
  "plan": {
    "name": "standard-data-science-vm",
    "publisher": "microsoft-ads",
    "product": "standard-data-science-vm"
  },
  "resources": [
    {
      "type": "extensions",
      "name": "CustomScriptExtension",
      "apiVersion": "2015-06-15",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[parameters('virtualMachineName')]"
      ],
      "properties": {
        "publisher": "Microsoft.Compute",
        "type": "CustomScriptExtension",
        "typeHandlerVersion": "1.8",
        "autoUpgradeMinorVersion": true,
        "settings": {
          "fileUris": ["https://paste.fedoraproject.org/paste/FMoOq4E3sKoQzqB5Di0DcV5M1UNdIGYhyRLivL9gydE=/raw"]
        }
      }
    }
  ]
}
It failed with the following error:
{
  "code": "VMExtensionProvisioningError",
  "message": "VM has reported a failure when processing extension 'CustomScriptExtension'. Error message: \"Invalid Configuration - CommandToExecute is not specified in the configuration; it must be specified in either the protected or public configuration section\"."
}
I also tried passing the command directly:
"settings": {
  "commandToExecute": "Get-Disk | Where partitionstyle -eq 'raw' | Initialize-Disk -PartitionStyle MBR -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem NTFS -NewFileSystemLabel \"Data\" -Confirm:$false"
}
Neither worked. What am I doing wrong here?
So, you need to explicitly call powershell to use PowerShell, just like in the examples:
"commandToExecute": "[concat('powershell -command ', variables('command'))]"
You can attempt to paste your command directly, but due to all the quotes it won't parse properly, so save your command as a template variable and concat it like that.
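For reference, a minimal sketch of how the variable and the extension settings could fit together (the quoting around the pipeline is included in the variable so cmd.exe hands the whole thing to powershell intact; adjust the command to your needs):
"variables": {
  "command": "\"Get-Disk | Where-Object PartitionStyle -eq 'RAW' | Initialize-Disk -PartitionStyle MBR -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem NTFS -NewFileSystemLabel 'Data' -Confirm:$false\""
},
...
"settings": {
  "commandToExecute": "[concat('powershell -command ', variables('command'))]"
}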

jq: output array of json objects [duplicate]

This question already has an answer here:
How to convert a JSON object stream into an array with jq
(1 answer)
Closed 6 years ago.
Say I have the input:
{
  "name": "John",
  "email": "john#company.com"
}
{
  "name": "Brad",
  "email": "brad#company.com"
}
How do I get the output:
[
  {
    "name": "John",
    "email": "john#company.com"
  },
  {
    "name": "Brad",
    "email": "brad#company.com"
  }
]
I tried both:
jq '[. | {name, email}]'
and
jq '. | [{name, email}]'
which both gave me the output
[
  {
    "name": "John",
    "email": "john#company.com"
  }
]
[
  {
    "name": "Brad",
    "email": "brad#company.com"
  }
]
I also saw no option for array output in the documentation. Any help appreciated.
Use slurp mode:
--slurp/-s:
  Instead of running the filter for each JSON object in the input,
  read the entire input stream into a large array and run the filter
  just once.
$ jq -s '.' < tmp.json
[
  {
    "name": "John",
    "email": "john#company.com"
  },
  {
    "name": "Brad",
    "email": "brad#company.com"
  }
]
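If you'd rather not slurp, an equivalent (assuming jq 1.5+, where inputs is available) is to build the array explicitly under null input:
$ jq -n '[inputs]' tmp.json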

Need to execute a ruby file from Amazon web service Data pipeline

I have a Ruby file in my application, and I need to call and execute it as a background job from AWS Data Pipeline.
I have given the JSON file below:
{
  "objects": [
    {
      "id": "ScheduleId4",
      "startDateTime": "2013-08-01T00:00:00",
      "name": "schedule",
      "type": "Schedule",
      "period": "15 Minutes"
    },
    {
      "id": "DataNodeId2",
      "schedule": {
        "ref": "ScheduleId4"
      },
      "name": "Input",
      "directoryPath": "s3://pipeline_test/input/",
      "type": "S3DataNode"
    },
    {
      "id": "ActivityId1",
      "input": {
        "ref": "DataNodeId2"
      },
      "schedule": {
        "ref": "ScheduleId4"
      },
      "stdout": "s3://pipeline_test/logs",
      "scriptUri": "s3://pipeline_test/input/sample.sh",
      "name": "Shell",
      "runsOn": {
        "ref": "ResourceId5"
      },
      "stderr": "s3://pipeline_test/logs",
      "type": "ShellCommandActivity",
      "output": {
        "ref": "DataNodeId3"
      },
      "stage": "true"
    },
    {
      "terminateAfter": "1 Hours",
      "id": "ResourceId5",
      "schedule": {
        "ref": "ScheduleId4"
      },
      "name": "Resource1",
      "logUri": "s3://pipeline_test/logs/",
      "type": "Ec2Resource"
    },
    {
      "id": "Default",
      "scheduleType": "timeseries",
      "name": "Default",
      "role": "DataPipelineDefaultRole",
      "resourceRole": "DataPipelineDefaultResourceRole"
    },
    {
      "id": "DataNodeId3",
      "schedule": {
        "ref": "ScheduleId4"
      },
      "directoryPath": "s3://pipeline_test/output1/",
      "name": "Output",
      "type": "S3DataNode"
    }
  ]
}
sample.sh
echo "Hello"
ruby sample.rb
sample.rb
puts "Hello world"
I have given the correct path to the sample.sh file, but I still cannot tell whether sample.rb is being called or not.
Can anyone tell me the step-by-step procedure to follow, as I am a newbie to AWS Data Pipeline?
Help me solve it.
The default image launched by Data Pipeline does not actually have Ruby on it. You'll have to build your own image and install Ruby by hand first. Then reference that image in your Ec2Resource via its imageId field.
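If building a custom AMI is too heavy, another option is to install Ruby at the start of the shell activity itself. A minimal sketch, assuming the default Amazon Linux resource with yum and outbound network access (package availability may vary by image):
#!/bin/bash
# install Ruby on the fly, then run the script as before
sudo yum install -y ruby
echo "Hello"
# note: sample.rb must also be present on the instance (e.g. staged from S3)
ruby sample.rb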
