I have two shell scripts that run sequentially, one after the other.
Every parameter is the same except BUCKETNAME.
Is there any way to refactor this so that I can run it with a single command?
Here are both commands that I am running.
Command 1
jsonDumpFL()
{
cat <<EOF
{
"QUEUEURL":"",
"BUCKETREGION":"us-east-1",
"FLAGFILE":"",
"FTPUSERID":"pcfp-test",
"FTPPATH":"/PCFP/Incr1",
"FTPPASSWORD":"pcfp-test",
"PARAMETERSTOREREGION":"us-east-1",
"ISFTP2S3":"false",
"FTPSERVER":"11.11.11.11",
"BUCKETNAME":"FinancialLineItem/FINALSPARK",
"QUEUEREGION":"",
"ISSFTPENABLED":"false",
"LOCALPATH":"path"
}
EOF
}
aws apigateway test-invoke-method --rest-api-id int1234udj --resource-id 1asde1 --http-method POST --body "$(jsonDumpFL)"
Command 2
jsonDumpSEG()
{
cat <<EOF
{
"QUEUEURL":"",
"BUCKETREGION":"us-east-1",
"FLAGFILE":"",
"FTPUSERID":"pcfp-test",
"FTPPATH":"/PCFP/Incr1",
"FTPPASSWORD":"pcfp-test",
"PARAMETERSTOREREGION":"us-east-1",
"ISFTP2S3":"false",
"FTPSERVER":"11.11.11.11",
"BUCKETNAME":"Segments/FINALSPARK",
"QUEUEREGION":"",
"ISSFTPENABLED":"false",
"LOCALPATH":"path"
}
EOF
}
aws apigateway test-invoke-method --rest-api-id int1234udj --resource-id 1asde1 --http-method POST --body "$(jsonDumpSEG)"
Simply refactor your function to take one argument, the value of BUCKETNAME, and rename the function so it is no longer tied to a specific bucket:
jsonDump()
{
cat <<-EOF
{
"QUEUEURL":"",
"BUCKETREGION":"us-east-1",
"FLAGFILE":"",
"FTPUSERID":"pcfp-test",
"FTPPATH":"/PCFP/Incr1",
"FTPPASSWORD":"pcfp-test",
"PARAMETERSTOREREGION":"us-east-1",
"ISFTP2S3":"false",
"FTPSERVER":"11.11.11.11",
"BUCKETNAME":"$1",
"QUEUEREGION":"",
"ISSFTPENABLED":"false",
"LOCALPATH":"path"
}
EOF
}
and now call your function as
"$(jsonDump "FinancialLineItem/FINALSPARK")"
or as
"$(jsonDump "Segments/FINALSPARK")"
jq is a better option for creating dynamic JSON, as it ensures your parameter will be correctly quoted.
jsonDump () {
jq -n --arg bn "$1" '{
QUEUEURL: "",
BUCKETREGION: "us-east-1",
FLAGFILE: "",
FTPUSERID: "pcfp-test",
FTPPATH: "/PCFP/Incr1",
FTPPASSWORD: "pcfp-test",
PARAMETERSTOREREGION: "us-east-1",
ISFTP2S3: "false",
FTPSERVER: "11.11.11.11",
BUCKETNAME: $bn,
QUEUEREGION: "",
ISSFTPENABLED: "false",
LOCALPATH: "path"
}'
}
(It also lets you drop the quotes around the object keys if they don't contain any "special" characters.)
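As a quick sanity check of the quoting benefit, even a contrived bucket name containing a double quote comes out as valid JSON:
jsonDump 'weird"bucket/name'
# ...
#   "BUCKETNAME": "weird\"bucket/name",
# ...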
Related
I have the following bash script that I execute in order to create a new Glue job via the CLI:
#!/usr/bin/env bash
set -e
NAME=$1
PROFILE=$2
SCRIPT_LOCATION='s3://bucket/scripts/'$1'.py'
echo [*]--- Creating new job on AWS
aws glue create-job --profile $PROFILE --name $NAME --cli-input-json | jq '.Command.ScriptLocation = '\"$SCRIPT_LOCATION\"'' ./resources/config.json
I'm using jq because I need one of the values to be replaced at runtime before I pass the .json as the --cli-input-json argument. How can I pass the JSON with the replaced value to this command? As of now, it just prints out the JSON content (although with the value already replaced).
Running the command above causes the following error:
[*]--- Creating new job on AWS
{
"Description": "Template for Glue Job",
"LogUri": "",
"Role": "arn:aws:iam::11111111111:role/role",
"ExecutionProperty": {
"MaxConcurrentRuns": 1
},
"Command": {
"Name": "glueetl",
"ScriptLocation": "s3://bucket/scripts/script.py",
"PythonVersion": "3"
},
"DefaultArguments": {
"--TempDir": "s3://temp/admin/",
"--job-bookmark-option": "job-bookmark-disable",
"--enable-metrics": "",
"--enable-glue-datacatalog": "",
"--enable-continuous-cloudwatch-log": "",
"--enable-spark-ui": "true",
"--spark-event-logs-path": "s3://assets/sparkHistoryLogs/"
},
"NonOverridableArguments": {
"KeyName": ""
},
"MaxRetries": 0,
"AllocatedCapacity": 0,
"Timeout": 2880,
"MaxCapacity": 0,
"Tags": {
"KeyName": ""
},
"NotificationProperty": {
"NotifyDelayAfter": 60
},
"GlueVersion": "3.0",
"NumberOfWorkers": 2,
"WorkerType": "G.1X"
}
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws.exe: error: argument --cli-input-json: expected one argument
The command line
aws glue create-job --profile $PROFILE --name $NAME --cli-input-json | jq '.Command.ScriptLocation = '\"$SCRIPT_LOCATION\"'' ./resources/config.json
executes the command
aws glue create-job --profile $PROFILE --name $NAME --cli-input-json,
takes its standard output and uses it as input to
jq '.Command.ScriptLocation = '\"$SCRIPT_LOCATION\"'' ./resources/config.json
(which will ignore the piped input and read from the file given as an argument instead). Please also note that blanks or spaces in $SCRIPT_LOCATION will break your script, because the variable is not quoted (your quotes are misplaced).
To use the output of one command in the argument list of another command, you must use Command Substitution: outer_command --some-arg "$(inner_command)".
So your command should become:
aws glue create-job --profile $PROFILE --name $NAME --cli-input-json "$(jq '.Command.ScriptLocation = "'"$SCRIPT_LOCATION"'"' ./resources/config.json)"
# or simplified with only double quotes:
aws glue create-job --profile $PROFILE --name $NAME --cli-input-json "$(jq ".Command.ScriptLocation = \"$SCRIPT_LOCATION\"" ./resources/config.json)"
See https://superuser.com/questions/1306071/aws-cli-using-cli-input-json-in-a-pipeline for additional examples.
That said, I have to admit I am not 100% certain that the JSON content can be passed directly on the command line. From looking at the docs and some official examples, it looks like this parameter expects a file name, not a JSON document's content. So it could be that your command in fact needs to be:
# if "-" filename is specially handled:
jq ".Command.ScriptLocation = \"$SCRIPT_LOCATION\"" ./resources/config.json | aws glue create-job --profile $PROFILE --name $NAME --cli-input-json -
# "-" filename not recognized:
jq ".Command.ScriptLocation = \"$SCRIPT_LOCATION\"" ./resources/config.json > ./resources/config.replaced.json && aws glue create-job --profile $PROFILE --name $NAME --cli-input-json file://./resources/config.replaced.json
Let us know which one worked.
I have a Bash script that uses jq and a for loop to iterate through an array, grab each directory that I need monitored by Amazon CloudWatch, and stick it into the agent's JSON configuration file. However, for some reason, only the last item in the array is actually being written. I assume something in my logic is overwriting my changes instead of appending them, but I can't quite figure out the fix.
Here is my code:
logPaths=("/shared/logs/application/application1"
"/shared/logs/application/application2"
"/shared/logs/application/application3")
# Loop through array to create stanzas and export them to the temp file
for i in "${logPaths[@]}"; do
jq "
.logs.logs_collected.files.collect_list[-1] |= . + {
\"file_path\": \"$i\",
\"log_group_name\": \"/aws-account/aws/ec2/syslogs\",
\"log_stream_name\": \"$definedElsewhere\",
\"timestamp_format\": \"%b %d %H:%M:%S\"}" \
/opt/aws/amazon-cloudwatch-agent/amazon-cloudwatch-agent.json \
> /opt/aws/amazon-cloudwatch-agent/amazon-cloudwatch-agent.json.tmp \
&& cp /opt/aws/amazon-cloudwatch-agent/amazon-cloudwatch-agent.json.tmp /opt/aws/amazon-cloudwatch-agent/amazon-cloudwatch-agent.json
done
When this is executed, and I look at amazon-cloudwatch-agent.json, only a record for the 3rd entry in the array (/application3) appears in the configuration file.
I can't reproduce your bug, but that's beside the point: if this were written correctly there wouldn't be any loop needed at all. (For what it's worth, a plausible culprit is that collect_list[-1] |= . + {...} merges the new fields into the existing last element rather than appending a new one, so each iteration overwrites the fields written by the previous one.)
Using jq --args allows the logPaths array to be passed in as a set of positional arguments, and referred to from within the relevant jq code as $ARGS.positional. Thus:
#!/usr/bin/env bash
logPaths=("/shared/logs/application/application1"
"/shared/logs/application/application2"
"/shared/logs/application/application3")
# Make up some sample input, since the OP didn't provide any
cat >old.json <<'EOF'
{
"logs": {
"logs_collected": {
"files": {
"collect_list": [
{"test": "make sure this old data is retained"}
]
}
}
}
}
EOF
jq --arg definedElsewhere "Other Value" '
($ARGS.positional | [
.[] | { "file_path": .,
"log_group_name": "/aws-account/aws/ec2/syslogs",
"log_stream_name": $definedElsewhere,
"timestamp_format": "%b %d %H:%M:%S"
}]) as $newLogSinks |
.logs.logs_collected.files.collect_list += $newLogSinks
' --args "${logPaths[@]}" <old.json >new.json && mv new.json old.json
...which correctly emits as output:
{
"logs": {
"logs_collected": {
"files": {
"collect_list": [
{
"test": "make sure this old data is retained"
},
{
"file_path": "/shared/logs/application/application1",
"log_group_name": "/aws-account/aws/ec2/syslogs",
"log_stream_name": "Other Value",
"timestamp_format": "%b %d %H:%M:%S"
},
{
"file_path": "/shared/logs/application/application2",
"log_group_name": "/aws-account/aws/ec2/syslogs",
"log_stream_name": "Other Value",
"timestamp_format": "%b %d %H:%M:%S"
},
{
"file_path": "/shared/logs/application/application3",
"log_group_name": "/aws-account/aws/ec2/syslogs",
"log_stream_name": "Other Value",
"timestamp_format": "%b %d %H:%M:%S"
}
]
}
}
}
}
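One more note on --args: jq treats everything after it as positional string arguments, so it must come after the filter and any other options, as it does above.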
I have a sequence of make commands to upload a zip file to an S3 bucket and then update the Lambda function that reads that S3 file as its source code. Once I update the Lambda function, I wish to publish it, and after publishing it, I want to attach an event to that Lambda function using EventBridge.
I can do most of these commands automatically using make. For example:
clean:
@rm unwanted_build_files.zip
build-lambda-pkg:
mkdir pkg
cd pkg && docker run #something something
cd pkg && zip -9qr build.zip
cp pkg/build.zip .
rm pkg
upload-s3:
aws s3api put-object --bucket my_bucket \
--key build.zip --body build.zip
update-lambda:
aws lambda update-function-code --function-name my_lambda \
--s3-bucket my_bucket \
--s3-key build.zip
publish-lambda:
aws lambda publish-version --function-name my_lambda
## I can get the "Arn" value from the publish-lambda command. publish-lambda
## returns JSON (or rather, prints a JSON-like structure to the terminal) which has a "FunctionArn" key.
attach-event:
aws events put-targets --rule rstats-post-explaination-at-10pm-ist \
--targets "Id"="1","Arn"="arn:aws:lambda:::function/my_lambda/version_number"
## the following combines the above commands into a single command
build-n-update: clean build-lambda-pkg upload-s3 update-lambda
I am stuck at the last step, i.e. including publish-lambda and attach-event in the build-n-update command. The problem is that I am unable to pass a value from the previous command to the next one. I will try to explain it better:
publish-lambda prints JSON-style output on the terminal:
{
"FunctionName": "my_lambda",
"FunctionArn": "arn:aws:lambda:us-east-2:12345:function:my_lambda:5",
"Runtime": "python3.6",
"Role": "arn:aws:iam::12345:role/my_role",
"Handler": "lambda_function.lambda_handler",
"CodeSize": 62403592,
"Description": "",
"Timeout": 180,
"MemorySize": 512,
"LastModified": "2021-02-28T17:34:04.374+0000",
"CodeSha256": "ErfsYHVMFCQBg4iXx5ev9Z0U=",
"Version": "5",
"Environment": {
"Variables": {
"PATH": "/var/task/bin",
"PYTHONPATH": "/var/task/src:/var/task/lib"
}
},
"TracingConfig": {
"Mode": "PassThrough"
},
"RevisionId": "49b5-acdd-c1032aa16bfb",
"State": "Active",
"LastUpdateStatus": "Successful"
}
I wish to extract the function ARN stored in the "FunctionArn" key of the above output and use it in the next command, i.e. attach-event, since attach-event has a --targets argument which takes the "Arn" of the last published function.
Is it possible to do this in a single command?
I have experimented a bit, as follows:
build-n-update: clean build-lambda-pkg upload-s3 update-lambda
make publish-lambda | xargs jq .FunctionArn -r {}
But this throws an error:
jq: Unknown option --function-name
Please help
Well, running:
make publish-lambda | xargs jq .FunctionArn -r {}
will print the command to be run (make echoes each recipe line before executing it), then the output of that command (run it yourself from your shell prompt and see). xargs then splits all of that text into words and appends them to jq's argument list, which is why jq chokes on --function-name: it is not a jq option, and jq cannot parse the command line make prints.
Anyway, what would be the goal of this? You'd just print the function ARN to stdout, and it wouldn't do you any good.
You basically have two choices: one is to combine the two commands into a single make recipe, so you can capture the information you need in a shell variable:
build-n-update: clean build-lambda-pkg upload-s3 update-lambda
func=$$(aws lambda publish-version --function-name my_lambda \
| jq .FunctionArn -r); \
aws events put-targets --rule rstats-post-explaination-at-10pm-ist \
--targets "Id"="1","Arn"="$$func"
The other alternative is to redirect the output of publish-version to a file, then parse that file in the attach-event target recipe:
publish-lambda:
aws lambda publish-version --function-name my_lambda > publish.json
attach-event:
aws events put-targets --rule rstats-post-explaination-at-10pm-ist \
--targets "Id"="1","Arn"="$$(jq .FunctionArn -r publish.json)"
I want to create valid JSON using jq in bash.
Each time the bash script executes, it should add a new element to the existing JSON array, and if the file is empty it should create a new one.
I am using the following jq command to create my JSON (which is incomplete; please help me complete it):
jq -n -s '{service: $ARGS.named}' \
--arg transcationId $TRANSACTION_ID_METRIC '{"transcationId":"\($transcationId)"}' \
--arg name $REALPBPODDEFNAME '{"name ":"\($name )"}'\
--arg lintruntime $Cloudlintruntime '{"lintruntime":"\($lintruntime)"}' \
--arg status $EXITCODE '{"status":"\($status)"}' \
--arg buildtime $totaltime '{"buildtime":"\($buildtime)"}' >> Test.json
which is producing output like
{
"service": {
"transcationId": "12345",
"name": "sdsjkdjsk",
"lintruntime": "09",
"status": "0",
"buildtime": "9876"
}
}
{
"service": {
"transcationId": "123457",
"servicename": "sdsjkdjsk",
"lintruntime": "09",
"status": "0",
"buildtime": "9877"
}
}
but I don't want the output in this format.
On the first run, the JSON should be created like this. What should the jq command be to create the JSON below?
{
"ServiceData":{
"date":"30/1/2020",
"ServiceInfo":[
{
"transcationId":"20200129T130718Z",
"name":"MyService",
"lintruntime":"178",
"status":"0",
"buildtime":"3298"
}
]
}
}
and the next time I execute the bash script, an element should be added to the array like this. What is the jq command to get JSON in this format?
{
"ServiceData":{
"date":"30/1/2020",
"ServiceInfo":[
{
"transcationId":"20200129T130718Z",
"name":"MyService",
"lintruntime":"16",
"status":"0",
"buildtime":"3256"
},
{
"transcationId":"20200129T130717Z",
"name":"MyService",
"lintruntime":"16",
"status":"0",
"buildtime":"3256"
}
]
}
}
I also want the "date", "ServiceData", and "ServiceInfo" fields in my JSON, which are missing from my current one.
You don't give a separate filter to each --arg option; each --arg just defines a variable that can be used in the single filter argument. You simply want to add a new object to your input array. jq doesn't do in-place file editing, so you'll have to write to a temporary file and replace your original after the fact.
jq --arg transactionId "$TRANSACTION_ID_METRIC" \
--arg name "$REALPBPODDEFNAME" \
--arg lintruntime "$Cloudlintruntime" \
--arg status "$EXITCODE" \
--arg buildtime "$totaltime" \
'.ServiceData.ServiceInfo += [ {transactionID: $transactionId,
name: $name,
lintruntime: $lintruntime,
status: $status,
buildtime: $buildtime
}]' \
Test.json > tmp.json &&
mv tmp.json Test.json
Here's the same command, but using an array to store all the --arg options and a variable to store the filter so the command line is a little simpler. (You also don't need explicit line continuations inside an array definition.)
args=(
--arg transactionId "$TRANSACTION_ID_METRIC"
--arg name "$REALPBPODDEFNAME"
--arg lintruntime "$Cloudlintruntime"
--arg status "$EXITCODE"
--arg buildtime "$totaltime"
)
filter='.ServiceData.ServiceInfo += [
{
transactionID: $transactionId,
name: $name,
lintruntime: $lintruntime,
status: $status,
buildtime: $buildtime
}
]'
jq "${args[#]}" "$filter" Test.json > tmp.json && mv tmp.json Test.json
I can scriptably access, list, and post comments on Gerrit. All is good.
However, when I use my function below, I am always added as CC (carbon copy).
How can I remove myself or avoid being added as CC altogether?
# Usage: read a (multiline) comment from stdin and post it to the review named by the argument
function gerrit_send_comment
{
local sha=$1
function generate_json {
# Format description: http://gerrit.ci.kitenet.com/Documentation/rest-api-changes.html#review-input
# Heredoc's EOF must not be indented!
cat << EOF
{
"notify": "NONE",
"tag": "autogenerated:mpedbot",
"message": $(cat - | json_escape_to_single_string)
}
EOF
}
cat - \
| generate_json \
| ssh -p ${GERRIT_PORT} ${GERRIT_HOST} gerrit review $sha --json
}