Issue with single quotes running Azure CLI command - bash

My script snippet is below.
The end goal is to create an Azure DevOps variable group and inject key-value pairs from another variable group into it (the newly created Azure DevOps variable group):
set -x
echo "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" | az devops login --organization https://dev.azure.com/tempcompenergydev
az devops configure --defaults organization=https://dev.azure.com/tempcompenergydev project=Discovery
export target_backend=automation
export target_backend="tempcomp.Sales.Configuration.Spa ${target_backend}"
export new_env="abc"
values=(addressSearchBaseUrl addressSearchSubscriptionKey cacheUrl calendarApiUrl checkoutBffApiUrl cpCode)
az_create_options=""
for ptr in "${values[@]}"
do
    result=$(
        az pipelines variable-group list --group-name "${target_backend}" | jq '.[0].variables.'${ptr}'.value'
    )
    printf "%s\t%s\t%d\n" "$ptr" "$result" $?
    # add the variable and value to the array
    az_create_options="${az_create_options} ${ptr}=${result}"
done
az pipelines variable-group create \
--name "test ${new_env}" \
--variables "${az_create_options}"
However, when the above command executes, I get the following unexpected output:
+ az pipelines variable-group create --name 'test abc' --variables ' addressSearchBaseUrl="https://qtruat-api.platform.tempcomp.com.au/shared" addressSearchSubscriptionKey="xxxxxxxxxxxxxxxxxxx" cacheUrl="https://tempcompdstqtruat.digital.tempcomp.com.au/app/config" calendarApiUrl="https://qtruat-api.platform.tempcomp.com.au/sales/calendar/v1;rev=deadpool/AvailableDates/EnergyMovers/" checkoutBffApiUrl="https://qtruat-api.platform.tempcomp.com.au/sales/checkout-experience/v1;rev=deadpool/" cpCode="1067076"'
cpCode "1067076" 0
WARNING: Command group 'pipelines variable-group' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
{
"authorized": false,
"description": null,
"id": 1572,
"name": "test abc",
"providerData": null,
"type": "Vsts",
"variables": {
"addressSearchBaseUrl": {
"isSecret": null,
"value": "\"https://qtruat-api.platform.tempcomp.com.au/shared\" addressSearchSubscriptionKey=\"xxxxxxxxxxxxxxxxxxxxxxxxx\" cacheUrl=\"https://tempcompdstqtruat.digital.tempcomp.com.au/app/config\" calendarApiUrl=\"https://qtruat-api.platform.tempcomp.com.au/sales/calendar/v1;rev=deadpool/AvailableDates/EnergyMovers/\" checkoutBffApiUrl=\"https://qtruat-api.platform.tempcomp.com.au/sales/checkout-experience/v1;rev=deadpool/\" cpCode=\"1067076\""
}
}
}
##[section]Finishing: Bash Script
On a side note, if I run the command manually, I do get the correct response. Example below:
az pipelines variable-group create --name "test abc" --variables addressSearchBaseUr="https://qtruat-api.platform.tempcomp.com.au/shared" addressSearchSubscriptionKey="xxxxxxxxxxxxxxxxxxxxxxxxx" cacheUrl="https://tempcompdstqtruat.digital.tempcomp.com.au/app/config" calendarApiUrl="https://qtruat-api.platform.tempcomp.com.au/sales/calendar/v1;rev=deadpool/AvailableDates/EnergyMovers/" checkoutBffApiUrl="https://qtruat-api.platform.tempcomp.com.au/sales/checkout-experience/v1;rev=deadpool/" cpCode="1067076"
Output:
+ az pipelines variable-group create --name 'test abc' --variables addressSearchBaseUr=https://qtruat-api.platform.tempcomp.com.au/shared addressSearchSubscriptionKey=xxxxxxxxxxxxxxxxxxx cacheUrl=https://tempcompdstqtruat.digital.tempcomp.com.au/app/config 'calendarApiUrl=https://qtruat-api.platform.tempcomp.com.au/sales/calendar/v1;rev=deadpool/AvailableDates/EnergyMovers/' 'checkoutBffApiUrl=https://qtruat-api.platform.tempcomp.com.au/sales/checkout-experience/v1;rev=deadpool/' cpCode=1067076
WARNING: Command group 'pipelines variable-group' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
{
"authorized": false,
"description": null,
"id": 1573,
"name": "test abc",
"providerData": null,
"type": "Vsts",
"variables": {
"addressSearchBaseUr": {
"isSecret": null,
"value": "https://qtruat-api.platform.tempcomp.com.au/shared"
},
"addressSearchSubscriptionKey": {
"isSecret": null,
"value": "xxxxxxxxxx"
},
"cacheUrl": {
"isSecret": null,
"value": "https://tempcompdstqtruat.digital.tempcomp.com.au/app/config"
},
"calendarApiUrl": {
"isSecret": null,
"value": "https://qtruat-api.platform.tempcomp.com.au/sales/calendar/v1;rev=deadpool/AvailableDates/EnergyMovers/"
},
"checkoutBffApiUrl": {
"isSecret": null,
"value": "https://qtruat-api.platform.tempcomp.com.au/sales/checkout-experience/v1;rev=deadpool/"
},
"cpCode": {
"isSecret": null,
"value": "1067076"
}
}
}
##[section]Finishing: Bash Script

Work Around
I am not a bash pro, but I found a solution that should work for you. I think the core of the issue is that when you print out your array there, the whole set of --variables arguments is treated as one long string. So you're getting this...
az pipelines variable-group create --name "test ${new_env}" --variables ' addressSearchBaseUrl="val" addressSearchSubscriptionKey="val" ...'
instead of this...
az pipelines variable-group create --name "test ${new_env}" --variables addressSearchBaseUrl="val" addressSearchSubscriptionKey="val" ...
Bad: I think you could get around this using eval instead of printing out arguments but using eval in almost any situation is advised against.
Good: Instead, I thought this problem could be tackled a different way. Instead of doing a batch where all variables are created at once, this script will copy the desired variables to a new group one at a time:
set -x
echo "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" | az devops login --organization https://dev.azure.com/tempcompenergydev
az devops configure --defaults organization=https://dev.azure.com/tempcompenergydev project=Discovery
export target_backend=automation
export target_backend="tempcomp.Sales.Configuration.Spa ${target_backend}"
export new_env="abc"
values=(addressSearchBaseUrl addressSearchSubscriptionKey cacheUrl calendarApiUrl checkoutBffApiUrl cpCode)
az_create_options=()
for ptr in "${values[@]}"
do
    result=$( az pipelines variable-group list --group-name "${target_backend}" | jq '.[0].variables.'${ptr}'.value' )
    echo "$result"
    # The az command always wraps the response in double quotes; remove them.
    stripQuote=$( echo "$result" | sed 's/^.//;s/.$//' )
    printf "%s\t%s\t%d\n" "$ptr" "$stripQuote" $?
    # Add the variable and value to the array.
    # Adding this way ensures each key|value pair gets added as a separate element.
    az_create_options+=("${ptr}|${stripQuote}")
    # ^ Note on this: the | is used because there needs to be a way to separate
    # the key from the value. What you're really trying to do here is an
    # associative array, but not all versions of bash support that out of the
    # box. This is a bit of a hack to get the same result. CAUTION: if a
    # variable value contains a | character, it will cause problems during
    # the copy.
done
# Create the new group and save the group id (it would be best to first check whether a group by this name already exists).
groupId=$( az pipelines variable-group create --name "test ${new_env}" --variables bootstrap="start" | jq '.id' )
for var in "${az_create_options[@]}"
do
    # Split the key|value pair at the | character and create the variable in the new group.
    # A check could also be added here for whether the variable already exists, if you want to use this to create or update an existing group.
    arrVar=(${var//|/ })
    echo "Parsing variable ${arrVar[0]} with val of ${arrVar[1]}"
    az pipelines variable-group variable create \
        --id "${groupId}" \
        --name "${arrVar[0]}" \
        --value "${arrVar[1]}"
done
# This is needed because az won't let you create a group in the first place
# without a --variables argument, so we create a dummy variable and then delete it.
az pipelines variable-group variable delete --id "${groupId}" --name "bootstrap" --yes
Notes to call out
I would suggest using an Associative Array instead of the array I listed here if possible. However, Associative Arrays are only supported in bash v4 or higher. Use bash --version to see if you'd be able to use them. See this guide as an example of some ways you could work with those if you're able.
If using my method, be wary that any | character in a variable value that's being copied will cause the script to fail. You may need to pick a different delimiter to split on.
The output of the az pipelines variable-group list command will give the values of the variables wrapped in ". If you try to turn around and throw the exact result you get from the list at a variable create command, you will get a bunch of variables in your group with values like
{
"variable": {
"isSecret": null,
"value": "\"value\""
}
}
instead of
{
"variable": {
"isSecret": null,
"value": "value"
}
}
The az command is dumb and won't let you create a new variable group in the first place without a --variables argument. So I had it create a dummy var bootstrap="start" and then delete it.
As I mentioned, there may be a better way to print out the array that I'm missing. Comments on my post are encouraged if you know a better way.
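One such better way, sketched here with stubbed values (the real pairs would come from the az/jq calls above): keep each KEY=VALUE pair as its own element of an indexed array, then expand the array with "${arr[@]}" so every element reaches the command as a separate argument, with no eval and no delimiter hack needed.

```shell
# Each pair is one array element; nothing is ever joined into a single string.
az_create_options=()
az_create_options+=("cacheUrl=https://example.com/app/config")  # stubbed value
az_create_options+=("cpCode=1067076")                           # stubbed value

# "${az_create_options[@]}" expands to one word per element, exactly as if the
# pairs had been typed out by hand, e.g.:
#   az pipelines variable-group create --name "test ${new_env}" \
#       --variables "${az_create_options[@]}"
printf '<%s>\n' "${az_create_options[@]}"
```

Note this only removes the word-splitting problem; it still assumes the values themselves contain nothing the az parser treats specially.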

This might help....
There is an open issue about how Azure hosted agents authenticate back to ADO using the az pipelines command.
I feel this is related, but if it's not, feel free to respond and I'll remove the answer.

Related

Replace string with Bash variable in jq command

I realize this is a simple question but I haven't been able to find the answer. Thank you to anyone who may be able to help me understand what I am doing wrong.
Goal: Search and replace a string in a specific key in a JSON file with a string in a Bash variable using jq.
For example, in the following JSON file:
"name": "Welcome - https://docs.mysite.com/",
would become
"name": "Welcome",
Input (file.json)
[
{
"url": "https://docs.mysite.com",
"name": "Welcome - https://docs.mysite.com/",
"Ocurrences": "679"
},
{
"url": "https://docs.mysite.com",
"name": "Welcome",
"Ocurrences": "382"
}
]
Failing script (using variable)
siteUrl="docs.mysite.com"
jq --arg siteUrl "$siteUrl" '.[].name|= (gsub(" - https://$siteUrl/"; ""))' file.json > file1.json
Desired output (file1.json)
[
{
"url": "https://docs.mysite.com",
"name": "Welcome",
"Ocurrences": "679"
},
{
"url": "https://docs.mysite.com",
"name": "Welcome",
"Ocurrences": "382"
}
]
I've tried various iterations on removing quotes, changing between ' and ", and adding and removing backslashes.
Successful script (not using variable)
jq '.[].name|= (gsub(" - https://docs.mysite.com/"; ""))' file.json > file1.json
More specifically, if it matters, I am processing an export of a website's usage data from Azure App Insights. Unfortunately, the same page may be assigned different names. I sum the Ocurrences of the two objects with the newly identical url later. If it is better to fix this in App Insights I am grateful for that insight, although I know Bash better than Kusto queries. I am grateful for any help or direction.
Almost. Variables are not automatically expanded within a string. You must interpolate them explicitly with \(…):
jq --arg siteUrl 'docs.mysite.com' '.[].name |= (gsub(" - https://\($siteUrl)/"; ""))' file.json
Alternatively, detect a suffix match and extract the prefix by slicing:
jq --arg siteUrl 'docs.mysite.com' '" - https://\($siteUrl)/" as $suffix | (.[].name | select(endswith($suffix))) |= .[:$suffix|-length]' file.json
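To see the first fix end to end, here is a sketch run against an inlined, trimmed copy of the sample data (so it needs no file on disk):

```shell
# jq interpolates --arg variables with \(...), not with $var inside a string.
siteUrl="docs.mysite.com"
sample='[{"url":"https://docs.mysite.com","name":"Welcome - https://docs.mysite.com/","Ocurrences":"679"}]'

echo "$sample" |
  jq --arg siteUrl "$siteUrl" '.[].name |= gsub(" - https://\($siteUrl)/"; "")'
```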

Passing json to aws glue create-job after replacement done using jq

I have the following bash script that I execute in order to create new Glue Job via CLI:
#!/usr/bin/env bash
set -e
NAME=$1
PROFILE=$2
SCRIPT_LOCATION='s3://bucket/scripts/'$1'.py'
echo [*]--- Creating new job on AWS
aws glue create-job --profile $PROFILE --name $NAME --cli-input-json | jq '.Command.ScriptLocation = '\"$SCRIPT_LOCATION\"'' ./resources/config.json
I'm using jq as I need one of the values to be replaced at runtime before I pass the .json as the --cli-input-json argument. How can I pass the JSON with the replaced value to this command? As of now, it just prints out the JSON content (although with the value already replaced).
Running the command above causes the following error:
[*]--- Creating new job on AWS
{
"Description": "Template for Glue Job",
"LogUri": "",
"Role": "arn:aws:iam::11111111111:role/role",
"ExecutionProperty": {
"MaxConcurrentRuns": 1
},
"Command": {
"Name": "glueetl",
"ScriptLocation": "s3://bucket/scripts/script.py",
"PythonVersion": "3"
},
"DefaultArguments": {
"--TempDir": "s3://temp/admin/",
"--job-bookmark-option": "job-bookmark-disable",
"--enable-metrics": "",
"--enable-glue-datacatalog": "",
"--enable-continuous-cloudwatch-log": "",
"--enable-spark-ui": "true",
"--spark-event-logs-path": "s3://assets/sparkHistoryLogs/"
},
"NonOverridableArguments": {
"KeyName": ""
},
"MaxRetries": 0,
"AllocatedCapacity": 0,
"Timeout": 2880,
"MaxCapacity": 0,
"Tags": {
"KeyName": ""
},
"NotificationProperty": {
"NotifyDelayAfter": 60
},
"GlueVersion": "3.0",
"NumberOfWorkers": 2,
"WorkerType": "G.1X"
}
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws.exe: error: argument --cli-input-json: expected one argument
The command line
aws glue create-job --profile $PROFILE --name $NAME --cli-input-json | jq '.Command.ScriptLocation = '\"$SCRIPT_LOCATION\"'' ./resources/config.json
executes the command
aws glue create-job --profile $PROFILE --name $NAME --cli-input-json,
takes its standard output and uses it as input to
jq '.Command.ScriptLocation = '\"$SCRIPT_LOCATION\"'' ./resources/config.json
(which will ignore the input and read from the file given as argument). Please also note that blanks or spaces in $SCRIPT_LOCATION will break your script, because it is not quoted (your quotes are off).
To use the output of one command in the argument list of another command, you must use Command Substitution: outer_command --some-arg "$(inner_command)".
So your command should become:
aws glue create-job --profile $PROFILE --name $NAME --cli-input-json "$(jq '.Command.ScriptLocation = "'"$SCRIPT_LOCATION"'"' ./resources/config.json)"
# or simplified with only double quotes:
aws glue create-job --profile $PROFILE --name $NAME --cli-input-json "$(jq ".Command.ScriptLocation = \"$SCRIPT_LOCATION\"" ./resources/config.json)"
See https://superuser.com/questions/1306071/aws-cli-using-cli-input-json-in-a-pipeline for additional examples.
Although, I have to admit I am not 100% certain that the JSON content can be passed directly on the command line. From looking at the docs and some official examples, it looks like this parameter expects a file name, not a JSON document's content. So it could be possible that your command in fact needs to be:
# if "-" filename is specially handled:
jq ".Command.ScriptLocation = \"$SCRIPT_LOCATION\"" ./resources/config.json | aws glue create-job --profile $PROFILE --name $NAME --cli-input-json -
# "-" filename not recognized:
jq ".Command.ScriptLocation = \"$SCRIPT_LOCATION\"" ./resources/config.json > ./resources/config.replaced.json && aws glue create-job --profile $PROFILE --name $NAME --cli-input-json file://./resources/config.replaced.json
Let us know which one worked.
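As an aside, the double quotes around the command substitution matter as much as the substitution itself. A tiny self-contained illustration (printf stands in for jq, and the multi-line string for the JSON document):

```shell
# A stand-in for a multi-line JSON document produced by a command.
json=$(printf '{\n  "a": 1\n}')

# Quoted: the newlines survive and the whole document is ONE argument.
set -- "$json"
echo "quoted arg count: $#"      # 1

# Unquoted: word splitting breaks the document into several arguments.
set -- $json
echo "unquoted arg count: $#"    # 4
```

This is why the examples above write `--cli-input-json "$(jq …)"` and not `--cli-input-json $(jq …)`.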

pass arguments of make commands

I have a sequence of make commands to upload a zip file to an S3 bucket and then update the lambda function, reading that S3 file as source code. Once I update the lambda function, I wish to publish it, and after publishing, I want to attach an event to that lambda function using EventBridge.
I can do most of these commands automatically using make. For example:
clean:
	@rm unwanted_build_files.zip

build-lambda-pkg:
	mkdir pkg
	cd pkg && docker run #something something
	cd pkg && zip -9qr build.zip
	cp pkg/build.zip .
	rm pkg

upload-s3:
	aws s3api put-object --bucket my_bucket \
		--key build.zip --body build.zip

update-lambda:
	aws lambda update-function-code --function-name my_lambda \
		--s3-bucket my_bucket \
		--s3-key build.zip

publish-lambda:
	aws lambda publish-version --function-name my_lambda
## I can get the "Arn" value from the publish-lambda command. publish-lambda
## returns a json (or I would say it prints a json type structure on cmd)
## which has one key as "FunctionArn"

attach-event:
	aws events put-targets --rule rstats-post-explaination-at-10pm-ist \
		--targets "Id"="1","Arn"="arn:aws:lambda:::function/my_lambda/version_number"

## the following combines the above commands into a single command
build-n-update: clean build-lambda-pkg upload-s3 update-lambda
I am stuck at the last step i.e. to combine and include publish-lambda and attach-event in the build-n-update command. The problem is I am unable to pass argument from previous command to next command. I will try to explain it better:
publish-lambda prints a json style output on terminal:
{
"FunctionName": "my_lambda",
"FunctionArn": "arn:aws:lambda:us-east-2:12345:function:my_lambda:5",
"Runtime": "python3.6",
"Role": "arn:aws:iam::12345:role/my_role",
"Handler": "lambda_function.lambda_handler",
"CodeSize": 62403592,
"Description": "",
"Timeout": 180,
"MemorySize": 512,
"LastModified": "2021-02-28T17:34:04.374+0000",
"CodeSha256": "ErfsYHVMFCQBg4iXx5ev9Z0U=",
"Version": "5",
"Environment": {
"Variables": {
"PATH": "/var/task/bin",
"PYTHONPATH": "/var/task/src:/var/task/lib"
}
},
"TracingConfig": {
"Mode": "PassThrough"
},
"RevisionId": "49b5-acdd-c1032aa16bfb",
"State": "Active",
"LastUpdateStatus": "Successful"
}
I wish to extract function arn from the above output stored in key "FunctionArn" and use it in the next command i.e. attach-event as attach-event has a --targets argument which takes the "Arn" of last published function.
Is it possible to do in single command?
I have tried to experiment a bit as follows:
build-n-update: clean build-lambda-pkg upload-s3 update-lambda
	make publish-lambda | xargs jq .FunctionArn -r {}
But this throws an error:
jq: Unknown option --function-name
Please help
Well, running:
make publish-lambda | xargs jq .FunctionArn -r {}
will print the command to be run, then the output of the command (run it yourself from you shell prompt and see). Of course, jq cannot parse the command line make prints.
Anyway, what would be the goal of this? You'd just print the function name to stdout and it wouldn't do you any good.
You basically have two choices: one is to combine the two commands into a single make recipe, so you can capture the information you need in a shell variable:
build-n-update: clean build-lambda-pkg upload-s3 update-lambda
	func=$$(aws lambda publish-version --function-name my_lambda \
		| jq .FunctionArn -r); \
	aws events put-targets --rule rstats-post-explaination-at-10pm-ist \
		--targets "Id"="1","Arn"="$$func"
The other alternative is to redirect the output of publish-version to a file, then parse that file in the attach-event target recipe:
publish-lambda:
aws lambda publish-version --function-name my_lambda > publish.json
attach-event:
aws events put-targets --rule rstats-post-explaination-at-10pm-ist \
--targets "Id"="1","Arn"="$$(jq .FunctionArn -r publish.json)"
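To see the second variant's extraction step in isolation, here is a sketch against a stubbed publish.json (the real file would be written by aws lambda publish-version; the ARN below is the sample value from the question):

```shell
# Create a stand-in for the file the publish-lambda recipe would write.
cat > publish.json <<'EOF'
{
  "FunctionName": "my_lambda",
  "FunctionArn": "arn:aws:lambda:us-east-2:12345:function:my_lambda:5",
  "Version": "5"
}
EOF

# The same jq extraction the attach-event recipe performs.
arn=$(jq -r .FunctionArn publish.json)
echo "$arn"   # arn:aws:lambda:us-east-2:12345:function:my_lambda:5
```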

Trying to verify jq command output equal string or string has more than one occurrence (AWS ELB instances state query)

I'm trying to check that all instances attached to an AWS ELB are in the "InService" state.
For that, I created an AWS CLI command to check the status of the instances.
The problem is that the JSON output returns the status of both instances,
so it is not trivial to examine the output as I wish.
When I run the command:
aws elb describe-instance-health --load-balancer-name ELB-NAME | jq -r '.[] | .[] | .State'
The output is:
InService
InService
The complete JSON is:
{
"InstanceStates": [
{
"InstanceId": "i-0cc1e6d50ccbXXXXX",
"State": "InService",
"ReasonCode": "N/A",
"Description": "N/A"
},
{
"InstanceId": "i-0fc21ddf457eXXXXX",
"State": "InService",
"ReasonCode": "N/A",
"Description": "N/A"
}
]
}
What I've done so far is creating that one liner shell command:
export STR=$'InService\nInService'
if aws elb describe-instance-health --load-balancer-name ELB-NAME | jq -r '.[] | .[] | .State' | grep -q "$STR"; then echo 'yes'; fi
But I get "yes" as long as "InService" appears at least once in the command output.
Is there a way I can get TRUE/YES only if I get "InService" twice in the output,
or any other way to determine that this is indeed what I got in return?
Without seeing an informative sample of the JSON it's not clear what the best solution would be, but the following meets the functional requirements as I understand them, without requiring any further post-processing:
jq -r '
def count(stream): reduce stream as $s (0; .+1);
if count(.[][] | select(.State == "InService")) > 1 then "yes" else empty end
'
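For reference, piping the question's sample document (trimmed to the fields the filter inspects) through this filter prints yes precisely because both instances match; with only one InService instance the count would be 1 and the output empty:

```shell
# Sample input trimmed to the relevant fields.
json='{"InstanceStates":[{"State":"InService"},{"State":"InService"}]}'

echo "$json" | jq -r '
  def count(stream): reduce stream as $s (0; .+1);
  if count(.[][] | select(.State == "InService")) > 1 then "yes" else empty end
'
```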

Shell command to return value in json output

How do I return a particular value using a shell command?
In the following example I would like the query to return the value of "StackStatus", which is "CREATE_COMPLETE".
Here is the command:
aws cloudformation describe-stacks --stack-name stackname
Here is the output:
{
"Stacks": [{
"StackId": "arn:aws:cloudformation:ap-southeast-2:64560756805470:stack/stackname/8c8e3330-9f35-1er6-902e-50fae94f3fs42",
"Description": "Creates base IAM roles and policies for platform management",
"Parameters": [{
"ParameterValue": "64560756805470",
"ParameterKey": "PlatformManagementAccount"
}],
"Tags": [],
"CreationTime": "2016-10-31T06:45:02.305Z",
"Capabilities": [
"CAPABILITY_IAM"
],
"StackName": "stackname",
"NotificationARNs": [],
"StackStatus": "CREATE_COMPLETE",
"DisableRollback": false
}]
}
The AWS CLI supports the --query option to extract parts of the response. Alternatively, you can pipe the output to another command-line tool, jq, to do a similar query.
In aws notation, to get the 1st result:
aws cloudformation describe-stacks --stack-name stackname --query 'Stacks[0].StackStatus' --output text
Based on the above output, Stacks is an array of objects (key/value pairs), hence the [0] to get the 1st element of the array; .StackStatus is then a key in that object containing a string value. The --output text option simply presents the result as a plain text value instead of a JSON-looking object.
Edited per Charles comment.
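For comparison, here is the equivalent jq pipeline, shown against a stub of the describe-stacks response so it can run without AWS credentials:

```shell
# Stub of the relevant part of the describe-stacks response.
response='{"Stacks":[{"StackName":"stackname","StackStatus":"CREATE_COMPLETE"}]}'

# jq equivalent of --query 'Stacks[0].StackStatus' --output text
status=$(echo "$response" | jq -r '.Stacks[0].StackStatus')
echo "$status"   # CREATE_COMPLETE
```

The -r flag makes jq print the raw string rather than a double-quoted JSON value, matching what --output text gives you.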