Bash - Set Azure Variable From Bash Var or AZ Command

I can't set a variable to anything other than a raw value. The docs (and another set) don't really help with this.
Context of how the variable is defined:
jobs:
- job: "Do things"
  variables:
    STORAGE_ACCOUNT_NAME: ''
  steps:
  - script: # do stuff here
This works fine:
echo '##vso[task.setvariable variable=STORAGE_ACCOUNT_NAME]bob'
However, if I run the code below, STORAGE_ACCOUNT_NAME is null:
name_of_storage_account_to_release_to=$(az resource list --tag is_live=false --query [0].name --out tsv)
echo '##vso[task.setvariable variable=STORAGE_ACCOUNT_NAME]$name_of_storage_account_to_release_to'
This also fails:
echo '##vso[task.setvariable variable=STORAGE_ACCOUNT_NAME]$(az resource list --tag is_live=false --query [0].name --out tsv)'
Looks like it should be the simplest thing possible, but I can't figure out the syntax. Note that I am sure my fetching commands work, because I can echo the result of:
name_of_storage_account_to_release_to=$(az resource list --tag is_live=false --query [0].name --out tsv)
and it works just fine. It's setting the Azure variable that's the problem.

You can use double quotes, which allow variable expansion:
echo "##vso[task.setvariable variable=STORAGE_ACCOUNT_NAME]$name_of_storage_account_to_release_to"

Related

Run aws cli looping over names from a text file - parameter store

I'm trying to run an awscli command for multiple resources as a loop in a bash script.
For example:
aws ssm get-parameters --name "NAME1", "NAME2", "NAME3"
I've added all the parameter names into a text file. How do I run the CLI command against each name in the file?
Here is my script:
AWS_PARAM="aws ssm get-parameters --name" $FILE
FILE="parameters.txt"
for list in $FILE; do
  $AWS_PARAM $list
done
The expected output should run the CLI on all the names in the file.
I know the CLI is expecting the "name" of the parameter store. I'm hoping someone can help with looping the names from the list and running the CLI.
Thank you!
Here's an example of how to iterate over the parameter names and log the output to one file:
#!/bin/bash
# Note: get-parameters takes the plural --names flag (get-parameter takes --name)
AWS_PARAM="aws ssm get-parameters --names"
input="input.txt"
output="output.log"
echo > "$output"    # reset the log file before appending
while IFS= read -r line
do
  echo "$line"
  $AWS_PARAM "$line" >> "$output"
done < "$input"

Error when trying to get-iam-policy gcloud

I'm trying to get an IAM policy from some specific list of projects in CSV file using this bash script:
#! /bin/bash
echo "Getting IAM list from accounts:"
sleep 4
while read -r projectId || [ -n $projectId ]
do
  gcloud projects get-iam-policy ${projectId}
  echo $projectId
done < NonBillingAccountGCP.csv
But I'm getting this error:
ERROR: (gcloud.projects.get-iam-policy) INVALID_ARGUMENT: Request contains an invalid argument.
<project-ID-from-csv>
If I'm running this script using the project-id it does work and print all IAM policies.
Any idea?
Thanks!
I suspect the error results from the first line heading (PROJECT_ID or similar) in your CSV.
You can use awk to drop the first line; here's a slightly cleaner variant:
FILE="NonBillingAccountGCP.cs"
PROJECTS=$(awk "NR>1" ${FILE})
for PROJECT in ${PROJECTS}
do
echo ${PROJECT}
gcloud projects get-iam-policy ${PROJECT}
done
This format also allows you to compose gcloud projects list:
PROJECTS=$(gcloud projects list \
  --filter=... \
  --format="value(projectId)")

Expand environment variable inside container on Docker command line

Suppose that I create a Dockerfile that just runs an echo command:
FROM alpine
ENTRYPOINT [ "echo" ]
and that I build it like this:
docker build -t my_echo .
If I run docker run --rm my_echo test it will output test as expected.
But how can I run the command to display an environment variable that is inside the container?
Example:
docker run --rm --env MYVAR=foo my_echo ???
How to access the $MYVAR variable that is in the container to display foo by replacing the ??? part of that command?
Note:
This is a simplified version of my real use case. My real use case is a WP-CLI Docker image that I built with a Dockerfile. It has the wp-cli command as the ENTRYPOINT.
I am trying to run a container based on this image to update a WordPress parameter with an environment variable. My command without Docker is wp-cli option update siteurl "http://example.com" where http://example.com would be in an environment variable.
This is the command I am trying to run (wp_cli is the name of my container):
docker run --rm --env WEBSITE_URL="http://example.com" wp_cli option update siteurl ???
It's possible to have the argument that immediately follows ["bash", "-c"] itself be a shell script that looks for sigils to replace. For example, consider the following script, which I'm going to call argEnvSubst:
#!/usr/bin/env bash
args=( "$#" ) # collect all arguments into a single array
for idx in "${!args[#]}"; do # iterate over the indices of that array...
arg=${args[$idx]} # ...and collect the associated values.
if [[ $arg =~ ^#ENV[.](.*)#$ ]]; then # if we have a value that matches a pattern...
varname=${BASH_REMATCH[1]} # extract the variable name from that pattern
args[$idx]=${!varname} # and replace the value with a lookup result
fi
done
exec "${args[#]}" # run our resulting array as a command.
Thus, argEnvSubst "echo" "#ENV.foobar#" will replace #ENV.foobar# with the value of the environment variable named foobar before it invokes echo.
While I would strongly suggest injecting this into your Dockerfile as a separate script and naming that script as your ENTRYPOINT, it's possible to do it in-line:
ENTRYPOINT [ "bash", "-c", "args=(\"$#\"); for idx in \"${!args[#]}\"; do arg=${args[$idx]}; if [[ $arg =~ ^#ENV[.](.*)#$ ]]; then varname=${BASH_REMATCH[1]}; args[$idx]=${!varname}; fi; done; \"${args[#]}\"", "_" ]
...such that you can then invoke:
docker run --rm --env WEBSITE_URL="http://example.com" \
wp_cli option update siteurl '#ENV.WEBSITE_URL#'
Note the use of bash -- this means a stock alpine image (which ships only busybox ash) isn't sufficient.
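As a sketch of the separate-script route suggested above (the file name, install path, and base image here are placeholders, not taken from the question):
# Dockerfile sketch -- the base image must ship bash, per the note above
FROM your-wp-cli-base-image
COPY argEnvSubst.sh /usr/local/bin/argEnvSubst
RUN chmod +x /usr/local/bin/argEnvSubst
ENTRYPOINT [ "argEnvSubst", "wp-cli" ]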

Show error message on null return for AWS CLI describe-instances

I've got a bash script calling for the IPs of various instances based on a parameter from the user. Right now if their query doesn't match the script doesn't return anything at all, not even null. I'd love to incorporate some kind of error handling to prompt the user to retry. This could be anything from an inbuilt AWS function to a custom error message, I'm not picky.
My script is as follows;
#!/usr/bin/env bash
set -e
#READ ARGUMENTS PASSED IN - expects stack name
if [ "$#" != 1 ]; then
echo "Illegal number of parameters. Expecting 1: stack name"
exit 1
fi
name=$1
aws ec2 describe-instances --query "Reservations[].Instances[].[PublicIpAddress,Tags[?Key=='Name'].Value]" --filter Name=tag:Name,Values=${name} --output text
If it succeeds I'll get something like
00.00.00.000
name-of-instance
but if it fails I get nothing.
Is there a way to prompt the user or otherwise show an error message if an aws describe-instances returns no matches?
output=$(aws ec2 describe-instances --query "Reservations[].Instances[].[PublicIpAddress,Tags[?Key=='Name'].Value]" --filter Name=tag:Name,Values=${name} --output text)
if [ -n "$output" ]; then
  echo "$output"
else
  echo "No such instance $name exists"
fi
Capture the output into a variable first like so:
output=$(aws ec2 describe-instances --query "Reservations[].Instances[].[PublicIpAddress,Tags[?Key=='Name'].Value]" --filter Name=tag:Name,Values=${name} --output text)
Then check the content of output. If there is something there just echo it to the screen. If there isn't, show your custom error message.
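Stitched back into the original script, the whole thing would look roughly like this:
#!/usr/bin/env bash
set -e
# READ ARGUMENTS PASSED IN - expects stack name
if [ "$#" != 1 ]; then
  echo "Illegal number of parameters. Expecting 1: stack name"
  exit 1
fi
name=$1
# Capture the query result instead of printing it directly
output=$(aws ec2 describe-instances \
  --query "Reservations[].Instances[].[PublicIpAddress,Tags[?Key=='Name'].Value]" \
  --filter Name=tag:Name,Values="${name}" --output text)
# Show the result if anything came back, otherwise a friendly error
if [ -n "$output" ]; then
  echo "$output"
else
  echo "No such instance $name exists"
fi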

How to escape space in bash script from inline if?

I know that similar questions have been asked and answered before on stackoverflow (for example here and here) but so far I haven't been able to figure it out for my particular case.
I'm trying to create a script that adds the -v flag only if the variable something is equal to "true" (what I'm trying to do is to mount the current folder as a volume located at /src in the Docker container):
docker run --name image-name `if [ "${something}" == "true" ]; then echo "-v $PWD:/src"; fi` ....
The problem is that $PWD may contain spaces and if so my script won't work. I've also tried assigning "$PWD" to an intermediate variable but it still doesn't work:
temp="$PWD"
docker run --name image-name `if [ "${something}" == "true" ]; then echo "-v $temp:/src"; fi` ....
If I run:
docker run --name image-name -v "$PWD":/src ....
from plain bash (without using my script) then everything works.
Does anyone know how to solve this?
Use an array.
docker_args=()
if something; then
  docker_args+=( -v "$PWD:/src" )
fi
docker run --blah "${docker_args[@]}" …
Don't have arrays? Use set (in a function, so it doesn't affect outer scope).
Generally:
knacker() {
  if something; then
    set -- -v "$PWD:/src" "$@"
  fi
  crocker "$@"
}
knacker run --blah
But some commands (like docker, git, etc) need special treatment because of their two-part command structure.
slacker() {
  local cmd="$1"
  shift
  if something; then
    set -- -v "$PWD:/src" "$@"
  fi
  docker "$cmd" "$@"
}
slacker run --blah
Try this (using the array way):
declare -a cmd=()
cmd+=(docker run --name image-name)
if [ "${something}" = "true" ]
then
cmd+=(-v "$PWD:/src")
fi
"${cmd[#]}"
