bash function with third arg is an array

I'm trying to write a bash function that does the following:
If there is no 3rd argument, run a command.
If there is a 3rd argument, take every argument from the 3rd one onward and run a command.
The problem I have is the last bit of the command, --capabilities CAPABILITY_IAM, in the else branch, which I don't want to have to pass in by hand every time I call the function with multiple parameters.
An error occurred (InsufficientCapabilitiesException) when calling the CreateStack operation: Requires capabilities : [CAPABILITY_NAMED_IAM]
// that means I need to pass in --capabilities CAPABILITY_IAM
Is there a way to tell bash: hey, take all the args from the 3rd one onward, then add --capabilities CAPABILITY_IAM after? In JavaScript I can do this:
function allTogetherNow(a, b, ...c) {
console.log(`${a}, ${b}, ${c}. Can I have a little more?`);
}
allTogetherNow('one', 'two', 'three', 'four')
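(For reference, bash's closest analogue to JavaScript rest parameters is the "${@:3}" slice, which the answers below lean on; a minimal sketch:)
allTogetherNow() {
  # "${*:3}" joins every positional parameter from the 3rd onward with spaces
  echo "$1, $2, ${*:3}. Can I have a little more?"
}
allTogetherNow one two three four  # prints: one, two, three four. Can I have a little more?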
Here's my function:
cloudformation_create() {
if [ -z "$3" ]; then
aws cloudformation create-stack --stack-name "$1" --template-body file://"$2" --capabilities CAPABILITY_IAM
else
aws cloudformation create-stack --stack-name "$1" --template-body file://"$2" --parameters "${#:3}" --capabilities CAPABILITY_IAM
fi
}
And the 3rd (and onward) parameters look like this when I don't use a bash function:
aws cloudformation create-stack --stack-name MY_STACK_NAME --template-body file://MY_FILE_NAME --parameters ParameterKey=KeyPairName,ParameterValue=TestKey ParameterKey=SubnetIDs,ParameterValue=SubnetID1 --capabilities CAPABILITY_IAM
Update 22 May 2019:
Following Dennis Williamson's answer below, I've tried:
Passing the parameters in the AWS way:
cloudformation_create STACK_NAME FILE_NAME ParameterKey=KeyPairName,ParameterValue=TestKey ParameterKey=SubnetIDs,ParameterValue=SubnetID1
Got error:
An error occurred (ValidationError) when calling the CreateStack operation: Parameters: [...] must have values
Pass in as a string:
cloudformation_create STACK_NAME FILE_NAME "ParameterKey=KeyPairName,ParameterValue=TestKey ParameterKey=SubnetIDs,ParameterValue=SubnetID1"
Got error:
An error occurred (ValidationError) when calling the CreateStack operation: ParameterValue for ... is required
Pass in without ParameterKey and ParameterValue:
cloudformation_create STACK_NAME FILE_NAME KeyPairName=TestKey SubnetIDs=SubnetID1
Got error:
Parameter validation failed:
Unknown parameter in Parameters[0]: "PARAM_NAME", must be one of: ParameterKey, ParameterValue, UsePreviousValue, ResolvedValue
// list of all the params with the above error
Pass in without ParameterKey and ParameterValue and as a string. Got error:
Parameter validation failed:
Unknown parameter in Parameters[0]: "PARAM_NAME", must be one of: ParameterKey, ParameterValue, UsePreviousValue, ResolvedValue
I tried Alex Harvey's answer and got this:
An error occurred (ValidationError) when calling the CreateStack operation: Template format error: unsupported structure.

Based on LeBlue's answer and a quick read of the docs, it looks like you need to build the argument to --parameters from the arguments to the function:
cloudformation_create () {
  local IFS=,
  local parameters="${*:3}"
  # if block not shown
  aws cloudformation ... --parameters "$parameters" ...
}
this presumes that your function is called like this:
cloudformation_create foo bar baz=123 qux=456
that is, with the key, value pairs already formed.
The snippet above works by setting IFS to a comma. Using $* inside quotes causes the elements contained in it to be joined using the first character of IFS. If you need to make use of the word splitting features in another part of your function, you may want to save the current value of IFS before changing it then restore it after the joining assignment.
As a result, the expansion of $parameters will be baz=123,qux=456
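Putting it together, a complete sketch of the function under that approach (flags copied from the question; the save/restore of IFS is only needed if later code in the function relies on default word splitting):
cloudformation_create() {
  local oldIFS=$IFS
  local IFS=,
  local parameters="${*:3}"  # joins args 3..N with commas, e.g. baz=123,qux=456
  IFS=$oldIFS
  if [ -z "$3" ]; then
    aws cloudformation create-stack --stack-name "$1" --template-body file://"$2" --capabilities CAPABILITY_IAM
  else
    aws cloudformation create-stack --stack-name "$1" --template-body file://"$2" --parameters "$parameters" --capabilities CAPABILITY_IAM
  fi
}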

I am not sure that answers your question but maybe it will help you.
$# is the number of parameters.
$@ will include all parameters and you can pass that.
#!/bin/bash
foo()
{
echo "params in foo: " $#
echo "p1: " $1
echo "p2: " $2
echo "p3: " $3
}
echo "number of paramters: " $#
foo $@ 3 # pass params and add one to the end
Call:
./test.sh 1 2
Output:
number of parameters: 2
params in foo: 3
p1: 1
p2: 2
p3: 3
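The same demo can be pointed at the "from the 3rd one onward" case with bash's slice expansion (a sketch):
slice_demo() {
  printf 'arg: %s\n' "${@:3}"  # one line per argument, starting at the 3rd
}
slice_demo 1 2 3 4  # prints "arg: 3" then "arg: 4"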

I suspect the parameter expansion is wrong, as --parameters probably needs to have one argument.
Either quote all arguments to cloudformation_create that need to end up as value for the --parameters flag:
cloudformation_create "the-stack" "the-filename" "all the parameters"
or rewrite the function to not expand into multiple arguments with "$*" (merge every arg into one)
cloudformation_create () {
...
else
aws cloudformation ... --parameters "${*:3}" --capabilities CAPABILITY_IAM
fi
}
This will preserve all values as one string/argument; both will translate to:
aws cloudformation ... --parameters "all other parameters" --capabilities CAPABILITY_IAM
as opposed to your version:
aws cloudformation ... --parameters "all" "other" "parameters" --capabilities CAPABILITY_IAM

First of all, thank you all for your help.
I've realised the issue (and my mistake):
AWS returned the error Requires capabilities : [CAPABILITY_NAMED_IAM], and my function has CAPABILITY_IAM.
Depending on whether the template creates IAM resources with custom names, either CAPABILITY_NAMED_IAM or CAPABILITY_IAM is required. I found the answer here helpful.
So in my case the bash function is good; for the template I was trying to create, I need to pass in --capabilities CAPABILITY_NAMED_IAM. I've tried it and it works.
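For completeness, a sketch of the original function with only the capability name changed (the "${@:3}" slice hands each parameter to --parameters as its own argument, which is what the CLI's shorthand syntax expects):
cloudformation_create() {
  if [ -z "$3" ]; then
    aws cloudformation create-stack --stack-name "$1" --template-body file://"$2" --capabilities CAPABILITY_NAMED_IAM
  else
    aws cloudformation create-stack --stack-name "$1" --template-body file://"$2" --parameters "${@:3}" --capabilities CAPABILITY_NAMED_IAM
  fi
}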

I would write it this way:
cloudformation_create() {
local stack_name=$1
local template_body=$2
shift 2 ; local parameters="$@"
local command=(aws cloudformation create-stack
--stack-name "$stack_name"
--template-body "file://$template_body")
[ ! -z "$parameters" ] && \
command+=(--parameters "$parameters")
command+=(--capabilities CAPABILITY_IAM)
"${command[@]}"
}
Note:
calling shift 2 shifts $3 down to $1, so that you can just use $@ as normal.
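A hypothetical invocation, for illustration (stack and template names invented here):
cloudformation_create my-stack template.yml \
  ParameterKey=KeyPairName,ParameterValue=TestKey \
  ParameterKey=SubnetIDs,ParameterValue=SubnetID1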

Related

Bash how to add conditional quoted argument '--pull "always"' to docker command

I am trying to conditionally add arguments to a docker call in a bash script, but docker says the flag is unknown, even though I can run the command verbatim by hand and it works.
I have tried a few strategies to add the arguments, including using a string instead of an array, and a substitution like the solution here (using ${array[@]/#/'--pull '}): https://stackoverflow.com/a/68675860/10542275
docker run --name application --pull "always" -p 3000:3000 -d private.docker.repository/group/application:version
This bash script
run() {
getDockerImageName "/group" "$PROJECT_NAME:$VERSION" "latest";
local imageName=${imageName};
local additionalRunParameters=${additionalRunParameters};
cd "$BASE_PATH/$PROJECT_NAME" || exit 1;
stopAnyRunning "$PROJECT_NAME";
echo docker run --name "$PROJECT_NAME" \
"${additionalRunParameters[#]}" \
-p 3000:3000 \
-d "$imageName";
// docker run --name application --pull "always" -p 3000:3000 -d private.docker.repository/group/application:version
docker run --name "$PROJECT_NAME" \
"${additionalRunParameters[#]}" \
-p 3000:3000 \
-d "$imageName";
//unknown flag: --pull "always"
}
The helper 'getDockerImageName'
# Gets the name of the docker image to use for deploy.
# $1 - The path to the image in the container registry
# $2 - The name of the image and the tag
# $3 - 'latest' if the deploy should use the container built by CI
export imageName="";
export additionalRunParameters=();
getDockerImageName() {
imageName="group/$2";
if [[ $3 == "latest" ]]; then
echo "Using docker image from CI...";
docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "https://$DOCKER_BASE_URL";
imageName="${DOCKER_BASE_URL}${1}/$2";
additionalRunParameters=('--pull "always"');
fi
}
Don't put code (such as arguments) in a variable. Basically, using an array is good, and you are almost doing that. This line -
local additionalRunParameters=${additionalRunParameters};
is probably what's causing you trouble, along with
additionalRunParameters=('--pull "always"');
which is embedding the spaces between what you seem to have meant to be two operands (the option and its argument), turning them into a single string that is an unrecognized garble. //unknown flag: --pull "always" is telling you the flag it's parsing is --pull "always", which is NOT the --pull flag docker DOES know, followed by an argument.
Also,
export additionalRunParameters=(); # nope
arrays don't really export. Take that out, it will only confuse someone.
A much simplified set of examples:
$: declare -f a b c
a ()
{
foo=('--pull "always"') # single value array
}
b ()
{
echo "1: \${foo}='${foo}' <= scalar, returns first element of the array";
echo "2: \"\${foo[#]}\"='${foo[#]}' <= returns entire array (be sure to put in quotes)";
echo "3: \"\${foo[1]}\"='${foo[1]}' <= indexed, returns only second element of array"
}
c ()
{
foo=(--pull "always") # two values in this array
}
$: a # sets ${foo[0]} to '--pull "always"'
$: b
1: ${foo}='--pull "always"' <= scalar, returns first element of the array
2: "${foo[#]}"='--pull "always"' <= returns entire array (be sure to put in quotes)
3: "${foo[1]}"='' <= indexed, returns only second element of array
$: c # sets ${foo[0]} to '--pull' and ${foo[1]} to "always"
$: b
1: ${foo}='--pull' <= scalar, returns first element of the array
2: "${foo[#]}"='--pull always' <= returns entire array (be sure to put in quotes)
3: "${foo[1]}"='always' <= indexed, returns only second element of array
So what you need is:
getDockerImageName(){
. . . # stuff
additionalRunParameters=( --pull "always" ); # no single quotes
}
and just take OUT
local additionalRunParameters=${additionalRunParameters}; # it's global, no need
You have one more issue though - "${additionalRunParameters[@]}" \ is good, as long as the array isn't empty. In your example it will apparently always be loaded with the same values, so I don't see why you are adding all this extra complication of putting it into a global array that gets loaded incidentally in another function... it seems like an antipattern. Just put the arguments you are universally enforcing anyway on the command itself.
However, on the possibility that you simplified some details out: if this array is ever empty, it's going to pass a literal quoted empty string as an argument on the command line, and you're likely to get something like the following error:
$: docker run --name foo "" -p 3000:3000 -d bar # docker doesn't understand the empty string
docker: invalid reference format.
Maybe, rather than
additionalRunParameters=( --pull "always" ); # no single quotes
and
"${additionalRunParameters[#]}" \
what you really want is
pull_always=true
and
${pull_always:+ --pull always } \
...with no quotes, so that if the var has been set (with anything) it evaluates to the desired result, but if it's unset and unquoted it evaluates to nothing and gets ignored, as nothing actually gets passed in - not even a null string.
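A condensed sketch of that alternative in context (variable names reused from the question, the flag value from the discussion):
pull_always=true  # set when the CI image should be pulled; leave unset otherwise
docker run --name "$PROJECT_NAME" \
  ${pull_always:+--pull always} \
  -p 3000:3000 \
  -d "$imageName"
# set:   the expansion yields --pull always (two words)
# unset: the expansion yields nothing at all - not even an empty string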
Good luck. Let us know if you need more help.

How to do a bash `for` loop in terraform templatefile?

I'm trying to include a bash script in an AWS SSM Document, via the Terraform templatefile function. In the aws:runShellScript section of the SSM document, I have a Bash for loop with an @ sign that seems to be creating an error during terraform validate.
Version of terraform: 0.13.5
Inside main.tf file:
resource "aws_ssm_document" "magical_document" {
name = "magical_ssm_doc"
document_type = "Command"
document_format = "YAML"
target_type = "/AWS::EC2::Instance"
content = templatefile(
"${path.module}/ssm-doc.yml",
{
Foo: var.foo
}
)
}
Inside my ssm-doc.yml file, I loop through an array:
for i in "$\{arr[#]\}"; do
if test -f "$i" ; then
echo "[monitor://$i]" >> $f
echo "disabled=0" >> $f
echo "index=$INDEX" >> $f
fi
done
Error:
Error: Error in function call
Call to function "templatefile" failed:
./ssm-doc.yml:1,18-19: Invalid character;
This character is not used within the language., and 1 other diagnostic(s).
I tried escaping the @ symbol, like \@, but it didn't help. How do I fix this?
Although the error points to the @ symbol as the cause, it's the ${ } that's causing the problem, because this is Terraform interpolation syntax, and it applies to templatefiles too. As the docs say:
The template syntax is the same as for string templates in the main Terraform language, including interpolation sequences delimited with ${ ... }.
And the way to escape interpolation syntax in Terraform is with a double dollar sign.
for i in "$${arr[#]}"; do
if test -f "$i" ; then
echo "[monitor://$i]" >> $f
echo "disabled=0" >> $f
echo "index=$INDEX" >> $f
fi
done
The interpolation syntax is useful with templatefile if you're trying to pass in an argument, such as, in the question Foo. This argument could be accessed within the yaml file as ${Foo}.
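For instance, a hypothetical fragment of ssm-doc.yml mixing the two: ${Foo} is filled in by templatefile, while the escaped $${arr[@]} reaches the instance's shell intact as ${arr[@]}:
mainSteps:
  - action: aws:runShellScript
    inputs:
      runCommand:
        - echo "value from Terraform: ${Foo}"
        - for i in "$${arr[@]}"; do echo "$i"; done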
By the way, although this article didn't give the answer to this exact issue, it helped me get a deeper appreciation for all the work Terraform is doing to handle different languages via the templatefile function. It had some cool tricks for doing replacements to escape for different scenarios.

How do you add spaces for aws cloudformation deploy --parameter-overrides and/or --tags?

I am trying to get spaces into the tags parameter for the aws cli and it works if I hardcode it but not if I use bash variables. What is going on and how do I fix it?
This works without spaces:
aws cloudformation deploy \
--template-file /path_to_template/template.json \
--stack-name my-new-stack \
--tags Key1=Value1 Key2=Value2
This works without spaces but with variables:
tags="Key1=Value1 Key2=Value2"
aws cloudformation deploy \
--template-file /path_to_template/template.json \
--stack-name my-new-stack \
--tags $tags
This works with spaces:
aws cloudformation deploy \
--template-file /path_to_template/template.json \
--stack-name my-new-stack \
--tags 'Key1=Value1' 'Key Two=Value2'
This does not work, spaces and variables:
tags="'Key1=Value1' 'Key Two=Value2'"
aws cloudformation deploy \
--template-file /path_to_template/template.json \
--stack-name my-new-stack \
--tags $tags
Attempting to fix bash expansion, also does not work, spaces and variables:
tags="'Key1=Value1' 'Key Two=Value2'"
aws cloudformation deploy \
--template-file /path_to_template/template.json \
--stack-name my-new-stack \
--tags "$tags"
Attempting to fix bash expansion, also does not work, spaces and variables:
tags="'Key1=Value1' 'Key Two=Value2'"
aws cloudformation deploy \
--template-file /path_to_template/template.json \
--stack-name my-new-stack \
--tags "$(printf '%q' $tags)"
Error:
Invalid parameter: Tags Reason: The given tag(s) contain invalid
characters (Service: AmazonSNS; Status Code: 400; Error Code:
InvalidParameter; Request ID
Would you please try:
tags=('Key1=Value1' 'Key Two=Value2')
aws cloudformation deploy \
--template-file /path_to_template/template.json \
--stack-name my-new-stack \
--tags "${tags[#]}"
Stealing some ideas from https://github.com/aws/aws-cli/issues/3274, I was able to get this working by doing the following:
deploy=(aws cloudformation deploy
...
--tags $(cat tags.json | jq '.[] | (.Key + "=" + .Value)'))
eval $(echo ${deploy[@]})
With a tags.json file structure of
[
{
"Key": "Name With Spaces",
"Value": "Value With Spaces"
},
{
"Key": "Foo",
"Value": "Bar"
}
]
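If you'd rather avoid eval, a sketch of the same idea that builds a real array with mapfile and jq -r (same tags.json assumed):
mapfile -t tags < <(jq -r '.[] | .Key + "=" + .Value' tags.json)
aws cloudformation deploy \
  --template-file /path_to_template/template.json \
  --stack-name my-new-stack \
  --tags "${tags[@]}"
# mapfile -t reads one array element per line, so spaces inside values survive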
Try this:
tags="'Key1=Value1' 'Key Two=Value2'"
aws cloudformation deploy \
--template-file /path_to_template/template.json \
--stack-name my-new-stack \
--tags "$tags"
#  ^ ^
#  double quotes
Learn how to quote properly in shell, it's very important:
"Double quote" every literal that contains spaces/metacharacters and every expansion: "$var", "$(command "$var")", "${array[@]}", "a & b". Use 'single quotes' for code or literal $'s: 'Costs $5 US', ssh host 'echo "$HOSTNAME"'. See
http://mywiki.wooledge.org/Quotes
http://mywiki.wooledge.org/Arguments
http://wiki.bash-hackers.org/syntax/words
As of 2022-02 this was still an issue, described here, here also, and a little here.
@esolomon is correct: you have to use array expansion. His answer works just fine:
deploy=(aws cloudformation deploy
...
--tags $(cat tags.json | jq '.[] | (.Key + "=" + .Value)'))
eval $(echo ${deploy[@]})
The actual problem is a result of how the shell environment (bin/bash here) interacts with the Python CLI executable's handling of values. Since aws cloudformation deploy does not normalize the input, but expects the shell to hand it an already-split array, this was causing my problem.
So, with the --debug flag turned on, my errors produced the first response below (the error), while the second response is the expected input into aws cloudformation deploy.
Error input:
2022-02-10 17:32:28,137 - MainThread - awscli.clidriver - DEBUG - Arguments entered to CLI: ['cloudformation', 'deploy', '--region', 'us-east-1', ..., '--parameter-overrides', 'PARAM1=dev PARAM2=blah', '--tags', "TAG1='Test Project' TAG2='blah'...", '--debug']
Expected input:
2022-02-10 17:39:40,390 - MainThread - awscli.clidriver - DEBUG - Arguments entered to CLI: ['cloudformation', 'deploy', '--region', 'us-east-1', ..., '--parameter-overrides', 'PARAM1=dev', 'PARAM2=blah', '--tags', "TAG2='Test Project'", 'TAG2=blah',..., '--debug']
I was unexpectedly sending in a string instead of an array of strings, and this resulted in several different errors depending on how I sent it:
example TAG: TAG1=Test Project
['Project'] value passed to --tags must be of format Key=Value
this means IFS needs to be set to something other than the default ' \t\n'; solution below
An error occurred (ValidationError) when calling the CreateChangeSet operation: 1 validation error detected: Value 'Test Project Tag2=Value2 ...' at 'tags.1.member.value' failed to satisfy constraint: Member must have length less than or equal to 256
the error starts after the first =; this error means that I am sending in one long string instead of array items, as happens when doing [*] instead of [@]: aws cloudformation deploy ... --tags "${TAGS[*]}" (diff between [*] and [@])
To fix this, the most important thing was that IFS needed to be set to anything other than ' \t\n'; secondly, I still needed to do array expansion with [@] and could not pass a string. The --parameter-overrides input did not have this problem for me, despite being loaded similarly, BECAUSE it did not contain a string with spaces.
This was my solution; my params and tags input is all over the place (spaces, sometimes arrays, bad indenting), thus the sed:
export IFS=$'\n'
# Build up the parameters and Tags
PARAMS=($(jq '.[] | .ParameterKey + "=" + if .ParameterValue|type=="array" then .ParameterValue | join(",") else .ParameterValue end ' parameters-${environment}.json \
| sed -e 's/"//g' \
| sed -e $'s/\r//g' | tr '\n' ' '))
TAGS=("$(jq -r '.[] | [.Key, .Value] | "\(.[0])=\(.[1])"' tags-common.json)")
TAGS=($TAGS "$(jq -r '.[] | [.Key, .Value] | "\(.[0])=\(.[1])"' tags-${environment}.json)")
aws cloudformation deploy \
--region "${REGION}" \
--no-fail-on-empty-changeset \
--template-file "stack-name-cfn-transform.yaml" \
--stack-name "stack-name-${environment}" \
--capabilities CAPABILITY_NAMED_IAM \
--parameter-overrides "${PARAMS[@]}" \
--tags "${TAGS[#]}" \
--profile "${PROFILE}"
parameters file
[
{
"ParameterKey": "Environment",
"ParameterValue": "dev"
}
]
tags file - both the common and environment-specific tag files have the same format
[
{
"Key": "TAG1",
"Value": "Test Project"
},
{
"Key": "Name With Spaces",
"Value": "Value With Spaces"
},
{
"Key": "Foo",
"Value": "Bar"
}
]
I resolved this scenario using the options below:
"scripts": { "invoke": "sam ... --parameter-overrides \"$(jq -j 'to_entries[] | \"\\(.key)='\\\\\\\"'\\(.value)'\\\\\\\"''\\ '\"' params.json)\"" }
Or
sam ... --parameter-overrides "$(jq -j 'to_entries[] | "\(.key)='\\\"'\(.value)'\\\"''\ '"' params.json)"

AWS S3: How to check if a file exists in a bucket using bash

I'd like to know if it's possible to check if there are certain files in a certain bucket.
This is what I've found:
Checking if a file is in a S3 bucket using the s3cmd
It should fix my problem, but for some reason it keeps returning that the file doesn't exist, while it does. This solution is also a little dated and doesn't use the doesObjectExist method.
Summary of all the methods that can be used in the Amazon S3 web service
This gives the syntax of how to use this method, but I can't seem to make it work.
Do they expect you to make a boolean variable to save the status of the method, or does the function directly give you an output / throw an error?
This is the code I'm currently using in my bash script:
existBool=doesObjectExist(${BucketName}, backup_${DomainName}_${CurrentDate}.zip)
if $existBool ; then
echo 'No worries, the file exists.'
fi
I tested it using only the name of the file, instead of giving the full path. But since the error I'm getting is a syntax error, I'm probably just using it wrong.
Hopefully someone can help me out and tell me what I'm doing wrong.
Edit:
I ended up looking for another way to do this since using doesObjectExist isn't the fastest or easiest.
Last time I saw performance comparisons, getObjectMetadata was the fastest way to check whether an object exists. Using the AWS CLI that would be the head-object method, for example:
aws s3api head-object --bucket www.codeengine.com --key index.html
which returns:
{
"AcceptRanges": "bytes",
"ContentType": "text/html; charset=utf-8",
"LastModified": "Sun, 08 Jan 2017 22:49:19 GMT",
"ContentLength": 38106,
"ContentEncoding": "gzip",
"ETag": "\"bda80810592763dcaa8627d44c2bf8bb\"",
"StorageClass": "REDUCED_REDUNDANCY",
"CacheControl": "no-cache, no-store",
"Metadata": {}
}
Following @DaveMaple's & @MichaelGlenn's answers, here is the condition I'm using:
aws s3api head-object --bucket <some_bucket> --key <some_key> || not_exist=true
if [ $not_exist ]; then
echo "it does not exist"
else
echo "it exists"
fi
Note that "aws s3 ls" does not quite work, even though the answer was accepted. It searches by prefix, not by a specific object key. I found this out the hard way when someone renamed a file by adding a '1' to the end of the filename, and the existence check would still return True.
(Tried to add this as a comment, but do not have enough rep yet.)
One simple way is using aws s3 ls
exists=$(aws s3 ls $path_to_file)
if [ -z "$exists" ]; then
echo "it does not exist"
else
echo "it exists"
fi
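Given the prefix caveat noted above, a hedged variant that anchors the match to the exact key name (assumes the object name contains no spaces):
name=$(basename "$path_to_file")
if aws s3 ls "$path_to_file" | awk '{print $4}' | grep -qxF "$name"; then
  echo "it exists"
else
  echo "it does not exist"
fi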
I usually use set -eufo pipefail and the following works better for me because I do not need to worry about unset variables or the entire script exiting.
object_exists=$(aws s3api head-object --bucket $bucket --key $key || true)
if [ -z "$object_exists" ]; then
echo "it does not exist"
else
echo "it exists"
fi
This statement will return a true or false response:
aws s3api list-objects-v2 \
--bucket <bucket_name> \
--query "contains(Contents[].Key, '<object_name>')"
So, in case of the example provided in the question:
aws s3api list-objects-v2 \
--bucket ${BucketName} \
--query "contains(Contents[].Key, 'backup_${DomainName}_${CurrentDate}.zip')"
I like this approach, because:
The --query option uses the JMESPath syntax for client-side filtering and it is well documented here how to use it.
Since the --query option is built into the aws cli, no additional dependencies need to be installed.
You can first run the command without the --query option, like:
aws s3api list-objects-v2 --bucket <bucket_name>
That returns a nicely formatted JSON, something like:
{
"Contents": [
{
"Key": "my_file_1.tar.gz",
"LastModified": "----",
"ETag": "\"-----\"",
"Size": -----,
"StorageClass": "------"
},
{
"Key": "my_file_2.txt",
"LastModified": "----",
"ETag": "\"----\"",
"Size": ----,
"StorageClass": "----"
},
...
]
}
This then allows you to design an appropriate query. In this case you want to check if the JSON contains a list Contents and that an item in that list has a Key equal to your file (object) name:
--query "contains(Contents[].Key, '<object_name>')"
A simpler solution, though not as sophisticated as the other aws s3api approaches, is to use the exit code:
aws s3 ls <full path to object>
Returns a non-zero return code if the object doesn't exist. 0 if it exists.
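In a script that might look like this (a sketch reusing the question's variables; note the prefix caveat above still applies to aws s3 ls):
if aws s3 ls "s3://${BucketName}/backup_${DomainName}_${CurrentDate}.zip" > /dev/null 2>&1; then
  echo 'No worries, the file exists.'
else
  echo 'it does not exist'
fi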
From awscli, we do a ls along with a grep.
Example: aws s3 ls s3://<bucket_name> | grep 'filename'
This can be included in the bash script.
Inspired by the answers above, I use this to also check the file size, because my bucket was trashed by some script that left 404 responses stored as objects. It requires jq, though.
minsize=100
s3objhead=$(aws s3api head-object \
  --bucket "$BUCKET" --key "$KEY" \
  --output json || echo '{"ContentLength": 0}')
if [ "$(printf "%s" "$s3objhead" | jq '.ContentLength')" -lt "$minsize" ]; then
  : # missing or small
else
  : # exists and big
fi
Here's a simple POSIX shell function (so it also works in Bash) based on @Dmitri Orgonov's answer:
s3_key_exists() {
aws >/dev/null 2>&1 s3api head-object --bucket "$1" --key "$2"
test $? != 254
}
And here's how to use it:
s3_key_exists myBucket path/to/my/file.txt \
&& echo "It's there!" \
|| echo "Not found..."
Now, if what you have is an S3 path instead of a bucket and a key:
s3_file_exists() {
local bucketAndKey="$(s3_bucket_and_key "$1")"
s3_key_exists "${bucketAndKey%:*}" "${bucketAndKey#*:}"
}
s3_bucket_and_key() {
local input="${1#/}"; local bucket="${input%%/*}"; local key="${input#$bucket}"
echo "$bucket:${key#/}"
}
And here's a usage example:
s3_file_exists /myBucket/path/to/my/file.txt \
&& echo "It's there!" \
|| echo "Not found..."
Or...
s3_file_exists myBucket/path/to/my/other-file.txt \
&& echo "It's there too!" \
|| echo "Not found either..."

JMESPath query expression with bash variable

Messing around with a simple aws cli query to check for the existence of a Lambda function and echo the associated role if it exists:
#!/bin/bash
fname=$1
role=$(aws lambda list-functions --query 'Functions[?FunctionName == `$fname`].Role' --output text)
echo "$fname role: $role"
However, $fname appears to resolve to an empty string in the aws command. I've tried escaping the backticks, swapping ` to ', and a myriad of other thrashing edits (and yes, I'm passing a string on the command line when invoking the script :)
How do I properly pass a variable into JMESPath query inside a bash script?
Because the whole JMESPath expression is enclosed in single quotes, bash does not expand the $fname variable. To fix this, surround the expression with double quotes and use single quotes (JMESPath raw string literals) around the $fname var:
aws lambda list-functions --query "Functions[?FunctionName == '$fname'].Role" --output text
Swapping the backticks to single quotes didn't work for me... :(
But escaping the backticks works :)
Here are my outputs:
aws elbv2 describe-listeners --load-balancer-arn $ELB_ARN --query "Listeners[?Port == '$PORT'].DefaultActions[].TargetGroupArn | [0]"
null
aws elbv2 describe-listeners --load-balancer-arn $ELB_ARN --query "Listeners[?Port == \`$PORT\`].DefaultActions[].TargetGroupArn | [0]"
"arn:aws:elasticloadbalancing:ap-southeast-2:1234567:targetgroup/xxx"
