Pass arguments between make commands - bash

I have a sequence of make commands to upload a zip file to an S3 bucket and then update the Lambda function that reads that S3 file as its source code. Once I update the Lambda function, I wish to publish it, and after publishing it I want to attach an event to that Lambda function using EventBridge.
I can do most of these commands automatically using make. For example:
clean:
    #rm unwanted_build_files.zip

build-lambda-pkg:
    mkdir pkg
    cd pkg && docker run #something something
    cd pkg && zip -9qr build.zip
    cp pkg/build.zip .
    rm pkg

upload-s3:
    aws s3api put-object --bucket my_bucket \
        --key build.zip --body build.zip

update-lambda:
    aws lambda update-function-code --function-name my_lambda \
        --s3-bucket my_bucket \
        --s3-key build.zip

publish-lambda:
    aws lambda publish-version --function-name my_lambda
## I can get the "Arn" value from the publish-lambda command: it prints a JSON structure on the terminal which has a "FunctionArn" key.

attach-event:
    aws events put-targets --rule rstats-post-explaination-at-10pm-ist \
        --targets "Id"="1","Arn"="arn:aws:lambda:::function/my_lambda/version_number"

## the following combines the above commands into a single target
build-n-update: clean build-lambda-pkg upload-s3 update-lambda
I am stuck at the last step, i.e. combining publish-lambda and attach-event into the build-n-update target. The problem is that I am unable to pass an argument from one command to the next. I will try to explain it better:
publish-lambda prints JSON output on the terminal:
{
    "FunctionName": "my_lambda",
    "FunctionArn": "arn:aws:lambda:us-east-2:12345:function:my_lambda:5",
    "Runtime": "python3.6",
    "Role": "arn:aws:iam::12345:role/my_role",
    "Handler": "lambda_function.lambda_handler",
    "CodeSize": 62403592,
    "Description": "",
    "Timeout": 180,
    "MemorySize": 512,
    "LastModified": "2021-02-28T17:34:04.374+0000",
    "CodeSha256": "ErfsYHVMFCQBg4iXx5ev9Z0U=",
    "Version": "5",
    "Environment": {
        "Variables": {
            "PATH": "/var/task/bin",
            "PYTHONPATH": "/var/task/src:/var/task/lib"
        }
    },
    "TracingConfig": {
        "Mode": "PassThrough"
    },
    "RevisionId": "49b5-acdd-c1032aa16bfb",
    "State": "Active",
    "LastUpdateStatus": "Successful"
}
I wish to extract the function ARN from the "FunctionArn" key of the above output and use it in the next command, attach-event, since attach-event has a --targets argument which takes the "Arn" of the last published function.
Is it possible to do this in a single command?
I have tried to experiment a bit as follows:
build-n-update: clean build-lambda-pkg upload-s3 update-lambda
    make publish-lambda | xargs jq .FunctionArn -r {}
But this throws an error:
jq: Unknown option --function-name
Please help.

Well, running:
make publish-lambda | xargs jq .FunctionArn -r {}
will print the command to be run, then the output of the command (run it yourself from your shell prompt and see). Of course, jq cannot parse the command line that make prints.
Anyway, what would be the goal of this? You'd just print the function ARN to stdout and it wouldn't do you any good.
You basically have two choices: one is to combine the two commands into a single make recipe, so you can capture the information you need in a shell variable:
build-n-update: clean build-lambda-pkg upload-s3 update-lambda
    func=$$(aws lambda publish-version --function-name my_lambda \
        | jq .FunctionArn -r); \
    aws events put-targets --rule rstats-post-explaination-at-10pm-ist \
        --targets "Id"="1","Arn"="$$func"
The other alternative is to redirect the output of publish-version to a file, then parse that file in the attach-event target recipe:
publish-lambda:
    aws lambda publish-version --function-name my_lambda > publish.json

attach-event:
    aws events put-targets --rule rstats-post-explaination-at-10pm-ist \
        --targets "Id"="1","Arn"="$$(jq .FunctionArn -r publish.json)"

Related

Passing json to aws glue create-job after replacement done using jq

I have the following bash script that I execute in order to create a new Glue Job via the CLI:
#!/usr/bin/env bash
set -e
NAME=$1
PROFILE=$2
SCRIPT_LOCATION='s3://bucket/scripts/'$1'.py'
echo [*]--- Creating new job on AWS
aws glue create-job --profile $PROFILE --name $NAME --cli-input-json | jq '.Command.ScriptLocation = '\"$SCRIPT_LOCATION\"'' ./resources/config.json
I'm using jq as I need one of the values to be replaced at runtime before I pass the .json as the --cli-input-json argument. How can I pass the JSON with the replaced value to this command? As of now, it just prints out the JSON content (although with the value already replaced).
Running the command above causes the following error:
[*]--- Creating new job on AWS
{
    "Description": "Template for Glue Job",
    "LogUri": "",
    "Role": "arn:aws:iam::11111111111:role/role",
    "ExecutionProperty": {
        "MaxConcurrentRuns": 1
    },
    "Command": {
        "Name": "glueetl",
        "ScriptLocation": "s3://bucket/scripts/script.py",
        "PythonVersion": "3"
    },
    "DefaultArguments": {
        "--TempDir": "s3://temp/admin/",
        "--job-bookmark-option": "job-bookmark-disable",
        "--enable-metrics": "",
        "--enable-glue-datacatalog": "",
        "--enable-continuous-cloudwatch-log": "",
        "--enable-spark-ui": "true",
        "--spark-event-logs-path": "s3://assets/sparkHistoryLogs/"
    },
    "NonOverridableArguments": {
        "KeyName": ""
    },
    "MaxRetries": 0,
    "AllocatedCapacity": 0,
    "Timeout": 2880,
    "MaxCapacity": 0,
    "Tags": {
        "KeyName": ""
    },
    "NotificationProperty": {
        "NotifyDelayAfter": 60
    },
    "GlueVersion": "3.0",
    "NumberOfWorkers": 2,
    "WorkerType": "G.1X"
}
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws.exe: error: argument --cli-input-json: expected one argument
The command line
aws glue create-job --profile $PROFILE --name $NAME --cli-input-json | jq '.Command.ScriptLocation = '\"$SCRIPT_LOCATION\"'' ./resources/config.json
executes the command
aws glue create-job --profile $PROFILE --name $NAME --cli-input-json,
takes its standard output and uses it as input to
jq '.Command.ScriptLocation = '\"$SCRIPT_LOCATION\"'' ./resources/config.json
(which will ignore the input and read from the file given as an argument). Please also note that blanks or spaces in $SCRIPT_LOCATION will break your script, because it is not quoted (your quotes are off).
To use the output of one command in the argument list of another command, you must use Command Substitution: outer_command --some-arg "$(inner_command)".
So your command should become:
aws glue create-job --profile $PROFILE --name $NAME --cli-input-json "$(jq '.Command.ScriptLocation = "'"$SCRIPT_LOCATION"'"' ./resources/config.json)"
# or simplified with only double quotes:
aws glue create-job --profile $PROFILE --name $NAME --cli-input-json "$(jq ".Command.ScriptLocation = \"$SCRIPT_LOCATION\"" ./resources/config.json)"
See https://superuser.com/questions/1306071/aws-cli-using-cli-input-json-in-a-pipeline for additional examples.
Although, I have to admit I am not 100% certain that the JSON content can be passed directly on the command line. From looking at the docs and some official examples, it looks like this parameter expects a file name, not a JSON document's content. So it could be possible that your command in fact needs to be:
# if "-" filename is specially handled:
jq ".Command.ScriptLocation = \"$SCRIPT_LOCATION\"" ./resources/config.json | aws glue create-job --profile $PROFILE --name $NAME --cli-input-json -
# "-" filename not recognized:
jq ".Command.ScriptLocation = \"$SCRIPT_LOCATION\"" ./resources/config.json > ./resources/config.replaced.json && aws glue create-job --profile $PROFILE --name $NAME --cli-input-json file://./resources/config.replaced.json
Let us know which one worked.
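As a side note, jq's --arg option injects the value safely without the nested-quote escaping (a sketch, assuming the same variable and file names as above):
aws glue create-job --profile "$PROFILE" --name "$NAME" \
    --cli-input-json "$(jq --arg loc "$SCRIPT_LOCATION" '.Command.ScriptLocation = $loc' ./resources/config.json)"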

How to run aws bash commands consecutively?

How can I execute the following bash commands consecutively?
aws logs create-export-task --task-name "cloudwatch-log-group-export1" \
--log-group-name "/my/log/group1" \
--from 1488708419000 --to 1614938819000 \
--destination "my-s3-bucket" \
--destination-prefix "my-log-group1"
aws logs create-export-task --task-name "cloudwatch-log-group-export" \
--log-group-name "/my/log/group2" \
--from 1488708419000 --to 1614938819000 \
--destination "my-s3-bucket" \
--destination-prefix "my-log-group2"
The problem I have with the above commands is that after the first command completes execution, the script gets stuck at the following state, making the second command unreachable.
{
"taskId": "0e3cdd4e-1e95-4b98-bd8b-3291ee69f9ae"
}
It seems that I should find a way to wait for cloudwatch-log-group-export1 task to complete.
You would have to create a waiter function which uses describe-export-tasks to get the current status of an export job.
Example of such function:
wait_for_export() {
    local sleep_time=${2:-10}
    while true; do
        job_status=$(aws logs describe-export-tasks \
            --task-id ${1} \
            --query "exportTasks[0].status.code" \
            --output text)
        echo ${job_status}
        [[ $job_status == "COMPLETED" ]] && break
        sleep ${sleep_time}
    done
}
Then you use it:
task_id1=$(aws logs create-export-task \
--task-name "cloudwatch-log-group-export1" \
--log-group-name "/my/log/group1" \
--from 1488708419000 --to 1614938819000 \
--destination "my-s3-bucket" \
--destination-prefix "my-log-group1" \
--query 'taskId' --output text)
wait_for_export ${task_id1}
# second export
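For completeness, the second export from the question would then follow the same pattern (a sketch reusing the wait_for_export helper above):
task_id2=$(aws logs create-export-task \
    --task-name "cloudwatch-log-group-export" \
    --log-group-name "/my/log/group2" \
    --from 1488708419000 --to 1614938819000 \
    --destination "my-s3-bucket" \
    --destination-prefix "my-log-group2" \
    --query 'taskId' --output text)
wait_for_export ${task_id2}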
By default, the AWS CLI sends its output to a pager, which can drop you into a vim-like view.
You can avoid this by setting the AWS_PAGER environment variable to "" before executing the aws command:
export AWS_PAGER=""
aws logs create-export-task...
Or, you can set it in the AWS config file (~/.aws/config):
[default]
cli_pager=
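The same config entry can also be written from the command line (a convenience not mentioned in the original answer):
aws configure set cli_pager ""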

How do you add spaces for aws cloudformation deploy --parameter-overrides and/or --tags?

I am trying to get spaces into the tags parameter for the aws cli and it works if I hardcode it but not if I use bash variables. What is going on and how do I fix it?
This works without spaces:
aws cloudformation deploy \
--template-file /path_to_template/template.json \
--stack-name my-new-stack \
--tags Key1=Value1 Key2=Value2
This works without spaces but with variables:
tags="Key1=Value1 Key2=Value2"
aws cloudformation deploy \
--template-file /path_to_template/template.json \
--stack-name my-new-stack \
--tags $tags
This works with spaces:
aws cloudformation deploy \
--template-file /path_to_template/template.json \
--stack-name my-new-stack \
--tags 'Key1=Value1' 'Key Two=Value2'
This does not work, spaces and variables:
tags="'Key1=Value1' 'Key Two=Value2'"
aws cloudformation deploy \
--template-file /path_to_template/template.json \
--stack-name my-new-stack \
--tags $tags
Attempting to fix bash expansion, also does not work, spaces and variables:
tags="'Key1=Value1' 'Key Two=Value2'"
aws cloudformation deploy \
--template-file /path_to_template/template.json \
--stack-name my-new-stack \
--tags "$tags"
Attempting to fix bash expansion, also does not work, spaces and variables:
tags="'Key1=Value1' 'Key Two=Value2'"
aws cloudformation deploy \
--template-file /path_to_template/template.json \
--stack-name my-new-stack \
--tags "$(printf '%q' $tags)"
Error:
Invalid parameter: Tags Reason: The given tag(s) contain invalid
characters (Service: AmazonSNS; Status Code: 400; Error Code:
InvalidParameter; Request ID
Would you please try:
tags=('Key1=Value1' 'Key Two=Value2')
aws cloudformation deploy \
    --template-file /path_to_template/template.json \
    --stack-name my-new-stack \
    --tags "${tags[@]}"
Stealing some ideas from https://github.com/aws/aws-cli/issues/3274, I was able to get this working by doing the following:
deploy=(aws cloudformation deploy
    ...
    --tags $(cat tags.json | jq '.[] | (.Key + "=" + .Value)'))
eval $(echo ${deploy[@]})
With a tags.json file structure of
[
    {
        "Key": "Name With Spaces",
        "Value": "Value With Spaces"
    },
    {
        "Key": "Foo",
        "Value": "Bar"
    }
]
Try this:
tags="'Key1=Value1' 'Key Two=Value2'"
aws cloudformation deploy \
--template-file /path_to_template/template.json \
--stack-name my-new-stack \
--tags "$tags"
#      ^     ^
#      double quotes
Learn how to quote properly in shell, it's very important:
"Double quote" every literal that contains spaces/metacharacters and every expansion: "$var", "$(command "$var")", "${array[@]}", "a & b". Use 'single quotes' for code or literal $'s: 'Costs $5 US', ssh host 'echo "$HOSTNAME"'. See
http://mywiki.wooledge.org/Quotes
http://mywiki.wooledge.org/Arguments
http://wiki.bash-hackers.org/syntax/words
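As a quick illustration of that advice (a minimal sketch, not from any of the answers above): an array expanded with "${array[@]}" keeps each tag as one argument, while an unquoted string is split on whitespace and the single quotes stay literal.
tags_arr=('Key1=Value1' 'Key Two=Value2')
printf '<%s>\n' "${tags_arr[@]}"   # two arguments: <Key1=Value1> and <Key Two=Value2>

tags_str="'Key1=Value1' 'Key Two=Value2'"
printf '<%s>\n' $tags_str          # three arguments: <'Key1=Value1'> <'Key> <Two=Value2'>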
As of 2022-02 this was still an issue, described here, here also, and a little here.
@esolomon is correct: you have to use array expansion. His answer works just fine:
deploy=(aws cloudformation deploy
    ...
    --tags $(cat tags.json | jq '.[] | (.Key + "=" + .Value)'))
eval $(echo ${deploy[@]})
The actual problem is a result of the shell environment (bin/bash here) combined with how the Python CLI executable handles values. Since aws cloudformation deploy does not normalize the input itself but expects the shell to expand the array input properly, this was causing my problem.
With the --debug flag turned on, the first output below shows the erroneous input that was causing my errors, and the second shows the expected input into aws cloudformation deploy.
Error input:
2022-02-10 17:32:28,137 - MainThread - awscli.clidriver - DEBUG - Arguments entered to CLI: ['cloudformation', 'deploy', '--region', 'us-east-1', ..., '--parameter-overrides', 'PARAM1=dev PARAM2=blah', '--tags', "TAG1='Test Project' TAG2='blah'...", '--debug']
Expected input:
2022-02-10 17:39:40,390 - MainThread - awscli.clidriver - DEBUG - Arguments entered to CLI: ['cloudformation', 'deploy', '--region', 'us-east-1', ..., '--parameter-overrides', 'PARAM1=dev', 'PARAM2=blah', '--tags', "TAG2='Test Project'", 'TAG2=blah',..., '--debug']
I was unexpectedly sending in a single string instead of an array of strings, which resulted in several different errors depending on how I sent it:
example TAG: TAG1=Test Project
['Project'] value passed to --tags must be of format Key=Value
this means IFS needs to be set to something other than the default ' \t\n'; solution below
An error occurred (ValidationError) when calling the CreateChangeSet operation: 1 validation error detected: Value 'Test Project Tag2=Value2 ...' at 'tags.1.member.value' failed to satisfy constraint: Member must have length less than or equal to 256
the error starts after the first =; it means that I am sending in one long string instead of separate array items, as happens when using [*] instead of [@]: aws cloudformation deploy ... --tags "${TAGS[*]}" (see the short illustration of the difference between [*] and [@] below)
To fix this, the most important thing was that IFS needed to be set to anything other than the default ' \t\n', and secondly I still needed to do array expansion with [@] rather than passing a single string. The --parameter-overrides argument did not have this problem for me, despite similar variable loading, because it was not given a single string.
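A short illustration of the [*] vs [@] difference (a minimal sketch assuming the default IFS; with IFS set to $'\n' as in the solution below, [*] would join with newlines instead):
TAGS=('TAG1=Test Project' 'TAG2=blah')
printf '<%s>\n' "${TAGS[@]}"   # two separate arguments: <TAG1=Test Project> <TAG2=blah>
printf '<%s>\n' "${TAGS[*]}"   # one single argument:    <TAG1=Test Project TAG2=blah>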
This was my solution. My params and tags input is all over the place (spaces, sometimes arrays, bad indenting), hence the sed:
export IFS=$'\n'
# Build up the parameters and Tags
PARAMS=($(jq '.[] | .ParameterKey + "=" + if .ParameterValue|type=="array" then .ParameterValue | join(",") else .ParameterValue end ' parameters-${environment}.json \
    | sed -e 's/"//g' \
    | sed -e $'s/\r//g' | tr '\n' ' '))
TAGS=("$(jq -r '.[] | [.Key, .Value] | "\(.[0])=\(.[1])"' tags-common.json)")
TAGS=($TAGS "$(jq -r '.[] | [.Key, .Value] | "\(.[0])=\(.[1])"' tags-${environment}.json)")
aws cloudformation deploy \
    --region "${REGION}" \
    --no-fail-on-empty-changeset \
    --template-file "stack-name-cfn-transform.yaml" \
    --stack-name "stack-name-${environment}" \
    --capabilities CAPABILITY_NAMED_IAM \
    --parameter-overrides "${PARAMS[@]}" \
    --tags "${TAGS[@]}" \
    --profile "${PROFILE}"
parameters file
[
    {
        "ParameterKey": "Environment",
        "ParameterValue": "dev"
    }
]
tags file - both the common and environment-specific tag files have the same format
[
    {
        "Key": "TAG1",
        "Value": "Test Project"
    },
    {
        "Key": "Name With Spaces",
        "Value": "Value With Spaces"
    },
    {
        "Key": "Foo",
        "Value": "Bar"
    }
]
I resolved this scenario using the options below:
"scripts": { "invoke": "sam ... --parameter-overrides \"$(jq -j 'to_entries[] | \"\\(.key)='\\\\\\\"'\\(.value)'\\\\\\\"''\\ '\"' params.json)\"" }
Or
sam ... --parameter-overrides "$(jq -j 'to_entries[] | "\(.key)='\\\"'\(.value)'\\\"''\ '"' params.json)"

Curl returns Invalid JSON error in a Jenkins Pipeline script but returns the expected response on a bash shell run or in a Jenkins Freestyle job

I am writing a Jenkins Pipeline job for setting up AWS infrastructure using API calls to our in-house AWS CLI wrapper library. Running the raw bash scripts on a CentOS box or as a Jenkins Freestyle job runs fine. However, it fails in the context of a Pipeline job. I think that the quotes may need to be different for the Pipeline job but I am not sure how.
After further investigation, I found that the curl command returns the wrong response from the service when running the scripts within a Jenkins Pipeline job.
pipeline {
    agent any
    stages {
        stage('Checkout code from Git'){
            steps {
                echo "Checkout code from a GitHub repository"
                // Checkout code from a GitHub repository
                checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [[$class: 'SubmoduleOption', disableSubmodules: false, parentCredentials: false, recursiveSubmodules: true, reference: '', trackingSubmodules: false]], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'xxxx', url: 'git@github.com:bbc/repo.git']]])
            }
        }
        stage('Call our internal AWS CLI Wrapper System API to perform an ACTION on a specified ENVIRONMENT') {
            steps {
                script {
                    if("${params.ENVIRONMENT}" == 'int' && "${params.ACTION}" == 'create'){
                        echo "ENVIRONMENT=${params.ENVIRONMENT}, ACTION=${params.ACTION}"
                        echo ""
                        sh '''#!/bin/bash
                            # Create Neptune Cluster for the Int environment
                            cd blah-db
                            echo "Current working directory is $PWD"
                            CLOUD_FORMATION_FILE=$PWD/infrastructure/templates/neptune-cluster.json
                            echo "The CloudFormation file to operate on is $CLOUD_FORMATION_FILE"
                            echo "Running jq to transform the source CloudFormation file"
                            template=$(jq -M '.Parameters.Env.Default="int"' $CLOUD_FORMATION_FILE)
                            echo "Echoing the transformed CloudFormation file: \n$template"
                            echo "Running curl to make the http request to our internal AWS CLI Wrapper System"
                            curl -d "{\"aws_account\": \"1111111111\", \"region\": \"us-east-1\", \"name_suffix\": \"cluster\", \"template\": $template}" \
                                -H 'Content-Type: application/json' -H 'Accept: application/json' https://base.api.url/v1/services/blah-neptune/int/stacks \
                                --cert /path/to/client/certificate/client.crt --key /path/to/client/private-key/client.key
                            cd ..
                            pwd
                            # Set a timer to run for 300 seconds or 5 minutes to create a delay to allow for the Neptune Cluster to be fully provisioned first before adding instances to it.
                        '''
                    }
                }
            }
        }
    }
}
The actual result that I get from making the API call:
{"error": "Invalid JSON. Expecting property name: line 1 column 1 (char 1)"}
Try changing the curl command as follows:
curl -d '{"aws_account": "1111111111", "region": "us-east-1", "name_suffix": "cluster", "template": $template}'
Or assign the whole cmd to a variable and print it out to see whether it is what you want or not.
cmd = '''#!/bin/bash
cd blah-db
...
'''
echo cmd // compare the output string to the cmd of freestyle job.
sh cmd
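Another way to sidestep the escaping problems, not from the original answer, is to let jq build the request body itself (a sketch using the placeholder values from the question; $template is assumed to hold the JSON produced by the earlier jq step):
payload=$(jq -n \
    --arg aws_account "1111111111" \
    --arg region "us-east-1" \
    --arg name_suffix "cluster" \
    --argjson template "$template" \
    '{aws_account: $aws_account, region: $region, name_suffix: $name_suffix, template: $template}')
curl -d "$payload" \
    -H 'Content-Type: application/json' -H 'Accept: application/json' \
    https://base.api.url/v1/services/blah-neptune/int/stacks \
    --cert /path/to/client/certificate/client.crt --key /path/to/client/private-key/client.key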

AWS S3: How to check if a file exists in a bucket using bash

I'd like to know if it's possible to check if there are certain files in a certain bucket.
This is what I've found:
Checking if a file is in a S3 bucket using the s3cmd
It should fix my problem, but for some reason it keeps returning that the file doesn't exist, while it does. This solution is also a little dated and doesn't use the doesObjectExist method.
Summary of all the methods that can be used in the Amazon S3 web service
This gives the syntax of how to use this method, but I can't seem to make it work.
Do they expect you to make a boolean variable to save the status of the method, or does the function directly give you an output / throw an error?
This is the code I'm currently using in my bash script:
existBool=doesObjectExist(${BucketName}, backup_${DomainName}_${CurrentDate}.zip)
if $existBool ; then
echo 'No worries, the file exists.'
fi
I tested it using only the name of the file, instead of giving the full path. But since the error I'm getting is a syntax error, I'm probably just using it wrong.
Hopefully someone can help me out and tell me what I'm doing wrong.
Edit: I ended up looking for another way to do this, since using doesObjectExist isn't the fastest or easiest option.
The last time I saw performance comparisons, getObjectMetadata was the fastest way to check whether an object exists. Using the AWS CLI, that would be the head-object method, for example:
aws s3api head-object --bucket www.codeengine.com --key index.html
which returns:
{
    "AcceptRanges": "bytes",
    "ContentType": "text/html; charset=utf-8",
    "LastModified": "Sun, 08 Jan 2017 22:49:19 GMT",
    "ContentLength": 38106,
    "ContentEncoding": "gzip",
    "ETag": "\"bda80810592763dcaa8627d44c2bf8bb\"",
    "StorageClass": "REDUCED_REDUNDANCY",
    "CacheControl": "no-cache, no-store",
    "Metadata": {}
}
Following @DaveMaple's & @MichaelGlenn's answers, here is the condition I'm using:
aws s3api head-object --bucket <some_bucket> --key <some_key> || not_exist=true
if [ $not_exist ]; then
    echo "it does not exist"
else
    echo "it exists"
fi
Note that "aws s3 ls" does not quite work, even though the answer was accepted. It searches by prefix, not by a specific object key. I found this out the hard way when someone renamed a file by adding a '1' to the end of the filename, and the existence check would still return True.
(Tried to add this as a comment, but do not have enough rep yet.)
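To illustrate that prefix pitfall (hypothetical bucket and key names, not from the original answer):
# the bucket contains backup.zip1 but NOT backup.zip
aws s3 ls s3://my-bucket/backup.zip
# prints the backup.zip1 entry and exits 0, because ls matches by prefix,
# so a naive existence check for backup.zip would wrongly report success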
One simple way is using aws s3 ls
exists=$(aws s3 ls $path_to_file)
if [ -z "$exists" ]; then
echo "it does not exist"
else
echo "it exists"
fi
I usually use set -eufo pipefail and the following works better for me because I do not need to worry about unset variables or the entire script exiting.
object_exists=$(aws s3api head-object --bucket $bucket --key $key || true)
if [ -z "$object_exists" ]; then
echo "it does not exist"
else
echo "it exists"
fi
This statement will return a true or false response:
aws s3api list-objects-v2 \
--bucket <bucket_name> \
--query "contains(Contents[].Key, '<object_name>')"
So, in case of the example provided in the question:
aws s3api list-objects-v2 \
--bucket ${BucketName} \
--query "contains(Contents[].Key, 'backup_${DomainName}_${CurrentDate}.zip')"
I like this approach, because:
The --query option uses the JMESPath syntax for client-side filtering and it is well documented here how to use it.
Since the --query option is built into the AWS CLI, no additional dependencies need to be installed.
You can first run the command without the --query option, like:
aws s3api list-objects-v2 --bucket <bucket_name>
That returns a nicely formatted JSON, something like:
{
    "Contents": [
        {
            "Key": "my_file_1.tar.gz",
            "LastModified": "----",
            "ETag": "\"-----\"",
            "Size": -----,
            "StorageClass": "------"
        },
        {
            "Key": "my_file_2.txt",
            "LastModified": "----",
            "ETag": "\"----\"",
            "Size": ----,
            "StorageClass": "----"
        },
        ...
    ]
}
This then allows you to design an appropriate query. In this case you want to check if the JSON contains a list Contents and that an item in that list has a Key equal to your file (object) name:
--query "contains(Contents[].Key, '<object_name>')"
A simpler solution, though not as sophisticated as the other aws s3api-based answers, is to use the exit code:
aws s3 ls <full path to object>
It returns a non-zero exit code if the object doesn't exist, and 0 if it exists.
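For example, used directly in a condition (a minimal sketch; the path is hypothetical, and the prefix caveat mentioned earlier still applies):
if aws s3 ls "s3://my-bucket/backup.zip" > /dev/null 2>&1; then
    echo "it exists"
else
    echo "it does not exist"
fi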
From awscli, we do a ls along with a grep.
Example: aws s3 ls s3://<bucket_name> | grep 'filename'
This can be included in the bash script.
Inspired by the answers above, I use this to also check the file size, because my bucket was trashed by some script that stored 404 responses. It requires jq, though.
minsize=100
s3objhead=$(aws s3api head-object \
    --bucket "$BUCKET" --key "$KEY" \
    --output json || echo '{"ContentLength": 0}')
if [ $(printf "%s" "$s3objhead" | jq '.ContentLength') -lt "$minsize" ]; then
    : # missing or small
else
    : # exists and big
fi
Here's a simple POSIX shell function (so it also works in Bash) based on @Dmitri Orgonov's answer:
s3_key_exists() {
    aws >/dev/null 2>&1 s3api head-object --bucket "$1" --key "$2"
    test $? != 254
}
And here's how to use it:
s3_key_exists myBucket path/to/my/file.txt \
&& echo "It's there!" \
|| echo "Not found..."
Now, if what you have is an S3 path instead of a bucket and a key:
s3_file_exists() {
    local bucketAndKey="$(s3_bucket_and_key "$1")"
    s3_key_exists "${bucketAndKey%:*}" "${bucketAndKey#*:}"
}

s3_bucket_and_key() {
    local input="${1#/}"; local bucket="${input%%/*}"; local key="${input#$bucket}"
    echo "$bucket:${key#/}"
}
And here's a usage example:
s3_file_exists /myBucket/path/to/my/file.txt \
&& echo "It's there!" \
|| echo "Not found..."
Or...
s3_file_exists myBucket/path/to/my/other-file.txt \
&& echo "It's there too!" \
|| echo "Not found either..."
