s3api put-object with spaces included in the name of the file - bash

I am trying to upload files to S3 in order to get some redirects working.
Unfortunately I noticed some anomalies when the file names contain spaces.
My bash script is very simple:
#!/usr/bin/env bash
declare -A redirects
redirects["foo/bar/index.html"]="/foo/"
redirects["foo/bar/test.pdf"]="/foo/test.pdf"
redirects["assets/docs/NEW-TEST WELCOME TO MYTEST.pdf"]="/"
for i in "${!redirects[#]}"
do
echo "Executing command: aws s3api put-object --bucket $BUCKET_NAME --key" '"'${i}'"' "--website-redirect-location" "${redirects[$i]}"
aws s3api put-object --bucket $BUCKET_NAME --key '"'${i}'"' --website-redirect-location "${redirects[$i]}"
done
From the output what I can see is:
Executing command: aws s3api put-object --bucket myamazingbucket --key "assets/docs/NEW-TEST WELCOME TO MYTEST.pdf" --website-redirect-location /
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
Unknown options: TO, MYTEST.pdf", WELCOME
Do you have any suggestions on how to make these put-object calls work on S3?

By specifying '"'${i}'"' you are forcing a literal quote at the beginning and end of the string, which internally results in an aws command like:
aws s3api put-object --bucket BUCKET_NAME --key '"assets/docs/NEW-TEST' WELCOME TO 'MYTEST.pdf"' --website-redirect-location /
Instead, you should properly quote your string, changing your aws command line to:
aws s3api put-object --bucket $BUCKET_NAME --key "${i}" --website-redirect-location "${redirects[$i]}"
... which internally results in an aws command like the one below.
aws s3api put-object --bucket BUCKET_NAME --key 'assets/docs/NEW-TEST WELCOME TO MYTEST.pdf' --website-redirect-location /
By the way, your echo command only looks correct because the literal quotes you inject are printed as part of the string; echo never performs the word splitting that happens when the aws command is actually executed.
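For reference, the whole loop with the quoting fixed would look something like this (BUCKET_NAME is assumed to be set earlier in your environment):
#!/usr/bin/env bash
declare -A redirects
redirects["foo/bar/index.html"]="/foo/"
redirects["foo/bar/test.pdf"]="/foo/test.pdf"
redirects["assets/docs/NEW-TEST WELCOME TO MYTEST.pdf"]="/"

for i in "${!redirects[@]}"; do
    # Double quotes keep the key, spaces and all, as a single argument.
    aws s3api put-object --bucket "$BUCKET_NAME" --key "${i}" --website-redirect-location "${redirects[$i]}"
done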

Related

Escaping characters in AWS SSM command

I can't get my escape characters correct for an AWS SSM command with double quotes inside. Here's the last attempt:
aws ssm send-command --instance-ids "i-012345678" --document-name "AWS-RunShellScript" --query "Command.CommandId" --output text --parameters commands='["sudo su","cd /opt/cassandra/bin","cqlsh -e \"select * from system_schema.keyspaces;"\"]'
Essentially it's the double quotes around the cqlsh command, the last one in the list, that I can't escape without erroring. I have tried storing it in a variable and echoing it, but neither works. I also looked at the answers below.
aws ssm send-command not working with special characters
Send multiple lines of script to EC2 instance from PowerShell SSM cmdlets
Per the documentation (the quoting and escaping rules are covered in both the v1 and v2 docs):
You do not need to escape double quotation marks embedded in the JSON string, as they are being treated literally.
Could you instead try the below?
aws ssm send-command \
--instance-ids "i-012345678" \
--document-name "AWS-RunShellScript" \
--query "Command.CommandId" \
--output text \
--parameters commands='["sudo su","/opt/cassandra/bin/cqlsh -e \"SELECT * FROM system_schema.keyspaces;\""]'
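If the shell quoting keeps fighting you, another option is to put the parameters in a JSON file and pass it with file://, which the CLI can read parameter values from; the shell then never re-interprets the inner quotes. A sketch using the commands from the question (params.json is a hypothetical file name):
# The quoted 'EOF' heredoc writes the JSON verbatim, with no shell expansion.
cat > params.json <<'EOF'
{
  "commands": [
    "sudo su",
    "cd /opt/cassandra/bin",
    "cqlsh -e \"select * from system_schema.keyspaces;\""
  ]
}
EOF

aws ssm send-command \
    --instance-ids "i-012345678" \
    --document-name "AWS-RunShellScript" \
    --parameters file://params.json \
    --query "Command.CommandId" \
    --output text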

bash solution to dynamically build value for prefix parameter of my tagging script for AWS CLI commands

I am calling AWS CLI commands in a bash script. I need to add tags to files whose prefix is as follows:
/base/user1/foo/file1
/base/user2/foo/fileA
/base/user3/foo/fileX
I only want to tag those under "foo", but if a user has files like:
/base/user1/bar/fileZ
I don't want to tag those under "bar".
I have a script:
#!/bin/bash
aws s3api list-objects --bucket myBucket --query 'Contents[].{Key: Key}' --prefix myPrefix --output text | xargs -n 1 aws s3api put-object-tagging --bucket myBucket --tagging "{\"TagSet\": [{ \"Key\": \"myKey\", \"Value\": \"myValue\" }]}" --key
This works fine as long as myPrefix is an absolute path like:
/base/user1/foo/
but I have too many users to do this manually, so I wanted to use something like:
/base/*/foo/
for the prefix. However, that throws an error:
An error occurred (NoSuchKey) when calling the PutObjectTagging operation: The specified key does not exist
Is there a way in bash, with a loop or something, to traverse down to the "foo" level so that the full paths /base/user1/foo/, /base/user2/foo/, /base/user3/foo/ are dynamically defined for the prefix? Thanks for any response.
I was able to find a decent solution on my own. Replacing this line from the original post:
aws s3api list-objects --bucket myBucket --query 'Contents[].{Key: Key}' --prefix myPrefix --output text | xargs -n 1 aws s3api put-object-tagging --bucket myBucket --tagging "{\"TagSet\": [{ \"Key\": \"myKey\", \"Value\": \"myValue\" }]}" --key
with the following:
for BUCKET_PATH in $(aws s3 ls --recursive --summarize "$BUCKET"); do
    # "aws s3 ls" prints date, time, size, and key per line; the unquoted
    # command substitution word-splits them, and the prefix test below only
    # matches the key fields. Keys containing whitespace will break this.
    if [[ $BUCKET_PATH == *$PREFIX* ]]; then
        echo "$BUCKET_PATH"
        aws s3api put-object-tagging --bucket "${BUCKET}" --key "${BUCKET_PATH}" --tagging "{\"TagSet\": [{ \"Key\": \"${KEY}\", \"Value\": \"${VALUE}\" }]}"
    fi
done
works great.
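For what it's worth, a variant that avoids parsing aws s3 ls output (and thus the word-splitting fragility noted above) is to let a JMESPath query do the filtering. A sketch, assuming BUCKET, KEY, and VALUE are set as in the script above and '/foo/' stands in for the path segment you want to match:
#!/bin/bash
# Emit only keys whose path contains "/foo/"; --output text prints them
# tab-separated, so tr splits them onto one line each.
aws s3api list-objects-v2 --bucket "$BUCKET" \
    --query "Contents[?contains(Key, '/foo/')].Key" --output text |
    tr '\t' '\n' |
    while IFS= read -r key; do
        aws s3api put-object-tagging --bucket "$BUCKET" --key "$key" \
            --tagging "{\"TagSet\": [{ \"Key\": \"${KEY}\", \"Value\": \"${VALUE}\" }]}"
    done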

aws cli command from bash with jq to get S3 logs from logging bucket failing

The script below is intended to get the content of each entry in the S3 logging bucket and save it to a file:
#!/bin/bash
#
# Get the content of each entry in the S3 logging bucket and save it to a file
#
LOGGING_BUCKET=dtgd-hd00
aws s3api list-objects-v2 --bucket "$LOGGING_BUCKET" | jq '.Contents' >> entries.json &&
keys=$(jq '.[].Key' entries.json )
for key in $keys;do
echo $key
aws s3api get-object --bucket "$LOGGING_BUCKET" --key "$key" ouput_file_"$key"
done
Once executed I got:
An error occurred (NoSuchKey) when calling the GetObject operation:
The specified key does not exist.
"dtgd-hd00/logs2021-08-10-05-43-18-01393D975686FA45"
However, if I do it from the CLI:
aws s3api get-object --bucket dtgd-hd00 \
--key "dtgd-hd00/logs2021-08-10-05-43-18-01393D975686FA45" \
output_file_"$key"
It works perfectly, getting the content and saving it to an output file as requested.
What could be wrong?
The variable $key will be a quoted string (jq emits JSON strings with their double quotes), so the quotes become part of the key you pass, and S3 fails to find a key that literally begins and ends with a quote character. You can strip the quotes before passing the key along:
for key in $keys; do
    key="${key%\"}"   # strip the trailing quote
    key="${key#\"}"   # strip the leading quote
    aws s3api get-object --bucket "$LOGGING_BUCKET" --key "$key" output_file_"$key"
done
Of course, it would be much more performant to use aws s3 sync and avoid this issue altogether.
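Alternatively, jq's -r flag prints raw strings without the surrounding quotes, so no stripping is needed at all. A sketch based on the same script; the slashes in each key are replaced with underscores so the result is a valid local file name:
jq -r '.[].Key' entries.json | while IFS= read -r key; do
    # ${key//\//_} turns "dtgd-hd00/logs2021-..." into "dtgd-hd00_logs2021-..."
    aws s3api get-object --bucket "$LOGGING_BUCKET" --key "$key" "output_file_${key//\//_}"
done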

get subnet id AWS

I am trying to get the subnet IDs within a particular VPC and store them in variables so I can use them in a bash script:
aws ec2 describe-subnets --filter "Name=vpc-id,Values=VPCid" --region $REGION --query "Subnets[*].SubnetId" --output text
and this gives something like this
subnet-12345 subnet-78910
(END)
I wonder how I can store them into a variable.
I tried with
SBnet=$(aws ec2 describe-subnets --filter "Name=vpc-id,Values=VPCid" --region $REGION --query "Subnets[*].SubnetId" --output text)
but then I do not know how to access the array/list created.
I tried with
echo $(SBnet[0])
but that does not work.
I am on macOS using zsh.
You can do this as follows (add your VPC and the region):
#!/bin/bash
SUBNET_IDS=$(aws ec2 describe-subnets --filter "Name=vpc-id,Values=vpc-1234" --query "Subnets[*].SubnetId" --output text)
for SUBNET_ID in $SUBNET_IDS;
do
echo $SUBNET_ID
done
To split the list of subnet IDs into variables, you can do this:
#!/bin/bash
SUBNET_IDS=$(aws ec2 describe-subnets --filter "Name=vpc-id,Values=vpc-1234" --query "Subnets[*].SubnetId" --output text)
IFS=$'\t ' read -r -a subnet_ids <<< "$SUBNET_IDS"
echo "${subnet_ids[0]}"
echo "${subnet_ids[1]}"
And the individual subnet IDs will be in the subnet_ids array.
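Since you mentioned zsh: if you run these lines interactively rather than through the #!/bin/bash shebang, note that zsh spells the array read as read -A (capital A) and indexes arrays from 1 by default. A minimal zsh sketch, assuming the same describe-subnets call:
SUBNET_IDS=$(aws ec2 describe-subnets --filter "Name=vpc-id,Values=vpc-1234" --query "Subnets[*].SubnetId" --output text)
read -r -A subnet_ids <<< "$SUBNET_IDS"   # zsh: -A, not -a
echo "${subnet_ids[1]}"                   # zsh arrays are 1-indexed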
You can do as @jarmod suggested, and you could also write a query to extract all the subnets tied to all the VPCs in your system as comma-separated output and use it further, like this:
aws ec2 describe-subnets --query "Subnets[].[SubnetId,VpcId,CidrBlock,AvailabilityZone]" --output text|sed 's/\t/,/g'

Shell script syntax, escape character

I have a shell script as given below. The script adds an AWS instance to its Auto Scaling group's scale-in protection. When I ran the individual commands everything went fine, but when I put them in a shell file and tried to execute it, there were errors. See the script below:
set -x
INSTANCE_ID=$(wget -q -O - http://169.254.169.254/latest/meta-data/instance-id)
ASG_NAME=$(aws ec2 describe-tags --filters "Name=resource-id,Values=$INSTANCE_ID" --region us-east-2 | jq '.Tags[] | select(.["Key"] | contains("a:autoscaling:groupName")) | .Value')
ASG_NAME=$(echo $ASG_NAME | tr -d '"')
aws autoscaling set-instance-protection --instance-ids $INSTANCE_ID --auto-scaling-group-name $ASG_NAME --protected-from-scale-in --region us-east-2
The error is given below. I think the issue is with the second line: it is not able to get ASG_NAME. I tried some escape characters but nothing is working.
+++ wget -q -O - http://169.254.169.254/latest/meta-data/instance-id
++ INSTANCE_ID=i-----
+++ aws ec2 describe-tags --filters Name=resource-id,Values=i------ --region us-east-2
+++ jq '.Tags[] | select(.["Key"] | contains("a:autoscaling:groupName")) | .Value'
++ ASG_NAME=
+++ echo
+++ tr -d '"'
++ ASG_NAME=
++ aws autoscaling set-instance-protection --instance-ids i---- --auto-scaling-group-name --protected-from-scale-in --region us-east-2
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: argument --auto-scaling-group-name: expected one argument
Solved the issue per @chepner's recommendation, modifying the second line to:
ASG_NAME=$(aws ec2 describe-tags --filters "Name=resource-id,Values=$INSTANCE_ID" --region us-east-2 --query 'Tags[1].Value')
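Note that Tags[1].Value picks a tag by position, which can silently break if the tag order changes. A more defensive variant (a sketch, assuming the instance carries the standard aws:autoscaling:groupName tag) filters by key and uses --output text, so the tr -d '"' step is no longer needed either:
ASG_NAME=$(aws ec2 describe-tags \
    --filters "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=aws:autoscaling:groupName" \
    --region us-east-2 \
    --query 'Tags[0].Value' --output text)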
