I can't get my escape characters correct for an AWS SSM command with double quotes inside. Here's the last attempt:
aws ssm send-command --instance-ids "i-012345678" --document-name "AWS-RunShellScript" --query "Command.CommandId" --output text --parameters commands='["sudo su","cd /opt/cassandra/bin","cqlsh -e \"select * from system_schema.keyspaces;"\"]'
Essentially it's the double quotes around the cqlsh command, at the end, that I can't escape without an error. I've tried storing the command in a variable and echoing it, but neither works. I've also looked at the answers to these questions:
aws ssm send-command not working with special characters
Send multiple lines of script to EC2 instance from PowerShell SSM cmdlets
Per the documentation on quoting and escaping rules (covered in both the v1 and v2 CLI docs):
When the JSON is wrapped in single quotes, you do not need to escape the embedded double quotation marks for the shell, as they are treated literally; the only backslash escapes you need are the ones JSON itself requires for quotes inside a string value.
Could you instead try the below?
aws ssm send-command \
--instance-ids "i-012345678" \
--document-name "AWS-RunShellScript" \
--query "Command.CommandId" \
--output text \
--parameters commands='["sudo su","/opt/cassandra/bin/cqlsh -e \"SELECT * FROM system_schema.keyspaces;\""]'
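If the inline quoting still fights you, one way to sidestep shell escaping entirely is to put the parameters in a JSON file and pass it with the CLI's file:// syntax (a sketch; params.json is just an assumed file name):
{
  "commands": [
    "sudo su",
    "/opt/cassandra/bin/cqlsh -e \"SELECT * FROM system_schema.keyspaces;\""
  ]
}
Then only JSON-level escaping applies, and the shell never sees the embedded quotes:
aws ssm send-command \
--instance-ids "i-012345678" \
--document-name "AWS-RunShellScript" \
--query "Command.CommandId" \
--output text \
--parameters file://params.json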
The script below is intended to get the content of each entry in the S3 logging bucket and save it to a file:
#!/bin/bash
#
# Get the content of each entry in the S3 logging bucket and save it to a file
#
LOGGING_BUCKET=dtgd-hd00
aws s3api list-objects-v2 --bucket "$LOGGING_BUCKET" | jq '.Contents' >> entries.json &&
keys=$(jq '.[].Key' entries.json)
for key in $keys; do
echo "$key"
aws s3api get-object --bucket "$LOGGING_BUCKET" --key "$key" output_file_"$key"
done
Once executed I got:
An error occurred (NoSuchKey) when calling the GetObject operation:
The specified key does not exist.
"dtgd-hd00/logs2021-08-10-05-43-18-01393D975686FA45"
However, if I do it from the CLI:
aws s3api get-object --bucket dtgd-hd00 \
--key "dtgd-hd00/logs2021-08-10-05-43-18-01393D975686FA45" \
output_file_"$key"
It works perfectly, getting the content and saving it to an output file as requested.
What could be wrong?
The variable $key will be a quoted string, so you're effectively double-quoting it, and S3 fails to find the key because the literal quote characters are included. You can strip the quotes before passing the key along:
for key in $keys; do
  key="${key%\"}"
  key="${key#\"}"
  aws s3api get-object --bucket "$LOGGING_BUCKET" --key "$key" output_file_"$key"
done
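Alternatively, you can avoid the quotes at the source: jq's -r option emits raw strings instead of JSON-encoded ones, so there is nothing to strip:
keys=$(jq -r '.[].Key' entries.json)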
Of course, it would be much more performant to use aws s3 sync and avoid this issue altogether.
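A minimal sketch of that, assuming you want everything in a local directory named logs/:
aws s3 sync "s3://$LOGGING_BUCKET" ./logs/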
I am trying to get the subnet IDs within a particular VPC and store them in variables so I can use them in a bash script.
aws ec2 describe-subnets --filter "Name=vpc-id,Values=VPCid" --region $REGION --query "Subnets[*].SubnetId" --output text
which gives something like this:
subnet-12345 subnet-78910
(END)
I wonder how I can store them into a variable.
I tried with
SBnet=$(aws ec2 describe-subnets --filter "Name=vpc-id,Values=VPCid" --region $REGION --query "Subnets[*].SubnetId" --output text)
but then I do not know how to access the array/list created.
I tried with
echo $(SBnet[0])
but it does not work.
I am on macOS, using zsh.
You can do this as follows (add your VPC and the region):
#!/bin/bash
SUBNET_IDS=$(aws ec2 describe-subnets --filter "Name=vpc-id,Values=vpc-1234" --query "Subnets[*].SubnetId" --output text)
for SUBNET_ID in $SUBNET_IDS;
do
echo $SUBNET_ID
done
To split the list of subnet IDs into variables, you can do this:
#!/bin/bash
SUBNET_IDS=$(aws ec2 describe-subnets --filter "Name=vpc-id,Values=vpc-1234" --query "Subnets[*].SubnetId" --output text)
IFS=$'\t ' read -r -a subnet_ids <<< "$SUBNET_IDS"
echo "${subnet_ids[0]}"
echo "${subnet_ids[1]}"
And the individual subnet IDs will be in the subnet_ids array.
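Since you mentioned zsh: read -a is bash syntax; zsh spells it read -A, and zsh arrays are 1-indexed by default, so the first element is ${subnet_ids[1]}. A zsh-specific sketch:
SUBNET_IDS=$(aws ec2 describe-subnets --filter "Name=vpc-id,Values=vpc-1234" --query "Subnets[*].SubnetId" --output text)
read -r -A subnet_ids <<< "$SUBNET_IDS"
echo "${subnet_ids[1]}"  # first element; zsh arrays start at 1 by default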
You can do as @jarmod suggested, and you could also write a query to extract all the subnets tied to all the VPCs in your system as comma-separated output and use it further, like this:
aws ec2 describe-subnets --query "Subnets[].[SubnetId,VpcId,CidrBlock,AvailabilityZone]" --output text | sed 's/\t/,/g'
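One caveat on macOS (which the asker mentioned): BSD sed does not interpret \t as a tab, so tr is the more portable way to swap the tabs for commas:
aws ec2 describe-subnets --query "Subnets[].[SubnetId,VpcId,CidrBlock,AvailabilityZone]" --output text | tr '\t' ','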
I am trying to upload files to S3 in order to get some redirects working.
Unfortunately I notice some anomalies when the file names have spaces.
My bash script is very simple:
#!/usr/bin/env bash
declare -A redirects
redirects["foo/bar/index.html"]="/foo/"
redirects["foo/bar/test.pdf"]="/foo/test.pdf"
redirects["assets/docs/NEW-TEST WELCOME TO MYTEST.pdf"]="/"
for i in "${!redirects[@]}"
do
echo "Executing command: aws s3api put-object --bucket $BUCKET_NAME --key" '"'${i}'"' "--website-redirect-location" "${redirects[$i]}"
aws s3api put-object --bucket $BUCKET_NAME --key '"'${i}'"' --website-redirect-location "${redirects[$i]}"
done
From the output what I can see is:
Executing command: aws s3api put-object --bucket myamazingbucket --key "assets/docs/NEW-TEST WELCOME TO MYTEST.pdf" --website-redirect-location /
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
Unknown options: TO, MYTEST.pdf", WELCOME
Do you have suggestions on how to make these put-object calls work on S3?
By specifying '"'${i}'"' you are forcing a literal quote at the beginning and end of the string, which internally results in an aws command like:
aws s3api put-object --bucket BUCKET_NAME --key '"assets/docs/NEW-TEST' WELCOME TO 'MYTEST.pdf"' --website-redirect-location /
Instead, you should properly quote your string, changing your aws command line to:
aws s3api put-object --bucket $BUCKET_NAME --key "${i}" --website-redirect-location "${redirects[$i]}"
... which internally results in an aws command like below.
aws s3api put-object --bucket BUCKET_NAME --key 'assets/docs/NEW-TEST WELCOME TO MYTEST.pdf' --website-redirect-location /
By the way, your echo command behaves differently because the quote characters are simply printed as literal text; it doesn't show you how the shell actually split the arguments.
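Putting that together, a corrected version of the loop from the question (also quoting $BUCKET_NAME, which is assumed to be set elsewhere):
#!/usr/bin/env bash
declare -A redirects
redirects["foo/bar/index.html"]="/foo/"
redirects["foo/bar/test.pdf"]="/foo/test.pdf"
redirects["assets/docs/NEW-TEST WELCOME TO MYTEST.pdf"]="/"
for i in "${!redirects[@]}"; do
  aws s3api put-object --bucket "$BUCKET_NAME" --key "${i}" --website-redirect-location "${redirects[$i]}"
done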
I need to trim a single " from a bash string, from both the start and the end. I tried many things, but still didn't get the output I want.
Note: I tried $a{// \"}, but it didn't work.
The following code is what I have tried:
repoUri=$(aws ecr create-repository --repository-name $reponame | jq ".repository.repositoryUri")
$repoUri
You could use the -r jq option for "raw output" to suppress the double quotes:
aws ecr create-repository --repository-name "$reponame" \
| jq -r '.repository.repositoryUri'
But you don't actually need jq at all: you can use the --query option in the request and suppress the double quotes with --output text:
aws ecr create-repository --repository-name "$reponame" \
--query 'repository.repositoryUri' --output text
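Either way, you can assign the result straight to your variable (a sketch reusing the names from the question):
repoUri=$(aws ecr create-repository --repository-name "$reponame" \
--query 'repository.repositoryUri' --output text)
echo "$repoUri"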
I want to list the public IP addresses of my EC2 instances using Bash, separated by a delimiter (space or a new-line).
I tried to pipe the output to jq with aws ec2 describe-instances | jq, but can't seem to isolate just the IP addresses.
Can this be done by aws alone, specifying arguments to jq, or something else entirely?
Directly from the aws cli:
aws ec2 describe-instances \
--query "Reservations[*].Instances[*].PublicIpAddress" \
--output=text
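The text output separates values with tabs; if you want one address per line, as the question mentions, one simple way is to pipe through tr:
aws ec2 describe-instances \
--query "Reservations[*].Instances[*].PublicIpAddress" \
--output text | tr '\t' '\n'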
This filters on running instances (you can drop that part if you don't need it) and queries for each PublicIpAddress plus the Name tag, handling the case where Name isn't set:
aws ec2 describe-instances \
--filter "Name=instance-state-name,Values=running" \
--query "Reservations[*].Instances[*].[PublicIpAddress, Tags[?Key=='Name'].Value|[0]]" \
--output text
The command below lists the public IP addresses of your EC2 instances by grepping the raw JSON output (the final grep filters out any private 10.x addresses):
aws ec2 describe-instances | grep PublicIpAddress | grep -o -P "\d+\.\d+\.\d+\.\d+" | grep -v '^10\.'
Hope that answers your query...
Alternatively, this works from the instance itself, without all the errors about access:
wget -qO- http://instance-data/latest/meta-data/public-ipv4 | grep .
You can use instance metadata so you can run the following command from the ec2 instance:
curl http://169.254.169.254/latest/meta-data/public-ipv4
and it will give you the public IP of the instance. If you want the private IP, you will run
curl http://169.254.169.254/latest/meta-data/local-ipv4
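Note that if the instance enforces IMDSv2, a plain GET will be rejected, and you first need a session token (this is the standard IMDSv2 flow):
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
-H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
http://169.254.169.254/latest/meta-data/public-ipv4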
aws ec2 describe-instances --query "Reservations[].Instances[][PublicIpAddress]"
Refer to:
http://docs.aws.amazon.com/cli/latest/userguide/controlling-output.html