AWS CLI command list for checking limits?

I'm working on a project where a single section of our deployment pipeline can easily take up to an hour to deploy onto AWS. We have about 30 steps in our pipeline, and one of the primary time killers when spinning up a new environment is hitting a random limit in AWS. I've searched the AWS site and found a few commands for specific services, but are there commands (and if so, a list of them) that can check each limit, such as the one behind 'NatGatewayLimitExceeded' for example? It would be great if I could make a script that checked all of our limits up front, before we waste time spinning up half an environment only to be blocked by something like this. Thank you in advance!

If you have AWS Premium Support, you can query the Trusted Advisor "Service Limits" check:
CHECK_ID=$(aws --region us-east-1 support describe-trusted-advisor-checks \
--language en --query 'checks[?name==`Service Limits`].{id:id}[0].id' \
--output text)
aws support describe-trusted-advisor-check-result --check-id "$CHECK_ID" \
--query 'result.sort_by(flaggedResources[?status!="ok"],&metadata[2])[].metadata' \
--output table --region us-east-1
If you do not have AWS Premium Support, I hacked together this:
awscommands=($(COMP_LINE='aws' aws_completer))
for command in "${awscommands[@]}"; do
  COMP_LINE="aws $command" aws_completer | xargs -n1 -I% printf "aws $command %\n"
done | grep limit | bash 2>/dev/null
This uses AWS's own bash completion program to find all possible aws commands (mutatis mutandis for your environment), then all subcommands of those commands that have "limit" in their name, and then runs them. Some of those "limit" subcommands have required options; my trick does not account for those, so they just error out, which is why stderr is redirected to /dev/null. The results are therefore incomplete. Suggestions for improvement are welcome.
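If your CLI is recent enough, the Service Quotas API is a more direct route. A minimal sketch (vpc is just an example service code, and the output columns are my choice):
# discover valid service codes first
aws service-quotas list-services --query 'Services[].ServiceCode' --output text
# dump every quota for one service, e.g. VPC (this is where NAT gateway limits live)
aws service-quotas list-service-quotas --service-code vpc \
--query 'Quotas[].{Name:QuotaName,Value:Value}' --output table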

Related

Stop all ECS Cluster tasks with AWS CLI

Self-answer: how to stop all tasks on a cluster with a single CLI command, easily allowing for extra parameters to be passed.
The below will:
- Get all the tasks in the cluster.
- Select the task ARNs using jq; -r removes the quotes from the JSON values.
- Pass each ARN to the next command using xargs; the value is appended to the command (after --task). -n1 ensures there is one command per ARN (not sure if strictly necessary).
aws ecs list-tasks --cluster "$ecs_cluster" | jq -r ".taskArns[]" | xargs -n1 aws ecs stop-task --no-cli-pager --cluster "$ecs_cluster" --task
--no-cli-pager prevents the output from stop-task from getting stuck after each execution.
Any optimization is welcome. I saw another solution using awk, but found it hard to pass extra params to the second command with it.
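If you run this often, a small wrapper function keeps the extra parameters in one place. A sketch; stop_all_tasks is my own name and the --reason default is arbitrary:
# usage: stop_all_tasks my-cluster "redeploying"
stop_all_tasks() {
  local cluster=$1 reason=${2:-stopped-via-cli}
  # list-tasks prints the ARNs whitespace-separated; xargs -n1 appends one
  # ARN per stop-task call after --task, the same trick as above.
  # -r (GNU xargs) skips the call entirely when the cluster has no tasks.
  aws ecs list-tasks --cluster "$cluster" --query 'taskArns[]' --output text |
    xargs -r -n1 aws ecs stop-task --no-cli-pager --cluster "$cluster" \
      --reason "$reason" --task
}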

Shell script to start EC2 instances and ssh, introducing a delay before the second command

I have written a small shell script to automate starting and logging in to my AWS instances via the terminal.
#!/bin/bash
aws ec2 start-instances --instance-ids i-070107834ab273992
public_ip=aws ec2 describe-instances --instance-ids i-070107834ab273992 \
--query 'Reservations[*].Instances[*].PublicDnsName' --output text
AWS_KEY="/home/debian/cs605 data management/assignment6/mumbai instance keys"
ssh -v -i "$AWS_KEY"/mumbai-instance-1.pem\
ec2-user#$public_ip
The problem is the public_ip variable: I want to use it in the ssh line.
1) How do I get the value of a variable for use in a command?
2) The instance takes some time to boot when switched on from power off, so how do I keep checking that the instance has powered on after the aws start-instances command, or retrieve the public IP once it has fully started, and then ssh into it?
I am not good at Python (I know just the basics), so is there a pythonic way of doing this? If there is an example script somewhere, that would be better for me to look at.
You do not set the variable public_ip in your script. It would not surprise me if the script complained about "ec2: command not found".
To set the variable:
public_ip=$(aws ec2 describe-instances --instance-ids i-070107834ab273992 --query 'Reservations[*].Instances[*].PublicDnsName' --output text)
(disclaimer: I have not used aws so I assume that the command is correct).
The information on whether an instance is running should be available with
aws ec2 describe-instance-status
You may want to apply some filters and/or grep for a specific result. You could try polling with a while loop:
while ! aws ec2 describe-instance-status --instance-ids i-070107834ab273992 | grep 'something that characterizes running'; do
  sleep 5
done
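Side note: the AWS CLI also ships with built-in waiters that do this polling for you, which may be simpler than grep. A sketch using the same instance id:
aws ec2 start-instances --instance-ids i-070107834ab273992
# returns once the instance state is "running" (polls roughly every 15 seconds)
aws ec2 wait instance-running --instance-ids i-070107834ab273992
# stricter alternative: wait until both status checks pass, closer to ssh-ready
aws ec2 wait instance-status-ok --instance-ids i-070107834ab273992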

Deleting all Transcribe Jobs in one CLI Command for AWS

I am trying to delete all of my AWS Transcribe jobs at the same time. I know I can go through and delete them one by one through the console, and I can also delete them through the CLI with the following command:
$ aws transcribe delete-transcription-job --transcription-job-name YOUR_JOB_NAME
The issue with this is that I have to do it for each individual job! I am dealing with them on a mass scale (about 1000 jobs). I have tried the following code; however, it does not work:
for jobName in ${aws transcribe list-transcription-jobs --query '[TranscriptionJobSummaries[*].TranscriptionJobName]' --output text}; do aws delete-transcription-job --transcription-job-name $jobName
When I run this code, it does nothing. Any ideas how to fix this?
If you expect a large number of values to be returned by the list-transcription-jobs command, then a for loop may hit argument list limits. In situations like this it's better to use a while read loop instead. Two fixes to the code above: the delete subcommand lives under aws transcribe, and with --output text the job names come back tab-separated on a single line, so translate the tabs to newlines first. For example:
aws transcribe list-transcription-jobs --query 'TranscriptionJobSummaries[*].TranscriptionJobName' --output text |
tr '\t' '\n' | while read -r jobName; do
  aws transcribe delete-transcription-job --transcription-job-name "$jobName"
done
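A variation, if you only want to purge finished jobs: list-transcription-jobs accepts a --status filter, so in-progress jobs are left alone. A sketch along the same lines:
# delete only COMPLETED jobs; rerun with --status FAILED to clear failed ones
aws transcribe list-transcription-jobs --status COMPLETED \
--query 'TranscriptionJobSummaries[*].TranscriptionJobName' --output text |
tr '\t' '\n' | while read -r jobName; do
  aws transcribe delete-transcription-job --transcription-job-name "$jobName"
done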

How to run an AWS CLI command using crontab every minute

I have an AWS CLI command to create a DB snapshot, and I want the snapshot to include the current timestamp in its name.
I am not able to run the command using crontab.
To create an Amazon RDS DB instance, use the command below:
aws rds create-db-instance --db-instance-identifier testrds --allocated-storage 5 --db-instance-class db.m1.small --engine mysql --availability-zone us-east-1d --master-username rajuuser --master-user-password mrajuuser --port 7007 --no-multi-az --no-auto-minor-version-upgrade
To create a DB snapshot, use the command below:
aws rds create-db-snapshot --db-instance-identifier testrds --db-snapshot-identifier testrds
The shell script I am following:
#!/bin/sh
#echo "Hello world"
now=$(date +"%Y-%m-%d-%H-%M-%S")
cd /home/ubuntu
cmd="$(aws rds create-db-snapshot --db-instance-identifier testrds --db-snapshot-identifier testrds:"$(now)")"
echo $cmd
I got the same error, and I found that providing the full path to the aws CLI solved the issue (for me it was on a different path than the one in hjpotter92's answer).
#!/bin/sh
HOME="/home/ubuntu"
AWS_CONFIG_FILE="/home/ubuntu/.aws/config"
d=$(date +"%Y-%m-%d-%H-%M")
/home/ubuntu/.local/bin/aws rds create-db-snapshot --db-instance-identifier myid --db-snapshot-identifier prod-scheduled-$d
As AWS creates RDS snapshots only once per day, my requirement was to create several snapshots each day on a fixed schedule, fired from a cronjob (e.g. at 6am, 10am, 2pm, 6pm, 10pm).
So, to keep costs reasonable, I also added a step to delete all "cron" snapshots taken the day before:
y=$(date -d "1 day ago" +"%Y-%m-%d-%H-%M")
aws rds delete-db-snapshot --db-snapshot-identifier prod-scheduled-$y
This way I can keep one snapshot per day for historical purposes, and several snapshots from the last 24 hours in case I need shorter gaps.
Although this was not part of the question, Luke Petersen commented that taking snapshots this way is cost-prohibitive, and maybe someone else has the same requirements (as I did).
One last thing: a similar (and, AFAIK, cleaner) solution can be achieved by using the Restore to a point in time feature, which uses the daily snapshots and the transaction log to restore a db-instance to a specific date and time (within the backup retention period).
I have a similar cron task setup for backing up certain instances in EC2. Here is how I set it up:
$ crontab -l
0 14 * * * /usr/bin/zsh /home/hjpotter92/snapshot.zsh
and the contents of snapshot.zsh:
#!/usr/bin/zsh
HOME="/home/hjpotter92"
AWS_HOME="$HOME/.aws"
PATH="/usr/local/bin:/usr/bin:/bin:$PATH"
DATE=`date +%c`
aws ec2 create-snapshot --volume-id XXXXXXXX \
--description "${DATE}" \
--profile hjpotter92 \
--region "us-west-2" >> /home/hjpotter92/cron.out 2>&1
Note that while my script above is executable (x permission bit set), I still provide the shell name to it.
The problem is that you have string/variable interpolation issues in the command.
Also, /bin/sh does not have a lot of the features that other shells provide. Change the shebang of the script to use bash:
#!/bin/bash
now=$(date +"%Y-%m-%d-%H-%M-%S")
cd /home/ubuntu
aws rds create-db-snapshot --db-instance-identifier testrds --db-snapshot-identifier "testrds:${now}" >> some-log.txt
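For completeness, the matching crontab entry could look like this (a sketch; the script path and log path are assumptions, and the full paths matter because cron runs with a minimal environment):
# run every minute and keep the output so failures are visible
* * * * * /bin/bash /home/ubuntu/rds-snapshot.sh >> /home/ubuntu/cron.log 2>&1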

EC2 instance region is not populated in user-data script

I want to fill in some tags on an EC2 spot instance; however, as it is impossible to do that directly in the spot request, I do it via a user data script. Everything goes fine when I specify the region statically, but that is not a universal approach. When I try to detect the current region from the instance user data, the region variable is always empty. I do it in the following way:
#!/bin/bash
region=$(ec2-metadata -z | awk '{print $2}' | sed 's/[a-z]$//')
aws ec2 create-tags \
--region $region \
--resources `wget -q -O - http://169.254.169.254/latest/meta-data/instance-id` \
--tags Key=sometag,Value=somevalue Key=sometag,Value=somevalue
I tried adding a delay before populating the region:
/bin/sleep 30
but this had no effect.
However, when I run the same script manually after startup, the tags are added fine. What is going on?
And why doesn't the aws CLI get the default region from the profile? I have aws configure properly set up inside the instance, but without the --region option it throws an error that the region is not specified.
I suspect the ec2-metadata command is not available when your user data script is executed. Try getting the region from the metadata server directly (which is what ec2-metadata does anyway):
region=$(curl -fsq http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/[a-z]$//')
The AWS CLI does use the region from the default profile.
You can now use this endpoint to get only the instance region (no parsing needed):
http://169.254.169.254/latest/meta-data/placement/region
So in this case:
region=`curl -s http://169.254.169.254/latest/meta-data/placement/region`
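One caveat: if the instance enforces IMDSv2, plain GET requests against the metadata service return 401, so fetch a session token first (a sketch):
# request a short-lived token, then present it on the metadata call
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
-H "X-aws-ec2-metadata-token-ttl-seconds: 300")
region=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
http://169.254.169.254/latest/meta-data/placement/region)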
I ended up with
region=$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | python -c "import json,sys; print(json.load(sys.stdin)['region'])")
which worked fine. However, it would be nice if somebody explained the nuts and bolts.
