Deleting all Transcribe Jobs in one CLI Command for AWS - bash

I am trying to delete all of my AWS Transcribe jobs at the same time. I know I can go through and delete them one by one through the console, and I can also delete them one at a time through the CLI with the following command:
$ aws transcribe delete-transcription-job --transcription-job-name YOUR_JOB_NAME
The issue with this is that I have to do this for each individual job! I am dealing with them on a mass scale (about 1000 jobs). I have tried the following code; however, it does not work:
for jobName in ${aws transcribe list-transcription-jobs --query '[TranscriptionJobSummaries[*].TranscriptionJobName]' --output text}; do aws delete-transcription-job --transcription-job-name $jobName
When I run this code, it does nothing. Any ideas how to fix this?

If you expect to have a large number of values returned by the list-transcription-jobs command, then a for loop may hit argument list limits. In situations like this it's better to use a while read loop instead, for example:
# --output text prints the job names tab-separated on one line, so split them first
aws transcribe list-transcription-jobs --query 'TranscriptionJobSummaries[*].TranscriptionJobName' --output text | \
  tr '\t' '\n' | while read -r jobName; do
    aws transcribe delete-transcription-job --transcription-job-name "$jobName"
done
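An alternative sketch (untested): because the text output is whitespace-separated, xargs can split it directly and run one delete per job name, much like the ECS answer further down:
aws transcribe list-transcription-jobs \
  --query 'TranscriptionJobSummaries[*].TranscriptionJobName' --output text | \
  xargs -n1 aws transcribe delete-transcription-job --transcription-job-name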

Related

Stop all ECS Cluster tasks with AWS CLI

Self answer: how to stop all tasks on a cluster with a single CLI command, easily allowing for extra parameters to be passed.
The below will:
Get all the tasks in the cluster.
Select the task ARNs using jq; -r removes the quotes from the JSON value.
Pass each ARN to the next command using xargs; the value is appended to the command (after --task). -n1 just ensures there is one command per ARN (not sure if necessary).
aws ecs list-tasks --cluster "$ecs_cluster" | jq -r ".taskArns[]" | xargs -n1 aws ecs stop-task --no-cli-pager --cluster "$ecs_cluster" --task
--no-cli-pager prevents the output from stop-task from getting stuck after each execution.
Any optimization welcome. I saw another solution with awk but found it hard to use with passing extra params to the second command.
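If you want this reusable across clusters, here is a minimal sketch of the same pipeline wrapped in a script (the script name and positional argument are my own convention, and I use --query instead of jq only to drop the extra dependency; xargs -r is a GNU extension that skips stop-task when the cluster has no running tasks):
#!/bin/bash
# Usage: ./stop-all-tasks.sh <cluster-name>
set -euo pipefail

ecs_cluster="$1"

# List the task ARNs (tab-separated text) and stop them one at a time
aws ecs list-tasks --cluster "$ecs_cluster" --query 'taskArns[]' --output text | \
  xargs -r -n1 aws ecs stop-task --no-cli-pager --cluster "$ecs_cluster" --task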

Get a list of AWS pipelines ready for stage approval

I have 40+ pipelines that I need to approve from dev to QA and then QA to stage. I am working on a script to use AWS CLI commands to do so. I have been able to do that for a single pipeline where I know that the specific pipeline is ready to be approved.
aws codepipeline put-approval-result --cli-input-json file://TestPipeline.json
This is how I gathered the information for the approval for a single pipeline
aws codepipeline get-pipeline-state --name Pipeline-Ready-for-Approval
What I am trying to find out is: is there a way to loop through all of the pipelines using get-pipeline-state and identify the stage name and action name, without manually going through the output of each pipeline?
I can try to get the pipeline names from aws codepipeline list-pipelines to get the list to loop through.
Is it possible using bash script and awscli and jq together?
Thank you
You can get most of the way there using the following:
pipelines=$(aws codepipeline list-pipelines --query 'pipelines[].name' --output text)
for p in $pipelines; do
aws codepipeline get-pipeline-state \
--name "$p" \
--query 'stageStates[?latestExecution.status==`InProgress`].{stageName:stageName,actionName:actionStates[0].actionName,token:actionStates[0].latestExecution.token}'
done
This assumes the approval action exists as the first or only action within a stage.
You'll get the output required for the put-approval-result command.
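To close the loop, a hedged sketch that feeds each pending approval straight into put-approval-result; the approval summary text and the Approved status are my own assumptions, and it only approves the first in-progress stage found per pipeline:
#!/bin/bash
# Sketch: approve the first pending approval action found in each pipeline.
# Assumes the approval action is the first (or only) action in its stage.

for p in $(aws codepipeline list-pipelines --query 'pipelines[].name' --output text); do
  # stageName, actionName and token of the in-progress stage, tab-separated
  state=$(aws codepipeline get-pipeline-state --name "$p" \
    --query 'stageStates[?latestExecution.status==`InProgress`].[stageName,actionStates[0].actionName,actionStates[0].latestExecution.token]' \
    --output text)

  # Skip pipelines with nothing awaiting approval
  [ -z "$state" ] && continue

  read -r stage action token <<< "$state"

  aws codepipeline put-approval-result \
    --pipeline-name "$p" \
    --stage-name "$stage" \
    --action-name "$action" \
    --result summary="Approved by script",status=Approved \
    --token "$token"
done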

shell script to start ec2 instances and ssh and introducing delay in second command

I have written a small shell script to automate starting and logging in to my AWS instances via the terminal.
#!/bin/bash
aws ec2 start-instances --instance-ids i-070107834ab273992
public_ip=aws ec2 describe-instances --instance-ids i-070107834ab273992 \
--query 'Reservations[*].Instances[*].PublicDnsName' --output text
AWS_KEY="/home/debian/cs605 data management/assignment6/mumbai instance keys"
ssh -v -i "$AWS_KEY"/mumbai-instance-1.pem \
   ec2-user@$public_ip
The problem is the public_ip variable: I want to use it in the ssh line.
1) How do I get the value of a variable to use in a command?
2) The instance takes some time to boot when it is switched on from power off to power on, so how do I keep checking in the script, after the aws start-instances command, that the instance has fully started, then retrieve the public IP and ssh into it?
I am not good at Python (I know just the basics), so is there a Pythonic way of doing it? If there is an example script somewhere, that would be better for me to have a look at.
You do not set the variable public_ip in your script. It would not surprise me if the script complained about "ec2: command not found".
To set the variable:
public_ip=$(aws ec2 describe-instances --instance-ids i-070107834ab273992 --query 'Reservations[*].Instances[*].PublicDnsName' --output text)
(disclaimer: I have not used aws so I assume that the command is correct).
The information on whether an instance is running should be available with
aws ec2 describe-instance-status
You may want to apply some filters and/or grep for a specific result. You could try polling with a while loop:
while ! aws ec2 describe-instance-status --instance-ids i-070107834ab273992 | grep 'something that characterizes running' ; do
sleep 5
done
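As an alternative to grepping describe-instance-status, the CLI's built-in waiters can block until the instance is up. A sketch, assuming your awscli version provides the ec2 wait subcommands (the instance id and key path are taken from the question):
# Start the instance and block until it is running and passes both status checks
aws ec2 start-instances --instance-ids i-070107834ab273992
aws ec2 wait instance-running --instance-ids i-070107834ab273992
aws ec2 wait instance-status-ok --instance-ids i-070107834ab273992

# Only now look up the public DNS name and connect
public_ip=$(aws ec2 describe-instances --instance-ids i-070107834ab273992 \
  --query 'Reservations[*].Instances[*].PublicDnsName' --output text)
ssh -i "$AWS_KEY/mumbai-instance-1.pem" ec2-user@"$public_ip"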

AWS CLI command list for checking limits?

I'm working on a project where a single section of our deployment pipeline can easily take up to an hour to deploy onto AWS. We have about 30 steps in our pipeline and one of the primary time killers of spinning up a new environment is hitting a random limit in AWS. I've searched their website for checking limits and have found a few select commands for specific environments, but are there commands (and if so, a list of them) that can check for each limit such as 'NatGatewayLimitExceeded' for example? It would be great if I could make a script that checked all of our limits before we wasted time spinning up half an instance to be blocked by something like this. Thank you in advance!
From here they say that if you have AWS Premium Support, you can do this:
CHECK_ID=$(aws --region us-east-1 support describe-trusted-advisor-checks \
--language en --query 'checks[?name==`Service Limits`].{id:id}[0].id' \
--output text)
aws support describe-trusted-advisor-check-result --check-id "$CHECK_ID" \
--query 'result.sort_by(flaggedResources[?status!=`ok`],&metadata[2])[].metadata' \
--output table --region us-east-1
If you do not have AWS Premium Support, I hacked together this:
awscommands=($(COMP_LINE='aws' aws_completer))
for command in "${awscommands[@]}"; do COMP_LINE="aws $command" \
aws_completer | xargs -n1 -I% printf "aws $command %\n"; done | grep limit | \
bash 2>/dev/null
This uses AWS's own bash completion program to find all possible aws commands (mutatis mutandis for your environment), and then all subcommands of those commands that have "limit" in their name, and then runs them. Some of those "limit" subcommands have required options; my trick does not account for those and they just error out, so I redirected stderr to /dev/null. Therefore the results are incomplete. Suggestions for improvement are welcome.
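If you would rather review what the hack is about to run before piping it into bash (my suggestion, not part of the original answer), drop the final | bash stage and just print the generated commands:
# Print the limit-related commands instead of executing them
awscommands=($(COMP_LINE='aws' aws_completer))
for command in "${awscommands[@]}"; do
  COMP_LINE="aws $command" aws_completer | xargs -n1 -I% printf "aws $command %\n"
done | grep limit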

How can I trigger an event or notification when an AWS SSM RunCommand Script completes executing on my EC2 Instance?

Some background:
I am writing an application (using the AWS Ruby SDK/API) that deploys another application on AWS EC2 instances. Specifically, I deploy it as an ECS cluster of 4 EC2 instances, then start ECS Tasks (not services), one on each EC2 instance, and run these docker images. That all works fine. The problem is that at some point I need to save one of those docker images off in my ECR repo. I do that by using Simple Systems Manager (SSM)'s aws:runShellScript to run the command on the ECS container instance. That command may take 1-4 minutes, and I don't have any way of finding out when the command completes. Right now I do a sleep and then just grab the tagged container image from the repository, and that is error prone.
The question:
Is there any way to:
wait_until for an SSM run command to complete? or
have my deploying application be notified through AWS Lambda or some such? or
Listen for events?
I resolved this problem by checking periodically whether the command's status is no longer "InProgress". The code is in bash, but the logic should be applicable elsewhere as well.
ID="instance_id"
COMMAND_ID=$(aws ssm send-command --instance-ids "$ID" --document-name "AWS-RunShellScript" \
    --parameters commands="python" --output text --query "Command.CommandId")
STATUS=$(aws ssm list-commands --instance-id "$ID" --command-id "$COMMAND_ID" \
    --output text --query "Commands[0].Status")
while [ "$STATUS" == "InProgress" ]; do
    sleep 30
    STATUS=$(aws ssm list-commands --instance-id "$ID" --command-id "$COMMAND_ID" \
        --output text --query "Commands[0].Status")
done
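A hedged alternative, not part of the original answer: if your CLI version ships the SSM waiter, the polling loop above can be replaced with a single call. Note the waiter exits non-zero if the command ends in a failed, cancelled, or timed-out state, or if the waiter itself gives up:
# Block until the command reaches a terminal state
aws ssm wait command-executed --command-id "$COMMAND_ID" --instance-id "$ID"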
Here are some options I can think of:
You could add a final step to the SSM command which would send an email or post to an SNS topic or something similar.
The AWS SSM send_command API takes a notification_config parameter which you can configure to send a notification to an SNS topic when the command is in certain states, like the "Success" state. This is probably the best option for monitoring the state of the command.
Once you have done something to post to an SNS topic, you can configure a Lambda function to be triggered by messages in that SNS topic.
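For the second option, a hedged sketch of the equivalent CLI call (the SNS topic ARN, account id, IAM service role, and the commands parameter are placeholders you would substitute; SSM needs the service role to publish the notification):
# Send the command and have SSM publish to an SNS topic when it succeeds or fails
aws ssm send-command \
  --instance-ids "$ID" \
  --document-name "AWS-RunShellScript" \
  --parameters commands="your-script-here" \
  --service-role-arn "arn:aws:iam::123456789012:role/SsmNotificationRole" \
  --notification-config '{"NotificationArn":"arn:aws:sns:us-east-1:123456789012:ssm-command-status","NotificationEvents":["Success","Failed"],"NotificationType":"Command"}'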
