Self-answered: how to stop all tasks on an ECS cluster with a single CLI command, while easily allowing extra parameters to be passed.
The command below will:
Get all the tasks in the cluster.
Select the task ARNs using jq; -r strips the quotes from each JSON value.
Pass each ARN to the next command using xargs; the value is appended to the command (after --task). -n1 ensures one stop-task invocation per ARN; without it, xargs would append all the ARNs to a single command.
aws ecs list-tasks --cluster "$ecs_cluster" | jq -r ".taskArns[]" | xargs -n1 aws ecs stop-task --no-cli-pager --cluster "$ecs_cluster" --task
--no-cli-pager prevents the output of stop-task from getting stuck in a pager after each execution.
Any optimization is welcome. I saw another solution with awk but found it hard to use when passing extra parameters to the second command.
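For anyone who prefers an explicit loop over xargs, the same flow can be sketched with while read, which makes it easier to add per-task logging or extra flags ($ecs_cluster is assumed to be set, as in the one-liner above):

```shell
# Stop every task in the cluster, one stop-task call per ARN.
aws ecs list-tasks --cluster "$ecs_cluster" \
  | jq -r '.taskArns[]' \
  | while read -r task_arn; do
      echo "Stopping $task_arn"
      aws ecs stop-task --no-cli-pager \
        --cluster "$ecs_cluster" --task "$task_arn" > /dev/null
    done
```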
Related
I am writing an automation task for creating an AWS AMI image. The goal is to get the output from aws ec2 import-image (.ova to AMI conversion) and use the returned ID in a second command:
importaskid=$(aws ec2 import-image --disk-containers Format=ova,UserBucket="{S3Bucket=acp17,S3Key=XXXXX.ova}" | jq -r '.ImportTaskId')
aws ec2 create-tags --resources echo $importaskid --tags 'Key=Name, Value=acp_ami_test'
I am able to echo $importaskid and see the expected output, but when I use aws ec2 create-tags, the AMI image is created without a name and the output from the second command is empty.
Appreciate your assistance.
This should work for you:
# set bash variable "importaskid":
importaskid=$(aws ec2 import-image --disk-containers Format=ova,UserBucket="{S3Bucket=acp17,S3Key=XXXXX.ova}" | jq -r '.ImportTaskId')
# Verify that importaskid is set correctly
echo $importaskid
# Now use it:
aws ec2 create-tags --resources "$importaskid" --tags 'Key=Name,Value=acp_ami_test'
The "$()" syntax for assigning the output of a command to a variable is discussed here: https://www.cyberciti.biz/faq/unix-linux-bsd-appleosx-bash-assign-variable-command-output/
The double quotes in "$importaskid" would be necessary if
the value of "$importaskid" happens to have spaces in it.
'Hope that helps!
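Putting the two commands together, here is a minimal sketch with a guard against an empty task ID (the bucket and key names are the placeholders from the question):

```shell
# Sketch: start the import, then tag the import task.
# Bucket/key names are placeholders from the question.
import_and_tag() {
  local importaskid
  importaskid=$(aws ec2 import-image \
    --disk-containers Format=ova,UserBucket="{S3Bucket=acp17,S3Key=XXXXX.ova}" \
    | jq -r '.ImportTaskId')

  # Guard: fail loudly instead of tagging with an empty resource ID.
  if [ -z "$importaskid" ] || [ "$importaskid" = "null" ]; then
    echo "import-image returned no ImportTaskId" >&2
    return 1
  fi

  aws ec2 create-tags --resources "$importaskid" \
    --tags 'Key=Name,Value=acp_ami_test'
}
```

Note that create-tags prints nothing on success, so an empty response from the second command is normal and does not by itself indicate a failure.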
Thanks for the reply. When I run the command with the value of $ImportTaskId substituted directly (no echo), see below:
aws ec2 create-tags --resource import-ami-XXXXXXXXXXXXX --tags Key=Name,Value='name_ami_test'
I get an empty response and the name is not assigned in the AWS console, so I will speak to AWS support, check the syntax, and check whether the name should be assigned to the AMI ID rather than to import-ami-XXXXXXXXXXXXXXXX.
I have 40+ pipelines that I need to approve from dev to QA and then QA to stage. I am working on a script that uses AWS CLI commands to do so. I have been able to do that for a single pipeline where I know that the specific pipeline is ready to be approved.
aws codepipeline put-approval-result --cli-input-json file://TestPipeline.json
This is how I gathered the information for the approval for a single pipeline
aws codepipeline get-pipeline-state --name Pipeline-Ready-for-Approval
What I am trying to find out is: is there a way to loop through all of the pipelines using get-pipeline-state and identify the stage name and action name without manually going through the output of each pipeline?
I can try to get the pipeline names from aws codepipeline list-pipelines to get the list to loop through.
Is it possible using bash script and awscli and jq together?
Thank you
You can get most of the way there using the following:
pipelines=$(aws codepipeline list-pipelines --query 'pipelines[].name' --output text)
for p in $pipelines; do
  aws codepipeline get-pipeline-state \
    --name "$p" \
    --query 'stageStates[?latestExecution.status==`InProgress`].{stageName:stageName,actionName:actionStates[0].actionName,token:actionStates[0].latestExecution.token}'
done
This assumes the approval action exists as the first or only action within a stage.
You'll get the output required for the put-approval-result command.
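Building on that, here is a hedged sketch of a full approval loop. It assumes, as noted above, that each in-progress stage's first action is the approval action, and the summary text is a placeholder you'd adapt:

```shell
# Sketch: approve the pending manual-approval action in every pipeline.
pipelines=$(aws codepipeline list-pipelines --query 'pipelines[].name' --output text)
for p in $pipelines; do
  aws codepipeline get-pipeline-state --name "$p" --output json \
    | jq -c '.stageStates[]
             | select(.latestExecution.status == "InProgress")
             | {stage: .stageName,
                action: .actionStates[0].actionName,
                token: .actionStates[0].latestExecution.token}' \
    | while read -r approval; do
        stage=$(echo "$approval" | jq -r .stage)
        action=$(echo "$approval" | jq -r .action)
        token=$(echo "$approval" | jq -r .token)
        aws codepipeline put-approval-result \
          --pipeline-name "$p" \
          --stage-name "$stage" \
          --action-name "$action" \
          --token "$token" \
          --result 'summary=Approved by script,status=Approved'
      done
done
```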
I am trying to delete all of my AWS Transcribe jobs at the same time. I know I can go through and delete them one by one through the console, and I can also delete them all through the CLI through the following command:
$ aws transcribe delete-transcription-job --transcription-job-name YOUR_JOB_NAME
The issue with this is that I have to do this for each individual job! I am dealing with them on a mass scale (about 1000 jobs). I have tried the following code, however this does not work:
for jobName in ${aws transcribe list-transcription-jobs --query '[TranscriptionJobSummaries[*].TranscriptionJobName]' --output text}; do aws delete-transcription-job --transcription-job-name $jobName
When I run this code, it does nothing. Any ideas how to fix this?
If you expect a large number of values from the list-transcription-jobs command, a for loop over a command substitution becomes unwieldy; it's better to use a while read loop instead. Note also that the delete command needs the transcribe service prefix (aws transcribe delete-transcription-job), and that text output is tab-separated, so it must be split into lines first. For example:
aws transcribe list-transcription-jobs \
  --query 'TranscriptionJobSummaries[].TranscriptionJobName' --output text \
  | tr '\t' '\n' \
  | while read -r jobName; do
      aws transcribe delete-transcription-job --transcription-job-name "$jobName"
    done
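For roughly 1000 jobs, a hedged alternative is xargs with parallel invocations; the -P4 parallelism value is an assumption to tune against the Transcribe API's rate limits:

```shell
# Delete jobs in parallel; tab-separated text output is split into lines
# so xargs passes one job name per invocation.
aws transcribe list-transcription-jobs \
  --query 'TranscriptionJobSummaries[].TranscriptionJobName' --output text \
  | tr '\t' '\n' \
  | xargs -n1 -P4 aws transcribe delete-transcription-job --transcription-job-name
```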
I'm working on a project where a single section of our deployment pipeline can easily take up to an hour to deploy onto AWS. We have about 30 steps in our pipeline and one of the primary time killers of spinning up a new environment is hitting a random limit in AWS. I've searched their website for checking limits and have found a few select commands for specific environments, but are there commands (and if so, a list of them) that can check for each limit such as 'NatGatewayLimitExceeded' for example? It would be great if I could make a script that checked all of our limits before we wasted time spinning up half an instance to be blocked by something like this. Thank you in advance!
From here they say that if you have AWS Premium Support, you can do this:
CHECK_ID=$(aws --region us-east-1 support describe-trusted-advisor-checks \
--language en --query 'checks[?name==`Service Limits`].{id:id}[0].id' \
--output text)
aws support describe-trusted-advisor-check-result --check-id "$CHECK_ID" \
--query 'result.sort_by(flaggedResources[?status!="ok"],&metadata[2])[].metadata' \
--output table --region us-east-1
If you do not have AWS Premium Support, I hacked together this:
awscommands=($(COMP_LINE='aws' aws_completer))
for command in "${awscommands[@]}"; do COMP_LINE="aws $command" \
aws_completer | xargs -n1 -I% printf "aws $command %\n"; done | grep limit | \
bash 2>/dev/null
This uses AWS's own bash completion program (aws_completer; adjust the path for your environment) to find all possible aws commands, then all subcommands of those commands that have "limit" in their name, and then runs them. Some of those "limit" subcommands have required options; this trick does not account for those, so they simply error out, which is why stderr is redirected to /dev/null. The results are therefore incomplete. Suggestions for improvement are welcome.
I want to set some tags on an EC2 spot instance; however, since it is impossible to do this directly in the spot request, I do it via a user data script. Everything works when I specify the region statically, but that is not a universal approach. When I try to detect the current region from within the user data script, the region variable is always empty. I do it in the following way:
#!/bin/bash
region=$(ec2-metadata -z | awk '{print $2}' | sed 's/[a-z]$//')
aws ec2 create-tags \
--region $region \
--resources `wget -q -O - http://169.254.169.254/latest/meta-data/instance-id` \
--tags Key=sometag,Value=somevalue Key=sometag,Value=somevalue
I tried adding a delay before populating the region:
/bin/sleep 30
but it had no effect.
However, when I run this script manually after startup, the tags are added fine. What is going on?
Also, why doesn't the AWS CLI pick up the default region from the profile? I have aws configure properly set up inside the instance, but without the --region option it throws an error that the region is not specified.
I suspect the ec2-metadata command is not yet available at the time your user data script is executed. Try getting the region from the metadata server directly (which is what ec2-metadata does anyway):
region=$(curl -fsq http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/[a-z]$//')
The AWS CLI does use the region from the default profile.
You can now use this endpoint to get only the instance region (no parsing needed):
http://169.254.169.254/latest/meta-data/placement/region
So in this case:
region=$(curl -s http://169.254.169.254/latest/meta-data/placement/region)
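One caveat worth adding (an assumption about your instances, not something from the answer above): if IMDSv2 is enforced, plain curl calls to the metadata service return 401, so a session token must be requested first. A sketch:

```shell
# Return the instance region via IMDSv2 (token-based metadata access).
imds_region() {
  local token
  token=$(curl -s -m 2 -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
  curl -s -m 2 -H "X-aws-ec2-metadata-token: $token" \
    http://169.254.169.254/latest/meta-data/placement/region
}

# Fallback used elsewhere in this thread: strip the AZ's trailing letter.
az_to_region() { sed 's/[a-z]$//'; }
```

On the instance you would then use region=$(imds_region), or pipe the availability zone through az_to_region as a fallback.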
I ended up with
region=$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | python -c "import json,sys; print(json.load(sys.stdin)['region'])")
which worked fine. However, it would be nice if somebody explained the nuts and bolts.
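For reference, the same extraction works with jq instead of inline Python (a sketch; it assumes jq is installed on the instance, and -m 2 is added here as a timeout safeguard):

```shell
# The identity document is a JSON blob served for every instance;
# jq pulls out its "region" field directly.
region=$(curl -s -m 2 http://169.254.169.254/latest/dynamic/instance-identity/document \
  | jq -r '.region')
```

As for the nuts and bolts: the Python one-liner simply parses that same JSON document from stdin and prints its region field; jq does the identical job more tersely.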