I am using the following command to display snapshots.
aws ec2 describe-snapshots --filters "Name=volume-id,Values=vol-035a45c00af749577" --query 'Snapshots[*].{ID: SnapshotId,StartTime: StartTime,Key:Tags[?Key==`Name`].Value[]}' --output text
The output of the command is as follows:
KEY WebServices
snap-0905735cc7d8543b6 2021-12-07T03:37:21.532000+00:00
KEY WebServices
snap-007ab931e25f43136 2021-12-06T03:41:11.753000+00:00
KEY WebServices
snap-000d71e3b1b1bb929 2021-12-08T03:31:10.383000+00:00
KEY WebServices
I need to get output similar to this instead:
KEY WebServices
snap-0ba6345e8c19e697b 2021-12-09T03:29:40.251000+00:00
snap-0905735cc7d8543b6 2021-12-07T03:37:21.532000+00:00
snap-007ab931e25f43136 2021-12-06T03:41:11.753000+00:00
snap-000d71e3b1b1bb929 2021-12-08T03:31:10.383000+00:00
I'm not an advanced user of sed; could someone help me do it?
Regards,
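In case it helps, one possible approach, sketched here with awk rather than sed: print the first KEY line, skip the repeated ones, and pass the snapshot lines through unchanged.
aws ec2 describe-snapshots ... --output text | awk '/^KEY/ { if (!seen++) print; next } { print }'
Here the "..." stands for the same --filters and --query options as in the command above; the snapshot lines keep whatever order the CLI returns them in.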
I have 2 instances of the same deployment/project on AWS Elastic Beanstalk.
Both contain a Laravel project with scheduling code that runs various commands, which can be found in the schedule method of the Kernel.php class within 'app/Console'. The problem I have is that if a command runs from one instance, it also runs from the second instance, which is not what I want to happen.
What I would like to happen is that the commands get run from only one instance and not the other. How do I achieve this in the easiest way possible?
Is there a Laravel package which could help me achieve this?
From Laravel 5.6:
Laravel provides an onOneServer method which you can use if your application instances share a single cache server. You could use something like ElastiCache to host Redis or Memcached and use it as the cache server for both of your application instances. Then you would be able to use the onOneServer method like this:
$schedule->command('report:generate')
->fridays()
->at('17:00')
->onOneServer();
For older versions of Laravel:
You could use the jdavidbakr/multi-server-event package. Once you have it set up you should be able to use it like:
$schedule->command('inspire')
->daily()
->withoutOverlappingMultiServer();
I had the same issue running some cron jobs (nothing related to Laravel) and I found a nice solution (I don't remember where I found it).
What I do is check whether the instance running the code is the first instance in the Auto Scaling Group; if it is, I execute the command, otherwise I just exit.
This is the way it's implemented:
#!/bin/bash
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
REGION=$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | jq -r .region)
# Find the Auto Scaling Group name from the instance's Elastic Beanstalk tags
ASG=$(aws ec2 describe-tags --filters "Name=resource-id,Values=$INSTANCE_ID" \
  --region "$REGION" --output json | jq -r '.Tags[] | select(.Key=="aws:autoscaling:groupName") | .Value')
# Find the first in-service instance in the Auto Scaling Group (sorted by instance id)
FIRST=$(aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names "$ASG" \
  --region "$REGION" --output json | \
  jq -r '.AutoScalingGroups[].Instances[] | select(.LifecycleState=="InService") | .InstanceId' | sort | head -1)
# Exit 0 only if this instance is the first one in the group, non-zero otherwise
[ "$FIRST" = "$INSTANCE_ID" ]
Try implementing those calls using PHP and it should work.
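For example, assuming the script above is saved as /usr/local/bin/asg-leader-check.sh (the path and project location are assumptions), a cron entry on every instance could gate the Laravel scheduler like this:
* * * * * /usr/local/bin/asg-leader-check.sh && php /var/www/html/artisan schedule:run >> /dev/null 2>&1
Only the instance that sorts first in the Auto Scaling Group gets a zero exit status from the check, so only that instance actually runs schedule:run.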
This lists the instances registered with the ELB:
aws elb describe-load-balancers --load-balancer-name XXXXXXX --region us-east-1 | jq -r '.LoadBalancerDescriptions[].Instances[].InstanceId'
I need some help writing a script which checks for new AWS instances attached to the ELB.
- It needs to compare the present result with previous results, so we can identify any new instances.
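One rough way to do that comparison (a sketch; the load balancer name and file paths are assumptions): keep the previous run's list in a file and compare it with the current one.
#!/bin/bash
# Detect instances newly attached to the ELB since the last run
aws elb describe-load-balancers --load-balancer-name XXXXXXX --region us-east-1 \
  | jq -r '.LoadBalancerDescriptions[].Instances[].InstanceId' | sort > /tmp/elb_instances_current.txt
if [ -f /tmp/elb_instances_previous.txt ]; then
  # Lines only in the current list are instances attached since the previous run
  comm -13 /tmp/elb_instances_previous.txt /tmp/elb_instances_current.txt
fi
mv /tmp/elb_instances_current.txt /tmp/elb_instances_previous.txt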
I'm using the AWS CLI and I want to get the ID of a security group whose name I know (kingkajou_sg). How can I do it?
When I ask it to list all the security groups, it does so happily:
$ aws ec2 describe-security-groups | wc -l
430
When I grep through this information, I see that the SG in question is listed:
$ aws ec2 describe-security-groups | grep -i kingkajou_sg
"GroupName": "kingkajou_sg",
However, when I try to get the information about only that security group, it won't let me. Why?
$ aws ec2 describe-security-groups --group-names kingkajou_sg
An error occurred (InvalidGroup.NotFound) when calling the
DescribeSecurityGroups operation: The security group 'kingkajou_sg' does not exist in default VPC 'vpc-XXXXXXXX'
Can someone please provide me the one line command that I can use to extract the Security group's ID given its name? You can assume that the command will be run from within an EC2 which is in the same VPC as the Security group.
From the API Documentation:
--group-names (list)
[EC2-Classic and default VPC only] One or more security group names. You can specify either the security group name or the security group ID. For security groups in a nondefault VPC, use the group-name filter to describe security groups by name.
If you are using a non-default VPC, use the Filter
aws ec2 describe-security-groups --filter Name=vpc-id,Values=<my-vpc-id> Name=group-name,Values=kingkajou_sg --query 'SecurityGroups[*].[GroupId]' --output text
If it's in a VPC and you know the name, region, and VPC ID, you can try it like below:
aws ec2 describe-security-groups --region eu-west-1 --filter Name=vpc-id,Values=vpc-xxxxx Name=group-name,Values=<your sg name> --query 'SecurityGroups[*].[GroupId]' --output text
You just need to add the --query 'SecurityGroups[*].[GroupId]' option to the AWS CLI command:
aws ec2 describe-security-groups --group-names kingkajou_sg --query 'SecurityGroups[*].[GroupId]' --output text
To get the IDs of all security groups with a name matching exactly a specified string (default in this example) without specifying a VPC ID, use the following:
aws ec2 describe-security-groups --filter Name=group-name,Values=default --output json | jq -r .SecurityGroups[].GroupId
Note: this works for security groups even if they are not in the default VPC.
Here is a small shell script to list security groups with the search string as a variable; it can also tag the security groups:
https://ideasofpraveen.blogspot.com/2022/09/aws-cli-get-security-group-id-with-name.html
If you want a boto3 script to integrate with Lambda for automation:
https://ideasofpraveen.blogspot.com/2022/09/aws-cli-get-security-group-id-with-name_15.html
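As a rough idea of what such a script might look like (this is a sketch, not the code from the linked posts; the argument handling is an assumption):
#!/bin/bash
# List security group IDs and names matching a search string passed as the first argument
SEARCH="$1"
aws ec2 describe-security-groups \
  --filters "Name=group-name,Values=*${SEARCH}*" \
  --query 'SecurityGroups[*].[GroupId,GroupName]' \
  --output text
# The returned IDs could then be tagged with: aws ec2 create-tags --resources <group-id> --tags Key=...,Value=...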
I'm trying to remove only the files that are older than 5 days, based on the date in file names containing "DITN1_" and "DITS1_", using a bash script against an AWS S3 bucket. The issue is that all the files I'm trying to delete look like this:
DITN1_2016.12.01_373,
DITS1_2012.10.10_141,
DITN1_2016.12.01_3732,
DITS1_2012.10.10_1412
If someone could help me out with the code, that would be nice.
Thanks in advance.
You can use the AWS CLI to delete objects from a bash script as follows:
aws s3 rm s3://mybucket/ --recursive --exclude "*" --include "DITN1*"
However, it does not support filtering by timestamp.
For details see the AWS S3 CLI documentation.
Is it important to use the name of the objects instead of metadata? You could get a list of objects in the bucket using the s3api:
aws s3api list-objects --bucket example --no-paginate # this last option will avoid pagination, don't use it if you have thousands of objects
Adding
--query Contents[]
will give you back the metadata for every object, including a LastModified field, which tells you when the object was last modified, for example "2016-12-16T13:56:23.000Z".
http://docs.aws.amazon.com/cli/latest/reference/s3api/list-objects.html
You can convert this timestamp to epoch seconds using
date "+%s" -d "put the timestamp here"
and compare it with the current time minus 5 days.
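A rough sketch of that approach (the bucket name and the dry-run echo are assumptions; requires GNU date):
CUTOFF=$(date -d "5 days ago" "+%s")
aws s3api list-objects --bucket example --query 'Contents[].[Key,LastModified]' --output text | \
while read -r key modified; do
  # Print (or delete) objects whose LastModified is older than the cutoff
  if [ "$(date -d "$modified" "+%s")" -lt "$CUTOFF" ]; then
    echo "would delete: $key"   # swap the echo for: aws s3 rm "s3://example/$key"
  fi
done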
OR if you really want to delete objects based on name, you could loop over the keys like this:
for key in $(aws s3api list-objects --bucket example --no-paginate --query 'Contents[].Key' --output text)
And add logic to determine the date. Something like this might work, judging by your examples:
key_without_prefix=${key#*_}                  # strips everything up to the first "_", e.g. DITN1_2016.12.01_373 -> 2016.12.01_373
key_without_suffix=${key_without_prefix%_*}   # strips the trailing "_<number>", leaving 2016.12.01
Then you have your date, which you can compare with the current time - 5 days.
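Putting the pieces together, a minimal sketch of the name-based variant (the bucket name, prefixes, and the echo safeguard are assumptions; requires GNU date):
CUTOFF=$(date -d "5 days ago" "+%s")
for key in $(aws s3api list-objects --bucket example --no-paginate --query 'Contents[].Key' --output text); do
  case "$key" in
    DITN1_*|DITS1_*) ;;                      # only handle the two prefixes from the question
    *) continue ;;
  esac
  key_without_prefix=${key#*_}               # e.g. 2016.12.01_373
  file_date=${key_without_prefix%_*}         # e.g. 2016.12.01
  file_epoch=$(date -d "${file_date//./-}" "+%s")   # 2016.12.01 -> 2016-12-01 -> epoch seconds
  if [ "$file_epoch" -lt "$CUTOFF" ]; then
    echo "would delete: $key"                # swap the echo for: aws s3 rm "s3://example/$key"
  fi
done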
I am trying to find a way to perform a simultaneous copy of an AMI to all other regions.
I have searched near and far, but besides seeing in a blog post that it can be done, I haven't found a way using the AWS CLI...
https://aws.amazon.com/blogs/aws/ec2-ami-copy-between-regions/
Currently I have written a bash script to do so, but I would like to find a better, easier way.
I have 8 AMIs that need to be copied to all regions.
Using an array:
declare -a DEST=('us-east-1' ...2....3)
for region in "${DEST[@]}"; do
  aws ec2 copy-image --source-region $SRC --region $region --source-ami-id $ami
done
Do you guys have any other suggestion?
Thanks.
You can do it as a single bash pipeline, which is especially useful if new regions are added in the future:
aws ec2 describe-regions --output text | \
  cut -f 3 | \
  xargs -I {} aws ec2 copy-image \
    --source-region $SRC \
    --region {} \
    --source-ami-id $ami
Basically it goes like this:
aws ec2 describe-regions --output text returns the list of all available regions for EC2; it's a 3-column table ("REGIONS", endpoint, region-name).
cut -f 3 takes the 3rd column of that table (read as a list).
xargs puts each region name into {} so it can be passed as the --region parameter of the copy-image command.
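If you need to repeat this for several AMIs, as in the question, you could wrap the same pipeline in a loop (the AMI IDs below are placeholders):
for ami in ami-11111111 ami-22222222; do
  aws ec2 describe-regions --output text | cut -f 3 | \
    xargs -I {} aws ec2 copy-image --source-region "$SRC" --region {} --source-ami-id "$ami"
done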