I've installed a Cloudera cluster on 4 Amazon EC2 instances.
During certain times, such as Monday-Friday nights, Saturday, and Sunday, I don't need those 4 EC2 instances running, so I'd like to stop them to reduce cost.
How can I automate starting and stopping those Amazon EC2 instances using a script?
Could anybody give me an example of a script that does this?
Thanks,
You can create a script to stop and start the instance(s), or run the commands directly, through crontab on Linux or Scheduled Tasks on Windows.
For example, if you want to stop an instance at 11:00 pm, add the line below to your crontab (edit it with crontab -e):
0 23 * * * sh stop.sh
The format is:
m h dom mon dow command
To start an instance:
aws ec2 start-instances --instance-ids i-1a1234
To stop an instance:
aws ec2 stop-instances --instance-ids i-1a1234
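For completeness, the stop.sh referenced in the crontab line could be as simple as this (a sketch; the instance ID is a placeholder, and it assumes the aws CLI is installed and configured for the user the cron job runs as):

#!/bin/bash
# stop.sh - stop the cluster instances for the night
aws ec2 stop-instances --instance-ids i-1a1234

A matching start.sh would run start-instances with the same IDs, scheduled with a second crontab entry such as 0 6 * * 1-5 sh start.sh to bring the cluster back up on weekday mornings.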
In my company, when we SSH to our AWS EC2 instances, we are required to use the aws CLI session-manager plugin for auth. This SSH config snippet works:
Host my-aws-host
ProxyCommand bash -c "aws ssm start-session --target 'i-0abc123def456hij' \
--document-name AWS-StartSSHSession --parameters 'portNumber=22' \
--region us-west-1 --profile MAIN"
However, when the EC2 instance is relaunched, which happens semi-regularly, the 'target' instance ID changes. When this happens, all users need to update their SSH config with the new ID. Unfortunately we don't have any sort of DNS that resolves these instances to a static hostname, so we would need to somehow publish the new instance ID to all interested users.
So instead I wrote a bash script (ssh_proxy_command.sh) that first queries our AWS account to grab the current instance ID based on a known tag value, and uses that for the target - here's a cut-down version:
#!/bin/bash
INSTANCE_ID=$(aws ec2 describe-instances --region us-west-1 \
--filters Name=tag:Name,Values=my-server-nametag* \
--query "Reservations[*].Instances[*].{Instance:InstanceId}" --output text)
aws ssm start-session --target $INSTANCE_ID --document-name AWS-StartSSHSession --parameters 'portNumber=22' --region us-west-1 --profile MAIN
Now the SSH config looks like
Host my-aws-host
ProxyCommand bash -c "/path/to/my/ssh_proxy_command.sh %h"
This has been working fine. However, we have just started running multiple instances built from the same base image (AMI), and which use the same tags, etc. so the given describe-instances query now returns multiple instance IDs. So I tried wrapping the output returned by the query in a bash select loop, thinking I could offer the user a list of instance IDs and let them choose the one they want. This works when running the script directly, but not when it's used as the ProxyCommand. In the latter case when it reaches the select statement it prints out the options as expected, but doesn't wait for the user input - it just continues straight to the end of the script with an empty $INSTANCE_ID variable, which makes the aws ssm command fail.
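For reference, the select wrapper looked roughly like this (a reconstruction, not the exact script; names and tags are carried over from the cut-down version above):

#!/bin/bash
INSTANCE_IDS=$(aws ec2 describe-instances --region us-west-1 \
  --filters Name=tag:Name,Values=my-server-nametag* \
  --query "Reservations[*].Instances[*].InstanceId" --output text)

# Offer the user a choice of instance IDs; this is the part that works
# interactively but fails under ProxyCommand, leaving INSTANCE_ID empty.
select INSTANCE_ID in $INSTANCE_IDS; do
  break
done

aws ssm start-session --target "$INSTANCE_ID" --document-name AWS-StartSSHSession \
  --parameters 'portNumber=22' --region us-west-1 --profile MAIN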
I'm guessing this is a side-effect of the way SSH runs its ProxyCommands — from the ssh_config man page:
[the proxy command] is executed using the user's shell ‘exec’ directive [...]
I'm hoping I can find a way around this problem while still using SSH config and ProxyCommand, rather than resorting to a complete stand-alone wrapper around the ssh executable and requiring everyone use that. Any suggestions gratefully accepted...
Host my-aws-host
ProxyCommand aws ssm start-session --target $(aws ec2 describe-instances --filter "Name=tag:Name,Values=%h" --query "Reservations[].Instances[?State.Name == 'running'].InstanceId[]" --output text) --document-name AWS-StartSSHSession --parameters portNumber=%p
The above will dynamically filter for your target ID based on the host name (%h) provided, so you can log in using ssh my-aws-host. I personally use a prefix for all my machines in AWS, so my ssh config is as follows:
Host your-custom-prefix-*
ProxyCommand aws ssm start-session --target $(aws ec2 describe-instances --filter "Name=tag:Name,Values=%h" --query "Reservations[].Instances[?State.Name == 'running'].InstanceId[]" --output text) --document-name AWS-StartSSHSession --parameters portNumber=%p
This works only when the names of your machines in AWS match the host name provided.
I have written a small shell script to automate starting and logging in to my aws instances via the terminal.
#!/bin/bash
aws ec2 start-instances --instance-ids i-070107834ab273992
public_ip=aws ec2 describe-instances --instance-ids i-070107834ab273992 \
--query 'Reservations[*].Instances[*].PublicDnsName' --output text
AWS_KEY="/home/debian/cs605 data management/assignment6/mumbai instance keys"
ssh -v -i "$AWS_KEY"/mumbai-instance-1.pem \
ec2-user@$public_ip
The problem is the public_ip variable: I want to use it in the ssh line.
1) How do I get the value of a variable to use in a command?
2) The instance takes some time to boot when it is switched on from power off, so how do I keep checking that the instance has been powered on after the aws start-instances command in the script, or retrieve the public IP once it has started fully, and then ssh into it?
I am not good at Python, I know just the basics, so is there a Pythonic way of doing it? If there is an example script somewhere, that would be better for me to have a look at.
You do not set the variable public_ip in your script. It would not surprise me if the script complained about "ec2: command not found".
To set the variable:
public_ip=$(aws ec2 describe-instances --instance-ids i-070107834ab273992 --query 'Reservations[*].Instances[*].PublicDnsName' --output text)
(disclaimer: I have not used aws so I assume that the command is correct).
The information on whether an instance is running should be available with
aws ec2 describe-instance-status
You may want to apply some filters and/or grep for a specific result. You could try polling with a while loop:
while ! aws ec2 describe-instance-status --instance-ids i-070107834ab273992 | grep 'something that characterizes running' ; do
sleep 5
done
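Putting the pieces together, a fixed version of the script might look like this (a sketch; it uses the aws ec2 wait instance-running helper, available in recent aws CLI versions, instead of a hand-rolled polling loop):

#!/bin/bash
INSTANCE_ID="i-070107834ab273992"
AWS_KEY="/home/debian/cs605 data management/assignment6/mumbai instance keys"

aws ec2 start-instances --instance-ids "$INSTANCE_ID"

# Block until the instance reaches the "running" state.
aws ec2 wait instance-running --instance-ids "$INSTANCE_ID"

# Capture the command output with $(...) so public_ip is actually set.
public_ip=$(aws ec2 describe-instances --instance-ids "$INSTANCE_ID" \
  --query 'Reservations[*].Instances[*].PublicDnsName' --output text)

ssh -v -i "$AWS_KEY/mumbai-instance-1.pem" ec2-user@"$public_ip"

Note that "running" only means the VM has started; sshd may need a few more seconds, so a short retry loop around the ssh call can still help.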
I have an AWS CLI command to create a DB snapshot, and I want to create the snapshot with the current timestamp in its name.
I am not able to run the command using crontab.
To create an Amazon RDS DB instance, I use the command below:
aws rds create-db-instance --db-instance-identifier testrds --allocated-storage 5 --db-instance-class db.m1.small --engine mysql --availability-zone us-east-1d --master-username rajuuser --master-user-password mrajuuser --port 7007 --no-multi-az --no-auto-minor-version-upgrade
To create a DB snapshot, I use the command below:
aws rds create-db-snapshot --db-instance-identifier testrds --db-snapshot-identifier testrds
This is the shell script I am using:
#!/bin/sh
#echo "Hello world"
now=$(date +"%Y-%m-%d-%H-%M-%S")
cd /home/ubuntu
cmd="$(aws rds create-db-snapshot --db-instance-identifier testrds --db-snapshot-identifier testrds:"$(now)")"
echo $cmd
I got the same error, and I found that providing the full path to the aws CLI solved the issue (for me it was on a different path than the one in hjpotter92's answer).
#!/bin/sh
# export these so the aws child process picks them up
export HOME="/home/ubuntu"
export AWS_CONFIG_FILE="/home/ubuntu/.aws/config"
d=$(date +"%Y-%m-%d-%H-%M")
/home/ubuntu/.local/bin/aws rds create-db-snapshot --db-instance-identifier myid --db-snapshot-identifier prod-scheduled-$d
As AWS creates automated RDS snapshots only once per day, my requirement was to create several snapshots each day on a fixed schedule, fired from a cronjob (e.g. at 6am, 10am, 2pm, 6pm, 10pm).
So, to keep costs "reasonable", I also added a step to delete the "cron" snapshots taken the day before:
y=$(date -d "1 day ago" +"%Y-%m-%d-%H-%M")
aws rds delete-db-snapshot --db-snapshot-identifier prod-scheduled-$y
This way I keep one snapshot per day for historical purposes, and several snapshots from the last 24 hours in case I need shorter gaps.
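For reference, the matching crontab entries could look like this (the script path is hypothetical; each run creates the new snapshot and deletes the one from the same time the day before):

0 6,10,14,18,22 * * * /home/ubuntu/rds-snapshot.sh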
Although this was not part of the question, this approach was flagged as cost-prohibitive in a comment by Luke Petersen, and maybe someone else has the same requirements (as I did).
One last thing: a similar (and, AFAIK, cleaner) solution can be achieved by using the Restore to a point in time feature, which uses the daily snapshots and the transaction log to restore a db-instance to a specific date and time (within the backup retention period).
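For example, a point-in-time restore from the CLI looks roughly like this (a sketch; identifiers and timestamp are placeholders):

aws rds restore-db-instance-to-point-in-time \
  --source-db-instance-identifier testrds \
  --target-db-instance-identifier testrds-restored \
  --restore-time 2016-09-13T18:45:00Z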
I have a similar cron task setup for backing up certain instances in EC2. Here is how I set it up:
$ crontab -l
0 14 * * * /usr/bin/zsh /home/hjpotter92/snapshot.zsh
and the contents of snapshot.zsh:
#!/usr/bin/zsh
HOME="/home/hjpotter92"
AWS_HOME="$HOME/.aws"
PATH="/usr/local/bin:/usr/bin:/bin:$PATH"
DATE=`date +%c`
aws ec2 create-snapshot --volume-id XXXXXXXX \
--description "${DATE}" \
--profile hjpotter92 \
--region "us-west-2" >> /home/hjpotter92/cron.out 2>&1
Note that while my script above is executable (x permission bit set), I still provide the shell name to it.
The problem is that you have string/variable interpolation issues in the command: "$(now)" runs a command named now instead of expanding the variable, which should be "$now" or "${now}".
Also, /bin/sh does not have a lot of the features which other shells provide, so change the shebang of the script to use bash:
#!/bin/bash
now=$(date +"%Y-%m-%d-%H-%M-%S")
cd /home/ubuntu
aws rds create-db-snapshot --db-instance-identifier testrds --db-snapshot-identifier "testrds:${now}" >> some-log.txt
Some background:
I am writing an application (using the AWS Ruby SDK/API) that deploys another application on AWS EC2 instances. Specifically, I deploy it as an ECS cluster of 4 EC2 instances and then start ECS Tasks (not services), one on each EC2 instance, to run the Docker images. That all works fine. The problem is that at some point I need to save one of those Docker images off to my ECR repo. I do that by using Systems Manager (SSM)'s aws:runShellScript to run the command on the ECS container instance. That command may take 1-4 minutes, and I don't have any way of finding out when it completes. Right now I do a sleep and then just grab the tagged container image from the repository, which is error-prone.
The question:
Is there any way to:
wait_until for an SSM run command to complete? or
have my deploying application be notified through AWS Lambda or some such? or
Listen for events?
I resolved this problem by checking periodically if command's status is no longer "InProgress". The code is in bash, but the logic should be applicable elsewhere as well.
ID="instance_id"
COMMAND_ID=$(aws ssm send-command --instance-ids $ID --document-name "AWS-RunShellScript"
--parameters commands="python" --output text --query "Command.CommandId")
STATUS=$(aws ssm list-commands --instance-id $ID --command-id $COMMAND_ID
--query "Commands[0].Status")
while [ $STATUS "==" '"InProgress"' ]; do
sleep 30;
STATUS=$(aws ssm list-commands --instance-id $ID --command-id $COMMAND_ID
--query "Commands[0].Status");
done
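Alternatively, newer versions of the aws CLI ship a built-in waiter that does this polling for you (assuming your CLI version includes it; it exits non-zero if the command ends in a failed state):

aws ssm wait command-executed --command-id "$COMMAND_ID" --instance-id "$ID"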
Here are some options I can think of:
You could add a final step to the SSM command which would send an email or post to an SNS topic or something similar.
The AWS SSM send_command API takes a notification_config parameter which you can configure to send a notification to an SNS topic when the command is in certain states, like the "Success" state. This is probably the best option for monitoring the state of the command (see the CLI sketch after this list).
Once you have done something to post to an SNS topic, you can configure a Lambda function to be triggered by messages in that SNS topic.
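For illustration, here is roughly what the second option looks like from the CLI (the Ruby SDK takes the same parameters in snake_case); the SNS topic and IAM service role ARNs are placeholders, and the role must allow SSM to publish to the topic:

aws ssm send-command --instance-ids "$ID" --document-name "AWS-RunShellScript" \
  --parameters commands="python" \
  --service-role-arn "arn:aws:iam::123456789012:role/SsmNotificationRole" \
  --notification-config '{"NotificationArn":"arn:aws:sns:us-east-1:123456789012:ssm-command-status","NotificationEvents":["Success","Failed"],"NotificationType":"Command"}'

A Lambda function subscribed to that topic can then notify or resume your deploying application.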
I have an AWS EC2 instance from which I am creating new AWS EC2 instances using the command "ec2-run-instances".
This new instance is pre-configured with the EC2 command line API and s3cmd.
While creating the instance, I pass user data to the new instance, in which I have written code for transferring a file from that instance to an AWS S3 bucket, as follows:
s3cmd put res.doc s3://BucketName/DocFiles/res.doc
but it does not transfer res.doc to the bucket.
After that, I came to know that this script uploads the files which exist on the first EC2 instance, from which I create the new instances.
So how can I solve this problem?
The script file is here:
str=$"#! /bin/bash"
str+=$"\ncd /home"
str+=$"\nmkdir pravin"
str+=$"\ns3cmd put res.doc s3://BuckectName/DocFiles/res.csv"
ud=`echo -e "$str" |base64`
ec2-run-instances ami-784c2823 -t t1.micro -g group -n 1 -k key1 -d "$ud"