Schedule to create AMI weekly from a running instance? - amazon-ec2

I have a few servers running on Amazon EC2 and I would like to back up the image (create an AMI) every week, replacing the old image.
Is there any way to automate this?

You should be able to use the command line tools to create an AMI, something like ec2-create-image -n "<image name here>" <your instance ID here>. Put that in a cron entry that is scheduled weekly and you are done. You should be able to use the answer to this question to figure out what your instance ID is programmatically.
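If you go that route today, the modern AWS CLI equivalent is aws ec2 create-image. A minimal crontab sketch, assuming the CLI and credentials are already configured on the box running cron; the instance ID and log path are placeholders:

# Every Sunday at 03:00, create a fresh AMI of the instance (% must be escaped in crontab)
0 3 * * 0 aws ec2 create-image --instance-id i-0123456789abcdef0 --name "weekly-backup-$(date +\%Y\%m\%d)" --no-reboot >> /var/log/ami-backup.log 2>&1

Note this only creates a new image each week; actually replacing the old one would take an extra aws ec2 deregister-image (and snapshot cleanup) step.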

You can now use AWS Lambda to create AMIs automatically. The whole setup, including whatever schedule you like, should take around 10 minutes.
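If you take the Lambda route, the scheduling half can be wired up from the CLI. A rough sketch, assuming you already have a Lambda function (hypothetically named create-ami-weekly here) that creates the AMIs, with placeholder region and account ID:

# Fire once a week
aws events put-rule --name weekly-ami-backup --schedule-expression "rate(7 days)"
# Let the rule invoke the function
aws lambda add-permission --function-name create-ami-weekly --statement-id weekly-ami-backup \
  --action lambda:InvokeFunction --principal events.amazonaws.com \
  --source-arn arn:aws:events:us-east-1:123456789012:rule/weekly-ami-backup
# Point the rule at the function
aws events put-targets --rule weekly-ami-backup \
  --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:123456789012:function:create-ami-weekly"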

Related

How can I run a .sh script on Google Cloud Shell on schedule?

I have a .sh script in Google Cloud Shell that automates my instance shutdown, backup, restart sequence.
How can I run a .sh script on a schedule (e.g. daily) in the simplest possible way?
I am not a professional and I've read all the documentation about cron jobs, Cloud Scheduler, Cloud Tasks... but none of the examples in the documentation appear to detail the simple task that I need, and I do not have enough knowledge yet to understand these multiple services in detail. I just need a simple pointer to understand how to connect my Google Cloud Shell .sh script with any form of scheduler, as in:
Run a .sh script that I have in my virtual 5 GB Cloud Shell storage on a schedule (daily at a specific time), instead of manually opening the Google Cloud Console and using a terminal to run the same script with the "bash" command?
I just need to know what I need to learn/do to make this happen.
Thank you for your input.
That's not going to be possible. Cloud Shell will turn off shortly after you close the tab. For this you'll need an actual VM. You can run one for free using an e2-micro instance.
https://cloud.google.com/free/docs/gcp-free-tier/#compute
Once you have this set up, you can learn crontab to run your script on a schedule.
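Once the script is on that VM (for example copied over with gcloud compute scp), the crontab entry itself is short. A sketch with hypothetical paths:

# Run the shutdown/backup/restart script every day at 02:30 and keep a log
30 2 * * * /bin/bash /home/your_user/backup.sh >> /home/your_user/backup.log 2>&1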

Terraform: deploying EC2 instances without starting them

I want to deploy my infrastructure in different AWS environments (dev, prod, qa).
That deployment creates a few EC2 instances from a custom AMI. When deployed, instances are in the "running" state. I understand this seems to be related to some constraint in the EC2 API. However, I don't necessarily want my instances started, depending on context. Sometimes, I just want the instances to be created, and they will be started later on. I guess this is a quite common scenario.
Reading the few related issues/requests on HashiCorp's GitHub makes me think so:
Terraform aws instance state change
Stop instances
aws_instance should allow to specify the instance state
There must be some Terraform-based solution which doesn't require relying on the AWS CLI / CDK or Lambda, right? Something in the Terraform script that, for example, would stop the instance right after its creation.
My Google-fu didn't help me much here. Any help / suggestion for dealing with that scenario is welcome.
Provisioning a new instance automatically puts it in the "running" state.
As Marcin suggested, you can use user data scripts. Here's a pseudo user data script; the actual implementation is for you to figure out ;)
#!/bin/bash
# get this instance's ID (e.g. from the instance metadata service) and pass it to the next line
aws ec2 stop-instances --instance-ids i-1234567890abcdef0
You can read about running scripts as part of the bootstrapping here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
Basically it's all up to your use case. We don't do this generally. Still, if you want to provision your EC2 instances and need to put them in a stopped state, as bschaatsbergen suggested, you can use user_data in Terraform. Make sure to attach a role with the relevant permission (ec2:StopInstances).
#!/bin/bash
# Look up this instance's ID and region from the instance metadata service, then stop the instance
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/.$//')
aws ec2 stop-instances --instance-ids "$INSTANCE_ID" --region "$REGION"
As already stated by others, you cannot just "create" instances; they will come up in the "running" state.
Rather, I would ask what the exact use case is here:
Sometimes, I just want the instances to be created, and they will be started later on.
Why do you have to create instances now and use them later? Can't they be created exactly when they are required? Is there any specific requirement to keep them initialized before they are used? Or do the instances take time to start?

How to schedule a shell script using Google Cloud Shell?

I have a .sh file that is stored in GCS. I am trying to schedule the .sh file through Google Cloud Shell.
I can run the file with the gsutil cat gs://miptestauto/baby.sh | sh command, but I am not able to schedule it.
Following is my code for scheduling the file:
16 17 * * * gsutil cat gs://miptestauto/baby.sh | sh
It displays the message "auto saving...done", but the scheduled job is not displayed when I use crontab -l.
# contents of .sh file
#!/bin/bash
bq load --source_format=CSV babynames.baby_destination13 gs://testauto/yob2010.txt name:string,gender:string,count:integer
Can anyone please tell me how to schedule it using Google Cloud Shell?
I am not using Compute Engine/App Engine. I just want to schedule it using Cloud Shell.
Thank you in advance :)
As per the documentation, Cloud Shell is intended for interactive use only. The Cloud Shell instances are provisioned on a per-user, per-session basis and sessions are terminated after an hour of inactivity.
In order to schedule a daily cron job, the instance needs to be up and running at all times, but this doesn't happen with Cloud Shell, and I believe your jobs are not running because of this.
When you start Cloud Shell, it provisions an f1-micro instance, which is the same machine type you can get for free if you are eligible for "Always Free". Therefore you can create an f1-micro instance, configure the cron job on it and leave it running so it can execute the daily job.
You can check free usage limits at https://cloud.google.com/compute/pricing#freeusage
You can also use the Cloud Scheduler product https://cloud.google.com/scheduler which is a serverless managed Cron like scheduler.
To schedule a script you first have to create a project if you don’t have one. I assume you already have a project so if that’s the case just create the instance that you want for scheduling this script.
To create the new instance:
At the Google Cloud Platform Console, click on Products & Services, which is the icon with the four bars at the top left-hand corner.
On the menu go to the Compute section and hover on Compute Engine and then click on VM Instances.
Go to the menu bar above the instance section and there you will see a Create Instance button. Click it and fill in the configuration values that you want your new instance to have. The values that you select will determine your VM instance features. You can choose, among other values, the name, zone and machine type for your new instance.
In the Machine type section click the drop-down menu tab to select an “f1-micro instance”.
In the Identity and API access section, give access scope to the Storage API so that you can read and write to your bucket in case you need to do so; the default access scope only allows you to read. Also enable BigQuery API.
Once you have the instance created and access to the bucket, just create your cron job inside your new instance: in the user account under which the cron job will execute, run crontab -e and edit this file to add the cron job that will execute your baby.sh script. The cron documentation should help you with this.
Please note, if you want to view output from your script you may need to redirect it to your current terminal.
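For reference, a cron entry for the scenario in the question, run on the new instance rather than in Cloud Shell, might look like this (the log path is an assumption):

# Daily at 17:16, pull the script from the bucket and run it, keeping the output
16 17 * * * gsutil cat gs://miptestauto/baby.sh | sh >> /home/your_user/baby.log 2>&1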

How to email time-out notice from Google Cloud Storage?

I have a gsutil script that periodically backs up data to Google Cloud Storage.
The gsutil backup script runs on my local box.
I would like to run a script (or service) on Google Cloud Storage that emails a warning to the administrator when no backup has been made in 24 hours.
I am new to cloud services. Please point me in the right direction.
Where would such a script be located? Is there a similar example script?
Thank you.
There's no built-in feature that accomplishes this. However, you could accomplish something like this with another monitor program.
For example, I might edit my backup script so that after successfully completing a backup, it writes the current time to a "last_successful_backup.txt" file. Then I'd put a cron job wherever I keep my monitors and alerting systems that would check the "last_successful_backup.txt" file every few hours and set off an alarm if the time it contains is older than 24 hours.
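A minimal sketch of such a monitor, assuming the backup script writes a Unix timestamp with date +%s > last_successful_backup.txt after each successful run, and that a local MTA is configured so mail works; the path and address are placeholders:

#!/bin/bash
# Alert if the last successful backup is more than 24 hours old
STAMP_FILE=/var/backups/last_successful_backup.txt
NOW=$(date +%s)
LAST=$(cat "$STAMP_FILE" 2>/dev/null || echo 0)
if [ $(( NOW - LAST )) -gt $(( 24 * 60 * 60 )) ]; then
    echo "No successful backup in the last 24 hours." | mail -s "Backup overdue" admin@example.com
fi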
What about spinning up a Google VM and sending the emails from that instance, using, say, SendGrid, Mailgun, or Mailjet?
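If you go that way, SendGrid's v3 mail send endpoint can be called with plain curl from the VM. A rough sketch, with placeholder addresses and the API key taken from an environment variable:

# Send a plain-text alert email through SendGrid's v3 API
curl --request POST https://api.sendgrid.com/v3/mail/send \
  --header "Authorization: Bearer $SENDGRID_API_KEY" \
  --header "Content-Type: application/json" \
  --data '{"personalizations":[{"to":[{"email":"admin@example.com"}]}],
           "from":{"email":"alerts@example.com"},
           "subject":"Backup overdue",
           "content":[{"type":"text/plain","value":"No backup in the last 24 hours."}]}'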

How can I launch 10 instances and tag them at once?

I want a single script that can launch and tag my instances, which I can then configure with Chef.
Say my service requires 10 instances: I want to be able to run 10 instances, then tag them according to their role (web, db, app server).
Then once I do that, I can use Chef to connect to each one and configure them how I want.
But I'm confused. I know I can launch instances, but how do you wait for them to come online? Do you have to continuously loop in some sort of a timer? That seems like a very hacky way to do it!
If you're going to do everything from the outside, you do just have to poll to wait for the instance to be ready (which doesn't necessarily mean it's ready to use; the actual startup completes a little later).
You can also pass user data when you start an instance. Most AMIs support cloud-init and will interpret the data passed as a shell script if it is in the right format. That shell script could run Chef or do other configuration tasks.
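If you do drive it from the outside with the AWS CLI, it can tag at launch time and has built-in waiters, so the polling isn't something you have to write yourself. A sketch with placeholder AMI ID, instance type and key name:

# Launch 10 instances tagged with their role at creation time
IDS=$(aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 --count 10 --instance-type t3.micro --key-name my-key \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Role,Value=web}]' \
  --query 'Instances[].InstanceId' --output text)

# Block until they pass their status checks, then hand them to Chef
aws ec2 wait instance-status-ok --instance-ids $IDS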
