I would like to lower the amount I'm paying to Amazon.
There are stopped instances that I want to back up and save on my local, on-premises server.
After creating an image from an instance, is there any way I can copy the AMI to my local server and remove it from Amazon? Then, if one day I need it back, I can transfer it from my local server to Amazon and use it again.
The instance was first created on Amazon. I would rather save the instance on-premises as a file, not as a running virtual server.
The main issue is: how can I transfer and save the image of an instance that was created on Amazon as a file on my local server, and how can I return it to Amazon in case I need to build the instance again?
Is there any way to do it?
Thanks a lot!
You can use some backup software (Duplicati, CloudBerry, or anything else):
Install the backup software on your EC2 instance
Make an image backup to S3 cloud storage
Install the backup software on your physical machine
Restore the image from S3 cloud storage to the physical machine, or to your local storage, to keep the backup locally (see the sketch below for the download step).
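For the download step, if the backup lands in S3 as ordinary objects, the AWS CLI can pull it down to the local machine; the bucket and path here are placeholders:

    aws s3 cp s3://my-backup-bucket/images/instance-backup.img /backups/instance-backup.img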
And last, but not least:
Good luck!
You would need to use the VM Import/Export tool for that. Read the docs to make sure you know how to upload the image again.
As to the cost, check your account for the exact billing details. Once you create the image, it is stored on your account, and the EBS snapshots behind an AMI keep accruing storage charges until you deregister the AMI and delete its snapshots, so downloading a copy alone does not stop the billing.
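A rough sketch of the round trip with the AWS CLI (instance ID, bucket, and keys are placeholders; check the VM Import/Export prerequisites for your account first):

    # Export the instance as an OVA into S3, then download it locally
    aws ec2 create-instance-export-task \
        --instance-id i-0123456789abcdef0 \
        --target-environment vmware \
        --export-to-s3-task DiskImageFormat=VMDK,ContainerFormat=ova,S3Bucket=my-export-bucket,S3Prefix=backups/
    aws s3 cp s3://my-export-bucket/backups/ /local/backups/ --recursive

    # Later: upload the OVA back to S3 and import it as an AMI
    aws s3 cp /local/backups/my-export.ova s3://my-export-bucket/restore/my-export.ova
    aws ec2 import-image \
        --disk-containers "Format=ova,UserBucket={S3Bucket=my-export-bucket,S3Key=restore/my-export.ova}"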
You can create an image file from your current drive, but it will be quite expensive:
create another instance
attach your volume to it as the second drive
use something like dd if=/dev/xvdf of=drive.img ... to copy the volume to a file (the exact device name depends on how the volume was attached; see the sketch after this list)
rsync / ftp / etc. the file to your local drive.
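A minimal sketch of the last steps, assuming the volume shows up as /dev/xvdf on the helper instance and that my-local-server accepts SSH (all names are placeholders):

    # on the second instance, after attaching the volume
    sudo dd if=/dev/xvdf of=drive.img bs=4M status=progress
    gzip drive.img                      # compress to cut transfer costs
    rsync -avP drive.img.gz user@my-local-server:/backups/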
You will be billed for the second instance and for the data transfer. When you want to restore the machine, you'll be billed too.
Have you checked the free tier? You get a year of free access to AWS for small instances and volumes.
You need a tool to get what you want. Take, for example, CloudBerry: create an image, store it at Amazon, and then restore it when needed. This is the best option for you; I see no other way.
Related
I have 4 Windows workstations that I need to back up to cloud storage, and I would like this to happen automatically. Is it possible?
You could set up a scheduled task on each workstation that regularly runs gsutil rsync against a dedicated folder in Google Cloud Storage (picking up the correct local folder on each workstation).
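For example, something along these lines registered once per workstation would do it (the bucket, local folder, and schedule are placeholders):

    REM run a recursive sync to Cloud Storage every night at 02:00
    schtasks /Create /TN "GCSBackup" /SC DAILY /ST 02:00 /TR "gsutil -m rsync -r C:\Data gs://my-backup-bucket/workstation-1"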
My web application server is an AWS EC2 instance, and the app uses the MEAN stack.
I'd like to upload an image to the EC2 instance (e.g. to /usr/local/web/images).
I can't find out how to get credentials for this; everything I find is about AWS S3.
How can I upload an image file to the EC2 instance?
If you transfer files repeatedly, try unison. It is bidirectional, a kind of sync, and has options for handling conflicts.
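A minimal unison invocation for this case might look like the following (the host and paths are assumptions):

    # two-way sync between a local folder and the EC2 images directory over SSH
    unison /home/me/images ssh://ec2-user@my-ec2-host//usr/local/web/images -batch -auto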
I've found the easiest way to do this as a one-off is to upload the file to Google Drive and then download it from there. View this thread to see how simple it is!
I have an AMI configured with my production code; the server setup is Nginx + Unicorn.
The problem I am facing: whenever traffic goes up, I need to boot an instance, log in, do a git pull and bundle update, and precompile the assets, which is time-consuming. I want to avoid this manual process.
I want a script/process that automates the whole deployment (git pull, bundle update, and asset precompilation) as soon as I boot a new instance from this AMI.
What is the best way to get this done? Any help would be appreciated.
You can place your commands in /etc/rc.local (commands in this file are executed when the server boots).
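For example (paths and branch are assumptions; rc.local runs as root at boot):

    # appended to /etc/rc.local, before the final "exit 0"
    cd /var/www/application
    git pull origin master
    bundle install --deployment
    RAILS_ENV=production bundle exec rake assets:precompile
    service nginx restart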
But the best way is to use Capistrano. You need to add require "capistrano/bundler" to your deploy.rb file, and bundle update will be run automatically. For more information you can read this article: https://semaphoreapp.com/blog/2013/11/26/capistrano-3-upgrade-guide.html
An alternative approach is to deploy your app to a separate EBS volume (you can still mount this inside /var/www/application or wherever it currently is).
After deploying, you create an EBS snapshot of this volume. When you launch a new instance, you tell EC2 to create a new volume for it from that snapshot, so the instance starts with the latest gems and code already installed (I find bundle install alone can take several minutes). All your startup script needs to do is mount the volume (and if you add it to fstab when you make the AMI, you don't even need to do that). I much prefer scaling operations like this to have no external dependencies (e.g. what would you do if GitHub or rubygems.org had an outage just when you need to deploy?).
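A sketch of the two moving parts, with placeholder IDs and device names:

    # after each deploy: snapshot the code volume
    aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "app code and gems"

    # on first boot of a new instance (volume created from the snapshot and
    # attached as /dev/xvdf at launch): just mount it
    mount /dev/xvdf /var/www/application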
You can take this a step further by using Amazon's Auto Scaling service. In a nutshell, you create a launch configuration specifying the AMI, instance type, volume snapshots, etc. Then you control the group size manually (through the web console or the API), on a fixed schedule, or based on CloudWatch metrics, and Amazon creates or destroys instances as needed using the information in your launch configuration.
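With the AWS CLI this could look roughly as follows (names, AMI ID, sizes, and zone are placeholders):

    aws autoscaling create-launch-configuration \
        --launch-configuration-name app-lc \
        --image-id ami-0123456789abcdef0 \
        --instance-type t3.small
    aws autoscaling create-auto-scaling-group \
        --auto-scaling-group-name app-asg \
        --launch-configuration-name app-lc \
        --min-size 1 --max-size 4 \
        --availability-zones us-east-1a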
Is it possible to clone an EC2 instance, data and all?
You can make an AMI of an existing instance, and then launch other instances using that AMI.
The easiest way is through the web management console:
go to the instance
select the instance and click Instance Actions
create image
Once you have an image you can launch another cloned instance, data and all. :)
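The same two steps from the command line, if you prefer the AWS CLI (IDs are placeholders):

    aws ec2 create-image --instance-id i-0123456789abcdef0 --name "my-clone"
    aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t3.small --count 1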
There is no explicit Clone button. Basically what you do is create an image, or snapshot of an existing EC2 instance, and then spin up a new instance using that snapshot.
First create an image from an existing EC2 instance.
Check your snapshots list to see if the process is completed. This usually takes around 20 minutes depending on how large your instance drive is.
Then, you need to create a new instance and use that image as the AMI.
Nowadays it is even easier to clone a machine with the EBS-backed instances released a while ago. This is how we do it in BitNami Cloud Hosting.
Basically you just take a snapshot of the instance, which can be used later to launch a new server. You can do it either using the AWS console (saving the EBS-backed instance as an AWS AMI) or using the EC2 API tools:
create a snapshot with ec2-create-snapshot
and then launch an instance from a snapshot
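With the modern AWS CLI the equivalent is roughly this (IDs are placeholders): snapshot the volume, then register an AMI from the snapshot that you can launch from.

    aws ec2 create-snapshot --volume-id vol-0a1b2c3d --description "clone source"
    aws ec2 register-image --name "cloned-server" \
        --root-device-name /dev/xvda --virtualization-type hvm \
        --block-device-mappings "DeviceName=/dev/xvda,Ebs={SnapshotId=snap-0a1b2c3d}"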
Cloning the instance is nothing more than creating a backup and then launching a new server based on it. You can find a bunch of articles out there describing this; try searching for how to back up or resize a whole EC2 instance. For example, this blog is a really good place to start: alestic.com
To answer your question: AWS now makes cloning really easy, see Launch instance from your Existing Instance.
On the EC2 Instances page, select the instance you want to use
Choose Actions --> Image and Templates, and then Launch More Like This.
Review & Launch
This will use the existing instance as a template for the new one.
Or you can take a snapshot of the existing volume and use that snapshot with an existing AMI that you pick during instance launch.
You can use the AWS API or the console UI to create an AMI (Amazon Machine Image) of your running instance. You can specify whether to reboot the instance when creating the AMI. Then you can use the AWS API or console UI to launch more instances from the AMI you created.
You can do it very easily with cloud management software like enStratus, RightScale, or Scalr (disclaimer: I work there). With the cloned farm you can:
Create a snapshot or a pre-made image to launch another day
Duplicate your configuration to test it before production
My fantasy is to be able to spin up a standard AMI, load a tiny script and end up with a properly configured server instance.
Part of this is that I would like to have a PRIVATE yum repo in S3 that would contain some proprietary code.
It seems that S3 wants you to either be public or use AMZN's own special flavor of authentication.
Is there any way that I can use standard HTTPS + either Basic or Digest auth with S3? I'm talking about direct references to S3, not going through a web-server to get to S3.
If the answer is 'no', has anyone thought about adding AWS Auth support to yum?
The code in cgbystrom's git repo is an expression of intent rather than working code.
I've made a fork and gotten things working, at least for us, and would love for someone else to take over.
https://github.com/rmela/yum-s3-plugin
I'm not aware of a way to use non-proprietary authentication with S3; however, we accomplish a similar goal by mounting an EBS volume on our instances once they fire up. You can then access the EBS volume as if it were part of the local file system.
We can make changes to EBS as needed to keep it up to date (often updating it hourly). Each new instance that mounts the EBS volume gets the data current as of the mount time.
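For instance, a startup script along these lines would attach and mount the volume (IDs, device, and mount point are placeholders):

    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
        --instance-id i-0123456789abcdef0 --device /dev/sdf
    sudo mkdir -p /mnt/repo
    sudo mount /dev/xvdf /mnt/repo   # the kernel may expose /dev/sdf as /dev/xvdf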
You can certainly use Amazon S3 to host a private yum repository. Instead of fiddling with authentication, you could try a different route: limit access to your private S3 bucket by IP address. This is fully supported; see the S3 documentation.
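As an illustration, a bucket policy like the following allows anonymous GetObject only from a given address range (the bucket name and CIDR are placeholders), applied here with the AWS CLI:

    cat > policy.json <<'EOF'
    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-yum-repo/*",
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}}
      }]
    }
    EOF
    aws s3api put-bucket-policy --bucket my-yum-repo --policy file://policy.json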
A second option is to use a Yum plug-in that provides the necessary authentication. Seems like someone already started working on such a plug-in: https://github.com/cgbystrom/yum-s3-plugin.