Cloud Snapshots vs Images

Thinking out loud:
If I can create a snapshot of a root disk and create a running VM from that snapshot, why do cloud providers also offer the option to create images from a running VM and use them to create other instances?
Well, snapshots are point-in-time copies, and images created from running VMs do much the same job.
Say I need a point-in-time copy of a running machine to reuse over and over for deployments: snapshots are the answer.
Or, if I need a customized OS tailored to my environment's requirements, I create an image. Beyond these purposes:
Question 1: Why do the two options exist?
Question 2: Where should I use a snapshot for creating disk images?
Question 3: Where should I use an Image, rather than a snapshot?
Question 4: Why Cloned Disks?
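For concreteness, here is a minimal, purely illustrative boto3 sketch of the two options as AWS exposes them; the region, volume ID and instance ID are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Option 1: a snapshot is a point-in-time copy of a single EBS volume.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",             # placeholder root-disk volume
    Description="Point-in-time copy of the root disk",
)

# Option 2: an image (AMI) captures the instance as a bootable template,
# including its attached volumes, so new instances can be launched from it.
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",             # placeholder running instance
    Name="my-customized-base-image",
)

print(snapshot["SnapshotId"], image["ImageId"])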

Related

How To "Image" An EC2 Instance For Training Purposes

I want to create some training videos where I install software on a Windows machine. There might be several scenarios that I have to cover so I want to start with an "out of the box" install of Windows for each video. Is there a way to do this with AWS EC2 without having to destroy and create a new instance each time?
The perfect situation would be where I could export an image and then reload that image when I start the instance again. Is this possible?
Amazon EC2 uses Amazon Machine Images (AMIs) to create a bootable copy of a disk. An AMI can be created from an existing instance. When an AMI is used to launch a new instance, the disk will contain an exact copy of the original disk.
This does, however, involve launching a new instance.
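As a rough sketch of that flow with boto3 (the instance ID, instance type and names below are placeholders, and this is just one way to script it):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# One-time: capture the "out of the box" Windows install as a baseline AMI.
baseline = ec2.create_image(
    InstanceId="i-0123456789abcdef0",     # placeholder: the prepared instance
    Name="windows-baseline-for-training",
)

# Wait for the AMI to become available before launching from it.
ec2.get_waiter("image_available").wait(ImageIds=[baseline["ImageId"]])

# Per video: launch a clean instance from that baseline...
launched = ec2.run_instances(
    ImageId=baseline["ImageId"],
    InstanceType="t3.large",              # placeholder size
    MinCount=1,
    MaxCount=1,
)
instance_id = launched["Instances"][0]["InstanceId"]

# ...and terminate it once the recording is done.
ec2.terminate_instances(InstanceIds=[instance_id])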
If you want your disk to magically reset without creating a new instance, then you'll need to find a Windows utility that provides this capability (like Deep Freeze) since this takes place inside the computer and Operating System, which AWS cannot access.

Need to update my EC2 environment regularly?

I have an EC2 instance, Python 3 environment with many installed features like SQLAlchemy, cherrypy, etc. How often do I need to update them? Are they updated automatically, or do I need to do this updating manually? I haven't found this information online.
Your EC2 instance will not be updated automatically. In this regard, it behaves exactly like a local machine. So, if you need updates to your OS or other software, you will have to apply them manually.
However, manually running software update commands or even release upgrades is not the ideal scenario and something you may want to avoid in cloud environments.
Instead, you should think about how to automate your environment setup and configuration.
AWS provides some ways to run scripts at instance launch:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
This way, you can easily migrate to more up-to-date base images (AMIs) or change the hardware configuration of your instance.
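For example, a hypothetical sketch with boto3, where the user-data script brings the software up to date at launch (the AMI ID, instance type and package list are placeholders):

import boto3

# This shell script runs once when the instance boots for the first time.
USER_DATA = """#!/bin/bash
yum update -y
python3 -m pip install --upgrade SQLAlchemy cherrypy
"""

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # swap in a newer base AMI as needed
    InstanceType="t3.micro",           # or change the hardware configuration here
    MinCount=1,
    MaxCount=1,
    UserData=USER_DATA,                # boto3 base64-encodes this for you
)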

Using Docker to share projects to work on, on Windows 10?

I'm a junior web developer working in a small web agency. We work on Windows 10, with WampServer, mainly on PrestaShop, WordPress and Symfony websites. For every task I am given a "ticket" for which I must develop on a new branch (if needed). When the work is done I merge my branch into the develop branch, which is hosted on a preproduction server, and if it is considered OK, it is then merged into the master branch, which is the website in production hosted on another server.
I was given the task to do some research on Docker and to find how it could improve our workflow.
But from what I understand so far, Docker containers are just similar to virtual machines and are only useful for building isolated environments in order to test an application without having to think about dependencies.
But given the fact that we already have a preproduction server, I don't see the point of using Docker? Did I miss something?
Also, could Docker be of use for sharing projects between teammates (we all work on Windows)? For example, if a developer is working on a website locally, can he create a container and its image that another developer could use instantly, without any configuration, to work on his part?
But from what I understand so far, Docker containers are just similar to virtual machines and are only useful for building isolated environments in order to test an application without having to think about dependencies.
No, Docker containers are not only useful for testing.
When you build a proper workflow with Docker, you can achieve 100% parity between development, staging and production, provided they all use the same Docker images.
But given the fact that we already have a preproduction server, I don't see the point of using Docker? Did I miss something?
This pre-production server, normally called a staging server, should also use Docker to run the code.
Also, could Docker be of use for sharing projects between teammates (we all work on Windows)? For example, if a developer is working on a website locally, can he create a container and its image that another developer could use instantly, without any configuration, to work on his part?
Yes. You create base Docker images with only what is necessary to run in production, and from them you build other Docker images that add the developer tools.
Production image company/app-name:
FROM php:alpine
# add your other production dependencies here
Assuming the Docker image is built with the name company/app-name, the development image can then be based on it.
Development image company/app-name-dev:
FROM company/app-name
# add only developer tools here
Now the developer uses both images, company/app-name and company/app-name-dev, during development, while on the staging server only the company/app-name image is used to run the code.
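A sketch of that flow with the Docker SDK for Python; it assumes, purely for illustration, that the two Dockerfiles above are saved as Dockerfile and Dockerfile.dev, that the project lives at /path/to/project, and that PHP's built-in dev server is good enough for local work:

import docker

client = docker.from_env()

# Build the production image, then the dev image layered on top of it.
client.images.build(path=".", dockerfile="Dockerfile", tag="company/app-name")
client.images.build(path=".", dockerfile="Dockerfile.dev", tag="company/app-name-dev")

# During development, run the dev image with the project mounted into it
# (served here with PHP's built-in dev server just for illustration);
# the staging server runs the plain company/app-name image instead.
client.containers.run(
    "company/app-name-dev",
    command="php -S 0.0.0.0:8000 -t /var/www/html",
    detach=True,
    ports={"8000/tcp": 8000},
    volumes={"/path/to/project": {"bind": "/var/www/html", "mode": "rw"}},
)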
After some months of working in this flow you may even feel confident enough to start using company/app-name to deploy the app to production, and then you have 100% parity between development, staging and production.
Take a look at the Php Docker Stack for some inspiration; I built it with this goal in mind at my last job, but I ended up leaving the company while we were still in the process of adopting it in development...
But don't put all the services you need into one single Docker image; that is bad practice in Docker. Instead use one service per image: one for PHP, another for the database, another for the Nginx server, and so on. See here how several services can be composed together with Docker Compose; a rough sketch of the idea follows below.
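As a rough imperative equivalent of what a small Compose file would declare, sketched with the Docker SDK for Python (image names, credentials and ports are placeholders):

import docker

client = docker.from_env()

# One user-defined network so the containers can reach each other by name.
client.networks.create("app-net", driver="bridge")

# One service per container: database, PHP application, web server.
client.containers.run("mysql:8", name="db", detach=True, network="app-net",
                      environment={"MYSQL_ROOT_PASSWORD": "example"})
# assumes company/app-name's CMD starts a long-running process (e.g. php-fpm)
client.containers.run("company/app-name", name="app", detach=True, network="app-net")
client.containers.run("nginx:alpine", name="web", detach=True, network="app-net",
                      ports={"80/tcp": 8080})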

Creating an easily-distributable dev environment - Docker/Vagrant? [closed]

I'm trying to figure out how to go about making an easy way to distribute a "dev environment" for working with my organization's Wordpress site. We currently have a local Linux server running the Wordpress site, and a VirtualBox image that is horribly out of date and a very poor representation of this server. We currently distribute this to team members for their local development, which causes lots of problems as the local image is often too different.
I'm not too worried about the database aspect of things; I'm thinking of just doing weekly dumps from the live server which can be imported by developers to keep their local up to date.
I'm more interested in finding an easy way to distribute a preconfigured stack to users on OSX or Windows that already has PHP/Apache/MySQL configured by me, plus a git client set up to pull all the static files on command: something the user can just run, then go to localhost:8000 to see it. I'd also like some way for them to edit the files that were pulled from the git repository.
I'm currently looking into Docker and Vagrant but I'm not sure which is more appropriate for this task; Docker seems like it would be more suited to Linux machines. I know Vagrant supports mapping external folders into the VM, which seems like it would solve my problem, but I wanted to ask for more suggestions before I start learning Chef/Puppet/etc.
I think both Vagrant and Docker may be used to solve your problem.
Vagrant may be more suitable for sharing the environment with Windows/Mac machines, but Docker integration on these systems is getting better every day through tools such as boot2docker.
Docker, by contrast, requires a modern Linux kernel or one of these tools.
If I went with the Vagrant option, I would set up one machine with all the dependencies installed on it. For the installation you can use one of the provisioners available in Vagrant (e.g. Chef or Puppet). This may be easier if you have previous experience with them and/or are not very keen on bash. There are plenty of examples you can check to see how to do it, such as https://github.com/r8/vagrant-lamp
Using Docker is also a very good option. To answer your question: you can share any local folder of the host machine with a container (using the docker run option -v or --volume). In that case I would run each of the services you want to provide (i.e. the PHP server, MySQL, Apache...) as independent containers and link them using the docker run option --link. Writing the Dockerfiles to build these containers may turn out to be harder than using Chef or Puppet (and although you could use them to build the containers, the integration is not as good as with Vagrant). On the other hand, with Docker you can take advantage of all the ready-to-use applications available on Docker Hub. I would also recommend a Docker tool called fig (www.fig.sh), which lets you run a cluster of containers, linking and configuring the services easily, and allows you to manage all the containers in a very comfortable way. Again, you can find very illustrative examples of this use case on the internet, for example https://github.com/kasperisager/phpstack. A small sketch of the volume-sharing idea follows below.
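A minimal sketch with the Docker SDK for Python; the image, host path and port are placeholders, and fig/Compose would express the same thing declaratively:

import docker

client = docker.from_env()

# Mount the local git checkout into the container so edits on the host are
# visible immediately, and expose the site on localhost:8000.
client.containers.run(
    "php:apache",                        # placeholder PHP+Apache image
    detach=True,
    ports={"80/tcp": 8000},
    volumes={"/home/dev/site": {"bind": "/var/www/html", "mode": "rw"}},
)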

Run Amazon EC2 AMI in Windows

Is there a way to run an Amazon EC2 AMI image in Windows? I'd like to be able to do some testing and configuration locally. I'm looking for something like Virtual PC.
If you build your images from scratch you can do it with VMware (or insert your favorite VM software here).
Build and install your Linux box as you'd like it, then run the AMI packaging/uploading tools in the guest. Then just keep backup copies of your VM image in sync with the different AMIs you upload.
Some caveats: you'll need to make sure you're using compatible kernels, or at least have compatible kernel modules in the VM, or your instance won't boot on the EC2 network. You'll also have to make sure your system can autoconfigure itself (network, mounts, etc.).
If you want to use an existing AMI, it's a little trickier. You need to download and unpack the AMI into a VM image, add a kernel and boot it. As far as I know, there's no 'one click' method to make it work. Also, the AMIs might be encrypted (I know they are at least signed).
You may be able to do this by having a 'bootstrap' VM set up specifically to extract the AMIs into a virtual disk using the AMI tools, then booting that virtual disk separately.
I know it's pretty vague, but those are the steps you'd have to go through. You could probably do some scripting to automate the process of converting AMIs to VMDKs.
The Amazon forum is also helpful. For example, see this article.
Oh, this article also talks about some of these processes in detail.
Amazon EC2 with Windows Server - announced this morning, very exciting
http://aws.amazon.com/windows/
It's a bit of a square peg in a round hole ... kind of like running MS-Office on Linux.
Depending on how you value your time, it's cheaper to just get another PC and install Linux and Xen.

Resources