I have a question about the images to run on OpenStack.
Can I use an image of any operating system? I guess not... but why?
I found images already prepared for OpenStack, but what's the difference between a cloud-ready image and a normal image?
For instance, can I create a virtual machine with Windows desktop? If not, why not?
Thank you
Cloud-ready images have been customised by the distro maker to run well under a hypervisor or cloud platform such as OpenStack, EC2, KVM, or LXC (not strictly a hypervisor) instead of on physical hardware. This entails removing packages that are only needed in physical environments, such as wireless drivers, and adding packages that are useful in a cloud environment. For example, during the boot process, cloud-ready images download metadata from the environment, such as the hostname and networking information. This data is used to "personalise" a new instance when it boots up for the first time.
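To make that concrete, here's a minimal sketch of the kind of first-boot lookup such an image performs, assuming the EC2-style metadata service at 169.254.169.254 that both EC2 and OpenStack expose (error handling and retries omitted for brevity):

```python
# Minimal sketch: fetch identity data from the link-local metadata
# service, as a cloud-ready image does on first boot.
from urllib.request import urlopen

METADATA = "http://169.254.169.254/latest/meta-data"

hostname = urlopen(METADATA + "/local-hostname").read().decode()
local_ip = urlopen(METADATA + "/local-ipv4").read().decode()
print("This instance should call itself %s (%s)" % (hostname, local_ip))
```

A real image runs logic like this from an init-time tool (cloud-init on Ubuntu) and uses the results to set the hostname, write network config, and so on.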
If you really want to get into the nuts and bolts of things, the Ubuntu UEC Images page has lots of details about the composition of the Ubuntu cloud images, as well as other information such as how to build one yourself.
I'm sure you can create a virtual machine running Windows desktop, but I've never had occasion to do so. If you look at the Amazon page about Windows, it's all about running server software like SQL Server and ASP.NET applications.
As Everett Toews pointed out in a comment above, one of the main things that makes an image cloud-ready is that it can retrieve data from the metadata server when it boots up. This is used for things like installing the SSH public key and collecting user data.
In addition to cloud-init, there's also Condenser. Or you can roll your own: OpenStack uses the same protocol as the Amazon EC2 metadata service, so the EC2 metadata docs explain how to access this data.
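For instance, a roll-your-own init script could fetch the user data and the SSH public key over that same protocol. A minimal sketch, assuming the instance can reach the standard metadata endpoint (user-data returns 404 if none was supplied):

```python
# Minimal sketch: pull user data and the instance's SSH public key
# via the EC2-compatible metadata protocol.
from urllib.request import urlopen

BASE = "http://169.254.169.254/latest"

user_data = urlopen(BASE + "/user-data").read().decode()
public_key = urlopen(
    BASE + "/meta-data/public-keys/0/openssh-key").read().decode()

print("user data:\n%s" % user_data)
print("authorized key: %s" % public_key)
```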
Does anyone know how to migrate Windows images from Amazon EC2 to Google Compute Engine and back? I've read about Linux image migration in the GCE documentation, but there is no info about Windows images. I also saw the question "Is it possible to upload a Windows image to Google Compute Engine?", but the Google Groups link it references is inaccessible, so I can't read it.
Thanks.
Sorry there was a problem with that Groups link. The answer I wrote there was that at least two companies offer migration tools for moving a Windows VM into GCE. One is Racemi, and the other is CloudEndure.
I wanted to update this to reflect that GCE now has an "Import VM" option in the Cloud Console. This will direct you to a service that enables free migration of Windows VMs.
I have some professional servers, and I want to create a cluster of 7-15 machines with CoreOS. I'm a little familiar with Proxmox, but I'm not clear on how to create a virtual machine (VM) with CoreOS on Proxmox. I'm also not sure whether a cluster of CoreOS VMs on Proxmox is the right approach.
So, I need to know:
How to create a VM with CoreOS on Proxmox.
Whether Proxmox is viable for creating a CoreOS cluster.
I have no experience with Proxmox, but if you can make an image that runs then you can use it to stamp out the cluster. What you'd need to do is boot the ISO, run the installer and then make an image of that. Be sure to delete /etc/machine-id before you create the image.
CoreOS uses cloud-config to connect the machines together and configure a few parameters related to networking -- basically anything to get the machines talking to the cluster. A cloud-config file should be provided as a config-drive image, which is basically like mounting a CD-ROM to the VM. You'll have to check the docs on Proxmox to see if it supports that. More info here: http://coreos.com/docs/cluster-management/setup/cloudinit-config-drive/
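Proxmox specifics aside, a config-drive image is just an ISO9660 volume labelled config-2 containing the cloud-config at openstack/latest/user_data. Here's a minimal sketch of building one, assuming mkisofs is installed; the hostname and discovery token in the cloud-config are placeholders:

```python
# Minimal sketch: assemble a CoreOS config-drive ISO following the
# layout from the CoreOS docs (volume label "config-2", file at
# openstack/latest/user_data).
import os
import subprocess
import tempfile

CLOUD_CONFIG = """#cloud-config
hostname: core-01
coreos:
  etcd:
    # placeholder; generate your own at https://discovery.etcd.io/new
    discovery: https://discovery.etcd.io/<token>
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
"""

def build_config_drive(output_iso):
    tmp = tempfile.mkdtemp()
    user_data_dir = os.path.join(tmp, "openstack", "latest")
    os.makedirs(user_data_dir)
    with open(os.path.join(user_data_dir, "user_data"), "w") as f:
        f.write(CLOUD_CONFIG)
    # The "config-2" volume label is what CoreOS looks for at boot.
    subprocess.check_call(
        ["mkisofs", "-R", "-V", "config-2", "-o", output_iso, tmp])

build_config_drive("configdrive.iso")
```

Attach the resulting ISO to each VM as a CD-ROM drive and CoreOS will pick up the cloud-config on boot.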
The other option you have is to skip the VMs altogether and instead of using Proxmox, just boot CoreOS directly on your hardware. You can do this by booting the ISO and installing or doing something like iPXE: http://coreos.com/docs/running-coreos/bare-metal/booting-with-ipxe/
My organization's website is a Django app running on front end webservers + a few background processing servers in AWS.
We're currently using Ansible for both :
system configuration (from a bare OS image)
frequent manually-triggered code deployments.
The same Ansible playbook is able to provision either a local Vagrant dev VM, or a production EC2 instance from scratch.
We now want to implement autoscaling in EC2, and that requires some changes towards a "treat servers as cattle, not pets" philosophy.
The first prerequisite, moving from a statically managed Ansible inventory to a dynamic, EC2 API-based one, is done.
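For context, the core of such a dynamic inventory is small. A minimal sketch of the idea (the stock ec2.py script is the full-featured version), assuming boto3 credentials are configured and instances carry a hypothetical "role" tag for grouping:

```python
# Minimal sketch of an Ansible dynamic inventory backed by the EC2 API.
# Ansible invokes the script with --list and expects JSON on stdout.
import json
import boto3

def build_inventory():
    ec2 = boto3.client("ec2")
    inventory = {"_meta": {"hostvars": {}}}
    result = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}])
    for reservation in result["Reservations"]:
        for inst in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            group = tags.get("role", "ungrouped")  # tag name is an assumption
            host = inst.get("PublicDnsName") or inst["PrivateIpAddress"]
            inventory.setdefault(group, {"hosts": []})["hosts"].append(host)
    return inventory

if __name__ == "__main__":
    print(json.dumps(build_inventory()))
```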
The next big question is how to deploy in this new world where throwaway instances come up & down in the middle of the night. The options I can think of are:
Bake a new fully-deployed AMI for each deploy, create a new AS launch config, and update the AS group with that. Sounds very, very cumbersome, but also very reliable because of the clean-slate approach, and it will ensure that any system changes the code requires will be in place. Also, no additional steps are needed on instance bootup, so it's up & running more quickly.
Use a base AMI that doesn't change very often, automatically get the latest app code from git upon bootup, and start the webserver. Once it's up, just do manual deploys as needed, like before. But what if the new code depends on a change in the system config (a new package, permissions, etc.)? It looks like you have to start taking care of dependencies between code versions and system/AMI versions, whereas the "just do a full Ansible run" approach was more integrated and more reliable. Is it more than just a potential headache in practice?
Use Docker? I have a strong hunch it can be useful, but I'm not sure yet how it would fit our picture. We're a relatively self-contained Django front-end app with just RabbitMQ + memcache as services, which we're never going to run on the same host anyway. So what benefits are there in building a Docker image with Ansible that contains system packages + the latest code, rather than having Ansible just do it directly on an EC2 instance?
How do you do it? Any insights / best practices?
Thanks!
This question is very opinion-based, but just to give you my take: I would go with prebaking the AMIs with Ansible and then using CloudFormation to deploy your stacks with autoscaling, monitoring, and your pre-baked AMIs. The advantage of this is that if most of the application stack is pre-baked into the AMI, scaling up will happen faster.
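Here's a hedged sketch of that bake-and-roll flow with boto3; all names and IDs below are placeholders:

```python
# Sketch: create an AMI from an Ansible-provisioned instance, then point
# the Auto Scaling group at a new launch configuration using that AMI.
import time
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# 1. Bake an AMI from the instance Ansible just provisioned.
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",      # placeholder
    Name="myapp-%d" % int(time.time()))
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# 2. Create a new launch configuration referencing the new AMI.
lc_name = "myapp-lc-%d" % int(time.time())
autoscaling.create_launch_configuration(
    LaunchConfigurationName=lc_name,
    ImageId=image["ImageId"],
    InstanceType="t2.micro")               # placeholder

# 3. Point the Auto Scaling group at it; instances launched from now on
#    (scale-ups, replacements) will use the freshly baked AMI.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="myapp-asg",      # placeholder
    LaunchConfigurationName=lc_name)
```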
Docker is another approach, but in my opinion it adds an extra layer to your application that you may not need if you are already using EC2. Docker can be really useful if, say, you want to containerize multiple applications on a single server. Maybe you have some extra capacity on a server, and Docker will allow you to run an extra application on it without interfering with the existing ones.
Having said that, some people find Docker useful not as a way to optimize resources on a single server, but because it allows you to pre-bake your applications in containers. So when you deploy a new version or new code, all you have to do is copy/replicate these Docker containers across your servers, then stop the old container versions and start the new ones.
My two cents.
A hybrid solution may give you the desired result. Store the head Docker image in S3, and prebake the AMI with a simple script that fetches and runs the image on startup (or pass the script to a stock AMI via user-data). Handle version control by moving the head image to your latest stable version. You could probably also implement test stacks of new versions by making the fetch script smart enough to identify which Docker image version to fetch based on instance tags, which are configurable at instance launch.
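A minimal sketch of such a tag-aware fetch script, assuming boto3 is available on the instance and the images are stored in S3 as docker save tarballs; the "docker_version" tag, bucket, and image names are all hypothetical:

```python
# Sketch: pick a Docker image version from the instance's own EC2 tags,
# fetch the saved image from S3, load it, and run it.
import subprocess
from urllib.request import urlopen

import boto3

# Discover our own instance ID via the metadata service.
instance_id = urlopen(
    "http://169.254.169.254/latest/meta-data/instance-id").read().decode()

# Look up this instance's tags to decide which version to run.
ec2 = boto3.client("ec2")
tags = ec2.describe_tags(
    Filters=[{"Name": "resource-id", "Values": [instance_id]}])
version = next(
    (t["Value"] for t in tags["Tags"] if t["Key"] == "docker_version"),
    "stable")

# Fetch the image tarball from S3 and load it into Docker.
# (Assumes the tarball was created with `docker save myapp:<version>`,
# so `docker load` restores the tag.)
boto3.client("s3").download_file(
    "my-image-bucket", "myapp-%s.tar" % version, "/tmp/myapp.tar")
subprocess.check_call(["docker", "load", "-i", "/tmp/myapp.tar"])
subprocess.check_call(["docker", "run", "-d", "myapp:%s" % version])
```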
You can also use AWS CodeDeploy with Auto Scaling and your build server. We use the CodeDeploy plugin for Jenkins.
This setup allows you to:
perform your build in Jenkins
upload the build artifact to an S3 bucket
deploy one by one to all the EC2 instances that are part of the assigned AWS Auto Scaling group.
All that with a push of a button!
Here is the AWS tutorial: Deploy an Application to an Auto Scaling Group Using AWS CodeDeploy
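If you'd rather script it than click the button, the same deployment can be triggered through the API. A minimal sketch with boto3, where the application, group, bucket, and key names are placeholders:

```python
# Sketch: kick off a CodeDeploy deployment of a build artifact in S3.
import boto3

codedeploy = boto3.client("codedeploy")
response = codedeploy.create_deployment(
    applicationName="myapp",                  # placeholder
    deploymentGroupName="myapp-asg-group",    # placeholder
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "my-build-artifacts",   # placeholder
            "key": "myapp-build-42.zip",      # placeholder
            "bundleType": "zip",
        },
    })
print("Started deployment:", response["deploymentId"])
```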
I am trying to create an Amazon EC2 instance. I want to create a micro, 64-bit, Ubuntu 12.04 LTS instance.
In Amazon Web Services I have seen that all instances have AMI numbers. I found two AMIs with the numbers ami-8a7f3ed8 and ami-b8a8e9ea. Both look the same to me: micro, EBS-backed, 64-bit Ubuntu 12.04 LTS images.
If they really are the same, what is the difference, and why two numbers for one machine image?
When selecting an AMI, select from a trusted source.
The AMI number is just a unique identifier for a particular image that someone published. The title (e.g. Ubuntu 12.04LTS) is just a claim by the person who published the AMI about what is on it.
If you get your AMI from a source that is not known to be trustworthy, it could potentially contain built-in security holes, pre-installed spam relays, etc.
From Amazon
You launch AMIs at your own risk. Amazon cannot vouch for the integrity or security of AMIs shared by other EC2 users. Therefore, you should treat shared AMIs as you would any foreign code that you might consider deploying in your own data center and perform the appropriate due diligence.
Ideally, you should get the AMI ID from a trusted source (a web site, another EC2 user, etc). If you do not know the source of an AMI, we recommend that you search the forums for comments on the AMI before launching it. Conversely, if you have questions or observations about a shared AMI, feel free to use the AWS forums to ask or comment.
Amazon's public images have an aliased owner and display amazon in the userId field. This allows you to find Amazon's public images easily.
http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/AESDG-chapter-usingsharedamis.html#usingsharedamis-security
Personally, I select AMIs published by well-known entities like Amazon or RightScale.
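If you want to do this programmatically, the amazon owner alias mentioned above works as a filter. A minimal sketch with boto3 (the extra filters are just for illustration):

```python
# Sketch: list only Amazon-owned, EBS-backed 64-bit images, using the
# "amazon" owner alias so you know the publisher is trustworthy.
import boto3

ec2 = boto3.client("ec2")
images = ec2.describe_images(
    Owners=["amazon"],  # only images whose owner alias is "amazon"
    Filters=[
        {"Name": "architecture", "Values": ["x86_64"]},
        {"Name": "root-device-type", "Values": ["ebs"]},
    ])
for image in sorted(images["Images"],
                    key=lambda i: i.get("CreationDate", "")):
    print(image["ImageId"], image.get("Name", ""))
```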
An infinite number of people can create an infinite number of disk image variations that are still "64-bit micro Ubuntu 12.04 LTS". Just like there are over 500 million PCs in the world running Windows 7 64-bit, yet their hard drives all contain different data. If you wanted to be able to differentiate Joe's disk image from Sue's disk image, you'd need to give them different identifiers. That's why the AMI numbers are different.
An AMI is just an image of a disk. It has nothing to do with the instance type (micro). You can create multiple AMIs from the same instance, and they will have different IDs.
They are just different images. You could make two images from the same machine a minute apart, with no changes on the machine at all, and they would have different AMI IDs. The AMI ID is just a unique identifier applied at the time the image is created; it implies nothing about the uniqueness of the image content.
Does creating an image of an Amazon EC2 Linux instance cause any downtime? Can I image a running server?
Already answered correctly, but I wanted to add a couple of caveats:
--no-reboot, no guarantee: When you create an image of an instance with an EBS-backed root device, you may opt for --no-reboot, but AWS warns about this: it does not guarantee the integrity of the file system. If it's a really busy instance with heavy read/write operations going on, you may get a corrupted image.
Instance store, no reboot: Creating an instance store-backed image has never required a reboot for me. It's three simple steps -- bundle the image, upload it to S3, and register the image -- with no rebooting anywhere in the process.
It is my opinion that the "No Reboot" option should prevent the image creation from rebooting the instance.
If you are an API user, the API likewise provides the --no-reboot argument to do this.
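For API users, here's a minimal sketch of the equivalent call with boto3 (the instance ID and image name are placeholders):

```python
# Sketch: create an image without rebooting the instance, equivalent to
# --no-reboot on the CLI. As noted above, skipping the reboot trades
# filesystem consistency for zero downtime.
import boto3

ec2 = boto3.client("ec2")
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    Name="backup-2014-01-01",          # placeholder image name
    NoReboot=True)                     # do not stop/reboot the instance
print("Created image:", image["ImageId"])
```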