Run Amazon EC2 AMI in Windows

Is there a way to run an Amazon EC2 AMI image in Windows? I'd like to be able to do some testing and configuration locally. I'm looking for something like Virtual PC.

If you build your images from scratch, you can do it with VMware (or insert your favorite VM software here).
Build and install your Linux box as you'd like it, then run the AMI packaging/uploading tools in the guest. Then just keep backup copies of your VM image in sync with the different AMIs you upload.
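For reference, the bundling/upload step inside the guest looks roughly like this with the classic AMI tools (key files, account ID, and bucket name are placeholders, and exact flags vary by tool version):

    # Bundle the running volume into an image (run inside the guest)
    ec2-bundle-vol -d /mnt -k pk-cert.pem -c cert.pem -u <account-id> -r x86_64 -p my-image
    # Upload the bundle to S3
    ec2-upload-bundle -b my-bucket -m /mnt/my-image.manifest.xml -a <access-key> -s <secret-key>
    # Register the uploaded bundle as an AMI
    ec2-register my-bucket/my-image.manifest.xml -n my-image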
Some caveats: you'll need to make sure you're using compatible kernels, or at least have compatible kernel modules in the VM, or your instance won't boot on the EC2 network. You'll also have to make sure your system can autoconfigure itself (network, mounts, etc).
If you want to use an existing AMI, it's a little trickier. You need to download and unpack the AMI into a VM image, add a kernel, and boot it. As far as I know, there's no 'one click' method to make it work. Also, the AMIs might be encrypted (I know they are at least signed).
You may be able to do this by having a 'bootstrap' VM set up specifically to extract the AMIs into a virtual disk using the AMI tools, then boot that virtual disk separately.
I know it's pretty vague, but those are the steps you'd have to go through. You could probably do some scripting to automate the process of converting AMIs to VMDKs.
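If you script it, the rough shape would be something like the following, assuming it's your own bundle so you hold the private key it was encrypted with (the qemu-img step is just one way to get a VMDK out of the raw image):

    # Download and unpack the bundle (only works if you hold the bundling key)
    ec2-download-bundle -b my-bucket -m my-image.manifest.xml -a <access-key> -s <secret-key> -d ./bundle
    ec2-unbundle -k pk-cert.pem -m ./bundle/my-image.manifest.xml -s ./bundle -d ./image
    # Convert the raw image into a VMDK that VMware can boot
    qemu-img convert -O vmdk ./image/my-image ./image/my-image.vmdk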
The Amazon forums are also helpful; there are a couple of articles there that walk through these processes in more detail.

Amazon EC2 with Windows Server - announced this morning, very exciting
http://aws.amazon.com/windows/

It's a bit of a square peg in a round hole ... kind of like running MS-Office on Linux.
Depending on how you value your time, it's cheaper to just get another PC and install Linux and Xen.

Related

In GCP, what is the difference between SSH'ing into a VM and using Cloud Shell?

I'm trying to learn ML on GCP. Some of the Qwiklabs and tutorials start with Cloud Shell to set up things like environment variables and install Python packages, while others start by opening an SSH terminal into a VM to do those preliminary steps.
I can't really tell the difference between the two approaches, other than the fact that in the second case a VM needs to be provisioned first. Presumably, when you use Cloud Shell some sort of VM instance is being provisioned for you behind the scenes anyway.
So how are the two approaches different?
Cloud Shell is a product designed to give you a large number of preconfigured tools that are kept updated, as well as being quick to start, accessible from the UI, and free. Basically, it's a quick way to get an interactive shell. You can learn more about this environment from its documentation.
There are also limits to Cloud Shell -- you can only use it for 50 hours a week, if you go idle your session is terminated, and there is only 5GB of storage. It is also only an f1-micro instance, IIRC. So while it is provisioned for you (and free!), it isn't really useful for anything other than an interactive shell.
On the other hand, SSHing into a VM places you directly in a terminal on that VM, much like you would on any specific host -- you only have whatever tools that the image installed onto that VM provides (and many VMs come pretty bare bones, it depends on the image). But, you're now in a terminal on the host that is likely executing the code you want to work with, and it has as much CPU and RAM as you provisioned in that instance.
As far as guides pointing you to one or the other -- that's really up to them, but I suspect they'd point client / tool type work to Cloud Shell (since it's easy and a reasonably standard environment, which can even be scripted with tutorials), while they'd probably point how to install necessary software for use in production to a "real" VM.
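If it helps to make the difference concrete, both environments are reachable from a local terminal via the gcloud CLI (the VM name and zone below are made up, and gcloud cloud-shell ssh needs a reasonably recent gcloud):

    # Open a session on the free, preconfigured Cloud Shell instance
    gcloud cloud-shell ssh
    # SSH into a Compute Engine VM you provisioned yourself
    gcloud compute ssh ml-box --zone us-central1-a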

Ways to build different types of Virtual Machine images

Inside a CI/CD environment, I have a tar.gz file that I need to package into a virtual machine image. I want to take an Ubuntu Server installation, install some packages, install my tar.gz file, and output the image into both EC2/AMI and VMware OVF formats (and possibly others in the future, e.g. Docker images).
I've been looking at Packer, Vagrant, and Ansible. But I'm not certain which of these tools will help me accomplish what I need.
Packer sounds like the right solution, but the documentation isn't very clear on how to start with a VMware OVF/OVA image and build an EC2/AMI image. Or am I able to start with a Docker image and output an EC2/AMI image? Based on the docs, it seems like I need to start with an AMI to build an AMI, or start with a ".vmx" (it doesn't actually say anything about OVF/OVA files) to build an OVF/OVA. But can I start with format A and end up with format B?
Is Vagrant or Ansible better for this?
Here is what I've gathered.
I found a nicely documented approach that starts with an Ubuntu ISO image, uses a preseed file for unattended Ubuntu installation, and deploys it out to vSphere as a template:
https://www.thehumblelab.com/automating-ubuntu-18-packer/
In order to deploy it out to vSphere, it requires a plugin from JetBrains. It uses vSphere/ESXi to build the template image. The Packer docs also discuss "Building on a Remote vSphere Hypervisor" using remote_* keywords in the json file, but I suppose this accomplishes the same thing.
Then, to build the EC2 AMI images, I believe you add another builder to the json. However, I don't believe you can start from the same ISO image as done in the VMware builder. Instead, I believe you need to start with a prebuilt AMI image (specified by the source_ami field in the json).
I guess Packer doesn't allow you to start from a single source A and fan out to target formats B, C, etc. If you want to build an AMI, you need to start with an AMI. If you want to build VMware images, you need to start from an ISO or a .vmx (an existing VMware VM; an OVF/OVA would first need converting to .vmx, e.g. with VMware's ovftool) and build out an OVF/OVA or template.
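To make that concrete, here is a minimal two-builder template sketch (the AMI ID, ISO URL, and checksum are placeholders, and a real vmware-iso build would also need the boot_command/http_directory wiring for your preseed file). Each builder needs its own source, but the provisioners are shared, which is as close as Packer gets to fanning out from a single definition:

    {
      "builders": [
        {
          "type": "amazon-ebs",
          "region": "us-east-1",
          "source_ami": "ami-xxxxxxxx",
          "instance_type": "t2.micro",
          "ssh_username": "ubuntu",
          "ami_name": "myapp-{{timestamp}}"
        },
        {
          "type": "vmware-iso",
          "iso_url": "http://releases.ubuntu.com/18.04/ubuntu-18.04-server-amd64.iso",
          "iso_checksum": "sha256:...",
          "ssh_username": "ubuntu",
          "shutdown_command": "sudo shutdown -P now"
        }
      ],
      "provisioners": [
        {
          "type": "file",
          "source": "myapp.tar.gz",
          "destination": "/tmp/myapp.tar.gz"
        },
        {
          "type": "shell",
          "inline": ["sudo mkdir -p /opt/myapp", "sudo tar -xzf /tmp/myapp.tar.gz -C /opt/myapp"]
        }
      ]
    }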

Clarity on Vagrant usage and provisioning tool

Ok, so I'm a bit late jumping onto the Vagrant band-wagon, but figured it's about time I did.
Brief background: I've been a freelance developer for quite some time now developing solutions based on Magento and Drupal, and have finally gathered enough demand to warrant the need to build up a team. Previously, whenever I started development on any new project, I used to clone a preconfigured base VM in VirtualBox and use that. Of course, there were still configurations to do on it before I could start with actual development. Every project's web files therefore all resided inside /var/www/projectname on an Ubuntu VM.
Now I've read up on why I should be using Vagrant, especially considering that I now have a team of 4 developers working with me, but I would appreciate any feedback on the following questions I have:
Moderator note: I know this isn't exactly asking a programming question, so please advise if this could be turned into a wiki, as I'm sure that feedback into this will help someone just like me.
I am still reading through the Vagrant docs, so please be kind...noob questions ahead!
I now work on a Mac. Does it matter if I use Parallels, and another developer uses VirtualBox on Windows if we need to share or collaborate on projects?
When I issue the command vagrant up for an existing project, will it start the VM up as I would in VirtualBox, or will it recreate the VM?
Is the command vagrant halt the same as issuing sudo poweroff in Ubuntu, for example?
I currently use PhpStorm and its SFTP feature for project files synchronization with the option to exclude certain files on the remote server (VM) from being imported and sync'ed...will I be able to specify the same using Vagrant folder sharing?
Could I easily zip or archive a Vagrant VM, move it to a file server, and then "re-import" when and if needed? (example bug fixes, or new feature enhancements)
What do we use to easily provision VMs for common projects? Should we be using Puppet, Chef, PuPHPet or Salt? I've seen that PuPHPet provides a nice GUI to create a Vagrantfile, which I'm sure once generated, we could customize for future projects. At a very basic level, we need to ensure that certain applications are installed onto the server (zip, phpMyAdmin, OpenSSL, etc.), certain PHP settings, PHP and PEAR modules, and Apache settings. I already have base VMs set up as I'd like them for both Magento projects as well as Drupal projects.
EDIT: I should also add that I used to enable the Host Adapter in VirtualBox (on Windows), configure the vhost inside Ubuntu, and then update my host machine's hosts file with something like 192.168.56.3 drupalsite1.dev. So I'm unsure if port forwarding would be better to use? I'm not very clued up on that, I must admit.
Like I said - noob questions! However, I would really appreciate any feedback on these questions. My deepest thanks!
Most of what you are asking is subjective, so common sense and experience are the best tools.
I recommend all team members use the same provider (Parallels isn't officially supported), and VirtualBox is readily available. The base boxes, by provider, could have slight variances; you never know.
Vagrant will start the VM similarly, but Vagrant also does other things like configuring the network, hostname, shared folders, etc. Not quite the same. The big power lies in the capability to tear down the environment and bring it back in a cleanly provisioned state.
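A few commands make that cycle concrete:

    vagrant up          # first run: create the VM from the base box and provision it
    vagrant halt        # graceful shutdown
    vagrant destroy -f  # throw the VM away entirely
    vagrant up          # rebuild from scratch, cleanly provisioned again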
Basically, yes.
Yes, your Vagrant VMs are just like your own mini cloud. You would interact with the servers much as you'd interact with external boxes.
Yes, the simple answer is that it's called packaging and you can share the resultant .box. However, it's good practice to keep the base box and provisioning scripts under CM (version control) so you can rebuild and modify as needed.
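Packaging itself is a one-liner (the box name here is illustrative):

    vagrant package --output magento-base.box
    # a teammate can then import it:
    vagrant box add --name magento-base magento-base.box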
For provisioners, I think it is dependent upon your experience and familiarity with the provisioner language and how much you want to invest in learning them. Look through the provisioner support and see what fits your needs and budget. Chef has a very steep learning curve, in my experience, but also has a lot of thought built in. Most provisioners have wide libraries of available installation "scripts".
The host adapter can be handled identically in Vagrant.
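For example, a minimal Vagrantfile sketch of the host-only setup from your EDIT (box name, IP, and paths are illustrative):

    Vagrant.configure("2") do |config|
      config.vm.box = "hashicorp/precise64"
      # host-only style networking, same idea as your VirtualBox Host Adapter:
      config.vm.network "private_network", ip: "192.168.56.3"
      # share the project into the VM instead of SFTP syncing:
      config.vm.synced_folder "./src", "/var/www/projectname"
    end

You'd still point drupalsite1.dev at that IP in your hosts file; port forwarding is the simpler alternative when you only need a couple of ports and don't want to manage IPs.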
Learn by doing. I recommend going down the table of contents (navbar) of the Vagrant docs and trying each step where it makes sense. Then make your decisions.
That is my 2 cents. Hope this helps!

How should I select a Windows Server AMI image for use with the AWS free tier as a WAMP stack?

I recently realized that Amazon's AWS free tier allows you to use both a micro Linux and a micro Windows server free for one year. I've only been running Linux instances so far, but I'm curious to give the Windows server a try since it's free.
Ubuntu has a sweet cloud portal which shows you what AMI images they have available for use with EC2, but I haven't found anything like that for Windows.
I realize that the launch instance wizard gives you a few options, but I don't see any pre-built WAMP stacks. Also, Bitnami has a WAMP stack, but I can't seem to find an AMI image for it.
Is launching a Windows instance similar to Linux? I'm assuming I can find a reputable WAMP AMI somewhere, put the AMI number into the launch console, and then RDP to the box. So, if that's the case, how can I find a reputable WAMP stack AMI to use?
First of all, yes: launching a Windows AMI is exactly like launching a Linux one, except that you RDP rather than SSH to the instance, and you have to wait a couple of minutes for the Administrator password to be generated before you can connect.
For your second question, I would recommend starting at the Bitnami site, but I saw that they are only providing LAMP instances as of today. I don't know what your concept of reputable is, but I found two public AMIs (from Bitnami as well, though a little older, it seems) that might help you. Just launch the Classic Wizard in your EC2 Management Console, go to Community AMIs, and search for WAMP; you will find them.
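If you prefer the command line, the same community search can be done with the (newer) AWS CLI; the name filter below is illustrative and case-sensitive, so try a few variants:

    aws ec2 describe-images --executable-users all --filters "Name=name,Values=*wamp*" --query "Images[].[ImageId,Name]" --output table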
Hope it helps.

Creating a virtual machine image as a continuous integration artifact?

I'm currently working on a server-side product that is a bit complex to deploy on a new server, which makes it an ideal candidate for testing out in a VM. We are already using Hudson as our CI system, and I would really like to be able to deploy a virtual machine image with the latest and greatest software as a build artifact.
So, how does one go about doing this exactly? What VM software is recommended for this purpose? How much scripting needs to be done to accomplish this? Are there any issues in particular when using Windows 2003 Server as the OS here?
Sorry to deny anyone an accepted answer here, but based on further research (thanks to your answers!), I've found a better solution and wanted to summarize what I've found.
First, both VirtualBox and VMware Server are great products, and since both are free, each is worth evaluating. We've decided to go with VMware Server, since it is a more established product and we can get support for it should we need it. This is especially important since we are also considering distributing our software to clients as a VM instead of a special server installation, assuming that the overhead from VMware Player is not too high. Also, there is a VMware scripting interface called VIX which one can use to install files directly to the VM without needing SSH or SFTP, which is a big advantage.
So our solution is basically as follows: first we create a "vanilla" VM image with the OS and nothing else, and check it into the repository. Then, we write a script which acts as our installer, putting the artifacts created by Hudson on the VM. This script should have interfaces to copy files directly, over SFTP, and through VIX. This will allow us to continue distributing software directly on the target machine, or through a VM of our choice. The resulting image is then compressed and distributed as an artifact of the CI server.
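As a sketch, the VIX piece can be driven from the command line with vmrun, which ships with VMware Server (host, credentials, VM and file paths below are placeholders):

    # Copy a build artifact straight into the guest -- no SSH/SFTP required,
    # but VMware Tools must be installed and guest credentials supplied
    vmrun -T server -h https://vmhost:8333/sdk -u admin -p hostpass \
        -gu guestuser -gp guestpass \
        copyFileFromHostToGuest "/vms/build/build.vmx" artifacts/myapp.zip "C:\myapp.zip"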
Regardless of the VM software (I can recommend VirtualBox, too) I think you are looking at the following scenario:
Build is done
CI launches virtual machine (or it is always running)
CI uses scp/sftp to upload the build into the VM over the network
CI uses ssh (if available on the target OS running in the VM) or another remote command execution facility to trigger installation in the VM environment (both sketched below)
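A minimal sketch of those last two steps (host name, user, and installer path are placeholders):

    # Upload the build into the VM
    scp build/myapp.tar.gz build@vm-under-test:/tmp/
    # Trigger the installation remotely
    ssh build@vm-under-test 'sudo /opt/installer/install.sh /tmp/myapp.tar.gz'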
VMware Server is free and a very stable product. It also gives you the ability to create snapshots of the VM slice and roll back to a previous version of your virtual machine when needed. It will run fine on Win 2003.
In terms of provisioning new VM slices for your builds, you can simply copy and paste the folder that contains the VMware files, change the SID and IP of the new VM, and you have a new machine. It takes 15 minutes, depending on the size of your VM slice. No scripting required.
If you use VirtualBox, you'll want to look into running it headless, since it'll be on your server. Normally, VirtualBox runs as a desktop app, but it's possible to start VMs from the commandline and access the virtual machine over RDP.
VBoxManage startvm "Windows 2003 Server" -type vrdp
(newer VirtualBox versions use --type headless instead)
We are using Jenkins + Vagrant + Chef for this scenario.
So you can do the following process:
Version control your VM environment using Vagrant provisioning scripts (Chef or Puppet) -- see the sketch after this list
Build your system using Jenkins/Hudson
Run your Vagrant script to fetch the last stable release from CI output
Save the VM state to reuse in future.
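A minimal sketch of step 1's Vagrantfile wiring for Chef Solo (box, cookbook, and recipe names are illustrative):

    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/trusty64"
      config.vm.provision :chef_solo do |chef|
        chef.cookbooks_path = "cookbooks"
        # a recipe of yours that pulls the latest stable build out of CI:
        chef.add_recipe "myapp::deploy"
      end
    end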
Reference:
vagrantup.com
I'd recommend VirtualBox. It is free and has a well-defined programming interface, although I haven't personally used it in automated build situations.
Choosing VMware is currently NOT a bad choice. However, just as VMware gives support for VMware Server, Sun gives support for VirtualBox.
You can also accomplish this task using VMware Studio, which is also free.
The basic workflow is this:
1. Create an XML file that describes your virtual machine
2. Use Studio to create the shell.
3. Use VMware Server to provision the virtual machine.
