I am a very satisfied user of EC2 for various regular needs.
But today I have something more special to do: I need to run tests/benchmarks of KVM on various machines. Amazon EC2's various instance types would suit my needs: I'd be able to try various performance levels very easily.
But EC2 instances are already virtualized on top of Xen.
So my question is: can I install KVM on top of an EC2 instance? Will it run OK? Are any special tweaks needed?
Thanks in advance
regards
didier
No. Amazon is already a virtualised environment, and Xen guests don't expose the hardware virtualisation extensions (Intel VT-x / AMD-V) that KVM needs, so KVM simply won't run there. And even if it did, the benchmarks would be of no use anyway. If you need to benchmark a virtualisation solution, you should do so on hardware that is as close as possible to the hardware you plan to virtualise.
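You can verify this yourself from inside an instance; KVM refuses to load without the CPU virtualisation flags. A quick sanity check, assuming an Ubuntu guest:

    # KVM needs hardware virtualisation extensions; Xen guests don't expose them.
    # A count of 0 here means KVM cannot run on this instance.
    egrep -c '(vmx|svm)' /proc/cpuinfo

    # On Ubuntu, the cpu-checker package gives a friendlier verdict
    sudo apt-get install -y cpu-checker
    sudo kvm-ok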
I'm trying to learn ML on GCP. Some of the Qwiklabs and tutorials start with Cloud Shell to set up things like environment variables and install Python packages, while others start by opening an SSH terminal into a VM to do those preliminary steps.
I can't really tell the difference between the two approaches, other than the fact that in the second case a VM needs to be provisioned first. Presumably, when you use Cloud Shell some sort of VM instance is being provisioned for you behind the scenes anyway.
So how are the two approaches different?
Cloud Shell is a product designed to give you a large number of preconfigured tools that are kept updated, as well as being quick to start, accessible from the UI, and free. Basically, it's a quick way to get an interactive shell. You can learn more about this environment from its documentation.
There are also limits to Cloud Shell -- you can only use it for 60 hours a week, if you go idle your session is terminated, and there is only 5GB of storage. It is also only an f1-micro instance, IIRC. So while it is provisioned for you (and free!), it isn't really useful for much beyond an interactive shell.
On the other hand, SSHing into a VM places you directly in a terminal on that VM, much like on any other host -- you only have whatever tools the image installed on that VM provides (and many VMs come pretty bare-bones; it depends on the image). But you're now in a terminal on the host that is likely executing the code you want to work with, and it has as much CPU and RAM as you provisioned for that instance.
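For reference, getting a terminal on a VM is just a couple of commands with the gcloud CLI (the instance name, zone and machine type below are only examples):

    # Create a small VM, then open an SSH session on it
    gcloud compute instances create ml-sandbox --zone=us-central1-a --machine-type=n1-standard-4
    gcloud compute ssh ml-sandbox --zone=us-central1-a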
As far as guides pointing you to one or the other -- that's really up to them, but I suspect they'd point client/tool type work to Cloud Shell (since it's easy and a reasonably standard environment, which can even be scripted with tutorials), while they'd probably point anything about installing software for production use to a "real" VM.
I was after some help/advice: we are looking at using Packer to build our Windows templates, so we could provision locally on our workstations, into our private cloud at work, and also into our public cloud offering, AWS.
Amazon AMIs have a lot of configuration around the EC2Config tools, persistent drivers, enabling RDP post-sysprep, etc.
Do you know what I need to include in my Packer templates for them to work in EC2?
Also, how will these hosts be patched?
regards
James
When using Packer, normally you only apply the basic settings and software for the base image. Adding the EC2Config tools, persistent drivers, enabling RDP post-sysprep, etc. is not a bad idea.
As #fmtn07 recommended, going through the packer.io documentation is the place to start. To speed up development, I recommend going with this open source repository; there are a lot of samples for your reference:
joefitzgerald/packer-windows
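For example, the typical workflow with that repo looks something like this (the template filename is just an example; check the repo for the current names):

    # Clone the sample templates and build one
    git clone https://github.com/joefitzgerald/packer-windows.git
    cd packer-windows
    packer build windows_2012_r2.json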
Ok, so I'm a bit late jumping onto the Vagrant bandwagon, but figured it's about time I did.
Brief background: I've been a freelance developer for quite some time now, developing solutions based on Magento and Drupal, and have finally gathered enough demand to warrant building up a team. Previously, whenever I started development on a new project, I used to clone a preconfigured base VM in VirtualBox and use that. Of course, there were still configurations to do on it before I could start actual development. Every project's web files therefore resided inside /var/www/projectname on an Ubuntu VM.
Now I've read up on why I should be using Vagrant, especially considering that I now have a team of 4 developers working with me, but I would appreciate any feedback on the following questions I have:
Moderator note: I know this isn't exactly asking a programming question, so please advise if this could be turned into a wiki, as I'm sure that feedback into this will help someone just like me.
I am still reading through the Vagrant docs, so please be kind...noob questions ahead!
1. I now work on a Mac. Does it matter if I use Parallels and another developer uses VirtualBox on Windows if we need to share or collaborate on projects?
2. When I issue the command vagrant up for an existing project, will it start the VM up as I would in VirtualBox, or will it recreate the VM?
3. Is the command vagrant halt the same as issuing sudo poweroff in Ubuntu, for example?
4. I currently use PhpStorm and its SFTP feature for project file synchronization, with the option to exclude certain files on the remote server (VM) from being imported and synced. Will I be able to specify the same using Vagrant folder sharing?
5. Could I easily zip or archive a Vagrant VM, move it to a file server, and then "re-import" it when and if needed? (for example, for bug fixes or new feature enhancements)
6. What do we use to easily provision VMs for common projects? Should we be using Puppet, Chef, PuPHPet or Salt? I've seen that PuPHPet provides a nice GUI to create a Vagrantfile, which I'm sure, once generated, we could customize for future projects. At a very basic level, we need to ensure that certain applications are installed on the server (zip, phpMyAdmin, OpenSSL, etc.), along with certain PHP settings, PHP and PEAR modules, and Apache settings. I already have base VMs set up as I'd like them for both Magento and Drupal projects.
EDIT: I should also add that I used to enable a host-only adapter in VirtualBox (on Windows), configure the vhost inside Ubuntu, and then update my host machine's hosts file with something like 192.168.56.3 drupalsite1.dev. So I'm unsure whether port forwarding would be better to use? I'm not very clued up on that, I must admit.
Like I said - noob questions! However, I would really appreciate any feedback on these questions. My deepest thanks!
Most of what you are asking is subjective, so common sense and experience are the best tools.
1. I recommend all team members use the same provider (Parallels isn't officially supported, and VirtualBox is readily available). The base boxes could have slight variances by provider; you never know.
2. Vagrant will start the VM similarly, but Vagrant also does other things, like configuring the network, hostname, shared folders, etc. Not quite the same. The big power lies in the ability to tear down the environment and bring it back in a cleanly provisioned state.
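In practice the cycle looks roughly like this:

    vagrant up        # create the VM on first run (or start it), then run provisioners
    vagrant halt      # graceful shutdown; disk state is preserved
    vagrant destroy   # throw the VM away entirely
    vagrant up        # rebuild from the base box and re-provision from scratch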
3. Basically, yes.
4. Yes. Your Vagrant VMs are just like your own mini cloud. You would interact with the servers much as you'd interact with external boxes.
5. Yes; the simple answer is that it's called packaging, and you can share the resulting .box file. However, it's good practice to keep the base box and provisioning scripts under CM so you can rebuild and modify as needed.
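A minimal sketch of that round trip (the box name is just an example):

    # Package the current VM into a reusable .box file
    vagrant package --output magento-base.box

    # A teammate (or future you) can add it and build on it
    vagrant box add magento-base magento-base.box
    vagrant init magento-base && vagrant up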
6. For provisioners, I think it depends on your experience, your familiarity with the provisioner's language, and how much you want to invest in learning it. Look through the provisioner support and see what fits your needs and budget. Chef has a very steep learning curve, in my experience, but also has a lot of thought built in. Most provisioners have wide libraries of available installation "scripts".
Re your edit: the host-only adapter setup can be handled identically in Vagrant with a private network, so you don't need to switch to port forwarding.
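A minimal sketch of a Vagrantfile for that, written out here as a shell heredoc (the box name, IP and paths are just examples matching your current layout):

    # Write a Vagrantfile with a host-only style private network
    cat > Vagrantfile <<'EOF'
    Vagrant.configure("2") do |config|
      config.vm.box = "hashicorp/precise64"
      config.vm.network "private_network", ip: "192.168.56.3"
      config.vm.synced_folder ".", "/var/www/projectname"
    end
    EOF
    vagrant up

    # Then map the name on the host, as before
    echo "192.168.56.3 drupalsite1.dev" | sudo tee -a /etc/hosts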
Learn by doing: I recommend going down the table of contents (navbar) of the Vagrant docs and trying each step where it makes sense. Then make your decisions.
That is my 2 cents. Hope this helps!
Is there a way to run an Amazon EC2 AMI image in Windows? I'd like to be able to do some testing and configuration locally. I'm looking for something like Virtual PC.
If you build your images from scratch, you can do it with VMware (or insert your favorite VM software here).
Build and install your Linux box as you'd like it, then run the AMI packaging/uploading tools in the guest. Then just keep backup copies of your VM image in sync with the different AMIs you upload.
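The bundling step inside the guest looks roughly like this (the key files, account ID, access keys and bucket name are all placeholders):

    # Inside the guest: bundle the running volume into AMI parts
    ec2-bundle-vol -k pk.pem -c cert.pem -u <account-id> -d /mnt -r i386

    # Upload the bundle to S3 and register it as an AMI
    ec2-upload-bundle -b my-bucket -m /mnt/image.manifest.xml -a <access-key> -s <secret-key>
    ec2-register my-bucket/image.manifest.xml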
Some caveats: you'll need to make sure you're using compatible kernels, or at least have compatible kernel modules in the VM, or your instance won't boot on the EC2 network. You'll also have to make sure your system can autoconfigure itself (network, mounts, etc).
If you want to use an existing AMI, it's a little trickier. You need to download and unpack the AMI into a VM image, add a kernel, and boot it. As far as I know, there's no 'one-click' method to make it work. Also, the AMIs might be encrypted (I know they are at least signed).
You may be able to do this by having a 'bootstrap' VM set up specifically to extract the AMIs into a virtual disk using the AMI tools, then boot that virtual disk separately.
I know it's pretty vague, but those are the steps you'd have to go through. You could probably do some scripting to automate the process of converting AMIs to VMDKs.
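The extraction direction uses the same AMI tools; a rough sketch (the bucket, manifest, key files and image filename are placeholders, and this only works for bundles you hold the keys for):

    # Fetch the bundle parts from S3, then reassemble the raw disk image
    ec2-download-bundle -b my-bucket -m image.manifest.xml -a <access-key> -s <secret-key> -k pk.pem -d /tmp/bundle
    ec2-unbundle -m /tmp/bundle/image.manifest.xml -s /tmp/bundle -d /tmp/image -k pk.pem

    # Convert the raw image into a VMDK for your VM software
    qemu-img convert -O vmdk /tmp/image/image.img ami.vmdk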
The Amazon forum is also helpful. For example, see this article.
Oh, this article also talks about some of these processes in detail.
Amazon EC2 with Windows Server - announced this morning, very exciting
http://aws.amazon.com/windows/
It's a bit of a square peg in a round hole ... kind of like running MS-Office on Linux.
Depending on how you value your time, it's cheaper to just get another PC and install Linux and Xen.
At my house I have about 10 computers, all with different processors and speeds (all x86-compatible). I would like to cluster them. I have looked at openMosix, but since development on it has stopped, I've decided against using it. I would prefer to use the latest, or next-to-latest, version of a mainstream Linux distribution (SUSE 11, SUSE 10.3, Fedora 9, etc).
Does anyone know any good sites (or books) that explain how to get a cluster up and running using free open source applications that are common on most mainstream distributions?
I would like a load-balancing cluster for custom software I will be writing. I can't use something like Folding@home because I need constant contact with every part of the application. For example, if I were running a simulation, one computer might control where rain is falling while another controls what the herbivores in the simulation are doing.
I recently set up an OpenMPI cluster using Ubuntu. An existing write-up is at https://wiki.ubuntu.com/MpichCluster.
Your question is too vague. What cluster application do you want to use?
By far the easiest way to set up a "cluster" is to install Folding@home on each of your machines. But I doubt that's really what you're asking for.
I have set up clusters for music/video transcoding using simple bash scripts and ssh shared keys before.
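A stripped-down sketch of that approach (the hostnames and the transcode command are just examples):

    #!/bin/bash
    # Naive work distribution over ssh: round-robin, one file per node.
    # Assumes passwordless ssh keys are already set up on every node.
    nodes=(node1 node2 node3)
    i=0
    for f in ./*.wav; do
      host=${nodes[$((i % ${#nodes[@]}))]}
      scp "$f" "$host:/tmp/" &&
        ssh "$host" "lame /tmp/$(basename "$f") /tmp/$(basename "$f" .wav).mp3" &
      i=$((i + 1))
    done
    wait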
I manage mail server clusters at work.
You only need a cluster if you know what you want to do. Come back with an actual requirement, and someone will suggest a solution.
Take a look at Rocks. It's a full-blown cluster "distribution" based on CentOS 5.1. It installs everything you need (libs, applications and tools) to run a cluster, and it is dead simple to install and use. You do all the tweaking and configuration on the master node, and it helps you kickstart all your other nodes. I've recently installed a 1200+ node (over 10,000 cores!) cluster with it, and I would not hesitate to install it on a 4-node cluster, since installing the master is hardly any work!
You could either run applications written for cluster libraries such as MPI or PVM, or you could use the queue system (Sun Grid Engine) to distribute any type of job. Or use distcc to compile the code of your choice on all nodes!
And it's open source, GPL, free -- everything that you like!
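For instance, once Rocks is up, running an MPI job across nodes is only a couple of commands (the program name and process count here are made up):

    # Compile an MPI program and run it on 8 processes across the cluster
    mpicc rain_sim.c -o rain_sim
    mpirun -np 8 ./rain_sim   # may need a hostfile/machinefile depending on the MPI setup

    # Or hand any batch job to Sun Grid Engine
    qsub -cwd my_job.sh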
I think he's looking for something similar to openMosix: some kind of general cluster on top of which any application can run distributed among the nodes. AFAIK there's nothing like that available. MPI-based clusters are the closest thing you can get, but I think you can only run MPI applications on them.
Linux Virtual Server
http://www.linuxvirtualserver.org/
I use PVM and it works. But even with just a nice ssh setup that allows login without entering a password, you can easily launch commands remotely on your different compute nodes.
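Setting up that passwordless ssh takes a minute (the user and hostname are examples):

    # Generate a key pair once, then push the public key to each node
    ssh-keygen -t rsa            # accept the defaults, empty passphrase
    ssh-copy-id user@node1       # repeat for each node

    # Now commands run remotely without a password prompt
    ssh user@node1 uptime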