Can anyone share documented steps to launch OpenStack nodes as VMs (installation)?

I would like to set up a simple controller/compute OpenStack deployment with the controller node in one VM and the compute node in another, preferably created using KVM. I may look at adding more compute VMs later. Can anyone share any documented steps for this?
I'm currently using OpenStack training labs (an out-of-the-box install); however, I would like documented steps for a customized setup.

You can use DevStack. Refer to the following link for more details; the installation steps are fairly straightforward.
https://docs.openstack.org/devstack/latest/guides/single-machine.html
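For the controller-VM plus compute-VM split specifically, DevStack's multi-node guide (https://docs.openstack.org/devstack/latest/guides/multinode-lab.html) drives everything from a local.conf on each VM. A minimal sketch, with placeholder IPs and passwords; the exact ENABLED_SERVICES list for the compute node varies by release, so check the guide:

    # local.conf on the controller VM (sketch; values are placeholders)
    [[local|localrc]]
    HOST_IP=192.168.56.10
    ADMIN_PASSWORD=secret
    DATABASE_PASSWORD=$ADMIN_PASSWORD
    RABBIT_PASSWORD=$ADMIN_PASSWORD
    SERVICE_PASSWORD=$ADMIN_PASSWORD

    # local.conf on the compute VM: point at the controller, run only compute services
    [[local|localrc]]
    HOST_IP=192.168.56.11
    SERVICE_HOST=192.168.56.10
    MYSQL_HOST=$SERVICE_HOST
    RABBIT_HOST=$SERVICE_HOST
    ADMIN_PASSWORD=secret
    DATABASE_PASSWORD=$ADMIN_PASSWORD
    RABBIT_PASSWORD=$ADMIN_PASSWORD
    SERVICE_PASSWORD=$ADMIN_PASSWORD
    ENABLED_SERVICES=n-cpu,q-agt,placement-client

    # then, on each VM:
    git clone https://opendev.org/openstack/devstack
    cd devstack && ./stack.sh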

Related

Vagrant: VM after provisioning

I tried to get our team to adopt Vagrant. I created a Vagrantfile and set up provisioning, and everything works like a charm, but...
It's unclear to me how to automate routine tasks like:
running the Django dev server on 0.0.0.0 (I use Django, but this is a framework-agnostic problem)
running a Grunt watcher
providing a separate console for Django-specific commands
It looks like Vagrant isn't intended to help with this kind of automation, so I'm looking for some community-adopted way to do it. I googled and found nothing.
I see a few ways to do it:
a bootstrap.sh script, but that's messy and hard to maintain
something like tmuxinator, but that requires tmux on the host machine, and right now it's impossible to put the tmux config in the project repo
etc.
What is the 'canonical' way to solve this problem?
P.S.: Please keep in mind designers, manual testers, and others who like to use tools as-is.
In general you are best off using a provisioner. To be honest, a bootstrap.sh file is a good place to start unless you want to learn the ins and outs of something like Chef / Ansible / Salt / Puppet. If you do, you might want to start with Salt (SaltStack), because it is written in Python, which I'm guessing you use given the Django angle.
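For reference, the bootstrap.sh approach wires in with one line in the Vagrantfile; a minimal sketch (the box name is just an example):

    # Vagrantfile (sketch): run bootstrap.sh on first `vagrant up`
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/trusty64"              # any base box you use
      config.vm.provision "shell", path: "bootstrap.sh"
    end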
For your specific questions:
Part of the point of Vagrant is that it lets you develop against real stacks and real web servers, so you can avoid the "oh, that doesn't quite work the same on Apache" moment that often comes up in projects. So for your first question, I would look at how to provision the app behind Apache / nginx or whatever you use for your production web servers.
Because of the shared file system, users can just run Grunt locally on the host machine. This also lets Grunt do things like hook into OS X notifications.
I'm not familiar with tmuxinator, so I'm not sure where to start here. But if it is a service that the server really runs, then you should figure out a way to package the install and deploy it to the provisioned VM. As for configuration, is it possible to get a dev config into the repo?
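On the first point, if you still want the raw Django dev server reachable from the host rather than a full Apache/nginx setup, the usual pattern is a forwarded port plus a 0.0.0.0 bind; a sketch (port numbers are arbitrary):

    # Vagrantfile (sketch): expose the guest's dev server on the host
    config.vm.network "forwarded_port", guest: 8000, host: 8000

    # then, inside the VM (by hand or from a run script, not provisioning):
    python manage.py runserver 0.0.0.0:8000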
Like @Wyatt, I recommend using Vagrant with provisioning tools such as Puppet, SaltStack, Chef, Ansible, etc. These tools were created for exactly the requirements you describe, and most are open source. There is no wrong choice; you can start learning with any one of them, as they are all similar.
With that, you can quickly and easily run several VM servers with all applications installed automatically. With customised Puppet code or Chef cookbooks, you can update them at any time and re-provision the VMs easily; you can re-use them for your production environment as well.
Take some time to learn one of these automation tools first; it will save you a lot of time.
I use Puppet, and I recommend the book Pro Puppet; it has everything you need.
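To give a taste of what the Puppet route looks like, a minimal manifest might be something like this (nginx is just an example package, not a full Django stack):

    # sketch: install nginx and keep it running
    package { 'nginx':
      ensure => installed,
    }
    service { 'nginx':
      ensure  => running,
      enable  => true,
      require => Package['nginx'],
    }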

Clarity on Vagrant usage and provisioning tool

OK, so I'm a bit late jumping on the Vagrant bandwagon, but I figured it's about time I did.
Brief background: I've been a freelance developer for quite some time now, developing solutions based on Magento and Drupal, and have finally gathered enough demand to warrant building up a team. Previously, whenever I started development on a new project, I used to clone a preconfigured base VM in VirtualBox and use that. Of course there were still configurations to do on it before I could start actual development. Every project's web files therefore resided inside /var/www/projectname on an Ubuntu VM.
Now I've read up on why I should be using Vagrant, especially considering that I now have a team of 4 developers working with me, but I would appreciate any feedback on the following questions I have:
Moderator note: I know this isn't exactly asking a programming question, so please advise if this could be turned into a wiki, as I'm sure that feedback into this will help someone just like me.
I am still reading through the Vagrant docs, so please be kind...noob questions ahead!
I now work on a Mac. Does it matter if I use Parallels and another developer uses VirtualBox on Windows, if we need to share or collaborate on projects?
When I issue the command vagrant up for an existing project, will it start the VM up as I would in VirtualBox, or will it recreate the VM?
Is the command vagrant halt the same as issuing sudo poweroff in Ubuntu, for example?
I currently use PhpStorm and its SFTP feature for project file synchronization, with the option to exclude certain files on the remote server (VM) from being imported and synced... will I be able to specify the same using Vagrant folder sharing?
Could I easily zip or archive a Vagrant VM, move it to a file server, and then "re-import" it when and if needed? (for example, bug fixes or new feature enhancements)
What do we use to easily provision VMs for common projects? Should we be using Puppet, Chef, PuPHPet, or Salt? I've seen that PuPHPet provides a nice GUI to create a Vagrantfile, which I'm sure, once generated, we could customize for future projects. At a very basic level, we need to ensure that certain applications are installed on the server (zip, phpMyAdmin, OpenSSL, etc.), along with certain PHP settings, PHP and PEAR modules, and Apache settings. I already have base VMs set up as I'd like them for both Magento and Drupal projects.
EDIT: I should also add that I used to enable a host-only adapter in VirtualBox (on Windows), configure the vhost inside Ubuntu, and then update my host machine's hosts file with something like 192.168.56.3 drupalsite1.dev. So I'm unsure whether port forwarding would be better to use? I'm not very clued up on that, I must admit.
Like I said - noob questions! However, I would really appreciate any feedback on these questions. My deepest thanks!
Most of what you are asking is subjective, so common sense and experience are the best tools.
I recommend all team members use the same provider (Parallels isn't officially supported), and VirtualBox is readily available. Base boxes could have slight variances by provider; you never know.
Vagrant will start the VM similarly, but Vagrant also does other things, like configuring the network, hostname, shared folders, etc. It's not quite the same. The big power lies in being able to tear down the environment and bring it back in a cleanly provisioned state.
Basically, yes.
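To put those last two answers in command form (a rough mapping, not exhaustive):

    vagrant up       # create the VM from the base box if needed, boot it, provision on first run
    vagrant halt     # graceful shutdown, roughly `sudo poweroff` in the guest
    vagrant destroy  # delete the VM entirely
    vagrant up       # ...and bring it back, cleanly provisioned, from the base box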
Yes, your Vagrant VMs are just like your own mini cloud. You would interact with the servers much as you'd interact with external boxes.
Yes; the simple answer is that it's called packaging, and you can share the resultant .box. However, it's good practice to keep the base box and provisioning scripts under configuration management so you can rebuild and modify as needed.
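The packaging workflow itself is only a couple of commands; a sketch with placeholder names:

    # on the machine with the configured VM (VirtualBox provider)
    vagrant package --output magento-base.box

    # on a teammate's machine, after copying the .box from the file server
    vagrant box add magento-base /path/to/magento-base.box
    vagrant init magento-base && vagrant up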
For provisioners, I think it depends on your experience, your familiarity with the provisioner's language, and how much you want to invest in learning it. Look through the provisioner support and see what fits your needs and budget. Chef has a very steep learning curve, in my experience, but also has a lot of thought built in. Most provisioners have wide libraries of available installation "scripts".
The host adapter can be handled identically in Vagrant.
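For example, the host-only adapter plus hosts-file setup from the question maps to a private network in the Vagrantfile (reusing the asker's own IP and hostname):

    # Vagrantfile (sketch): host-only style networking
    config.vm.network "private_network", ip: "192.168.56.3"

    # host machine's hosts file, as before (or automate it with a plugin such as vagrant-hostmanager):
    # 192.168.56.3  drupalsite1.dev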
Learn by doing: I recommend going down the table of contents (navbar) of the Vagrant docs and trying each step where it makes sense. Then make your decisions.
That is my 2 cents. Hope this helps!

Using Chef Solo to provision a Windows EC2 instance and bootstrap it

I'm trying to automate our CI process for a couple of .NET apps, and in a perfect world I'd like to spin up a Windows EC2 instance for each, bootstrap the instance to install Chef Solo, and then execute a Chef recipe to install some dependencies and the packaged software itself.
However, I'm a novice and have no idea even whether that is feasible, let alone where to start :)
I'm fairly well versed with the command-line tools for AWS, so I can spin up an AMI OK, but beyond that point I'm pretty stuck. I would like to avoid building a custom AMI with Chef pre-installed, as that takes away a lot of the advantages.
I think this is essentially what I need to do, but it is (unsurprisingly) focused on Linux:
http://www.opinionatedprogrammer.com/2011/06/chef-solo-tutorial-managing-a-single-server-with-chef/
Does anyone have a link to someone who has done this or similar before? Or a better way of achieving what I'd like to do?
Any help appreciated.
Okay, this requires that you have Chef preinstalled on your AMI:
http://scottwb.com/blog/2012/12/13/provision-and-bootstrap-windows-ec2-instances-with-chef/
But this is a strategy for installing Puppet on a stock Windows AMI, which could easily be modified for Chef:
http://dansrandombits.blogspot.com/2012/06/bootstrapping-custom-windows-ec2.html
I can't say I've done this yet, but I've had both in my bookmarks bar since they were posted, and I've been planning to give it a shot in at least our dev environment at some point. It seems that as long as there's a solid silent install for Chef, you could pull this off.
I realize this post is a bit old, but for those who may still come across it: I'm provisioning servers using Chef Solo. Essentially, I configure the instance's user data to download and install Chef, download the cookbooks/recipes, and then launch Chef Solo.
Here's a blog post I wrote to demonstrate the steps: http://thesysadminswatercooler.blogspot.com/2015/11/aws-bootstrap-windows-ec2-instance-with.html
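For illustration, the user-data approach looks roughly like the sketch below. This is PowerShell inside EC2's <powershell> user-data tags; the download URL, S3 bucket, and file paths are placeholders of mine (not the ones from the post), and it assumes PowerShell 5+ and the AWS PowerShell tools that ship on stock Amazon Windows AMIs:

    <powershell>
    # sketch: install Chef, fetch cookbooks, run chef-solo
    $msi = "C:\chef-client.msi"
    Invoke-WebRequest -Uri "https://example.com/path/to/chef-client.msi" -OutFile $msi
    Start-Process msiexec.exe -ArgumentList "/qn /i $msi" -Wait

    # pull the cookbooks and config from S3 (bucket/keys are placeholders)
    Read-S3Object -BucketName my-bucket -Key chef.zip -File C:\chef.zip
    Expand-Archive C:\chef.zip -DestinationPath C:\chef

    # run Chef Solo against the downloaded config and run list
    & "C:\opscode\chef\bin\chef-solo.bat" -c C:\chef\solo.rb -j C:\chef\node.json
    </powershell>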

Run Amazon EC2 AMI in Windows

Is there a way to run an Amazon EC2 AMI image in Windows? I'd like to be able to do some testing and configuration locally. I'm looking for something like Virtual PC.
If you build your images from scratch, you can do it with VMware (or insert your favorite VM software here).
Build and install your Linux box as you'd like it, then run the AMI packaging/uploading tools in the guest. Then just keep backup copies of your VM image in sync with the different AMIs you upload.
Some caveats: you'll need to make sure you're using compatible kernels, or at least have compatible kernel modules in the VM, or your instance won't boot on the EC2 network. You'll also have to make sure your system can autoconfigure itself (network, mounts, etc.).
If you want to use an existing AMI, it's a little trickier. You need to download and unpack the AMI into a VM image, add a kernel, and boot it. As far as I know, there's no 'one-click' method to make it work. Also, the AMIs might be encrypted (I know they are at least signed).
You may be able to do this by having a 'bootstrap' VM set up specifically to extract the AMIs into a virtual disk using the AMI tools, then boot that virtual disk separately.
I know it's pretty vague, but those are the steps you'd have to go through. You could probably do some scripting to automate the process of converting AMIs to VMDKs.
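If you do script it, the disk-format conversion step itself is a one-liner with qemu-img (filenames are placeholders; making the result bootable, per the kernel caveats above, is the hard part):

    # sketch: convert an unpacked raw AMI disk image to VMDK for VMware
    qemu-img convert -O vmdk ami-disk.img ami-disk.vmdk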
The Amazon forum is also helpful. For example, see this article.
Oh, this article also talks about some of these processes in detail.
Amazon EC2 with Windows Server - announced this morning, very exciting
http://aws.amazon.com/windows/
It's a bit of a square peg in a round hole ... kind of like running MS-Office on Linux.
Depending on how you value your time, it's cheaper to just get another PC and install Linux and Xen.

How to Setup a Low cost cluster

At my house I have about 10 computers, all with different processors and speeds (all x86 compatible). I would like to cluster these. I have looked at openMosix, but since development on it has stopped, I have decided against using it. I would prefer to use the latest or next-to-latest version of a mainstream Linux distribution (SUSE 11, SUSE 10.3, Fedora 9, etc.).
Does anyone know any good sites (or books) that explain how to get a cluster up and running using free open source applications that are common on most mainstream distributions?
I would like a load-balancing cluster for custom software I would be writing. I can't use something like Folding@home because I need constant contact with every part of the application. For example, if I were running a simulation, one computer might control where rain is falling while another controls what the herbivores are doing.
I recently set up an OpenMPI cluster using Ubuntu. An existing write-up is at https://wiki.ubuntu.com/MpichCluster .
Your question is too vague. What cluster application do you want to use?
By far the easiest way to set up a "cluster" is to install Folding@home on each of your machines. But I doubt that's really what you're asking for.
I have set up clusters for music/video transcoding before, using simple bash scripts and shared SSH keys.
I manage mail server clusters at work.
You only need a cluster if you know what you want to do with it. Come back with an actual requirement, and someone will suggest a solution.
Take a look at Rocks. It's a full-blown cluster "distribution" based on CentOS 5.1. It installs everything you need (libs, applications, and tools) to run a cluster and is dead simple to install and use. You do all the tweaking and configuration on the master node, and it helps you kickstart all your other nodes. I recently installed a 1200+ node (over 10,000 cores!) cluster with it, and I would not hesitate to install it on a 4-node cluster, since the work to install the master is negligible!
You can either run applications written for cluster libraries such as MPI or PVM, use the queueing system (Sun Grid Engine) to distribute any type of job, or use distcc to compile the code of your choice on all nodes!
And it's open source, GPL, free: everything you like!
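For example, once the nodes are up, launching an MPI job across them is straightforward (hostnames, slot counts, and the program name are placeholders):

    # sketch: run an MPI program on 8 processes spread across three nodes
    cat > hosts <<EOF
    node1 slots=2
    node2 slots=2
    node3 slots=4
    EOF
    mpirun -np 8 --hostfile hosts ./my_simulation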
I think he's looking for something similar to openMosix: some kind of general cluster on top of which any application can run distributed among the nodes. AFAIK there's nothing like that available. MPI-based clusters are the closest thing you can get, but I think you can only run MPI applications on them.
Linux Virtual Server
http://www.linuxvirtualserver.org/
I use PVM and it works. But even without it, with a nice SSH setup that allows logging in to each machine without entering a password, you can easily launch commands remotely on your different compute nodes.
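As a trivial sketch of that SSH approach (hostnames and the worker command are placeholders):

    # fan a command out to every node over passwordless SSH
    for node in node1 node2 node3; do
      ssh "$node" "nohup ./worker > worker.log 2>&1 &"
    done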
