Vagrant, VirtualBox, OpenStack - or is there a better way?

Current Setup
Using a Vagrant/VirtualBox image for development
The Vagrantfile and PHP code are both checked into a Git repo
When a new user joins the project they pull down the Git repo and type vagrant up
When we deploy to our "dev production" server, a CentOS 7 machine with VirtualBox and Vagrant installed, we simply run the same Vagrant image
Future Setup
We are moving towards an OpenStack "cloud" and are wondering how to best integrate this current setup into the workflow
As I understand it, OpenStack allows you to create individual VMs, which sounds great because we could launch our VMs there. The problem is that we take advantage of Vagrant/VirtualBox's synced folder ("mapping") functionality, so that the html directory in the folder we run vagrant from is mounted at /var/www/html in the guest. I assume this is not possible with OpenStack, and I was wondering whether there is an established best practice for handling this situation.
Approach
The only approach I can think of is to:
Install a VM on OpenStack that runs CentOS 7 and then run Vagrant/VirtualBox inside that VM (this seems bonkers).
But then we have a VM inside a VM inside a VM, and that just doesn't seem efficient.
Is there a tool, guide, or general guidance on how to work with both a local Vagrant image and the cloud? It seems the mapping may not be as straightforward as I initially thought.
Thanks

It sounds like you want to keep using Vagrant, presumably using https://github.com/ggiamarchi/vagrant-openstack-provider or similar? With that assumption, the approach that is probably the smallest iteration from your current setup is just to use an rsync synced folder - see https://www.vagrantup.com/docs/synced-folders/rsync.html. You should be able to do something like this:
config.vm.synced_folder "html/", "/var/www/html", type: 'rsync'
Check the rest of that rsync page though -- depending on your ssh user, you might need to use the --rsync-path option.
NB - you don't mention whether your Vagrant host is running Windows or Linux. If you're on Windows I tend to use Cygwin, though I expect you can otherwise find some rsync.exe to use.
If you can free yourself from the vagrant pre-requisite then there are many solutions, but the above should be a quick win from where you are now.
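For illustration, a minimal Vagrantfile sketch combining that provider with an rsync synced folder might look like the following; the auth URL, credentials, flavor and image are placeholders, not values from the question:
# Requires: vagrant plugin install vagrant-openstack-provider
Vagrant.configure("2") do |config|
  config.vm.provider :openstack do |os|
    os.openstack_auth_url = "https://keystone.example.com:5000/v2.0/tokens"  # placeholder
    os.username           = "openstackUser"                                  # placeholder
    os.password           = "openstackPassword"                              # placeholder
    os.tenant_name        = "myTenant"                                       # placeholder
    os.flavor             = "m1.small"                                       # placeholder
    os.image              = "CentOS 7"                                       # placeholder
  end
  # Push the local html/ directory to /var/www/html on the instance over rsync
  config.vm.synced_folder "html/", "/var/www/html", type: "rsync"
end
With that in place, vagrant up --provider=openstack does an initial rsync, and vagrant rsync (or vagrant rsync-auto) pushes later changes.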

Related

How can I retain shared folders when cloning VMs using virtualbox and vagrant

First, I am on a Mac. Second, I have a VirtualBox VM which was created using Vagrant and which uses a shared folder to easily pass files back and forth, etc.
I would now like to clone this VM from a particular state so that I can upgrade an application on it and move forward with it. The issue is that the only way I know of to use shared folders here is to start the box using vagrant up (which makes sense, as Vagrant mounts the folders as part of its boot process); however, vagrant up always starts the original VM.
Is there a way to create a clone of a VM using VirtualBox and then still use shared folders, so I can easily copy files to and from the host and guest via ssh?
I did some more research and found that I can mount a shared folder in a clone the same way as with the original VirtualBox VM, using:
mount -t vboxsf -o rw,uid=33,gid=33 <shared_folder_name> <guest_folder>
Note that the uid and gid specified here (33 is the www-data user and group on Debian) apply only to Debian-based systems; CentOS uses different IDs.
For more on the technique and for solutions for CentOS boxes, see here: http://jimmybonney.com/articles/configure_virtualbox_shared_folder_apache_virtual_host/
I've tried the steps in the above article to auto-mount the shared folder when the VM boots, but I've had no success. As a workaround (which I find acceptable for now), I created an alias in my .bashrc file, which seems to work fine.
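For illustration, such an alias might look like the line below; the share name and guest mount point are hypothetical placeholders, not values from the answer:
# Hypothetical .bashrc alias: mount the VirtualBox share "project_share" at /var/www/html
alias mountshare='sudo mount -t vboxsf -o rw,uid=33,gid=33 project_share /var/www/html'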
I would now like to clone this VM from a particular state so that I can upgrade an application on it and move forward with it
One thing you can look at is vagrant snapshot.
Snapshot works with the VirtualBox provider to capture the state of your VM at a particular point in time. You can then continue working on your VM and, when needed, easily restore a previous snapshot.
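With recent Vagrant versions the typical workflow looks like this; the snapshot name is just an example:
$ vagrant snapshot save before-upgrade      # capture the current state
$ vagrant snapshot list                     # show available snapshots
$ vagrant snapshot restore before-upgrade   # roll back to the saved state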

Vagrant provisioning with different dependencies based on branch

Different branches and versions of my codebase have different dependencies, e.g. master branch might be on Ruby 1.9 and use Rails 4, but some release branch might be on Ruby 1.8 and use Rails 3. I imagine this is a common problem, but I haven't really seen much about it.
Is there a clean way to detect/re-provision the Vagrant VM based on the current branch (or maybe Gemfile)?
Currently I have it set up to bundle install and do other stuff in the provisioner, which sort of works, but it still clutters the VM environment with the dependencies of every branch I've ever been on.
Why don't you add the Vagrantfile and the provisioning scripts to the repository? Since they are then part of the branch, they can look however you wish.
If you often switch between branches this might not suit you anymore, since you would need to re-provision the VM every time you change branches. In that case I would suggest setting up multiple VMs in the Vagrantfile and adding provisioning scripts for all of them in parallel. Then you could do something like
vagrant up ruby1.8
vagrant up ruby1.9
... and so on.
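A minimal multi-machine Vagrantfile along those lines might look like the sketch below; the box and the provisioning script paths are placeholders, and the machine names simply mirror the commands above:
Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise64"   # placeholder box

  config.vm.define "ruby1.8" do |machine|
    machine.vm.provision "shell", path: "provision/ruby18.sh"   # hypothetical script
  end

  config.vm.define "ruby1.9" do |machine|
    machine.vm.provision "shell", path: "provision/ruby19.sh"   # hypothetical script
  end
end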
In the spirit of hek2mgl's answer, but I would use a separate VM directory per branch rather than multiple VMs from the same Vagrantfile.
For example, I sometimes do the following:
keep the VM on an external hard drive but the project on my local drive
have a shell script, workhere.sh, which sets the corresponding VM to use:
#!/usr/bin/env bash
export VAGRANT_CWD="/Volumes/VMDrive/..."
If you commit the script to your Git repo, all you need to do after checking out a branch is run source workhere.sh, and it will point you to the correct VM, so you can have multiple VMs running in parallel.
The first time you switch you'll need to boot the VM and then ssh in, but once you've switched a few times the VMs will already be up, and because the script indicates which VM to point to, vagrant ssh will connect to the correct one.
If you leave the VMs up and running you need to be careful about IPs (if you use fixed IPs) and port forwarding, as you could have conflicts.
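The typical flow after that is roughly the following; the branch name is only an example:
$ git checkout release-1.8    # example branch name
$ source workhere.sh          # point VAGRANT_CWD at that branch's VM
$ vagrant up                  # boots the VM if it isn't already running
$ vagrant ssh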

Is Vagrant capable of managing all of a project's servers? (e.g. local, development, staging and production)

A few weeks ago I came across Vagrant and fell in love. I'm currently trying it out on a new project and it's working great locally. I'm using Chef Solo to build all the box's software and Berkshelf to manage the cookbooks.
What I'm wondering is how everyone is managing their development, staging and production servers for each project. While I'm working on this project locally, I would like to have a development server, and eventually staging and production servers once the project is complete.
I have successfully set up Vagrant with Amazon's EC2 using the Vagrant AWS plugin, but it would appear that Vagrant doesn't let you vagrant up a local box and a development box at the same time; you can only have one.
I wrote a small bash script that basically gives me this capability. I can run
$ vagrant up local
$ vagrant up development
$ vagrant up staging
$ vagrant up production
And it will build each box I specify by looking for a Vagrantfile dedicated to that box's configuration. I have a directory named machines where each Vagrantfile lives. The directory structure looks like this (a sketch of the wrapper script follows the listing):
- app/
- public/
- vagrant/
  - attributes/
  - machines/
    - local.rb
    - development.rb
    - staging.rb
    - production.rb
  - recipes/
  - templates/
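For illustration only, a wrapper along these lines could lean on Vagrant's VAGRANT_CWD and VAGRANT_VAGRANTFILE environment variables; the script name vg and its interface are hypothetical, not the script from the question:
#!/usr/bin/env bash
# Hypothetical wrapper, e.g. saved as "vg": "./vg up staging" runs "vagrant up"
# against vagrant/machines/staging.rb
cmd="${1:?usage: vg <vagrant-command> <machine>}"
machine="${2:?usage: vg <vagrant-command> <machine>}"
export VAGRANT_CWD="vagrant/machines"        # directory holding the per-machine Vagrantfiles
export VAGRANT_VAGRANTFILE="${machine}.rb"   # which file Vagrant should treat as the Vagrantfile
vagrant "$cmd"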
Is this a sensible solution?
How are you managing multiple servers for a single project? Can ALL the configuration live in one Vagrantfile, and should it?
Please never ever do this. Vagrant is not built for it and it will break. For example, if you ever need to manipulate production instances from a different workstation, good luck :-( If you are looking for tools to manage server provisioning, the pickings are definitely slim, but there are tools to try:
CloudFormation (and Heat for OpenStack)
Terraform
chef-metal
The first is a bit ungainly but very powerful if you are all AWS-based (or OpenStack). The latter two are very young but look promising.

How to apply changes to Puphpet config (via Vagrant)

I built my config with https://puphpet.com and have successfully deployed my virtual machine using vagrant up. Now I'm trying to customize it a little bit and see in the docs that there are two folders to run scripts in the VM: puphpet/files/exec-always and puphpet/files/exec-once.
I wrote a couple of test Bash scripts and dropped them there, but the only way I could "apply" them was to destroy the VM completely and reprovision it from scratch. This takes an enormous amount of time.
What's the proper way to test and debug these scripts?
You should use $ vagrant provision!
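For reference, re-running the provisioners against the already-running VM looks like this; use the reload variant when the change also needs a restart:
$ vagrant provision              # re-run the provisioners on the running VM
$ vagrant reload --provision     # restart the VM, then re-run the provisioners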

Using dotcloud and vagrant together

Has anyone thought about using dotCloud and Vagrant together? It would be super sweet to be able to type "vagrant up" in a dotCloud application and have Vagrant read the dotcloud.yml file and create a local environment that mirrors what you would get from a "dotcloud push".
I have a semi-working version, but it isn't automatic. You just create both the dotcloud.yml and Vagrantfile in the same folder and have slightly different setup scripts.
I think the nearest thing you could use would be http://docker.io
