How do I speed up my puppet module development-testing cycle? - vagrant

I'm looking for some best practices on how to increase my productivity when writing new puppet modules. My workflow looks like this right now:
vagrant up
Make changes/fixes
vagrant provision
Find mistakes/errors, GOTO 2
After I get through all the mistakes/errors I do:
vagrant destroy
vagrant up
Make sure everything is working
commit my changes
This is too slow... how can I make this workflow faster?
I am in denial about writing tests for puppet. What are my other options?

cache your apt/yum repository on your host with the vagrant-cachier plugin
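For example, a minimal Vagrantfile fragment enabling vagrant-cachier (the plugin must be installed first with vagrant plugin install vagrant-cachier; the :box scope is one option among several):

```ruby
# Sketch: share the apt/yum package cache between all machines that use
# the same base box, so re-provisioning doesn't re-download packages.
Vagrant.configure("2") do |config|
  if Vagrant.has_plugin?("vagrant-cachier")
    config.cache.scope = :box
  end
end
```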
use --profile and --evaltrace to find where you lose time on full provisioning
use package-based distribution:
e.g.: rvm install ruby-2.0.0 vs a pre-compiled ruby package created with fpm
avoid a "wget the internet and compile" approach
this will probably make your provisioning faster as well as more reproducible
don't write modules yourself
try reusing some from the Forge/GitHub/...
note that this can run counter to my previous advice
if this is an option, upgrade your puppet/ruby version
iterate and prevent full provisioning
vagrant up
vagrant provision
modify manifest/modules
vagrant provision
modify manifest/modules
vagrant provision
vagrant destroy
vagrant up
launch server-spec
minimize typed commands
launch commands as you modify your files
you can perhaps set up guard to launch lint/test/spec/provision as you save
you can also send notifications from the guest to the host machine with vagrant-notify
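A minimal Guardfile along these lines, assuming the guard and guard-shell gems (paths and the file-extension pattern are examples):

```ruby
# Sketch: re-run `vagrant provision` whenever a manifest or module file
# is saved, so the command never has to be typed by hand.
guard :shell do
  watch(%r{^(manifests|modules)/.*\.(pp|erb)$}) do |m|
    puts "#{m[0]} changed, re-provisioning..."
    system("vagrant provision")
  end
end
```

Run guard from the project root and it watches the listed directories in the background.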
test without actually provisioning in vagrant
rspec puppet (ideal when refactoring modules)
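For instance, a minimal rspec-puppet spec for a hypothetical ntp class (class, package, and service names are illustrative) — the catalog is compiled on the host, no VM involved:

```ruby
# spec/classes/ntp_spec.rb -- assumes the rspec-puppet gem and the usual
# spec_helper generated by `rspec-puppet-init`.
require 'spec_helper'

describe 'ntp' do
  it { is_expected.to compile }
  it { is_expected.to contain_package('ntp') }
  it { is_expected.to contain_service('ntpd').with_ensure('running') }
end
```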
test your provisioning instead of manual checking
stop vagrant ssh-ing in to check whether a service is running or a config file has a given value
launch server-spec
take a look at Beaker
delegate running the test to your preferred ci server (jenkins, travis-ci,...)
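A Serverspec sketch of what those manual checks become (service and file names are examples); the specs run against the box over ssh:

```ruby
# Sketch: assumes the serverspec gem and a spec_helper configured with
# `serverspec-init` for an ssh backend.
require 'spec_helper'

describe service('httpd') do
  it { should be_enabled }
  it { should be_running }
end

describe file('/etc/httpd/conf/httpd.conf') do
  its(:content) { should match /^ServerName/ }
end
```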
if you are a bit frustrated by puppet... take a look at ansible
easy to set up (no ruby to install/compile)
you can select the portions you want to run with tags
you can share the playbooks via synced folders and run ansible locally in the vagrant box (no librarian-puppet to launch)
Update: after a discussion with @garethr, take a look at his latest presentation about Guard.

I recommend using language-puppet. It comes with a command-line tool (puppetresources) that can compute catalogs on your computer and let you examine them. It has a few useful features that can't be found in Puppet:
It is really fast (6 times faster on a single catalog, something like 50 times on many catalogs)
It tracks where each resource was defined, and what was the "class stack" at that point, which is really handy when you have duplicate resources
It automatically checks that the files you refer to exist
It is stricter than Puppet (breaks on undefined variables for example)
It lets you print the content of any file to standard output, which is useful when developing complex templates
The only caveat is that it only works with "modern" Puppet practices. For example, require is not implemented. It also only works on Linux.

Related

How can I test a bash script for setting up a new machine

I have a couple of bash scripts to automate the setup process for new devices, e.g. installing packages, configuring environment variables, etc.
I'm working on making the process more automated with autoexpect, and adding a few other things; however, it's difficult to test, since every time I run the install script I have to manually go back and undo the changes it made. Is there a way to run the scripts without actually installing anything, so I can observe the behaviour for testing? Something like the --dry-run option in rsync.
To configure your machine, test it quickly, and know you won't cause problems on your local PC: create a VM using VirtualBox or VMware Player and snapshot it, so you can revert to the state from before you ran the script. Then run your script on the VM and check which configuration was applied successfully.

Vagrant Virtualbox Openstack - or is there a better way?

Current Setup
Using a Vagrant/Virtualbox image for development
Vagrant file and php code are both checked into a git repo
When a new user joins the project they pull down the git repo and type vagrant up
When we deploy to our "dev production" server we are on a CentOS 7 machine that has virtual box and vagrant and we just run the vagrant image
Future Setup
We are moving towards an OpenStack "cloud" and are wondering how to best integrate this current setup into the workflow
As I understand it, OpenStack allows you to create individual VMs - which sounds cool, because we could then launch our VMs - but the problem is that we are taking advantage of Vagrant/VirtualBox's "mapping" functionality, mounting /var/www/html to an /html directory in the folder we run vagrant from. I assume this is not possible with OpenStack, and was wondering whether there is a best practice for handling this situation.
Approach
The only approach I can think of is to:
Install a VM on OpenStack that runs CentOS 7 and then, inside that VM, run Vagrant/VirtualBox (this seems bonkers)
But then we have a VM inside a VM inside a VM, and that just doesn't seem efficient.
Is there a tool - or guide - or guidance on how to work with both a local vagrant image and the cloud? It seems there may not be as easy a mapping as I initially thought.
Thanks
It sounds like you want to keep using vagrant, presumably with https://github.com/ggiamarchi/vagrant-openstack-provider or similar? Under that assumption, the smallest iteration from your current setup is probably to use an rsync synced folder - see https://www.vagrantup.com/docs/synced-folders/rsync.html. You should be able to use something like this:
config.vm.synced_folder "html/", "/var/www/html", type: 'rsync'
Check the rest of that rsync page though -- depending on your ssh user, you might need to use the --rsync-path option.
NB - you don't mention whether your vagrant host is running Windows or Linux. On Windows I tend to use Cygwin, though I expect you can otherwise find some rsync.exe to use.
If you can free yourself from the vagrant pre-requisite then there are many solutions, but the above should be a quick win from where you are now.
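A slightly fuller version of the synced-folder line above, showing two options from Vagrant's rsync documentation (the exclude pattern and the sudo wrapper are examples for when the ssh user cannot write to /var/www/html directly):

```ruby
# Sketch of an rsync synced folder with common extra options.
config.vm.synced_folder "html/", "/var/www/html",
  type: "rsync",
  rsync__exclude: [".git/"],        # don't copy VCS metadata to the guest
  rsync__rsync_path: "sudo rsync"   # run rsync with sudo on the guest side
```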

Vagrant provisioning with different dependencies based on branch

Different branches and versions of my codebase have different dependencies, e.g. master branch might be on Ruby 1.9 and use Rails 4, but some release branch might be on Ruby 1.8 and use Rails 3. I imagine this is a common problem, but I haven't really seen much about it.
Is there a clean way to detect/re-provision the Vagrant VM based on the current branch (or maybe Gemfile)?
Currently I have it set up to run bundle install and other stuff in the provisioner, which sort of works, but it still clutters the VM environment with the dependencies of every branch I've ever been on.
Why don't you add the Vagrantfile and the provisioning scripts to the repository? Since they are then part of the branch, they can look however you wish.
If you often switch between branches this might not suit you anymore, since you would need to re-provision the VM every time you change branches. In that case I would suggest setting up multiple VMs in the Vagrantfile and adding provisioning scripts for all of the VMs in parallel. Then you could do something like
vagrant up ruby1.8
vagrant up ruby1.9
... and so on.
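A multi-machine Vagrantfile along those lines might look like this (box and script names are examples):

```ruby
# Sketch: one named machine per dependency set, each with its own
# provisioning script, selected on the command line with `vagrant up <name>`.
Vagrant.configure("2") do |config|
  config.vm.define "ruby1.8" do |m|
    m.vm.box = "centos/7"
    m.vm.provision "shell", path: "provision/ruby1.8.sh"
  end
  config.vm.define "ruby1.9" do |m|
    m.vm.box = "centos/7"
    m.vm.provision "shell", path: "provision/ruby1.9.sh"
  end
end
```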
In the spirit of hek2mgl's answer, but I would use a different VM repository rather than multiple VMs from the same Vagrantfile.
For example, I sometimes do the following:
have the VM on an external hard drive but the project on my local drive
have a shell script workhere.sh which sets the corresponding VM to use
#!/usr/bin/env bash
export VAGRANT_CWD="/Volumes/VMDrive/..."
If you commit the script to your git repo, all you need to do after checking out a branch is run source workhere.sh, and it will point you at the correct VM, so you can have multiple VMs running in parallel.
The first time you switch you'll need to boot the VM and then ssh in, but when you switch back later the VM will already be up, and since VAGRANT_CWD indicates which VM to point to, vagrant ssh will connect to the correct one.
If you leave VMs up and running, be cautious about IPs (if you use fixed IPs) and port forwarding, as you could get conflicts.

Changing a manifest file on a puppet master, how to get it to take hold

I'm fairly new to Puppet and am using Puppet 3.x. Right now I have a puppet master running locally, which I connect to using vagrant ssh.
At the moment if I change a manifest file on that puppet master I'm logging out of ssh and calling vagrant destroy, followed by vagrant up.
This takes a good 10 minutes. Is there a faster/better way of doing this? I'm looking at librarian-puppet but am not sure whether / how to use it in this circumstance.
Re-Run Provisioners
If you've already provisioned the machine at least once, you can make changes to your manifests and then run:
vagrant provision
on the host. This will re-run any provisioners without requiring a new Vagrant instance to be built and provisioned for the first time. In most cases, unless you have a pathological set of manifests, this will be significantly faster.

How to apply changes to Puphpet config (via Vagrant)

I built my config with https://puphpet.com and have successfully deployed my virtual machine using vagrant up. Now I'm trying to customize it a little bit and see in the docs that there are two folders to run scripts in the VM: puphpet/files/exec-always and puphpet/files/exec-once.
I wrote a couple of test bash scripts and dropped them there, but the only way I could "apply" them was to destroy the VM completely and reprovision it from scratch. This takes an enormous amount of time.
What's the proper way to test and debug these scripts?
You should use $ vagrant provision!
