I'm using Packer with a vmware-iso source and an Ansible provisioner.
As I tinker with Ansible to get things working, I end up re-running Packer from the beginning, which starts from the ISO from scratch (e.g. downloading CentOS updates, etc.).
I know I can disable the Ansible provisioner (manually comment out the provisioner section in the Packer HCL file), store the result of the vmware-iso build somewhere, manually start a virtual machine from it, and then run all the Ansible tests there. But I'm wondering if there is a better / automatic way to do this, similar to how Docker caches layers and reuses them to avoid redoing work unnecessarily.
Is there any way to do this? Perhaps I can have intermediate builders and tell Packer which stage of the build to execute?
e.g.
build1: from the ISO, produces an OVA.
build2: from the result of build1, runs the Ansible provisioner and produces another OVA.
This way I can tell Packer whether to build from scratch or to run only build2.
Any ideas? (A rough sketch of what I have in mind is below.)
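Rough sketch (source names, paths and options are illustrative, not a tested config; I'm using the VMX output here rather than an OVA for simplicity):

# stage 1: bare OS from the ISO, no provisioners
source "vmware-iso" "base" {
  iso_url      = "..."   # CentOS ISO
  iso_checksum = "..."
  ssh_username = "..."
  ssh_password = "..."
}

build {
  name    = "stage1"
  sources = ["source.vmware-iso.base"]
}

# stage 2: start from the artifact of stage 1 instead of the ISO
source "vmware-vmx" "provisioned" {
  source_path  = "output-stage1/packer-base.vmx"   # whatever stage 1 produced (depends on output_directory / vm_name)
  ssh_username = "..."
  ssh_password = "..."
}

build {
  name    = "stage2"
  sources = ["source.vmware-vmx.provisioned"]

  provisioner "ansible" {
    playbook_file = "./playbook.yml"
  }
}

Then, while iterating on Ansible, I'd rebuild only the second stage with Packer's -only flag and leave the stage 1 artifact untouched.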
Related
I have a couple of Bash scripts to automate the setup process for new devices, e.g. installing packages, configuring environment variables, etc.
I'm working on making the process more automated with autoexpect and adding a few other things; however, it's difficult to test, since every time I run the install script I have to manually go back and undo the changes it made. Is there a way to run the scripts without actually installing anything, so I can observe their behaviour for testing? Something like the --dry-run option in rsync.
For configuring your machine while being able to test quickly and knowing you won't cause problems to your local PC: create a VM using VirtualBox or VMware Player and snapshot it so you can revert to the state before you ran the script. Then run your script on this VM and check which configuration has been applied successfully.
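If you go the VirtualBox route, the snapshot/restore cycle is just two commands (the VM name "devbox" is illustrative):

VBoxManage snapshot "devbox" take "clean-state"      # before running the install script
VBoxManage snapshot "devbox" restore "clean-state"   # power the VM off first, then roll back after each test run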
So I'm looking at creating a generic wrapper around the ansible-playbook command.
What I'd like to do is spin up a number of VMs (Vagrant or Docker), based on the inventory supplied.
I'd use these VMs locally for automated testing using molecule, as well as manual function testing.
Crucially, the number of machines in the inventory could change, so these need to be created prior to the run.
Any thoughts?
Cheers,
Stuart
You could use a tool like Terraform to run your Docker images, and then export the inventory from Terraform to Ansible using something like terraform-inventory.
I think there's also an Ansible provisioner for Terraform.
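A rough sketch of that approach with Terraform's Docker provider (resource names and counts are illustrative, and the exact image attribute depends on the provider version):

variable "node_count" {
  default = 3   # drive this from the size of your inventory
}

resource "docker_image" "base" {
  name = "centos:7"
}

resource "docker_container" "node" {
  count = var.node_count
  name  = "node-${count.index}"
  image = docker_image.base.image_id   # ".latest" on older provider versions
}

Then, assuming terraform-inventory understands the resource type you use, point it at the state file, along the lines of:

TF_STATE=terraform.tfstate ansible-playbook --inventory-file=$(which terraform-inventory) site.yml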
I'm fairly new to Puppet and am using Puppet 3.x. Right now I have a puppet master running locally, which I connect to using vagrant ssh.
At the moment if I change a manifest file on that puppet master I'm logging out of ssh and calling vagrant destroy, followed by vagrant up.
This takes a good 10 minutes. Is there a faster/better way of doing this? I'm looking at librarian-puppet but am not sure whether / how to use it in this circumstance.
Re-Run Provisioners
If you've already provisioned the machine at least once, you can make changes to your manifests and then run:
vagrant provision
on the host. This will re-run any provisioners without requiring a new Vagrant instance to be built and provisioned for the first time. In most cases, unless you have a pathological set of manifests, this will be significantly faster.
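For example, if the box is provisioned with Vagrant's puppet (apply) provisioner rather than an agent pointed at the in-box master, a Vagrantfile along these lines (paths illustrative) lets you edit manifests on the host and simply re-run vagrant provision:

# Vagrantfile (excerpt) -- paths are illustrative
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"

  config.vm.provision "puppet" do |puppet|
    puppet.manifests_path = "puppet/manifests"   # lives on the host, synced into the guest
    puppet.manifest_file  = "default.pp"
    puppet.module_path    = "puppet/modules"
  end
end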
I built my config with https://puphpet.com and have successfully deployed my virtual machine using vagrant up. Now I'm trying to customize it a little bit and see in the docs that there are two folders to run scripts in the VM: puphpet/files/exec-always and puphpet/files/exec-once.
I wrote a couple of test Bash scripts and dropped them there, but the only way I could "apply" them was to destroy the VM completely and reprovision it from scratch. This takes an enormous amount of time.
What's the proper way to test and debug these scripts?
You should use $ vagrant provision!
I'm looking for some best practices on how to increase my productivity when writing new puppet modules. My workflow looks like this right now:
vagrant up
Make changes/fixes
vagrant provision
Find mistakes/errors, GOTO 2
After I get through all the mistakes/errors I do:
vagrant destroy
vagrant up
Make sure everything is working
commit my changes
This is too slow... how can I make this workflow faster?
I am in denial about writing tests for puppet. What are my other options?
cache your apt/yum repository on your host with the vagrant-cachier plugin (see the Vagrantfile snippet below)
use the --profile and --evaltrace options to find where you lose time during a full provisioning run
use the base distribution's packages:
e.g.: rvm install ruby-2.0.0 vs. a pre-compiled Ruby package created with fpm
avoid a "wget the internet and compile" approach
this will probably make your provisioning more reproducible and faster
don't write your own modules
try reusing existing ones from the Forge/GitHub/...
note that this can run against my previous advice
if this is an option, upgrade your Puppet/Ruby version
iterate and avoid full provisioning
vagrant up
vagrant provision
modify manifest/modules
vagrant provision
modify manifest/modules
vagrant provision
vagrant destroy
vagrant up
launch server-spec
minimize typed commands
launch commands automatically as you modify your files
you can perhaps set up Guard to launch lint/test/spec/provision as you save
you can also send notifications from the guest to the host machine with vagrant-notify
test without actually provisioning in vagrant
rspec-puppet (ideal when refactoring modules)
test your provisioning instead of checking manually
stop vagrant ssh-ing in just to check whether a service is running or a config file has a given value
launch server-spec (see the Serverspec example below)
take a look at Beaker
delegate running the tests to your preferred CI server (Jenkins, Travis CI, ...)
if you are a bit frustrated by Puppet... take a look at Ansible
easy to set up (no Ruby to install/compile)
you can select the portions you want to run with tags
you can share the playbooks via synced folders and run Ansible locally in the Vagrant box (no librarian-puppet to launch)
update: after a discussion with @garethr, take a look at his latest presentation about Guard.
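Regarding the vagrant-cachier point above (the Vagrantfile snippet it refers to), enabling it is a small Vagrantfile change once the plugin is installed; a minimal sketch:

# Vagrantfile (excerpt)
Vagrant.configure("2") do |config|
  if Vagrant.has_plugin?("vagrant-cachier")
    config.cache.scope = :box   # share the package cache between machines built from the same box
  end
end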
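And a minimal Serverspec example of "test your provisioning instead of manual checking" (the service and port are illustrative; generate the project scaffolding with serverspec-init and run the specs with rake spec):

# spec/default/httpd_spec.rb
require 'serverspec'
set :backend, :exec   # run the checks inside the guest; use :ssh to run them from the host

describe service('httpd') do
  it { should be_enabled }
  it { should be_running }
end

describe port(80) do
  it { should be_listening }
end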
I recommend using language-puppet. It comes with a command-line tool (puppetresources) that can compute catalogs on your computer and let you examine them. It has a few useful features that can't be found in Puppet:
It is really fast (6 times faster on a single catalog, something like 50 times on many catalogs)
It tracks where each resource was defined and what the "class stack" was at that point, which is really handy when you have duplicate resources
It automatically checks that the files you refer to exist
It is stricter than Puppet (breaks on undefined variables for example)
It lets you print the content of any file to standard output, which is useful for developing complex templates
The only caveat is that it only works with "modern" Puppet practices. For example, require is not implemented. It also only works on Linux.