Conditionally call different provision in Vagrantfile - vagrant

I have the following provisions setup in my Vagrant file.
config.vm.provision :shell, :path => "provision/bootstrap.sh"
config.vm.provision :shell, :path => "provision/step-1.sh"
config.vm.provision :shell, :path => "provision/step-2.sh"
config.vm.provision :shell, :path => "provision/dev-setup.sh"
provision/bootstrap.sh always needs to run, but I need to run the remaining provisioners conditionally. For example, in dev mode, run provision/dev-setup.sh.
Is there some built-in Vagrant config setting to achieve this (like passing command-line args to vagrant provision)?
I would prefer not to rely on environment variables for this, if possible.

I think environment variables are the most common way to handle this, and there isn't a way to pass anything through the vagrant up or vagrant provision command. Here are a couple of alternative ideas you could explore:
Key off something else that already differs between the dev and prod environments. The Vagrantfile is just a Ruby script, so anything that can be detected can be used to control the provisioning sequence: the presence or absence of a file, the local network, the hostname, etc. (see the sketch below).
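As a minimal sketch of that idea (the dev-mode marker file name is just an assumption for illustration):
# Run the dev provisioner only when a marker file named "dev-mode"
# exists next to the Vagrantfile.
Vagrant.configure("2") do |config|
  config.vm.provision :shell, :path => "provision/bootstrap.sh"
  config.vm.provision :shell, :path => "provision/step-1.sh"
  config.vm.provision :shell, :path => "provision/step-2.sh"

  if File.exist?(File.join(File.dirname(__FILE__), "dev-mode"))
    config.vm.provision :shell, :path => "provision/dev-setup.sh"
  end
end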
Define separate Vagrant machines that are actually the same box but differ in provisioning. For example, with a Vagrantfile like the following, you could run vagrant up prod or vagrant up dev, depending on your environment:
Vagrant.configure("2") do |config|
  config.vm.provision :shell, :path => "provision/bootstrap.sh"
  config.vm.provision :shell, :path => "provision/step-1.sh"
  config.vm.provision :shell, :path => "provision/step-2.sh"

  config.vm.define "prod" do |prod|
    ...
  end

  config.vm.define "dev" do |dev|
    ...
    dev.vm.provision :shell, :path => "provision/dev-setup.sh"
  end
end
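With a layout like this, you choose the variant on the command line:
vagrant up prod      # bootstrap + step-1 + step-2 only
vagrant up dev       # the same, plus dev-setup.sh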

If you drive the provisioning with Ansible, you can also create a new variable in your inventory to hold the environment, for example env: dev in the dev inventory and env: production in the prod inventory, and then use that variable to conditionally include your individual task files, like this:
# provision.yml (your main playbook; the shell steps rewritten as Ansible task files)
- include: provision/bootstrap.yml
- include: provision/step-1.yml
- include: provision/step-2.yml
- include: provision/dev-setup.yml
  when: env == 'dev'
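A rough sketch of the two inventory files this assumes (the host names and values are only illustrative):
# inventory/dev
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222

[all:vars]
env=dev

# inventory/prod
prod-server ansible_ssh_host=prod.example.com

[all:vars]
env=production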
Then in Vagrant you can specify the inventory to use, for example:
config.vm.provision "provision", type: "ansible" do |ansible|
ansible.playbook = 'provision.yml'
ansible.inventory_path = 'inventory/dev'
end
More info:
Conditional includes
Inventories

Related

How to render a YAML list in a Vagrantfile

I have a Vagrantfile which is referencing a YAML file to make multi-host configuration easier.
It's mostly working, but I'm using the Ansible provisioner and I need to reference a list/array for the ansible.groups item.
The YAML looks like this:
hosts:
  - host:
    provision:
      ansible:
        enable: yes
        playbook_path: webserver.yml
        groups:
          - web
          - vagrant
          - live
I'm trying to reference it in the Vagrantfile using:
if host['provision']['ansible']['enable'] == true
  vmhost.vm.provision "ansible" do |ansible|
    ansible.verbose = "vv"
    ansible.config_file = "./ansible.cfg"
    ansible.playbook = host['provision']['ansible']['playbook_path']
    ansible.tags = host['provision']['ansible']['tags']
    ansible.groups = host['provision']['ansible']['groups']
  end
end
But this gives me this error when building the actual VMs:
undefined method `each_pair' for ["web", "vagrant", "dev"]:Array
I searched and haven't found anything that addresses ansible_groups specifically, though I have seen various patterns for reading lists/arrays in the Vagrantfile. Any ideas?
The Ansible groups option is not supposed to be an array but a hash, mapping group names to machine names. It should look like:
hosts:
  - host:
    provision:
      ansible:
        enable: yes
        playbook_path: webserver.yml
        groups:
          web:
            - machine1
            - machine2
          vagrant:
            - machine3
            - machine4
          live:
            - machine5
See the documentation for more details on how this can be used.
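For context, a hedged sketch of the Vagrantfile side, assuming the YAML above lives in a hosts.yml next to the Vagrantfile (that file name is not shown in the question):
require 'yaml'

settings = YAML.load_file(File.join(File.dirname(__FILE__), 'hosts.yml'))
host = settings['hosts'].first

# With the hash form above, this is a Hash (group name => machine names),
# which is the shape ansible.groups expects; the original list form was an
# Array, hence the `each_pair' error.
groups = host['provision']['ansible']['groups']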

host_vars and group_vars are not getting loaded

I have the folder structure below, which seems to build and load the roles, but the group and host vars are not being loaded. How come?
/etc/ansible/
- hosts
- requirements.yml
- group_vars/
  - app/
    - postgres.yml
- host_vars/
  - app1/
    - postgres.yml
- roles
/documents/ansible/
- playbook.yml
- vagrant
hosts file:
# Application servers
[app]
192.168.60.6
# Group multi
[multi:children]
app
#variables applied to all servers
[multi:vars]
ansible_ssh_user=vagrant
ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key
Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  # General Vagrant VM configuration.
  config.vm.box = "geerlingguy/centos7"
  config.ssh.insert_key = false
  config.vm.synced_folder ".", "/vagrant", disabled: true

  config.vm.provider :virtualbox do |v|
    v.memory = 256
    v.linked_clone = true
  end

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end

  # Application server 1
  config.vm.define "app1" do |app|
    app.vm.hostname = "orc-app1.dev"
    app.vm.network :private_network, ip: "192.168.60.6"
  end
end
Vagrant by default uses its own, auto-generated inventory file in .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory.
There is an ansible.inventory_path option in Vagrant for pointing at an inventory file or directory other than the auto-generated one, but you don't use it to point to /etc/ansible/hosts in your Vagrantfile, so Vagrant is completely unaware of that inventory. Likewise, it does not look for host_vars and group_vars under /etc/ansible.
On the other hand, the path to roles is not controlled by the inventory file, so the fact that Vagrant uses its own inventory does not affect where roles are found.
They are loaded by default from /etc/ansible/roles (or whatever directory Ansible uses as its default, for example /usr/local/etc/ansible/roles on macOS).
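If you do want to keep that external inventory (and the host_vars/group_vars sitting next to it), the provisioner needs to be pointed at it explicitly. A sketch, not tested against this exact layout:
config.vm.provision "ansible" do |ansible|
  ansible.playbook = "playbook.yml"
  ansible.inventory_path = "/etc/ansible/hosts"
  # With a custom inventory, the default limit (the machine name) may not
  # match an entry in that file, so widen it explicitly.
  ansible.limit = "all"
end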
Here's what works for me; it's a combination of advice found on this and other sites.
Solution
In my Vagrantfile (note especially the ansible.groups line):
config.vm.provision "ansible" do |ansible|
ansible.playbook = "provisioning/site.yml"
# "default" is the name of the VM as in auto-generated
ansible.groups = { "vagrant" => ["default"] }
end
My inventory data for the VM goes in provisioning/host_vars/default and/or provisioning/group_vars/vagrant, either as files or as directories of files (I prefer the latter so I can have one file per role I'm using). Make sure these inventory files have a meaningful extension, e.g. .yml for YAML files, giving provisioning/host_vars/default/role_name.yml or provisioning/group_vars/vagrant/role_name.yml. provisioning can be a directory or (my preference) a symlink; the latter means I can share a common set of roles across my Vagrant-managed VMs. Gotcha: the filename extension matters; inventory data that I used to keep in files without extensions is now ignored, so I think this is a change in how Ansible reads its inventory.
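For illustration, one of those files might look like this (the role_name prefix and the variable are hypothetical):
# provisioning/group_vars/vagrant/role_name.yml
role_name_listen_port: 8080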
Background details
The ansible.groups setting in the Vagrantfile tweaks the auto-generated inventory in .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory (mentioned in techraf's answer above) to add:
[vagrant]
default
The line in the Vagrantfile shown above,
ansible.playbook = "provisioning/site.yml"
not only specifies the playbook, but also means the playbook directory (or symlink to a directory) is provisioning/.
Ansible looks in provisioning/host_vars etc. because "You can also add group_vars/ and host_vars/ directories to your playbook directory." (from the inventory documentation)
For clarity, here are the contents of each vagrant VM directory,
.vagrant/
ansible.cfg
provisioning -> shared_playbook_directory
Vagrantfile
and in the shared playbook directory,
group_vars/
host_vars/
roles/
site.yml
FYI this Ansible & Vagrant intro page details the host_vars, group_vars etc. inventory layout very well.

Using Ansible hosts raises `--limit` does not match any hosts

My Vagrantfile looks like:
Vagrant.configure("2") do |config|
config.vm.box = "vag-box"
config.vm.box_url = "boxes/base.box"
config.vm.network :private_network, ip: "192.168.100.100"
config.vm.provision :setup, type: :ansible_local do |ansible|
ansible.playbook = "playbook.yml"
ansible.provisioning_path = "/vagrant"
ansible.inventory_path = "/vagrant/hosts"
end
end
My playbook file looks like:
---
- name: Setup system
  hosts: localhost
  become: true
  become_user: root
  roles:
    - { role: role1 }
    - { role: role2 }
My hosts file looks like:
[localhost]
localhost # 192.168.100.100
During ansible execution I get the following error:
ERROR! Specified --limit does not match any hosts
First: "localhost" is a name that is assigned to the 127.0.0.1 address by convention. This refers to the loopback address of the local machine. I don't think this is what you are trying to use it for, based on the comment in your hosts file.
Second: The Ansible provisioner in Vagrant usually creates a custom inventory file, with the required contents to provision the Vagrant box. For example:
# Generated by Vagrant
myvagrantbox ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='/home/sage/ansible/vagrant/myvagrantbox/.vagrant/machines/myvagrantbox/virtualbox/private_key'
As you are overriding the inventory file, you will need to specify a similar line for the hostname of your vagrant box.
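For example, since this Vagrantfile does not name the machine, Vagrant will usually run Ansible with --limit=default, so the custom inventory needs an entry with that name; with ansible_local a local connection is typically enough. A sketch of /vagrant/hosts:
# /vagrant/hosts
default ansible_connection=local
The playbook's hosts: line would then target default (or all) rather than localhost.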
If you omit the ansible.inventory_path = "/vagrant/hosts" line from your config, it should JustWork(tm). You might also wish to specify config.vm.hostname = "myvagrantboxname" in your configuration, so you know what hostname will be used.
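In other words, the simplest variant of the provisioner block would be something like:
config.vm.provision :setup, type: :ansible_local do |ansible|
  ansible.playbook = "playbook.yml"
  ansible.provisioning_path = "/vagrant"
  # no inventory_path: Vagrant generates an inventory whose host name
  # matches the --limit it passes
end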
See the Using Vagrant and Ansible and the Vagrant -- Ansible Provisioner documentation for more details.

Ansible & Vagrant - give args to ansible provision

A colleague of mine wrote a script to automate Vagrant installations, including Ansible scripts. So if I run ansible provision, the playbook ansible/playbooks/provision.yml is run against the vagrant machine(s).
The downside of this script is that the Ansible playbook will only run on the machine via ansible provision.
Now, as I'm writing code and working, I am noticing the downsides. I can give ansible-playbook parameters/arguments, such as ansible-playbook -i inventory provision.yml -vvv --tags "test", but this is not possible here because of an architectural problem.
So, instead of solving the real problem (which I am trying to avoid), are there any gurus out there who can point me in the right direction to make it possible to pass arguments to ansible provision? E.g. ansible provision -vvv.
I looked at https://www.vagrantup.com/docs/cli/provision.html but without luck.
Thanks.
Not completely sure I have understood correctly, but maybe this config from one of my projects' Vagrantfiles could help:
config.vm.provision "ansible" do |ansible|
ansible.playbook = "ansible/playbook.yml"
ansible.limit = 'all'
ansible.tags = 'local'
ansible.sudo = true
ansible.verbose = 'v'
ansible.groups = {
"db" => ["db"],
"app" => ["app"],
"myproject" => ["myproject"],
"fourth" => ["fourth"],
"local:children" => ["db", "app", "myproject", "fourth"]
}
end
In this Vagrantfile, I configured 4 Vagrant VMs. The generated vagrant_ansible_inventory looks like this:
# Generated by Vagrant
db ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_private_key_file=/home/user/.vagrant.d/insecure_private_key
app ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200 ansible_ssh_private_key_file=/home/user/.vagrant.d/insecure_private_key
myproject ansible_ssh_host=127.0.0.1 ansible_ssh_port=2201 ansible_ssh_private_key_file=/home/user/.vagrant.d/insecure_private_key
fourth ansible_ssh_host=127.0.0.1 ansible_ssh_port=2202 ansible_ssh_private_key_file=/home/user/.vagrant.d/insecure_private_key
[db]
db
[app]
app
[myproject]
myproject
[fourth]
fourth
[local:children]
db
app
myproject
fourth
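With the tags and verbosity already set in the Vagrantfile, a plain provisioning run behaves roughly like ansible-playbook -v --tags local against all four machines:
vagrant up            # first boot and provision of all machines
vagrant provision     # re-run the playbook later with the same options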
https://www.vagrantup.com/docs/provisioning/ansible_local.html

Getting Ansible example (jboss-standalone) to work with Vagrant

I need some assistance getting the jboss-standalone example from https://github.com/ansible/ansible-examples.git to work with Vagrant. I think I am making a mistake somewhere in my Vagrant configuration.
My Vagrantfile is here:
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "chef/centos-6.6"
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.hostname = "webserver1"

  config.vm.provision :ansible do |ansible|
    ansible.playbook = "site.yml"
    ansible.verbose = "vvvv"
    ansible.inventory_path = "/Users/miledavenport/vagrant-ansible/jboss-standalone/hosts"
  end
end
My hosts file is here:
# Generated by Vagrant
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
[jboss-servers]
webserver1
[webserver1]
127.0.0.1 ansible_connection=local
[localhost]
127.0.0.1
I am fairly new to using ansible, and want to "play" with Ansible, by using Vagrant.
"vagrant up" produces the following error:
TASK: [jboss-standalone | Install Java 1.7 and some basic dependencies] *******
FATAL: no hosts matched or all hosts have already failed -- aborting
"vagrant ssh" works OK.
site.yml is:
---
# This playbook deploys a simple standalone JBoss server.
- hosts: jboss-servers
user: root
roles:
- jboss-standalone
I don't understand why I am getting the error:
FATAL: no hosts matched
The hosts contains webserver1, which is the same as the Vagrantfile hostname.
Can someone please help me to resolve this error.
Thanks :)
Miles.
Maybe your intent is to create a parent group called jboss-servers, with a subgroup called webserver1.
Try changing [jboss-servers] to [jboss-servers:children]
This will make the group jboss-servers also contain 127.0.0.1 as one of its hosts, and your playbook should run (see the docs on inventory groups).
At the moment, since webserver1 does not have a key-value pair like ansible_ssh_host=<ip> associated with it, it is just a hostname without an IP to connect to. Make it a subgroup of jboss-servers only if you don't have webserver1 mapped to some IP in your /etc/hosts file or similar :)
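Applied to the hosts file from the question, the change would look roughly like this (untested sketch):
# Generated by Vagrant
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222

[webserver1]
127.0.0.1 ansible_connection=local

[jboss-servers:children]
webserver1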
