How to render a YAML list in a Vagrantfile - ruby

I have a Vagrantfile which is referencing a YAML file to make multi-host configuration easier.
It's mostly working, but I'm using the Ansible provisioner and I need to reference a list/array for the ansible.groups item.
The YAML looks like this:
hosts:
  - host:
    provision:
      ansible:
        enable: yes
        playbook_path: webserver.yml
        groups:
          - web
          - vagrant
          - live
I'm trying to reference it in the Vagrantfile using:
if host['provision']['ansible']['enable'] == true
  vmhost.vm.provision "ansible" do |ansible|
    ansible.verbose = "vv"
    ansible.config_file = "./ansible.cfg"
    ansible.playbook = host['provision']['ansible']['playbook_path']
    ansible.tags = host['provision']['ansible']['tags']
    ansible.groups = host['provision']['ansible']['groups']
  end
end
But this gives me this error when building the actual VMs:
undefined method `each_pair' for ["web", "vagrant", "dev"]:Array
I searched and haven't found anything that addresses ansible_groups specifically, though I have seen various patterns for reading lists/arrays in the Vagrantfile. Any ideas?

The Ansible groups are not supposed to be an array but a hash, mapping the groups to the server names. It should look like:
hosts:
  - host:
    provision:
      ansible:
        enable: yes
        playbook_path: webserver.yml
        groups:
          web:
            - machine1
            - machine2
          vagrant:
            - machine3
            - machine4
          live:
            - machine5
See the documentation for more details on how this can be used.
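For reference, here is a minimal sketch of how that hash comes through on the Ruby side, assuming the Vagrantfile loads the file with YAML.load_file (the loading code is not shown in the question, and hosts.yml is a hypothetical filename):
require 'yaml'

settings = YAML.load_file('hosts.yml')   # hypothetical filename
settings['hosts'].each do |host|
  groups = host['provision']['ansible']['groups']
  # With the hash layout above, groups is now something like
  #   { "web" => ["machine1", "machine2"], "vagrant" => ["machine3", "machine4"], "live" => ["machine5"] }
  # which is what the provisioner iterates over with each_pair, so
  #   ansible.groups = groups
  # no longer raises the "undefined method `each_pair'" error.
end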

Related

host_vars and group_vars are not getting loaded

I have the folder structure below, which seems to build and load the roles, but the group and host vars are not being loaded. How come?
/etc/ansible/
- hosts
- requirements.yml
- group_vars/
  - app/
    - postgres.yml
- host_vars/
  - app1/
    - postgres.yml
- roles
/documents/ansible/
- playbook.yml
- vagrant
hosts file
# Application servers
[app]
192.168.60.6
# Group multi
[multi:children]
app
#variables applied to all servers
[multi:vars]
ansible_ssh_user=vagrant
ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key
vagrant file
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  # General Vagrant VM configuration.
  config.vm.box = "geerlingguy/centos7"
  config.ssh.insert_key = false
  config.vm.synced_folder ".", "/vagrant", disabled: true
  config.vm.provider :virtualbox do |v|
    v.memory = 256
    v.linked_clone = true
  end
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end
  # Application server 1
  config.vm.define "app1" do |app|
    app.vm.hostname = "orc-app1.dev"
    app.vm.network :private_network, ip: "192.168.60.6"
  end
end
Vagrant by default uses its own, auto-generated inventory file in .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory.
Vagrant has an ansible.inventory_path option for pointing at an inventory file or directory other than the auto-generated one, but you don't use it to point to /etc/ansible/hosts in your Vagrantfile, so Vagrant is completely unaware of that file. Likewise, it does not look for host_vars and group_vars under /etc/ansible.
On the other hand, the path to roles is not controlled by the inventory file, so the fact that Vagrant uses its own inventory does not affect where roles are loaded from.
They are loaded by default from /etc/ansible/roles (or whatever directory Ansible uses as its default, for example /usr/local/etc/ansible/roles on macOS).
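If you do want Vagrant to use the existing inventory instead of generating its own, you can point the provisioner at it explicitly. A minimal sketch, using the path from the question (Ansible will then also pick up the host_vars/ and group_vars/ directories that sit next to that inventory):
config.vm.provision "ansible" do |ansible|
  ansible.playbook = "playbook.yml"
  # Use the existing inventory instead of Vagrant's auto-generated one
  ansible.inventory_path = "/etc/ansible/hosts"
end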
Here's what works for me; it's a combination of advice found on this and other sites.
Solution
In my Vagrantfile (esp. note the ansible.groups line),
config.vm.provision "ansible" do |ansible|
  ansible.playbook = "provisioning/site.yml"
  # "default" is the VM's name as it appears in the auto-generated inventory
  ansible.groups = { "vagrant" => ["default"] }
end
My inventory data for the VM goes in provisioning/host_vars/default and/or provisioning/group_vars/vagrant, either as files or as directories containing files (I prefer the latter so I can have a file for each role I'm using). Ensure these inventory files have a meaningful extension (.yml, assuming YAML-format files), i.e. provisioning/host_vars/default/role_name.yml or provisioning/group_vars/vagrant/role_name.yml.
provisioning can be a directory or (my preference) a symlink; the latter means I can share a common set of roles across my Vagrant-managed VMs.
Gotcha: the filename extensions matter. Inventory data that I used to keep in files without extensions is now ignored, so I think this is a change in how Ansible reads its inventory.
Background details
The ansible.groups setting in the Vagrantfile tweaks the auto-generated inventory in .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory (mentioned in techraf's answer) to add,
[vagrant]
default
The line in the Vagrantfile shown above,
ansible.playbook = "provisioning/site.yml"
not only specifies the playbook, but also means the playbook directory (or symlink to a directory) is provisioning/.
Vagrant looks in provisioning/host_vars etc because "You can also add group_vars/ and host_vars/ directories to your playbook directory." (from this inventory documentation)
For clarity, here are the contents of each vagrant VM directory,
.vagrant/
ansible.cfg
provisioning -> shared_playbook_directory
Vagrantfile
and in the shared playbook directory,
group_vars/
host_vars/
roles/
site.yml
FYI this Ansible & Vagrant intro page details the host_vars, group_vars etc. inventory layout very well.

Using Ansible hosts raises `--limit` does not match any hosts

My Vagrantfile looks like:
Vagrant.configure("2") do |config|
  config.vm.box = "vag-box"
  config.vm.box_url = "boxes/base.box"
  config.vm.network :private_network, ip: "192.168.100.100"
  config.vm.provision :setup, type: :ansible_local do |ansible|
    ansible.playbook = "playbook.yml"
    ansible.provisioning_path = "/vagrant"
    ansible.inventory_path = "/vagrant/hosts"
  end
end
My playbook file looks like:
---
- name: Setup system
  hosts: localhost
  become: true
  become_user: root
  roles:
    - { role: role1 }
    - { role: role2 }
My hosts file looks like:
[localhost]
localhost # 192.168.100.100
During ansible execution I get the following error:
ERROR! Specified --limit does not match any hosts
First: "localhost" is a name that is assigned to the 127.0.0.1 address by convention. This refers to the loopback address of the local machine. I don't think this is what you are trying to use it for, based on the comment in your hosts file.
Second: The Ansible provisioner in Vagrant usually creates a custom inventory file, with the required contents to provision the Vagrant box. For example:
# Generated by Vagrant
myvagrantbox ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='/home/sage/ansible/vagrant/myvagrantbox/.vagrant/machines/myvagrantbox/virtualbox/private_key'
As you are overriding the inventory file, you will need to specify a similar line for the hostname of your vagrant box.
If you omit the ansible.inventory_path = "/vagrant/hosts" line from your config, it should JustWork(tm). You might also wish to specify config.vm.hostname = "myvagrantboxname" in your configuration, so you know what hostname will be used.
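For example, here is a minimal sketch of the Vagrantfile with the inventory override removed, as suggested above (the hostname value is purely illustrative):
Vagrant.configure("2") do |config|
  config.vm.box = "vag-box"
  config.vm.box_url = "boxes/base.box"
  config.vm.network :private_network, ip: "192.168.100.100"
  config.vm.hostname = "myvagrantboxname"
  config.vm.provision :setup, type: :ansible_local do |ansible|
    ansible.playbook = "playbook.yml"
    ansible.provisioning_path = "/vagrant"
    # No ansible.inventory_path: Vagrant generates an inventory matching the machine
  end
end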
See the Using Vagrant and Ansible and the Vagrant -- Ansible Provisioner documentation for more details.

How to enforce running a playbook on a Vagrant machine?

I wrote a playbook which installs Docker.
---
- name: Install dependencies
  apt: name={{ item }} state=present update_cache=yes
  with_items:
    - linux-image-generic-lts-{{ ansible_distribution_release }}
    - linux-headers-generic-lts-{{ ansible_distribution_release }}
  become: true
- name: Add Docker repository key
  apt_key:
    id: 58118E89F3A912897C070ADBF76221572C52609D
    keyserver: hkp://p80.pool.sks-keyservers.net:80
    state: present
  register: add_repository_key
  become: true
- name: Add Docker repository
  apt_repository:
    repo: 'deb https://apt.dockerproject.org/repo {{ ansible_distribution_release }} main'
    state: present
  become: true
- name: Install Docker
  apt:
    name: docker
    state: latest
    update_cache: yes
  become: true
- name: Verify the service is running
  service:
    name: docker
    enabled: yes
    state: started
  become: true
I'm up'ing a vagrant machine which is configured to use that playbook.
Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.ssh.insert_key = false
  config.vm.synced_folder "./", "/tmp/project", create: true
  config.vm.network :forwarded_port, guest: 80, host: 80, auto_correct: true
  config.vm.provider :virtualbox do |v|
    # Name of machine
    v.name = "default"
    # Machine memory
    v.memory = 1024
    # Number of CPUs
    v.cpus = 2
    # This option makes the NAT engine use the host's resolver mechanisms to handle DNS requests
    v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
    # Enabling the I/O APIC is required for 64-bit guest operating systems; it is also required if you want to use more than one virtual CPU in a virtual machine.
    v.customize ["modifyvm", :id, "--ioapic", "on"]
  end
  config.vm.provision "ansible" do |ansible|
    # Sets the playbook to use when the machine is brought up
    ansible.playbook = "deploy/main.yml"
  end
end
But for some reason, that's the output I get and docker is not installed on the vagrant machine:
$ vagrant up --provision
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'ubuntu/trusty64' is up to date...
==> default: Running provisioner: ansible...
default: Running ansible-playbook...
PLAY RECAP *********************************************************************
Is this the right way to do that?
Is there a command which lets me play a playbook on a running Vagrant machine?
I wrote a playbook which installs docker.
---
- name: Install dependencies
  apt: name={{ item }} state=present update_cache=yes
  with_items:
    - linux-image-generic-lts-{{ ansible_distribution_release }}
    - linux-headers-generic-lts-{{ ansible_distribution_release }}
  become: true
No, you have not written a playbook. You have written a YAML file containing a list of Ansible tasks.
Playbooks contain a list of plays, and plays are YAML dictionaries which, for Ansible to work, must at minimum contain a hosts key. In a typical play, the list of tasks is defined under the tasks key.
So for your file to be a playbook you'd at least need:
- hosts: default
  tasks:
    - name: Install dependencies
      apt: name={{ item }} state=present update_cache=yes
      with_items:
        - linux-image-generic-lts-{{ ansible_distribution_release }}
        - linux-headers-generic-lts-{{ ansible_distribution_release }}
      become: true
Note: default here refers to the name of the machine defined in your Vagrantfile (v.name = "default"), not to any Ansible default.
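If you prefer an explicit machine name, you can also define one in the Vagrantfile and target that name in the play instead; a minimal sketch (the name web1 is purely illustrative):
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  # Naming the machine makes it appear as "web1" in Vagrant's generated inventory,
  # so the play can use "hosts: web1" instead of "hosts: default"
  config.vm.define "web1" do |web1|
    web1.vm.provision "ansible" do |ansible|
      ansible.playbook = "deploy/main.yml"
    end
  end
end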
Is there a command which lets me play a playbook on a running Vagrant machine?
You can run the playbook defined in the Vagrantfile with:
vagrant provision
To run another one, you'd just use ansible-playbook directly, but you must point it at Vagrant's inventory file and use vagrant as the remote user:
ansible-playbook -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory -u vagrant playbook.yml

Getting Ansible example (jboss-standalone) to work with Vagrant

I need some assistance with getting https://github.com/ansible/ansible-examples.git / jboss-standalone to work with Vagrant. I think I am making the same mistake with my Vagrant configuration.
My Vagrantfile is here:
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "chef/centos-6.6"
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.hostname = "webserver1"
  config.vm.provision :ansible do |ansible|
    ansible.playbook = "site.yml"
    ansible.verbose = "vvvv"
    ansible.inventory_path = "/Users/miledavenport/vagrant-ansible/jboss-standalone/hosts"
  end
end
My hosts file is here:
# Generated by Vagrant
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
[jboss-servers]
webserver1
[webserver1]
127.0.0.1 ansible_connection=local
[localhost]
127.0.0.1
I am fairly new to using ansible, and want to "play" with Ansible, by using Vagrant.
"vagrant up" produces the following error:
TASK: [jboss-standalone | Install Java 1.7 and some basic dependencies] *******
FATAL: no hosts matched or all hosts have already failed -- aborting
"vagrant ssh" works OK.
site.yml is:
---
# This playbook deploys a simple standalone JBoss server.
- hosts: jboss-servers
  user: root
  roles:
    - jboss-standalone
I don't understand why I am getting the error:
FATAL: no hosts matched
The hosts file contains webserver1, which is the same as the Vagrantfile hostname.
Can someone please help me to resolve this error.
Thanks :)
Miles.
Maybe your intent is to create a parent group called jboss-servers, with a subgroup called webserver1.
Try changing [jboss-servers] to [jboss-servers:children].
This will make the group jboss-servers also contain 127.0.0.1 as its host, and your playbook should run. Link to DOCS
At the moment, since webserver1 does not have the KVP ansible_ssh_host=<ip> associated with it, it is just a hostname without an IP to connect to. Make it a subgroup of jboss-servers only if you don't have webserver1 mapped to some IP in your /etc/hosts file or something :)

vagrant with ansible error

If I have Vagrantfile with ansible provision:
Vagrant.configure(2) do |config|
  config.vm.box = 'hashicorp/precise32'
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.provision :ansible do |ansible|
    ansible.playbook = "playbook.yml"
    ansible.inventory_path = "hosts"
    ansible.limit = 'all'
    ansible.sudo = true
  end
end
My hosts file is very simple:
[local]
web ansible_connection=local
and playbook.yml is:
---
- hosts: local
  sudo: true
  remote_user: vagrant
  tasks:
    - name: update apt cache
      apt: update_cache=yes
    - name: install apache
      apt: name=apache2 state=present
When I start Vagrant with vagrant up I get this error:
failed: [web] => {"failed": true, "parsed": false}
[sudo via ansible, key=daxgehmwoinwalgbzunaiovnrpajwbmj] password:
What's the problem?
The error is occurring because Ansible assumes key-based SSH authentication, but the VM Vagrant creates uses password-based authentication.
There are two ways you can solve this issue.
You can run your ansible playbook as
ansible-playbook playbook.yml --ask-pass
This tells Ansible not to assume key-based authentication, but instead to use password-based SSH authentication and prompt for the password before execution.
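If you'd rather keep provisioning through vagrant up instead of invoking ansible-playbook by hand, the same flag can be forwarded from the Vagrantfile. A sketch using the provisioner's raw_arguments option, with everything else as in the original Vagrantfile:
config.vm.provision :ansible do |ansible|
  ansible.playbook = "playbook.yml"
  ansible.inventory_path = "hosts"
  ansible.limit = 'all'
  ansible.sudo = true
  # Forward --ask-pass to ansible-playbook so it prompts for the SSH password
  ansible.raw_arguments = ["--ask-pass"]
end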

Resources