Using Ansible hosts raises `--limit` does not match any hosts - vagrant

My Vagrantfile looks like:
Vagrant.configure("2") do |config|
config.vm.box = "vag-box"
config.vm.box_url = "boxes/base.box"
config.vm.network :private_network, ip: "192.168.100.100"
config.vm.provision :setup, type: :ansible_local do |ansible|
ansible.playbook = "playbook.yml"
ansible.provisioning_path = "/vagrant"
ansible.inventory_path = "/vagrant/hosts"
end
end
My playbook file looks like:
---
- name: Setup system
  hosts: localhost
  become: true
  become_user: root
  roles:
    - { role: role1 }
    - { role: role2 }
My hosts file looks like:
[localhost]
localhost # 192.168.100.100
During ansible execution I get the following error:
ERROR! Specified --limit does not match any hosts

First: "localhost" is a name that is assigned to the 127.0.0.1 address by convention. This refers to the loopback address of the local machine. I don't think this is what you are trying to use it for, based on the comment in your hosts file.
Second: The Ansible provisioner in Vagrant usually creates a custom inventory file, with the required contents to provision the Vagrant box. For example:
# Generated by Vagrant
myvagrantbox ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='/home/sage/ansible/vagrant/myvagrantbox/.vagrant/machines/myvagrantbox/virtualbox/private_key'
As you are overriding the inventory file, you will need to specify a similar line for the hostname of your vagrant box.
If you omit the ansible.inventory_path = "/vagrant/hosts" line from your config, it should JustWork(tm). You might also wish to specify config.vm.hostname = "myvagrantboxname" in your configuration, so you know what hostname will be used.
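For illustration, a minimal sketch of the Vagrantfile with those two changes (the hostname value is just an example):

Vagrant.configure("2") do |config|
  config.vm.box = "vag-box"
  config.vm.box_url = "boxes/base.box"
  config.vm.hostname = "myvagrantboxname"
  config.vm.network :private_network, ip: "192.168.100.100"
  config.vm.provision :setup, type: :ansible_local do |ansible|
    ansible.playbook = "playbook.yml"
    ansible.provisioning_path = "/vagrant"
    # no ansible.inventory_path here, so Vagrant generates the inventory itself
  end
end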
See the Using Vagrant and Ansible and the Vagrant -- Ansible Provisioner documentation for more details.

How to render a YAML list in a Vagrantfile

I have a Vagrantfile which is referencing a YAML file to make multi-host configuration easier.
It's mostly working, but I'm using the Ansible provisioner and I need to reference a list/array for the ansible.groups item.
The YAML looks like this:
hosts:
  - host:
    provision:
      ansible:
        enable: yes
        playbook_path: webserver.yml
        groups:
          - web
          - vagrant
          - live
I'm trying to reference it in the Vagrantfile using:
if host['provision']['ansible']['enable'] == true
  vmhost.vm.provision "ansible" do |ansible|
    ansible.verbose = "vv"
    ansible.config_file = "./ansible.cfg"
    ansible.playbook = host['provision']['ansible']['playbook_path']
    ansible.tags = host['provision']['ansible']['tags']
    ansible.groups = host['provision']['ansible']['groups']
  end
end
But this gives me this error when building the actual VMs:
undefined method `each_pair' for ["web", "vagrant", "dev"]:Array
I searched and haven't found anything that addresses ansible_groups specifically, though I have seen various patterns for reading lists/arrays in the Vagrantfile. Any ideas?
The ansible.groups option does not take an array but a hash, mapping group names to machine names. It should look like:
hosts:
  - host:
    provision:
      ansible:
        enable: yes
        playbook_path: webserver.yml
        groups:
          web:
            - machine1
            - machine2
          vagrant:
            - machine3
            - machine4
          live:
            - machine5
See the documentation for more details on how this can be used.
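For reference, a groups hash like that ends up in Vagrant's auto-generated inventory as ordinary group sections, roughly (machine names taken from the example above):

[web]
machine1
machine2

[vagrant]
machine3
machine4

[live]
machine5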

host_vars and group_vars are not getting loaded

I have the folder structure below, which seems to build and load the roles, but the group and host vars are not being loaded. How come?
/etc/ansible/
  - hosts
  - requirements.yml
  - group_vars/
    - app/
      - postgres.yml
  - host_vars/
    - app1/
      - postgres.yml
  - roles
/documents/ansible/
  - playbook.yml
  - vagrant
hosts file
# Application servers
[app]
192.168.60.6
# Group multi
[multi:children]
app
#variables applied to all servers
[multi:vars]
ansible_ssh_user=vagrant
ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key
vagrant file
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  # General Vagrant VM configuration.
  config.vm.box = "geerlingguy/centos7"
  config.ssh.insert_key = false
  config.vm.synced_folder ".", "/vagrant", disabled: true
  config.vm.provider :virtualbox do |v|
    v.memory = 256
    v.linked_clone = true
  end
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end
  # Application server 1
  config.vm.define "app1" do |app|
    app.vm.hostname = "orc-app1.dev"
    app.vm.network :private_network, ip: "192.168.60.6"
  end
end
Vagrant by default uses its own, auto-generated inventory file in .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory.
There is an option, ansible.inventory_path, in Vagrant to point to an inventory file or directory other than the auto-generated one, but you don't use it to point to /etc/ansible/hosts in your Vagrantfile, so Vagrant is completely unaware of that inventory. Likewise, it does not look for host_vars and group_vars under /etc/ansible.
On the other hand, the path to roles is not overridden by the inventory file, so the fact that Vagrant uses its own inventory does not influence where roles are looked up.
They are loaded by default from /etc/ansible/roles (or whatever directory Ansible uses as its default, for example /usr/local/etc/ansible/roles on macOS).
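If you do want Vagrant to use the hand-written inventory in /etc/ansible (and therefore the group_vars/ and host_vars/ directories sitting next to it), a minimal sketch of the provisioner block would be:

config.vm.provision "ansible" do |ansible|
  ansible.playbook = "playbook.yml"
  # point Ansible at the hand-written inventory instead of the auto-generated one
  ansible.inventory_path = "/etc/ansible/hosts"
  # Vagrant normally restricts the run to the machine name it defines;
  # widen the limit if your inventory does not contain that machine name
  ansible.limit = "all"
end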
Here's what works for me; it's a combination of advice found on this and other sites.
Solution
In my Vagrantfile (esp. note the ansible.groups line),
config.vm.provision "ansible" do |ansible|
ansible.playbook = "provisioning/site.yml"
# "default" is the name of the VM as in auto-generated
ansible.groups = { "vagrant" => ["default"] }
end
My inventory data for the VM goes in provisioning/host_vars/default and/or provisioning/group_vars/vagrant, either as files or as directories containing files (I prefer the latter so I can have one file per role I'm using). Ensure these inventory files have a meaningful extension (so .yml for YAML files), thus provisioning/host_vars/default/role_name.yml or provisioning/group_vars/vagrant/role_name.yml. provisioning can be a directory or (my preference) a symlink; the latter means I can share a common set of roles across my Vagrant-managed VMs. Gotcha: the filename extension matters; inventory data that I used to keep in files without extensions is now ignored, so I think this is a change in how Ansible reads its inventory.
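For concreteness, such a vars file is just ordinary YAML key/value pairs; the filename and variable below are purely hypothetical:

provisioning/group_vars/vagrant/role_name.yml:
---
# hypothetical variable applied to every host in the "vagrant" group
postgres_listen_port: 5432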
Background details
The ansible.groups setting in the Vagrantfile tweaks the auto-generated inventory in .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory mentioned in #techraf's answer to add,
[vagrant]
default
The line in the Vagrantfile shown above,
ansible.playbook = "provisioning/site.yml"
not only specifies the playbook, but also means the playbook directory (or symlink to a directory) is provisioning/.
Vagrant looks in provisioning/host_vars etc because "You can also add group_vars/ and host_vars/ directories to your playbook directory." (from this inventory documentation)
For clarity, here are the contents of each vagrant VM directory,
.vagrant/
ansible.cfg
provisioning -> shared_playbook_directory
Vagrantfile
and in the shared playbook directory,
group_vars/
host_vars/
roles/
site.yml
FYI this Ansible & Vagrant intro page details the host_vars, group_vars etc. inventory layout very well.

Trying to provision my Vagrant with Ansible

I am trying to provision a virtual machine with an Ansible playbook.
Following the documentation, I ended up with this simple Vagrant File:
Vagrant.configure("2") do |config|
config.vm.box = "ubuntu/xenial64"
config.vm.network "private_network", ip: "192.168.50.5"
config.vm.provision "ansible" do |ansible|
ansible.verbose = "vvv"
ansible.playbook = "playbook.yml"
end
end
As you can see, I am trying to provision a xenial64 machine (Ubuntu 16.04) from a playbook.yml file.
When I launch vagrant provision, here is what I get:
$ vagrant provision
==> default: Running provisioner: ansible...
default: Running ansible-playbook...
PYTHONUNBUFFERED=1 ANSIBLE_FORCE_COLOR=true ANSIBLE_HOST_KEY_CHECKING=false ANSIBLE_SSH_ARGS='-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s' ansible-playbook --connection=ssh --timeout=30 --limit="default" --inventory-file=/home/mmarteau/Code/ansible-arc/.vagrant/provisioners/ansible/inventory -vvv playbook.yml
Using /etc/ansible/ansible.cfg as config file
statically included: /home/mmarteau/Code/ansible-arc/roles/user/tasks/ho-my-zsh.yml
statically included: /home/mmarteau/Code/ansible-arc/roles/webserver/tasks/nginx.yml
statically included: /home/mmarteau/Code/ansible-arc/roles/webserver/tasks/php.yml
statically included: /etc/ansible/roles/geerlingguy.composer/tasks/global-require.yml
statically included: /etc/ansible/roles/geerlingguy.nodejs/tasks/setup-RedHat.yml
statically included: /etc/ansible/roles/geerlingguy.nodejs/tasks/setup-Debian.yml
PLAYBOOK: playbook.yml *********************************************************
1 plays in playbook.yml
PLAY RECAP *********************************************************************
So my playbook.yml seems to be read, because I see the statically included task files from the roles it references.
However, the script stops very quickly, and I don't have any information to debug or to see any errors.
How can I debug this process?
EDIT: More info
Here is my playbook.yml file:
---
- name: Installation du serveur
  # hosts: web
  hosts: test
  vars:
    user: mmart
    apps:
      dev:
        branch: development
        domain: admin.test.dev
      master:
        branch: master
        domain: admin.test.fr
    bitbucket_repository: git@bitbucket.org:Test/test.git
    composer_home_path: '/home/mmart/.composer'
    composer_home_owner: mmart
    composer_home_group: mmart
    zsh_theme: agnoster
    environment_file: arc-parameters.yml
    ssh_agent_config: arc-ssh-config
  roles:
    - apt
    - user
    - webserver
    - geerlingguy.composer
    - geerlingguy.nodejs
    - deploy
    - deployer
...
Here is my hosts file:
[web]
XX.XX.XXX.XXX ansible_ssh_private_key_file=/somekey.pem ansible_become=true ansible_user=ubuntu
[test]
Here is the inventory file generated by Vagrant in .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory:
# Generated by Vagrant
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user='ubuntu' ansible_ssh_private_key_file='/home/mmart/Code/ansible-test/.vagrant/machines/default/virtualbox/private_key'
Is this correct? Shouldn't ansible_ssh_user be set to vagrant?
In your playbook, use default as the hosts value, since by default Vagrant only creates an inventory entry for that particular machine:
---
- name: Installation du serveur
  hosts: default
  (...)
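Alternatively, if you want to keep a named group in the playbook (say, the existing test group), Vagrant can add its generated default machine to that group via ansible.groups; a minimal sketch:

config.vm.provision "ansible" do |ansible|
  ansible.verbose = "vvv"
  ansible.playbook = "playbook.yml"
  # put the auto-generated "default" machine into the "test" group
  ansible.groups = { "test" => ["default"] }
end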

Getting Ansible example (jboss-standalone) to work with Vagrant

I need some assistance with getting https://github.com/ansible/ansible-examples.git / jboss-standalone to work with Vagrant. I think I am making the same mistake with my Vagrant configuration.
My Vagrantfile is here:
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.box = "chef/centos-6.6"
config.vm.network "forwarded_port", guest: 80, host: 8080
config.vm.hostname = "webserver1"
config.vm.provision :ansible do |ansible|
ansible.playbook = "site.yml"
ansible.verbose = "vvvv"
ansible.inventory_path = "/Users/miledavenport/vagrant-ansible/jboss-standalone/hosts"
end
end
My hosts file is here:
# Generated by Vagrant
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
[jboss-servers]
webserver1
[webserver1]
127.0.0.1 ansible_connection=local
[localhost]
127.0.0.1
I am fairly new to Ansible and want to "play" with it by using Vagrant.
"vagrant up" produces the following error:
TASK: [jboss-standalone | Install Java 1.7 and some basic dependencies] *******
FATAL: no hosts matched or all hosts have already failed -- aborting
"vagrant ssh" works OK.
site.yml is:
---
# This playbook deploys a simple standalone JBoss server.
- hosts: jboss-servers
  user: root
  roles:
    - jboss-standalone
I don't understand why I am getting the error:
FATAL: no hosts matched
The hosts file contains webserver1, which is the same as the hostname in the Vagrantfile.
Can someone please help me resolve this error?
Thanks :)
Miles.
Maybe your intent is to create a parent group called jboss-servers, with a subgroup called webserver1.
Try changing [jboss-servers] to [jboss-servers:children].
This will make the group jboss-servers also contain 127.0.0.1 as its host, and your playbook should run. See the inventory documentation on child groups for details.
At the moment, since webserver1 does not have the key/value pair ansible_ssh_host=<ip> associated with it, it is just a hostname without an IP to connect to. Make it a subgroup of jboss-servers only if you don't have webserver1 mapped to some IP in your /etc/hosts file or something :)
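Concretely, a hosts file along those lines could look like this (keeping the entry Vagrant generated and the existing webserver1 group):

# Generated by Vagrant
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222

[webserver1]
127.0.0.1 ansible_connection=local

[jboss-servers:children]
webserver1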

Vagrant with Ansible error

If I have a Vagrantfile with an Ansible provisioner:
Vagrant.configure(2) do |config|
  config.vm.box = 'hashicorp/precise32'
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.provision :ansible do |ansible|
    ansible.playbook = "playbook.yml"
    ansible.inventory_path = "hosts"
    ansible.limit = 'all'
    ansible.sudo = true
  end
end
My hosts file is very simple:
[local]
web ansible_connection=local
and playbook.yml is:
---
- hosts: local
  sudo: true
  remote_user: vagrant
  tasks:
    - name: update apt cache
      apt: update_cache=yes
    - name: install apache
      apt: name=apache2 state=present
When I start the VM with vagrant up, I get the following error:
failed: [web] => {"failed": true, "parsed": false}
[sudo via ansible, key=daxgehmwoinwalgbzunaiovnrpajwbmj] password:
What's the problem?
The error occurs because Ansible assumes key-based SSH authentication, but the VM Vagrant creates uses password-based authentication.
There are two ways you can solve this issue.
You can run your ansible playbook as
ansible-playbook playbook.yml --ask-pass
This tells Ansible not to assume key-based authentication, but instead to use password-based SSH authentication and to prompt for the password before execution.
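If you would rather keep provisioning through vagrant up instead of invoking ansible-playbook yourself, Vagrant's Ansible provisioner can pass extra command-line flags via its raw_arguments option; a minimal sketch, assuming a Vagrant version that supports it:

config.vm.provision :ansible do |ansible|
  ansible.playbook = "playbook.yml"
  ansible.inventory_path = "hosts"
  ansible.limit = 'all'
  ansible.sudo = true
  # pass --ask-pass so Ansible prompts for the SSH password
  ansible.raw_arguments = ["--ask-pass"]
end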
