vagrant with ansible error

If I have Vagrantfile with ansible provision:
Vagrant.configure(2) do |config|
  config.vm.box = 'hashicorp/precise32'
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.provision :ansible do |ansible|
    ansible.playbook = "playbook.yml"
    ansible.inventory_path = "hosts"
    ansible.limit = 'all'
    ansible.sudo = true
  end
end
My hosts file is very simple:
[local]
web ansible_connection=local
and playbook.yml is:
---
- hosts: local
  sudo: true
  remote_user: vagrant
  tasks:
    - name: update apt cache
      apt: update_cache=yes
    - name: install apache
      apt: name=apache2 state=present
When I start the machine with vagrant up, I get this error:
failed: [web] => {"failed": true, "parsed": false}
[sudo via ansible, key=daxgehmwoinwalgbzunaiovnrpajwbmj] password:
What's the problem?

The error occurs because Ansible assumes key-based SSH authentication, but your Vagrant box is created with password-based authentication.
There are two ways you can solve this issue.
You can run your ansible playbook as
ansible-playbook playbook.yml --ask-pass
This tells Ansible not to assume key-based authentication and to use password-based SSH authentication instead, prompting for the password before execution.
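If you would rather have vagrant up handle this by itself, the same flag can be forwarded from the Vagrantfile. A minimal sketch, assuming your Vagrant version supports the raw_arguments option of the Ansible provisioner:

```ruby
# Sketch: forward --ask-pass to ansible-playbook via Vagrant.
# raw_arguments support is an assumption about your Vagrant version.
config.vm.provision :ansible do |ansible|
  ansible.playbook       = "playbook.yml"
  ansible.inventory_path = "hosts"
  ansible.limit          = 'all'
  ansible.raw_arguments  = ["--ask-pass"]
end
```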

Related

Trying to provision my Vagrant with Ansible

I am trying to provision a virtual machine with an Ansible playbook.
Following the documentation, I ended up with this simple Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
  config.vm.network "private_network", ip: "192.168.50.5"
  config.vm.provision "ansible" do |ansible|
    ansible.verbose = "vvv"
    ansible.playbook = "playbook.yml"
  end
end
As you can see, I am trying to provision a xenial64 machine (Ubuntu 16.04) from a playbook.yml file.
When I launch vagrant provision, here is what I get:
$ vagrant provision
==> default: Running provisioner: ansible...
default: Running ansible-playbook...
PYTHONUNBUFFERED=1 ANSIBLE_FORCE_COLOR=true ANSIBLE_HOST_KEY_CHECKING=false ANSIBLE_SSH_ARGS='-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s' ansible-playbook --connection=ssh --timeout=30 --limit="default" --inventory-file=/home/mmarteau/Code/ansible-arc/.vagrant/provisioners/ansible/inventory -vvv playbook.yml
Using /etc/ansible/ansible.cfg as config file
statically included: /home/mmarteau/Code/ansible-arc/roles/user/tasks/ho-my-zsh.yml
statically included: /home/mmarteau/Code/ansible-arc/roles/webserver/tasks/nginx.yml
statically included: /home/mmarteau/Code/ansible-arc/roles/webserver/tasks/php.yml
statically included: /etc/ansible/roles/geerlingguy.composer/tasks/global-require.yml
statically included: /etc/ansible/roles/geerlingguy.nodejs/tasks/setup-RedHat.yml
statically included: /etc/ansible/roles/geerlingguy.nodejs/tasks/setup-Debian.yml
PLAYBOOK: playbook.yml *********************************************************
1 plays in playbook.yml
PLAY RECAP *********************************************************************
So my file does seem to be read, since I get several "statically included" messages from the roles used in my playbook.yml file.
However, the script stops very quickly, and I don't have any information to debug or to see any errors.
How can I debug this process?
EDIT: More info
Here is my playbook.yml file:
---
- name: Installation du serveur
  # hosts: web
  hosts: test
  vars:
    user: mmart
    apps:
      dev:
        branch: development
        domain: admin.test.dev
      master:
        branch: master
        domain: admin.test.fr
    bitbucket_repository: git@bitbucket.org:Test/test.git
    composer_home_path: '/home/mmart/.composer'
    composer_home_owner: mmart
    composer_home_group: mmart
    zsh_theme: agnoster
    environment_file: arc-parameters.yml
    ssh_agent_config: arc-ssh-config
  roles:
    - apt
    - user
    - webserver
    - geerlingguy.composer
    - geerlingguy.nodejs
    - deploy
    - deployer
...
Here is my hosts file:
[web]
XX.XX.XXX.XXX ansible_ssh_private_key_file=/somekey.pem ansible_become=true ansible_user=ubuntu
[test]
Here is the inventory file generated by Vagrant in .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory:
# Generated by Vagrant
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user='ubuntu' ansible_ssh_private_key_file='/home/mmart/Code/ansible-test/.vagrant/machines/default/virtualbox/private_key'
Is this correct? Shouldn't the ansible_ssh_user be set to vagrant ?
In your playbook, use default as the hosts value, since by default Vagrant only creates an inventory entry for that particular machine:
---
- name: Installation du serveur
  hosts: default
  (...)
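Before editing the playbook, you can check which host names Vagrant actually generated. A sketch, using the inventory path already shown in the -vvv output above:

```shell
# List the hosts Vagrant generated, then ping the "default" machine
cat .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
ansible default -i .vagrant/provisioners/ansible/inventory -m ping
```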

Using Ansible hosts raises `--limit` does not match any hosts

My Vagrantfile looks like:
Vagrant.configure("2") do |config|
  config.vm.box = "vag-box"
  config.vm.box_url = "boxes/base.box"
  config.vm.network :private_network, ip: "192.168.100.100"
  config.vm.provision :setup, type: :ansible_local do |ansible|
    ansible.playbook = "playbook.yml"
    ansible.provisioning_path = "/vagrant"
    ansible.inventory_path = "/vagrant/hosts"
  end
end
My playbook file looks like:
---
- name: Setup system
  hosts: localhost
  become: true
  become_user: root
  roles:
    - { role: role1 }
    - { role: role2 }
My hosts file looks like:
[localhost]
localhost # 192.168.100.100
During ansible execution I get the following error:
ERROR! Specified --limit does not match any hosts
First: "localhost" is a name that is assigned to the 127.0.0.1 address by convention. This refers to the loopback address of the local machine. I don't think this is what you are trying to use it for, based on the comment in your hosts file.
Second: The Ansible provisioner in Vagrant usually creates a custom inventory file, with the required contents to provision the Vagrant box. For example:
# Generated by Vagrant
myvagrantbox ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='/home/sage/ansible/vagrant/myvagrantbox/.vagrant/machines/myvagrantbox/virtualbox/private_key'
As you are overriding the inventory file, you will need to specify a similar line for the hostname of your vagrant box.
If you omit the ansible.inventory_path = "/vagrant/hosts" line from your config, it should JustWork(tm). You might also wish to specify config.vm.hostname = "myvagrantboxname" in your configuration, so you know what hostname will be used.
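For reference, if you do want to keep a custom inventory with ansible_local, the file needs a host whose name matches the machine name Vagrant passes to --limit (default here). A minimal sketch; the ansible_connection=local line is my assumption, relying on ansible_local running directly on the guest:

```ini
# /vagrant/hosts -- sketch for the ansible_local provisioner
default ansible_connection=local
```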
See the Using Vagrant and Ansible and the Vagrant -- Ansible Provisioner documentation for more details.

How to enforce running a playbook on a Vagrant machine?

I wrote a playbook which installs Docker.
---
- name: Install dependencies
  apt: name={{ item }} state=present update_cache=yes
  with_items:
    - linux-image-generic-lts-{{ ansible_distribution_release }}
    - linux-headers-generic-lts-{{ ansible_distribution_release }}
  become: true

- name: Add Docker repository key
  apt_key:
    id: 58118E89F3A912897C070ADBF76221572C52609D
    keyserver: hkp://p80.pool.sks-keyservers.net:80
    state: present
  register: add_repository_key
  become: true

- name: Add Docker repository
  apt_repository:
    repo: 'deb https://apt.dockerproject.org/repo {{ ansible_distribution_release }} main'
    state: present
  become: true

- name: Install Docker
  apt:
    name: docker
    state: latest
    update_cache: yes
  become: true

- name: Verify the service is running
  service:
    name: docker
    enabled: yes
    state: started
  become: true
I'm up'ing a vagrant machine which is configured to use that playbook.
Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.ssh.insert_key = false
  config.vm.synced_folder "./", "/tmp/project", create: true
  config.vm.network :forwarded_port, guest: 80, host: 80, auto_correct: true
  config.vm.provider :virtualbox do |v|
    # Name of machine
    v.name = "default"
    # Machine memory
    v.memory = 1024
    # Number of CPUs
    v.cpus = 2
    # This option makes the NAT engine use the host's resolver mechanisms to handle DNS requests
    v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
    # Enabling the I/O APIC is required for 64-bit guest operating systems; it is also required if you want to use more than one virtual CPU in a virtual machine.
    v.customize ["modifyvm", :id, "--ioapic", "on"]
  end
  config.vm.provision "ansible" do |ansible|
    # Sets the playbook to use when the machine is up'ed
    ansible.playbook = "deploy/main.yml"
  end
end
But for some reason, that's the output I get and docker is not installed on the vagrant machine:
$ vagrant up --provision
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'ubuntu/trusty64' is up to date...
==> default: Running provisioner: ansible...
default: Running ansible-playbook...
PLAY RECAP *********************************************************************
Is this the right way to do that?
Is there a command which lets me play a playbook on a running Vagrant machine?
I wrote a playbook which installs docker.
---
- name: Install dependencies
  apt: name={{ item }} state=present update_cache=yes
  with_items:
    - linux-image-generic-lts-{{ ansible_distribution_release }}
    - linux-headers-generic-lts-{{ ansible_distribution_release }}
  become: true
No, you have not written a playbook. You have written a YAML file containing a list of Ansible tasks.
Playbooks contain a list of plays, and plays are YAML dictionaries which, for Ansible to work, must at minimum contain a hosts key. In a typical play, the list of tasks is defined under the tasks key.
So for your file to be a playbook you'd at least need:
- hosts: default
  tasks:
    - name: Install dependencies
      apt: name={{ item }} state=present update_cache=yes
      with_items:
        - linux-image-generic-lts-{{ ansible_distribution_release }}
        - linux-headers-generic-lts-{{ ansible_distribution_release }}
      become: true
Note: default here refers to the name of the machine defined in your Vagrantfile (v.name = "default"), not an Ansible built-in.
Is there a command which lets me play a playbook on a running Vagrant machine?
You can run the playbook defined in the Vagrant file with:
vagrant provision
To run another one, use ansible-playbook directly, but you must point it at Vagrant's generated inventory file and use vagrant as the remote user:
ansible-playbook -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory playbook.yml
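If the generated inventory ever lacks the connection details, the user and key can be passed explicitly on the command line. A sketch; the private-key path is the usual VirtualBox location and is an assumption about your setup:

```shell
# Sketch: run an arbitrary playbook against the running Vagrant machine
ansible-playbook \
  -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory \
  -u vagrant \
  --private-key=.vagrant/machines/default/virtualbox/private_key \
  playbook.yml
```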

Getting Ansible example (jboss-standalone) to work with Vagrant

I need some assistance with getting https://github.com/ansible/ansible-examples.git / jboss-standalone to work with Vagrant. I think I am making the same mistake with my Vagrant configuration.
My Vagrantfile is here:
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "chef/centos-6.6"
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.hostname = "webserver1"
  config.vm.provision :ansible do |ansible|
    ansible.playbook = "site.yml"
    ansible.verbose = "vvvv"
    ansible.inventory_path = "/Users/miledavenport/vagrant-ansible/jboss-standalone/hosts"
  end
end
My hosts file is here:
# Generated by Vagrant
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
[jboss-servers]
webserver1
[webserver1]
127.0.0.1 ansible_connection=local
[localhost]
127.0.0.1
I am fairly new to using ansible, and want to "play" with Ansible, by using Vagrant.
"vagrant up" produces the following error:
TASK: [jboss-standalone | Install Java 1.7 and some basic dependencies] *******
FATAL: no hosts matched or all hosts have already failed -- aborting
"vagrant ssh" works OK.
site.yml is:
---
# This playbook deploys a simple standalone JBoss server.
- hosts: jboss-servers
user: root
roles:
- jboss-standalone
I don't understand why I am getting the error:
FATAL: no hosts matched
The hosts file contains webserver1, which is the same as the hostname in the Vagrantfile.
Can someone please help me to resolve this error.
Thanks :)
Miles.
Maybe your intent is to create a parent group called jboss-servers with a subgroup called webserver1.
Try changing [jboss-servers] to [jboss-servers:children]
This will make the group jboss-servers also contain 127.0.0.1 as its host, and your playbook should run. Link to DOCS
At the moment, since webserver1 does not have the key-value pair ansible_ssh_host=<ip> associated with it, it is just a hostname without an IP to connect to. Make it a subgroup of jboss-servers only if you don't have webserver1 mapped to some IP in your /etc/hosts file or something :)
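Putting that together, the hosts file would look something like this (a sketch based on the suggestion above; the first two lines are kept from the question's own inventory):

```ini
# Generated by Vagrant
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222

[jboss-servers:children]
webserver1

[webserver1]
127.0.0.1 ansible_connection=local
```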

Ansible & Vagrant - apt-get: command not found

I am new to using Vagrant and Ansible. Currently I am stuck on Ansible telling me that it can't find the apt-get command.
My Vagrant box runs on Ubuntu and here are the relevant files:
// Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "precise32"
  config.vm.box_url = "http://files.vagrantup.com/precise32.box"
  config.vm.network :private_network, :ip => "192.168.33.10"

  # make sure apt repo is up to date
  config.vm.provision :shell, :inline => 'apt-get -qqy update'

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end
end
// vagrant_ansible_inventory_default
# Generated by Vagrant
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
// playbook.yml
---
- name: Install MySQL, Nginx, Node.js, and Monit
  hosts: 127.0.0.1
  user: root
  # remote_user: user
  # sudo: yes
  roles:
    - nginx
// roles/nginx/tasks/main.yml
---
- name: Installs nginx web server
  apt: pkg=nginx state=installed update_cache=true
  notify:
    - start nginx
When I run vagrant provision, I get
[default] Running provisioner: shell...
[default] Running: inline script
stdin: is not a tty
[default] Running provisioner: ansible...
PLAY [Install MySQL, Nginx, Node.js, and Monit] *******************************
GATHERING FACTS ***************************************************************
ok: [127.0.0.1]
TASK: [nginx | Installs nginx web server] *************************************
failed: [127.0.0.1] => {"cmd": "apt-get update && apt-get install python-apt -y -q",
"failed": true, "item": "", "rc": 127}
stderr: /bin/sh: apt-get: command not found
msg: /bin/sh: apt-get: command not found
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/Users/foosbar/playbook.retry
127.0.0.1 : ok=1 changed=0 unreachable=0 failed=1
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
What am I missing?
What happens if you run the play against the IP address you've actually assigned to vagrant?
// playbook.yml
---
- name: Install MySQL, Nginx, Node.js, and Monit
  hosts: 192.168.33.10
The host you want to run the play on is the Vagrant machine. hosts in this case doesn't refer to the control machine, but to the managed nodes. Note that the retry hint points at /Users/foosbar/playbook.retry, which suggests the play actually ran against your local OS X host, where apt-get doesn't exist.
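Alternatively (this is my assumption, not part of the answer above): since Vagrant's generated inventory names the machine default, targeting that name avoids hard-coding the IP:

```yaml
---
# Sketch: target the inventory name Vagrant generates ("default")
- name: Install MySQL, Nginx, Node.js, and Monit
  hosts: default
  user: root
  roles:
    - nginx
```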
