Ansible & Vagrant - give args to ansible provision

A colleague of mine wrote a script to automate Vagrant installations, including Ansible scripts. So if I run ansible provision, the playbook ansible/playbooks/provision.yml is run on the Vagrant machine(s).
The downside of this script is that the Ansible playbook can only be deployed to the machines by running ansible provision, with no further options.
Now, as I'm writing code and working, I am noticing the downsides, because I can give ansible-playbook parameters / arguments, such as ansible-playbook -i inventory provision.yml -vvv --tags "test". But this is not possible here because of an architectural problem.
So, instead of solving the real problem (which I try to evade), are there any gurus out there who can point me in the right direction, to make it possible to give ansible provision arguments? E.g. ansible provision -vvv.
I looked at https://www.vagrantup.com/docs/cli/provision.html but it didn't help.
Thanks.

Not completely sure I have understood correctly, but maybe this config (from one of my projects) in the Vagrantfile could help:
config.vm.provision "ansible" do |ansible|
  ansible.playbook = "ansible/playbook.yml"
  ansible.limit = 'all'
  ansible.tags = 'local'
  ansible.sudo = true
  ansible.verbose = 'v'
  ansible.groups = {
    "db" => ["db"],
    "app" => ["app"],
    "myproject" => ["myproject"],
    "fourth" => ["fourth"],
    "local:children" => ["db", "app", "myproject", "fourth"]
  }
end
In this Vagrantfile, I configured 4 Vagrant VMs.
The generated vagrant_ansible_inventory looks like this:
# Generated by Vagrant
db ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_private_key_file=/home/user/.vagrant.d/insecure_private_key
app ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200 ansible_ssh_private_key_file=/home/user/.vagrant.d/insecure_private_key
myproject ansible_ssh_host=127.0.0.1 ansible_ssh_port=2201 ansible_ssh_private_key_file=/home/user/.vagrant.d/insecure_private_key
fourth ansible_ssh_host=127.0.0.1 ansible_ssh_port=2202 ansible_ssh_private_key_file=/home/user/.vagrant.d/insecure_private_key
[db]
db
[app]
app
[myproject]
myproject
[fourth]
fourth
[local:children]
db
app
myproject
fourth
https://www.vagrantup.com/docs/provisioning/ansible_local.html
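If the real goal is to forward extra ansible-playbook arguments into a Vagrant-triggered run, one option (a sketch, not part of the original script) is to read them from an environment variable in the Vagrantfile and pass them on via raw_arguments; the ANSIBLE_ARGS name below is purely illustrative:
require 'shellwords'

Vagrant.configure("2") do |config|
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "ansible/playbooks/provision.yml"
    # Split e.g. ANSIBLE_ARGS='-vvv --tags test' into an argument array and hand it
    # straight to ansible-playbook (ANSIBLE_ARGS is a made-up variable name).
    ansible.raw_arguments = Shellwords.shellsplit(ENV['ANSIBLE_ARGS'] || '')
  end
end
With that in place, something like ANSIBLE_ARGS='-vvv --tags test' vagrant provision behaves much like the ansible-playbook invocation from the question.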

Related

Ansible called by Vagrant does not prompt for vault password

Summary
I have a Vagrantfile provisioning a VirtualBox VM with Ansible. The Ansible playbook contains an Ansible Vault-encrypted variable. My problem is that Vagrant provisioning does not prompt for the password although I pass the option to do so.
Minimal, complete example
Vagrantfile:
Vagrant.configure(2) do |config|
  config.vm.provider "virtualbox" do |vb|
    # Build a master VM for this box and clone it for individual VMs
    vb.linked_clone = true
  end
  config.vm.box = "bento/ubuntu-16.04"
  config.vm.hostname = "test-vm"
  config.vm.provision :ansible do |ansible|
    ansible.verbose = true
    ansible.playbook = "playbook.yml"
    ansible.ask_vault_pass = true
    # ansible.raw_arguments = --ask-vault-pass
    # ansible.raw_arguments = ["--vault-id", "#prompt"]
    # ansible.raw_arguments = ["--vault-id", "dev#prompt"]
  end
end
playbook.yml:
---
- name: Test
  hosts: all
  vars:
    foo: !vault |
      $ANSIBLE_VAULT;1.1;AES256
      65306264626234353434613262613835353463346435343735396138336362643535656233393466
      6331393337353837653239616331373463313665396431390a313338333735346237363435323066
      66323435333331616639366536376639626636373038663233623861653363326431353764623665
      3663636162366437650a383435666537626564393866643461393739393434346439346530336364
      3639
  tasks:
    - name: print foo's value
      debug:
        msg: "foo -> {{ foo }}"
The Ansible Vault password is abc.
When I call vagrant up on the first execution of the Vagrantfile, or vagrant provision later, I do not get the expected prompt to enter the password. Instead the task print foo's value prints the (red) message:
fatal: [default]: FAILED! => {"msg": "Attempting to decrypt but no vault secrets found"}
I also tried the commented-out alternatives in the Vagrantfile to make Ansible prompt for the password. I can see all of them in the ansible-playbook call printed by Vagrant.
Additionally, I tried several options when encrypting foo with ansible-vault encrypt_string, which did not help either.
What can I do to make Ansible prompt for the password when called with Vagrant?
Versions
Kubuntu 16.04
Vagrant 1.8.1 and Vagrant 2.0.0
Ansible 2.4.0.0
Update
This is the Ansible call as printed by Vagrant:
PYTHONUNBUFFERED=1 ANSIBLE_FORCE_COLOR=true ANSIBLE_HOST_KEY_CHECKING=false ANSIBLE_SSH_ARGS='-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s' ansible-playbook --connection=ssh --timeout=30 --ask-vault-pass --limit="default" --inventory-file=/opt/vagrantVM/.vagrant/provisioners/ansible/inventory -v playbook.yml
If I execute this directly without Vagrant the password prompt works as expected! So it must be Vagrant, which somehow suppresses the prompt.
In Ansible 2.4.0.0, the Vault password prompt (i.e. --ask-vault-pass) is skipped when no TTY is present (the getpass.getpass function is never executed).
With the Ansible 2.4.0.0 prompt implementation, the Vagrant provisioner integration doesn't receive an interactive prompt.
Note that the --ask-pass and --ask-become-pass implementations haven't changed (i.e. there is no mechanism to skip the getpass function) and are still working fine with Vagrant.
I plan to report the issue upstream to the Ansible project, but for the moment you can resolve the situation by downgrading to Ansible 2.3 (or by using the vault_password_file provisioner option instead).
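A minimal sketch of the vault_password_file route, assuming the password is stored in a file named .vault_pass.txt next to the Vagrantfile (the filename is illustrative; keep such a file out of version control):
config.vm.provision :ansible do |ansible|
  ansible.playbook = "playbook.yml"
  # Read the Vault password from a local file instead of prompting interactively.
  ansible.vault_password_file = ".vault_pass.txt"
end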
References:
https://github.com/ansible/ansible/pull/22756/files#diff-bdd6c847fae8976ab8a7259d0b583f34L176
https://github.com/hashicorp/vagrant/issues/2924
See the related Bug Fix Pull Request (#31493) in Ansible project

host_vars and group_vars are not getting loaded

I have the folder structure below, which seems to build and load the roles, but the group and host vars are not being loaded. How come?
/etc/ansible/
- hosts
- requirements.yml
- group_vars/
  - app/
    - postgres.yml
- host_vars/
  - app1/
    - postgres.yml
- roles
/documents/ansible/
- playbook.yml
- vagrant
hosts file:
# Application servers
[app]
192.168.60.6
# Group multi
[multi:children]
app
#variables applied to all servers
[multi:vars]
ansible_ssh_user=vagrant
ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key
vagrant file:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  # General Vagrant VM configuration.
  config.vm.box = "geerlingguy/centos7"
  config.ssh.insert_key = false
  config.vm.synced_folder ".", "/vagrant", disabled: true
  config.vm.provider :virtualbox do |v|
    v.memory = 256
    v.linked_clone = true
  end
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end
  # Application server 1
  config.vm.define "app1" do |app|
    app.vm.hostname = "orc-app1.dev"
    app.vm.network :private_network, ip: "192.168.60.6"
  end
end
Vagrant by default uses its own, auto-generated inventory file in .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory.
There is an option ansible.inventory_path in Vagrant to point to an inventory file/directory other than the auto-generated one, but you don't use it to point to /etc/ansible/hosts in your Vagrantfile, so Vagrant is completely unaware of it. Likewise it does not look for host_vars and group_vars in /etc/ansible.
On the other hand, the path to roles is not overridden by the inventory file, so the fact that Vagrant uses its own one does not influence the path for roles.
They are loaded by default from /etc/ansible/roles (or whatever directory Ansible uses as its default, for example /usr/local/etc/ansible/roles on macOS).
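If you do want Vagrant to use the existing inventory, and thereby pick up the host_vars and group_vars sitting next to it, the inventory_path option is the hook; a sketch, assuming the /etc/ansible layout from the question:
config.vm.provision "ansible" do |ansible|
  ansible.playbook = "playbook.yml"
  # Use the hand-written inventory instead of the auto-generated one;
  # group_vars/ and host_vars/ are then looked up next to this file (and next to the playbook).
  ansible.inventory_path = "/etc/ansible/hosts"
end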
Here's what works for me; it's a combo of various advice found on this and other sites.
Solution
In my Vagrantfile (esp. note the ansible.groups line),
config.vm.provision "ansible" do |ansible|
  ansible.playbook = "provisioning/site.yml"
  # "default" is the machine name in the auto-generated inventory
  ansible.groups = { "vagrant" => ["default"] }
end
My inventory data for the VM goes in provisioning/host_vars/default and/or provisioning/group_vars/vagrant, either as files or as directories containing files (I prefer the latter so I can have a file for each role I'm using). Ensure these inventory files have a meaningful extension (so .yml, assuming YAML-format files), thus provisioning/host_vars/default/role_name.yml or provisioning/group_vars/vagrant/role_name.yml.
provisioning can be a directory or (my preference) a symlink; the latter means I can have a common set of roles for my Vagrant-handled VMs.
Gotcha: using filename extensions is important. I notice that inventory data I used to keep in files without extensions is now ignored, so I think this is a change in how Ansible reads its inventory.
Background details
The ansible.groups setting in the Vagrantfile tweaks the auto-generated inventory in .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory mentioned in techraf's answer to add:
[vagrant]
default
The line in the Vagrantfile shown above,
ansible.playbook = "provisioning/site.yml"
not only specifies the playbook, but also means the playbook directory (or symlink to a directory) is provisioning/.
Vagrant looks in provisioning/host_vars etc. because "You can also add group_vars/ and host_vars/ directories to your playbook directory." (from the Ansible inventory documentation)
For clarity, here are the contents of each vagrant VM directory,
.vagrant/
ansible.cfg
provisioning -> shared_playbook_directory
Vagrantfile
and in the shared playbook directory,
group_vars/
host_vars/
roles/
site.yml
FYI this Ansible & Vagrant intro page details the host_vars, group_vars etc. inventory layout very well.

Running Ansible against host group

When I try running this Ansible command - ansible testserver -m ping - it works just fine, but when I try this command - ansible webservers -m ping - I get the following error: ERROR! Specified hosts options do not match any hosts.
My hosts file looks like this:
[webservers]
testserver ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
What could be the problem? Why can ansible recognize the host in question and not the host group?
I've tried changing the file to make sure ansible is reading from this file specifically, and made sure this is the case, so this is not a problem of reading configurations from another file I am not aware of.
I've also tried using the solutions specified in Why Ansible skips hosts group and does nothing but it seems like a different problem with a different solution.
EDIT - added my ansible.cfg file, to point out that I've already made all the Vagrant-specific configurations.
[defaults]
inventory = ./ansible_hosts
roles_path = ./ansible_roles
remote_user = vagrant
private_key_file = .vagrant/machine/default/virtualbox/private_key
host_key_checking = False
I think you are working with Vagrant, and you need to ping like this:
ansible -i your-inventory-file webservers -m ping -u vagrant -k
Why your ping failed previously:
Ansible tries to connect to the Vagrant machine using your local login user, which doesn't exist on the Vagrant machine.
It also needs the password for the vagrant user, which is also vagrant.
Hope that helps you.

How to continue Vagrant/Ansible provision script from error?

After I provision my Vagrant box I may get errors during provisioning. How do I restart from the error, instead of doing everything from scratch?
vagrant destroy -f && vagrant up
And I may get an error...
PLAY RECAP ********************************************************************
to retry, use: --limit @/path/to/playbook.retry
And I want to just resume from where it failed. It seems it can be done according to the message (use --limit ...), but when I use it in the Vagrant context it doesn't work.
You can edit the Vagrantfile and include the ansible.start_at_task variable.
Then you can re-run the provision with $ vagrant reload --provision
Vagrant Reload docs
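A sketch of what that can look like; START_AT_TASK is a made-up environment variable so the task name doesn't have to be hard-coded:
config.vm.provision "ansible" do |ansible|
  ansible.playbook = "playbook.yml"
  # Resume from the task named in the failure output; when the variable is unset,
  # the playbook simply starts from the top.
  ansible.start_at_task = ENV['START_AT_TASK'] if ENV['START_AT_TASK']
end
Then something like START_AT_TASK='Install packages' vagrant reload --provision resumes from that task.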
However, because Ansible plays are idempotent, you don't really need to use start_at_task. You can just re-run the provision with the reload command above.
You could do an Ansible run with ansible-playbook against your Vagrant box. The trick is to use the inventory file Vagrant has created; it is located in your .vagrant folder.
ansible-playbook playbook.yml -i vagrant/.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory --start-at-task='MY_TASK'
Vagrantfile: assign your roles to the Vagrant machines.
config.vm.provision "ansible" do |ansible|
  ansible.verbose = "v"
  ansible.playbook = "../playbook.yml"
  ansible.groups = {
    "web" => ['vm01app'],
    "db" => ['vm01db'],
  }
end
To start provisioning at a particular task:
vagrant ssh
provision --list-tasks
provision --start-at-task {task}
If there's anything that needs to be fixed, you can either do it while SSHed to the server, or you can make the changes in the host machine, then sync those changes to the Vagrant host.
https://developer.hashicorp.com/vagrant/docs/synced-folders
So, for instance, if you're using rsync as the sync type, you can sync the changes by running
vagrant rsync
If you used shell as the provisioner instead of Ansible (for this you would need to have Ansible installed in the VM itself), you could run a command like this:
test -f "/path/to/playbook.retry" && (ansible-playbook site.yml --limit @/path/to/playbook.retry; rm -f "/path/to/playbook.retry") || ansible-playbook site.yml
This basically checks whether a retry file exists: if it does, it runs the playbook with that limit and deletes the retry file afterwards; if not, it runs the whole playbook. Of course you would need to run these commands with sudo if you are using Vagrant, but that is easy to add.
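As a sketch of how that could be wired up as a shell provisioner in the Vagrantfile (assuming Ansible is installed inside the VM and the playbook lives under /vagrant; names and paths are illustrative):
config.vm.provision "shell", inline: <<-SHELL
  cd /vagrant
  # Re-run only the failed hosts if a retry file is present, otherwise run everything.
  if [ -f site.retry ]; then
    ansible-playbook site.yml --limit @site.retry && rm -f site.retry
  else
    ansible-playbook site.yml
  fi
SHELL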

Getting Ansible example (jboss-standalone) to work with Vagrant

I need some assistance with getting https://github.com/ansible/ansible-examples.git / jboss-standalone to work with Vagrant. I think I am making the same mistake with my Vagrant configuration.
My Vagrantfile is here:
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "chef/centos-6.6"
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.hostname = "webserver1"
  config.vm.provision :ansible do |ansible|
    ansible.playbook = "site.yml"
    ansible.verbose = "vvvv"
    ansible.inventory_path = "/Users/miledavenport/vagrant-ansible/jboss-standalone/hosts"
  end
end
My hosts file is here:
# Generated by Vagrant
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
[jboss-servers]
webserver1
[webserver1]
127.0.0.1 ansible_connection=local
[localhost]
127.0.0.1
I am fairly new to using Ansible, and want to "play" with Ansible by using Vagrant.
"vagrant up" produces the following error:
TASK: [jboss-standalone | Install Java 1.7 and some basic dependencies] *******
FATAL: no hosts matched or all hosts have already failed -- aborting
"vagrant ssh" works OK.
site.yml is:
---
# This playbook deploys a simple standalone JBoss server.
- hosts: jboss-servers
  user: root
  roles:
    - jboss-standalone
I don't understand why I am getting the error:
FATAL: no hosts matched
The hosts file contains webserver1, which is the same as the hostname in the Vagrantfile.
Can someone please help me resolve this error?
Thanks :)
Miles.
Maybe your intent is to create a parent group called jboss-servers, with a subgroup called webserver1.
Try changing [jboss-servers] to [jboss-servers:children].
This will make the group jboss-servers also contain 127.0.0.1 as its host, and your playbook should run. Link to DOCS
At the moment, since webserver1 does not have the key-value pair ansible_ssh_host=<ip> associated with it, it is just a hostname without an IP to connect to. Make it a subgroup of jboss-servers only if you don't have webserver1 mapped to some IP in your /etc/hosts file or something :)
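An alternative sketch, if you would rather drop the hand-written hosts file entirely: remove ansible.inventory_path and let Vagrant generate its own inventory, assigning the (default-named) machine to the jboss-servers group in the Vagrantfile:
config.vm.provision :ansible do |ansible|
  ansible.playbook = "site.yml"
  ansible.verbose = "vvvv"
  # With no named machine defined in this Vagrantfile, Vagrant calls the VM "default";
  # this puts it into the jboss-servers group of the auto-generated inventory.
  ansible.groups = { "jboss-servers" => ["default"] }
end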
