I am trying to provision a virtual machine with an Ansible playbook.
Following the documentation, I ended up with this simple Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
  config.vm.network "private_network", ip: "192.168.50.5"

  config.vm.provision "ansible" do |ansible|
    ansible.verbose = "vvv"
    ansible.playbook = "playbook.yml"
  end
end
As you can see, I am trying to provision a xenial64 machine (Ubuntu 16.04) from a playbook.yml file.
When I launch vagrant provision, here is what I get:
$ vagrant provision
==> default: Running provisioner: ansible...
default: Running ansible-playbook...
PYTHONUNBUFFERED=1 ANSIBLE_FORCE_COLOR=true ANSIBLE_HOST_KEY_CHECKING=false ANSIBLE_SSH_ARGS='-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s' ansible-playbook --connection=ssh --timeout=30 --limit="default" --inventory-file=/home/mmarteau/Code/ansible-arc/.vagrant/provisioners/ansible/inventory -vvv playbook.yml
Using /etc/ansible/ansible.cfg as config file
statically included: /home/mmarteau/Code/ansible-arc/roles/user/tasks/ho-my-zsh.yml
statically included: /home/mmarteau/Code/ansible-arc/roles/webserver/tasks/nginx.yml
statically included: /home/mmarteau/Code/ansible-arc/roles/webserver/tasks/php.yml
statically included: /etc/ansible/roles/geerlingguy.composer/tasks/global-require.yml
statically included: /etc/ansible/roles/geerlingguy.nodejs/tasks/setup-RedHat.yml
statically included: /etc/ansible/roles/geerlingguy.nodejs/tasks/setup-Debian.yml
PLAYBOOK: playbook.yml *********************************************************
1 plays in playbook.yml
PLAY RECAP *********************************************************************
So my playbook seems to be read, since I get some "statically included" lines for the roles referenced in playbook.yml.
However, the run stops very quickly, and I don't get any information to debug or any errors to look at.
How can I debug this process?
EDIT: More info
Here is my playbook.yml file:
---
- name: Installation du serveur
  # hosts: web
  hosts: test
  vars:
    user: mmart
    apps:
      dev:
        branch: development
        domain: admin.test.dev
      master:
        branch: master
        domain: admin.test.fr
    bitbucket_repository: git@bitbucket.org:Test/test.git
    composer_home_path: '/home/mmart/.composer'
    composer_home_owner: mmart
    composer_home_group: mmart
    zsh_theme: agnoster
    environment_file: arc-parameters.yml
    ssh_agent_config: arc-ssh-config
  roles:
    - apt
    - user
    - webserver
    - geerlingguy.composer
    - geerlingguy.nodejs
    - deploy
    - deployer
...
Here is my hosts file:
[web]
XX.XX.XXX.XXX ansible_ssh_private_key_file=/somekey.pem ansible_become=true ansible_user=ubuntu
[test]
Here is the generated inventory file from Vagrant in .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory:
# Generated by Vagrant
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user='ubuntu' ansible_ssh_private_key_file='/home/mmart/Code/ansible-test/.vagrant/machines/default/virtualbox/private_key'
Is this correct? Shouldn't ansible_ssh_user be set to vagrant?
In your playbook, use default as the hosts value, since Vagrant will by default only create an inventory entry for that particular machine:
---
- name: Installation du serveur
hosts: default
(...)
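If you want to confirm which hosts a play will actually target before provisioning, you can point ansible-playbook at the Vagrant-generated inventory and ask it to list them (the inventory path is the one shown in the output above; --list-hosts only prints the matched hosts and runs nothing):
ansible-playbook --inventory-file=.vagrant/provisioners/ansible/inventory --limit="default" --list-hosts playbook.yml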
Related
The version of Ubuntu I have is 20.10, and the version of Ansible is 2.9.9.
I have Eve NG with Cisco VIRL routers on IOS 15.6.
At first, Ubuntu was unable to SSH to the Cisco routers due to "no matching key exchange method found. Their offer: diffie-hellman-group1-sha1". I found a workaround using a ~/.ssh/config file, following a link I found.
~/.ssh/config file:
Host 192.168.100.2
    KexAlgorithms=+diffie-hellman-group1-sha1
Host 192.168.100.3
    KexAlgorithms=+diffie-hellman-group1-sha1
Now I am trying to deploy my first playbook.
When I try to run the playbook I get the following error:
fatal: [CSR-1]: FAILED! => {"changed": false, "msg": "Connection type ssh is not valid for this module"}
fatal: [CSR-2]: FAILED! => {"changed": false, "msg": "Connection type ssh is not valid for this module"}
I can SSH from Ubuntu to each router thanks to ~/.ssh/config, but I don't know how to make Ansible use that ~/.ssh/config file.
I tried ssh_args = -F /home/a/.ssh/config (the location of the SSH config file) in ansible.cfg, but cannot seem to get it working.
I have spent several hours Googling around, but cannot find a fix.
ansible.cfg
[defaults]
inventory =./host
host_key_checking = False
retry_files_enabled = False
gathering = explicit
Interpreter_python = /usr/bin/python3
ssh_args = -F /home/n/etc/ssh/ssh_config.d/*.conf
Playbook:
---
- hosts: CSR_Routers
  tasks:
    - name: Show Version
      ios_command:
        commands: show version
all.yml:
ansible_user: "cisco"
ansible_ssh_pass: "cisco"
ansible_connection: "ssh"
ansible_network_os: "iso"
ansbile_connection: "network_cli"
If you look into the documentation, you shouldn't use ssh as the connection type for this module, but network_cli. So you don't talk to the device via plain SSH, but via network_cli. Put that as a host-specific variable into your inventory:
all:
  hosts:
    CSR_01:
      ansible_host: 192.168.100.2
      ansible_connection: "network_cli"
      ansible_network_os: "ios"
      ansible_user: "cisco"
      ansible_password: "cisco"
      ansible_become: yes
      ansible_become_method: enable
      ansible_become_password: "cisco"
  children:
    CSR_Routers:
      hosts:
        CSR_01:
Based on your playbook, this inventory contains a group "CSR_Routers", and the only device in it is CSR_01 with IP 192.168.100.2. The connection type of that device is not ssh but network_cli.
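If you want to double-check that grouping before running the play, ansible-inventory (available in your Ansible 2.9) can print the parsed inventory as a tree:
ansible-inventory -i inventory.yaml --graph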
Remove the ssh_args from your ansible.cfg.
Remove ansible_ssh_pass, ansible_connection, ansible_user, ansible_network_os and ansbile_connection from your all.yml. These settings should be host-specific (and be aware of other devices in your inventory that are not IOS devices).
So you call your playbook with:
ansible-playbook -i inventory.yaml playbook.yml
Also, have a look at the IOS-specific documentation in Ansible.
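For completeness, here is a sketch of a playbook that matches that inventory and also displays the command output (the register/debug task is illustrative, not part of your original playbook):
---
- hosts: CSR_Routers
  gather_facts: no
  tasks:
    - name: Show Version
      ios_command:
        commands: show version
      register: version_output

    - name: Print the gathered output
      debug:
        var: version_output.stdout_lines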
SSH FIX - after posting on Reddit
nano /etc/ssh/ssh_config
KexAlgorithms +diffie-hellman-group1-sha1
Ciphers +aes256-cbc,aes192-cbc,aes128-cbc,3des-cbc
systemctl restart ssh
nano /etc/ansible/ansible.cfg
[defaults]
host_key_checking=False
timeout = 30
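If you prefer not to relax the crypto settings system-wide, the same options can be scoped to just the lab routers in ~/.ssh/config instead (a sketch, assuming all the routers sit in 192.168.100.0/24):
# ~/.ssh/config -- legacy algorithms only for the lab routers
Host 192.168.100.*
    KexAlgorithms +diffie-hellman-group1-sha1
    Ciphers +aes256-cbc,aes192-cbc,aes128-cbc,3des-cbc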
Video with details
https://www.youtube.com/playlist?app=desktop&list=PLov64niDpWBId50D_wuraYWuQ-d02PiR1
Summary
I have a Vagrantfile provisioning a Virtualbox VM with Ansible. The Ansible playbook contains an Ansible Vault-encrypted variable. My problem is that Vagrant provisioning does not prompt for the password although I pass the option to do so.
Minimal, complete example
Vagrantfile:
Vagrant.configure(2) do |config|
  config.vm.provider "virtualbox" do |vb|
    # Build a master VM for this box and clone it for individual VMs
    vb.linked_clone = true
  end

  config.vm.box = "bento/ubuntu-16.04"
  config.vm.hostname = "test-vm"

  config.vm.provision :ansible do |ansible|
    ansible.verbose = true
    ansible.playbook = "playbook.yml"
    ansible.ask_vault_pass = true
    # ansible.raw_arguments = ["--ask-vault-pass"]
    # ansible.raw_arguments = ["--vault-id", "@prompt"]
    # ansible.raw_arguments = ["--vault-id", "dev@prompt"]
  end
end
playbook.yml:
---
- name: Test
  hosts: all
  vars:
    foo: !vault |
      $ANSIBLE_VAULT;1.1;AES256
      65306264626234353434613262613835353463346435343735396138336362643535656233393466
      6331393337353837653239616331373463313665396431390a313338333735346237363435323066
      66323435333331616639366536376639626636373038663233623861653363326431353764623665
      3663636162366437650a383435666537626564393866643461393739393434346439346530336364
      3639
  tasks:
    - name: print foo's value
      debug:
        msg: "foo -> {{ foo }}"
The Ansible Vault password is abc.
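For reference, a vaulted inline value like the one above is typically produced with something along these lines (the plaintext shown is only an example):
ansible-vault encrypt_string 'some-secret-value' --name 'foo'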
When I call vagrant up on the first run of the Vagrantfile, or vagrant provision later on, I do not get the expected prompt to enter the password. Instead, the task print foo's value produces the (red) message:
fatal: [default]: FAILED! => {"msg": "Attempting to decrypt but no vault secrets found"}
I also tried the commented-out alternatives in the Vagrantfile to make Ansible prompt for the password. I can see all of them in the ansible-playbook call printed by Vagrant.
Additionally, I tried several options when encrypting foo with ansible-vault encrypt_string, which did not help either.
What can I do to make Ansible prompt for the password when called with Vagrant?
Versions
Kubuntu 16.04
Vagrant 1.8.1 and Vagrant 2.0.0
Ansible 2.4.0.0
Update
This is the Ansible call as printed by Vagrant:
PYTHONUNBUFFERED=1 ANSIBLE_FORCE_COLOR=true ANSIBLE_HOST_KEY_CHECKING=false ANSIBLE_SSH_ARGS='-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s' ansible-playbook --connection=ssh --timeout=30 --ask-vault-pass --limit="default" --inventory-file=/opt/vagrantVM/.vagrant/provisioners/ansible/inventory -v playbook.yml
If I execute this directly without Vagrant the password prompt works as expected! So it must be Vagrant, which somehow suppresses the prompt.
In Ansible 2.4.0.0, the Vault password prompt (i.e. --ask-vault-pass) is skipped when no tty is present (the getpass.getpass function is never executed).
With the Ansible 2.4.0.0 prompt implementation, the Vagrant provisioner integration doesn't receive an interactive prompt.
Note that the --ask-pass and --ask-become-pass implementations haven't changed (i.e. there is no mechanism to skip the getpass call) and still work fine with Vagrant.
I plan to report the issue upstream to the Ansible project, but for the moment you can resolve the situation by downgrading to Ansible 2.3 (or by using the vault_password_file provisioner option instead).
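A sketch of the vault_password_file workaround, assuming you store the vault password in a file named .vault_pass next to the Vagrantfile (and keep it out of version control):
Vagrant.configure(2) do |config|
  config.vm.box = "bento/ubuntu-16.04"
  config.vm.provision :ansible do |ansible|
    ansible.playbook = "playbook.yml"
    # .vault_pass contains only the vault password; Vagrant passes it to
    # ansible-playbook via --vault-password-file instead of prompting.
    ansible.vault_password_file = ".vault_pass"
  end
end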
References:
https://github.com/ansible/ansible/pull/22756/files#diff-bdd6c847fae8976ab8a7259d0b583f34L176
https://github.com/hashicorp/vagrant/issues/2924
See the related Bug Fix Pull Request (#31493) in Ansible project
My Vagrantfile looks like:
Vagrant.configure("2") do |config|
  config.vm.box = "vag-box"
  config.vm.box_url = "boxes/base.box"
  config.vm.network :private_network, ip: "192.168.100.100"

  config.vm.provision :setup, type: :ansible_local do |ansible|
    ansible.playbook = "playbook.yml"
    ansible.provisioning_path = "/vagrant"
    ansible.inventory_path = "/vagrant/hosts"
  end
end
My playbook file looks like:
---
- name: Setup system
  hosts: localhost
  become: true
  become_user: root
  roles:
    - { role: role1 }
    - { role: role2 }
My hosts file looks like:
[localhost]
localhost # 192.168.100.100
During ansible execution I get the following error:
ERROR! Specified --limit does not match any hosts
First: "localhost" is a name that is assigned to the 127.0.0.1 address by convention. This refers to the loopback address of the local machine. I don't think this is what you are trying to use it for, based on the comment in your hosts file.
Second: The Ansible provisioner in Vagrant usually creates a custom inventory file, with the required contents to provision the Vagrant box. For example:
# Generated by Vagrant
myvagrantbox ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='/home/sage/ansible/vagrant/myvagrantbox/.vagrant/machines/myvagrantbox/virtualbox/private_key'
As you are overriding the inventory file, you will need to specify a similar line for the hostname of your vagrant box.
If you omit the ansible.inventory_path = "/vagrant/hosts" line from your config, it should JustWork(tm). You might also wish to specify config.vm.hostname = "myvagrantboxname" in your configuration, so you know what hostname will be used.
See the Using Vagrant and Ansible and the Vagrant -- Ansible Provisioner documentation for more details.
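If you do want to keep a custom inventory with the ansible_local provisioner, it needs an entry whose name matches the Vagrant machine name (default here, since none is defined), for example:
# /vagrant/hosts
default ansible_connection=local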
If I have Vagrantfile with ansible provision:
Vagrant.configure(2) do |config|
  config.vm.box = 'hashicorp/precise32'
  config.vm.network "forwarded_port", guest: 80, host: 8080

  config.vm.provision :ansible do |ansible|
    ansible.playbook = "playbook.yml"
    ansible.inventory_path = "hosts"
    ansible.limit = 'all'
    ansible.sudo = true
  end
end
My hosts file is very simple:
[local]
web ansible_connection=local
and playbook.yml is:
---
- hosts: local
  sudo: true
  remote_user: vagrant
  tasks:
    - name: update apt cache
      apt: update_cache=yes
    - name: install apache
      apt: name=apache2 state=present
When I start the machine with vagrant up I get this error:
failed: [web] => {"failed": true, "parsed": false}
[sudo via ansible, key=daxgehmwoinwalgbzunaiovnrpajwbmj] password:
What's the problem?
The error occurs because Ansible assumes key-based SSH authentication, but your Vagrant setup creates a VM that uses password-based authentication.
There are two ways you can solve this issue.
You can run your ansible playbook as
ansible-playbook playbook.yml --ask-pass
This tells Ansible not to assume key-based authentication, but to use password-based SSH authentication instead and ask for the password before execution.
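If you want Vagrant itself to pass that flag during vagrant up instead of running the playbook by hand, the provisioner block accepts raw arguments; a sketch based on the Vagrantfile above:
Vagrant.configure(2) do |config|
  config.vm.box = 'hashicorp/precise32'
  config.vm.provision :ansible do |ansible|
    ansible.playbook = "playbook.yml"
    ansible.inventory_path = "hosts"
    ansible.limit = 'all'
    ansible.sudo = true
    # Forward the extra flag straight to ansible-playbook
    ansible.raw_arguments = ["--ask-pass"]
  end
end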
I am new to using Vagrant and Ansible. Currently I am stuck on Ansible telling me that it can't find the apt-get command.
My Vagrant box runs on Ubuntu and here are the relevant files:
// Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "precise32"
  config.vm.box_url = "http://files.vagrantup.com/precise32.box"
  config.vm.network :private_network, :ip => "192.168.33.10"

  # make sure apt repo is up to date
  config.vm.provision :shell, :inline => 'apt-get -qqy update'

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end
end
// vagrant_ansible_inventory_default
# Generated by Vagrant
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
// playbook.yml
---
- name: Install MySQL, Nginx, Node.js, and Monit
  hosts: 127.0.0.1
  user: root
  # remote_user: user
  # sudo: yes
  roles:
    - nginx
// roles/nginx/tasks/main.yml
---
- name: Installs nginx web server
  apt: pkg=nginx state=installed update_cache=true
  notify:
    - start nginx
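The notify above refers to a handler named start nginx; a minimal roles/nginx/handlers/main.yml for it would look roughly like this:
# roles/nginx/handlers/main.yml
---
- name: start nginx
  service: name=nginx state=started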
When I run vagrant provision, I get
[default] Running provisioner: shell...
[default] Running: inline script
stdin: is not a tty
[default] Running provisioner: ansible...
PLAY [Install MySQL, Nginx, Node.js, and Monit] *******************************
GATHERING FACTS ***************************************************************
ok: [127.0.0.1]
TASK: [nginx | Installs nginx web server] *************************************
failed: [127.0.0.1] => {"cmd": "apt-get update && apt-get install python-apt -y -q",
"failed": true, "item": "", "rc": 127}
stderr: /bin/sh: apt-get: command not found
msg: /bin/sh: apt-get: command not found
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/Users/foosbar/playbook.retry
127.0.0.1 : ok=1 changed=0 unreachable=0 failed=1
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
What am I missing?
What happens if you run the play against the IP address you've actually assigned to the Vagrant VM?
// playbook.yml
---
- name: Install MySQL, Nginx, Node.js, and Monit
  hosts: 192.168.33.10
The host on which you want to run the play is the Vagrant VM; hosts in this case doesn't refer to the control machine, but to the managed nodes.
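Alternatively, since the Vagrant-generated inventory names the machine default, pointing the play at that name also keeps it off your workstation; a sketch reusing the settings from the question (the vagrant user plus sudo is an assumption about the box):
---
- name: Install MySQL, Nginx, Node.js, and Monit
  hosts: default
  user: vagrant
  sudo: yes
  roles:
    - nginx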