Vagrant with Ansible for Windows VM - vagrant

I am trying to run Vagrant with Ansible on my Mac to create and provision a Windows 7 VM. I am able to "vagrant up" when I don't invoke Ansible in the Vagrantfile.
I am using the following playbook.yml
---
- hosts: all
  tasks:
    - name: run win ping
      win_ping:
When I add the ansible code to my Vagrantfile, I get the following error
GATHERING FACTS ***************************************************************
failed: [default] => {"failed": true, "parsed": false}
/bin/sh: /usr/bin/python: No such file or directory
To me, this error means it fails to find Python because it is looking for Python as if it is a Linux machine.
Separately, I have run
ansible windows -m win_ping
where windows refers to the IP address of the VM brought up by Vagrant, so I suspect the issue is not with Ansible but with how Vagrant invokes Ansible.
Has anyone tried Vagrant + Ansible for a Windows VM? Is there something obvious that I am missing (perhaps an option to pass to Ansible)?
I am using Vagrant version 1.7.2 and Ansible version 1.8.3

When provisioning a Windows box with Ansible (whether a Vagrant box, another VM, or a physical machine), getting the configuration right comes first. Before crafting your playbook, you should have a correct configuration in place.
For a Windows box managed by Vagrant, your configuration file group_vars/windows-dev should contain something like:
ansible_user: IEUser
ansible_password: Passw0rd!
ansible_port: 55986 # not 5986, as we would use for non-virtualized environments
ansible_connection: winrm
ansible_winrm_server_cert_validation: ignore
Be sure to insert the correct credentials and choose the right port for ansible_port. When working with Vagrant, you can read the correct port from the log messages Vagrant produces after a vagrant up. In my case this looks like:
==> default: Forwarding ports...
default: 5985 (guest) => 55985 (host) (adapter 1)
default: 5986 (guest) => 55986 (host) (adapter 1)
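If you don't want to fish the port out of the boot log, more recent Vagrant versions can print the forwarded ports directly (the machine name default is Vagrant's default; adjust if yours differs):

```shell
# Lists the forwarded ports of the running box; the line for
# guest port 5986 shows the host port to put into ansible_port.
vagrant port default
```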
My Vagrantfile can be found here, if you're interested. It uses the Microsoft Edge on Windows 10 Stable (14.xxx) image from https://developer.microsoft.com/en-us/microsoft-edge/tools/vms.
Now the win_ping module should work - assuming that you've done all the necessary preparation steps on your Windows box, which center around executing the script ConfigureRemotingForAnsible.ps1 (more information can be found in the Making Windows Ansible ready chapter in this blog post):
ansible windows-dev -i hostsfile -m win_ping
Only if this gives you a SUCCESS should you proceed with crafting your playbook.
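For reference, here is a minimal sketch of what the corresponding Vagrantfile could look like when driving Ansible from Vagrant itself. The box name and playbook path are assumptions, not taken from the question:

```ruby
Vagrant.configure("2") do |config|
  # Hypothetical Windows box; use whichever Windows box you actually run.
  config.vm.box = "Microsoft/EdgeOnWindows10"
  # Vagrant itself talks to the Windows guest over WinRM, not SSH.
  config.vm.communicator = "winrm"

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
    # Tell Ansible to use WinRM as well, instead of assuming SSH + Python.
    ansible.extra_vars = {
      ansible_connection: "winrm",
      ansible_winrm_server_cert_validation: "ignore"
    }
  end
end
```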

In my Windows provisioning playbook I set this in the header:
gather_facts: no
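In context, that header sits at the top of the play; a minimal sketch (the play name is made up):

```yaml
- name: Provision Windows box
  hosts: all
  gather_facts: no   # skip the fact-gathering step that assumes /usr/bin/python
  tasks:
    - name: run win ping
      win_ping:
```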

Related

/etc/ansible file is not available on macOS

I used pip to install Ansible on macOS, but I cannot find the /etc/ansible folder, nor the inventory file.
I want to run my playbook against a minikube environment, but the playbook returns:
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Could not match supplied host pattern, ignoring: 192.168.99.105
How to solve this issue?
I looked into this matter, and using Ansible to manage minikube is not an easy topic. Let me elaborate on that:
The main issue is cited below:
Most Ansible modules that execute under a POSIX environment require a Python interpreter on the target host. Unless configured otherwise, Ansible will attempt to discover a suitable Python interpreter on each target host the first time a Python module is executed for that host.
-- Ansible Docs
What that means is that most of the modules will be unusable, even ping.
Steps to reproduce:
Install Ansible
Install Virtualbox
Install minikube
Start minikube
SSH into minikube
Configure Ansible
Test
Install Ansible
As the original poster said, it can be installed through pip.
For example:
$ pip3 install ansible
Install VirtualBox
Please download and install the appropriate version for your system.
Install minikube
Please follow this site: Kubernetes.io
Start minikube
You can start minikube by invoking command:
$ minikube start --vm-driver=virtualbox
The parameter --vm-driver=virtualbox is important because it will be needed later for connecting to minikube.
Please wait for minikube to deploy successfully on VirtualBox.
SSH into minikube
It is necessary to know the IP address of the minikube VM inside VirtualBox.
One way of getting this IP is:
Open Virtualbox
Click on the minikube virtual machine so its console window shows
Enter root for the account name. It should not ask for a password
Execute the command $ ip a | less and find the address of the network interface. It should be in the format 192.168.99.XX
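Alternatively, minikube can print the address itself, which saves the trip through the VirtualBox console:

```shell
# Prints the IP of the running minikube VM, e.g. 192.168.99.101
minikube ip
```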
From the terminal that was used to start minikube, please run the command below:
$ minikube ssh
The command above will SSH into the newly created minikube environment, and it will store a private key at:
$HOME/.minikube/machines/minikube/id_rsa
This id_rsa will be needed to connect to minikube.
Try to log in to minikube by invoking:
ssh -i PATH_TO/id_rsa docker@IP_ADDRESS
If the login succeeds, there should be no issues with Ansible.
Configure Ansible
To use ansible-playbook, 2 files will be needed:
A hosts file with information about the hosts
A playbook file with statements of what you require Ansible to do
Example hosts file:
[minikube_env]
minikube ansible_host=IP_ADDRESS ansible_ssh_private_key_file=./id_rsa
[minikube_env:vars]
ansible_user=docker
ansible_port=22
The ansible_ssh_private_key_file=./id_rsa tells Ansible to use the SSH key matching this minikube instance.
Note that this declaration needs the id_rsa file to be in the same location as the rest of the files.
Example playbook:
- name: Playbook for checking connection between hosts
  hosts: all
  gather_facts: no
  tasks:
    - name: Task to check the connection
      ping:
You can test the connection by invoking command:
$ ansible-playbook -i hosts_file ping.yaml
The command above should fail because there is no Python interpreter installed:
fatal: [minikube]: FAILED! => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "module_stderr": "Shared connection to 192.168.99.101 closed.\r\n", "module_stdout": "/bin/sh: /usr/bin/python: No such file or directory\r\n", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}
There is a successful connection between Ansible and minikube, but there is no Python interpreter to back it up.
There is a way to use Ansible without a Python interpreter.
This Ansible documentation explains the use of the raw module.
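A minimal sketch of such a playbook, using the raw module so no Python is needed on the target (the command shown is just an example):

```yaml
- name: Playbook that works without a Python interpreter
  hosts: all
  gather_facts: no        # fact gathering would also need Python
  tasks:
    - name: Run a plain shell command over SSH
      raw: uname -a
```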

Ansible playbook throwing `playbook does not exist on the guest` when testing with Vagrant machine

When running an Ansible playbook against a Windows Vagrant image, the output looks like:
==> default: Machine booted and ready!
Sorry, don't know how to check guest version of Virtualbox Guest Additions on this platform. Stopping installation.
==> default: Checking for guest additions in VM...
default: The guest additions on this VM do not match the installed version of
default: VirtualBox! In most cases this is fine, but in rare cases it can
default: prevent things such as shared folders from working properly. If you see
default: shared folder errors, please make sure the guest additions within the
default: virtual machine match the version of VirtualBox you have installed on
default: your host and reload your VM.
default:
default: Guest Additions Version: 5.1.12
default: VirtualBox Version: 5.2
==> default: Mounting shared folders...
default: /vagrant => /Users/aaron.west/Workspace/hss-iaas/ansible-repo/tmp
==> default: Running provisioner: ansible_local...
`playbook` does not exist on the guest: /vagrant/test/local.yml
The playbook can be found here: https://galaxy.ansible.com/lean_delivery/java
and the Vagrant Windows Server 2016 image that I'm using is mwrock/Windows2016.
The playbook looks like:
- hosts: local
  gather_facts: yes
  connection: local
  become: yes
  become_user: root
  roles:
    - ../roles/java
As per the ansible_local documentation:
The Ansible Local provisioner requires that all the Ansible Playbook
files are available on the guest machine, at the location referred by
the provisioning_path option. Usually these files are initially
present on the host machine (as part of your Vagrant project), and it
is quite easy to share them with a Vagrant Synced Folder.
To do so, add the following:
config.vm.synced_folder ".", "/vagrant"
This configuration shares the Vagrant project folder from the host into /vagrant on the guest, which is where the provisioner looks for the playbook, as the error message shows.
Running provisioner: ansible_local means Vagrant tries to execute the playbook inside the guest.
You might want to refactor your playbooks to work with the ansible provisioner instead.
Better, use:
config.vm.synced_folder ".", "/vagrant", create: true
Sometimes this directory is missing in the image, which leads to the same error.
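Putting both pieces together, a Vagrantfile fragment for ansible_local could look like this. The paths are illustrative, based on the error message in the question:

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "mwrock/Windows2016"
  # Share the project folder into the guest so the playbook exists there.
  config.vm.synced_folder ".", "/vagrant", create: true

  config.vm.provision "ansible_local" do |ansible|
    # Path as seen from inside the guest, under the synced folder.
    ansible.provisioning_path = "/vagrant"
    # Playbook path is resolved relative to provisioning_path.
    ansible.playbook = "test/local.yml"
  end
end
```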

Ansible can't ping my vagrant box with the vagrant insecure public key

I'm using Ansible 2.4.1.0 and Vagrant 2.0.1 with VirtualBox on macOS, and although provisioning of my Vagrant box works fine with Ansible, I get an unreachable error when I try to ping with:
➜ ansible all -m ping
vagrant_django | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: Permission denied (publickey,password).\r\n",
"unreachable": true
}
The solutions offered for similar questions didn't work for me (like adding the Vagrant insecure public key to my Ansible config). I just can't get it to work with the Vagrant insecure public key.
Fwiw, here's my ansible.cfg file:
[defaults]
host_key_checking = False
inventory = ./ansible/hosts
roles_path = ./ansible/roles
private_key_file = ~/.vagrant.d/insecure_private_key
And here's my ansible/hosts file (ansible inventory):
[vagrantboxes]
vagrant_vm ansible_ssh_user=vagrant ansible_ssh_host=192.168.10.100 ansible_ssh_port=22 ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key
What did work was using my own SSH public key. When I add this to the authorized_keys on my vagrant box, I can ansible ping:
➜ ansible all -m ping
vagrant_django | SUCCESS => {
"changed": false,
"failed": false,
"ping": "pong"
}
I can't connect via ssh either, so that seems to be the underlying problem, which is fixed by adding my own public key to authorized_keys on the Vagrant box.
I'd love to know why it doesn't work with the vagrant insecure key. Does anyone know?
PS: To clarify, although the root cause is similar to this other question, the symptoms and context are different. I could provision my box with ansible, but couldn't ansible ping it. This justifies another question imho.
I'd love to know why it doesn't work with the vagrant insecure key. Does anyone know?
Because the Vagrant insecure key is used only for the initial connection to the box. By default Vagrant replaces it with a freshly generated key, which you'll find in .vagrant/machines/<machine_name>/virtualbox/private_key under the project directory.
You'll also find an automatically generated Ansible inventory in .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory if you use the Ansible provisioner in your Vagrantfile, so you don't need to create your own.
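The generated inventory already records the path to the per-machine key, so pointing Ansible at it is usually enough (run from the Vagrant project directory):

```shell
# Uses the inventory Vagrant generated, which already references
# the freshly generated private key for each machine.
ansible all -m ping \
  -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
```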

Running Ansible playbooks on remote Vagrant box

I have one machine (A) from which I run Ansible playbooks on a variety of hosts. Vagrant is not installed here.
I have another machine (B) with double the RAM that hosts my Vagrant boxes. Ansible is not installed here.
I want to use Ansible to act on Vagrant boxes the same way I do all other hosts; that is, running ansible-playbook on machineA while targeting a virtualized Vagrant box on machineB. SSH keys are already set up between the two.
This seems like a simple use case but I can't find it clearly explained anywhere given the encouraged use of Vagrant's built-in Ansible provisioner. Is it possible?
Perhaps some combination of SSH tunnels and port forwarding trickery?
Turns out this was surprisingly simple. Vagrant in fact does not need to know about Ansible at all.
Ansible inventory on machineA:
default ansible_host=machineB ansible_port=2222
Vagrantfile on machineB:
Vagrant.configure("2") do |config|
  ...
  config.vm.network "forwarded_port", id: "ssh", guest: 22, host: 2222
  ...
end
The id: "ssh" is the important bit, as this overrides the default SSH behavior of restricting SSH to the guest from localhost only.
$ ansible --private-key=~/.ssh/vagrant-default -u vagrant -m ping default
default | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
(Note that the Vagrant private key must be copied over to the Ansible host and specified on the command line.)
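Copying the key over can be done with scp. The project path on machineB is a placeholder, and the destination filename matches the one used in the ansible command above:

```shell
# Fetch the per-machine private key Vagrant generated on machineB
# (replace /path/to/project with the actual Vagrant project directory).
scp machineB:/path/to/project/.vagrant/machines/default/virtualbox/private_key \
    ~/.ssh/vagrant-default
# SSH refuses keys with loose permissions.
chmod 600 ~/.ssh/vagrant-default
```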

Vagrant: Ansible set hosts entries via provision

I have a question again. I'm looking for a way to set hosts entries for domains in Vagrant's Ansible provisioning without using my local or guest hosts files.
I read something about the inventory file, so I tried it. The file itself contains the following:
[myvm]
the-vm.local
Here is the error I got:
==> myvm: Running provisioner: ansible...
ANSIBLE_FORCE_COLOR=true ANSIBLE_HOST_KEY_CHECKING=false PYTHONUNBUFFERED=1 ansible-playbook --private-key=/Users/me/.vagrant.d/insecure_private_key --user=vagrant --limit='myvm' --inventory-file=ansible/hosts -v ansible/playbook.yml
PLAY [all] ********************************************************************
GATHERING FACTS ***************************************************************
fatal: [the-vm.local] => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
TASK: [Required packages present] *********************************************
FATAL: no hosts matched or all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit #/Users/me/playbook.retry
the-vm.local : ok=0 changed=0 unreachable=1 failed=0
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
Any clue how to solve this, or is there another way to do it?
Thanks in advance!
I had the same problem. It occurs when Ansible runs on the local machine while the Vagrant VM uses NAT.
The answer involves setting the VM network to bridged, so that Ansible can resolve the hostname or IP from the machine it is executing on (Vagrantfile modification):
config.vm.network :public_network
During vagrant up, the console will ask you to specify which network interface to bridge to:
==> default: Available bridged network interfaces:
1) en1: Wi-Fi (AirPort)
2) en0: Ethernet
3) en3: Thunderbolt 1
4) bridge0
5) p2p0
default: What interface should the network bridge to?
Source: http://docs.ansible.com/guide_vagrant.html
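If you'd rather not bridge the VM onto your LAN, a host-only private network with a static IP is a common alternative; Ansible running on the host can then reach that address directly. The address below is just an example from VirtualBox's default host-only range, and the line goes inside the Vagrant.configure block:

```ruby
# Vagrantfile: host-only network, reachable from the host machine only
config.vm.network "private_network", ip: "192.168.56.10"
```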
