Running Ansible playbooks on remote Vagrant box - vagrant

I have one machine (A) from which I run Ansible playbooks on a variety of hosts. Vagrant is not installed here.
I have another machine (B) with double the RAM that hosts my Vagrant boxes. Ansible is not installed here.
I want to use Ansible to act on Vagrant boxes the same way I do all other hosts; that is, running ansible-playbook on machineA while targeting a virtualized Vagrant box on machineB. SSH keys are already set up between the two.
This seems like a simple use case but I can't find it clearly explained anywhere given the encouraged use of Vagrant's built-in Ansible provisioner. Is it possible?
Perhaps some combination of SSH tunnels and port forwarding trickery?

Turns out this was surprisingly simple. Vagrant in fact does not need to know about Ansible at all.
Ansible inventory on machineA:
default ansible_host=machineB ansible_port=2222
Vagrantfile on machineB:
Vagrant.configure("2") do |config|
...
config.vm.network "forwarded_port", id: "ssh", guest: 22, host: 2222
...
end
The id: "ssh" is the important bit, as this overrides the default SSH behavior of restricting SSH to the guest from localhost only.
$ ansible --private-key=~/.ssh/vagrant-default -u vagrant -m ping default
default | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
(Note that the Vagrant private key must be copied over to the Ansible host and specified at the command line).
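The key and user can also be pinned in the inventory on machineA so they don't have to be passed on every run; a sketch, assuming the key was copied to ~/.ssh/vagrant-default:

```ini
default ansible_host=machineB ansible_port=2222 ansible_user=vagrant ansible_ssh_private_key_file=~/.ssh/vagrant-default
```

With this in place, the ad-hoc ping works without the extra command-line flags.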

Related

Ansible on macOS sshpass program workaround

I'm using homebrew to install Ansible on macOS Catalina (I previously installed via pip too per the documentation). The problem is that when I attempt to use a test playbook, I receive the following error:
target1 | FAILED! => {
"msg": "to use the 'ssh' connection type with passwords, you must install the sshpass program"
}
The issue is that sshpass isn't readily available on macOS via Homebrew, etc. I've found a couple of ways to install it, but before resorting to that I tried the following changes:
export ANSIBLE_HOST_KEY_CHECKING=False
host_key_checking=false within the ansible.cfg in the same directory
None of the above changes worked, should I just install sshpass, or is there another workaround? Or should I just use virtualbox and call it a day?
For reference, this is the playbook; it's a simple ping test that I'm attempting to run against a local Raspberry Pi that I can already SSH into:
- name: Test connectivity to target servers
  hosts: all
  tasks:
    - name: Ping test
      ping:
The inventory.txt file looks like this:
target1 ansible_host=192.168.x.x ansible_ssh_pass=<password>
Should I just install sshpass, or is there another workaround? Or should I just use virtualbox and call it a day?
It depends on the use case. What do you want to do? Use Ansible for development purposes, or use the machine with IP 192.168.x.x for production workloads?
It is preferable to use SSH key pairs instead of passwords. Generate a key pair on the Ansible host with ssh-keygen and copy the public key to the target host; that way you work around the need for sshpass.
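A minimal sketch of that key-based setup (paths are examples; the copy step is shown commented out because 192.168.x.x is a placeholder):

```shell
# Generate a dedicated ed25519 keypair with no passphrase in a scratch
# directory (use ~/.ssh in practice).
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -f "$keydir/ansible_ed25519" -N "" -q
# Push the public key into authorized_keys on the Pi:
# ssh-copy-id -i "$keydir/ansible_ed25519.pub" pi@192.168.x.x
echo "key ready: $keydir/ansible_ed25519"
```

The inventory line can then drop ansible_ssh_pass in favor of ansible_ssh_private_key_file pointing at the generated key.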
To help you get started with VirtualBox/Vagrant:
After installing Vagrant, create a file named "Vagrantfile" in a directory, place this in there:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.provider "virtualbox" do |v|
    v.memory = 2048
    v.cpus = 2
  end
  config.ssh.insert_key = false
  config.vm.define "vm-local-1" do |me|
    me.vm.box = "rocky8-python3"
    me.vm.hostname = "vm-local-1"
    me.vm.network :forwarded_port, guest: 22, host: 65530, id: "ssh"
    me.vm.network :forwarded_port, guest: 80, host: 8080
    me.vm.network :forwarded_port, guest: 443, host: 4443
    me.vm.network :forwarded_port, guest: 27017, host: 27017
    me.vm.network "private_network", ip: "10.0.0.110"
    me.vm.provision "ansible" do |ansible|
      ansible.playbook = "playbook.yml"
      ansible.inventory_path = "inventory"
      ansible.limit = "vm-local-1"
    end
  end
end
Place this in /etc/vbox/networks.conf. This allows the use of the 10.x.x.x network in Vagrant.
* 10.0.0.0/8 192.168.56.0/21
Create an inventory file named 'inventory' and place this content in there. Replace <my_username> with your username.
[local_test]
vm-local-1 ansible_ssh_user=vagrant ansible_host=127.0.0.1 ansible_ssh_port=65530 ansible_ssh_private_key_file=/Users/<my_username>/.vagrant.d/insecure_private_key
[local_test:vars]
ansible_python_interpreter=/usr/bin/python3
Then, create an Ansible playbook like this:
---
- hosts: local_test
  gather_facts: false
  become: true
  tasks:
    - shell: echo
Now you can run "vagrant up"; the VM will be created automatically, and the playbook will run against it automatically as well.
This ended up being more of a novice issue, as I am still very new to the tool. To diagnose it, I logged into the Raspberry Pi via a manual SSH connection and ran systemctl status sshd; this showed multiple login failures and that Ansible was defaulting to my macOS username. Adding ansible_user=pi to my inventory file resolved the issue.
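For reference, the fixed inventory line then looks like this (a sketch; the address and password remain placeholders, as in the question):

```ini
target1 ansible_host=192.168.x.x ansible_user=pi ansible_ssh_pass=<password>
```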

Packer.io - dial tcp 172.X.X.X:22: connect: no route to host

I'm using vsphere-clone as the builder and ansible-playbook as the provisioner to build my machine.
In one of my Ansible tasks I'm rebooting the machine (after installing some packages and renaming network interfaces), but sometimes my VM gets a different IP address from DHCP and the Ansible playbook cannot continue with the rest of the tasks. I tried ansible.builtin.setup:
- name: do facts module to get latest information
  setup:
But it's not refreshing the IP. I also tried rebooting with a shell provisioner instead:
{
  "type": "shell",
  "inline": ["echo {{user `ssh_password`}} | sudo -S reboot"],
  "expect_disconnect": true,
  "inline_shebang": "/bin/bash -x"
}
But the next provisioners also use the old IP. Is there a way, with Packer, to refresh the IP?

Ansible can't ping my vagrant box with the vagrant insecure public key

I'm using Ansible 2.4.1.0 and Vagrant 2.0.1 with VirtualBox on macOS. Although provisioning of my Vagrant box works fine with Ansible, I get an unreachable error when I try to ping with:
➜ ansible all -m ping
vagrant_django | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: Permission denied (publickey,password).\r\n",
    "unreachable": true
}
The solutions offered on similar questions didn't work for me (like adding the vagrant insecure pub key to my ansible config). I just can't get it to work with the vagrant insecure public key.
Fwiw, here's my ansible.cfg file:
[defaults]
host_key_checking = False
inventory = ./ansible/hosts
roles_path = ./ansible/roles
private_key_file = ~/.vagrant.d/insecure_private_key
And here's my ansible/hosts file (ansible inventory):
[vagrantboxes]
vagrant_vm ansible_ssh_user=vagrant ansible_ssh_host=192.168.10.100 ansible_ssh_port=22 ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key
What did work was using my own SSH public key. When I add this to the authorized_keys on my vagrant box, I can ansible ping:
➜ ansible all -m ping
vagrant_django | SUCCESS => {
    "changed": false,
    "failed": false,
    "ping": "pong"
}
I can't connect via plain SSH either, so that seems to be the underlying problem, which is fixed by adding my own public key to authorized_keys on the Vagrant box.
I'd love to know why it doesn't work with the vagrant insecure key. Does anyone know?
PS: To clarify, although the root cause is similar to this other question, the symptoms and context are different. I could provision my box with ansible, but couldn't ansible ping it. This justifies another question imho.
I'd love to know why it doesn't work with the vagrant insecure key. Does anyone know?
Because the Vagrant insecure key is used for the initial connection to the box only. By default, Vagrant replaces it with a freshly-generated key, which you’ll find in .vagrant/machines/<machine_name>/virtualbox/private_key under the project directory.
You’ll also find an automatically generated Ansible inventory in .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory, if you use Ansible provisioner in Vagrantfile, so you don't need to create your own.
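So the inventory from the question can keep working if it points at the per-machine key instead; a sketch (the project path is an example, adjust it to wherever your Vagrantfile lives):

```ini
[vagrantboxes]
vagrant_vm ansible_ssh_user=vagrant ansible_ssh_host=192.168.10.100 ansible_ssh_port=22 ansible_ssh_private_key_file=/path/to/project/.vagrant/machines/default/virtualbox/private_key
```

Alternatively, setting config.ssh.insert_key = false in the Vagrantfile tells Vagrant to keep using the insecure key, which is why inventories that reference ~/.vagrant.d/insecure_private_key work with that setting.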

Vagrant ssh using username and password

I want to connect to a Vagrant machine as a different user than vagrant, and I want to use a username and password instead of keys. I also want to know whether it is possible to SSH into a Vagrant VM from another VM running on the same machine. If so, how do I do that?
Vagrant has a few options (see the full docs at https://docs.vagrantup.com/v2/vagrantfile/ssh_settings.html):
Vagrant.configure("2") do |config|
config.ssh.username = "user"
config.ssh.password = "password"
end
Note that you need to make sure those users exist on the guest OS (most Vagrant boxes are created with just the vagrant user).
To get connections between your different VMs, assign a fixed IP to each VM.
Vagrant.configure("2") do |config|
config.vm.network :private_network, ip: "192.168.45.15"
end
When you are connected to your second VM, you can run ssh vagrant@192.168.45.15 and it will SSH to the first VM.
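A multi-machine sketch of that setup, with two boxes on the same private network (the box name and the second IP are examples, not from the question):

```ruby
Vagrant.configure("2") do |config|
  config.vm.define "vm1" do |vm1|
    vm1.vm.box = "ubuntu/focal64"   # example box
    vm1.vm.network :private_network, ip: "192.168.45.15"
  end
  config.vm.define "vm2" do |vm2|
    vm2.vm.box = "ubuntu/focal64"
    vm2.vm.network :private_network, ip: "192.168.45.16"
  end
end
```

From vm2, ssh vagrant@192.168.45.15 then reaches vm1 directly.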

Vagrant with Ansible for Windows VM

I am trying to run Vagrant with Ansible on my Mac to create and provision a Windows 7 VM. I am able to "vagrant up" when I don't invoke Ansible in the Vagrantfile.
I am using the following playbook.yml
---
- hosts: all
  tasks:
    - name: run win ping
      win_ping:
When I add the ansible code to my Vagrantfile, I get the following error
GATHERING FACTS ***************************************************************
failed: [default] => {"failed": true, "parsed": false}
/bin/sh: /usr/bin/python: No such file or directory
To me, this error means it fails to find Python because it is looking for Python as if it is a Linux machine.
Separately, I have run
ansible windows -m win_ping
where windows is the IP address to the VM brought up by Vagrant so I suspect the issue is not with Ansible but with how Vagrant is invoking Ansible.
Has anyone tried Vagrant + Ansible for a Windows VM? Is there something obvious that I am missing (perhaps an option to pass to Ansible)?
I am using Vagrant version 1.7.2 and Ansible version 1.8.3
When provisioning a Windows box with Ansible (whether a Vagrant box, a VM, or a real machine), the connection configuration matters most. Before crafting your playbook, you should have a correct configuration in place.
Having a Windows box managed by Vagrant, your configuration file group_vars/windows-dev should contain something like:
ansible_user: IEUser
ansible_password: Passw0rd!
ansible_port: 55986 # not 5986, as we would use for non-virtualized environments
ansible_connection: winrm
ansible_winrm_server_cert_validation: ignore
Be sure to insert the correct credentials and choose the right port for ansible_port. Working with Vagrant, you can get the correct port from the log messages Vagrant produces after a vagrant up. In my case this looks like:
==> default: Forwarding ports...
default: 5985 (guest) => 55985 (host) (adapter 1)
default: 5986 (guest) => 55986 (host) (adapter 1)
My Vagrantfile can be found here, if you're interested. It uses the Microsoft Edge on Windows 10 Stable (14.xxx) image from https://developer.microsoft.com/en-us/microsoft-edge/tools/vms.
Now the win_ping module should work, assuming you've done all the necessary preparation steps on your Windows box, which center around executing the script ConfigureRemotingForAnsible.ps1 (more information can be found in the "Making Windows Ansible ready" chapter of this blog post):
ansible windows-dev -i hostsfile -m win_ping
Only if this gives you a SUCCESS should you proceed with crafting your playbook.
In my Windows provisioning playbook I set this in the header:
gather_facts: no
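Putting the pieces together, a minimal sketch of such a Windows playbook with fact gathering disabled (the group name assumes the windows-dev group from the group_vars file above):

```yaml
---
- hosts: windows-dev
  gather_facts: no
  tasks:
    - name: check WinRM connectivity
      win_ping:
```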
