How to get the IP address of another host in Ansible? - Vagrant

I have Vagrant configured with multiple machines and Ansible:
config.vm.box = "ubuntu/trusty64"
config.vm.define "my_server" do |my_server|
  my_server.vm.network "private_network", ip: "192.168.50.4"
end
config.vm.define "my_agent" do |my_agent|
  my_agent.vm.network "private_network", ip: "192.168.50.5"
end
config.vm.provision "ansible" do |ansible|
  ansible.groups = {
    "my-server" => ["my_server"],
    "my-agent" => ["my_agent"],
    "all_groups:children" => ["my-server", "my-agent"]
  }
  ansible.playbook = "./ansible/my.yml"
end
And Vagrant generates this inventory file:
# Generated by Vagrant
my_server ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_private_key_file=/.../private_key
my_agent ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200 ansible_ssh_private_key_file=/.../private_key
...
When I run vagrant up, my_server gets these IPs:
eth0: 10.0.2.15
eth1: 192.168.50.4
and my_agent gets these:
eth0: 10.0.2.15
eth1: 192.168.50.5
I want to add the server's IP address (from eth1) to the agent's configuration.
I tried to debug the information about the server:
- debug: var=hostvars[item]
  with_items: groups['my-server']
but I only get:
ok: [my_agent] => (item=my_server) => {
    "item": "my_server",
    "var": {
        "hostvars[item]": {
            "ansible_ssh_host": "127.0.0.1",
            "ansible_ssh_port": 2222,
            "ansible_ssh_private_key_file": ".../private_key",
            "group_names": [
                "all_groups",
                "my-server"
            ],
            "inventory_hostname": "my_server",
            "inventory_hostname_short": "my_server"
        }
    }
}
Is it possible to get the server's IP address in the agent role? If so, how can I do it?

I resolved the problem. I needed to add
ansible.limit = "all"
to the Ansible provisioner configuration, because Vagrant runs Ansible twice: first for my_server and then for my_agent. On the second run Ansible does not gather any facts about my_server. With the limit set to "all", Ansible now runs against both servers on each run.
Working Vagrant configuration:
config.vm.provision "ansible" do |ansible|
  ansible.groups = {
    "my-server" => ["my_server"],
    "my-agent" => ["my_agent"],
    "all_groups:children" => ["my-server", "my-agent"]
  }
  ansible.limit = "all"
  ansible.playbook = "./ansible/my.yml"
end
And the Ansible agent role:
- debug: var=hostvars[item]["ansible_eth1"]["ipv4"]["address"]
  with_items: groups['my-server']
  sudo: yes
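If you need the address in a variable rather than just in debug output, a minimal sketch in the agent role could look like this (it assumes the same my-server group and eth1 interface as above; the agent.conf.j2 template and its destination path are only illustrations, not part of my setup):
- set_fact:
    server_ip: "{{ hostvars[item]['ansible_eth1']['ipv4']['address'] }}"
  with_items: "{{ groups['my-server'] }}"
- template:
    src: agent.conf.j2            # hypothetical template that references {{ server_ip }}
    dest: /etc/my-agent/agent.conf
  sudo: yes
The set_fact task stores the my-server host's eth1 address in server_ip so later tasks and templates in the agent role can reference it.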

Related

Vagrant + Ansible: access other machines' IP addresses

I have a Vagrantfile that creates 3 VMs and an Ansible playbook to manage these 3 machines (the inventory file is generated by Vagrant). I need to access VM no. 1's IP address in order to put it in the configuration files of the two other machines, but using the hostvars[vm1] variable won't give me the IP address of vm1.
Here is my Vagrantfile:
Vagrant.configure("2") do |config|
config.vm.network "private_network", type: "dhcp"
config.vm.provider "virtualbox" do |v|
v.memory = 512
end
config.vm.synced_folder ".", "/vagrant"
config.vm.box_check_update = false
config.vm.define "vm1" do |vm1|
vm1.vm.box = "hashicorp/bionic64"
end
config.vm.define "vm2" do |vm2|
vm2.vm.box = "hashicorp/bionic64"
end
config.vm.define "vm3" do |vm3|
vm3.vm.box = "hashicorp/bionic64"
end
config.vm.provision "ansible" do |ansible|
ansible.playbook = "playbook.yml"
#ansible.ask_become_pass = true
ansible.groups = {
"node_master" => ["vm1"],
"node_replicas" => ["vm2", "vm3"],
"node:children" => ["node_master", "node_replicas"]
}
end
How can I solve this problem?
As configured, your Ansible provisioner will run three times: once independently for each machine, each time called with a limit set to the current machine name.
In this situation, facts for the other machines will not be gathered in your playbook, and hostvars['vm1'] will be empty (unless you are currently running on vm1).
What you can try is to declare the provisioner on a single machine only (the best bet being vm3, the last one) and change the default current-machine limit to all:
config.vm.define "vm3" do |vm3|
vm3.vm.box = "hashicorp/bionic64"
vm3.vm.provision "ansible" do |ansible|
ansible.playbook = "playbook.yml"
ansible.limit = "all"
#ansible.ask_become_pass = true
ansible.groups = {
"node_master" => ["vm1"],
"node_replicas" => ["vm2", "vm3"],
"node:children" => ["node_master", "node_replicas"]
}
end
end
This way, your playbook will run on all your VMs at once in parallel, and you should be able to access facts from all the hosts you target in your playbook.
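For example, once the playbook runs with the limit set to all, a task on the replica nodes can read the master's address from hostvars. A minimal sketch, assuming facts are gathered and the private_network interface shows up as eth1 (the interface name can differ between boxes, e.g. enp0s8 on bionic):
- hosts: node_replicas
  tasks:
    - name: show vm1's private address
      debug:
        msg: "{{ hostvars['vm1']['ansible_eth1']['ipv4']['address'] }}"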
I have no clue how to solve your specific setup.
However, I use Vagrant and Ansible separately: Vagrant only to build the hosts with vagrant up, and Ansible to manage the configuration on those hosts.
I use this Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.provider "virtualbox" do |v|
    v.memory = 2048
    v.cpus = 1
  end
  config.ssh.insert_key = false
  config.vm.define "multi-1" do |localMulti1|
    localMulti1.vm.box = "ubuntu/xenial64"
    localMulti1.vm.hostname = "multi-1"
    localMulti1.vm.network :forwarded_port, guest: 22, host: 30001, id: "ssh"
    localMulti1.vm.network "private_network", ip: "10.0.0.111"
  end
  config.vm.define "multi-2" do |localMulti2|
    localMulti2.vm.box = "ubuntu/xenial64"
    localMulti2.vm.hostname = "multi-2"
    localMulti2.vm.network :forwarded_port, guest: 22, host: 30002, id: "ssh"
    localMulti2.vm.network "private_network", ip: "10.0.0.112"
  end
  config.vm.define "multi-3" do |localMulti3|
    localMulti3.vm.box = "ubuntu/xenial64"
    localMulti3.vm.hostname = "multi-3"
    localMulti3.vm.network :forwarded_port, guest: 22, host: 30003, id: "ssh"
    localMulti3.vm.network "private_network", ip: "10.0.0.113"
    localMulti3.vm.provision "ansible" do |ansible|
      ansible.playbook = "playbook.yml"
      ansible.inventory_path = "inventory"
      ansible.limit = "local_multi"
    end
  end
end
I place this in my inventory file:
[local_multi]
multi-1 ansible_ssh_user=vagrant ansible_host=127.0.0.1 ansible_ssh_port=30001 ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key
multi-2 ansible_ssh_user=vagrant ansible_host=127.0.0.1 ansible_ssh_port=30002 ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key
multi-3 ansible_ssh_user=vagrant ansible_host=127.0.0.1 ansible_ssh_port=30003 ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key
Your playbook.yml:
---
- hosts: local_multi
  become: True
  tasks:
    - name: check who is master
      debug:
        msg: "{{ node_master }}"
      when: node_master is defined
Now you can place all your Ansible vars in group_vars or host_vars; the project layout looks like this (an example host_vars file follows the listing):
./inventory
./playbook.yml
./Vagrantfile
./group_vars/all.yml
./host_vars/multi-1.yml
./host_vars/multi-2.yml
./host_vars/multi-3.yml
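For instance, a hypothetical host_vars/multi-1.yml (assumed content, not from my actual setup) that makes the node_master check in the playbook above fire only on multi-1 could be as small as:
---
# host_vars/multi-1.yml -- mark multi-1 as the master node
node_master: true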

How to run an ansible playbook on a specific vagrant host

I have a Vagrantfile which creates 3 servers, and I have two Ansible playbooks. playbook1 should be executed on every server first. playbook2 should only be executed on server1, not on server2 or server3.
How can I manage this with my Vagrantfile?
Vagrant.configure("2") do |config|
config.vm.box = "ubuntu/bionic64"
config.vm.define "server1" do |server1|
//
end
config.vm.define "server2" do |server2|
//
end
config.vm.define "server3" do |server3|
//
end
config.vm.provision "ansible" do |ansible|
ansible.playbook = "playbook1.yml"
end
end
The above executes playbook1 on all servers. How can I add configuration so that playbook2.yml is executed only on server1, and only AFTER playbook1?
Given your example Vagrantfile and your theoretical playbook2.yml executing only on server2 after playbook1.yml on server1, we would arrive at the following solution:
Vagrant.configure("2") do |config|
config.vm.box = "ubuntu/bionic64"
config.vm.define "server1" do |server1|
//
# restrict scope of ansible provisioner to server1 by invoking on its class method off the constructor
server1.vm.provision :ansible do |ansible|
ansible.playbook = 'playbook1.yml'
end
end
config.vm.define "server2" do |server2|
//
# perform similarly for server2, which executes after server1 provisioning due to the imperative ruby dsl
server2.vm.provision :ansible do |ansible|
ansible.playbook = 'playbook2.yml'
end
end
config.vm.define "server3" do |server3|
//
end
end
It is also worth noting that if you want to be precise about ordering, you can vagrant up server1 and then vagrant up server2 instead of an all-in-one vagrant up.
Basically, within Vagrant.configure, anything set on config.vm applies to all VMs. You can restrict settings to specific VMs by defining them with config.vm.define, as you do above; the machine objects created by config.vm.define expose the same members/attributes as the base config.
Note you can also do something like this if you want:
Vagrant.configure('2') do |config|
  ...
  (1..3).each do |i|
    config.vm.define "server#{i}" do |server|
      //
      server.vm.provision :ansible do |ansible|
        ansible.playbook = "playbook#{i}.yml"
      end
    end
  end
end
for a per-server playbook. Whether this fits depends on what exactly is inside your // blocks for each VM, and on whether you actually want a third playbook for the third VM.
The example below will execute playbook1.yml on every server first and then execute playbook2.yml only on server1 (this example assumes that playbook1.yml can be parallelized):
# -*- mode: ruby -*-
# vi: set ft=ruby :
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  N = 3
  (1..N).each do |server_id|
    config.vm.define "server#{server_id}" do |server|
      server.vm.box = "ubuntu/bionic64"
      server.vm.hostname = "server#{server_id}"
      server.vm.network "private_network", ip: "172.28.128.25#{server_id}"
      server.vm.provision :shell, inline: "sudo apt install -y python"
      # only run the ansible provisioner once,
      # when all the machines are up and ready.
      if server_id == N
        server.vm.provision :ansible do |ansible|
          # disable the default limit to connect to all the machines
          # and execute playbook1 on all hosts
          ansible.limit = "all"
          ansible.playbook = "playbook1.yml"
          ansible.compatibility_mode = "2.0"
        end
        server.vm.provision :ansible do |ansible|
          # limit the connection to server1 and execute playbook2
          ansible.limit = "server1"
          ansible.playbook = "playbook2.yml"
          ansible.compatibility_mode = "2.0"
        end
      end
    end
  end
end
This example builds on top of the example provided in the Tips and Tricks: the ansible-playbook runs are parallelized across the machines, and the scope of both runs is controlled by the ansible.limit configuration option (i.e. vagrant up will bring up all the virtual machines first, then execute the playbooks one after the other against all hosts or a subset of hosts).
Note: the ubuntu/bionic64 (virtualbox, 20190131.0.0) box has /usr/bin/python3 installed. For the sake of having a copy & paste example, and to keep using a dynamic inventory, I deliberately kept the server.vm.provision :shell, inline: "sudo apt install -y python" line in the example so that ansible-playbook (2.7.6) doesn't bomb out with "/bin/sh: 1: /usr/bin/python: not found\r\n" errors (ref. How do I handle python not having a Python interpreter at /usr/bin/python on a remote machine?). Example playbook1.yml and playbook2.yml (present in the same directory as the Vagrantfile):
---
- hosts: all
  tasks:
    - debug:
        msg: 'executing on {{ inventory_hostname }}'
Results in:

skipping: no hosts matched - Ansible - Vagrant

I recently started looking into Ansible, so for testing purposes I am trying to provision a box. I am using Vagrant, VirtualBox and Ansible to provision the box, but when I do vagrant provision it shows
skipping: no hosts matched
My Vagrantfile:
config.vm.define"deploymaster" do |deploymaster|
deploymaster.vm.hostname = "deploymaster"
deploymaster.vm.network :private_network, ip: "192.168.33.10"
deploymaster.vm.network "forwarded_port", guest: 80, host: 6080, id: "http", auto_corect: true
deploymaster.vm.network "forwarded_port", guest: 443, host: 2201, id: "https"
deploymaster.vm.provider "virtualbox" do |vb|
vb.memory = "1024"
end
deploymaster.vm.provision :ansible do |ansible|
ansible.playbook = "../playbooks/inventory-change-callback.yml"
ansible.inventory_path = "../inventory/ansible/hosts.yml"
ansible.become = true
ansible.verbose = "vvv"
end
end
Inventory hosts.yml file:
deploymaster:
  hosts:
    192.168.33.10:
      hostname: deploymaster
elk:
  hosts:
    192.168.33.12:
      hostname: elk
  vars:
    retention: 30
Update 1
Updated the inventory file:
deploymaster:
  hosts:
    deploy-master:
      ansible_host: 192.168.33.10
elk:
  hosts:
    elk-node:
      ansible_host: 192.168.33.12
Playbook:
---
- name: Set inventory path
  hosts: deploymaster
  gather_facts: no
- include: some-play-book.yml
I am importing my inventory file in the Vagrantfile as mentioned here and here, but I am still not able to fix it. What am I missing here?

Ansible re-provisions the same host even though the inventory file is set up correctly

I've been trying to debug this for a while now; I thought I had it working, but then I made some other changes and now I'm back where I started.
Basically, I have Vagrant looping over a list of machine definitions, and while my Ansible inventory looks perfectly fine, I find that only one host is actually being provisioned.
Generated Ansible Inventory -- The SSH ports are all different, groups are correct
# Generated by Vagrant
kafka.cp.vagrant ansible_host=127.0.0.1 ansible_port=2200 ansible_user='vagrant' ansible_ssh_private_key_file='/workspace/confluent/cp-ansible/vagrant/.vagrant/machines/kafka.cp.vagrant/virtualbox/private_key' kafka='{"broker": {"id": 1}}'
zk.cp.vagrant ansible_host=127.0.0.1 ansible_port=2222 ansible_user='vagrant' ansible_ssh_private_key_file='/workspace/confluent/cp-ansible/vagrant/.vagrant/machines/zk.cp.vagrant/virtualbox/private_key'
connect.cp.vagrant ansible_host=127.0.0.1 ansible_port=2201 ansible_user='vagrant' ansible_ssh_private_key_file='/workspace/confluent/cp-ansible/vagrant/.vagrant/machines/connect.cp.vagrant/virtualbox/private_key'
[preflight]
zk.cp.vagrant
kafka.cp.vagrant
connect.cp.vagrant
[zookeeper]
zk.cp.vagrant
[broker]
kafka.cp.vagrant
[schema-registry]
kafka.cp.vagrant
[connect-distributed]
connect.cp.vagrant
Generated hosts file -- IPs and hostnames are correct
## vagrant-hostmanager-start id: aca1499c-a63f-4747-b39e-0e71ae289576
192.168.100.101 zk.cp.vagrant
192.168.100.102 kafka.cp.vagrant
192.168.100.103 connect.cp.vagrant
## vagrant-hostmanager-end
Ansible playbook I want to run -- the plays correctly correspond to the groups in my inventory
- hosts: preflight
  tasks:
    - import_role:
        name: confluent.preflight
- hosts: zookeeper
  tasks:
    - import_role:
        name: confluent.zookeeper
- hosts: broker
  tasks:
    - import_role:
        name: confluent.kafka-broker
- hosts: schema-registry
  tasks:
    - import_role:
        name: confluent.schema-registry
- hosts: connect-distributed
  tasks:
    - import_role:
        name: confluent.connect-distributed
For any code missing here, see Confluent :: cp-ansible.
The following is a sample of my Vagrantfile. (I made a fork, but haven't committed until I get this working...)
I know that the if index == machines.length - 1 check should work according to the Vagrant documentation, and it does start all the machines and then run Ansible only against the last machine, but all the tasks end up being executed on the first one for some reason.
machines = {"zk"=>{"ports"=>{2181=>nil}, "groups"=>["preflight", "zookeeper"]}, "kafka"=>{"memory"=>3072, "cpus"=>2, "ports"=>{9092=>nil, 8081=>nil}, "groups"=>["preflight", "broker", "schema-registry"], "vars"=>{"kafka"=>"{\"broker\": {\"id\": 1}}"}}, "connect"=>{"ports"=>{8083=>nil}, "groups"=>["preflight", "connect-distributed"]}}
Vagrant.configure("2") do |config|
if Vagrant.has_plugin?("vagrant-hostmanager")
config.hostmanager.enabled = true
config.hostmanager.manage_host = true
config.hostmanager.ignore_private_ip = false
config.hostmanager.include_offline = true
end
# More info on http://fgrehm.viewdocs.io/vagrant-cachier/usage
if Vagrant.has_plugin?("vagrant-cachier")
config.cache.scope = :box
end
if Vagrant.has_plugin?("vagrant-vbguest")
config.vbguest.auto_update = false
end
config.vm.box = VAGRANT_BOX
config.vm.box_check_update = false
config.vm.synced_folder '.', '/vagrant', disabled: true
machines.each_with_index do |(machine, machine_conf), index|
hostname = getFqdn(machine.to_s)
config.vm.define hostname do |v|
v.vm.network "private_network", ip: "192.168.100.#{101+index}"
v.vm.hostname = hostname
machine_conf['ports'].each do |guest_port, host_port|
if host_port.nil?
host_port = guest_port
end
v.vm.network "forwarded_port", guest: guest_port, host: host_port
end
v.vm.provider "virtualbox" do |vb|
vb.memory = machine_conf['memory'] || 1536 # Give overhead for 1G default java heaps
vb.cpus = machine_conf['cpus'] || 1
end
if index == machines.length - 1
v.vm.provision "ansible" do |ansible|
ansible.compatibility_mode = '2.0'
ansible.limit = 'all'
ansible.playbook = "../plaintext/all.yml"
ansible.become = true
ansible.verbose = "vv"
# ... defined host and group variables here
end # Ansible provisioner
end # If last machine
end # machine configuration
end # for each machine
end
I set up an Ansible task like this:
- debug:
    msg: "FQDN: {{ansible_fqdn}}; Hostname: {{inventory_hostname}}; IPv4: {{ansible_default_ipv4.address}}"
With just that task, notice that ansible_fqdn is always zk.cp.vagrant, which lines up with the fact that only that VM is actually getting provisioned by Ansible.
ok: [zk.cp.vagrant] => {
    "msg": "FQDN: zk.cp.vagrant; Hostname: zk.cp.vagrant; IPv4: 10.0.2.15"
}
ok: [kafka.cp.vagrant] => {
    "msg": "FQDN: zk.cp.vagrant; Hostname: kafka.cp.vagrant; IPv4: 10.0.2.15"
}
ok: [connect.cp.vagrant] => {
    "msg": "FQDN: zk.cp.vagrant; Hostname: connect.cp.vagrant; IPv4: 10.0.2.15"
}
Update with a minimal example: hostname -f returns the same host everywhere, and I assume that is what gather_facts runs to determine ansible_fqdn:
ansible all --private-key=~/.vagrant.d/insecure_private_key --inventory-file=/workspace/confluent/cp-ansible/vagrant/.vagrant/provisioners/ansible/inventory -a 'hostname -f' -f1
zk.cp.vagrant | SUCCESS | rc=0 >>
kafka.cp.vagrant
connect.cp.vagrant | SUCCESS | rc=0 >>
kafka.cp.vagrant
kafka.cp.vagrant | SUCCESS | rc=0 >>
kafka.cp.vagrant
It turns out I can get around the problem by not having this section in my ansible.cfg:
[ssh_connection]
control_path = %(directory)s/%%h-%%r
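My reading of why this helps (not stated above): all three Vagrant hosts connect to 127.0.0.1 as the same user and differ only by SSH port, so a control_path of %%h-%%r yields one shared ControlPath socket and every host gets multiplexed onto the first SSH connection (zk.cp.vagrant). Removing the setting falls back to Ansible's default, which takes the port into account. If you want to keep a custom value, a sketch that should also avoid the collision is:
[ssh_connection]
control_path = %(directory)s/%%h-%%p-%%r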

Vagrant Synced Folders not showing

I want to sync my OS X dev folder, which contains my applications, to my VM. My Vagrantfile (based off Phansible):
Vagrant.require_version ">= 1.5"
Vagrant.configure("2") do |config|
  config.vm.provider :virtualbox do |v|
    v.name = "default"
    v.customize [
      "modifyvm", :id,
      "--name", "default",
      "--memory", 512,
      "--natdnshostresolver1", "on",
      "--cpus", 1,
    ]
  end
  config.vm.box = "ubuntu/trusty64"
  config.vm.network :private_network, ip: "192.168.33.99"
  config.ssh.forward_agent = true
  if which('ansible-playbook')
    config.vm.provision "ansible" do |ansible|
      ansible.playbook = "ansible/playbook.yml"
      ansible.inventory_path = "ansible/inventories/dev"
      ansible.limit = 'all'
      ansible.extra_vars = {
        private_interface: "192.168.33.99",
        hostname: "default"
      }
    end
  else
    config.vm.provision :shell, path: "ansible/windows.sh", args: ["default"]
  end
  config.vm.synced_folder "/Users/xylar/Code", "/vagrant", type: "nfs"
end
When I vagrant up:
==> default: Exporting NFS shared folders...
==> default: Preparing to edit /etc/exports. Administrator privileges will be required...
==> default: Mounting NFS shared folders...
==> default: Running provisioner: ansible...
There are no error messages, and when I vagrant ssh and view the contents of the vagrant folder I only see some dotfiles (ansible, bash, etc.). Is there something I have missed?
I was being foolish. Once I had ssh'd into the box I thought I was in the synced folder (as it was called vagrant), but I was actually in /home/vagrant, while the synced location was /vagrant.
You need to specify the mount options:
config.vm.synced_folder "/Users/xylar/Code", "/vagrant", "nfs" => { :mount_options => ['dmode=777', 'fmode=777'] }
