skipping: no hosts matched - Ansible - Vagrant

I recently started looking into Ansible, so for testing purposes I am trying to provision a box. I am using Vagrant, VirtualBox, and Ansible to provision the box, but when I run vagrant provision it shows
skipping: no hosts matched
My Vagrantfile
config.vm.define"deploymaster" do |deploymaster|
deploymaster.vm.hostname = "deploymaster"
deploymaster.vm.network :private_network, ip: "192.168.33.10"
deploymaster.vm.network "forwarded_port", guest: 80, host: 6080, id: "http", auto_corect: true
deploymaster.vm.network "forwarded_port", guest: 443, host: 2201, id: "https"
deploymaster.vm.provider "virtualbox" do |vb|
vb.memory = "1024"
end
deploymaster.vm.provision :ansible do |ansible|
ansible.playbook = "../playbooks/inventory-change-callback.yml"
ansible.inventory_path = "../inventory/ansible/hosts.yml"
ansible.become = true
ansible.verbose = "vvv"
end
end
inventory hosts.yml file
deploymaster:
  hosts:
    192.168.33.10:
      hostname: deploymaster
elk:
  hosts:
    192.168.33.12:
      hostname: elk
  vars:
    retention: 30
Update 1
Updated the inventory file:
deploymaster:
  hosts:
    deploy-master:
      ansible_host: 192.168.33.10
elk:
  hosts:
    elk-node:
      ansible_host: 192.168.33.12
Playbook
---
- name: Set inventory path
  hosts: deploymaster
  gather_facts: no

- include: some-play-book.yml
I am passing my inventory file to the Vagrant provisioner as mentioned here and here, but I am still not able to fix it. What am I missing here?
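One detail worth checking, since it comes up again in the answers below: Vagrant's Ansible provisioner calls ansible-playbook with --limit set to the machine name by default, so with a custom inventory the host and group names have to line up with that name, or the limit has to be widened. A minimal sketch of the provisioner block with the limit widened (everything else as in the Vagrantfile above):

deploymaster.vm.provision :ansible do |ansible|
  ansible.playbook = "../playbooks/inventory-change-callback.yml"
  ansible.inventory_path = "../inventory/ansible/hosts.yml"
  # The default limit is the machine name ("deploymaster"); widen it so the
  # names in the custom inventory can still match the play's hosts: pattern.
  ansible.limit = "all"
  ansible.become = true
  ansible.verbose = "vvv"
end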

Related

Ansible eth1 ipv4 address in j2 template

I'm currently working on a school assignment where I have to configure HAProxy to load balance between my two webservers.
I'm deploying the machines via Vagrant in VirtualBox. After this, my Ansible playbook runs and starts off by configuring the webservers. After the webservers are done, it configures the loadbalancer.
Sadly, I can't manage to get the IPv4 address of both eth1 adapters added to haproxy.conf. I repeatedly get the message that Ansible can't find the variable inside the hostvars.
TASK [Configuring haproxy]
fatal: [HDVLD-TEST-LB01]: FAILED! => {"changed": false, "msg":
"AnsibleUndefinedVariable: 'ansible.vars.hostvars.HostVarsVars object'
has no attribute 'ansible_eth1'"}
On top of this, HAProxy is not responding on 10.2.2.20:8080 -> Chrome gives me an
ERR_CONNECTION_REFUSED
I hope someone here can help me out.
I'll paste my code down here.
Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
  # config.ssh.insert_key = false

  #webserver
  (1..2).each do |i|
    config.vm.define "HDVLD-TEST-WEB0#{i}" do |webserver|
      webserver.vm.box = "ubuntu/trusty64"
      webserver.vm.hostname = "HDVLD-TEST-WEB0#{i}"
      webserver.vm.network :private_network, ip: "10.2.2.1#{i}"
      webserver.vm.provider :virtualbox do |vb|
        vb.memory = "524"
        vb.customize ["modifyvm", :id, "--nested-hw-virt", "on"]
      end
      webserver.vm.provision "shell" do |shell|
        ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub")
        shell.inline = <<-SHELL
          echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
          echo #{ssh_pub_key} >> /root/.ssh/authorized_keys
        SHELL
      end
    end

    # config.vm.define "HDVLD-TEST-DB01" do |db_server|
    #   db_server.vm.box = "ubuntu/trusty64"
    #   db_server.vm.hostname = "HDVLD-TEST-DB01"
    #   db_server.vm.network :private_network, ip: "10.2.2.30"
    # end

    config.vm.define "HDVLD-TEST-LB01" do |lb_server|
      lb_server.vm.box = "ubuntu/trusty64"
      lb_server.vm.hostname = "HDVLD-TEST-LB01"
      lb_server.vm.network :private_network, ip: "10.2.2.20"
      lb_server.vm.provider :virtualbox do |vb|
        vb.memory = "524"
        vb.customize ["modifyvm", :id, "--nested-hw-virt", "on"]
      end
      lb_server.vm.provision "shell" do |shell|
        ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub")
        shell.inline = <<-SHELL
          echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
          echo #{ssh_pub_key} >> /root/.ssh/authorized_keys
        SHELL
      end
    end

    config.vm.provision :ansible do |ansible|
      ansible.playbook = "webserver_test.yml"
      ansible.groups = {
        "webservers" => ["HDVLD-TEST-WEB01", "HDVLD-TEST-WEB02"],
        "loadbalancer" => ["HDVLD-TEST-LB01"]
      }
    end
  end
end
Playbook.yml
- hosts: webservers
  become: true
  vars_files: vars/default.yml
  gather_facts: True
  tasks:
    # Getting the IP address of the eth1 interface
    - name: Gather facts from new server
      delegate_facts: True
      setup:
        filter: ansible_eth1.ipv4.address

    - name: Debug facts from Server
      delegate_facts: True
      debug:
        var: ansible_eth1.ipv4.address

    - name: UPurge
      apt: purge=yes

    - name: Install latest version of Apache
      apt: name=apache2 update_cache=yes state=latest

    - name: Install latest version of Facter
      apt: name=facter state=latest

    - name: Create document root for your domain
      file:
        path: /var/www/{{ http_host }}
        state: directory
        mode: '0755'

    - name: Copy your index page
      template:
        src: "files/index.html.j2"
        dest: "/var/www/{{ http_host }}/index.html"

    - name: Set up VirtualHost
      template:
        src: "files/apache.conf.j2"
        dest: "/etc/apache2/sites-available/{{ http_conf }}"
      notify: restart-apache

    - name: Enable new site {{ http_host }}
      command: a2ensite {{ http_host }}

    - name: Disable default site
      command: a2dissite 000-default
      when: disable_default
      notify: restart-apache

    - name: "UFW firewall allow HTTP on port {{ http_port }}"
      ufw:
        rule: allow
        port: "{{ http_port }}"
        proto: tcp

  handlers:
    - name: restart-apache
      service:
        name: apache2
        state: restarted
- hosts: loadbalancer
  become: true
  vars_files: vars/default.yml
  gather_facts: true
  tasks:
    - name: "Installing haproxy"
      package:
        name: "haproxy"
        state: present

    - name: "Starting haproxy"
      service:
        name: "haproxy"
        state: started
        enabled: yes

    - name: "Configuring haproxy"
      template:
        src: "files/haproxy.conf.j2"
        dest: "/etc/haproxy/haproxy.cfg"
      notify: restart-haproxy

    - name: "UFW firewall allow Proxy on port {{ proxy_port }}"
      ufw:
        rule: allow
        port: "{{ proxy_port }}"
        proto: tcp

    - name: "UFW firewall allow static port on port {{ staticlb_port }}"
      ufw:
        rule: allow
        port: "{{ staticlb_port }}"
        proto: tcp

    - name: Gather facts from new Server
      setup:
        filter: ansible_default_ipv4.address

  handlers:
    - name: restart-haproxy
      service:
        name: haproxy
        state: restarted
Haproxy.conf.j2
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
# https://www.haproxy.org/download/1.8/doc/configuration.txt
#
#---------------------------------------------------------------------
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
#
# 1) configure syslog to accept network log events. This is done
# by adding the '-r' option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
#
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 1000
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
# utilize system-wide crypto-policies
ssl-default-bind-ciphers PROFILE=SYSTEM
ssl-default-server-ciphers PROFILE=SYSTEM
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
listen haproxy-monitoring *:{{ proxy_port }}
#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend main
bind *:{{ http_port }}
acl url_static path_beg -i /static /images /javascript /stylesheets
acl url_static path_end -i .jpg .gif .png .css .js
use_backend static if url_static
default_backend app
#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
backend static
balance roundrobin
server static 127.0.0.1:{{ staticlb_port }} check
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend app
balance roundrobin
{% for host in groups['webservers'] %}
{{ hostvars[host].ansible_eth1.ipv4.address }}:{{ http_port }} check
{% endfor %}
defaults (vars file)
http_host: "hdvld"
http_conf: "hdvld.conf"
http_port: "80"
proxy_port: "8080"
disable_default: true
staticlb_port: "4331"
I'm doing something wrong, but I can't find the issue. I spent the whole day yesterday searching and trying things, so there are some commented-out pieces of code inside the files; please ignore them.
Update: added the inventory file.
This is the inventory file:
# Generated by Vagrant
HDVLD-TEST-LB01 ansible_host=127.0.0.1 ansible_port=2200 ansible_user='vagrant' ansible_ssh_private_key_file='/home/web01/VM2022/template/.vagrant/machines/HDVLD-TEST-LB01/virtualbox/private_key'
HDVLD-TEST-WEB02 ansible_host=127.0.0.1 ansible_port=2201 ansible_user='vagrant' ansible_ssh_private_key_file='/home/web01/VM2022/template/.vagrant/machines/HDVLD-TEST-WEB02/virtualbox/private_key'
HDVLD-TEST-WEB01 ansible_host=127.0.0.1 ansible_port=2222 ansible_user='vagrant' ansible_ssh_private_key_file='/home/web01/VM2022/template/.vagrant/machines/HDVLD-TEST-WEB01/virtualbox/private_key'
[webservers]
HDVLD-TEST-WEB01
HDVLD-TEST-WEB02
[loadbalancer]
HDVLD-TEST-LB01
In the first play (you could replace the first two tasks):
- name: N1
  hosts: webservers
  tasks:
    - name: get eth1 address
      set_fact:
        ips: "{{ ips | d({}) | combine({_ho: _ip}) }}"
      loop: "{{ ansible_play_hosts }}"
      vars:
        _ho: "{{ item }}"
        # take each looped host's own fact, not the current host's
        _ip: "{{ hostvars[item]['ansible_eth1']['ipv4']['address'] }}"

    - name: add variables to dummy host
      add_host:
        name: "variable_holder"
        shared_variable: "{{ ips }}"
:
In the second play:
- name: N2
  hosts: loadbalancer
  gather_facts: true
  vars:
    ips: "{{ hostvars['variable_holder']['shared_variable'] }}"
  tasks:
    - name: check the value of ips
      debug:
        var: ips
:
:
In the .j2 file, change:
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend app
    balance roundrobin
{% for host in groups['webservers'] if ips[host] is defined %}
    server {{ host }} {{ ips[host] }}:{{ http_port }} check
{% endfor %}

Vagrant/Ansible: access another machine's IP address

I have a Vagrantfile that creates 3 VMs and an Ansible playbook to manage these 3 machines (the inventory file is generated by Vagrant). I need to access VM 1's IP address in order to put it in the configuration files of the two other machines, but using the hostvars[vm1] variable doesn't give me the IP address of vm1.
Here is my Vagrantfile:
Vagrant.configure("2") do |config|
config.vm.network "private_network", type: "dhcp"
config.vm.provider "virtualbox" do |v|
v.memory = 512
end
config.vm.synced_folder ".", "/vagrant"
config.vm.box_check_update = false
config.vm.define "vm1" do |vm1|
vm1.vm.box = "hashicorp/bionic64"
end
config.vm.define "vm2" do |vm2|
vm2.vm.box = "hashicorp/bionic64"
end
config.vm.define "vm3" do |vm3|
vm3.vm.box = "hashicorp/bionic64"
end
config.vm.provision "ansible" do |ansible|
ansible.playbook = "playbook.yml"
#ansible.ask_become_pass = true
ansible.groups = {
"node_master" => ["vm1"],
"node_replicas" => ["vm2", "vm3"],
"node:children" => ["node_master", "node_replicas"]
}
end
How can I solve this problem?
As configured, your Ansible provisioner will run three times: once independently for each machine, each call using a limit set to the current machine name.
In this situation, the facts for all other machines are not gathered in your playbook, and hostvars[vm1] will be empty (unless you are currently running on vm1).
What you can try is to declare the provisioner on a single machine only (the best bet being vm3, the last one) and change the default current-machine limit to all:
config.vm.define "vm3" do |vm3|
vm3.vm.box = "hashicorp/bionic64"
vm3.vm.provision "ansible" do |ansible|
ansible.playbook = "playbook.yml"
ansible.limit = "all"
#ansible.ask_become_pass = true
ansible.groups = {
"node_master" => ["vm1"],
"node_replicas" => ["vm2", "vm3"],
"node:children" => ["node_master", "node_replicas"]
}
end
end
This way, your playbook will run on all your VMs at once, in parallel, and you should be able to access facts from all the hosts you target in your playbook.
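Once the single provisioning run covers all machines, the other hosts can read vm1's facts. A minimal sketch of what playbook.yml could then do (the interface name ansible_eth1 is an assumption about how the private_network adapter shows up inside the guest; verify with ansible vm1 -m setup):

- hosts: all            # gather facts for every VM first
  gather_facts: true

- hosts: node_replicas
  tasks:
    # vm1's private-network address, taken from its gathered facts
    - debug:
        msg: "master ip: {{ hostvars['vm1']['ansible_eth1']['ipv4']['address'] }}"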
I have no clue how to solve your specific version.
However, I use Vagrant and Ansible separately: Vagrant only to build the hosts with vagrant up, and Ansible to manage the configuration on those hosts.
I use this Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.provider "virtualbox" do |v|
    v.memory = 2048
    v.cpus = 1
  end
  config.ssh.insert_key = false

  config.vm.define "multi-1" do |localMulti1|
    localMulti1.vm.box = "ubuntu/xenial64"
    localMulti1.vm.hostname = "multi-1"
    localMulti1.vm.network :forwarded_port, guest: 22, host: 30001, id: "ssh"
    localMulti1.vm.network "private_network", ip: "10.0.0.111"
  end

  config.vm.define "multi-2" do |localMulti2|
    localMulti2.vm.box = "ubuntu/xenial64"
    localMulti2.vm.hostname = "multi-2"
    localMulti2.vm.network :forwarded_port, guest: 22, host: 30002, id: "ssh"
    localMulti2.vm.network "private_network", ip: "10.0.0.112"
  end

  config.vm.define "multi-3" do |localMulti3|
    localMulti3.vm.box = "ubuntu/xenial64"
    localMulti3.vm.hostname = "multi-3"
    localMulti3.vm.network :forwarded_port, guest: 22, host: 30003, id: "ssh"
    localMulti3.vm.network "private_network", ip: "10.0.0.113"
    localMulti3.vm.provision "ansible" do |ansible|
      ansible.playbook = "playbook.yml"
      ansible.inventory_path = "inventory"
      ansible.limit = "local_multi"
    end
  end
end
I place this in my inventory file:
[local_multi]
multi-1 ansible_ssh_user=vagrant ansible_host=127.0.0.1 ansible_ssh_port=30001 ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key
multi-2 ansible_ssh_user=vagrant ansible_host=127.0.0.1 ansible_ssh_port=30002 ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key
multi-3 ansible_ssh_user=vagrant ansible_host=127.0.0.1 ansible_ssh_port=30003 ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key
Your playbook.yml
---
- hosts: local_multi
  become: True
  tasks:
    - name: check who is master
      debug:
        msg: "{{ node_master }}"
      when: node_master is defined
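With a static inventory like this you can also run the playbook by hand, outside of vagrant provision; a sketch of the invocation, assuming the files sit next to each other as in the layout below:

ansible-playbook -i inventory playbook.yml --limit local_multi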
Now you can place all your Ansible vars in group_vars or host_vars (a sketch of a host_vars file follows the layout below):
./inventory
./playbook.yml
./Vagrantfile
./group_vars/all.yml
./host_vars/multi-1.yml
./host_vars/multi-2.yml
./host_vars/multi-3.yml
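As an illustration, host_vars/multi-1.yml could then hold per-host variables such as the node_master flag the playbook above checks (the variable and its value are hypothetical, not from the original answer):

# host_vars/multi-1.yml (hypothetical example)
node_master: true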

How to use hostvars in a Vagrant environment

I'm having trouble using hostvars to pass a variable to another host in a Vagrant environment. Here is the code I wrote:
Vagrant.configure("2") do |config|
config.vm.define "server_1" do |server_1|
server_1.vm.hostname = "n1"
server_1.vm.box = "centos/7"
server_1.vm.network "public_network", bridge: "wlp1s0", ip: "192.168.0.50"
end
config.vm.define "worker_1" do |worker_1|
worker_1.vm.hostname = "n2"
worker_1.vm.box = "centos/7"
worker_1.vm.network "public_network", bridge: "wlp1s0", ip: "192.168.0.51"
end
config.vm.provider "virtualbox" do |vb|
vb.memory = 1024
end
config.vm.provision "ansible" do |ansible|
ansible.playbook = "t0a.yml"
end
end
t0a.yml
---
- hosts: server*
  tasks:
    - set_fact: hello=world

- hosts: worker*
  tasks:
    - debug:
        msg: "{{ hostvars['server_1']['hello'] }}"
expected:
TASK [show] *******************************************************************
ok: [worker_1] => {
"msg": [
"works"
]
}
actual:
TASK [debug] ********************************************************************
fatal: [worker_1]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'hello'\n\nThe error appears to have been in '/home/kayke/Documentos/vm-vagrant/provision-ansible/centos/t0_tests/t0a.yml': line 8, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n tasks:\n - debug:\n ^ here\n"}
What you want is a single playbook that targets both the server and the worker by matching on their host patterns, because that way there is only a single Ansible run; what is happening now is that Ansible is run twice, once for each host. Thus:
- hosts: server*
  tasks:
    - set_fact: hello=world

- hosts: worker*
  tasks:
    - debug:
        msg: "{{ hostvars['server_1']['hello'] }}"
and invoked at the very end, like you see here: https://github.com/kubernetes-sigs/kubespray/blob/v2.8.3/Vagrantfile#L177-L193
I haven't studied Vagrant enough to know whether it will write out an inventory file for you, but if you need an example, kubespray also generates an inventory file from all the known VMs: https://github.com/kubernetes-sigs/kubespray/blob/v2.8.3/Vagrantfile#L69-L75
If you don't like that approach, you can also use fact caching plugins to make Ansible write out the fact cache for the hosts in a way that the worker playbooks can read back in, but as you might suspect, that's a lot more work.
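For completeness, a rough sketch of what enabling a fact cache looks like in ansible.cfg (the jsonfile backend is one of several plugins; the directory and timeout here are arbitrary examples, not taken from the original answer):

[defaults]
gathering = smart
fact_caching = jsonfile
# directory where per-host fact files are written between runs
fact_caching_connection = .ansible_facts_cache
# seconds before cached facts are considered stale
fact_caching_timeout = 86400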

Ansible re-provisions the same host even though the inventory file is set up correctly

I've been trying to debug this for a while now; I thought I had it working, but then I made some other changes and now I'm back where I started.
Basically, I have Vagrant looping over a list of machine definitions, and while my Ansible inventory looks perfectly fine, I find that only one host is actually being provisioned.
Generated Ansible inventory -- the SSH ports are all different and the groups are correct:
# Generated by Vagrant
kafka.cp.vagrant ansible_host=127.0.0.1 ansible_port=2200 ansible_user='vagrant' ansible_ssh_private_key_file='/workspace/confluent/cp-ansible/vagrant/.vagrant/machines/kafka.cp.vagrant/virtualbox/private_key' kafka='{"broker": {"id": 1}}'
zk.cp.vagrant ansible_host=127.0.0.1 ansible_port=2222 ansible_user='vagrant' ansible_ssh_private_key_file='/workspace/confluent/cp-ansible/vagrant/.vagrant/machines/zk.cp.vagrant/virtualbox/private_key'
connect.cp.vagrant ansible_host=127.0.0.1 ansible_port=2201 ansible_user='vagrant' ansible_ssh_private_key_file='/workspace/confluent/cp-ansible/vagrant/.vagrant/machines/connect.cp.vagrant/virtualbox/private_key'
[preflight]
zk.cp.vagrant
kafka.cp.vagrant
connect.cp.vagrant
[zookeeper]
zk.cp.vagrant
[broker]
kafka.cp.vagrant
[schema-registry]
kafka.cp.vagrant
[connect-distributed]
connect.cp.vagrant
Generated hosts file -- IPs and hostnames are correct
## vagrant-hostmanager-start id: aca1499c-a63f-4747-b39e-0e71ae289576
192.168.100.101 zk.cp.vagrant
192.168.100.102 kafka.cp.vagrant
192.168.100.103 connect.cp.vagrant
## vagrant-hostmanager-end
Ansible playbook I want to run -- the plays correctly correspond to the groups in my inventory:
- hosts: preflight
  tasks:
    - import_role:
        name: confluent.preflight

- hosts: zookeeper
  tasks:
    - import_role:
        name: confluent.zookeeper

- hosts: broker
  tasks:
    - import_role:
        name: confluent.kafka-broker

- hosts: schema-registry
  tasks:
    - import_role:
        name: confluent.schema-registry

- hosts: connect-distributed
  tasks:
    - import_role:
        name: confluent.connect-distributed
For any code missing here, see Confluent :: cp-ansible.
The following is a sample of my Vagrantfile. (I made a fork, but haven't committed until I get this working...)
I know that this if index == machines.length - 1 should work according to the Vagrant documentation, and it does start all the machines and then only runs Ansible on the last machine, but all the tasks are executed on the first one for some reason.
machines = {"zk"=>{"ports"=>{2181=>nil}, "groups"=>["preflight", "zookeeper"]}, "kafka"=>{"memory"=>3072, "cpus"=>2, "ports"=>{9092=>nil, 8081=>nil}, "groups"=>["preflight", "broker", "schema-registry"], "vars"=>{"kafka"=>"{\"broker\": {\"id\": 1}}"}}, "connect"=>{"ports"=>{8083=>nil}, "groups"=>["preflight", "connect-distributed"]}}
Vagrant.configure("2") do |config|
if Vagrant.has_plugin?("vagrant-hostmanager")
config.hostmanager.enabled = true
config.hostmanager.manage_host = true
config.hostmanager.ignore_private_ip = false
config.hostmanager.include_offline = true
end
# More info on http://fgrehm.viewdocs.io/vagrant-cachier/usage
if Vagrant.has_plugin?("vagrant-cachier")
config.cache.scope = :box
end
if Vagrant.has_plugin?("vagrant-vbguest")
config.vbguest.auto_update = false
end
config.vm.box = VAGRANT_BOX
config.vm.box_check_update = false
config.vm.synced_folder '.', '/vagrant', disabled: true
machines.each_with_index do |(machine, machine_conf), index|
hostname = getFqdn(machine.to_s)
config.vm.define hostname do |v|
v.vm.network "private_network", ip: "192.168.100.#{101+index}"
v.vm.hostname = hostname
machine_conf['ports'].each do |guest_port, host_port|
if host_port.nil?
host_port = guest_port
end
v.vm.network "forwarded_port", guest: guest_port, host: host_port
end
v.vm.provider "virtualbox" do |vb|
vb.memory = machine_conf['memory'] || 1536 # Give overhead for 1G default java heaps
vb.cpus = machine_conf['cpus'] || 1
end
if index == machines.length - 1
v.vm.provision "ansible" do |ansible|
ansible.compatibility_mode = '2.0'
ansible.limit = 'all'
ansible.playbook = "../plaintext/all.yml"
ansible.become = true
ansible.verbose = "vv"
# ... defined host and group variables here
end # Ansible provisioner
end # If last machine
end # machine configuration
end # for each machine
end
I set up an Ansible task like this:
- debug:
    msg: "FQDN: {{ansible_fqdn}}; Hostname: {{inventory_hostname}}; IPv4: {{ansible_default_ipv4.address}}"
Just with that task, notice that ansible_fqdn below is always zk.cp.vagrant, which lines up with the fact that only that VM is getting provisioned by Ansible.
ok: [zk.cp.vagrant] => {
"msg": "FQDN: zk.cp.vagrant; Hostname: zk.cp.vagrant; IPv4: 10.0.2.15"
}
ok: [kafka.cp.vagrant] => {
"msg": "FQDN: zk.cp.vagrant; Hostname: kafka.cp.vagrant; IPv4: 10.0.2.15"
}
ok: [connect.cp.vagrant] => {
"msg": "FQDN: zk.cp.vagrant; Hostname: connect.cp.vagrant; IPv4: 10.0.2.15"
}
Update with a minimal example: hostname -f returns the same host every time, and I assume that is what gather_facts runs to set ansible_fqdn:
ansible all --private-key=~/.vagrant.d/insecure_private_key --inventory-file=/workspace/confluent/cp-ansible/vagrant/.vagrant/provisioners/ansible/inventory -a 'hostname -f' -f1
zk.cp.vagrant | SUCCESS | rc=0 >>
kafka.cp.vagrant
connect.cp.vagrant | SUCCESS | rc=0 >>
kafka.cp.vagrant
kafka.cp.vagrant | SUCCESS | rc=0 >>
kafka.cp.vagrant
Turns out I can get around the problem by removing this section from my ansible.cfg (with every VM reachable as 127.0.0.1 with the same user, a control path built only from host and user makes SSH multiplex every connection onto the first VM's master socket):
[ssh_connection]
control_path = %(directory)s/%%h-%%r
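An alternative sketch, not from the original answer: keep SSH connection sharing but make the socket path unique per port, since every Vagrant VM is reached as 127.0.0.1 on a different forwarded port:

[ssh_connection]
# %%h = host, %%p = port, %%r = remote user (the doubled % escapes Ansible's
# own config interpolation); including the port keeps the sockets distinct.
control_path = %(directory)s/%%h-%%p-%%r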

How do I get the IP address of another host in Ansible?

I have Vagrant configured with multiple machines and Ansible:
config.vm.box = "ubuntu/trusty64"
config.vm.define "my_server" do |my_server|
my_server.vm.network "private_network", ip: "192.168.50.4"
end
config.vm.define "my_agent" do |my_agent|
my_agent.vm.network "private_network", ip: "192.168.50.5"
end
config.vm.provision "ansible" do |ansible|
ansible.groups = {
"my-server" => ["my_server"],
"my-agent" => ["my_agent"],
"all_groups:children" => ["my-server", "my-agent"]
}
ansible.playbook = "./ansible/my.yml"
end
And Vagrant generates this inventory file:
# Generated by Vagrant
my_server ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_private_key_file=/.../private_key
my_agent ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200 ansible_ssh_private_key_file=/.../private_key
...
When I run Vagrant, my_server gets these IPs:
eth0: 10.0.2.15
eth1: 192.168.50.4
and my_agent gets ip:
eth0: 10.0.2.15
eth1: 192.168.50.5
I want to add the server's IP address (from eth1) to the agent's configuration.
I tried to debug the information about the server:
- debug: var=hostvars[item]
  with_items: groups['my-server']
but I only get:
ok: [my_agent] => (item=my_server) => {
"item": "my_server",
"var": {
"hostvars[item]": {
"ansible_ssh_host": "127.0.0.1",
"ansible_ssh_port": 2222,
"ansible_ssh_private_key_file": ".../private_key",
"group_names": [
"all_groups",
"my-server"
],
"inventory_hostname": "my_server",
"inventory_hostname_short": "my_server"
}
}
}
Is it possible to get the server's IP address in the agent role? If so, how can I do it?
I resolved the problem. I needed to add
ansible.limit = "all"
to the Ansible provisioner configuration, because Vagrant runs Ansible twice: first for my_server and then for my_agent, and the second run doesn't collect information about my_server. Now each Ansible run covers both servers.
Working Vagrant configuration:
config.vm.provision "ansible" do |ansible|
ansible.groups = {
"my-server" => ["my_server"],
"my-agent" => ["my_agent"],
"all_groups:children" => ["my-server", "my-agent"]
}
ansible.limit = "all"
ansible.playbook = "./ansible/my.yml"
end
And the Ansible agent role:
- debug: var=hostvars[item]["ansible_eth1"]["ipv4"]["address"]
  with_items: groups['my-server']
  sudo: yes
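As a usage sketch (not part of the original answer; the fact name and template paths are made up), the same lookup can be stored once and used when rendering the agent's configuration:

- name: remember the server's eth1 address
  set_fact:
    server_ip: "{{ hostvars[groups['my-server'][0]]['ansible_eth1']['ipv4']['address'] }}"

- name: render agent config with the server address
  template:
    src: agent.conf.j2     # hypothetical template referencing {{ server_ip }}
    dest: /etc/agent.conf  # hypothetical destination
  sudo: yes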
