I can't figure this out. I have host_vars that I want to pass into my Ansible role. I can iterate through the items in my playbook, but it doesn't work when I pass an item into the role. My host inventory looks like this:
hosts:
  host_one:
    domain: one
    ip: 172.18.1.1
    connection:
      - connection_name: two
        connection_ip: 172.18.1.2
      - connection_index: three
        local_hub_ip: 172.18.1.3
  host_two:
    domain: two
    ip: 172.18.1.2
For instance, this works correctly:
tasks:
  - debug:
      msg: "{{ item.connection_name }}"
    with_items:
      - "{{ connection }}"
This will correctly print the connection_name for each connection I have, "two" and "three". However, if I try to pass it into a role:
tasks:
  - name: Adding several connections
    include_role:
      name: connection-create
    with_items:
      - "{{ connection }}"
in which my role "connection-create" uses a variable called "connection_name", I get:
FAILED! => {"msg": "'connection_name' is undefined"}
Why doesn't this work?
The loop with_items: "{{ connection }}" creates the loop variable item. The included role can use
item.connection_name
item.connection_ip
The loop variable can be renamed if needed; see Loop control in the Ansible documentation, and the sketch below.
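For example, a minimal sketch; the debug task inside the role is an assumption, added only to show how the loop variable is referenced:

tasks:
  - name: Adding several connections
    include_role:
      name: connection-create
    loop: "{{ connection }}"
    loop_control:
      loop_var: conn            # rename item to conn, e.g. to avoid clashes in nested loops

# roles/connection-create/tasks/main.yml (assumed contents)
- debug:
    msg: "{{ conn.connection_name | default('unnamed') }} at {{ conn.connection_ip | default('n/a') }}"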
I am trying to build up a config file that contains the list of my inventory host servers and their fields, e.g. IP, FQDN, etc.
Here is the relevant part of my inventory file:
ocp_cluster:
  hosts:
    anbootstrap.ocp.hattusas.tst:
      fqdn: anbootstrap.ocp.hattusas.tst
      ip: 10.72.217.92
    anmaster1.ocp.hattusas.tst:
      fqdn: anmaster1.ocp.hattusas.tst
      ip: 10.72.217.93
    anmaster2.ocp.hattusas.tst:
      fqdn: anmaster2.ocp.hattusas.tst
      ip: 10.72.217.94
    anmaster3.ocp.hattusas.tst:
And here is my playbook:
- name: Adding OCP Clusters to DHCP configuration
  debug:
    "{{ hostvars[item][fqdn] }}"
  loop: "{{ groups['ocp_cluster'] }}"
(I will use blockinfile soon)
When I run my playbook, I get an undefined variable error for fqdn. I tried using a for loop and it didn't help. Any suggestions?
Thanks a lot.
The fixed task is below:

- debug:
    msg: "{{ hostvars[item]['fqdn'] }}"
  loop: "{{ groups['ocp_cluster'] }}"
The debug parameter msg was missing.
fqdn is a key of a dictionary, so it must be quoted inside the brackets; left unquoted, Ansible looks for a variable named fqdn. The same applies to ocp_cluster.
It's also possible to use dot notation and simplify the references to the dictionary attributes:
- debug:
    msg: "{{ hostvars[item].fqdn }}"
  loop: "{{ groups.ocp_cluster }}"
Background: I have a dynamic Ansible inventory that is built by a previously run process; I don't know the IPs until after that process completes. I have two groups defined in the inventory file: db servers and web servers. The specific task I am trying to complete is to create some_user@'dynamic_ip_of_webserver_group'.
I think I am close, but something's not quite right. In my dbserver role's main task I have:
- name: Create DB User
  mysql_user:
    name: dbuser
    host: "{{ item }}"
    password: "{{ mysql_wordpress_password }}"
    priv: "someDB.*:ALL"
  with_items:
    - "{{ ansible_hostname }}"
    - 127.0.0.1
    - ::1
    - localhost
    - "{{ hostvars[groups['webservers']] }}"
This errors out with:
TASK [dbservers : Create DB User] *******************************************************************************************************************************************************************
fatal: [10.10.10.13]: FAILED! => {"msg": "ansible.vars.hostvars.HostVars object has no element [u'10.10.10.30', u'10.10.10.240']"}
It is showing the right IPs, and there are only two, so both of those are correct. I think it is trying to access the inventory item as an object instead of the actual list?
Inventory File:
[webservers]
10.10.10.30
10.10.10.240
Simply:
- "{{ groups['webservers'] }}"
This works because with_items flattens the first nested level of lists.
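Put together, the corrected task from the question would look like this:

- name: Create DB User
  mysql_user:
    name: dbuser
    host: "{{ item }}"
    password: "{{ mysql_wordpress_password }}"
    priv: "someDB.*:ALL"
  with_items:
    - "{{ ansible_hostname }}"
    - 127.0.0.1
    - ::1
    - localhost
    - "{{ groups['webservers'] }}"    # flattened into 10.10.10.30 and 10.10.10.240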
I want to dynamically create an in-memory inventory that is a filtered version of a standard inventory, including only the hosts where a specific service is installed. The filtered inventory is to be used in a subsequent play.
So I identify the IP address of the host where the service is installed.
- name: find where the service is installed
  win_service:
    name: "{{ service }}"
  register: service_info
This returns a boolean exists value. Using this value as a condition, an attempt is made to add the host where the service is running:
- name: create filtered in memory inventory
  add_host:
    name: "{{ ansible_host }}"
  when: service_info.exists
The add_host module bypasses the play host loop and only runs once for all the hosts in the play; as such, this only works if the host that add_host happens to run against is the one that has the service installed.
Below is an attempt to force add_host to iterate across the hosts in the inventory. However, it appears that the hostvars, and therefore service_info.exists, are not passed through to add_host, so the conditional when check always returns false.
- name: create filtered in memory inventory
  add_host:
    name: "{{ ansible_host }}"
  when: service_info.exists
  with_items: "{{ ansible_play_batch }}"
Is there a way to pass the hosts with their hostvars to add_host as an iterator?
I suggest creating a task before add_host that writes a temporary file on the executor listing the servers that match the condition, and then looping in the add_host module over that file.
The example is taken from Improving use of add_host in ansible, a question I asked before:
---
- hosts: servers
  tasks:
    - name: find where the service is installed
      win_service:
        name: "{{ service }}"
      register: service_info

    - name: write server name in file on control node
      lineinfile:
        path: /tmp/servers_foo.txt
        state: present
        line: "{{ inventory_hostname }}"
      delegate_to: 127.0.0.1
      when: service_info.exists

    - name: assign target to group
      add_host:
        name: "{{ item }}"
        groups:
          - foo
      with_lines: cat /tmp/servers_foo.txt
      delegate_to: 127.0.0.1
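As an aside, the registered result is also reachable per host through hostvars, which would avoid the temporary file. A sketch, assuming service_info has been registered on every host in the play:

- name: create filtered in memory inventory
  add_host:
    name: "{{ item }}"
    groups:
      - foo
  when: hostvars[item].service_info.exists | default(false)
  with_items: "{{ ansible_play_batch }}"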
I want to run an Ansible playbook on multiple hosts and register the output to a variable. Using this variable, I then want to copy the output to a single file. The issue is that, in the end, the file contains the output of only one host. How can I add the output of all the hosts to the file one after another? I don't want to use serial: 1 as it slows down execution considerably with multiple hosts.
- hosts: all
  remote_user: cisco
  connection: local
  gather_facts: no
  vars_files:
    - group_vars/passwords.yml
  tasks:
    - name: Show command collection
      ntc_show_command:
        connection: ssh
        template_dir: /ntc-ansible/ntc-templates/templates
        platform: cisco_ios
        host: "{{ inventory_hostname }}"
        username: "{{ ansible_ssh_user }}"
        password: "{{ ansible_ssh_pass }}"
        command: "{{ commands }}"
      register: result

    - local_action: copy content="{{ result.response }}" dest='/home/user/show_cmd_ouput.txt'
The result variable will be registered as a fact on each host the ntc_show_command task ran on, so you should access the values through the hostvars dictionary.
- local_action:
    module: copy
    content: "{{ groups['all'] | map('extract', hostvars, 'result') | map(attribute='response') | list }}"
    dest: /home/user/show_cmd_ouput.txt
  run_once: true
You also need run_once: true because otherwise the action would still run as many times as there are hosts in the group.
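If the file should label each host's output, a Jinja2 loop in content is one option; a sketch using the same variable names:

- local_action:
    module: copy
    content: |
      {% for host in groups['all'] %}
      ===== {{ host }} =====
      {{ hostvars[host].result.response }}
      {% endfor %}
    dest: /home/user/show_cmd_ouput.txt
  run_once: true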
I have defined two groups of hosts: wmaster and wnodes. Each group runs in its own play:
- hosts: wmaster
  roles:
    - all
    - swarm-mode
  vars:
    - swarm_master: true

- hosts: wnodes
  roles:
    - all
    - swarm-mode
I use host variables (swarm_master) to define different behavior in some roles.
Now, my first play performs some initialization and I need to share data with the nodes. What I did is use set_fact in the first play and then look it up in the second play:
- set_fact:
    docker_worker_token: "{{ hostvars[swarm_master_ip].foo }}"
I don't like using swarm_master_ip. How about adding a dummy host global with, e.g., address 1.1.1.1 that does not get any role and serves just to hold the global facts/variables?
If you're using Ansible 2, then you can make use of delegate_facts during your first play:
- name: set fact on swarm nodes
  set_fact: docker_worker_token="{{ some_var }}"
  delegate_to: "{{ item }}"
  delegate_facts: True
  with_items: "{{ groups['wnodes'] }}"
This should delegate the set_fact task to every host in the wnodes group and, with delegate_facts: True, also store the resulting fact on those hosts instead of setting it on the inventory host currently being targeted by the first play.
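The second play can then read the fact directly on each node, for example to verify it:

- hosts: wnodes
  tasks:
    - debug: var=docker_worker_token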
How about to add a dummy host: global
I have actually found this suggestion to be quite useful in certain circumstances.
---
- hosts: my_server
  tasks:
    # create server_fact somehow
    - add_host:
        name: global
        my_server_fact: "{{ server_fact }}"

- hosts: host_group
  tasks:
    - debug: var=hostvars['global']['my_server_fact']
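Note that the global host never needs to be reachable: add_host only creates an in-memory inventory entry, and no connection is attempted unless a play actually targets it.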