Ansible magic variables not returning values

I'm trying to build a /etc/hosts file; however, I can't seem to get the hostvars to show up when running a playbook, though from the command line it works, to a point.
Here is my ansible.cfg file:
[defaults]
ansible_managed = Please do not change this file directly since it is managed by Ansible and will be overwritten
library = ./library
module_utils = ./module_utils
action_plugins = plugins/actions
callback_plugins = plugins/callback
filter_plugins = plugins/filter
roles_path = ./roles
# Be sure the user running Ansible has permissions on the logfile
log_path = $HOME/ansible/ansible.log
inventory = hosts
forks = 20
host_key_checking = False
gathering = smart
fact_caching = jsonfile
fact_caching_connection = $HOME/ansible/facts
fact_caching_timeout = 7200
nocows = 1
callback_whitelist = profile_tasks
stdout_callback = yaml
force_valid_group_names = ignore
inject_facts_as_vars = False
# Disable them in the context of https://review.openstack.org/#/c/469644
retry_files_enabled = False
# This is the default SSH timeout to use on connection attempts
# CI slaves are slow so by setting a higher value we can avoid the following error:
# Timeout (12s) waiting for privilege escalation prompt:
timeout = 60
[ssh_connection]
# see: https://github.com/ansible/ansible/issues/11536
control_path = %(directory)s/%%h-%%r-%%p
ssh_args = -o ControlMaster=auto -o ControlPersist=600s
pipelining = True
# Option to retry failed ssh executions if the failure is encountered in ssh itself
retries = 10
Here is my playbook:
- name: host file update
  hosts: baremetal
  become: true
  gather_facts: true
  vars:
    primarydomain: "cephcluster.local"
  tasks:
    - name: print ipv4 info
      debug:
        msg: "IPv4 addresses: {{ ansible_default_ipv4.address }}"
    - name: update host file
      lineinfile:
        dest: /etc/hosts
        regexp: '{{ hostvars[item].ansible_default_ipv4.address }}.*{{ item }} {{ item }}.{{ primarydomain }}$'
        line: "{{ hostvars[item].ansible_default_ipv4.address }} {{ item }} {{ item }}.{{ primarydomain }}"
        state: present
      with_items: "{{ groups.baremetal }}"
Here is my inventory file:
[baremetal]
svr1 ansible_host=192.168.59.10
svr2 ansible_host=192.168.59.11
svr3 ansible_host=192.168.59.12
When I run the playbook I get:
FAILED! =>
  msg: |-
    The task includes an option with an undefined variable. The error was: 'ansible_default_ipv4' is undefined
However, when I run
ansible baremetal -m gather_facts -a "filter=ansible_default_ipv4"
I get:
| SUCCESS => {
    "ansible_facts": {
        "ansible_default_ipv4": {
            "address": "192.168.59.20",
            "alias": "eno1",
            "broadcast": "192.168.59.255",
            "gateway": "192.168.59.1",
            "interface": "eno1",
            "macaddress": "80:18:44:e0:4b:a4",
            "mtu": 1500,
            "netmask": "255.255.255.0",
            "network": "192.168.59.0",
            "type": "ether"
        }
    },
    "changed": false,
    "deprecations": [],
    "warnings": []
}
But if I run
ansible baremetal -m gather_facts -a "filter=ansible_default_ipv4.address"
I get nothing in the return:
SUCCESS => {
    "ansible_facts": {},
    "changed": false,
    "deprecations": [],
    "warnings": []
}
I have even tried in the playbook:
- debug:
    msg: "IPv4 addresses: {{ ansible_default_ipv4 }}"
and nothing gets returned.
Not sure what I'm missing.

You should get used to using (and abusing) the debug module.
Since the ad-hoc ansible command already showed you that the facts are returned under ansible_facts, you could have added a simple task to your playbook:
- debug:
    var: ansible_facts
From that, you would have discovered that the fact you are looking for is nested in the ansible_facts dictionary under the key default_ipv4, and that you can access its address through the address property. (This is also why your bare ansible_default_ipv4 reference fails: your ansible.cfg sets inject_facts_as_vars = False, so facts are only reachable through the ansible_facts dictionary, without the ansible_ prefix.)
So, your debug task ends up being:
- debug:
    msg: "IPv4 addresses: {{ ansible_facts.default_ipv4.address }}"

Related

How can I access the 'invocation' variable from Return Values in an Ansible playbook?

When running an Ansible playbook in debug mode I can clearly see that one of the returned values is "invocation"; however, I struggle to get at it from the playbook. "register: xyz" only seems to give you "msg, status, failed, changed" from the returned values (at least in the task I use - proxmox_kvm). Is there a way of accessing the rest of them?
My code:
---
- hosts: pve
  become: yes
  vars:
    passwd: !vault |
      $ANSIBLE_VAULT;1.1;AES256
      <encrypted-password>
  tasks:
    - name: Stop VM
      proxmox_kvm:
        api_user: root@pam
        api_password: "{{ passwd }}"
        api_host: 10.0.0.1
        name: "{{ vm_name }}"
        node: my-node
        state: current
      register: output
    - debug:
        var: output
Output value from 'register':
"output": {
"changed": false,
"failed": false,
"msg": "VM RHEL8.1 with vmid = 101 is stopped",
"status": "stopped"
List of returned vars:
https://docs.ansible.com/ansible/latest/reference_appendices/common_return_values.html
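A quick way to find out which keys a registered result actually carries is to dump its key list; any key that is present can then be referenced with dot notation. A sketch (whether invocation shows up depends on the module and on no_log):
- debug:
    msg: "Returned keys: {{ output.keys() | list }}"
- debug:
    var: output.invocation
  when: output.invocation is defined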

Not able to gather facts of ansible host machine

The setup module in Ansible gives an error when I try to set custom facts on the host machine from the control machine.
---
- hosts: test-servers
  gather_facts: false
  tasks:
    - name: deleting Facts directory
      file:
        path: /etc/ansible/facts.d/
        state: absent
    - name: Creates a directory
      file:
        path: /etc/ansible/facts.d/
        recurse: yes
        state: directory
    - name: Copy custom date facts to host machine
      copy:
        src: /app/ansible_poc/roles/custom_facts/templates/facts.d/getdate.fact
        dest: /etc/ansible/facts.d/getdate.fact
        mode: 0755
    - name: Copy custom role facts to host machine
      copy:
        src: /app/ansible_poc/roles/custom_facts/templates/facts.d/getrole.fact
        dest: /etc/ansible/facts.d/getrole.fact
        mode: 0755
    - name: Reloading facts
      setup:
    - name: Display message
      debug:
        msg: "{{ ansible_local.getdate.date.date }}"
    - name: Display message
      debug:
        msg: "{{ ansible_local.getrole.role.role }}"
I get the following error when I try to collect facts from the host machine. I have set up the files getdate.fact and getrole.fact, which contain, respectively:
#############getdate.fact###############
echo [date]
echo date= `date`
########################################
#############getrole.fact###############
echo [role]
echo role= `whoami`
########################################
and when I try to run the playbook main.yml I get the following error:
[root@ansibletower tasks]# ansible -m setup test-servers
192.168.111.28 | FAILED! => {
    "changed": false,
    "cmd": "/etc/ansible/facts.d/getdate.fact",
    "msg": "[Errno 8] Exec format error",
    "rc": 8
}
192.168.111.27 | FAILED! => {
    "changed": false,
    "cmd": "/etc/ansible/facts.d/getdate.fact",
    "msg": "[Errno 8] Exec format error",
    "rc": 8
}
If I recall correctly, executables are expected to return JSON:
#!/bin/bash
echo '{ "date" : "'$( date )'" }'
You probably need to add a "shebang" line to your fact scripts, i.e. getdate.fact should look like:
#!/bin/sh
echo [date]
echo date=`date`
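Combining both answers, a working executable fact needs a shebang and should print JSON (a sketch, assuming the script is installed as /etc/ansible/facts.d/getdate.fact with mode 0755):
#!/bin/bash
# Executable local facts must emit JSON on stdout
echo "{ \"date\": \"$(date)\" }"
You can then verify the result with an ad-hoc call:
ansible test-servers -m setup -a "filter=ansible_local"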

Ansible: How to specify an array or list element fact with yaml?

When we check hostvars with:
- name: Display all variables/facts known for a host
  debug: var=hostvars[inventory_hostname]
We get:
ok: [default] => {
    "hostvars[inventory_hostname]": {
        "admin_email": "admin@surfer190.com",
        "admin_user": "root",
        "ansible_all_ipv4_addresses": [
            "192.168.35.19",
            "10.0.2.15"
        ],...
How would I specify the first element of the "ansible_all_ipv4_addresses" list?
Use dot notation
"{{ ansible_all_ipv4_addresses.0 }}"
This works just like it would in Python, meaning you can access keys with quotes and list indexes with an integer.
- set_fact:
    ip_address_1: "{{ hostvars[inventory_hostname]['ansible_all_ipv4_addresses'][0] }}"
    ip_address_2: "{{ hostvars[inventory_hostname]['ansible_all_ipv4_addresses'][1] }}"
- name: Display 1st ipaddress
  debug:
    var: ip_address_1
- name: Display 2nd ipaddress
  debug:
    var: ip_address_2
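If you only ever need the first element, the first filter is an equivalent alternative (a minimal sketch):
- debug:
    msg: "First IPv4 address: {{ ansible_all_ipv4_addresses | first }}"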
I had this same challenge when trying to parse the result of a command in Ansible.
So the result was:
{
    "changed": true,
    "instance_ids": [
        "i-0a243240353e84829"
    ],
    "instances": [
        {
            "id": "i-0a243240353e84829",
            "state": "running",
            "hypervisor": "xen",
            "tags": {
                "Backup": "FES",
                "Department": "Research"
            },
            "tenancy": "default"
        }
    ],
    "tagged_instances": [],
    "_ansible_no_log": false
}
And I wanted to pull the value of state out of the registered result in the playbook.
Here's how I did it:
Since the result is a hash containing an array of hashes (that is, state lives in the hash at index 0 of the instances array), I modified my playbook to look this way:
---
- name: Manage AWS EC2 instance
  hosts: localhost
  connection: local
  # gather_facts: false
  tasks:
    - name: AWS EC2 Instance Restart
      ec2:
        instance_ids: '{{ instance_id }}'
        region: '{{ aws_region }}'
        state: restarted
        wait: True
      register: result
    - name: Show result of task
      debug:
        var: result.instances.0.state
I saved the value of the command using register in a variable called result and then got the value of state in the variable using:
result.instances.0.state
This time when the command ran, I got the result as:
TASK [Show result of task] *****************************************************
ok: [localhost] => {
    "result.instances.0.state": "running"
}
That's all.
I hope this helps
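As a side note, if you wanted the state of every instance rather than just the first, mapping over the list works too (a sketch using the same registered result):
- debug:
    msg: "All instance states: {{ result.instances | map(attribute='state') | list }}"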

Ansible AWS Dynamic Inventory Groups Fail to Match Play Hosts

I'm having trouble getting my Ansible play's hosts to match the AWS dynamic groups that are coming back for my dynamic inventory. Let's break this problem down.
Given this output of ec2.py --list:
$ ./devops/inventories/dynamic/ec2.py --list
{
    "_meta": {
        "hostvars": {
            "54.37.213.132": {
                "ec2__in_monitoring_element": false,
                "ec2_ami_launch_index": "0",
                "ec2_architecture": "x86_64",
                "ec2_client_token": "",
                "ec2_dns_name": "ec2-52-37-203-132.us-west-2.compute.amazonaws.com",
                "ec2_ebs_optimized": false,
                "ec2_eventsSet": "",
                "ec2_group_name": "",
                "ec2_hypervisor": "xen",
                "ec2_id": "i-d352c50b",
                "ec2_image_id": "ami-63b25203",
                "ec2_instance_profile": "",
                "ec2_instance_type": "t2.micro",
                "ec2_ip_address": "54.37.213.132",
                "ec2_item": "",
                "ec2_kernel": "",
                "ec2_key_name": "peaker-v1-keypair",
                "ec2_launch_time": "2016-03-11T20:45:44.000Z",
                "ec2_monitored": false,
                "ec2_monitoring": "",
                "ec2_monitoring_state": "disabled",
                "ec2_persistent": false,
                "ec2_placement": "us-west-2a",
                "ec2_platform": "",
                "ec2_previous_state": "",
                "ec2_previous_state_code": 0,
                "ec2_private_dns_name": "ip-172-31-43-132.us-west-2.compute.internal",
                "ec2_private_ip_address": "172.31.43.132",
                "ec2_public_dns_name": "ec2-52-37-203-132.us-west-2.compute.amazonaws.com",
                "ec2_ramdisk": "",
                "ec2_reason": "",
                "ec2_region": "us-west-2",
                "ec2_requester_id": "",
                "ec2_root_device_name": "/dev/xvda",
                "ec2_root_device_type": "ebs",
                "ec2_security_group_ids": "sg-824ac0e5",
                "ec2_security_group_names": "peaker-v1-security-group",
                "ec2_sourceDestCheck": "true",
                "ec2_spot_instance_request_id": "",
                "ec2_state": "running",
                "ec2_state_code": 16,
                "ec2_state_reason": "",
                "ec2_subnet_id": "subnet-b96e1bce",
                "ec2_tag_Environment": "v1",
                "ec2_tag_Name": "peaker-v1-ec2",
                "ec2_virtualization_type": "hvm",
                "ec2_vpc_id": "vpc-5fe8ae3a"
            }
        }
    },
    "ec2": [
        "54.37.213.132"
    ],
    "tag_Environment_v1": [
        "54.37.213.132"
    ],
    "tag_Name_peaker-v1-ec2": [
        "54.37.213.132"
    ],
    "us-west-2": [
        "54.37.213.132"
    ]
}
I should be able to write a playbook that matches some of the groups coming back:
---
# playbook
- name: create s3 bucket with policy
  hosts: localhost
  gather_facts: yes
  tasks:
    - name: s3
      s3:
        bucket: "fake"
        region: "us-west-2"
        mode: create
        permission: "public-read-write"
      register: s3_output
    - debug: msg="{{ s3_output }}"
- name: test on remote machine
  hosts: ec2
  gather_facts: yes
  tasks:
    - name: test on remote machine
      file:
        dest: "/home/ec2-user/test/"
        owner: ec2-user
        group: ec2-user
        mode: 0700
        state: directory
      become: yes
      become_user: ec2-user
However, when I run --list-hosts for these plays, it's obvious that the play hosts are not matching anything coming back:
$ ansible-playbook -i devops/inventories/dynamic/ec2/ec2.py devops/build_and_bundle_example.yml --ask-vault-pass --list-hosts
Vault password:
 [WARNING]: provided hosts list is empty, only localhost is available

playbook: devops/build_and_bundle_example.yml

  play #1 (localhost): create s3 bucket with policy	TAGS: []
    pattern: [u'localhost']
    hosts (1):
      localhost

  play #2 (ec2): test on remote machine	TAGS: []
    pattern: [u'ec2']
    hosts (0):
The quick fix for what you're doing:
change hosts: localhost in your playbook to hosts: all
It would never work with just a dynamic inventory if you're going to keep hosts: localhost in your playbook...
If so, you must combine dynamic & static inventories. Create a file at ./devops/inventories/dynamic/static.ini (on the same level as ec2.py and ec2.ini) and put in this content:
[localhost]
localhost
[ec2_tag_Name_peaker_v1_ec2]
[aws-hosts:children]
localhost
ec2_tag_Name_peaker_v1_ec2
After that, you will be able to run a quick check:
ansible -i devops/inventories/dynamic/ec2 aws-hosts -m ping
and your playbook itself:
ansible-playbook -i devops/inventories/dynamic/ec2 \
devops/build_and_bundle_example.yml --ask-vault-pass
NOTE: devops/inventories/dynamic/ec2 is a path to a folder, but it will automatically be resolved into a hybrid dynamic & static inventory with access to the aws-hosts group name.
In fact, this isn't the best use of inventory, but it's important to understand that by combining dynamic & static inventories you're just appending new group names to a particular dynamic host:
ansible -i devops/inventories/dynamic/ec2 all -m debug \
-a "var=hostvars[inventory_hostname].group_names"

How do I register a variable and persist it between plays targeted on different nodes?

I have an Ansible playbook, where I would like a variable I register in a first play targeted on one node to be available in a second play, targeted on another node.
Here is the playbook I am using:
---
- hosts: localhost
  gather_facts: no
  tasks:
    - command: echo "hello world"
      register: foo
- hosts: main
  gather_facts: no
  tasks:
    - debug:
        msg: "{{ foo.stdout }}"
But, when I try to access the variable in the second play, targeted on main, I get this message:
The task includes an option with an undefined variable. The error was: 'foo' is undefined
How can I access foo, registered on localhost, from main?
The problem you're running into is that you're trying to reference facts/variables of one host from those of another host.
You need to keep in mind that in Ansible, the variable foo assigned to the host localhost is distinct from the variable foo assigned to the host main or any other host.
If you want to access one host's facts/variables from another host then you need to explicitly reference them via the hostvars variable. There's a bit more discussion on this in this question.
Suppose you have a playbook like this:
- hosts: localhost
  gather_facts: no
  tasks:
    - command: echo "hello world"
      register: foo
- hosts: localhost
  gather_facts: no
  tasks:
    - debug:
        var: foo
This will work because you're referencing the host localhost and localhost's instance of the variable foo in both plays.
The output of this playbook is something like this:
PLAY [localhost] **************************************************

TASK: [command] ***************************************************
changed: [localhost]

PLAY [localhost] **************************************************

TASK: [debug] *****************************************************
ok: [localhost] => {
    "var": {
        "foo": {
            "changed": true,
            "cmd": [
                "echo",
                "hello world"
            ],
            "delta": "0:00:00.004585",
            "end": "2015-11-24 20:49:27.462609",
            "invocation": {
                "module_args": "echo \"hello world\"",
                "module_complex_args": {},
                "module_name": "command"
            },
            "rc": 0,
            "start": "2015-11-24 20:49:27.458024",
            "stderr": "",
            "stdout": "hello world",
            "stdout_lines": [
                "hello world"
            ],
            "warnings": []
        }
    }
}
If you modify this playbook slightly to run the first play on one host and the second play on a different host, you'll get the error that you encountered.
Solution
The solution is to use Ansible's built-in hostvars variable to have the second host explicitly reference the first host's variables.
So modify the first example like this:
- hosts: localhost
  gather_facts: no
  tasks:
    - command: echo "hello world"
      register: foo
- hosts: main
  gather_facts: no
  tasks:
    - debug:
        var: foo
      when: foo is defined
    - debug:
        var: hostvars['localhost']['foo']
        ## alternatively, you can use:
        # var: hostvars.localhost.foo
      when: hostvars['localhost']['foo'] is defined
The output of this playbook shows that the first task is skipped because foo is not defined by the host main.
But the second task succeeds because it's explicitly referencing localhost's instance of the variable foo:
TASK: [debug] *************************************************
skipping: [main]

TASK: [debug] *************************************************
ok: [main] => {
    "var": {
        "hostvars['localhost']['foo']": {
            "changed": true,
            "cmd": [
                "echo",
                "hello world"
            ],
            "delta": "0:00:00.005950",
            "end": "2015-11-24 20:54:04.319147",
            "invocation": {
                "module_args": "echo \"hello world\"",
                "module_complex_args": {},
                "module_name": "command"
            },
            "rc": 0,
            "start": "2015-11-24 20:54:04.313197",
            "stderr": "",
            "stdout": "hello world",
            "stdout_lines": [
                "hello world"
            ],
            "warnings": []
        }
    }
}
So, in a nutshell, you want to modify the variable references in your main playbook to reference the localhost variables in this manner:
{{ hostvars['localhost']['foo'] }}
{# alternatively, you can use: #}
{{ hostvars.localhost.foo }}
Use a dummy host and its variables
For example, to pass a Kubernetes token and hash from the master to the workers.
On master
- name: "Cluster token"
shell: kubeadm token list | cut -d ' ' -f1 | sed -n '2p'
register: K8S_TOKEN
- name: "CA Hash"
shell: openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
register: K8S_MASTER_CA_HASH
- name: "Add K8S Token and Hash to dummy host"
add_host:
name: "K8S_TOKEN_HOLDER"
token: "{{ K8S_TOKEN.stdout }}"
hash: "{{ K8S_MASTER_CA_HASH.stdout }}"
- name:
debug:
msg: "[Master] K8S_TOKEN_HOLDER K8S token is {{ hostvars['K8S_TOKEN_HOLDER']['token'] }}"
- name:
debug:
msg: "[Master] K8S_TOKEN_HOLDER K8S Hash is {{ hostvars['K8S_TOKEN_HOLDER']['hash'] }}"
On worker
- name:
  debug:
    msg: "[Worker] K8S_TOKEN_HOLDER K8S token is {{ hostvars['K8S_TOKEN_HOLDER']['token'] }}"
- name:
  debug:
    msg: "[Worker] K8S_TOKEN_HOLDER K8S Hash is {{ hostvars['K8S_TOKEN_HOLDER']['hash'] }}"
- name: "Kubeadm join"
  shell: >
    kubeadm join --token={{ hostvars['K8S_TOKEN_HOLDER']['token'] }}
    --discovery-token-ca-cert-hash sha256:{{ hostvars['K8S_TOKEN_HOLDER']['hash'] }}
    {{ K8S_MASTER_NODE_IP }}:{{ K8S_API_SERCURE_PORT }}
I have had similar issues, even with the same host but across different plays. The thing to remember is that facts, not variables, are what persist across plays. Here is how I get around the problem.
#!/usr/local/bin/ansible-playbook --inventory=./inventories/ec2.py
---
- name: "TearDown Infrastructure !!!!!!!"
  hosts: localhost
  gather_facts: no
  vars:
    aws_state: absent
  vars_prompt:
    - name: "aws_region"
      prompt: "Enter AWS Region:"
      default: 'eu-west-2'
  tasks:
    - name: Make vars persistent
      set_fact:
        aws_region: "{{ aws_region }}"
        aws_state: "{{ aws_state }}"
- name: "TearDown Infrastructure hosts !!!!!!!"
  hosts: monitoring.ec2
  connection: local
  gather_facts: no
  tasks:
    - name: set the facts per host
      set_fact:
        aws_region: "{{ hostvars['localhost']['aws_region'] }}"
        aws_state: "{{ hostvars['localhost']['aws_state'] }}"
    - debug:
        msg: "state {{ aws_state }} region {{ aws_region }} id {{ ec2_id }}"
- name: last few bits
  hosts: localhost
  gather_facts: no
  tasks:
    - debug:
        msg: "state {{ aws_state }} region {{ aws_region }}"
results in
Enter AWS Region: [eu-west-2]:
PLAY [TearDown Infrastructure !!!!!!!] ***************************************************************************************************************************************************************************************************
TASK [Make vars persistant] **************************************************************************************************************************************************************************************************************
ok: [localhost]
PLAY [TearDown Infrastructure hosts !!!!!!!] *********************************************************************************************************************************************************************************************
TASK [set the facts per host] ************************************************************************************************************************************************************************************************************
ok: [XXXXXXXXXXXXXXXXX]
TASK [debug] *****************************************************************************************************************************************************************************************************************************
ok: [XXXXXXXXXXX] => {
"changed": false,
"msg": "state absent region eu-west-2 id i-0XXXXX1 "
}
PLAY [last few bits] *********************************************************************************************************************************************************************************************************************
TASK [debug] *****************************************************************************************************************************************************************************************************************************
ok: [localhost] => {
"changed": false,
"msg": "state absent region eu-west-2 "
}
PLAY RECAP *******************************************************************************************************************************************************************************************************************************
XXXXXXXXXXXXX : ok=2 changed=0 unreachable=0 failed=0
localhost : ok=2 changed=0 unreachable=0 failed=0
You can use a known Ansible behaviour: the group_vars folder is loaded for your playbook. It is intended to be used together with inventory groups, but it still acts as a global variable declaration: if you put a file or folder in there with the same name as the group you want some variable to be present for, Ansible will make sure it happens!
As an example, let's create a file called all and put a timestamp variable in it. Then, whenever you need it, you can call that variable, which will be available to every host declared in any play inside your playbook.
I usually do this to update a timestamp once in the first play and use the value to write files and folders with the same timestamp.
I'm using the lineinfile module to change the line starting with timestamp:
Check if it fits your purpose.
On your group_vars/all
timestamp: t26032021165953
On the playbook, in the first play:
- hosts: localhost
  gather_facts: no
  tasks:
    - name: Set timestamp on group_vars
      lineinfile:
        path: "{{ playbook_dir }}/group_vars/all"
        insertafter: EOF
        regexp: '^timestamp:'
        line: "timestamp: t{{ lookup('pipe','date +%d%m%Y%H%M%S') }}"
        state: present
On the playbook, in the second play:
- hosts: any_hosts
  gather_facts: no
  tasks:
    - name: Check if timestamp is there
      debug:
        msg: "{{ timestamp }}"
