I'm trying to learn how to use Ansible facts as variables, and I don't get it. When I run...
$ ansible localhost -m setup
...it lists all of the facts of my system. I selected one at random to try and use it, ansible_facts.ansible_date_time.date, but I can't figure out HOW to use it. When I run...
$ ansible localhost -m setup -a "filter=ansible_date_time"
localhost | success >> {
"ansible_facts": {
"ansible_date_time": {
"date": "2015-07-09",
"day": "09",
"epoch": "1436460014",
"hour": "10",
"iso8601": "2015-07-09T16:40:14Z",
"iso8601_micro": "2015-07-09T16:40:14.795637Z",
"minute": "40",
"month": "07",
"second": "14",
"time": "10:40:14",
"tz": "MDT",
"tz_offset": "-0600",
"weekday": "Thursday",
"year": "2015"
}
},
"changed": false
}
So, it's CLEARLY there. But when I run...
$ ansible localhost -a "echo {{ ansible_facts.ansible_date_time.date }}"
localhost | FAILED => One or more undefined variables: 'ansible_facts' is undefined
$ ansible localhost -a "echo {{ ansible_date_time.date }}"
localhost | FAILED => One or more undefined variables: 'ansible_date_time' is undefined
$ ansible localhost -a "echo {{ date }}"
localhost | FAILED => One or more undefined variables: 'date' is undefined
What am I not getting here? How do I use Facts as variables?
The command ansible localhost -m setup basically says "run the setup module against localhost", and the setup module gathers the facts that you see in the output.
When you run the echo command, those facts don't exist because the setup module wasn't run in that invocation. A better way to test things like this is to use ansible-playbook to run a playbook that looks something like this:
- hosts: localhost
  tasks:
    - debug: var=ansible_date_time
    - debug: msg="the current date is {{ ansible_date_time.date }}"
Because this runs as a playbook, facts for localhost are gathered before the tasks run. The output of the above playbook will be something like this:
PLAY [localhost] **************************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [debug var=ansible_date_time] *******************************************
ok: [localhost] => {
"ansible_date_time": {
"date": "2015-07-09",
"day": "09",
"epoch": "1436461166",
"hour": "16",
"iso8601": "2015-07-09T16:59:26Z",
"iso8601_micro": "2015-07-09T16:59:26.896629Z",
"minute": "59",
"month": "07",
"second": "26",
"time": "16:59:26",
"tz": "UTC",
"tz_offset": "+0000",
"weekday": "Thursday",
"year": "2015"
}
}
TASK: [debug msg="the current date is {{ ansible_date_time.date }}"] **********
ok: [localhost] => {
"msg": "the current date is 2015-07-09"
}
PLAY RECAP ********************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0
The Ansible lookup plugin works fine for me. The YAML is:
- hosts: test
  vars:
    time: "{{ lookup('pipe', 'date -d \"1 day ago\" +\"%Y%m%d\"') }}"
You can replace date with any command to get the result of that command.
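For completeness, a minimal sketch of how that variable could then be used; the gather_facts and debug task below are my own illustration around the snippet above, not part of the original answer:
- hosts: test
  gather_facts: no
  vars:
    time: "{{ lookup('pipe', 'date -d \"1 day ago\" +\"%Y%m%d\"') }}"
  tasks:
    # The pipe lookup runs on the controller each time the variable is evaluated
    - debug:
        msg: "yesterday was {{ time }}"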
Note that the ansible command doesn't collect facts, but the ansible-playbook command does. When running ansible -m setup, the setup module itself performs the fact collection, so you get the facts; running ansible -m command does not, so the facts aren't available. This is why the other answers include playbook YAML files and point out that the lookup works.
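To illustrate (a sketch; each ad-hoc invocation is independent, so facts gathered by one run are not available to the next unless fact caching is configured):
# Works: the setup module itself collects the facts it prints
$ ansible localhost -m setup -a "filter=ansible_date_time"
# Fails with an undefined-variable error: no facts were gathered in this run
$ ansible localhost -m debug -a "msg={{ ansible_date_time.date }}"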
The filter option only matches first-level keys directly below ansible_facts.
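For example (a sketch of the expected behaviour, not output from the original post): filtering on the fact name works, but a nested key matches nothing:
$ ansible localhost -m setup -a "filter=ansible_date_time"   # returns the whole ansible_date_time dict
$ ansible localhost -m setup -a "filter=date"                # empty ansible_facts: "date" is nested, not a first-level key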
I tried the lookup('pipe', 'date') method and ran into trouble when I pushed the playbook to Tower. Tower somehow uses the UTC timezone, so any play executed during the first +N hours of my local day gives a date that is off by one day from the actual local date.
For example: my TZ is Asia/Manila, which is UTC+8. If I execute the playbook earlier than 8:00 AM, the date follows UTC+0. It took me a while to track this case down. The fix was to use the date option '-d \"+8 hours\" +%F', which now gives me exactly the date I wanted.
Below is the variable I set in my playbook:
vars:
  cur_target_wd: "{{ lookup('pipe','date -d \"+8 hours\" +%Y/%m-%b/%d-%a') }}"
That gives me the value cur_target_wd = 2020/05-May/28-Thu even if I run it earlier than 8:00 AM.
I have an Ansible task where I navigate to a YAML variable file in GitHub, download the file, and add the variables as Ansible Facts where they're later used.
My YAML file looks like:
---
foo: bar
hello: world
I have a method where I loop over this file, and individually add the key/value pairs as the facts:
- name: Grab contents of variable file
  win_shell: cat '{{ playbook_dir }}/DEV1.yml'
  register: raw_config

- name: Add variables to workspace
  vars:
    config: "{{ raw_config.stdout | from_yaml }}"
  set_fact:
    "{{ item.key }}": "{{ item.value }}"
  loop: "{{ config | dict2items }}"
This works but generates much larger log outputs that look like:
ok: [localhost] => (item={u'key': u'foo', u'value': u'bar'}) => {
"ansible_facts": {
"foo": "bar"
},
"ansible_loop_var": "item",
"changed": false,
"item": {
"key": "foo",
"value": "bar"
}
}
ok: [localhost] => (item={u'key': u'hello', u'value': u'world'}) => {
"ansible_facts": {
"hello": "world"
},
"ansible_loop_var": "item",
"changed": false,
"item": {
"key": "hello",
"value": "world"
}
}
I was wondering if it was possible to add the entire variable file as Ansible Facts instead of needing to loop through it. The way I tried was like:
- name: Grab contents of variable file
  win_shell: cat '{{ playbook_dir }}/DEV1.yml'
  register: raw_config

- name: Add variables to workspace
  vars:
    config: '{{ raw_config.stdout | from_yaml }}'
  set_fact: '{{ config }}'
This almost works, but it looks like this:
ok: [msf1vpom04d.corp.tjxcorp.net] => {
"ansible_facts": {
"_raw_params": {
"foo": "bar",
"hello": "world"
…
Can I add the entire object as Ansible Facts without generating this _raw_params object?
... where I navigate to a YAML variable file in GitHub, download the file, and add the variables ... I was wondering if it was possible to add the entire variable file ...
There are several possibilities.
One option (as in Ansible Tower, for example) is to check out, download, or sync the variable file before executing the playbook. To do so:
curl --silent --user "${ACCOUNT}:${PASSWORD}" -X GET "https://${REPOSITORY_URL}/raw/group_vars/test?at=refs%2Fheads%2Fmaster" -o group_vars/test && \
sshpass -p ${PASSWORD} ansible-playbook --user ${ACCOUNT} --ask-pass test.yml
... a Bitbucket URL is used here for demonstration and testing
The advantage of this approach is that no implementation or logic is needed within the playbooks at all. The only requirement is to Organize host and group variables. Furthermore, it is the approach recommended there:
Keeping your inventory file and variables in a git repo (or other version control) is an excellent way to track changes to your inventory and host variables.
Another option would be to use the include_vars module to
Loads YAML/JSON variables dynamically from a file or directory, recursively, during task runtime.
With respect to simplicity, it is still recommended to sync before executing.
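For illustration, a minimal sketch of the include_vars approach, assuming the variable file has already been fetched next to the playbook as DEV1.yml (the file name is taken from the question); include_vars reads the file on the controller and loads every key in it at once:
- name: Load all variables from the downloaded file at once
  include_vars:
    file: "{{ playbook_dir }}/DEV1.yml"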
Further Q&A
... and, as already mentioned in the comments:
Getting variable values from URL
You can take advantage of the play-level vars_files parameter and the fact that it will be loaded and expanded for each running task. We just need a fallback file for when your file does not yet exist locally, so that we don't get an error (during fact gathering, for example).
Here is an example with a URI of mine containing default values for an Ansible role.
First, create an empty.yml file, which will be empty as its name suggests. (It's adjacent to my playbook in this example, but you can put it wherever you want; just reflect this in your playbook accordingly.)
Then the following playbook:
---
- hosts: localhost
  gather_facts: false

  vars:
    external_vars_uri: https://raw.githubusercontent.com/ansible-ThoTeam/nexus3-oss/main/defaults/main.yml
    external_vars_file: /tmp/external_vars.yml

  vars_files:
    - "{{ lookup('first_found', [external_vars_file, 'empty.yml']) }}"

  tasks:
    - name: make sure we have our external file
      get_url:
        url: "{{ external_vars_uri }}"
        dest: "{{ external_vars_file }}"
      # Note: we're only using localhost here so the below
      # parameters are useless. But they will be necessary
      # if you target other (groups of) hosts.
      run_once: true
      delegate_to: localhost

    - name: debug a var we know is in the external file
      debug:
        var: nexus_repos_maven_proxy
Gives:
$ ansible-playbook play.yml
PLAY [localhost] **************************************************************************************************************************************************************************************************
TASK [make sure we have our external file] ************************************************************************************************************************************************************************
changed: [localhost]
TASK [debug a var we know is in the external file] ****************************************************************************************************************************************************************************
ok: [localhost] => {
"nexus_repos_maven_proxy": [
{
"layout_policy": "permissive",
"name": "central",
"remote_url": "https://repo1.maven.org/maven2/"
},
{
"name": "jboss",
"remote_url": "https://repository.jboss.org/nexus/content/groups/public-jboss/"
}
]
}
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Could you explain why the following behaviour happens? When I try to print the remote IP address with the following playbook, everything works as expected:
---
- hosts: centos1
  tasks:
    - name: Print ip address
      debug:
        msg: "ip: {{ansible_all_ipv4_addresses[0]}}"
When I try an ad-hoc command, it doesn't work:
ansible -i hosts centos1 -m debug -a 'msg={{ansible_all_ipv4_addresses[0]}}'
Here is the ad-hoc error:
centos1 | FAILED! => {
"msg": "The task includes an option with an undefined variable. The error was: 'ansible_all_ipv4_addresses' is undefined.
'ansible_all_ipv4_addresses' is undefined" }
I don't see any difference between the two approaches, which is why I was expecting both to work and print the remote IP address.
I don't see any difference between the two approaches, which is why I was expecting both to work and print the remote IP address.
This is because no facts were gathered. With ansible-playbook, and depending on the configuration, Ansible facts are gathered automatically; with an ansible ad-hoc command they are not.
To gather them, you would need to execute the setup module instead. See Introduction to ad hoc commands - Gathering facts.
Further Q&A
How Ansible sets variables?
Why does Ansible ad-hoc debug module not print variable?
Please also take note of the variable naming, as discussed in
Conflict of variable name packages with ansible_facts.packages
Could you please give an example of how to output "Your IP address is {{ ansible_all_ipv4_addresses[0] }}" using the ad-hoc approach with the setup module?
Example
ansible test.example.com -m setup -a 'filter=ansible_all_ipv4_addresses'
test.example.com | SUCCESS => {
"ansible_facts": {
"ansible_all_ipv4_addresses": [
"192.0.2.1"
]
},
"changed": false
}
or
ansible test.example.com -m setup -a 'filter=ansible_default_ipv4'
test.example.com | SUCCESS => {
"ansible_facts": {
"ansible_default_ipv4": {
"address": "192.0.2.1",
"alias": "eth0",
"broadcast": "192.0.2.255",
"gateway": "192.0.2.0",
"interface": "eth0",
"macaddress": "00:00:5e:12:34:56",
"mtu": 1500,
"netmask": "255.255.255.0",
"network": "192.0.2.0",
"type": "ether"
}
},
"changed": false
}
It is also recommended to have a look at the full output without the filter argument to get familiar with the result set and data structure.
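For example (sketches only; the host name is reused from the answer above):
# Full, unfiltered fact set
$ ansible test.example.com -m setup
# Or restrict collection to a subset, e.g. network-related facts only
$ ansible test.example.com -m setup -a 'gather_subset=network'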
Documentation
setup module - Examples
I just don't understand how you are supposed to loop over a list and collect the results in a list in Ansible:
- name: Collect all containers
  command: docker ps --all --no-trunc --format {% raw %}"{{json .}}"{% endraw %}
  register: docker_raw_containers

- debug:
    msg: "{{ docker_raw_containers.stdout_lines }}"

- name: Convert variables
  set_fact:
    docker_raw_container_item: "{{ item | to_json }}"
  loop: "{{ docker_raw_containers.stdout_lines }}"

- name: Convert to list
  set_fact:
    docker_parsed_containers: " {{ docker_raw_container_item | map(attribute='ID') | list }} "

- debug:
    msg: "{{ docker_parsed_containers }}"
This code should result in a list of container IDs and CreatedAt attributes, but it just produces a list of AnsibleUndefined objects. Where is my mistake?
Converting a JSON string to an Ansible variable requires the from_json filter. You used to_json, which does exactly the opposite.
You can create the list all at once by mapping the from_json filter over each result line. The following playbook should meet your requirements with a minimum of Ansible tasks.
---
- name: Parse docker ps output formated as json
  hosts: all
  gather_facts: false

  tasks:
    - name: Collect all containers
      command: docker ps --all --no-trunc --format {% raw %}"{{json .}}"{% endraw %}
      register: docker_raw_containers
      # This is an info only command so it never changes the target
      changed_when: false

    - name: Convert variable
      set_fact:
        docker_parsed_containers: "{{ docker_raw_containers.stdout_lines | map('from_json') | list }}"

    - debug:
        msg: "{{ docker_parsed_containers }}"
Which gives the following result on my machine (I launched some test containers for the occasion):
$ ansible --version
ansible 2.9.2
config file = None
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0]
$ ansible-playbook test.yml
PLAY [Parse docker ps output formated as json] **************************************************************************************************************************************************************************************************************************
TASK [Collect all containers] *******************************************************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [Convert variable] *************************************************************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [debug] ************************************************************************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": [
{
"Command": "\"bash -c 'while true; do sleep 20000; done'\"",
"CreatedAt": "2019-12-09 10:05:18 +0100 CET",
"ID": "9e6ea71499df19f5c1e33e069c533f43b3ec18c957b31bcca571b0a194b23027",
"Image": "python:3.8",
"Labels": "",
"LocalVolumes": "0",
"Mounts": "",
"Names": "demo2",
"Networks": "bridge",
"Ports": "",
"RunningFor": "39 minutes ago",
"Size": "0B",
"Status": "Up 39 minutes"
},
{
"Command": "\"bash -c 'while true; do sleep 20000; done'\"",
"CreatedAt": "2019-12-09 10:05:17 +0100 CET",
"ID": "038f1e4b1f4dd627f6ccea2ddce858e1055474c6a092f32c773e842e938dec68",
"Image": "python:3.8",
"Labels": "",
"LocalVolumes": "0",
"Mounts": "",
"Names": "demo1",
"Networks": "bridge",
"Ports": "",
"RunningFor": "39 minutes ago",
"Size": "0B",
"Status": "Up 39 minutes"
},
{
"Command": "\"/bin/sh -c 'virtualenv venv'\"",
"CreatedAt": "2019-12-04 18:26:14 +0100 CET",
"ID": "88427258f30226ee9ba8af420978ffb2d4046206a7b6b7dc3ee3f5494236e12b",
"Image": "sha256:8a3e76ff0da0dd43a305461f3a6bf6abd320770fb6c3c2c365b5ea1a0062de0b",
"Labels": "",
"LocalVolumes": "0",
"Mounts": "",
"Names": "cocky_brattain",
"Networks": "bridge",
"Ports": "",
"RunningFor": "4 days ago",
"Size": "0B",
"Status": "Exited (127) 4 days ago"
}
]
}
PLAY RECAP **************************************************************************************************************************************************************************************************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
I bumped into an issue after migrating from Python 2 to Python 3. It seems the migration somehow changed the way the JSON query is processed. Does anyone have a hint on how to fix this?
vars:
  vmk_out:
    host_vmk_info:
      hostname:
        [
          {
            ipv4_address: "10.10.10.101",
            ipv4_subnet_mask: "255.255.255.0",
            stack: "defaultTcpipStack"
          },
          {
            ipv4_address: "10.10.20.101",
            ipv4_subnet_mask: "255.255.255.0",
            stack: "vmotion"
          }
        ]

tasks:
  - name: Extract list of IPs
    set_fact:
      output: "{{ vmk_out.host_vmk_info.values() | json_query('[].ipv4_address') }}"
Run under Python 2 with Ansible 2.9.1, the above returns a list of IP addresses, but running the same under Python 3 returns an empty list.
I did not take the time to dig into the root of the problem, but there is clearly a difference in the return value of the values() function between Python 2.7 and 3.x.
Here is what a direct debug of vmk_out.host_vmk_info.values() looks like in my tests:
ansible 2.9.1 - python 3.6
ok: [localhost] => {
"msg": "dict_values([[{'ipv4_address': '10.10.10.101', 'ipv4_subnet_mask': '255.255.255.0', 'stack': 'defaultTcpipStack'}, {'ipv4_address': '10.10.20.101', 'ipv4_subnet_mask': '255.255.255.0', 'stack': 'vmotion'}]])"
}
ansible 2.9.1 - python 2.7
ok: [localhost] => {
"msg": [
[
{
"ipv4_address": "10.10.10.101",
"ipv4_subnet_mask": "255.255.255.0",
"stack": "defaultTcpipStack"
},
{
"ipv4_address": "10.10.20.101",
"ipv4_subnet_mask": "255.255.255.0",
"stack": "vmotion"
}
]
]
}
You have 2 solutions to fix your current code and make it compatible with both versions.
Solution 1: make sure the output of values() always produces a list:
output: "{{ vmk_out.host_vmk_info.values() | list | json_query('[].ipv4_address') }}"
Solution 2: stop using values() and directly map the existing hostname list
output: "{{ vmk_out.host_vmk_info.hostname | json_query('[].ipv4_address') }}"
I have an Ansible playbook where I would like a variable registered in a first play, targeted at one node, to be available in a second play targeted at another node.
Here is the playbook I am using:
---
- hosts: localhost
  gather_facts: no
  tasks:
    - command: echo "hello world"
      register: foo

- hosts: main
  gather_facts: no
  tasks:
    - debug:
        msg: "{{ foo.stdout }}"
But, when I try to access the variable in the second play, targeted on main, I get this message:
The task includes an option with an undefined variable. The error was: 'foo' is undefined
How can I access foo, registered on localhost, from main?
The problem you're running into is that you're trying to reference facts/variables of one host from those of another host.
You need to keep in mind that in Ansible, the variable foo assigned to the host localhost is distinct from the variable foo assigned to the host main or any other host.
If you want to access one host's facts/variables from another host, then you need to explicitly reference them via the hostvars variable. There's a bit more discussion on this in this question.
Suppose you have a playbook like this:
- hosts: localhost
  gather_facts: no
  tasks:
    - command: echo "hello world"
      register: foo

- hosts: localhost
  gather_facts: no
  tasks:
    - debug:
        var: foo
This will work because you're referencing the host localhost and localhost's instance of the variable foo in both plays.
The output of this playbook is something like this:
PLAY [localhost] **************************************************
TASK: [command] ***************************************************
changed: [localhost]
PLAY [localhost] **************************************************
TASK: [debug] *****************************************************
ok: [localhost] => {
"var": {
"foo": {
"changed": true,
"cmd": [
"echo",
"hello world"
],
"delta": "0:00:00.004585",
"end": "2015-11-24 20:49:27.462609",
"invocation": {
"module_args": "echo \"hello world\",
"module_complex_args": {},
"module_name": "command"
},
"rc": 0,
"start": "2015-11-24 20:49:27.458024",
"stderr": "",
"stdout": "hello world",
"stdout_lines": [
"hello world"
],
"warnings": []
}
}
}
If you modify this playbook slightly to run the first play on one host and the second play on a different host, you'll get the error that you encountered.
Solution
The solution is to use Ansible's built-in hostvars variable to have the second host explicitly reference the first host's variables.
So modify the first example like this:
- hosts: localhost
  gather_facts: no
  tasks:
    - command: echo "hello world"
      register: foo

- hosts: main
  gather_facts: no
  tasks:
    - debug:
        var: foo
      when: foo is defined

    - debug:
        var: hostvars['localhost']['foo']
        ## alternatively, you can use:
        # var: hostvars.localhost.foo
      when: hostvars['localhost']['foo'] is defined
The output of this playbook shows that the first task is skipped because foo is not defined by the host main.
But the second task succeeds because it's explicitly referencing localhost's instance of the variable foo:
TASK: [debug] *************************************************
skipping: [main]
TASK: [debug] *************************************************
ok: [main] => {
"var": {
"hostvars['localhost']['foo']": {
"changed": true,
"cmd": [
"echo",
"hello world"
],
"delta": "0:00:00.005950",
"end": "2015-11-24 20:54:04.319147",
"invocation": {
"module_args": "echo \"hello world\"",
"module_complex_args": {},
"module_name": "command"
},
"rc": 0,
"start": "2015-11-24 20:54:04.313197",
"stderr": "",
"stdout": "hello world",
"stdout_lines": [
"hello world"
],
"warnings": []
}
}
}
So, in a nutshell, you want to modify the variable references in your main playbook to reference the localhost variables in this manner:
{{ hostvars['localhost']['foo'] }}
{# alternatively, you can use: #}
{{ hostvars.localhost.foo }}
Use a dummy host and its variables
For example, to pass a Kubernetes token and hash from the master to the workers.
On master
- name: "Cluster token"
shell: kubeadm token list | cut -d ' ' -f1 | sed -n '2p'
register: K8S_TOKEN
- name: "CA Hash"
shell: openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
register: K8S_MASTER_CA_HASH
- name: "Add K8S Token and Hash to dummy host"
add_host:
name: "K8S_TOKEN_HOLDER"
token: "{{ K8S_TOKEN.stdout }}"
hash: "{{ K8S_MASTER_CA_HASH.stdout }}"
- name:
debug:
msg: "[Master] K8S_TOKEN_HOLDER K8S token is {{ hostvars['K8S_TOKEN_HOLDER']['token'] }}"
- name:
debug:
msg: "[Master] K8S_TOKEN_HOLDER K8S Hash is {{ hostvars['K8S_TOKEN_HOLDER']['hash'] }}"
On worker
- name:
  debug:
    msg: "[Worker] K8S_TOKEN_HOLDER K8S token is {{ hostvars['K8S_TOKEN_HOLDER']['token'] }}"

- name:
  debug:
    msg: "[Worker] K8S_TOKEN_HOLDER K8S Hash is {{ hostvars['K8S_TOKEN_HOLDER']['hash'] }}"

- name: "Kubeadmn join"
  shell: >
    kubeadm join --token={{ hostvars['K8S_TOKEN_HOLDER']['token'] }}
    --discovery-token-ca-cert-hash sha256:{{ hostvars['K8S_TOKEN_HOLDER']['hash'] }}
    {{K8S_MASTER_NODE_IP}}:{{K8S_API_SERCURE_PORT}}
I have had similar issues even with the same host, but across different plays. The thing to remember is that facts, not variables, are what persist across plays. Here is how I get around the problem.
#!/usr/local/bin/ansible-playbook --inventory=./inventories/ec2.py
---
- name: "TearDown Infrastructure !!!!!!!"
  hosts: localhost
  gather_facts: no
  vars:
    aws_state: absent
  vars_prompt:
    - name: "aws_region"
      prompt: "Enter AWS Region:"
      default: 'eu-west-2'
  tasks:
    - name: Make vars persistant
      set_fact:
        aws_region: "{{aws_region}}"
        aws_state: "{{aws_state}}"

- name: "TearDown Infrastructure hosts !!!!!!!"
  hosts: monitoring.ec2
  connection: local
  gather_facts: no
  tasks:
    - name: set the facts per host
      set_fact:
        aws_region: "{{hostvars['localhost']['aws_region']}}"
        aws_state: "{{hostvars['localhost']['aws_state']}}"
    - debug:
        msg="state {{aws_state}} region {{aws_region}} id {{ ec2_id }} "

- name: last few bits
  hosts: localhost
  gather_facts: no
  tasks:
    - debug:
        msg="state {{aws_state}} region {{aws_region}} "
results in
Enter AWS Region: [eu-west-2]:
PLAY [TearDown Infrastructure !!!!!!!] ***************************************************************************************************************************************************************************************************
TASK [Make vars persistant] **************************************************************************************************************************************************************************************************************
ok: [localhost]
PLAY [TearDown Infrastructure hosts !!!!!!!] *********************************************************************************************************************************************************************************************
TASK [set the facts per host] ************************************************************************************************************************************************************************************************************
ok: [XXXXXXXXXXXXXXXXX]
TASK [debug] *****************************************************************************************************************************************************************************************************************************
ok: [XXXXXXXXXXX] => {
"changed": false,
"msg": "state absent region eu-west-2 id i-0XXXXX1 "
}
PLAY [last few bits] *********************************************************************************************************************************************************************************************************************
TASK [debug] *****************************************************************************************************************************************************************************************************************************
ok: [localhost] => {
"changed": false,
"msg": "state absent region eu-west-2 "
}
PLAY RECAP *******************************************************************************************************************************************************************************************************************************
XXXXXXXXXXXXX : ok=2 changed=0 unreachable=0 failed=0
localhost : ok=2 changed=0 unreachable=0 failed=0
You can use a well-known Ansible behaviour: the group_vars folder, which loads variables for your playbook. It is meant to be used together with inventory groups, but it still works as a global variable declaration. If you put a file or folder in there named after the group for which you want a variable to be present, Ansible will make sure it happens!
For example, let's create a file called all and put a timestamp variable in it. Then, whenever you need it, you can reference that variable, which will be available to every host declared in any play inside your playbook.
I usually do this to update the timestamp once in the first play and then use the value to name files and folders with that same timestamp.
I'm using the lineinfile module to change the line starting with timestamp:
Check whether it fits your purpose.
On your group_vars/all
timestamp: t26032021165953
On the playbook, in the first play:
- hosts: localhost
  gather_facts: no
  tasks:
    - name: Set timestamp on group_vars
      lineinfile:
        path: "{{ playbook_dir }}/group_vars/all"
        insertafter: EOF
        regexp: '^timestamp:'
        line: "timestamp: t{{ lookup('pipe','date +%d%m%Y%H%M%S') }}"
        state: present
On the playbook, in the second play:
- hosts: any_hosts
  gather_facts: no
  tasks:
    - name: Check if timestamp is there
      debug:
        msg: "{{ timestamp }}"