Ansible AWS Dynamic Inventory Groups Fail to Match Play Hosts - ansible

I'm having trouble getting my Ansible play's hosts to match the AWS dynamic groups that are coming back for my dynamic inventory. Let's break this problem down.
Given this output of ec2.py --list:
$ ./devops/inventories/dynamic/ec2.py --list
{
"_meta": {
"hostvars": {
"54.37.213.132": {
"ec2__in_monitoring_element": false,
"ec2_ami_launch_index": "0",
"ec2_architecture": "x86_64",
"ec2_client_token": "",
"ec2_dns_name": "ec2-52-37-203-132.us-west-2.compute.amazonaws.com",
"ec2_ebs_optimized": false,
"ec2_eventsSet": "",
"ec2_group_name": "",
"ec2_hypervisor": "xen",
"ec2_id": "i-d352c50b",
"ec2_image_id": "ami-63b25203",
"ec2_instance_profile": "",
"ec2_instance_type": "t2.micro",
"ec2_ip_address": "54.37.213.132",
"ec2_item": "",
"ec2_kernel": "",
"ec2_key_name": "peaker-v1-keypair",
"ec2_launch_time": "2016-03-11T20:45:44.000Z",
"ec2_monitored": false,
"ec2_monitoring": "",
"ec2_monitoring_state": "disabled",
"ec2_persistent": false,
"ec2_placement": "us-west-2a",
"ec2_platform": "",
"ec2_previous_state": "",
"ec2_previous_state_code": 0,
"ec2_private_dns_name": "ip-172-31-43-132.us-west-2.compute.internal",
"ec2_private_ip_address": "172.31.43.132",
"ec2_public_dns_name": "ec2-52-37-203-132.us-west-2.compute.amazonaws.com",
"ec2_ramdisk": "",
"ec2_reason": "",
"ec2_region": "us-west-2",
"ec2_requester_id": "",
"ec2_root_device_name": "/dev/xvda",
"ec2_root_device_type": "ebs",
"ec2_security_group_ids": "sg-824ac0e5",
"ec2_security_group_names": "peaker-v1-security-group",
"ec2_sourceDestCheck": "true",
"ec2_spot_instance_request_id": "",
"ec2_state": "running",
"ec2_state_code": 16,
"ec2_state_reason": "",
"ec2_subnet_id": "subnet-b96e1bce",
"ec2_tag_Environment": "v1",
"ec2_tag_Name": "peaker-v1-ec2",
"ec2_virtualization_type": "hvm",
"ec2_vpc_id": "vpc-5fe8ae3a"
}
}
},
"ec2": [
"54.37.213.132"
],
"tag_Environment_v1": [
"54.37.213.132"
],
"tag_Name_peaker-v1-ec2": [
"54.37.213.132"
],
"us-west-2": [
"54.37.213.132"
]
}
I should be able to write a playbook that matches some of the groups coming back:
---
# playbook
- name: create s3 bucket with policy
  hosts: localhost
  gather_facts: yes
  tasks:
    - name: s3
      s3:
        bucket: "fake"
        region: "us-west-2"
        mode: create
        permission: "public-read-write"
      register: s3_output
    - debug: msg="{{ s3_output }}"

- name: test on remote machine
  hosts: ec2
  gather_facts: yes
  tasks:
    - name: test on remote machine
      file:
        dest: "/home/ec2-user/test/"
        owner: ec2-user
        group: ec2-user
        mode: 0700
        state: directory
      become: yes
      become_user: ec2-user
However, when I --list-hosts for these plays, it's obvious that the play hosts are not matching anything coming back:
$ ansible-playbook -i devops/inventories/dynamic/ec2/ec2.py devops/build_and_bundle_example.yml --ask-vault-pass --list-hosts
Vault password:
[WARNING]: provided hosts list is empty, only localhost is available
playbook: devops/build_and_bundle_example.yml
play #1 (localhost): create s3 bucket with policy TAGS: []
pattern: [u'localhost']
hosts (1):
localhost
play #2 (ec2): test on remote machine TAGS: []
pattern: [u'ec2']
hosts (0):

The quick fix for what you're doing:
change hosts: localhost in your playbook to hosts: all
It will never work with dynamic inventory alone if you keep hosts: localhost in your playbook...
If you do want to keep it, you must combine dynamic & static inventories. Create a file at ./devops/inventories/dynamic/static.ini (at the same level as ec2.py and ec2.ini) and put this content in it:
[localhost]
localhost
[ec2_tag_Name_peaker_v1_ec2]
[aws-hosts:children]
localhost
ec2_tag_Name_peaker_v1_ec2
After that, you will be able to run a quick check:
ansible -i devops/inventories/dynamic/ec2 aws-hosts -m ping
and your playbook itself:
ansible-playbook -i devops/inventories/dynamic/ec2 \
devops/build_and_bundle_example.yml --ask-vault-pass
NOTE: devops/inventories/dynamic/ec2 is the path to a folder, but it will automatically be resolved into a hybrid dynamic/static inventory with access to the aws-hosts group name.
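For reference, one plausible layout of that folder (illustrative only; the thread mixes devops/inventories/dynamic/ and devops/inventories/dynamic/ec2/ as the location of ec2.py, so adjust to wherever the script actually lives):
devops/inventories/dynamic/ec2/
├── ec2.py      # dynamic inventory script (must be executable)
├── ec2.ini     # its configuration
└── static.ini  # static groups, including aws-hosts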
In fact, this isn't the best use of inventory. But it's important to understand that by combining dynamic and static inventories, you're just appending new group names to a particular dynamic host:
ansible -i devops/inventories/dynamic/ec2 all -m debug \
-a "var=hostvars[inventory_hostname].group_names"

Related

Use a variable group as target hosts in a playbook

I have this situation:
A play that runs on localhost uses include_tasks to create, on the fly with add_host, two sub-groups, each extracting a single host from two groups that are present in the inventory file.
Another play in the same YAML file uses these groups as its hosts (hosts: sub-group).
This is the inventory file:
all:
  children:
    group_one:
      hosts:
        hostA01:
          ansible_host: host1a
        hostA02:
          ansible_host: host2a
        hostA03:
          ansible_host: host3a
      vars:
        cluster: hosta
        vip: 192.168.10.10
        home: /cluster/hosta
        user: usr_hosta
        pass: pass_hosta
    group_two:
      hosts:
        hostB01:
          ansible_host: host1b
        hostB02:
          ansible_host: host2b
        hostB03:
          ansible_host: host3b
      vars:
        cluster: hostb
        vip: 192.168.10.20
        home: /cluster/hostb
        user: usr_hostb
        pass: pass_hostb
    # other groups...
I have created the sub-groups, with add_host, for each group in the inventory file. The sub-group names add the prefix "sub-" to the original inventory group names, like sub-(one/two/etc..).
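Not shown in the question, but for illustration, a minimal hypothetical sketch of what such an add_host task inside newsgroups.yaml could look like (group and file names are only examples):
# hypothetical sketch: take the first host of each inventory group and
# put it into a "sub-" group created at runtime
- name: Build sub-group_one from the first host of group_one
  add_host:
    name: "{{ groups['group_one'] | first }}"
    groups: sub-group_one
- name: Build sub-group_two from the first host of group_two
  add_host:
    name: "{{ groups['group_two'] | first }}"
    groups: sub-group_two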
In hostvars I retrieve this situation:
"groups": {
"all": [
"hostA01",
"hostA02",
"hostA03",
"hostB01",
"hostB02",
"hostB03",
"other_host_from _other groups"
],
"group_one": [
"hostA01",
"hostA02",
"hostA03"
],
"group_two": [
"hostB01",
"hostB02",
"hostB03"
],
"other_group": [
"other_host",
.....
],
"sub-group_one": [
"hostA01"
],
"sub_group_two": [
"hostB01"
],
"sub-other_group": [
"other_first_host"
],
"ungrouped": []
},
"vars_for_group": {
"group_one": {
"cluster: hosta
"vip: 192.168.10.10
"home: /cluster/hosta
"user: usr_hosta
"pass: pass_hosta,
"ansible_host": "host1a",
"host": "hostA01"
},
"group_two": {
"cluster: hostb
"vip: 192.168.10.20
"home: /cluster/hostb
"user: usr_hostb
"pass: pass_hostb,
"ansible_host": "host1b",
"host": "hostB01"
},
"otehr_groups": {
.......
},
},
"inventory_hostname": "127.0.0.1",
"inventory_hostname_short": "127",
"module_setup": true,
"playbook_dir": ""/home/foo/playbook",
"choice": "'sub-two'"
}
}
The variable "choice" in the last line of hostvars derives from another tool (it arrives as an environment variable) and indicates the group on which the final user wants to operate (one, two, ..., all).
Now, my playbook is:
---
- hosts: 127.0.0.1
  become: yes
  gather_facts: yes
  remote_user: root
  tasks:
    - name: include news_groups
      ansible.builtin.include_tasks:
        newsgroups.yaml
      vars:
        choice: "{{ lookup('env','CHOICE') }}"

- hosts: "{{ hostvars['localhost']['groups']['{{ hostvars['localhost']['choice'] }}'] }}"
  name: second_play
  become: yes
  gather_facts: no
  remote_user: root
  tasks:

- hosts: all
  name: other play
  gather_facts: no
  vars:
    other_vars: ...
  tasks:
    .....
Unfortunately this line doesn't work:
- hosts: "{{ hostvars['localhost']['groups']['{{ hostvars['localhost']['choice'] }}'] }}"
I have made several attempts with many different configurations and syntaxes (without {{...}}, with and without "double quotes" and 'single quotes'), but it always either seems syntactically wrong or still cannot find the group indicated by the variable. Or could the approach itself be wrong?
Any suggestions?
Thanks in advance
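Not from the original thread, but as a heavily hedged, untested sketch of the usual direction: Jinja2 moustaches cannot be nested, and a group name is itself a valid hosts pattern, so a single expression is enough, assuming choice holds the exact group name and was persisted with set_fact on the 127.0.0.1 host of the first play (plain play vars do not survive into hostvars):
# untested sketch; assumes `choice` was stored with set_fact on 127.0.0.1
# and resolves to an existing group name such as "sub-group_one"
- hosts: "{{ hostvars['127.0.0.1']['choice'] | default('all') }}"
  name: second_play
  gather_facts: no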

ansible magic variables not returning values

I'm trying to build an /etc/hosts file; however, it seems that I can't get the hostvars to show up when running a playbook, while on the command line it works, up to a point.
Here is my ansible.cfg file:
[defaults]
ansible_managed = Please do not change this file directly since it is managed by Ansible and will be overwritten
library = ./library
module_utils = ./module_utils
action_plugins = plugins/actions
callback_plugins = plugins/callback
filter_plugins = plugins/filter
roles_path = ./roles
# Be sure the user running Ansible has permissions on the logfile
log_path = $HOME/ansible/ansible.log
inventory = hosts
forks = 20
host_key_checking = False
gathering = smart
fact_caching = jsonfile
fact_caching_connection = $HOME/ansible/facts
fact_caching_timeout = 7200
nocows = 1
callback_whitelist = profile_tasks
stdout_callback = yaml
force_valid_group_names = ignore
inject_facts_as_vars = False
# Disable them in the context of https://review.openstack.org/#/c/469644
retry_files_enabled = False
# This is the default SSH timeout to use on connection attempts
# CI slaves are slow so by setting a higher value we can avoid the following error:
# Timeout (12s) waiting for privilege escalation prompt:
timeout = 60
[ssh_connection]
# see: https://github.com/ansible/ansible/issues/11536
control_path = %(directory)s/%%h-%%r-%%p
ssh_args = -o ControlMaster=auto -o ControlPersist=600s
pipelining = True
# Option to retry failed ssh executions if the failure is encountered in ssh itself
retries = 10
Here is my playbook:
- name: host file update
  hosts: baremetal
  become: true
  gather_facts: true
  vars:
    primarydomain: "cephcluster.local"
  tasks:
    - name: print ipv4 info
      debug:
        msg: "IPv4 addresses: {{ ansible_default_ipv4.address }}"
    - name: update host file
      lineinfile:
        dest: /etc/hosts
        regexp: '{{ hostvars[item].ansible_default_ipv4.address }}.*{{ item }} {{ item }}.{{ primarydomain }}$'
        line: "{{ hostvars[item].ansible_default_ipv4.address }} {{ item }} {{ item }}.{{ primarydomain }}"
        state: present
      with_items: "{{ groups.baremetal }}"
Here is my inventory file:
[baremetal]
svr1 ansible_host=192.168.59.10
svr2 ansible_host=192.168.59.11
svr3 ansible_host=192.168.59.12
When I run the playbook I get:
FAILED! =>
msg: |-
The task includes an option with an undefined variable. The error was: 'ansible_default_ipv4' is undefined
However when I run
ansible baremetal -m gather_facts -a "filter=ansible_default_ipv4"
I get:
| SUCCESS => {
"ansible_facts": {
"ansible_default_ipv4": {
"address": "192.168.59.20",
"alias": "eno1",
"broadcast": "192.168.59.255",
"gateway": "192.168.59.1",
"interface": "eno1",
"macaddress": "80:18:44:e0:4b:a4",
"mtu": 1500,
"netmask": "255.255.255.0",
"network": "192.168.59.0",
"type": "ether"
}
},
"changed": false,
"deprecations": [],
"warnings": []
}
But; if I run
ansible baremetal -m gather_facts -a "filter=ansible_default_ipv4.address"
I get nothing in the return:
SUCCESS => {
"ansible_facts": {},
"changed": false,
"deprecations": [],
"warnings": []
}
I have even tried in the playbook:
- debug:
    msg: "IPv4 addresses: {{ ansible_default_ipv4 }}"
and nothing gets returned.
Not sure what I'm missing.
You should get used to using (and abusing) the debug module.
Given that the ansible ad-hoc command showed you that the facts are under ansible_facts, you could have run a simple task in your playbook:
- debug:
    var: ansible_facts
From that, you would have discovered that the fact you are looking for is nested in the ansible_facts dictionary under the key default_ipv4, and that you can access its address with the address property.
So, your debug task ends up being:
- debug:
    msg: "IPv4 addresses: {{ ansible_facts.default_ipv4.address }}"

Yml must be stored as a dictionary/hash

I have a yaml file of usernames and their ssh keys stored like this:
---
- user: bob
  name: bob McBob
  ssh_keys:
    - ssh-rsa ...
- user: fred
  name: fred McFred
  ssh_keys:
    - ssh-rsa ...
I'm trying to grab the user and ssh_keys keys so I can use this file to set up the users on Linux hosts.
It seems Ansible does not like the format of this file though, as this simple task throws an error:
- name: Get SSH Keys
  include_vars:
    file: ../admins.yml
    name: ssh_keys
TASK [network-utility-servers : Get SSH Keys] *******************************************************************************************************************************
task path: main.yml:1
fatal: [127.0.0.1]: FAILED! => {
"ansible_facts": {
"ssh_keys": {}
},
"ansible_included_var_files": [],
"changed": false,
"message": "admins.yml must be stored as a dictionary/hash"
}
Unfortunately I can't change the format of the admins.yml file as it is used in other tools and changing the format will break them.
Any suggestions on how I can work around this?
Looks like ansible wants the admins.yml file to look like this:
---
foo:
  - user: bob
    name: bob mcbob
    ssh_keys:
      - ssh-rsa ..
As you found out, include_vars expects a file containing dict key(s) at the top level. But there are other ways to read YAML files in Ansible.
If you cannot change the file, the simplest way is to read its content inside a variable using a file lookup and the from_yaml filter.
Here is an example playbook. For this test, your above example data was stored in admins.yml in the same folder as the playbook. Adapt accordingly.
---
- hosts: localhost
  gather_facts: false
  vars:
    admins: "{{ lookup('file', 'admins.yml') | from_yaml }}"
  tasks:
    - name: Show admin users list
      debug:
        var: admins
Which gives:
PLAY [localhost] ***********************************************************************************************************************************************************************************************************************
TASK [Show admin users list] ***********************************************************************************************************************************************************************************************************
ok: [localhost] => {
"admins": [
{
"name": "bob McBob",
"ssh_keys": [
"ssh-rsa ..."
],
"user": "bob"
},
{
"name": "fred McFred",
"ssh_keys": [
"ssh-rsa ..."
],
"user": "fred"
}
]
}
PLAY RECAP *****************************************************************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
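As a possible follow-up (not part of the answer above, just an illustration using the standard user and authorized_key modules), the admins list could then drive user creation and key deployment:
# illustrative only: create each admin and install all of their keys
- name: Create admin users
  user:
    name: "{{ item.user }}"
    comment: "{{ item.name }}"
  loop: "{{ admins }}"
- name: Install their SSH keys
  authorized_key:
    user: "{{ item.0.user }}"
    key: "{{ item.1 }}"
  loop: "{{ admins | subelements('ssh_keys') }}"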
As you said, the given vars file does not have a valid format for include_vars. You could use a task to fix the file in the way you described, and then attempt to load it.
The lineinfile module can do the trick, as in the example below:
---
- hosts: localhost
  gather_facts: false
  tasks:
    - name: fix vars file
      lineinfile:
        path: "{{ playbook_dir }}/vars_file.yml"
        insertafter: "^---$"
        line: "ssh_keys:"
        backup: yes
    - name: Get SSH Keys
      include_vars:
        file: "{{ playbook_dir }}/vars_file.yml"
    - debug: var=ssh_keys
If you are not supposed to edit that file, you could copy it to a new file name (vars_file_fixed.yml), then apply the lineinfile and load that copy instead.
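A minimal sketch of that "copy, then fix the copy" variant (untested; vars_file_fixed.yml is just an example name):
- name: Copy the vars file so the original stays untouched
  copy:
    src: "{{ playbook_dir }}/vars_file.yml"
    dest: "{{ playbook_dir }}/vars_file_fixed.yml"
- name: Add a top-level key to the copy
  lineinfile:
    path: "{{ playbook_dir }}/vars_file_fixed.yml"
    insertafter: "^---$"
    line: "ssh_keys:"
- name: Get SSH Keys
  include_vars:
    file: "{{ playbook_dir }}/vars_file_fixed.yml"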
cheers

yml format for inventory file...fails

---
all:
  zones:
    - name: accessswitch
      hosts:
        - name: accessswitch-x0
          ip: 192.168.4.xx
    - name: groupswitch
      hosts:
        - name: groupswitch-x1
          ip: 192.168.4.xx
        - name: groupswitch-x2
          ip: 192.168.4.xx
        - name: groupswitch-x3
          ip: 192.168.4.xx
Basically I have an access switch, and many group switches are connected to it... I already tried "children" but this does not work. A typical INI file works...
Also I have some variables... which apply to all zones, i.e. the access switch & group switches...
In the future there will be more than one... multiple access switches...
Some docs use ansible-host:... confusing...
And yes, I checked the YAML structure...
cat#catwomen:~/workspace/ansible-simulator/inventories/simulator/host_vars$ sudo ansible -i testdata.yaml all -m ping
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match
'all'
You have YAML indentation problems, which is an error (indentation in YAML is significant). Moreover, you must follow the specific format described in the documentation.
Very basically, the file is a nesting of groups in parent/child relationships, starting with the special top-level group all. A group definition looks like:
---
my_group1: # group name, use `all` at top level
  vars:
    # definition of vars for this group. Apply to all hosts if defined for `all`
  hosts:
    # hosts in that group (a host is ungrouped if it appears only in `all`)
  children:
    # mapping of children group definitions (repeat the current format)
A host definition looks like:
my_host1:
  # definition of vars specific to that host, if any
A var definition (either for vars in group or for a specific host) looks like:
my_variable1: some value
If I understand your example correctly, here is how your YAML inventory should look (inventories/so_example.yml):
---
all:
  children:
    accessswitch:
      hosts:
        accessswitch-x0:
          ansible_host: 192.168.4.xx
    groupswitch:
      hosts:
        groupswitch-x1:
          ansible_host: 192.168.4.xx
        groupswitch-x2:
          ansible_host: 192.168.4.xx
        groupswitch-x3:
          ansible_host: 192.168.4.xx
You can then easily see how the above is interpreted with the ansible-inventory command:
$ ansible-inventory -i inventories/so_example.yml --graph
#all:
|--#accessswitch:
| |--accessswitch-x0
|--#groupswitch:
| |--groupswitch-x1
| |--groupswitch-x2
| |--groupswitch-x3
|--#ungrouped:
$ ansible-inventory -i inventories/so_example.yml --list
{
"_meta": {
"hostvars": {
"accessswitch-x0": {
"ansible_host": "192.168.4.xx"
},
"groupswitch-x1": {
"ansible_host": "192.168.4.xx"
},
"groupswitch-x2": {
"ansible_host": "192.168.4.xx"
},
"groupswitch-x3": {
"ansible_host": "192.168.4.xx"
}
}
},
"accessswitch": {
"hosts": [
"accessswitch-x0"
]
},
"all": {
"children": [
"accessswitch",
"groupswitch",
"ungrouped"
]
},
"groupswitch": {
"hosts": [
"groupswitch-x1",
"groupswitch-x2",
"groupswitch-x3"
]
}
}
The indentation is not correct at line #6. Try the below, please.
---
all:
  zones:
    - name: accessswitch
      hosts:
        - name: accessswitch-x0
          ip: 192.168.4.xx
    - name: groupswitch
      hosts:
        - name: groupswitch-x1
          ip: 192.168.4.xx
        - name: groupswitch-x2
          ip: 192.168.4.xx
        - name: groupswitch-x3
          ip: 192.168.4.xx

How do I register a variable and persist it between plays targeted on different nodes?

I have an Ansible playbook, where I would like a variable I register in a first play targeted on one node to be available in a second play, targeted on another node.
Here is the playbook I am using:
---
- hosts: localhost
  gather_facts: no
  tasks:
    - command: echo "hello world"
      register: foo

- hosts: main
  gather_facts: no
  tasks:
    - debug:
        msg: "{{ foo.stdout }}"
But, when I try to access the variable in the second play, targeted on main, I get this message:
The task includes an option with an undefined variable. The error was: 'foo' is undefined
How can I access foo, registered on localhost, from main?
The problem you're running into is that you're trying to reference facts/variables of one host from those of another host.
You need to keep in mind that in Ansible, the variable foo assigned to the host localhost is distinct from the variable foo assigned to the host main or any other host.
If you want to access one hosts facts/variables from another host then you need to explicitly reference it via the hostvars variable. There's a bit more of a discussion on this in this question.
Suppose you have a playbook like this:
- hosts: localhost
  gather_facts: no
  tasks:
    - command: echo "hello world"
      register: foo

- hosts: localhost
  gather_facts: no
  tasks:
    - debug:
        var: foo
This will work because you're referencing the host localhost and localhost's instance of the variable foo in both plays.
The output of this playbook is something like this:
PLAY [localhost] **************************************************
TASK: [command] ***************************************************
changed: [localhost]
PLAY [localhost] **************************************************
TASK: [debug] *****************************************************
ok: [localhost] => {
"var": {
"foo": {
"changed": true,
"cmd": [
"echo",
"hello world"
],
"delta": "0:00:00.004585",
"end": "2015-11-24 20:49:27.462609",
"invocation": {
"module_args": "echo \"hello world\",
"module_complex_args": {},
"module_name": "command"
},
"rc": 0,
"start": "2015-11-24 20:49:27.458024",
"stderr": "",
"stdout": "hello world",
"stdout_lines": [
"hello world"
],
"warnings": []
}
}
}
If you modify this playbook slightly to run the first play on one host and the second play on a different host, you'll get the error that you encountered.
Solution
The solution is to use Ansible's built-in hostvars variable to have the second host explicitly reference the first hosts variable.
So modify the first example like this:
- hosts: localhost
  gather_facts: no
  tasks:
    - command: echo "hello world"
      register: foo

- hosts: main
  gather_facts: no
  tasks:
    - debug:
        var: foo
      when: foo is defined

    - debug:
        var: hostvars['localhost']['foo']
        ## alternatively, you can use:
        # var: hostvars.localhost.foo
      when: hostvars['localhost']['foo'] is defined
The output of this playbook shows that the first task is skipped because foo is not defined by the host main.
But the second task succeeds because it's explicitly referencing localhost's instance of the variable foo:
TASK: [debug] *************************************************
skipping: [main]
TASK: [debug] *************************************************
ok: [main] => {
"var": {
"hostvars['localhost']['foo']": {
"changed": true,
"cmd": [
"echo",
"hello world"
],
"delta": "0:00:00.005950",
"end": "2015-11-24 20:54:04.319147",
"invocation": {
"module_args": "echo \"hello world\"",
"module_complex_args": {},
"module_name": "command"
},
"rc": 0,
"start": "2015-11-24 20:54:04.313197",
"stderr": "",
"stdout": "hello world",
"stdout_lines": [
"hello world"
],
"warnings": []
}
}
}
So, in a nutshell, you want to modify the variable references in your main playbook to reference the localhost variables in this manner:
{{ hostvars['localhost']['foo'] }}
{# alternatively, you can use: #}
{{ hostvars.localhost.foo }}
Use a dummy host and its variables
For example, to pass a Kubernetes token and hash from the master to the workers.
On master
- name: "Cluster token"
shell: kubeadm token list | cut -d ' ' -f1 | sed -n '2p'
register: K8S_TOKEN
- name: "CA Hash"
shell: openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
register: K8S_MASTER_CA_HASH
- name: "Add K8S Token and Hash to dummy host"
add_host:
name: "K8S_TOKEN_HOLDER"
token: "{{ K8S_TOKEN.stdout }}"
hash: "{{ K8S_MASTER_CA_HASH.stdout }}"
- name:
debug:
msg: "[Master] K8S_TOKEN_HOLDER K8S token is {{ hostvars['K8S_TOKEN_HOLDER']['token'] }}"
- name:
debug:
msg: "[Master] K8S_TOKEN_HOLDER K8S Hash is {{ hostvars['K8S_TOKEN_HOLDER']['hash'] }}"
On worker
- name:
  debug:
    msg: "[Worker] K8S_TOKEN_HOLDER K8S token is {{ hostvars['K8S_TOKEN_HOLDER']['token'] }}"

- name:
  debug:
    msg: "[Worker] K8S_TOKEN_HOLDER K8S Hash is {{ hostvars['K8S_TOKEN_HOLDER']['hash'] }}"

- name: "Kubeadmn join"
  shell: >
    kubeadm join --token={{ hostvars['K8S_TOKEN_HOLDER']['token'] }}
    --discovery-token-ca-cert-hash sha256:{{ hostvars['K8S_TOKEN_HOLDER']['hash'] }}
    {{K8S_MASTER_NODE_IP}}:{{K8S_API_SERCURE_PORT}}
I have had similar issues with even the same host, but across different plays. The thing to remember is that facts, not variables, are the persistent things across plays. Here is how I get around the problem.
#!/usr/local/bin/ansible-playbook --inventory=./inventories/ec2.py
---
- name: "TearDown Infrastructure !!!!!!!"
  hosts: localhost
  gather_facts: no
  vars:
    aws_state: absent
  vars_prompt:
    - name: "aws_region"
      prompt: "Enter AWS Region:"
      default: 'eu-west-2'
  tasks:
    - name: Make vars persistant
      set_fact:
        aws_region: "{{aws_region}}"
        aws_state: "{{aws_state}}"

- name: "TearDown Infrastructure hosts !!!!!!!"
  hosts: monitoring.ec2
  connection: local
  gather_facts: no
  tasks:
    - name: set the facts per host
      set_fact:
        aws_region: "{{hostvars['localhost']['aws_region']}}"
        aws_state: "{{hostvars['localhost']['aws_state']}}"
    - debug:
        msg="state {{aws_state}} region {{aws_region}} id {{ ec2_id }} "

- name: last few bits
  hosts: localhost
  gather_facts: no
  tasks:
    - debug:
        msg="state {{aws_state}} region {{aws_region}} "
results in
Enter AWS Region: [eu-west-2]:
PLAY [TearDown Infrastructure !!!!!!!] ***************************************************************************************************************************************************************************************************
TASK [Make vars persistant] **************************************************************************************************************************************************************************************************************
ok: [localhost]
PLAY [TearDown Infrastructure hosts !!!!!!!] *********************************************************************************************************************************************************************************************
TASK [set the facts per host] ************************************************************************************************************************************************************************************************************
ok: [XXXXXXXXXXXXXXXXX]
TASK [debug] *****************************************************************************************************************************************************************************************************************************
ok: [XXXXXXXXXXX] => {
"changed": false,
"msg": "state absent region eu-west-2 id i-0XXXXX1 "
}
PLAY [last few bits] *********************************************************************************************************************************************************************************************************************
TASK [debug] *****************************************************************************************************************************************************************************************************************************
ok: [localhost] => {
"changed": false,
"msg": "state absent region eu-west-2 "
}
PLAY RECAP *******************************************************************************************************************************************************************************************************************************
XXXXXXXXXXXXX : ok=2 changed=0 unreachable=0 failed=0
localhost : ok=2 changed=0 unreachable=0 failed=0
You can use a known Ansible behaviour: the group_vars folder, which loads variables into your playbook. This is intended to be used together with inventory groups, but it still works as a global variable declaration. If you put a file or folder in there with the same name as the group you want some variable to be present for, Ansible will make sure it happens!
For example, let's create a file called all and put a timestamp variable there. Then, whenever you need it, you can call that variable, and it will be available to every host declared in any play inside your playbook.
I usually do this to update a timestamp once in the first play and then use the value to write files and folders using that same timestamp.
I'm using the lineinfile module to change the line starting with timestamp:
Check if it fits your purpose.
On your group_vars/all
timestamp: t26032021165953
On the playbook, in the first play:
- hosts: localhost
  gather_facts: no
  tasks:
    - name: Set timestamp on group_vars
      lineinfile:
        path: "{{ playbook_dir }}/group_vars/all"
        insertafter: EOF
        regexp: '^timestamp:'
        line: "timestamp: t{{ lookup('pipe','date +%d%m%Y%H%M%S') }}"
        state: present
On the playbook, in the second play:
- hosts: any_hosts
  gather_facts: no
  tasks:
    - name: Check if timestamp is there
      debug:
        msg: "{{ timestamp }}"
