I have multiple YAML files on a remote machine. I would like to parse those files to get the names of each kind of object (Deployment, ConfigMap, Secret). For example:
...
kind: Deployment
metadata:
name: pr1-dep
...
kind: Secret
metadata:
name: pr1
...
....
kind: ConfigMap
metadata:
name: cm-pr1
....
Expected result:
3 variables:
deployments = [pr1-dep]
secrets = [pr1]
configmaps = [cm-pr1]
I started with:
- shell: cat "{{ item.desc }}"
with_items:
- "{{ templating_register.results }}"
register: objs
but I have no idea how to correctly parse item.stdout from objs.
Ansible has a from_yaml filter that takes YAML text as input and outputs an Ansible data structure. So for example you can write something like this:
- hosts: localhost
gather_facts: false
tasks:
- name: Read objects
command: "cat {{ item }}"
register: objs
loop:
- deployment.yaml
- configmap.yaml
- secret.yaml
- debug:
msg:
- "kind: {{ obj.kind }}"
- "name: {{ obj.metadata.name }}"
vars:
obj: "{{ item.stdout | from_yaml }}"
loop: "{{ objs.results }}"
loop_control:
label: "{{ item.item }}"
Given your example files, this playbook would output:
PLAY [localhost] ***************************************************************
TASK [Read objects] ************************************************************
changed: [localhost] => (item=deployment.yaml)
changed: [localhost] => (item=configmap.yaml)
changed: [localhost] => (item=secret.yaml)
TASK [debug] *******************************************************************
ok: [localhost] => (item=deployment.yaml) => {
"msg": [
"kind: Deployment",
"name: pr1-dep"
]
}
ok: [localhost] => (item=configmap.yaml) => {
"msg": [
"kind: ConfigMap",
"name: pr1-cm"
]
}
ok: [localhost] => (item=secret.yaml) => {
"msg": [
"kind: Secret",
"name: pr1"
]
}
PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
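If a single file contains several YAML documents separated by ---, the from_yaml_all filter (rather than from_yaml) parses them all. A minimal sketch, assuming a hypothetical multi-document file manifests.yaml:
- name: Read multi-document file
  command: cat manifests.yaml
  register: multi
- debug:
    msg: "{{ item.kind }}: {{ item.metadata.name }}"
  # each loop item is one parsed YAML document
  loop: "{{ multi.stdout | from_yaml_all | list }}"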
Creating the variables you've asked for is a little trickier. Here's
one option:
- hosts: localhost
gather_facts: false
tasks:
- name: Read objects
command: "cat {{ item }}"
register: objs
loop:
- deployment.yaml
- configmap.yaml
- secret.yaml
- name: Create variables
set_fact:
names: >-
{{
names|combine({
obj.kind.lower(): [obj.metadata.name]
}, list_merge='append')
}}
vars:
names: {}
obj: "{{ item.stdout | from_yaml }}"
loop: "{{ objs.results }}"
loop_control:
label: "{{ item.item }}"
- debug:
var: names
This creates a single variable named names that at the end of the
playbook will contain:
{
"configmap": [
"pr1-cm"
],
"deployment": [
"pr1-dep"
],
"secret": [
"pr1"
]
}
The key to the above playbook is the combine filter, which merges dictionaries. With list_merge='append', keys whose values are lists are merged by appending to the existing list rather than replacing it.
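As a minimal standalone illustration of that behaviour (the values here are made up):
- debug:
    msg: "{{ {'deployment': ['pr1-dep']} | combine({'deployment': ['pr2-dep']}, list_merge='append') }}"
This prints {'deployment': ['pr1-dep', 'pr2-dep']} instead of replacing the list.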
Another option is to include the dictionaries from the files into new variables, e.g.
- include_vars:
file: "{{ item }}"
name: "objs_{{ item|splitext|first }}"
register: result
loop:
- deployment.yaml
- configmap.yaml
- secret.yaml
This will create dictionaries objs_deployment, objs_configmap, and objs_secret. Next, you can either use the dictionaries
- set_fact:
objs: "{{ objs|d({})|combine({_key: _val}) }}"
loop: "{{ query('varnames', 'objs_') }}"
vars:
_obj: "{{ lookup('vars', item) }}"
_key: "{{ _obj.kind }}"
_val: "{{ _obj.metadata.name }}"
, or the registered data
- set_fact:
objs: "{{ dict(_keys|zip(_vals)) }}"
vars:
_query1: '[].ansible_facts.*.kind'
_keys: "{{ result.results|json_query(_query1)|flatten }}"
_query2: '[].ansible_facts.*.metadata[].name'
_vals: "{{ result.results|json_query(_query2)|flatten }}"
Both options give
objs:
ConfigMap: cm-pr1
Deployment: pr1-dep
Secret: pr1
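If you still want the three separate list variables from the original question, they could presumably be derived from this dictionary, e.g. (a sketch; with one file per kind, each list has a single element):
- set_fact:
    deployments: "{{ [objs['Deployment']] }}"
    secrets: "{{ [objs['Secret']] }}"
    configmaps: "{{ [objs['ConfigMap']] }}"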
I want to perform the following command using an Ansible playbook:
sed -i.bak 's/^mirror/#mirror/;s/#baseurl/baseurl/;s/mirror.centos/vault.centos/' /etc/yum.repos.d/CentOS-*.repo
A basic working Ansible playbook looks like this:
---
- name: Resolve any CentOS issues
hosts: "{{ nodes }}"
tasks:
- shell: sed -i.bak 's/^mirror/#mirror/;s/#baseurl/baseurl/;s/mirror.centos/vault.centos/' /opt/yum.repos.d/CentOS-*.repo
However, I would like to create a more advanced playbook, which might look something like this:
---
- name: Modify CentOS Reps
hosts: localhost
vars:
filelist: []
reg:
- ^mirrorlist
- ^#baseurl
- mirror.centos
rep:
- #mirror
- baseurl
- vault.centos
tasks:
- name: Find matching repo files and register to a variable
find:
paths: /etc/yum.repos.d
file_type: file
recurse: no
patterns: CentOS-*.repo
register: output
- name: Adding filelist to the LIST
no_log: true
set_fact:
filelist: "{{ filelist + [item.path]}}"
with_items: "{{ output.files }}"
- debug: var=filelist
- name: Change the lines
replace:
path: "{{ item.0 }}"
regexp: "{{ item.1.regexp }}"
replace: "{{ item.1.replace }}"
backup: true
with_nested:
- "{{ filelist }}"
- '{ regexp: "{{ reg }}", replace: "{{ rep }}" }'
The above gives the following attribute error:
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'ansible.utils.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'regexp'\n\nThe error appears to be in '/etc/ansible/playbooks/centos-correct-repo301.yml': line 33, column 6, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Change the lines\n ^ here\n"}
Any thoughts?
Thanks
There are more options. For example,
Put the regex/replace logic into a dictionary
reg_rep:
- {reg: '^mirror', rep: '#mirror'}
- {reg: '^#baseurl', rep: 'baseurl'}
- {reg: 'mirror.centos', rep: 'vault.centos'}
Declare the list of the files
filelist: "{{ output.files|map(attribute='path')|list }}"
and find the files (no changes to your code in this task)
- name: Find matching repo files and register to a variable
find:
paths: /etc/yum.repos.d
file_type: file
recurse: no
patterns: CentOS-*.repo
register: output
Update the configuration
- name: Change the lines
replace:
path: "{{ item.0 }}"
regexp: "{{ item.1.reg }}"
replace: "{{ item.1.rep }}"
backup: true
with_nested:
- "{{ filelist }}"
- "{{ reg_rep }}"
loop_control:
label: "{{ item.0|basename }}"
Example of a complete playbook for testing
shell> cat pb.yml
- hosts: test_24
vars:
reg_rep:
- {reg: '^mirror', rep: '#mirror'}
- {reg: '^#baseurl', rep: 'baseurl'}
- {reg: 'mirror.centos', rep: 'vault.centos'}
filelist: "{{ output.files|map(attribute='path')|list }}"
tasks:
- name: Find matching repo files and register to a variable
find:
paths: /etc/yum.repos.d
file_type: file
recurse: no
patterns: CentOS-*.repo
register: output
- debug:
var: filelist
- debug:
msg: |
path: "{{ item.0 }}"
regexp: "{{ item.1.reg }}"
replace: "{{ item.1.rep }}"
backup: true
with_nested:
- "{{ filelist }}"
- "{{ reg_rep }}"
loop_control:
label: "{{ item.0|basename }}"
when: debug|d(false)|bool
- name: Change the lines
replace:
path: "{{ item.0 }}"
regexp: "{{ item.1.reg }}"
replace: "{{ item.1.rep }}"
backup: true
with_nested:
- "{{ filelist }}"
- "{{ reg_rep }}"
loop_control:
label: "{{ item.0|basename }}"
gives (abridged, running in --check --diff mode)
shell> ansible-playbook pb.yml -CD
PLAY [test_24] *******************************************************************************
TASK [Find matching repo files and register to a variable] ***********************************
ok: [test_24]
TASK [debug] *********************************************************************************
ok: [test_24] =>
filelist:
- /etc/yum.repos.d/CentOS-Linux-ContinuousRelease.repo
- /etc/yum.repos.d/CentOS-Linux-Debuginfo.repo
- /etc/yum.repos.d/CentOS-Linux-Devel.repo
- /etc/yum.repos.d/CentOS-Linux-Extras.repo
- /etc/yum.repos.d/CentOS-Linux-FastTrack.repo
- /etc/yum.repos.d/CentOS-Linux-HighAvailability.repo
- /etc/yum.repos.d/CentOS-Linux-Media.repo
- /etc/yum.repos.d/CentOS-Linux-Plus.repo
- /etc/yum.repos.d/CentOS-Linux-PowerTools.repo
- /etc/yum.repos.d/CentOS-Linux-Sources.repo
- /etc/yum.repos.d/CentOS-Linux-BaseOS.repo
- /etc/yum.repos.d/CentOS-Linux-AppStream.repo
TASK [debug] *********************************************************************************
skipping: [test_24] => (item=CentOS-Linux-ContinuousRelease.repo)
skipping: [test_24] => (item=CentOS-Linux-ContinuousRelease.repo)
...
skipping: [test_24] => (item=CentOS-Linux-AppStream.repo)
skipping: [test_24]
TASK [Change the lines] **********************************************************************
--- before: /etc/yum.repos.d/CentOS-Linux-ContinuousRelease.repo
+++ after: /etc/yum.repos.d/CentOS-Linux-ContinuousRelease.repo
@@ -17,7 +17,7 @@
[cr]
name=CentOS Linux $releasever - ContinuousRelease
-mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=cr&infra=$infra
+#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=cr&infra=$infra
#baseurl=http://mirror.centos.org/$contentdir/$releasever/cr/$basearch/os/
gpgcheck=1
enabled=0
changed: [test_24] => (item=CentOS-Linux-ContinuousRelease.repo)
--- before: /etc/yum.repos.d/CentOS-Linux-ContinuousRelease.repo
+++ after: /etc/yum.repos.d/CentOS-Linux-ContinuousRelease.repo
@@ -18,7 +18,7 @@
[cr]
name=CentOS Linux $releasever - ContinuousRelease
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=cr&infra=$infra
-#baseurl=http://mirror.centos.org/$contentdir/$releasever/cr/$basearch/os/
+baseurl=http://mirror.centos.org/$contentdir/$releasever/cr/$basearch/os/
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
changed: [test_24] => (item=CentOS-Linux-ContinuousRelease.repo)
...
--- before: /etc/yum.repos.d/CentOS-Linux-AppStream.repo
+++ after: /etc/yum.repos.d/CentOS-Linux-AppStream.repo
@@ -11,7 +11,7 @@
[appstream]
name=CentOS Linux $releasever - AppStream
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=AppStream&infra=$infra
-#baseurl=http://mirror.centos.org/$contentdir/$releasever/AppStream/$basearch/os/
+#baseurl=http://vault.centos.org/$contentdir/$releasever/AppStream/$basearch/os/
baseurl=http://vault.centos.org/$contentdir/$releasever/AppStream/$basearch/os/
gpgcheck=1
enabled=0
changed: [test_24] => (item=CentOS-Linux-AppStream.repo)
PLAY RECAP ***********************************************************************************
test_24: ok=3 changed=1 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
Practice:
I have a file files.yml with a list of files and their respective MD5 hashes, like:
files:
- name: /opt/file_compare1.tar
hash: 9cd599a3523898e6a12e13ec787da50a /opt/file_compare1.tar
- name: /opt/file_compare2.tar.gz
hash: d41d8cd98f00b204e9800998ecf8427e /opt/file_compare2.tar.gz
I need to create a playbook that checks, for each file in this list, whether the current hash is still the same or has changed. The playbook should print a debug message like below:
---
- hosts: localhost
connection: local
vars_files:
- files.yml
tasks:
- name: Use md5 to calculate checksum
stat:
path: "{{ item.name }}"
checksum_algorithm: md5
register: hash_check
with_items:
- "{{ files }}"
- name: Debug files - Different
debug:
msg: |
"Hash changed: {{ item.name }}"
when:
- item.hash != hash_check
with_items:
- "{{ files }}"
- name: Debug files - Equal
debug:
msg: |
"Hash NOT changed: {{ item.name }}"
when:
- item.hash == hash_check
with_items:
- "{{ files }}"
- debug:
msg: |
- "{{ hash_check }} {{ item.name }}"
with_items:
- "{{ files }}"
For example, given the files
files:
- name: /scratch/file_compare1.tar
hash: 4f8805b4b64dcc575547ec1c63793aec /scratch/file_compare1.tar
- name: /scratch/file_compare2.tar.gz
hash: 2dc4f1e9ca4081cc49d25195627982ef /scratch/file_compare2.tar.gz
the tasks below
- name: Use md5 to calculate checksum
stat:
path: "{{ item.name }}"
checksum_algorithm: md5
register: hash_check
loop: "{{ files }}"
- name: Debug files - Equal
debug:
msg: |
Hash NOT changed: {{ item.0.name }}
{{ item.0.hash.split()|first }}
{{ item.1 }}
with_together:
- "{{ files }}"
- "{{ hash_check.results|map(attribute='stat.checksum')|list }}"
when: item.0.hash.split()|first == item.1
give
msg: |-
Hash NOT changed: /scratch/file_compare1.tar
4f8805b4b64dcc575547ec1c63793aec
4f8805b4b64dcc575547ec1c63793aec
msg: |-
Hash NOT changed: /scratch/file_compare2.tar.gz
2dc4f1e9ca4081cc49d25195627982ef
2dc4f1e9ca4081cc49d25195627982ef
A more robust option would be to create a dictionary with the calculated hashes
- name: Use md5 to calculate checksum
stat:
path: "{{ item.name }}"
checksum_algorithm: md5
register: hash_check
loop: "{{ files }}"
- set_fact:
path_hash: "{{ dict(_path|zip(_hash)) }}"
vars:
_path: "{{ hash_check.results|map(attribute='stat.path')|list }}"
_hash: "{{ hash_check.results|map(attribute='stat.checksum')|list }}"
gives
path_hash:
/scratch/file_compare1.tar: 4f8805b4b64dcc575547ec1c63793aec
/scratch/file_compare2.tar.gz: 2dc4f1e9ca4081cc49d25195627982ef
Then use this dictionary to compare the hashes. For example, the task below gives the same results
- name: Debug files - Equal
debug:
msg: |
Hash NOT changed: {{ item.name }}
{{ item.hash.split()|first }}
{{ path_hash[item.name] }}
loop: "{{ files }}"
when: item.hash.split()|first == path_hash[item.name]
The next option is to create a dictionary mapping the original hashes to the file names, plus the lists of original and calculated hashes
- name: Use md5 to calculate checksum
stat:
path: "{{ item.name }}"
checksum_algorithm: md5
register: hash_check
loop: "{{ files }}"
- set_fact:
hash_name: "{{ dict(_hash|zip(_name)) }}"
hash_orig: "{{ _hash }}"
hash_stat: "{{ hash_check.results|map(attribute='stat.checksum')|list }}"
vars:
_hash: "{{ files|map(attribute='hash')|map('split')|map('first')|list }}"
_name: "{{ files|map(attribute='name')|list }}"
gives
hash_name:
2dc4f1e9ca4081cc49d25195627982ef: /scratch/file_compare2.tar.gz
4f8805b4b64dcc575547ec1c63793aec: /scratch/file_compare1.tar
hash_orig:
- 4f8805b4b64dcc575547ec1c63793aec
- 2dc4f1e9ca4081cc49d25195627982ef
hash_stat:
- 4f8805b4b64dcc575547ec1c63793aec
- 2dc4f1e9ca4081cc49d25195627982ef
Then calculate the difference of the lists and use it to extract both lists of changed and unchanged files
- set_fact:
files_diff: "{{ _diff|map('extract', hash_name)|list }}"
files_orig: "{{ _orig|map('extract', hash_name)|list }}"
vars:
_diff: "{{ hash_orig|difference(hash_stat) }}"
_orig: "{{ hash_orig|difference(_diff) }}"
- name: Debug files changed
debug:
var: files_diff
- name: Debug files NOT changed
debug:
var: files_orig
gives
files_diff: []
files_orig:
- /scratch/file_compare1.tar
- /scratch/file_compare2.tar.gz
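If the goal is not only to report but to fail the play when any file changed, an assert could follow, e.g. (a sketch using the files_diff list built above):
- name: Assert that no file changed
  assert:
    that: files_diff|length == 0
    fail_msg: "Changed files: {{ files_diff }}"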
I used your suggestion to complete the playbook; it's working now.
The idea is to get a list of files, read each one, and compare the stored hash with the current hash.
---
- hosts: localhost
connection: local
gather_facts: false
vars_files:
- files3.yml
tasks:
- stat:
path: "{{ item.file }}"
checksum_algorithm: md5
loop: "{{ files }}"
register: stat_results
- name: NOT changed files
debug:
msg: "NOT changed: {{ item.stat.path }}"
when: item.stat.checksum == item.item.checksum.split()|first
loop: "{{ stat_results.results }}"
loop_control:
label: "{{ item.stat.path }}"
- name: Changed files
debug:
msg: "CHANGED: {{ item.stat.path }}"
when: item.stat.checksum != item.item.checksum.split()|first
loop: "{{ stat_results.results }}"
loop_control:
label: "{{ item.stat.path }}"
Result:
>> ansible-playbook playbooks/check-file3.yml
PLAY [localhost] ********************************************************************************************************************************************************************************************************************
TASK [stat] *************************************************************************************************************************************************************************************************************************
ok: [localhost] => (item={'file': '/opt/file_compare1.tar', 'checksum': '9cd599a3523898e6a12e13ec787da50a /opt/file_compare1.tar'})
ok: [localhost] => (item={'file': '/opt/file_compare2.tar.gz', 'checksum': 'd41d8cd98f00b204e9800998ecf8427e /opt/file_compare2.tar.gz'})
TASK [NOT changed files] ************************************************************************************************************************************************************************************************************
skipping: [localhost] => (item=/opt/file_compare1.tar)
ok: [localhost] => (item=/opt/file_compare2.tar.gz) => {
"msg": "NOT changed: /opt/file_compare2.tar.gz"
}
TASK [Changed files] ****************************************************************************************************************************************************************************************************************
ok: [localhost] => (item=/opt/file_compare1.tar) => {
"msg": "CHANGED: /opt/file_compare1.tar"
}
skipping: [localhost] => (item=/opt/file_compare2.tar.gz)
PLAY RECAP **************************************************************************************************************************************************************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
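If the changed files are also needed as a list variable rather than only debug messages, they could be collected from the same registered results, e.g. (a sketch reusing stat_results and the file/checksum keys of files3.yml):
- name: Collect changed files into a list
  set_fact:
    changed_files: "{{ changed_files|d([]) + [item.stat.path] }}"
  when: item.stat.checksum != item.item.checksum.split()|first
  loop: "{{ stat_results.results }}"
  loop_control:
    label: "{{ item.stat.path }}"
- debug:
    var: changed_files|d([])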
I have a task which copies each user's key:
- name: SSH Keys
authorized_key:
user: "{{ item.0.name }}"
key: "{{ item.0.ssh_key.0.key }}"
state: "{{ item.0.ssh_key.0.state }}"
when:
- item.1 == 'all' or item.1 in group_names or item.1 == inventory_hostname
with_subelements:
- "{{ users }}"
- servers
Var list:
users:
- name: user1
ssh_key:
- key:
- "key1.user1"
- "key2.user1"
- "key3.user1"
state: present
servers:
- server1
- name: user2
ssh_key:
- key:
- "key1.user2"
- "key2.user2"
state: present
servers:
- all
QUESTION: How can we copy multiple keys per user, without removing servers from with_subelements?
When the task runs, either only the last key or an array of keys is copied, depending on how it is written in the var list.
In this format, only the last key is copied:
- key: "key1.user1"
- key: "key2.user1"
- key: "key3.user1"
In this format, an array of keys is copied:
- key:
- "key1"
- "key2"
Let's fit the structure of the data to this purpose. For example,
users:
- name: user1
ssh_key:
- "key1.user1"
- "key2.user1"
- "key3.user1"
state: present
servers:
- server1
...
It's possible to loop include_tasks. For example, create the task (test it with debug first)
shell> cat conf_authorized_key.yml
- name: SSH Keys
# authorized_key:
debug:
msg:
- "user: {{ item.0.name }}"
- "state: {{ item.0.state }}"
- "key: {{ iitem }}"
loop: "{{ item.0.ssh_key }}"
loop_control:
loop_var: iitem
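Once the debug output looks right, the included file would presumably switch to the real module, e.g. (a sketch; the ssh_key entries would then have to be actual public-key strings or URLs):
- name: SSH Keys
  authorized_key:
    user: "{{ item.0.name }}"
    key: "{{ iitem }}"
    state: "{{ item.0.state }}"
  loop: "{{ item.0.ssh_key }}"
  loop_control:
    loop_var: iitem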
Then include it in the playbook
shell> cat playbook.yml
- hosts: localhost
vars:
users:
- name: user1
ssh_key:
- "key1.user1"
- "key2.user1"
- "key3.user1"
state: present
servers:
- server1
- name: user2
ssh_key:
- "key1.user2"
- "key2.user2"
state: present
servers:
- all
tasks:
- name: Loop include_task
include_tasks: conf_authorized_key.yml
loop: "{{ users|subelements('servers') }}"
loop_control:
label: "{{ item.1 }}"
when: (item.1 == 'all') or
(item.1 in group_names) or
(item.1 == inventory_hostname)
gives
shell> ansible-playbook playbook.yml
PLAY [localhost] ****
TASK [Loop include_task] ****
skipping: [localhost] => (item=server1)
included: /export/scratch/tmp/conf_authorized_key.yml for localhost
TASK [SSH Keys] ****
ok: [localhost] => (item=key1.user2) => {
"msg": [
"user: user2",
"state: present",
"key: key1.user2"
]
}
ok: [localhost] => (item=key2.user2) => {
"msg": [
"user: user2",
"state: present",
"key: key2.user2"
]
}
PLAY RECAP ****
localhost: ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
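Another possibility, since authorized_key accepts several keys in one string separated by newlines, is to keep the original with_subelements loop on servers and join the keys (a sketch assuming the restructured ssh_key list above):
- name: SSH Keys
  authorized_key:
    user: "{{ item.0.name }}"
    key: "{{ item.0.ssh_key | join('\n') }}"
    state: "{{ item.0.state }}"
  when:
    - item.1 == 'all' or item.1 in group_names or item.1 == inventory_hostname
  with_subelements:
    - "{{ users }}"
    - servers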
I am using Ansible to set up a distributed application. I'm installing nodes and then creating virtual interfaces, and cannot have more virtual interfaces than nodes. Therefore, if I install on X nodes and Y nodes fail, I need to check that there are no more than (X-Y) virtual interfaces.
Is there a way to get, for a specific task, a numerical value of how many nodes succeeded/failed, so I can later use it to check the number of virtual interfaces?
Use ansible-runner. See Runner Artifact Job Events, and "stats" in particular. For example, running ansible-runner with the playbook
shell> cat private3/project/test.yml
- hosts: test_01:test_02
gather_facts: false
tasks:
- debug:
var: inventory_hostname
- fail:
msg: Fail test_02
when: inventory_hostname == 'test_02'
shell> ansible-runner -p test.yml -i ID01 run private3
...
TASK [fail] ********************************************************************
skipping: [test_01]
fatal: [test_02]: FAILED! => {"changed": false, "msg": "Fail test_02"}
...
creates records in the directory private3/artifacts/ID01/job_events/. I'm not aware of any publicly available tool to analyze the events, so I've created a playbook that displays failed tasks:
shell> cat pb.yml
- hosts: localhost
gather_facts: false
vars:
events_dir: private3/artifacts/ID01/job_events
tasks:
- find:
paths: "{{ events_dir }}"
register: result
- include_vars:
file: "{{ item }}"
name: "{{ 'my_var_' ~ my_idx }}"
loop: "{{ result.files|json_query('[].path') }}"
loop_control:
index_var: my_idx
label: "{{ my_idx }}"
- set_fact:
my_events: "{{ my_events|default({})|
combine({my_key: lookup('vars', my_key)}) }}"
loop: "{{ range(0, result.matched)|list }}"
loop_control:
index_var: my_idx
vars:
my_key: "{{ 'my_var_' ~ my_idx }}"
- set_fact:
my_list: "{{ my_events|json_query('*.{counter: counter,
event: event,
task: event_data.task_action,
host: event_data.host}') }}"
- debug:
var: item
loop: "{{ my_list|sort(attribute='counter') }}"
loop_control:
label: "{{ item.counter }}"
when: item.event == 'runner_on_failed'
gives
shell> ansible-playbook pb.yml
...
skipping: [localhost] => (item=11)
ok: [localhost] => (item=12) => {
"ansible_loop_var": "item",
"item": {
"counter": 12,
"event": "runner_on_failed",
"host": "test_02",
"task": "fail"
}
}
skipping: [localhost] => (item=13)
...
Feel free to fit the playbook to your needs.
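To get the numerical value asked for, the failed hosts could also be counted from my_list, e.g. (a sketch; the host attribute is deduplicated in case a host failed more than one task):
- set_fact:
    failed_hosts: "{{ my_list|selectattr('event', 'eq', 'runner_on_failed')|
                      map(attribute='host')|unique|list }}"
- debug:
    msg: "Number of failed nodes: {{ failed_hosts|length }}"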
Sorry if there are many posts about variables inside variables; my use case is different.
I am trying to access an element of the list variable efs_list based on the index of the current host. There are three hosts in the inventory:
vars:
efs_list:
- efs1
- efs2
- efs3
sdb_index: "{{ groups['all'].index(inventory_hostname) }}"
The values should be as follows
host1- efs1
host2- efs2
host3- efs3
I tried accessing it through efs_list.{{ sdb_index }}.
For - debug: var=efs_list.{{ sdb_index }} the output is as intended:
ok: [10.251.0.174] => {
"efs_list.0": "efs1"
}
ok: [10.251.0.207] => {
"efs_list.1": "efs2"
}
ok: [10.251.0.151] => {
"efs_list.2": "efs3"
}
But for
- debug:
msg: "{{ efs_list.{{ sdb_index }} }}"
fatal: [10.251.0.174]: FAILED! => {"msg": "template error while templating string: expected name or number. String: {{ efs_list.{{ sdb_index }} }}"}
---
- name: SDB Snapshots Creation
hosts: all
remote_user: "centos"
become: yes
vars:
efs_list:
- efs1
- efs2
- efs3
sdb_index: "{{ groups['all'].index(inventory_hostname) }}"
tasks:
- debug: var=efs_list.{{ sdb_index }}
- debug:
msg: "{{ efs_list.{{ sdb_index }} }}"
- name: Get Filesystem ID
become: false
local_action: command aws efs describe-file-systems --creation-token "{{ efs_list.{{ sdb_index }} }}"
--region us-east-1 --query FileSystems[*].FileSystemId --output text
register: fs_id
It should assign to each host the element of the list corresponding to that host's index.
The extract filter will do the job. The input of the filter is a list of indices and a container (a list in this case). The tasks below
- set_fact:
sdb_index: "{{ [] + [ groups['all'].index(inventory_hostname) ] }}"
- debug:
msg: "{{ sdb_index|map('extract', efs_list)|list }}"
give
ok: [host1] => {
"msg": [
"efs1"
]
}
ok: [host2] => {
"msg": [
"efs2"
]
}
ok: [host3] => {
"msg": [
"efs3"
]
}
If the hosts are not sorted in the inventory, it's necessary to sort them in the play:
- set_fact:
my_hosts: "{{ groups['all']|sort }}"
- set_fact:
sdb_index: "{{ [] + [ my_hosts.index(inventory_hostname) ] }}"
- debug:
msg: "{{ sdb_index|map('extract', efs_list)|list }}"