I have a problem when I add a variable into a path used by with_fileglob - the variable always seems to be expanded to "[]".
I ran the playbook with the parameter --extra-vars environment="dev" and the debug output shows extra_vars: ('environment=dev',).
Unfortunately, the copy task with with_fileglob failed:
- name: Copy all files from environment subdirectory
  copy:
    src: "{{ item }}"
    dest: /etc/
  with_fileglob: directory/{{ environment }}/*
TASK [Copy all files from environment subdirectory] ************************************************************************
task path: /home/ansible/playbook/playbook.yml:511
looking for "files/directory/[]" at "/home/ansible/playbook/files/files/directory/[]"
looking for "files/directory/[]" at "/home/ansible/playbook/files/directory/[]"
looking for "files/directory/[]" at "/home/ansible/playbook/files/files/directory/[]"
looking for "files/directory/[]" at "/home/ansible/playbook/files/directory/[]"
[WARNING]: Unable to find 'files/directory/[]' in expected paths (use -vvvvv to see paths)
I am using ansible 2.9.3.
May I ask you what I did wrong?
Thanks a lot for your hints in advance.
environment is a reserved keyword and can't be used as the name of a variable. See Creating valid variable names. With the variable renamed to env, the playbook below works as expected:
shell> cat pb.yml
- hosts: localhost
  tasks:
    - debug:
        var: item
      with_fileglob: "directory/{{ env }}/*"
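Applied to the original copy task from the question, the rename would look like this (a sketch; it assumes the files live under files/directory/ as in the question's output):

```yaml
- name: Copy all files from environment subdirectory
  copy:
    src: "{{ item }}"
    dest: /etc/
  with_fileglob: "directory/{{ env }}/*"  # env instead of the reserved name environment
```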
Given the tree
shell> tree directory
directory
└── dev
├── file1
├── file2
└── file3
1 directory, 3 files
the abridged result is
shell> ansible-playbook pb.yml -e "env=dev" | grep item:
item: /scratch/tmp/directory/dev/file2
item: /scratch/tmp/directory/dev/file1
item: /scratch/tmp/directory/dev/file3
Related
On an Ubuntu 18 server, the directory /home/adminuser/keys contains 5 files that hold key parts:
/home/adminuser/key/
|- unseal_key_0
|- unseal_key_1
|- unseal_key_2
|- unseal_key_3
|- unseal_key_4
File contents:
1bbeaafab5037a287bde3e5203c8b2cd205f4cc55b4fcffe7931658dc20d8cdcdf
bdf7a6ee4c493aca5b9cc2105077ec67738a0e8bf21936abfc5d1ff8080b628fcb
545c087d3d59d02556bdbf8690c8cc9faafec0e9766bb42de3a7884159356e91b8
053207b0683a8a2886129f7a1988601629a9e7e0d8ddbca02333ce08f1cc7b3887
2320f6275804341ebe5d39a623dd309f233e454b4453c692233ca86212a3d40b5f
Part of Ansible playbook (task):
- name: Reading file contents
  command: cat {{ item }}
  register: unseal_keys
  with_fileglob: "/home/adminuser/keys/*"
The error that I get:
"[WARNING]: Unable to find '/home/adminuser/keys' in expected paths (use -vvvvv to see paths)"
I have tried to:
change the user that creates the directory and files
change the path to /home/adminuser/keys/ and /home/adminuser/keys
I expect all of the file contents (parts of a single key) to be merged into one string:
1bbeaafab5037a287bde3e5203c8b2cd205f4cc55b4fcffe7931658dc20d8cdcdfbdf7a6ee4c493aca5b9cc2105077ec67738a0e8bf21936abfc5d1ff8080b628fcb545c087d3d59d02556bdbf8690c8cc9faafec0e9766bb42de3a7884159356e91b8 053207b0683a8a2886129f7a1988601629a9e7e0d8ddbca02333ce08f1cc7b38872320f6275804341ebe5d39a623dd309f233e454b4453c692233ca86212a3d40b5f
Given the files below for testing
shell> tree /tmp/admin/
/tmp/admin/
└── key
├── key_0
├── key_1
└── key_2
1 directory, 3 files
shell> cat /tmp/admin/key/key_0
abc
shell> cat /tmp/admin/key/key_1
def
shell> cat /tmp/admin/key/key_2
ghi
Use the module assemble to: "assemble a configuration file from fragments."
Declare the path
key_all_path: /tmp/admin/key_all
and assemble the fragments
- assemble:
    src: /tmp/admin/key
    dest: "{{ key_all_path }}"
This will create the file /tmp/admin/key_all
shell> cat /tmp/admin/key_all
abc
def
ghi
Read the file and join the lines. Declare the variable
key_all: "{{ lookup('file', key_all_path).splitlines()|join('') }}"
gives
key_all: abcdefghi
Example of a complete playbook for testing
- hosts: localhost
  vars:
    key_all_path: /tmp/admin/key_all
    key_all: "{{ lookup('file', key_all_path).splitlines()|join('') }}"
  tasks:
    - assemble:
        src: /tmp/admin/key
        dest: "{{ key_all_path }}"
    - debug:
        var: key_all
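If creating the intermediate file is undesirable, the fragments could also be concatenated in a loop (a sketch; note that with_fileglob does not guarantee a particular file order, so this only fits when the order doesn't matter):

```yaml
# concatenate each fragment onto key_all (starts empty on the first iteration)
- set_fact:
    key_all: "{{ key_all|default('') ~ lookup('file', item) }}"
  with_fileglob: "/tmp/admin/key/*"
```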
Thanks!
The problem was in the paths and in the host where the task had to be executed. It is solved by locating and reading the files locally with this task:
- name: Reading file contents
  command: cat "{{ item }}"
  register: keys             # all file contents go into the variable "keys"
  with_fileglob: "~/keys/*"  # path to the directory on my local machine where all files are stored
  delegate_to: localhost     # execute this task on the local machine
  become: false              # drop sudo so that no password is requested
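The registered variable keys then holds one result per file; the parts can be joined into a single string, for example (a sketch; the name unseal_key and the sorting by file path are my assumptions):

```yaml
# sort the per-file results by path for a stable order, then join the outputs
- set_fact:
    unseal_key: "{{ keys.results | sort(attribute='item') | map(attribute='stdout') | join('') }}"
```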
I'm using this kind of Ansible lookup in order to load the content of a file into a variable:
- name: Prepare ignition for worker nodes
  set_fact:
    custom_attr: "{{ lookup('file', './files/ignition/{{ oc_cluster_name }}/worker.ign') | b64encode }}"
  when: item.name.startswith('worker')
I know that we should avoid nesting variables (moustaches don't stack, right?). This code does work, but I'm not sure it's the correct way to write it.
Is there another way to do it? I used to write it as two separate set_fact blocks, which works as well, but it's no better (it uses a temporary var):
- name: Prepare ignition for worker nodes
  block:
    - name: locate file for worker node
      set_fact:
        ignition_file: "./files/ignition/{{ oc_cluster_name }}/worker.ign"
    - name: load file into fact for worker node
      set_fact:
        custom_attr: "{{ lookup('file', ignition_file) | b64encode }}"
  when: item.name.startswith('worker')
What do you think?
I'm trying to write clean code following best practices: no temporary variables, and respecting the proper way to nest interpolation of variables.
Moustaches shouldn't be stacked because it isn't necessary: you're already inside a Jinja expression, so you can access variables by name without wrapping them in more delimiters.
- name: Prepare ignition for worker nodes
  set_fact:
    # Relative paths are looked for in `files/` first, so there's no need to specify it
    custom_attr: "{{ lookup('file', 'ignition/' ~ oc_cluster_name ~ '/worker.ign') | b64encode }}"
  when: item.name.startswith('worker')
You can also use a temporary variable without a separate set_fact, which can be helpful for breaking up complex expressions:
- name: Prepare ignition for worker nodes
  set_fact:
    custom_attr: "{{ lookup('file', ignition_file) | b64encode }}"
  vars:
    ignition_file: ignition/{{ oc_cluster_name }}/worker.ign
  when: item.name.startswith('worker')
Q: "Write nice code."
A: Put the declarations into vars, for example into group_vars/all:
shell> tree .
.
├── ansible.cfg
├── files
│ └── ignition
│ └── cluster1
│ └── worker.ign
├── group_vars
│ └── all
├── hosts
└── pb.yml
4 directories, 5 files
shell> cat ansible.cfg
[defaults]
gathering = explicit
inventory = $PWD/hosts
remote_tmp = ~/.ansible/tmp
retry_files_enabled = false
stdout_callback = yaml
shell> cat files/ignition/cluster1/worker.ign
test
shell> cat group_vars/all
oc_cluster_name: cluster1
ignition_file: "./files/ignition/{{ oc_cluster_name }}/worker.ign"
custom_attr: "{{ lookup('file', ignition_file)|b64encode }}"
shell> cat hosts
localhost
shell> cat pb.yml
- hosts: localhost
  tasks:
    - debug:
        var: custom_attr|b64decode
shell> ansible-playbook pb.yml
PLAY [localhost] *****************************************************************************
TASK [debug] *********************************************************************************
ok: [localhost] =>
  custom_attr|b64decode: test
PLAY RECAP ***********************************************************************************
localhost: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
I have a following directory structure:
parent_dir/
├── subdir_1
├── subdir_2
└── subdir_3
The subdirs don't have a fixed name and there can be an arbitrary number of them.
How to make ansible run a task for each sub directory?
(any task will do, eventually every dir will be a python package to install, but that isn't important for the context of this question)
This is the solution I managed to come up with; perhaps there is a cleaner way to achieve this with lookups in a single task.
Copy-pasting the following code will create a directory structure with a minimal Ansible playbook that does what's required. (Tested on Ubuntu/dash.)
mkdir action_per_dir
cd action_per_dir
mkdir -p parent_dir/subdir_1 parent_dir/subdir_2 parent_dir/subdir_3
cat > action_per_dir.yml << "EOF"
---
# Gets all the directories and stores all the return values of `find`
# into the results_of_find
# The return value will consist of:
# https://docs.ansible.com/ansible/latest/modules/find_module.html#return-values
- hosts: localhost
  tasks:
    - name: Get all dirs
      find:
        paths: parent_dir
        file_type: directory
      register: result_of_find

    # We're interested only in the `files` part of the results of find.
    # In pseudo code, what's happening here is:
    #   for each item in result_of_find.files:
    #     print item.path
    #
    # The output will be very verbose, but for debugging purposes it can be filtered:
    #   ansible-playbook action_per_dir.yml | grep msg
    - name: Print all the dirs
      debug:
        msg: "{{ item.path }}"
      with_items: "{{ result_of_find.files }}"
EOF
After that it just needs to be run:
ansible-playbook action_per_dir.yml | grep msg
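As a sketch of the eventual use case mentioned in the question (installing every subdirectory as a Python package), the debug task could be swapped for the pip module, which accepts a local path as the package name; treat this as an untested outline:

```yaml
# install each found directory as a Python package (pip install <dir>)
- name: Install each subdirectory as a python package
  pip:
    name: "{{ item.path }}"
  with_items: "{{ result_of_find.files }}"
```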
The following is a simple playbook which tries to dynamically load variables:
site.yml
---
- hosts: localhost
  vars_files:
    - "{{ name }}.yml"
  tasks:
    - debug: var=foo
Variable foo is defined in this file:
vars/myvars.yml
---
foo: "Hello"
Then playbook is run like this:
ansible-playbook site.yml -e "name=myvars"
However this results in this error:
ERROR! vars file {{ name }}.yml was not found
From what I understood from several code snippets, this should be possible and should import the variables from myvars.yml. With Ansible 1.7.x it indeed seemed to work (although I hit a different issue, the file name was resolved correctly).
Was this behaviour changed (perhaps support for dynamic variable files was removed)? Is there a different way to achieve this behaviour (I could use include_vars tasks, however that is not quite suitable)?
EDIT: To make sure my playbook structure is correct, here is a github repository: https://github.com/jcechace/ansible_sinppets/tree/master/dynamic_vars
Just change your site.yml like this:
- hosts: localhost
  vars_files:
    - "vars/{{ name }}.yml"
  tasks:
    - debug: var=foo
Then run the command like this:
ansible-playbook site.yml -e "name=myvars" -c local
Hope this will help you.
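If a fallback for a missing file is wanted, vars_files also accepts an inner list of candidate files and loads the first one that exists (a sketch; vars/defaults.yml is an assumed fallback file):

```yaml
- hosts: localhost
  vars_files:
    # the first existing file of the inner list is loaded
    - [ "vars/{{ name }}.yml", "vars/defaults.yml" ]
  tasks:
    - debug: var=foo
```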
I have a host in 2 groups: pc and servers.
I have 2 group_vars directories (pc and servers), each containing the file packages.yml.
These files define the list of packages to be installed on pc hosts and on servers hosts.
I have a role to install default packages.
The problem is: only group_vars/pc/packages.yml is taken into account by the role task; packages from group_vars/servers/packages.yml are not installed.
Of course, what I want is the installation of the packages defined for both pc and servers.
I don't know if it is a bug or a feature...
Thanks for your help.
here is the configuration :
# file: production
[pc]
armen
kerbel
kerzo
[servers]
kerbel
---
# packages on servers
packages:
- lftp
- mercurial
---
# packages on pc
packages:
- keepassx
- lm-sensors
- hddtemp
It's not a bug. According to the docs about variable precedence, you shouldn't define a variable in multiple places; try to keep it simple. Michael DeHaan (Ansible's lead dev) responded to a similar question on this topic:
Generally I find the purpose of plays though to bind hosts to roles, so the individual roles should contain the package lists.
I would use roles as it's a bit cleaner IMO.
If you really want (and this is NOT the recommended way), you can set the hash_behaviour option in ansible.cfg:
[defaults]
hash_behaviour = merge
This will cause the merging of two values when a hash (dict) is redefined, instead of replacing the old value with the new one. This does NOT work on lists, though, so you'll need to create a hash of lists, like:
group_vars/all/package.yml:
packages:
  all: [pkg1, pkg2]
group_vars/servers/package.yml:
packages:
  servers: [pkg3, pkg4]
Looping through that in the playbook is a bit more complex, though.
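One way to sketch such a loop: with the merged hash in place, flatten the per-group lists back into a single list in the task itself (the package module and the filter chain are my choice, not from the answer):

```yaml
- name: Install packages from all groups
  package:
    # collect the values of the merged hash and flatten them into one list
    name: "{{ packages | dict2items | map(attribute='value') | flatten | unique }}"
    state: present
```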
If you want to use such a scheme, you should set the hash_behaviour option in ansible.cfg:
[defaults]
hash_behaviour = merge
In addition, you have to use dictionaries instead of lists. To prevent duplicates, I recommend using the package names as keys, for example:
group_vars/servers/packages.yml:
packages:
  package_name1:
  package_name2:
group_vars/pc/packages.yml:
packages:
  package_name3:
  package_name4:
And in a playbook task (| default({}) covers the case where the packages variable is absent):
- name: install host packages
  yum: name={{ item.key }} state=latest
  with_dict: "{{ packages | default({}) }}"
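For illustration, with hash_behaviour = merge a host that is in both groups would end up with the combined dictionary (a sketch derived from the files above):

```yaml
packages:
  package_name1:
  package_name2:
  package_name3:
  package_name4:
```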
Create a dictionary of the variables per group and merge the lists on your own. For example, create a project for testing
shell> tree .
.
├── ansible.cfg
├── group_dict_create.yml
├── group_vars
│ ├── all
│ │ └── group_dict_vars.yml
│ ├── pc
│ │ └── packages.yml
│ └── servers
│ └── packages.yml
├── hosts
└── pb.yml
4 directories, 7 files
shell> cat ansible.cfg
[defaults]
gathering = explicit
collections_path = $HOME/.local/lib/python3.9/site-packages/
inventory = $PWD/hosts
roles_path = $PWD/roles
retry_files_enabled = false
stdout_callback = yaml
shell> cat hosts
[pc]
armen
kerbel
kerzo
[servers]
kerbel
shell> cat group_vars/pc/packages.yml
packages:
- keepassx
- lm-sensors
- hddtemp
shell> cat group_vars/servers/packages.yml
packages:
- lftp
- mercurial
shell> cat pb.yml
- hosts: armen,kerbel,kerzo
  pre_tasks:
    - import_tasks: group_dict_create.yml
  tasks:
    - debug:
        var: my_packages
Declare variables in group_vars/all/group_dict_vars.yml
shell> cat group_vars/all/group_dict_vars.yml
group_vars_dir: "{{ inventory_dir }}/group_vars"
group_names_all: "{{ ansible_play_hosts_all|
                     map('extract', hostvars, 'group_names')|
                     flatten|unique }}"
group_dict_str: |
  {% for group in group_names_all %}
  {{ group }}: {{ lookup('vars', 'groupvars_' ~ group) }}
  {% endfor %}
_group_dict: "{{ group_dict_str|from_yaml }}"
my_packages: "{{ group_names|map('extract', group_dict, 'packages')|
                 flatten|unique }}"
group_vars_dir: The directory group_vars can be located either next to the inventory or next to the playbook. In this example these directories are identical, so we set it to inventory_dir. In the loop, the task include_vars will read all YAML and JSON files from group_vars/<group> and store the variables in the dictionary groupvars_<group>, where <group> iterates over the items of group_names_all.
group_names_all: This is a list of all groups the hosts are members of. See group_names
group_dict_str: Create the string with the YAML structure of the dictionary
_group_dict: Convert the string to YAML
my_packages: Merge the lists of packages from the groups the host is a member of. If needed, use this variable as a pattern of how to merge other variables.
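For illustration, after the include_vars loop the intermediate per-group variables would look roughly like this (derived from the group_vars files above):

```yaml
groupvars_pc:
  packages:
    - keepassx
    - lm-sensors
    - hddtemp
groupvars_servers:
  packages:
    - lftp
    - mercurial
```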
Create a block of tasks that creates the dictionary and writes the file
shell> cat group_dict_create.yml
- name: Create dictionary group_dict in group_vars/all/group_dict.yml
  block:
    - name: Create directory group_vars/all
      file:
        state: directory
        path: "{{ group_vars_dir }}/all"
    - include_vars:
        dir: "{{ group_vars_dir }}/{{ item }}"
        name: "groupvars_{{ item }}"
      loop: "{{ group_names_all }}"
    - debug:
        var: _group_dict
      when: debug|d(false)|bool
    - name: Write group_dict to group_vars/all/group_dict.yml
      copy:
        dest: "{{ group_vars_dir }}/all/group_dict.yml"
        content: |
          group_dict:
            {{ _group_dict|to_nice_yaml(indent=2)|indent(2) }}
    - include_vars:
        file: "{{ group_vars_dir }}/all/group_dict.yml"
  delegate_to: localhost
  run_once: true
  when: group_dict is not defined or group_dict_refresh|d(false)|bool
If the dictionary group_dict does not exist (i.e. the file group_vars/all/group_dict.yml has not been created yet), the block creates the dictionary, writes it to group_vars/all/group_dict.yml, and includes it in the play. You can refresh group_dict by setting group_dict_refresh=true after changing the variables in group_vars/<group>.
shell> cat group_vars/all/group_dict.yml
group_dict:
  pc:
    packages:
      - keepassx
      - lm-sensors
      - hddtemp
  servers:
    packages:
      - lftp
      - mercurial
The results, stored in the variable my_packages, are merged lists of packages by groups
TASK [debug] *********************************************************
ok: [kerbel] =>
  my_packages:
  - keepassx
  - lm-sensors
  - hddtemp
  - lftp
  - mercurial
ok: [armen] =>
  my_packages:
  - keepassx
  - lm-sensors
  - hddtemp
ok: [kerzo] =>
  my_packages:
  - keepassx
  - lm-sensors
  - hddtemp
Notes:
Best practice is to run group_dict_create.yml separately for all hosts and let other playbooks use the created group_vars/all/group_dict.yml.
The framework described here is idempotent.
The framework should be easily extendable to other use cases of merging variables by groups.
Example. Add variable users to group_vars/<group>
shell> cat group_vars/pc/users.yml
users:
- alice
- bob
shell> cat group_vars/servers/users.yml
users:
- carol
- dave
and add the variable my_users to group_vars/all/group_dict_vars.yml
my_users: "{{ group_names|map('extract', group_dict, 'users')|flatten|unique }}"
Refresh the dictionary group_dict for all hosts
shell> ansible-playbook pb.yml -l all -e group_dict_refresh=true
gives
ok: [armen] =>
  my_users:
  - alice
  - bob
ok: [kerbel] =>
  my_users:
  - alice
  - bob
  - carol
  - dave
ok: [kerzo] =>
  my_users:
  - alice
  - bob