I have a host in 2 groups: pc and servers.
I have 2 group_vars directories (pc and servers), each containing the file packages.yml.
These files define the list of packages to be installed on pc hosts and on servers hosts.
I have a role to install the default packages.
The problem is: only group_vars/pc/packages.yml is taken into account by the role's task; the packages from group_vars/servers/packages.yml are not installed.
Of course, what I want is the installation of the packages defined for both pc and servers.
I do not know if it is a bug or a feature...
Thanks for your help.
Here is the configuration:
# file: production
[pc]
armen
kerbel
kerzo
[servers]
kerbel
---
# file: group_vars/servers/packages.yml
# packages on servers
packages:
  - lftp
  - mercurial
---
# file: group_vars/pc/packages.yml
# packages on pc
packages:
  - keepassx
  - lm-sensors
  - hddtemp
It's not a bug. According to the docs on variable precedence, you shouldn't define a variable in multiple places; try to keep it simple. Michael DeHaan (Ansible's lead developer) responded to a similar question on this topic:
Generally I find the purpose of plays though to bind hosts to roles, so the individual roles should contain the package lists.
I would use roles as it's a bit cleaner IMO.
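For example, a minimal sketch of that role-based approach (the role names and site.yml are hypothetical, and the package module accepting a list requires a reasonably recent Ansible):

# file: site.yml
- hosts: pc
  roles:
    - pc_packages

- hosts: servers
  roles:
    - server_packages

# file: roles/pc_packages/vars/main.yml
packages:
  - keepassx
  - lm-sensors
  - hddtemp

# file: roles/pc_packages/tasks/main.yml
- name: Install packages
  package:
    name: "{{ packages }}"
    state: present

A host such as kerbel that belongs to both groups runs both plays, so it receives both package lists.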
If you really want (and this is NOT the recommended way), you can set the hash_behaviour option in ansible.cfg:
[defaults]
hash_behaviour = merge
This will cause the merging of two values when a hash (dict) is redefined, instead of replacing the old value with the new one. This does NOT work on lists, though, so you'll need to create a hash of lists, like:
group_vars/all/package.yml:

packages:
  all: [pkg1, pkg2]

group_vars/servers/package.yml:

packages:
  servers: [pkg3, pkg4]
Looping through that in the playbook is a bit more complex, though; see the sketch below.
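A sketch of installing everything from such a merged hash of lists (assumes hash_behaviour=merge and Ansible ≥ 2.6 for the dict2items filter):

- name: Install packages from all groups
  package:
    name: "{{ packages | dict2items | map(attribute='value') | flatten | unique }}"
    state: present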
If you want to use such a scheme, you should set the hash_behaviour option in ansible.cfg:
[defaults]
hash_behaviour = merge
In addition, you have to use dictionaries instead of lists. To prevent duplicates, I recommend using package names as keys, for example:
group_vars/servers/packages.yml:

packages:
  package_name1:
  package_name2:

group_vars/pc/packages.yml:

packages:
  package_name3:
  package_name4:
And in a playbook task (| default({}) covers the case where the packages variable is absent):
- name: install host packages
  yum: name={{ item.key }} state=latest
  with_dict: packages | default({})
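With hash_behaviour=merge the two dictionaries combine, so this loop installs package_name1 through package_name4. On newer Ansible, where yum accepts a list for name, an equivalent single-transaction sketch is (applying list to a dict yields its keys):

- name: install host packages
  yum:
    name: "{{ packages | default({}) | list }}"
    state: latest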
Create a dictionary of the variables per group and merge the lists on your own. For example, create a project for testing
shell> tree .
.
├── ansible.cfg
├── group_dict_create.yml
├── group_vars
│   ├── all
│   │   └── group_dict_vars.yml
│   ├── pc
│   │   └── packages.yml
│   └── servers
│       └── packages.yml
├── hosts
└── pb.yml

4 directories, 7 files
shell> cat ansible.cfg
[defaults]
gathering = explicit
collections_path = $HOME/.local/lib/python3.9/site-packages/
inventory = $PWD/hosts
roles_path = $PWD/roles
retry_files_enabled = false
stdout_callback = yaml
shell> cat hosts
[pc]
armen
kerbel
kerzo
[servers]
kerbel
shell> cat group_vars/pc/packages.yml
packages:
- keepassx
- lm-sensors
- hddtemp
shell> cat group_vars/servers/packages.yml
packages:
- lftp
- mercurial
shell> cat pb.yml
- hosts: armen,kerbel,kerzo
  pre_tasks:
    - import_tasks: group_dict_create.yml
  tasks:
    - debug:
        var: my_packages
Declare variables in group_vars/all/group_dict_vars.yml
shell> cat group_vars/all/group_dict_vars.yml
group_vars_dir: "{{ inventory_dir }}/group_vars"
group_names_all: "{{ ansible_play_hosts_all|
                     map('extract', hostvars, 'group_names')|
                     flatten|unique }}"
group_dict_str: |
  {% for group in group_names_all %}
  {{ group }}: {{ lookup('vars', 'groupvars_' ~ group) }}
  {% endfor %}
_group_dict: "{{ group_dict_str|from_yaml }}"
my_packages: "{{ group_names|map('extract', group_dict, 'packages')|
                 flatten|unique }}"
group_vars_dir: The group_vars directory can live either next to the inventory or next to the playbook. In this example those two locations are identical, so we derive it from inventory_dir. In the loop below, the task include_vars reads all YAML and JSON files from group_vars/<group> and stores the variables in the dictionary groupvars_<group>, where <group> iterates over the items of group_names_all.
group_names_all: The list of all groups the play's hosts are members of. See the special variable group_names.
group_dict_str: Create a string with the YAML structure of the dictionary.
_group_dict: Convert the string to a dictionary with from_yaml.
my_packages: Merge the lists of packages from the groups the host is a member of. If needed, use this variable as a pattern for merging other variables.
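For illustration, with the inventory above, group_dict_str renders to something like the two lines below (Jinja prints each group's dictionary in flow style, which from_yaml can parse):

pc: {'packages': ['keepassx', 'lm-sensors', 'hddtemp']}
servers: {'packages': ['lftp', 'mercurial']}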
Create a block of tasks that creates the dictionary and writes the file
shell> cat group_dict_create.yml
- name: Create dictionary group_dict in group_vars/all/group_dict.yml
  block:
    - name: Create directory group_vars/all
      file:
        state: directory
        path: "{{ group_vars_dir }}/all"
    - include_vars:
        dir: "{{ group_vars_dir }}/{{ item }}"
        name: "groupvars_{{ item }}"
      loop: "{{ group_names_all }}"
    - debug:
        var: _group_dict
      when: debug|d(false)|bool
    - name: Write group_dict to group_vars/all/group_dict.yml
      copy:
        dest: "{{ group_vars_dir }}/all/group_dict.yml"
        content: |
          group_dict:
            {{ _group_dict|to_nice_yaml(indent=2)|indent(2) }}
    - include_vars:
        file: "{{ group_vars_dir }}/all/group_dict.yml"
  delegate_to: localhost
  run_once: true
  when: group_dict is not defined or group_dict_refresh|d(false)|bool
If the dictionary group_dict does not exist (i.e. the file group_vars/all/group_dict.yml has not been created yet), the block creates the dictionary, writes it to group_vars/all/group_dict.yml, and includes it in the play. You can refresh group_dict by setting group_dict_refresh=true whenever you change the variables in group_vars/<group>.
shell> cat group_vars/all/group_dict.yml
group_dict:
  pc:
    packages:
    - keepassx
    - lm-sensors
    - hddtemp
  servers:
    packages:
    - lftp
    - mercurial
The result, stored in the variable my_packages, is the merged list of packages from the groups each host belongs to:
TASK [debug] *********************************************************
ok: [kerbel] =>
  my_packages:
  - keepassx
  - lm-sensors
  - hddtemp
  - lftp
  - mercurial
ok: [armen] =>
  my_packages:
  - keepassx
  - lm-sensors
  - hddtemp
ok: [kerzo] =>
  my_packages:
  - keepassx
  - lm-sensors
  - hddtemp
Notes:
Best practice is to run group_dict_create.yml separately for all hosts and let other playbooks use the created group_vars/all/group_dict.yml; see the sketch after this list.
The framework described here is idempotent.
The framework should be easily extendable to other use cases of merging variables by groups.
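For example, a minimal separate playbook for that purpose (the file name group_dict_refresh.yml is hypothetical) could be:

shell> cat group_dict_refresh.yml
- hosts: all
  pre_tasks:
    - import_tasks: group_dict_create.yml

shell> ansible-playbook group_dict_refresh.yml -e group_dict_refresh=true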
Example. Add variable users to group_vars/<group>
shell> cat group_vars/pc/users.yml
users:
- alice
- bob
shell> cat group_vars/servers/users.yml
users:
- carol
- dave
and add the variable my_users to group_vars/all/group_dict_vars.yml
my_users: "{{ group_names|map('extract', group_dict, 'users')|flatten|unique }}"
Refresh the dictionary group_dict for all hosts
shell> ansible-playbook pb.yml -l all -e group_dict_refresh=true
gives
ok: [armen] =>
  my_users:
  - alice
  - bob
ok: [kerbel] =>
  my_users:
  - alice
  - bob
  - carol
  - dave
ok: [kerzo] =>
  my_users:
  - alice
  - bob
On an Ubuntu 18 server, the directory /home/adminuser/keys contains 5 files that hold parts of a key:
/home/adminuser/key/
|- unseal_key_0
|- unseal_key_1
|- unseal_key_2
|- unseal_key_3
|- unseal_key_4
File contents:
1bbeaafab5037a287bde3e5203c8b2cd205f4cc55b4fcffe7931658dc20d8cdcdf
bdf7a6ee4c493aca5b9cc2105077ec67738a0e8bf21936abfc5d1ff8080b628fcb
545c087d3d59d02556bdbf8690c8cc9faafec0e9766bb42de3a7884159356e91b8
053207b0683a8a2886129f7a1988601629a9e7e0d8ddbca02333ce08f1cc7b3887
2320f6275804341ebe5d39a623dd309f233e454b4453c692233ca86212a3d40b5f
Part of the Ansible playbook (the task):

- name: Reading file contents
  command: cat {{ item }}
  register: unseal_keys
  with_fileglob: "/home/adminuser/keys/*"
The error that I get:
"[WARNING]: Unable to find '/home/adminuser/keys' in expected paths (use -vvvvv to see paths)"
I have tried to:
change the user that creates the directory and files
change the path to /home/adminuser/keys/ and /home/adminuser/keys
I expect all of the file contents (the parts of a single key) to be merged into one string:
1bbeaafab5037a287bde3e5203c8b2cd205f4cc55b4fcffe7931658dc20d8cdcdfbdf7a6ee4c493aca5b9cc2105077ec67738a0e8bf21936abfc5d1ff8080b628fcb545c087d3d59d02556bdbf8690c8cc9faafec0e9766bb42de3a7884159356e91b8053207b0683a8a2886129f7a1988601629a9e7e0d8ddbca02333ce08f1cc7b38872320f6275804341ebe5d39a623dd309f233e454b4453c692233ca86212a3d40b5f
Given the files below for testing
shell> tree /tmp/admin/
/tmp/admin/
└── key
    ├── key_0
    ├── key_1
    └── key_2

1 directory, 3 files
shell> cat /tmp/admin/key/key_0
abc
shell> cat /tmp/admin/key/key_1
def
shell> cat /tmp/admin/key/key_2
ghi
Use the module assemble to: "assemble a configuration file from fragments."
Declare the path
key_all_path: /tmp/admin/key_all
and assemble the fragments
- assemble:
    src: /tmp/admin/key
    dest: "{{ key_all_path }}"
This will create the file /tmp/admin/key_all
shell> cat /tmp/admin/key_all
abc
def
ghi
Read the file and join the lines. Declare the variable
key_all: "{{ lookup('file', key_all_path).splitlines()|join('') }}"
gives
key_all: abcdefghi
Example of a complete playbook for testing
- hosts: localhost
  vars:
    key_all_path: /tmp/admin/key_all
    key_all: "{{ lookup('file', key_all_path).splitlines()|join('') }}"
  tasks:
    - assemble:
        src: /tmp/admin/key
        dest: "{{ key_all_path }}"
    - debug:
        var: key_all
Thanks!
The problem was in the paths and in the hosts where the task had to be executed.
It is solved by locating and reading the files locally and executing this task:

- name: Reading file contents
  command: cat "{{ item }}"
  register: keys              # all file contents go into the variable "keys"
  with_fileglob: "~/keys/*"   # path to the directory on my local machine where all files are stored
  delegate_to: localhost      # execute this task on the local machine
  become: false               # drop sudo so that no password is requested
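To get the single merged string the question asked for, the registered results can then be joined (a sketch; a loop with register stores the per-file output in keys.results, each item carrying its stdout):

- name: Merge key parts into one string
  set_fact:
    unseal_key: "{{ keys.results | map(attribute='stdout') | join('') }}"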
I'm using this kind of Ansible lookup in order to load the content of a file into a variable:
- name: Prepare ignition for worker nodes
  set_fact:
    custom_attr: "{{ lookup('file', './files/ignition/{{ oc_cluster_name }}/worker.ign') | b64encode }}"
  when: item.name.startswith('worker')
I know that we should avoid nesting variables (moustaches don't stack, right?). This code does work, but I'm not sure it's the correct way to write it.
Is there another way to do it? I used to write it as two separate set_fact tasks, which works as well, but it's no better (it uses a temporary var):
- name: Prepare ignition for worker nodes
  block:
    - name: locate file for worker node
      set_fact:
        ignition_file: "./files/ignition/{{ oc_cluster_name }}/worker.ign"
    - name: load file into fact for worker node
      set_fact:
        custom_attr: "{{ lookup('file', ignition_file) | b64encode }}"
  when: item.name.startswith('worker')
What do you think?
I'm trying to write nice code following best practices: no temporary variables, and interpolating nested variables the proper way.
Moustaches shouldn't be stacked because it's not necessary to do so. You're already in a Jinja expression so you just access variables by name without wrapping them in more delimiters.
- name: Prepare ignition for worker nodes
  set_fact:
    # Relative paths are looked for in `files/` first, so there's no need to specify it
    custom_attr: "{{ lookup('file', 'ignition/' ~ oc_cluster_name ~ '/worker.ign') | b64encode }}"
  when: item.name.startswith('worker')
You can also use a temporary variable without a separate set_fact, which can be helpful for breaking up complex expressions:
- name: Prepare ignition for worker nodes
  set_fact:
    custom_attr: "{{ lookup('file', ignition_file) | b64encode }}"
  vars:
    ignition_file: ignition/{{ oc_cluster_name }}/worker.ign
  when: item.name.startswith('worker')
Q: "Write nice code."
A: Put the declarations into vars, for example into group_vars/all:
shell> tree .
.
├── ansible.cfg
├── files
│   └── ignition
│       └── cluster1
│           └── worker.ign
├── group_vars
│   └── all
├── hosts
└── pb.yml

4 directories, 5 files
shell> cat ansible.cfg
[defaults]
gathering = explicit
inventory = $PWD/hosts
remote_tmp = ~/.ansible/tmp
retry_files_enabled = false
stdout_callback = yaml
shell> cat files/ignition/cluster1/worker.ign
test
shell> cat group_vars/all
oc_cluster_name: cluster1
ignition_file: "./files/ignition/{{ oc_cluster_name }}/worker.ign"
custom_attr: "{{ lookup('file', ignition_file)|b64encode }}"
shell> cat hosts
localhost
shell> cat pb.yml
- hosts: localhost
  tasks:
    - debug:
        var: custom_attr|b64decode
shell> ansible-playbook pb.yml
PLAY [localhost] *****************************************************************************
TASK [debug] *********************************************************************************
ok: [localhost] =>
  custom_attr|b64decode: test
PLAY RECAP ***********************************************************************************
localhost: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
I have a dynamic inventory set up which pulls hosts and their variables from a MySQL database. The dynamic inventory itself is working perfectly.
Some of the variables inside the inventory are sensitive so I would prefer not to store them as plain text.
So as a test I encrypted a value using:
ansible-vault encrypt_string 'foobar'
Which resulted in:
!vault |
  $ANSIBLE_VAULT;1.1;AES256
  39653264643032353830336333356665663638353839356162386462303338363661333564633737
  3737356131303563626564376634313865613433346438630a343534363066393431633366393863
  30333636386536333166363533373239303864633830653566316663616432633336393936656233
  6633666134306530380a356664333834353265663563656162396435353030663833623166363466
  6436
Encryption successful
I decided to store the encrypted value as a variable inside MySQL. For the avoidance of doubt, after some testing, you can normalise the encrypted string to:
$ANSIBLE_VAULT;1.1;AES256
363635393063363466313636633166356562396562336633373239333630643032646637383866656463366137623232653531613135623464353932653665300a376461666332626538386263343732333039356663363132333533663339313466346435373064343931383536393736303731393834323964613063323362370a3361313132396239336130643839623939346438616363383932616639656463
You can test this format works with a simple playbook:
---
- hosts: all
  gather_facts: no
  vars:
    final_var: !vault "$ANSIBLE_VAULT;1.1;AES256\n363635393063363466313636633166356562396562336633373239333630643032646637383866656463366137623232653531613135623464353932653665300a376461666332626538386263343732333039356663363132333533663339313466346435373064343931383536393736303731393834323964613063323362370a3361313132396239336130643839623939346438616363383932616639656463"
  tasks:
    - name: Display variable
      debug:
        msg: "{{ final_var }}"
When the playbook is executed the following output is observed:
ok: [target] => {
"msg": "foobar"
}
The difficulty arises when trying to get the variable (my_secret) from the inventory instead of referencing it directly in a file.
---
- hosts: all
  gather_facts: no
  vars:
    final_var: !vault "{{ my_secret }}"
  tasks:
    - name: Display variable
      debug:
        msg: "{{ final_var }}"
This results in:
fatal: [target]: FAILED! => {"msg": "input is not vault encrypted data"}
Now, while I've spoken a lot about the value being stored in the dynamic inventory in MySQL we can get a similar behaviour if we remove that from the equation.
---
- hosts: all
  gather_facts: no
  vars:
    secret: "$ANSIBLE_VAULT;1.1;AES256\n363635393063363466313636633166356562396562336633373239333630643032646637383866656463366137623232653531613135623464353932653665300a376461666332626538386263343732333039356663363132333533663339313466346435373064343931383536393736303731393834323964613063323362370a3361313132396239336130643839623939346438616363383932616639656463"
    final_var: !vault "{{ secret }}"
  tasks:
    - name: Display variable
      debug:
        msg: "{{ final_var }}"
This is now nearly identical to the working example but the encrypted string is not written inline but instead coming from another variable.
This results in the same error:
fatal: [target]: FAILED! => {"msg": "input is not vault encrypted data"}
This may indicate that the wider problem is that Ansible cannot handle encrypted data stored in a variable. The !vault tag is applied when the YAML is parsed, before any templating happens, so Ansible is literally trying to decrypt the string "{{ my_secret }}" rather than $ANSIBLE_VAULT;1.1;AES256 ....
Which kind of makes sense, but I wanted to run that past you and ask if there is a way around this, or if you recommend a different approach entirely. Thanks.
You'll be better off putting the variables into encrypted files. Store encrypted files in MySQL instead of encrypted variables. Since you already "have a dynamic inventory set up which pulls hosts and their variables from a MySQL database", there shouldn't be a problem modifying the setup: pull the encrypted files from the database and store them in host_vars (and/or group_vars, play vars, role vars, ...) instead of storing encrypted variables in the inventory (and/or in the code of a playbook, role, ...). This way, in the code, you don't care whether a variable is encrypted or not.
For example
shell> tree host_vars/
host_vars/
├── test_01
│   └── final_var.yml
├── test_02
│   └── final_var.yml
└── test_03
    └── final_var.yml
shell> cat host_vars/test_01/final_var.yml
final_var: final_var for test_01
shell> cat host_vars/test_02/final_var.yml
final_var: final_var for test_02
shell> cat host_vars/test_03/final_var.yml
final_var: final_var for test_03
Encrypt the files. For example
shell> ansible-vault encrypt host_vars/test_01/final_var.yml
Encryption successful
shell> cat host_vars/test_01/final_var.yml
$ANSIBLE_VAULT;1.1;AES256
37363965336263366466336236336466323033353763656262633836323062626135613834396435
3665356363396132356131663336396138663962646434330a346433353039383864333638633462
35623034363338356362346133303262393233346439363264353036386337356236336135626434
6533333864623132330a346566656630376439643533373263303338313063373239343463333431
62353230323336383263376335613635616339383934313164323938363066616136373036326461
3538613937663530326364376335343438366139366639303230
Then the playbook below
- hosts: test_01,test_02,test_03
  tasks:
    - debug:
        msg: "{{ inventory_hostname }}: {{ final_var }}"
gives (abridged)
"msg": "test_02: final_var for test_02"
"msg": "test_01: final_var for test_01"
"msg": "test_03: final_var for test_03"
I want to run updates on multiple Linux servers that all have different user names and passwords. I think this is a common use case, but it's not covered in the documentation.
SSH key auth is in place, but I need elevated access for the update process, and the Ansible tasks would require far too many permissions to handle this through sudoers files.
How do I get the different ansible_password values from the inventory into one vault file, so that I can run the playbook, enter only one password to decrypt all the sudo passwords, and have it work?
Inventory:
[servers]
1.2.3.4 ansible_user=user1 ansible_password=password1
1.2.3.5 ansible_user=user2 ansible_password=password2
1.2.3.6 ansible_user=user3 ansible_password=password3
Playbook:
---
- hosts: servers
  become: yes
  become_method: sudo
  gather_facts: false
  vars:
    verbose: false
    log_dir: "/var/log/ansible/dist-upgrade/{{ inventory_hostname }}"
  pre_tasks:
    - name: Install python for Ansible
      raw: sudo bash -c "test -e /usr/bin/python || (apt -qqy update && apt install -qy python-minimal)"
      changed_when: false
  tasks:
    - name: Update packages
      apt:
        update_cache: yes
        upgrade: dist
        autoremove: no
      register: output
    - name: Check changes
      set_fact:
        updated: true
      when: not output.stdout is search("0 upgraded, 0 newly installed")
    - name: Display changes
      debug:
        msg: "{{ output.stdout_lines }}"
      when: verbose or updated is defined
    - block:
        - name: "Create log directory"
          file:
            path: "{{ log_dir }}"
            state: directory
          changed_when: false
        - name: "Write changes to logfile"
          copy:
            content: "{{ output.stdout }}"
            dest: "{{ log_dir }}/dist-upgrade_{{ ansible_date_time.iso8601 }}.log"
          changed_when: false
      when: updated is defined
      connection: local
Q: "How do I get the different ansible_password from the inventory in one file vault?"
A: Create a dictionary with the passwords. For example, given the tree
shell> tree .
.
├── ansible.cfg
├── group_vars
│   └── servers
│       ├── ansible_password.yml
│       └── my_vault.yml
├── hosts
└── pb.yml
Remove the passwords from the inventory file
shell> cat hosts
[servers]
1.2.3.4 ansible_user=user1
1.2.3.5 ansible_user=user2
1.2.3.6 ansible_user=user3
Create a dictionary with the passwords
shell> cat group_vars/servers/my_vault.yml
my_vault:
  1.2.3.4:
    ansible_password: password1
  1.2.3.5:
    ansible_password: password2
  1.2.3.6:
    ansible_password: password3
and declare the variable ansible_password
shell> cat group_vars/servers/ansible_password.yml
ansible_password: "{{ my_vault[inventory_hostname].ansible_password }}"
Encrypt the file
shell> ansible-vault encrypt group_vars/servers/my_vault.yml
Encryption successful
shell> cat group_vars/servers/my_vault.yml
$ANSIBLE_VAULT;1.1;AES256
3361393763646264326661326433313837613531376266383239383761...
3564366531386130623162386332646366646561663763320a63353365...
...
The playbook
shell> cat pb.yml
- hosts: servers
  tasks:
    - debug:
        var: ansible_password
gives
ok: [1.2.3.4] =>
  ansible_password: password1
ok: [1.2.3.5] =>
  ansible_password: password2
ok: [1.2.3.6] =>
  ansible_password: password3
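Because group_vars/servers/my_vault.yml is encrypted, run the playbook with a single vault password prompt, which satisfies the "enter only one password" requirement:

shell> ansible-playbook pb.yml --ask-vault-pass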
Use pass, the standard unix password manager
You can use pass instead of Ansible Vault. For example, put the passwords into pass:
shell> pass 1.2.3.4/user1
password1
shell> pass 1.2.3.5/user2
password2
shell> pass 1.2.3.6/user3
password3
and use the lookup plugin community.general.passwordstore. See details
shell> ansible-doc -t lookup community.general.passwordstore
Remove the file group_vars/servers/my_vault.yml and change the declaration of ansible_password
shell> cat group_vars/servers/ansible_password.yml
passwordstore_name: "{{ inventory_hostname }}/{{ ansible_user }}"
ansible_password: "{{ lookup('community.general.passwordstore',
passwordstore_name) }}"
The playbook above will give the same results.
Move ansible_user and ansible_password out of your inventory and into your host_vars directory. That is, make your inventory look like this:
[servers]
1.2.3.4
1.2.3.5
1.2.3.6
Then ansible-vault create host_vars/1.2.3.4.yml and give it the content:
ansible_user: user1
ansible_password: password1
And so on for the other hosts in your inventory.
I have written a playbook named master.yaml, defined below:
- hosts: master
  remote_user: "{{ ansible_user }}"
  tasks:
    - name: Get env
      command: id -g -n {{ lookup('env', '$USER') }}
      register: group_user
      vars:
        is_done: "false"
    - include: slave.yaml
      vars:
        sethostname: "{{ group_user }}"
        worker: worker
      when: is_done == "true"
      where: inventory_hostname in groups['worker']
I am trying to run another playbook, named slave.yaml and defined below, after a certain condition is met.
- hosts: worker
  remote_user: "{{ ansible_user }}"
  tasks:
    - name: Write to a file for daemon setup
      copy:
        content: "{{ sethostname }}"
        dest: "/home/ubuntu/test.text"
Now I have two questions:
1. I am not able to set the value of the var is_done. slave.yaml should only run when is_done is "true".
2. How can slave.yaml access the value of worker? I have defined the group worker in inventory.yaml.
I do not know if it's the right way to reach your objective, but I tried to make this playbook work while keeping as much of your logic as possible. Hope it helps.
The point is that you cannot use import_playbook inside a play. Check the module documentation for more information.
So I propose to share the code with a role instead of a playbook. You will then be able to share the slave role between the master playbook and other playbooks, a slave playbook for example (sketched at the end of this answer).
The Ansible folder structure is the following.

├── hosts
├── master.yml
└── roles
    └── slave
        └── tasks
            └── main.yml
master.yml:
---
- name: 'Master Playbook'
  # Using the serial keyword to run the playbook for each host one by one
  hosts: master
  serial: 1
  remote_user: "{{ ansible_user }}"
  tasks:
    - name: 'Get env'
      command: id -g -n {{ lookup('env', '$USER') }}
      register: group_user
    - name: 'Calling the slave role'
      import_role:
        name: 'slave'
      # The return value of the command is stored in stdout
      vars:
        sethostname: "{{ group_user.stdout }}"
      # Only run when the "Get env" task has been done (state changed)
      when: group_user.changed
      # Delegate the call to the worker host(s) -> don't know if it's the expected behavior
      delegate_to: 'worker'
The slave role's tasks/main.yml:
---
- name: 'Write to a file for daemon setup'
  copy:
    content: "{{ sethostname }}"
    dest: "/tmp/test.text"
In the end, /tmp/test.text contains the effective user's group name.
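For completeness, since the role is shareable, a separate slave playbook (the file name slave.yml and the value passed below are hypothetical) could reuse it:

---
- name: 'Slave Playbook'
  hosts: worker
  remote_user: "{{ ansible_user }}"
  tasks:
    - name: 'Calling the slave role'
      import_role:
        name: 'slave'
      vars:
        sethostname: 'some-group-name'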