NB: I am using Ansible 1.9.4 and can't upgrade at this time
I am trying to conditionally include a task file based on whether or not a nested dictionary exists.
I have 4 hosts, 1 of which has a type of virtual machine I'm calling heavy_nvme but the others don't. I am trying to run this playbook on all 4 hosts simultaneously. I am using group_vars to set all of the variables for all of the servers, then a single host file to overwrite the relevant dict for the server with different requirements.
My group vars look like this:
virtual_machines:
  type:
    druid:
      os: amazon_linux_2
      count: 0
      memory: 16
      vcpus: 4
      disk: {}
    heavy_nvme:
      active: false
My include command looks like this:
- name: destroy virtual machines
  include: destroy-virtual-machines.yml types={{ virtual_machines.type }} name='heavy_nvme'
  when: '"heavy_nvme" in virtual_machines.type and virtual_machines.type.heavy_nvme.count > 0'
When I try to run this, I get a failure related to a task in the included file:
fatal: [hostname.local] => unknown error parsing with_sequence arguments: u'start=1 end={{ num_machines }}'
This refers to the following line from destroy-virtual-machines.yml:
- name: destroy virt pools
  command: "virsh pool-destroy {{ vm_name_prefix }}-{{ item }}"
  with_sequence: start=1 end={{ num_machines }}
  ignore_errors: True
The error occurs because the num_machines variable is a fact set by performing a calculation on the count variable from each of the virtual machine types. The druid type runs fine because count exists, but obviously it doesn't in the heavy_nvme dict.
I have tried a few different approaches, including adding a count: 0 to the heavy_nvme dict. I have also tried removing the heavy_nvme dict entirely, or setting it as a blank dict - neither gives me the outcome I require.
I'm pretty sure this error occurs because when you run a conditional include, the condition is simply added to every task in the included file. That won't work for me because then I'll be missing a load of variables that the included file requires.
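As I understand it, the task above effectively ends up as something like this (my illustration, not actual Ansible output):

- name: destroy virt pools
  command: "virsh pool-destroy {{ vm_name_prefix }}-{{ item }}"
  with_sequence: start=1 end={{ num_machines }}
  ignore_errors: True
  when: '"heavy_nvme" in virtual_machines.type and virtual_machines.type.heavy_nvme.count > 0'

and the with_sequence arguments still get templated before the when is evaluated, which would explain why the parsing fails even though the condition is false.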
Is there a way that I can stop these tasks from being run at all if the heavy_nvme dict doesn't exist, or if it has a count 0, or if it has a specified key/var set (e.g. active: false)?
I am trying to understand how to get facts to be preserved across multiple playbook runs.
I have something like this:
- name: Host Check
  block:
    - name: Host Check | Obtain System UUID
      shell:
        cmd: >
          dmidecode --string system-uuid
          | head -c3
      register: uuid

    - name: Host Check | Verify EC2
      set_fact:
        is_ec2: "{{ uuid.stdout == 'ec2' }}"
        cacheable: yes
That chunk of code saves is_ec2 as a fact and is executed by a bigger playbook, let's say setup.yml:
ansible-playbook -i cloud_inventory.ini setup.yml
But then if I wanted to run another playbook, let's say test.yml and tried to access is_ec2 (on the same host), I get the following error:
fatal: [factTest]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'is_ec2' is undefined\n\nThe error appears to be in '/path/to/playbooks/test.yml': line 9, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n tasks:\n - name: Print isEC2\n ^ here\n"}
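For reference, test.yml looks roughly like this (reconstructed from the error output, so the exact layout is an assumption):

- hosts: factTest
  tasks:
    - name: Print isEC2
      debug:
        var: is_ec2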
I am a little confused because, according to the set_fact docs for the cacheable option, this should be possible, yet I can't get it to work:
This boolean converts the variable into an actual 'fact' which will also be added to the fact cache, if fact caching is enabled.
Normally this module creates 'host level variables' and has much higher precedence, this option changes the nature and precedence (by 7 steps) of the variable created. https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable
This actually creates 2 copies of the variable, a normal 'set_fact' host variable with high precedence and a lower 'ansible_fact' one that is available for persistance via the facts cache plugin. This creates a possibly confusing interaction with meta: clear_facts as it will remove the 'ansible_fact' but not the host variable.
I was able to solve it!
While there was nothing wrong with the code above, to make it work I needed to enable fact caching and specify the plugin I wanted to use. This can be done in ansible.cfg by adding these lines under the [defaults] section:
fact_caching = jsonfile
fact_caching_connection = /path/to/cache/directory/where/a/json/will/be/saved
After that I was able to access those cached facts across executions.
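For example, the second playbook can now read is_ec2 from the cache when run against the same inventory:

ansible-playbook -i cloud_inventory.ini test.yml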
I was curious to see what the JSON file would look like; it seems to contain the facts you can gather from the setup module plus the cached facts:
    ...
    ...
    "ansible_virtualization_tech_host": [],
    "ansible_virtualization_type": "kvm",
    "discovered_interpreter_python": "/usr/bin/python3",
    "gather_subset": [
        "all"
    ],
    "is_ec2": true,
    "module_setup": true
}
Here is the problem I have. I am running the following playbook:
- name: Check for RSA-Key existence
  stat:
    path: /opt/cert/{{ item.username }}.key
  with_items: "{{ roles }}"
  register: rsa

- name: debug
  debug:
    var: item.stat.exists
  loop: "{{ rsa.results }}"

- name: Generate RSA-Key
  community.crypto.openssl_privatekey:
    path: /opt/cert/{{ item.username }}.key
    size: 2048
  when: item.stat.exists == False
  with_items:
    - "{{ roles }}"
    - "{{ rsa.results }}"
This is the error I receive:
The error was: error while evaluating conditional (item.stat.exists == False): 'dict object' has no attribute 'stat'
The debug task does not raise any error and reports:
"item.stat.exists": true
What am I doing wrong and how can I fix my playbook to make it work?
TL;DR
Replace all your tasks with a single one:
- name: Generate RSA-Key or check they exist
  community.crypto.openssl_privatekey:
    path: /opt/cert/{{ item.username }}.key
    size: 2048
    state: present
  with_items: "{{ roles }}"
Problem with the last loop in your original example
I don't know what you are trying to do exactly when writing the following in your last task:
with_items:
  - "{{ roles }}"
  - "{{ rsa.results }}"
What I do know is the actual result: you are looping over a single list made of the roles elements followed by the rsa.results elements. Since I am pretty sure no element in your roles list has a stat.exists entry, the error you get is entirely expected.
Once you have looped over an original list (e.g. roles) and registered the result of the tasks (e.g. in rsa), you actually have all the information you need inside that registered var. rsa.results is a list of individual results. In each element, you will find all the keys returned by the module you ran (e.g. stat) and an item key holding the original element that was used in the loop (i.e. an entry of your original roles list).
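For illustration only (abridged, with a hypothetical username value), one element of rsa.results has roughly this shape:

  item:                        # the original element from the roles list
    username: alice            # hypothetical value
  stat:
    exists: true
    path: /opt/cert/alice.key
  # ... plus the other keys returned by the stat module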
I strongly suggest you study this yourself by looking at the entire variable to see how it is structured:
- name: Show my entire registered var
  debug:
    var: rsa
Once you have looked at your incoming data, it will become obvious that you should modify your last task as follows (note item.item, which references the original element from the previous loop):
- name: Generate RSA-Key
  community.crypto.openssl_privatekey:
    path: /opt/cert/{{ item.item.username }}.key
    size: 2048
  when: not item.stat.exists # Comparing to an explicit False is bad practice; use this instead
  with_items: "{{ rsa.results }}"
Using stat here is overkill
To go further: although the above answers your direct question, this approach does not make much sense in the Ansible world. You are doing a bunch of work that Ansible already does for you behind the scenes.
The community.crypto.openssl_privatekey module creates keys idempotently, i.e. it will create the key only if it doesn't exist and report changed, or do nothing and report ok if the key already exists. So you can basically reduce your three-task example to a single task:
- name: Generate RSA-Key or check they exist
  community.crypto.openssl_privatekey:
    path: /opt/cert/{{ item.username }}.key
    size: 2048
    state: present # This is not necessary (default) but good practice
  with_items: "{{ roles }}"
Consider changing your var name
Last, I'd like to mention that roles is actually a reserved name in Ansible, so defining a var with that name should raise a warning in current Ansible versions and will probably be deprecated at some point.
Refs:
registering variables
registering variables with a loop
I'm writing an Ansible custom module that does the same as the uri module with some additional features.
In particular, I would like the data returned to be available either as a global var or as a host-specific var (where the host is the inventory host running the task), based on a value passed to the module.
However, no matter what I do, it seems that I can only create a global variable.
Let's say that I run the playbook for N hosts and I execute the custom module only once (run_once: yes).
[...]
- name: Run the custom module and generate output in custom_var
  run_once: yes
  custom_module:
    field1: ...
    field2: ...
    global_data: yes # yes -> create a global fact called custom_var

- name: DEBUG - Print the content of custom_var
  debug: msg="{{ custom_var }}"
This works fine, all N hosts are able to see the custom_var.
Is there a way that I can have custom_var defined only for the actual host that executes the task?
Strangely enough, I also tried to register the result of the first task as follows:
[...]
- name: Run the custom module and generate output in custom_var
  run_once: yes
  custom_module:
    field1: ...
    field2: ...
    global_data: no # no -> don't create any global fact
  register: register_task

- name: DEBUG - Print the content of custom_var
  debug: msg="{{ register_task }}"
but it looks like that was also available to all the hosts in the inventory (with the same content). Is that the expected behaviour?
And any idea how to make custom_var available only to the host that actually runs the task?
Thanks in advance!
Yes, this is the expected behaviour with run_once: yes, as it overrides the host loop.
With this option set, the task is executed only once (on the first host in a batch), but the result is copied to every host in the same batch. So you end up with custom_var / register_task defined for every host.
If you want the opposite, you can change run_once: yes to a conditional statement like:
when: play_hosts.index(inventory_hostname) == 0
This will execute the task only on the first host but will not override the host loop (the task is still skipped on the other hosts because of the conditional); this way the registered fact will be available only on the first host.
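Applied to the task from the question, that would look roughly like this (a sketch only; custom_module and its fields are taken from the question):

- name: Run the custom module only on the first host of the play
  custom_module:
    field1: ...
    field2: ...
    global_data: no
  register: register_task
  when: play_hosts.index(inventory_hostname) == 0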
I have a playbook in which one of the tasks copies template files to a specific server, named "grover", using delegate_to.
The inventory list with servers of consequence here is: bigbird, grover, oscar
These template files MUST have a name that matches each server's hostname, and the delegate server grover must also have its own instance of said file. This template copy operation should only take place if the file does not already exist. /tmp/grover pre-exists on server grover, and it needs to remain unaltered by the playbook run.
Eventually, in addition to /tmp/grover, the files /tmp/bigbird and /tmp/oscar should also exist on server grover after the run.
The problem I'm having is that when the playbook runs without any conditionals, it does function, but it also clobbers/overwrites /tmp/grover, which is unacceptable.
BUT, if I add tasks in earlier plays to pre-check for these files, and then a conditional on the template task to skip grover if the file already exists, it not only skips grover, it skips every other server that would run against grover for that task as well. If I try to set it to run on every server BUT grover, it will still fail because the delegate server is grover and that would be skipped.
Here is the actual example code snippet; the playbook runs on all 3 servers due to the host pattern:
- hosts: bigbird:grover:oscar
  tasks:

    - name: File clobber check.
      stat:
        path: /tmp/{{ ansible_hostname }}
      register: clobber_check
      delegate_to: "{{ item }}"
      with_items:
        - "{{ grover }}"

    - name: Copy templates to grover.
      template:
        backup: yes
        src: /opt/template
        dest: /tmp/{{ ansible_hostname }}
        group: root
        mode: "u=rw,g=rw"
        owner: root
      delegate_to: "{{ item }}"
      with_items:
        - "{{ grover }}"
      when: ( not clobber_check.stat.exists ) and ( clobber_check is defined )
If I run that and /tmp/grover exists on grover, it simply skips the entire copy task because the conditional failed on grover. Thus the other servers never get their /tmp/bigbird and /tmp/oscar templates copied to grover.
Lastly, I'd like to avoid hacky solutions like saving a backup of grover's original config file, allowing the clobber, and then copying the saved file back as the last task.
I must be missing something here; I can't be the only person to have run into this scenario. Does anyone have any idea how to code for this?
The answer to this question is to remove the unnecessary with_items from the tasks. While with_items used this way does allow you to delegate_to a host pattern group, it also prevents variables from being defined/assigned properly for the hosts within that group.
Defining a single host for delegate_to fixes the issue, and the tasks then execute as expected on all the hosts defined.
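For example (my sketch of the fix, not the exact code from the original answer; it assumes grover can be addressed by its inventory name), both tasks delegate to the single host and drop with_items:

- name: File clobber check.
  stat:
    path: /tmp/{{ ansible_hostname }}
  register: clobber_check
  delegate_to: grover

- name: Copy templates to grover.
  template:
    backup: yes
    src: /opt/template
    dest: /tmp/{{ ansible_hostname }}
    group: root
    mode: "u=rw,g=rw"
    owner: root
  delegate_to: grover
  when: not clobber_check.stat.exists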
Konstantin Suvorov should get the credit for this, as he was the original answerer.
Additionally, I am surprised that Ansible doesn't allow for easy delegation to a group of hosts. There must be reasons they didn't allow it.
# "Run application specific configuration scripts"
- include: "app_cfg/{{ app_name }}.yml"
when: "{{ app_conf[app_name].app_cfg }}"
ignore_errors: no
tags:
- conf
I thought that I would be able to conditionally include application-specific playbooks simply by setting one variable to a true/false value, like so:
app_conf:
  my_app_1:
    app_cfg: no
  my_app_2:
    app_cfg: yes
Unfortunately, Ansible is forcing me to create the file beforehand:
ERROR: file could not read: <...>/tasks/app_cfg/app_config.yml
Is there a way I can avoid creating a bunch of empty files?
# ansible --version
ansible 1.9.2
include with when is not conditional in the usual sense.
It actually includes every task inside the include file and appends the given when statement to each included task.
So it expects the include file to exist.
You can try to handle this using with_first_found and skip: true.
Something like this:
# warning, code not tested
- include: "app_cfg/{{ app_name }}.yml"
  with_first_found:
    - files:
        - "{{ app_conf[app_name].app_cfg | ternary('app_cfg/'+app_name+'.yml', 'unexisting.file') }}"
      skip: true
  tags: conf
It is supposed to supply a valid config file name if app_cfg is true, and unexisting.file (which will be skipped) otherwise.
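For instance, with the app_conf above and app_name set to my_app_2, the ternary renders app_cfg/my_app_2.yml, so the lookup finds the file and it gets included; with my_app_1 it renders unexisting.file, first_found matches nothing, and skip: true turns that into a clean skip instead of an error.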
See this answer about the skip option.