I am trying to understand how to get facts to be preserved across multiple playbook runs.
I have something like this:
- name: Host Check
  block:
    - name: Host Check | Obtain System UUID
      shell:
        cmd: >
          dmidecode --string system-uuid
          | head -c3
      register: uuid

    - name: Host Check | Verify EC2
      set_fact:
        is_ec2: "{{ uuid.stdout == 'ec2' }}"
        cacheable: yes
That chunk of code saves is_ec2 as a fact and is executed by a bigger playbook, let's say setup.yml:
ansible-playbook -i cloud_inventory.ini setup.yml
But then if I wanted to run another playbook, let's say test.yml and tried to access is_ec2 (on the same host), I get the following error:
fatal: [factTest]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'is_ec2' is undefined\n\nThe error appears to be in '/path/to/playbooks/test.yml': line 9, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n tasks:\n - name: Print isEC2\n ^ here\n"}
I am a little confused because while I read this is possible in the docs, I can't get it to work.
This boolean converts the variable into an actual 'fact' which will also be added to the fact cache, if fact caching is enabled.
Normally this module creates 'host level variables' and has much higher precedence, this option changes the nature and precedence (by 7 steps) of the variable created. https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable
This actually creates 2 copies of the variable, a normal 'set_fact' host variable with high precedence and a lower 'ansible_fact' one that is available for persistence via the facts cache plugin. This creates a possibly confusing interaction with meta: clear_facts as it will remove the 'ansible_fact' but not the host variable.
I was able to solve it!
While there was nothing wrong with the code above, in order to make it work I needed to enable fact caching and specify the plugin I wanted to use. This can be done in the ansible.cfg file by adding these lines under the [defaults] section:

[defaults]
fact_caching = jsonfile
fact_caching_connection = /path/to/cache/directory/where/a/json/will/be/saved
After that I was able to access those cached facts across executions.
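For example, a second playbook can simply read the fact (a minimal sketch, not from the original post; the host and task name mirror the error output above):

---
- hosts: factTest
  tasks:
    - name: Print isEC2
      debug:
        var: is_ec2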
I was curious to see what the JSON file would look like and it seems like it has the facts that you can gather from the setup module and the cached facts:
...
...
"ansible_virtualization_tech_host": [],
"ansible_virtualization_type": "kvm",
"discovered_interpreter_python": "/usr/bin/python3",
"gather_subset": [
"all"
],
"is_ec2": true,
"module_setup": true
}
Related
I have a play
---
- hosts: all
  name: Full install with every component
  gather_facts: no
  roles:
    - oracle_client
    - another_role
    - and_so_on
where each of the roles has a dependency on a single common role which is supposed to load all the variables I will require later:
- name: Include all default extension files in vars/all and all nested directories and save the output in test
  include_vars:
    dir: all
    name: test

- debug:
    msg: "{{test}}"
the common role folder structure has
common
vars
all
ansible.yml
common.yml
oracle_client.yml
where common.yml specifies an app_drive: "D:" and oracle_client.yml then tries to do oracle_base: "{{ app_drive }}\\app\\oracle"
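Laid out as files, that is (simply restating the above; paths are assumed to sit under the common role's vars/all directory):

# vars/all/common.yml
app_drive: "D:"

# vars/all/oracle_client.yml
oracle_base: "{{ app_drive }}\\app\\oracle"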
At runtime I get
fatal: [10.188.27.27]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'app_drive' is undefined\n\nThe error appears to be in '<ansible-project-path>/roles/common/tasks/main.yml': line 18, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - debug:\n ^ here\n"}
Per the documentation, "the files are sorted alphabetically before being loaded", so I was expecting to have access to a variable from another file. How should I do this instead?
TL;DR: I want to load all my variables from one place but keep them logically split (I went for separate files) while still cross-referencing between the splits. How is that best done?
Looks like it works as I expect it to, and in my example above it is the

- debug:
    msg: "{{test}}"

part that fails.
NB: I am using Ansible 1.9.4 and can't upgrade at this time
I am trying to conditionally include a task file based on whether or not a nested dictionary exists.
I have 4 hosts, 1 of which has a type of virtual machine I'm calling heavy_nvme but the others don't. I am trying to run this playbook on all 4 hosts simultaneously. I am using group_vars to set all of the variables for all of the servers, then a single host file to overwrite the relevant dict for the server with different requirements.
My group vars look like this:
virtual_machines:
  type:
    druid:
      os: amazon_linux_2
      count: 0
      memory: 16
      vcpus: 4
      disk: {}
    heavy_nvme:
      active: false
My include command looks like this:
- name: destroy virtual machines
  include: destroy-virtual-machines.yml types={{ virtual_machines.type }} name='heavy_nvme'
  when: '"heavy_nvme" in virtual_machines.type and virtual_machines.type.heavy_nvme.count > 0'
When I try to run this, I get a failure related to a task in the included file:
fatal: [hostname.local] => unknown error parsing with_sequence arguments: u'start=1 end={{ num_machines }}'
This refers to the following line from destroy-virtual-machines.yml:
- name: destroy virt pools
  command: "virsh pool-destroy {{ vm_name_prefix }}-{{ item }}"
  with_sequence: start=1 end={{ num_machines }}
  ignore_errors: True
The error occurs because the num_machines variable is a fact set by performing a calculation on the count variable from each of the virtual machine types. The druid type runs fine because count exists, but obviously it doesn't in the heavy_nvme dict.
I have tried a few different approaches, including adding a count: 0 to the heavy_nvme dict. I have also tried removing the heavy_nvme dict entirely, or setting it as a blank dict - neither gives me the outcome I require.
I'm pretty sure this error occurs because when you run a conditional include, the condition is simply added to every task in the included file. That won't work for me because then I'll be missing a load of variables that the included file requires.
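Roughly speaking (a hypothetical expansion, not actual Ansible output), the included task then behaves as if it were written like this, which is why the loop arguments are still templated even though the condition is false:

- name: destroy virt pools
  command: "virsh pool-destroy {{ vm_name_prefix }}-{{ item }}"
  with_sequence: start=1 end={{ num_machines }}    # still parsed, so an undefined num_machines fails
  ignore_errors: True
  when: '"heavy_nvme" in virtual_machines.type and virtual_machines.type.heavy_nvme.count > 0'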
Is there a way that I can stop these tasks from being run at all if the heavy_nvme dict doesn't exist, or if it has a count 0, or if it has a specified key/var set (e.g. active: false)?
I have an ansible playbook. When I run the playbook I specify which environment file to use.
ansible-playbook playbooks/release-deploy.yaml -i env/LAB3
Within the ansible-playbook I am calling another playbook and I want the same environment file to be used.
My current config is:
- include: tasks/replace_configs.yaml
So when I run the playbook, I get the error:
TASK [include] *****************************************************************
fatal: [10.169.99.70]: FAILED! => {"failed": true, "reason": "no action detected in task. This often indicates a misspelled module name, or incorrect module path.
The error appears to have been in '/home/ansible/playbooks/tasks/replace_configs.yaml': line 2, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
---
- hosts: cac
^ here
"}
tasks/replace_configs.yaml also needs to use env/LAB3
Looks like it doesn't know what cac is. Do I need another config?
My current config is:
- include: tasks/replace_configs.yaml
This is not any "config", this is a line which includes a file containing tasks.
Let's look at the following "task":
The offending line appears to be:
---
- hosts: cac
^ here
It does not look like a task, it looks like a play. It most likely does not contain any module directive, so Ansible rightfully complains that there is no module name provided in the task it expected: no action detected in task.
When you use the include directive inside a task list, Ansible inserts the included content at that level, so when you include from tasks, the included file must contain only tasks.
Your included file should look like:
---
- name: This is a task
  debug: msg="One task"

- name: This is another task
  debug: msg="Another task"
and should not contain any other definitions, particularly those belonging to a higher level.
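If the goal really is to run the plays from replace_configs.yaml (with its own hosts: cac) as part of the same run, that has to happen at playbook level rather than inside tasks - for example (a sketch, assuming Ansible 2.4+; older releases used a plain include at playbook level for the same purpose). The -i env/LAB3 inventory of the single ansible-playbook invocation then applies to it as well:

# at the top level of playbooks/release-deploy.yaml, alongside the other plays
- import_playbook: tasks/replace_configs.yaml   # keeps its own `hosts: cac` play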
On Solaris Ansible's setup module does not gather information about installed zones. How to extend the setup module to gather the output of zoneadm list -iv?
Create a script named /etc/ansible/facts.d/zoneadm.fact and gather whatever information you need there. This can be written in whatever you want (bash/python/etc).
When you are done, echo it to stdout in JSON format.
Deploy that script via Ansible and make it executable.
Gather facts and notice that the new facts are present under ansible_local.zoneadm.
More info can be found here
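A minimal sketch of what such a .fact script could look like (the exact parsing here is an assumption, not part of the original answer):

#!/bin/bash
# /etc/ansible/facts.d/zoneadm.fact - must print valid JSON on stdout;
# after the next fact gathering it shows up as ansible_local.zoneadm
printf '{"zones": '
zoneadm list -iv | sed 1d | awk '{print $2}' | jq -R . | jq -sc .
printf '}\n'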
I was dealing with the same dilemma ~2 years ago. The previous reply is correct, but you have to deploy some stuff first and then rerun Ansible to get the new facts.
There's an option to write your own Ansible module, which returns JSON looking like:

{
    "changed": false,
    "ansible_facts": {
        "sunos": {
            "zonename": "global",
            "zones": {}
        }
    }
}
Facts returned by such a module are then merged with the ones coming from the setup module. To be more portable, it is best to put such a module into an Ansible role and include just one task calling it, plus add the always tag, so that this fact collection runs even when you choose a subset of tasks by specifying tags on the command line.
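Wired into a role, that single task could look something like this sketch (the module name sunos_facts is hypothetical; it would live in the role's library/ directory):

- name: Collect SunOS facts
  sunos_facts:
  tags: always    # runs even when the play is limited with --tags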
I've got my old role pushed here on GitHub. Probably will not work out-of-the-box... was used with Ansible 1.0, but get inspired.
The reason for my misunderstanding was that I thought I had to put something on the Ansible machine which would be automatically deployed to the target system, as is done with modules. But fact gathering works differently! One has to take care that the fact-gathering scripts are already on the target system before setup starts working. I would say this is a design error in Ansible, or at least a still not implemented feature.
To add this missing functionality, it is necessary to write a play which runs before everything else. I came up with the following solution:
---
- name: facts deployment
  gather_facts: false
  hosts: all
  become: true
  tasks:
    - set_fact: setup_necessary=false
    - file:
        path: "/etc/ansible/facts.d"
        state: directory
        recurse: yes
      register: facts_directory
    - set_fact: setup_necessary=true
      when: facts_directory.changed

- name: solaris facts
  gather_facts: false
  hosts: solaris
  become: true
  tasks:
    - include: deploy_fact.yml
      with_items:
        - { shell: bash, file: nonglobal_zones }
        - { shell: bash, file: solaris_eeprom }

- name: setup after facts update
  gather_facts: false
  hosts: all
  tasks:
    - setup:
      when: setup_necessary
The above playbook runs all plays with gather_facts: false, to prevent any setup run before the facts have been deployed. All plays set the variable setup_necessary when any change has been made to the target system. It is not possible to use a handler for this, because handlers run at the end of a play, not at the end of a playbook or after some plays (Ansible limitation 1).
First the directory gets created, and after that all facts files are deployed. It is necessary to put the body of the loop into a separate task file, because it is not possible to loop over a group of two tasks together in a playbook (Ansible limitation 2).
The contents of the deploy_fact.yml file uses the template module to transfer the facts scripts to the target system.
---
- name: "/etc/ansible/facts.d/{{item.file}}.fact"
template:
src: "{{inventory_dir}}/facts.d/{{item.shell}}.j2"
dest: "/etc/ansible/facts.d/{{item.file}}.fact"
mode: 0755
register: facts_file
- set_fact: setup_necessary=true
when: facts_file.changed
The reason why I use the template module is that every facts script needs some kind of error handling which ensures that proper JSON gets created in case of an error. This is my current error handling, but it can still be improved. Some people also don't like set -eu, which is somewhat a matter of taste.
#! /bin/bash
{% include item.file + '.bash' %}

set -eu
_stderr=$(mktemp)
trap 'rm -f "$_stderr"' EXIT
if _stdout=$(main 2>$_stderr); then
    if [ "$_stdout" ]; then
        echo "$_stdout"
    else
        echo null
    fi
else
    jq -Rsc "{\"ERROR\":{\"failed\":true,\"exit\":$?,\"msg\":.}}" $_stderr
fi
The template does nothing more than include the file passed by the loop via the implicit item variable. The wrapper expects the included file to define a main function. This is my main function for non-global zones:
main ()
{
    zoneadm list -i |
    grep -v global |
    jq -Rc . |
    jq -sc .
}
And this one collects the Solaris eeprom data:
main ()
{
    eeprom |
    sed 's/^\([^=]*\)=\(.*\)$/{"\1":"\2"}/' |
    sed 's/^\(.*\): data not available.$/{"\1":null}/' |
    sed 's/:"false"}$/:false}/g' |
    sed 's/:"true"}$/:true}/g' |
    sed 's/:"\([0-9][0-9]*\)"}$/:\1}/' |
    sed '/^{"boot-device"/{s/":"/":["/;s/ /","/g;s/"}$/"]}/;}' |
    jq -sc add
}
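Once the scripts are deployed and setup has run, the gathered values are available under ansible_local, keyed by the .fact file name - for example (a hypothetical check, assuming the nonglobal_zones.fact file deployed above):

- debug:
    var: ansible_local.nonglobal_zones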
My bottom line is that it is possible to extend Ansible's fact gathering, but it is far from obvious and a bit painful, because it makes any ad hoc usage of the setup module impossible. Instead of requiring the user to implement the above stuff, Ansible should move all of this into the setup module (Ansible limitation 3).
I have an Ansible playbook for setting up an Ubuntu web development box. If my Ansible postgresql_db play finds that the database has not been created yet and creates it, then I would like to run a subsequent shell command to do a bit of extra database setup. I was expecting this to work (here the shell command is just a dummy command, written so that it is easy to tell if it ran):
- name: Create database
  become: yes
  become_user: postgres
  register: database_creation
  postgresql_db: name=mydatabase

- name: Example conditional command
  when: database_creation.uchanged
  shell: date > /tmp/tmp
I was expecting uchanged to be available as an attribute in the when: expression because if I echo the database_creation value I get (in the case where the database is already created):
{uchanged: False, udb: mydatabase}
But in fact the when: clause shown above explodes with the error:
==> default: fatal: [localhost]: FAILED! => {"failed": true, "msg":
"ERROR! The conditional check 'database_creation.uchanged' failed.
The error was: ERROR! error while evaluating conditional
(database_creation.uchanged): ERROR! 'dict object' has no attribute
'uchanged'\n\nThe error appears to have been in
'/vagrant/develop/playbook.yml': line 82, column 5, but may\nbe
elsewhere in the file depending on the exact syntax problem.\n\nThe
offending line appears to be:\n\n\n - name: test\n ^ here\n"}
What am I doing wrong? What is the correct syntax for accessing the uchanged attribute of database_creation so that I can make the subsequent play contingent upon it? Thanks for any pointers!
Instead of unchanged you need to use not changed.
when: not database_creation | changed
In general, you are better off using the notation variable | changed instead of variable.changed. Using .changed is not wrong, but it is better to use the available filters. The filter also takes care of checking possibly nested results (generated by loops).
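Applied to the example above (a sketch; the u in the echoed output is just Python's unicode string prefix, the attribute itself is called changed), the task that should run only when the database was just created would be:

- name: Example conditional command
  when: database_creation | changed
  shell: date > /tmp/tmp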