Forced to create file beforehand ansible conditional include - ansible

# "Run application specific configuration scripts"
- include: "app_cfg/{{ app_name }}.yml"
when: "{{ app_conf[app_name].app_cfg }}"
ignore_errors: no
tags:
- conf
I thought I would be able to conditionally include application-specific task files simply by setting one variable to a true/false value, like so:
app_conf:
  my_app_1:
    app_cfg: no
  my_app_2:
    app_cfg: yes
Unfortunately, Ansible forces me to create the file beforehand:
ERROR: file could not read: <...>/tasks/app_cfg/app_config.yml
Is there a way I can avoid creating a bunch of empty files?
# ansible --version
ansible 1.9.2

include with when is not conditional in the common sense.
It actually includes every task inside the include file and appends the given when statement to each included task.
So it expects the include file to exist.
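For illustration, here is roughly how such an include expands (a sketch; the two tasks inside the include file are invented for this example):

# Suppose app_cfg/my_app_1.yml contains:
- name: task A
  debug: msg="A"

- name: task B
  debug: msg="B"

# Then the include above behaves as if you had written:
- name: task A
  debug: msg="A"
  when: "{{ app_conf[app_name].app_cfg }}"

- name: task B
  debug: msg="B"
  when: "{{ app_conf[app_name].app_cfg }}"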
You can try to handle this using with_first_found and skip: true.
Something like this:
# warning, code not tested
- include: "app_cfg/{{ app_name }}.yml"
  with_first_found:
    - files:
        - "{{ app_conf[app_name].app_cfg | ternary('app_cfg/'+app_name+'.yml', 'unexisting.file') }}"
      skip: true
  tags: conf
It is supposed to supply a valid config file name if app_cfg is true, and unexisting.file (which will be skipped) otherwise.
See this answer about skip option.
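For context, a self-contained sketch of the whole pattern (untested, using the 1.9-era syntax from above; the play wrapper and the app_name value are assumptions added for the example). with_first_found sets item to the first listed file that exists, so the include can simply reference {{ item }}:

- hosts: all
  vars:
    app_name: my_app_1
    app_conf:
      my_app_1:
        app_cfg: no
      my_app_2:
        app_cfg: yes
  tasks:
    # For my_app_1 the ternary yields 'unexisting.file', nothing is found
    # and the task is skipped; for my_app_2 it yields app_cfg/my_app_2.yml,
    # which is included if it exists.
    - include: "{{ item }}"
      with_first_found:
        - files:
            - "{{ app_conf[app_name].app_cfg | ternary('app_cfg/' + app_name + '.yml', 'unexisting.file') }}"
          skip: true
      tags: conf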

Ansible - Set Playbook level Environment variable, but after defining some tasks

I want to set a playbook-level environment variable, but only after executing a couple of tasks. I have found that I can define a playbook-level environment variable before the definition of any tasks, or define task-level environment variables. But I haven't found how to set up an environment variable that can be used by all tasks following a given task.
- name: server properties
  hosts: kafka_broker
  vars:
    ansible_ssh_extra_args: "-o StrictHostKeyChecking=no"
    ansible_host_key_checking: false
    date: "{{ lookup('pipe', 'date +%Y%m%d-%H%M%S') }}"
    copy_to_dest: "/export/home/kafusr/kafka/secrets"
    server_props_loc: "/etc/kafka"
    secrets_props_loc: "{{ server_props_loc }}/secrets"
  environment:
    CONFLUENT_SECURITY_MASTER_KEY: "{{ extract_key2 }}"
  tasks:
    - name: Create a directory if it does not exist
      file:
        path: "{{ copy_to_dest }}"
        state: directory
        mode: '0755'

    - name: Find files from "{{ server_props_loc }}"
      find:
        paths: /etc/kafka/
        patterns: "server.properties*"
        # ... the rest of the task
      register: etc_kafka_server_props

    - name: Find files from "{{ secrets_props_loc }}"
      find:
        paths: /etc/kafka/secrets
        patterns: "*"
        # ... the rest of the task
      register: etc_kafka_secrets_props

    - name: Copy the files
      copy:
        src: "{{ item.path }}"
        dest: "{{ copy_to_dest }}"
        remote_src: yes
      loop: "{{ etc_kafka_server_props.files + etc_kafka_secrets_props.files }}"

    - name: set masterkey content value
      set_fact:
        contents: "{{ lookup('file', '/export/home/kafusr/kafka/secrets/masterkey.txt') }}"
        extract_key2: "{{ contents.split('\n').2.split('|').2|trim }}"
I want to set CONFLUENT_SECURITY_MASTER_KEY after the set_fact task.
Is it possible to set a playbook-level environment variable after defining some tasks?
Thank you
UPDATE
Initially, when I was executing the playbook as originally defined, I was getting the error
fatal: [kafkaserver1]: FAILED! => {"msg": "The field 'environment' has an invalid value,
which includes an undefined variable. The error was: 'extract_key2' is undefined"}
which was expected, as the variable extract_key2 was not yet set before the files were copied to the desired directory.
After @Zeitounator's suggestion, when I added a default to the environment variable's definition,
CONFLUENT_SECURITY_MASTER_KEY: "{{ extract_key2 | default('') }}"
I now get a different error:
TASK [set masterkey content value] ******************** fatal: [kafkaserver1]: FAILED! =>
{"msg": "The task includes an option with an undefined variable. The error was: 'contents' is undefined\n\n
The error appears to be in '/export/home/kafuser/tmp/so-71538207-question.yml': line 43, column 7, but may\n
be elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n
- name: set masterkey content value\n ^ here\n"}
I get this on all 3 brokers in the console, and I checked that the file exists.
I did a cat on that file, copying the path from the error to make sure there was no typo, and the contents of the file are displayed on the console.
Update 2
I am trying to figure out how to use slurp to get the info, with the same approach as @Zeitounator's example using lookup.
This is what I am trying. The current definition is, of course, erroneous; I just wanted to show what I am trying to do. But can it be done with slurp, and am I on the right path?
environment:
  CONFLUENT_SECURITY_MASTER_KEY: >-
    {{
      (
        ((slurp: src: /export/home/z8tpush/kafka/secrets/masterkey.txt)['content'] | b64decode ).split('\n').2.split('|').2|trim
      )
    }}
@Zeitounator, could you direct me to an example where a slurp or fetch module is used to set up an environment variable whose value gets updated after the tasks that create the file have executed, similar to what you have shown with the lookup filter? I would really appreciate it.
Note:
Ultimately, I want to use Ansible to create a new Kafka user using Confluent's CLI commands (using the shell or command module), verify it in my directory, and once satisfied, encrypt the security.properties file using the master key and copy it to the appropriate location where Confluent is installed.
As already mentioned, you can
set environment variables globally with Ansible Configuration Settings, or
set the remote environment in a task.
Regarding your question
I haven't found how can I set-up an environment variable that can be used by all tasks following a task.
you can also set the environment at block level, for a logical group of tasks.
Setting the remote environment: "When you set a value with environment: at the play or block level, it is available only to tasks within the play or block that are executed by the same user."
This means you would need to define a block for the next task(s):
- name: Block of next task(s)
  block:
    - name: Next task
      ...
  environment:
    CONFLUENT_SECURITY_MASTER_KEY: "{{ extract_key2 }}"
Regarding your question
Is it possible to set playbook level environment variable, but after defining some tasks?
No, not at that level in that run, as the playbook is already running.
Another option might be to split the tasks in question out into another role, playbook, or task file and include_* it.
You cannot set_fact a var depending on another var in the same block. Moreover, there is absolutely no need to set_fact here, as long as your relevant tasks can live with an empty environment var until it is fully defined. The following environment declaration (untested) should work and return the key for every task running after your file exists.
environment:
  CONFLUENT_SECURITY_MASTER_KEY: >-
    {{
      (
        (
          lookup('file', '/export/home/kafusr/kafka/secrets/masterkey.txt', errors='ignore')
          | default('')
        ).split('\n').2
        | default('')
      ).split('|').2
      | default('')
      | trim
    }}
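Regarding the slurp question from Update 2: slurp is a module, not a lookup, so it cannot be evaluated lazily inside the environment template; it has to run as a task first. A minimal sketch (untested; the file path is taken from the question, task names are made up) would register the slurped file and then expose the decoded key to subsequent tasks via a block-level environment:

- name: Read master key from the remote node
  slurp:
    src: /export/home/kafusr/kafka/secrets/masterkey.txt
  register: masterkey_file

- name: Tasks that need the key
  block:
    - name: Placeholder task that sees the environment variable
      command: echo key-is-available
  environment:
    CONFLUENT_SECURITY_MASTER_KEY: "{{ (masterkey_file.content | b64decode).split('\n').2.split('|').2 | trim }}"

Unlike the lookup above, which reads a file on the Control Node, slurp reads the file on the remote node, which may be what is wanted here.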

implicit ansible file exists remote

I want to execute tasks only if a file exists on the target.
I can do that, of course, with
- ansible.builtin.stat:
    path: <remote_file>
  register: remote_file

- name: ...
  something:
  when: remote_file.stat.exists
but is there something smaller?
You can, for example, check files on the control host with
when: "'/tmp/file.txt' is file"
But that only checks files on the control host.
Is there also something like that available for files on remotes?
Added, as according to the comment section I need to be a bit more specific:
I want to run tasks on the target, and on the next run they shouldn't be executed again. So I thought to put a file somewhere on the target, and on the next run, when this file exists, the tasks should not be executed anymore.
- name: do big stuff
  bigstuff:
  when: not <file> exists
They should be executed only if the file does not exist.
I am not aware of a way to let the Control Node know during templating or compile time whether a specific file exists on the Remote Node(s).
I understand your question as: you
... want to execute tasks on the target ...
only if a certain condition is met on the target.
This sounds impossible without taking a look at the target first, or doing a lookup somewhere else, for example in a Configuration Management Database (CMDB).
... is there something smaller?
That will depend on your task, what you are trying to achieve, and how you would like to declare the configuration state.
For example, if you would like to declare the absence of a file, there is no need to check whether it exists first. Just make sure it gets deleted.
---
- hosts: test
  become: false
  gather_facts: false
  tasks:
    - name: Remove file
      shell:
        cmd: "rm /home/{{ ansible_user }}/test.txt"
        warn: false
      register: result
      changed_when: result.rc != 1
      failed_when: result.rc != 0 and result.rc != 1

    - name: Show result
      debug:
        msg: "{{ result }}"
As you can see, it is necessary to use Defining failure to control how the task behaves. Another example, this time showing the file content:
---
- hosts: test
  become: false
  gather_facts: false
  tasks:
    - name: Gather file content
      shell:
        cmd: "cat /home/{{ ansible_user }}/test.txt"
      register: result
      changed_when: false
      failed_when: result.rc != 0 and result.rc != 1

    - name: Show result
      debug:
        msg: "{{ result.stdout }}"
Please take note that for the given example tasks there are already specific Ansible modules available which do the job better.
According to the additional information given in your comments, I understand that you would like to install or configure "something" and leave that fact behind on the remote node(s). You would like the task to run again on the remote node(s) only if it wasn't successfully performed before.
To do so, you may have a look into Ansible facts and Discovering variables: facts and magic variables, especially Adding custom facts.
What your installation or configuration tasks could do is leave a custom .fact file with meaningful keys and values behind after they have run successfully.
During the next playbook execution, if gather_facts: true, the information will be gathered by the setup module and you can then let tasks run based on conditions in ansible_local.
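A minimal sketch of that pattern (untested; the fact file name big_stuff.fact, its key done, and the command are invented for illustration):

- hosts: test
  gather_facts: true    # required so that ansible_local gets populated
  tasks:
    - name: Do big stuff only if the marker fact is absent
      command: /usr/local/bin/do-big-stuff    # hypothetical command
      when: not (ansible_local.big_stuff.done | default(false) | bool)

    - name: Ensure the custom facts directory exists
      file:
        path: /etc/ansible/facts.d
        state: directory
        mode: '0755'

    - name: Leave a marker fact behind (JSON format)
      copy:
        dest: /etc/ansible/facts.d/big_stuff.fact
        content: '{ "done": true }'
        mode: '0644'

On the first run the marker is absent, so the command runs and the fact file is written; on subsequent runs fact gathering populates ansible_local.big_stuff.done and the expensive task is skipped.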
Further Q&A
How Ansible gather_facts and sets variables
An introduction to Ansible facts
Loading custom facts
The host facts can thereby be considered a kind of distributed CMDB. By using the facts cache
fact_caching = yaml
fact_caching_connection = /tmp/ansible/facts_cache
fact_caching_timeout = 129600
you can have the information available on the Control Node as well. It might even be possible to organize it in host_vars and group_vars.

Ansible looping when a variable has no value

I have created a variable to disable sites in nginx within my main set of tasks. Since this is a one-time task, once domain1.com is disabled I can comment the entire line out. When I do, I receive an error:
{"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'None' has no attribute 'domain'"}
What can I do to modify my task to only run when there are domains listed within the variable?
nginx_sites_disabled:
  #- domain: "domain1.com"

- name: Disable sites
  file:
    path: /etc/nginx/sites-enabled/{{ item.domain }}
    state: absent
  with_items: "{{ nginx_sites_disabled }}"
  notify:
    - Reload nginx
Apply the default filter within your task:
- name: Disable sites
  file:
    path: /etc/nginx/sites-enabled/{{ item.domain }}
    state: absent
  with_items: "{{ nginx_sites_disabled | default([]) }}"
  notify:
    - Reload nginx
Related answer: Ansible: apply when to complete loop
You don't need to comment out the lines when the work is done.
Like most Ansible modules, the file module is idempotent: if the desired state is absent and the file isn't there, it won't do anything.
Just leave the nginx_sites_disabled list unchanged.
By the way, if you still need nginx_sites_disabled to be an empty list, you need to write this:
---
nginx_sites_disabled: []
otherwise, nginx_sites_disabled will be equal to None. That's why you have this error.
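To see the difference directly, here is a small debug sketch (assumed, untested) using the type_debug filter; the variable names are invented for the example:

- hosts: localhost
  gather_facts: false
  vars:
    commented_out:          # only a comment below -> parses as None
      #- domain: "domain1.com"
    empty_list: []
  tasks:
    - debug:
        msg: "commented_out is {{ commented_out | type_debug }}, empty_list is {{ empty_list | type_debug }}"

The first variable prints NoneType, which is exactly what with_items chokes on; the second prints list and simply loops zero times.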

Ansible use task return variable in variable templates

I am trying to craft a list of environment variables to use in tasks that may have a slightly different path on each host due to version differences.
For example, /some/common/path/v_123/rest/of/path
I created a list of these variables in a variables.yml file that gets imported via roles.
roles/somerole/variables/main.yml contains the following:
somename:
  somevar: 'coolvar'
  env:
    SOME_LIB_PATH: /some/common/path/{{ unique_part.stdout }}/rest/of/path
I then have a task that runs something like this
- name: Get unique path part
  shell: 'ls /some/common/path/'
  register: unique_part
  tags: workflow

- name: Perform some actions that need some paths
  shell: 'binary argument argument'
  environment: somename.env
But I get some Ansible errors about variables not being defined.
Alternatively, I tried to predefine unique_part.stdout in hopes of register overwriting the predefined variable, but then I got other Ansible errors (failure to template).
Is there another way to craft these variables based on command returns?
You can also use facts:
http://docs.ansible.com/set_fact_module.html
# Prepare unique variables
- hosts: myservers
  tasks:
    - name: Get unique path part
      shell: 'ls /some/common/path/'
      register: unique_part
      tags: workflow

    - name: Add as fact for each host
      set_fact:
        library_path: "{{ unique_part.stdout }}"

# launch roles that use those unique variables
- hosts: myservers
  roles:
    - somerole
This way you can dynamically add variables to your hosts before using them.
The vars file gets evaluated when it is read by Ansible. Your only chance is to include a placeholder which you then have to replace yourself later, like this:
somename:
  somevar: 'coolvar'
  env:
    SOME_LIB_PATH: '/some/common/path/[[ unique_part.stdout ]]/rest/of/path'
And then later in your playbook you can replace that placeholder:
- name: Perform some actions that need some paths
  shell: 'binary argument argument'
  environment: '{{ somename.env | replace("[[ unique_part.stdout ]]", unique_part.stdout) }}'

Ansible include task only if file exists

I'm trying to include a file only if it exists. This allows for custom "tasks/roles" between existing "tasks/roles" if needed by the user of my role. I found this:
- include: ...
  when: condition
But the Ansible docs state that:
"All the tasks get evaluated, but the conditional is applied to each and every task" - http://docs.ansible.com/playbooks_conditionals.html#applying-when-to-roles-and-includes
So
- stat: path=/home/user/optional/file.yml
  register: optional_file

- include: /home/user/optional/file.yml
  when: optional_file.stat.exists
will fail if the file being included doesn't exist. I guess there might be another mechanism for allowing a user to add tasks to an existing recipe. I can't let the user add a role after mine, because they wouldn't have control of the order: their role would be executed after mine.
The with_first_found lookup can accomplish this without a stat or local_action. It goes through a list of local files and executes the task with item set to the path of the first file that exists.
Including skip: true in the with_first_found options will prevent it from failing if the file does not exist.
Example:
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - include: "{{ item }}"
      with_first_found:
        - files:
            - /home/user/optional/file.yml
          skip: true
Thanks all for your help! I'm answering my own question after finally trying all responses and my own question's code in today's Ansible: ansible 2.0.1.0.
My original code seems to work now, except that the optional file I was looking for was on my local machine, so I had to run stat through local_action and set become: no for that particular task, so Ansible wouldn't attempt to sudo on my local machine and fail with: "sudo: a password is required\n"
- local_action: stat path=/home/user/optional/file.yml
  register: optional_file
  become: no

- include: /home/user/optional/file.yml
  when: optional_file.stat.exists
In Ansible 2.5 and above, it can be done using tests like this:
- include: /home/user/optional/file.yml
  when: "'/home/user/optional/file.yml' is file"
More details: https://docs.ansible.com/ansible/latest/user_guide/playbooks_tests.html#testing-paths
I'm using something similar, but for the file module. What did the trick for me was to check for the variable definition; try something like:
when: optional_file.stat.exists is defined and optional_file.stat.exists
The task will run only when the variable exists.
If I am not wrong, you want to continue the playbook even when the when statement is false?
If so, please add this line after when:
ignore_errors: True
So your tasks will look like this:
- stat: path=/home/user/optional/file.yml
  register: optional_file

- include: /home/user/optional/file.yml
  when: optional_file.stat.exists
  ignore_errors: True
Please let me know if I understood your question correctly, or if I can help further. Thanks
I could spend time here bashing Ansible's error handling provisions, but in short, you are right: you can't use the stat module for this purpose, for the stated reasons.
The simplest solution for most Ansible problems is to do it outside Ansible. E.g.
- shell: test -f /home/user/optional/file.yml # or use -r if you're too particular.
  register: optional_file
  failed_when: False

- include: /home/user/optional/file.yml
  when: optional_file.rc == 0

- debug: msg="/home/user/optional/file.yml did not exist and was not included"
  when: optional_file.rc != 0
* failed_when is added to avoid the host getting excluded from further tasks when the file doesn't exist.
Using ansible-2.1.0, I'm able to use snippets like this in my playbook:
- hosts: all
  gather_facts: false
  tasks:
    - name: Determine if local playbook exists
      local_action: stat path=local.yml
      register: st

- include: local.yml
  when: st.stat.exists
I get no errors/failures when local.yml does not exist, and when it does exist it is executed as a playbook (meaning it starts with the hosts: line, etc.).
You can do the same at the task level instead with similar code.
Using stat appears to work correctly.
There's also the option to use a Jinja2 filter for that:
- set_fact: optional_file="/home/user/optional/file.yml"

- include: ....
  when: optional_file|exists
The best option I have come up with so far is this:
- include: "{{ hook_variable | default(lookup('pipe', 'pwd') ~ '/hooks/empty.yml') }}"
It's not exactly an "if-exists", but it gives users of your role the same result. Create a few variables within your role and a default empty file. The Jinja filters "default" and "lookup" take care of falling back on the empty file in case the variable was not set.
For convenience, a user could use the {{ playbook_dir }} variable for setting the paths:
hook_variable: "{{ playbook_dir }}/hooks/tasks-file.yml"
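Put together, the role side of this pattern might look like the following sketch (untested; the role name, the placeholder task, and the hook file location are assumptions). The role ships hooks/empty.yml containing nothing but an empty task list, so the include is a no-op whenever the user has not set hook_variable:

# roles/somerole/tasks/main.yml
- include: "{{ hook_variable | default(lookup('pipe', 'pwd') ~ '/hooks/empty.yml') }}"

- name: the role's own work
  debug: msg="doing the actual work"

# hooks/empty.yml contains only an empty task list:
# []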
