I want to execute tasks only if a file exists on the target.
I can do that, of course, with
- ansible.builtin.stat:
    path: <remote_file>
  register: remote_file

- name: ...
  something:
  when: remote_file.stat.exists
but is there something smaller?
You can, for example, check files on the host with
when: '/tmp/file.txt' is file
But that only checks files on the control host.
Is there also something like that available for files on remotes?
Edit: according to the comment section, I need to be a bit more specific.
I want to run tasks on the target, and on the next run they shouldn't be executed again. So I thought to put a file somewhere on the target; on the next run, when this file exists, the tasks should not be executed anymore.
- name: do big stuff
  bigstuff:
  when: not <file> exists

They should be executed if the file does not exist.
I am not aware of a way to let the Control Node know at templating or compile time whether a specific file exists on the Remote Node(s).
I understand your question as: you
... want to execute tasks on the target ...
only if a certain condition is met on the target.
This seems impossible without taking a look at the target first, or doing a lookup somewhere else, for example in a Configuration Management Database (CMDB).
... is there something smaller?
It will depend on your task, what you are trying to achieve, and how you like to declare a configuration state.
For example, if you want to declare the absence of a file, there is no need to check whether it exists first. Just make sure it gets deleted.
---
- hosts: test
  become: false
  gather_facts: false

  tasks:

    - name: Remove file
      shell:
        cmd: "rm /home/{{ ansible_user }}/test.txt"
        warn: false
      register: result
      changed_when: result.rc != 1
      failed_when: result.rc != 0 and result.rc != 1

    - name: Show result
      debug:
        msg: "{{ result }}"
As you can see, it is necessary to define failure and to control how the task behaves. Another example, showing a file's content:
---
- hosts: test
  become: false
  gather_facts: false

  tasks:

    - name: Gather file content
      shell:
        cmd: "cat /home/{{ ansible_user }}/test.txt"
      register: result
      changed_when: false
      failed_when: result.rc != 0 and result.rc != 1

    - name: Show result
      debug:
        msg: "{{ result.stdout }}"
Please note that for the given example tasks there are already dedicated Ansible modules available which do the job better.
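For instance, the same two jobs could be declared with the dedicated modules ansible.builtin.file and ansible.builtin.slurp (a sketch only, using the same paths as above):

```yaml
- name: Remove file (declares absence, idempotent)
  ansible.builtin.file:
    path: "/home/{{ ansible_user }}/test.txt"
    state: absent

- name: Gather file content   # fails if the file does not exist
  ansible.builtin.slurp:
    src: "/home/{{ ansible_user }}/test.txt"
  register: result

- name: Show result
  ansible.builtin.debug:
    msg: "{{ result.content | b64decode }}"
```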
According to the additional information in your comments, I understand that you want to install or configure "something" and leave that fact behind on the remote node(s). On the next run, the tasks should only be executed if they weren't performed successfully before.
To do so, you may have a look into Ansible facts and Discovering variables: facts and magic variables, especially into Adding custom facts.
What your installation or configuration tasks could do is leave a custom .fact file with meaningful keys and values after they have run successfully.
During the next playbook execution, and if gather_facts: true, the information is gathered by the setup module and you can then let tasks run based on conditions in ansible_local.
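As a sketch under these assumptions (the fact file name big_stuff.fact and its keys are made up for illustration), the flow could look like:

```yaml
# Run the expensive task only if no previous run recorded success.
# An ini-style fact comes back as ansible_local.big_stuff.state.done.
- name: do big stuff only if not recorded as done
  ansible.builtin.shell: echo "doing big stuff"
  when: not (ansible_local.big_stuff.state.done | default(false) | bool)

# Leave a custom fact behind for the next run.
- name: persist success marker as a local fact
  ansible.builtin.copy:
    dest: /etc/ansible/facts.d/big_stuff.fact
    content: |
      [state]
      done=true
  become: true
```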
Further Q&A
How Ansible gather_facts and sets variables
An introduction to Ansible facts
Loading custom facts
The host facts can thereby be considered a kind of distributed CMDB. By using a facts cache
fact_caching = yaml
fact_caching_connection = /tmp/ansible/facts_cache
fact_caching_timeout = 129600
you can have the information also available on the Control Node. It might even be possible to organize it in host_vars and group_vars.
Related
Using Ansible 2.9, how do you end a playbook while also using serial?
When I run the following code for a rolling upgrade, it executes the prompt for each target system. When using serial, run_once only seems to work for the current batch. I only want it to execute once.
- hosts: all
  become: yes
  serial: 1

  handlers:
    - include: handlers/main.yml

  pre_tasks:
    - name: Populate service facts
      service_facts:

    - name: Prompt
      pause:
        prompt: "NOTE: You are running a dangerous playbook. Do you want to continue? (yes/no)"
      register: confirm_input
      run_once: True

    - name: end play if user didn't enter yes
      meta: end_play
      when: confirm_input.user_input | default("yes") == "no"
      run_once: True

  tasks:
    # (other stuff)
run_once executes once per serial batch in the play. That means if you choose serial: 1, you will be asked to confirm as many times as there are targets in the play.
Check Ansible docs: https://docs.ansible.com/ansible/latest/user_guide/playbooks_strategies.html#running-on-a-single-machine-with-run-once
When used together with serial, tasks marked as run_once will be run on one host in each serial batch. If the task must run only once regardless of serial mode, use the when: inventory_hostname == ansible_play_hosts_all[0] construct.
To avoid the default Ansible behaviour and do what you want, follow the docs and use when: inventory_hostname == ansible_play_hosts_all[0] on the prompt task.
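Applied to the playbook above, the two tasks could look like this (a sketch; since the registered variable then only exists on the first host, the end-play check is guarded by the same condition):

```yaml
pre_tasks:
  - name: Prompt
    pause:
      prompt: "NOTE: You are running a dangerous playbook. Do you want to continue? (yes/no)"
    register: confirm_input
    when: inventory_hostname == ansible_play_hosts_all[0]

  - name: end play if user didn't enter yes
    meta: end_play
    when:
      - inventory_hostname == ansible_play_hosts_all[0]
      - confirm_input.user_input | default("yes") == "no"
```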
I have a playbook in which one of the tasks copies template files to a specific server, named "grover", using delegate_to.
The inventory list with servers of consequence here is: bigbird, grover, oscar
These template files MUST have names matching each server's hostname, and the delegate server grover must also have its own instance of said file. This template copy operation should only take place if the file does not already exist. /tmp/grover pre-exists on server grover, and it needs to remain unaltered by the playbook run.
Eventually, in addition to /tmp/grover, after the run /tmp/bigbird and /tmp/oscar should also exist on server grover.
The problem I'm having is that when the playbook runs without any conditionals, it does function, but then it also clobbers/overwrites /tmp/grover which is unacceptable.
BUT, if I add tasks in previous plays to pre-check for these files, and then a conditional on the template task to skip grover if the file already exists, it not only skips grover, it also skips every other server whose copy would land on grover for that task. If I try to set it to run on every server BUT grover, it will still fail, because the delegate server is grover and that would be skipped.
Here are the actual example code snippets; the playbook runs on all 3 servers due to the host pattern:
- hosts: bigbird:grover:oscar
  tasks:

    - name: File clobber check.
      stat:
        path: "/tmp/{{ ansible_hostname }}"
      register: clobber_check
      delegate_to: "{{ item }}"
      with_items:
        - "{{ grover }}"

    - name: Copy templates to grover.
      template:
        backup: yes
        src: /opt/template
        dest: "/tmp/{{ ansible_hostname }}"
        group: root
        mode: "u=rw,g=rw"
        owner: root
      delegate_to: "{{ item }}"
      with_items:
        - "{{ grover }}"
      when: ( not clobber_check.stat.exists ) and ( clobber_check is defined )
If I run that, and /tmp/grover exists on grover, then it will simply skip the entire copy play because the conditional failed on grover. Thus the other servers will never have their /tmp/bigbird and /tmp/oscar templates copied to grover due to this problem.
Lastly, I'd like to avoid ghetto solutions like saving a backup of grover's original config file, allowing the clobber, and then copying the saved file back as the last task.
I must be missing something here, I can't have been the only person to run into this scenario. Anyone have any idea on how to code for this?
The answer to this question is to remove the unnecessary with_items from the code. While with_items, used as it is here, does allow you to delegate_to a host pattern group, it also means that variables can't be defined/assigned properly for the hosts within that group.
Defining a single host entity for delegate_to fixes this issue and then the tasks execute as expected on all hosts defined.
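In other words, a sketch of the fixed tasks from the question, with a literal delegate host instead of with_items:

```yaml
- name: File clobber check.
  stat:
    path: "/tmp/{{ ansible_hostname }}"
  register: clobber_check
  delegate_to: grover

# Only copy when the per-host file is not already on grover,
# so the pre-existing /tmp/grover is left untouched.
- name: Copy templates to grover.
  template:
    backup: yes
    src: /opt/template
    dest: "/tmp/{{ ansible_hostname }}"
    owner: root
    group: root
    mode: "u=rw,g=rw"
  delegate_to: grover
  when: clobber_check is defined and not clobber_check.stat.exists
```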
Konstantin Suvorov should get the credit for this, as he was the original answer-er.
Additionally, I am surprised that Ansible doesn't allow for easy delegation to a group of hosts. There must be reasons they didn't allow it.
As part of an Ansible playbook, I want to verify whether my servers are in sync.
To do so, I execute a script on each of my servers using the shell module and I register the result into a variable, say result_value.
The tricky part is to verify whether result_value is the same on all the servers. The expected value is not known beforehand.
Is there an idiomatic way to achieve this in Ansible?
One possible way:
---
- hosts: all
  tasks:
    - shell: /usr/bin/my_check.sh
      register: result_value

    - assert:
        that: hostvars[item]['result_value'].stdout == result_value.stdout
      with_items: "{{ play_hosts }}"
      run_once: true
I want to speed up ansible playbook execution by avoiding calling some sections that do not have to be called more often than once a day or so.
I know that facts are supposed to allow us to implement this, but it seems almost impossible to find a basic example of: setting a fact, reading it and doing something if it has a specific value, and setting a default value for a fact.
- name: "do system update"
  shell: echo "did it!"

- set_fact:
    os_is_updated: true
Is my impression right that facts are nothing more than variables that can be saved, loaded, and cached between executions?
Let's assume that ansible.cfg is already configured to enable fact caching for two hours.
[defaults]
gathering = smart
fact_caching = jsonfile
fact_caching_timeout = 7200
fact_caching_connection = /tmp/facts_cache
By its nature as a workstation CLI tool, Ansible doesn't have any built-in persistence mechanism (pretty much by design). There are some fact caching plugins that will use external stores (e.g., Redis, jsonfile), but I'm not generally a fan.
If you want to persist stuff like that between runs yourself on the target machine(s), you can store them as local facts in /etc/ansible/facts.d (or an arbitrary location if you call setup yourself), and they'll come back from gather_facts under the ansible_local dictionary var. Assuming you're running on a *nix-flavored platform, something like:
- hosts: myhosts
  tasks:
    - name: do update no more than every 24h
      shell: echo "doing updates..."
      when: (lookup('pipe', 'date +%s') | int) - (ansible_local.last_update_run | default(0) | int) > 86400
      register: update_result

    - name: ensure /etc/ansible/facts.d exists
      become: yes
      file:
        path: /etc/ansible/facts.d
        state: directory

    - name: persist last_update_run
      become: yes
      copy:
        dest: /etc/ansible/facts.d/last_update_run.fact
        content: "{{ lookup('pipe', 'date +%s') }}"
      when: not update_result | skipped
Obviously the facts.d dir existence stuff is setup boilerplate, but I wanted to show you a fully-working sample.
I'm trying to include a file only if it exists. This allows for custom "tasks/roles" between existing "tasks/roles" if needed by the user of my role. I found this:
- include: ...
when: condition
But the Ansible docs state that:
"All the tasks get evaluated, but the conditional is applied to each and every task" - http://docs.ansible.com/playbooks_conditionals.html#applying-when-to-roles-and-includes
So
- stat: path=/home/user/optional/file.yml
  register: optional_file

- include: /home/user/optional/file.yml
  when: optional_file.stat.exists
will fail if the file being included doesn't exist. I guess there might be another mechanism for allowing a user to add tasks to an existing recipe. I can't just let the user add a role after mine, because then they wouldn't have control over the order: their role would always be executed after mine.
The with_first_found lookup can accomplish this without a stat or local_action. It goes through a list of local files and executes the task with item set to the path of the first file that exists.
Including skip: true in the with_first_found options will prevent it from failing if the file does not exist.
Example:
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - include: "{{ item }}"
      with_first_found:
        - files:
            - /home/user/optional/file.yml
          skip: true
Thanks all for your help! I'm answering my own question after finally trying all the responses, and my own question's code, in today's Ansible: ansible 2.0.1.0.
My original code seems to work now, except the optional file I was looking for was on my local machine, so I had to run stat through local_action and set become: no for that particular task, so ansible wouldn't attempt to sudo on my local machine and error with: "sudo: a password is required\n"
- local_action: stat path=/home/user/optional/file.yml
  register: optional_file
  become: no

- include: /home/user/optional/file.yml
  when: optional_file.stat.exists
In Ansible 2.5 and above, it can be done using tests like this:
- include: /home/user/optional/file.yml
  when: "'/home/user/optional/file.yml' is file"
More details: https://docs.ansible.com/ansible/latest/user_guide/playbooks_tests.html#testing-paths
I use something similar, but for the file module. What did the trick for me was to check for the variable definition; try something like:
when: optional_file.stat.exists is defined and optional_file.stat.exists
The task will run only when the variable exists.
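In context, that could look like this (a sketch reusing the register name from the question):

```yaml
- stat:
    path: /home/user/optional/file.yml
  register: optional_file

# Guarded condition: safe even when the stat task was skipped
# and optional_file.stat is therefore undefined.
- include: /home/user/optional/file.yml
  when: optional_file.stat.exists is defined and optional_file.stat.exists
```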
If I am not wrong, you want to continue the playbook even when the when statement is false?
If so, please add this line after the when:
ignore_errors: True
So your tasks will look like this:
- stat: path=/home/user/optional/file.yml
  register: optional_file

- include: /home/user/optional/file.yml
  when: optional_file.stat.exists
  ignore_errors: True
Please let me know, if I understand your question correctly, or can help further. Thanks
I could spend time here bashing ansible's error handling provisions, but in short, you are right: you can't use the stat module for this purpose, for the stated reasons.
The simplest solution for most ansible problems is to do it outside ansible. E.g.
- shell: test -f /home/user/optional/file.yml  # or use -r if you're too particular.
  register: optional_file
  failed_when: False

- include: /home/user/optional/file.yml
  when: optional_file.rc == 0

- debug: msg="/home/user/optional/file.yml did not exist and was not included"
  when: optional_file.rc != 0
* failed_when added to avoid the host getting excluded from further tasks when the file doesn't exist.
Using ansible-2.1.0, I'm able to use snippets like this in my playbook:
- hosts: all
  gather_facts: false
  tasks:
    - name: Determine if local playbook exists
      local_action: stat path=local.yml
      register: st

- include: local.yml
  when: st.stat.exists
I get no errors/failures when local.yml does not exist, and the playbook is executed (as a playbook, meaning it starts with the hosts: line, etc.)
You can do the same at the task level instead with similar code.
Using stat appears to work correctly.
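A task-level variant might look like this (a sketch; tasks.yml is a made-up name for a file of tasks rather than a full playbook):

```yaml
- name: Determine if local task file exists
  local_action: stat path=tasks.yml
  register: st

- include: tasks.yml
  when: st.stat.exists
```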
There's also the option to use a Jinja2 filter for that:
- set_fact: optional_file="/home/user/optional/file.yml"

- include: ....
  when: optional_file|exists
The best option I have come up with so far is this:
- include: "{{ hook_variable | default(lookup('pipe', 'pwd') ~ '/hooks/empty.yml') }}"
It's not exactly an "if-exists", but it gives users of your role the same result. Create a few variables within your role and a default empty file. The Jinja filters "default" and "lookup" take care of falling back on the empty file in case the variable was not set.
For convenience, a user could use the {{ playbook_dir }} variable for setting the paths:
hook_variable: "{{ playbook_dir }}/hooks/tasks-file.yml"