I want my playbook to fail under two conditions.
If the SQL query's result is not returned by the database.
If the SQL query's result contains the string "775".
The playbook below works fine and does the job.
- name: "Play 1"
  hosts: localhost
  tasks:
    - name: "Search for Number"
      command: >
        mysql --user="{{ DBUSER }}" --password="{{ DBPASS }}" deployment
        --host=localhost --batch --skip-column-names
        --execute="SELECT num FROM deploy_tbl"
      register: command_result
      failed_when: command_result.stdout is search('775') or command_result.rc != 0
      when: Layer == 'APP'
I now want to print a custom message for each of the two conditions, so I added the lines below after the playbook code above.
- name: Make Sure the mandatory input values are passed
  fail:
    msg: "This REQ is already Deployed. Hence retry with a Unique Number."
  when: command_result.stdout is search('775')

- name: Make Sure the mandatory input values are passed
  fail:
    msg: "Database is not reachable."
  when: command_result.rc != 0
This gives me a syntax error when running the playbook.
I'm on the latest version of Ansible.
Can you please tell me how to update my playbook to print either of the custom messages based on the condition check?
You should probably post your entire playbook in a single code block, because there are no syntax problems in the bits you pasted.
Meanwhile, if you want to achieve your requirement, you first have to understand that, by default, your playbook will stop for a host as soon as an error is raised on a task. So in your case, the fail tasks will never be played, as they come after the failing task.
One way to work around this problem would be to use ignore_errors on your task.
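For instance, a minimal sketch of that approach, reusing your task and variable names (the failed_when is dropped so the task only fails on a non-zero exit code, and the two custom fail tasks run afterwards):

```yaml
- name: "Search for Number"
  command: >
    mysql --user="{{ DBUSER }}" --password="{{ DBPASS }}" deployment
    --host=localhost --batch --skip-column-names
    --execute="SELECT num FROM deploy_tbl"
  register: command_result
  ignore_errors: true   # continue to the fail tasks below even on error
  when: Layer == 'APP'

- name: Fail when the number is already deployed
  fail:
    msg: "This REQ is already Deployed. Hence retry with a Unique Number."
  when: command_result.stdout is search('775')

- name: Fail when the database is unreachable
  fail:
    msg: "Database is not reachable."
  when: command_result.rc != 0
```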
In most cases, I find that blocks and their error handling feature are better suited to handle such scenarios.
The example below uses a block. I faked the sql call with a playbook var (that you can modify to play around) and a debug task using the same when conditions as yours.
I also modified the way to call fail so that you only have to do it once with the ternary filter.
If you are not familiar with the >- notation in YAML, have a look at the definition of block scalars. I used it to limit line length and ease readability.
---
- hosts: localhost
  gather_facts: false

  vars:
    command_result:
      stdout: "775"
      rc: "0"

  tasks:
    - block:
        - name: Fake task to mimic sql query
          debug:
            var: command_result
          failed_when: command_result.stdout is search('775') or command_result.rc != 0

      rescue:
        - name: Make Sure the mandatory input values are passed
          fail:
            msg: >-
              {{
                (command_result.stdout is search('775')) |
                ternary(
                  "This REQ is already Deployed. Hence retry with a Unique Number.",
                  "Database is not reachable."
                )
              }}
which gives:
PLAY [localhost] *********************************************************************
TASK [Fake task to mimic sql query] **************************************************
fatal: [localhost]: FAILED! => {
    "command_result": {
        "rc": "0",
        "stdout": "775"
    },
    "failed_when_result": true
}
TASK [Make Sure the mandatory input values are passed] *******************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "This REQ is already Deployed. Hence retry with a Unique Number."}
PLAY RECAP ***************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=1 ignored=0
I am trying to use the Ansible command module to get information from my inventory, but I am expecting the servers to return a non-zero value as a result. The non-zero values are causing the hosts to report as failed.
I was expecting the following sequence to fail when a certain group of process names are present on any of the inventory.
- name: Make sure all expected processes are down
  tags: ensure_processes_down
  block:
    - name: Find list of unexpected processes
      command:
        argv:
          - /usr/bin/pgrep
          - --list-name
          - unexpected_processes_pattern_here
      register: pgrep_status

    - name: Ensure no processes were found
      # pgrep returns 0 if matches were found
      assert:
        that: pgrep_status.rc != 0
        msg: 'Cannot run while these processes are up: {{ pgrep_status.stdout }}'
I am new to Ansible and suspect that I am not using the command module correctly.
Is there another way to run a command where the expected return code is non-zero?
If the command in a command task fails, then the entire task fails, which means no more tasks will be executed on that particular host. Your second task in which you check pgrep_status will never run. You have a couple options here.
You can set ignore_errors: true, which will ignore the error condition and move on to the next task. E.g.:
- hosts: localhost
  gather_facts: false
  tasks:
    - command:
        argv:
          - false
      register: status
      ignore_errors: true

    - assert:
        that: status.rc != 0
Which will produce this result:
PLAY [localhost] ***************************************************************
TASK [command] *****************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "cmd": "False", "msg": "[Errno 2] No such file or directory: b'False'", "rc": 2, "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
...ignoring
TASK [assert] ******************************************************************
ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
}
PLAY RECAP *********************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=1
However, rather than running the command task and then checking the result in a second task, you can set a failed_when condition on the command task:
- name: Find list of unexpected processes
  command:
    argv:
      - /usr/bin/pgrep
      - --list-name
      - unexpected_processes_pattern_here
  register: pgrep_status
  failed_when: pgrep_status.rc == 0
This task will fail only when the command returns successfully (i.e., with an exit code of 0).
See the "Defining failure" section of the docs for more information.
I am trying to create a variable from a lookup in Ansible. I then want to use that variable as a constant throughout the rest of the role. Here are a couple of lines from defaults/main.yml.
job_start_time: "{{ lookup('pipe','date +%Y%m%d_%H%M%S') }}"
new_virtual_machine_name: "{{ template_name }}-pkrclone-{{ job_start_time }}"
Running the role causes an error, because the lookup is evaluated again every time the new_virtual_machine_name variable is used in tasks/main.yml. You can see the timestamp increment in subsequent tasks.
TASK [update_vsphere_template : Write vsphere source file] changed: [localhost] => {"changed": true, "checksum": "c2ee7ed6fad0b26aea825233f3e714862de3ac28", "dest": "packer-DevOps-Windows2019-Base-DevOps-Windows2019-Base-pkrclone-20210819_**093122**/vsphere-source-DevOps-Windows2019-Base-DevOps-Windows2019-Base-pkrclone-20210819_**093122**.pkr.hcl", 148765781020136/source", "state": "file", "uid": 1000}
TASK [update_vsphere_template : Write the template update file] *fatal: [localhost]: FAILED! => {"changed": false, "checksum": "6fd97c1d2373b9c71ad6b5246529c46a38f55c48", "msg": "Destination directory packer-DevOps-Windows2019-Base-DevOps-Windows2019-Base-pkrclone-20210819_**093123** does not exist"}
Here is how I am referencing the new_virtual_machine_name variable in tasks/main.yml
- name: Write vsphere source file
  template:
    src: ../templates/vsphere-source.j2
    dest: 'packer-{{ template_name }}-{{ new_virtual_machine_name }}/vsphere-source-{{ template_name }}-{{ new_virtual_machine_name }}.pkr.hcl'

- name: Write the template update file
  template:
    src: ../templates/update-template.j2
    dest: 'packer-{{ template_name }}-{{ new_virtual_machine_name }}/update-{{ template_name }}-{{ new_virtual_machine_name }}.pkr.hcl'
How can I keep Ansible from running the lookup every time I reference the variable that is created from the lookup at the start of the job? Thank you.
Q: "How can I keep Ansible from running the lookup every time I reference the variable?"
A: Use strftime. You'll get the current date each time you call the filter, e.g.
- set_fact:
    job_start_time: "{{ '%Y%m%d_%H%M%S' | strftime }}"

- debug:
    msg: "pkrclone-{{ job_start_time }}"

- wait_for:
    timeout: 3

- debug:
    msg: "pkrclone-{{ job_start_time }}"

- set_fact:
    job_start_time: "{{ '%Y%m%d_%H%M%S' | strftime }}"

- debug:
    msg: "pkrclone-{{ job_start_time }}"
gives
msg: pkrclone-20210819_174748
msg: pkrclone-20210819_174748
msg: pkrclone-20210819_174753
Yes, variables are evaluated every time you use them. A way to work around this is to store a fact using set_fact, which is evaluated only at the time it is set and then stored as a static value for the host.
Edit: Note that, as pointed out in #vladimir's answer (and discussed in comments), using the pipe lookup to get such info is not really good practice when you have the strftime filter available in core Ansible. So I changed it in my example below.
To illustrate, the following playbook:
---
- hosts: localhost
  gather_facts: false

  vars:
    _job_start_time: "{{ '%Y%m%d_%H%M%S' | strftime }}"

  tasks:
    - name: Force var into a fact that will be constant throughout play
      set_fact:
        job_start_time: "{{ _job_start_time }}"

    - name: Wait a bit making sure time goes on
      pause:
        seconds: 1

    - name: Show that fact is constant whereas initial var changes
      debug:
        msg: "var is: {{ _job_start_time }}, fact is: {{ job_start_time }}"

    - name: Wait a bit making sure time goes on
      pause:
        seconds: 1

    - name: Show that fact is constant whereas initial var changes
      debug:
        msg: "var is: {{ _job_start_time }}, fact is: {{ job_start_time }}"
Gives:
PLAY [localhost] ***********************************************************************************************************************************************************************************************************************
TASK [Force var into a fact that will be constant throughout play] *********************************************************************************************************************************************************************
ok: [localhost]
TASK [Wait a bit making sure time goes on] *********************************************************************************************************************************************************************************************
Pausing for 1 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [localhost]
TASK [Show that fact is constant whereas initial var changes] ***************************************************************************************************************************************************************************
ok: [localhost] => {
    "msg": "var is: 20210819_172528, fact is: 20210819_172527"
}
TASK [Wait a bit making sure time goes on] *********************************************************************************************************************************************************************************************
Pausing for 1 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [localhost]
TASK [Show that fact is constant whereas initial var changes] ***************************************************************************************************************************************************************************
ok: [localhost] => {
    "msg": "var is: 20210819_172529, fact is: 20210819_172527"
}
PLAY RECAP *****************************************************************************************************************************************************************************************************************************
localhost : ok=5 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
I have an Ansible role that reads data from a DB. That data might not exist, in which case the role fails. I cannot change this role. Currently I'm using it like:
- name: Import role and read values
  import_role:
    name: shared_role
  register: value
This works well when the data in the DB exists and the role doesn't fail. However when that data is missing it's causing more problems. So in case of an error I want to ignore that error and use a default value.
- import_role:
    name: shared_role
  register: value
  ignore_errors: true

- set_fact:
    value: "{{ value | default({{ default_var }}) }}"
where default_var is defined in group_vars. Now this obviously doesn't work, and I'm kind of stuck. How could I use a variable as a default value for another variable registered in a role that might have failed... if that makes sense.
default will by default (sorry for this redundancy but I don't know how to write it better) replace values for undefined variables only. A variable defined as None is defined.
Fortunately, there is a second parameter that can be used to replace also "empty" vars (i.e. None, or empty string). See Documentation for this
Moreover, you cannot use jinja2 expansion while inside a jinja2 expansion already. Check the jinja2 documentation to learn more.
So you should achieve your requirement with the following expression (if all you described about your data is correct):
- set_fact:
    value: "{{ value | default(default_var, true) }}"
You should not have problems with variable precedence here: set_fact and register are on the same level so the latest assignment should win. But for clarity and security, I would rename that var so you are sure you never get into trouble:
- import_role:
    name: shared_role
  register: role_value
  ignore_errors: true

- set_fact:
    value: "{{ role_value | default(default_var, true) }}"
Here is my group_vars/all.yml file. I am using foo, which is defined here, to set a value in my play. You can see the value 6 in the output.
logs: /var/log/messages
foo: 6
and here is my play-book
---
- name: assign a variable value in default state
  hosts: localhost
  tasks:
    - name: set fact and then print
      set_fact:
        new_var: "{{ doodle | default( foo ) }}"

    - name: debug
      debug: msg="{{ new_var }}"
and here is the output
PLAY [assign a variable value in default state] ********************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [set fact and then print] *************************************************
ok: [localhost]
TASK [debug] *******************************************************************
ok: [localhost] => {
    "msg": "6"
}
PLAY RECAP *********************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
This is a simple syntax error. You cannot use {{ }} inside a filter, because you are already inside a jinja2 expression.
The correct syntax is
- set_fact:
    value: "{{ value | default(default_var) }}"
EDIT:
If your value is "None" and you want to replace that, you can use something like this:
- set_fact:
    value: "{%- if not value is defined or value == 'None' -%}{{ default_var }}{%- else -%}{{ value }}{%- endif -%}"
Take note of the use of {{}} here.
I'm currently writing a bunch of checks for Ansible.
It involves contacting remote machines and performing a check.
Based on the result of this check, I make a decision whether it failed or not.
This is the important bit: the task itself never fails. It merely returns a message. I register the result and then analyze it. And based on that I decide if it's a failure or not.
The thing is, I want to add a flag that allows tests to keep running instead of failing.
So the code looks like this:
- name: Check fail.
  fail:
    msg: "Test failed"
  when: "failfast_flag and <the actual check>"
The problem there is, if I make failfast_flag false, it doesn't output red anymore.
I want it to continue with the next tests, in that case, but I also want it to color red, indicating it's an error/fail.
How do I accomplish this?
EDIT: Thanks for the suggestions, I'll give them a try in a bit.
Not sure I fully understood your question, but you could use a block/rescue structure in order to handle the failures and continue the playbook in failure scenarios:
example:
- block:
    - name: Check fail.
      fail:
        msg: "Test failed"
      when: "failfast_flag and <the actual check>"
  rescue:
    - name: do something when the task above fails
      command: <do something>
This is a combination of #edrupado's answer and #Jack's comment.
The idea is to move the error up to the task where you register the value and use a rescue block to fail with an informative message, skipping the fail with a flag.
I have no idea what your actual task to collect data is. I used a dummy example checking for the presence of a dir/file. You should be able to adapt it to your actual scenario.
---
- name: Dummy POC just to test feasibility
  hosts: localhost
  gather_facts: false

  tasks:
    - block:
        - name: Make sure /whatever dir exists
          stat:
            path: /whatever
          register: whatever
          failed_when: not whatever.stat.exists | bool

        - debug:
            msg: "/whatever exists: Good !"

      rescue:
        - fail:
            msg: /whatever dir must exist.
          ignore_errors: "{{ ignore_flag | default(false) | bool }}"

    - block:
        - name: Make sure /tmp dir exists
          stat:
            path: /tmp
          register: tmpdir
          failed_when: not tmpdir.stat.exists | bool

        - debug:
            msg: "/tmp exists: Good !"

      rescue:
        - fail:
            msg: /tmp dir must exist.
          ignore_errors: "{{ ignore_flag | default(false) | bool }}"
Which gives:
$ ansible-playbook /tmp/test.yml
PLAY [Dummy POC just to test feasibility] *************************************************************************************************************************************************************************
TASK [Make sure /whatever dir exists] *****************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "failed_when_result": true, "stat": {"exists": false}}
TASK [fail] *******************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "/whatever dir must exist."}
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=1 ignored=0
$ ansible-playbook /tmp/test.yml -e ignore_flag=true
PLAY [Dummy POC just to test feasibility] *************************************************************************************************************************************************************************
TASK [Make sure /whatever dir exists] *****************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "failed_when_result": true, "stat": {"exists": false}}
TASK [fail] *******************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "/whatever dir must exist."}
...ignoring
TASK [Make sure /tmp dir exists] **********************************************************************************************************************************************************************************
ok: [localhost]
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "/tmp exists: Good !"
}
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=1 ignored=1
I want to exit without an error (I know about assert and fail modules) when I meet a certain condition. The following code exits but with a failure:
tasks:
  - name: Check if there is something to upgrade
    shell: if apt-get --dry-run upgrade | grep -q "0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded"; then echo "no"; else echo "yes"; fi
    register: upgrading

  - name: Exit if nothing to upgrade
    fail: msg="Nothing to upgrade"
    when: upgrading.stdout == "no"
Since Ansible 2.2, you can use end_play with the meta module:
- meta: end_play
You can also specify when for conditionally ending the play:
- meta: end_play
  when: upgrading.stdout == "no"
Note, though, that the task is not listed in the output of ansible-playbook, regardless of whether or not the play actually ends. Also, the task is not counted in the recap. So, you could do something like:
- block:
    - name: "end play if nothing to upgrade"
      debug:
        msg: "nothing to upgrade, ending play"

    - meta: end_play
  when: upgrading.stdout == "no"
which will announce the end of the play right before ending it, only when the condition is met. If the condition is not met, you'll see the task named end play if nothing to upgrade appropriately skipped, which would provide more info to the user as to why the play is, or is not, ending.
Of course, this will only end the current play and not all remaining plays in the playbook.
UPDATE June 20 2019:
As reto mentions in comments, end_play ends the play for all hosts. In Ansible 2.8, end_host was added to meta:
end_host (added in Ansible 2.8) is a per-host variation of end_play. Causes the play to end for the current host without failing it.
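For example, a minimal sketch of end_host, reusing the upgrading variable registered in the question:

```yaml
# Ends the play for the current host only when it has nothing
# to upgrade; other hosts continue with the remaining tasks.
- meta: end_host
  when: upgrading.stdout == "no"
```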
UPDATE Feb 2021: fixed broken link to meta module
Just a little note: meta: end_play ends just the current play, not the whole playbook. So this playbook:
---
- name: 1st play with end play
  hosts: localhost
  connection: local
  gather_facts: no

  tasks:
    - name: I'll always be printed
      debug:
        msg: next task terminates first play

    - name: Ending the 1st play now
      meta: end_play

    - name: I want to be printed!
      debug:
        msg: However I'm unreachable so this message won't appear in the output

- name: 2nd play
  hosts: localhost
  connection: local
  gather_facts: no

  tasks:
    - name: I will also be printed always
      debug:
        msg: "meta: end_play ended just the 1st play. This is 2nd one."
will produce this output:
$ ansible-playbook -i localhost, playbooks/end_play.yml
PLAY [1st play with end play] **************************************************
TASK [I'll always be printed] **************************************************
ok: [localhost] => {
    "msg": "next task terminates first play"
}
PLAY [2nd play] ****************************************************************
TASK [I will also be printed always] *******************************************
ok: [localhost] => {
    "msg": "meta: end_play ended just the 1st play. This is 2nd one."
}
PLAY RECAP *********************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
A better and more logical way to solve it may be to do the reverse: rather than failing if there is nothing to upgrade (a separate step that does only that), you could append a conditional to all your upgrading tasks, depending on the upgrade variable. In essence, just add
when: upgrading.stdout == "yes"
to tasks that should be only executed during an upgrade.
It is a bit more work, but it also brings clarity and self contains the logic that affects given task within itself, rather than depending on something way above which may or may not terminate it early.
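As a rough sketch of that approach (the task names and the myapp service are made up for illustration; the condition checks the upgrading result registered in the question):

```yaml
# Each upgrade-related task carries the same conditional, so hosts
# with nothing to upgrade simply skip them instead of failing.
- name: Upgrade all packages
  apt:
    upgrade: dist
  when: upgrading.stdout == "yes"

- name: Restart the application after the upgrade
  service:
    name: myapp        # hypothetical service name
    state: restarted
  when: upgrading.stdout == "yes"
```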
Let's use what Tymoteusz suggested for roles:
Split your play into two roles, where the first role executes the check (and sets some variable holding the check's result) and the second one acts based on the result of the check.
I have created aaa.yaml with this content:
---
- hosts: all
  remote_user: root
  roles:
    - check
    - { role: doit, when: "check.stdout == '0'" }
...
then role check in roles/check/tasks/main.yaml:
---
- name: "Check if we should continue"
  shell:
    echo $(( $RANDOM % 2 ))
  register: check

- debug:
    var: check.stdout
...
and then role doit in roles/doit/tasks/main.yaml:
---
- name: "Do it only on systems where check returned 0"
  command:
    date
...
And this was the output:
TASK [check : Check if we should continue] *************************************
Thursday 06 October 2016 21:49:49 +0200 (0:00:09.800) 0:00:09.832 ******
changed: [capsule.example.com]
changed: [monitoring.example.com]
changed: [satellite.example.com]
changed: [docker.example.com]
TASK [check : debug] ***********************************************************
Thursday 06 October 2016 21:49:55 +0200 (0:00:05.171) 0:00:15.004 ******
ok: [monitoring.example.com] => {
    "check.stdout": "0"
}
ok: [satellite.example.com] => {
    "check.stdout": "1"
}
ok: [capsule.example.com] => {
    "check.stdout": "0"
}
ok: [docker.example.com] => {
    "check.stdout": "0"
}
TASK [doit : Do it only on systems where check returned 0] *********************
Thursday 06 October 2016 21:49:55 +0200 (0:00:00.072) 0:00:15.076 ******
skipping: [satellite.example.com]
changed: [capsule.example.com]
changed: [docker.example.com]
changed: [monitoring.example.com]
It is not perfect: it looks like you will keep seeing a skipping status for all tasks on skipped systems, but it might do the trick.
The following was helpful in my case, as meta: end_play seems to stop the execution for all hosts, not just the one that matches.
First establish a fact:
- name: Determine current version
  become: yes
  slurp:
    src: /opt/app/CHECKSUM
  register: version_check
  ignore_errors: yes

- set_fact:
    is_update_needed: "{{ version_check['content'] | b64decode != installer_file.stat.checksum }}"
Now include that part that should only executed on this condition:
# update-app.yml can be placed in the same role folder
- import_tasks: update-app.yml
  when: is_update_needed
You can do this by making an assert condition false:
- name: WE ARE DONE. EXITING
  assert:
    that:
      - "'test' in 'stop'"
Running into the same issue, ugly as it might look to you, I decided to temporarily tweak the code to allow an end_playbook value for the ansible.builtin.meta plugin. So I added the condition below to the file /path/to/python.../site-packages/ansible/plugins/strategy/__init__.py:
elif meta_action == 'end_playbook':
    if _evaluate_conditional(target_host):
        for host in self._inventory.get_hosts(iterator._play.hosts):
            if host.name not in self._tqm._unreachable_hosts:
                iterator.set_run_state_for_host(host.name, IteratingStates.COMPLETE)
        sys.exit(0)
        msg = "ending playbook"
    else:
        skipped = True
        skip_reason += ', continuing play'
As you can see, it is pretty simple and effective to stop the entire process using sys.exit(0).
Here's an example playbook to test with:
- name: test
  hosts: localhost
  connection: local
  gather_facts: no
  become: no
  tasks:
    - meta: end_playbook
      when: true

    - fail:

- name: test2
  hosts: localhost
  connection: local
  gather_facts: no
  become: no
  tasks:
    - debug:
        msg: hi
PLAY [test] ****************
TASK [meta] ****************
When I switch to when: false, it skips to the next task.
PLAY [test] ********************
TASK [meta] ********************
2023-01-05 15:04:39 (0:00:00.021) 0:00:00.021 ***************************
skipping: [localhost]
TASK [fail] ********************
2023-01-05 15:04:39 (0:00:00.013) 0:00:00.035 ***************************
fatal: [localhost]: FAILED! => changed=false
  msg: Failed as requested from task
PLAY RECAP *****************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0