Team,
I'm not sure why my echo message is not getting printed. I want to print a message if the command that ran before it (to stop the process) was successful.
In my case I am checking the apache2 process status. Basically, if stdout is empty it passed, and if stderr exists it failed. I also referred to this link, but no luck:
https://stackoverflow.com/questions/26142343/ansible-conditional-based-on-stdout-of-result
- name: stop apache2 process
  command: systemctl stop apache2
  ignore_errors: no
  changed_when: false
  register: service_apache2_status
  become: true

- debug:
    var: service_apache2_status.stdout

- name: Check if apache2 stopped
  command: echo "Successfully Stopped Apache2"
  # when: service_apache2_status.stdout == " "
  when: service_apache2_status.stdout | length == 0
Actual output:
TASK [diskcache_prerequisites : stop apache2 process] **********************************************************************************
ok: [localhost]
TASK [diskcache_prerequisites : debug] *************************************************************************************************
ok: [localhost] => {
"service_apache2_status.stdout": ""
}
TASK [diskcache_prerequisites : Check if apache2 stopped] ******************************************************************************
changed: [localhost]
Expected output:
TASK [diskcache_prerequisites : Check if apache2 stopped] ******************************************************************************
changed: [localhost]
Successfully Stopped Apache2
You don't need the "Check if apache2 stopped" task at all if you use the systemd module instead of trying to call systemctl by hand via command:.
As for your question: if you want to see output, that's what - debug: msg="Successfully stopped Apache2" is for; the command: module is not a general-purpose debugging tool, it's for running commands.
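To illustrate, a minimal sketch of that approach, keeping your task name and become: true (the when: condition on the debug task is optional, since the play would normally stop anyway if the systemd task failed):

- name: stop apache2 process
  systemd:
    name: apache2
    state: stopped
  become: true
  register: service_apache2_status

- name: report result
  debug:
    msg: "Successfully Stopped Apache2"
  when: service_apache2_status is succeeded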
Related
I am trying to write logic that checks if the service is running and, if it is not running, starts it. Below is the logic I have written, but for some reason the notify is not calling the handler.
---
- name: Executing the play to start the service
  hosts: nodes
  become: yes
  gather_facts: False
  tasks:
    - name: Executing the Shell task to check the status of the wso2 instance
      shell: myMsg=$(sudo service wso2esb status| grep Active | awk '{print $3}');if [[ $myMsg == "(dead)" ]]; then echo "Not Running";else echo "Running";fi
      ignore_errors: yes
      register: result
      notify: handl
      when: result.stdout == "Not Running"   # (I even tried 'changed_when', but the same error)
  handlers:
    - name: handl
      service: name=wso2esb state=started
$ ansible-playbook -i inventories/hosts.sit start.yml -b -k
SSH password:
PLAY [Executing the play to start the wso2 service] ***********************************************
TASK [Executing the Shell task to check the status of the instance] *******************************
fatal: [mpstest01]: FAILED! => {"msg": "The conditional check 'result.stdout == \"Not Running\"' failed. The error was: error while evaluating conditional (result.stdout == \"Not Running\"): 'result' is undefined\n\nThe error appears to have been in '/home/ansible1/start.yml': line 9, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Executing the Shell task to check the status of the instance\n ^ here\n"}
...ignoring
PLAY RECAP ****************************************************************************************
mpstest01 : ok=1 changed=0 unreachable=0 failed=0
That all seems unnecessary. You don't need to "check if the service is running, if Not Running, then start it"; that's exactly what the service module does. Just put the service module in a task, and don't bother with the handler:
- name: start service if it isn't already running
  service: name=wso2esb state=started
Although I would prefer YAML syntax rather than the key=value syntax:
- name: start service if it isn't already running
  service:
    name: wso2esb
    state: started
In any case, the service module will check if the service is running, and if not, it will start it.
I have written a playbook which runs a handler if the task was successful. Now I want to use some kind of condition so that if the task fails, a different handler runs instead, just like a simple if/else statement.
Current PLAYBOOK
tasks:
  - name: checking file format
    command: named-checkzone example.com /var/named/example.com
    notify: service
handlers:
  - name: "service reload"
    command: rndc reload example.com
    listen: "service"
Now I want to omit the file name from the configuration file if the main task fails.
Q: "1) Run handler if the task was successful. 2) If the task fails then run another handler."
A: The simple way is to notify a handler when a task reports changed. When a task fails, its status is failed rather than changed, and no handler is notified.
In this particular case, you don't care whether the command succeeded or failed: a handler shall always be notified. This can be achieved by an explicit combination of ignore_errors, failed_when, and changed_when.
Notify both handlers, service success and service fail. The conditions in the handlers decide which one actually runs. For example, the playbook
shell> cat playbook.yml
- hosts: localhost
  tasks:
    - command: "{{ cmd }}"
      register: named_checkzone_result
      ignore_errors: true
      failed_when: false
      changed_when: true
      notify:
        - service success
        - service fail
  handlers:
    - name: service success
      debug:
        msg: Service success
      when: named_checkzone_result.rc == 0
    - name: service fail
      debug:
        msg: Service fail
      when: named_checkzone_result.rc == 1
gives (abridged) if the command succeeds
shell> ansible-playbook playbook.yml -e "cmd=true"
TASK [command] ****
changed: [localhost]
RUNNING HANDLER [service success] ****
ok: [localhost] =>
msg: Service success
RUNNING HANDLER [service fail] ****
skipping: [localhost]
gives (abridged) if the command fails
shell> ansible-playbook playbook.yml -e "cmd=false"
TASK [command] ****
changed: [localhost]
RUNNING HANDLER [service success] ****
skipping: [localhost]
RUNNING HANDLER [service fail] ****
ok: [localhost] =>
msg: Service fail
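Mapped back onto your zone-check playbook, a sketch of the same pattern could look like the following. The "omit zone" handler here is only a placeholder (a debug message) for whatever change you actually need to make to the configuration file when the check fails:

tasks:
  - name: checking file format
    command: named-checkzone example.com /var/named/example.com
    register: named_checkzone_result
    failed_when: false
    changed_when: true
    notify:
      - service reload
      - omit zone
handlers:
  - name: service reload
    command: rndc reload example.com
    when: named_checkzone_result.rc == 0
  - name: omit zone
    debug:
      msg: "named-checkzone failed; remove the zone file name from the configuration here"
    when: named_checkzone_result.rc != 0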
In my Ansible playbook I have a task that executes a shell command. One of the parameters of that command is a password. When the shell command fails, Ansible prints the whole JSON object, which includes the command containing the password. If I use no_log: True then I get censored output and am not able to get stderr_lines. Is there a way to customize the output when the shell command execution fails?
You can take advantage of Ansible blocks and their error handling feature.
Here is an example playbook:
---
- name: Block demo for shell
  hosts: localhost
  gather_facts: false
  tasks:
    - block:
        - name: my command
          shell: my_command is bad
          register: cmdresult
          no_log: true
      rescue:
        - name: show error
          debug:
            msg: "{{ cmdresult.stderr }}"
        - name: fail the playbook
          fail:
            msg: Error on command. See debug of stderr above
which gives the following result:
PLAY [Block demo for shell] *********************************************************************************************************************************************************************************************************************************************
TASK [my command] *******************************************************************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true}
TASK [show error] *******************************************************************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "/bin/sh: 1: my_command: not found"
}
TASK [fail the playbook] ************************************************************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Error on command. See debug of stderr above"}
PLAY RECAP **************************************************************************************************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=1 ignored=0
You can utilize something like this:
- name: Running it
  hosts: localhost
  tasks:
    - name: failing the task
      shell: sh a.sh > output.txt
      ignore_errors: true
      register: abc
    - name: now failing
      command: rm output.txt
      # 'abc is succeeded' is the test syntax; older playbooks used the 'abc|succeeded' filter form
      when: abc is succeeded
stdout will be written to a file. If the task fails you can inspect the file to debug it; if it succeeds, the file is deleted.
Ansible Version: ansible 2.4.2.0
I want to start VMs sequentially depending on their role (master/backup). Multiple VM IDs are stored in two files, master and backup. The control flow should work like this:
Iterate over the VM IDs one by one from a file.
For every iteration, the handler should be notified, i.e. the iteration should WAIT for the handler to complete.
The iteration should not move forward if the handler failed (or is still in a WAITING state).
For reference, you see the below playbook
- name: Performs Power Actions VMs
  hosts: localhost
  vars:
    - status: "{% if action=='stop' %}SHUTOFF{% else %}ACTIVE{% endif %}"  # For checking VM status
  tasks:
    - name: Staring Master VM
      shell: |
        echo {{ item }} > /tmp/current
        echo "RUN nova start {{ item }} HERE!!!"
      when: action == "start"
      with_lines: cat ./master
      notify: "Poll VM power status"
    - name: Starting Backup VM
      shell: |
        echo {{ item }} > /tmp/current
        echo "RUN nova start {{ item }} HERE!!!"
      when: action == "start"
      with_lines: cat ./backup
      notify: "Poll VM power status"
  handlers:
    - name: Poll VM power status
      shell: openstack server show -c status --format value `cat /tmp/current`
      register: cmd_out
      until: cmd_out.stdout == status
      retries: 5
      delay: 10
With the above playbook, what I see is that the handler is only notified after the entire iteration is complete.
PLAY [Performs Power Actions on ESC VMs] **********************************************************************************************
TASK [Stopping Backup VM] *********************************************************************************************************
skipping: [localhost] => (item=Test)
TASK [Stopping Master VM] *********************************************************************************************************
skipping: [localhost] => (item=Test)
TASK [Staring Master VM] **********************************************************************************************************
changed: [localhost] => (item=Test)
TASK [Starting Backup VM] *********************************************************************************************************
changed: [localhost] => (item=Test)
TASK [Removing tmp files] *************************************************************************************************************
changed: [localhost] => (item=./master)
changed: [localhost] => (item=./backup)
RUNNING HANDLER [Poll VM power status] ********************************************************************************************
FAILED - RETRYING: Poll ESC VM power status (5 retries left).
^C [ERROR]: User interrupted execution
Is there any better approach to solve this problem? Or any suggestion on how to fit a block into this playbook to solve it?
PS: The dummy command in the tasks, RUN nova start {{ item }} HERE!!!, doesn't wait. That's why I have to check the status manually.
By default, handlers are run at the end of a play.
However, you can force the already notified handlers to run at a given time in your play by using the meta module.
- name: force running of all notified handlers now
  meta: flush_handlers
In your case, you just have to add it in between your two VM start tasks.
Edit: this will actually work in between your two tasks, but not for each iteration within a single task, so it is not really answering your full requirement.
Another approach (to be developed) would be to include your check command directly alongside your start command, so that each iteration does not move on until the condition is met; see the sketch below.
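As a rough sketch of that idea (the file name start_and_wait.yml and the vm_id variable are hypothetical; the placeholder echo stands in for your real nova start command), an included task file handles one VM at a time:

# start_and_wait.yml -- start one VM and poll until it reaches the expected status
- name: Start the VM
  shell: echo "RUN nova start {{ vm_id }} HERE!!!"

- name: Wait for the VM to reach the expected status
  shell: openstack server show -c status --format value {{ vm_id }}
  register: cmd_out
  until: cmd_out.stdout == status
  retries: 5
  delay: 10

and the play loops over the file with include_tasks, so the start and the poll complete for each VM before the loop advances:

- name: Staring Master VM
  include_tasks: start_and_wait.yml
  vars:
    vm_id: "{{ item }}"
  when: action == "start"
  with_lines: cat ./master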
Have you considered exploring the galaxy of OpenStack-related modules? They might solve your current problems as well.
I have written an Ansible playbook to check if oswatcher is installed and, if not, install it.
Shell script to check whether the software is installed:
#!/bin/bash
#Validation check that checks if oswatcher is installed
oswatcher=$(rpm -qa | grep "^oswatcher*" || echo "not_installed")
echo "$oswatcher"
This play checks the shell script's output and decides whether it has to run the install play or not:
- name: Execute script to check if oswatcher is installed
  script: "{{ oswatch_home }}/version_details.sh"
  register: oswatcher

- name: Print the Output
  debug:
    msg: "You can think the Application is installed, it is {{ oswatcher.stdout }}"

- include: oswatch-install.yml
  when: oswatcher.stdout == "not_installed"
Please see the output
PLAY [integration] *******************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************
ok: [xxxxxx]
ok: [xxxxxx]
TASK [app/oswatch : Execute script to check if oswatcher is installed] ***************************************************************
changed: [xxxxx1]
changed: [xxxxx2]
TASK [app/oswatch : Print the Output] ************************************************************************************************
ok: [xxxxxx1] => {
"changed": false,
"msg": "You can think the Application is installed, it is oswatcher-7.3.3-2.el6.noarch\r\n"
}
ok: [xxxxxx2] => {
"changed": false,
"msg": "You can think the Application is installed, it is not_installed\r\n"
}
TASK [app/oswatch : create project directory "/tmp/oswatch"] *************************************************************************
skipping: [xxxxxx1]
skipping: [xxxxxx2]
My logic matches not_installed and decides whether to run the install play or not. It even prints the value perfectly, so I'm not able to understand the hiccup.
Change the task to the one below and see if it serves the purpose. If it still doesn't work, please run your playbook with -vvv and post the complete output.
- include: oswatch-install.yml
  when: '"not_installed" in oswatcher.stdout'