I have a playbook with the following task and handler section (just a snippet):
tasks:
  - name: 'Run legacy script and power off'
    debug: msg="Preparing for reboot"
    notify: Legacy sysprep

handlers:
  - name: Enable Service1
    service: name=service1 enabled=yes state=restarted

  - name: Legacy sysprep
    shell: /var/scripts/prep-reboot.sh
When I run the playbook, I see the debug message for the task that notifies the Legacy sysprep handler, and I see the Enable Service1 handler execute. But the Legacy sysprep handler is never called: it doesn't show up in the playbook output, it doesn't run on the system, and the servers are not rebooted (that's part of the script).
Yes, I plan to migrate the prep-reboot.sh script to an Ansible playbook, but I was surprised that the shell module apparently doesn't work here. Or is there an error I've overlooked? Running with -vvv doesn't report anything unexpected.
Ansible and Ansible-playbook version 2.1.1.0 running on RHEL 6.8.
Thank you, #techraf, you pointed me in the right direction. I ended up adding changed_when: true to the debug task, which forces it to register a change and thereby triggers the notified handler.
Here is my actual test playbook for reference:
---
- name: Testing forced handler
  hosts: testsys_only
  gather_facts: True

  tasks:
    - name: 'Run legacy script and power off'
      debug: msg="Preparing for reboot"
      changed_when: true
      notify: Legacy sysprep

  handlers:
    - name: Enable Service1
      service: name=service1 enabled=yes state=restarted

    - name: Legacy sysprep
      shell: /var/scripts/prep-reboot.sh
I am using Ansible AWX to issue a command that restarts an apache2 service on a host. The restart command is contained in a playbook.
---
- name: Manage Linux Services
  hosts: all
  tasks:
    - name: Restart a linux service
      command: systemctl restart '{{ service_name }}'
      register: result
      ignore_errors: yes

    - name: Show result of task
      debug:
        var: result
OR
---
- name: Manage Linux Services
  hosts: all
  tasks:
    - name: Restart a linux service
      ansible.builtin.service:
        name: '{{ service_name }}'
        state: restarted
      register: result
      ignore_errors: yes

    - name: Show result of task
      debug:
        var: result
However, when I run the command, I get the error below:
"Failed to restart apache2.service: Connection timed out",
"See system logs and 'systemctl status apache2.service' for details."
I have tried to figure out the issue, but no luck yet.
I later figured out the cause of the issue.
Here's how I fixed it:
The restart command requires sudo access to run, which was missing from my task.
All I had to do was add the become: true directive so that the command executes with root privileges.
So my playbook looked like this thereafter:
---
- name: Manage Linux Services
  hosts: all
  tasks:
    - name: Restart a linux service
      command: systemctl restart '{{ service_name }}'
      become: true
      register: result
      ignore_errors: yes

    - name: Show result of task
      debug:
        var: result
OR
---
- name: Manage Linux Services
  hosts: all
  tasks:
    - name: Restart a linux service
      ansible.builtin.service:
        name: '{{ service_name }}'
        state: restarted
      become: true
      register: result
      ignore_errors: yes

    - name: Show result of task
      debug:
        var: result
Another way to achieve this on Ansible AWX is to tick the Privilege Escalation option in the job template.
If enabled, this runs the playbook selected in the job template as an administrator.
That's all.
I hope this helps.
Restarting a service requires sudo privileges. Besides adding the 'become' directive, if you would like to be prompted for the password, you can do so by passing the -K flag (note: uppercase K):
$ ansible-playbook myplay.yml -i hosts -u myname --ask-pass -K
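If every task in the play needs root, become can also be set once at the play level rather than repeated per task; a minimal sketch along those lines (using the same service_name variable as above):

---
- name: Manage Linux Services
  hosts: all
  become: true

  tasks:
    - name: Restart a linux service
      ansible.builtin.service:
        name: '{{ service_name }}'
        state: restarted

Running it with -K then prompts once for the privilege escalation password for the whole play.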
I wish to create an ansible playbook to run on multiple servers in order to check for conformity to the compliance rules. Therefore the playbook is supposed to check whether several services are disabled and whether they are stopped.
I do know that not all the services will be running or enabled on all machines. Therefore I wish to handle certain return codes within the playbook.
I tried the failed_when statement for this task. It seems the way to go, as it allows me to specify which return codes (RCs) to handle.
- hosts: server_group1
  remote_user: ansible
  tasks:
    - name: Stop services
      command: /bin/systemctl stop "{{ item }}"
      with_items:
        - cups
        - avahi
        - slapd
        - isc-dhcp-server
        - isc-dhcp-server6
        - nfs-server
        - rpcbind
        - bind9
        - vsftpd
        - dovecot
        - smbd
        - snmpd
        - squid
      register: output
      failed_when: "output.rc != 0 and output.rc != 1"
However, the loop only works if I use ignore_errors: True, which is not shown in the example code.
I know that I want to catch RCs of 0 and 1 from the command executed by the playbook, but no matter what, failed_when always generates a fatal error and my playbook fails.
I also tried the failed_when line without the quotes, but that doesn't change a thing.
Something is missing, but I don't see what.
Any advice?
Therefore the playbook is supposed to check whether several services are disabled and whether they are stopped.
Your playbook isn't doing either of these things. If a service exists, whether or not it's running, then systemctl stop <service> will return successfully in most cases. The only time you'll get a non-zero exit code is if either (a) the service does not exist or (b) systemd is unable to stop the service for some reason.
Note that calling systemctl stop has no effect on whether or not a service is disabled; with your current playbook, all those services would start back up the next time the host boots.
If your goal is not simply to check that services are stopped and disabled, but rather to ensure that services are stopped and disabled, you could do something like this:
- name: "Stop services"
service:
name: "{{ item }}"
enabled: false
state: stopped
ignore_errors: true
register: results
loop:
- cups
- avahi
- slapd
- isc-dhcp-server
- isc-dhcp-server6
- nfs-server
- rpcbind
- bind9
- vsftpd
- dovecot
- smbd
- snmpd
- squid
You probably want ignore_errors: true here, because you want to run the task for every item in your list.
A different way of handling errors might be with something like:
failed_when: >-
  results is failed and
  "Could not find the requested service" not in results.msg|default('')
This would only fail the task if stopping the service failed for a reason other than the fact that there was no matching service on the target host.
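For reference, here is roughly how that expression fits into the task above as a single sketch (the service list is shortened to two items; the error string is the one quoted above and may vary across Ansible versions and init systems):

- name: "Stop services"
  service:
    name: "{{ item }}"
    enabled: false
    state: stopped
  register: results
  failed_when: >-
    results is failed and
    "Could not find the requested service" not in results.msg|default('')
  loop:
    - cups
    - avahi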
Consider the following Ansible play:
- name: stop tomcat
  gather_facts: false
  hosts: pod1
  pre_tasks:
    - include_vars:
        dir: "vars/{{ environment }}"
  vars:
    hipchat_message: "stop tomcat pod1 done."
    hipchat_notify: "yes"
  tasks:
    - include: tasks/stopTomcat8AndClearCache.yml
    - include: tasks/stopHttpd.yml
    - include: tasks/hipchatNotification.yml
This stops tomcat on n servers. What I want it to do is send a hipchat notification when it's done doing this. However, this code sends a separate hipchat message for each server the task happens on. This floods the hipchat window with redundant messages. Is there a way to make the hipchat task happen once after the stop tomcat/stop httpd tasks have been done on all the targets? I want the play to shut down tomcat on all the servers, then send one hipchat message saying "tomcat stopped on pod 1".
You can conditionally run the hipchat notification task on only one of the pod1 hosts.
- include: tasks/hipChatNotification.yml
  when: inventory_hostname == groups.pod1[0]
Alternatively, you could run it only on localhost if you don't need any of the variables from the previous play.
- name: Run notification
  gather_facts: false
  hosts: localhost
  tasks:
    - include: tasks/hipchatNotification.yml
You also could use the run_once flag on the task itself.
- name: Do a thing on the first host in a group.
  debug:
    msg: "Yay only prints once"
  run_once: true
- name: Run this block only once per host group
  block:
    - name: Do a thing on the first host in a group.
      debug:
        msg: "Yay only prints once"
  run_once: true
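If the single run should happen on a specific machine rather than whichever host happens to come first in the batch, run_once can be combined with delegate_to; a minimal sketch (the message is a placeholder):

- name: Send one notification for the whole group
  debug:
    msg: "tomcat stopped on pod 1"
  run_once: true
  delegate_to: localhost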
Ansible handlers are made for this type of problem, where you want to run a task once at the end of an operation even though it may have been triggered multiple times in the play.
You can define a handlers section in your playbook and notify the handlers from your tasks; handlers will not run unless notified by a task, and they will only run once regardless of how many times they are notified.
handlers:
  - name: hipchat notify
    hipchat:
      room: someroom
      msg: tomcat stopped on pod 1
In your play tasks, just include a notify on the tasks that should trigger the handler; if they report a change, the handler will run after all tasks have executed.
- name: Stop service httpd, if started
  service:
    name: httpd
    state: stopped
  notify:
    - hipchat notify
I have a playbook that installs tomcat and then deploys some web applications. The web application deploy task(s) notifies a handler to restart tomcat. But the handler never fires. I am using a handler to manage the tomcat service because I understand from the docs that handlers should only fire once even if called multiple times. Am I missing something obvious?
This is the playbook:
---
- hosts: all
  become: true
  become_user: root
  roles:
    - role: common
    - role: nginx
    - role: tomcat
    - role: launchpad
    - role: manager
    - role: reporting
  handlers:
    - include: /tomcat/handlers/etitomcat_service_ctrl.yml
This is one of the roles that deploys the web app:
---
- name: Remove the current installation of LaunchPad
  file: path={{etitomcat_path}}/webapps/{{launchpad_module}} state=absent

- name: Remove the current war file for {{launchpad_module}}
  file: path={{etitomcat_path}}/webapps/{{launchpad_module}}.war state=absent

- name: Download the latest snapshot of LaunchPad and deploy it to {{etitomcat_path}}
  get_url: url={{launchpad_source_url}} dest={{etitomcat_path}}/webapps/{{launchpad_module}}.war mode=0744 owner={{etitomcat_user}} group={{etitomcat_group}} force=yes
  notify: "restart_eti_tomcat"
This is the handler:
- name: "Restart ETI Tomcat"
service: name=etitomcat state=restarted
become: true
become_user: root
listen: "restart_eti_tomcat"
- name: "Start ETI Tomcat"
service: name=etitomcat state=started
become: true
become_user: root
listen: "start_eti_tomcat"
- name: "Stop ETI Tomcat"
service: name=etitomcat state=stopped
become: true
become_user: root
listen: "stop_eti_tomcat"
Adding static: yes should resolve this issue when using Ansible >= 2.1.
handlers:
  - include: /tomcat/handlers/etitomcat_service_ctrl.yml
    static: yes
Take a look at this GitHub issue; the linked Google Groups thread might contain valuable information as well.
Edit
As pointed out by #rhythmicdevil, the documentation notes:
You cannot notify a handler that is defined inside of an include. As of Ansible 2.1, this does work, however the include must be static.
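On Ansible 2.4 and later, where the bare include keyword is deprecated, the static equivalent is import_tasks; assuming the same handler file as above, the handlers section would look roughly like this:

handlers:
  - import_tasks: /tomcat/handlers/etitomcat_service_ctrl.yml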
This may be beside the point, but I'll add it regardless: the question title is rather broad, this is the question I found when googling, and below is the solution for the particular issue I had.
Do take into consideration that handlers only trigger when the corresponding task registers a change. Even if you run the play at the highest verbosity level, there will be NO entry like this spelling it out:
RUNNING HANDLER [base : somehandler ] ***********************
Unchanged: Skipping
And when they are triggered by a change, they run only after all the tasks have been performed.
This really did my head in, since tasks show up in the output regardless of whether they actually did something, whereas handlers stay quiet.
For example, say you have a task A that you have run a few times until it works the way you expect.
If you then connect a handler B to restart the service, nothing will happen unless you undo whatever task A did, or change it.
As long as task A registers no change, it will not trigger handler B.
This is the behavior for ansible 2.2.1 anyways.
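If you do need handler B to fire while you are still iterating on task A, one workaround (the same trick as in the first thread above) is to force the notifying task to report a change; a minimal sketch with placeholder names:

- name: Task A                               # placeholder task
  command: /usr/local/bin/do-something.sh    # placeholder script path
  changed_when: true                         # always report "changed" so the notify fires
  notify: handler B                          # placeholder handler name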
You can keep the roles and add a post_tasks section instead of handlers; hope it works for you:
---
- hosts: all
  become: true
  become_user: root
  roles:
    - role: common
    - role: nginx
    - role: tomcat
    - role: launchpad
    - role: manager
    - role: reporting
  post_tasks:
    - include: /tomcat/handlers/etitomcat_service_ctrl.yml
I have a simple playbook that copies some config files for an nginx server and then issues a service nginx reload command:
---
- hosts: mirrors
  tasks:
    - name: Installs nginx web server
      apt: pkg=nginx state=installed update_cache=true
      notify:
        - start nginx

    - name: Copy vhost file
      copy: src=vhost/test.vhost dest=/etc/nginx/sites-available/ mode=0644

    - name: Symlink vhost file to enabled sites
      file: src=/etc/nginx/sites-available/test.vhost dest=/etc/nginx/sites-enabled/test.vhost state=link
      notify:
        - reload nginx

  handlers:
    - name: start nginx
      service: name=nginx state=started

    - name: reload nginx
      service: name=nginx state=reloaded
Now, the problem is that it might happen that the test.vhost file contains an error and nginx won't reload properly. If that happens, I'd like to run the command systemctl status nginx.service and see its output when the Ansible playbook executes. How can I add this "error handler"?
If the handler output tells you about the failure, you can put together a nice bit of error handling with the ignore_errors and register keywords.
Set your handler not to fail and end the play when it gets an error code back, but to register the output as a variable:
handlers:
  - name: reload nginx
    service: name=nginx state=reloaded
    ignore_errors: True
    register: nginx_reloaded
Then, back in the tasks section, call flush_handlers to execute all the queued handlers that would otherwise wait until the end of the play. Add a task that asks the server about the status of Nginx when your nginx_reloaded variable reports a failure, and then print the information from that with the debug module:
- name: Symlink vhost file to enabled sites
  file: src=/etc/nginx/sites-available/test.vhost dest=/etc/nginx/sites-enabled/test.vhost state=link
  notify:
    - reload nginx

- meta: flush_handlers

- name: Get the Nginx service status if it failed to reload.
  command: "systemctl status nginx.service"
  register: nginx_status
  when: nginx_reloaded is failed

- debug: var=nginx_status
  when: nginx_reloaded is failed
Of course, a better solution would be to fix those errors in your configuration files; figuring them out through Ansible output has limitations compared to SSHing onto the target host to check it out.
Why not use the Ansible modules' ability to validate configuration?
http://docs.ansible.com/ansible/template_module.html
http://docs.ansible.com/ansible/replace_module.html
http://docs.ansible.com/ansible/lineinfile_module.html
For example:
- template: src=/mine/sudoers dest=/etc/sudoers validate='visudo -cf %s'
- lineinfile: dest=/etc/sudoers state=present regexp='^%ADMIN ALL\=' line='%ADMIN ALL=(ALL) NOPASSWD:ALL' validate='visudo -cf %s'
- replace: dest=/etc/apache/ports regexp='^(NameVirtualHost|Listen)\s+80\s*$' replace='\1 127.0.0.1:8080' validate='/usr/sbin/apache2ctl -f %s -t'
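Applied to the nginx playbook above: validate only works on files that are complete configurations in their own right, so a vhost fragment like test.vhost cannot be checked in isolation this way, but the main configuration file could be copied with a syntax check before the reload handler ever runs. A rough sketch (the source path is a placeholder, and any files the config includes must already exist on the host):

- name: Copy main nginx configuration with a syntax check
  copy:
    src: files/nginx.conf              # placeholder source path
    dest: /etc/nginx/nginx.conf
    validate: 'nginx -t -c %s'         # nginx tests the staged copy before it replaces the live file
  notify:
    - reload nginx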