I am creating a systemd service using the template module:
---
- name: Systemd service
  template:
    src: sonar.unit.j2
    dest: /etc/systemd/system/sonarqube.service
  when: "ansible_service_mgr == 'systemd'"
The contents of sonarqube.service can of course change. On change I want to restart the service. How can I do this?
There are two solutions.
Register + When changed
You can register the template module's output (including its changed status):
register: service_conf
and then use a when clause:
when: service_conf.changed
For example:
---
- name: Systemd service
  template:
    src: sonar.unit.j2
    dest: /etc/systemd/system/sonarqube.service
  when: "ansible_service_mgr == 'systemd'"
  register: service_conf

- name: restart service
  service:
    name: sonarqube
    state: restarted
  when: service_conf.changed
Handler + Notify
You define your restart-service task as a handler, and then in your template task you notify that handler.
tasks:
  - name: Add Sonarqube to Systemd service
    template:
      src: sonar.unit.j2
      dest: /etc/systemd/system/sonarqube.service
    when: "ansible_service_mgr == 'systemd'"
    notify: Restart Sonarqube

  - …

handlers:
  - name: Restart Sonarqube
    service:
      name: sonarqube
      state: restarted
More info can be found in the Ansible documentation.
Difference between those 2?
In the first case, the service restarts immediately after the template task. With the handler, the restart happens at the end of the play.
Another difference: if changes from several tasks require a restart of your service, you simply add the notify to all of them. The handler will run if any of those tasks reports a changed status. With the first solution, you would have to register several results, and the when clause grows longer: clause_1 or clause_2 or …
The handler will run only once even if notified several times.
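As a sketch of that multi-task case (the second template task and its file names are invented for illustration), several tasks notify the same handler, and it still runs at most once:

```yaml
tasks:
  - name: Systemd service
    template:
      src: sonar.unit.j2
      dest: /etc/systemd/system/sonarqube.service
    notify: restart sonarqube

  # Hypothetical second file that also requires a restart on change
  - name: Application config
    template:
      src: sonar.properties.j2
      dest: /opt/sonarqube/conf/sonar.properties
    notify: restart sonarqube

handlers:
  - name: restart sonarqube
    service:
      name: sonarqube
      state: restarted
```

Even if both templates change in the same run, the handler executes a single restart at the end of the play.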
This calls for a handler
---
- name: Testplaybook
  hosts: all

  handlers:
    - name: restart_service
      service:
        name: <servicename>
        state: restarted

  tasks:
    - template:
        src: ...
        dest: ...
      notify:
        - restart_service
The handler will automatically get notified by the module when something changes. See the documentation for further information on handlers.
Since you are using systemd, you will also need to execute daemon-reload because you updated the unit file.
The task just templates the service file and notifies a handler:
- name: Systemd service
  template:
    src: sonar.unit.j2
    dest: /etc/systemd/system/sonarqube.service
  when: "ansible_service_mgr == 'systemd'"
  notify: restart sonarqube systemd
Based on the presence of your specific when clause above, I'm assuming you might want to specify separate handlers in the case that systemd is not in use. The handler for the systemd case would look like the following:
- name: restart sonarqube systemd
  systemd:
    name: sonarqube
    state: restarted
    daemon_reload: yes
Related
Imagine the following playbook, which manages a systemd service unit and a configuration file for a "thing" service:
---
- hosts: all
  tasks:
    - copy:
        src: thing.service
        dest: /etc/systemd/system/thing.service
      notify: restart thing

    - copy:
        src: thing.conf
        dest: /etc/thing.conf
      notify: reload thing

  handlers:
    - name: restart thing
      systemd:
        name: thing
        state: restarted

    - name: reload thing
      systemd:
        name: thing
        state: reloaded # Unnecessary if the restart handler has triggered.
If I modify both the thing.service file AND the thing.conf file, the handlers will trigger a restart AND a reload.
The reload is unnecessary because the service will already have been restarted.
Is there any way to inform Ansible of this so that it doesn't trigger the unnecessary reload after the restart?
I don't want to register variables and check those in the handlers with "when" clauses. I'm asking if this is something that Ansible accommodates in its playbook and task syntax.
I have an Ansible configuration for deploying locally built daemons to a series of target machines; these daemons have associated systemd service files to control them.
What I want to happen is:
If daemon or unit file is changed then restart service
If daemon is unchanged then just start service (which may count as 'unchanged', because it's probably already running)
I'm doing this in a few places, so I have a commonly repeating pattern that looks like this:
- name: Populate the daemon
  copy:
    src: "local_build/mydaemon"
    dest: "/usr/bin/mydaemon"
    mode: 0775
  register: daemon_bin

- name: Populate the service
  template:
    src: "Daemon.service"
    dest: "/etc/systemd/system/mydaemon.service"
  register: daemon_service

- name: Enable and restart
  systemd:
    state: restarted
    daemon_reload: yes
    enabled: yes
    name: "mydaemon.service"
  when: (daemon_bin.changed or daemon_service.changed)

- name: Enable and start
  systemd:
    state: started
    enabled: yes
    name: "mydaemon.service"
  when: not (daemon_bin.changed or daemon_service.changed)
Is there a cleaner way to achieve this? It feels like it might be a common problem. Or is my approach somehow wrong?
Yes, you can use notify and handlers.
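As a sketch (reusing the names from the question; the handler name is mine), the register/when pair collapses into one always-run start task plus a handler that fires only when either file changed:

```yaml
- name: Populate the daemon
  copy:
    src: "local_build/mydaemon"
    dest: "/usr/bin/mydaemon"
    mode: 0775
  notify: restart mydaemon

- name: Populate the service
  template:
    src: "Daemon.service"
    dest: "/etc/systemd/system/mydaemon.service"
  notify: restart mydaemon

# Always runs; a no-op when the service is already running
- name: Enable and start
  systemd:
    state: started
    enabled: yes
    name: "mydaemon.service"

handlers:
  - name: restart mydaemon
    systemd:
      state: restarted
      daemon_reload: yes
      name: "mydaemon.service"
```

If neither file changed, only the idempotent start task runs; if either changed, the handler restarts the service (with a daemon-reload) at the end of the play.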
I have a playbook that installs tomcat and then deploys some web applications. The web application deploy tasks notify a handler to restart tomcat, but the handler never fires. I am using a handler to manage the tomcat service because I understand from the docs that handlers only fire once even if notified multiple times. Am I missing something obvious?
This is the playbook:
---
- hosts: all
  become: true
  become_user: root
  roles:
    - role: common
    - role: nginx
    - role: tomcat
    - role: launchpad
    - role: manager
    - role: reporting
  handlers:
    - include: /tomcat/handlers/etitomcat_service_ctrl.yml
This is one of the roles that deploys the web app:
---
- name: Remove the current installation of LaunchPad
  file: path={{etitomcat_path}}/webapps/{{launchpad_module}} state=absent

- name: Remove the current war file for {{launchpad_module}}
  file: path={{etitomcat_path}}/webapps/{{launchpad_module}}.war state=absent

- name: Download the latest snapshot of LaunchPad and deploy it to {{etitomcat_path}}
  get_url: url={{launchpad_source_url}} dest={{etitomcat_path}}/webapps/{{launchpad_module}}.war mode=0744 owner={{etitomcat_user}} group={{etitomcat_group}} force=yes
  notify: "restart_eti_tomcat"
This is the handler:
- name: "Restart ETI Tomcat"
  service: name=etitomcat state=restarted
  become: true
  become_user: root
  listen: "restart_eti_tomcat"

- name: "Start ETI Tomcat"
  service: name=etitomcat state=started
  become: true
  become_user: root
  listen: "start_eti_tomcat"

- name: "Stop ETI Tomcat"
  service: name=etitomcat state=stopped
  become: true
  become_user: root
  listen: "stop_eti_tomcat"
Adding static: yes should resolve this issue when using Ansible >= 2.1.
handlers:
  - include: /tomcat/handlers/etitomcat_service_ctrl.yml
    static: yes
Take a look at this GitHub issue; the linked Google Groups thread might contain valuable information as well.
Edit
As pointed out by rhythmicdevil, the documentation notes:
You cannot notify a handler that is defined inside of an include. As of Ansible 2.1, this does work, however the include must be static.
This may be beside the point, but I'll add it regardless, since the question headline is rather broad and this is the question I found when googling; below is the solution for the particular issue I had.
Do take into consideration that handlers only trigger when a change is registered in the corresponding task. Even if you run the play with the highest verbosity level, there will be NO entry like this spelling it out:
RUNNING HANDLER [base : somehandler ] ***********************
Unchanged: Skipping
And when they are triggered after a change, it happens after all the tasks have been performed.
This really did my head in, since tasks produce output regardless of whether they actually changed anything, whereas handlers stay quiet.
For example, suppose you have a task A that you have been running a few times until it works as you expect. If you then connect a handler B to restart the service, nothing will happen unless you wipe out whatever task A did, or change it. As long as task A registers no change, it will not trigger handler B.
This is the behavior for ansible 2.2.1 anyways.
You can use a post_tasks section instead of handlers; hope it works for you:
---
- hosts: all
  become: true
  become_user: root
  roles:
    - role: common
    - role: nginx
    - role: tomcat
    - role: launchpad
    - role: manager
    - role: reporting
  post_tasks:
    - include: /tomcat/handlers/etitomcat_service_ctrl.yml
I am working on a role where I want one task to be run at the end of the tasks file if and only if any of the previous tasks in that task file have changed.
For example, I have:
- name: install package
  apt: name=mypackage state=latest

- name: modify a file
  lineinfile: do stuff

- name: modify a second file
  lineinfile: other stuff

- name: restart if anything changed
  service: name=mypackage state=restarted
... and I want to restart the service only if an update has been installed or any of the config files have changed.
How can I do this?
The best practice here is to use handlers.
In your role, create a file handlers/main.yml with this content:
- name: restart mypackage
  service: name=mypackage state=restarted
Then notify this handler from all tasks. The handler will be notified only if a task reports a changed state (= yellow output):
- name: install package
  apt: name=mypackage state=latest
  notify: restart mypackage

- name: modify a file
  lineinfile: do stuff
  notify: restart mypackage

- name: modify a second file
  lineinfile: other stuff
  notify: restart mypackage
Handlers are executed at the very end of your play. If other roles depend on the restarted mypackage service, you might want to flush all handlers at the end of the role:
- meta: flush_handlers
Additionally, have a look at the force_handlers setting. If an error occurs in any role processed after your mypackage role, the handler will not be triggered. Set force_handlers=True in your ansible.cfg to force your handlers to be executed even after errors. This matters because the next time you run the playbook the files will not change, so the handler will not be notified and your service will never be restarted.
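For reference, the force_handlers setting goes in the [defaults] section of ansible.cfg:

```ini
# ansible.cfg
[defaults]
force_handlers = True
```

It can also be enabled per play with the force_handlers keyword, or per run with the --force-handlers command-line flag.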
You can also do this without handlers, but it is very ugly: you need to register the output of every single task so you can later check their states in the condition applied to the restart task.
- name: install package
  apt: name=mypackage state=latest
  register: mypackage_1

- name: modify a file
  lineinfile: do stuff
  register: mypackage_2

- name: modify a second file
  lineinfile: other stuff
  register: mypackage_3

- name: restart if anything changed
  service: name=mypackage state=restarted
  when: mypackage_1 is changed or mypackage_2 is changed or mypackage_3 is changed
Until Ansible 2.9 it was also possible to write mypackage_1 | changed.
See also the answer to Ansible Handler notify vs register.
In many of my Ansible roles I have a handler to restart a service and a task to make sure the service is enabled and started. On a first run, Ansible starts my service (Ensure mongodb is started and enabled) and then, at the end, runs the restart handler. How do I tell Ansible to only start it once?
Sample play:
---
- name: Install MongoDB package
  yum: name="mongodb-org-{{ mongodb_version }}" state=present

- name: Configure mongodb
  template: src=mongod.conf.j2 dest=/etc/{{ mongodb_config_file }} owner=root group=root mode=0644
  notify: mongodb restart

- name: Ensure mongodb is started and enabled
  service: name={{ mongodb_daemon_name }} state=started enabled=yes
Insert a flush_handlers task just in front of the service-start task to run the restart handler, like this:
- name: flush handlers
  meta: flush_handlers

## your service start task
- name: Ensure mongodb is started and enabled
  service: name={{ mongodb_daemon_name }} state=started enabled=yes
If handlers have been notified, they will restart the service (I assume you have handlers that restart the service).
If handlers have not been notified, the Ensure service started task will start the service if it is not already running.
It's been a while since I asked this, but in case it helps, the solution I ended up with is below.
The trick is to:
1. Split the service task in tasks/main.yml into separate tasks, one with state: started and one with enabled: yes.
2. For the Ensure mongodb is started task, register a mongodb_service_started variable.
3. Use the newly registered mongodb_service_started variable in the when clause of the Restart mongodb handler, so it only runs when the Ensure mongodb is started task did NOT change.
This ensures that your application does not get started and then immediately restarted in the same play, and prevents your application from being stopped while it is performing some critical function on startup, for example database migrations. Restarting a service right after it has been started can cause an inconsistent state.
tasks/main.yml
---
- name: Install MongoDB package
  yum:
    name: "mongodb-org-{{ mongodb_version }}"
    state: present
  notify: Restart mongodb

- name: Configure mongodb
  template:
    src: mongod.conf.j2
    dest: "/etc/{{ mongodb_config_file }}"
    owner: root
    group: root
    mode: 0644
  notify: Restart mongodb

- name: Ensure mongodb is started
  service:
    name: "{{ mongodb_daemon_name }}"
    state: started
  register: mongodb_service_started

- name: Ensure mongodb is enabled
  service:
    name: "{{ mongodb_daemon_name }}"
    enabled: yes
handlers/main.yml
---
- name: Restart mongodb
  service:
    name: "{{ mongodb_daemon_name }}"
    state: restarted
  when: mongodb_service_started.changed == False
Maybe you are looking for something like this:
---
- name: Install MongoDB package
  yum: name="mongodb-org-{{ mongodb_version }}" state=present
  register: mongodb_install

- name: Configure mongodb
  template: src=mongod.conf.j2 dest=/etc/{{ mongodb_config_file }} owner=root group=root mode=0644
  notify: mongodb restart

- name: Ensure mongodb is started and enabled
  service: name={{ mongodb_daemon_name }} state=started enabled=yes
  when: mongodb_install.changed
I can think of three ways.
Option #1. Remove the state from the task that enables the service and move it to a handler.
According to the Ansible documentation, "notify handlers are always run in the same order they are defined." Since started is idempotent and restarted always restarts the service, the second handler (start) will have no effect if the first one (restart) has run.
# roles/mongodb/tasks/main.yml
- name: Install MongoDB package
  yum: name="mongodb-org-{{ mongodb_version }}" state=present

- name: Configure mongodb
  template: src=mongod.conf.j2 dest=/etc/{{ mongodb_config_file }} owner=root group=root mode=0644
  notify: mongodb restart

- name: Ensure mongodb is started and enabled
  service: name={{ mongodb_daemon_name }} enabled=yes
  notify: mongodb start

# roles/mongodb/handlers/main.yml
- name: mongodb restart
  service: name={{ mongodb_daemon_name }} state=restarted

- name: mongodb start
  service: name={{ mongodb_daemon_name }} state=started
However, handlers are only triggered if a task reports a change, so this does not ensure the service is running when neither task is considered changed. For instance, if the service was installed, enabled, and configured in an earlier run, but someone has since logged on to the host and shut it down manually, it won't be started when this playbook runs.
Option #2. Add a post_task that always runs and ensures the service has been started.
Handlers are flushed before post tasks, so if the configuration has changed, either during first installation or in a later run, a handler will start or restart the service. If nothing has changed and the service isn't running for some reason, the post_task will start it explicitly.
# roles/mongodb/tasks/main.yml
- name: Install MongoDB package
  yum: name="mongodb-org-{{ mongodb_version }}" state=present

- name: Configure mongodb
  template: src=mongod.conf.j2 dest=/etc/{{ mongodb_config_file }} owner=root group=root mode=0644
  notify: mongodb restart

- name: Ensure mongodb is started and enabled
  service: name={{ mongodb_daemon_name }} enabled=yes

# In your playbook, e.g., site.yml
- hosts: all
  roles:
    - mongodb
  post_tasks:
    - name: Ensure mongodb is running
      service: name={{ mongodb_daemon_name }} state=started
Option #3. Like Option #1, except always trigger the "start" handler.
# roles/mongodb/tasks/main.yml
# ...same as Option #1 above...
- name: Ensure mongodb is started and enabled
  service: name={{ mongodb_daemon_name }} enabled=yes
  notify: mongodb start
  changed_when: yes
As long as seeing a "change" in your output doesn't bother you, this will work the way you want.
If you have a handler (in the handlers directory of your role), then you do not need an additional task to start/enable mongo. In your handler you should not start but restart (state=restarted). When notified for the first time, it will start and enable your service; later, when the service is already running, it will restart it.
- name: mongodb restart
  service: name={{ mongodb_daemon_name }} state=restarted enabled=yes
No additional task required.