Pass service/init.d name to Ansible handler as variable - ansible

I'm new to Ansible and was considering how to create services on the fly and how best to manage them. I've described below an approach that doesn't work, but hopefully it's enough to describe the problem.
Any pointers appreciated. Thanks.
I'm using a template file to deploy a bunch of near-identical application servers. During the deployment of the application servers, a corresponding init script is placed at a path built from a variable: `/etc/init.d/{{ application_instance }}`
Next I'd like to enable it and ensure it's started:
- name: be sure app_xyz is running and enabled
  service: name={{ application_instance }} state=started enabled=yes
Further on, I'd like to trigger a restart of the application when configuration files are updated:
- name: be sure app_xyz is configured
  template: src=xyz.conf dest=/opt/application/{{ application_server }}.conf
  notify:
    - restart {{ application_server }}
With the handler looking like this:
- name: restart {{ application_server }}
  service: name={{ application_server }} state=restarted

You don't need a dynamic handler name for this. What about a static handler name?
# handler
- name: restart application server
  service: name={{ application_server }} state=restarted

# task
- name: be sure app_xyz is configured
  template: src=xyz.conf dest=/opt/application/{{ application_server }}.conf
  notify:
    - restart application server

Related

Stop/Start Windows services only when state not in manual or disabled

I have an Ansible playbook to start or stop services, and I need to add a condition so that it only picks up services whose startup type is neither Manual nor Disabled. I'd appreciate your help!
My code looks like this.
---
- name: windows start services
  hosts: all
  tasks:
    - name: Include primary app services
      include_vars:
        file: inventory/vars/servicestart.yml
    - name: Start task
      win_service:
        name: "{{ item }}"
        state: started
      loop: "{{ services }}"
      when: inventory_hostname in groups['primaryappservers']
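One way to sketch the missing condition: gather startup modes with the ansible.windows.win_service_info module and skip any item whose start_mode is manual or disabled. The group name and the services variable come from the question above; the filter chain and the svc_mode helper variable are assumptions of this sketch, not tested code.

```yaml
# Hedged sketch: skip services whose startup type is Manual or Disabled.
# Assumes win_service_info reports a start_mode field per service
# (values such as 'auto', 'delayed', 'manual', 'disabled').
- name: Gather info for all Windows services
  ansible.windows.win_service_info:
  register: svc_info
  when: inventory_hostname in groups['primaryappservers']

- name: Start only services not set to Manual or Disabled
  win_service:
    name: "{{ item }}"
    state: started
  loop: "{{ services }}"
  vars:
    # hypothetical helper: look up this item's start_mode in the gathered info
    svc_mode: "{{ svc_info.services | selectattr('name', 'equalto', item)
                  | map(attribute='start_mode') | first | default('') }}"
  when:
    - inventory_hostname in groups['primaryappservers']
    - svc_mode not in ['manual', 'disabled']
```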

Start only specific systemd service from list of services using Ansible

I have a list of systemd services defined as:
vars:
  systemd_scripts: ['a.service', 'b.service', 'c.service']
Now I want to stop only a.service from the above list. How can this be achieved using the systemd module?
What are you trying to achieve? As written, you could just do:
- name: Stop service A
  systemd:
    name: a.service
    state: stopped
If instead you mean "the first service", use the first filter or an index:
- name: Stop first service
  systemd:
    name: "{{ systemd_scripts | first }}"
    state: stopped
OR
- name: Stop first service
  systemd:
    name: "{{ systemd_scripts[0] }}"
    state: stopped
Your question is very vague and unspecific. However, parts of it could be answered with the following approach:
- name: Loop over service list and stop one service
  systemd:
    name: "{{ item }}"
    state: stopped
  when:
    - item == 'a.service'
  with_items: "{{ systemd_scripts }}"
You may need to extend and change it for your needs and required functionality.
Regarding discovering, starting, and stopping services via Ansible facts (service_facts) and systemd, you may have a look into the following.
Further reading
Ansible: How to get disabled but running services?
How to declare a variable for service_facts?
How to check service exists and is not installed in the server using service_facts module in an Ansible playbook?
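As a hedged sketch of the service_facts route mentioned above (assuming ansible_facts.services exposes a per-service state field such as 'running', as the module does on Linux):

```yaml
# Hedged sketch: stop a.service only if service_facts reports it running.
- name: Gather service facts
  service_facts:

- name: Stop a.service if it is currently running
  systemd:
    name: a.service
    state: stopped
  when: ansible_facts.services['a.service'].state | default('') == 'running'
```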

Is it somehow possible to create a template of an Ansible handler?

I am facing the problem that I need to define a set of similar handlers in Ansible that differ only in name.
Let me give an example. Here are some tasks:
# tasks/main.yml
- name: Install config of OpenVPN instance 1
  notify: restart openvpn-1
  ...
- name: Install config of OpenVPN instance 2
  notify: restart openvpn-2
  ...
# Multiple more of that pattern.
You might think that each instance has a slightly different configuration that can be handled here.
All right. Now the handlers:
# handlers/main.yml
- name: restart openvpn-1
  systemd:
    name: openvpn-server@instance1
    state: restarted
- name: restart openvpn-2
  systemd:
    name: openvpn-server@instance2
    state: restarted
# ...
You can see this is quite a bit of duplicated code (not good).
I was thinking of doing something like:
# Handler template or so
- name: restart-openvpn-{{ item }}
  systemd:
    name: openvpn-server@instance{{ item }}
    state: restarted
  loop:
    - "1"
    - "2"
# ...
This is not working; I tried it.
I found this post, but it doesn't work well here, as it assumes the notifying task is itself running in a loop. Instead, I have a set of individual tasks that each trigger a single instance of the handler. Also, this is already inside a role.
So the short question is: how can I create a handler template to avoid code duplication?
Your code works.
E.g. I took two services and told them to restart using your approach:
- hosts: localhost
  connection: local
  become: true
  tasks:
    - name: restart-{{ item }}
      systemd:
        name: "{{ item }}.service"
        state: restarted
      loop:
        - "whoopsie"
        - "wpa_supplicant"
The output is:
PLAY RECAP **********
localhost : ok=2
The main difference from your code: add quotes.
As per Ansible Documentation on Using Variables:
YAML syntax requires that if you start a value with {{ foo }} you quote the whole line, since it wants to be sure you aren’t trying to start a YAML dictionary. This is covered on the YAML Syntax documentation.
This won’t work:
- hosts: app_servers
  vars:
    app_path: {{ base_path }}/22
Do it like this and you’ll be fine:
- hosts: app_servers
  vars:
    app_path: "{{ base_path }}/22"

Restart service when service file changes when using Ansible

I am creating a systemd service using the template module:
---
- name: Systemd service
  template:
    src: sonar.unit.j2
    dest: /etc/systemd/system/sonarqube.service
  when: "ansible_service_mgr == 'systemd'"
The contents of sonarqube.service can change, of course. On change, I want to restart the service. How can I do this?
There are two solutions.
Register + when changed
You can register the template module's output (with its status change):
  register: service_conf
and then use a when clause:
  when: service_conf.changed
For example:
---
- name: Systemd service
  template:
    src: sonar.unit.j2
    dest: /etc/systemd/system/sonarqube.service
  when: "ansible_service_mgr == 'systemd'"
  register: service_conf
- name: restart service
  service:
    name: sonarqube
    state: restarted
  when: service_conf.changed
Handler + notify
You define your restart-service task as a handler, and then in your template task you notify the handler.
tasks:
  - name: Add Sonarqube to Systemd service
    template:
      src: sonar.unit.j2
      dest: /etc/systemd/system/sonarqube.service
    when: "ansible_service_mgr == 'systemd'"
    notify: Restart Sonarqube
  - …
handlers:
  - name: Restart Sonarqube
    service:
      name: sonarqube
      state: restarted
More info can be found in the Ansible documentation.
What's the difference between the two?
In the first case, the service restarts immediately. With the handler, the restart happens at the end of the play.
Another difference: if changes in several tasks require a restart of your service, you simply add the notify to all of them.
The handler will run if any of those tasks gets a changed status. With the first solution, you would have to register several return values, and you would end up with a long when: clause_1 or clause_2 or … condition.
The handler will run only once, even if notified several times.
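For illustration, a minimal sketch of the multi-notify point above. The second template and its destination are hypothetical additions; both tasks notify the same handler, which still runs at most once per play:

```yaml
# Sketch: two config tasks sharing one handler (second file is hypothetical)
tasks:
  - name: Add Sonarqube to Systemd service
    template:
      src: sonar.unit.j2
      dest: /etc/systemd/system/sonarqube.service
    notify: Restart Sonarqube
  - name: Deploy Sonarqube environment file   # hypothetical extra config
    template:
      src: sonar.env.j2
      dest: /etc/default/sonarqube
    notify: Restart Sonarqube
handlers:
  - name: Restart Sonarqube
    service:
      name: sonarqube
      state: restarted
```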
This calls for a handler:
---
- name: Testplaybook
  hosts: all
  handlers:
    - name: restart_service
      service:
        name: <servicename>
        state: restarted
  tasks:
    - template:
        src: ...
        dest: ...
      notify:
        - restart_service
The handler will automatically get notified by the module when something changes. See the documentation for further information on handlers.
Since you are using systemd, you will also need to execute daemon-reload because you updated the service file.
The task just templates the service file and notifies a handler:
- name: Systemd service
  template:
    src: sonar.unit.j2
    dest: /etc/systemd/system/sonarqube.service
  when: "ansible_service_mgr == 'systemd'"
  notify: restart sonarqube systemd
Based on the presence of your specific when clause above, I'm assuming you might want to specify separate handlers in the case that systemd is not in use. The handler for the systemd case would look like the following:
- name: restart sonarqube systemd
  systemd:
    name: sonarqube
    state: restarted
    daemon_reload: yes

How to stop Ansible from starting then restarting a service?

In many of my Ansible roles I have a handler to restart a service and a task to make sure the service is enabled and started. On a first run, Ansible will start my service (Ensure mongodb is started and enabled) and then, at the end, run the restart handler. How do I tell Ansible to only start it once?
Sample play:
---
- name: Install MongoDB package
  yum: name="mongodb-org-{{ mongodb_version }}" state=present
- name: Configure mongodb
  template: src=mongod.conf.j2 dest=/etc/{{ mongodb_config_file }} owner=root group=root mode=0644
  notify: mongodb restart
- name: Ensure mongodb is started and enabled
  service: name={{ mongodb_daemon_name }} state=started enabled=yes
Insert a flush_handlers task just before the service-start task, so the restart handler runs first. Like:
- name: flush handlers
  meta: flush_handlers
## your service start task
- name: Ensure mongodb is started and enabled
  service: name={{ mongodb_daemon_name }} state=started enabled=yes
If handlers are notified, the handler will restart the services (I assume you have handlers that restart the services).
If handlers are not notified, the Ensure service started task will start the service if it is not already started.
It's been a while since I asked this but anywho, if it helps, the solution I ended up with is below...
The trick is to:
1. Split the service task in tasks/main.yml into two tasks: one with state: started and one with enabled: yes
2. For the Ensure mongodb is started task, register a mongodb_service_started variable
3. Use the newly registered mongodb_service_started variable in the when clause of the Restart mongodb handler, so it only runs when the Ensure mongodb is started task did NOT report a change
This ensures that your application does not get started and then subsequently restarted in the same play, and prevents your application from being stopped while it is running some critical function on startup, for example database migrations. Restarting a service right after it has been started can cause an inconsistent state.
tasks/main.yml
---
- name: Install MongoDB package
  yum:
    name: "mongodb-org-{{ mongodb_version }}"
    state: present
  notify: Restart mongodb
- name: Configure mongodb
  template:
    src: mongod.conf.j2
    dest: "/etc/{{ mongodb_config_file }}"
    owner: root
    group: root
    mode: 0644
  notify: Restart mongodb
- name: Ensure mongodb is started
  service:
    name: "{{ mongodb_daemon_name }}"
    state: started
  register: mongodb_service_started
- name: Ensure mongodb is enabled
  service:
    name: "{{ mongodb_daemon_name }}"
    enabled: yes
handlers/main.yml
---
- name: Restart mongodb
  service:
    name: "{{ mongodb_daemon_name }}"
    state: restarted
  when: not mongodb_service_started.changed
Maybe you are looking for something like this:
---
- name: Install MongoDB package
  yum: name="mongodb-org-{{ mongodb_version }}" state=present
  register: mongodb_install
- name: Configure mongodb
  template: src=mongod.conf.j2 dest=/etc/{{ mongodb_config_file }} owner=root group=root mode=0644
  notify: mongodb restart
- name: Ensure mongodb is started and enabled
  service: name={{ mongodb_daemon_name }} state=started enabled=yes
  when: mongodb_install.changed
I can think of three ways.
Option #1. Remove the state from the task that enables the service and move it to a handler.
According to the Ansible documentation, "notify handlers are always run in the same order they are defined." Since started is idempotent and restarted always starts the service, the second handler (start) won't run if the first one does (restart).
# roles/mongodb/tasks/main.yml
- name: Install MongoDB package
  yum: name="mongodb-org-{{ mongodb_version }}" state=present
- name: Configure mongodb
  template: src=mongod.conf.j2 dest=/etc/{{ mongodb_config_file }} owner=root group=root mode=0644
  notify: mongodb restart
- name: Ensure mongodb is started and enabled
  service: name={{ mongodb_daemon_name }} enabled=yes
  notify: mongodb start

# roles/mongodb/handlers/main.yml
- name: mongodb restart
  service: name={{ mongodb_daemon_name }} state=restarted
- name: mongodb start
  service: name={{ mongodb_daemon_name }} state=started
However, handlers are only triggered if the task is changed, so this doesn't ensure that the service is running if neither of those tasks are considered changed. For instance, if the service was already installed, enabled, and configured earlier, but someone has logged on to the host and shut it down manually, then it won't be started when this playbook is run.
Option #2. Add a post_task that always runs and ensures the service has been started.
Handlers are flushed before post tasks, so if the configuration has been changed, either during first installation or because it changed, the service will either be started or restarted. However, if nothing has changed and the service isn't running for some reason, it will explicitly be started during the post_task.
# roles/mongodb/tasks/main.yml
- name: Install MongoDB package
  yum: name="mongodb-org-{{ mongodb_version }}" state=present
- name: Configure mongodb
  template: src=mongod.conf.j2 dest=/etc/{{ mongodb_config_file }} owner=root group=root mode=0644
  notify: mongodb restart
- name: Ensure mongodb is enabled
  service: name={{ mongodb_daemon_name }} enabled=yes

# In your playbook, e.g., site.yml
- hosts: all
  roles:
    - mongodb
  post_tasks:
    - name: Ensure mongodb is running
      service: name={{ mongodb_daemon_name }} state=started
Option #3. Like Option #1, except always trigger the "start" handler.
# roles/mongodb/tasks/main.yml
# ...same as Option #1 above...
- name: Ensure mongodb is started and enabled
  service: name={{ mongodb_daemon_name }} enabled=yes
  notify: mongodb start
  changed_when: yes
As long as seeing a "change" in your output doesn't bother you, this will work the way you want.
If you have a handler (in the handlers directory of your role), then you do not need an additional task to start/enable mongo. In your handler you should not start but restart (state=restarted). When notified for the first time, it will start and enable your service; in future runs, when the service is already running, it will restart it.
- name: mongodb restart
  service: name={{ mongodb_daemon_name }} state=restarted enabled=yes
No additional task required.
