I am using Ansible 2.6.1.
I am trying to ensure that a certain service is not running on the target hosts.
The problem is that the service might not exist at all on some hosts. If that is the case, Ansible fails with an error because of the missing service. The services are managed by systemd.
Using the service module:
- name: Stop service
  service:
    name: '{{ target_service }}'
    state: stopped
This fails with the error "Could not find the requested service SERVICE: host".
Trying with the command module:
- name: Stop service
  command: service {{ target_service }} stop
This gives the error "Failed to stop SERVICE.service: Unit SERVICE.service not loaded."
I know I could use ignore_errors: yes, but it might hide real errors too.
Another solution would be to have two tasks: one checking for the existence of the service, and another that runs only when the first task found the service, but that feels complex.
Is there a simpler way to ensure that a service is stopped and avoid errors if the service does not exist?
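For illustration, the two-task approach I have in mind would look roughly like this (just a sketch assuming systemd, relying on systemctl status returning exit code 4 for a missing unit):

- name: Check whether the service unit exists
  command: systemctl status {{ target_service }}
  register: service_status
  changed_when: false
  # systemctl status typically exits 0 (running), 3 (stopped) or 4 (no such unit)
  failed_when: service_status.rc not in [0, 1, 2, 3, 4]

- name: Stop service only if the unit was found
  service:
    name: '{{ target_service }}'
    state: stopped
  when: service_status.rc != 4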
I'm using the following steps:
- name: Get the list of services
  service_facts:

- name: Stop service
  systemd:
    name: <service_name_here>
    state: stopped
  when: "'<service_name_here>.service' in services"
service_facts could be called just once, in the fact-gathering phase of the play.
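For example, a sketch of a play that gathers the service facts once up front and reuses the services fact in later tasks (host pattern and service name are placeholders):

- hosts: all
  pre_tasks:
    - name: Get the list of services
      service_facts:

  tasks:
    - name: Stop service
      systemd:
        name: <service_name_here>
        state: stopped
      when: "'<service_name_here>.service' in services"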
The following registers the module output in service_stop; the task will fail only if the standard output does not contain "Could not find the requested service" AND the service fails to stop based on the return code. Since you did not include the entire stack trace, I am assuming the error you posted appears on standard output; you may need to adjust this slightly based on your error.
- name: Stop service
  register: service_stop
  failed_when:
    - '"Could not find the requested service" not in service_stop.stdout'
    - service_stop.rc != 0
  service:
    name: '{{ target_service }}'
    state: stopped
IMHO there isn't a simpler way to ensure that a service is stopped. The Ansible service module doesn't check for the service's existence. Either (1) more than one task, or (2) a command that checks the service's existence is needed. The command would be OS-specific. For example, for FreeBSD:
command: "service -e | grep {{ target_service }} && service {{ target_service }} stop"
Same solution as Vladimir's, but for Ubuntu (systemd) and with better state handling:
- name: restart {{ target_service }} if exists
  shell: if systemctl is-enabled --quiet {{ target_service }}; then systemctl restart {{ target_service }} && echo restarted ; fi
  register: output
  changed_when: "'restarted' in output.stdout"
It produces 3 states:
service is absent or disabled — ok
service exists and was restarted — changed
service exists and restart failed — failed
Same solution as ToughKernel's, but using the systemd module to manage the service.
- name: disable ntpd service
  systemd:
    name: ntpd
    enabled: no
    state: stopped
  register: stop_service
  failed_when:
    - stop_service.failed == true
    - '"Could not find the requested service" not in stop_service.msg'
    # The order is important: only when failed == true will the result
    # contain the 'msg' attribute.
When the service module fails, check if the service that needs to be stopped is installed at all. That is similar to this answer, but avoids the rather lengthy gathering of service facts unless necessary.
- name: Stop a service
  block:
    - name: Attempt to stop the service
      service:
        name: < service name >
        state: stopped
  rescue:
    - name: Get the list of services
      service_facts:

    - name: Verify that the service is not installed
      assert:
        that:
          - "'< service name >.service' not in services"
Related
I am creating a systemd service unit using the template module:
---
- name: Systemd service
  template:
    src: sonar.unit.j2
    dest: /etc/systemd/system/sonarqube.service
  when: "ansible_service_mgr == 'systemd'"
The contents of sonarqube.service can of course change. On change, I want to restart the service. How can I do this?
There are two solutions.
Register + When changed
You can register the template module output (with its status change),
register: service_conf
and then use a when clause:
when: service_conf.changed
For example:
---
- name: Systemd service
  template:
    src: sonar.unit.j2
    dest: /etc/systemd/system/sonarqube.service
  when: "ansible_service_mgr == 'systemd'"
  register: service_conf

- name: restart service
  service:
    name: sonarqube
    state: restarted
  when: service_conf.changed
Handler + Notify
You define your restart-service task as a handler, and then in your template task you notify that handler.
tasks:
  - name: Add Sonarqube to Systemd service
    template:
      src: sonar.unit.j2
      dest: /etc/systemd/system/sonarqube.service
    when: "ansible_service_mgr == 'systemd'"
    notify: Restart Sonarqube

  - …

handlers:
  - name: Restart Sonarqube
    service:
      name: sonarqube
      state: restarted
More info can be found in the Ansible documentation.
What is the difference between the two?
In the first case, the service will restart directly. With the handler, the restart will happen at the end of the play.
Another difference: if you have several tasks whose changes require a restart of your service, you simply add the notify to all of them.
The handler will run if any of those tasks reports a changed status. With the first solution, you would have to register several results, and the when clause would grow into a longer clause_1 or clause_2 or …
The handler will run only once, even if it is notified several times.
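For instance, a rough sketch of several tasks notifying the same handler (the second template and its paths are invented here purely for illustration):

tasks:
  - name: Deploy Sonarqube unit file
    template:
      src: sonar.unit.j2
      dest: /etc/systemd/system/sonarqube.service
    notify: Restart Sonarqube

  - name: Deploy Sonarqube configuration   # hypothetical second config file
    template:
      src: sonar.properties.j2
      dest: /opt/sonarqube/conf/sonar.properties
    notify: Restart Sonarqube

handlers:
  - name: Restart Sonarqube
    service:
      name: sonarqube
      state: restarted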
This calls for a handler
---
- name: Testplaybook
  hosts: all

  handlers:
    - name: restart_service
      service:
        name: <servicename>
        state: restarted

  tasks:
    - template:
        src: ...
        dest: ...
      notify:
        - restart_service
The handler will automatically get notified by the module when something changes. See the documentation for further information on handlers.
Since you are using systemd, you will also need to execute daemon-reload because you updated the service file.
The task just templates the service file and notifies a handler:
- name: Systemd service
  template:
    src: sonar.unit.j2
    dest: /etc/systemd/system/sonarqube.service
  when: "ansible_service_mgr == 'systemd'"
  notify: restart sonarqube systemd
Based on the presence of your specific when clause above, I'm assuming you might want to specify separate handlers in the case that systemd is not in use. The handler for the systemd case would look like the following:
- name: restart sonarqube systemd
  systemd:
    name: sonarqube
    state: restarted
    daemon_reload: yes
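A non-systemd counterpart (not part of the original answer, just a sketch) could fall back to the generic service module, which needs no daemon reload:

- name: restart sonarqube
  service:
    name: sonarqube
    state: restarted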
I wish to create an ansible playbook to run on multiple servers in order to check for conformity to the compliance rules. Therefore the playbook is supposed to check whether several services are disabled and whether they are stopped.
I do know that not all the services will be running or enabled on all machines. Therefore I wish to handle certain return codes within the playbook.
I tried the failed_when statement for this task. It seems like the way to go, as it allows setting which return codes to handle.
- hosts: server_group1
  remote_user: ansible
  tasks:
    - name: Stop services
      command: /bin/systemctl stop "{{ item }}"
      with_items:
        - cups
        - avahi
        - slapd
        - isc-dhcp-server
        - isc-dhcp-server6
        - nfs-server
        - rpcbind
        - bind9
        - vsftpd
        - dovecot
        - smbd
        - snmpd
        - squid
      register: output
      failed_when: "output.rc != 0 and output.rc != 1"
However, the loop only works if I use ignore_errors: True, which is not shown in the example code.
I know that I want to accept return codes of 0 and 1 from the command executed by the playbook, but no matter what, failed_when always generates a fatal error and my playbook fails.
I also tried the failed_when line without the quotes, but that doesn't change a thing.
Something is missing, but I don't see what.
Any advice?
Therefore the playbook is supposed to check whether several services are disabled and whether they are stopped.
Your playbook isn't doing either of these things. If a service exists, whether or not it's running, then systemctl stop <service> will return successfully in most cases. The only time you'll get a non-zero exit code is if either (a) the service does not exist or (b) systemd is unable to stop the service for some reason.
Note that calling systemctl stop has no effect on whether or not a service is disabled; with your current playbook, all those services would start back up the next time the host boots.
If your goal is not simply to check that services are stopped and disabled, but rather to ensure that services are stopped and disabled, you could do something like this:
- name: "Stop services"
service:
name: "{{ item }}"
enabled: false
state: stopped
ignore_errors: true
register: results
loop:
- cups
- avahi
- slapd
- isc-dhcp-server
- isc-dhcp-server6
- nfs-server
- rpcbind
- bind9
- vsftpd
- dovecot
- smbd
- snmpd
- squid
You probably want ignore_errors: true here, because you want to run the task for every item in your list.
A different way of handling errors might be with something like:
failed_when: >-
  results is failed and
  "Could not find the requested service" not in results.msg|default('')
This would only fail the task if stopping the service failed for a reason other than the fact that there was no matching service on the target host.
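Putting the two together, the loop task with a per-item failed_when might look roughly like this (the service list is shortened here just for illustration):

- name: "Stop and disable services"
  service:
    name: "{{ item }}"
    enabled: false
    state: stopped
  register: results
  failed_when: >-
    results is failed and
    "Could not find the requested service" not in results.msg|default('')
  loop:
    - cups
    - avahi
    - slapd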
I have an Ansible playbook for deploying a Java app as an init.d daemon.
Being a beginner in both Ansible and Linux, I'm having trouble conditionally executing tasks on a host based on the host's status.
Namely, I have some hosts where the service is already present and running, and I want to stop it there before doing anything else. And then there might be new hosts which don't have the service yet. So I can't simply use service: name={{service_name}} state=stopped, because this would fail on new hosts.
How can I achieve this? Here's what I have so far:
- name: Check if Service Exists
  shell: "if chkconfig --list | grep -q my_service; then echo true; else echo false; fi;"
  register: service_exists

# This should only execute on hosts where the service is present
- name: Stop Service
  service: name={{service_name}} state=stopped
  when: service_exists
  register: service_stopped

# This too
- name: Remove Old App Folder
  command: rm -rf {{app_target_folder}}
  when: service_exists

# This should be executed on all hosts, but only after the service has stopped, if it was present
- name: Unpack App Archive
  unarchive: src=../target/{{app_tar_name}} dest=/opt
See the service_facts module, new in Ansible 2.5.
- name: Populate service facts
  service_facts:

- debug:
    msg: Docker installed!
  when: "'docker' in services"
Of course I could also just check if the wrapper script exists in /etc/init.d. So this is what I ended up with:
- name: Check if Service Exists
  stat: path=/etc/init.d/{{service_name}}
  register: service_status

- name: Stop Service
  service: name={{service_name}} state=stopped
  when: service_status.stat.exists
  register: service_stopped
I modified Florian's answer to only use the return code of the service command (this worked on Mint 18.2)
- name: Check if Logstash service exists
  shell: service logstash status
  register: logstash_status
  failed_when: not(logstash_status.rc == 3 or logstash_status.rc == 0)

- name: Stop Logstash service if it exists
  service:
    name: logstash
    state: stopped
  when: logstash_status.rc == 0
It would be nice if the "service" module could handle "unrecognized service" errors.
This is my approach, using the service command instead of checking for an init script:
- name: check for apache
  shell: "service apache2 status"
  register: _svc_apache
  failed_when: >
    _svc_apache.rc != 0 and ("unrecognized service" not in _svc_apache.stderr)

- name: disable apache
  service: name=apache2 state=stopped enabled=no
  when: "_svc_apache.rc == 0"
Check the exit code of "service status" and only accept a non-zero exit code when the output contains "unrecognized service".
If the exit code was 0, the service is installed (stopped or running).
Another approach for systemd (from Jakuje):
- name: Check if cups-browsed service exists
  command: systemctl cat cups-browsed
  check_mode: no
  register: cups_browsed_exists
  changed_when: False
  failed_when: cups_browsed_exists.rc not in [0, 1]

- name: Stop cups-browsed service
  systemd:
    name: cups-browsed
    state: stopped
  when: cups_browsed_exists.rc == 0
Building on Maciej's answer for RedHat 8, and combining it with the comments made on it.
This is how I managed to stop Celery only if it has already been installed:
- name: Populate service facts
  service_facts:

- debug:
    msg: httpd installed!
  when: ansible_facts.services['httpd.service'] is defined

- name: Stop celery.service
  service:
    name: celery.service
    state: stopped
    enabled: true
  when: ansible_facts.services['celery.service'] is defined
You can drop the debug statement; it's there just to confirm that ansible_facts is working.
This way using only the service module has worked for us:
- name: Disable *service_name*
  service:
    name: *service_name*
    enabled: no
    state: stopped
  register: service_command_output
  failed_when: >
    service_command_output|failed
    and 'unrecognized service' not in service_command_output.msg
    and 'Could not find the requested service' not in service_command_output.msg
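Note that the failed filter used above (service_command_output|failed) was deprecated and later removed in newer Ansible releases; on recent versions the equivalent condition would use the is failed test, roughly:

failed_when: >
  service_command_output is failed
  and 'unrecognized service' not in service_command_output.msg
  and 'Could not find the requested service' not in service_command_output.msg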
My few cents: the same approach as above, but for Kubernetes.
Check if the kubelet service is running:
- name: "Obtain state of kublet service"
command: systemctl status kubelet.service
register: kubelet_status
failed_when: kubelet_status.rc > 3
Display a debug message if the kubelet service is not running:
- debug:
    msg: "{{ kubelet_status.stdout }}"
  when: "'running' not in kubelet_status.stdout"
You can use the service_facts module since Ansible 2.5. But you need to know that the output contains the real names of the services, like docker.service or something#.service. So you have different options, like:
- name: Populate service facts
  service_facts:

- debug:
    msg: Docker installed!
  when: "'docker.service' in services"
Or you can search for a string beginning with the service name; that is much more reliable, because the service names differ between distributions:
- name: Populate service facts
  service_facts:

- debug:
    msg: Docker installed!
  when: "services.keys()|list|select('search', '^docker')|length > 0"
This is what I have
- name: disable os firewall - firewalld
  systemd:
    name: firewalld
    state: stopped
    enabled: no
This task works fine on hosts where the firewalld package is installed, but fails on hosts where it is not.
What is the best simple approach to make sure the play can handle hosts which do not have firewalld installed? I do not want to use any CMDB, as that would be an additional setup. Also, I want the task to be idempotent; using a shell command like dpkg or rpm to query whether firewalld is installed makes the Ansible playbook summary report changes, which I do not want.
Here is an example using failed_when:. While it does run every time, it ignores the failure if the package is not installed:
- name: Stop and disable chrony service
  service:
    name: chronyd
    enabled: no
    state: stopped
  register: chronyd_service_result
  failed_when: "chronyd_service_result is failed and 'Could not find the requested service' not in chronyd_service_result.msg"
What is the best approach to make sure it can handle hosts which do not have firewalld installed?
Get the information on which systems firewalld is installed from a CMDB of any kind (like Ansible vars files, since you already use Ansible), and run the task on those systems.
Likewise, do not run the task on systems that do not have firewalld installed according to the configuration pulled from the CMDB.
The actual implementation varies. Among others: specifying appropriate host groups for plays, using a conditional in the task, or using the limit option on the inventory.
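As a rough sketch of the vars-file variant (the firewalld_installed variable is invented here and would be set per host group, e.g. in group_vars):

# group_vars/<group>.yml
firewalld_installed: true

# task
- name: disable os firewall - firewalld
  systemd:
    name: firewalld
    state: stopped
    enabled: no
  when: firewalld_installed | default(false)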
In your Ansible play, you can first check whether the package is installed before trying to stop/start it;
something similar to this: https://stackoverflow.com/a/46975808/3768721
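A sketch of that idea using the package_facts module, which keeps the check idempotent and reports no changes:

- name: Gather installed package facts
  package_facts:
    manager: auto

- name: Disable firewalld only if the package is installed
  systemd:
    name: firewalld
    state: stopped
    enabled: no
  when: "'firewalld' in ansible_facts.packages"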
With the shell approach:
- name: Disable {{ service }} if enabled
  shell: if systemctl is-enabled --quiet {{ service }}; then systemctl disable {{ service }} && echo disable_ok ; fi
  register: output
  changed_when: "'disable_ok' in output.stdout"
It produces 3 states:
service is absent or disabled already — ok
service exists and was disabled — changed
service exists and disable failed — failed
I tried to use the Ansible service module to restart a service, but I got an error.
tasks:
  - ini_file: dest=/etc/dd-agent/datadog.conf
              section=Main
              option=use_mount
              state=absent
    register: ddagent

  - service: name='datadog-agent' state=reloaded
    when: ddagent.changed
This generated this error: ERROR: change handler (restart datadog) is not defined
I know that an alternative is to execute:
- command: "service datadog-agent restart"
Still, in this case what's the purpose of the service module?
You should add the following code:
handlers:
  - name: restart datadog
    service: name=datadog-agent state=restarted
The problem you are facing is that you don't have the handler defined. This will do the job.
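Putting it together, the ini_file task also needs a notify pointing at that handler; a minimal sketch of the full play (task name added for illustration):

- hosts: all
  tasks:
    - name: Remove use_mount from datadog.conf
      ini_file: dest=/etc/dd-agent/datadog.conf section=Main option=use_mount state=absent
      notify: restart datadog

  handlers:
    - name: restart datadog
      service: name=datadog-agent state=restarted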