How to view a single service's status via Ansible with modules rather than shell?

I need to check the status of a single service on multiple systems using an Ansible playbook. Is there a way to do this with the service modules rather than the shell module?

With the service module (and the related more specific modules like systemd) you can make sure that a service is in a desired state.
For example, the following task will enable apache to start at boot if not already configured, start apache if it is stopped, and report changed if any change was made, or ok if no change was needed.
- name: Enable and start apache
  service:
    name: apache
    enabled: yes
    state: started
Simply checking the service status without making any change is not supported by those modules. You will have to use the command line and analyse the output / return status.
Example with systemd:
- name: Check status of my service
  command: systemctl -q is-active my_service
  check_mode: no
  failed_when: false
  changed_when: false
  register: my_service_status

- name: Report status of my service
  debug:
    msg: "my_service is {{ (my_service_status.rc == 0) | ternary('Up', 'Down') }}"
To be noted:
- check_mode: no makes the task run whether or not you use '--check' on the ansible-playbook command line. Without this, in check mode, the next task would fail with an undefined variable.
- failed_when: false prevents the task from failing when the return code is different from 0 (i.e. when the service is not started). You can be more specific by listing all the return codes possible in normal conditions and failing when you get another (e.g. failed_when: my_service_status.rc not in [0, 3, X]).
- changed_when: false makes the task always report ok instead of changed, which is the default for the command and shell modules.
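For completeness, the same check can be looped over several services; this is only a sketch and the service names are placeholders:

```yaml
- name: Check status of several services
  command: "systemctl -q is-active {{ item }}"
  check_mode: no
  failed_when: false
  changed_when: false
  register: svc_status
  loop:
    - my_service
    - my_other_service

- name: Report status of each service
  debug:
    msg: "{{ item.item }} is {{ (item.rc == 0) | ternary('Up', 'Down') }}"
  loop: "{{ svc_status.results }}"
```

When a task with register runs in a loop, the registered variable holds a results list with one entry per item, hence the item.item / item.rc lookups in the second task.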

Related

Ansible playbook to check the condition of SELinux status

I am using the below playbook to check SELinux status. All I want to do further is: if the status is not disabled, make the changes in the same playbook to disable SELinux.
tasks:
  - name: To check SELinux status
    shell: getenforce
    register: result
  - set_fact: selinux_status_output="{{ result.stdout }}"
  - debug: var=selinux_status_output
You can use the selinux module
- name: Disable SELinux
  selinux:
    state: disabled
See: selinux Module
Edit:
You don't need to check whether the state is not disabled. Ansible will check the state of SELinux and only try to change it if it is not already disabled.
You may want to check the difference between declarative and imperative models.
Declarative vs. Imperative: Two Modeling Patterns for the Automated Deployment of Applications
Declarative vs. Imperative Models for Configuration Management: Which Is Really Better?
I can't comment yet, but as Yabberth said, you can just use the selinux module.
When you run the play, only the systems reported in a changed state were not already set to disabled. If the state is already disabled, Ansible will leave it alone and move on to the next task.
If you use the shell module to check first, you'll always see a changed state, since the command registers into the job flow. Running a check first and then changing afterwards is a bit overkill, considering the selinux module will do what you're asking, IMO.
If you just want to disable SELinux, follow #yabberth's answer - Ansible is declarative and idempotent unless you mess things up. Therefore, if you have a task that declares selinux state as disabled, Ansible selinux module will check and set selinux state accordingly.
On the other hand, if someone's still looking for a condition to run a task based on SELinux state, I'd use Ansible facts. Here is an example:
- seboolean:
    name: 'some_boolean'
    state: yes
    persistent: yes
  when: ansible_facts.selinux.status == 'enabled'
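If you are unsure what the SELinux facts look like on a given host, you can dump them first; this is just a diagnostic sketch using the standard gathered facts:

```yaml
- name: Show the SELinux facts gathered by setup
  debug:
    var: ansible_facts.selinux
```

On an SELinux-capable host this is typically a dict with keys such as status, mode, config_mode and type; on hosts without SELinux, status will be 'disabled' or indicate a missing selinux Python library.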
There are some cases where you may still wish to test whether SELinux is enabled. For instance, the Ansible module sefcontext generates a failure message, if SELinux is disabled. E.g.:
TASK [users : Set SELinux context of directory /foo/bar to ftpd_u] *****
fatal: [172.16.1.76]: FAILED! => {"changed": false, "msg": "SELinux is disabled on this host."}
To test whether SELinux is enabled, use selinuxenabled rather than getenforce (or perhaps both).
Here's an example of some tasks with this dependency. Note that in the first task, you ignore errors, because you don't want that task to fail based on the exit code of selinuxenabled.
- name: Test whether SELinux is enabled
  command: /usr/sbin/selinuxenabled
  ignore_errors: yes
  register: selinux_status

- name: Set SELinux context of custom ftp directory /foo/bar to ftpd_u if SELinux is enabled
  sefcontext:
    target: /foo/bar
    setype: ftpd_u
    state: present
  register: ftp_dir
  when: selinux_status.rc == 0

- name: Apply new SELinux context for custom FTP directory
  command: restorecon -irv /foo/bar
  when: ftp_dir.changed

Error handling with "failed_when" doesn't work

I wish to create an ansible playbook to run on multiple servers in order to check for conformity to the compliance rules. Therefore the playbook is supposed to check whether several services are disabled and whether they are stopped.
I do know that not all the services will be running or enabled on all machines. Therefore I wish to handle certain return codes within the playbook.
I tried the failed_when Statement for this Task. It seems the way to go as it allows to set which RCs to handle.
- hosts: server_group1
  remote_user: ansible
  tasks:
    - name: Stop services
      command: /bin/systemctl stop "{{ item }}"
      with_items:
        - cups
        - avahi
        - slapd
        - isc-dhcp-server
        - isc-dhcp-server6
        - nfs-server
        - rpcbind
        - bind9
        - vsftpd
        - dovecot
        - smbd
        - snmpd
        - squid
      register: output
      failed_when: "output.rc != 0 and output.rc != 1"
However the loop only works if I use ignore_errors: True which is not shown in the example code.
I do know that I wish to catch RCs of 0 and 1 from the command executed by the playbook, but no matter what, failed_when always generates a fatal error and my playbook fails.
I also tried the failed_when line without the "" but that doesn't change a thing.
Something is missing, but I don't see what.
Any advice?
Therefore the playbook is supposed to check whether several services are disabled and whether they are stopped.
Your playbook isn't doing either of these things. If a service exists, whether or not it's running, then systemctl stop <service> will return successfully in most cases. The only time you'll get a non-zero exit code is if either (a) the service does not exist or (b) systemd is unable to stop the service for some reason.
Note that calling systemctl stop has no effect on whether or not a service is disabled; with your current playbook, all those services would start back up the next time the host boots.
If your goal is not simply to check that services are stopped and disabled, but rather to ensure that services are stopped and disabled, you could do something like this:
- name: "Stop services"
  service:
    name: "{{ item }}"
    enabled: false
    state: stopped
  ignore_errors: true
  register: results
  loop:
    - cups
    - avahi
    - slapd
    - isc-dhcp-server
    - isc-dhcp-server6
    - nfs-server
    - rpcbind
    - bind9
    - vsftpd
    - dovecot
    - smbd
    - snmpd
    - squid
You probably want ignore_errors: true here, because you want to run the task for every item in your list.
A different way of handling errors might be with something like:
failed_when: >-
  results is failed and
  "Could not find the requested service" not in results.msg|default('')
This would only fail the task if stopping the service failed for a reason other than the fact that there was no matching service on the target host.
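Putting both pieces together, a sketch of a single task that stops each service but only fails on unexpected errors could look like this (the service list is shortened):

```yaml
- name: "Stop services, tolerating missing ones"
  service:
    name: "{{ item }}"
    enabled: false
    state: stopped
  register: results
  failed_when: >-
    results is failed and
    "Could not find the requested service" not in results.msg|default('')
  loop:
    - cups
    - avahi
    - slapd
```

Inside a loop, failed_when is evaluated per item, so at that point results refers to the result of the current item.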

Disable systemd service only if the package is installed

This is what I have
- name: disable os firewall - firewalld
  systemd:
    name: firewalld
    state: stopped
    enabled: no
This task works fine on hosts where the firewalld package is installed, but fails on hosts where it is not.
What is the best simple approach to make sure it can handle hosts which do not have firewalld installed? I do not want to use any CMDB, as that is an additional setup. I also want the task to be idempotent; using a shell command like dpkg or rpm to query whether firewalld is installed makes the Ansible playbook summary report changes, which I do not want.
Here is an example using failed_when:. While it does run every time, it ignores the failure if the package is not installed:
- name: Stop and disable chrony service
  service:
    name: chronyd
    enabled: no
    state: stopped
  register: chronyd_service_result
  failed_when: "chronyd_service_result is failed and 'Could not find the requested service' not in chronyd_service_result.msg"
What is the best approach to make sure it can handle hosts which do not have firewalld installed?
Get the information on which systems firewalld is installed from a CMDB of any kind (like Ansible vars files, since you already use Ansible), and run the task on those systems.
Likewise, do not run the task on systems that do not have firewalld installed according to the configuration pulled from the CMDB.
The actual implementations vary. Among others: specifying appropriate host groups for plays, using a conditional in the task, using the limit option for the inventory.
In your Ansible play, you can first check whether the package is installed before trying to stop or disable it.
something similar to this: https://stackoverflow.com/a/46975808/3768721
With shell approach:
- name: Disable {{ service }} if enabled
  shell: if systemctl is-enabled --quiet {{ service }}; then systemctl disable {{ service }} && echo disable_ok ; fi
  register: output
  changed_when: "'disable_ok' in output.stdout"
It produces 3 states:
service is absent or disabled already — ok
service exists and was disabled — changed
service exists and disable failed — failed
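A module-only alternative, sketched here on the assumption that the hosts use a package manager supported by package_facts, is to gather the installed packages and make the task conditional:

```yaml
- name: Gather installed package facts
  package_facts:
    manager: auto

- name: Stop and disable firewalld only if the package is installed
  systemd:
    name: firewalld
    state: stopped
    enabled: no
  when: "'firewalld' in ansible_facts.packages"
```

This keeps the play idempotent: the systemd task is skipped entirely on hosts without the package, so nothing is reported as changed or failed there.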

Ansible define a custom service path and notify only if the service is restarted

I'm working on a new opsware agent service check on AIX, its agent path is /etc/rc.d/init.d/opsware-agent.
Firstly please let me know how to define this variable path and call in service.
Secondly it should run the command only if this opsware agent service has been restarted. How to do it, since below one is not working.
- name: Ensure Opsware agent is running on AIX
  service: name={{ aix_service_path }} state=started enabled=yes
  register: aix_status

- name: Opsware AIX Notify only if it failed
  when: aix_status|success
  notify:
    - hardware refresh
    - software refresh

- name: hardware refresh
  command: chdir=/opt/opsware/agent/pylibs/cog/ ./bs_hardware

- name: software refresh
  command: chdir=/opt/opsware/agent/pylibs/cog/ ./bs_Software
Let me assume the YAML formatting is correct and just got broken in your post. Otherwise you first need to indent your lines correctly.
Then make sure your handlers are inside handlers/main.yml. In your post it looks like everything is in the same file which then would of course get executed on every play.
Finally you can trigger your handlers in the service task, no need to have the dummy task, which additionally wouldn't work because there actually is no action defined.
So this should work:
your_role/tasks/main.yml:
---
- name: Ensure Opsware agent is running on AIX
  service: name={{ aix_service_path }} state=started enabled=yes
  notify:
    - hardware refresh
    - software refresh
...
your_role/handlers/main.yml:
---
- name: hardware refresh
  command: chdir=/opt/opsware/agent/pylibs/cog/ ./bs_hardware
- name: software refresh
  command: chdir=/opt/opsware/agent/pylibs/cog/ ./bs_Software
...
The handlers will be notified only when the service status is changed.
How you define aix_service_path depends on what you want to achieve. You can define a default value in your_role/defaults/main.yml:
---
aix_service_path: foo
...
Or force it by defining it in your_role/vars/main.yml - same format as defaults above.
You can pass parameters in the role calls in your playbook, e.g.
roles:
  - role: your_role
    aix_service_path: foo
A parameter passed like this would override a definition in defaults/main.yml, but not those defined in vars/main.yml.
You can define it in a vars section in the playbook.
You can pass it on command-line when calling your playbook.
ansible-playbook ... --extra-vars "aix_service_path=foo"
Or define it as a host or group var. You can also define variables in the inventory... There really are a ton of options for defining variables. You have to decide which fits your needs. Check out the variables section in the Ansible docs for more details.
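As one concrete illustration of the group-var option (the group name and the value here are assumptions, not taken from the question):

```yaml
# inventory/group_vars/aix.yml
---
aix_service_path: opsware-agent
...
```

Any host in the aix inventory group then picks up aix_service_path automatically, with the usual precedence: role defaults are overridden by group vars, which are overridden by role vars and extra vars.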

Using Ansible to stop service that might not exist

I am using Ansible 2.6.1.
I am trying to ensure that a certain service is not running on target hosts.
The problem is that the service might not exist at all on some hosts. If that is the case, Ansible fails with an error because of the missing service. The services are run by systemd.
Using service module:
- name: Stop service
  service:
    name: '{{ target_service }}'
    state: stopped
Fails with the error Could not find the requested service SERVICE: host.
Trying with the command module:
- name: Stop service
  command: service {{ target_service }} stop
Gives the error: Failed to stop SERVICE.service: Unit SERVICE.service not loaded.
I know I could use ignore_errors: yes, but it might hide real errors too.
Another solution would be having two tasks: one checking for the existence of the service, and another that runs only when the first task found the service, but that feels complex.
Is there a simpler way to ensure that a service is stopped and to avoid errors if the service does not exist?
I'm using the following steps:
- name: Get the list of services
  service_facts:

- name: Stop service
  systemd:
    name: <service_name_here>
    state: stopped
  when: "'<service_name_here>.service' in services"
service_facts could be called once in the gathering facts phase.
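To see exactly which keys the services dictionary contains on a host, you can dump it; this is just a diagnostic sketch, assuming nothing beyond the service_facts module:

```yaml
- name: Get the list of services
  service_facts:

- name: Show everything service_facts collected
  debug:
    var: services
```

Each key looks like 'sshd.service' and maps to a dict with name, state, status and source fields, which is why the when condition above tests for '<service_name_here>.service'.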
The following registers the module output in service_stop; the task will fail only if the output does not contain "Could not find the requested service" AND the service fails to stop based on the return code. Since you did not include the entire stack trace, I am assuming the error you posted is in the standard output; you may need to adjust slightly based on your error.
- name: Stop service
  register: service_stop
  failed_when:
    - '"Could not find the requested service" not in service_stop.stdout'
    - service_stop.rc != 0
  service:
    name: '{{ target_service }}'
    state: stopped
IMHO there isn't a simpler way to ensure that a service is stopped. The Ansible service module doesn't check a service's existence. Either (1) more than one task, or (2) a command that checks the service's existence is needed. The command would be OS-specific. For example, for FreeBSD:
command: "service -e | grep {{ target_service }} && service {{ target_service }} stop"
Same solution as Vladimir's, but for Ubuntu (systemd) and with better state handling:
- name: restart {{ target_service }} if exists
  shell: if systemctl is-enabled --quiet {{ target_service }}; then systemctl restart {{ target_service }} && echo restarted ; fi
  register: output
  changed_when: "'restarted' in output.stdout"
It produces 3 states:
service is absent or disabled — ok
service exists and was restarted — changed
service exists and restart failed — failed
Same solution as ToughKernel's, but using the systemd module to manage the service.
- name: disable ntpd service
  systemd:
    name: ntpd
    enabled: no
    state: stopped
  register: stop_service
  failed_when:
    - stop_service.failed == true
    - '"Could not find the requested service" not in stop_service.msg'
  # the order is important: only when failed == true will the result
  # have a 'msg' attribute
When the service module fails, check if the service that needs to be stopped is installed at all. That is similar to this answer, but avoids the rather lengthy gathering of service facts unless necessary.
- name: Stop a service
  block:
    - name: Attempt to stop the service
      service:
        name: < service name >
        state: stopped
  rescue:
    - name: Get the list of services
      service_facts:
    - name: Verify that the service is not installed
      assert:
        that:
          - "'< service name >.service' not in services"