Ansible playbook to check the condition of SELinux status

I am using the playbook below to check the SELinux status. All I want to do further is: if the status is not disabled, then in the same playbook I need to make the change to disable SELinux.
tasks:
  - name: To check SELinux status
    shell: getenforce
    register: result
  - set_fact: selinux_status_output="{{ result.stdout }}"
  - debug: var=selinux_status_output
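If you want to keep this check and also flip SELinux off in the same playbook, one possible extension is a conditional task keyed on the registered output, roughly like the sketch below, appended to the tasks list above (the selinux module it uses is covered in more detail in the answers that follow):
  - name: Disable SELinux when getenforce did not report "Disabled"
    selinux:
      state: disabled
    # getenforce prints "Enforcing", "Permissive" or "Disabled";
    # note that fully disabling SELinux still requires a reboot.
    when: selinux_status_output != 'Disabled'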

You can use the selinux module
- name: Disable SELinux
  selinux:
    state: disabled
See: selinux Module
Edit:
You don't need to check whether the state is not disabled. Ansible will check the state of SELinux itself and will only try to change it if it is not already disabled.
You may want to check the difference between declarative and imperative models.
Declarative vs. Imperative: Two Modeling Patterns
for the Automated Deployment of Applications
Declarative vs. Imperative Models for Configuration Management: Which Is Really Better?

I can't comment yet, but as Yabberth said, you can just use the selinux module.
When you run the play, only systems that were not already set to disabled will end up in a changed state. If the state is already disabled, Ansible will leave it alone and move on to the next task.
If you use the shell module to check first, you'll always see a changed state, since the command is registered into the job flow. Running a check first and then making the change afterwards is a bit of overkill, IMO, considering the selinux module already does what you're asking.
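If you do keep a getenforce check purely for reporting, a small tweak (sketch) stops it from always showing up as changed:
- name: Check SELinux status (report only)
  shell: getenforce
  register: result
  changed_when: false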

If you just want to disable SELinux, follow yabberth's answer. Ansible is declarative and idempotent unless you mess things up, so if you have a task that declares the SELinux state as disabled, the Ansible selinux module will check and set the SELinux state accordingly.
On the other hand, if someone's still looking for a condition to run a task based on SELinux state, I'd use Ansible facts. Here is an example:
- seboolean:
    name: 'some_boolean'
    state: yes
    persistent: yes
  when: ansible_facts.selinux.status == 'enabled'
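If you are unsure what values are available there, a quick debug task (a sketch; it relies on the default fact gathering being enabled) prints the whole SELinux fact:
- name: Show the gathered SELinux facts
  debug:
    var: ansible_facts.selinux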

There are some cases where you may still wish to test whether SELinux is enabled. For instance, the Ansible module sefcontext fails if SELinux is disabled, e.g.:
TASK [users : Set SELinux context of directory /foo/bar to ftpd_u] ***************************************************************************************************************************************************************
fatal: [172.16.1.76]: FAILED! => {"changed": false, "msg": "SELinux is disabled on this host."}
To test whether SELinux is enabled, use selinuxenabled rather than getenforce (or perhaps both).
Here's an example of some tasks with this dependency. Note that in the first task, you ignore errors, because you don't want that task to fail based on the exit code of selinuxenabled.
- name: Test whether SELinux is enabled
  command: /usr/sbin/selinuxenabled
  ignore_errors: yes
  register: selinux_status

- name: Set SELinux context of custom ftp directory /foo/bar to ftpd_u if SELinux is enabled
  sefcontext:
    target: /foo/bar
    setype: ftpd_u
    state: present
  register: ftp_dir
  when: selinux_status.rc == 0

- name: Apply new SELinux context for custom FTP directory
  command: restorecon -irv /foo/bar
  when: ftp_dir.changed

Related

Error handling with "failed_when" doesn't work

I wish to create an Ansible playbook to run on multiple servers in order to check their conformity to the compliance rules. The playbook is therefore supposed to check whether several services are disabled and whether they are stopped.
I know that not all the services will be running or enabled on all machines, so I wish to handle certain return codes within the playbook.
I tried the failed_when statement for this task. It seems the way to go, as it allows me to set which RCs to handle.
- hosts: server_group1
  remote_user: ansible
  tasks:
    - name: Stop services
      command: /bin/systemctl stop "{{ item }}"
      with_items:
        - cups
        - avahi
        - slapd
        - isc-dhcp-server
        - isc-dhcp-server6
        - nfs-server
        - rpcbind
        - bind9
        - vsftpd
        - dovecot
        - smbd
        - snmpd
        - squid
      register: output
      failed_when: "output.rc != 0 and output.rc != 1"
However, the loop only works if I use ignore_errors: True, which is not shown in the example code.
I want to catch RCs of 0 and 1 from the command executed by the playbook, but no matter what, failed_when always generates a fatal error and my playbook fails.
I also tried the failed_when line without the quotes, but that doesn't change a thing.
Something is missing, but I don't see what.
Any advice?
Therefore the playbook is supposed to check whether several services are disabled and whether they are stopped.
Your playbook isn't doing either of these things. If a service exists, whether or not it's running, then systemctl stop <service> will return successfully in most cases. The only time you'll get a non-zero exit code is if either (a) the service does not exist or (b) systemd is unable to stop the service for some reason.
Note that calling systemctl stop has no effect on whether or not a service is disabled; with your current playbook, all those services would start back up the next time the host boots.
If your goal is not simply to check that services are stopped and disabled, but rather to ensure that services are stopped and disabled, you could do something like this:
- name: "Stop services"
service:
name: "{{ item }}"
enabled: false
state: stopped
ignore_errors: true
register: results
loop:
- cups
- avahi
- slapd
- isc-dhcp-server
- isc-dhcp-server6
- nfs-server
- rpcbind
- bind9
- vsftpd
- dovecot
- smbd
- snmpd
- squid
You probably want ignore_errors: true here, because you want to run the task for every item in your list.
A different way of handling errors might be with something like:
failed_when: >-
  results is failed and
  "Could not find the requested service" not in results.msg|default('')
This would only fail the task if stopping the service failed for a reason other than the fact that there was no matching service on the target host.
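Putting the loop and that failed_when together, the whole task might look roughly like this (a sketch; the service list is shortened, and the message match simply mirrors the snippet above):
- name: "Stop and disable services, tolerating missing ones"
  service:
    name: "{{ item }}"
    enabled: false
    state: stopped
  register: results
  failed_when: >-
    results is failed and
    "Could not find the requested service" not in results.msg|default('')
  loop:
    - cups
    - avahi
    - slapd
    # same list of services as above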

How to view the single service status via Ansible with modules rather than shell?

I need to check a single service's status on multiple systems using an Ansible playbook. Is there a way to do this using the service module rather than the shell module?
With the service module (and the related more specific modules like systemd) you can make sure that a service is in a desired state.
For example, the following task will enable apache to start at boot if it is not already configured to, start apache if it is stopped, and report changed if any change was made or ok if no change was needed.
- name: Enable and start apache
  service:
    name: apache
    enabled: yes
    state: started
Simply checking the service status without making any change is not supported by those modules. You will have to use the command line and analyse the output / return status.
Example with systemd:
- name: Check status of my service
  command: systemctl -q is-active my_service
  check_mode: no
  failed_when: false
  changed_when: false
  register: my_service_status

- name: Report status of my service
  debug:
    msg: "my_service is {{ (my_service_status.rc == 0) | ternary('Up', 'Down') }}"
To be noted:
check_mode: no makes the task run whether or not you use --check on the ansible-playbook command line. Without this, in check mode, the next task would fail with an undefined variable.
failed_when: false prevents the task from failing when the return code is different from 0 (i.e. when the service is not started). You can be more specific by listing all the return codes possible in normal conditions and failing when you get any other one (e.g. failed_when: my_service_status.rc not in [0, 3, X]).
changed_when: false makes the task always report ok instead of changed, which is the default for the command and shell modules.
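As a concrete illustration of that stricter variant (a sketch; 0 = active and 3 = inactive are the usual systemctl is-active return codes, so adjust the list for your environment):
- name: Check status of my service (fail only on unexpected return codes)
  command: systemctl -q is-active my_service
  check_mode: no
  changed_when: false
  register: my_service_status
  failed_when: my_service_status.rc not in [0, 3]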

Disable systemd service only if the package is installed

This is what I have
- name: disable os firewall - firewalld
  systemd:
    name: firewalld
    state: stopped
    enabled: no
This task works fine on hosts where the firewalld package is installed, but fails on hosts where it is not.
What is the best simple approach to make sure it can handle hosts which do not have firewalld installed? I do not want to use any CMDB, as that is an additional setup. Also, I want the task to be idempotent; using a shell command like dpkg or rpm to query whether firewalld is installed makes the Ansible playbook summary report changes, which I do not want.
Here is an example using failed_when:. While it does run every time, it ignores the failure if the package is not installed:
- name: Stop and disable chrony service
  service:
    name: chronyd
    enabled: no
    state: stopped
  register: chronyd_service_result
  failed_when: "chronyd_service_result is failed and 'Could not find the requested service' not in chronyd_service_result.msg"
What is the best approach to make sure it can handle hosts which do not have firewalld installed?
Get the information about which systems have firewalld installed from a CMDB of any kind (like Ansible vars files, since you already use Ansible), and run the task on those systems.
Likewise, do not run the task on systems that, according to the configuration pulled from the CMDB, do not have firewalld installed.
The actual implementations vary. Among others: specifying appropriate host groups for plays, using a conditional in the task, or using the limit option for the inventory; a vars-based sketch follows.
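For instance, with a simple inventory variable acting as the "CMDB" (the variable name has_firewalld is made up for this sketch), the task can be gated like this:
# group_vars/<some group>.yml -- hypothetical variable
# has_firewalld: true

- name: disable os firewall - firewalld
  systemd:
    name: firewalld
    state: stopped
    enabled: no
  when: has_firewalld | default(false)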
In your Ansible play, you can first check whether the package is installed before trying to stop/disable it.
See something similar to this: https://stackoverflow.com/a/46975808/3768721
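One module-based way to do that check (avoiding dpkg/rpm through shell) is package_facts; a sketch:
- name: Gather the list of installed packages
  package_facts:
    manager: auto

- name: disable os firewall - firewalld (only when the package is installed)
  systemd:
    name: firewalld
    state: stopped
    enabled: no
  when: "'firewalld' in ansible_facts.packages"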
With a shell approach:
- name: Disable {{ service }} if enabled
  shell: if systemctl is-enabled --quiet {{ service }}; then systemctl disable {{ service }} && echo disable_ok; fi
  register: output
  changed_when: "'disable_ok' in output.stdout"
It produces 3 states:
service is absent or disabled already — ok
service exists and was disabled — changed
service exists and disable failed — failed

How to implement conditional logic in ansible regarding modified files?

In ansible, I want to restart a service only if the configuration was changed.
Here is an example:
- hosts: workers
  tasks:
    - lineinfile: 'dest=/etc/default/autofs regexp=^OPTIONS= line="OPTIONS=\"-O soft\"" backup=yes'
    - shell: service autofs reload
As you can see this code will always restart the autofs, even when the configuration file is not updated.
How can I improve this so it will restart only when the configuration file is changed?
Note: that's a generic question that is not specific to autofs, it could apply to any service where I do want to execute something IF a configuration file was changed, probably via lineinfile or ini_file core modules.
For a start, you should be using the service module for controlling running services rather than shelling out. In general, if there's a proper module for something and it does what you need, then you should use it and only resort to shelling out for edge cases.
Also, when an Ansible task runs, it returns a set of results that you can register and use directly. This nearly always includes a changed attribute, which is a boolean saying whether Ansible thinks it changed something (it can't always know: if a shell task returns something in stdout, then Ansible assumes something changed unless you directly override it with changed_when).
So you could go with something like this:
- hosts: workers
  tasks:
    - name: Set autofs options
      lineinfile: 'dest=/etc/default/autofs regexp=^OPTIONS= line="OPTIONS=\"-O soft\"" backup=yes'
      register: result

    - name: Reload autofs if autofs options are changed
      service: name=autofs state=reloaded
      when: result.changed
If you create a role instead of using loose tasks directly in your playbook, you can work with handlers. Also see Best Practices: Task And Handler Organization For A Role.
Tasks file of your role:
- name: Change autofs config
  lineinfile: dest=/etc/default/autofs
              regexp=^OPTIONS=
              line="OPTIONS=\"-O soft\""
              backup=yes
  notify:
    - Restart autofs
Then in the handlers file of the same role:
- name: Restart autofs
  service: name=autofs
           state=restarted
The handler gets notified whenever the config task reports a changed state.
PS: I used the service module for managing the service. You should only use the shell module if no specific module for your task is available.
Regarding roles:
One thing you will definitely want to do though, is use the “roles” organization feature, which is documented as part of the main playbooks page. See Playbook Roles and Include Statements. You absolutely should be using roles. Roles are great. Use roles. Roles! Did we say that enough? Roles are great.
I know this is old, but I landed on this and felt there was more to add to what others have already answered. You should probably use a handler, but you do not need a discrete role to use a handler. Here's an example of handlers used directly in the playbook.
---
- hosts: workers
  tasks:
    - lineinfile: 'dest=/etc/default/autofs regexp=^OPTIONS= line="OPTIONS=\"-O soft\"" backup=yes'
      notify: reload autofs
  handlers:
    - name: reload autofs
      service: name=autofs state=reloaded
One thing to note: handlers fire at the end of the play. So if you have multiple tasks and you're expecting the handler to fire right after the task that notified it, you might be in for a bad time. You can use a 'meta' task to flush_handlers if you need the handler to fire in sequential order with your tasks (see the sketch below), or you can go the register/conditional way that ydaetskcoR provided. In your play, with a single task, it doesn't really matter.
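A flush_handlers variant of the playbook above might look like this (a sketch):
---
- hosts: workers
  tasks:
    - lineinfile: 'dest=/etc/default/autofs regexp=^OPTIONS= line="OPTIONS=\"-O soft\"" backup=yes'
      notify: reload autofs

    # Run any handlers notified so far right now, instead of at the end of the play
    - meta: flush_handlers

    - name: Tasks from here on can rely on autofs having been reloaded
      command: /bin/true
  handlers:
    - name: reload autofs
      service: name=autofs state=reloaded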

Ansible notify handlers in another role

Can I notify a handler in another role? What should I do to make Ansible find it?
The use case is, e.g., that I want to configure some service and then restart it if the configuration changed. Different OSes probably have different files to edit, and even the file format can be different. So I would like to put them into different roles (because the file format can be different, it can't be done by setting group_vars). But the way to restart the service is the same, using the service module, so I'd like to put the handler in a common role.
Is there any way to achieve this? Thanks.
You can also call handlers of a dependency role. This may be cleaner than including files or explicitly listing roles in a playbook just for the purpose of a role-to-role relationship. E.g.:
roles/my-handlers/handlers/main.yml
---
- name: nginx restart
  service: >
    name=nginx
    state=restarted
roles/my-other/meta/main.yml
---
dependencies:
  - role: my-handlers
roles/my-other/tasks/main.yml
---
- copy: >
    src=nginx.conf
    dest=/etc/nginx/
  notify: nginx restart
You should be able to do that if you include the handler file.
Example:
handlers:
  - include: someOtherRole/handlers/main.yml
But I don't think it's elegant.
A more elegant way is to have a play that manages both roles, something like this:
- hosts: all
  roles:
    - role1
    - role2
This will make both roles able to call each other's handlers.
But again, I would suggest making it all one role with separate task files and using a conditional include (http://docs.ansible.com/playbooks_conditionals.html#conditional-imports); a sketch follows.
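A single-role layout with a conditional include could look roughly like this (a sketch; the role and file names are made up):
# roles/myservice/tasks/main.yml
- include_tasks: configure-debian.yml
  when: ansible_os_family == 'Debian'

- include_tasks: configure-redhat.yml
  when: ansible_os_family == 'RedHat'

# Both per-OS task files notify the same handler, defined once in
# roles/myservice/handlers/main.yml.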
Hope that helps
You may import additional handlers into YourRole/handlers/main.yml by using import_tasks.
So, if MyRole needs to call handlers in some OtherRole, roles/MyRole/handlers/main.yml will look like this:
- import_tasks: roles/OtherRole/handlers/main.yml
Of course, roles/MyRole/handlers/main.yml may define additional handlers of its own as well (see the sketch after this paragraph).
This way, if I want to run MyRole without running tasks from OtherRole, Ansible will still be able to correctly import and run the handlers from OtherRole.
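For example, MyRole's handlers file could combine the import with a handler of its own (a sketch; the extra handler and service names are made up):
# roles/MyRole/handlers/main.yml
- import_tasks: roles/OtherRole/handlers/main.yml

- name: restart my own service
  service:
    name: my_own_service
    state: restarted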
I had a similar issue, but needed to take many actions in the other dependent roles.
So rather than invoking the handler, we set a fact like so:
- name: install mylib to virtualenv
  pip: requirements=/opt/mylib/requirements.txt virtualenv={{ mylib_virtualenv_path }}
  sudo_user: mylib
  register: mylib_wheel_upgraded

- name: set variable if source code was upgraded
  set_fact:
    mylib_source_upgraded: true
  when: mylib_wheel_upgraded.changed
Then elsewhere in another role:
- name: restart services if source code was upgraded
  command: /bin/true
  notify: restart mylib server
  when: mylib_source_upgraded
Currently I'm using Ansible v2.10.3, and it supports calling handlers in different roles. This is because handlers are visible at the play level, as the Ansible docs say (the bottom-most point on that page):
handlers are play scoped and as such can be used outside of the role they are defined in.
FYI, I tested this solution, i.e. calling another role's handlers, and it works! No need to import anything; just make sure that the roles are in the same playbook execution.
To illustrate:
roles/vm/handlers/main.yaml
---
- name: rebootvm
ansible.builtin.reboot:
reboot_timeout: 600
test_command: whoami
roles/config-files/tasks/main.yaml
---
- name: Copy files from local to remote
  ansible.builtin.copy:
    dest: /home/ubuntu/config.conf
    src: config.conf
    backup: yes
    force: yes
  notify:
    - rebootvm
So when the config file (i.e. config.conf) changes, Ansible will copy it to the remote location and notify the handler rebootvm, and the VM is rebooted.
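For completeness, a playbook that puts the two roles in the same play (so the handler is visible to both) could be as simple as this sketch:
---
- hosts: all
  roles:
    - vm            # defines the rebootvm handler
    - config-files  # notifies rebootvm when config.conf changes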
P.S. I don't know exactly which version of Ansible first supported this.
