Disable systemd service only if the package is installed - ansible

This is what I have
- name: disable os firewall - firewalld
  systemd:
    name: firewalld
    state: stopped
    enabled: no
This task works fine on hosts where the firewalld package is installed but fails on hosts where it is not.
What is the best simple approach to make sure it can handle hosts that do not have firewalld installed? I do not want to use any CMDB, as that would be additional setup. I also want the task to be idempotent; using a shell command like dpkg or rpm to query whether firewalld is installed makes the Ansible playbook summary report changes, which I do not want.

Here is an example using failed_when: while the task still runs every time, the failure is ignored if the package is not installed:
- name: Stop and disable chrony service
  service:
    name: chronyd
    enabled: no
    state: stopped
  register: chronyd_service_result
  failed_when: "chronyd_service_result is failed and 'Could not find the requested service' not in chronyd_service_result.msg"

What is the best approach to make sure it can handle hosts which do not have firewalld installed?
Get the information on which systems firewalld is installed from a CMDB of any kind (like Ansible vars files, since you already use Ansible), and run the task on those systems.
Likewise, do not run the task on systems that do not have firewalld installed according to the configuration pulled from the CMDB.
The actual implementation varies. Among other options: specifying appropriate host groups for plays, using a conditional in the task, or using the limit option for the inventory.
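For illustration, here is a minimal sketch of the conditional variant, assuming a hypothetical group variable firewalld_installed maintained in your inventory or group_vars:

# group_vars/firewalld_hosts.yml (hypothetical file and variable name)
firewalld_installed: true

# task
- name: disable os firewall - firewalld
  systemd:
    name: firewalld
    state: stopped
    enabled: no
  when: firewalld_installed | default(false) | bool

Hosts without the variable (or with it set to false) simply skip the task, which keeps the run idempotent.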

In your Ansible play, you can first check whether the package is installed before trying to stop/start it;
something similar to this: https://stackoverflow.com/a/46975808/3768721
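One way to do that check without shelling out to dpkg or rpm is the package_facts module (available since Ansible 2.5); this is a sketch along those lines, not necessarily what the linked answer shows:

- name: Gather the package facts
  package_facts:
    manager: auto

- name: disable os firewall - firewalld
  systemd:
    name: firewalld
    state: stopped
    enabled: no
  when: "'firewalld' in ansible_facts.packages"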

With shell approach:
- name: Disable {{ service }} if enabled
  shell: if systemctl is-enabled --quiet {{ service }}; then systemctl disable {{ service }} && echo disable_ok ; fi
  register: output
  changed_when: "'disable_ok' in output.stdout"
It produces 3 states:
service is absent or disabled already — ok
service exists and was disabled — changed
service exists and disable failed — failed

Related

How to start apache2 service when it is in inactive state using ansible

apache2 is already installed on my target node.
I have to start the apache2 service when it is in an inactive state.
How do I implement this kind of solution in Ansible?
I have to start the apache2 service whenever it is inactive:
Command: /sbin/service apache2 status
Only if the output shows inactive do I have to run the command below:
Command: /sbin/service apache2 start
Using ad hoc Ansible commands, you should be able to run:
ansible <host_name_or_ip> -a "service httpd start"
or as part of a role:
- name: Ensure that apache2 is started
  become: true
  service: name=httpd state=started
Ansible is about describing states in tasks that always produce the same result (idempotence). Your problem then comes down to:
apache must be active on boot
apache must be started
We could also add in a prior separate task:
apache package must be installed on the system
A sample playbook would be:
---
- name: Install and enable apache
  hosts: my_web_servers
  vars:
    package_name: apache2 # adapt to your distribution
  tasks:
    - name: Make sure apache is installed
      package:
        name: "{{ package_name }}"
        state: present
    - name: Make sure apache is active on boot and started
      service:
        name: "{{ package_name }}"
        enabled: true
        state: started
Running this playbook against your target host (make sure you change the hosts param to a relevant group or machine) will:
For the install task:
Install apache if it is not already installed and report "Changed"
Report "Ok" if apache is already installed
For the service task:
Enable and/or start apache if needed and report "Changed"
Report "Ok" if apache is already enabled and started.
Note: in my example, I used the agnostic modules package and service. If you wish, you can replace them with their specific equivalents (apt, yum ... and systemd, sysvinit ...).
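For illustration, the same two tasks rewritten with distribution-specific modules might look like this on a Debian/Ubuntu host (a sketch; adjust the package and unit names to your distribution):

- name: Make sure apache is installed
  apt:
    name: apache2
    state: present

- name: Make sure apache is active on boot and started
  systemd:
    name: apache2
    enabled: true
    state: started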

Ansible playbook check if service is up if not - install something

I need to install Symantec Endpoint Security on my Linux system and I'm trying to write a playbook to do so.
When I want to install the program I use ./install.sh -i
But after the installation, when I run the installation again, I get this message:
root#TestKubuntu:/usr/SEP# ./install.sh -i
Starting to install Symantec Endpoint Protection for Linux
Downgrade is not supported. Please make sure the target version is newer than the original one.
This is how I install it in the playbook:
- name: Install_SEP
  command: bash /usr/SEP/install.sh -i
I would like, if possible, to check whether the service is up and, if there is no service, then install it; or maybe there is a better way of doing this.
Thank you very much for your time.
Q: "I would like to check if the service is up and if there is no service then install it."
It's possible to use service_facts. For example, to check whether a service is running:
vars:
  my_service: "<name-of-my-service>"
tasks:
  - name: Collect service facts
    service_facts:
  - name: Install service when not running
    command: "<install-service>"
    when: "my_service not in ansible_facts.services|
           dict2items|
           json_query('[?value.state == `running`].key')"
To check whether a service is installed at all, use
json_query('[].key')
(not tested)
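Putting that hint together, an untested sketch in the same style (the json_query filter needs the jmespath Python library on the controller, and keys in ansible_facts.services generally carry the .service suffix on systemd hosts):

- name: Install service when not installed at all
  command: "<install-service>"
  when: "my_service not in ansible_facts.services|
         dict2items|
         json_query('[].key')"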
Please try something like the below.
- name: Check if service is up
  command: <command to check if service is up>
  register: output

- name: Install_SEP
  command: bash /usr/SEP/install.sh -i
  when: "'running' not in output.stdout"
Note: I have used running in the when condition; if the service command returns something specific, include that instead of running.
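For example, if the target host uses systemd (an assumption, since the question does not say which init system is in use), the check task could be filled in like this; changed_when: false and failed_when: false keep the check itself out of the change and failure counts:

- name: Check if service is up
  command: systemctl is-active <service-name>
  register: output
  changed_when: false
  failed_when: false

- name: Install_SEP
  command: bash /usr/SEP/install.sh -i
  when: output.stdout | trim != 'active'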

ansible: how to restart auditd service on centos 7 get error about dependency

In my playbook, I have a task to update audit.rules and then notify a handler which should restart the auditd service.
task:
- name: 6.6.7 - audit rules configuration
  template:
    src: X/ansible/templates/auditd_rules.j2
    dest: /etc/audit/rules.d/audit.rules
    backup: yes
    owner: root
    group: root
    mode: 0640
  notify:
    - restart auditd
handlers:
- name: restart auditd
  service: name=auditd state=restarted
When the playbook runs, the audit rules are updated and a request is made to restart auditd, but this fails as below.
RUNNING HANDLER [restart auditd] ***********************************************
fatal: [ipX-southeast-2.compute.internal]: FAILED! => {"changed": false, "failed": true, "msg": "Unable to restart service auditd: Failed to restart auditd.service: Operation refused, unit auditd.service may be requested by dependency only.\n"}
When I look at the unit definition for auditd, I can see RefuseManualStop=yes. Is this why I can't restart the service? How does one overcome this to pick up the new audit rules?
systemctl cat auditd.service
# /usr/lib/systemd/system/auditd.service
[Unit]
Description=Security Auditing Service
DefaultDependencies=no
After=local-fs.target systemd-tmpfiles-setup.service
Conflicts=shutdown.target
Before=sysinit.target shutdown.target
RefuseManualStop=yes
ConditionKernelCommandLine=!audit=0
Documentation=man:auditd(8) https://people.redhat.com/sgrubb/audit/
[Service]
ExecStart=/sbin/auditd -n
## To not use augenrules, copy this file to /etc/systemd/system/auditd.service
## and comment/delete the next line and uncomment the auditctl line.
## NOTE: augenrules expect any rules to be added to /etc/audit/rules.d/
ExecStartPost=-/sbin/augenrules --load
#ExecStartPost=-/sbin/auditctl -R /etc/audit/audit.rules
ExecReload=/bin/kill -HUP $MAINPID
# By default we don't clear the rules on exit. To enable this, uncomment
# the next line after copying the file to /etc/systemd/system/auditd.service
#ExecStopPost=/sbin/auditctl -R /etc/audit/audit-stop.rules
[Install]
WantedBy=multi-user.target
This has been explored, discussed, and resolved (mostly) in the Red Hat Bugzilla #1026648 and Ansible Issue #22171 (GitHub) reports.
Resolution
Use the ansible service module parameter use=service to force execution of the /sbin/service utility instead of the gathered-fact value of systemd (which invokes /sbin/systemctl) like this:
- service: name=auditd state=restarted use=service
Example playbook (pastebin.com)
Workaround:
Use the ansible command module to explicitly run the service executable like this:
- command: /sbin/service auditd restart
Analysis - root cause:
This is an issue created by the upstream packaging of the auditd.service unit. It will not start/stop/restart when acted upon by systemctl, apparently by design.
It is further compounded by the Ansible service control function, which uses the preferred method identified when system facts are gathered and "ansible_service_mgr" returns "systemd". This happens regardless of the actual module used to manage the service unit.
The RHEL dev team may fix this if it is considered a problem in upcoming updates (ERRATA).
Ansible dev team has offered a workaround and (as of 2.2) updated the service module with the use parameter.
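Written in YAML block form, the handler from the question might then look like this (a sketch; only the use parameter differs from the original handler):

handlers:
  - name: restart auditd
    service:
      name: auditd
      state: restarted
      use: service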
Maybe it is quite late for an answer, but if anyone else is facing the same problem, you can import the new rules for auditd with this command:
auditctl -R /path/to_your_rules_file
So there is no need to restart auditd.service to import new rules.
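If you go that route, the same idea can be wrapped in an Ansible handler along these lines (a sketch; the rules path is the one the question's template writes to, and the task's notify would then point at this handler instead of restart auditd):

handlers:
  - name: load audit rules
    command: auditctl -R /etc/audit/rules.d/audit.rules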
I would validate that the auditd service reloads correctly, as even using the command module with the command you specified will not work or behave in the manner you would expect it to.
Confirm via
service auditd status
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Security_Guide/sec-starting_the_audit_service.html
Try instead
service auditd condrestart
:)
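Wrapped in a task, that suggestion might look like the following (a sketch using the command module, since the service module would be redirected to systemctl here, as discussed above):

- name: Conditionally restart auditd
  command: /sbin/service auditd condrestart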
You should not change the parameter RefuseManualStop; it's there to keep your system secure. What you could do instead, after creating new rules, is reboot the host, wait for it, then continue with your playbook.
Playbook example:
- name: Create new rules file
  copy:
    src: 01-personalized.rules
    dest: /etc/audit/rules.d/01-personalized.rules
    owner: root
    group: root
    mode: 0600
  register: result

- name: Reboot server
  shell: "sleep 5 && reboot"
  async: 1
  poll: 0
  when: result is changed

- name: Wait for server to become available
  wait_for_connection:
    delay: 60
    sleep: 5
    timeout: 300
  when: result is changed
Change RefuseManualStop to no and try
sudo service auditd restart
If this works, then the code will also work.
systemctl start auditd
and
systemctl enable auditd
are for CentOS version 7.
Follow the links for further help.
Link
Auditd in CentOS7

Ansible handler runs only if changed: true

Installing ntp with Ansible, I notify a handler in order to start the ntpd service:
Task:
---
# roles/common/tasks/ntp.yml
- name: ntp | installing
  yum: name=ntp state=latest
  notify: start ntp
Handler:
---
# roles/common/handlers/main.yml
- name: start ntp
  service: name=ntpd state=started
If the service has not been installed, Ansible installs and starts it.
If the service has already been installed but is not running, it does not notify the handler: the status of the task is changed: false.
That means I cannot start it if it is already present in the OS.
Is there any good practice that helps to make sure that the service has been installed and is in a running state?
PS: I may do so:
---
# roles/common/tasks/ntp.yml
- name: ntp | installing
  yum: name=ntp state=latest
  notify: start ntp
  changed: true
but I am not sure that it is good practice.
From the Intro to Playbooks guide:
As we’ve mentioned, modules are written to be ‘idempotent’ and can relay when they have made a change on the remote system. Playbooks recognize this and have a basic event system that can be used to respond to change.
These ‘notify’ actions are triggered at the end of each block of tasks in a playbook, and will only be triggered once even if notified by multiple different tasks.
Handlers only run on change by design. If you change a configuration you often need to restart a service, but don't want to if nothing has changed.
What you want is to start a service if it is not already running. To do this you should use a regular task as described by @udondan:
- name: ntp | installing
  yum:
    name: ntp
    state: latest

- name: ntp | starting
  service:
    name: ntpd
    state: started
    enabled: yes
Ansible is idempotent by design, so this second task will only report a change if ntp is not already running and enabled. The enabled line sets the service to start on boot. Remove this line if that is not the desired behavior.
Why don't you just add a service task then? A handler usually is for restarting a service after its configuration has changed. To ensure a service is running no matter what, just add a task like this:
- name: Ensure ntp is running
  service:
    name: ntpd
    state: started

Using Ansible to stop service that might not exist

I am using Ansible 2.6.1.
I am trying to ensure that a certain service is not running on target hosts.
The problem is that the service might not exist at all on some hosts. If that is the case, Ansible fails with an error because of the missing service. The services are run by systemd.
Using service module:
- name: Stop service
  service:
    name: '{{ target_service }}'
    state: stopped
This fails with the error Could not find the requested service SERVICE: host.
Trying with the command module:
- name: Stop service
  command: service {{ target_service }} stop
Gives error: Failed to stop SERVICE.service: Unit SERVICE.service not loaded.
I know I could use ignore_errors: yes, but that might hide real errors too.
Another solution would be two tasks: one checking for the existence of the service and another that runs only when the first task found the service, but that feels complex.
Is there a simpler way to ensure that the service is stopped while avoiding errors if the service does not exist?
I'm using the following steps:
- name: Get the list of services
  service_facts:

- name: Stop service
  systemd:
    name: <service_name_here>
    state: stopped
  when: "'<service_name_here>.service' in services"
service_facts could be called once in the gathering facts phase.
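For example, one layout (an assumption about how the play is structured) is to collect the facts once in pre_tasks, right after fact gathering, and let every later task reuse the services fact:

- hosts: all
  pre_tasks:
    - name: Get the list of services
      service_facts:
  tasks:
    - name: Stop service
      systemd:
        name: <service_name_here>
        state: stopped
      when: "'<service_name_here>.service' in services"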
The following will register the module output in service_stop; if the module execution's standard output does not contain "Could not find the requested service" AND the service fails to stop based on the return code, the module execution will fail. Since you did not include the entire stack trace, I am assuming the error you posted is in the standard output; you may need to adjust this slightly based on your error.
- name: Stop service
  register: service_stop
  failed_when:
    - '"Could not find the requested service" not in service_stop.stdout'
    - service_stop.rc != 0
  service:
    name: '{{ target_service }}'
    state: stopped
IMHO there isn't a simpler way to ensure that a service is stopped. The Ansible service module doesn't check the service's existence. Either (1) more than one task, or (2) a command that checks the service's existence is needed. The command would be OS-specific. For example, for FreeBSD:
shell: "service -e | grep {{ target_service }} && service {{ target_service }} stop"
Same solution as Vladimir's, but for Ubuntu (systemd) and with better state handling:
- name: restart {{ target_service }} if exists
  shell: if systemctl is-enabled --quiet {{ target_service }}; then systemctl restart {{ target_service }} && echo restarted ; fi
  register: output
  changed_when: "'restarted' in output.stdout"
It produces 3 states:
service is absent or disabled — ok
service exists and was restarted — changed
service exists and restart failed — failed
Same solution as @ToughKernel's, but use systemd to manage the service.
- name: disable ntpd service
  systemd:
    name: ntpd
    enabled: no
    state: stopped
  register: stop_service
  failed_when:
    - stop_service.failed == true
    - '"Could not find the requested service" not in stop_service.msg'
  # the order is important: only when failed == true will the result
  # contain a 'msg' attribute
When the service module fails, check if the service that needs to be stopped is installed at all. That is similar to this answer, but avoids the rather lengthy gathering of service facts unless necessary.
- name: Stop a service
  block:
    - name: Attempt to stop the service
      service:
        name: < service name >
        state: stopped
  rescue:
    - name: Get the list of services
      service_facts:
    - name: Verify that the service is not installed
      assert:
        that:
          - "'< service name >.service' not in services"
