Does Ansible's systemd module run a `daemon-reload` before starting a service?

I have a playbook where I first copy a new service file to /etc/systemd/system/ and then start the service. Normally, I'd have to run sudo systemctl daemon-reload before starting the service.
There is a daemon_reload parameter to the systemd module, but the description is not clear. It says "When set to true, runs daemon-reload even if the module does not start or stop anything." It sounds like it usually runs daemon-reload before starting or stopping services, and that this switch just makes it run daemon-reload always even when there's no state change.
Example of what I'm doing:
- name: Install Foo
  hosts: all
  tasks:
    - name: Install SystemD service
      become: true
      copy:
        src: ./foo.service
        dest: /etc/systemd/system/

    - name: Ensure the service is running
      become: true
      systemd:
        name: mqtt-button.service
        enabled: true
        state: started

Ansible does not implicitly run systemctl daemon-reload.
It only runs it when you set daemon_reload: true, and in that case it runs the daemon-reload command regardless of whether or not it needs to start or stop any services.
When in doubt about the documentation you can always refer to the source.
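For example, a minimal sketch of the service task from the question with an explicit reload (assuming the unit file was copied by the previous task):

- name: Ensure the service is running (reload unit files first)
  become: true
  systemd:
    name: mqtt-button.service
    daemon_reload: true   # runs systemctl daemon-reload before acting on the service
    enabled: true
    state: started

If you only want the reload to happen when the unit file actually changes, you could instead notify a handler from the copy task and set daemon_reload: true there.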

Related

Generated podman systemd file can not be enabled

I'm trying to write an Ansible playbook which deploys nginx as a podman container, generates the systemd unit for that container, and enables & starts that unit. The container itself is created fine and is up and running, but the system is unable to recognize the created unit file. The relevant part of the playbook, after the container is created, is as follows:
---
- name: Create systemd script for created container
  become: true
  shell: podman generate systemd -n nginx --files
  register: container_systemd_file

- name: Get service name
  set_fact:
    service_name: "{{ container_systemd_file.stdout | basename }}"

- name: Move created systemd script to correct location
  become: true
  shell: mv {{ container_systemd_file.stdout }} /etc/systemd/system/

- name: Force systemd to reread configs
  become: true
  systemd: daemon_reload=yes

- name: Enable nginx container service
  become: true
  systemd:
    name: "{{ service_name }}"
    enabled: true
    state: started
The final step fails with message "msg": "Could not find the requested service container-nginx.service: host".
The unit file can be found on the host machine
$ ls -al /etc/systemd/system/container-nginx.service
-rw-r--r--. 1 root root 701 Sep 24 15:19 /etc/systemd/system/container-nginx.service
and has contents
# container-nginx.service
# autogenerated by Podman 3.2.3
# Fri Sep 24 15:19:41 EEST 2021
[Unit]
Description=Podman container-nginx.service
Documentation=man:podman-generate-systemd(1)
Wants=network.target
After=network-online.target
RequiresMountsFor=/var/run/containers/storage
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStart=/usr/bin/podman start nginx
ExecStop=/usr/bin/podman stop -t 10 nginx
ExecStopPost=/usr/bin/podman stop -t 10 nginx
PIDFile=/var/run/containers/storage/overlay-containers/c5ad5d961913a8d48ea09557484eaf17749086581eb5a1750e65fb4cf8965272/userdata/conmon.pid
Type=forking
[Install]
WantedBy=multi-user.target default.target
Similarly if I run systemctl daemon-reload and try to enable the service manually on host, it fails with message Failed to enable unit: Unit file container-nginx.service does not exist.
How to make the system recognize the newly created unit file and enable the service? I'm using Podman 3.2.3 and RHEL 8.4.
I was unable to figure out why the initial approach didn't work, but instead of first generating the systemd unit as a file in the user's home directory and then moving it, writing the file directly under /etc/systemd/system/ worked.
---
- name: Create systemd script for created container
  become: true
  shell: podman generate systemd --new --name nginx > /etc/systemd/system/nginx.service

- name: Force systemd to reread configs
  become: true
  systemd: daemon_reload=yes

- name: Enable nginx container service
  become: true
  systemd:
    name: nginx.service
    enabled: true
    state: started

stopping a task by removing the process id in ansible

I am new to ansible. I have the following task:
- name: Run mock server on port 4000
  shell: |
    sudo cd "{{ travis.base_build_path }}/mock-server"
    sudo npm run start
Afterwards I need to stop this task by killing its process ID. How can I do that?
Can I somehow name the above process and stop it in another task?
This does not look like an Ansible-specific task. I would do this with systemd: create a systemd unit for this 'npm run start'.
- systemd: name=your_unit.service state=started
- debug: msg="do something"
- systemd: name=your_unit.service state=stopped
More information on systemd units is here:
https://www.freedesktop.org/software/systemd/man/systemd.service.html
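A minimal unit sketch for that idea (the unit name, paths, and WorkingDirectory below are assumptions to adapt to your project):

# /etc/systemd/system/mock-server.service  (hypothetical name and paths)
[Unit]
Description=Mock server on port 4000
After=network.target

[Service]
WorkingDirectory=/path/to/mock-server
ExecStart=/usr/bin/npm run start
Restart=on-failure

[Install]
WantedBy=multi-user.target

With a unit like that in place, started/stopped in the tasks above map to systemctl start/stop of the unit, so there is no process id to track by hand.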

How to start the apache2 service with Ansible when it is in an inactive state

apache2 is already installed on my target node. I have to start the apache2 service whenever it is in an inactive state. How can I implement this kind of solution in Ansible?
Command: /sbin/service apache2 status
Only if the output shows inactive do I have to run the command below:
Command: /sbin/service apache2 start
Using ad-hoc Ansible commands, you should be able to run:
ansible <host_name_or_ip> -a "service httpd start"
or as part of a role:
- name: Ensure that apache2 is started
  become: true
  service: name=httpd state=started
Ansible is about describing states in tasks that always produce the same result (idempotence). Your problem then comes down to:
apache must be active on boot
apache must be started
We could also add in a prior separate task:
apache package must be installed on the system
A sample playbook would be
---
- name: Install and enable apache
hosts: my_web_servers
vars:
package_name: apache2 # adapt to your distribution
tasks:
- name: Make sure apache is installed
package:
name: "{{ package_name }}"
state: present
- name: Make sure apache is active on boot and started
service:
name: "{{ package_name }}"
enabled: true
state: started
Running this playbook against your target host will (make sure you change the hosts param to a relevant group or machine):
For the install task:
Install apache if it is not already installed and report "Changed"
Report "Ok" if apache is already installed
For the service task:
Enable and/or start apache if needed and report "Changed"
Report "Ok" if apache is already enabled and started.
Note: in my example, I used the agnostic modules package and service. If you wish, you can replace them with their specific equivalents (apt, yum ... and systemd, sysvinit ...)
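For instance, a sketch of the same two tasks with distribution-specific modules (assuming a Debian/Ubuntu host running systemd):

- name: Make sure apache is installed (apt)
  apt:
    name: apache2
    state: present

- name: Make sure apache is active on boot and started (systemd)
  systemd:
    name: apache2
    enabled: true
    state: started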

ansible: how to restart the auditd service on CentOS 7, getting an error about a dependency

In my playbook, I have a task to update audit.rules and then notify a handler which should restart the auditd service.
task:
- name: 6.6.7 - audit rules configuration
  template: src=X/ansible/templates/auditd_rules.j2
            dest=/etc/audit/rules.d/audit.rules
            backup=yes
            owner=root group=root mode=0640
  notify:
    - restart auditd

handlers:
- name: restart auditd
  service: name=auditd state=restarted
When the playbook runs, the audit rules are updated and a request is made to restart auditd but this fails as below.
RUNNING HANDLER [restart auditd] ***********************************************
fatal: [ipX-southeast-2.compute.internal]: FAILED! => {"changed": false, "failed": true, "msg": "Unable to restart service auditd: Failed to restart auditd.service: Operation refused, unit auditd.service may be requested by dependency only.\n"}
When I look at the unit definition for auditd, I can see RefuseManualStop=yes. Is this why I can't restart the service? How does one overcome this to pick up the new audit rules?
systemctl cat auditd.service
# /usr/lib/systemd/system/auditd.service
[Unit]
Description=Security Auditing Service
DefaultDependencies=no
After=local-fs.target systemd-tmpfiles-setup.service
Conflicts=shutdown.target
Before=sysinit.target shutdown.target
RefuseManualStop=yes
ConditionKernelCommandLine=!audit=0
Documentation=man:auditd(8) https://people.redhat.com/sgrubb/audit/
[Service]
ExecStart=/sbin/auditd -n
## To not use augenrules, copy this file to /etc/systemd/system/auditd.service
## and comment/delete the next line and uncomment the auditctl line.
## NOTE: augenrules expect any rules to be added to /etc/audit/rules.d/
ExecStartPost=-/sbin/augenrules --load
#ExecStartPost=-/sbin/auditctl -R /etc/audit/audit.rules
ExecReload=/bin/kill -HUP $MAINPID
# By default we don't clear the rules on exit. To enable this, uncomment
# the next line after copying the file to /etc/systemd/system/auditd.service
#ExecStopPost=/sbin/auditctl -R /etc/audit/audit-stop.rules
[Install]
WantedBy=multi-user.target
This has been explored, discussed, and resolved (mostly) in the Red Hat Bugzilla #1026648 and Ansible Issue #22171 (GitHub) reports.
Resolution
Use the ansible service module parameter use=service to force execution of the /sbin/service utility instead of the gathered-fact value of systemd (which invokes /sbin/systemctl) like this:
- service: name=auditd state=restarted use=service
Example playbook (pastebin.com)
Workaround:
Use the ansible command module to explicitly run the service executable like this:
- command: /sbin/service auditd restart
Analysis - root cause:
This is an issue created by upstream packaging of auditd.service unit. It will not start/stop/restart when acted upon by systemctl, apparently by design.
It is further compounded by the Ansible service control function, which uses the preferred method identified when system facts are gathered and "ansible_service_mgr" returns "systemd". This is regardless of the actual module used to manage the service.unit.
RHEL dev team may fix if considered a problem in upcoming updates (ERRATA)
Ansible dev team has offered a workaround and (as of 2.2) updated the service module with the use parameter.
Maybe it is quite late for an answer, but if anyone else is facing the same problem, you can import the new auditd rules with this command:
auditctl -R /path/to_your_rules_file
So there is no need to restart auditd.service to import new rules.
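As an Ansible task, that could look roughly like this sketch (reusing the placeholder path from the command above):

- name: Load the new audit rules without restarting auditd
  become: true
  command: auditctl -R /path/to_your_rules_file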
I would validate that the auditd service reloads correctly, as even using the command module with the command you specified will not work or behave in the manner you would expect it to;
Confirm via
service auditd status
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Security_Guide/sec-starting_the_audit_service.html
Try instead
service auditd condrestart
:)
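Wrapped in a task, that suggestion might look like the sketch below (using the command module, because the service module would otherwise route the request through systemd again):

- name: Conditionally restart auditd only if it is already running
  become: true
  command: /sbin/service auditd condrestart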
You should not change the parameter RefuseManualStop; it's there to keep your system secure. What you could do instead, after creating the new rules, is reboot the host, wait for it, and then continue with your playbook.
Playbook example:
- name: Create new rules file
  copy:
    src: 01-personalized.rules
    dest: /etc/audit/rules.d/01-personalized.rules
    owner: root
    group: root
    mode: 0600
  register: result

- name: Reboot server
  shell: "sleep 5 && reboot"
  async: 1
  poll: 0
  when: result is changed

- name: Wait for server to become available
  wait_for_connection:
    delay: 60
    sleep: 5
    timeout: 300
  when: result is changed
Change RefuseManualStop to no and try
sudo service auditd restart
If this works, then the following will also work:
systemctl start auditd
and
systemctl enable auditd
This is for CentOS 7. Follow the links below for further help.
Link
Auditd in CentOS7

Ansible handler runs only if changed: true

Installing ntp with Ansible,
I notify a handler in order to start the ntpd service:
Task:
---
# roles/common/tasks/ntp.yml
- name: ntp | installing
  yum: name=ntp state=latest
  notify: start ntp
Handler:
---
# roles/common/handlers/main.yml
- name: start ntp
  service: name=ntpd state=started
If the service has not been installed, Ansible installs and starts it.
If the service has already been installed but is not running, it does not notify the handler: the status of the task is changed: false.
That means I cannot start it if it was already present in the OS.
Is there any good practice that ensures the service is installed and in a running state?
PS: I could do this:
---
# roles/common/tasks/ntp.yml
- name: ntp | installing
  yum: name=ntp state=latest
  notify: start ntp
  changed: true
but I am not sure that it is good practice.
From the Intro to Playbooks guide:
As we’ve mentioned, modules are written to be ‘idempotent’ and can relay when they have made a change on the remote system. Playbooks recognize this and have a basic event system that can be used to respond to change.
These ‘notify’ actions are triggered at the end of each block of tasks in a playbook, and will only be triggered once even if notified by multiple different tasks.
Handlers only run on change by design. If you change a configuration you often need to restart a service, but don't want to if nothing has changed.
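For contrast, a minimal sketch of that usual handler pattern (the template file name here is hypothetical), where the restart fires only when the configuration actually changes:

# roles/common/tasks/ntp.yml
- name: ntp | configure
  template: src=ntp.conf.j2 dest=/etc/ntp.conf
  notify: restart ntp

# roles/common/handlers/main.yml
- name: restart ntp
  service: name=ntpd state=restarted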
What you want is to start a service if it is not already running. To do this you should use a regular task as described by udondan:
- name: ntp | installing
  yum:
    name: ntp
    state: latest

- name: ntp | starting
  service:
    name: ntpd
    state: started
    enabled: yes
Ansible is idempotent by design, so this second task will only make a change if ntpd is not already running. The enabled line will set the service to start on boot. Remove this line if that is not the desired behavior.
Why don't you just add a service task then? A handler is usually for restarting a service after its configuration has changed. To ensure a service is running no matter what, just add a task like this:
- name: Ensure ntp is running
  service:
    name: ntpd
    state: started
