How to start the apache2 service when it is in an inactive state, using Ansible

apache2 is already installed on my target node.
I have to start the apache2 service when it is in an inactive state.
How do I implement such a solution in Ansible?
I have to start the apache2 service whenever it is inactive:
Command: /sbin/service apache2 status
If the output shows inactive, only then do I have to run the command below:
Command: /sbin/service apache2 start

Using ad-hoc Ansible commands, you should be able to run:
ansible <host_name_or_ip> -a "service httpd start"
or as part of a role:
- name: Ensure that apache2 is started
  become: true
  service: name=httpd state=started
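Note that the package and service are named httpd on RHEL-family systems but apache2 on Debian/Ubuntu (which the question's /sbin/service apache2 commands suggest), so the same task for the original question would look like this, a minimal sketch with only the service name swapped:
- name: Ensure that apache2 is started
  become: true
  service:
    name: apache2
    state: started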

Ansible is about describing states in tasks that always produce the same result (idempotence). Your problem then comes down to:
apache must be active on boot
apache must be started
We could also add in a prior separate task:
apache package must be installed on the system
A sample playbook would be
---
- name: Install and enable apache
  hosts: my_web_servers
  vars:
    package_name: apache2 # adapt to your distribution
  tasks:
    - name: Make sure apache is installed
      package:
        name: "{{ package_name }}"
        state: present

    - name: Make sure apache is active on boot and started
      service:
        name: "{{ package_name }}"
        enabled: true
        state: started
Running this playbook against your target host (make sure you change the hosts param to a relevant group or machine) will:
For the install task:
Install apache if it is not already installed and report "Changed"
Report "Ok" if apache is already installed
For the service task:
Enable and/or start apache if needed and report "Changed"
Report "Ok" if apache is already enabled and started.
Note: in my example, I used the agnostic modules package and service. If you wish, you can replace them with their specific equivalents (apt, yum ... and systemd, sysvinit ...).
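For example, on a Debian/Ubuntu host the same two tasks written with the distribution-specific modules would look something like this (a sketch, assuming apt and systemd are in use):
- name: Make sure apache is installed
  apt:
    name: apache2
    state: present

- name: Make sure apache is active on boot and started
  systemd:
    name: apache2
    enabled: true
    state: started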

Related

Can anyone help me with how to pin a particular version for apache and nginx?

I have trouble setting the version for apache and nginx in the playbook below.
---
- name: Install apache, restart apache and install git
  hosts: all
  become: true
  tasks:
    - name: install apache2
      package: name=apache2 state=present
    - name: restart apache2
      command: systemctl restart apache2
      command: systemctl stop apache2
    - name: uninstall apache2
      package: name=apache2 state=absent
    - name: Install git
      package: name=git state=present
    - name: install nginx
      package: name=nginx state=present
I will write the examples as if you want to install apache v4.23 (assuming it exists in your distribution and is available in your package repository).
If you are using RHEL distros (CentOS, Red Hat, or anything that uses yum as its package manager), the format you will be using is "$packagename-$version":
# Apache package name on RHEL is httpd, just putting apache for comparison.
tasks:
  - name: install apache2
    package:
      name: apache2-4.23
      state: present
If instead you are using apt for package management, the format is "$packagename=$version":
tasks:
  - name: install apache2
    package:
      name: apache2=4.23
      state: present
And generally it is one of the above two syntaxes, so if you are not sure, try once with "-"; if it does not work, try with "=".
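If a single playbook has to cover both families, one option is to pick the format from the ansible_os_family fact (a sketch; the version string mirrors the hypothetical 4.23 example above, and the RHEL package name is httpd as noted earlier):
vars:
  # hypothetical pinned version, mirroring the 4.23 example above
  apache_pkg: "{{ 'httpd-4.23' if ansible_os_family == 'RedHat' else 'apache2=4.23' }}"

tasks:
  - name: install apache at a pinned version
    package:
      name: "{{ apache_pkg }}"
      state: present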
And on a side note, use the service module to restart your services instead of the command module:
tasks:
  - name: restart apache2
    service:
      name: apache2
      state: restarted

How can I mix tags and boolean variables to execute/not execute a task?

Ansible 2.11
Say I have a task where I want to start a service only during a "new install" or an "update" of its corresponding application. IOW, I don't want it to start during an "uninstall" for example. However, I only want to do this when I'm installing this particular application by itself. When I'm installing it as part of a list of applications, I don't want to start the service since I have a separate role that starts ALL the services. I tried
In defaults/main.yml
---
web_service_only: false
In tasks/main.yml
---
- name: Start the service
  service:
    name: some_service
    state: started
  when: web_service_only
  tags:
    - new_install
    - update

- name: Stop the service
  service:
    name: some_service
    state: stopped
  tags: uninstall
I don't expect the "Start the service" task to execute when I simply do
$ ansible-playbook -i hosts my_host my-playbook.yml --tags new_install
because the web_service_only variable is set to false; however, it does execute. What am I missing? How can I achieve what I want?
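For reference, tags only select which tasks are run; the when: condition is still evaluated for a selected task, so if the task executes, the variable is most likely being overridden somewhere (inventory, extra vars, include/role parameters). A quick way to check the effective value at run time is a debug task, a minimal sketch with hypothetical placement in tasks/main.yml:
- name: Show the effective value of web_service_only
  debug:
    var: web_service_only
  tags:
    - new_install
    - update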

Disable systemd service only if the package is installed

This is what I have
- name: disable os firewall - firewalld
  systemd:
    name: firewalld
    state: stopped
    enabled: no
This task works fine on hosts where firewalld package is installed but fails on hosts where it is not.
What is the best simple approach to make sure it can handle hosts which do not have firewalld installed? I do not want to use any CMDB as it's an additional setup. Also, I want the task to be idempotent; using a shell command like dpkg or rpm to query if firewalld is installed makes the ansible playbook summary report changes which I do not want.
Here is an example using failed_when:. While it does run every time, it ignores the failure if the package is not installed:
- name: Stop and disable chrony service
  service:
    name: chronyd
    enabled: no
    state: stopped
  register: chronyd_service_result
  failed_when: "chronyd_service_result is failed and 'Could not find the requested service' not in chronyd_service_result.msg"
What is the best approach to make sure it can handle hosts which do not have firewalld installed?
Get the information on which systems firewalld is installed from a CMDB of any kind (like Ansible vars files, since you already use Ansible), and run the task on those systems.
Likewise, do not run the task on systems that do not have firewalld installed according to the configuration pulled from the CMDB.
The actual implementation varies. Among others: specifying appropriate host groups for plays, using conditionals in the tasks, or using the limit option for the inventory.
In your Ansible play, you can first check whether the package is installed before trying to stop/disable it,
something similar to this: https://stackoverflow.com/a/46975808/3768721
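A minimal sketch of that idea using the package_facts module (adapt the package and service names as needed; the when: test on ansible_facts.packages is what skips hosts without firewalld):
- name: Gather installed package facts
  package_facts:
    manager: auto

- name: disable os firewall - firewalld
  systemd:
    name: firewalld
    state: stopped
    enabled: no
  when: "'firewalld' in ansible_facts.packages"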
With shell approach:
- name: Disable {{ service }} if enabled
  shell: if systemctl is-enabled --quiet {{ service }}; then systemctl disable {{ service }} && echo disable_ok ; fi
  register: output
  changed_when: "'disable_ok' in output.stdout"
It produces 3 states:
service is absent or disabled already — ok
service exists and was disabled — changed
service exists and disable failed — failed

Ansible: how to restart the auditd service on CentOS 7 - get error about dependency

In my playbook, I have a task to update audit.rules and then notify a handler which should restart the auditd service.
task:
- name: 6.6.7 - audit rules configuration
  template: src=X/ansible/templates/auditd_rules.j2
            dest=/etc/audit/rules.d/audit.rules
            backup=yes
            owner=root group=root mode=0640
  notify:
    - restart auditd

handlers:
- name: restart auditd
  service: name=auditd state=restarted
When the playbook runs, the audit rules are updated and a request is made to restart auditd, but this fails as below:
RUNNING HANDLER [restart auditd] ***********************************************
fatal: [ipX-southeast-2.compute.internal]: FAILED! => {"changed": false, "failed": true, "msg": "Unable to restart service auditd: Failed to restart auditd.service: Operation refused, unit auditd.service may be requested by dependency only.\n"}
When I look at the unit definition for auditd, I can see RefuseManualStop=yes. Is this why I can't restart the service? How does one overcome this to pick up the new audit rules?
systemctl cat auditd.service
# /usr/lib/systemd/system/auditd.service
[Unit]
Description=Security Auditing Service
DefaultDependencies=no
After=local-fs.target systemd-tmpfiles-setup.service
Conflicts=shutdown.target
Before=sysinit.target shutdown.target
RefuseManualStop=yes
ConditionKernelCommandLine=!audit=0
Documentation=man:auditd(8) https://people.redhat.com/sgrubb/audit/
[Service]
ExecStart=/sbin/auditd -n
## To not use augenrules, copy this file to /etc/systemd/system/auditd.service
## and comment/delete the next line and uncomment the auditctl line.
## NOTE: augenrules expect any rules to be added to /etc/audit/rules.d/
ExecStartPost=-/sbin/augenrules --load
#ExecStartPost=-/sbin/auditctl -R /etc/audit/audit.rules
ExecReload=/bin/kill -HUP $MAINPID
# By default we don't clear the rules on exit. To enable this, uncomment
# the next line after copying the file to /etc/systemd/system/auditd.service
#ExecStopPost=/sbin/auditctl -R /etc/audit/audit-stop.rules
[Install]
WantedBy=multi-user.target
This has been explored, discussed, and resolved (mostly) in the Red Hat Bugzilla #1026648 and Ansible Issue #22171 (GitHub) reports.
Resolution
Use the ansible service module parameter use=service to force execution of the /sbin/service utility instead of the gathered-fact value of systemd (which invokes /sbin/systemctl) like this:
- service: name=auditd state=restarted use=service
Workaround:
Use the ansible command module to explicitly run the service executable like this:
- command: /sbin/service auditd restart
Analysis - root cause:
This is an issue created by the upstream packaging of the auditd.service unit. It will not start/stop/restart when acted upon by systemctl, apparently by design.
It is further compounded by Ansible's service control function, which uses the preferred method identified when system facts are gathered and "ansible_service_mgr" returns "systemd", regardless of the actual module used to manage the service unit.
RHEL dev team may fix if considered a problem in upcoming updates (ERRATA)
Ansible dev team has offered a workaround and (as of 2.2) updated the service module with the use parameter.
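Applied to the handler from the question, that would look something like this (a sketch, keeping the handler name from the question):
handlers:
- name: restart auditd
  service: name=auditd state=restarted use=service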
Maybe it is quite late for an answer, but if anyone else is facing the same problem, you can import the new rules for auditd with this command:
auditctl -R /path/to_your_rules_file
So, there is no need to restart auditd.service to import new rules.
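If you want to run that from the playbook, a minimal sketch using the rules path from the question (adjust the path to your own rules file):
- name: Load the new audit rules without restarting auditd
  command: /sbin/auditctl -R /etc/audit/rules.d/audit.rules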
I would validate that the auditd service reloads correctly, as even using the command module with the command you specified will not work or behave in the manner you would expect it to.
Confirm via
service auditd status
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Security_Guide/sec-starting_the_audit_service.html
Try instead
service auditd condrestart
:)
You should not change the parameter RefuseManualStop; it's there to keep your system secure. What you could do instead: after creating the new rules, reboot the host, wait for it, then continue with your playbook.
Playbook example:
- name: Create new rules file
  copy:
    src: 01-personalized.rules
    dest: /etc/audit/rules.d/01-personalized.rules
    owner: root
    group: root
    mode: 0600
  register: result

- name: Reboot server
  shell: "sleep 5 && reboot"
  async: 1
  poll: 0
  when: result is changed

- name: Wait for server to become available
  wait_for_connection:
    delay: 60
    sleep: 5
    timeout: 300
  when: result is changed
Change RefuseManualStop to no and try
sudo service auditd restart
If this works, then the following will also work:
systemctl start auditd
and
systemctl enable auditd
This applies to CentOS 7.
Follow the link for further help:
Auditd in CentOS7

Ansible handler runs only if changed: true

Installing ntp with Ansible,
I notify a handler in order to start the ntpd service:
Task:
---
# roles/common/tasks/ntp.yml
- name: ntp | installing
  yum: name=ntp state=latest
  notify: start ntp
Handler:
---
# roles/common/handlers/main.yml
- name: start ntp
  service: name=ntpd state=started
If the service has not been installed, Ansible installs and starts it.
If the service has already been installed but is not running, it does not notify the handler: the status of the task is changed: false.
That means I cannot start it if it is already present in the OS.
Is there any good practice that helps to make sure that the service has been installed and is in a running state?
PS: I may do so:
---
# roles/common/tasks/ntp.yml
- name: ntp | installing
  yum: name=ntp state=latest
  notify: start ntp
  changed: true
but I am not sure that it is good practice.
From the Intro to Playbooks guide:
As we’ve mentioned, modules are written to be ‘idempotent’ and can relay when they have made a change on the remote system. Playbooks recognize this and have a basic event system that can be used to respond to change.
These ‘notify’ actions are triggered at the end of each block of tasks in a playbook, and will only be triggered once even if notified by multiple different tasks.
Handlers only run on change by design. If you change a configuration you often need to restart a service, but don't want to if nothing has changed.
What you want is to start a service if it is not already running. To do this you should use a regular task as described by @udondan:
- name: ntp | installing
  yum:
    name: ntp
    state: latest

- name: ntp | starting
  service:
    name: ntpd
    state: started
    enabled: yes
Ansible is idempotent by design, so this second task will only report a change if ntpd is not already running. The enabled line will set the service to start on boot. Remove this line if that is not the desired behavior.
Why don't you just add a service task then? A handler usually is for restarting a service after configuration has changed. To ensure a service is running no matter what, just add a task like this:
- name: Ensure ntp is running
  service:
    name: ntpd
    state: started
