Ansible - Execute task based on Linux service status

How can I write an Ansible playbook that starts my nginx service only if firewalld.service is stopped?
Note: this is being run on a sandbox server, so there is no concern about firewalld being stopped.

You can check the service status, register a variable, and start the service based on the result.
You can also use the service_facts module to get the service status.
Example:
- name: Populate service facts
  service_facts:

- name: start nginx if firewalld.service is stopped
  service:
    name: nginx
    state: started
  when: ansible_facts.services['firewalld.service'].state == 'stopped'
Note: I have not tested this; modify it accordingly.
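For the first approach, here is a minimal sketch (untested; it assumes a systemd host, where systemctl is-active exits 0 for a running unit and non-zero otherwise):
- name: Check firewalld status
  command: systemctl is-active firewalld
  register: firewalld_status
  changed_when: false
  # never fail here; we only care about the return code
  failed_when: false

- name: start nginx if firewalld is not active
  service:
    name: nginx
    state: started
  when: firewalld_status.rc != 0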

As already proposed by @Telinov, you can get facts for services and start your own service accordingly:
---
- name: Populate service facts
  service_facts:

- name: start nginx if firewalld.service is stopped
  service:
    name: nginx
    state: started
  when: ansible_facts.services['firewalld.service'].state == 'stopped'
Meanwhile, Ansible is about describing state. If you need firewalld stopped and nginx started, it is much easier to just say so:
- name: Make sure firewalld is stopped
  service:
    name: firewalld
    state: stopped

- name: Make sure nginx is started
  service:
    name: nginx
    state: started
You could be even smarter: combine the above and open the firewall ports for nginx if firewalld is running.
---
- name: Populate service facts
  service_facts:

- name: Make sure port 80 and 443 are opened if firewall is running
  firewalld:
    port: "{{ port }}/tcp"
    state: enabled
    permanent: true
    immediate: true
  loop:
    - 80
    - 443
  loop_control:
    loop_var: port
  when: ansible_facts.services['firewalld.service'].state == 'running'

- name: Make sure nginx is started
  service:
    name: nginx
    state: started

Related

Unable to get part of a play to run locally in Ansible

This is all running within AWX, which is hosted on-prem. I'm trying to manage some EC2 instances within AWS. I've set up the bastion jump and can get all my other plays to work correctly.
However, there is one simple job template I want to provide to a few devs. Essentially, when they make a change to the code, it clears the opcache and invalidates the specific files in CloudFront.
I want the CloudFront API call (cloudfront_invalidation module) to run locally off AWX and then, if this is successful, notify the two web server instances to restart their PHP and Apache processes.
---
- name: Restart httpd and php-fpm
  remote_user: ec2-user
  hosts: all
  become: true
  tasks:
    - name: Invalidate paths in CloudFront
      cloudfront_invalidation:
        distribution_id: "{{ distribution_id }}"
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
        target_paths: "{{ cloudfront_invalidations.split('\n') }}"
      delegate_to: 127.0.0.1
      notify:
        - Restart service httpd
        - Restart service php-fpm
  handlers:
    - name: Restart service httpd
      service:
        name: httpd
        state: restarted
    - name: Restart service php-fpm
      service:
        name: php-fpm
        state: restarted
However, when running the play it seems to ignore the delegate_to directive and instead runs the invalidation twice, once for each host. I'm unsure if it's actually running locally. I've tried adding the run_once flag, but then it only restarted httpd and PHP on one host.
Any ideas?
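One pattern worth trying (a hedged sketch, untested in AWX): split the job into two plays, so the invalidation runs exactly once on the control node and the restarts run as ordinary tasks on the web servers. The second play is only reached if the first one succeeds:
---
# Play 1: run the CloudFront invalidation once, locally
- name: Invalidate CloudFront paths
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Invalidate paths in CloudFront
      cloudfront_invalidation:
        distribution_id: "{{ distribution_id }}"
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
        target_paths: "{{ cloudfront_invalidations.split('\n') }}"

# Play 2: restart the services on every web server
- name: Restart httpd and php-fpm
  hosts: all
  remote_user: ec2-user
  become: true
  tasks:
    - name: Restart service httpd
      service:
        name: httpd
        state: restarted
    - name: Restart service php-fpm
      service:
        name: php-fpm
        state: restarted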

How to reload Firewalld service using Ansible?

I added some rules to firewalld on CentOS 7 with Ansible, but I must reload the firewalld daemon for the rules to take effect. Any ideas?
Here is my ansible code:
- name: Add port to firewalld
  firewalld:
    port: "{{ item }}"
    permanent: yes
    state: enabled
  when: ansible_distribution == 'CentOS' or ansible_distribution == 'Red Hat Enterprise Linux'
  loop:
    - 8080/tcp
    - 8000/tcp
    - 8090/tcp
    - 8040/tcp
First of all, use a loop for the list of ports, as below:
- name: Add port to firewalld
  firewalld:
    port: "{{ item }}"
    permanent: yes
    state: enabled
  when: ansible_distribution == 'CentOS' or ansible_distribution == 'Red Hat Enterprise Linux'
  loop:
    - 8080/tcp
    - 8000/tcp
    - 8090/tcp
    - 8040/tcp
You can also use the code below to prompt for the ports when they are not fixed, passing them in as a variable:
- hosts: localhost
  gather_facts: no
  vars_prompt:
    - name: ports
      prompt: "Enter port(s) number"
      private: no
  tasks:
    - name: add port
      firewalld:
        port: "{{ item }}"
        permanent: yes
        immediate: yes
        state: enabled
      with_items: "{{ ports.split(',') }}"
Regarding reloading firewalld: it is mentioned in the module documentation that we can't reload firewalld using the state parameter, so use the systemd module as below:
- name: reload service firewalld
  systemd:
    name: firewalld
    state: reloaded
The firewalld module has an immediate option, which performs the same reload as the firewall-cmd CLI tool:
- name: Add port to firewalld
  firewalld:
    port: "{{ item }}"
    permanent: yes
    state: enabled
    immediate: true
You can use the service or the systemd module.
# Supports init systems including BSD init, OpenRC, SysV, Solaris SMF, systemd, upstart.
- name: Restart service
  service:
    name: firewalld
    state: restarted

# Controls systemd services on remote hosts.
- name: Restart service
  systemd:
    name: firewalld
    state: restarted
    daemon_reload: yes
I'm a bit late, but given that all previous answers seem to just speculate, I will give another input. Firewalld is not reloaded with service or systemctl commands but rather with its own specific command:
firewall-cmd --reload
This is because that way you can load new rules without interrupting any active network connections as would be the case when using iptables directly.
Given this, I think using service or systemctl is not a good solution.
So if you just want to create a task, I suggest using Ansible's command module to execute this command. Or you could write a handler like so:
- name: reload firewalld
  command: firewall-cmd --reload
Just put the handler in the handlers/main.yml file inside your role. Then in your tasks you can call that handler with:
notify: reload firewalld
That way Ansible executes the handler only once, at the end of your run. I successfully tested this on RHEL 7.
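For example, a task notifying that handler could look like this (a minimal sketch; the port is illustrative):
- name: Add port to firewalld
  firewalld:
    port: 8080/tcp
    permanent: yes
    state: enabled
  notify: reload firewalld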
If you are using the permanent option, you can also use the immediate option.
Example:
- name: Apply Base Services
  ansible.posix.firewalld:
    service: "{{ item }}"
    state: enabled
    zone: public
    permanent: yes
    immediate: true
  loop:
    - http
    - https
After these rules are applied, firewalld will reload automatically.
You already got a number of excellent answers. There is yet another possible approach (although the reloading part is the same as in cstoll's answer).
If you are certain that nobody and nothing else but Ansible will ever manipulate firewalld rules, you can use a template to directly generate the zone XML files in /etc/firewalld/zones. You will still need to add
notify: reload firewalld
and the corresponding handler, as in cstoll's answer.
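A minimal sketch of that approach (the template name and zone are assumptions):
- name: Deploy firewalld public zone definition
  template:
    src: public.xml.j2
    dest: /etc/firewalld/zones/public.xml
    owner: root
    group: root
    mode: 0644
  notify: reload firewalld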
The main advantage of this approach is that it can be dramatically faster and simpler than adding the rules one at a time.
The drawback of this approach is that it will not preserve any rules added to firewalld outside of Ansible. A second drawback is that it will not do any error checking; you can create invalid zone files easily. The firewall-cmd command (and thus the firewalld module) will verify the validity of each rule. For instance, it checks that zones do not overlap.

ansible check non-existent service [duplicate]

I have an Ansible playbook for deploying a Java app as an init.d daemon.
Being a beginner in both Ansible and Linux I'm having trouble to conditionally execute tasks on a host based on the host's status.
Namely I have some hosts having the service already present and running where I want to stop it before doing anything else. And then there might be new hosts, which don't have the service yet. So I can't simply use service: name={{service_name}} state=stopped, because this will fail on new hosts.
How can I achieve this? Here's what I have so far:
- name: Check if Service Exists
  shell: "if chkconfig --list | grep -q my_service; then echo true; else echo false; fi;"
  register: service_exists

# This should only execute on hosts where the service is present
- name: Stop Service
  service: name={{service_name}} state=stopped
  when: service_exists
  register: service_stopped

# This too
- name: Remove Old App Folder
  command: rm -rf {{app_target_folder}}
  when: service_exists

# This should be executed on all hosts, but only after the service has stopped, if it was present
- name: Unpack App Archive
  unarchive: src=../target/{{app_tar_name}} dest=/opt
See the service_facts module, new in Ansible 2.5.
- name: Populate service facts
  service_facts:

- debug:
    msg: Docker installed!
  when: "'docker' in services"
Of course I could also just check if the wrapper script exists in /etc/init.d. So this is what I ended up with:
- name: Check if Service Exists
  stat: path=/etc/init.d/{{service_name}}
  register: service_status

- name: Stop Service
  service: name={{service_name}} state=stopped
  when: service_status.stat.exists
  register: service_stopped
I modified Florian's answer to only use the return code of the service command (this worked on Mint 18.2):
- name: Check if Logstash service exists
  shell: service logstash status
  register: logstash_status
  failed_when: not (logstash_status.rc == 3 or logstash_status.rc == 0)

- name: Stop Logstash service if it is running
  service:
    name: logstash
    state: stopped
  when: logstash_status.rc == 0
It would be nice if the "service" module could handle "unrecognized service" errors.
This is my approach, using the service command instead of checking for an init script:
- name: check for apache
  shell: "service apache2 status"
  register: _svc_apache
  failed_when: >
    _svc_apache.rc != 0 and ("unrecognized service" not in _svc_apache.stderr)

- name: disable apache
  service: name=apache2 state=stopped enabled=no
  when: "_svc_apache.rc == 0"
This checks the exit code of service status and accepts a non-zero exit code when the output contains "unrecognized service". If the exit code was 0, the service is installed (stopped or running).
Another approach for systemd (from Jakuje):
- name: Check if cups-browsed service exists
  command: systemctl cat cups-browsed
  check_mode: no
  register: cups_browsed_exists
  changed_when: False
  failed_when: cups_browsed_exists.rc not in [0, 1]

- name: Stop cups-browsed service
  systemd:
    name: cups-browsed
    state: stopped
  when: cups_browsed_exists.rc == 0
Building on @Maciej's answer for RedHat 8, and combining it with the comments made on it, this is how I managed to stop Celery only if it had already been installed:
- name: Populate service facts
  service_facts:

- debug:
    msg: httpd installed!
  when: ansible_facts.services['httpd.service'] is defined

- name: Stop celery.service
  service:
    name: celery.service
    state: stopped
    enabled: true
  when: ansible_facts.services['celery.service'] is defined
You can drop the debug statement; it's there just to confirm that ansible_facts is working.
This approach, using only the service module, has worked for us:
- name: Disable *service_name*
  service:
    name: *service_name*
    enabled: no
    state: stopped
  register: service_command_output
  failed_when: >
    service_command_output|failed
    and 'unrecognized service' not in service_command_output.msg
    and 'Could not find the requested service' not in service_command_output.msg
My few cents: the same approach as above, but for Kubernetes.
Check if the kubelet service is running:
- name: "Obtain state of kublet service"
command: systemctl status kubelet.service
register: kubelet_status
failed_when: kubelet_status.rc > 3
Display a debug message if the kubelet service is not running:
- debug:
    msg: "{{ kubelet_status.stdout }}"
  when: "'running' not in kubelet_status.stdout"
You can use the service_facts module since Ansible 2.5, but you need to know that the output contains the real names of the services, like docker.service or something@.service. So you have different options, like:
- name: Populate service facts
  service_facts:

- debug:
    msg: Docker installed!
  when: "'docker.service' in services"
Or you can search for the string beginning with the service name; that is much more reliable, because the service names are different between the distributions:
- name: Populate service facts
  service_facts:

- debug:
    msg: Docker installed!
  when: "services.keys()|list|select('search', '^docker')|length > 0"

ansible to restart network service

I copy-pasted this from the manual and it fails in my playbook (version 2.0.2):
- service: name=network state=restarted args=eth0
I am getting this error:
"msg": "Failed to stop eth0.service: Unit eth0.service not loaded.\nFailed to start eth0.service: Unit eth0.service failed to load: No such file or directory.\n"}
What is the correct syntax, please?
Just do it like this (as @nasr already commented):
- name: Restart network
  service:
    name: network
    state: restarted
But if you change the network configuration before the restart, something like an IP address, Ansible hangs after the restart because the connection is lost (the IP address changed).
There is a way to do things right.
tasks.yml
- name: net configuration step 1
  debug:
    msg: we changed some files
  notify: restart systemd-networkd

- name: net configuration step 2
  debug:
    msg: do some more work, but restart net services only once
  notify: restart systemd-networkd
handlers.yml
- name: restart systemd-networkd
  systemd:
    name: systemd-networkd
    state: restarted
  async: 120
  poll: 0
  register: net_restarting

- name: check restart systemd-networkd status
  async_status:
    jid: "{{ net_restarting.ansible_job_id }}"
  register: async_poll_results
  until: async_poll_results.finished
  retries: 30
  listen: restart systemd-networkd
As of Ansible 2.7.8, you have to make the following changes to restart the network.
Note: I tried this on Ubuntu 16.04
Scenario 1: Only network restart
- name: Restarting network so the new IP address takes effect
  become: yes
  service:
    name: networking
    state: restarted
Scenario 2: Flush interface and then restart network
- name: Flushing Interface
  become: yes
  shell: sudo ip addr flush "{{ net_interface }}"

- name: Restarting Network
  become: yes
  service:
    name: networking
    state: restarted
Note: Make sure you have net_interface configured and then imported in the file where you execute this Ansible task.
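For instance, net_interface might be defined in a vars file (a hypothetical example):
# vars/main.yml
net_interface: eth0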
- command: /etc/init.d/network restart
does work wonderfully, but I feel that using command kind of defeats the purpose of using Ansible.
I'm using Ubuntu 22.04.1 LTS, which uses systemd instead of init.
The following worked fine for me (I tried the solutions mentioned earlier, but none of them worked):
- name: restart network
  systemd:
    name: NetworkManager
    state: restarted

How to stop Ansible from starting then restarting a service?

In many of my Ansible roles I have a handler to restart a service and a task to make sure the service is enabled and started. On a first run Ansible will start my service (Ensure mongodb is started and enabled) and then at the end run the restart handler. How do I tell Ansible to start it only once?
Sample play:
---
- name: Install MongoDB package
  yum: name="mongodb-org-{{ mongodb_version }}" state=present

- name: Configure mongodb
  template: src=mongod.conf.j2 dest=/etc/{{ mongodb_config_file }} owner=root group=root mode=0644
  notify: mongodb restart

- name: Ensure mongodb is started and enabled
  service: name={{ mongodb_daemon_name }} state=started enabled=yes
Insert a flush_handlers task just in front of the service start task to run the restart handler. Like:
- name: flush handlers
  meta: flush_handlers

## your service start task
- name: Ensure mongodb is started and enabled
  service: name={{ mongodb_daemon_name }} state=started enabled=yes
If handlers are notified, the handler will restart the services (I assume you have handlers that restart the services).
If handlers are not notified, the Ensure service started task will start the service if it is not already started.
It's been a while since I asked this, but if it helps, the solution I ended up with is below.
The trick is to:
1. Split the service task in the tasks/main.yml file, so the parameters state: started and enabled: yes each get their own task
2. For the Ensure mongodb is started task, register a mongodb_service_started variable
3. Use the newly registered variable mongodb_service_started in the when clause of the Restart mongodb handler, so it only runs when the Ensure mongodb is started task did NOT change
This ensures that your application does not get started and then subsequently restarted in the same play, and prevents your application from being stopped while it is running some critical function on startup, for example database migrations. Restarting a service right after it has been started can cause an inconsistent state.
tasks/main.yml
---
- name: Install MongoDB package
  yum:
    name: "mongodb-org-{{ mongodb_version }}"
    state: present
  notify: Restart mongodb

- name: Configure mongodb
  template:
    src: mongod.conf.j2
    dest: "/etc/{{ mongodb_config_file }}"
    owner: root
    group: root
    mode: 0644
  notify: Restart mongodb

- name: Ensure mongodb is started
  service:
    name: "{{ mongodb_daemon_name }}"
    state: started
  register: mongodb_service_started

- name: Ensure mongodb is enabled
  service:
    name: "{{ mongodb_daemon_name }}"
    enabled: yes
handlers/main.yml
---
- name: Restart mongodb
  service:
    name: "{{ mongodb_daemon_name }}"
    state: restarted
  when: not mongodb_service_started.changed
Maybe you are looking for something like this:
---
- name: Install MongoDB package
  yum: name="mongodb-org-{{ mongodb_version }}" state=present
  register: mongodb_install

- name: Configure mongodb
  template: src=mongod.conf.j2 dest=/etc/{{ mongodb_config_file }} owner=root group=root mode=0644
  notify: mongodb restart

- name: Ensure mongodb is started and enabled
  service: name={{ mongodb_daemon_name }} state=started enabled=yes
  when: mongodb_install.changed
I can think of three ways.
Option #1. Remove the state from the task that enables the service and move it to a handler.
According to the Ansible documentation, "notify handlers are always run in the same order they are defined." Since started is idempotent and restarted always starts the service, the second handler (start) becomes a no-op whenever the first one (restart) has already run.
# roles/mongodb/tasks/main.yml
- name: Install MongoDB package
  yum: name="mongodb-org-{{ mongodb_version }}" state=present

- name: Configure mongodb
  template: src=mongod.conf.j2 dest=/etc/{{ mongodb_config_file }} owner=root group=root mode=0644
  notify: mongodb restart

- name: Ensure mongodb is started and enabled
  service: name={{ mongodb_daemon_name }} enabled=yes
  notify: mongodb start

# roles/mongodb/handlers/main.yml
- name: mongodb restart
  service: name={{ mongodb_daemon_name }} state=restarted

- name: mongodb start
  service: name={{ mongodb_daemon_name }} state=started
However, handlers are only triggered if the task is changed, so this doesn't ensure that the service is running if neither of those tasks are considered changed. For instance, if the service was already installed, enabled, and configured earlier, but someone has logged on to the host and shut it down manually, then it won't be started when this playbook is run.
Option #2. Add a post_task that always runs and ensures the service has been started.
Handlers are flushed before post tasks, so if the configuration has been changed, either during first installation or because it changed, the service will either be started or restarted. However, if nothing has changed and the service isn't running for some reason, it will explicitly be started during the post_task.
# roles/mongodb/tasks/main.yml
- name: Install MongoDB package
  yum: name="mongodb-org-{{ mongodb_version }}" state=present

- name: Configure mongodb
  template: src=mongod.conf.j2 dest=/etc/{{ mongodb_config_file }} owner=root group=root mode=0644
  notify: mongodb restart

- name: Ensure mongodb is started and enabled
  service: name={{ mongodb_daemon_name }} enabled=yes

# In your playbook, e.g., site.yml
- hosts: all
  roles:
    - mongodb
  post_tasks:
    - name: Ensure mongodb is running
      service: name={{ mongodb_daemon_name }} state=started
Option #3. Like Option #1, except always trigger the "start" handler
# roles/mongodb/tasks/main.yml
# ...same as Option #1 above...
- name: Ensure mongodb is started and enabled
  service: name={{ mongodb_daemon_name }} enabled=yes
  notify: mongodb start
  changed_when: yes
As long as seeing a "change" in your output doesn't bother you, this will work the way you want.
If you have a handler (in the handlers directory of your role), then you do not need an additional task to start/enable mongo. In your handler you should not start but restart (state=restarted). When notified for the first time it will start and enable your service, and in the future, when the service is already running, it will restart it.
- name: mongodb restart
  service: name={{ mongodb_daemon_name }} state=restarted enabled=yes
No additional task required.
