Ansible to restart network service

I copy-pasted this from the manual and it fails in my playbook (version 2.0.2):
- service: name=network state=restarted args=eth0
I am getting this error:
"msg": "Failed to stop eth0.service: Unit eth0.service not loaded.\nFailed to start eth0.service: Unit eth0.service failed to load: No such file or directory.\n"}
What is the correct syntax, please?

Just do it like this (as nasr already commented):
- name: Restart network
  service:
    name: network
    state: restarted
But if you change the network configuration before the restart (for example, the IP address), Ansible hangs after the restart because the connection is lost.
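A common workaround for that hang (a sketch, not part of the original answer) is to kick off the restart asynchronously and then wait for the connection to come back; this assumes `ansible_host` already resolves to the address the host will have after the restart:

```yaml
- name: Restart network without blocking the connection
  service:
    name: network
    state: restarted
  async: 60   # let the restart run in the background
  poll: 0     # do not wait on the same connection

- name: Wait for the host to become reachable again
  wait_for_connection:
    delay: 5
    timeout: 120
```

The second task retries the connection until it succeeds, so the play survives the brief outage caused by the restart.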

There is a way to do this properly.
tasks.yml
- name: net configuration step 1
  debug:
    msg: we changed some files
  notify: restart systemd-networkd

- name: net configuration step 2
  debug:
    msg: do some more work, but restart net services only once
  notify: restart systemd-networkd
handlers.yml
- name: restart systemd-networkd
  systemd:
    name: systemd-networkd
    state: restarted
  async: 120
  poll: 0
  register: net_restarting

- name: check restart systemd-networkd status
  async_status:
    jid: "{{ net_restarting.ansible_job_id }}"
  register: async_poll_results
  until: async_poll_results.finished
  retries: 30
  listen: restart systemd-networkd

As of Ansible 2.7.8, you have to make the following changes to restart the network.
Note: I tried this on Ubuntu 16.04.
Scenario 1: Only a network restart
- name: Restart network so the new IP address takes effect
  become: yes
  service:
    name: networking
    state: restarted
Scenario 2: Flush the interface, then restart the network
- name: Flush interface
  become: yes
  shell: ip addr flush "{{ net_interface }}"

- name: Restart network
  become: yes
  service:
    name: networking
    state: restarted
Note: Make sure net_interface is defined and imported in the file where you execute this Ansible task. (The shell task runs under become, so a sudo prefix in the command is unnecessary.)

- command: /etc/init.d/network restart
does work wonderfully, but using command somewhat defeats the purpose of using Ansible.
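If you do fall back to the command module, you can at least keep its change reporting honest; a minimal sketch (assuming a SysV-style init script exists at that path):

```yaml
- name: Restart network via init script (last resort)
  command: /etc/init.d/network restart
  changed_when: true  # a restart always changes runtime state
```

Unlike the service module, this gains no idempotence, which is why the service-based answers above are preferable.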

I'm using Ubuntu 22.04.1 LTS, which uses systemd instead of init.
The following worked fine for me (I tried the solutions mentioned earlier, but none of them worked):
- name: restart network
  systemd:
    name: NetworkManager
    state: restarted

Related

Ansible - Execute task based on Linux service status

How can I write an Ansible playbook to start my nginx service only if firewalld.service is in a stopped state?
Note: Execution is done on a sandbox server, so there are no concerns about firewalld being stopped.
You can check the service status, register a variable, and start the service based on the result.
You can also use the service_facts module to get the service status.
Example:
- name: Populate service facts
  service_facts:

- name: start nginx if firewalld.service is stopped
  service:
    name: nginx
    state: started
  when: ansible_facts.services['firewalld.service'].state == 'stopped'
Note: I have not tested this; modify it accordingly.
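Since the exact key names under ansible_facts.services can vary between systems (e.g. 'firewalld.service' vs. 'firewalld'), it can help to inspect what service_facts actually returns before writing the condition; a small sketch:

```yaml
- name: Populate service facts
  service_facts:

- name: Show the firewalld entry as Ansible sees it
  debug:
    var: ansible_facts.services['firewalld.service']
```

The debug output includes the state field (e.g. running or stopped) that the when condition tests.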
As already proposed by @Telinov, you can get facts for services and start your own service accordingly:
---
- name: Populate service facts
  service_facts:

- name: start nginx if firewalld.service is stopped
  service:
    name: nginx
    state: started
  when: ansible_facts.services['firewalld.service'].state == 'stopped'
Meanwhile, Ansible is about describing state. If you need firewalld stopped and nginx started, it is much easier to just say so:
- name: Make sure firewalld is stopped
  service:
    name: firewalld
    state: stopped

- name: Make sure nginx is started
  service:
    name: nginx
    state: started
You could be even smarter by combining the above and opening the firewall ports for nginx if the firewall is running.
---
- name: Populate service facts
  service_facts:

- name: Make sure ports 80 and 443 are open if the firewall is running
  firewalld:
    port: "{{ port }}/tcp"
    state: enabled
    permanent: true
    immediate: true
  loop:
    - 80
    - 443
  loop_control:
    loop_var: port
  when: ansible_facts.services['firewalld.service'].state == 'running'

- name: Make sure nginx is started
  service:
    name: nginx
    state: started

Unable to get part of a play to run locally in Ansible

This is all running within AWX, which is hosted on-prem. I'm trying to manage some EC2 instances within AWS. I've set up the bastion jump and can get all my other plays to work correctly.
However, there is one simple job template I want to provide to a few devs. Essentially, when they make a change to the code, it lets them clear the opcache and invalidate the specific files in CloudFront.
I want the CloudFront API call (cloudfront_invalidation module) to run locally off AWX and then, if this is successful, notify the two web server instances to restart their PHP and Apache processes.
---
- name: Restart httpd and php-fpm
  remote_user: ec2-user
  hosts: all
  become: true
  tasks:
    - name: Invalidate paths in CloudFront
      cloudfront_invalidation:
        distribution_id: "{{ distribution_id }}"
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
        target_paths: "{{ cloudfront_invalidations.split('\n') }}"
      delegate_to: 127.0.0.1
      notify:
        - Restart service httpd
        - Restart service php-fpm
  handlers:
    - name: Restart service httpd
      service:
        name: httpd
        state: restarted
    - name: Restart service php-fpm
      service:
        name: php-fpm
        state: restarted
However, when running the play it seems to ignore the delegate_to setting and instead runs the invalidation twice, once for each host. I'm unsure whether it's actually running locally. I've tried adding the run_once flag, but that only restarted httpd and PHP on one host.
Any ideas?
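One way to get the intended behavior (a sketch, untested, reusing the variables from the question) is to split the job template into two plays: delegate_to only changes where a task executes, not how many times it is scheduled, which is why the invalidation ran once per host. Running the invalidation in its own localhost play makes it execute exactly once:

```yaml
---
- name: Invalidate CloudFront paths once, from the controller
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Invalidate paths in CloudFront
      cloudfront_invalidation:
        distribution_id: "{{ distribution_id }}"
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
        target_paths: "{{ cloudfront_invalidations.split('\n') }}"

- name: Restart httpd and php-fpm on the web servers
  hosts: all
  remote_user: ec2-user
  become: true
  tasks:
    - name: Restart service httpd
      service:
        name: httpd
        state: restarted
    - name: Restart service php-fpm
      service:
        name: php-fpm
        state: restarted
```

If the first play fails, the second never runs, which matches the "only restart if the invalidation succeeded" requirement.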

How to reload Firewalld service using Ansible?

I added some rules to firewalld on CentOS 7 with Ansible, but I must reload the firewalld daemon for the service to work properly. Any ideas?
Here is my ansible code:
- name: Add port to firewalld
  firewalld:
    port: "{{ item }}"
    permanent: yes
    state: enabled
  when: ansible_distribution == 'CentOS' or ansible_distribution == 'Red Hat Enterprise Linux'
  loop:
    - 8080/tcp
    - 8000/tcp
    - 8090/tcp
    - 8040/tcp
First of all, loop over the list of ports as below:
- name: Add port to firewalld
  firewalld:
    port: "{{ item }}"
    permanent: yes
    state: enabled
  when: ansible_distribution == 'CentOS' or ansible_distribution == 'Red Hat Enterprise Linux'
  loop:
    - 8080/tcp
    - 8000/tcp
    - 8090/tcp
    - 8040/tcp
You can also prompt for the ports if they are not fixed and use them as a variable:
- hosts: localhost
  gather_facts: no
  vars_prompt:
    - name: ports
      prompt: "Enter port(s) number"
      private: no
  tasks:
    - name: add port
      firewalld:
        port: "{{ item }}"
        permanent: yes
        immediate: yes
        state: enabled
      with_items: "{{ ports.split(',') }}"
Regarding reloading firewalld: as mentioned, we can't reload firewalld using the firewalld module's state parameter, so use the systemd module as below:
- name: reload service firewalld
  systemd:
    name: firewalld
    state: reloaded
The firewalld module has an immediate option, which performs the same reload as the firewall-cmd CLI tool:
- name: Add port to firewalld
  firewalld:
    port: "{{ item }}"
    permanent: yes
    state: enabled
    immediate: true
  loop:
    - 8080/tcp
You can use the service or systemd module.
# service supports init systems including BSD init, OpenRC, SysV, Solaris SMF, systemd, and upstart.
- name: Restart service
  service:
    name: firewalld
    state: restarted

# systemd controls systemd services on remote hosts.
- name: Restart service
  systemd:
    name: firewalld
    state: restarted
    daemon_reload: yes
I'm a bit late, but given that all the previous answers seem to speculate, I will give another input. Firewalld is not reloaded with service or systemctl commands, but rather with its own specific command:
firewall-cmd --reload
This is because that way you can load new rules without interrupting any active network connections, as would happen when using iptables directly.
Given this, I think using service or systemctl is not a good solution.
So if you just want to create a task, I suggest using Ansible's command module to execute this command. Or you could write a handler like so:
- name: reload firewalld
  command: firewall-cmd --reload
Just put the handler in the handlers/main.yml file inside your role. Then in your tasks you can call that handler with:
notify: reload firewalld
That way, Ansible only executes the handler at the end of your run. I successfully tested this on RHEL 7.
If you are using the permanent option, you can add the immediate option.
Example:
- name: Apply Base Services
  ansible.posix.firewalld:
    service: "{{ item }}"
    state: enabled
    zone: public
    permanent: yes
    immediate: true
  loop:
    - http
    - https
With immediate: true, the rules are also applied to the runtime configuration right away, so no separate reload is needed.
You already got a number of excellent answers. There is yet another possible approach (although the reloading part is the same as in cstoll's answer).
If you are certain that nobody and nothing else but Ansible will ever manipulate firewalld rules, you can use a template to directly generate the zone XML files in /etc/firewalld/zones. You will still need to add
notify: reload firewalld
and the corresponding handler, as in cstoll's answer.
The main advantage of this approach is that it can be dramatically faster and simpler than adding the rules one at a time.
The drawback is that it will not preserve any rules added to firewalld outside of Ansible. A second drawback is that it does no error checking; you can easily create invalid zone files. The firewall-cmd command (and thus the firewalld module) verifies the validity of each rule; for instance, it checks that zones do not overlap.
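A sketch of that template approach (the template file name and zone are assumptions, not from the answer above):

```yaml
- name: Deploy the firewalld public zone definition
  template:
    src: public.xml.j2    # hypothetical Jinja2 template in the role
    dest: /etc/firewalld/zones/public.xml
    owner: root
    group: root
    mode: "0600"
  notify: reload firewalld
```

Because the template task reports changed only when the rendered file differs, the reload handler fires only when the zone definition actually changes.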

Issue with handler restarting service

I have an issue. I was handed a gaming service from a dev group in Bulgaria that was coded and built last fall. The only documentation I have for the platform is a video showing one of the devs running the playbooks to build the environment, and the video doesn't show which version of Ansible was used. I am trying to run the playbooks on Ubuntu 16.04 with Ansible 2.0.0.2. I am trying to execute the following:
- hosts: nler-example-nxx
  user: ubuntu
  tasks:
    - include: ../../templates/nler-nxx.yml
    - name: change nodes ip in nxx config
      replace:
        path: /www/nler-nxx/conf/nxx.properties
        regexp: '(\s+)nxx.defaultPeers=(.*)$'
        replace: '\1nxt.defaultPeers=52.202.223.114; 35.173.242.207; 34.238.136.137; 54.164.46.62; 54.86.17.225;'
      notify:
        - restart nxx service
  handlers:
    - name: restart nxx service
      become: yes
      systemd: name=nxx state=restarted
And get this error:
ERROR! no action detected in task
The error appears to have been in '/etc/ansible/playbook/niler/example/nxx.yml': line 15, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
handlers:
- name: restart nxx service
^ here
In doing some research, I am thinking this is a version conflict issue, but I have no way to confirm which version they were using.
Indentation is wrong. It should be:
tasks:
  - include: ../../templates/nler-nxx.yml
  ...

handlers:
  - name: restart nxx service
  ...

Ansible handler runs only if changed: true

Installing ntp with Ansible, I notify a handler in order to start the ntpd service:
Task:
---
# roles/common/tasks/ntp.yml
- name: ntp | installing
  yum: name=ntp state=latest
  notify: start ntp
Handler:
---
# roles/common/handlers/main.yml
- name: start ntp
  service: name=ntpd state=started
If the service has not been installed, Ansible installs and starts it.
If the service has already been installed but is not running, the task does not notify the handler: its status is changed: false.
That means I cannot start it if it was already present in the OS.
Is there a good practice to ensure that the service is installed and in a running state?
PS: I could do this:
---
# roles/common/tasks/ntp.yml
- name: ntp | installing
  yum: name=ntp state=latest
  notify: start ntp
  changed: true
but I am not sure that it is good practice.
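As an aside, there is no changed task keyword; the directive that overrides a task's change status is changed_when. A minimal sketch (generally discouraged, since it defeats Ansible's change reporting):

```yaml
- name: ntp | installing
  yum:
    name: ntp
    state: latest
  notify: start ntp
  changed_when: true  # always report changed, so the handler always fires
```

The answers below show the cleaner approach of using a regular service task instead.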
From the Intro to Playbooks guide:
As we’ve mentioned, modules are written to be ‘idempotent’ and can relay when they have made a change on the remote system. Playbooks recognize this and have a basic event system that can be used to respond to change.
These ‘notify’ actions are triggered at the end of each block of tasks in a playbook, and will only be triggered once even if notified by multiple different tasks.
Handlers only run on change, by design. If you change a configuration, you often need to restart a service, but you don't want to if nothing has changed.
What you want is to start a service if it is not already running. To do this, you should use a regular task, as described by @udondan:
- name: ntp | installing
  yum:
    name: ntp
    state: latest

- name: ntp | starting
  service:
    name: ntpd
    state: started
    enabled: yes
Ansible is idempotent by design, so this second task only makes a change if ntp is not already running. The enabled line sets the service to start on boot; remove it if that is not the desired behavior.
Why don't you just add a service task then? A handler is usually for restarting a service after its configuration has changed. To ensure a service is running no matter what, just add a task like this:
- name: Ensure ntp is running
  service:
    name: ntpd
    state: started
