I am trying to reboot a machine using Ansible and want to display a broadcast message to users when the reboot is triggered by Ansible. I am using the lines below in my playbook, but it is not displaying any message to users, not even the default one.
- name: Machine Reboot
  reboot:
    msg: "Reboot triggered by Ansible"
You can add a command: wall task to your playbook before doing the actual reboot.
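For example, directly as tasks before the reboot (a minimal sketch; the broadcast text is arbitrary):

- name: Broadcast reboot message
  become: true
  command: wall "Reboot triggered by Ansible"

- name: Machine Reboot
  become: true
  reboot: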
For handlers it can look like this:

handlers:
  - name: reboot notify
    become: true
    command: wall reboot with love from ansible
    listen: reboot

  - name: reboot
    become: true
    reboot:
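A task in the play can then trigger both handlers at once through the shared "reboot" topic, for example (the task shown is only a placeholder that always reports a change):

tasks:
  - name: Apply a change that requires a reboot
    command: /bin/true
    changed_when: true
    notify: reboot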
I am using Ansible to configure some VMs.
The problem I am facing right now is that I can't execute Ansible commands right after the VMs have just started; it gives a connection timeout error. This happens when I run Ansible right after the VMs are spun up in GCP.
The commands work fine when I execute the playbook after 60 seconds, but I am looking for a way to do this automatically instead of manually waiting 60 seconds before running it, so I can execute right after the VMs are spun up and Ansible will wait until they are ready. I don't want to add a fixed delay to the Ansible tasks either.
I am looking for a dynamic way where Ansible tries to execute the playbook and, when it fails, doesn't show an error but waits until the VMs are ready.
I used this, but it still doesn't work (as it fails)
---
- hosts: all
  tasks:
    - name: Wait for connection
      wait_for_connection: # but this still fails, am I doing this wrong?

    - name: Ping all hosts for connectivity check
      ping:
Can someone please help me?
I have the same issue on my side.
I've fixed this with the wait_for task.
The basic way is to wait for the SSH connection, like this:
- name: Wait 300 seconds for port 22 to become open and contain "OpenSSH"
  wait_for:
    port: 22
    host: '{{ (ansible_ssh_host|default(ansible_host))|default(inventory_hostname) }}'
    search_regex: OpenSSH
    delay: 10
  connection: local
I guess your VM launches an application/service, so you can also watch a log file on the VM until the application has started, like this for example (here for a Nexus container):
- name: Wait until the container is started and running
  become: yes
  become_user: "{{ ansible_nexus_user }}"
  wait_for:
    path: "{{ ansible_nexus_directory_data }}/log/nexus.log"
    search_regex: ".*Started Sonatype Nexus.*"
I believe what you are looking for is to postpone gather_facts until the server is up, as that otherwise will time out as you experienced. Your file could work as follows:
---
- hosts: all
  gather_facts: no
  tasks:
    - name: Wait for connection (600s default)
      ansible.builtin.wait_for_connection:

    - name: Gather facts manually
      ansible.builtin.setup:
I have these under pre_tasks instead of tasks, but it should probably work if they are first in your file.
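For reference, a sketch of the pre_tasks variant (same modules, only moved; the trailing ping is just a stand-in for the real work):

---
- hosts: all
  gather_facts: no
  pre_tasks:
    - name: Wait for connection (600s default)
      ansible.builtin.wait_for_connection:

    - name: Gather facts manually
      ansible.builtin.setup:
  tasks:
    - name: Continue with the rest of the play
      ansible.builtin.ping: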
I poked around a bit here but didn't see anything that quite matched up to what I am trying to accomplish, so here goes.
So I've put together my first Ansible playbook which opens or closes one or more ports on the firewall of one or more hosts, for one or more specified IP addresses. Works great so far. But what I want to do is restart the firewall service after all the tasks for a given host are complete (with no errors, of course).
NOTE: The hostvars/localhost references just hold vars_prompt input from the user in a task list above this one. I store the prompted data in hosts: localhost, build a dynamic host list based on what the user entered, and then have a separate task list to actually do the work.
So:
- name: Execute remote firewall-cmd for each host in "dynamically created host group"
  hosts: dynamically_created_host_list
  gather_facts: no
  tasks:
    - set_fact:
        hostList: "{{ hostvars['localhost']['hostList'] }}"
    - set_fact:
        portList: "{{ hostvars['localhost']['portList'] }}"
    - set_fact:
        portStateRequested: "{{ hostvars['localhost']['portStateRequested'] }}"
    - set_fact:
        portState: "{{ hostvars['localhost']['portState'] }}"
    - set_fact:
        remoteIPs: "{{ hostvars['localhost']['remoteIPs'] }}"

    - name: Invoke firewall-cmd remotely
      firewalld:
        # .. module-specific stuff here ...
      with_nested:
        - "{{ remoteIPs.split(',') }}"
        - "{{ portList.split(',') }}"
      register: requestStatus
In my original version of the script, which only did 1 port for 1 host for 1 IP, I just did:
- name: Reload firewalld
  when: requestStatus.changed
  systemd:
    name: firewalld
    state: reloaded
But I don't think that will work as easily here because of the nesting. For example, let's say I want to open port 9999 for a remote IP address of 1.1.1.1 on 10 different hosts, and the 5th host has an error for some reason. I may not want to restart the firewall service at that point.
Actually, now that I think about it, I guess that in that scenario there would be 4 new entries in the firewall config and 6 that didn't take because of the error. Now I'm wondering if I need to track the successes and have a rescue block within the playbook to back out the entries that did go through.
Grrr.... any ideas? Sorry, new to Ansible here. Plus, I hate YAML for things like this. :D
Thanks in advance for any guidance.
It looks to me like what you are looking for is what Ansible calls handlers.
As we’ve mentioned, modules should be idempotent and can relay when they have made a change on the remote system. Playbooks recognize this and have a basic event system that can be used to respond to change.
These ‘notify’ actions are triggered at the end of each block of tasks in a play, and will only be triggered once even if notified by multiple different tasks.
For instance, multiple resources may indicate that apache needs to be restarted because they have changed a config file, but apache will only be bounced once to avoid unnecessary restarts.
Note that handlers are simply a pair of
A notify attribute on one or multiple tasks
A handler, with a name matching your above mentioned notify attribute
So your playbook should look like this:
- name: Execute remote firewall-cmd for each host in "dynamically created host group"
  hosts: dynamically_created_host_list
  gather_facts: no
  tasks:
    # set_fact removed for concision

    - name: Invoke firewall-cmd remotely
      firewalld:
        # .. module-specific stuff here ...
      with_nested:
        - "{{ remoteIPs.split(',') }}"
        - "{{ portList.split(',') }}"
      notify: Reload firewalld

  handlers:
    - name: Reload firewalld
      systemd:
        name: firewalld
        state: reloaded
I copy-pasted this from the manual and it fails in my playbook (version 2.0.2):
- service: name=network state=restarted args=eth0
I am getting this error:
"msg": "Failed to stop eth0.service: Unit eth0.service not loaded.\nFailed to start eth0.service: Unit eth0.service failed to load: No such file or directory.\n"}
What is the correct syntax, please?
Just do it like this (as #nasr already commented):
- name: Restart network
  service:
    name: network
    state: restarted
But if you change the network configuration before the restart, something like the IP address, then after the restart Ansible hangs because the connection is lost (the IP address changed).
There is a way to do things right.
tasks.yml
- name: net configuration step 1
  debug:
    msg: we changed some files
  notify: restart systemd-networkd

- name: net configuration step 2
  debug:
    msg: do some more work, but restart net services only once
  notify: restart systemd-networkd
handlers.yml
- name: restart systemd-networkd
  systemd:
    name: systemd-networkd
    state: restarted
  async: 120
  poll: 0
  register: net_restarting

- name: check restart systemd-networkd status
  async_status:
    jid: "{{ net_restarting.ansible_job_id }}"
  register: async_poll_results
  until: async_poll_results.finished
  retries: 30
  listen: restart systemd-networkd
As of Ansible 2.7.8, you have to make the following changes to restart the network.
Note: I tried this on Ubuntu 16.04
Scenario 1: Only network restart
- name: Restarting network so the new IP address takes effect
  become: yes
  service:
    name: networking
    state: restarted
Scenario 2: Flush interface and then restart network
- name: Flushing Interface
  become: yes
  shell: sudo ip addr flush "{{ net_interface }}"

- name: Restarting Network
  become: yes
  service:
    name: networking
    state: restarted
Note: Make sure you have net_interface configured and then imported in the file where you execute this Ansible task.
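For example (the vars file name and the interface value below are purely hypothetical):

# network_vars.yml
net_interface: ens4

# playbook using it
- hosts: all
  vars_files:
    - network_vars.yml
  tasks:
    - name: Flushing Interface
      become: yes
      shell: ip addr flush "{{ net_interface }}"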
- command: /etc/init.d/network restart
does work wonderfully but I feel that using command kinda defeats the purpose of using ansible.
I'm using Ubuntu 22.04.1 LTS, which uses systemd instead of init.
The following worked fine for me (I tried the solutions mentioned earlier but none of them worked for me):
- name: restart network
  systemd:
    name: NetworkManager
    state: restarted
I have a playbook with the following task and handler section (just a snippet):
tasks:
  - name: 'Run legacy script and power off'
    debug: msg="Preparing for reboot"
    notify: Legacy sysprep

handlers:
  - name: Enable Service1
    service: name=service1 enabled=yes state=restarted

  - name: Legacy sysprep
    shell: /var/scripts/prep-reboot.sh
When I run the playbook, I see the debug message for the task that calls the Legacy sysprep handler, and I see the Enable Service1 handler executed, but the Legacy sysprep handler isn't called (it doesn't show up in the playbook output, nor does it run on the system) and the servers are not rebooted (part of the script).
Yes, I plan to migrate the prep-reboot.sh script to an Ansible playbook, but I was surprised that the shell module apparently doesn't work here. Or is there an error I've overlooked? Running with -vvv doesn't report anything unexpected.
Ansible and Ansible-playbook version 2.1.1.0 running on RHEL 6.8.
Thank you, #techraf, you got me pointed in the right direction. I ended up adding changed_when: true to the debug line, which forces that task to register a change and then triggers the appropriate handler.
Here is my actual test playbook for reference:
---
- name: Testing forced handler
  hosts: testsys_only
  gather_facts: True
  tasks:
    - name: 'Run legacy script and power off'
      debug: msg="Preparing for reboot"
      changed_when: true
      notify: Legacy sysprep

  handlers:
    - name: Enable Service1
      service: name=service1 enabled=yes state=restarted

    - name: Legacy sysprep
      shell: /var/scripts/prep-reboot.sh
Installing ntp with Ansible, I notify a handler in order to start the ntpd service:
Task:
---
# roles/common/tasks/ntp.yml
- name: ntp | installing
  yum: name=ntp state=latest
  notify: start ntp
Handler:
---
# roles/common/handlers/main.yml
- name: start ntp
  service: name=ntpd state=started
If the service has not been installed, Ansible installs and starts it.
If the service has already been installed but is not running, it does not notify the handler: the status of the task is changed: false.
That means I cannot start it if it was already present in the OS.
Is there any good practice that ensures the service is installed and in a running state?
PS: I could do this:
---
# roles/common/tasks/ntp.yml
- name: ntp | installing
  yum: name=ntp state=latest
  notify: start ntp
  changed_when: true
but I am not sure that it is good practice.
From the Intro to Playbooks guide:
As we’ve mentioned, modules are written to be ‘idempotent’ and can relay when they have made a change on the remote system. Playbooks recognize this and have a basic event system that can be used to respond to change.
These ‘notify’ actions are triggered at the end of each block of tasks in a playbook, and will only be triggered once even if notified by multiple different tasks.
Handlers only run on change by design. If you change a configuration you often need to restart a service, but don't want to if nothing has changed.
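For instance, a configuration change that notifies a restart handler might look like this (a minimal sketch; the template and file names here are just placeholders):

tasks:
  - name: ntp | configure
    template:
      src: ntp.conf.j2
      dest: /etc/ntp.conf
    notify: restart ntp

handlers:
  - name: restart ntp
    service:
      name: ntpd
      state: restarted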
What you want is to start a service if it is not already running. To do this you should use a regular task, as described by #udondan:
- name: ntp | installing
  yum:
    name: ntp
    state: latest

- name: ntp | starting
  service:
    name: ntpd
    state: started
    enabled: yes
Ansible is idempotent by design, so this second task will only report a change (and actually start the service) if ntp is not already running. The enabled line sets the service to start on boot. Remove that line if this is not the desired behavior.
Why don't you just add a service task then? A handler is usually for restarting a service after its configuration has changed. To ensure a service is running no matter what, just add a task like this:
- name: Ensure ntp is running
  service:
    name: ntpd
    state: started