Ping a host inside ansible playbook

I just want to ping a host (a DNS host) to check reachability. It looks like there is no proper way to do this; I'm not sure. Below is my playbook with net_ping:
---
- name: Set User
  hosts: web_servers
  gather_facts: false
  become: false
  vars:
    ansible_network_os: linux
  tasks:
    - name: Pinging Host
      net_ping:
        dest: 10.250.30.11
But I get:
TASK [Pinging Host] *******************************************************************************************************************
task path: /home/veeru/PycharmProjects/Miscellaneous/tests/ping_test.yml:10
ok: [10.250.30.11] => {
    "changed": false,
    "msg": "Could not find implementation module net_ping for linux"
}
With the ping module:
---
- name: Set User
  hosts: dns
  gather_facts: false
  become: false
  tasks:
    - name: Pinging Host
      action: ping
It looks like it is trying to SSH into the IP (checked in verbose mode). I don't know why. How can I do an ICMP ping? I also don't want to put the DNS IP in the inventory.
UPDATE 1:
Hmm, it looks like there is no support for linux in ansible_network_os:
https://www.reddit.com/r/ansible/comments/9dn5ff/possible_values_for_ansible_network_os/

You can use the ping command:
---
- hosts: all
  gather_facts: False
  connection: local
  tasks:
    - name: ping
      shell: ping -c 1 -w 2 8.8.8.8
      ignore_errors: true
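The task above simply ignores failures; if you want the play to actually report reachability, one variant (a sketch; 8.8.8.8 is just the example target from above, and ping_result is a name I chose) registers the result and keys off the return code:

```yaml
- name: ping
  shell: ping -c 1 -w 2 8.8.8.8
  register: ping_result
  ignore_errors: true

- name: report reachability
  debug:
    msg: "{{ 'reachable' if ping_result.rc == 0 else 'unreachable' }}"
```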

Try using the delegate_to directive to specify that this task should be executed on localhost. Ansible is probably trying to connect to those devices to execute the ping shell command. The following code sample works for me:
tasks:
  - name: ping test
    shell: ping -c 1 -w 2 {{ ansible_host }}
    delegate_to: localhost
    ignore_errors: true

You can also run/check ping latency with the following ad-hoc command in Ansible:
ansible all -m shell -a "ping -c3 google.com"

Related

Ansible SSH Proxy "Error reading SSH protocol banner"

When using an SSH proxy and the network_cli connection method, I've been getting an "Error reading SSH protocol banner" error in Ansible.
It seems like it may be the banner_timeout setting, which I can set if I write a netmiko script (and that works), but I don't think I can set it in a playbook.
This is what the playbook looks like:
- name: playbook
  hosts: target_dev_hostname
  gather_facts: no
  connection: network_cli
  vars:
    ansible_user: username
    ansible_ssh_pass: password
    ansible_become_pass: password
    ansible_network_os: ios
  tasks:
    - name: set mode for private key
      file:
        path: jump_host.pem
        mode: 0400
      delegate_to: localhost
    - name: config proxy
      set_fact:
        ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -i /jump_host.pem -W %h:%p -q jump_user@jump_host.me.com"'
      delegate_to: localhost
    - name: basic show run
      ios_command:
        commands: show run
      register: running_config
    - name: show run results
      debug: var=running_config
Any suggestions? It seems like this issue has been popping up in Ansible since about last year. I found this issue being reported:
https://github.com/ansible-collections/ansible.netcommon/issues/46
I should add that this same playbook works in Ansible 2.5.1.
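For what it's worth: if the root cause really is paramiko's banner timeout, newer ansible-core releases expose it as a connection option that can also be set in ansible.cfg. This is an assumption about the setup; the option does not exist on older versions, so check your version's paramiko_ssh connection plugin documentation first:

```ini
; ansible.cfg (sketch; assumes an ansible-core version that
; provides the paramiko banner_timeout connection option)
[paramiko_connection]
banner_timeout = 30
```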

Simple ansible example that connects to new server as root with password

I want to provision a new VPS. The way this is typically done: 1) try to log in as a non-root user, and 2) if that fails, perform the provisioning.
But I can't connect. I can't even log in as root. (I can ssh from the shell, so the password is correct.)
hosts:
[server]
42.42.42.42
playbook.yml:
---
- hosts: all
  vars:
    ROOT_PASSWORD: foo
  gather_facts: no
  tasks:
    - name: set root password
      set_fact: ansible_password={{ ROOT_PASSWORD }}
    - name: try login with password
      local_action: "command ssh -q -o BatchMode=yes -o ConnectTimeout=3 root@{{ inventory_hostname }} 'echo ok'"
      ignore_errors: true
      changed_when: false
    # more stuff here...
I tried the following, but none of them connect:
I stored the password in a variable like above
I prompted for the password using ansible-playbook -k playbook.yml
I moved the password to the inventory file
[server]
42.42.42.42 ansible_user=root ansible_password=foo
I added the ssh flag -o PreferredAuthentications=password to force password auth
But none of the above connects. I always get the error
root@42.42.42.42: Permission denied (publickey,password).
If I remove -o BatchMode=yes then it prompts me for a password, and does connect. But that prevents automation; the idea is to do this without user intervention.
What am I doing wrong?
This is a new vps, nothing is set up yet - so I'm looking for the simplest possible example of a playbook that connects using root and a password.
You're close. The variables with _ssh in the name (such as ansible_ssh_pass) are legacy names, so you can just use ansible_user and ansible_password instead.
If I have an inventory like this:
[server]
example ansible_host=192.168.122.148 ansible_user=root ansible_password=secret
Then I can run this command successfully:
$ ansible all -i hosts -m ping
example | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
If the above ad-hoc command works correctly, then a playbook should work correctly as well. E.g., still assuming the above inventory, I can use the following playbook:
---
- hosts: all
  gather_facts: false
  tasks:
    - ping:
And I can call it like this:
$ ansible-playbook playbook.yml -i hosts
PLAY [all] ***************************************************************************
TASK [ping] **************************************************************************
ok: [example]
PLAY RECAP ***************************************************************************
example : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
...and it all works just fine.
Try using --ask-become-pass
ansible-playbook -k playbook.yml --ask-become-pass
That way it's not hardcoded.
Also, inside the playbook you can invoke:
---
- hosts: all
  become: true
  gather_facts: no
All SO answers and blog articles I've seen so far recommend doing it the way I've shown.
But after spending much time on this, I don't believe it could work that way, so I don't understand why it is always recommended. I noticed that ansible has changed its API many times, and maybe that approach is simply outdated!
So I came up with an alternative, using sshpass:
hosts:
[server]
42.42.42.42
playbook.yml:
---
- hosts: all
  vars:
    ROOT_PASSWORD: foo
  gather_facts: no
  tasks:
    - name: try login with password (using out-of-band ssh connection)
      local_action: command sshpass -p {{ ROOT_PASSWORD }} ssh -q -o ConnectTimeout=3 root@{{ inventory_hostname }} 'echo ok'
      ignore_errors: true
      register: exists_user
    - name: ping if above succeeded (using in-band ssh connection)
      remote_user: root
      block:
        - name: set root ssh password
          set_fact:
            ansible_password: "{{ ROOT_PASSWORD }}"
        - name: ping
          ping:
            data: pong
      when: exists_user is success
This is just a tiny proof of concept.
The actual use case is to try to connect with a non-root user, and if that fails, to provision the server. The above is the starting point for such a playbook.
Unlike @larsks' excellent alternative, this does not assume Python is installed on the remote, and it performs the SSH connection test out of band, assisted by sshpass.

Using hosts: localhost and delegate_to causes kerberos UNREACHABLE! error?

These plays work completely on (non-tower) ansible command line, tower command line, but not in the tower GUI. I've trimmed it down to 3 plays. The first 2 work in the tower GUI, but not the 3rd play. I am obviously missing something basic ...
ping shows good connections
- name: works on all ansible versions
  hosts: comp1.private.net
  gather_facts: false
  tasks:
    - win_ping:

- name: works on all ansible versions
  hosts: localhost
  gather_facts: false
  tasks:
    - ping:

- name: doesn't work in tower GUI
  hosts: localhost
  gather_facts: false
  tasks:
    - win_stat:
        path: C:\blah\blah
      delegate_to: comp1.private.net
It throws fatal: [localhost] UNREACHABLE! with a Kerberos cert error, which obviously refers to comp1.
What am I missing here??
Why did it work on the command lines? Sounds like a bug.
command line used: ansible-playbook -i inventory/inventory abovePlay.yml
(Update) I needed to add localhost to the inventory prior to the inventory import, and also add the variable ansible_connection: local.
Apparently the ansible command line (ansible-playbook) has an implicit default localhost, but the Tower GUI doesn't automatically import the host_vars/localhost file when using the tower-manage inventory-import command.
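In other words, the fix is to make the implicit localhost explicit in the static inventory before importing it into Tower. A minimal sketch (the group name is arbitrary):

```ini
; inventory file to be imported into Tower (sketch)
[local]
localhost ansible_connection=local
```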

Ansible task for checking that a host is really offline after shutdown

I am using the following Ansible playbook to shut down a list of remote Ubuntu hosts all at once:
- hosts: my_hosts
  become: yes
  remote_user: my_user
  tasks:
    - name: Confirm shutdown
      pause:
        prompt: >-
          Do you really want to shutdown machine(s) "{{play_hosts}}"? Press
          Enter to continue or Ctrl+C, then A, then Enter to abort ...
    - name: Cancel existing shutdown calls
      command: /sbin/shutdown -c
      ignore_errors: yes
    - name: Shutdown machine
      command: /sbin/shutdown -h now
Two questions on this:
Is there any module available which can handle the shutdown in a more elegant way than having to run two custom commands?
Is there any way to check that the machines are really down? Or is it an anti-pattern to check this from the same playbook?
I tried something with the net_ping module but I am not sure if this is its real purpose:
- name: Check that machine is down
  become: no
  net_ping:
    dest: "{{ ansible_host }}"
    count: 5
    state: absent
This, however, fails with
FAILED! => {"changed": false, "msg": "invalid connection specified, expected connection=local, got ssh"}
In more restricted environments, where ping messages are blocked, you can watch the SSH port until it goes down. In my case I have set the timeout to 60 seconds.
- name: Save target host IP
  set_fact:
    target_host: "{{ ansible_host }}"

- name: wait for ssh to stop
  wait_for: "port=22 host={{ target_host }} delay=10 state=stopped timeout=60"
  delegate_to: 127.0.0.1
There is no shutdown module. You can use a single fire-and-forget call:
- name: Shutdown server
  become: yes
  shell: sleep 2 && /sbin/shutdown -c && /sbin/shutdown -h now
  async: 1
  poll: 0
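As a side note, later Ansible releases do ship a dedicated shutdown module in the community.general collection; if that collection is available in your environment, the shell one-liner could be replaced with something like this sketch:

```yaml
- name: Shutdown server
  become: yes
  community.general.shutdown:
    delay: 2  # seconds to wait before shutting down
```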
As for net_ping, it is for network appliances such as switches and routers. If you rely on ICMP messages to test shutdown process, you can use something like this:
- name: Store actual host to be used with local_action
  set_fact:
    original_host: "{{ ansible_host }}"

- name: Wait for ping loss
  local_action: shell ping -q -c 1 -W 1 {{ original_host }}
  register: res
  retries: 5
  until: ('100.0% packet loss' in res.stdout)
  failed_when: ('100.0% packet loss' not in res.stdout)
  changed_when: no
This will wait for 100% packet loss or fail after 5 retries.
Here you want to use local_action because otherwise the commands are executed on the remote host (which is supposed to be down).
And you need the trick of storing ansible_host in a temporary fact, because ansible_host is replaced with 127.0.0.1 when the task is delegated to the local host.

Ansible: Using Inventory file in shell command

Playbook below. I'm trying to replace test@ip: with a way to pull the IP of a group I've created from my inventory file.
- hosts: firewall
  gather_facts: no
  tasks:
    - name: run shell script
      raw: 'sh /home/test/firewall.sh'

- hosts: localhost
  gather_facts: no
  tasks:
    - name: Copy File to Local Machine
      shell: 'scp test@ip:/home/test/test.test /Users/dest/Desktop'
You need to change your task like this:
- hosts: localhost
  gather_facts: no
  tasks:
    - name: Copy File to Local Machine
      shell: 'scp test@{{ item }}:/home/test/test.test /Users/dest/Desktop'
      with_items: groups['your_group_name']
If you want to run on all the hosts in the inventory, you can use:
with_items: groups['all']
Hope that will help you.
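One caveat: bare with_items: groups['all'] is legacy syntax. On current Ansible versions the expression must be templated, and loop is the preferred form; a sketch (your_group_name is the placeholder group from the answer above):

```yaml
- name: Copy File to Local Machine
  shell: 'scp test@{{ item }}:/home/test/test.test /Users/dest/Desktop'
  loop: "{{ groups['your_group_name'] }}"
```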
