Issue with delegate_to used in an Ansible task

Is this the proper behavior, that a task with "delegate_to: localhost" tries to reach "localhost" itself, rather than checking the remote's SSH port from "localhost" (the Ansible controller)?
The playbook fails with:
"Timeout when waiting for localhost:2022"
Here is the example configuration I reproduced it with:
Inventory file:
[testremote]
192.168.170.113 ansible_user=ja
Ansible config file:
[defaults]
host_key_checking = False
inventory = hosts
callback_enabled = profile_tasks
ansible_port = 2022
Playbook file:
- hosts: testremote
  gather_facts: false
  vars:
    desired_port: 2022
  tasks:
    - name: check if ssh is running on {{ desired_port }}
      delegate_to: localhost
      wait_for:
        port: "{{ desired_port }}"
        host: "{{ ansible_host }}"
        timeout: 10
      ignore_errors: true
      register: desired_port_check
    - when: desired_port_check is success
      block:
        - debug:
            msg: "ssh is running on desired port"
        - name: configure ansible to use port {{ desired_port }}
          set_fact:
            ansible_port: "{{ desired_port }}"
        - name: run a command on the target host
          command: uptime
          register: uptime
        - debug:
            msg: "{{ uptime.stdout }}"
Remote host is accessible on desired port already:
[ansible]$ ssh -p 2022 ja@testremote date
Sun Jun 20 16:40:36 CEST 2021
[ansible]$ ping testremote
PING testremote (192.168.170.113) 56(84) bytes of data.
64 bytes from testremote (192.168.170.113): icmp_seq=1 ttl=63 time=1.14 ms
And result when playbook is run:
[ansible]$ ansible-playbook test_playbook.yml
PLAY [testremote] **********************************************************************************************************************************************************
TASK [check if ssh is running on 2022] *************************************************************************************************************************************
fatal: [192.168.170.113 -> localhost]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for localhost:2022"}
...ignoring
TASK [debug] ***************************************************************************************************************************************************************
skipping: [192.168.170.113]
TASK [configure ansible to use port 2022] **********************************************************************************************************************************
skipping: [192.168.170.113]
TASK [run a command on the target host] ************************************************************************************************************************************
fatal: [192.168.170.113]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 192.168.170.113 port 22: Connection refused", "unreachable": true}
PLAY RECAP *****************************************************************************************************************************************************************
192.168.170.113 : ok=1 changed=0 unreachable=1 failed=0 skipped=2 rescued=0 ignored=1
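Note on what happens here: when a task is delegated, Ansible resolves connection variables such as ansible_host in the context of the delegated host, so "{{ ansible_host }}" becomes "localhost" and wait_for ends up probing localhost:2022 on the controller. A minimal sketch of the check that keeps the delegation but points at the inventory host explicitly (only the host line changes from the playbook above):
    - name: check if ssh is running on {{ desired_port }}
      delegate_to: localhost
      wait_for:
        port: "{{ desired_port }}"
        # look up the address of the inventory host rather than the delegate
        host: "{{ hostvars[inventory_hostname].ansible_host | default(inventory_hostname) }}"
        timeout: 10
      ignore_errors: true
      register: desired_port_check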

Related

ansible task - Locate running port on a list of servers

I have a list of servers, and only some of them are listening on port 9090.
I need to create two tasks, each looping over the server hostnames. The first task should register the first server hostname found running port 9090, and the second task should register all server hostnames that are running port 9090. If no server is running port 9090, the tasks should fail.
I have nc installed on the servers and I thought about using shell with the following task:
server_details:
  - server1
  - server2
  - server3
  - server4
  - server5

- name: locate servers running port 9090
  shell: nc -zv {{ item.hostname }} 9090
  register: results_port_9090
  with_items:
    - "{{ server_details }}"
  ignore_errors: yes
But I don't know how to filter the results and add the matching hostnames to a new register. I assume an additional task is required that processes the existing register and creates a new one with the relevant output.
For example:
server1
server2
server3 #running 9090
server4
server5 #running 9090
The first task should create a register that holds the server3 or server5 hostname.
The second task should create a register that holds both the server3 and server5 hostnames.
Q: "If no server is running port the play should fail."
A: Use wait_for to test a port. For example, given the variables
server_details: [test_11, test_12, test_13]
time_out: 3
fail: "{{ out.results|items2dict(key_name='item', value_name='failed') }}"
The task below
- wait_for:
    host: "{{ item }}"
    port: "{{ test_port }}"
    timeout: "{{ time_out }}"
  loop: "{{ server_details }}"
  register: out
gives, for example when test_port=22
fail:
  test_11: false
  test_12: false
  test_13: false
This means that port 22 is open on all remote hosts. Test it:
- name: "If no server is running port {{ test_port }} the tasks should fail."
fail:
msg: "No server is running port {{ test_port }}"
when: fail.values()|list is all
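The all test is true only when every element of the list is truthy, so the condition above holds exactly when every wait_for item failed. A quick illustration (plain Ansible/Jinja2, nothing project-specific):
- debug:
    msg: "{{ [true, true, false] is all }}"   # prints false; true only if every element is truthy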
Example of a complete playbook for testing:
If you want to use the command module instead of wait_for, the second block provides the same functionality. Use the tags to select a block. Optionally, enable debug to see the dictionaries:
shell> cat pb.yml
- hosts: localhost
  vars:
    server_details: [test_11, test_12, test_13]
    time_out: 3
    fail: "{{ out.results|items2dict(key_name='item', value_name='failed') }}"
    rc: "{{ out.results|items2dict(key_name='item', value_name='rc') }}"
  tasks:
    - assert:
        that: test_port is defined
        fail_msg: The variable test_port is mandatory.
      tags: always

    - block:
        - wait_for:
            host: "{{ item }}"
            port: "{{ test_port }}"
            timeout: "{{ time_out }}"
          loop: "{{ server_details }}"
          register: out
      always:
        - debug:
            var: fail
          when: debug|d(false)|bool
        - name: "If no server is running port {{ test_port }} the tasks should fail."
          fail:
            msg: "No server is running port {{ test_port }}"
          when: fail.values()|list is all
      tags: t1

    - block:
        - command: "nc -w {{ time_out }} -zv {{ item }} {{ test_port }}"
          loop: "{{ server_details }}"
          register: out
      always:
        - debug:
            var: rc
          when: debug|d(false)|bool
        - name: "If no server is running port {{ test_port }} the tasks should fail."
          fail:
            msg: "No server is running port {{ test_port }}"
          when: rc.values()|list is all
      tags: t2
Test wait_for port 22
shell> ansible-playbook pb.yml -t t1 -e test_port=22 -e debug=true
PLAY [localhost] *****************************************************************************
TASK [assert] ********************************************************************************
ok: [localhost] => changed=false
msg: All assertions passed
TASK [wait_for] ******************************************************************************
ok: [localhost] => (item=test_11)
ok: [localhost] => (item=test_12)
ok: [localhost] => (item=test_13)
TASK [debug] *********************************************************************************
ok: [localhost] =>
fail:
test_11: false
test_12: false
test_13: false
TASK [If no server is running port 22 the tasks should fail.] ********************************
skipping: [localhost]
PLAY RECAP ***********************************************************************************
localhost: ok=3 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
Test wait_for port 80
shell> ansible-playbook pb.yml -t t1 -e test_port=80 -e debug=true
PLAY [localhost] *****************************************************************************
TASK [assert] ********************************************************************************
ok: [localhost] => changed=false
msg: All assertions passed
TASK [wait_for] ******************************************************************************
failed: [localhost] (item=test_11) => changed=false
ansible_loop_var: item
elapsed: 4
item: test_11
msg: Timeout when waiting for test_11:80
failed: [localhost] (item=test_12) => changed=false
ansible_loop_var: item
elapsed: 4
item: test_12
msg: Timeout when waiting for test_12:80
failed: [localhost] (item=test_13) => changed=false
ansible_loop_var: item
elapsed: 4
item: test_13
msg: Timeout when waiting for test_13:80
TASK [debug] *********************************************************************************
ok: [localhost] =>
fail:
test_11: true
test_12: true
test_13: true
TASK [If no server is running port 80 the tasks should fail.] ********************************
fatal: [localhost]: FAILED! => changed=false
msg: No server is running port 80
PLAY RECAP ***********************************************************************************
localhost: ok=2 changed=0 unreachable=0 failed=2 skipped=0 rescued=0 ignored=0
Test command port 22
shell> ansible-playbook pb.yml -t t2 -e test_port=22 -e debug=true
PLAY [localhost] *****************************************************************************
TASK [assert] ********************************************************************************
ok: [localhost] => changed=false
msg: All assertions passed
TASK [command] *******************************************************************************
changed: [localhost] => (item=test_11)
changed: [localhost] => (item=test_12)
changed: [localhost] => (item=test_13)
TASK [debug] *********************************************************************************
ok: [localhost] =>
rc:
test_11: 0
test_12: 0
test_13: 0
TASK [If no server is running port 22 the tasks should fail.] ********************************
skipping: [localhost]
PLAY RECAP ***********************************************************************************
localhost: ok=3 changed=1 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
Test command port 80
shell> ansible-playbook pb.yml -t t2 -e test_port=80 -e debug=true
PLAY [localhost] *****************************************************************************
TASK [assert] ********************************************************************************
ok: [localhost] => changed=false
msg: All assertions passed
TASK [command] *******************************************************************************
failed: [localhost] (item=test_11) => changed=true
ansible_loop_var: item
cmd:
- nc
- -w
- '3'
- -zv
- test_11
- '80'
delta: '0:00:03.006955'
end: '2022-08-21 17:11:30.449997'
item: test_11
msg: non-zero return code
rc: 1
start: '2022-08-21 17:11:27.443042'
stderr: 'nc: connect to test_11 port 80 (tcp) timed out: Operation now in progress'
stderr_lines: <omitted>
stdout: ''
stdout_lines: <omitted>
failed: [localhost] (item=test_12) => changed=true
ansible_loop_var: item
cmd:
- nc
- -w
- '3'
- -zv
- test_12
- '80'
delta: '0:00:03.005854'
end: '2022-08-21 17:11:33.656814'
item: test_12
msg: non-zero return code
rc: 1
start: '2022-08-21 17:11:30.650960'
stderr: 'nc: connect to test_12 port 80 (tcp) timed out: Operation now in progress'
stderr_lines: <omitted>
stdout: ''
stdout_lines: <omitted>
failed: [localhost] (item=test_13) => changed=true
ansible_loop_var: item
cmd:
- nc
- -w
- '3'
- -zv
- test_13
- '80'
delta: '0:00:03.009158'
end: '2022-08-21 17:11:36.860258'
item: test_13
msg: non-zero return code
rc: 1
start: '2022-08-21 17:11:33.851100'
stderr: 'nc: connect to test_13 port 80 (tcp) timed out: Operation now in progress'
stderr_lines: <omitted>
stdout: ''
stdout_lines: <omitted>
TASK [debug] *********************************************************************************
ok: [localhost] =>
rc:
test_11: 1
test_12: 1
test_13: 1
TASK [If no server is running port 80 the tasks should fail.] ********************************
fatal: [localhost]: FAILED! => changed=false
msg: No server is running port 80
PLAY RECAP ***********************************************************************************
localhost: ok=2 changed=0 unreachable=0 failed=2 skipped=0 rescued=0 ignored=0
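The question also asks for two registered values: all hostnames where the port is open, and the first one found. A hedged sketch that derives both from the out variable registered by the wait_for loop above (the fact names open_servers and first_open_server are mine):
- set_fact:
    open_servers: "{{ out.results | rejectattr('failed') | map(attribute='item') | list }}"
- set_fact:
    first_open_server: "{{ open_servers | first }}"
  when: open_servers | length > 0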
Your thinking is correct. What you can use to filter the results is the rc (return code) of the shell command.
When nc succeeds in opening a connection, the exit code is 0, otherwise usually 1. You should iterate over the registered results, check the exit codes, and build a new list.
You can do something like this:
- name: look for live services
  set_fact:
    live_services: "{{ live_services|default([]) + [item.item] }}"   # item.item is the hostname from the original loop
  when: item.rc == 0
  loop: "{{ results_port_9090.results }}"   # a looped task stores its per-item results under .results
Then, to get the first server that has this service running, check that this list isn't empty; if it is not, the first server is at live_services[0].
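For example, a small usage sketch of that list (guarding against the case where nothing was found):
- debug:
    msg: "First live server: {{ live_services | first }}"
  when: live_services | default([]) | length > 0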
To get the open ports use listen_ports_facts
- hosts: all
  gather_facts: no
  vars:
    check_port: 9090    # desired port
  tasks:
    - name: Gather facts on listening ports
      community.general.listen_ports_facts:

    # Filter ports and store them in a variable
    - name: List all ports
      set_fact:
        ports_list: "{{ (ansible_facts.tcp_listen + ansible_facts.udp_listen) | map(attribute='port') | unique | sort | list }}"

    # Check if the desired port exists in the listed ports and write output accordingly
    - name: Check if the desired port exists in the listed ports
      debug:
        msg: |
          {% if check_port in ports_list %}
          {{ inventory_hostname }} #running {{ check_port }}
          {% else %}
          {{ inventory_hostname }} #not running {{ check_port }}
          {% endif %}
      register: checkoutput

    # Combine the output for all hosts
    - name: Combine the output for all hosts
      debug:
        msg: "{{ ansible_play_hosts | map('extract', hostvars, 'checkoutput') | map(attribute='msg') | list }}"
      register: finaloutput
      run_once: yes
The list of ports in debug mode:
TASK [List all ports] *********************************************************************
task path: /root/ansible-test/dictionary.yml:10
ok: [test-001] => {
"ansible_facts": {
"ports_list": [
22,
68,
111,
2049,
34007,
37539,
38611,
41237,
45851,
46679,
48941,
50640,
52215,
52637,
52772,
54317,
54750,
56105,
58060,
58842
]
},
"changed": false
}
ok: [test-002] => {
"ansible_facts": {
"ports_list": [
22,
68,
111,
323,
782,
9090,
35717,
35799,
36247,
44387
]
},
"changed": false
}
TASK [Check if the desired port exists in the listed ports] ******************************
ok: [test-001] => {
"msg": "test-001 #not running 9090\n"
}
ok: [test-002] => {
"msg": "test-002 #running 9090\n"
}
TASK [Combine the output for all hosts] **************************************************
ok: [test-001] => {
"msg": [
"test-001 #not running 9090\n",
"test-002 #running 9090\n"
]
}
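The question also requires failing when no host listens on the port at all. A hedged sketch that could be appended to the play above (it relies on the ports_list fact set earlier on every play host):
    - name: Fail when no host listens on port {{ check_port }}
      fail:
        msg: "No server is running port {{ check_port }}"
      when: check_port not in (ansible_play_hosts | map('extract', hostvars, 'ports_list') | flatten)
      run_once: yes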

Multiple hosts prompt is working but there's a problem with execution

This is my code:
---
- hosts: localhost
  gather_facts: no
  vars_prompt:
    - name: server
      prompt: "What is the hostname/ip you want to execute at?"
      private: no
  tasks:
    - add_host:
        name: "{{ server }}"
        groups: dynamic_hosts
      with_items: "{{ server.split(',') }}"

#### Dynamic Host
- hosts: dynamic_hosts
  gather_facts: no
  tasks:
    - name: "Running task id"
      command: id
and this is the behaviour:
What is the hostname you want to execute at?: 10.0.0.2, 10.0.0.3
PLAY [localhost] *******************************************************************************************************************************************************************************************************************
TASK [add_host] ********************************************************************************************************************************************************************************************************************
changed: [localhost] => (item= 10.0.0.2)
changed: [localhost] => (item= 10.0.0.3)
PLAY [dynamic_hosts] ***************************************************************************************************************************************************************************************************************
TASK [Running task id] ********************************************************************************************************************************************
fatal: [10.0.0.2, 10.0.0.3]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname 10.0.0.2, 10.0.0.3: Name or service not known\r\n", "unreachable": true}
to retry, use: --limit @/home/user/playbook.yaml
PLAY RECAP *************************************************************************************************************************************************************************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=0
10.0.0.2, 10.0.0.3 : ok=0 changed=0 unreachable=1 failed=0
So the multi-host input is read properly, but when I target the group in hosts it basically tries to "ssh 10.0.0.2, 10.0.0.3" as if it were a single host, and naturally it fails.
What am I missing here?
What I want to do is prompt the user for the hosts to execute at, take that input,
and then just run the tasks on each of those hosts. I do not want to make use of an inventory file.
Is it possible? Thank you in advance
Working code:
---
- hosts: localhost
  gather_facts: no
  vars_prompt:
    - name: server
      prompt: "What is the hostname/ip you want to execute at?"
      private: no
  tasks:
    - add_host:
        name: "{{ item }}"
        groups: dynamic_hosts
      with_items: "{{ server.split(',') }}"

#### Dynamic Host
- hosts: dynamic_hosts
  gather_facts: no
  tasks:
    - name: "Running task id"
      command: id
thank you
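Note that if the prompt input contains spaces after the commas, as in the example "10.0.0.2, 10.0.0.3", split(',') leaves a leading space in each item. A small variant that trims it (the trim filter is standard Jinja2):
    - add_host:
        name: "{{ item | trim }}"   # strip stray whitespace around each host
        groups: dynamic_hosts
      with_items: "{{ server.split(',') }}"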

Ansible when condition from debug msg

I want to create a task with a when condition based on stdout.
Example playbook:
---
- hosts: localhost
  gather_facts: false
  ignore_errors: yes
  vars:
    - dev_ip: '192.168.20.192'
  tasks:
    - name: checkking ssh status
      wait_for:
        host: "{{ dev_ip }}"
        port: 22
        timeout: 2
        state: present
      register: ssh_stat
    - name: checkcondition
      debug:
        msg: "{{ ssh_stat }}"
The message output is:
ok: [localhost] => {
"msg": {
"changed": false,
"elapsed": 2,
"failed": true,
"msg": "Timeout when waiting for 192.168.20.192:22"
}
}
I want to make a task with a when condition that runs if the string "Timeout when waiting for 192.168.20.192:22" is in ssh_stat.stdout.
Here's what you need:
---
- name: answer stack overflow
  hosts: localhost
  gather_facts: false
  ignore_errors: yes
  tasks:
    - name: checkking ssh status
      wait_for:
        host: 192.168.1.23
        port: 22
        timeout: 2
        state: present
      register: ssh_stat
    - name: do something else when ssh_stat.msg == "Timeout when waiting for 192.168.1.23:22"
      shell: echo "I am doing it"
      when: ssh_stat.msg == "Timeout when waiting for 192.168.1.23:22"
output:
PLAY [answer stack overflow] **************************************************************************************************************************************************************************************
TASK [checkking ssh status] ***************************************************************************************************************************************************************************************
[WARNING]: Platform linux on host localhost is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change this. See
https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information.
fatal: [localhost]: FAILED! => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "elapsed": 3, "msg": "Timeout when waiting for 192.168.1.23:22"}
...ignoring
TASK [do something else when ssh_stat.msg == "Timeout when waiting for 192.168.1.23:22"] ************************************************************************************************************************
changed: [localhost]
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=1
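Matching the exact message string is brittle; a slightly more general variant (a sketch that relies only on the failed test and the msg field shown in the output above):
    - name: do something else when the port check timed out
      shell: echo "I am doing it"
      when:
        - ssh_stat is failed
        - "'Timeout' in ssh_stat.msg | default('')"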

Ansible playbook only working with root user and failing when running with other sudo user

I had a working playbook in my test environment, where my user was the root user itself and I didn't have any issues.
When I moved my playbook to the staging environment, my login account there is "admin", which is a sudo user. All of my playbooks are failing in this environment.
---
- name: Replace the silent-install-server_ file to each Application Servers
  hosts: localhost,CCM,RM
  vars:
    jts_file: /etc/ansible/roles/IBM2/files/silent-install-server_JTS.xml
    ccm_file: /etc/ansible/roles/IBM2/files/silent-install-server_CCM.xml
    rm_file: /etc/ansible/roles/IBM2/files/silent-install-server_RM.xml
    dest_dir: /opt/CLM-Web-Installer-Linux-6.0.5/im/linux.gtk.x86_64
  tasks:
    - name: check the folder existance
      stat: path=/opt/CLM-Web-Installer-Linux-6.0.5/im/linux.gtk.x86_64
      register: folder_exist
    - name: JTS Server
      copy:
        src: "{{ jts_file }}"
        dest: "{{ dest_dir }}/"
        mode: 777
        backup: yes
      delegate_to: localhost
      when: folder_exist.stat.exists == True
    - name: CCM Server
      copy:
        src: "{{ ccm_file }}"
        dest: "{{ dest_dir }}/"
        mode: 777
        backup: yes
      delegate_to: 10.16.24.102
      when: folder_exist.stat.exists == True
    - name: RM Server
      copy:
        src: "{{ rm_file }}"
        dest: "{{ dest_dir }}/"
        mode: 777
        backup: yes
      delegate_to: 10.16.24.103
      when: folder_exist.stat.exists == True
I am getting the error below:
PLAY [Replace the silent-install-server_ file to each Application Servers] **********************************************************************************
TASK [Gathering Facts] **************************************************************************************************************************************
ok: [localhost]
fatal: [10.16.24.102]: FAILED! => {"msg": "Timeout (12s) waiting for privilege escalation prompt: "}
fatal: [10.165.240.103]: FAILED! => {"msg": "Timeout (12s) waiting for privilege escalation prompt: "}
TASK [check the folder existance] ***************************************************************************************************************************
ok: [localhost]
TASK [JTS Server] *******************************************************************************************************************************************
ok: [localhost -> localhost]
TASK [CCM Server] *******************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "Failed to get information on remote file (/opt/CLM-Web-Installer-Linux-6.0.5/im/linux.gtk.x86_64/silent-install-server_CCM.xml): Shared connection to 10.16.24.102 closed.\r\n"}
to retry, use: --limit @/etc/ansible/roles/IBM2/tasks/best/silentiInstallerfile.retry
PLAY RECAP **************************************************************************************************************************************************
10.16.24.102 : ok=0 changed=0 unreachable=0 failed=1
10.16.24.103 : ok=0 changed=0 unreachable=0 failed=1
localhost : ok=3 changed=0 unreachable=0 failed=1
My host file is as below:
[IHS]
10.16.24.100
[JTS]
10.16.24.101
[CCM]
10.16.24.102
[RM]
10.16.24.103
I suggest configuring passwordless sudo for admin@10.16.24.102 and admin@10.165.240.103.
You can add this line to the /etc/sudoers file:
admin ALL=(ALL:ALL) NOPASSWD:ALL
Make sure admin@10.16.24.102 and admin@10.165.240.103 are able to "sudo su".
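If you prefer to manage that line with Ansible itself, here is a hedged sketch (the drop-in path /etc/sudoers.d/admin is my own choice; validate runs visudo against the temporary file before installing it). It still needs a working escalation path the first time, e.g. run the play with --ask-become-pass:
    - name: Allow passwordless sudo for admin
      become: yes
      lineinfile:
        path: /etc/sudoers.d/admin
        line: 'admin ALL=(ALL:ALL) NOPASSWD:ALL'
        create: yes
        mode: '0440'
        validate: 'visudo -cf %s'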
fatal: [10.16.24.102]: FAILED! => {"msg": "Timeout (12s) waiting for privilege escalation prompt: "}
fatal: [10.165.240.103]: FAILED! => {"msg": "Timeout (12s) waiting for privilege escalation prompt: "}
If you are not running as root, you need to tell Ansible to become root:
become: yes
This can be done per play, per host in the inventory, or on individual tasks.
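For example, a minimal sketch at play level (the task here is only an illustration; if sudo still prompts for a password, run the playbook with -K / --ask-become-pass instead of, or in addition to, editing sudoers):
- hosts: CCM,RM
  become: yes
  gather_facts: yes
  tasks:
    - name: confirm the tasks run as root on the remote hosts
      command: whoami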

ansible ios_command timeout when doing "show conf" on cisco 3850

I've got a simple Ansible playbook that works fine on most IOS devices. It fails on some of my 3850 switches with what looks like a timeout when doing a "show conf". How do I specify a longer, non-default timeout for command completion with the ios_command module (and presumably also ios_config)?
Useful details:
Playbook:
---
- hosts: ios_devices
  gather_facts: no
  connection: local
  tasks:
    - name: OBTAIN LOGIN CREDENTIALS
      include_vars: secrets.yaml
    - name: DEFINE PROVIDER
      set_fact:
        provider:
          host: "{{ inventory_hostname }}"
          username: "{{ creds['username'] }}"
          password: "{{ creds['password'] }}"
    - name: LIST NAME SERVERS
      ios_command:
        provider: "{{ provider }}"
        commands: "show run | inc name-server"
      register: dns_servers
    - debug: var=dns_servers.stdout_lines
successful run:
$ ansible-playbook listnameserver.yaml -i inventory/onehost
PLAY [ios_devices] *****************************************************************************************************************
TASK [OBTAIN LOGIN CREDENTIALS] ****************************************************************************************************
ok: [iosdevice1.example.com]
TASK [DEFINE PROVIDER] *************************************************************************************************************
ok: [iosdevice1.example.com]
TASK [LIST NAME SERVERS] ***********************************************************************************************************
ok: [iosdevice1.example.com]
TASK [debug] ***********************************************************************************************************************
ok: [iosdevice1.example.com] => {
"dns_servers.stdout_lines": [
[
"ip name-server 10.1.1.166",
"ip name-server 10.1.1.168"
]
]
}
PLAY RECAP *************************************************************************************************************************
iosdevice1.example.com : ok=4 changed=0 unreachable=0 failed=0
unsuccessful run:
$ ansible-playbook listnameserver.yaml -i inventory/onehost
PLAY [ios_devices] *****************************************************************************************************************
TASK [OBTAIN LOGIN CREDENTIALS] ****************************************************************************************************
ok: [iosdevice2.example.com]
TASK [DEFINE PROVIDER] *************************************************************************************************************
ok: [iosdevice2.example.com]
TASK [LIST NAME SERVERS] ***********************************************************************************************************
fatal: [iosdevice2.example.com]: FAILED! => {"changed": false, "msg": "timeout trying to send command: show run | inc name-server", "rc": 1}
to retry, use: --limit @/home/sample/ansible-playbooks/listnameserver.retry
PLAY RECAP *************************************************************************************************************************
iosdevice2.example.com : ok=2 changed=0 unreachable=0 failed=1
The default timeout is 10 seconds; if the request takes longer than this, ios_command will fail.
You can add the timeout as a key in the provider variable, like this:
- name: DEFINE PROVIDER
  set_fact:
    provider:
      host: "{{ inventory_hostname }}"
      username: "{{ creds['username'] }}"
      password: "{{ creds['password'] }}"
      timeout: 30
If you've already got a timeout value in provider, here's a handy way to update only that key in the variable:
- name: Update existing provider timeout key
  set_fact:
    provider: "{{ provider | combine( {'timeout': '180'} ) }}"
