Ansible Playbook command timeout when connecting to cisco switch using SSH key - ansible

Summary:
I'm trying to set up a playbook that gets a list of IOS devices from NetBox and then exports their config files.
Versions:
ansible-playbook [core 2.13.2]
python version = 3.8.13
switch IOS = 15.0(2)EX5
ansible host = Ubuntu 18.04.6 LTS
Issue:
When I run the command:
ansible-playbook -vvvvv playbook.yml
I get the error:
The full traceback is:
File "/usr/local/lib/python3.8/dist-packages/ansible_collections/cisco/ios/plugins/module_utils/network/ios/ios.py", line 94, in get_capabilities
capabilities = Connection(module._socket_path).get_capabilities()
File "/usr/local/lib/python3.8/dist-packages/ansible/module_utils/connection.py", line 200, in __rpc__
raise ConnectionError(to_text(msg, errors='surrogate_then_replace'), code=code)
fatal: [PHC-16JUB-SW01]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"commands": [
"show version"
],
"interval": 1,
"match": "all",
"provider": null,
"retries": 10,
"wait_for": null
}
},
"msg": "command timeout triggered, timeout value is 60 secs.\nSee the timeout setting options in the Network Debug and Troubleshooting Guide."
}
Files:
playbook.yml
- name: Playbook to backup configs on Cisco Routers
  connection: network_cli
  hosts: device_roles_switch
  remote_user: admin
  gather_facts: False
  tasks:
    - name: Show Running configuration on Device
      ios_command:
        commands:
          - show version
      vars:
        ansible_user: admin
      register: config
    - name: Save output to backups folder
      copy:
        content: "{{ config.stdout[0] }}"
        dest: "./backups/{{ inventory_hostname }}-config.txt"
ansible.cfg
[defaults]
inventory = ./netbox_inv.yml
host_key_checking = false
retry_files_enabled = false
forks = 4
ansible_user = admin
remote_user = admin
private_key_file = ~/.ssh/id_rsa
[privilege_escalation]
become = True
become_method = enable
become_pass = passwordhere
[persistent_connection]
command_timeout = 60
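As an aside, my understanding from the network_cli connection options (unverified for this setup) is that the same timeout can also be raised per host or group instead of globally, via the ansible_command_timeout variable:

```yaml
# assumed group_vars/device_roles_switch.yml, a per-group alternative
# to the [persistent_connection] command_timeout in ansible.cfg;
# ansible_command_timeout is the variable form of that setting
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: cisco.ios.ios
ansible_command_timeout: 120
```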
netbox_inv.yml
plugin: netbox.netbox.nb_inventory
api_endpoint: http://172.16.1.32
token: 'apikeygoeshere'
validate_certs: False
config_context: False
group_by:
  - device_roles
query_filters:
  - tag: backup
  - status: active
compose:
  ansible_network_os: platform.slug
Troubleshooting:
I have confirmed I can use the key to connect directly to the host:
root@netbox:~/config_backups# ssh admin@172.16.100.133
PHC-SW01>exit
Connection to 172.16.100.133 closed.
Confirmed SSH connection on switch:
PHC-SW01#debug ip ssh detail
ssh detail messages debugging is on
PHC-SW01#terminal monitor
PHC-SW01#
Aug 3 23:58:33.930: SSH1: starting SSH control process
Aug 3 23:58:33.930: SSH1: sent protocol version id SSH-1.99-Cisco-1.25
Aug 3 23:58:33.934: SSH1: protocol version id is - SSH-2.0-libssh_0.9.3
Aug 3 23:58:33.934: SSH2 1: SSH2_MSG_KEXINIT sent
Aug 3 23:58:33.934: SSH2 1: SSH2_MSG_KEXINIT received
Aug 3 23:58:33.934: SSH2:kex: client->server enc:aes256-cbc mac:hmac-sha1
Aug 3 23:58:33.934: SSH2:kex: server->client enc:aes256-cbc mac:hmac-sha1
Aug 3 23:58:34.004: SSH2 1: expecting SSH2_MSG_KEXDH_INIT
Aug 3 23:58:34.018: SSH2 1: SSH2_MSG_KEXDH_INIT received
Aug 3 23:58:34.147: SSH2: kex_derive_keys complete
Aug 3 23:58:34.150: SSH2 1: SSH2_MSG_NEWKEYS sent
Aug 3 23:58:34.150: SSH2 1: waiting for SSH2_MSG_NEWKEYS
Aug 3 23:58:34.154: SSH2 1: SSH2_MSG_NEWKEYS received
Aug 3 23:58:34.371: SSH2 1: Using method = none
Aug 3 23:58:34.378: SSH2 1: Using method = publickey
Aug 3 23:58:34.378: SSH2 1: Authenticating 'admin' with method: publickey
Aug 3 23:58:34.388: SSH2 1: Client Signature verification PASSED
Aug 3 23:58:34.388: SSH2 1: authentication successful for admin
Aug 3 23:58:34.392: SSH2 1: channel open request
Aug 3 23:58:34.395: SSH2 1: pty-req request
Aug 3 23:58:34.395: SSH2 1: setting TTY - requested: height 24, width 80; set: height 24, width 80
Aug 3 23:58:34.395: SSH2 1: shell request
Aug 3 23:58:34.395: SSH2 1: shell message received
Aug 3 23:58:34.395: SSH2 1: starting shell for vty
Aug 3 23:58:34.409: SSH2 1: channel window adjust message received 1216017
Aug 3 23:59:04.035: SSH1: Session terminated normally
I also changed the timeout to 60 secs; it makes no difference, the task still times out even with a longer value.
Log Output:
root@netbox:~/config_backups# ansible-playbook -vvvvv playbook.yml
ansible-playbook [core 2.13.2]
config file = /root/config_backups/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-playbook
python version = 3.8.13 (default, Apr 19 2022, 00:53:22) [GCC 7.5.0]
jinja version = 3.1.2
libyaml = True
Using /root/config_backups/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /root/config_backups/netbox_inv.yml as it did not pass its verify_file() method
script declined parsing /root/config_backups/netbox_inv.yml as it did not pass its verify_file() method
Loading collection netbox.netbox from /root/.ansible/collections/ansible_collections/netbox/netbox
Using inventory plugin 'ansible_collections.netbox.netbox.plugins.inventory.nb_inventory' to process inventory source '/root/config_backups/netbox_inv.yml'
Fetching: http://172.16.1.32/api/status
Fetching: http://172.16.1.32/api/docs/?format=openapi
Fetching: http://172.16.1.32/api/dcim/devices/?limit=0&tag=backup&status=active&exclude=config_context
Fetching: http://172.16.1.32/api/virtualization/virtual-machines/?limit=0&tag=backup&status=active&exclude=config_context
Fetching: http://172.16.1.32/api/dcim/sites/?limit=0
Fetching: http://172.16.1.32/api/dcim/regions/?limit=0
Fetching: http://172.16.1.32/api/dcim/site-groups/?limit=0
Fetching: http://172.16.1.32/api/dcim/locations/?limit=0
Fetching: http://172.16.1.32/api/tenancy/tenants/?limit=0
Fetching: http://172.16.1.32/api/dcim/device-roles/?limit=0
Fetching: http://172.16.1.32/api/dcim/platforms/?limit=0
Fetching: http://172.16.1.32/api/dcim/device-types/?limit=0
Fetching: http://172.16.1.32/api/dcim/manufacturers/?limit=0
Fetching: http://172.16.1.32/api/virtualization/clusters/?limit=0
Fetching: http://172.16.1.32/api/ipam/services/?limit=0
Fetching: http://172.16.1.32/api/dcim/racks/?limit=0
Parsed /root/config_backups/netbox_inv.yml inventory source with auto plugin
[WARNING]: Skipping plugin (/usr/local/lib/python3.8/dist-packages/ansible/plugins/connection/winrm.py) as it seems to be invalid: invalid syntax (spawnbase.py, line 224)
redirecting (type: modules) ansible.builtin.ios_command to cisco.ios.ios_command
Loading collection cisco.ios from /usr/local/lib/python3.8/dist-packages/ansible_collections/cisco/ios
Loading callback plugin default of type stdout, v2.0 from /usr/local/lib/python3.8/dist-packages/ansible/plugins/callback/default.py
Attempting to use 'default' callback.
Skipping callback 'default', as we already have a stdout callback.
Attempting to use 'junit' callback.
Attempting to use 'minimal' callback.
Skipping callback 'minimal', as we already have a stdout callback.
Attempting to use 'oneline' callback.
Skipping callback 'oneline', as we already have a stdout callback.
Attempting to use 'tree' callback.
PLAYBOOK: playbook.yml **********************************************************************************************************************************************************************
Positional arguments: playbook.yml
verbosity: 5
private_key_file: /root/.ssh/id_rsa
remote_user: admin
connection: smart
timeout: 10
become: True
become_method: enable
tags: ('all',)
inventory: ('/root/config_backups/netbox_inv.yml',)
forks: 4
1 plays in playbook.yml
PLAY [Playbook to backup configs on Cisco Routers] ******************************************************************************************************************************************
META: ran handlers
TASK [Show Running configuration on Device] *************************************************************************************************************************************************
task path: /root/config_backups/playbook.yml:8
redirecting (type: connection) ansible.builtin.network_cli to ansible.netcommon.network_cli
Loading collection ansible.netcommon from /usr/local/lib/python3.8/dist-packages/ansible_collections/ansible/netcommon
redirecting (type: terminal) ansible.builtin.ios to cisco.ios.ios
redirecting (type: cliconf) ansible.builtin.ios to cisco.ios.ios
redirecting (type: become) ansible.builtin.enable to ansible.netcommon.enable
<172.16.9.1> attempting to start connection
<172.16.9.1> using connection plugin ansible.netcommon.network_cli
Found ansible-connection at path /usr/local/bin/ansible-connection
<172.16.9.1> local domain socket does not exist, starting it
<172.16.9.1> control socket path is /root/.ansible/pc/4be7afdbe7
<172.16.9.1> redirecting (type: connection) ansible.builtin.network_cli to ansible.netcommon.network_cli
<172.16.9.1> Loading collection ansible.netcommon from /usr/local/lib/python3.8/dist-packages/ansible_collections/ansible/netcommon
<172.16.9.1> redirecting (type: terminal) ansible.builtin.ios to cisco.ios.ios
<172.16.9.1> Loading collection cisco.ios from /usr/local/lib/python3.8/dist-packages/ansible_collections/cisco/ios
<172.16.9.1> redirecting (type: cliconf) ansible.builtin.ios to cisco.ios.ios
<172.16.9.1> local domain socket listeners started successfully
<172.16.9.1> loaded cliconf plugin ansible_collections.cisco.ios.plugins.cliconf.ios from path /usr/local/lib/python3.8/dist-packages/ansible_collections/cisco/ios/plugins/cliconf/ios.py for network_os ios
<172.16.9.1> ssh type is set to auto
<172.16.9.1> autodetecting ssh_type
<172.16.9.1> ssh type is now set to libssh
<172.16.9.1>
<172.16.9.1> local domain socket path is /root/.ansible/pc/4be7afdbe7
redirecting (type: modules) ansible.builtin.ios_command to cisco.ios.ios_command
redirecting (type: action) ansible.builtin.ios to cisco.ios.ios
<172.16.9.1> Using network group action ios for ios_command
redirecting (type: action) ansible.builtin.ios to cisco.ios.ios
<172.16.9.1> ANSIBLE_NETWORK_IMPORT_MODULES: enabled
redirecting (type: modules) ansible.builtin.ios_command to cisco.ios.ios_command
<172.16.9.1> ANSIBLE_NETWORK_IMPORT_MODULES: found ios_command at /usr/local/lib/python3.8/dist-packages/ansible_collections/cisco/ios/plugins/modules/ios_command.py
<172.16.9.1> ANSIBLE_NETWORK_IMPORT_MODULES: running ios_command
<172.16.9.1> ANSIBLE_NETWORK_IMPORT_MODULES: complete
<172.16.9.1> ANSIBLE_NETWORK_IMPORT_MODULES: Result: {'failed': True, 'msg': 'command timeout triggered, timeout value is 30 secs.\nSee the timeout setting options in the Network Debug and Troubleshooting Guide.', 'exception': ' File "/usr/local/lib/python3.8/dist-packages/ansible_collections/cisco/ios/plugins/module_utils/network/ios/ios.py", line 94, in get_capabilities\n capabilities = Connection(module._socket_path).get_capabilities()\n File "/usr/local/lib/python3.8/dist-packages/ansible/module_utils/connection.py", line 200, in __rpc__\n raise ConnectionError(to_text(msg, errors=\'surrogate_then_replace\'), code=code)\n', 'invocation': {'module_args': {'commands': ['show version'], 'match': 'all', 'retries': 10, 'interval': 1, 'wait_for': None, 'provider': None}}, '_ansible_parsed': True}
The full traceback is:
File "/usr/local/lib/python3.8/dist-packages/ansible_collections/cisco/ios/plugins/module_utils/network/ios/ios.py", line 94, in get_capabilities
capabilities = Connection(module._socket_path).get_capabilities()
File "/usr/local/lib/python3.8/dist-packages/ansible/module_utils/connection.py", line 200, in __rpc__
raise ConnectionError(to_text(msg, errors='surrogate_then_replace'), code=code)
fatal: [PHC-SW01]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"commands": [
"show version"
],
"interval": 1,
"match": "all",
"provider": null,
"retries": 10,
"wait_for": null
}
},
"msg": "command timeout triggered, timeout value is 30 secs.\nSee the timeout setting options in the Network Debug and Troubleshooting Guide."
}
PLAY RECAP **********************************************************************************************************************************************************************************
PHC-SW01 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
<172.16.9.1> ANSIBLE_NETWORK_IMPORT_MODULES: Result: {'failed': True, 'msg': 'command timeout triggered, timeout value is 30 secs.\nSee the timeout setting options in the Network Debug and Troubleshooting Guide.', 'exception': ' File "/usr/local/lib/python3.8/dist-packages/ansible_collections/cisco/ios/plugins/module_utils/network/ios/ios.py", line 94, in get_capabilities\n capabilities = Connection(module._socket_path).get_capabilities()\n File "/usr/local/lib/python3.8/dist-packages/ansible/module_utils/connection.py", line 200, in rpc\n raise ConnectionError(to_text(msg, errors='surrogate_then_replace'), code=code)\n', 'invocation': {'module_args': {'commands': ['show version'], 'match': 'all', 'retries': 10, 'interval': 1, 'wait_for': None, 'provider': None}}, '_ansible_parsed': True}
I'm not sure if the issue is with authentication or with the command being run.
The command show version doesn't need privilege escalation.
UPDATE
I enabled logging and got the output below.
Two things I noticed:
1. command: None, yet the command should have been show version.
2. response-2 shows privilege escalation ('enable') being enacted; did this work?
| invoked shell using ssh_type: libssh
| ssh connection done, setting terminal
| loaded terminal plugin for network_os ios
| command: None
| response-1: b'\r\nPHC-SW01>'
| matched cli prompt 'b'\nPHC-SW01>'' with regex 'b'[\\r\\n]?[\\w\\+\\-\\.:\\/\\[\\]]+(?:\\([^\\)]+\\)){0,3}(?:[>#]) ?$'' from response 'b'\r\nPHC-SW01>''
| firing event: on_become
| send command: b'enable\r'
| command: b'enable'
| response-1: b'enabl'
| response-2: b'e\r\nPassword: '
| resetting persistent connection for socket_path /root/.ansible/pc/31759300a1
| closing ssh connection to device
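Reading that trace, the session resets right after the switch prints the Password: prompt for enable, so my suspicion (unconfirmed) is that the enable secret never reaches the device; become_pass under [privilege_escalation] in ansible.cfg may not be a key Ansible actually reads. A sketch of passing it as a variable instead:

```yaml
# hypothetical group_vars entry; ansible_become_password is the
# variable Ansible reads for the become (enable) secret
ansible_become: true
ansible_become_method: enable
ansible_become_password: "enablepasswordhere"
```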

Related

ansible playbook executes and shows no errors but does not do what I expect on the host

Trying to use ansible in combination with a wago controller
The host file is set up correctly. Before getting into the custom coding, I want to check that everything works as expected, so I created a small, simple test playbook which just creates a text file ...
---
- name: configure wago-controller pfc200
  hosts: pfc200
  connection: local
  become: true
  become_user: root
  gather_facts: no
  vars:
    ansible_python_interpreter: /usr/bin/python3

  tasks:

    - name: "information"
      command: touch /tmp/hello.txt
      register: command_output

    - debug: var=command_output
Python 3 is installed on the controller.
/tmp folder has the following access rights
0 drwxrwxrwt 2 root root 160 Aug 30 18:16 tmp
Executing the script
sudo ansible-playbook test.yml
produces the following output:
PLAY [configure wago-controller pfc200]
********************************************************************
TASK [information]
*****************************************************************************************
[WARNING]: Consider using the file module with state=touch rather than running 'touch'. If
you need to use command because file is insufficient you can add 'warn: false' to this
command task or set
'command_warnings=False' in ansible.cfg to get rid of this message.
changed: [pfc200]
TASK [debug]
************************************************************************************
ok: [pfc200] => {
"command_output": {
"changed": true,
"cmd": [
"touch",
"/tmp/hello.txt"
],
"delta": "0:00:00.002519",
"end": "2020-08-30 18:12:48.129959",
"failed": false,
"rc": 0,
"start": "2020-08-30 18:12:48.127440",
"stderr": "",
"stderr_lines": [],
"stdout": "",
"stdout_lines": [],
"warnings": [
"Consider using the file module with state=touch rather than running 'touch'. If you need to use command because file is insufficient you can add 'warn: false' to this command task or set 'command_warnings=False' in ansible.cfg to get rid of this message."
]
}
}
PLAY RECAP ******************************************************************
pfc200 : ok=2 changed=1 unreachable=0 failed=0
skipped=0 rescued=0 ignored=0
If I log into the controller and check for the file ... no file is present.
As long as this does not work, it does not make sense to dive deeper into the configuration.
Any advice? I also checked with the shell module, which leads to the same effect.
Silly me...
connection: local
... what was I thinking. Besides that, a new error shows up now; I guess the controller is missing the zlib library, see below.
---
- name: configure wago-controller pfc200
  hosts: pfc200
  # connection: local
  become: true
  become_user: root
  gather_facts: no
  vars:
    ansible_python_interpreter: /usr/bin/python3

  tasks:

    - name: "information"
      command: touch /tmp/hello.txt
      register: command_output

    - debug: var=command_output
new error
TASK [information]
******************************************************************************
An exception occurred during task execution. To see the full traceback, use
-vvv. The error was: zipimport.ZipImportError: can't decompress data; zlib not
available
fatal: [pfc200]: FAILED! => {"changed": false, "module_stderr": "Shared
connection to 192.168.4.112 closed.\r\n", "module_stdout": "Traceback (most
recent call last):\r\n File \"/root/.ansible/tmp/ansible-
tmp-1598822914.27-13741-65296373053829/AnsiballZ_command.py\", line 102, in
<module>\r\n _ansiballz_main()\r\n File \"/root/.ansible/tmp/ansible-
tmp-1598822914.27-13741-65296373053829/AnsiballZ_command.py\", line 94, in
_ansiballz_main\r\n invoke_module(zipped_mod, temp_path,
ANSIBALLZ_PARAMS)\r\n File \"/root/.ansible/tmp/ansible-
tmp-1598822914.27-13741-65296373053829/AnsiballZ_command.py\", line 37, in
invoke_module\r\n from ansible.module_utils import basic\r
\nzipimport.ZipImportError: can't decompress data; zlib not available\r\n",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

Ansible with winrm only works as root?

I am using ansible 2.9.6, installed with pip, using python 3.7.3 on a Debian Buster server, and I am trying to manage some of our Windows 2016 servers with it; I am already using PowerShell remoting from other Windows servers without problems.
The strange thing is that I can only connect to the Windows servers when the command is started as root on the Buster server.
For the Windows part I am using this in the ansible.cfg:
[windows:vars]
ansible_become=false
ansible_user=Administrateur
ansible_password=somepassword
ansible_port=5985
ansible_connection=winrm
ansible_winrm_server_cert_validation=ignore
ansible_winrm_transport=credssp
ansible_become_method=runas
The results of running a simple win_ping check are:
As root :
sudo ansible -m win_ping srv-prp-tb01c -vvvvvv
ansible 2.9.6
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.3 (default, Dec 20 2019, 18:57:59) [GCC 8.3.0]
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
Loading callback plugin minimal of type stdout, v2.0 from /usr/local/lib/python3.7/dist-packages/ansible/plugins/callback/minimal.py
META: ran handlers
Using module file /usr/local/lib/python3.7/dist-packages/ansible/modules/windows/win_ping.ps1
Pipelining is enabled.
<srv-prp-tb01c> ESTABLISH WINRM CONNECTION FOR USER: Administrateur on PORT 5985 TO srv-prp-tb01c
<srv-prp-tb01c> WINRM CONNECT: transport=credssp endpoint=http://srv-prp-tb01c:5985/wsman
<srv-prp-tb01c> WINRM OPEN SHELL: 067774F1-8E9A-4366-A5B1-C9A47A2D665F
EXEC (via pipeline wrapper)
<srv-prp-tb01c> WINRM EXEC 'PowerShell' ['-NoProfile', '-NonInteractive', '-ExecutionPolicy', 'Unrestricted', '-EncodedCommand', 'UABvAHcAZQByAFMAaABlAGwAbAAgAC0ATgBvAFAAcgBvAGYAaQBsAGUAIAAtAE4AbwBuAEkAbgB0AGUAcgBhAGMAdABpAHYAZQAgAC0ARQB4AGUAYwB1AHQAaQBvAG4AUABvAGwAaQBjAHkAIABVAG4AcgBlAHMAdAByAGkAYwB0AGUAZAAgAC0ARQBuAGMAbwBkAGUAZABDAG8AbQBtAGEAbgBkACAASgBnAEIAagBBAEcAZwBBAFkAdwBCAHcAQQBDADQAQQBZAHcAQgB2AEEARwAwAEEASQBBAEEAMgBBAEQAVQBBAE0AQQBBAHcAQQBEAEUAQQBJAEEAQQArAEEAQwBBAEEASgBBAEIAdQBBAEgAVQBBAGIAQQBCAHMAQQBBAG8AQQBKAEEAQgBsAEEASABnAEEAWgBRAEIAagBBAEYAOABBAGQAdwBCAHkAQQBHAEUAQQBjAEEAQgB3AEEARwBVAEEAYwBnAEIAZgBBAEgATQBBAGQAQQBCAHkAQQBDAEEAQQBQAFEAQQBnAEEAQwBRAEEAYQBRAEIAdQBBAEgAQQBBAGQAUQBCADAAQQBDAEEAQQBmAEEAQQBnAEEARQA4AEEAZABRAEIAMABBAEMAMABBAFUAdwBCADAAQQBIAEkAQQBhAFEAQgB1AEEARwBjAEEAQwBnAEEAawBBAEgATQBBAGMAQQBCAHMAQQBHAGsAQQBkAEEAQgBmAEEASABBAEEAWQBRAEIAeQBBAEgAUQBBAGMAdwBBAGcAQQBEADAAQQBJAEEAQQBrAEEARwBVAEEAZQBBAEIAbABBAEcATQBBAFgAdwBCADMAQQBIAEkAQQBZAFEAQgB3AEEASABBAEEAWgBRAEIAeQBBAEYAOABBAGMAdwBCADAAQQBIAEkAQQBMAGcAQgBUAEEASABBAEEAYgBBAEIAcABBAEgAUQBBAEsAQQBCAEEAQQBDAGcAQQBJAGcAQgBnAEEARABBAEEAWQBBAEEAdwBBAEcAQQBBAE0AQQBCAGcAQQBEAEEAQQBJAGcAQQBwAEEAQwB3AEEASQBBAEEAeQBBAEMAdwBBAEkAQQBCAGIAQQBGAE0AQQBkAEEAQgB5AEEARwBrAEEAYgBnAEIAbgBBAEYATQBBAGMAQQBCAHMAQQBHAGsAQQBkAEEAQgBQAEEASABBAEEAZABBAEIAcABBAEcAOABBAGIAZwBCAHoAQQBGADAAQQBPAGcAQQA2AEEARgBJAEEAWgBRAEIAdABBAEcAOABBAGQAZwBCAGwAQQBFAFUAQQBiAFEAQgB3AEEASABRAEEAZQBRAEIARgBBAEcANABBAGQAQQBCAHkAQQBHAGsAQQBaAFEAQgB6AEEAQwBrAEEAQwBnAEIASgBBAEcAWQBBAEkAQQBBAG8AQQBDADAAQQBiAGcAQgB2AEEASABRAEEASQBBAEEAawBBAEgATQBBAGMAQQBCAHMAQQBHAGsAQQBkAEEAQgBmAEEASABBAEEAWQBRAEIAeQBBAEgAUQBBAGMAdwBBAHUAQQBFAHcAQQBaAFEAQgB1AEEARwBjAEEAZABBAEIAbwBBAEMAQQBBAEwAUQBCAGwAQQBIAEUAQQBJAEEAQQB5AEEAQwBrAEEASQBBAEIANwBBAEMAQQBBAGQAQQBCAG8AQQBIAEkAQQBiAHcAQgAzAEEAQwBBAEEASQBnAEIAcABBAEcANABBAGQAZwBCAGgAQQBHAHcAQQBhAFEAQgBrAEEAQwBBAEEAYwBBAEIAaABBAEgAawBBAGIAQQBCAHYAQQBHAEUAQQBaAEEAQQBpAEEAQwBBAEEAZgBRAEEASwBBAEYATQBBAFoAUQBCADAAQQBDADAAQQBWAG
cAQgBoAEEASABJAEEAYQBRAEIAaABBAEcASQBBAGIAQQBCAGwAQQBDAEEAQQBMAFEAQgBPAEEARwBFAEEAYgBRAEIAbABBAEMAQQBBAGEAZwBCAHoAQQBHADgAQQBiAGcAQgBmAEEASABJAEEAWQBRAEIAMwBBAEMAQQBBAEwAUQBCAFcAQQBHAEUAQQBiAEEAQgAxAEEARwBVAEEASQBBAEEAawBBAEgATQBBAGMAQQBCAHMAQQBHAGsAQQBkAEEAQgBmAEEASABBAEEAWQBRAEIAeQBBAEgAUQBBAGMAdwBCAGIAQQBEAEUAQQBYAFEAQQBLAEEAQwBRAEEAWgBRAEIANABBAEcAVQBBAFkAdwBCAGYAQQBIAGMAQQBjAGcAQgBoAEEASABBAEEAYwBBAEIAbABBAEgASQBBAEkAQQBBADkAQQBDAEEAQQBXAHcAQgBUAEEARwBNAEEAYwBnAEIAcABBAEgAQQBBAGQAQQBCAEMAQQBHAHcAQQBiAHcAQgBqAEEARwBzAEEAWABRAEEANgBBAEQAbwBBAFEAdwBCAHkAQQBHAFUAQQBZAFEAQgAwAEEARwBVAEEASwBBAEEAawBBAEgATQBBAGMAQQBCAHMAQQBHAGsAQQBkAEEAQgBmAEEASABBAEEAWQBRAEIAeQBBAEgAUQBBAGMAdwBCAGIAQQBEAEEAQQBYAFEAQQBwAEEAQQBvAEEASgBnAEEAawBBAEcAVQBBAGUAQQBCAGwAQQBHAE0AQQBYAHcAQgAzAEEASABJAEEAWQBRAEIAdwBBAEgAQQBBAFoAUQBCAHkAQQBBAD0APQA=']
<srv-prp-tb01c> WINRM RESULT '<Response code 0, out "{"changed":false,"in", err "#< CLIXML\r\n<Objs Ver">'
<srv-prp-tb01c> WINRM STDOUT {"changed":false,"invocation":{"module_args":{"data":"pong"}},"ping":"pong"}
<srv-prp-tb01c> WINRM STDERR #< CLIXML
<Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Préparation des modules à la première utilisation.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs>
<srv-prp-tb01c> WINRM CLOSE SHELL: 067774F1-8E9A-4366-A5B1-C9A47A2D665F
srv-prp-tb01c | SUCCESS => {
"changed": false,
"invocation": {
"module_args": {
"data": "pong"
}
},
"ping": "pong"
}
META: ran handlers
META: ran handlers
As a normal user:
ansible -m win_ping srv-prp-tb01c -vvvvvv
ansible 2.9.6
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/fluxvision/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.3 (default, Dec 20 2019, 18:57:59) [GCC 8.3.0]
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
Loading callback plugin minimal of type stdout, v2.0 from /usr/local/lib/python3.7/dist-packages/ansible/plugins/callback/minimal.py
META: ran handlers
Using module file /usr/local/lib/python3.7/dist-packages/ansible/modules/windows/win_ping.ps1
Pipelining is enabled.
<srv-prp-tb01c> ESTABLISH WINRM CONNECTION FOR USER: Administrateur on PORT 5985 TO srv-prp-tb01c
<srv-prp-tb01c> WINRM CONNECT: transport=credssp endpoint=http://srv-prp-tb01c:5985/wsman
<srv-prp-tb01c> WINRM CONNECTION ERROR: Server did not response with a CredSSP token after step Step 1. TLS Handshake - actual 'Negotiate, Kerberos, CredSSP'
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/ansible/plugins/connection/winrm.py", line 413, in _winrm_connect
self.shell_id = protocol.open_shell(codepage=65001) # UTF-8
File "/usr/local/lib/python3.7/dist-packages/winrm/protocol.py", line 166, in open_shell
res = self.send_message(xmltodict.unparse(req))
File "/usr/local/lib/python3.7/dist-packages/winrm/protocol.py", line 243, in send_message
resp = self.transport.send_message(message)
File "/usr/local/lib/python3.7/dist-packages/winrm/transport.py", line 310, in send_message
self.build_session()
File "/usr/local/lib/python3.7/dist-packages/winrm/transport.py", line 293, in build_session
self.setup_encryption()
File "/usr/local/lib/python3.7/dist-packages/winrm/transport.py", line 299, in setup_encryption
self._send_message_request(prepared_request, '')
File "/usr/local/lib/python3.7/dist-packages/winrm/transport.py", line 328, in _send_message_request
response = self.session.send(prepared_request, timeout=self.read_timeout_sec)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 653, in send
r = dispatch_hook('response', hooks, r, **kwargs)
File "/usr/lib/python3/dist-packages/requests/hooks.py", line 31, in dispatch_hook
_hook_data = hook(hook_data, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/requests_credssp/credssp.py", line 448, in response_hook
response = self.handle_401(response, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/requests_credssp/credssp.py", line 484, in handle_401
step_name)
File "/usr/local/lib/python3.7/dist-packages/requests_credssp/credssp.py", line 517, in _get_credssp_token
raise AuthenticationException(error_msg)
requests_credssp.exceptions.AuthenticationException: Server did not response with a CredSSP token after step Step 1. TLS Handshake - actual 'Negotiate, Kerberos, CredSSP'
srv-prp-tb01c | UNREACHABLE! => {
"changed": false,
"msg": "credssp: Server did not response with a CredSSP token after step Step 1. TLS Handshake - actual 'Negotiate, Kerberos, CredSSP'",
"unreachable": true
}
I am a bit at a loss here. Is this the expected behaviour?
Thanks for your help,
Nicolas
For the record, this has nothing to do with root/non-root.
The normal user environment has proxy definitions; the root account does not have them. After removing the definitions, all works fine:
ansible@srv-prod-lnx01:~$ ansible -m win_ping srv-prp-tb01c
srv-prp-tb01c | UNREACHABLE! => {
"changed": false,
"msg": "credssp: Server did not response with a CredSSP token after step Step 1. TLS Handshake - actual 'Negotiate, Kerberos, CredSSP'",
"unreachable": true
}
ansible@srv-prod-lnx01:~$ unset HTTP_PROXY
ansible@srv-prod-lnx01:~$ ansible -m win_ping srv-prp-tb01c
srv-prp-tb01c | SUCCESS => {
"changed": false,
"ping": "pong"
}
For more details:
https://learn.microsoft.com/en-us/windows/win32/winrm/proxy-servers-and-winrm#configuring-a-proxy-server-for-winrm-20
Sorry for the noise.
Nicolas
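One more note, an assumption on my side rather than something I tested: the winrm connection plugin documents a proxy option, so setting ansible_winrm_proxy in the inventory might bypass the proxy for these hosts without touching the shell environment:

```yaml
# hypothetical [windows:vars] equivalent in YAML; per the plugin docs,
# an explicitly set proxy value is meant to override the HTTP_PROXY
# inherited from the environment
ansible_winrm_proxy: null
```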

Getting Module Failure error while running Ansible playbook

I am getting the following error when running my Ansible playbook:
hosts file
[node1]
rabbit-node1 ansible_ssh_host=x.x.x.x ansible_ssh_user=ubuntu
[node2]
rabbit-node2 ansible_ssh_host=x.x.x.x ansible_ssh_user=ubuntu
[node3]
rabbit-node3 ansible_ssh_host=x.x.x.x ansible_ssh_user=ubuntu
[workers]
rabbit-node2
rabbit-node3
[all_group]
rabbit-node1
rabbit-node2
rabbit-node3
[all:vars]
ansible_python_interpreter=/usr/bin/python3
ansible_ssh_user=ubuntu
ansible_private_key_file=private key path
ansible_ssh_extra_args='-o StrictHostKeyChecking=no'
Error
fatal: [rabbit-node1]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "setup"}, "module_stderr": "OpenSSH_7.2p2 Ubuntu-4ubuntu2.1, OpenSSL 1.0.2g 1 Mar 2016\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for x.x.x.x\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 25400\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to x.x.x.x closed.\r\n", "module_stdout": " File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1584361123.7-96709661573808/setup\", line 3160\r\n except OSError, e:\r\n ^\r\nSyntaxError: invalid syntax\r\n", "msg": "MODULE FAILURE", "parsed": false}
playbook.yml
Playbook file for setting the hostname, installing RabbitMQ, and creating a three-node RabbitMQ cluster.
- name: deploy RabbitMQ and setup the environment
  hosts:
    - all_group
  #gather_facts: False
  user: ubuntu
  sudo: yes
  roles:
    - set_hostname
    - install_rabbitmq

- name: Configure RabbitMQ Cluster
  hosts:
    - workers
  user: ubuntu
  sudo: yes
  roles:
    - cluster_setup
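One observation on the error itself: except OSError, e: in module_stdout is Python-2-only syntax, so the setup module code being copied to the node appears to come from an Ansible release that predates Python 3 support, while [all:vars] forces /usr/bin/python3. A hedged sketch of the inventory change that would test this theory (assuming a Python 2 interpreter exists on the nodes; upgrading Ansible is the other route):

```yaml
# hypothetical group_vars/all.yaml, pointing the Python-2-era modules
# at a Python 2 interpreter instead of /usr/bin/python3
ansible_python_interpreter: /usr/bin/python
```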

How to use ansible without ssh keys - getpwnam() MODULE FAILURE

I am trying to use Ansible without setting up SSH keys; I want to log in with a username and password. I have installed sshpass on the master.
My directory layout looks like this:
ansible.cfg
inventory/
hosts
group_vars/
all.yaml
roles
whoami/
tasks/
main.yaml
site.yaml
ansible.cfg:
[defaults]
host_key_checking = false
inventory = inventory
inventory/hosts:
[defaults]
machine
inventory/group_vars/all.yaml:
---
# file: group_vars/all
ansible_user: username
ansible_password: password
main.yaml:
---
- name: whoami
  shell: whoami
site.yaml:
---
- hosts: machine
  roles:
    - whoami
If I run:
$ansible machine -i inventory/ -m shell -a "whoami"
it is executed successfully:
machine | SUCCESS | rc=0 >>
username
whereas if I run:
$ansible-playbook -i inventory site.yml -v
I get this:
fatal: [machine]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "setup"
},
"module_stderr": "OpenSSH_7.4p1 Debian-10+deb9u1, OpenSSL 1.0.2l 25 May 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 2908\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to machine closed.\r\n",
"module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_QAFcSD/ansible_module_setup.py\", line 134, in <module>\r\n main()\r\n File \"/tmp/ansible_QAFcSD/ansible_module_setup.py\", line 126, in main\r\n data = get_all_facts(module)\r\n File \"/tmp/ansible_QAFcSD/ansible_modlib.zip/ansible/module_utils/facts.py\", line 3518, in get_all_facts\r\n File \"/tmp/ansible_QAFcSD/ansible_modlib.zip/ansible/module_utils/facts.py\", line 3457, in ansible_facts\r\n File \"/tmp/ansible_QAFcSD/ansible_modlib.zip/ansible/module_utils/facts.py\", line 171, in __init__\r\n File \"/tmp/ansible_QAFcSD/ansible_modlib.zip/ansible/module_utils/facts.py\", line 513, in get_user_facts\r\nKeyError: 'getpwnam(): name not found: username'\r\n", "msg": "MODULE FAILURE"
}
to retry, use: --limit #/home/username/playbook/my_ansible/site.retry
PLAY RECAP *********************************************************************
machine : ok=0 changed=0 unreachable=0 failed=1
I was running version 2.2.1.0, which is affected by this bug on GitHub. I upgraded to version 2.4.1.0 with:
$ pip install --upgrade ansible
and it works smoothly.
Thanks to techraf
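For anyone stuck on an affected version: the traceback originates in the setup (fact-gathering) module's get_user_facts, so a stopgap is to skip fact gathering for the play. A minimal sketch, assuming nothing in the roles relies on gathered facts:

```yaml
---
- hosts: machine
  gather_facts: no   # the getpwnam() failure happens inside the setup module
  roles:
    - whoami
```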

junos_command module not returning output

I have an Ansible script where I am simply using the junos_command module to get the list of users from a Juniper switch; below is a snippet of my code. I keep getting a RuntimeWarning whenever I try to run it, even though I have been able to run commands like 'show version' successfully with the same code. Please help.
Script:
- name: / GET USERS / Get list of all the current users on switch
  action: junos_command
  args: { commands: 'show configuration system login',
          provider: "{{ netconf }}" }
  register: curr_users_on_switch
Error:
TASK [/ GET USERS / Get list of all the current users on switch] ***************
fatal: [rlab-er1]: FAILED! => {"changed": false, "failed": true, "module_stderr": "/home/mbhadoria/.local/lib/python2.7/site-packages/jnpr/junos/device.py:429: RuntimeWarning: CLI command is for debug use only!
\n warnings.warn(\"CLI command is for debug use only!\", RuntimeWarning)\nTraceback (most recent call last):
\n File \"/tmp/ansible_lVOmPp/ansible_module_junos_command.py\", line 261, in <module>
\n main()
\n File \"/tmp/ansible_lVOmPp/ansible_module_junos_command.py\", line 233, in main
\n xmlout.append(xml_to_string(response[index]))
\n File \"/tmp/ansible_lVOmPp/ansible_modlib.zip/ansible/module_utils/junos.py\", line 79, in xml_to_string\n File \"src/lxml/lxml.etree.pyx\", line 3350, in lxml.etree.tostring (src/lxml/lxml.etree.c:84534)\nTypeError: Type 'str' cannot be serialized.
\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
junos_command only supports operational Junos commands, while what you are trying to run is a configuration command. That is why 'show version', an operational command, works, but 'show configuration system login' does not.
For such configuration data you should use the rpc option (get-configuration) with junos_command.
junos_command:
  rpcs:
    - get_configuration
You can also use junos_get_config (http://junos-ansible-modules.readthedocs.io/en/latest/junos_get_config.html) or junos_rpc (https://github.com/Juniper/ansible-junos-stdlib/blob/master/library/junos_rpc).
For example:
- name: Junos OS version
  hosts: all
  connection: local
  gather_facts: no
  tasks:
    - name: Get rpc run
      junos_rpc:
        host: "{{ inventory_hostname }}"
        user: xxxx
        passwd: xxx
        rpc: get-config
        dest: get_config.conf
        filter_xml: "<configuration><system><login/></system></configuration>"
      register: junos
or
tasks:
  - name: Get rpc run
    junos_get_config:
      host: "{{ inventory_hostname }}"
      user: xxxx
      passwd: xxxx
      logfile: get_config.log
      dest: "{{ inventory_hostname }}.xml"
      format: xml
      filter: "system/login"
TASK [Get rpc run] *************************************************************
......
PLAY RECAP *********************************************************************
xxxk : ok=1 changed=1 unreachable=0 failed=0
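On current Ansible releases the junipernetworks.junos collection version of junos_command accepts a display option, so the same configuration data can usually be fetched without a separate RPC task. A hedged sketch (collection, connection plugin, and inventory setup assumed, not taken from the original answer):

```yaml
- name: Get login configuration
  hosts: all
  gather_facts: no
  connection: ansible.netcommon.netconf
  tasks:
    - name: Fetch the system login stanza as text
      junipernetworks.junos.junos_command:
        commands:
          - show configuration system login
        display: text
      register: curr_users_on_switch
```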

Resources