Ansible with winrm only works as root? - ansible

I am using Ansible 2.9.6, installed with pip, with Python 3.7.3 on a Debian Buster server, and I am trying to manage some of our Windows 2016 servers with it; I am already using PowerShell remoting from other Windows servers without problems.
The strange thing is that I can only connect to the Windows servers when the command is started as root on the Buster server.
For the Windows part I am using this in ansible.cfg:
[windows:vars]
ansible_become=false
ansible_user=Administrateur
ansible_password=somepassword
ansible_port=5985
ansible_connection=winrm
ansible_winrm_server_cert_validation=ignore
ansible_winrm_transport=credssp
ansible_become_method=runas
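For reference, the same settings can also live in the inventory's group_vars instead of ansible.cfg; a direct YAML translation of the INI block above (values unchanged from the question):

```yaml
# group_vars/windows.yml -- same variables as the [windows:vars] block above
ansible_become: false
ansible_user: Administrateur
ansible_password: somepassword
ansible_port: 5985
ansible_connection: winrm
ansible_winrm_server_cert_validation: ignore
ansible_winrm_transport: credssp
ansible_become_method: runas
```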
The results of running a simple win_ping check are:
As root:
sudo ansible -m win_ping srv-prp-tb01c -vvvvvv
ansible 2.9.6
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.3 (default, Dec 20 2019, 18:57:59) [GCC 8.3.0]
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
Loading callback plugin minimal of type stdout, v2.0 from /usr/local/lib/python3.7/dist-packages/ansible/plugins/callback/minimal.py
META: ran handlers
Using module file /usr/local/lib/python3.7/dist-packages/ansible/modules/windows/win_ping.ps1
Pipelining is enabled.
<srv-prp-tb01c> ESTABLISH WINRM CONNECTION FOR USER: Administrateur on PORT 5985 TO srv-prp-tb01c
<srv-prp-tb01c> WINRM CONNECT: transport=credssp endpoint=http://srv-prp-tb01c:5985/wsman
<srv-prp-tb01c> WINRM OPEN SHELL: 067774F1-8E9A-4366-A5B1-C9A47A2D665F
EXEC (via pipeline wrapper)
<srv-prp-tb01c> WINRM EXEC 'PowerShell' ['-NoProfile', '-NonInteractive', '-ExecutionPolicy', 'Unrestricted', '-EncodedCommand', 'UABvAHcAZQByAFMAaABlAGwAbAAgAC0ATgBvAFAAcgBvAGYAaQBsAGUAIAAtAE4AbwBuAEkAbgB0AGUAcgBhAGMAdABpAHYAZQAgAC0ARQB4AGUAYwB1AHQAaQBvAG4AUABvAGwAaQBjAHkAIABVAG4AcgBlAHMAdAByAGkAYwB0AGUAZAAgAC0ARQBuAGMAbwBkAGUAZABDAG8AbQBtAGEAbgBkACAASgBnAEIAagBBAEcAZwBBAFkAdwBCAHcAQQBDADQAQQBZAHcAQgB2AEEARwAwAEEASQBBAEEAMgBBAEQAVQBBAE0AQQBBAHcAQQBEAEUAQQBJAEEAQQArAEEAQwBBAEEASgBBAEIAdQBBAEgAVQBBAGIAQQBCAHMAQQBBAG8AQQBKAEEAQgBsAEEASABnAEEAWgBRAEIAagBBAEYAOABBAGQAdwBCAHkAQQBHAEUAQQBjAEEAQgB3AEEARwBVAEEAYwBnAEIAZgBBAEgATQBBAGQAQQBCAHkAQQBDAEEAQQBQAFEAQQBnAEEAQwBRAEEAYQBRAEIAdQBBAEgAQQBBAGQAUQBCADAAQQBDAEEAQQBmAEEAQQBnAEEARQA4AEEAZABRAEIAMABBAEMAMABBAFUAdwBCADAAQQBIAEkAQQBhAFEAQgB1AEEARwBjAEEAQwBnAEEAawBBAEgATQBBAGMAQQBCAHMAQQBHAGsAQQBkAEEAQgBmAEEASABBAEEAWQBRAEIAeQBBAEgAUQBBAGMAdwBBAGcAQQBEADAAQQBJAEEAQQBrAEEARwBVAEEAZQBBAEIAbABBAEcATQBBAFgAdwBCADMAQQBIAEkAQQBZAFEAQgB3AEEASABBAEEAWgBRAEIAeQBBAEYAOABBAGMAdwBCADAAQQBIAEkAQQBMAGcAQgBUAEEASABBAEEAYgBBAEIAcABBAEgAUQBBAEsAQQBCAEEAQQBDAGcAQQBJAGcAQgBnAEEARABBAEEAWQBBAEEAdwBBAEcAQQBBAE0AQQBCAGcAQQBEAEEAQQBJAGcAQQBwAEEAQwB3AEEASQBBAEEAeQBBAEMAdwBBAEkAQQBCAGIAQQBGAE0AQQBkAEEAQgB5AEEARwBrAEEAYgBnAEIAbgBBAEYATQBBAGMAQQBCAHMAQQBHAGsAQQBkAEEAQgBQAEEASABBAEEAZABBAEIAcABBAEcAOABBAGIAZwBCAHoAQQBGADAAQQBPAGcAQQA2AEEARgBJAEEAWgBRAEIAdABBAEcAOABBAGQAZwBCAGwAQQBFAFUAQQBiAFEAQgB3AEEASABRAEEAZQBRAEIARgBBAEcANABBAGQAQQBCAHkAQQBHAGsAQQBaAFEAQgB6AEEAQwBrAEEAQwBnAEIASgBBAEcAWQBBAEkAQQBBAG8AQQBDADAAQQBiAGcAQgB2AEEASABRAEEASQBBAEEAawBBAEgATQBBAGMAQQBCAHMAQQBHAGsAQQBkAEEAQgBmAEEASABBAEEAWQBRAEIAeQBBAEgAUQBBAGMAdwBBAHUAQQBFAHcAQQBaAFEAQgB1AEEARwBjAEEAZABBAEIAbwBBAEMAQQBBAEwAUQBCAGwAQQBIAEUAQQBJAEEAQQB5AEEAQwBrAEEASQBBAEIANwBBAEMAQQBBAGQAQQBCAG8AQQBIAEkAQQBiAHcAQgAzAEEAQwBBAEEASQBnAEIAcABBAEcANABBAGQAZwBCAGgAQQBHAHcAQQBhAFEAQgBrAEEAQwBBAEEAYwBBAEIAaABBAEgAawBBAGIAQQBCAHYAQQBHAEUAQQBaAEEAQQBpAEEAQwBBAEEAZgBRAEEASwBBAEYATQBBAFoAUQBCADAAQQBDADAAQQBWAG
cAQgBoAEEASABJAEEAYQBRAEIAaABBAEcASQBBAGIAQQBCAGwAQQBDAEEAQQBMAFEAQgBPAEEARwBFAEEAYgBRAEIAbABBAEMAQQBBAGEAZwBCAHoAQQBHADgAQQBiAGcAQgBmAEEASABJAEEAWQBRAEIAMwBBAEMAQQBBAEwAUQBCAFcAQQBHAEUAQQBiAEEAQgAxAEEARwBVAEEASQBBAEEAawBBAEgATQBBAGMAQQBCAHMAQQBHAGsAQQBkAEEAQgBmAEEASABBAEEAWQBRAEIAeQBBAEgAUQBBAGMAdwBCAGIAQQBEAEUAQQBYAFEAQQBLAEEAQwBRAEEAWgBRAEIANABBAEcAVQBBAFkAdwBCAGYAQQBIAGMAQQBjAGcAQgBoAEEASABBAEEAYwBBAEIAbABBAEgASQBBAEkAQQBBADkAQQBDAEEAQQBXAHcAQgBUAEEARwBNAEEAYwBnAEIAcABBAEgAQQBBAGQAQQBCAEMAQQBHAHcAQQBiAHcAQgBqAEEARwBzAEEAWABRAEEANgBBAEQAbwBBAFEAdwBCAHkAQQBHAFUAQQBZAFEAQgAwAEEARwBVAEEASwBBAEEAawBBAEgATQBBAGMAQQBCAHMAQQBHAGsAQQBkAEEAQgBmAEEASABBAEEAWQBRAEIAeQBBAEgAUQBBAGMAdwBCAGIAQQBEAEEAQQBYAFEAQQBwAEEAQQBvAEEASgBnAEEAawBBAEcAVQBBAGUAQQBCAGwAQQBHAE0AQQBYAHcAQgAzAEEASABJAEEAWQBRAEIAdwBBAEgAQQBBAFoAUQBCAHkAQQBBAD0APQA=']
<srv-prp-tb01c> WINRM RESULT '<Response code 0, out "{"changed":false,"in", err "#< CLIXML\r\n<Objs Ver">'
<srv-prp-tb01c> WINRM STDOUT {"changed":false,"invocation":{"module_args":{"data":"pong"}},"ping":"pong"}
<srv-prp-tb01c> WINRM STDERR #< CLIXML
<Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Préparation des modules à la première utilisation.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs>
<srv-prp-tb01c> WINRM CLOSE SHELL: 067774F1-8E9A-4366-A5B1-C9A47A2D665F
srv-prp-tb01c | SUCCESS => {
"changed": false,
"invocation": {
"module_args": {
"data": "pong"
}
},
"ping": "pong"
}
META: ran handlers
META: ran handlers
As a normal user:
ansible -m win_ping srv-prp-tb01c -vvvvvv
ansible 2.9.6
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/fluxvision/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.3 (default, Dec 20 2019, 18:57:59) [GCC 8.3.0]
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
Loading callback plugin minimal of type stdout, v2.0 from /usr/local/lib/python3.7/dist-packages/ansible/plugins/callback/minimal.py
META: ran handlers
Using module file /usr/local/lib/python3.7/dist-packages/ansible/modules/windows/win_ping.ps1
Pipelining is enabled.
<srv-prp-tb01c> ESTABLISH WINRM CONNECTION FOR USER: Administrateur on PORT 5985 TO srv-prp-tb01c
<srv-prp-tb01c> WINRM CONNECT: transport=credssp endpoint=http://srv-prp-tb01c:5985/wsman
<srv-prp-tb01c> WINRM CONNECTION ERROR: Server did not response with a CredSSP token after step Step 1. TLS Handshake - actual 'Negotiate, Kerberos, CredSSP'
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/ansible/plugins/connection/winrm.py", line 413, in _winrm_connect
self.shell_id = protocol.open_shell(codepage=65001) # UTF-8
File "/usr/local/lib/python3.7/dist-packages/winrm/protocol.py", line 166, in open_shell
res = self.send_message(xmltodict.unparse(req))
File "/usr/local/lib/python3.7/dist-packages/winrm/protocol.py", line 243, in send_message
resp = self.transport.send_message(message)
File "/usr/local/lib/python3.7/dist-packages/winrm/transport.py", line 310, in send_message
self.build_session()
File "/usr/local/lib/python3.7/dist-packages/winrm/transport.py", line 293, in build_session
self.setup_encryption()
File "/usr/local/lib/python3.7/dist-packages/winrm/transport.py", line 299, in setup_encryption
self._send_message_request(prepared_request, '')
File "/usr/local/lib/python3.7/dist-packages/winrm/transport.py", line 328, in _send_message_request
response = self.session.send(prepared_request, timeout=self.read_timeout_sec)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 653, in send
r = dispatch_hook('response', hooks, r, **kwargs)
File "/usr/lib/python3/dist-packages/requests/hooks.py", line 31, in dispatch_hook
_hook_data = hook(hook_data, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/requests_credssp/credssp.py", line 448, in response_hook
response = self.handle_401(response, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/requests_credssp/credssp.py", line 484, in handle_401
step_name)
File "/usr/local/lib/python3.7/dist-packages/requests_credssp/credssp.py", line 517, in _get_credssp_token
raise AuthenticationException(error_msg)
requests_credssp.exceptions.AuthenticationException: Server did not response with a CredSSP token after step Step 1. TLS Handshake - actual 'Negotiate, Kerberos, CredSSP'
srv-prp-tb01c | UNREACHABLE! => {
"changed": false,
"msg": "credssp: Server did not response with a CredSSP token after step Step 1. TLS Handshake - actual 'Negotiate, Kerberos, CredSSP'",
"unreachable": true
}
I am a bit at a loss here. Is this the expected behaviour?
Thanks for your help,
Nicolas

For the record, this has nothing to do with root/non-root.
The normal user's environment has proxy definitions; the root account does not. After removing them, everything works fine:
ansible@srv-prod-lnx01:~$ ansible -m win_ping srv-prp-tb01c
srv-prp-tb01c | UNREACHABLE! => {
"changed": false,
"msg": "credssp: Server did not response with a CredSSP token after step Step 1. TLS Handshake - actual 'Negotiate, Kerberos, CredSSP'",
"unreachable": true
}
ansible@srv-prod-lnx01:~$ unset HTTP_PROXY
ansible@srv-prod-lnx01:~$ ansible -m win_ping srv-prp-tb01c
srv-prp-tb01c | SUCCESS => {
"changed": false,
"ping": "pong"
}
For more details:
https://learn.microsoft.com/en-us/windows/win32/winrm/proxy-servers-and-winrm#configuring-a-proxy-server-for-winrm-20
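If the proxy is needed for other traffic, a less drastic fix than unsetting the variables is to exempt the WinRM target from the proxy via no_proxy; a sketch (the host name is the one from the question):

```shell
# Keep the proxy for everything else, but bypass it for the WinRM host.
export NO_PROXY="srv-prp-tb01c${NO_PROXY:+,${NO_PROXY}}"
export no_proxy="${NO_PROXY}"   # some tools only read the lowercase form
# then, from the same shell:
#   ansible -m win_ping srv-prp-tb01c
```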
Sorry for the noise.
Nicolas

Related

Ansible Playbook command timeout when connecting to cisco switch using SSH key

Summary:
I'm trying to set up a playbook to get a list of IOS devices from NetBox and then export the config files.
Versions:
ansible-playbook [core 2.13.2]
python version = 3.8.13
switch IOS = 15.0(2)EX5
ansible host = Ubuntu 18.04.6 LTS
Issue:
When I run the command:
ansible-playbook -vvvvv playbook.yml
I get the error:
The full traceback is:
File "/usr/local/lib/python3.8/dist-packages/ansible_collections/cisco/ios/plugins/module_utils/network/ios/ios.py", line 94, in get_capabilities
capabilities = Connection(module._socket_path).get_capabilities()
File "/usr/local/lib/python3.8/dist-packages/ansible/module_utils/connection.py", line 200, in __rpc__
raise ConnectionError(to_text(msg, errors='surrogate_then_replace'), code=code)
fatal: [PHC-16JUB-SW01]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"commands": [
"show version"
],
"interval": 1,
"match": "all",
"provider": null,
"retries": 10,
"wait_for": null
}
},
"msg": "command timeout triggered, timeout value is 60 secs.\nSee the timeout setting options in the Network Debug and Troubleshooting Guide."
}
Files:
playbook.yml
- name: Playbook to backup configs on Cisco Routers
  connection: network_cli
  hosts: device_roles_switch
  remote_user: admin
  gather_facts: False
  tasks:
    - name: Show Running configuration on Device
      ios_command:
        commands:
          - show version
      vars:
        ansible_user: admin
      register: config
    - name: Save output to backups folder
      copy:
        content: "{{ config.stdout[0] }}"
        dest: "./backups/{{ inventory_hostname }}-config.txt"
ansible.cfg
[defaults]
inventory = ./netbox_inv.yml
host_key_checking = false
retry_files_enabled = false
forks = 4
ansible_user = admin
remote_user = admin
private_key_file = ~/.ssh/id_rsa
[privilege_escalation]
become = True
become_method = enable
become_pass = passwordhere
[persistent_connection]
command_timeout = 60
netbox_inv.yml
plugin: netbox.netbox.nb_inventory
api_endpoint: http://172.16.1.32
token: 'apikeygoeshere'
validate_certs: False
config_context: False
group_by:
  - device_roles
query_filters:
  - tag: backup
  - status: active
compose:
  ansible_network_os: platform.slug
Troubleshooting:
I have confirmed I can use the key to connect directly to the host:
root@netbox:~/config_backups# ssh admin@172.16.100.133
PHC-SW01>exit
Connection to 172.16.100.133 closed.
Confirmed SSH connection on switch:
PHC-SW01#debug ip ssh detail
ssh detail messages debugging is on
PHC-SW01#terminal monitor
PHC-SW01#
Aug 3 23:58:33.930: SSH1: starting SSH control process
Aug 3 23:58:33.930: SSH1: sent protocol version id SSH-1.99-Cisco-1.25
Aug 3 23:58:33.934: SSH1: protocol version id is - SSH-2.0-libssh_0.9.3
Aug 3 23:58:33.934: SSH2 1: SSH2_MSG_KEXINIT sent
Aug 3 23:58:33.934: SSH2 1: SSH2_MSG_KEXINIT received
Aug 3 23:58:33.934: SSH2:kex: client->server enc:aes256-cbc mac:hmac-sha1
Aug 3 23:58:33.934: SSH2:kex: server->client enc:aes256-cbc mac:hmac-sha1
Aug 3 23:58:34.004: SSH2 1: expecting SSH2_MSG_KEXDH_INIT
Aug 3 23:58:34.018: SSH2 1: SSH2_MSG_KEXDH_INIT received
Aug 3 23:58:34.147: SSH2: kex_derive_keys complete
Aug 3 23:58:34.150: SSH2 1: SSH2_MSG_NEWKEYS sent
Aug 3 23:58:34.150: SSH2 1: waiting for SSH2_MSG_NEWKEYS
Aug 3 23:58:34.154: SSH2 1: SSH2_MSG_NEWKEYS received
Aug 3 23:58:34.371: SSH2 1: Using method = none
Aug 3 23:58:34.378: SSH2 1: Using method = publickey
Aug 3 23:58:34.378: SSH2 1: Authenticating 'admin' with method: publickey
Aug 3 23:58:34.388: SSH2 1: Client Signature verification PASSED
Aug 3 23:58:34.388: SSH2 1: authentication successful for admin
Aug 3 23:58:34.392: SSH2 1: channel open request
Aug 3 23:58:34.395: SSH2 1: pty-req request
Aug 3 23:58:34.395: SSH2 1: setting TTY - requested: height 24, width 80; set: height 24, width 80
Aug 3 23:58:34.395: SSH2 1: shell request
Aug 3 23:58:34.395: SSH2 1: shell message received
Aug 3 23:58:34.395: SSH2 1: starting shell for vty
Aug 3 23:58:34.409: SSH2 1: channel window adjust message received 1216017
Aug 3 23:59:04.035: SSH1: Session terminated normally
I also changed the timeout to 60 secs; there appears to be no change even when the timeout is longer.
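For reference, the persistent-connection timeout can also be raised per group instead of globally in ansible.cfg; a sketch (the group name comes from the playbook's hosts: line, the value is arbitrary):

```yaml
# group_vars/device_roles_switch.yml (sketch)
ansible_command_timeout: 120
```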
Log Output:
root@netbox:~/config_backups# ansible-playbook -vvvvv playbook.yml
ansible-playbook [core 2.13.2]
config file = /root/config_backups/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-playbook
python version = 3.8.13 (default, Apr 19 2022, 00:53:22) [GCC 7.5.0]
jinja version = 3.1.2
libyaml = True
Using /root/config_backups/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /root/config_backups/netbox_inv.yml as it did not pass its verify_file() method
script declined parsing /root/config_backups/netbox_inv.yml as it did not pass its verify_file() method
Loading collection netbox.netbox from /root/.ansible/collections/ansible_collections/netbox/netbox
Using inventory plugin 'ansible_collections.netbox.netbox.plugins.inventory.nb_inventory' to process inventory source '/root/config_backups/netbox_inv.yml'
Fetching: http://172.16.1.32/api/status
Fetching: http://172.16.1.32/api/docs/?format=openapi
Fetching: http://172.16.1.32/api/dcim/devices/?limit=0&tag=backup&status=active&exclude=config_context
Fetching: http://172.16.1.32/api/virtualization/virtual-machines/?limit=0&tag=backup&status=active&exclude=config_context
Fetching: http://172.16.1.32/api/dcim/sites/?limit=0
Fetching: http://172.16.1.32/api/dcim/regions/?limit=0
Fetching: http://172.16.1.32/api/dcim/site-groups/?limit=0
Fetching: http://172.16.1.32/api/dcim/locations/?limit=0
Fetching: http://172.16.1.32/api/tenancy/tenants/?limit=0
Fetching: http://172.16.1.32/api/dcim/device-roles/?limit=0
Fetching: http://172.16.1.32/api/dcim/platforms/?limit=0
Fetching: http://172.16.1.32/api/dcim/device-types/?limit=0
Fetching: http://172.16.1.32/api/dcim/manufacturers/?limit=0
Fetching: http://172.16.1.32/api/virtualization/clusters/?limit=0
Fetching: http://172.16.1.32/api/ipam/services/?limit=0
Fetching: http://172.16.1.32/api/dcim/racks/?limit=0
Parsed /root/config_backups/netbox_inv.yml inventory source with auto plugin
[WARNING]: Skipping plugin (/usr/local/lib/python3.8/dist-packages/ansible/plugins/connection/winrm.py) as it seems to be invalid: invalid syntax (spawnbase.py, line 224)
redirecting (type: modules) ansible.builtin.ios_command to cisco.ios.ios_command
Loading collection cisco.ios from /usr/local/lib/python3.8/dist-packages/ansible_collections/cisco/ios
Loading callback plugin default of type stdout, v2.0 from /usr/local/lib/python3.8/dist-packages/ansible/plugins/callback/default.py
Attempting to use 'default' callback.
Skipping callback 'default', as we already have a stdout callback.
Attempting to use 'junit' callback.
Attempting to use 'minimal' callback.
Skipping callback 'minimal', as we already have a stdout callback.
Attempting to use 'oneline' callback.
Skipping callback 'oneline', as we already have a stdout callback.
Attempting to use 'tree' callback.
PLAYBOOK: playbook.yml **********************************************************************************************************************************************************************
Positional arguments: playbook.yml
verbosity: 5
private_key_file: /root/.ssh/id_rsa
remote_user: admin
connection: smart
timeout: 10
become: True
become_method: enable
tags: ('all',)
inventory: ('/root/config_backups/netbox_inv.yml',)
forks: 4
1 plays in playbook.yml
PLAY [Playbook to backup configs on Cisco Routers] ******************************************************************************************************************************************
META: ran handlers
TASK [Show Running configuration on Device] *************************************************************************************************************************************************
task path: /root/config_backups/playbook.yml:8
redirecting (type: connection) ansible.builtin.network_cli to ansible.netcommon.network_cli
Loading collection ansible.netcommon from /usr/local/lib/python3.8/dist-packages/ansible_collections/ansible/netcommon
redirecting (type: terminal) ansible.builtin.ios to cisco.ios.ios
redirecting (type: cliconf) ansible.builtin.ios to cisco.ios.ios
redirecting (type: become) ansible.builtin.enable to ansible.netcommon.enable
<172.16.9.1> attempting to start connection
<172.16.9.1> using connection plugin ansible.netcommon.network_cli
Found ansible-connection at path /usr/local/bin/ansible-connection
<172.16.9.1> local domain socket does not exist, starting it
<172.16.9.1> control socket path is /root/.ansible/pc/4be7afdbe7
<172.16.9.1> redirecting (type: connection) ansible.builtin.network_cli to ansible.netcommon.network_cli
<172.16.9.1> Loading collection ansible.netcommon from /usr/local/lib/python3.8/dist-packages/ansible_collections/ansible/netcommon
<172.16.9.1> redirecting (type: terminal) ansible.builtin.ios to cisco.ios.ios
<172.16.9.1> Loading collection cisco.ios from /usr/local/lib/python3.8/dist-packages/ansible_collections/cisco/ios
<172.16.9.1> redirecting (type: cliconf) ansible.builtin.ios to cisco.ios.ios
<172.16.9.1> local domain socket listeners started successfully
<172.16.9.1> loaded cliconf plugin ansible_collections.cisco.ios.plugins.cliconf.ios from path /usr/local/lib/python3.8/dist-packages/ansible_collections/cisco/ios/plugins/cliconf/ios.py for network_os ios
<172.16.9.1> ssh type is set to auto
<172.16.9.1> autodetecting ssh_type
<172.16.9.1> ssh type is now set to libssh
<172.16.9.1>
<172.16.9.1> local domain socket path is /root/.ansible/pc/4be7afdbe7
redirecting (type: modules) ansible.builtin.ios_command to cisco.ios.ios_command
redirecting (type: action) ansible.builtin.ios to cisco.ios.ios
<172.16.9.1> Using network group action ios for ios_command
redirecting (type: action) ansible.builtin.ios to cisco.ios.ios
<172.16.9.1> ANSIBLE_NETWORK_IMPORT_MODULES: enabled
redirecting (type: modules) ansible.builtin.ios_command to cisco.ios.ios_command
<172.16.9.1> ANSIBLE_NETWORK_IMPORT_MODULES: found ios_command at /usr/local/lib/python3.8/dist-packages/ansible_collections/cisco/ios/plugins/modules/ios_command.py
<172.16.9.1> ANSIBLE_NETWORK_IMPORT_MODULES: running ios_command
<172.16.9.1> ANSIBLE_NETWORK_IMPORT_MODULES: complete
<172.16.9.1> ANSIBLE_NETWORK_IMPORT_MODULES: Result: {'failed': True, 'msg': 'command timeout triggered, timeout value is 30 secs.\nSee the timeout setting options in the Network Debug and Troubleshooting Guide.', 'exception': ' File "/usr/local/lib/python3.8/dist-packages/ansible_collections/cisco/ios/plugins/module_utils/network/ios/ios.py", line 94, in get_capabilities\n capabilities = Connection(module._socket_path).get_capabilities()\n File "/usr/local/lib/python3.8/dist-packages/ansible/module_utils/connection.py", line 200, in __rpc__\n raise ConnectionError(to_text(msg, errors=\'surrogate_then_replace\'), code=code)\n', 'invocation': {'module_args': {'commands': ['show version'], 'match': 'all', 'retries': 10, 'interval': 1, 'wait_for': None, 'provider': None}}, '_ansible_parsed': True}
The full traceback is:
File "/usr/local/lib/python3.8/dist-packages/ansible_collections/cisco/ios/plugins/module_utils/network/ios/ios.py", line 94, in get_capabilities
capabilities = Connection(module._socket_path).get_capabilities()
File "/usr/local/lib/python3.8/dist-packages/ansible/module_utils/connection.py", line 200, in __rpc__
raise ConnectionError(to_text(msg, errors='surrogate_then_replace'), code=code)
fatal: [PHC-SW01]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"commands": [
"show version"
],
"interval": 1,
"match": "all",
"provider": null,
"retries": 10,
"wait_for": null
}
},
"msg": "command timeout triggered, timeout value is 30 secs.\nSee the timeout setting options in the Network Debug and Troubleshooting Guide."
}
PLAY RECAP **********************************************************************************************************************************************************************************
PHC-SW01 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
<172.16.9.1> ANSIBLE_NETWORK_IMPORT_MODULES: Result: {'failed': True, 'msg': 'command timeout triggered, timeout value is 30 secs.\nSee the timeout setting options in the Network Debug and Troubleshooting Guide.', 'exception': ' File "/usr/local/lib/python3.8/dist-packages/ansible_collections/cisco/ios/plugins/module_utils/network/ios/ios.py", line 94, in get_capabilities\n capabilities = Connection(module._socket_path).get_capabilities()\n File "/usr/local/lib/python3.8/dist-packages/ansible/module_utils/connection.py", line 200, in rpc\n raise ConnectionError(to_text(msg, errors='surrogate_then_replace'), code=code)\n', 'invocation': {'module_args': {'commands': ['show version'], 'match': 'all', 'retries': 10, 'interval': 1, 'wait_for': None, 'provider': None}}, '_ansible_parsed': True}
I'm not sure if the issue is with authentication or with the command being run.
The command show version doesn't need privilege escalation.
UPDATE
I enabled logging and got the output below.
Two things I noticed:
command: None? The command issued was show version.
response-2: privilege escalation ('enable') was enacted, but did it work?
| invoked shell using ssh_type: libssh
| ssh connection done, setting terminal
| loaded terminal plugin for network_os ios
| command: None
| response-1: b'\r\nPHC-SW01>'
| matched cli prompt 'b'\nPHC-SW01>'' with regex 'b'[\\r\\n]?[\\w\\+\\-\\.:\\/\\[\\]]+(?:\\([^\\)]+\\)){0,3}(?:[>#]) ?$'' from response 'b'\r\nPHC-SW01>''
| firing event: on_become
| send command: b'enable\r'
| command: b'enable'
| response-1: b'enabl'
| response-2: b'e\r\nPassword: '
| resetting persistent connection for socket_path /root/.ansible/pc/31759300a1
| closing ssh connection to device
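The log above ends with the switch sending a Password: prompt after enable and the connection then being reset, which is what a missing enable secret typically looks like. As far as I know, [privilege_escalation] in ansible.cfg has no become_pass key; for network_cli the enable secret is normally supplied as a variable instead. A sketch (group name from the playbook; the password value is the placeholder from the original config, and should really live in a vault):

```yaml
# group_vars/device_roles_switch.yml (sketch; use ansible-vault for the real value)
ansible_become: true
ansible_become_method: ansible.netcommon.enable
ansible_become_password: passwordhere
```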

Why does the ansible option "--private-key" work on one host but not on another?

I installed ansible 2.8.0 on VM-1 without modifying any other default configs in ansible.cfg except the "host_key_checking = false".
Then I ran ansible all -i "<IP of VM-3>," --private-key <key of VM-3> -u root -m ping with OK on VM-3, but ran ansible all -i "<IP of VM-2>," --private-key <key of VM-2> -u root -m ping with ERROR on VM-2.
I generated an SSH key pair on VM-2 (the user is root) and copied its private key (id_rsa) content to VM-1. I saved it in a file named 'key' and set the file's mode to '700'. Finally, I ran the command below:
ansible all -i "<ip of VM-2>," --private-key key -u root -m ping
It fails. The error info is:
/opt # ansible --version
ansible 2.8.0
config file = /opt/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Apr 8 2019, 18:17:52) [GCC 8.3.0]
/opt # ls
ansible.cfg key
/opt # ansible all -i "192.168.100.100," --private-key key -u root -m ping
192.168.100.100 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: root#192.168.100.100: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).",
"unreachable": true
}
Then I turned to the "-k" option instead, and it works.
/opt # ansible all -i "192.168.100.100," -k -u root -m ping
SSH password:
192.168.100.100 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
Then I tried again on VM-3 with the same steps as on VM-2. The command using "--private-key" works. The environments of VM-2 and VM-3 are very similar.
I didn't find any difference between VM-2's and VM-3's sshd configuration at all.
So I am quite confused by all of this.
In addition, the "--private-key" command succeeds after the "-k" command has been run, because there is an ssh connection-multiplexing process left alive in the background, like this:
/opt # ansible all -i "192.168.100.100," -k -u root -m ping
SSH password:
192.168.100.100 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
/opt # ps -ef |grep ansible
126 root 0:00 ssh: /root/.ansible/cp/e42d5dc861 [mux]
/opt # ansible all -i "192.168.100.100," --private-key key -u root -m ping
192.168.100.100 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
I want to know how to use "--private-key" correctly in ansible command line.
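One way to narrow this kind of problem down is to take Ansible out of the loop and test the key file directly; a sketch (IP address and file name from the question):

```shell
# ssh rejects private keys with permissive modes; 600 is the safe choice.
touch key            # placeholder here; in practice this is the copied id_rsa
chmod 600 key
# Force ssh to offer only this key, with verbose output to see why the
# server rejects it (look for "Offering public key" and "Authentications
# that can continue" in the output):
#   ssh -vvv -i key -o IdentitiesOnly=yes root@192.168.100.100 true
```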

Ansible Dynamic Inventory with Openstack

I am deploying several Linux hosts to an openstack environment and attempting to configure them with ansible. I'm having some difficulty with the stock dynamic inventory script from https://github.com/ansible/ansible/blob/devel/contrib/inventory/openstack.py
If I run ansible with a static hosts file, everything works fine
# inventory/static-hosts
localhost ansible_connection=local
linweb01 ansible_host=10.1.1.101
% ansible linweb01 -m ping -i ./inventory/static-hosts \
--extra-vars="ansible_user=setup ansible_ssh_private_key_file=/home/ian/keys/setup.key"
linweb01 | SUCCESS => {
"changed": false,
"ping": "pong"
}
But if I use the dynamic inventory, the host isn't found
% ansible linweb01 -m ping -i ./inventory/openstack.py \
--extra-vars="ansible_user=setup ansible_ssh_private_key_file=/home/ian/keys/setup.key"
linweb01 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname linweb01: Name or service not known\r\n",
"unreachable": true
}
When I run the inventory script manually, the host is found and the address returned is correct
% ./inventory/openstack.py --host linweb01
[...]
"name": "linweb01",
"networks": {},
"os-extended-volumes:volumes_attached": [],
"power_state": 1,
"private_v4": "10.1.1.101",
[...]
My guess is that the inventory script doesn't know to use the "private_v4" value for the IP address, although I can't seem to find a reference for this.
How do I get ansible to use the "private_v4" value returned by the inventory script as the "ansible_host" value for the host?
A quick look at the code suggests that the IP address is expected to be in the interface_ip key:
hostvars[key] = dict(
    ansible_ssh_host=server['interface_ip'],
    ansible_host=server['interface_ip'],
    openstack=server)
If you need a workaround, you can try adding this to your group_vars/all.yml:
ansible_host: "{{ private_v4 }}"

How to use ansible without ssh keys - getpwnam() MODULE FAILURE

I am trying to use Ansible without setting up SSH keys; I want to log in with a username and password. I have installed sshpass on the master.
My directory layout looks like this:
ansible.cfg
inventory/
  hosts
  group_vars/
    all.yaml
roles/
  whoami/
    tasks/
      main.yaml
site.yaml
ansible.cfg:
[defaults]
host_key_checking = false
inventory = inventory
inventory/hosts:
[defaults]
machine
inventory/group_vars/all.yaml:
---
# file: group_vars/all
ansible_user: username
ansible_password: password
main.yaml:
---
- name: whoami
shell: whoami
site.yaml:
---
- hosts: machine
roles:
- whoami
If I run:
$ansible machine -i inventory/ -m shell -a "whoami"
it is executed successfully:
machine | SUCCESS | rc=0 >>
username
whereas if I run:
$ansible-playbook -i inventory site.yml -v
I get this:
fatal: [machine]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "setup"
},
"module_stderr": "OpenSSH_7.4p1 Debian-10+deb9u1, OpenSSL 1.0.2l 25 May 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 2908\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to machine closed.\r\n",
"module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_QAFcSD/ansible_module_setup.py\", line 134, in <module>\r\n main()\r\n File \"/tmp/ansible_QAFcSD/ansible_module_setup.py\", line 126, in main\r\n data = get_all_facts(module)\r\n File \"/tmp/ansible_QAFcSD/ansible_modlib.zip/ansible/module_utils/facts.py\", line 3518, in get_all_facts\r\n File \"/tmp/ansible_QAFcSD/ansible_modlib.zip/ansible/module_utils/facts.py\", line 3457, in ansible_facts\r\n File \"/tmp/ansible_QAFcSD/ansible_modlib.zip/ansible/module_utils/facts.py\", line 171, in __init__\r\n File \"/tmp/ansible_QAFcSD/ansible_modlib.zip/ansible/module_utils/facts.py\", line 513, in get_user_facts\r\nKeyError: 'getpwnam(): name not found: username'\r\n", "msg": "MODULE FAILURE"
}
to retry, use: --limit #/home/username/playbook/my_ansible/site.retry
PLAY RECAP *********************************************************************
machine : ok=0 changed=0 unreachable=0 failed=1
I was running version 2.2.1.0, which is affected by this bug on GitHub. I upgraded to version 2.4.1.0 with:
$pip install --upgrade ansible
and it works smoothly.
Thanks to techraf

Use Hashicorp Vault with Ansible - plugin setup

I want to use Hashicorp Vault with Ansible to retrieve username/password which I will use in Ansible playbook.
Vault is set up - I created a secret. What are the steps to integrate the two? The documentation around plugins isn't that great. I tried Ansible's file lookup and that works, but how do I use third-party plugins? Can somebody help me with the steps to follow?
1. Install the plugin: pip install ansible-modules-hashivault (https://pypi.python.org/pypi/ansible-modules-hashivault)
2. What is the difference with https://github.com/jhaals/ansible-vault?
   2.a Where do I put the environment variables (VAULT_ADDR & VAULT_TOKEN)?
3. Change ansible.cfg to point to vault.py, which is located in the "plugin" folder of my Ansible project.
4. To test basic integration, can I use the following playbook?
- hosts: localhost
  tasks:
    - hashivault_status:
      register: vault_status
Tried this but I get:
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/ansible/executor/task_executor.py", line 119, in run
res = self._execute()
File "/usr/lib/python2.7/site-packages/ansible/executor/task_executor.py", line 431, in _execute
self._task.post_validate(templar=templar)
File "/usr/lib/python2.7/site-packages/ansible/playbook/task.py", line 248, in post_validate
super(Task, self).post_validate(templar)
File "/usr/lib/python2.7/site-packages/ansible/playbook/base.py", line 371, in post_validate
value = templar.template(getattr(self, name))
File "/usr/lib/python2.7/site-packages/ansible/template/__init__.py", line 359, in template
d[k] = self.template(variable[k], preserve_trailing_newlines=preserve_trailing_newlines, fail_on_undefined=fail_on_undefined, overrides=overrides)
File "/usr/lib/python2.7/site-packages/ansible/template/__init__.py", line 331, in template
result = self._do_template(variable, preserve_trailing_newlines=preserve_trailing_newlines, escape_backslashes=escape_backslashes, fail_on_undefined=fail_on_undefined, overrides=overrides)
File "/usr/lib/python2.7/site-packages/ansible/template/__init__.py", line 507, in _do_template
res = j2_concat(rf)
File "<template>", line 8, in root
File "/usr/lib/python2.7/site-packages/jinja2/runtime.py", line 193, in call
return __obj(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/ansible/template/__init__.py", line 420, in _lookup
instance = self._lookup_loader.get(name.lower(), loader=self._loader, templar=self)
File "/usr/lib/python2.7/site-packages/ansible/plugins/__init__.py", line 339, in get
self._module_cache[path] = self._load_module_source('.'.join([self.package, name]), path)
File "/usr/lib/python2.7/site-packages/ansible/plugins/__init__.py", line 324, in _load_module_source
module = imp.load_source(name, path, module_file)
File "/etc/ansible/ProjectA/lookup_plugins/vault.py", line 5
<!DOCTYPE html>
^
SyntaxError: invalid syntax
fatal: [win01]: FAILED! => {
"failed": true,
"msg": "Unexpected failure during module execution.",
"stdout": ""
}
You packed so many things into the post that I have no clue what the question is really about, so here's something to get you going with the native lookup plugin and jhaals/ansible-vault.
- you can create lookup_plugins in the current directory and save vault.py inside;
- the VAULT_ADDR and VAULT_TOKEN environment variables are as you see them in the script.
The Bash script below (it uses screen and jq; you might need to install them) runs Vault in dev mode, sets the secret, and runs an Ansible playbook which queries the secret with two lookup plugins:
#!/bin/bash
set -euo pipefail
export VAULT_ADDR=http://127.0.0.1:8200
if [[ ! $(pgrep -f "vault server -dev") ]]; then
echo \"vault server -dev\" not running, starting...
screen -S vault -d -m vault server -dev
printf "sleeping for 3 seconds\n"
sleep 3
else
echo \"vault server -dev\" already running, leaving as is...
fi
vault write secret/hello value=world excited=yes
export VAULT_TOKEN=$(vault token-create -format=json | jq -r .auth.client_token)
ansible-playbook playbook.yml --extra-vars="vault_token=${VAULT_TOKEN}"
and playbook.yml:
---
- hosts: localhost
connection: local
tasks:
- name: Retrieve secret/hello using native hashi_vault plugin
debug: msg="{{ lookup('hashi_vault', 'secret=secret/hello token={{ vault_token }} url=http://127.0.0.1:8200') }}"
- name: Retrieve secret/hello using jhaals vault lookup
debug: msg="{{ lookup('vault', 'secret/hello') }}"
In the end you should get:
TASK [Retrieve secret/hello using native hashi_vault plugin] *******************
ok: [localhost] => {
"msg": "world"
}
TASK [Retrieve secret/hello using jhaals vault lookup] *************************
ok: [localhost] => {
"msg": {
"excited": "yes",
"value": "world"
}
}
The word world was fetched from Vault.
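Under the hood, both lookups just call Vault's HTTP API: a GET on v1/<path> with the token in the X-Vault-Token header. A minimal sketch of that request (the function name and error handling are mine; the KV v1 response layout is assumed):

```python
import json
import os
import urllib.request

def vault_read(path, addr=None, token=None):
    """Read a KV v1 secret via Vault's HTTP API: GET {addr}/v1/{path}.

    Returns the 'data' dict of the JSON response, e.g.
    {'value': 'world', 'excited': 'yes'} for secret/hello above.
    """
    addr = addr or os.environ["VAULT_ADDR"]
    token = token or os.environ["VAULT_TOKEN"]
    req = urllib.request.Request(
        addr.rstrip("/") + "/v1/" + path,
        headers={"X-Vault-Token": token},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]

# With the dev server from the script above still running:
# vault_read("secret/hello")
```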

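On the ansible.cfg question from the post: rather than pointing the config at vault.py itself, you can add a lookup-plugin search path so Ansible finds everything in that directory (the project-local directory name here is an assumption); a minimal sketch:

```ini
[defaults]
# Directories searched for lookup plugins such as vault.py
lookup_plugins = ./lookup_plugins:~/.ansible/plugins/lookup
```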
Resources