Getting Module Failure error while running Ansible playbook

I am getting the following error when running my Ansible playbook:
hosts file
[node1]
rabbit-node1 ansible_ssh_host=x.x.x.x ansible_ssh_user=ubuntu
[node2]
rabbit-node2 ansible_ssh_host=x.x.x.x ansible_ssh_user=ubuntu
[node3]
rabbit-node3 ansible_ssh_host=x.x.x.x ansible_ssh_user=ubuntu
[workers]
rabbit-node2
rabbit-node3
[all_group]
rabbit-node1
rabbit-node2
rabbit-node3
[all:vars]
ansible_python_interpreter=/usr/bin/python3
ansible_ssh_user=ubuntu
ansible_private_key_file=private key path
ansible_ssh_extra_args='-o StrictHostKeyChecking=no'
Error
fatal: [rabbit-node1]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "setup"}, "module_stderr": "OpenSSH_7.2p2 Ubuntu-4ubuntu2.1, OpenSSL 1.0.2g 1 Mar 2016\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for x.x.x.x\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 25400\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to x.x.x.x closed.\r\n", "module_stdout": " File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1584361123.7-96709661573808/setup\", line 3160\r\n except OSError, e:\r\n ^\r\nSyntaxError: invalid syntax\r\n", "msg": "MODULE FAILURE", "parsed": false}
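The `SyntaxError` in `module_stdout` (`except OSError, e:`) is Python 2-only syntax: the Ansible release in use generates Python 2 module code, but `ansible_python_interpreter=/usr/bin/python3` forces it to run under Python 3. A hedged workaround sketch, assuming a Python 2 interpreter exists on the nodes, is to point the interpreter back at Python 2 until Ansible is upgraded to a release whose modules run under Python 3:

```ini
[all:vars]
# Assumption: /usr/bin/python is a Python 2 interpreter on the target nodes.
# Alternatively, upgrade Ansible on the control machine to a Python 3-capable release.
ansible_python_interpreter=/usr/bin/python
```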
playbook.yml
Playbook for setting the hostname, installing RabbitMQ, and creating a three-node RabbitMQ cluster.
- name: deploy RabbitMQ and setup the environment
  hosts:
    - all_group
  #gather_facts: False
  user: ubuntu
  sudo: yes
  roles:
    - set_hostname
    - install_rabbitmq

- name: Configure RabbitMQ Cluster
  hosts:
    - workers
  user: ubuntu
  sudo: yes
  roles:
    - cluster_setup

Related

For the Ansible cisco.asa module cisco.asa.asa_acls, why do I get the error below?

I'm running a basic ACL creation in Ansible but get this error:
TASK [Merge provided configuration with device configuration] ********************************************************************
fatal: [192.168.0.140]: FAILED! => {"changed": false, "msg": "sh access-list\r\n ^\r\nERROR: % Invalid input detected at '^' marker.\r\n\rASA> "}
---
- name: "ACL TEST 1"
  hosts: ASA
  connection: local
  gather_facts: false
  collections:
    - cisco.asa
  tasks:
    - name: Merge provided configuration with device configuration
      cisco.asa.asa_acls:
        config:
          acls:
            - name: purple_access_in
              acl_type: extended
              aces:
                - grant: permit
                  line: 1
                  protocol_options:
                    tcp: true
                  source:
                    address: 10.0.3.0
                    netmask: 255.255.255.0
                  destination:
                    address: 52.58.110.120
                    netmask: 255.255.255.255
                  port_protocol:
                    eq: https
                  log: default
        state: merged
The hosts file is:
[ASA]
192.168.0.140
[ASA:vars]
ansible_user=admin
ansible_ssh_pass=admin
ansible_become_method=enable
ansible_become_pass=cisco
ansible_connection=ansible.netcommon.network_cli
ansible_network_os=cisco.asa.asa
ansible_python_interpreter=python
There's not much to the code, but I am struggling to get past the error. I don't even need the "sh access-list" output.
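One clue in the error is the trailing `ASA> ` prompt: the session is still in unprivileged user EXEC mode when the module runs `sh access-list`. A hedged guess (not confirmed in this thread): the inventory sets the become method and password but nothing actually turns become on, so Ansible never sends `enable`. A minimal sketch of the inventory vars with become enabled:

```ini
[ASA:vars]
ansible_user=admin
ansible_ssh_pass=admin
; Assumption: enabling become makes network_cli enter privileged EXEC mode first
ansible_become=yes
ansible_become_method=enable
ansible_become_pass=cisco
ansible_connection=ansible.netcommon.network_cli
ansible_network_os=cisco.asa.asa
```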

Ansible Playbook command timeout when connecting to cisco switch using SSH key

Summary:
I'm trying to set up a playbook that gets a list of IOS devices from NetBox and then exports the config files.
Versions:
ansible-playbook [core 2.13.2]
python version = 3.8.13
switch IOS = 15.0(2)EX5
ansible host = Ubuntu 18.04.6 LTS
Issue:
When I run the command:
ansible-playbook -vvvvv playbook.yml
I get the error:
The full traceback is:
File "/usr/local/lib/python3.8/dist-packages/ansible_collections/cisco/ios/plugins/module_utils/network/ios/ios.py", line 94, in get_capabilities
capabilities = Connection(module._socket_path).get_capabilities()
File "/usr/local/lib/python3.8/dist-packages/ansible/module_utils/connection.py", line 200, in __rpc__
raise ConnectionError(to_text(msg, errors='surrogate_then_replace'), code=code)
fatal: [PHC-16JUB-SW01]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"commands": [
"show version"
],
"interval": 1,
"match": "all",
"provider": null,
"retries": 10,
"wait_for": null
}
},
"msg": "command timeout triggered, timeout value is 60 secs.\nSee the timeout setting options in the Network Debug and Troubleshooting Guide."
}
Files:
playbook.yml
- name: Playbook to backup configs on Cisco Routers
  connection: network_cli
  hosts: device_roles_switch
  remote_user: admin
  gather_facts: False
  tasks:
    - name: Show Running configuration on Device
      ios_command:
        commands:
          - show version
      vars:
        ansible_user: admin
      register: config
    - name: Save output to backups folder
      copy:
        content: "{{ config.stdout[0] }}"
        dest: "./backups/{{ inventory_hostname }}-config.txt"
ansible.cfg
[defaults]
inventory = ./netbox_inv.yml
host_key_checking = false
retry_files_enabled = false
forks = 4
ansible_user = admin
remote_user = admin
private_key_file = ~/.ssh/id_rsa
[privilege_escalation]
become = True
become_method = enable
become_pass = passwordhere
[persistent_connection]
command_timeout = 60
netbox_inv.yml
plugin: netbox.netbox.nb_inventory
api_endpoint: http://172.16.1.32
token: 'apikeygoeshere'
validate_certs: False
config_context: False
group_by:
  - device_roles
query_filters:
  - tag: backup
  - status: active
compose:
  ansible_network_os: platform.slug
Troubleshooting:
I have confirmed I can use the key to connect directly to the host:
root@netbox:~/config_backups# ssh admin@172.16.100.133
PHC-SW01>exit
Connection to 172.16.100.133 closed.
Confirmed SSH connection on switch:
PHC-SW01#debug ip ssh detail
ssh detail messages debugging is on
PHC-SW01#terminal monitor
PHC-SW01#
Aug 3 23:58:33.930: SSH1: starting SSH control process
Aug 3 23:58:33.930: SSH1: sent protocol version id SSH-1.99-Cisco-1.25
Aug 3 23:58:33.934: SSH1: protocol version id is - SSH-2.0-libssh_0.9.3
Aug 3 23:58:33.934: SSH2 1: SSH2_MSG_KEXINIT sent
Aug 3 23:58:33.934: SSH2 1: SSH2_MSG_KEXINIT received
Aug 3 23:58:33.934: SSH2:kex: client->server enc:aes256-cbc mac:hmac-sha1
Aug 3 23:58:33.934: SSH2:kex: server->client enc:aes256-cbc mac:hmac-sha1
Aug 3 23:58:34.004: SSH2 1: expecting SSH2_MSG_KEXDH_INIT
Aug 3 23:58:34.018: SSH2 1: SSH2_MSG_KEXDH_INIT received
Aug 3 23:58:34.147: SSH2: kex_derive_keys complete
Aug 3 23:58:34.150: SSH2 1: SSH2_MSG_NEWKEYS sent
Aug 3 23:58:34.150: SSH2 1: waiting for SSH2_MSG_NEWKEYS
Aug 3 23:58:34.154: SSH2 1: SSH2_MSG_NEWKEYS received
Aug 3 23:58:34.371: SSH2 1: Using method = none
Aug 3 23:58:34.378: SSH2 1: Using method = publickey
Aug 3 23:58:34.378: SSH2 1: Authenticating 'admin' with method: publickey
Aug 3 23:58:34.388: SSH2 1: Client Signature verification PASSED
Aug 3 23:58:34.388: SSH2 1: authentication successful for admin
Aug 3 23:58:34.392: SSH2 1: channel open request
Aug 3 23:58:34.395: SSH2 1: pty-req request
Aug 3 23:58:34.395: SSH2 1: setting TTY - requested: height 24, width 80; set: height 24, width 80
Aug 3 23:58:34.395: SSH2 1: shell request
Aug 3 23:58:34.395: SSH2 1: shell message received
Aug 3 23:58:34.395: SSH2 1: starting shell for vty
Aug 3 23:58:34.409: SSH2 1: channel window adjust message received 1216017
Aug 3 23:59:04.035: SSH1: Session terminated normally
I also changed the timeout to 60 secs; there is no change even with a longer timeout.
Log Output:
root@netbox:~/config_backups# ansible-playbook -vvvvv playbook.yml
ansible-playbook [core 2.13.2]
config file = /root/config_backups/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-playbook
python version = 3.8.13 (default, Apr 19 2022, 00:53:22) [GCC 7.5.0]
jinja version = 3.1.2
libyaml = True
Using /root/config_backups/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /root/config_backups/netbox_inv.yml as it did not pass its verify_file() method
script declined parsing /root/config_backups/netbox_inv.yml as it did not pass its verify_file() method
Loading collection netbox.netbox from /root/.ansible/collections/ansible_collections/netbox/netbox
Using inventory plugin 'ansible_collections.netbox.netbox.plugins.inventory.nb_inventory' to process inventory source '/root/config_backups/netbox_inv.yml'
Fetching: http://172.16.1.32/api/status
Fetching: http://172.16.1.32/api/docs/?format=openapi
Fetching: http://172.16.1.32/api/dcim/devices/?limit=0&tag=backup&status=active&exclude=config_context
Fetching: http://172.16.1.32/api/virtualization/virtual-machines/?limit=0&tag=backup&status=active&exclude=config_context
Fetching: http://172.16.1.32/api/dcim/sites/?limit=0
Fetching: http://172.16.1.32/api/dcim/regions/?limit=0
Fetching: http://172.16.1.32/api/dcim/site-groups/?limit=0
Fetching: http://172.16.1.32/api/dcim/locations/?limit=0
Fetching: http://172.16.1.32/api/tenancy/tenants/?limit=0
Fetching: http://172.16.1.32/api/dcim/device-roles/?limit=0
Fetching: http://172.16.1.32/api/dcim/platforms/?limit=0
Fetching: http://172.16.1.32/api/dcim/device-types/?limit=0
Fetching: http://172.16.1.32/api/dcim/manufacturers/?limit=0
Fetching: http://172.16.1.32/api/virtualization/clusters/?limit=0
Fetching: http://172.16.1.32/api/ipam/services/?limit=0
Fetching: http://172.16.1.32/api/dcim/racks/?limit=0
Parsed /root/config_backups/netbox_inv.yml inventory source with auto plugin
[WARNING]: Skipping plugin (/usr/local/lib/python3.8/dist-packages/ansible/plugins/connection/winrm.py) as it seems to be invalid: invalid syntax (spawnbase.py, line 224)
redirecting (type: modules) ansible.builtin.ios_command to cisco.ios.ios_command
Loading collection cisco.ios from /usr/local/lib/python3.8/dist-packages/ansible_collections/cisco/ios
Loading callback plugin default of type stdout, v2.0 from /usr/local/lib/python3.8/dist-packages/ansible/plugins/callback/default.py
Attempting to use 'default' callback.
Skipping callback 'default', as we already have a stdout callback.
Attempting to use 'junit' callback.
Attempting to use 'minimal' callback.
Skipping callback 'minimal', as we already have a stdout callback.
Attempting to use 'oneline' callback.
Skipping callback 'oneline', as we already have a stdout callback.
Attempting to use 'tree' callback.
PLAYBOOK: playbook.yml **********************************************************************************************************************************************************************
Positional arguments: playbook.yml
verbosity: 5
private_key_file: /root/.ssh/id_rsa
remote_user: admin
connection: smart
timeout: 10
become: True
become_method: enable
tags: ('all',)
inventory: ('/root/config_backups/netbox_inv.yml',)
forks: 4
1 plays in playbook.yml
PLAY [Playbook to backup configs on Cisco Routers] ******************************************************************************************************************************************
META: ran handlers
TASK [Show Running configuration on Device] *************************************************************************************************************************************************
task path: /root/config_backups/playbook.yml:8
redirecting (type: connection) ansible.builtin.network_cli to ansible.netcommon.network_cli
Loading collection ansible.netcommon from /usr/local/lib/python3.8/dist-packages/ansible_collections/ansible/netcommon
redirecting (type: terminal) ansible.builtin.ios to cisco.ios.ios
redirecting (type: cliconf) ansible.builtin.ios to cisco.ios.ios
redirecting (type: become) ansible.builtin.enable to ansible.netcommon.enable
<172.16.9.1> attempting to start connection
<172.16.9.1> using connection plugin ansible.netcommon.network_cli
Found ansible-connection at path /usr/local/bin/ansible-connection
<172.16.9.1> local domain socket does not exist, starting it
<172.16.9.1> control socket path is /root/.ansible/pc/4be7afdbe7
<172.16.9.1> redirecting (type: connection) ansible.builtin.network_cli to ansible.netcommon.network_cli
<172.16.9.1> Loading collection ansible.netcommon from /usr/local/lib/python3.8/dist-packages/ansible_collections/ansible/netcommon
<172.16.9.1> redirecting (type: terminal) ansible.builtin.ios to cisco.ios.ios
<172.16.9.1> Loading collection cisco.ios from /usr/local/lib/python3.8/dist-packages/ansible_collections/cisco/ios
<172.16.9.1> redirecting (type: cliconf) ansible.builtin.ios to cisco.ios.ios
<172.16.9.1> local domain socket listeners started successfully
<172.16.9.1> loaded cliconf plugin ansible_collections.cisco.ios.plugins.cliconf.ios from path /usr/local/lib/python3.8/dist-packages/ansible_collections/cisco/ios/plugins/cliconf/ios.py for network_os ios
<172.16.9.1> ssh type is set to auto
<172.16.9.1> autodetecting ssh_type
<172.16.9.1> ssh type is now set to libssh
<172.16.9.1>
<172.16.9.1> local domain socket path is /root/.ansible/pc/4be7afdbe7
redirecting (type: modules) ansible.builtin.ios_command to cisco.ios.ios_command
redirecting (type: action) ansible.builtin.ios to cisco.ios.ios
<172.16.9.1> Using network group action ios for ios_command
redirecting (type: action) ansible.builtin.ios to cisco.ios.ios
<172.16.9.1> ANSIBLE_NETWORK_IMPORT_MODULES: enabled
redirecting (type: modules) ansible.builtin.ios_command to cisco.ios.ios_command
<172.16.9.1> ANSIBLE_NETWORK_IMPORT_MODULES: found ios_command at /usr/local/lib/python3.8/dist-packages/ansible_collections/cisco/ios/plugins/modules/ios_command.py
<172.16.9.1> ANSIBLE_NETWORK_IMPORT_MODULES: running ios_command
<172.16.9.1> ANSIBLE_NETWORK_IMPORT_MODULES: complete
<172.16.9.1> ANSIBLE_NETWORK_IMPORT_MODULES: Result: {'failed': True, 'msg': 'command timeout triggered, timeout value is 30 secs.\nSee the timeout setting options in the Network Debug and Troubleshooting Guide.', 'exception': ' File "/usr/local/lib/python3.8/dist-packages/ansible_collections/cisco/ios/plugins/module_utils/network/ios/ios.py", line 94, in get_capabilities\n capabilities = Connection(module._socket_path).get_capabilities()\n File "/usr/local/lib/python3.8/dist-packages/ansible/module_utils/connection.py", line 200, in __rpc__\n raise ConnectionError(to_text(msg, errors=\'surrogate_then_replace\'), code=code)\n', 'invocation': {'module_args': {'commands': ['show version'], 'match': 'all', 'retries': 10, 'interval': 1, 'wait_for': None, 'provider': None}}, '_ansible_parsed': True}
The full traceback is:
File "/usr/local/lib/python3.8/dist-packages/ansible_collections/cisco/ios/plugins/module_utils/network/ios/ios.py", line 94, in get_capabilities
capabilities = Connection(module._socket_path).get_capabilities()
File "/usr/local/lib/python3.8/dist-packages/ansible/module_utils/connection.py", line 200, in __rpc__
raise ConnectionError(to_text(msg, errors='surrogate_then_replace'), code=code)
fatal: [PHC-SW01]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"commands": [
"show version"
],
"interval": 1,
"match": "all",
"provider": null,
"retries": 10,
"wait_for": null
}
},
"msg": "command timeout triggered, timeout value is 30 secs.\nSee the timeout setting options in the Network Debug and Troubleshooting Guide."
}
PLAY RECAP **********************************************************************************************************************************************************************************
PHC-SW01 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
<172.16.9.1> ANSIBLE_NETWORK_IMPORT_MODULES: Result: {'failed': True, 'msg': 'command timeout triggered, timeout value is 30 secs.\nSee the timeout setting options in the Network Debug and Troubleshooting Guide.', 'exception': ' File "/usr/local/lib/python3.8/dist-packages/ansible_collections/cisco/ios/plugins/module_utils/network/ios/ios.py", line 94, in get_capabilities\n capabilities = Connection(module._socket_path).get_capabilities()\n File "/usr/local/lib/python3.8/dist-packages/ansible/module_utils/connection.py", line 200, in rpc\n raise ConnectionError(to_text(msg, errors='surrogate_then_replace'), code=code)\n', 'invocation': {'module_args': {'commands': ['show version'], 'match': 'all', 'retries': 10, 'interval': 1, 'wait_for': None, 'provider': None}}, '_ansible_parsed': True}
I'm not sure if the issue is with authentication or with the command being run; the command show version doesn't need privilege escalation.
UPDATE
I enabled logging and got the output below. Two things I noticed:
command: None ? the command was show version
Response 2 ? privilege escalation ('enable') was enacted; did this work?
| invoked shell using ssh_type: libssh
| ssh connection done, setting terminal
| loaded terminal plugin for network_os ios
| command: None
| response-1: b'\r\nPHC-SW01>'
| matched cli prompt 'b'\nPHC-SW01>'' with regex 'b'[\\r\\n]?[\\w\\+\\-\\.:\\/\\[\\]]+(?:\\([^\\)]+\\)){0,3}(?:[>#]) ?$'' from response 'b'\r\nPHC-SW01>''
| firing event: on_become
| send command: b'enable\r'
| command: b'enable'
| response-1: b'enabl'
| response-2: b'e\r\nPassword: '
| resetting persistent connection for socket_path /root/.ansible/pc/31759300a1
| closing ssh connection to device
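The trace above ends with `enable` being answered by a `Password: ` prompt and the connection then being reset, which suggests the enable password is never supplied. As far as I know, `become_pass` is not a recognized key in ansible.cfg's `[privilege_escalation]` section, so one hedged sketch is to set the become password as a group variable instead (the file path below is hypothetical; it matches the NetBox-generated group name):

```yaml
# group_vars/device_roles_switch.yml (hypothetical location)
ansible_become: true
ansible_become_method: enable
# Assumption: this is the variable network_cli uses to answer the enable prompt
ansible_become_password: passwordhere
```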

Groups in my Ansible inventory do not work as expected

My yaml inventory (dev environment) is like this:
$> more inventory/dev/hosts.yml
all:
  children:
    dmz1:
      children:
        ch:
          children:
            amq:
              hosts:
                myamqdev01.company.net: nodeId=1
                myamqdev02.company.net: nodeId=2
            smx:
              hosts:
                mysmxdev01.company.net: nodeId=1
                mysmxdev02.company.net: nodeId=2
    intranet:
      children:
        ch:
          children:
            amq:
              hosts:
                amqintradev01.company.net: nodeId=1
                amqintradev02.company.net: nodeId=2
            smx:
              hosts:
                smxintradev01.company.net: nodeId=1
                smxintradev02.company.net: nodeId=2
and when I try to ping (with ansible -i inventory/dev -m ping all) I get the error:
children: | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname children:: Temporary failure in name resolution",
"unreachable": true
}
ch: | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname ch:: Temporary failure in name resolution",
"unreachable": true
}
all: | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname all:: Temporary failure in name resolution",
"unreachable": true
}
lan: | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname lan:: Temporary failure in name resolution",
"unreachable": true
}
amq: | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname amq:: Temporary failure in name resolution",
"unreachable": true
}
hosts: | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname hosts:: Temporary failure in name resolution",
"unreachable": true
}
etc...
For troubleshooting, when I execute ansible -i inventory/dev --list-hosts all I get:
hosts (16):
all:
children:
dmz1:
ch:
amq:
hosts:
myamqdev01.company.net:
myamqdev02.company.net:
smx:
mysmxdev01.company.net:
mysmxdev02.company.net:
intranet:
amqintradev01.company.net:
amqintradev02.company.net:
smxintradev01.company.net:
smxintradev02.company.net:
I think this command should only list the hosts, no?
I am not sure what the issue is. I followed the example from the official docs, and I think there's a problem with my hosts.yml file, but I cannot see what I am missing.
UPDATE
When I correct the nodeId variables according to the answer, --list-hosts all works fine. However, filtering by an intermediate parent does not work:
ansible -i inventory/dev --list-hosts intranet
returns all hosts, not just the intranet ones.
When I try ansible -i inventory/dev --list-hosts amq,
only the amq servers are properly returned.
Your inventory file does not respect Ansible's YAML format, so the yaml inventory plugin fails to parse it. Since Ansible tries inventory plugins in order, it eventually (for reasons I don't fully understand) succeeds with the ini format and yields one "host" per line of your file.
Moreover, you have to understand that a group (e.g. smx) contains all the hosts assigned to it anywhere in the inventory, wherever they were defined (e.g. as a child of intranet or dmz1).
So in your actual inventory structure, dmz1 and intranet both contain the groups amq and smx as children, which themselves contain all the hosts defined in either the intranet or dmz1 section. Hence the groups all, dmz1 and intranet are all equivalent here and contain every host in the inventory.
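To make the format problem concrete: in `myamqdev01.company.net: nodeId=1`, the host key maps to the literal string `nodeId=1`, which the yaml inventory plugin cannot treat as host variables; host vars must be a nested mapping. A minimal before/after sketch:

```yaml
# Broken: the host's value is the scalar string "nodeId=1"
amq:
  hosts:
    myamqdev01.company.net: nodeId=1

# Fixed: host variables are a nested mapping under the host key
amq:
  hosts:
    myamqdev01.company.net:
      nodeId: 1
```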
Here is an inventory fixing your format problems and with a slightly different structure to address your expectation in terms of group targeting:
---
all:
  children:
    dmz1:
      hosts:
        myamqdev01.company.net:
          nodeId: 1
        myamqdev02.company.net:
          nodeId: 2
        mysmxdev01.company.net:
          nodeId: 1
        mysmxdev02.company.net:
          nodeId: 2
    intranet:
      hosts:
        amqintradev01.company.net:
          nodeId: 1
        amqintradev02.company.net:
          nodeId: 2
        smxintradev01.company.net:
          nodeId: 1
        smxintradev02.company.net:
          nodeId: 2
    amq:
      hosts:
        myamqdev01.company.net:
        myamqdev02.company.net:
        amqintradev01.company.net:
        amqintradev02.company.net:
    smx:
      hosts:
        mysmxdev01.company.net:
        mysmxdev02.company.net:
        smxintradev01.company.net:
        smxintradev02.company.net:
And here are a few examples of how to target the desired groups of machines:
$ # All machines
$ ansible -i dev/ --list-hosts all
hosts (8):
myamqdev01.company.net
myamqdev02.company.net
mysmxdev01.company.net
mysmxdev02.company.net
amqintradev01.company.net
amqintradev02.company.net
smxintradev01.company.net
smxintradev02.company.net
$ # Intranet
$ ansible -i dev/ --list-hosts intranet
hosts (4):
amqintradev01.company.net
amqintradev02.company.net
smxintradev01.company.net
smxintradev02.company.net
$ # all smx machines
$ ansible -i dev/ --list-hosts smx
hosts (4):
mysmxdev01.company.net
mysmxdev02.company.net
smxintradev01.company.net
smxintradev02.company.net
$ # amq machines only on dmz1
$ # 1. Only with patterns
$ ansible -i dev/ --list-hosts 'amq:&dmz1'
hosts (2):
myamqdev01.company.net
myamqdev02.company.net
$ # 2. Using limit
$ ansible -i dev/ --list-hosts amq -l dmz1
hosts (2):
myamqdev01.company.net
myamqdev02.company.net

Ansible + LXC (Proxmox)

Problem: creating an LXC container (on Proxmox) from an Ansible playbook.
Playbook:
- name: Create LXC
  proxmox:
    node: PVE-03
    api_user: root@pam
    api_password: password
    api_host: 192.168.254.23
    password: 11111
    hostname: ans
    ostemplate: data:vztmpl/debian-9.0-standard_9.5-1_amd64.tar.gz
Log:
fatal: [192.168.254.23]: FAILED! => {"changed": false, "msg": "authorization on proxmox cluster failed with exception: Couldn't authenticate user: ********@pam to https://192.168.254.23:8006/api2/json/access/ticket"}
It was necessary to update Proxmox:
apt full-upgrade

How to use ansible without ssh keys - getpwnam() MODULE FAILURE

I am trying to use Ansible without setting up SSH keys; I want to log in with a username and password. I have installed sshpass on the control machine.
My directory layout looks like this:
ansible.cfg
inventory/
    hosts
    group_vars/
        all.yaml
roles/
    whoami/
        tasks/
            main.yaml
site.yaml
ansible.cfg:
[defaults]
host_key_checking = false
inventory = inventory
inventory/hosts:
[defaults]
machine
inventory/group_vars/all.yaml:
---
# file: group_vars/all
ansible_user: username
ansible_password: password
main.yaml:
---
- name: whoami
  shell: whoami
site.yaml:
---
- hosts: machine
  roles:
    - whoami
If I run:
$ ansible machine -i inventory/ -m shell -a "whoami"
it executes successfully:
machine | SUCCESS | rc=0 >>
username
whereas if I run:
$ ansible-playbook -i inventory site.yml -v
I get this:
fatal: [machine]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "setup"
},
"module_stderr": "OpenSSH_7.4p1 Debian-10+deb9u1, OpenSSL 1.0.2l 25 May 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 2908\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to machine closed.\r\n",
"module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_QAFcSD/ansible_module_setup.py\", line 134, in <module>\r\n main()\r\n File \"/tmp/ansible_QAFcSD/ansible_module_setup.py\", line 126, in main\r\n data = get_all_facts(module)\r\n File \"/tmp/ansible_QAFcSD/ansible_modlib.zip/ansible/module_utils/facts.py\", line 3518, in get_all_facts\r\n File \"/tmp/ansible_QAFcSD/ansible_modlib.zip/ansible/module_utils/facts.py\", line 3457, in ansible_facts\r\n File \"/tmp/ansible_QAFcSD/ansible_modlib.zip/ansible/module_utils/facts.py\", line 171, in __init__\r\n File \"/tmp/ansible_QAFcSD/ansible_modlib.zip/ansible/module_utils/facts.py\", line 513, in get_user_facts\r\nKeyError: 'getpwnam(): name not found: username'\r\n", "msg": "MODULE FAILURE"
}
to retry, use: --limit @/home/username/playbook/my_ansible/site.retry
PLAY RECAP *********************************************************************
machine : ok=0 changed=0 unreachable=0 failed=1
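The traceback fails inside fact gathering (`get_user_facts` calling `getpwnam()`), so besides upgrading, a hedged workaround on affected versions is to skip fact collection for the play, assuming the roles don't need facts:

```yaml
---
- hosts: machine
  gather_facts: false   # skip the setup module and its getpwnam() lookup
  roles:
    - whoami
```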
I was running version 2.2.1.0, which is affected by this bug on GitHub. I upgraded to version 2.4.1.0 with:
$ pip install ansible
and it now works smoothly.
Thanks to techraf.
