I am trying to run an Ansible job targeting an ESXi host.
My playbook is simple: set up a cron job or replace an existing one.
I have tried 2 different approaches:
Approach 1:
---
- hosts: esxi
  tasks:
    - name: Deploy cronjobs for CAC 2.0 nodes.
      cron: name="Deploy cronjobs" minute="1" hour="*" job="/opt/test/test.sh" disabled=no
Approach 2:
---
- gather_facts: false
  hosts: esxi
  tasks:
    - lineinfile: dest=/var/spool/cron/crontabs/root regexp='^.*\/vmfs\/volumes\/datastore1\/scripts\/backup.sh$' line='test'
When I run the playbook, both approaches fail stating:
fatal: [5.232.57.150]: FAILED! => {"changed": false, "failed": true,
"module_stderr": "", "module_stdout": "Traceback (most recent call
last):\r\n File \"/tmp/ansible_GvDGZb/ansible_module_lineinfile.py\", line
412, in <module>\r\n from ansible.module_utils.basic import *\r\n File
\"/tmp/ansible_GvDGZb/ansible_modlib.zip/ansible/module_utils/basic.py\",
line 52, in <module>\r\nImportError: No module named grp\r\n", "msg":
"MODULE FAILURE", "parsed": false}
Main error:
ImportError: No module named grp
In debug mode:
fatal: [5.232.57.150]: FAILED! => {"changed": false, "failed": true,
"invocation": {"module_name": "setup"}, "module_stderr": "OpenSSH_5.3p1,
OpenSSL 1.0.1e-fips 11 Feb 2013\ndebug1: Reading configuration data
/etc/ssh/ssh_config\r\ndebug1: Applying options for *\r\ndebug1: auto-mux:
Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2:
mux_client_hello_exchange: master version 4\r\ndebug3:
mux_client_request_forwards: requesting forwardings: 0 local, 0
remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3:
mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done
pid = 12018\r\ndebug3: mux_client_request_session: session request
sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug1:
mux_client_request_session: master session id: 2\r\ndebug3:
mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received
exit
status from master 0\r\nShared connection to 5.232.57.150 closed.\r\n",
"module_stdout": "Traceback (most recent call last):\r\n File
\"/tmp/ansible_XhPWuX/ansible_module_setup.py\", line 123, in <module>\r\n
from ansible.module_utils.basic import *\r\n File
\"/tmp/ansible_XhPWuX/ansible_modlib.zip/ansible/module_utils/basic.py\",
line
52, in <module>\r\nImportError: No module named grp\r\n", "msg": "MODULE
FAILURE", "parsed": false}
Would I need to install some Python packages on the ESXi host?
You are correct that the Python grp module (at least) is missing on the ESXi host, based on that error. If you can easily get the right python modules installed via the ansible shell or pip modules, that may fix this.
This Serverfault answer used the Ansible raw module to work around the lack of this Python module.
See also this thread which indicates this module is present on at least some ESXi versions.
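For illustration, here is a minimal sketch of that raw-module workaround, reusing the crontab path from Approach 2 in the question; the exact path and whether crond needs to be restarted afterwards vary between ESXi versions, so treat the details as assumptions:
---
- hosts: esxi
  gather_facts: false   # the setup module also needs Python, so skip fact gathering
  tasks:
    - name: Add cron entry via raw (no Python required on the target)
      raw: grep -q '/opt/test/test.sh' /var/spool/cron/crontabs/root || echo '1 * * * * /opt/test/test.sh' >> /var/spool/cron/crontabs/root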
Related
I'm new to Ansible and I'm running into errors. My goal is to be able to manage Fortigate/Cisco devices.
I created an Ubuntu VM (22.04) with all the packages needed to run Ansible. I've created a very basic hosts file with a firewall group:
[firewalls]
10.23.60.120
10.23.60.122
I've been able to ping each of the firewalls and to SSH into them. But once I attempt to reach the firewalls using the -m ping module, I get the following errors:
ansible -i hosts firewalls -m ping
[WARNING]: Platform unknown on host 10.23.60.120 is using the discovered Python
interpreter at /usr/bin/python, but future installation of another Python
interpreter could change the meaning of that path. See
https://docs.ansible.com/ansible-
core/2.13/reference_appendices/interpreter_discovery.html for more information.
10.23.60.120 | FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
}
"changed": false,
"module_stderr": "Shared connection to 10.23.60.120 closed.\r\n",
"module_stdout": "TR1-SDWAN-LAB-01 # 8415: Unknown action 0\r\nCommand fail. Return code
-1\r\n\r\n TR1-SDWAN-LAB-01 # ",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 0
}
Any help is appreciated.
According to the information provided, it seems you are trying to establish an SSH connection to a network appliance. Such devices may not have the capabilities needed to run Python scripts.
According to the documentation of the ping module – Try to connect to host, verify a usable python and return pong on success – it
"is NOT ICMP ping, ... just a trivial test module that requires Python on the remote-node"
and is a "... test module, this module always returns pong on successful contact. It does not make sense in playbooks, but it is useful from /usr/bin/ansible to verify the ability to login and that a usable Python is configured."
The most significant piece of information is the error message Unknown action 0, which according to the Fortigate Documentation - Command syntax is just an unknown command:
"If you do not enter a known command, the CLI will return an error message such as: Unknown action 0"
Further Background Information
Fortinet Ansible Issue #72 "Unknown Action 0 when running modules"
Similar Q&A
Ansible: How to check SSH access
Ansible: Error "Line has invalid autocommand"
I have a very simple play that I'm having a lot of trouble getting to run correctly. I keep getting an SSH unreachable error when it runs. I'm getting this on 2 separate machines in different environments (although built from the same images). These are Debian 9 boxes.
The play simply updates an internal mirror server.
- name: Update Mirror Servers
  hosts: all
  become: yes
  gather_facts: yes
  tasks:
    - name: Run Update
      shell: "sudo apt-mirror"
    - name: Change Permissions
      file:
        path: /var/apt-mirror
        state: directory
        recurse: yes
        mode: '0755'
This is being run from Ansible AWX just in case that makes any difference.
Error is as follows.
"unreachable": true,
"msg": "Failed to connect to the host via ssh: OpenSSH_8.0p1, OpenSSL 1.1.1c FIPS 28 May 2019\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug3: /etc/ssh/ssh_config line 51: Including file /etc/ssh/ssh_config.d/05-redhat.conf depth 0\r\ndebug1: Reading configuration data /etc/ssh/ssh_config.d/05-redhat.conf\r\ndebug2: checking match for 'final all' host x.x.x.x originally x.x.x.x\r\ndebug3: /etc/ssh/ssh_config.d/05-redhat.conf line 3: not matched 'final'\r\ndebug2: match not found\r\ndebug3: /etc/ssh/ssh_config.d/05-redhat.conf line 5: Including file /etc/crypto-policies/back-ends/openssh.config depth 1 (parse only)\r\ndebug1: Reading configuration data /etc/crypto-policies/back-ends/openssh.config\r\ndebug3: gss kex names ok: [gss-gex-sha1-,gss-group14-sha1-]\r\ndebug3: kex names ok: [curve25519-sha256,curve25519-sha256#libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1]\r\ndebug1: configuration requests final Match pass\r\ndebug2: resolve_canonicalize: hostname x.x.x.x is address\r\ndebug1: re-parsing configuration\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug3: /etc/ssh/ssh_config line 51: Including file /etc/ssh/ssh_config.d/05-redhat.conf depth 0\r\ndebug1: Reading configuration data /etc/ssh/ssh_config.d/05-redhat.conf\r\ndebug2: checking match for 'final all' host x.x.x.x originally x.x.x.x\r\ndebug3: /etc/ssh/ssh_config.d/05-redhat.conf line 3: matched 'final'\r\ndebug2: match found\r\ndebug3: /etc/ssh/ssh_config.d/05-redhat.conf line 5: Including file /etc/crypto-policies/back-ends/openssh.config depth 1\r\ndebug1: Reading configuration data /etc/crypto-policies/back-ends/openssh.config\r\ndebug3: gss kex names ok: [gss-gex-sha1-,gss-group14-sha1-]\r\ndebug3: kex names ok: [curve25519-sha256,curve25519-sha256#libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1]\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 10035\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Control master terminated unexpectedly\r\nShared connection to x.x.x.x closed.",
"changed": false
I've tried adding the ansible_ssh_user and ansible_ssh_pass as variables on the group these machines are in.
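For reference, the group variables I added look roughly like this (the group name and values are placeholders):
[mirrors:vars]
ansible_ssh_user=deploy
ansible_ssh_pass=REDACTED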
I am able to run ping successfully etc.
Identity added: /tmp/awx_187_x0lo965h/artifacts/187/ssh_key_data
(/tmp/awx_187_x0lo965h/artifacts/187/ssh_key_data)
SSH password:
BECOME password[defaults to SSH password]:
redacted | SUCCESS => {
"changed": false,
"ping": "pong"
}
I'm sure it is probably something minor that I'm missing in config somewhere. Could someone point me in the right direction?
Thanks,
Kam
I am trying to run an example from the Ansible: Up and Running book.
My playbooks directory
ls
ansible.cfg hosts ubuntu-bionic-18.04-cloudimg-console.log Vagrantfile
hosts
testserver ansible_host=127.0.0.1 ansible_port=2222
ansible.cfg
[defaults]
inventory = hosts
remote_user = vagrant
private_key_file = .vagrant/machines/default/virtualbox/private_key
host_key_checking = False
Vagrantfile
config.vm.box = "ubuntu/bionic64"
When I try ping
ansible testserver -m ping
I got
testserver | FAILED! => {
"changed": false,
"module_stderr": "Shared connection to 127.0.0.1 closed.\r\n",
"module_stdout": "/bin/sh: 1: /usr/bin/python: not found\r\n",
"msg": "MODULE FAILURE",
"rc": 127
}
I can ssh without any problems
ssh vagrant@127.0.0.1 -p 2222 -i /home/miki/playbooks/.vagrant/machines/default/virtualbox/private_key
Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-50-generic x86_64)
System information as of Tue May 21 06:39:46 UTC 2019
System load: 0.0 Processes: 108
Usage of /: 18.5% of 9.63GB Users logged in: 0
Memory usage: 17% IP address for enp0s3: 10.0.2.15
Swap usage: 0%
Last login: Tue May 21 06:32:13 2019 from 10.0.2.2
Why doesn't the Ansible ping work?
From the error message
"module_stdout": "/bin/sh: 1: /usr/bin/python: not found\r\n",
it seems the remote host does not have Python installed.
Quoting from the requirements docs:
On the managed nodes, you need a way to communicate, which is normally ssh. By default this uses sftp. If that’s not available, you can switch to scp in ansible.cfg. You also need Python 2 (version 2.6 or later) or Python 3 (version 3.5 or later).
Ansible needs Python to be present on the remote host.
Also, note that the ping module is not the same as the ping shell command.
Try installing Python on the remote host (either manually or using the raw module) and then re-run the command.
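For example, a minimal sketch of bootstrapping Python with the raw module on a Debian/Ubuntu guest (the package name and interpreter path are assumptions for Ubuntu 18.04; adjust as needed):
- hosts: testserver
  gather_facts: false            # fact gathering itself needs Python, so disable it here
  become: yes
  tasks:
    - name: Install Python via raw (works even when the target has no Python yet)
      raw: test -e /usr/bin/python3 || (apt-get update && apt-get install -y python3-minimal)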
I got the same error as well. Then I found this:
"module_stdout": "/bin/sh: 1: /usr/bin/python: not found\r\n",
Python is installed on the local host, but it was not installed on the remote host. I simply installed Python on the remote host, and the problem was solved!
This is just because you are mixing up the Ansible ping module and the classic ICMP ping command in your terminal, which are not equivalent. From the ping module documentation:
This is NOT ICMP ping, this is just a trivial test module that requires Python on the remote-node.
With the above confusion, you are misinterpreting the clear error messages you are getting when running the command:
First
Shared connection to 127.0.0.1 closed
... which means a connection was first opened and that your host is reachable
Second
/bin/sh: 1: /usr/bin/python: not found
... which means that Python (required for Ansible) is not installed or is not in a default path.
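If Python is actually installed but lives in a non-default path, you can also point Ansible at it explicitly. A hedged example for the inventory entry from the question (the python3 path is an assumption):
testserver ansible_host=127.0.0.1 ansible_port=2222 ansible_python_interpreter=/usr/bin/python3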
I have an Ubuntu 16.04 virtual machine and two MS Windows Server 2008 R2 virtual machines.
I followed these instructions until "Once we have these two files setup, we can look to test connectivity". Now I want to ping the Windows VMs. After the command I get an error, but I don't know why.
Execution:
stefan#ansible-server:~/ansible_test$ ansible windows -i host -m win_ping
Answer:
[IP-ADRESS] | FAILED! => {
"failed": true,
"msg": "ERROR! ssl: 500 WinRMTransport. [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:"
}
[IP-ADRESS] | FAILED! => {
"failed": true,
"msg": "ERROR! ssl: 500 WinRMTransport. [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:"
}
Do you know why it didn't work?
I found a solution; I'm not fully satisfied because it doesn't feel like the proper fix, but it works.
In the ansible_test folder, create:
mkdir callback_plugins
nano callback_plugins/fix-ssl.py
Write in the file:
import ssl

# Workaround for self-signed WinRM certificates: make unverified SSL contexts the default
if hasattr(ssl, '_create_default_https_context') and hasattr(ssl, '_create_unverified_context'):
    ssl._create_default_https_context = ssl._create_unverified_context

# Ansible only loads callback plugins that define a CallbackModule class; this one does nothing else
class CallbackModule(object):
    pass
Run:
ansible windows -i host -m win_ping -vvvvv
Result:
10.92.0.38 | SUCCESS => {
"changed": false,
"invocation": {
"module_name": "win_ping"
},
"ping": "pong"
}
To be able to provision Windows machines, you first need to run this PowerShell script on the Windows machines to generate the certificate files for WinRM:
https://github.com/ansible/ansible/blob/devel/examples/scripts/ConfigureRemotingForAnsible.ps1
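Once WinRM is configured, the win_ping test relies on WinRM inventory variables along these lines (values are placeholders; ansible_winrm_server_cert_validation=ignore is an assumption that mirrors the self-signed-certificate workaround above):
[windows]
IP-ADDRESS

[windows:vars]
ansible_user=Administrator
ansible_password=REDACTED
ansible_connection=winrm
ansible_port=5986
ansible_winrm_server_cert_validation=ignore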
I have a roles/ec2/tasks/main.yml that is trying to create a folder:
---
- name: Mkdir /opt/applications
  file: path=/opt/applications state=directory
It is called from the roles section of start.yml:
- hosts: tag_composant_XXX:&tag_Name_XXX-sandbox
  remote_user: ec2-user
  vars:
    ec2_ami_name: XXX-base-{{ ansible_date_time.year }}-{{ ansible_date_time.month }}-{{ ansible_date_time.day }}
    ec2_ami_description: Ami to launch XXX
    instance_tag_environnement: XXX
  roles:
    - {role: ec2, sudo: true}
It fails with:
failed: [x.x.x.x] => {"failed": true, "parsed": false}
Traceback (most recent call last):
File "/home/ec2usr/.ansible/tmp/ansible-tmp-1438095761.0-196976221154211/file", line 1994, in <module>
main()
File "/home/ec2usr/.ansible/tmp/ansible-tmp-1438095761.0-196976221154211/file", line 279, in main
os.mkdir(curpath)
OSError: [Errno 13] Permission denied: '/opt/applications'
OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug1: Reading configuration data /home/xxx/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 4869
debug3: mux_client_request_session: session request sent
debug1: mux_client_request_session: master session id: 2
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 0
Shared connection to x.x.x.x closed.
The execution is done via:
ansible-playbook --private-key=~/.ssh/key -vvvv -i ../ec2.py start.yml
(I have not touched the py script)
It worked before I changed the Ansible version (see this). Beyond uninstalling and reinstalling Ansible, I also removed some folders in ~/.ansible/tmp/ (something like ansible-tmp-1438095761.0-196976221154211/, but I do not remember the names exactly). Could that be the problem?
I have managed to connect to the EC2 instance manually and create the folder, but with Ansible it does not seem to work. Why? What is the problem?
Not sure if this was possible before, but one can now enable privilege escalation directly at the task level, e.g.:
- name: Mkdir /opt/applications
  file:
    path: /opt/applications
    state: directory
  become: yes
See also https://docs.ansible.com/ansible/2.7/user_guide/become.html, which might help with further questions.
Based on all the comments, I am posting an answer to this question:
According to the discussions in Ansible's repo, there was a breaking change at the role level, so it is better to switch to version 1.9.1. What is more, there was another change in roles: sudo has changed to become (as mentioned in another question's answer). That seems to fix my problem, even though the docs say that sudo still works.
I have replaced:
- {role: ec2, sudo: true}
by
- {role: ec2, become: yes}