Ansible fails to connect to host on some plays

I have a very simple play that I'm having a lot of trouble getting to run correctly: I keep getting an SSH unreachable error when it runs. I'm seeing this on 2 separate machines in different environments (although they were built from the same images). These are Debian 9 boxes.
The play simply updates an internal mirror server:
- name: Update Mirror Servers
  hosts: all
  become: yes
  gather_facts: yes
  tasks:
    - name: Run Update
      shell: "sudo apt-mirror"
    - name: Change Permissions
      file:
        path: /var/apt-mirror
        state: directory
        recurse: yes
        mode: '0755'
This is being run from Ansible AWX, just in case that makes any difference.
The error is as follows:
"unreachable": true,
"msg": "Failed to connect to the host via ssh: OpenSSH_8.0p1, OpenSSL 1.1.1c FIPS 28 May 2019\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug3: /etc/ssh/ssh_config line 51: Including file /etc/ssh/ssh_config.d/05-redhat.conf depth 0\r\ndebug1: Reading configuration data /etc/ssh/ssh_config.d/05-redhat.conf\r\ndebug2: checking match for 'final all' host x.x.x.x originally x.x.x.x\r\ndebug3: /etc/ssh/ssh_config.d/05-redhat.conf line 3: not matched 'final'\r\ndebug2: match not found\r\ndebug3: /etc/ssh/ssh_config.d/05-redhat.conf line 5: Including file /etc/crypto-policies/back-ends/openssh.config depth 1 (parse only)\r\ndebug1: Reading configuration data /etc/crypto-policies/back-ends/openssh.config\r\ndebug3: gss kex names ok: [gss-gex-sha1-,gss-group14-sha1-]\r\ndebug3: kex names ok: [curve25519-sha256,curve25519-sha256#libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1]\r\ndebug1: configuration requests final Match pass\r\ndebug2: resolve_canonicalize: hostname x.x.x.x is address\r\ndebug1: re-parsing configuration\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug3: /etc/ssh/ssh_config line 51: Including file /etc/ssh/ssh_config.d/05-redhat.conf depth 0\r\ndebug1: Reading configuration data /etc/ssh/ssh_config.d/05-redhat.conf\r\ndebug2: checking match for 'final all' host x.x.x.x originally x.x.x.x\r\ndebug3: /etc/ssh/ssh_config.d/05-redhat.conf line 3: matched 'final'\r\ndebug2: match found\r\ndebug3: /etc/ssh/ssh_config.d/05-redhat.conf line 5: Including file /etc/crypto-policies/back-ends/openssh.config depth 1\r\ndebug1: Reading configuration data /etc/crypto-policies/back-ends/openssh.config\r\ndebug3: gss kex names ok: [gss-gex-sha1-,gss-group14-sha1-]\r\ndebug3: kex names ok: [curve25519-sha256,curve25519-sha256#libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1]\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 10035\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Control master terminated unexpectedly\r\nShared connection to x.x.x.x closed.",
"changed": false
I've tried adding the ansible_ssh_user and ansible_ssh_pass as variables on the group these machines are in.
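For reference, a minimal sketch of what I put in the group's variables (the user name here is a placeholder, and in AWX the real values come from the machine credential or a vaulted variable rather than plain text):
ansible_ssh_user: deploy                    # placeholder login user
ansible_ssh_pass: "{{ vault_ssh_pass }}"    # vaulted, not stored in plain text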
I am able to run the ping module successfully, etc.:
Identity added: /tmp/awx_187_x0lo965h/artifacts/187/ssh_key_data
(/tmp/awx_187_x0lo965h/artifacts/187/ssh_key_data)
SSH password:
BECOME password[defaults to SSH password]:
redacted | SUCCESS => {
"changed": false,
"ping": "pong"
}
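Since the debug output ends with the control master dying ("Control master terminated unexpectedly"), one experiment is to disable SSH connection multiplexing for these hosts to rule it out. ansible_ssh_common_args is a standard Ansible connection variable; treating multiplexing as the culprit is only a guess:
ansible_ssh_common_args: "-o ControlMaster=no"   # skip the shared control socket for this group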
I'm sure it is probably something minor that I'm missing in config somewhere. Could someone point me in the right direction?
Thanks,
Kam

Related

LXD Container fails to SSH out to AWS

I have copied the .ssh keys and config from my main workstation into an LXD container on it. I have also cleared my iptables rules (Docker, I believe, was preventing my bridge from accessing the internet) and I do have access now.
Unfortunately the connection times out when I try to SSH out :(
OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020
debug1: Reading configuration data /root/.ssh/config
debug1: /root/.ssh/config line 16: Applying options for rabbit-dev
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug1: Connecting to ec2-##-###-###-###.compute-1.amazonaws.com [##.###.###.###] port 22.
debug1: connect to address ##.###.###.### port 22: Connection timed out
ssh: connect to host ec2-##-###-###-###.compute-1.amazonaws.com port 22: Connection timed out
My ~/.ssh/config contains
Host rabbit-dev
    HostName ec2-##-###-###-###.compute-1.amazonaws.com
    IdentityFile ~/.ssh/dev_company.pem
    LocalForward 11003 rabbitmq.internal.dev.company.com:80
    LocalForward 11004 rabbitmq.internal.dev.company.com:5672
    User ec2-user
    IdentitiesOnly yes
Docker prevents the host from operating as a router, which is what LXD needs:
https://discuss.linuxcontainers.org/t/lxd-and-docker-firewall-redux-how-to-deal-with-forward-policy-set-to-drop/9953/3
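The workaround discussed in that thread is to explicitly allow forwarding for the LXD bridge, because Docker flips the iptables FORWARD policy to DROP. A sketch, assuming the default bridge name lxdbr0:
sudo iptables -A FORWARD -i lxdbr0 -j ACCEPT   # traffic from containers outward
sudo iptables -A FORWARD -o lxdbr0 -j ACCEPT   # return traffic back to containers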

Ansible attempting to Connect to Windows Machine Via SSH (fails)

We have a Linux Ansible server managing software installation on a Windows domain. We have successfully installed software onto all our Windows machines without issue. We just added a new Windows 10 computer (yes, we have successfully connected to other Win10 computers), and when we run our Ansible install script we get the following error:
fatal: [afc54]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: OpenSSH_6.6.1, OpenSSL 1.0.1e-fips 11 Feb 2013\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 56: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket \"/home/ansible/.ansible/cp/ansible-ssh-afc54-22-ansible\" does not exist\r\ndebug2: ssh_connect: needpriv 0\r\ndebug1: Connecting to afc54 [192.168.2.193] port 22.\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 192.168.2.193 port 22: Connection timed out\r\nssh: connect to host afc54 port 22: Connection timed out\r\n",
"unreachable": true
In the [Gathering Facts] section of the playbook, the new machine shows
Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py
while the other Windows machines show
Using module file /usr/lib/python2.7/site-packages/ansible/modules/windows/win_updates.ps1
Why is Ansible trying to connect via SSH rather than the Windows WinRM port 5986? The same script works successfully on all our other Windows computers, but this one has me stumped.
EDIT:
If I specify the credentials and specs on the machine's line in the hosts file (i.e. ansible_user=user@domain ansible_password=password ansible_port=5986 ansible_connection=winrm) then I get the following error:
afc54 | UNREACHABLE! => { "changed": false, "msg": "kerberos: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579), ssl: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)", "unreachable": true }
I am not clear why this worked, but I changed the group name in the hosts file and in the playbook (it was [install]; it is now [windows]), and it's now running correctly.
EDIT: A year later I finally noticed the reason this worked. In the group_vars directory, there was nothing set up for [install], but there was an existing configuration for [windows] under windows.yml. Hopefully this helps someone else =)
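For anyone hitting the same thing: presumably group_vars/windows.yml contained WinRM connection settings along these lines (a sketch with illustrative values, since the question doesn't show the actual file):
# group_vars/windows.yml (illustrative values)
ansible_connection: winrm
ansible_port: 5986
ansible_winrm_transport: kerberos
ansible_winrm_server_cert_validation: ignore   # would also explain getting past CERTIFICATE_VERIFY_FAILED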

SSH in git behind proxy on windows - FATAL: Connection closed by peer

Over an SSH connection, I'm trying to clone a repository from a company Bitbucket server which uses port 7999 (not bitbucket.org), using Git Bash. I've generated the RSA key and added the public key to my profile on the company Bitbucket, and the keys are located in ~/.ssh. I've set up the proxy using git config --global http.proxy http://userPrx:pwdPrx@ipProx:8080 (because I'm behind the company proxy) and I have also set up my config file as this post suggests. Then, when I try to test the connection I get this:
$ ssh -vT globaldevtools -p 7999
OpenSSH_7.3p1, OpenSSL 1.0.2j 26 Sep 2016
debug1: Reading configuration data /c/Users/MyUser/.ssh/config
debug1: /c/Users/MyUser/.ssh/config line 5: Applying options for globaldevtools
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 20: Applying options for *
debug1: Executing proxy command: exec /C/Users/MyUser/AppData/Local/Programs/Git/mingw64/bin/connect.exe -S IpProxy:8080 x.x.x.x 7999
debug1: permanently_drop_suid: 1104711
debug1: identity file /c/Users/MyUser/.ssh/id_rsa type 1
debug1: key_load_public: No such file or directory
debug1: identity file /c/Users/MyUser/.ssh/id_rsa-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_7.3
FATAL: Connection closed by peer.
ssh_exchange_identification: Connection closed by remote host
This is my config file:
ProxyCommand /C/Users/MyUser/AppData/Local/Programs/Git/mingw64/bin/connect.exe -S IpProxy:8080 %h %p
Host globaldevtools
    User git
    Port 7999
    Hostname x.x.x.x
    IdentityFile ~/.ssh/id_rsa
    TCPKeepAlive yes
    IdentitiesOnly yes
I must point out that in this file (config), instead of IpProxy:8080 I've also tried:
http://IpProxy:8080
http://usrProx:pwdProx@IpProxy:8080
usrProx:pwdProx@IpProxy:8080
Do I have to do something else? Did I miss something? All help is appreciated.
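Two hedged observations, since the question doesn't confirm either: git's http.proxy setting only applies to http(s) remotes, so an SSH clone on port 7999 depends entirely on the ProxyCommand; and connect.exe's -S flag speaks SOCKS to the proxy, while -H is its HTTP CONNECT mode, which is what a corporate HTTP proxy on 8080 typically expects. A variant worth trying:
ProxyCommand /C/Users/MyUser/AppData/Local/Programs/Git/mingw64/bin/connect.exe -H IpProxy:8080 %h %p
(If the proxy requires authentication, how connect.exe takes credentials is worth checking in its documentation; the user:pass@host forms tried above are URL syntax, not necessarily what connect.exe expects.)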

Unable to run ansible tasks on ESXi hosts

I am trying to run an Ansible job targeting an ESXi host.
My playbook is simple: set up a cron job / replace a cron job.
I am trying 2 different approaches:
Approach 1:
---
- hosts: esxi
  tasks:
    - name: Deploy cronjobs for CAC 2.0 nodes.
      cron: name="Deploy cronjobs" minute="1" hour="*"
            job="/opt/test/test.sh" disabled=no
Approach 2:
---
- gather_facts: false
  hosts: esxi
  tasks:
    - lineinfile: dest=/var/spool/cron/crontabs/root
                  regexp='^.*\/vmfs\/volumes\/datastore1\/scripts\/backup.sh$'
                  line='test'
When I run the playbook, both approaches fail stating:
fatal: [5.232.57.150]: FAILED! => {"changed": false, "failed": true,
"module_stderr": "", "module_stdout": "Traceback (most recent call
last):\r\n File \"/tmp/ansible_GvDGZb/ansible_module_lineinfile.py\", line
412, in <module>\r\n from ansible.module_utils.basic import *\r\n File
\"/tmp/ansible_GvDGZb/ansible_modlib.zip/ansible/module_utils/basic.py\",
line 52, in <module>\r\nImportError: No module named grp\r\n", "msg":
"MODULE FAILURE", "parsed": false}
The main error is:
ImportError: No module named grp
In debug mode:
fatal: [5.232.57.150]: FAILED! => {"changed": false, "failed": true,
"invocation": {"module_name": "setup"}, "module_stderr": "OpenSSH_5.3p1,
OpenSSL 1.0.1e-fips 11 Feb 2013\ndebug1: Reading configuration data
/etc/ssh/ssh_config\r\ndebug1: Applying options for *\r\ndebug1: auto-mux:
Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2:
mux_client_hello_exchange: master version 4\r\ndebug3:
mux_client_request_forwards: requesting forwardings: 0 local, 0
remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3:
mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done
pid = 12018\r\ndebug3: mux_client_request_session: session request
sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug1:
mux_client_request_session: master session id: 2\r\ndebug3:
mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received
exit
status from master 0\r\nShared connection to 5.232.57.150 closed.\r\n",
"module_stdout": "Traceback (most recent call last):\r\n File
\"/tmp/ansible_XhPWuX/ansible_module_setup.py\", line 123, in <module>\r\n
from ansible.module_utils.basic import *\r\n File
\"/tmp/ansible_XhPWuX/ansible_modlib.zip/ansible/module_utils/basic.py\",
line
52, in <module>\r\nImportError: No module named grp\r\n", "msg": "MODULE
FAILURE", "parsed": false}
Would I need to install some Python packages on the ESXi host?
You are correct that the Python grp module (at least) is missing on the ESXi host, based on that error. If you can easily get the right Python modules installed via the Ansible shell or pip modules, that may fix this.
This Serverfault answer used the Ansible raw module to work around the lack of this Python module.
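For illustration, a minimal sketch of that raw-module approach applied to the cron task from the question (raw sends the command over SSH without Ansible's Python machinery; gather_facts must stay off because the setup module also needs Python on the target; the cron path and schedule come from the question and are untested on ESXi):
---
- hosts: esxi
  gather_facts: false
  tasks:
    - name: Append cron entry without Python on the target
      raw: echo '1 * * * * /opt/test/test.sh' >> /var/spool/cron/crontabs/root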
See also this thread which indicates this module is present on at least some ESXi versions.

Permission denied when ansible tries to create a directory with sudo

I have a roles/ec2/tasks/main.yml that is trying to create a folder:
---
- name: Mkdir /opt/applications
  file: path=/opt/applications state=directory
It is called in the roles of start.yml:
- hosts: tag_composant_XXX:&tag_Name_XXX-sandbox
  remote_user: ec2-user
  vars:
    ec2_ami_name: XXX-base-{{ ansible_date_time.year }}-{{ ansible_date_time.month }}-{{ ansible_date_time.day }}
    ec2_ami_description: Ami to launch XXX
    instance_tag_environnement: XXX
  roles:
    - {role: ec2, sudo: true}
It fails with:
failed: [x.x.x.x] => {"failed": true, "parsed": false}
Traceback (most recent call last):
File "/home/ec2usr/.ansible/tmp/ansible-tmp-1438095761.0-196976221154211/file", line 1994, in <module>
main()
File "/home/ec2usr/.ansible/tmp/ansible-tmp-1438095761.0-196976221154211/file", line 279, in main
os.mkdir(curpath)
OSError: [Errno 13] Permission denied: '/opt/applications'
OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug1: Reading configuration data /home/xxx/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 4869
debug3: mux_client_request_session: session request sent
debug1: mux_client_request_session: master session id: 2
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 0
Shared connection to x.x.x.x closed.
The execution is done via:
ansible-playbook --private-key=~/.ssh/key -vvvv -i ../ec2.py start.yml
(I have not touched the py script)
It worked before I changed the Ansible version (see this). Beyond uninstalling and reinstalling Ansible, I also removed some folders in ~/.ansible/tmp/ (something like ansible-tmp-1438095761.0-196976221154211/, but I do not remember the names exactly). Could that be part of the problem?
I have managed to connect to the EC2 instance manually and create the folder, but with Ansible it seems not to work. Why? What is the problem?
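(One way to isolate whether this is a privilege-escalation problem rather than a connection problem would be an ad-hoc call with explicit escalation; the host pattern and key path are taken from the playbook and command above, and --become requires Ansible 1.9+, with --sudo being the older spelling:)
ansible tag_composant_XXX -i ../ec2.py --private-key=~/.ssh/key --become -m file -a "path=/opt/applications state=directory"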
Not sure if this was possible before, but one can now define this directly at the task level, e.g.:
- name: Mkdir /opt/applications
  file:
    path: /opt/applications
    state: directory
  become: yes
Also, https://docs.ansible.com/ansible/2.7/user_guide/become.html might help with further questions.
Based on all the comments, I am making this an answer to the question:
According to the discussions on the forum of Ansible's repo, there was a breakage at the role level, so it is better to switch to version 1.9.1. What is more, there was another change in roles: sudo has changed to become (as mentioned in another question's answer). That seems to fix my problem, even though the docs say sudo still works.
I have replaced:
- {role: ec2, sudo: true}
with
- {role: ec2, become: yes}
