Remote machine unreachable while trying to ping through Ansible

This is my hosts file:
[openstack]
ec2-54-152-162-0.compute-1.amazonaws.com
I am trying to ping it using the following command:
ansible openstack -u redhat -m ping -vvvv
I got the following response:
Loaded callback minimal of type stdout, v2.0
Using module file /usr/lib/python2.7/site-packages/ansible-2.2.0-py2.7.egg/ansible/modules/core/system/ping.py
<ec2-54-152-162-0.compute-1.amazonaws.com> ESTABLISH SSH CONNECTION FOR USER: redhat
<ec2-54-152-162-0.compute-1.amazonaws.com> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=22 -o 'IdentityFile="/home/centos/AnsibleKeyPair.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=redhat -o ConnectTimeout=10 -o ControlPath=/home/centos/.ansible/cp/ansible-ssh-%h-%p-%r ec2-54-152-162-0.compute-1.amazonaws.com '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1480529571.83-128837972481874 `" && echo ansible-tmp-1480529571.83-128837972481874="` echo $HOME/.ansible/tmp/ansible-tmp-1480529571.83-128837972481874 `" ) && sleep 0'"'"''
ec2-54-152-162-0.compute-1.amazonaws.com | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh.",
    "unreachable": true
}
NOTE: I am able to connect to CentOS machines properly, but I can't ping Ubuntu and Red Hat machines. My controller machine is CentOS. What might the problem be?

I finally solved it by using the following command:
ansible openstack -u ec2-user -m ping
I had been typing -u redhat, but AWS had already assigned the instance a user name automatically: ec2-user.
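As a quick manual check (not from the original answer, but using the key path from the verbose output and the host from the inventory above), you can confirm which login name the AMI accepts before involving Ansible:
ssh -i /home/centos/AnsibleKeyPair.pem ec2-user@ec2-54-152-162-0.compute-1.amazonaws.com
If this logs you in while the same command with redhat@ is rejected, the user name is the problem, not the connectivity.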

"ESTABLISH SSH CONNECTION FOR USER: None" - this means that it is trying to ssh this host using a blank username which will not work.
Two solutions:
Edit the hosts file to include ansible_user=ubuntu (or whatever user your flavor uses, i.e. ec2-user for amazon linux)
[openstack]
ec2-54-204-230-203.compute-1.amazonaws.com ansibler_user=ubuntu
Just call it with the -u ubuntu when calling the playbook (or again whatever your flavor uses).
ansible openstack -u ubuntu -m ping -vvvv
Hope this helps!
--Edit--
(this is what helped me do it)
1.) Place your SSH key in the ~/.ssh directory (create the file, then paste in the key material):
touch ~/.ssh/mykey.pem
2.) Start a shell under ssh-agent:
ssh-agent bash
3.) Change the key's permissions:
chmod 600 ~/.ssh/mykey.pem
4.) Add the key to the agent so Ansible can use it:
ssh-add ~/.ssh/mykey.pem
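As an alternative to the ssh-agent steps above, here is a sketch of pinning the key and user per host in the inventory instead (ansible_user and ansible_ssh_private_key_file are standard inventory variables; the host and key file names are the ones used earlier):
[openstack]
ec2-54-204-230-203.compute-1.amazonaws.com ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/mykey.pem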

On the command line, use the -k argument to prompt for the SSH password:
ansible openstack -u redhat -m ping -k
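Note that with the default ssh connection plugin, -k needs the sshpass program installed on the control machine; on a CentOS controller it is available from EPEL:
sudo yum install -y sshpass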

Related

Ansible: Timeout (12s) waiting for privilege escalation prompt

Ansible 2.9.27. Target is Linux CentOS 7.
'become sudo' always fails with the error Timeout (12s) waiting for privilege escalation prompt.
When I try manually, sudo su takes about 60 seconds to return a prompt. I don't know why, but I'd like to know how to change the timeout so that Ansible waits longer for become.
I've tried different solutions I found in StackOverflow, such as running with -c paramiko, but they didn't work.
<myhostname.com> ESTABLISH SSH CONNECTION FOR USER: myuserid
<myhostname.com> SSH: EXEC sshpass -d8 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'User="myuserid"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o ControlPath=/home/myuserid/.ansible/xx/e123e1234e myhostname.com '/bin/sh -c '"'"'rm -f -r /tmp/myuserid/ansible/ansible-tmp-12334567890/ > /dev/null 2>&1 && sleep 0'"'"''
<myhostname.com> (0, '', '')
fatal: [myhostname.com]: FAILED! => {"msg": "Timeout (12s) waiting for privilege escalation prompt: \r\n"}
There are multiple ways; one is to set an environment variable as below:
export ANSIBLE_TIMEOUT=120
Run the playbook in the same terminal where the environment variable is set.
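The same setting can also be made persistent in ansible.cfg instead of the environment; ANSIBLE_TIMEOUT maps to the timeout option in the [defaults] section:
[defaults]
timeout = 120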

What does sleep 0 do in a shell script, and what does it do when the Ansible SSH configuration appends it after each command?

What does sleep 0 do in a shell script? I read the man page for sleep; it says "delay for a specified amount of time", with the NUMBER argument giving the time in seconds (by default).
But I see Ansible using sh -c 'echo ~ec2-user && sleep 0' to start each task.
It also appends this to the end of each remote command it fires.
I didn't find any special-case mention of sleep 0 on the man page, and given what the sleep command does, sleep 0 doesn't seem to make any sense.
The sleep command on my server is from GNU coreutils 8.22
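As a sanity check (a trivial demonstration, not from the original post): sleep 0 simply returns immediately with exit status 0, i.e. it is effectively a no-op:
time sleep 0      # completes immediately
sleep 0; echo $?  # prints 0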
After looking into this for some more time, here are a few things I have learned:
this is SSH configuration handed to ssh by Ansible itself;
each time Ansible uses SSH to execute a task, it runs ssh with -C and multiple -o options. These are not part of the playbook or task.
I looked for Ansible configuration on the Ansible documentation page, checked all files and environment variables, but found nothing related to SSH.
I checked /etc/ssh/ssh_config; it does not contain all the parameters/arguments I see in the ssh commands Ansible is running.
In the inventory as well, the host is listed simply as
ansible_host=localhost ansible_user=ec2-user
For example, the log lines at the beginning when Ansible executes any task:
<localhost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=120 -o ControlPath=/home/ec2-user/.ansible/cp/6bc5a26ee4 localhost '/bin/sh -c '"'"'echo ~ec2-user && sleep 0'"'"''
<localhost> (0, '/home/ec2-user\n', '')
<localhost> ESTABLISH SSH CONNECTION FOR USER: ec2-user
I'm executing an Ansible playbook written by another team, and there is no one on that team I can talk to. I'm struggling to find where Ansible gets all these arguments it uses in each SSH call, and why it uses this sleep 0.
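For what it's worth, the -C -o ControlMaster=auto -o ControlPersist=60s portion matches Ansible's documented default for ssh_args, which lives in the [ssh_connection] section of ansible.cfg and can be overridden there:
[ssh_connection]
ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s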

Installing package via Ansible using a user with limited sudo rights [duplicate]

I have a playbook that performs some prechecks on the database as the Oracle user. The remote node is an AIX server, so I created a shell script that is run via the playbook.
---
- hosts: db
  vars_files:
    - ansible_var.yml
  tasks:
    - name: "DB Checks"
      become: True
      become_user: oracle
      script: "{{ db_prechk }}"
On the AIX server, I added the below entry to the sudoers file
ansible ALL=(oracle) NOPASSWD: /tmp/ansible-tmp-*/db_prechecks.sh
But the playbook fails with the error that it's waiting for the privilege escalation prompt.
This runs fine if it is run as root. However, we do not want passwordless root between the Ansible controller and the remote nodes, so we created an ansible user on the controller and remote nodes and exchanged the SSH keys.
This also runs if the sudoers entry is just
ansible ALL=(oracle) NOPASSWD: ALL
We do not want to give the ansible user full access to the oracle userid, either.
I ran the playbook in verbose mode and can see that Ansible copies the script to the remote_tmp directory and executes it as the oracle user. Shouldn't the sudoers line have allowed it to run, then?
If you look at the verbose mode output, you will see that the actual command differs from the one you specified in the sudoers file:
<127.0.0.1> SSH: EXEC ssh -o ForwardAgent=yes -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2202 -o 'IdentityFile="/Users/techraf/devops/testground/debian/.vagrant/machines/debian/virtualbox/private_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible -o ConnectTimeout=120 -o ControlPath=/Users/techraf/.ansible/cp/ansible-ssh-%h-%p-%r -tt 127.0.0.1 '/bin/sh -c '"'"'sudo -H -S -n -u oracle /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-xoamupogqwtteubvedoscaghzmfascsr; /tmp/ansible-tmp-1488508771.72-271591203197790/db_prechecks.sh '"'"'"'"'"'"'"'"' && sleep 0'"'"''
So what is executed after sudo -u oracle actually starts with /bin/sh -c.
I managed to narrow a working entry down to:
ansible ALL=(oracle) NOPASSWD: /bin/sh -c echo BECOME-SUCCESS*; * /tmp/ansible-tmp-*/db_prechecks.sh*
But it is based on trial-and-error. I'm not sure yet why * is required between ; and /tmp/... and at the end, but otherwise it does not work.
In both places Ansible added superfluous space characters, and that seems to be the reason: adding a space to a shell command specified in the sudoers file does affect the ability to sudo.
You might try ? instead of *; I will test later.
Q: "This also runs if the sudoers entry is just ansible ALL=(oracle) NOPASSWD: ALL"
A: Quoting from the Ansible documentation, "Privilege escalation must be general":
"You cannot limit privilege escalation permissions to certain commands..."
Replying to @techraf's answer: sudo seems to truncate the extra space, and you can see it with sudo -l. I was able to get around this by escaping the spaces with \ as instructed in sudo's man page:
\x For any character ‘x’, evaluates to ‘x’.
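When experimenting with entries like the ones above, sudo -l shows the commands sudoers actually allows, which makes stray or collapsed spaces visible:
sudo -l              # run as the ansible user: list its allowed commands
sudo -l -U ansible   # run as root: list what user 'ansible' may run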

Error: Failed to connect to the host via ssh

I am trying to learn Ansible and am following the O'Reilly book Ansible: Up and Running.
In the getting started section of the book, it asks me to install ansible, virtualbox and vagrant and then via CLI run:
vagrant init ubuntu/trusty64
vagrant up
Afterwards I can ssh into the VM via vagrant ssh or via:
ssh vagrant@127.0.0.1 -p 2222 -i /Users/XXX/playbooks/.vagrant/machines/default/virtualbox/private_key
Next is creating the hosts file which looks like this:
testserver ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 \ ansible_ssh_user=vagrant \ ansible_ssh_private_key_file=.vagrant/machines/default/virtualbox/private_key
Lastly is running this command:
ansible testserver -i hosts -m ping
Which gets me:
testserver | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh.",
    "unreachable": true
}
Adding -vvv gets me:
No config file found; using defaults
<127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: None
<127.0.0.1> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o Port=2222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/XXX/.ansible/cp/ansible-ssh-%h-%p-%r 127.0.0.1 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468541275.7-255802522359895 `" && echo ansible-tmp-1468541275.7-255802522359895="` echo $HOME/.ansible/tmp/ansible-tmp-1468541275.7-255802522359895 `" ) && sleep 0'"'"''
testserver | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh.",
    "unreachable": true
}
I tried modifying ansible_ssh_private_key_file in the hosts file to point to the full path of the private key, but that still didn't work:
ansible testserver -i hosts -m ping -vvv
No config file found; using defaults
<127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: None
<127.0.0.1> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o Port=2222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/XXX/.ansible/cp/ansible-ssh-%h-%p-%r 127.0.0.1 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468541370.61-137685863794569 `" && echo ansible-tmp-1468541370.61-137685863794569="` echo $HOME/.ansible/tmp/ansible-tmp-1468541370.61-137685863794569 `" ) && sleep 0'"'"''
testserver | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh.",
    "unreachable": true
}
This is my Ansible version:
ansible --version
ansible 2.1.0.0
config file =
configured module search path = Default w/o override
Anyone have any ideas why Ansible isn't connecting to my Vagrant VM?
I don't see any of your inventory variables past the first one taking effect in the ssh command. Does your inventory file really look like this?
testserver ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 \ ansible_ssh_user=vagrant \ ansible_ssh_private_key_file=.vagrant/machines/default/virtualbox/private_key
You shouldn't have backslashes in there. The direct reformatting is:
testserver ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user=vagrant ansible_ssh_private_key_file=.vagrant/machines/default/virtualbox/private_key
However, in the long run you probably want to split these out into separate host_vars files.
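A sketch of that layout, assuming the file names below (the variable names are the ones from the inventory line above): the inventory entry shrinks to just the host alias, and the variables move into a YAML file named after it.
# hosts
testserver
# host_vars/testserver.yml
ansible_ssh_host: 127.0.0.1
ansible_ssh_port: 2222
ansible_ssh_user: vagrant
ansible_ssh_private_key_file: .vagrant/machines/default/virtualbox/private_key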
