Ansible cannot connect to host via specified private key - ansible

The following command works from my OSX terminal:
ssh vagrant@192.168.50.100 -i .vagrant/machines/centos100/virtualbox/private_key
I have an inventory file called "hosts" with the following:
192.168.50.100 ansible_user=vagrant ansible_ssh_private_key_file=.vagrant/machines/centos100/virtualbox/private_key
However when I run the following:
ansible -i hosts all -m ping -v
I get:
192.168.50.100 | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue

I was using Ansible 1.8.2, where ansible_user was still called ansible_ssh_user.
The correct hosts file is:
192.168.50.100 ansible_ssh_user=vagrant ansible_ssh_private_key_file=.vagrant/machines/centos100/virtualbox/private_key
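As a quick check, the same user and key can also be passed on the command line, which bypasses the inventory variables entirely (a sketch, assuming the same file names and paths as above):
ansible -i hosts all -m ping -u vagrant --private-key=.vagrant/machines/centos100/virtualbox/private_key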

Related

Ansible: execute playbook from localhost through bastion host

I am a newbie to Ansible.
We do our deployments via Ansible, and a bastion host is provisioned for the deployments.
My current approach is to clone the Ansible repo onto the bastion host and run the commands from that folder.
My question: is it possible to run the Ansible code from the local machine through the bastion
(basically, avoiding the repo on the bastion host)?
Let's say you want to provision a couple of VMs, 172.20.0.10 and 172.20.0.11, in your development environment, going through your 172.20.0.1 bastion. Your inventory looks a bit like this:
[development]
172.20.0.10
172.20.0.11
Then you can edit your ~/.ssh/config and add
Host bastion
  Hostname 172.20.0.1
  User youruser

Host 172.20.*
  ProxyJump bastion
  User youruser
Then you can test with ssh 172.20.0.10, which should land you in your first VM. If it works for SSH, Ansible will work the same way.
Note: you can run ansible with -vvv (or one more v); you'll see the SSH commands Ansible is running.
Note 2: ProxyJump requires a fairly recent OpenSSH, 7.3 or later, which is when the option was added.
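If your OpenSSH is older than that, a roughly equivalent setup (a sketch, using the same hypothetical user and addresses) replaces ProxyJump with a ProxyCommand, which only needs OpenSSH 5.4 for the -W flag:
Host 172.20.*
  ProxyCommand ssh -W %h:%p youruser@172.20.0.1
  User youruser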
Using this data:
remote host: 10.0.1.121
remote user: application_user
ssh key: app_ssh_key
bastion host: 212.34.345.12
bastion user: bastian_user
ssh key: bastian_ssh_key
and using keys for SSH access (you have to store the keys in secure storage, not with the Ansible playbook).
As a single ssh command:
$ ssh application_user@10.0.1.121 -i path/to/app_ssh_key \
  -o ProxyCommand="ssh -q bastian_user@212.34.345.12 -i path/to/bastian_ssh_key -W %h:%p"
In Ansible, you can use two methods:
Method 1
Use variables per inventory machine/group, so that different machines/groups can have different connection options.
Add to the inventory file:
[remote-vm]
10.0.1.121
[remote-vm:vars]
ansible_ssh_user=application_user
ansible_ssh_private_key_file=path/to/app_ssh_key
ansible_ssh_common_args=-o ProxyCommand="ssh -q bastian_user@212.34.345.12 -i path/to/bastian_ssh_key -W %h:%p"
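With that inventory saved as, say, hosts (the file name is my assumption), a quick connectivity test without any playbook would be:
ansible -i hosts remote-vm -m ping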
Method 2
Single configuration valid for all inventory machines.
Add to/replace in ansible.cfg:
[defaults]
remote_user = application_user
[ssh_connection]
ssh_args=-i path/to/app_ssh_key -o ProxyCommand="ssh -q bastian_user@212.34.345.12 -i path/to/bastian_ssh_key -W %h:%p"
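Either way, you can confirm the jump host is actually being used by running an ad-hoc command with maximum verbosity and checking that the ProxyCommand appears in the ssh invocation Ansible prints (a sketch; the inventory file name is my assumption):
ansible -i hosts all -m ping -vvvv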

Why is Ansible not able to ping locally?

Without writing a playbook, why is Ansible not able to ping locally?
Problem:
I have one EC2 instance, whose IP is "52.15.160.250", and I installed Ansible on it. Inside the inventory file [/etc/ansible/hosts] I have:
[localhost]
52.15.160.250
(visudo configuration screenshot)
I tried to ping the local host:
ansible -m ping all
or
ansible -m ping 52.15.160.250
I am getting the following error
(error screenshot)
Try adding it like this:
[localhost]
52.15.160.250 ansible_connection=local
This way it will not attempt SSH; it will use the local connection instead.
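Equivalently, you can force the local connection from the command line without touching the inventory (a sketch):
ansible all -m ping --connection=local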

Ansible 2.0.0.2: ansible doesn't respect the "-u" switch

I'm probing a freshly installed Archlinux installation on a Raspberry PI 2 like so:
ansible -i PI2 arch -m setup -c paramiko -k -u alarm -vvvv
This reads to me as: fire the setup module against the host, connecting as the user "alarm" and asking for that user's password. However, the user that eventually attempts to connect is "root".
Here's the debug response:
Loaded callback minimal of type stdout, v2.0
<192.168.1.18> ESTABLISH CONNECTION FOR USER: root on PORT 22 TO 192.168.1.18
192.168.1.18 | UNREACHABLE! => {
"changed": false,
"msg": "ERROR! Authentication failed.",
"unreachable": true
}
The inventory looks like this:
[arch]
192.168.1.18
Some things that may or may not be relevant are the following:
ssh logins via root are not permitted
sudo is not installed
default user and pass are "alarm" : "alarm"
no SSH key has been copied to the machine, hence the paramiko connection attempt
What is NOT ignored and leads to a successful connection is adding ansible_user=alarm to the IP line in the inventory file.
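For reference, that working inventory line looks like this:
[arch]
192.168.1.18 ansible_user=alarm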
EDIT
Found this interesting passage in the official docs: http://docs.ansible.com/ansible/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable which states:
Another important thing to consider (for all versions) is that connection specific variables override config, command line and play specific options and directives. For example:
ansible_user will override -u and remote_user.
The original question seems to remain, though: without any mention of ansible_user in the inventory, why is root being used instead of the user explicitly given via -u?
EDIT_END
Is this expected behaviour?
Thanks
You don't need to specify paramiko as the connection type; Ansible will figure that part out. You may have a group_vars directory with an ansible_user or ansible_ssh_user variable defined for this host, which could be overriding the alarm user.
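One way to check which user Ansible has actually resolved for the host is the debug module, which should not need a working SSH connection (a sketch against the same inventory):
ansible -i PI2 arch -m debug -a "var=ansible_user"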
I was able to replicate your test on ansible 2.0.0.2 without any issues against a raspberry pi 2 running Raspian:
➜ ansible ansible -i PI2 arch -m setup -u alarm -vvvv -k
Using /etc/ansible/ansible.cfg as config file
SSH password:
Loaded callback minimal of type stdout, v2.0
<192.168.1.84> ESTABLISH CONNECTION FOR USER: alarm on PORT 22 TO 192.168.1.84
CONNECTION: pid 78534 waiting for lock on 9
CONNECTION: pid 78534 acquired lock on 9
paramiko: The authenticity of host '192.168.1.84' can't be established.
The ssh-rsa key fingerprint is 54e12e8153e0319f450934d606dca7df.
Are you sure you want to continue connecting (yes/no)?
yes
CONNECTION: pid 78534 released lock on 9
<192.168.1.84> EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454502995.07-263327298967895 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454502995.07-263327298967895 )" )
<192.168.1.84> PUT /var/folders/39/t0dm88q50dbcshd5nc5m5c640000gn/T/tmp5DqywL TO /home/alarm/.ansible/tmp/ansible-tmp-1454502995.07-263327298967895/setup
<192.168.1.84> EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/alarm/.ansible/tmp/ansible-tmp-1454502995.07-263327298967895/setup; rm -rf "/home/alarm/.ansible/tmp/ansible-tmp-1454502995.07-263327298967895/" > /dev/null 2>&1
192.168.1.84 | SUCCESS => {
"ansible_facts": {
"ansible_all_ipv4_addresses": [
"192.168.1.84"
],
I virtualenv'd myself an Ansible 1.9.4 sandbox, copied over the ansible.cfg and the inventory, and ran the command again. It kinda worked as expected:
⤷ ansible --version
ansible 1.9.4
configured module search path = None
(ANS19TEST)~/Documents/Code/VENVS/ANS19TEST
⤷ ansible -i PI2 arch -m setup -c paramiko -k -u alarm -vvvv
SSH password:
<192.168.1.18> ESTABLISH CONNECTION FOR USER: alarm on PORT 22 TO 192.168.1.18
<192.168.1.18> REMOTE_MODULE setup
From where I'm standing I'd say this is a bug. Maybe somebody can confirm?! This goes to the bugtracker then...
Cheers
EDIT
For brevity I omitted an important part of my inventory file, which ultimately is responsible for the behavior. It looks like this:
[hypriot]
192.168.1.18 ansible_user=root
[arch]
192.168.1.18
Quote from the Ansible bugtracker:
The names used in the inventory are the keys in a dictionary. So everything you put in there as host-specific variables will be merged into one big dictionary. That means that in some conflicting cases variables are superseded by other values.
You can prevent this by using different names for the same host (e.g. using IP address and hostname, or an alias or DNS-alias) and in that case you can still do what you like to do.
So my inventory looks like this now:
[hypriot]
hypriot_local ansible_host=192.168.1.18 ansible_user=root
[archlinux]
arch_local ansible_host=192.168.1.18
This works fine. The corresponding issue on the Ansible tracker is here: https://github.com/ansible/ansible/issues/14268
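With the aliases in place, the original ad-hoc call simply targets the new group (or alias) instead of the bare IP, for example:
ansible -i PI2 archlinux -m setup -k -u alarm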

Ansible dynamic inventory on GCE: success command not found

I am trying to configure Ansible for dynamic inventory. Following the Ansible documentation, I am typing this on my OS X laptop's command line:
GCE_INI_PATH=~/.gce.ini ansible all -i gce.py -m setup hostname | success >> {"ansible_facts": {"ansible_all_ipv4_addresses": ["x.x.x.x"],
and I am getting:
-bash: success: command not found
close failed in file object destructor:
sys.excepthook is missing
lost sys.stderr
What should I type instead?
Try just running the command; the "hostname | success >> {...}" part you typed is example output from the documentation, not part of the command:
GCE_INI_PATH=~/.gce.ini ansible all -i gce.py -m setup
If this works, you will get a stream of JSON output (from the Ansible setup module) showing system information about any GCE hosts you have set up.
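You can also sanity-check the inventory script on its own (assuming gce.py is executable in the current directory); --list is the standard interface every dynamic inventory script implements, so this should dump the same host data as JSON:
GCE_INI_PATH=~/.gce.ini ./gce.py --list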

ansible-playbook -> ControlPath too long

I'm just trying out a playbook that a colleague has set up and that I needed to modify. The first problem I got running it on my Mac was:
ERROR: Unable to find an inventory file, specify one with -i ?
This was easily solved by adding -i verif to the command. But then the following error occurred:
loadgen-verif-app1.internal.machines | FAILED => SSH encountered an unknown error. The output was:
OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
debug1: Reading configuration data /Users/andreas.joelsson/.ssh/config
debug1: Reading configuration data /etc/ssh_config
debug1: /etc/ssh_config line 20: Applying options for *
debug1: auto-mux: Trying existing master
ControlPath too long
This is true for all 8 machines (loadgen-verif-app[1-8].internal.machines)
After some debugging suggesting that the path could be too long, I tried the following command with the same result:
ansible nukes -m ping -i verif -vvvv
Then I thought it was an issue with SSH, but executing the command through ssh works:
ssh loadgen-verif-app1.internal.machines ping loadgen-verif-app2.internal.machines
And now I am stumped, because the ping command works on some machines not in the range listed above; the thing is, their hostnames are shorter than the loadgen-verif-appX.internal.machines ones, if that makes a difference. But then the ssh command shouldn't work either, I guess.
I have some ssh config settings set up for the targets as well, but they are no different from the ones that did work with the ping command:
Host loadgen1
  HostName loadgen-verif-app1.internal.machines
Now I am stumped, as it works for my colleague on a Mac as well, so I'm not sure if there is some setting I'm missing. He doesn't need to provide -i verif either, which may also be related to why it doesn't work for me.
edit 2014-12-17:
I have tried modifying the Ansible setting control_path according to http://docs.ansible.com/intro_configuration.html#control-path
We are running the same version of ansible
We are running the same version of OpenSSH.
We have the same ssh configs as far as we can tell.
I have been looking for the Host * entry I found in /etc/ssh_config and removed it, without progress, following e.g. https://help.openshift.com/hc/en-us/articles/202186044-Unable-to-git-clone-an-application-when-SSH-session-sharing-is-in-use-ControlPath-too-long-
edit 2015-01-08:
SE-C02N76PGG5RP:verif_provisioning andreas.joelsson$ ansible loadgen-verif-app1.internal.machines -m ping -i verif -vvvv
<loadgen-verif-app1.internal.machines> ESTABLISH CONNECTION FOR USER: andreas.joelsson
<loadgen-verif-app1.internal.machines> REMOTE_MODULE ping
<loadgen-verif-app1.internal.machines> EXEC ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/Users/andreas.joelsson/.ansible/cp/ansible-ssh-%h-%p-%r" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 loadgen-verif-app1.internal.machines /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1420723708.99-33622628424665 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1420723708.99-33622628424665 && echo $HOME/.ansible/tmp/ansible-tmp-1420723708.99-33622628424665'
loadgen-verif-app1.internal.machines | FAILED => SSH encountered an unknown error. The output was:
OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
debug1: Reading configuration data /Users/andreas.joelsson/.ssh/config
debug1: Reading configuration data /etc/ssh_config
debug1: /etc/ssh_config line 20: Applying options for *
debug1: auto-mux: Trying existing master
ControlPath too long
edit 2015-02-12:
SE-C02N76PGG5RP:verif_provisioning andreas.joelsson$ ansible nukes -m ping -i verif
loadgen-verif-app4.internal.machines | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
loadgen-verif-app5.internal.machines | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
loadgen-verif-app3.internal.machines | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
loadgen-verif-app1.internal.machines | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
loadgen-verif-app2.internal.machines | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
loadgen-verif-app8.internal.machines | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
loadgen-verif-app6.internal.machines | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
loadgen-verif-app7.internal.machines | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
And with the working one:
SE-C02N76PGG5RP:verif_provisioning andreas.joelsson$ ansible duke -m ping -i verif
steve-verif-app1.internal.machines | success >> {
"changed": false,
"ping": "pong"
}
The solution to this error is mentioned in the Ansible documentation; please refer to this link.
I was getting this error when I tried to connect to EC2 instances, but modifying the configuration below solved my problem.
I am assuming that you installed Ansible on your Mac using pip. Please follow these steps:
create the /etc/ansible directory
sudo mkdir /etc/ansible
change its permissions
sudo chown $(whoami):staff /etc/ansible
download the ansible.cfg file from here and place it inside the /etc/ansible directory
edit/uncomment the following lines (see also the note after these steps):
[ssh_connection]
control_path = %(directory)s/%%h-%%r
edit the ~/.ssh/config file:
Host *
  GSSAPIAuthentication no
EXTRA STEP:
brew install https://raw.github.com/eugeneoden/homebrew/eca9de1/Library/Formula/sshpass.rb
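One more note on control_path (my addition, not part of the original answer): if your OpenSSH is 6.7 or newer, you can instead hash all the connection parameters into a fixed-length token with %C, which keeps the path short no matter how long the hostname is. It will not help on the OpenSSH 6.2 shown in the logs above, though:
[ssh_connection]
control_path = %(directory)s/%%C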
OK, the way I got it working: I made the changes in ansible.cfg and did the extra steps, but that did not work. The only thing that worked for me was to export ANSIBLE_SSH_CONTROL_PATH.
This is, I think, because it always picks the default path, even after the change in ansible.cfg:
1.9.4 git:(master) pwd
/usr/local/Cellar/ansible/1.9.4
➜ 1.9.4 git:(master) ag ANSIBLE_SSH_CONTROL
libexec/lib/python2.7/site-packages/ansible/constants.py
187:ANSIBLE_SSH_CONTROL_PATH = get_config(p, 'ssh_connection', 'control_path', 'ANSIBLE_SSH_CONTROL_PATH', "%(directory)s/ansible-ssh-%%h-%%p-%%r")
Output without exporting ANSIBLE_SSH_CONTROL_PATH:
ControlPath="/Users/vinitkhandagle/.ansible/cp/ansible-ssh-%h-%p-%r"
Exported the variable as:
export ANSIBLE_SSH_CONTROL_PATH='%(directory)s/%%h-%%r'
Control path changes accordingly:
ControlPath="/Users/vinitkhandagle/.ansible/cp/%h-%r"
Adding a note to @techraf's comment above: I encountered the same issue when using Ansible in combination with Molecule in a virtual environment. Even though ansible.cfg was being read, ANSIBLE_SSH_CONTROL_PATH was still overridden, so in that case the workaround was to set it as an environment variable outside the configuration file itself, since that path is shorter than the default.
export ANSIBLE_SSH_CONTROL_PATH='%(directory)s/tmp'
