Ansible doesn't connect to server - Connection timed out

I'm trying to connect to a server using Ansible. I have installed Ubuntu Server with OpenSSH. I added my public key to the server, and when I connect to it with plain ssh it works:
ssh ubuntu@92.168.0.14
So I created an Ansible inventory file, hosts:
[dbservers]
192.16.0.14 ansible_ssh_port=22 ansible_ssh_user=ubuntu
Next, I try to run the command:
ansible all -i hosts -m ping -vvvv
But when I run it I get an error:
<192.16.0.14> ESTABLISH CONNECTION FOR USER: ubuntu
<192.16.0.14> REMOTE_MODULE ping
<192.16.0.14> EXEC ['ssh', '-C', '-tt', '-vvv', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/home/karol/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'Port=22', '-o', 'IdentityFile=/home/karol/.ssh/id_rsa', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'User=ubuntu', '-o', 'ConnectTimeout=10', '192.16.0.14', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1431369235.84-48071922815331 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1431369235.84-48071922815331 && echo $HOME/.ansible/tmp/ansible-tmp-1431369235.84-48071922815331'"]
192.16.0.14 | FAILED => SSH encountered an unknown error. The output was:
OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: auto-mux: Trying existing master
debug1: Control socket "/home/karol/.ansible/cp/ansible-ssh-192.16.0.14-22-ubuntu" does not exist
debug2: ssh_connect: needpriv 0
debug1: Connecting to 192.16.0.14 [192.16.0.14] port 22.
debug2: fd 3 setting O_NONBLOCK
debug1: connect to address 192.16.0.14 port 22: Connection timed out
ssh: connect to host 192.16.0.14 port 22: Connection timed out
I use the same identity file, so why can't Ansible connect to the server?

Maybe the trouble is here:
ubuntu@**92.168.0.14**
[dbservers]
**192.16**.0.14 ansible_ssh_port=22 ansible_ssh_user=ubuntu
These are different IP addresses with different first octets:
92.168 and 192.16
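Assuming the address that works with plain ssh is the correct one (the exact address below is an assumption; use whatever your manual ssh test actually uses), the inventory entry has to match it exactly:
[dbservers]
192.168.0.14 ansible_ssh_port=22 ansible_ssh_user=ubuntu
A quick sanity check is to run the same connection Ansible will attempt, e.g. ssh -p 22 ubuntu@192.168.0.14; if that also times out, the problem is the address or routing rather than Ansible.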

Related

SSH connection, in a Jenkins pipeline, disconnects after login

Scenario: I'm developing a Jenkins step that needs to transfer a file to a machine (install a JBoss module). I'm trying to do it via SSH interactions. I need to connect via ssh, switch to an authorized user in order to access the JBoss folders/files, and then use rsync to transfer the jar file into the JBoss modules folder. I cannot use the same user for ssh and for JBoss.
Problem: I can successfully connect via ssh, but when I send the first command (to switch user), it disconnects and then nothing works anymore. Apparently it disconnects before the 'su' command is executed. The next command would be to check whether the module folder exists (and create it if it doesn't).
The sequence of commands is executed inside a function:
def installModule(HOST, USER, PASSWORD) {
sh set -x && sshpass -p [PASSWORD] ssh -v -tt -o StrictHostKeyChecking=no [USER]@[HOST] echo [PASSWORD] | sudo -S su - jboss && cd [MODULE_FOLDER] && if [[ ! -e [MODULE_VERSION] ]]; then mkdir [MODULE_VERSION]; fi
}
The console output:
debug1: Authentication succeeded (keyboard-interactive).
Authenticated to [MACHINE_NAME_HERE] ([IP_HERE]:22).
debug1: channel 0: new [client-session]
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
debug1: pledge: network
debug1: client_input_global_request: rtype hostkeys-00@openssh.com want_reply 0
debug1: tty_make_modes: no fd or tio
debug1: Sending environment.
debug1: Sending env LANG = en_GB.UTF-8
debug1: Sending command: echo [PASSWORD_HERE]
debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
debug1: channel 0: free: client-session, nchannels 1
debug1: fd 0 clearing O_NONBLOCK
debug1: fd 1 clearing O_NONBLOCK
debug1: fd 2 clearing O_NONBLOCK
Connection to [MACHINE_NAME_HERE] closed.
Transferred: sent 2180, received 3356 bytes, in 0.3 seconds
Bytes per second: sent 7006.2, received 10785.6
debug1: Exit status 0
Sorry, try again.
[sudo] password for jenkins: Sorry, try again.
[sudo] password for jenkins:
sudo: no password was provided
sudo: 2 incorrect password attempts
Any help would be appreciated =)
I had to do something similar. In my case, I opted for having a shell script file in my environment containing all the commands I needed executed on the remote machine.
I did it like this:
withCredentials([
    usernamePassword(credentialsId: "$VM_CREDENTIALS", usernameVariable: 'USER_VM', passwordVariable: 'PWD_VM')
]) {
    script {
        sh 'sshpass -p $PWD_VM ssh -o StrictHostKeyChecking=no $USER_VM@$IP_VM "bash -s ' + "$VARIABLE_A $VARIABLE_B" + '" < path/to/shell/script.sh'
    }
}
I used $VARIABLE_A and $VARIABLE_B to pass some arguments to the script. The path/to/shell/script.sh is the path to the script, placed in your Jenkins environment, that gets executed on the remote machine.
I also had to switch users in the shell script; I did it like so:
# Switch to root user
echo $PWD | sudo -S sleep 1 && sudo -E su
Remember: don't hardcode the $PWD variable anywhere; you need to take proper security measures when handling it.
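For reference, a minimal sketch of what such a script.sh could look like for the original JBoss use case; the variable names, argument order, and sudoers setup here are assumptions for illustration, not part of the original answer:
#!/bin/bash
# Arguments passed in from the Jenkins side via "bash -s <args>"
SUDO_PASS="$1"        # sudo password, passed as an argument rather than hardcoded
MODULE_FOLDER="$2"
MODULE_VERSION="$3"

# Switch to the jboss user; -S makes sudo read the password from stdin
echo "$SUDO_PASS" | sudo -S su - jboss -c "
  cd '$MODULE_FOLDER' &&
  if [ ! -e '$MODULE_VERSION' ]; then mkdir '$MODULE_VERSION'; fi
"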

SSH setting for Mac to create an alias

The ssh address I want to alias is:
ssh -o StrictHostKeyChecking=no username@hostipaddress@jumpServerAdress.com
I am populating the Mac ~/.ssh/config as:
Host prod
HostName hostipaddress
User usrname
ServerAliveInterval 100
ProxyJump jumpServerAdress.com
StrictHostKeyChecking no
GlobalKnownHostsFile /dev/null
UserKnownHostsFile /dev/null
When I do ssh prod,
it does not let me into the host.
It signals:
channel 0: open failed: connect failed: open failed
stdio forwarding failed
ssh_exchange_identification: Connection closed by remote host
Is there any mistake in the config I am using? Please let me know.
I tried this config, which worked:
Host jump
HostName jumpServerAdress.com
User jumpuser
Host prod
HostName hostipaddress
User usrname
ServerAliveInterval 100
ProxyJump jump
StrictHostKeyChecking no
GlobalKnownHostsFile /dev/null
UserKnownHostsFile /dev/null
assuming you have set up SSH keys correctly.
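For a quick check without touching ~/.ssh/config, the equivalent one-off command (OpenSSH 7.3 or newer, and assuming the jump user shown above) would be:
ssh -o StrictHostKeyChecking=no -J jumpuser@jumpServerAdress.com usrname@hostipaddress
If that works but ssh prod does not, the problem is in the config file rather than in the key setup.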

rsync and torsocks error: connection refused (111) and error in socket IO (code 10)

I wanted to download some files from tor (.onion site) with rsync and torsocks with this command (I'm on Linux):
torsocks rsync rsync://root@snatchvwddns6zto.onion/targets/perceptics.com/
and it returns back the error:
1560930992 PERROR torsocks[13894]: socks5 libc connect: Connection refused (in socks5_connect() at socks5.c:202)
rsync: failed to connect to snatchvwddns6zto.onion (127.42.42.0): Connection refused (111)
rsync error: error in socket IO (code 10) at clientserver.c(127) [Receiver=3.1.3]
The onion site snatchvwddns6zto.onion/targets/perceptics.com/ is fully functional.
I already changed the port in the torsocks.conf file to the same port Tor is already on (9051), and it still didn't work.
Even when trying to authenticate:
echo -e 'AUTHENTICATE "passwordhere"\r\nsignal NEWNYM\r\nQUIT' | nc 127.0.0.1 9051
it returns:
(UNKNOWN) [127.0.0.1] 9051 (?) : Connection refused
torsocks.conf:
TorAddress 127.0.0.1
TorPort 9150
Does someone know how to solve these errors?
You can also expose the SSH port instead of the rsync one and do something like:
torify rsync -e "ssh -p 9051" root@snatchvwddns6zto.onion:targets/perceptics.com/ ./
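Before retrying, it can also help to confirm which local port the Tor SOCKS proxy is actually listening on: torsocks needs the SOCKS port (commonly 9050 for the tor daemon, or 9150 for the Tor Browser bundle), while 9051 is usually the control port. A quick check, assuming ss from iproute2 is available:
ss -lnt | grep -E ':(9050|9150|9051)'
TorPort in torsocks.conf then has to match whichever SOCKS port shows up as listening.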

Ansible for Windows: Cannot access Windows machine via WinRM

I have installed Ansible on my Linux (14.04) workstation, along with Python (2.7.6), ansible (2.3.0), pywinrm (0.2.1). I want to use Ansible to configure my Windows VMs. I have a Windows VM (Win2k12r2) in Azure. I have opened all ports in the Azure network security group, and I have opened ports in the Windows Firewall for WinRM (5985 and 5986).
I also ran the script located at https://github.com/ansible/ansible/blob/devel/examples/scripts/ConfigureRemotingForAnsible.ps1, in my Windows VM to ensure that WinRM is enabled.
I am able to RDP into the Windows VM, so I know that its public network interface is working.
As per the ansible docs (http://docs.ansible.com/ansible/intro_windows.html) I have an inventory file inventories/test1
[windows]
<azure_ip_address>
And I have a file group_vars/windows.yml
ansible_user: <my_user_id>
ansible_password: <azure_vm_password>
ansible_port: 5986
ansible_connection: winrm
# The following is necessary for Python 2.7.9+ when using default WinRM self-signed certificates:
ansible_winrm_server_cert_validation: ignore
ansible_winrm_scheme: https
When I run the command:
ansible windows -i inventories/test1 -m win_ping --ask-vault-pass -vvvvv
I get the following response:
No config file found; using defaults
Vault password:
Loading callback plugin minimal of type stdout, v2.0 from /home/jgodse/ansible/lib/ansible/plugins/callback/__init__.pyc
Using module file /home/jgodse/ansible/lib/ansible/modules/core/windows/win_ping.ps1
<azure_vm_ip_address> ESTABLISH SSH CONNECTION FOR USER: None
<azure_vm_ip_address> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<azure_vm_ip_address> SSH: ansible_password/ansible_ssh_pass not set: (-o) (KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<azure_vm_ip_address> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<azure_vm_ip_address> SSH: PlayContext set ssh_common_args: ()
<azure_vm_ip_address> SSH: PlayContext set ssh_extra_args: ()
<azure_vm_ip_address> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/jgodse/.ansible/cp/ansible-ssh-%h-%p-%r)
<azure_vm_ip_address> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/jgodse/.ansible/cp/ansible-ssh-%h-%p-%r <azure_vm_ip_address> '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477490434.84-228998637685624 `" && echo ansible-tmp-1477490434.84-228998637685624="` echo $HOME/.ansible/tmp/ansible-tmp-1477490434.84-228998637685624 `" ) && sleep 0'"'"''
<azure_vm_ip_address> | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket \"/home/jgodse/.ansible/cp/ansible-ssh-<azure_vm_ip_address>-22-jgodse\" does not exist\r\ndebug2: ssh_connect: needpriv 0\r\ndebug1: Connecting to <azure_vm_ip_address> [<azure_vm_ip_address>] port 22.\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address <azure_vm_ip_address> port 22: Connection timed out\r\nssh: connect to host <azure_vm_ip_address> port 22: Connection timed out\r\n",
"unreachable": true
}
I tried telnetting to the machine as follows:
$ telnet <azure_vm_ip_address> 5986
Trying <azure_vm_ip_address>...
Connected to <azure_vm_ip_address>.
Escape character is '^]'.
This tells me that telnet worked to 5986, and therefore my firewall rules were OK.
Do I have to do something to tell Ansible that I'm trying to connect to a Windows VM using WinRM? Or am I missing something to help my Ansible workstation connect to my Windows VM via WinRM?
It turns out that putting the connection variables in group_vars/windows.yml didn't work. I got rid of that file completely, and edited inventories/test1 to look like this:
[windows]
<azure_ip_address>
[windows:vars]
ansible_user=<my_user_id>
ansible_password=<azure_vm_password>
ansible_port=5986
ansible_connection=winrm
ansible_winrm_server_cert_validation=ignore
Running the command again then worked, and win_ping succeeded.
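With the connection variables inline in the inventory and nothing encrypted at this point, the same check can be run without the vault prompt, e.g.:
ansible windows -i inventories/test1 -m win_ping -vvvvv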
I then tried encrypting inventories/test1, and I got:
[WARNING]: No hosts matched, nothing to do
And win_ping didn't run.
My guess at this point is that variables needed for connection initialization have to be in an inventory file, and are read before encrypted variables are decrypted and used.

vagrant ansible The following settings don't exist: inventory_file

I've pulled down a git repo and ran vagrant up, but I'm getting this error message:
The following settings don't exist: inventory_file
I've installed VirtualBox, Vagrant, and Ansible on OS X Mountain Lion.
But I can't get anything to work.
Also, when I run ansible all -m ping -vvvv I get:
<192.168.0.62> ESTABLISH CONNECTION FOR USER: Grant
<192.168.0.62> EXEC ['ssh', '-tt', '-vvv', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/Users/Grant/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'Port=22', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'ConnectTimeout=10', '192.168.0.62', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-1379790346.17-244145524385544 && chmod a+rx $HOME/.ansible/tmp/ansible-1379790346.17-244145524385544 && echo $HOME/.ansible/tmp/ansible-1379790346.17-244145524385544'"]
192.168.0.62 | FAILED => SSH encountered an unknown error. The output was:
OpenSSH_5.9p1, OpenSSL 0.9.8y 5 Feb 2013
debug1: Reading configuration data /etc/ssh_config
debug1: /etc/ssh_config line 20: Applying options for *
debug1: auto-mux: Trying existing master
debug1: Control socket "/Users/Grant/.ansible/cp/ansible-ssh-192.168.0.62-22-Grant" does not exist
debug2: ssh_connect: needpriv 0
debug1: Connecting to 192.168.0.62 [192.168.0.62] port 22.
debug2: fd 3 setting O_NONBLOCK
debug1: connect to address 192.168.0.62 port 22: Operation timed out
ssh: connect to host 192.168.0.62 port 22: Operation timed out
Any ideas on what is going on will be appreciated :)
For the inventory_file issue, try changing the Vagrantfile to use inventory_path instead. I think this subtle change occurred with Vagrant 1.3.x. If you don't want to modify the Vagrantfile, try using Vagrant 1.2.x.
When running:
ansible all -m ping -vvvv
This will use your current user and will look in the default location for the Ansible hosts inventory (/etc/ansible/hosts).
In order to get it working with a Vagrant defined VM you need to use the vagrant user, specify the SSH key to use during the connection and specify the location of the hosts inventory, e.g.
ansible all \
-i provisioning/inventory # <-- or wherever the inventory is \
-m ping \
-u vagrant \
--private-key ~/.vagrant.d/insecure_private_key
It has been repeated all over the place that you should use ~/.vagrant.d/insecure_private_key, but I found that it was actually using .vagrant/machines/default/virtualbox/private_key, in the path where the Vagrantfile is. They probably changed their key generation to be per-machine, not user-wide, but the documentation does not reflect that yet.
So for the whole command, it would be:
ansible-playbook -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory --private-key=.vagrant/machines/default/virtualbox/private_key -u vagrant playbook.yml
You can check which one it is by running vagrant ssh-config and looking at the IdentityFile value.
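Typical vagrant ssh-config output looks roughly like this (host name, port, and paths vary by project):
Host default
  HostName 127.0.0.1
  User vagrant
  Port 2222
  IdentityFile /path/to/project/.vagrant/machines/default/virtualbox/private_key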
Instead of passing the inventory_file, private_key and ssh_user every time, you can put those into an ansible config file. See my more detailed answer here: https://stackoverflow.com/a/25316963/502457
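A minimal sketch of such an ansible.cfg in the project directory, assuming the inventory lives at provisioning/inventory:
[defaults]
inventory = provisioning/inventory
remote_user = vagrant
private_key_file = ~/.vagrant.d/insecure_private_key
host_key_checking = False
With that in place, a plain ansible all -m ping works without the extra flags.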
$ ansible all -i inventory -m ping -u vagrant --private-key ~/.vagrant.d/insecure_private_key
ansible_ssh_private_key_file=/Users/dxiao/.vagrant.d/insecure_private_key | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
ansible_ssh_user=vagrant | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
testserver | success >> {
"changed": false,
"ping": "pong"
}
$ cat inventory
testserver ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
ansible_ssh_user=vagrant
ansible_ssh_private_key_file=/Users/dxiao/.vagrant.d/insecure_private_key
it works.
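Note that the two FAILED lines in that output are the ansible_ssh_user and ansible_ssh_private_key_file entries being treated as separate hosts, which happens when they end up on their own lines in the inventory. Keeping everything on one line avoids that:
testserver ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user=vagrant ansible_ssh_private_key_file=/Users/dxiao/.vagrant.d/insecure_private_key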
