Ansible for Windows: Cannot access Windows machine via WinRM

I have installed Ansible on my Linux (14.04) workstation, along with Python (2.7.6), Ansible (2.3.0), and pywinrm (0.2.1). I want to use Ansible to configure my Windows VMs. I have a Windows VM (Win2k12r2) in Azure. I have opened all ports in the Azure network security group, and I have opened the WinRM ports (5985 and 5986) in the Windows Firewall.
I also ran the script at https://github.com/ansible/ansible/blob/devel/examples/scripts/ConfigureRemotingForAnsible.ps1 in my Windows VM to ensure that WinRM is enabled.
I am able to RDP into the Windows VM, so I know that its public network interface is working.
As per the Ansible docs (http://docs.ansible.com/ansible/intro_windows.html), I have an inventory file inventories/test1:
[windows]
<azure_ip_address>
And I have a file group_vars/windows.yml:
ansible_user: <my_user_id>
ansible_password: <azure_vm_password>
ansible_port: 5986
ansible_connection: winrm
# The following is necessary for Python 2.7.9+ when using default WinRM self-signed certificates:
ansible_winrm_server_cert_validation: ignore
ansible_winrm_scheme: https
When I run the command:
ansible windows -i inventories/test1 -m win_ping --ask-vault-pass -vvvvv
I get the following response:
No config file found; using defaults
Vault password:
Loading callback plugin minimal of type stdout, v2.0 from /home/jgodse/ansible/lib/ansible/plugins/callback/__init__.pyc
Using module file /home/jgodse/ansible/lib/ansible/modules/core/windows/win_ping.ps1
<azure_vm_ip_address> ESTABLISH SSH CONNECTION FOR USER: None
<azure_vm_ip_address> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<azure_vm_ip_address> SSH: ansible_password/ansible_ssh_pass not set: (-o) (KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<azure_vm_ip_address> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<azure_vm_ip_address> SSH: PlayContext set ssh_common_args: ()
<azure_vm_ip_address> SSH: PlayContext set ssh_extra_args: ()
<azure_vm_ip_address> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/jgodse/.ansible/cp/ansible-ssh-%h-%p-%r)
<azure_vm_ip_address> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/jgodse/.ansible/cp/ansible-ssh-%h-%p-%r <azure_vm_ip_address> '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477490434.84-228998637685624 `" && echo ansible-tmp-1477490434.84-228998637685624="` echo $HOME/.ansible/tmp/ansible-tmp-1477490434.84-228998637685624 `" ) && sleep 0'"'"''
<azure_vm_ip_address> | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket \"/home/jgodse/.ansible/cp/ansible-ssh-<azure_vm_ip_address>-22-jgodse\" does not exist\r\ndebug2: ssh_connect: needpriv 0\r\ndebug1: Connecting to <azure_vm_ip_address> [<azure_vm_ip_address>] port 22.\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address <azure_vm_ip_address> port 22: Connection timed out\r\nssh: connect to host <azure_vm_ip_address> port 22: Connection timed out\r\n",
"unreachable": true
}
I tried telnetting to the machine as follows:
$ telnet <azure_vm_ip_address> 5986
Trying <azure_vm_ip_address>...
Connected to <azure_vm_ip_address>.
Escape character is '^]'.
This tells me that the connection to port 5986 works, and therefore my firewall rules are OK.
Do I have to do something to tell Ansible that I'm trying to connect to a Windows VM using WinRM? Or am I missing something to help my Ansible workstation connect to my Windows VM via WinRM?

It turns out that putting the connection variables in group_vars/windows.yml didn't work. I got rid of that file completely and edited inventories/test1 to look like this:
[windows]
<azure_ip_address>
[windows:vars]
ansible_user=<my_user_id>
ansible_password=<azure_vm_password>
ansible_port=5986
ansible_connection=winrm
ansible_winrm_server_cert_validation=ignore
Running the command then worked: win_ping succeeded.
I then tried encrypting inventories/test1, and I got:
[WARNING]: No hosts matched, nothing to do
And win_ping didn't run.
My guess at this point is that variables needed for connection initialization have to be in an inventory file, and are read before encrypted variables are decrypted and used.
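If the goal of the vault was to avoid a plaintext password, one option that may be worth trying (a sketch, assuming Ansible 2.3+, where ansible-vault encrypt_string exists) is to encrypt only the password value rather than a whole file:

ansible-vault encrypt_string '<azure_vm_password>' --name 'ansible_password'

This prints an ansible_password: !vault |... block that can be pasted into a vars file, leaving the hosts list and the connection variables readable, which sidesteps the "No hosts matched" problem above.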

Related

SSH shows the wrong IP address when using port forwarding

My use case: I have to access AWS EC2 instances through a jumpbox.
Here is my SSH config.
Host awsjumpbox
  User sshuser
  HostName jumpboxhostname
  IdentityFile /Users/myusername/.ssh/id_rsa
  LocalForward 8022 10.0.168.43:22
It works when I do SCP command to copy files to the EC2 instance.
myusername % scp -r -i ~/aws/aws-keypair.pem -P 8022 * ec2-user@localhost:testdir
The authenticity of host '[localhost]:8022 ([::1]:8022)' can't be established.
ECDSA key fingerprint is SHA256:rrwr62yjP2cgUTT9SowdlrIwGi4jMMwt5x4Aj6E4Y3Y.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '[localhost]:8022' (ECDSA) to the list of known hosts.
/etc/profile.d/lang.sh: line 19: warning: setlocale: LC_CTYPE: cannot change locale (UTF-8): No such file or directory
README.md 100% 1064 24.3KB/s 00:00
However, when I execute an SSH command, it returns a strange IP address:
myusername % ssh -i ~/aws/aws-keypair.pem -P 8022 ec2-user@localhost
ssh: connect to host 0.0.31.86 port 22: No route to host
What is the cause of this issue? How do I fix it?
Thank you.
Don't use LocalForward; reverse the flow.
Use ProxyCommand or ProxyJump instead. This will allow SSH to open a session to your bastion server transparently. (Incidentally, the strange address is most likely "8022" being treated as a destination host rather than a port: ssh's port flag is lowercase -p, uppercase -P belongs to scp, and the number 8022 rendered as an IPv4 address is 0.0.31.86.)
E.g., your configuration should be something along the lines of:
Host 10.0.168.43
  User root
  ProxyCommand ssh -W %h:%p sshuser@awsjumpbox
  ...
or
Host 10.0.168.43
  User root
  ProxyJump sshuser@awsjumpbox
  ...
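With either of these in place, ssh and scp can target the internal address directly and the jump happens transparently. A usage sketch, reusing the key and user from the question:

ssh -i ~/aws/aws-keypair.pem ec2-user@10.0.168.43
scp -r -i ~/aws/aws-keypair.pem * ec2-user@10.0.168.43:testdir

(Note that the config's User root would then need to be ec2-user, or be overridden on the command line as shown.)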

How to use ansible to run commands through a remote server to another remote server

In our work setup, there is a remote server B which is accessible only via a remote server A.
How can I run Ansible commands/playbooks on remote server B through remote server A, from my local system where Ansible runs? I.e.,
local system --> remote server A --> remote server B
Remote server B is accessible from remote server A via SSH, but I do not have access to the SSH keys for remote server B.
This is what I tried in my inventory.yaml file, based on the answer below:
hosts:
  remote-serverB:
    vars:
      ansible_connection: "ssh"
      ansible_user: "userB"
      ansible_ssh_common_args: '-o ProxyCommand="sshpass -p <password> ssh -W %h:%p -q userA@remote-serverA"'
but I get the following error from Ansible:
UNREACHABLE {"changed": false, "msg": "EOF on stream; last 100 lines received:\nssh_exchange_identification: Connection closed by remote host\r", "unreachable": true}
This can be done via ansible_ssh_common_args.
ansible_ssh_common_args
This setting is always appended to the default command line for sftp, scp, and ssh. Useful to configure a ProxyCommand for a certain host (or group).
Source: https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html#connecting-to-hosts-behavioral-inventory-parameters
Here is an example inventory using this configuration, given that my two servers have the FQDNs remote-server-A and remote-server-B, and that the user to log in to remote-server-A is user-to-A:
all:
  children:
    servers-needing-jump:
      hosts:
        remote-server-B:
      vars:
        ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q user-to-A@remote-server-A"'
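A quick way to verify the whole path (assuming the inventory is saved as inventory.yaml):

ansible -i inventory.yaml remote-server-B -m ping

As for the asker's ssh_exchange_identification: Connection closed by remote host error: that message generally means the jump host's sshd dropped the connection (a wrong password handed to sshpass, connection limits, or an sshd restriction), so it is worth testing the ProxyCommand's inner ssh invocation on its own first.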

SSH setting for Mac to create an alias

The SSH address I want to alias is:
ssh -o StrictHostKeyChecking=no username@hostipaddress@jumpServerAdress.com
I am populating ~/.ssh/config on the Mac as:
Host prod
  HostName hostipaddress
  User usrname
  ServerAliveInterval 100
  ProxyJump jumpServerAdress.com
  StrictHostKeyChecking no
  GlobalKnownHostsFile /dev/null
  UserKnownHostsFile /dev/null
When I do ssh prod, it does not let me into the host. It fails with:
channel 0: open failed: connect failed: open failed
stdio forwarding failed
ssh_exchange_identification: Connection closed by remote host
Is there any mistake in my config? Please let me know.
I tried this config, which worked:
Host jump
  HostName jumpServerAdress.com
  User jumpuser

Host prod
  HostName hostipaddress
  User usrname
  ServerAliveInterval 100
  ProxyJump jump
  StrictHostKeyChecking no
  GlobalKnownHostsFile /dev/null
  UserKnownHostsFile /dev/null
This assumes you have set up your SSH keys correctly.
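The separate Host jump block matters because the jump host needs its own user name. If that is the only difference, ProxyJump also accepts an inline [user@]host form, so this single block should behave the same (a sketch using the names from the question):

Host prod
  HostName hostipaddress
  User usrname
  ServerAliveInterval 100
  ProxyJump jumpuser@jumpServerAdress.com
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null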

Complex SSH tunnel

I have a complex SSH tunnel problem I'm trying to solve and can't seem to get it quite right.
Simply put:
ME -> Bastion:22 -> Instance:8500
Bastion uses a different username and key than the instance. I would like to be able to access port 1234 on the instance from localhost:1234.
Right now I have the following:
Host bastion
  HostName bastion.example.com
  ForwardAgent yes
  IdentityFile ~/.ssh/id_ecdsa
  User spanky

Host internal
  ForwardAgent yes
  HostName consul.internal
  IdentityFile ~/.ssh/aws.pem
  ProxyJump bastion
  User ec2-user
  Port 8500
But I don't think I've got it.
The following two commands work, but I'm trying to distill them into a working config:
ssh -L 2222:10.0.0.42:22 bastion.example.com -N -i ~/.ssh/id_ecdsa
ssh -L 8500:localhost:8500 ec2-user#localhost -N -i ~/.ssh/aws.pem -p 2222
With a current version of ssh, you should be able to use:
ssh -L1234:localhost:1234 -J spanky@bastion.example.com ec2-user@consul.internal
From man ssh:
-J destination
Connect to the target host by first making a ssh
connection to the jump host described by destination and then
establishing a TCP forwarding to the ultimate destination from there.
Multiple jump hops may be specified separated by comma characters.
This is a shortcut to specify a ProxyJump configuration directive.
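To distill the two working commands into the config file instead (a sketch; it keeps the differing users and keys, drops the incorrect Port 8500 so SSH connects on 22, and forwards local port 1234 to the same port on the instance):

Host bastion
  HostName bastion.example.com
  User spanky
  IdentityFile ~/.ssh/id_ecdsa

Host internal
  HostName consul.internal
  User ec2-user
  IdentityFile ~/.ssh/aws.pem
  ProxyJump bastion
  LocalForward 1234 localhost:1234

Then ssh -N internal establishes the tunnel, and localhost:1234 reaches the instance.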

vagrant ansible: The following settings don't exist: inventory_file

I've pulled down a git repo and ran vagrant up, but I'm getting this error message:
The following settings don't exist: inventory_file
I've installed VirtualBox, Vagrant, and Ansible on OS X Mountain Lion, but I can't get anything to work.
Also, when I run ansible all -m ping -vvvv I get:
<192.168.0.62> ESTABLISH CONNECTION FOR USER: Grant
<192.168.0.62> EXEC ['ssh', '-tt', '-vvv', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/Users/Grant/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'Port=22', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'ConnectTimeout=10', '192.168.0.62', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-1379790346.17-244145524385544 && chmod a+rx $HOME/.ansible/tmp/ansible-1379790346.17-244145524385544 && echo $HOME/.ansible/tmp/ansible-1379790346.17-244145524385544'"]
192.168.0.62 | FAILED => SSH encountered an unknown error. The output was:
OpenSSH_5.9p1, OpenSSL 0.9.8y 5 Feb 2013
debug1: Reading configuration data /etc/ssh_config
debug1: /etc/ssh_config line 20: Applying options for *
debug1: auto-mux: Trying existing master
debug1: Control socket "/Users/Grant/.ansible/cp/ansible-ssh-192.168.0.62-22-Grant" does not exist
debug2: ssh_connect: needpriv 0
debug1: Connecting to 192.168.0.62 [192.168.0.62] port 22.
debug2: fd 3 setting O_NONBLOCK
debug1: connect to address 192.168.0.62 port 22: Operation timed out
ssh: connect to host 192.168.0.62 port 22: Operation timed out
Any ideas on what is going on will be appreciated :)
For the inventory_file issue, try changing the Vagrantfile to use inventory_path instead. I think this subtle change occurred with Vagrant 1.3.x. If you don't want to modify the Vagrantfile, try using Vagrant 1.2.x.
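For illustration, the rename looks like this inside the Vagrantfile's provisioner block (a sketch; the playbook and inventory paths are placeholders):

config.vm.provision "ansible" do |ansible|
  ansible.playbook = "playbook.yml"
  # inventory_file was renamed to inventory_path around Vagrant 1.3.x
  ansible.inventory_path = "provisioning/inventory"
end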
When running:
ansible all -m ping -vvvv
This will use your current user and will look in the default location for the Ansible hosts inventory (/etc/ansible/hosts).
In order to get it working with a Vagrant-defined VM, you need to use the vagrant user, specify the SSH key to use during the connection, and specify the location of the hosts inventory (provisioning/inventory below; adjust to wherever yours lives), e.g.:
ansible all \
  -i provisioning/inventory \
  -m ping \
  -u vagrant \
  --private-key ~/.vagrant.d/insecure_private_key
Using ~/.vagrant.d/insecure_private_key has been repeated all over the place, but I found that mine was actually using .vagrant/machines/default/virtualbox/private_key, in the path where the Vagrantfile is. They probably changed their key generation to be per-machine rather than per-user, but the documentation does not reflect that yet.
So for the whole command, it would be:
ansible-playbook -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory --private-key=.vagrant/machines/default/virtualbox/private_key -u vagrant playbook.yml
You can check which one yours is using by running vagrant ssh-config and looking at the IdentityFile value.
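The output looks roughly like this (values are illustrative):

$ vagrant ssh-config
Host default
  HostName 127.0.0.1
  User vagrant
  Port 2222
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  IdentityFile .vagrant/machines/default/virtualbox/private_key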
Instead of passing the inventory_file, private_key and ssh_user every time, you can put those into an ansible config file. See my more detailed answer here: https://stackoverflow.com/a/25316963/502457
$ ansible all -i inventory -m ping -u vagrant --private-key ~/.vagrant.d/insecure_private_key
ansible_ssh_private_key_file=/Users/dxiao/.vagrant.d/insecure_private_key | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
ansible_ssh_user=vagrant | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
testserver | success >> {
"changed": false,
"ping": "pong"
}
$ cat inventory
testserver ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
ansible_ssh_user=vagrant
ansible_ssh_private_key_file=/Users/dxiao/.vagrant.d/insecure_private_key
It works.
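As an aside, the two FAILED lines in that run are a quirk of the INI inventory above: each non-blank line is parsed as a host entry, so the stray ansible_ssh_user=vagrant and ansible_ssh_private_key_file=... lines are treated as host patterns of their own. Keeping the host and all of its variables on a single line avoids this:

testserver ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user=vagrant ansible_ssh_private_key_file=/Users/dxiao/.vagrant.d/insecure_private_key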
