Ansible with a bastion host / jump box? [duplicate]

This question already has answers here: SSH to remote server using ansible (2 answers). Closed 6 years ago.
I'm fairly certain I've seen a feature in the Ansible documentation where you can tell it that, to connect to certain hosts, it first needs to tunnel through a DMZ host. However, I can't seem to find any documentation for it outside of some debates on the mailing lists.
I'm aware this can be hacked in with an SSH config, as described at http://alexbilbie.com/2014/07/using-ansible-with-a-bastion-host/, but that's an overcomplicated kludge for an extremely common requirement in any kind of mildly regulated environment.
Is there a way to do this without using custom ssh config includes and voodoo netcat sorcery?

With Ansible 2, this is a built-in option:
How do I configure a jump host to access servers that I have no direct access to?
With Ansible 2, you can set a ProxyCommand in the ansible_ssh_common_args inventory variable. Any arguments specified in this variable are added to the sftp/scp/ssh command line when connecting to the relevant host(s). Consider the following inventory group:
[gatewayed]
foo ansible_host=192.0.2.1
bar ansible_host=192.0.2.2
You can create group_vars/gatewayed.yml with the following contents:
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q user@gateway.example.com"'
Ansible will append these arguments to the command line when trying to connect to any hosts in the group gatewayed. (These arguments are used in addition to any ssh_args from ansible.cfg, so you do not need to repeat global ControlPersist settings in ansible_ssh_common_args.)
Note that ssh -W is available only with OpenSSH 5.4 or later. With older versions, it’s necessary to execute nc %h:%p or some equivalent command on the bastion host.
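For those older OpenSSH versions, a rough group_vars equivalent might look like this (a sketch, not part of the original answer; note that nc takes the host and port as separate arguments):
ansible_ssh_common_args: '-o ProxyCommand="ssh -q user@gateway.example.com nc %h %p"'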

If your jump box needs a private key file to connect (even if it's the same key as the one used for the private subnet instances), use this instead:
ansible_ssh_common_args: '-o ProxyCommand="ssh -i <path-to-pem-file> -W %h:%p -q user@gateway.example.com"'
I spent hours trying to fix a problem that now seems like a simple and obvious solution.

As Ansible uses SSH, you can specify a bastion host in the standard SSH config way:
e.g. to connect through a bastion host for all servers that have a name like "*.amazonaws.com":
Host *.amazonaws.com
ProxyCommand ssh -W %h:%p my_bastion_host.example.org
When ansible or ansible-playbook runs, it will read your SSH configuration file and apply it to connections. You can also point Ansible at a specific SSH configuration file by passing ssh's -F flag through the ANSIBLE_SSH_ARGS environment variable (or ssh_args in ansible.cfg).
You are also able to specify more SSH arguments in the ansible.cfg.
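For example, a minimal ansible.cfg fragment along those lines might be (the config file path is illustrative):
[ssh_connection]
ssh_args = -F /path/to/custom_ssh_config -o ControlMaster=auto -o ControlPersist=60s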

Related

How to repair sshkey pairs after recreating global ssh keys with Ansible

In a nutshell, after deleting then recreating new global ssh keys on a managed host as part of an ansible play, the shared ssh keys between the controller and the host break. I would like to know a superior method to "fix" this issue and regain the original ssh key trust using ansible itself. Unfortunately this will require some explanation.
Basically as a start, right now, I don't have ansible set up when a new image is deployed. To remedy that, I have created a bash script, utilizing expect which nicely and neatly does 2 things on that new managed host:
Creates an ansible account with appropriate sudo permissions
Creates an ssh key pair between the controller and the managed host.
That's it, and that's all; however, it currently requires manual input of the IP of the host it is to run on. We now have a desired state from which Ansible works well via ssh. However, it seems cumbersome at 328 lines of code to check for and do this procedure; more on this later.
The issue starts because the host/server is deployed from an image, so the global keys need to be recreated on each one so that they don't all share the same set. The fix for this part of the issue is a simple two-step procedure (a playbook sketch of these steps follows the list):
Find and delete all ssh_host_* files in the directory /etc/ssh/
Run the command: /usr/bin/ssh-keygen -A to generate new global ssh keys.
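As a playbook sketch, the equivalent of those two steps is roughly (untested; module choices are illustrative, not from the original script):
- name: Remove the imaged host keys
  ansible.builtin.shell: rm -f /etc/ssh/ssh_host_*
  become: true
- name: Generate new host keys
  ansible.builtin.command: /usr/bin/ssh-keygen -A
  become: true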
However, we now have a problem: once the current ssh connection to the managed host is broken, we can no longer connect to our managed hosts because the known_hosts file on the controller now has keys that don't match. If you do nothing else, you are prompted again to verify the remote key because it has "changed", and you can't continue until you do (stopping all playbooks from functioning). Or, if you try to clear the IP out of the known_hosts file on the controller and put it back in, you get the lovely message below:
"changed": false,
"msg": "Failed to connect to the host via ssh: ###########################################################\r\n# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! ***SNIP*** You can use following command to remove the offending key:\r\nssh-keygen -R 10.200.5.4 -f /home/ansible/.ssh/known_hosts\r\nECDSA host key for 10.200.5.4 has changed and you have requested strict checking.\r\nHost key verification failed.",
"unreachable": true
So now I have an issue, and there must be a few commands which I can utilize with ssh-keygen, and/or ssh-keyscan to fix this mess cleanly. However for the life of me I can't figure it out. My only recourse now is to re-run the bash script which initially sets this all up, and replace everything on the controller/host sshkey wise. This seems like overkill, I can't possibly believe that is necessary.
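(For reference, the sort of manual clean-up being described would presumably look something like the following on the controller, reusing the path and IP from the error above; whether this can be wrapped cleanly into a play is exactly what I can't work out:)
ssh-keygen -R 10.200.5.4 -f /home/ansible/.ssh/known_hosts
ssh-keyscan -t ecdsa 10.200.5.4 >> /home/ansible/.ssh/known_hosts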
My only hope now is that someone else has an idea of how to solve this cleanly and permanently without manual intervention. Otherwise, the only thing I can do is set the ansible_ssh_common_args: "-o StrictHostKeyChecking=no" fact and run the commands my script does, but in playbook form. I can't believe there aren't any modules which can accomplish this. I tried the known_hosts module, but either I don't know how to use it properly, or it doesn't have this functionality. (It also has the annoying property of changing my known_hosts file to root ownership, which I must then change back.)
If anyone can help that would be fantastic! Thanks in advance!
The below is not strictly needed, as it's extra text clogging up the works, but it does illustrate how the bash script fixes this issue and may give some insight into a better solution:
In short, it generates an ssh public and private key, attaches the hostname to them, creates an ssh config identity file using a heredoc, puts them in the proper spots, and then copies the public key over to the managed host in question.
The code snippets below show how this is accomplished. This is not the entire script, just the relevant parts:
#HOMEDIR is /home/ansible
#THISHOST is the IP of the managed host in question. Yes, we ONLY use IPs; there is no DNS.
cd "$HOMEDIR"
rm -f $HOMEDIR/.ssh/id_rsa
ssh-keygen -t rsa -f "$HOMEDIR"/.ssh/id_rsa -q -P ""
sudo mkdir -p "$HOMEDIR"/.ssh/rsa_inventory && sudo chown ansible:users "$HOMEDIR"/.ssh/rsa_inventory
cp -p "$HOMEDIR"/.ssh/id_rsa "$HOMEDIR"/.ssh/rsa_inventory/$THISHOST-id_rsa
cp -p "$HOMEDIR"/.ssh/id_rsa.pub "$HOMEDIR"/.ssh/rsa_inventory/$THISHOST-id_rsa.pub
#Heredocs implementation of the ssh config identity file:
cat <<EOT >> /home/ansible/.ssh/config
Host $THISHOST $THISHOST
HostName $THISHOST
IdentityFile ~/.ssh/rsa_inventory/${THISHOST}-id_rsa
User ansible
EOT
#Define the variable earlier, before the expect script is run, so it makes sense in the next snippet:
ssh_key=$( cat "$HOMEDIR"/.ssh/id_rsa.pub )
#Snippet in the expect script where it echoes the public ssh key over to the managed host from the controller.
send "sudo echo '"$ssh_key"' >> /home/ansible/.ssh/authorized_keys\n"
expect -re {:~> *$}
send "sudo chmod 644 /home/ansible/.ssh/authorized_keys\n"
expect -re {:~> *$}
#etc etc, so on and so forth, properly setting attributes on this file.
Now things work with passwordless ssh as they should. Until they are re-ruined by the global ssh key replacement.

Ansible: execute playbook from localhost through bastion host

I am a newbie to Ansible.
We do our deployments via Ansible, and a bastion host is provisioned for them.
The current approach I am using is to clone the Ansible repo on the bastion host and run the commands from that folder.
My question: is it possible to run the Ansible code from the local machine through the bastion?
(basically, avoiding the repo on the bastion host)
Let's say you want to provision a couple of VMs, 172.20.0.10 and 172.20.0.11, in your development environment, going through your 172.20.0.1 bastion. Your inventory looks a bit like this:
[development]
172.20.0.10
172.20.0.11
Then you can edit your ~/.ssh/config and add
Host bastion
Hostname 172.20.0.1
User youruser
Host 172.20.*
ProxyJump bastion
User youruser
Then you can test with ssh 172.20.0.10, which should land you in your first VM. If it works for SSH, Ansible should work the same.
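A quick way to check both layers (illustrative; adjust the inventory path to your setup):
ssh 172.20.0.10
ansible development -i inventory -m ping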
Note: you can run ansible with -vvv to see the exact SSH commands Ansible is running.
Note 2: ProxyJump requires a reasonably recent OpenSSH (7.3 or later).
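If your client is older than that, the equivalent ProxyCommand form should behave the same (a sketch reusing the bastion alias defined above):
Host 172.20.*
ProxyCommand ssh -W %h:%p bastion
User youruser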
Using this data:
remote host: 10.0.1.121
remote user: application_user
ssh key: app_ssh_key
bastion host: 212.34.345.12
bastion user: bastian_user
ssh key: bastian_ssh_key
and using keys for ssh access (store the keys in secure storage, not alongside the Ansible playbook).
As a single ssh command:
$ ssh application_user@10.0.1.121 -i path/to/app_ssh_key \
-o ProxyCommand="ssh -q bastian_user@212.34.345.12 -i path/to/bastian_ssh_key -W %h:%p"
In Ansible you can use two methods:
Method 1
Use variables for an inventory machine/group, in order to have different connection options for different machines/groups.
Add to the inventory file:
[remote-vm]
10.0.1.121
[remote-vm:vars]
ansible_ssh_user=application_user
ansible_ssh_private_key_file=path/to/app_ssh_key
ansible_ssh_common_args='-o ProxyCommand="ssh -q bastian_user@212.34.345.12 -i path/to/bastian_ssh_key -W %h:%p"'
Method 2
Single configuration valid for all inventory machines.
Add to/replace in ansible.cfg:
[defaults]
remote_user = application_user
[ssh_connection]
ssh_args=-i path/to/app_ssh_key -o ProxyCommand="ssh -q bastian_user@212.34.345.12 -i path/to/bastian_ssh_key -W %h:%p"
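With either method in place, a quick connectivity check could be (illustrative; adjust the inventory path):
ansible all -i inventory -m ping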

Ansible root/password login

I'm trying to use Ansible to provision a server and the first thing I want to do is test the ssh access. If I use ssh directly I can log in fine...
ssh root@server
root@backups's password:
If I use Ansible I can't...
user#ansible:~$ ansible backups -m ping --user root --ask-pass
SSH password:
backups | UNREACHABLE! => {
"changed": false,
"msg": "Invalid/incorrect password: Permission denied, please try again.",
"unreachable": true
}
The password I'm using is correct - 100%.
Before anyone suggests using SSH keys - that's part of what I'm looking to automate.
The issue was caused by the getting started documentation setting a trap.
It instructs you to create an inventory file with servers, use ansible all -m ping to ping those servers and to use the -u switch to change the remote user.
What it doesn't tell you is that if, like me, not all your servers have the same user, the advised way to specify a user per server is in the inventory file...
server1 ansible_connection=ssh ansible_user=user1
server2 ansible_connection=ssh ansible_user=user2
server3 ansible_connection=ssh ansible_user=user3
I was provisioning a server, and the only user I had available to me at the time was root. But trying to do ansible server3 --user root --ask-pass failed to authenticate. After a couple of wasted hours I discovered the --user switch is only effective if the inventory file doesn't specify a user. This is intended precedence behaviour. There are a few gripes about this in GitHub issues, but a firm 'intended behaviour' mantra is the response you get if you challenge it. It seems to go against the grain to me.
I subsequently discovered that you can specify -e 'ansible_ssh_user=root' to override the inventory user - I will see about creating a pull request to improve the docs.
While you're here, I might be able to save you some time with some further gotchas. This behaviour is the same if you use playbooks. In there you can specify a remote_user but this isn't honoured - presumably also because of precedence. Again you can override the inventory user with -e 'ansible_ssh_user=root'
Finally, until I realised Linode could provision a server with an SSH key deployed, I was trying to pass the root password to an ad-hoc command. You have to encrypt the password, which gives you a long string that is almost certainly going to include $ characters, which bash will treat as substitutions. Make sure you escape these.
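For example (placeholder user name and hash): single-quote the crypted string, or escape every $, so bash doesn't expand it:
ansible backups -m user -a 'name=deploy password=$6$somesalt$somelonghash' --ask-pass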
The lineinfile module behaviour isn't intuitive either.
Write your hosts file like this. It will work.
192.168.2.4
192.168.1.4
[all:vars]
ansible_user=azureuser
Then execute the following command: ansible-playbook --ask-pass -i hosts main.yml --check to do a dry run before applying the configuration.
Also create an ansible.cfg file and paste the following contents there:
[defaults]
inventory = hosts
host_key_checking = False
Note: all three files (main.yml, ansible.cfg, and hosts) must be in the same folder.
Also, the code is tested for devices connected to a private network using Private IPs. I haven't checked using Public IPs. If using Azure/AWS, create a test VM and connect it to the VPN of the other devices.
Note: you need to install the sshpass package to be able to authenticate with a password.
For Ubuntu: apt-get install sshpass

Can an ~/.ssh/config file use variables?

I am writing an SSH config file and want to perform a bit of logic. For example:
Host myhost1
ProxyCommand ssh -A {choose randomly between [bastion_host1] and [bastion_host2]} -W %h:%p
Is it possible to achieve the above using (bash?) variables? Thanks!
Your ProxyCommand can be a shell script.
host myhost1
ProxyCommand $HOME/bin/selecthost %h %p
And then in ~/bin/selecthost:
#!/usr/bin/env bash
# Pick one bastion at random, then tunnel stdin/stdout to the requested host:port.
hosts=(bastion1 bastion2)
onehost=${hosts[$RANDOM % ${#hosts[@]}]}
# -W is only added when a port argument was actually passed in.
ssh -x -a -q ${2:+-W $1:$2} $onehost
Untested. Your mileage may vary. May contain nuts.
Per comments, I've also tested the following, and it works nicely:
host myhost1 myhost2
ProxyCommand bash -c 'hosts=(bastion1 bastion2); ssh -xaqW%h:22 ${hosts[$RANDOM % ${#hosts[@]}]}'
Of course, this method doesn't allow you to specify a custom port per host. You could add that to the logic of a separate shell script if your SSH config matches multiple hosts in the same host entry.
In ~/.ssh/config you cannot have much logic, and no Bash. The manual for this file is man ssh_config, and it makes no mention of such a feature.
What you can do is create a script that contains the logic you need, and make your ssh configuration call that script.
Something along the lines of:
ProxyCommand sudo /root/bin/ssh-randomly.sh [bastion_host1] [bastion_host2]
And write a Bash script /root/bin/ssh-randomly.sh to take two hostname parameters, select one of them randomly, and run the real ssh command with the appropriate parameters.
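A sketch of what that script might look like (hypothetical; it assumes you also append %h and %p to the ProxyCommand so the script knows the final target):
#!/usr/bin/env bash
# Hypothetical /root/bin/ssh-randomly.sh: pick one of the two bastions at random.
# Assumed invocation: ssh-randomly.sh bastion_host1 bastion_host2 %h %p
bastions=("$1" "$2")
exec ssh -q -W "$3:${4:-22}" "${bastions[$RANDOM % 2]}"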
No; .ssh/config is not processed by any outside program. You'll need a shell function along the lines of
ssh () {
    # Randomly pick one of the two bastions, then call the real ssh through it.
    (( $RANDOM % 2 )) && bastion=bastion_host1 || bastion=bastion_host2
    command ssh -A "$bastion" "$@"
}
This can be handled within ssh config by using a helper app. For example,
Match host myhost exec "randprog"
    hostname host1
Host myhost
    hostname host2
and then randprog will randomly exit with status 1 or 0 (0 counts as a match for the first block, giving host1).
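A hypothetical randprog can be as small as the following; an exit status of 0 counts as success for the exec criterion:
#!/usr/bin/env bash
exit $(( RANDOM % 2 ))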

Ansible ignores ansible_ssh_extra_args in inventory or group_vars

I am trying to make Ansible connect to a machine on the local network which needs some extra options passed in SSH invocations. I tried ansible_ssh_extra_args in the inventory, group vars, and host vars, but it is ignored. Here is an example of my inventory file:
[dev]
192.168.10.15
[dev:vars]
ansible_ssh_private_key_file="keys/deploy-myserver"
ansible_ssh_extra_args="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
ansible_ssh_user="deploy"
How can I make ansible SSH connections to a specific host use my own custom SSH options?
Those were added in Ansible 2.0, so unless you're running the beta, they won't work yet...
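Until you can upgrade, a rough workaround (sketch) is to put those options in ansible.cfg instead, where they apply to every SSH connection rather than just that group:
[ssh_connection]
ssh_args = -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null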
