Proxy tunnel through multiple systems with Ruby Net::SSH

I need some suggestions on how to use the Ruby Net::SSH and Net::SCP gem to proxy through several systems in order to execute commands or copy files.
It's very similar (if not almost exactly the same) to this previous post of mine about basic ssh from the Linux command line:
How to script multiple ssh and scp commands to various systems
For example, LOCAL is my local system.
System A is a second system connected to LOCAL
System B is a third system connected to System A only. Also, System B is configured to only allow access from System A by way of its ssh key.
For normal ssh from the command line, I have my .ssh/config file set up in this way:
Host systemA
HostName 192.168.0.10
User A-user
Host systemB
ProxyCommand ssh -e none systemA exec /bin/nc %h %p 2>/dev/null
HostName 192.168.0.11
User B-user
IdentityFile ~/.ssh/systemA_id_dsa
From this point, as long as my pub key is in the authorized_keys of systemA (let's assume it always will be), and systemA's pub key is in the authorized_keys of systemB (same assumption), the following works seamlessly:
ssh systemB
I would like to implement this exact behavior in Ruby. I have some code similar to the following:
require 'net/ssh'
require 'net/ssh/proxy/command'
str = 'ssh -l A-user -i /home/A-user/.ssh/id_dsa -e none 192.168.0.10 exec /bin/nc %h %p 2>/dev/null'
proxy = Net::SSH::Proxy::Command.new(str)
Net::SSH.start('192.168.0.11', 'B-user', :proxy => proxy) do |ssh|
ssh.exec! "ls -lA"
end
Unfortunately, this isn't working. I get an authentication failure.
~/.rvm/gems/ruby-1.9.3-p327/gems/net-ssh-2.6.2/lib/net/ssh.rb:201:in `start': root (Net::SSH::AuthenticationFailed)
What am I missing here?

Did you verify that your proxy command actually works on its own from the command line? It seems you might have mixed the order of the identity keys.
SystemA already knows you(?), so based on the config setup you posted you should not need to specify an identity for it.
Instead, it seems you need to forward the identity of SystemA to SystemB in the start command:
Net::SSH.start('192.168.0.11', 'B-user',
:proxy => proxy,
:keys => [ "~/.ssh/systemA_id_dsa" ] ) do |ssh|
ssh.exec! "ls -lA"
end
And then just skip the identity file in the proxy setup command.
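Putting the two changes together, a minimal sketch of the corrected code (same hosts and key paths as above; untested against your setup):
require 'net/ssh'
require 'net/ssh/proxy/command'

# Hop through systemA with your default identity; no -i needed here.
proxy = Net::SSH::Proxy::Command.new(
  'ssh -l A-user -e none 192.168.0.10 exec /bin/nc %h %p 2>/dev/null'
)

# Authenticate to systemB with systemA's key, passed via :keys.
Net::SSH.start('192.168.0.11', 'B-user',
               :proxy => proxy,
               :keys => ['~/.ssh/systemA_id_dsa']) do |ssh|
  puts ssh.exec!('ls -lA')
end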

I solved this problem with Net::SSH, but without the need for external configuration files. Net::SSH::Gateway was also helpful in my solution. I wrapped the solution into a gem called tunneler.
require "tunneler"
# Create SSH tunnel
tunnel = Tunneler::SshTunnel.new(bastion_user, bastion_host, {:keys => [bastion_key]})
# Establish remote connection
destination_host_connection = tunnel.remote(destination_user, destination_host, {:keys => [destination_key]})
# Upload file to destination host via tunnel
destination_host_connection.scp(local_file_path, destination_file_path)
# Execute command on destination host via tunnel
response = destination_host_connection.ssh(command)
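For comparison, if you would rather not pull in the gem, here is a minimal sketch of the same two-hop pattern using Net::SSH::Gateway directly (host names, users, and key paths are placeholders):
require 'net/ssh/gateway'

# First hop: open a gateway through the bastion.
gateway = Net::SSH::Gateway.new('bastion.example.com', 'bastion_user')

# Second hop: run a command on the destination through the gateway.
gateway.ssh('destination.internal', 'dest_user',
            :keys => ['~/.ssh/destination_key']) do |ssh|
  puts ssh.exec!('ls -lA')
end

gateway.shutdown!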

Related

How to repair sshkey pairs after recreating global ssh keys with Ansible

In a nutshell: after deleting and then recreating the global ssh keys on a managed host as part of an Ansible play, the shared ssh keys between the controller and the host break. I would like to know a better way to "fix" this issue and regain the original ssh key trust using Ansible itself. Unfortunately this will require some explanation.
As a starting point, Ansible is not yet set up when a new image is deployed. To remedy that, I have created a bash script, using expect, which neatly does two things on the new managed host:
Creates an ansible account with appropriate sudo permissions
Creates an ssh key pair between the controller and the managed host.
That's it; however, it currently requires manual input of the IP of the host to run against. This gives us a desired state from which Ansible works well via ssh. Still, at 328 lines of code it seems a cumbersome way to check for and perform this procedure; more on this later.
The issue arises because the host/server is deployed from an image, so the global ssh host keys must be recreated on each one to keep them from all sharing the same set. The fix for that part of the issue is a simple two steps:
Find and delete all files matching ssh_host_* in the directory /etc/ssh/
Run the command: /usr/bin/ssh-keygen -A to generate new global ssh keys.
However, we now have a problem: once the current ssh connection to the managed host is broken, we can no longer connect, because the known_hosts file on the controller now has keys that don't match. If you do nothing else, you are prompted again to verify the remote key, since it has "changed", and you can't continue until you do (stopping all playbooks from functioning). Or, if you try to clear the IP out of the controller's known_hosts file and put it back in, you get the lovely message below:
"changed": false,
"msg": "Failed to connect to the host via ssh: ###########################################################\r\n# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! ***SNIP*** You can use following command to remove the offending key:\r\nssh-keygen -R 10.200.5.4 -f /home/ansible/.ssh/known_hosts\r\nECDSA host key for 10.200.5.4 has changed and you have requested strict checking.\r\nHost key verification failed.",
"unreachable": true
So now I have an issue. There must be some combination of ssh-keygen and/or ssh-keyscan commands that fixes this mess cleanly, but for the life of me I can't figure it out. My only recourse is to re-run the bash script that initially sets this all up and replace everything ssh-key-wise on the controller/host. That seems like overkill; I can't believe it is necessary.
My only hope now is that someone else has an idea how to solve this cleanly and permanently without manual intervention. Otherwise, the only thing I can do is set the ansible_ssh_common_args: "-o StrictHostKeyChecking=no" fact and run the commands my script does, but in playbook form. I can't believe there aren't any modules which can accomplish this. I tried the known_hosts module, but either I don't know how to use it properly or it doesn't have this functionality. (It also has the annoying property of changing my known_hosts file to root ownership, which I must then change back.)
If anyone can help that would be fantastic! Thanks in advance!
The following is not strictly needed, but it illustrates how the bash script fixes this issue and may give some insight into a better solution:
In short, it generates an ssh public/private key pair, attaches the hostname to them, creates an ssh config identity entry using a heredoc, puts the files in the proper spots, and then copies the public key over to the managed host in question.
The code snippets below show how this is accomplished. This is not the entire script, just the relevant parts:
#HOMEDIR is /home/ansible on the controller.
#THISHOST is the IP of the managed host in question. Yes, we ONLY use IPs, there is no DNS.
cd "$HOMEDIR"
rm -f $HOMEDIR/.ssh/id_rsa
ssh-keygen -t rsa -f "$HOMEDIR"/.ssh/id_rsa -q -P ""
sudo mkdir -p "$HOMEDIR"/.ssh/rsa_inventory && sudo chown ansible:users "$HOMEDIR"/.ssh/rsa_inventory
cp -p "$HOMEDIR"/.ssh/id_rsa "$HOMEDIR"/.ssh/rsa_inventory/$THISHOST-id_rsa
cp -p "$HOMEDIR"/.ssh/id_rsa.pub "$HOMEDIR"/.ssh/rsa_inventory/$THISHOST-id_rsa.pub
#Heredocs implementation of the ssh config identity file:
cat <<EOT >> /home/ansible/.ssh/config
Host $THISHOST
HostName $THISHOST
IdentityFile ~/.ssh/rsa_inventory/${THISHOST}-id_rsa
User ansible
EOT
#Define the variable before the expect script is run so the next snippet makes sense:
ssh_key=$( cat "$HOMEDIR"/.ssh/id_rsa.pub )
#Snippet from the expect script where it echoes the public ssh key over to the managed host from the controller.
send "sudo echo '"$ssh_key"' >> /home/ansible/.ssh/authorized_keys\n"
expect -re {:~> *$}
send "sudo chmod 644 /home/ansible/.ssh/authorized_keys\n"
expect -re {:~> *$}
#etc. etc., so on and so forth, properly setting attributes on this file.
Now things work with passwordless ssh as they should. Until they are re-ruined by the global ssh key replacement.

How do I run a local command before starting SSH connection and after SSH connection closes?

Essentially what I want to do is run a Bash script I created that switches WiFi SSIDs before starting the SSH connection, and after the SSH connection closes.
I have added this to ~/.ssh/config by setting ProxyCommand to ./run-script; ssh %h:%p, but by doing this I suspect it would ignore any parameters I pass when I run the ssh command. Also, I have no idea how to get the script to run again when the SSH connection closes.
For OpenSSH you can specify a LocalCommand in your ssh config (~/.ssh/config).
But for that to work you also need the option PermitLocalCommand set to yes (by default it is no); it can go in your own config or system-wide in /etc/ssh/ssh_config.
It gets executed on the local machine after authenticating but before the remote shell is started.
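For example, a minimal entry might look like this (host alias, address, and script path are placeholders):
Host myhost
HostName 192.0.2.10
PermitLocalCommand yes
LocalCommand /path/to/run-script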
There appears to be no (easy) way of executing something after the connection has been closed, though.
Assuming that it is not possible to implement a wrapper for 'ssh' (using an alias or some other method), you can implement the following in the ProxyCommand.
It is important to note that there is no protection against multiple invocations of 'ssh': the WiFi may already be connected during a given invocation, and likewise it may have to stay active when a given ssh terminates because of other pending connections.
A possible implementation of the proxy script is:
ProxyCommand /path/to/run-script %h %p
#! /bin/sh
pre-command # connect to WIFI
nc -N "$1" "$2" # Tunnel, '%h' and '%p' are passed in
post-command # Disconnect WIFI
You do not want to use a plain ssh in the proxy script, as that would translate into another call to 'run-script'. Also note that all options provided to the original ssh will be handled by the initial ssh session, which rides over the proxied 'nc' tunnel.

Secure copy over two IPs on the same network to the local machine [duplicate]

I wonder if there is a way for me to SCP the file from remote2 host directly from my local machine by going through a remote1 host.
The networks only allow connections to remote2 host from remote1 host. Also, neither remote1 host nor remote2 host can scp to my local machine.
Is there something like:
scp user1@remote1:user2@remote2:file .
First window: ssh remote1, then scp remote2:file .
Second shell: scp remote1:file .
First window: rm file; logout
I could write a script to do all these steps, but if there is a direct way, I would rather use it.
Thanks.
EDIT: I am thinking of something like opening SSH tunnels, but I'm confused about which value to put where.
At the moment, to access remote1, i have the following in $HOME/.ssh/config on my local machine.
Host remote1
User user1
Hostname localhost
Port 45678
Once on remote1, access to remote2 is via standard local DNS and port 22. What should I put on remote1 and/or change on localhost?
I don't know of any way to copy the file directly in one single command, but if you can concede to running an SSH instance in the background to just keep a port forwarding tunnel open, then you could copy the file in one command.
Like this:
# First, open the tunnel
ssh -L 1234:remote2:22 -p 45678 user1@remote1
# Then, use the tunnel to copy the file directly from remote2
scp -P 1234 user2@localhost:file .
Note that you connect as user2@localhost in the actual scp command, because it is on port 1234 on localhost that the first ssh instance is listening to forward connections to remote2. Note also that you don't need to run the first command for every subsequent file copy; you can simply leave it running.
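Since the parent question is about Ruby: the same two-step idea can be sketched with Net::SSH::Gateway's port forwarding (net-ssh reads your ~/.ssh/config, so the remote1 alias resolves as above; key and password handling omitted):
require 'net/ssh/gateway'
require 'net/scp'

# Forward a local port through remote1 to remote2's sshd.
gateway = Net::SSH::Gateway.new('remote1', 'user1')

gateway.open('remote2', 22, 1234) do |local_port|
  # Copy the file over the forwarded port, just like the scp -P above.
  Net::SCP.download!('localhost', 'user2', 'file', 'file',
                     :ssh => { :port => local_port })
end

gateway.shutdown!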
Double ssh
Even in your complex case, you can handle file transfer using a single command line, simply with ssh ;-)
And this is useful if remote1 cannot connect to localhost:
ssh user1@remote1 'ssh user2@remote2 "cat file"' > file
tar
But you lose file properties (ownership, permissions...).
However, tar is your friend to keep these file properties:
ssh user1@remote1 'ssh user2@remote2 "cd path2; tar c file"' | tar x
You can also compress to reduce network bandwidth:
ssh user1@remote1 'ssh user2@remote2 "cd path2; tar cj file"' | tar xj
And tar also lets you transfer a directory recursively through basic ssh:
ssh user1@remote1 'ssh user2@remote2 "cd path2; tar cj ."' | tar xj
ionice
If the file is huge and you do not want to disturb other important network applications, you will miss the network throughput limiting provided by the scp and rsync tools (e.g. scp -l 1024 user@remote:file does not use more than 1 Mbit/s).
But, a workaround is using ionice to keep a single command line:
ionice -c2 -n7 ssh u1@remote1 'ionice -c2 -n7 ssh u2@remote2 "cat file"' > file
Note: ionice may not be available on old distributions.
This will do the trick:
scp -o 'Host remote2' -o 'ProxyCommand ssh user@remote1 nc %h %p' \
user@remote2:path/to/file .
To SCP the file from the host remote2 directly, add the two options (Host and ProxyCommand) to your ~/.ssh/config file (see also this answer on superuser). Then you can run:
scp user@remote2:path/to/file .
from your local machine without having to think about remote1.
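In config-file form (a sketch with the same placeholder names), the two options become:
Host remote2
ProxyCommand ssh user@remote1 nc %h %p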
With OpenSSH version 7.3 and up it is easy: use the ProxyJump option in the config file.
# Add to ~/.ssh/config
Host bastion
Hostname bastion.client.com
User userForBastion
IdentityFile ~/.ssh/bastion.pem
Host appMachine
Hostname appMachine.internal.com
User bastion
ProxyJump bastion # new ProxyJump feature in OpenSSH 7.3
IdentityFile ~/.ssh/appMachine.pem # no need to copy the pem file to the bastion host
Commands to run to login or copy
ssh appMachine # no need to specify any tunnel.
scp helloWorld.txt appMachine:. # copy without an intermediate jumphost/bastion copy
Of course, you can specify the bastion jump host with the "-J" option to the ssh command if it is not configured in the config file.
Note that scp does not seem to support the "-J" flag as of this writing (I could not find it in the man pages; however, the scp above works with the config file setting).
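Tying this back to the parent question: recent versions of the net-ssh gem expose the same feature as Net::SSH::Proxy::Jump, so a rough Ruby sketch of the setup above would be:
require 'net/ssh'
require 'net/ssh/proxy/jump'

# The equivalent of ProxyJump / ssh -J: hop through the bastion.
proxy = Net::SSH::Proxy::Jump.new('userForBastion@bastion.client.com')

Net::SSH.start('appMachine.internal.com', 'bastion',
               :proxy => proxy,
               :keys => ['~/.ssh/appMachine.pem']) do |ssh|
  puts ssh.exec!('hostname')
end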
There is a recently added option in scp for exactly this job, and it is very convenient: -3.
TL;DR: for a current host that already has authentication set up in the ssh config files, just do:
scp -3 remote1:file remote2:file
Your scp must be a recent version.
All the other techniques mentioned require you to set up authentication from remote1 to remote2 or vice versa, which is not always a good idea.
The -3 argument means you want to move files between two remote hosts using the current host as an intermediary, and this host does the authentication to both remote hosts, so they don't need access to each other.
You just have to set up authentication in the ssh config files, which is fairly easy and well documented, and then run the command in the TL;DR.
The source for this answer is https://superuser.com/a/686527/713762
This configuration works nicely for me:
Host jump
User username
Hostname jumphost.yourorg.intranet
Host production
User username
Hostname production.yourorg.intranet
ProxyCommand ssh -q -W %h:%p jump
Then the command
scp myfile production:~
copies myfile to the production machine.
A simpler way:
scp -o 'ProxyJump your.jump.host' /local/dir/myfile.txt remote.internal.host:/remote/dir

Concatenating a local file with a remote one

These three lines of code require authentication twice. I don't yet have password-less authentication set up on this server. In fact, these lines of code are to copy my public key to the server and concatenate it with the existing file.
How can I re-write this process with a single ssh command that requires authentication only once?
scp ~/local.txt user@server.com:~/remote.txt
ssh -l user user@server.com
cat ~/remote.txt >> ~/otherRemote.txt
I've looked into the following possibilities:
command sed
operator ||
operator &&
shared session: Can I use an existing SSH connection and execute SCP over that tunnel without re-authenticating?
I also considered placing local.txt at an openly accessible location, for example, with a public dropbox link. Then if cat could accept this as an input, the scp line wouldn't be necessary. But this would also require an additional step and wouldn't work in cases where local.txt cannot be made public.
Other references:
Using a variable's value as password for scp, ssh etc. instead of prompting for user input every time
https://superuser.com/questions/400714/how-to-remotely-write-to-a-file-using-ssh
You can redirect the content to the remote, and then use commands on the remote to do something with it. Like this:
ssh user@server.com 'cat >> otherRemote.txt' < ~/local.txt
The remote cat command will receive as its input the content of ~/local.txt, passed to the ssh command by input redirection.
Btw, as @Barmar pointed out, specifying the username with both -l user and user@ was also redundant in your example.
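In the spirit of the parent question, the same single-authentication trick works from Ruby too, since requiring net/scp adds an scp helper to a Net::SSH session; a minimal sketch with the question's paths:
require 'net/ssh'
require 'net/scp'

# One session, hence one authentication, for both the copy and the append.
Net::SSH.start('server.com', 'user') do |ssh|
  ssh.scp.upload!(File.expand_path('~/local.txt'), 'remote.txt')
  ssh.exec!('cat remote.txt >> otherRemote.txt')
end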

SSH connection with Ruby without username using `authorized_keys`

I have set up key authentication with a server via an authorized_keys push, so I can run ssh 192.168.1.101 from my system and connect to the server.
Now I tried it with the Net::SSH library, but it didn't work for me:
Net::SSH.start("192.168.1.209", username) do |ssh|
  @output = ssh.exec!("ls -l")
end
because this requires the username field, and I want to connect without a username.
So I tried this:
system('ssh 192.168.1.209 "ls -l"')
It runs the command for me, but I want the output in a variable like @output in the first example. Is there any command, gem, or other way to get this?
Any ssh connection requires a username. The default is either your system account name or whatever's specified in .ssh/config for that host you're connecting to.
Your current username should be set as ENV['USER'] if you need to access that.
If you're curious what username is being used for that connection, try ssh -v, the verbose mode that explains what's going on.
You can pass parameters into %x[] as follows:
dom = 'www.ruby-rails.in'
@whois = %x[whois #{dom}]
Backquotes work very similarly to the system function, but with an important difference: the shell command enclosed in backquotes is executed with its standard output captured as the result.
So, the following statement executes ssh 192.168.1.209 "ls -l" and puts the directory listing into the @output variable:
@output = `ssh 192.168.1.209 "ls -l"`
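If you would still rather use Net::SSH, a workable sketch is to pass the same default username the ssh command line would use (this assumes, as in your setup, that the local and remote account names match):
require 'net/ssh'

output = nil
# ENV['USER'] is the local login name, which plain `ssh host` defaults to.
Net::SSH.start('192.168.1.209', ENV['USER']) do |ssh|
  output = ssh.exec!('ls -l')
end
puts output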
