Editing files through multi-hop SSH in Sublime Text 3 - macOS

I was wondering if it is possible to edit a file with Sublime Text 3 through a multi-hop SSH tunnel. In my particular case I have my Mac (let's call it A) and two Linux machines: B and C. The files are located on C, and I access them from my machine like this:
A -> B -> C
I found these articles that can help, but they only talk about editing files on B:
How to open remote files in sublime text 3
Editing files remotely via SSH on SublimeText 3
According to these articles, I can edit files on B by installing rsub on the remote machine and a plugin in Sublime on A. I tried to do that on C (yes, I know it is not so useful, but who knows) but I got this error:
user@remote-C:~$ rsub
/usr/local/bin/rsub: connect: Connection refused
/usr/local/bin/rsub: line 327: /dev/tcp/localhost/52698: Connection refused
Unable to connect to TextMate on localhost:52698
I would be happy to know if there is a way to achieve this. Thanks in advance.

I will answer my own question. The solution is to set up SSH tunnelling from A to C with B in between, using the ProxyCommand directive in the SSH config file at ~/.ssh/config.
I added these lines:
Host myMachineC
HostName NAME_OF_MACHINE_C
ProxyCommand ssh USER_IN_B@NAME_OF_MACHINE_B nc %h %p
User USER_IN_C
RemoteForward 52698 localhost:52698 # this is required by rsub
Host defines an alias for the real hostname, which is written after the HostName directive. ProxyCommand is a command that is executed when you try to log in to myMachineC. nc is a command that...
...by default creates a TCP socket either in listening mode (server socket) or a socket that is used in order to connect to a server (client mode) [1]
Now the machine C is accessible from A by only typing:
$ ssh myMachineC
It is recommended that you set up password-less logins beforehand. To achieve this you need to have installed the public key from your home computer into the ~/.ssh/authorized_keys of each host along the way. [2]
In conclusion: with this setup, there will be a normal SSH connection to the intermediary machine B, and then nc will be used to extend the connection to C. Thanks to this tunnelling, the client can act as if the connection were direct. That is exactly what rsub needs.
Then, you should install and use rsub as normal and it will work like a charm.
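For example, a session might look like this once the plugin is listening on A (the file path is just an illustration):
$ ssh myMachineC
user@remote-C:~$ rsub ~/projects/notes.txt
The file then opens in Sublime Text on A, and saving it writes the changes back to C.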
I tried this on OS X Yosemite, but it should work on almost any *nix system. I hope it will be useful for you.
[1] Netcat Explanation and Examples
[2] Transparent Multihop in SSH

The accepted solution didn't work for me because I use host B as an SSH server where my SSH keys are stored. Also, my SSH keys are passphrase-protected, so the ProxyCommand approach won't work.
But there's an easier way to do this.
You can add the following to the .ssh/config file on host B:
Host *
RemoteForward 52698 localhost:52698
You can define a specific host or give the * wildcard for all hosts. This will forward port 52698 for all SSH sessions from Host B.
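For example, with the placeholder names from the question, a chained session would carry the forward one hop at a time (port 52698 is rsub's default):
# on A: forward port 52698 to B (via -R here, or a RemoteForward line in A's config)
$ ssh -R 52698:localhost:52698 USER_IN_B@NAME_OF_MACHINE_B
# on B: the Host * entry above extends the forward to C
$ ssh USER_IN_C@NAME_OF_MACHINE_C
# on C: rsub can now reach Sublime Text back on A (path is an example)
$ rsub ~/some/file.txt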

Related

SSH-FS to remote server on another SSH remote server

So, there exists the first SSH server, called A for simplicity, and the files and workspace I want to get to are on another SSH server, called B. Problem is, B is only accessible by SSH'ing to it from A.
So if I were to do this normally, I'd ssh into A, and from there ssh into B, or just run ssh -t A "ssh B", which becomes mildly inefficient if I were to code on B using vim.
So I don't know exactly how to sshfs B onto my local machine, but I can sshfs A. Sshfs'ing B onto A isn't possible, as A doesn't have it installed. Is there a way to sshfs B?
A and B are both running Ubuntu 18.04.3 LTS and my computer is a MacBook Pro 2015.
I've tried the SSH-FS extension with VSCode. It only connects to A.
I also tried the Remote-SSH extension, and again, it only gets as far as A. I even used -t in the connection command, but it doesn't seem to make any changes to the Remote-SSH config file.
I also tried ssh-fs while tunneled into A. No results there either.
Sorry for the trouble. This is a really niche problem.
You must use ~/.ssh/config to be able to connect to B in one command from your MacBook Pro.
An example:
Host serverb.via.servera
HostKeyAlias serverb
User account_on_b
ProxyCommand ssh account_on_a@servera -W serverb:22
Before using sshfs, you must test the setting by running:
ssh serverb.via.servera
and if you get a shell, you can then run
sshfs serverb.via.servera:/yourdir/ /tmp/localdir/
Remarks:
serverb.via.servera is an arbitrary string that you must use instead of the hostname
account_on_a and account_on_b must be replaced by the logins you use on each server
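As an aside (not part of the original answer): if your client's OpenSSH is version 7.3 or newer, the ProxyJump directive achieves the same with less typing; a sketch using the same placeholder names:
Host serverb.via.servera
HostKeyAlias serverb
User account_on_b
ProxyJump account_on_a@servera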

iTerm2 - How to pass environment-variables when started via url-scheme?

Most of you certainly know the macOS terminal emulator iTerm2.
I want to pass the environment variables which I've set/saved in ~/.ssh/environment to iTerm2 when it (the profile) is configured as the default handler for the ssh:// URL scheme.
Normal example ← works
You open the app iTerm2
Enter your ssh-command:
ssh hostname
It connects to your server, and with the command printenv you can see the environment variables you've put into your local ~/.ssh/environment file.
URL-Scheme example ← doesn't work
Some external application (like the alfred-ssh workflow from deanishe) reads your .ssh/config file to make it easier to reach all your configured hosts quickly, and then opens them via the URL scheme.
Because iTerm2 is configured as the handler for the ssh scheme, it starts and connects straight to the server.
You enter printenv and don't find your environment variables.
You realize that iTerm2 started instantly and didn't load the local environment variables. Okay, I didn't realize this at the beginning and created an issue for the workflow I used. But the developer is right: iTerm2 starts and isn't able to load the environment variables.
I've been searching for a solution for several weeks, but haven't been able to solve this problem yet. That's why I'm asking here now.
My local SSH configuration (cleaned)
Content of ~/.ssh/environment is:
echo "RMATE_HOST=localhost" > sshenv
echo "RMATE_PORT=52699" > sshenv
Content of ~/.ssh/config is:
Host *
AddKeysToAgent yes
ServerAliveInterval 120
TCPKeepAlive no
UseKeychain yes
SendEnv RMATE_*
RemoteForward 52699 localhost:52699
Host personal
HostName personal.tld
IdentityFile ~/.ssh/keyFileName1
User user
Port 22
Host work
HostName business.tld
IdentityFile ~/.ssh/keyFileName2
User user
Port 22
And yeah, indeed! I just want to pass my RMATE variables to the servers via the workflow with Alfred ;-)
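One possible workaround, offered here only as a sketch and not from the original thread: if your client OpenSSH is 7.8 or newer, the SetEnv directive in ~/.ssh/config sets the variables in the config file itself, so it does not depend on whatever environment iTerm2 was launched with (the server still has to whitelist them with AcceptEnv RMATE_* in its sshd_config):
Host *
SetEnv RMATE_HOST=localhost RMATE_PORT=52699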

scp between two terminal windows (or multihop scp)

I regularly have to connect to several systems via ssh using multiple hops. It also happens often that I then want to copy a file from the destination system to my local system, or the other way around, in a simple way. (My current workflow is to copy the file to an external location both machines can see, which saves me a few hops, or, if the file is not binary, to cat it and copy/paste it into the other terminal window.)
Is there an easy way to do such a thing?
I am using OS X and iTerm2 (obviously I wouldn't mind changing the latter).
So the connection is something like (local machine) -> (portal A) -> (machine B) -> (portal C) -> (machine D)
So I would like to copy files from machine A to machine D in a simple way (without copying the file via all hops or creating four tunnels).
It's not quite what you're asking for, but there are some tricks you can play with SSH proxying that simplify this sort of thing enormously. The first thing to get familiar with is proxying multihop SSH connections over netcat. If you have OpenSSH version 5.4 or later on the various hosts, add something like this to your ~/.ssh/config:
Host B
ProxyCommand ssh A -W %h:%p
Host C
ProxyCommand ssh B -W %h:%p
Host D
ProxyCommand ssh C -W %h:%p
If any of the intermediates don't have a new enough version, but do have netcat (nc), you can use something like this instead:
Host D
ProxyCommand ssh C nc %h %p
This'll make ssh D automatically open a tunnel to C to run the connection over, which will automatically open a tunnel to B, and so on. You'll have to authenticate 4 times (to A, then B, etc.) unless you have public-key authentication set up, but other than that it's transparent, which means you can use it with sftp D, scp D:/path/to/file, etc.
Now, there's one significant limitation on this for what you describe. You can certainly copy files from e.g. A to D like this:
scp A:/path/to/file D:/path/to/file
...but the file's contents will travel the path A -> your computer -> A -> B -> C -> D. They won't be stored anywhere on that path, but if the network link between you and A is slow (e.g. you're working from home), this'll be a bottleneck. In this case, it'd be best to copy the ~/.ssh/config entries for C and D onto computer A, ssh into A normally, then use scp /path/to/file D:/path/to/file and cut out the extra hops.
BTW, if you want to get fancy, you can add this to your ~/.ssh/config:
Host */*
ProxyCommand ssh $(dirname %h) -W $(basename %h):%p
And then use ssh A/B/C/D etc. to build the tunnel path on the spot. See the OpenSSH cookbook for details.
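For example, combined with scp, that pattern gives you multihop copies directly (the path is illustrative):
scp A/B/C/D:/path/to/file .
Each ssh invocation peels the last hop off the path and tunnels to it through the remainder.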
I had to think about this for some time, but if you have set up passwordless authentication using keys, it is possible to do it like this:
$ cat test | ssh f21 "tee | ssh f20 \"tee test\""
An encrypted SSH key doesn't matter here. For transferring through one hop it is quite straightforward; for more hops it can get messy ...
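For what it's worth, a version with one extra intermediate hop might look like this (f22 is a hypothetical middle host; the nested quoting is exactly where it gets messy):
$ cat test | ssh f21 "ssh f22 \"ssh f20 'cat > test'\""
stdin is passed through each ssh in the chain, so the file's bytes flow hop by hop without being stored on the intermediates.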

Can I use an existing SSH connection and execute SCP over that tunnel without re-authenticating?

I'm wondering: if I already have an established SSH tunnel and want to minimize re-authenticating with an SSH server for each task, is there a way to use the existing tunnel to pull a file from the SSH server using SCP on the local machine without re-authenticating?
I'm trying to avoid using ssh keys, I'd just like to minimize the amount of times a password needs to be entered for a bash script.
ssh -t user@build_server "*creates a build file...*"
Once that command has completed, a file exists on build_server. So if the above tunnel was still open, is there a way to use that tunnel from my LOCAL machine to run SCP and bring the file to the local machine's desktop?
Yes, session sharing is possible: run man ssh_config and search for ControlMaster. Is this what you are looking for?
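As a sketch (the host name is taken from the question; the socket path and timeout are arbitrary choices), the relevant ~/.ssh/config entry would look like:
Host build_server
ControlMaster auto
ControlPath ~/.ssh/cm-%r@%h:%p
ControlPersist 10m
The first ssh authenticates and leaves a master socket behind; a subsequent scp build_server:/path/to/file ~/Desktop/ then reuses that socket without prompting for the password again.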

Is it possible to forward ssh requests that come in over a certain port to another machine?

I have a small local network. Only one of the machines is available to the outside world (this is not easily changeable). I'd like to be able to set it up such that ssh requests that don't come in on the standard port go to another machine. Is this possible? If so, how?
Oh and all of these machines are running either Ubuntu or OS X.
Another way to go would be to use ssh tunneling (which happens on the client side).
You'd do an ssh command like this:
ssh -L 8022:myinsideserver:22 paul@myoutsideserver
That connects you to the machine that's accessible from the outside (myoutsideserver) and creates a tunnel through that ssh connection to port 22 (the standard ssh port) on the server that's only accessible from the inside.
Then you'd do another ssh command like this (leaving the first one still connected):
ssh -p 8022 paul#localhost
That connection to port 8022 on your localhost will then get tunneled through the first ssh connection, landing you on myinsideserver.
There may be something you have to do on myoutsideserver to allow forwarding of the ssh port. I'm double-checking that now.
Edit
Hmmm. The ssh manpage says this: "Only the superuser can forward privileged ports."
That sort of implies to me that the first ssh connection has to be as root. Maybe somebody else can clarify that.
It looks like superuser privileges aren't required as long as the forwarded port (in this case, 8022) isn't a privileged port (like 22). Thanks for the clarification Mike Stone.
@Mark Biek
I was going to say that, but you beat me to it! Anyways, I just wanted to add that there is also the -R option:
ssh -R 8022:myinsideserver:22 paul@myoutsideserver
The difference is which machine you are connecting to/from. My boss showed me this trick not too long ago, and it is definitely really nice to know... we were behind a firewall and needed to give external access to a machine... he got around it by using ssh -R to another machine that was accessible... then connections to that machine were forwarded into the machine behind the firewall, so you need to use -R or -L based on which machine you are on and which machine you are ssh-ing to.
Also, I'm pretty sure you are fine to use a regular user as long as the port you are forwarding (in this case, port 8022) is not within the restricted range (which I think is everything below 1024, but I could be mistaken), because those are the "reserved" ports. It doesn't matter that you are forwarding it to a "restricted" port, because that port is not being opened (the machine is just having traffic sent to it through the tunnel; it has no knowledge of the tunnel); port 8022 IS being opened, and so is restricted as such.
EDIT: Just remember, the tunnel is only open so long as the initial ssh remains open, so if it times out or you exit it, the tunnel will be closed.
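If you'd rather not keep a terminal tied up by that first connection, the same tunnel can be put in the background (same example hosts as above):
ssh -f -N -L 8022:myinsideserver:22 paul@myoutsideserver
-f backgrounds ssh after authentication and -N skips running a remote command, so the process exists only to hold the tunnel open.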
(In this example, I am assuming port 2222 will go to your internal host. $externalip and $internalip are the ip addresses or hostnames of the visible and internal machine, respectively.)
You have a couple of options, depending on how permanent you want the proxying to be:
Some sort of TCP proxy. On Linux, the basic idea is that before the incoming packet is processed, you want to change its destination—i.e. prerouting destination NAT:
iptables -t nat -A PREROUTING -p tcp -i eth0 -d $externalip --dport 2222 --sport 1024:65535 -j DNAT --to $internalip:22
Using SSH to establish temporary port forwarding. From here, you have two options again:
Transparent proxy, where the client thinks that your visible host (on port 2222) is just a normal SSH server and doesn't realize that it is passing through. While you lose some fine-grained control, you get convenience (especially if you want to use SSH to forward VNC or X11 all the way to the inner host).
From the internal machine: ssh -g -R 2222:localhost:22 $externalip
Then from the outside world: ssh -p 2222 $externalip
Notice that the "internal" and "external" machines do not have to be on the same LAN. You can port forward all the way around the world this way.
Forcing login to the external machine first. This is true "forwarding," not "proxying"; but the basic idea is this: you force people to log in to the external machine (so you control who can log in and when, and you get logs of the activity), and from there they can SSH through to the inside. It sounds like a chore, but if you set up simple shell scripts on the external machine with the names of your internal hosts, coupled with password-less SSH keypairs, then it is very straightforward for a user to log in. So:
On the external machine, you make a simple script, /usr/local/bin/internalhost which simply runs ssh $internalip
From the outside world, users do: ssh $externalip internalhost and once they log in to the first machine, they are immediately forwarded through to the internal one.
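A minimal sketch of that internalhost script (replace the $internalip placeholder with the real address; callers may want ssh -t so the inner session gets a terminal):
#!/bin/sh
# /usr/local/bin/internalhost -- hop straight through to the internal machine
exec ssh $internalip "$@"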
Another advantage to this approach is that people don't get key management problems, since running two SSH services on one IP address will make the SSH client angry.
FYI, if you want to SSH to a server and you do not want to worry about keys, do this:
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
I have an alias in my shell called "nossh", so I can just do nossh somehost and it will ignore all key errors. Just understand that you are ignoring security information when you do this, so there is a theoretical risk.
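The alias itself isn't shown above; a plausible definition (the host argument is supplied when you call it) would be:
alias nossh='ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no'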
Much of this information is from a talk I gave at Barcamp Bangkok all about fancy SSH tricks. You can see my slides, but I recommend the text version as the S5 slides are kind of buggy. Check out the section called "Forward Anything: Simple Port Forwarding" for info. There is also information on creating a SOCKS5 proxy with OpenSSH. Yes, you can do that. OpenSSH is awesome like that.
(Finally, if you are doing a lot of traversing into the internal network, consider setting up a VPN. It sounds scary, but OpenVPN is quite simple and runs on all OSes. I would say it's overkill just for SSH; but once you start port-forwarding through your port-forwards to get VNC, HTTP, or other stuff happening; or if you have lots of internal hosts to worry about, it can be simpler and more maintainable.)
You can use port forwarding to do this. Take a look here:
http://portforward.com/help/portforwarding.htm
There are instructions on how to set up your router to port forward request on this page:
http://www.portforward.com/english/routers/port_forwarding/routerindex.htm
In Ubuntu, you can install Firestarter and then use its Forward Service feature to forward the SSH traffic from a non-standard port on your machine with external access to port 22 on the machine inside your network.
On OS X you can edit the /etc/nat/natd.plist file to enable port forwarding.
Without messing around with firewall rules, you can set up a ~/.ssh/config file.
Assume 10.1.1.1 is the 'gateway' system and 10.1.1.2 is the 'client' system.
Host gateway
Hostname 10.1.1.1
LocalForward 8022 10.1.1.2:22
Host client
Hostname localhost
Port 8022
You can open an ssh connection to 'gateway' via:
ssh gateway
In another terminal, open a connection to the client.
ssh client
