ssh to another machine after sshing via script - bash

I have 3 servers,
server1 -> server2 -> server3
Server2 is reachable only via server1, and server3 only via server2.
Every time the connection breaks I have to manually log in to both servers.
Is there any way to log in and open a bash terminal on server3 through this path via a script?

I have had the same problem and I have a solution. I use xdotool to emulate keystrokes (and xclip to copy a password that is extracted from another file). This script opens ssh connections to a list of servers in separate console tabs. Edit it according to your needs.
for IP in $SERVERS
do
    xdotool key ctrl+shift+t type "ssh $USER@$IP"
    xdotool key Return
    sleep 1
    xdotool key ctrl+shift+v
    xdotool key Return
done
The script simply iterates over a list of servers. It opens a new console tab, types "ssh some_user@some_ip" and then emulates the Return key.
The sleep is there just to make sure the script has enough time to connect to the server. At the end the password is pasted and you are logged in to the first server.
One more thing:
don't touch the keyboard while the script is running. I hope it helps you.
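The xclip step mentioned above (extracting the password from another file and putting it on the clipboard) is not shown in the loop. A minimal sketch of that piece, to run before the paste step, assuming the password simply sits in a protected file whose path here is hypothetical:
    xclip -selection clipboard < ~/.secret/server_password
This fills the clipboard so that the subsequent ctrl+shift+v paste enters the password at the ssh prompt.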

Use an ssh_config file; this will allow you to set this up easily and then connect directly using ssh -F ssh_config servername.
Assuming you're logged in to server_1 and want to connect to server_3 via server_2 it would look something like this:
Host server_2
    HostName xxx.xxx.xxx.xxx
    Port xxxx
    User server2_user

Host server_3
    HostName xxx.xxx.xxx.xxx
    Port xxxx
    ProxyCommand ssh -F ssh_config -W %h:%p server_2
    User server3_user
With this you can use ssh -F ssh_config server_3 and it will connect to server_2 and from there take you directly to server_3.
If you put the ssh_config in the default location you can also omit the -F ssh_config part (in the command and the config file) since it will get picked up automatically.
For more information check out this link, or search the web for 'ssh jumphost'; that's a more widely used description for your setup (server 2 is the jumphost for server 3 in your case).
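If you prefer not to maintain a config file at all, roughly the same thing can be done as a one-off command; a sketch, assuming a reasonably recent OpenSSH and placeholder host names:
    ssh -o ProxyCommand="ssh -W %h:%p server2_user@server2_host" server3_user@server3_host
The inner ssh connects to server_2 and forwards the connection on to server_3 (-W), so the outer ssh drops you straight into a shell on server_3.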

Related

scp from distant host to local server in a script

I have been able to find lots of examples of server hopping and ProxyCommand but none relating to my specific need.
I want to transfer a file from a distant server to a local server. Currently, I can ssh from local to jump and from jump to distant (I cannot ssh from local to distant directly). Then from distant I can scp a file back to local. Right now I do this manually:
from local: ssh userJ@jump
then from jump: ssh userD@distant
then from distant: scp /path/file userL@local:/dest/path/
But I want to be able to do this in a script that I run from local. I have ssh keys stored in the appropriate places to eliminate password prompts. I just can't figure out the syntax for a single command.
Do you have to "push" the files on distant back to local?
It'd be easier to simply "pull" the files while on local from distant.
Set up ~/.ssh/config on local:
[userL@local]# cat ~/.ssh/config
Host distant
    HostName Distant
    User userD
    ProxyCommand ssh -A userJ@jump nc %h %p
Test the connection using ssh:
[userL@local]# ssh -A userD@distant    [or even: ssh -A distant]
Last login: Tue Oct 23 16:05:59 2018 from jump
[userD@distant]#
Now pull a file from distant:
[userL@local]# scp userD@distant:/distantpath/distantfile /localpath/localfile
distantfile                                  100%  129KB 128.9KB/s   00:01
[userL@local]#
In the example above, I used ssh's agent forwarding to pass credentials from local to jump and ultimately distant. You just need to pre-populate the authorized keys on jump and distant before agent forwarding will work.
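For the -A agent forwarding above to work, an agent has to be running on local with your key loaded; a minimal sketch, assuming a default key path:
    eval "$(ssh-agent -s)"     # start an agent in the current shell
    ssh-add ~/.ssh/id_rsa      # load the private key
    ssh-add -l                 # verify the key is listed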
I usually address this by configuring my ssh client in the ~/.ssh/config file to do the jump automatically:
Host distant-jump
    User userD
    Hostname distant
    ProxyCommand ssh -q -W %h:%p jump

Host jump
    User userJ
Then you can just: scp distant-jump:/path/file ./

Secure copy over two IPs on the same network to the local machine [duplicate]

I wonder if there is a way for me to SCP a file from the remote2 host directly to my local machine by going through the remote1 host.
The network only allows connections to the remote2 host from the remote1 host. Also, neither the remote1 host nor the remote2 host can scp to my local machine.
Is there something like:
scp user1@remote1:user2@remote2:file .
First window: ssh remote1, then scp remote2:file .
Second shell: scp remote1:file .
First window: rm file; logout
I could write a script to do all these steps, but if there is a direct way, I would rather use it.
Thanks.
EDIT: I am thinking of something like opening SSH tunnels, but I'm confused about what value to put where.
At the moment, to access remote1, i have the following in $HOME/.ssh/config on my local machine.
Host remote1
    User user1
    Hostname localhost
    Port 45678
Once on remote1, to access remote2, it's the standard local DNS and port 22. What should I put on remote1 and/or change on localhost?
I don't know of any way to copy the file directly in one single command, but if you can concede to running an SSH instance in the background to just keep a port forwarding tunnel open, then you could copy the file in one command.
Like this:
# First, open the tunnel
ssh -L 1234:remote2:22 -p 45678 user1@remote1
# Then, use the tunnel to copy the file directly from remote2
scp -P 1234 user2@localhost:file .
Note that you connect as user2@localhost in the actual scp command, because it is port 1234 on localhost that the first ssh instance is listening on to forward connections to remote2. Note also that you don't need to run the first command for every subsequent file copy; you can simply leave it running.
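If you would rather not keep an interactive session open for the tunnel, the first command can be backgrounded with standard ssh flags; a sketch using the same hypothetical ports:
    ssh -f -N -L 1234:remote2:22 -p 45678 user1@remote1   # -f: background after auth, -N: no remote command
    scp -P 1234 user2@localhost:file .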
Double ssh
Even in your complex case, you can handle file transfer using a single command line, simply with ssh ;-)
And this is useful if remote1 cannot connect to localhost:
ssh user1@remote1 'ssh user2@remote2 "cat file"' > file
tar
But you lose file properties (ownership, permissions...).
However, tar is your friend to keep these file properties:
ssh user1@remote1 'ssh user2@remote2 "cd path2; tar c file"' | tar x
You can also compress to reduce network bandwidth:
ssh user1@remote1 'ssh user2@remote2 "cd path2; tar cj file"' | tar xj
And tar also lets you transfer a directory recursively over basic ssh:
ssh user1@remote1 'ssh user2@remote2 "cd path2; tar cj ."' | tar xj
ionice
If the file is huge and you do not want to disturb other important network applications, you may miss the network throughput limiting provided by the scp and rsync tools (e.g. scp -l 1024 user@remote:file does not use more than 1 Mbit/second).
But a workaround is using ionice to keep it a single command line:
ionice -c2 -n7 ssh u1@remote1 'ionice -c2 -n7 ssh u2@remote2 "cat file"' > file
Note: ionice may not be available on old distributions.
This will do the trick:
scp -o 'Host remote2' -o 'ProxyCommand ssh user@remote1 nc %h %p' \
    user@remote2:path/to/file .
To SCP the file from the host remote2 directly, add the two options (Host and ProxyCommand) to your ~/.ssh/config file (see also this answer on superuser). Then you can run:
scp user@remote2:path/to/file .
from your local machine without having to think about remote1.
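For reference, the corresponding ~/.ssh/config entry would look roughly like this (remote1 and remote2 stand in for your actual host names):
    Host remote2
        ProxyCommand ssh user@remote1 nc %h %p
With that stanza in place, the plain scp command above is tunnelled through remote1 transparently.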
With OpenSSH version 7.3 and up it is easy: use the ProxyJump option in the config file.
# Add to ~/.ssh/config
Host bastion
    Hostname bastion.client.com
    User userForBastion
    IdentityFile ~/.ssh/bastion.pem

Host appMachine
    Hostname appMachine.internal.com
    User bastion
    # ProxyJump is new in OpenSSH 7.3
    ProxyJump bastion
    # no need to copy the pem file to the bastion host
    IdentityFile ~/.ssh/appMachine.pem
Commands to run to log in or copy:
ssh appMachine                    # no need to specify any tunnel
scp helloWorld.txt appMachine:.   # copies without an intermediate copy on the jumphost/bastion host
Of course you can specify the bastion jump host with the "-J" option to the ssh command if it is not configured in the config file.
Note: scp does not seem to support the "-J" flag as of now (I could not find it in the man pages; however, the scp command above works with the config file setting).
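For reference, the command-line equivalent of the config above would be something like the following sketch (same hypothetical host names; newer OpenSSH releases also accept -J for scp, so it is worth checking your version):
    ssh -J userForBastion@bastion.client.com bastion@appMachine.internal.com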
There is a relatively new option in scp, added recently for exactly this job, that is very convenient: -3.
TL;DR: for a current host that already has authentication to both remotes set up in its ssh config files, just do:
scp -3 remote1:file remote2:file
Your scp must be a recent version.
All the other techniques mentioned require you to set up authentication from remote1 to remote2 or vice versa, which is not always a good idea.
The -3 argument means you want to move files between two remote hosts by using the current host as an intermediary; this host does the authentication to both remote hosts, so they don't have to have access to each other.
You just have to set up authentication in the ssh config files, which is fairly easy and well documented, and then run the command in the TL;DR.
The source for this answer is https://superuser.com/a/686527/713762
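A rough sketch of the ~/.ssh/config entries on the current host that the -3 approach assumes (host names, users and key paths are placeholders; the ProxyJump line is only needed because, in this question, remote2 is reachable only via remote1):
    Host remote1
        HostName remote1.example.com
        User user1
        IdentityFile ~/.ssh/remote1_key

    Host remote2
        HostName remote2.example.com
        User user2
        IdentityFile ~/.ssh/remote2_key
        ProxyJump remote1
With those entries in place, scp -3 remote1:file remote2:file authenticates to both hosts from the current machine.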
This configuration works nicely for me:
Host jump
    User username
    Hostname jumphost.yourorg.intranet

Host production
    User username
    Hostname production.yourorg.intranet
    ProxyCommand ssh -q -W %h:%p jump
Then the command
scp myfile production:~
Copies myfile to production machine.
A simpler way:
scp -o 'ProxyJump your.jump.host' /local/dir/myfile.txt remote.internal.host:/remote/dir
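The same thing can live in ~/.ssh/config if you would rather keep it off the command line (a sketch with the same hypothetical host names):
    Host remote.internal.host
        ProxyJump your.jump.host
after which a plain scp /local/dir/myfile.txt remote.internal.host:/remote/dir goes through the jump host automatically.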

iTerm2 - How to pass environment-variables when started via url-scheme?

Most of you certainly know the macOS terminal emulator iTerm2.
I want to pass the environment variables which I've set/saved in ~/.ssh/environment to iTerm2 when it (the profile) is configured as the default handler for the ssh:// URL scheme.
Normal example ← works
You open the app iTerm2
Enter your ssh-command:
ssh hostname
It connects to your server, and with the printenv command you can see the environment variables you've put into your local ~/.ssh/environment file.
URL-Scheme example ← doesn't work
Some external application (like the alfred-ssh workflow from deanishe) can read your .ssh/config file to make it easier to access all your configured hosts quickly, and then opens them via the URL scheme.
Because iTerm2 is configured for the ssh:// scheme, iTerm2 starts and connects quickly to the server.
You enter printenv and don't find your environment variables.
You realize that iTerm2 started instantly and didn't load the local environment variables. Okay, I didn't realize this at the beginning and created an issue for the workflow I used. But the developer is right: iTerm2 starts and isn't able to load the environment variables.
I've already searched for several weeks for a solution, but haven't been able to solve this problem yet. That's why I'm asking here now.
My local SSH configuration (cleaned)
Content of ~/.ssh/environment is:
echo "RMATE_HOST=localhost" > sshenv
echo "RMATE_PORT=52699" > sshenv
Content of ~/.ssh/config is:
Host *
    AddKeysToAgent yes
    ServerAliveInterval 120
    TCPKeepAlive no
    UseKeychain yes
    SendEnv RMATE_*
    RemoteForward 52699 localhost:52699

Host personal
    HostName personal.tld
    IdentityFile ~/.ssh/keyFileName1
    User user
    Port 22

Host work
    HostName business.tld
    IdentityFile ~/.ssh/keyFileName2
    User user
    Port 22
And yeah, indeed! I just want to pass my RMATE variables to the servers via the workflow with Alfred ;-)

How to connect to a server using another server by ssh in shell script?

The scenario is like:
SERVER_A="servera.com"
SERVER_A_UNAME="usera"
SERVER_B="serverb.com"
SERVER_B_UNAME="userb"
I want to write a shell script which will first connect to server A, and only then connect to server B. Like:
#!/bin/sh
ssh $SERVER_A_UNAME@$SERVER_A   # ...and then
ssh $SERVER_B_UNAME@$SERVER_B
But I am not able to do it. It does connect to server A only. How can I achieve it?
You may be able to find some help with this previous question:
How to use bash/expect to check if an SSH login works
Depending on your situation you might also want to execute a remote ssh command and wait for positive feedback.
See:
How to use SSH to run a shell script on a remote machine?
You should have a look at ssh ProxyCommand, which lets you do indirect connects automatically. Basically you put the following in your .ssh/config:
Host gateway1
    # nothing needed here
Host gateway2
    ProxyCommand ssh -q gateway1 nc -q0 gateway2 22
Host targethost
    ProxyCommand ssh -q gateway2 nc -q0 targethost 22
Then you can run ssh targethost successfully even if targethost is not reachable directly. You can read more about this e.g. here: http://sshmenu.sourceforge.net/articles/transparent-mulithop.html
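Applied to the host names in the question, a minimal sketch might look like this (the serverb alias is just an example name, and the -W form assumes a reasonably recent OpenSSH). In ~/.ssh/config on the machine running the script:
    Host serverb
        HostName serverb.com
        User userb
        ProxyCommand ssh -q -W %h:%p usera@servera.com
The script itself then only needs a single command, since the hop through server A happens automatically:
    #!/bin/sh
    ssh serverb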

Need shell script to auto login to remote server

I have 10 Linux servers.
To connect to a server I have to execute the ssh command every time to log in.
I need one single shell script to log in to a remote server.
e.g. if the server host name is testhost.com, the user is user1, and the password is password
When I give the user name user1 in the terminal, it should automatically execute the shell script and log in to the remote server as user user1.
Hi, I know this is an old question but here is a way to do it. Follow the link above from @nick hartung; then, since you have 10 servers, you call each server by name, say 'server1' or any name you like (for this example I'll name one of the servers 'server1'). Also remember to change the port from 22 to something else, e.g. 22277. Create a script, name it server1 and put this in it:
#!/bin/bash
ssh username@hostname -p 22277
Then make it executable and move it to /usr/bin:
$ sudo chmod 755 server1
$ sudo mv server1 /usr/bin/
Now you can just log in to the remote host like this:
$ server1
and you will be automatically logged in.
You can write a script that will take a username as a parameter and ssh to the correct host based on that. A quick example:
if [ "$1" == "username" ]; then
ssh username@hostname
fi
if [ "$1" == "username2" ]; then
...
However, the ssh command doesn't have a built-in way to provide a password AFAIK. You shouldn't be storing your passwords in a script anyway. The way to get around this is to set up automatic authentication by creating a key pair using ssh-keygen. Here is a link that will show you how to set this up.
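A minimal sketch of that key-based setup, using the host and user from the question (it assumes OpenSSH defaults and that password login is still enabled for the initial key copy):
    ssh-keygen -t ed25519            # generate a key pair, accept the default path
    ssh-copy-id user1@testhost.com   # append the public key to the remote authorized_keys
    ssh user1@testhost.com           # should now log in without a password prompt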
