~/.ssh/rc Auto Login to separate shell - bash

Below is the code block I use within my RC file. The idea is to move straight into another server's SSH session (without port forwarding). When this executes, I see the session, but it blinks rapidly and is not interactive. I believe this is by design, but is there another way to call the ssh command after a specific user logs in? If not, do you know a workaround to make the screen interactive? Thank you.
if [ ! -z "$SSH_CONNECTION" ]
then
    ssh -i ~/.ssh/id_rsa user@xxx.xxx.xxx.xxx
fi

The issue was that .bashrc is called from within ~/.profile; appending the ssh command to the end of ~/.profile fixed the problem.
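For reference, a minimal sketch of that fix: the same block as above, but appended to the end of ~/.profile instead of ~/.ssh/rc (the key path and address are the placeholders from the question):
# end of ~/.profile
if [ -n "$SSH_CONNECTION" ]; then
    ssh -i ~/.ssh/id_rsa user@xxx.xxx.xxx.xxx
fi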

Related

How to know what initial commands are being executed right after an SSH login?

I was provided a tool that does an SSH to a remote host. The remote host is a new Docker container to be created. I was trying to understand whether there are commands being executed right after the SSH login (probably using ssh -t <some commands>).
It seems like .bash_history does not include those commands. In that case, what else can I do to figure out what commands are being executed right after my login? Thank you.
To find out the actual commands that are executed, you could add "set -v" or "set -x" to the shell initialization file(s) on the system you are ssh-ing to.
See man bash (the "INVOCATION" section) to find out which files will be executed, so that you can figure out which file to add the "set" command to.
You will probably want to do that temporarily ... because the output is verbose.
Another approach would be to configure sshd to set the logging level to DEBUG and see what commands are requested. However, note that sshd DEBUG logging is a user privacy violation.
If you are trying to do this kind of thing to find out what is happening on the first "boot" of a Docker instance, try putting the (temporary) config changes into the Docker image that you are starting.
The bash history only contains command lines that are submitted to the shell via a shell command prompt.
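As a minimal sketch of the temporary change described above (assuming bash and a per-user ~/.bashrc on the remote host):
# add temporarily near the top of ~/.bashrc (or /etc/profile) on the remote host
set -x    # print each command before it is executed
# alternatively: set -v    # print shell input lines as they are read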

add ssh key once per login instead of once per bash

I want to use ssh passwordless-login using authentication-key-pairs.
I added
eval `ssh-agent -s`
ssh-add ~/.ssh/my_p_key
to ~/.profile. This doesn't work. If I use the ~/.bashrc it works fine.
Why do I have to set this up every time I start a bash instead of once per login? I could not find any explanation.
Is there no better way to configure this?
The answer below solved my problem and for me it looks like a very legit solution.
Add private key permanently with ssh-add on Ubuntu
The
eval `ssh-agent -s`
sets the environment of the process. If you are using a window manager, e.g. on a Linux machine, the window manager will likely have an option to run a program such as your ssh-agent on startup and pass the environment down to all processes started there, so all your terminal windows/tabs will allow you a passwordless login. The exact location, configuration and behaviour depend on the desktop/WM used.
If you are not on a window system, you might look at the output of the ssh-agent call and paste it into every shell you open; however, that may be as complicated as entering your password. The output will set something like
SSH_AGENT_PID=4041
SSH_AUTH_SOCK=/tmp/ssh-WZANnlaFiaBt/agent.3966
and you can log in password-less in every shell where you set these variables.
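For instance, reusing the example values above, you could paste something like this into another shell to talk to the same agent (the actual values will differ on your machine):
export SSH_AGENT_PID=4041
export SSH_AUTH_SOCK=/tmp/ssh-WZANnlaFiaBt/agent.3966
ssh-add -l    # should now list the keys held by that agent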
Just adding "IdentityFile ~/.ssh/id_rsa" to my .ssh/config did not work for me. I had to also add "AddKeysToAgent yes" to that file.
I think this extra line is necessary because GNOME Keyring loads my key, but it has a bug that prevents the key from being forwarded to the machine I ssh into.
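For reference, a minimal ~/.ssh/config sketch of that approach (the key path is an assumption; substitute your own key):
Host *
    AddKeysToAgent yes
    IdentityFile ~/.ssh/id_rsa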
Edit: After upgrading to Ubuntu 20.04, new terminals that I started could not access the ssh-agent. I had to add eval (ssh-agent -c) &>/dev/null to my ~/.config/fish/config.fish file.

How to input automatically when running a shell over SSH?

In my shell script I am running a command which is asking me for input.
How can I give the command the input it needs automatically?
For example:
$ cat test.sh
ssh-copy-id tester@10.1.2.3
When running test.sh:
First, it will ask:
Are you sure you want to continue connecting (yes/no)?
Then, it will ask me to input the password:
tester@10.1.2.3's password:
Is there a way to input this automatically?
For simple input, like two prompts and two corresponding fixed responses, you could also use a "here document", the syntax of which looks like this:
test.sh <<!
y
password
!
The << prefixes a pattern, in this case '!'. Everything up to a line consisting of that pattern is fed to the command as standard input. This approach is similar to the suggestion to pipe a multi-line echo into ssh, except that it saves the fork/exec of the echo command and I find it a bit more readable. The other advantage is that it uses built-in shell functionality, so it doesn't depend on expect.
For general command-line automation, Expect is the classic tool. Or try pexpect if you're more comfortable with Python.
Here's a similar question that suggests using Expect: Use expect in bash script to provide password to SSH command
There definitely is... Use the spawn, expect, and send commands:
spawn ./test.sh
expect "Are you sure you want to continue connecting (yes/no)?"
send "yes\r"
There are more examples all over Stack Overflow, see:
Help with Expect within a bash script
You may need to install these commands first, depending on your system.
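A slightly fuller sketch of the same idea as a standalone Expect script (the prompt strings and password are placeholders, and the yes/no prompt only appears when the host is not yet known):
#!/usr/bin/expect -f
spawn ./test.sh
expect "Are you sure you want to continue connecting (yes/no)?"
send "yes\r"
expect "password:"
send "your-password-here\r"
expect eof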
Also you can pipe the answers to the script:
printf "y\npassword\n" | sh test.sh
where \n is the newline escape sequence
ssh-key with passphrase, with keychain
keychain is a small utility which manages ssh-agent on your behalf and allows the ssh-agent to remain running when the login session ends. On subsequent logins, keychain will connect to the existing ssh-agent instance. In practice, this means that the passphrase must be entered only during the first login after a reboot. On subsequent logins, the unencrypted key from the existing ssh-agent instance is used. This can also be useful for allowing passwordless RSA/DSA authentication in cron jobs without passwordless ssh-keys.
To enable keychain, install it and add something like the following to ~/.bash_profile:
eval $(keychain --agents ssh --eval id_rsa)
From a security point of view, ssh-ident and keychain are worse than ssh-agent instances limited to the lifetime of a particular session, but they offer a high level of convenience. To improve the security of keychain, some people add the --clear option to their ~/.bash_profile keychain invocation. By doing this passphrases must be re-entered on login as above, but cron jobs will still have access to the unencrypted keys after the user logs out. The keychain wiki page has more information and examples.
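A sketch of what the --clear variant looks like in ~/.bash_profile (the key name is just an example):
eval $(keychain --agents ssh --clear --eval id_rsa)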
Got this info from:
https://unix.stackexchange.com/questions/90853/how-can-i-run-ssh-add-automatically-without-password-prompt
Hope this helps
I have personally been able to automatically enter my passphrase upon terminal launch by doing this: (you can, of course, modify the script and fit it to your needs)
edit the bashrc file to add this script:
# Check if the SSH agent is awake
if [ -z "$SSH_AUTH_SOCK" ] ; then
    exec ssh-agent bash -c "ssh-add ; $0"
    echo "The SSH agent was awakened"
    exit
fi
The line below, also added to the bashrc, will start the expect script upon terminal launch:
./ssh.exp
Here's the content of that expect script:
#!/usr/bin/expect
set timeout 20
set passphrase "test"
spawn "./keyadding.sh"
expect "Enter passphrase for /the/path/of/yourkey_id_rsa:"
send "$passphrase\r";
interact
Here's the content of my keyadding.sh script (you must put both scripts in your home folder, usually /home/user)
#!/bin/bash
ssh-add /the/path/of/yourkey_id_rsa
exit 0
I would HIGHLY suggest encrypting the passphrase in the .exp script, as well as renaming the .exp file to something like term_boot.exp or whatever else, for security purposes. Don't forget to create the files directly from the terminal using nano or vim (e.g. nano ~/.bashrc or nano term_boot.exp) and also to chmod +x script.sh to make it executable. A chmod +r term_boot.exp would also be useful, but then you'll have to add sudo before ./ssh.exp in your bashrc file, so you'll have to enter your sudo password each time you launch your terminal. For me that's more convenient than the passphrase, because I remember my admin (sudo) password by heart.
Also, here's another way to do it, I think:
https://www.cyberciti.biz/faq/noninteractive-shell-script-ssh-password-provider/
I will certainly switch to that method when I have the time.
You can write the expect script as follow:
$ vi login.exp
#!/usr/bin/expect -f
spawn ssh username@machine.IP
expect "*assword: "
send -- "PASSWORD\r"
interact
And run it as:
$ expect login.exp

How to automate password entry?

I want to install a software library (SWIG) on a list of computers (Jenkins nodes). I'm using the following script to automate this somewhat:
NODES="10.8.255.70 10.8.255.85 10.8.255.88 10.8.255.86 10.8.255.65 10.8.255.64 10.8.255.97 10.8.255.69"
for node in $NODES; do
scp InstallSWIG.sh root@$node:/root/InstallSWIG.sh
ssh root@$node sh InstallSWIG.sh
done
This way it's automated, except for the password requests that occur for both the scp and ssh commands.
Is there a way to enter the passwords programmatically?
Security is not an issue. I’m looking for solutions that don’t involve SSH keys.
Here’s an expect example that sshes into Stripe’s Capture The Flag server and enters the password automatically.
expect <<< 'spawn ssh level01@ctf.stri.pe; expect "password:"; send "e9gx26YEb2\r";'
With SSH the right way to do it is to use keys instead.
# ssh-keygen
and then copy the ~/.ssh/id_rsa.pub file to the remote machine (root@$node) into the remote user's .ssh/authorized_keys file.
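A short sketch of that key-based approach; ssh-copy-id automates the copy step (the address is one of the nodes from the question):
ssh-keygen -t rsa -b 4096          # generate a key pair if you don't already have one
ssh-copy-id root@10.8.255.70       # appends the public key to the node's authorized_keys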
You can perform the task using empty, a small utility from sourceforge. It's similar to expect but probably more convenient in this case. Once you have installed it, your first scp will be accomplished by following two commands:
./empty -f scp InstallSWIG.sh root@$node:/root/InstallSWIG.sh
echo YOUR_SECRET_PASSWORD | ./empty -s -c
The first one starts your command in the background, tricking it into thinking it is running in interactive mode on a terminal. The second one sends it data on stdin. Of course, putting your password anywhere on the command line is risky, since the shell history preserves it and other users can see it in ps output. Not fully secure either, but a bit better, is to store the password in a file and redirect the second command's input from that file instead of using echo and a pipe.
After copying to the server, you can run the script in a similar manner:
./empty -f ssh root@$node sh InstallSWIG.sh
echo YOUR_SECRET_PASSWORD | ./empty -s -c
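A sketch of the file-redirection variant suggested above (the file path is an assumption; keep it readable only by you):
chmod 600 ~/.node_password          # the file holds the password on a single line
./empty -f scp InstallSWIG.sh root@$node:/root/InstallSWIG.sh
./empty -s -c < ~/.node_password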
You could look into setting up passwordless ssh keys for that. Establishing Batch Mode Connections between OpenSSH and SSH2 is a starting point, you'll find lots of information on this topic on the web.
Wes' answer is the correct one but if you're keen on something dirty and slow, you can use expect to automate this.

How can I ssh directly to a particular directory?

I often have to login to one of several servers and go to one of several directories on those machines. Currently I do something of this sort:
localhost ~]$ ssh somehost
Welcome to somehost!
somehost ~]$ cd /some/directory/somewhere/named/Foo
somehost Foo]$
I have scripts that can determine which host and which directory I need to get into but I cannot figure out a way to do this:
localhost ~]$ go_to_dir Foo
Welcome to somehost!
somehost Foo]$
Is there an easy, clever or any way to do this?
You can do the following:
ssh -t xxx.xxx.xxx.xxx "cd /directory_wanted ; bash --login"
This way, you will get a login shell right in the directory_wanted.
Explanation
-t Force pseudo-terminal allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services.
Multiple -t options force tty allocation, even if ssh has no local tty.
If you don't use -t then no prompt will appear.
If you don't add ; bash then the connection will be closed and control returned to your local machine.
If you don't add bash --login then it will not use your configs, because it's not a login shell.
You could add
cd /some/directory/somewhere/named/Foo
to your .bashrc file (or .profile or whatever you call it) at the other host. That way, no matter what you do or where you ssh from, whenever you log onto that server, it will cd to the proper directory for you, and all you have to do is use ssh like normal.
Of course, rogeriopvl's solution works too, but it's a tad more verbose, and you have to remember to do it every time (unless you make an alias), so it seems a bit less "fun".
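A sketch of that addition, with an optional guard (my assumption, not part of the answer above) so the cd only happens for SSH logins and not local ones:
# end of ~/.bashrc on the remote host
if [ -n "$SSH_CONNECTION" ]; then
    cd /some/directory/somewhere/named/Foo
fi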
My preferred approach is using the SSH config file (described below), but there are a few possible solutions depending on your usages.
Command Line Arguments
I think the best answer for this approach is christianbundy's reply to the accepted answer:
ssh -t example.com "cd /foo/bar; exec \$SHELL -l"
Using double quotes will allow you to use variables from your local machine, unless they are escaped (as $SHELL is here). Alternatively, you can use single quotes, and all of the variables you use will be the ones from the target machine:
ssh -t example.com 'cd /foo/bar; exec $SHELL -l'
Bash Function
You can simplify the command by wrapping it in a bash function. Let's say you just want to type this:
sshcd example.com /foo/bar
You can make this work by adding this to your ~/.bashrc:
sshcd () { ssh -t "$1" "cd \"$2\"; exec \$SHELL -l"; }
If you are using a variable that exists on the remote machine for the directory, be sure to escape it or put it in single quotes. For example, this will cd to the directory that is stored in the JBOSS_HOME variable on the remote machine:
sshcd example.com \$JBOSS_HOME
SSH Config File
If you'd like to see this behavior all the time for specific (or any) hosts with the normal ssh command without having to use extra command line arguments, you can set the RequestTTY and RemoteCommand options in your ssh config file.
For example, I'd like to type only this command:
ssh qaapps18
but want it to always behave like this command:
ssh -t qaapps18 'cd $JBOSS_HOME; exec $SHELL'
So I added this to my ~/.ssh/config file:
Host *apps*
    RequestTTY yes
    RemoteCommand cd $JBOSS_HOME; exec $SHELL
Now this rule applies to any host with "apps" in its hostname.
For more information, see http://man7.org/linux/man-pages/man5/ssh_config.5.html
I've created a tool to SSH and CD into a server consecutively – aptly named sshcd. For the example you've given, you'd simply use:
sshcd somehost:/some/directory/somewhere/named/Foo
Let me know if you have any questions or problems!
Based on additions to @rogeriopvl's answer, I suggest the following:
ssh -t xxx.xxx.xxx.xxx "cd /directory_wanted && bash"
Chaining commands with && makes the next command run only if the previous one succeeded (as opposed to ;, which executes the commands sequentially regardless). This is particularly useful when you need to cd to a directory before performing a command.
Imagine doing the following:
/home/me$ cd /usr/share/teminal; rm -R *
The directory teminal doesn't exist, so the cd fails, you stay in the home directory, and the command that follows removes all the files in there.
If you use &&:
/home/me$ cd /usr/share/teminal && rm -R *
The command will fail after not finding the directory.
In my very specific case, I just wanted to execute a command in a remote host, inside a specific directory from a Jenkins slave machine:
ssh myuser@mydomain
cd /home/myuser/somedir
./commandThatMustBeRunInside_somedir
exit
But my machine couldn't perform the ssh (it couldn't allocate a pseudo-tty, I suppose) and kept giving me the following error:
Pseudo-terminal will not be allocated because stdin is not a terminal
I could get around this issue by passing "cd to dir + my command" as a parameter to the ssh command (so no pseudo-terminal needs to be allocated) and by passing the option -T to explicitly tell ssh that I didn't need pseudo-terminal allocation.
ssh -T myuser@mydomain "cd /home/myuser/somedir; ./commandThatMustBeRunInside_somedir"
I use the environment variable CDPATH
Going one step further with the -t idea, I keep a set of scripts calling the one below to go to specific places on my frequently visited hosts. I keep them all in ~/bin and keep that directory in my path.
#!/bin/bash
# does ssh session switching to particular directory
# $1, hostname from config file
# $2, directory to move to after login
# can save this as say 'con' then
# make another script calling this one, e.g.
# con myhost repos/i2c
ssh -t $1 "cd $2; exec \$SHELL --login"
My answer may differ from what you really want, but I write it here as it may be useful for some people. In my solution you have to enter the directory once, and then every new ssh session goes to the same dir (after the first logout).
How to ssh into the same directory you were in on your last login.
(I assume you use bash on the remote node.)
Add this line to your ~/.bash_logout on the remote node(!):
echo $PWD > ~/.bash_lastpwd
and these lines to the ~/.bashrc file (still on the remote node!)
if [ -f ~/.bash_lastpwd ]; then
cd $(cat ~/.bash_lastpwd)
fi
This way you save your current path on every logout, and .bashrc puts you into that directory after login.
PS: You can tweak it further, e.g. using the SSH_CLIENT variable to decide whether to go into that directory, so you can differentiate between local logins and ssh, or even between different ssh clients.
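A sketch of the SSH_CLIENT tweak mentioned in the PS (an assumption layered on top of the answer above):
# ~/.bashrc on the remote node: only restore the saved directory for SSH logins
if [ -n "$SSH_CLIENT" ] && [ -f ~/.bash_lastpwd ]; then
    cd "$(cat ~/.bash_lastpwd)"
fi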
Another way of going directly to a directory after logging in is to create an alias. When you log in to your system, just type that alias and you will be in that directory.
Example: alias myfolder='cd /var/www/Folder'
After you log in to your system, type that alias (this works from any part of the system).
If the alias is only set in the current session it will work for that session only, so add it to your bashrc to use it in the future.
$ myfolder => takes you to that folder
I know this has been answered ages ago, but I found the question while trying to incorporate an ssh login into a bash script that, once logged in, runs a few commands, logs back out, and continues with the bash script. The simplest way I found, which hasn't been mentioned elsewhere because it is so trivial, is this.
#!/bin/bash
sshpass -p "password" ssh user@server 'cd /path/to/dir;somecommand;someothercommand;exit;'
Connect With User
In case you don't know this, you can connect by specifying both the user and the host:
ssh -t <user>@<host domain or IP> "cd /path/to/directory; bash --login"
Example: ssh -t admin@test.com "cd public_html; bash --login"
You can also append further commands to be executed on every login by adding them inside the double quotes, each preceded by a ; (see the sketch below).
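For instance (the extra commands here are just placeholders):
ssh -t admin@test.com "cd public_html; ls -la; bash --login"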
Unfortunately, the suggested solution (by @rogeriopvl) doesn't work when you use multiple hops, so I found another one.
On remote machine add into ~/.bashrc the following:
[ "x$CDTO" != "x" ] && cd $CDTO
This allows you to specify the desired target directory on command line in this way:
ssh -t host1 ssh -t host2 "CDTO=/desired_directory exec bash --login"
Sure, this way can be used for a single hop too.
This solution can be combined with the useful tip of @redseven for greater flexibility (if there is no $CDTO, go to the saved directory, if it exists).
SSH itself provides a means of communication; it does not know anything about directories. Since you can specify which remote command to execute (this is, by default, your shell), I'd start there.
Simply modify your home directory with the command:
usermod -d /newhome username
