add ssh key once per login instead of once per bash - bash

I want to use passwordless SSH login with an authentication key pair.
I added
eval `ssh-agent -s`
ssh-add ~/.ssh/my_p_key
to ~/.profile. This doesn't work. If I put it in ~/.bashrc instead, it works fine.
Why do I have to set this up every time I start a bash shell instead of once every time the user logs in? I could not find any explanation.
Is there no better way to configure this?

The answer below solved my problem, and it looks like a legitimate solution to me:
Add private key permanently with ssh-add on Ubuntu

The
eval `ssh-agent -s`
sets the environment of the current process only. If you are using a window manager, e.g. on a Linux machine, then the window manager will likely offer a way to run a program such as your ssh-agent on startup and pass its environment down to every process started from it, so all your terminal windows/tabs will allow passwordless login. The exact location, configuration and behaviour depend on the desktop environment / window manager used.
If you are not running a windowing system, you could look at the output of the ssh-agent call and paste it into every shell you open, but that may be as tedious as entering your password. The output will set something like
SSH_AGENT_PID=4041
SSH_AUTH_SOCK=/tmp/ssh-WZANnlaFiaBt/agent.3966
and you get passwordless access everywhere these variables are set.
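For a login-wide setup without a display manager, a minimal sketch for ~/.profile could look like the following, assuming a hypothetical cache file ~/.ssh/agent.env (not a standard path) so that one agent is started per login and reused by later shells:
# reuse a previously started agent if its environment was cached
if [ -f ~/.ssh/agent.env ]; then
    . ~/.ssh/agent.env > /dev/null
fi
# start a new agent (and load the key) only if the cached one is gone
if ! kill -0 "$SSH_AGENT_PID" 2>/dev/null; then
    ssh-agent -s > ~/.ssh/agent.env
    . ~/.ssh/agent.env > /dev/null
    ssh-add ~/.ssh/my_p_key
fi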

Just adding "IdentityFile ~/.ssh/config" to my .ssh/config did not work for me. I had to also add "AddKeysToAgent yes" to that file.
I think this extra line is necessary because gnome-Keyring loads my key but it has a bug that prevents it from being forwarded to the machine I ssh into.
Edit: After upgrading to Ubuntu 20.04, new terminals that I started could not access the ssh-agent. I had to add eval (ssh-agent -c) &>/dev/null to my ~/.config/fish/config.fish file.
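For reference, a minimal ~/.ssh/config sketch combining the two directives mentioned above (the host name and key path are placeholders, not taken from the original post):
Host example.com
    AddKeysToAgent yes
    IdentityFile ~/.ssh/my_p_key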

~/.ssh/rc Auto Login to separate shell

Below is the code block I use within my rc file. The idea is to move straight into another server's ssh session (without port forwarding). When this executes, I see the session, but it blinks rapidly and is not interactive. I believe this is by design, but is there another way to call the ssh command after a specific user logs in? If not, do you know a workaround to make the screen interactive? Thank you.
if [ ! -z "$SSH_CONNECTION" ]
then
    ssh -i ~/.ssh/id_rsa user@xxx.xxx.xxx.xxx
fi
The issue was that ~/.bashrc is called from within ~/.profile; appending the ssh command to the end of ~/.profile fixed the problem.
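A sketch of how the end of ~/.profile might look after that change (the .bashrc-sourcing block shown is the stock Ubuntu one; the key path and address are placeholders, as in the question):
# if running bash, include .bashrc if it exists (stock Ubuntu block)
if [ -n "$BASH_VERSION" ]; then
    if [ -f "$HOME/.bashrc" ]; then
        . "$HOME/.bashrc"
    fi
fi

# appended last, so it runs after .bashrc on each SSH login
if [ -n "$SSH_CONNECTION" ]; then
    ssh -i ~/.ssh/id_rsa user@xxx.xxx.xxx.xxx
fi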

How to know what initial commands are executed right after an SSH login?

I was provided a tool that does an SSH to a remote host. The remote host is a new Docker container to be created. I was trying to understand whether any commands are executed right after the SSH login (probably via something like ssh -t <some commands>).
It seems that .bash_history does not include those commands. In that case, what else can I do to figure out which commands are executed right after my login? Thank you.
To find out the actual commands that are executed, you could add "set -v" or "set -x" to the shell initialization file(s) on the system you are ssh-ing to.
See man bash (the "INVOCATION" section) to find out which files will be executed, so that you can figure out which file to add the "set" command to.
You will probably want to do that temporarily ... because the output is verbose.
Another approach would be to configure sshd to set the logging level to DEBUG and see what commands are requested. However, note that sshd DEBUG logging is a user privacy violation.
If you are trying to do this kind of thing to find out what is happening on the first "boot" of a Docker instance, try putting the (temporary) config changes into the Docker image that you are starting.
The bash history only contains command lines that are submitted to the shell via a shell command prompt.
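As an illustration of the set -x approach (a sketch, not from the original answer; the log path is arbitrary), the trace output can be sent to a separate file so it doesn't clutter the login session:
# added temporarily near the top of ~/.bashrc on the remote host
exec 9>>/tmp/login-trace.log        # open an arbitrary log file on fd 9
BASH_XTRACEFD=9                     # bash 4.1+: send xtrace output to fd 9 instead of stderr
PS4='+ ${BASH_SOURCE}:${LINENO}: '  # prefix each traced line with its source file and line
set -x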

Injecting bash prompt to remote host via ssh

I have a fancy prompt working well on my local machine. However, I log in to multiple machines, under different accounts, via ssh. I would love to have my prompt carried along by the ssh command itself.
Any idea how to do that? In many cases I access machines using the root account and can't permanently change any settings there. I just want the prompt synchronized.
In principle this is just a matter of setting the variable PS1.
Try this:
ssh -l root host -t "bash --rcfile /path/to/special/bashrc"
where /path/to/special/bashrc could be, for example, /tmp/myrc.
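As a usage sketch (assuming the fancy prompt lives in a local file ~/.myprompt, a name not taken from the original answer), you could copy the rc file over first and then start the remote shell with it:
scp ~/.myprompt root@host:/tmp/myrc
ssh -l root host -t "bash --rcfile /tmp/myrc"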

Cygwin inherits environment variables from Windows sometimes

If I set the environment variable CVSROOT in Windows and give it a value like cvsserver:/home/cvs, then if I
1) open a Windows CMD shell and do "echo %CVSROOT%", I get "cvsserver:/home/cvs"
2) open a Cygwin bash shell and do "echo $CVSROOT", I get "cvsserver:/home/cvs"
3) ssh to the machine from Linux and do "echo $CVSROOT", I get nothing.
If I want the ssh session to have a value for CVSROOT, I need to insert it into .bashrc.
Is there something that can be done so that the ssh session also inherits the environment variable from Windows?
Edit:
4) from Linux, if I do
ssh machine "printenv CVSROOT"
with the environment variable set in .bashrc, I get nothing. At an interactive prompt I get the variable's value, but this way gives nothing.
I found a nice solution here: http://www.smithii.com/node/44
It looks up the system variables in the registry and sets them in the session opened via ssh.
Then call that small piece of script from your ssh client (while connecting to your Cygwin server), like this:
ssh $WINDOWSUSER@$WINDOWSBUILDSERVER 'source /etc/profile ; echo $CVSROOT'
OK, I've figured it out...
For whatever reason, the bash shell is not inheriting the CVSROOT variable that is set on the system. I need to put this line
export CVSROOT=cvsserver:/home/cvs
into both the .bashrc and the .bash_profile files.
The line is needed in the .bashrc file so that non-interactive logins will get the CVSROOT variable. For example:
ssh machine "printenv CVSROOT"
needs this line in the .bashrc file so that CVSROOT exists.
The line is needed in the .bash_profile file so that interactive logins will get the CVSROOT variable. For example:
ssh machine
printenv CVSROOT
I assume your SSH service is running on a Windows box via Cygwin.
The service seems to take a snapshot of the environment variables when it starts up and doesn't refresh them.
Restarting the service should be enough.
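For example, assuming the Cygwin service is registered under the usual name sshd, restarting it from an elevated Cygwin shell might look like this:
cygrunsrv --stop sshd
cygrunsrv --start sshd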
Windows->Cygwin inheritance is one thing; ssh is another.
Have you tried forwarding your Windows variable to your Linux session explicitly:
ssh -t user@host "export CVSROOT=%CVSROOT%; /bin/bash --login"
Restarting your computer will solve this problem. Environment variable changes don't automatically propagate to programs that are already running.
I hit upon the same problem. Got this working by setting the variable like this:
ssh machine "export MYVARIABLE=1 && printenv MYVARIABLE"

ssh-agent and crontab -- is there a good way to get these to meet?

I wrote a simple script which mails out svn activity logs nightly to our developers. Until now, I've run it on the same machine as the svn repository, so I didn't have to worry about authentication, I could just use svn's file:/// address style.
Now I'm running the script on a home computer, accessing a remote repository, so I had to change to svn+ssh:// paths. With an ssh key nicely set up, I never have to enter passwords to access the svn repository under normal circumstances.
However, crontab did not have access to my ssh-keys / ssh-agent. I've read about this problem a few places on the web, and it's also alluded to here, without resolution:
Why ssh fails from crontab but succeeds when executed from a command line?
My solution was to add this to the top of the script:
### TOTAL HACK TO MAKE SSH-KEYS WORK ###
eval `ssh-agent -s`
This seems to work under MacOSX 10.6.
My question is, how terrible is this, and is there a better way?
keychain is what you need! Just install it and add the following code to your .bash_profile:
keychain ~/.ssh/id_dsa
Then use the code below in your script to load the ssh-agent environment variables:
. ~/.keychain/$HOSTNAME-sh
In addition, if your key has a passphrase, keychain will ask for it only once (valid until you reboot the machine or kill the ssh-agent).
Note: keychain also generates code for csh and fish shells.
Copied answer from https://serverfault.com/questions/92683/execute-rsync-command-over-ssh-with-an-ssh-agent-via-crontab
When you run ssh-agent -s, it launches a background process that you'll need to kill later. So, the minimum is to change your hack to something like:
eval `ssh-agent -s`
svn stuff
kill $SSH_AGENT_PID
However, I don't understand how this hack is working. Simply running an agent without also running ssh-add will not load any keys. Perhaps MacOS' ssh-agent is behaving differently than its manual page says it does.
I had a similar problem. My script (that relied upon ssh keys) worked when I ran it manually but failed when run with crontab.
Manually defining the appropriate key with
ssh -i /path/to/key
didn't work.
But eventually I found out that SSH_AUTH_SOCK was empty when crontab was running SSH. I wasn't exactly sure why, so I just ran
env | grep SSH
in a normal session, copied the returned value, and added this definition to the head of my crontab:
SSH_AUTH_SOCK="/tmp/value-you-get-from-above-command"
I'm out of my depth as to what's happening here, but it fixed my problem. The crontab runs smoothly now.
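Put together, the top of the crontab might look like the sketch below (the socket value and script path are hypothetical; the real socket path is whatever env | grep SSH printed in a normal login session):
SSH_AUTH_SOCK="/tmp/ssh-XXXXXXXX/agent.1234"
0 2 * * * /home/user/bin/svn-activity-mail.sh
Most cron implementations accept variable assignments like this at the top of a crontab; note that the socket path changes whenever the agent is restarted, so the entry needs updating after a reboot.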
One way to recover the PID and socket of a running ssh-agent would be:
# find running ssh-agent processes owned by the current user
SSH_AGENT_PID=`pgrep -U $USER ssh-agent`
for PID in $SSH_AGENT_PID; do
    # the socket is named agent.<pid of the pre-fork process>,
    # which is typically the agent's PID minus 1
    let "FPID = $PID - 1"
    FILE=`find /tmp -path "*ssh*" -type s -iname "agent.$FPID"`
    export SSH_AGENT_PID="$PID"
    export SSH_AUTH_SOCK="$FILE"
done
This of course presumes that you have pgrep installed on the system and that there is only one ssh-agent running; if there are several, it will take the one pgrep finds last.
My solution, based on pra's, slightly improved to kill the agent even if the script fails:
eval `ssh-agent`
function cleanup {
    /bin/kill $SSH_AGENT_PID
}
trap cleanup EXIT
ssh-add
svn-stuff
Note that I must call ssh-add on my machine (Scientific Linux 6).
To set up automated processes without automated password/passphrase hacks,
I use a separate IdentityFile that has no passphrase, and restrict the key's entry in the target machines' authorized_keys by prefixing it with from="automated.machine.com" etc.
I created a public/private key pair for the sending machine without a passphrase:
ssh-keygen -f .ssh/id_localAuto
(Hit return when prompted for a passphrase)
I set up a remoteAuto Host entry in .ssh/config:
Host remoteAuto
    HostName remote.machine.edu
    IdentityFile ~/.ssh/id_localAuto
and set up remote.machine.edu:.ssh/authorized_keys with:
...
from="192.168.1.777" ssh-rsa ABCDEFGabcdefg....
...
Then ssh doesn't need the agent-backed authentication provided by ssh-agent or keychain, so you can use commands like:
scp -p remoteAuto:watchdog ./watchdog_remote
rsync -Ca remoteAuto/stuff/* remote_mirror
svn svn+ssh://remoteAuto/path
svn update
...
Assuming that you have already configured your SSH settings and that the script works fine from a terminal, using keychain is definitely the easiest way to ensure the script works fine from crontab as well.
Since keychain is not included in most Unix/Linux distributions, here is the step-by-step procedure.
1. Download the appropriate rpm package depending on your OS version from http://pkgs.repoforge.org/keychain/. Example for CentOS 6:
wget http://pkgs.repoforge.org/keychain/keychain-2.7.0-1.el6.rf.noarch.rpm
2. Install the package:
sudo rpm -Uvh keychain-2.7.0-1.el6.rf.noarch.rpm
3. Generate keychain files for your SSH key; they will be located in the ~/.keychain directory. Example for id_rsa:
keychain ~/.ssh/id_rsa
4. Add the following line to your script anywhere before the first command that is using SSH authentication:
source ~/.keychain/$HOSTNAME-sh
I personally tried to avoid using additional programs for this, but nothing else I tried worked, and this works just fine.
Inspired by some of the other answers here (particularly vpk's) I came up with the following crontab entry, which doesn't require an external script:
PATH=/usr/bin:/bin:/usr/sbin:/sbin
* * * * * SSH_AUTH_SOCK=$(lsof -a -p $(pgrep ssh-agent) -U -F n | sed -n 's/^n//p') ssh hostname remote-command-here
Here is a solution that will work if you can't use keychain and if you can't start an ssh-agent from your script (for example, because your key is passphrase-protected).
Run this once:
nohup ssh-agent > ~/.ssh-agent-file &
. ~/.ssh-agent-file
ssh-add   # you'd enter your passphrase here
In the script you are running from cron:
# start of script
. ${HOME}/.ssh-agent-file
# now your key is available
Of course this allows anyone who can read '~/.ssh-agent-file' and the corresponding socket to use your ssh credentials, so use with caution in any multi-user environment.
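As an extra precaution (not part of the original answer), you could at least make the environment file readable only by you; the agent socket itself already lives in a mode-700 directory under /tmp created by ssh-agent:
chmod 600 ~/.ssh-agent-file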
Your solution works, but it will spawn a new agent process every time, as some other answers have already pointed out.
I faced similar issues, and I found this blog post useful, as well as the shell script by Wayne Walker mentioned in the blog, on GitHub.
Good luck!
Not enough reputation to comment on markshep's answer, so I'm just adding a simpler solution: lsof was not listing the socket for me without sudo, but find is enough:
* * * * * SSH_AUTH_SOCK="$(find /tmp/ -type s -path '/tmp/ssh-*/agent.*' -user $(whoami) 2>/dev/null)" ssh-command
The find command searches /tmp for sockets whose full path matches that of the ssh-agent socket files and which are owned by the current user. It redirects stderr to /dev/null to ignore the many permission-denied errors that find usually produces on directories it doesn't have access to.
The solution assumes only one socket will be found for that user.
The target and path match might need modification for other distributions/ssh versions/configurations, but that should be straightforward.
