Running SSH Agent when starting Git Bash on Windows

I am using Git Bash. I have to run
eval `ssh-agent.exe`
ssh-add /my/ssh/location/
every time I start a new Git Bash session.
Is there a way to set up the ssh-agent permanently? Or does Windows have a good way to manage SSH keys?
I'm new to this, so a detailed walkthrough would be appreciated!

2013: In a Git Bash session, you can add a script to ~/.profile or ~/.bashrc (with ~ usually set to %USERPROFILE%) so that the session launches the ssh-agent automatically.
If the file doesn't exist, just create it.
This is what GitHub describes in "Working with SSH key passphrases".
The "Auto-launching ssh-agent on Git for Windows" section of that article has a robust script that checks if the agent is running or not.
Below is just a snippet, see the GitHub article for the full solution.
# This is just a snippet. See the article above.
if ! agent_is_running; then
    agent_start
    ssh-add
elif ! agent_has_keys; then
    ssh-add
fi
Other Resources:
"Getting ssh-agent to work with git run from windows command shell" has a similar script, but I'd refer to the GitHub article above primarily, which is more robust and up to date.
hardsetting adds in the comments (2018):
If you want to enter the passphrase the first time you need it, and not when opening a shell, the cleanest way to me is:
removing the ssh-add from the .bash_profile, and
adding "AddKeysToAgent yes" to your .ssh/config file (see "How to make ssh-agent automatically add the key on demand?").
This way you don't even have to remember to run ssh-add.
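For reference, a minimal sketch of such a config entry (the wildcard Host pattern is an assumption; scope it to specific hosts if you prefer):

Host *
    AddKeysToAgent yes
    # Hypothetical key path; point this at your actual private key
    IdentityFile ~/.ssh/id_rsa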
And Tao adds in the comments (2022):
It's worth noting why this script makes particular sense on Windows, vs (for example) the more standard Linux-y script noted by @JigneshGohel in another answer:
By not relying on the SSH_AGENT_PID at all, this script works across different msys & cygwin environments.
An agent can be started in msys2, and still used in git bash, as the SSH_AUTH_SOCK path can be reached in either environment.
The PID from one environment cannot be queried in the other, so a PID-based approach keeps resetting/creating new ssh-agent processes on each switch.
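As a hedged illustration of that socket-based approach (a sketch, not the article's exact code), an agent's liveness can be probed through SSH_AUTH_SOCK alone via ssh-add's exit status:

# ssh-add -l exits 0 (agent has keys) or 1 (agent, no keys) when an agent
# answers on $SSH_AUTH_SOCK, and 2 when no agent is reachable -- no PID needed.
ssh-add -l >/dev/null 2>&1
if [ $? -ne 2 ]; then
    echo "agent reachable via \$SSH_AUTH_SOCK"
fi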

P.S.: These instructions are in the context of a Bash shell opened in the Windows 10 Linux Subsystem (WSL) and don't cover symlinking SSH keys generated in Windows into Bash on Ubuntu on Windows.
1) Update your .bashrc by adding the following to it:
# Set up ssh-agent
SSH_ENV="$HOME/.ssh/environment"

function start_agent {
    echo "Initializing new SSH agent..."
    touch "${SSH_ENV}"
    chmod 600 "${SSH_ENV}"
    /usr/bin/ssh-agent | sed 's/^echo/#echo/' >> "${SSH_ENV}"
    . "${SSH_ENV}" > /dev/null
    /usr/bin/ssh-add
}

# Source SSH settings, if applicable
if [ -f "${SSH_ENV}" ]; then
    . "${SSH_ENV}" > /dev/null
    kill -0 $SSH_AGENT_PID 2>/dev/null || {
        start_agent
    }
else
    start_agent
fi
2) Then run $ source ~/.bashrc to reload your config.
The above steps have been taken from https://github.com/abergs/ubuntuonwindows#2-start-an-bash-ssh-agent-on-launch
3) Create an SSH config file if one isn't present. Use the following command to create a new one:
cd ~/.ssh && touch config
4) Add the following to ~/.ssh/config:

Host github.com-<YOUR_GITHUB_USERNAME>
    HostName github.com
    User git
    PreferredAuthentications publickey
    IdentityFile ~/.ssh/id_work_gmail # path to your private key
    AddKeysToAgent yes

Host csexperimental.abc.com
    IdentityFile ~/.ssh/id_work_gmail # path to your private key
    AddKeysToAgent yes

More hosts and GitHub configs can be added in the same manner.
5) Add your key to the SSH agent with $ ssh-add ~/.ssh/id_work_gmail; after that you should be able to connect to your GitHub account or remote host over SSH. For example, in the context of the above config:
$ ssh github.com-<YOUR_GITHUB_USERNAME>
or
$ ssh <USER>@csexperimental.abc.com
Adding the key to the SSH agent should need to be performed only once.
6) Now log out of your Bash session on the Windows Linux Subsystem, i.e. exit all Bash consoles, then start a new console and try to SSH to your GitHub host or the other host configured in the SSH config file. It should work without needing any extra steps.
Note:
If you face "Bad owner or permissions on ~/.ssh/config", update the permissions using the command chmod 600 ~/.ssh/config. Reference: https://serverfault.com/a/253314/98910
For the above steps to work you will need OpenSSH 7.2 or newer. If you have an older one, you can upgrade it using the steps mentioned at https://stackoverflow.com/a/41555393/936494
The same details can be found in the gist Windows 10 Linux Subsystem SSH-agent issues
Thanks.

If the goal is to be able to push to a GitHub repo whenever you want, then in Windows, under C:\Users\tiago\.ssh (where the keys are stored, at least in my case), create a file named config and add the following to it:
Host github.com
    HostName github.com
    User your_user_name
    IdentityFile ~/.ssh/your_file_name
Then simply open Git Bash and you'll be able to push without having to manually start the ssh-agent and adding the key.

I found the smoothest way to achieve this was using Pageant as the SSH agent and plink.
You need to have a PuTTY session configured for the hostname that is used in your remote.
You will also need plink.exe, which can be downloaded from the same site as PuTTY.
And you need Pageant running with your key loaded. I have a shortcut to Pageant in my startup folder that loads my SSH key when I log in.
When you install git-scm you can then specify that it use Tortoise/plink rather than OpenSSH.
The net effect is you can open git-bash whenever you like and push/pull without being challenged for passphrases.
The same applies to PuTTY and WinSCP sessions when Pageant has your key loaded. It makes life a hell of a lot easier (and more secure).
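If you didn't pick that option at install time, one way to point Git at plink afterwards is the GIT_SSH environment variable; the path below is an assumption, so adjust it to wherever plink.exe actually lives:

# In ~/.bashrc (Git Bash): make git invoke plink, which defers to Pageant
export GIT_SSH='/c/Program Files/PuTTY/plink.exe'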

I could not get this to work based on the best answer, probably because I'm such a PC noob and missing something obvious. But just FYI, in case it helps someone as challenged as me, what FINALLY worked was from one of the links here (referenced in the answers). This involved simply pasting the following into my .bash_profile:
env=~/.ssh/agent.env

agent_load_env () { test -f "$env" && . "$env" >| /dev/null ; }

agent_start () {
    (umask 077; ssh-agent >| "$env")
    . "$env" >| /dev/null ; }

agent_load_env

# agent_run_state: 0=agent running w/ key; 1=agent w/o key; 2=agent not running
agent_run_state=$(ssh-add -l >| /dev/null 2>&1; echo $?)

if [ ! "$SSH_AUTH_SOCK" ] || [ $agent_run_state = 2 ]; then
    agent_start
    ssh-add
elif [ "$SSH_AUTH_SOCK" ] && [ $agent_run_state = 1 ]; then
    ssh-add
fi

unset env
I probably have something configured weird, but I was not successful when I added it to my .profile or .bashrc. The other real challenge I've run into is that I'm not an admin on this computer and can't change the environment variables without getting it approved by IT, so this is a solution for those who can't access that.
You know it's working if you're prompted for your ssh password when you open git bash. Hallelujah something finally worked.

Put this in your ~/.bashrc (or a file that's sourced from it), which will stop it from being run multiple times unnecessarily per shell:
if [ -z "$SSH_AGENT_PID" ]; then
    eval `ssh-agent -s`
fi
And then add "AddKeysToAgent yes" to ~/.ssh/config:
Host *
    AddKeysToAgent yes
ssh to your server (or git pull) normally and you'll only be asked for the password/passphrase once per session.

As I don't like using PuTTY on Windows as a workaround, I created a very simple utility, ssh-agent-wrapper. It scans your .ssh folder and adds all your keys to the agent. You simply need to put it into the Windows startup folder for it to work.
Assumptions:
ssh-agent is in the path
ssh-add is in the path (both by choosing the "RED" option when installing Git)
private keys are in the %USERPROFILE%/.ssh folder
private key names start with id (e.g. id_rsa)

I wrote a script and created a git repository, which solves this issue here: https://github.com/Cazaimi/boot-github-shell-win .
The readme contains instructions on how to set the script up, so that each time you open a new window/tab the private key is added to ssh-agent automatically, and you don't have to worry about this, if you're working with remote git repositories.

Create a new .bashrc file in your ~ directory.
There you can put the commands you want executed every time you start the bash shell.
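A minimal sketch of what such a ~/.bashrc could contain (this starts a fresh agent for every shell; see the accepted answer above for a version that reuses a running agent):

# Start an ssh-agent for this shell and load the default keys
eval $(ssh-agent -s) > /dev/null
ssh-add 2>/dev/null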

Simple two-line solution from this answer:
For sh, bash, etc:
# ~/.profile
if ! pgrep -q -U `whoami` -x 'ssh-agent'; then ssh-agent -s > ~/.ssh-agent.sh; fi
. ~/.ssh-agent.sh
For csh, tcsh, etc:
# ~/.cshrc
sh -c 'if ! pgrep -q -U `whoami` -x 'ssh-agent'; then ssh-agent -c > ~/.ssh-agent.tcsh; fi'
eval `cat ~/.ssh-agent.tcsh`

Related

Run SSH on Jenkins from file in Git

I have a file in Git with the following shell script:
File Name = Job.sh
echo "Warehouse script starting"
ssh -n username@server_name "mkdir -p ~/directory_name/folder_name/file_name"
Under Execute Shell in Jenkins I am running:
sh job.sh
The echo command gets printed in the console, but the job is not doing ssh into username@server_name and creating the directory. Appreciate any feedback.
Try the same command when connected directly to the Jenkins agent (using the account used by Jenkins on that agent).
The goal is to check whether that user (on that agent server) has the right public/private key pair in its $HOME/.ssh folder.
And double-check that the public key is in servername:~username/.ssh/authorized_keys.
Make sure the private key is not encrypted, to avoid having to deal with ssh-agent and passphrase caching, at least for now, while testing your setup.
Note: the -n option (which prevents reading from stdin) is usually for ssh commands executed in the background. You might not need it in your case.
Try also to add #!/bin/bash -x at the beginning of your script (assuming you do have a bash) in order to print all lines executed.
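Putting those suggestions together, a sketch of a revised Job.sh (the username/server/path placeholders are the question's own):

#!/bin/bash -x
# -x traces every line executed; -n dropped since the ssh is not backgrounded
echo "Warehouse script starting"
ssh username@server_name "mkdir -p ~/directory_name/folder_name/file_name"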

How to SSH from local linux into specific directory on windows 10 remote

I want to ssh from my local linux computer into a specific directory on a windows 10 remote. The shell that is used on the remote is git bash. I don't want to keep changing the directory every time I log into my remote using ssh.
for linux remotes this is easily done using something like this:
ssh -t user@x.x.x.x "cd /targetDir ; \$SHELL --login"
The question is how can the same thing be achieved for Windows 10 remotes? If nothing else works I would also accept changing the default entry point in git bash for any ssh sessions on the remote.
Please note that I am not looking for help setting up ssh (already works). I just want to jump right into a specific directory when a session is started.
I was able to figure this thing out myself. The following command gets the job done. Using double and single quotes together is required to make it work (in no particular order).
ssh -t user@x.x.x.x "'cd /targetDir ; bash'"

How to prevent that the password to decrypt the private key has to be entered every time when using Git Bash on Windows?

I have an automated build service which downloads from a private Git repository.
The problem is that when it tries to clone the repository it needs to provide the password, which is not remembered; because there is no human interaction, it waits forever for the password.
How can I force it to remember the credentials from id_rsa.pub?
For Windows users, just a note that this is how I set up the Git Bash environment to log me in once when I start it up. I edit my ~/.bashrc file:
eval `ssh-agent`
ssh-add
So when I start Git Bash, it looks like:
Welcome to Git (version 1.7.8-preview20111206)
(etc)
Agent pid 3376
Enter passphrase for /c/Users/starmonkey/.ssh/id_dsa:
Identity added: /c/Users/starmonkey/.ssh/id_dsa (/c/Users/starmonkey/.ssh/id_dsa)
And now I can ssh to other servers without logging in every time.
This answer explains how to get the GitHub username and password to be stored permanently, not the SSH key passphrase.
In Windows, just run
$ git config --global credential.helper wincred
This means that the next time you push, you'll enter your username and password as usual, but they'll be saved in Windows credentials. You won't have to enter them again, after that.
As in, Push to GitHub without entering username and password every time (Git Bash on Windows).
I prefer not to have to type my SSH passphrase when opening new terminals; unfortunately starmonkey's solution requires the password to be typed in for every session. Instead, I have this in my .bash_profile file:
# Note: ~/.ssh/environment should not be used, as it
#       already has a different purpose in SSH.

env=~/.ssh/agent.env

# Note: Don't bother checking SSH_AGENT_PID. It's not used
#       by SSH itself, and it might even be incorrect
#       (for example, when using agent-forwarding over SSH).

agent_is_running() {
    if [ "$SSH_AUTH_SOCK" ]; then
        # ssh-add returns:
        #   0 = agent running, has keys
        #   1 = agent running, no keys
        #   2 = agent not running
        ssh-add -l >/dev/null 2>&1 || [ $? -eq 1 ]
    else
        false
    fi
}

agent_has_keys() {
    ssh-add -l >/dev/null 2>&1
}

agent_load_env() {
    . "$env" >/dev/null
}

agent_start() {
    (umask 077; ssh-agent >"$env")
    . "$env" >/dev/null
}

if ! agent_is_running; then
    agent_load_env
fi

# If your keys are not stored in ~/.ssh/id_rsa or ~/.ssh/id_dsa, you'll need
# to paste the proper path after ssh-add
if ! agent_is_running; then
    agent_start
    ssh-add
elif ! agent_has_keys; then
    ssh-add
fi

unset env
This will remember my passphrase for new terminal sessions as well; I only have to type it in once when I open my first terminal after booting.
I'd like to credit where I got this; it's a modification of somebody else's work, but I can't remember where it came from. Thanks anonymous author!
Update 2019-07-01: I don't think all this is necessary. I now consistently have this working by ensuring my .bash_profile file runs the ssh-agent:
eval $(ssh-agent)
Then I set up an ssh configuration file like this:
touch ~/.ssh/config
chmod 600 ~/.ssh/config
echo 'AddKeysToAgent yes' >> ~/.ssh/config
If I understand the question correctly, you're already using an authorized SSH key in the build service, but you want to avoid having to type the passphrase for every clone?
I can think of two ways of doing this:
If your build service is being started interactively: Before you start the build service, start ssh-agent with a sufficiently long timeout (-t option). Then use ssh-add (msysGit should have those) to add all the private keys you need before you start your build service. You'd still have to type out all the passphrases, but only once per service launch.
If you want to avoid having to type the passphrases out at all, you can always remove the passphrases from the SSH keys, as described in https://serverfault.com/questions/50775/how-do-i-change-my-private-key-passphrase, by setting an empty new passphrase. This should do away with the password prompt entirely, but it is even less secure than the previous option.
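For the second option, the passphrase can be removed in place with ssh-keygen; the key path below is an assumption:

# Prompts for the old passphrase, then twice for the new one
# (press Enter both times to set an empty passphrase)
ssh-keygen -p -f ~/.ssh/id_rsa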
When I tried to push my code, I got the below error:
$ git push origin dev
remote: Too many invalid password attempts. Try logging in through the website with your password.
fatal: unable to access 'https://naushadqamar-1#bitbucket.org/xxxx/xxxx-api.git/': The requested URL returned error: 403
After a few hours of research, I found I needed to use the below command:
$ git config --global credential.helper cache
After executing the above command, I got a prompt to enter my GitHub username and password. After providing the correct credentials, I was able to push my code.
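Note that the cache helper only keeps credentials in memory for 15 minutes by default; if that's too short, a longer timeout can be set like this (the one-hour value is just an example):

# Keep cached credentials for 3600 seconds instead of the default 900
git config --global credential.helper 'cache --timeout=3600'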
The right solution is:
Run the Windows default terminal, cmd, and get the directory of your user profile
echo %USERPROFILE%
Run Git Bash in the directory above and create the .bashrc file with the command
echo "" > .bashrc
Open the .bashrc file with your favourite text editor and paste code from GitHub Help into that file:
env=~/.ssh/agent.env
...
COPY WHOLE CODE FROM URL - I can't add it to Stack Overflow because it breaks layout... OMG!
Restart Git Bash; it asks you for your password (only the first time) and you're done. No password bothering again.
You need to create the authorized_keys file under the .ssh folder of the user under which you are going to connect to the repository server. For example, assuming you use username buildservice on repo.server you can run:
cd ~buildservice
mkdir .ssh
cat id_rsa.pub >> .ssh/authorized_keys
Then you have to check the following things:
That the corresponding id_rsa private key is present in buildservice@build.server:~/.ssh/id_rsa.
That the fingerprint of repo.server is stored in the buildservice@build.server:~/.ssh/known_hosts file. Typically that will be done after the first attempt of ssh to connect to repo.server.
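If you'd rather pre-seed that fingerprint than rely on the first interactive attempt, a hedged option is ssh-keyscan (repo.server is the answer's placeholder):

# Append repo.server's host key to known_hosts non-interactively
ssh-keyscan repo.server >> ~/.ssh/known_hosts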

ssh-agent and crontab -- is there a good way to get these to meet?

I wrote a simple script which mails out svn activity logs nightly to our developers. Until now, I've run it on the same machine as the svn repository, so I didn't have to worry about authentication, I could just use svn's file:/// address style.
Now I'm running the script on a home computer, accessing a remote repository, so I had to change to svn+ssh:// paths. With ssh-key nicely set up, I don't ever have to enter passwords for accessing the svn repository under normal circumstances.
However, crontab did not have access to my ssh-keys / ssh-agent. I've read about this problem a few places on the web, and it's also alluded to here, without resolution:
Why ssh fails from crontab but succedes when executed from a command line?
My solution was to add this to the top of the script:
### TOTAL HACK TO MAKE SSH-KEYS WORK ###
eval `ssh-agent -s`
This seems to work under MacOSX 10.6.
My question is, how terrible is this, and is there a better way?
keychain is what you need! Just install it and add the following code to your .bash_profile:
keychain ~/.ssh/id_dsa
Then use the code below in your script to load the ssh-agent environment variables:
. ~/.keychain/$HOSTNAME-sh
Note: keychain also generates code for the csh and fish shells.
In addition, if your key has a passphrase, keychain will ask for it only once (valid until you reboot the machine or kill the ssh-agent).
Copied answer from https://serverfault.com/questions/92683/execute-rsync-command-over-ssh-with-an-ssh-agent-via-crontab
When you run ssh-agent -s, it launches a background process that you'll need to kill later. So, the minimum is to change your hack to something like:
eval `ssh-agent -s`
svn stuff
kill $SSH_AGENT_PID
However, I don't understand how this hack is working. Simply running an agent without also running ssh-add will not load any keys. Perhaps MacOS' ssh-agent is behaving differently than its manual page says it does.
I had a similar problem. My script (that relied upon ssh keys) worked when I ran it manually but failed when run with crontab.
Manually defining the appropriate key with
ssh -i /path/to/key
didn't work.
But eventually I found out that SSH_AUTH_SOCK was empty when the crontab was running SSH. I wasn't exactly sure why, but I just ran
env | grep SSH
copied the returned value, and added this definition to the head of my crontab:
SSH_AUTH_SOCK="/tmp/value-you-get-from-above-command"
I'm out of depth as to what's happening here, but it fixed my problem. The crontab runs smoothly now.
One way to recover the PID and socket of a running ssh-agent would be:
SSH_AGENT_PID=`pgrep -U $USER ssh-agent`
for PID in $SSH_AGENT_PID; do
    let "FPID = $PID - 1"
    FILE=`find /tmp -path "*ssh*" -type s -iname "agent.$FPID"`
    export SSH_AGENT_PID="$PID"
    export SSH_AUTH_SOCK="$FILE"
done
This of course presumes that you have pgrep installed on the system and that only one ssh-agent is running; in the case of multiple ones, it will take the one pgrep finds last.
My solution, based on pra's, slightly improved to kill the process even on script failure:
eval `ssh-agent`
function cleanup {
    /bin/kill $SSH_AGENT_PID
}
trap cleanup EXIT
ssh-add
svn-stuff
Note that I must call ssh-add on my machine (Scientific Linux 6).
To set up automated processes without automated password/passphrase hacks,
I use a separate IdentityFile that has no passphrase, and restrict the target machines' authorized_keys entries with a from="automated.machine.com" prefix, etc.
I created a public-private keyset for the sending machine without a passphrase:
ssh-keygen -f .ssh/id_localAuto
(Hit return when prompted for a passphrase)
I set up a remoteAuto Host entry in .ssh/config:
Host remoteAuto
    HostName remote.machine.edu
    IdentityFile ~/.ssh/id_localAuto
and set up remote.machine.edu:.ssh/authorized_keys with:
...
from="192.168.1.777" ssh-rsa ABCDEFGabcdefg....
...
Then ssh doesn't need the externally authenticated authorization provided by ssh-agent or keychain, so you can use commands like:
scp -p remoteAuto:watchdog ./watchdog_remote
rsync -Ca remoteAuto/stuff/* remote_mirror
svn svn+ssh://remoteAuto/path
svn update
...
Assuming that you already configured SSH settings and that script works fine from terminal, using the keychain is definitely the easiest way to ensure that script works fine in crontab as well.
Since keychain is not included in most Unix/Linux derivatives, here is the step-by-step procedure.
1. Download the appropriate rpm package depending on your OS version from http://pkgs.repoforge.org/keychain/. Example for CentOS 6:
wget http://pkgs.repoforge.org/keychain/keychain-2.7.0-1.el6.rf.noarch.rpm
2. Install the package:
sudo rpm -Uvh keychain-2.7.0-1.el6.rf.noarch.rpm
3. Generate keychain files for your SSH key, they will be located in ~/.keychain directory. Example for id_rsa:
keychain ~/.ssh/id_rsa
4. Add the following line to your script anywhere before the first command that is using SSH authentication:
source ~/.keychain/$HOSTNAME-sh
I personally tried to avoid using additional programs for this, but everything else I tried didn't work. This worked just fine.
Inspired by some of the other answers here (particularly vpk's) I came up with the following crontab entry, which doesn't require an external script:
PATH=/usr/bin:/bin:/usr/sbin:/sbin
* * * * * SSH_AUTH_SOCK=$(lsof -a -p $(pgrep ssh-agent) -U -F n | sed -n 's/^n//p') ssh hostname remote-command-here
Here is a solution that will work if you can't use keychain and if you can't start an ssh-agent from your script (for example, because your key is passphrase-protected).
Run this once:
nohup ssh-agent > ~/.ssh-agent-file &
. ~/.ssh-agent-file
ssh-add # you'd enter your passphrase here
In the script you are running from cron:
# start of script
. ${HOME}/.ssh-agent-file
# now your key is available
Of course this allows anyone who can read '~/.ssh-agent-file' and the corresponding socket to use your ssh credentials, so use with caution in any multi-user environment.
Your solution works but it will spawn a new agent process every time as already indicated by some other answer.
I faced similar issues and I found this blogpost useful as well as the shell script by Wayne Walker mentioned in the blog on github.
Good luck!
Not enough reputation to comment on @markshep's answer; just wanted to add a simpler solution. lsof was not listing the socket for me without sudo, but find is enough:
* * * * * SSH_AUTH_SOCK="$(find /tmp/ -type s -path '/tmp/ssh-*/agent.*' -user $(whoami) 2>/dev/null)" ssh-command
The find command searches the /tmp directory for sockets whose full path name matches that of ssh-agent socket files and which are owned by the current user. It redirects stderr to /dev/null to ignore the many permission-denied errors usually produced by running find on directories it doesn't have access to.
The solution assumes only one socket will be found for that user.
The target and path match might need modification for other distributions/ssh versions/configurations, should be straightforward though.

How can I ssh directly to a particular directory?

I often have to login to one of several servers and go to one of several directories on those machines. Currently I do something of this sort:
localhost ~]$ ssh somehost
Welcome to somehost!
somehost ~]$ cd /some/directory/somewhere/named/Foo
somehost Foo]$
I have scripts that can determine which host and which directory I need to get into but I cannot figure out a way to do this:
localhost ~]$ go_to_dir Foo
Welcome to somehost!
somehost Foo]$
Is there an easy, clever or any way to do this?
You can do the following:
ssh -t xxx.xxx.xxx.xxx "cd /directory_wanted ; bash --login"
This way, you will get a login shell right on the directory_wanted.
Explanation
-t Force pseudo-terminal allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services.
Multiple -t options force tty allocation, even if ssh has no local tty.
If you don't use -t then no prompt will appear.
If you don't add ; bash then the connection will get closed and return control to your local machine.
If you don't add bash --login then it will not use your configs, because it's not a login shell.
You could add
cd /some/directory/somewhere/named/Foo
to your .bashrc file (or .profile or whatever you call it) on the other host. That way, no matter what you do or where you ssh from, whenever you log onto that server, it will cd to the proper directory for you, and all you have to do is use ssh like normal.
Of course, rogeriopvl's solution works too, but it's a tad more verbose, and you have to remember to do it every time (unless you make an alias), so it seems a bit less "fun".
My preferred approach is using the SSH config file (described below), but there are a few possible solutions depending on your usages.
Command Line Arguments
I think the best answer for this approach is christianbundy's reply to the accepted answer:
ssh -t example.com "cd /foo/bar; exec \$SHELL -l"
Using double quotes will allow you to use variables from your local machine, unless they are escaped (as $SHELL is here). Alternatively, you can use single quotes, and all of the variables you use will be the ones from the target machine:
ssh -t example.com 'cd /foo/bar; exec $SHELL -l'
Bash Function
You can simplify the command by wrapping it in a bash function. Let's say you just want to type this:
sshcd example.com /foo/bar
You can make this work by adding this to your ~/.bashrc:
sshcd () { ssh -t "$1" "cd \"$2\"; exec \$SHELL -l"; }
If you are using a variable that exists on the remote machine for the directory, be sure to escape it or put it in single quotes. For example, this will cd to the directory that is stored in the JBOSS_HOME variable on the remote machine:
sshcd example.com \$JBOSS_HOME
SSH Config File
If you'd like to see this behavior all the time for specific (or any) hosts with the normal ssh command without having to use extra command line arguments, you can set the RequestTTY and RemoteCommand options in your ssh config file.
For example, I'd like to type only this command:
ssh qaapps18
but want it to always behave like this command:
ssh -t qaapps18 'cd $JBOSS_HOME; exec $SHELL'
So I added this to my ~/.ssh/config file:
Host *apps*
    RequestTTY yes
    RemoteCommand cd $JBOSS_HOME; exec $SHELL
Now this rule applies to any host with "apps" in its hostname.
For more information, see http://man7.org/linux/man-pages/man5/ssh_config.5.html
I've created a tool to SSH and CD into a server consecutively – aptly named sshcd. For the example you've given, you'd simply use:
sshcd somehost:/some/directory/somewhere/named/Foo
Let me know if you have any questions or problems!
Based on additions to @rogeriopvl's answer, I suggest the following:
ssh -t xxx.xxx.xxx.xxx "cd /directory_wanted && bash"
Chaining commands with && makes the next command run only when the previous one succeeded (as opposed to using ;, which executes commands sequentially regardless). This is particularly useful when you need to cd to a directory before performing a command.
Imagine doing the following:
/home/me$ cd /usr/share/teminal; rm -R *
The directory teminal doesn't exist, which causes you to stay in the home directory and remove all the files in there with the following command.
If you use &&:
/home/me$ cd /usr/share/teminal && rm -R *
The command will fail after not finding the directory.
In my very specific case, I just wanted to execute a command in a remote host, inside a specific directory from a Jenkins slave machine:
ssh myuser@mydomain
cd /home/myuser/somedir
./commandThatMustBeRunInside_somedir
exit
But my machine couldn't perform the ssh (it couldn't allocate a pseudo-tty, I suppose) and kept giving me the following error:
Pseudo-terminal will not be allocated because stdin is not a terminal
I could get around this issue by passing "cd to dir + my command" as a parameter of the ssh command (so as not to have to allocate a pseudo-terminal) and by passing the option -T to explicitly tell the ssh command that I didn't need pseudo-terminal allocation.
ssh -T myuser@mydomain "cd /home/myuser/somedir; ./commandThatMustBeRunInside_somedir"
I use the environment variable CDPATH.
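To illustrate (a sketch; the paths are assumptions): with CDPATH set on the remote host, a bare cd resolves subdirectory names against the listed prefixes, so a deep directory is one short cd away after login:

# In the remote ~/.bashrc: let "cd Foo" also search these prefixes
export CDPATH=.:$HOME:/some/directory/somewhere/named
# After login, "cd Foo" now lands in /some/directory/somewhere/named/Foo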
Going one step further with the -t idea, I keep a set of scripts calling the one below to go to specific places on my frequently visited hosts. I keep them all in ~/bin and keep that directory in my path.
#!/bin/bash
# does ssh session switching to particular directory
# $1, hostname from config file
# $2, directory to move to after login
# can save this as say 'con' then
# make another script calling this one, e.g.
# con myhost repos/i2c
ssh -t $1 "cd $2; exec \$SHELL --login"
My answer may differ from what you really want, but I write it here as it may be useful for some people. In my solution you enter the directory once, and then every new ssh session goes to the same dir (after the first logout).
How to ssh to the same directory you were in at your last login:
(I assume you use bash on the remote node.)
Add this line to your ~/.bash_logout on the remote node(!):
echo $PWD > ~/.bash_lastpwd
and these lines to the ~/.bashrc file (still on the remote node!)
if [ -f ~/.bash_lastpwd ]; then
    cd $(cat ~/.bash_lastpwd)
fi
This way you save your current path on every logout, and .bashrc puts you into that directory after login.
PS: You can tweak it further, like using the SSH_CLIENT variable to decide whether to go into that directory, so you can differentiate between local logins and ssh, or even between different ssh clients.
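A sketch of that SSH_CLIENT tweak (an assumption on my part, not the answer's exact code), still in the remote ~/.bashrc:

# Only jump back to the saved directory for SSH sessions, not local logins
if [ -n "$SSH_CLIENT" ] && [ -f ~/.bash_lastpwd ]; then
    cd "$(cat ~/.bash_lastpwd)"
fi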
Another way of going directly to a directory after logging in is to create an alias. When you log into your system, just type that alias and you will be in that directory.
Example: alias myfolder='cd /var/www/Folder'
After you log in to your system, type that alias (this works from any part of the system).
If the alias is not in bashrc it will only work for the current session, so add it to your bashrc to use it in the future.
$ myfolder => takes you to that folder
I know this has been answered ages ago, but I found the question while trying to incorporate an ssh login into a bash script which, once logged in, runs a few commands, logs back out, and continues with the bash script. The simplest way I found, which hasn't been mentioned elsewhere because it is so trivial, is this:
#!/bin/bash
sshpass -p "password" ssh user@server 'cd /path/to/dir;somecommand;someothercommand;exit;'
Connect With User
In case you don't know this, you can use the following to connect by specifying both user and host:
ssh -t <user>@<host domain or IP> "cd /path/to/directory; bash --login"
Example: ssh -t admin@test.com "cd public_html; bash --login"
You can also append further commands to be executed on every login inside the double quotes, with a ; before each command.
Unfortunately, the suggested solution (@rogeriopvl's) doesn't work when you use multiple hops, so I found another one.
On remote machine add into ~/.bashrc the following:
[ "x$CDTO" != "x" ] && cd $CDTO
This allows you to specify the desired target directory on command line in this way:
ssh -t host1 ssh -t host2 "CDTO=/desired_directory exec bash --login"
Sure, this way can be used for a single hop too.
This solution can be combined with the useful tip of @redseven for greater flexibility (if there's no $CDTO, go to the saved directory, if it exists).
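A sketch of that combination (my wording, not @redseven's): in the remote ~/.bashrc, an explicit $CDTO wins, otherwise fall back to the saved directory:

if [ -n "$CDTO" ]; then
    cd "$CDTO"
elif [ -f ~/.bash_lastpwd ]; then
    cd "$(cat ~/.bash_lastpwd)"
fi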
SSH itself provides a means of communication, it does not know anything about directories. Since you can specify which remote command to execute (this is - by default - your shell), I'd start there.
Simply modify your home directory with the command:
usermod -d /newhome username
