Specify private key in SSH as string - bash

I can connect to a server via SSH using the -i option to specify the private key:
ssh -i ~/.ssh/id_dsa user@hostname
I am creating a script that takes the id_dsa text from a database, but I am not sure how I can give that string to SSH. I would need something like:
ssh --option $STRING user@hostname
Where $STRING contains the value of id_dsa. I need to know the --option if there is one.

Try the following:
echo "$KEY" | ssh -i /dev/stdin username@host command
The key doesn't appear in a ps listing, but because stdin is redirected this is only useful for single commands or tunnels.
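For example, a minimal end-to-end sketch (fetch_key_from_db is a hypothetical placeholder for however you pull the key out of your database; quoting "$KEY" preserves the key's newlines):
KEY="$(fetch_key_from_db)"   # hypothetical retrieval command
echo "$KEY" | ssh -i /dev/stdin user@hostname 'uptime'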

There is no such switch, as it would leak sensitive information: if there were, anyone could get your private key by running a simple ps command.
EDIT (because of the added details in the comment):
You really should store the key in a temporary file. Make sure you set the permissions correctly before writing to the file, if you do not use a command like mktemp to create it.
Make sure you run the broker (or the agent, in the case of OpenSSH) process and load the key using <whatever command you use to fetch it from the database> | ssh-add -
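A sketch of that with OpenSSH, again with fetch_key_from_db standing in for your own retrieval command:
eval "$(ssh-agent -s)"           # start the agent and export its env vars
fetch_key_from_db | ssh-add -    # load the private key from stdin
ssh user@hostname                # the agent now supplies the key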

Passing a crypto key as a string is not advisable, but for the sake of the question: I came across the same situation, where I needed to pass the key as a string in a script. I could have used a key stored in a file too, but the script had to be flexible and self-contained, so I assigned the key to a variable and echoed it as follows:
#!/bin/bash
KEY="<paste your private key here>"
echo "${KEY}" | ssh -q -i /dev/stdin username@IP 'hostnamectl'
exit 0
Notes:
-q suppresses all warnings
By the way, the catch in the script above is that the key is embedded in the script itself, so anyone who can read the script can read the key; keep the script's file permissions restrictive. A variant is sometimes suggested that pipes the echo through grep to "hide" the key:
#!/bin/bash
KEY="<paste your private key here>"
echo "${KEY}" | grep -qw "less" | ssh -q -i /dev/stdin username@IP 'hostnamectl'
exit 0
Note, however, that grep -q writes nothing to stdout, so ssh receives an empty key in that pipeline; if it appears to work, the authentication is actually coming from some other credential (an agent or a default key file), not from $KEY. Stick with the plain echo | ssh form above.

I was looking at the same problem. Adding the private key content to the ssh command via stdin did not work for me. I found out that it's possible to add the private key file contents to ssh-agent using the command ssh-add. This will let you ssh into the remote host without explicitly specifying the identity file. My particular use case was that I didn't want to store the SSH key in cleartext on my machine and was dynamically getting it from a secrets vault. This answer is mostly a collection of other answers on StackOverflow.
ssh-agent is a program to hold private keys used for public key
authentication. Through use of environment variables the agent can
be located and automatically used for authentication when logging
in to other machines using ssh
Source
This is what I have done.
First start the ssh-agent.
You can start it from your terminal by simply executing ssh-agent.
OPTIONAL: If you'd like to make sure ssh-agent is running on every login, you can add something like the following to your shell config.
This is what I have added to my ~/.bashrc file.
# set SSH_AUTH_SOCK env var to a fixed value
export SSH_AUTH_SOCK=~/.ssh/ssh-agent.sock
# test whether $SSH_AUTH_SOCK is valid
ssh-add -l 2>/dev/null >/dev/null
# if not valid, then start ssh-agent using $SSH_AUTH_SOCK
[ $? -ge 2 ] && ssh-agent -a "$SSH_AUTH_SOCK" >/dev/null
Source
(This particular snippet also makes sure new ssh-agent processes are not getting created when there's one already running.)
Now you have the ssh-agent running.
Since we're interested in loading the SSH key as a string, I'll assume a scenario where the private key contents have already been loaded into a variable, $SSH_PRIVATE_KEY.
I can now add the key contents to the ssh-agent by executing the following command.
ssh-add - <<< "${SSH_PRIVATE_KEY}"
This can just be added to the bashrc file as well.
You can confirm that your key has been added by listing all keys with ssh-add -l. And you're done.
Try connecting to the remote host; you no longer need to specify a private key file.
ssh username@hostname
This does come with extra security risks. These are some I could think of:
Adding the private key to the ssh-agent will let any process on the machine use the key to authenticate to remote hosts without explicitly providing any information.
Since the goal is to load the private key as a string, it will either be stored in a variable or embedded directly in the command. This might make the key available in the command history, in shell variables, and in other places.

Related

Run SSH on Jenkins from file in Git

I have a file on Git with the following shell script
File Name = Job.sh
echo "Warehouse script starting"
ssh -n username@server_name "mkdir -p ~/directory_name/folder_name/file_name"
Under Execute Shell in Jenkins I am running -
sh job.sh
The echo command gets printed in the console, but the job is not sshing into username@server_name and creating the directory. Appreciate any feedback.
Try the same command when connected directly to the Jenkins agent (using the account used by Jenkins on that agent).
The goal is to check whether that user (on that agent server) has the right public/private key pair in its $HOME/.ssh folder.
And double-check the public key is in servername:~username/.ssh/authorized_keys.
Make sure the private key is not encrypted, to avoid having to deal with ssh-agent and passphrase caching. At least for now, for testing your setup.
Note: the -n option (which prevents reading from stdin) is usually for ssh commands executed in the background. You might not need it in your case.
Try also to add #!/bin/bash -x at the beginning of your script (assuming you do have a bash) in order to print all lines executed.
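For example, you might test along these lines on the agent (a sketch; the account name jenkins and the host name are illustrative):
sudo -u jenkins -i                       # become the account Jenkins runs as
ssh -vn username@server_name 'echo ok'   # -v shows which keys ssh offers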

Concatenating a local file with a remote one

These three lines of code require authentication twice. I don't yet have password-less authentication set up on this server. In fact, these lines of code are to copy my public key to the server and concatenate it with the existing file.
How can I re-write this process with a single ssh command that requires authentication only once?
scp ~/local.txt user@server.com:~/remote.txt
ssh -l user user@server.com
cat ~/remote.txt >> ~/otherRemote.txt
I've looked into the following possibilities:
command sed
operator ||
operator &&
shared session: Can I use an existing SSH connection and execute SCP over that tunnel without re-authenticating?
I also considered placing local.txt at an openly accessible location, for example, with a public dropbox link. Then if cat could accept this as an input, the scp line wouldn't be necessary. But this would also require an additional step and wouldn't work in cases where local.txt cannot be made public.
Other references:
Using a variable's value as password for scp, ssh etc. instead of prompting for user input every time
https://superuser.com/questions/400714/how-to-remotely-write-to-a-file-using-ssh
You can redirect the content to the remote, and then use commands on the remote to do something with it. Like this:
ssh user@server.com 'cat >> otherRemote.txt' < ~/local.txt
The remote cat command will receive as its input the content of ~/local.txt, passed to the ssh command by input redirection.
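Applied to the asker's stated goal of appending a local public key to an existing remote file, the same pattern looks like this (assuming the remote ~/.ssh directory already exists):
ssh user@server.com 'cat >> ~/.ssh/authorized_keys' < ~/.ssh/id_rsa.pub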
Btw, as @Barmar pointed out, specifying the username with both -l user and user@ was also redundant in your example.

How to store string into remote file?

I need to overwrite the content of a file on a remote machine (I have the username, password, and IP, and can access it through ssh) with some string from the command line.
How can I achieve this on Linux?
You can modify the file in one operation with ssh:
ssh user@host "echo \"$local_variable\" > /path/to/file"
However, this is risky: what if there's a double-quote character in the local variable? In Bash, you can get around it by quoting the value with printf %q:
ssh user@host "echo $(printf %q "$local_variable") > /path/to/file"
The much simpler and safer way to do this, and avoid any weird escaping problems, is to simply save the contents to a file locally and then copy it over with scp or rsync.
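A sketch of that safer route, using mktemp so the temporary file is created with private permissions:
tmp=$(mktemp)                               # created with mode 0600
printf '%s\n' "$local_variable" > "$tmp"
scp "$tmp" user@host:/path/to/file
rm -f "$tmp"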
If you have access to ssh you can execute any command on the remote machine:
ssh username@remotehost.com /usr/bin/mycommand
(For example you could echo a string into a file if you like)
If you want to get rid of the password prompt, you can use SSH key authentication:
ssh-keygen -t rsa
scp .ssh/id_rsa.pub username@remotehost.com:~/.ssh/authorized_keys2
Once your key is on the remote machine, you can use ssh without a password. (Warning: using a key which is not password protected could be risky.)
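Note that the scp above replaces any existing authorized_keys2 on the remote host; if your OpenSSH installation ships ssh-copy-id, it appends the key safely instead:
ssh-copy-id username@remotehost.com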

How to prevent that the password to decrypt the private key has to be entered every time when using Git Bash on Windows?

I have an automatic build service which downloads from a private Git repository.
The problem is that when it tries to clone the repository it needs the passphrase; because it is not remembered and there is no human interaction, it waits forever for it.
How can I force it to remember the passphrase for my key?
For Windows users, just a note that this is how I set up the Git Bash environment to log me in once when I start it up. I edit my ~/.bashrc file:
eval `ssh-agent`
ssh-add
So when I start Git Bash, it looks like:
Welcome to Git (version 1.7.8-preview20111206)
(etc)
Agent pid 3376
Enter passphrase for /c/Users/starmonkey/.ssh/id_dsa:
Identity added: /c/Users/starmonkey/.ssh/id_dsa (/c/Users/starmonkey/.ssh/id_dsa)
And now I can ssh to other servers without logging in every time.
This answer explains how to get the GitHub username and password to be stored permanently, not the SSH key passphrase.
In Windows, just run
$ git config --global credential.helper wincred
This means that the next time you push, you'll enter your username and password as usual, but they'll be saved in Windows credentials. You won't have to enter them again after that.
As in, Push to GitHub without entering username and password every time (Git Bash on Windows).
I prefer not to have to type my SSH passphrase when opening new terminals; unfortunately starmonkey's solution requires the password to be typed in for every session. Instead, I have this in my .bash_profile file:
# Note: ~/.ssh/environment should not be used, as it
#       already has a different purpose in SSH.
env=~/.ssh/agent.env

# Note: Don't bother checking SSH_AGENT_PID. It's not used
#       by SSH itself, and it might even be incorrect
#       (for example, when using agent-forwarding over SSH).

agent_is_running() {
    if [ "$SSH_AUTH_SOCK" ]; then
        # ssh-add returns:
        #   0 = agent running, has keys
        #   1 = agent running, no keys
        #   2 = agent not running
        ssh-add -l >/dev/null 2>&1 || [ $? -eq 1 ]
    else
        false
    fi
}

agent_has_keys() {
    ssh-add -l >/dev/null 2>&1
}

agent_load_env() {
    . "$env" >/dev/null
}

agent_start() {
    (umask 077; ssh-agent >"$env")
    . "$env" >/dev/null
}

if ! agent_is_running; then
    agent_load_env
fi

# If your keys are not stored in ~/.ssh/id_rsa or ~/.ssh/id_dsa, you'll need
# to paste the proper path after ssh-add
if ! agent_is_running; then
    agent_start
    ssh-add
elif ! agent_has_keys; then
    ssh-add
fi

unset env
This will remember my passphrase for new terminal sessions as well; I only have to type it in once when I open my first terminal after booting.
I'd like to credit where I got this; it's a modification of somebody else's work, but I can't remember where it came from. Thanks anonymous author!
Update 2019-07-01: I don't think all this is necessary. I now consistently have this working by ensuring my .bash_profile file runs the ssh-agent:
eval $(ssh-agent)
Then I set up an ssh configuration file like this:
touch ~/.ssh/config
chmod 600 ~/.ssh/config
echo 'AddKeysToAgent yes' >> ~/.ssh/config
If I understand the question correctly, you're already using an authorized SSH key in the build service, but you want to avoid having to type the passphrase for every clone?
I can think of two ways of doing this:
If your build service is being started interactively: Before you start the build service, start ssh-agent with a sufficiently long timeout (-t option). Then use ssh-add (msysGit should have those) to add all the private keys you need before you start your build service. You'd still have to type out all the passphrases, but only once per service launch.
If you want to avoid having to type the passphrases out at all, you can always remove the passphrases from the SSH keys, as described in https://serverfault.com/questions/50775/how-do-i-change-my-private-key-passphrase, by setting an empty new passphrase. This should do away with the password prompt entirely, but it is even less secure than the previous option.
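For reference, the passphrase removal described there is a one-liner (this assumes OpenSSH's ssh-keygen and the default key path; it prompts for the old passphrase first):
ssh-keygen -p -f ~/.ssh/id_rsa -N ""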
When I tried to push my code, I got the below error:
$ git push origin dev
remote: Too many invalid password attempts. Try logging in through the website with your password.
fatal: unable to access 'https://naushadqamar-1@bitbucket.org/xxxx/xxxx-api.git/': The requested URL returned error: 403
After a few hours of research, I found I needed to use the below command:
$ git config --global credential.helper cache
After executing the above command, I got a prompt for entering my GitHub username and password. After providing the correct credentials, I was able to push my code.
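Note that the cache helper keeps credentials in memory for 15 minutes by default; you can extend the timeout, for example to one hour:
git config --global credential.helper 'cache --timeout=3600'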
The right solution is:
Run the default Windows terminal (cmd) and get the directory of your user profile:
echo %USERPROFILE%
Run Git Bash in the directory above and create the .bashrc file with the command
echo "" > .bashrc
Open the .bashrc file with your favourite text editor and paste the code from GitHub Help into that file:
env=~/.ssh/agent.env
...
(Copy the whole snippet from the linked page; it is essentially the agent-start script quoted earlier in this thread.)
Restart Git Bash; it asks for your passphrase (only the first time), and you're done. No more passphrase prompts.
You need to create the authorized_keys file under the .ssh folder of the user under which you are going to connect to the repository server. For example, assuming you use the username buildservice on repo.server, you can run:
cd ~buildservice
mkdir .ssh
cat id_rsa.pub >> .ssh/authorized_keys
Then you have to check the following things:
That the corresponding id_rsa private key is present in buildservice@build.server:~/.ssh/id_rsa.
That the fingerprint of repo.server is stored in the buildservice@build.server:~/.ssh/known_hosts file. Typically that is done after the first attempt to ssh to repo.server.
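If you prefer to pre-seed that fingerprint rather than rely on the first connection attempt, something like this works (a sketch; host name as above):
ssh-keyscan repo.server >> ~/.ssh/known_hosts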

How can I ssh directly to a particular directory?

I often have to login to one of several servers and go to one of several directories on those machines. Currently I do something of this sort:
localhost ~]$ ssh somehost
Welcome to somehost!
somehost ~]$ cd /some/directory/somewhere/named/Foo
somehost Foo]$
I have scripts that can determine which host and which directory I need to get into but I cannot figure out a way to do this:
localhost ~]$ go_to_dir Foo
Welcome to somehost!
somehost Foo]$
Is there an easy, clever or any way to do this?
You can do the following:
ssh -t xxx.xxx.xxx.xxx "cd /directory_wanted ; bash --login"
This way, you will get a login shell right in directory_wanted.
Explanation
-t Force pseudo-terminal allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services.
Multiple -t options force tty allocation, even if ssh has no local tty.
If you don't use -t then no prompt will appear.
If you don't add ; bash then the connection will get closed and return control to your local machine
If you don't add bash --login then it will not use your configs, because it's not a login shell
You could add
cd /some/directory/somewhere/named/Foo
to your .bashrc file (or .profile or whatever you call it) at the other host. That way, no matter what you do or where you ssh from, whenever you log onto that server, it will cd to the proper directory for you, and all you have to do is use ssh like normal.
Of course, rogeriopvl's solution works too, but it's a tad more verbose, and you have to remember to do it every time (unless you make an alias), so it seems a bit less "fun".
My preferred approach is using the SSH config file (described below), but there are a few possible solutions depending on your usages.
Command Line Arguments
I think the best answer for this approach is christianbundy's reply to the accepted answer:
ssh -t example.com "cd /foo/bar; exec \$SHELL -l"
Using double quotes will allow you to use variables from your local machine, unless they are escaped (as $SHELL is here). Alternatively, you can use single quotes, and all of the variables you use will be the ones from the target machine:
ssh -t example.com 'cd /foo/bar; exec $SHELL -l'
Bash Function
You can simplify the command by wrapping it in a bash function. Let's say you just want to type this:
sshcd example.com /foo/bar
You can make this work by adding this to your ~/.bashrc:
sshcd () { ssh -t "$1" "cd \"$2\"; exec \$SHELL -l"; }
If you are using a variable that exists on the remote machine for the directory, be sure to escape it or put it in single quotes. For example, this will cd to the directory that is stored in the JBOSS_HOME variable on the remote machine:
sshcd example.com \$JBOSS_HOME
SSH Config File
If you'd like to see this behavior all the time for specific (or any) hosts with the normal ssh command without having to use extra command line arguments, you can set the RequestTTY and RemoteCommand options in your ssh config file.
For example, I'd like to type only this command:
ssh qaapps18
but want it to always behave like this command:
ssh -t qaapps18 'cd $JBOSS_HOME; exec $SHELL'
So I added this to my ~/.ssh/config file:
Host *apps*
    RequestTTY yes
    RemoteCommand cd $JBOSS_HOME; exec $SHELL
Now this rule applies to any host with "apps" in its hostname.
For more information, see http://man7.org/linux/man-pages/man5/ssh_config.5.html
I've created a tool to SSH and CD into a server consecutively – aptly named sshcd. For the example you've given, you'd simply use:
sshcd somehost:/some/directory/somewhere/named/Foo
Let me know if you have any questions or problems!
Based on additions to @rogeriopvl's answer, I suggest the following:
ssh -t xxx.xxx.xxx.xxx "cd /directory_wanted && bash"
Chaining commands with && will make the next command run only when the previous one was successful (as opposed to using ;, which executes commands unconditionally in sequence). This is particularly useful when you need to cd to a directory before performing a command.
Imagine doing the following:
/home/me$ cd /usr/share/teminal; rm -R *
The directory teminal doesn't exist, which causes you to stay in the home directory and remove all the files in there with the command that follows the semicolon.
If you use &&:
/home/me$ cd /usr/share/teminal && rm -R *
The command will fail after not finding the directory.
In my very specific case, I just wanted to execute a command in a remote host, inside a specific directory from a Jenkins slave machine:
ssh myuser@mydomain
cd /home/myuser/somedir
./commandThatMustBeRunInside_somedir
exit
But my machine couldn't perform the ssh (it couldn't allocate a pseudo-tty, I suppose) and kept giving me the following error:
Pseudo-terminal will not be allocated because stdin is not a terminal
I got around this issue by passing "cd to dir + my command" as a parameter of the ssh command (so no pseudo-terminal has to be allocated) and by passing the option -T to tell ssh explicitly that I didn't need pseudo-terminal allocation.
ssh -T myuser@mydomain "cd /home/myuser/somedir; ./commandThatMustBeRunInside_somedir"
I use the environment variable CDPATH.
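For example (a sketch of that idea): CDPATH is a colon-separated list of directories that cd searches, so with the parent of the target listed on the remote host, a bare cd finds it from anywhere:
# in the remote ~/.bashrc
export CDPATH=.:/some/directory/somewhere/named
# then, after logging in, from any directory:
cd Foo    # resolves to /some/directory/somewhere/named/Foo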
Going one step further with the -t idea, I keep a set of scripts calling the one below to go to specific places in my frequently visited hosts. I keep them all in ~/bin and keep that directory in my path.
#!/bin/bash
# does ssh session switching to particular directory
# $1, hostname from config file
# $2, directory to move to after login
# can save this as say 'con' then
# make another script calling this one, e.g.
# con myhost repos/i2c
ssh -t "$1" "cd \"$2\"; exec \$SHELL --login"
My answer may differ from what you really want, but I write it here as it may be useful for some people. In my solution you have to enter the directory once, and then every new ssh session goes to the same dir (after the first logout).
How to ssh to the same directory you have been in your last login.
(I assume you use bash on the remote node.)
Add this line to your ~/.bash_logout on the remote node(!):
echo "$PWD" > ~/.bash_lastpwd
and these lines to the ~/.bashrc file (still on the remote node!)
if [ -f ~/.bash_lastpwd ]; then
    cd "$(cat ~/.bash_lastpwd)"
fi
This way you save your current path on every logout, and .bashrc puts you into that directory after login.
PS: You can tweak it further, for example using the SSH_CLIENT variable to decide whether to go into that directory, so you can differentiate between local logins and ssh, or even between different ssh clients.
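A sketch of that SSH_CLIENT tweak, in the remote ~/.bashrc:
# only restore the last directory for SSH logins
if [ -n "$SSH_CLIENT" ] && [ -f ~/.bash_lastpwd ]; then
    cd "$(cat ~/.bash_lastpwd)"
fi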
Another way to go directly to a directory after logging in is to create an alias on the remote machine. When you log in, just type the alias and you will be in that directory. For example, add this to the remote ~/.bashrc:
alias myfolder='cd /var/www/Folder'
If you only define the alias in the current shell it works for that session; adding it to .bashrc makes it available in future sessions too. Then, from any part of the system:
$ myfolder
I know this was answered ages ago, but I found the question while trying to incorporate an ssh login into a bash script that, once logged in, runs a few commands, logs back out, and continues with the script. The simplest way, which hasn't been mentioned elsewhere because it is so trivial, is this:
#!/bin/bash
sshpass -p "password" ssh user@server 'cd /path/to/dir; somecommand; someothercommand; exit;'
Connect With User
In case you don't know this, you can connect by specifying both the user and the host:
ssh -t <user>@<host domain / IP> "cd /path/to/directory; bash --login"
Example: ssh -t admin@test.com "cd public_html; bash --login"
You can also append further commands to be executed on every login inside the double quotes, with a ; before each command.
Unfortunately, the suggested solution (of @rogeriopvl) doesn't work when you use multiple hops, so I found another one.
On remote machine add into ~/.bashrc the following:
[ "x$CDTO" != "x" ] && cd "$CDTO"
This allows you to specify the desired target directory on command line in this way:
ssh -t host1 ssh -t host2 "CDTO=/desired_directory exec bash --login"
Sure, this way can be used for a single hop too.
This solution can be combined with the useful tip of @redseven for greater flexibility (if there is no $CDTO, go to the saved directory, if it exists).
SSH itself provides a means of communication; it does not know anything about directories. Since you can specify which remote command to execute (this is, by default, your shell), I'd start there.
Simply modify your home directory with the command:
usermod -d /newhome username
