ssh keys in known hosts, but keygen -F not working - openssh

I found this code elsewhere on stackoverflow:
if [ -z "`ssh-keygen -F ${wPCS_IP}`" ]; then
ssh-keyscan -p ${wPCS_PT} -H ${wPCS_IP} >> ~/.ssh/known_hosts
fi
I have two issues with this code as I'm using it:
This code is generating an error ($?=1) even though it succeeds.
If I run ssh-keygen -F ${wPCS_IP} again after known_hosts is appended, it does not find the keys in known_hosts, even though they were just added. This is the larger problem.
The local machine is Ubuntu Server 16.04 LTS, the remote machine is Ubuntu Server 14.04 LTS.
The major difference between my code and the code sample I found is my use of the port option -p.
Also, I've noticed that the known_hosts file does not list the machines by name or IP address, which is different from my Gentoo laptop.

So it turns out that when ssh uses an alternate port, the host is stored in the known_hosts file with the port as part of the address, in this format:
[${wPCS_IP}]:${wPCS_PT}
Which means that for the if statement to work, it needs to look like this:
if ! ssh-keygen -F "[${wPCS_IP}]:${WPCS_PT}" -f ~/.ssh/known_hosts > /dev/null 2>&1; then ssh-keyscan -p ${wPCS_PT} ${wPCS-IP} >> ~/.ssh/known_hosts; fi
Thanks to alvits for getting me moving in the right direction...
Update: it turns out that Ubuntu 16.04 hashes the IP address of the remote host (but not the port). I'm still trying to figure out how to adapt to this difference.
Another update: It turns out that the -H option is what's failing. Once you hash the key, it isn't found anymore. This works on Ubuntu 14.04:
if ! ssh-keygen -F ${IP_ADDR} -f ~/.ssh/known_hosts > /dev/null 2>&1; then ssh-keyscan -p ${PORT} ${IP_ADDR} >> ~/.ssh/known_hosts; fi
# IP_ADDR SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.1
# IP_ADDR SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.1
if ! ssh-keygen -F ${IP_ADDR} -f ~/.ssh/known_hosts > /dev/null 2>&1; then ssh-keyscan -p ${PORT} ${IP_ADDR} >> ~/.ssh/known_hosts; fi
You can see that the first if statement generates the keyscan data and the second does not, because the entry is now found. But if you add -H, ssh-keygen no longer detects the hashed key entries...
However, to get a similar command to work on Ubuntu 16.04, the if has to be changed:
if ! ssh-keygen -F "[${IP_ADDR}]:${PORT}" -f ~/.ssh/known_hosts > /dev/null 2>&1; then ssh-keyscan -p ${PORT} ${IP_ADDR} >> ~/.ssh/known_hosts; fi
# IP_ADDR:PORT SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.8
# IP_ADDR:PORT SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.8
# IP_ADDR:PORT SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.8
if ! ssh-keygen -F "[${IP_ADDR}]:${PORT}" -f ~/.ssh/known_hosts > /dev/null 2>&1; then ssh-keyscan -p ${PORT} ${IP_ADDR} >> ~/.ssh/known_hosts; fi
In this case the known_hosts file must include the port...
-H is out here as well: the if won't find the key if the entry was added with -H.
It's frustrating that the behavior varies from version to version and that the safest hashed version doesn't work.
Yet another edit: It may be that the port is specified in known_hosts only when the remote server uses a non-standard port in sshd_config. This may be expected behavior.
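To pull these updates together, here is a minimal hedged sketch of the check-and-add step that handles the bracketed [host]:port form for a non-standard port (the values are placeholders, and -H is deliberately omitted because of the lookup problem described above):
#!/bin/sh
# Illustrative values only -- substitute your own host and port.
IP_ADDR="192.0.2.10"
PORT="2222"
# Newer OpenSSH (as seen on Ubuntu 16.04 above) records non-default ports as
# "[ip]:port", so the lookup has to use the same form; port 22 stays a bare address.
if [ "$PORT" = "22" ]; then
    HOSTSPEC="$IP_ADDR"
else
    HOSTSPEC="[$IP_ADDR]:$PORT"
fi
if ! ssh-keygen -F "$HOSTSPEC" -f ~/.ssh/known_hosts > /dev/null 2>&1; then
    # No -H: on the systems above, hashed entries were not found again by -F.
    ssh-keyscan -p "$PORT" "$IP_ADDR" >> ~/.ssh/known_hosts
fi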

I sent an email to the OpenSSH mailing list and got several good suggestions. Basically, it is not good to rely on ssh-keygen -F and ssh-keyscan -H, as file formats and locations tend to vary from system to system.
The real solution, which I will implement today, is to generate certificates for each of the servers so they recognize each other. This works well for me because I have complete control over both servers.
I was given a link that explains how to set up server certificates:
https://blog.habets.se/2011/07/OpenSSH-certificates.html
Here's a link specifically for Ubuntu.
https://www.digitalocean.com/community/tutorials/how-to-create-an-ssh-ca-to-validate-hosts-and-clients-with-ubuntu
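As a rough, hedged outline of what those guides walk through (filenames, hostnames, and paths below are placeholders, not a definitive recipe):
# On a trusted machine: create a CA key pair used only for signing host keys.
ssh-keygen -f host_ca -C "host CA"
# Sign a server's existing host key; -h marks the result as a *host* certificate.
ssh-keygen -s host_ca -h -I server1 -n server1.example.com /etc/ssh/ssh_host_rsa_key.pub
# On the server, point sshd at the certificate in /etc/ssh/sshd_config:
#   HostCertificate /etc/ssh/ssh_host_rsa_key-cert.pub
# On each client, trust the CA for matching hosts with one known_hosts line:
#   @cert-authority *.example.com <contents of host_ca.pub>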
If this is my last update, assume this worked for me.

Related

Auto answer ssh-copy-id in shell script

I'm writing a shell script and I want to automate logging into a remote machine using ssh-copy-id. Manually, I run:
ssh-copy-id -i /root/.ssh/id_rsa $2@$4 -p $3 | echo $1
$1 refers to the password,
$2 to the username,
$3 to the port, and
$4 to the IP.
That part is fine; the problem is that I have to automate entering the password after:
ssh-copy-id -i /root/.ssh/id_rsa $2@$4 -p $3
I added "| printf $1", but it does not work: it still shows "password:" on the screen and waits for the password.
I hope you understand me and thank you.
As @Leon pointed out, you had the pipeline backwards. But even with the correct order it would still not work, because ssh-copy-id (and all the other programs from openssh) does not read passwords from stdin. The solution is to use the $SSH_ASKPASS environment variable. You can do that as follows: first, create an auxiliary script, say /var/tmp/ssh-pass.sh (actually find a better name than that), with the following contents:
#!/bin/sh
echo "$PASS"
Then you can use the following command to accomplish what you've asked for:
PASS="$1" SSH_ASKPASS="/var/tmp/ssh-pass.sh" setsid -w ssh-copy-id -i /root/.ssh/id_rsa "$2"#"$4" -p "$3"
Explanation: we use setsid -w to disassociate the ssh-copy-id process from the current terminal. That forces ssh-copy-id to run the executable specified in $SSH_ASKPASS in order to obtain the password. We have specified our own script in that variable, so ssh-copy-id will execute just that. The script is then supposed to provide the password to ssh-copy-id by printing it to its stdout; we use the $PASS environment variable to pass the password to the script, so the script just prints that variable.
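Putting the pieces together, a hedged end-to-end sketch (the helper path is arbitrary, and note that some OpenSSH builds also require DISPLAY to be set to a non-empty value before they will fall back to the askpass program):
#!/bin/sh
# Write the askpass helper that just prints the password handed over via $PASS.
cat > /var/tmp/ssh-pass.sh <<'EOF'
#!/bin/sh
echo "$PASS"
EOF
chmod +x /var/tmp/ssh-pass.sh
# setsid detaches from the terminal so ssh-copy-id has to use SSH_ASKPASS;
# DISPLAY=":0" is only there to satisfy that check -- the value is a placeholder.
PASS="$1" SSH_ASKPASS="/var/tmp/ssh-pass.sh" DISPLAY=":0" \
    setsid -w ssh-copy-id -i /root/.ssh/id_rsa "$2"@"$4" -p "$3"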
2020 / Mac OS X:
Install sshpass (original answer)
brew install hudochenkov/sshpass/sshpass
Run ssh-copy-id using sshpass and with the password as an arg
sshpass -p $1 ssh-copy-id -i ~/PATH/TO/KEY $2@$4 -p $3
If you want to turn off strict host checking as well, use the -o flag, which is passed to the underlying ssh:
sshpass -p hunter2 ssh-copy-id -o StrictHostKeyChecking=no -i ~/PATH/TO/KEY $2@$4 -p $3
I tried the solution by @redneb, and installed setsid through util-linux by following this answer, but kept getting a password-denied error.
I found this strategy to work for uploading my SSH key while setting up multiple Raspberry Pis in succession. In my script, I also run ssh-keygen -R raspberrypi.local each time, to avoid the "The ECDSA host key for raspberrypi.local has changed" error.
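For what it's worth, a hedged sketch of that per-Pi loop (the hostnames, user, default password, and use of sshpass are assumptions for illustration):
#!/bin/sh
PASSWORD="raspberry"   # placeholder default password
for HOST in raspberrypi.local pi2.local pi3.local; do
    # Drop any stale entry so the "host key has changed" error never appears.
    ssh-keygen -R "$HOST"
    sshpass -p "$PASSWORD" ssh-copy-id -o StrictHostKeyChecking=no -i ~/.ssh/id_rsa.pub "pi@$HOST"
done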

Automate generating deploy key for github

I execute the following commands a few times a day:
ssh-keygen -t rsa -N "" -C "info@example.com" -f ~/.ssh/id_rsa_projectname
eval `ssh-agent`
ssh-add ~/.ssh/id_rsa_projectname
cat ~/.ssh/id_rsa_projectname.pub
ssh -T git@github.com
The only variable in this script is the project name. I would like to make a keygen.sh script or something like that to automate this process and pass along the project name. Is this possible?
Also where should I start looking and what not to forget, I'm a bit new to bash scripting and I know it can be quite dangerous in the wrong hands.
Would it not be easier to just maintain a single set of staging or development keys rather than generating them for everything? IMHO you're losing configurability and not gaining much in security.
That aside, you're on the right track but I would do things a bit different.
export PROJECT=foo;
ssh-keygen -t rsa -N "" -C "info@example.com" -f ~/.ssh/id_rsa_${PROJECT}
That will generate named keys id_rsa_foo and id_rsa_foo.pub
Now you need to make your ssh config use it for github. ~/.ssh/config should have something like:
Host remote github.com
IdentityFile ~/.ssh/id_rsa_foo
User git
StrictHostKeyChecking no
You'll need to upload the public key to github. You'll have to figure this out for yourself using their API.
If you do all this correctly you should be able to git clone automagically.
#!/bin/bash
[[ -z "${PROJECT}" ]] && echo "project must be set" && exit 1
ssh-keygen -t rsa -N "" -C "info@example.com" -f ~/.ssh/id_rsa_${PROJECT}
chmod 400 ~/.ssh/id_rsa_${PROJECT}
echo $' Host remote github.com\n IdentityFile ~/.ssh/id_rsa_'${PROJECT}'\n User git\n StrictHostKeyChecking no' >> ~/.ssh/config
chmod 644 ~/.ssh/config
# do the github api stuff to add the pub key
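For the API step that the comment leaves open, a hedged sketch against GitHub's deploy-key endpoint (the token variable, OWNER, and REPO are placeholders you'd substitute):
# Upload the public key as a read-only deploy key (needs a token with repo scope).
curl -s -X POST \
    -H "Authorization: token ${GITHUB_TOKEN}" \
    -H "Accept: application/vnd.github+json" \
    -d "{\"title\":\"${PROJECT}\",\"key\":\"$(cat ~/.ssh/id_rsa_${PROJECT}.pub)\",\"read_only\":true}" \
    "https://api.github.com/repos/OWNER/REPO/keys"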

Using ssh-agent with docker on macOS

I would like to use ssh-agent to forward my keys into the docker image and pull from a private github repo.
I am using a slightly modified version of https://github.com/phusion/passenger-docker with boot2docker on Yosemite.
ssh-add -l
...key details
boot2docker up
Then I use the command that I have seen in a number of places (e.g. https://gist.github.com/d11wtq/8699521):
docker run --rm -t -i -v $SSH_AUTH_SOCK:/ssh-agent -e SSH_AUTH_SOCK=/ssh-agent my_image /bin/bash
However it doesn't seem to work:
root@299212f6fee3:/# ssh-add -l
Could not open a connection to your authentication agent.
root@299212f6fee3:/# eval `ssh-agent -s`
Agent pid 19
root@299212f6fee3:/# ssh-add -l
The agent has no identities.
root@299212f6fee3:/# ssh git@github.com
Warning: Permanently added the RSA host key for IP address '192.30.252.128' to the list of known hosts.
Permission denied (publickey).
Since version 2.2.0.0, Docker for macOS allows users to access the host's SSH agent inside containers.
Here's an example command that lets you do it:
docker run --rm -it \
-v /run/host-services/ssh-auth.sock:/ssh-agent \
-e SSH_AUTH_SOCK="/ssh-agent" \
my_image
Note that you have to mount the specific path (/run/host-services/ssh-auth.sock) instead of the path contained in the $SSH_AUTH_SOCK environment variable, as you would do on Linux hosts.
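A quick hedged sanity check, assuming my_image has the OpenSSH client installed, is to list the forwarded identities from inside the container:
docker run --rm \
    -v /run/host-services/ssh-auth.sock:/ssh-agent \
    -e SSH_AUTH_SOCK="/ssh-agent" \
    my_image ssh-add -l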
A one-liner:
Here’s how to set it up on Ubuntu 16 running a Debian Jessie image:
docker run --rm -it --name container_name \
-v $(dirname $SSH_AUTH_SOCK):$(dirname $SSH_AUTH_SOCK) \
-e SSH_AUTH_SOCK=$SSH_AUTH_SOCK my_image
https://techtip.tech.blog/2016/12/04/using-ssh-agent-forwarding-with-a-docker-container/
I expanded on @wilwilson's answer and created a script that sets up agent forwarding in an OSX boot2docker environment.
https://gist.github.com/rcoup/53e8dee9f5ea27a51855
#!/bin/bash
# Use a unique ssh socket name per-invocation of this script
SSH_SOCK=boot2docker.$$.ssh.socket
# ssh into boot2docker with agent forwarding
ssh -i ~/.ssh/id_boot2docker \
-o StrictHostKeyChecking=no \
-o IdentitiesOnly=yes \
-o UserKnownHostsFile=/dev/null \
-o LogLevel=quiet \
-p 2022 docker@localhost \
-A -M -S $SSH_SOCK -f -n \
tail -f /dev/null
# get the agent socket path from the boot2docker vm
B2D_AGENT_SOCK=$(ssh -S $SSH_SOCK docker@localhost echo \$SSH_AUTH_SOCK)
# mount the socket (from the boot2docker vm) onto the docker container
# and set the ssh agent environment variable so ssh tools pick it up
docker run \
-v $B2D_AGENT_SOCK:/ssh-agent \
-e "SSH_AUTH_SOCK=/ssh-agent" \
"$#"
# we're done; kill off the boot2docker ssh agent
ssh -S $SSH_SOCK -O exit docker@localhost
Stick it in ~/bin/docker-run-ssh, chmod +x it, and use docker-run-ssh instead of docker run.
I ran into a similar issue, and was able to make things pretty seamless by using ssh in master mode with a control socket and wrapping it all in a script like this:
#!/bin/sh
ssh -i ~/.vagrant.d/insecure_private_key -p 2222 -A -M -S ssh.socket -f docker@127.0.0.1 tail -f /dev/null
HOST_SSH_AUTH_SOCK=$(ssh -S ssh.socket docker@127.0.0.1 env | grep "SSH_AUTH_SOCK" | cut -f 2 -d =)
docker run -v $HOST_SSH_AUTH_SOCK:/ssh-agent \
-e "SSH_AUTH_SOCK=/ssh-agent" \
-t hello-world "$#"
ssh -S ssh.socket -O exit docker@127.0.0.1
Not the prettiest thing in the universe, but much better than manually keeping an SSH session open IMO.
For me accessing ssh-agent to forward keys worked on OSX Mavericks and docker 1.5 as follows:
ssh into the boot2docker VM with boot2docker ssh -A. Don't forget to use option -A which enables forwarding of the authentication agent connection.
Inside the boot2docker ssh session:
docker@boot2docker:~$ echo $SSH_AUTH_SOCK
/tmp/ssh-BRLb99Y69U/agent.7750
This session must be left open. Take note of the value of the SSH_AUTH_SOCK environmental variable.
In another OS X terminal, issue the docker run command with the SSH_AUTH_SOCK value noted above:
docker run --rm -t -i \
-v /tmp/ssh-BRLb99Y69U/agent.7750:/ssh-agent \
-e SSH_AUTH_SOCK=/ssh-agent my_image /bin/bash
root@600d0e9b443d:/# ssh-add -l
2048 6c:8e:82:08:74:33:78:61:f9:9a:74:1b:65:46:be:eb /Users/dev/.ssh/id_rsa (RSA)
I don't really like the fact that I have to keep a boot2docker ssh session open to make this work, but until a better solution is found, this at least worked for me.
Socket forwarding doesn't work on OS X yet. Here is a variation of @henrjk's answer brought into 2019, using Docker for Mac instead of the now-obsolete boot2docker.
First, run an ssh server in the container, with /tmp on an exported volume, like this:
docker run -v tmp:/tmp -v \
${HOME}/.ssh/id_rsa.pub:/root/.ssh/authorized_keys:ro \
-d -p 2222:22 arvindr226/alpine-ssh
Then ssh into this container with agent forwarding
ssh -A -p 2222 root#localhost
Inside that ssh session, find the current ssh-agent socket:
3f53fa1f5452:~# echo $SSH_AUTH_SOCK
/tmp/ssh-9zjJcSa3DM/agent.7
Now you can run your real container. Just make sure to replace the value of SSH_AUTH_SOCK below with the value you got in the step above:
docker run -it -v tmp:/tmp \
-e SSH_AUTH_SOCK=/tmp/ssh-9zjJcSa3DM/agent.7 \
vladistan/ansible
By default, boot2docker shares only files under /Users. SSH_AUTH_SOCK is probably under /tmp, so the -v mounts the agent socket of the VM, not the one from your Mac.
If you set up your VirtualBox VM to share /tmp, it should work.
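If you go that route, a rough hedged sketch of the VirtualBox side (the VM name boot2docker-vm is the usual default, but check yours with VBoxManage list vms; mounting over /tmp inside the VM may have side effects):
boot2docker stop
VBoxManage sharedfolder add boot2docker-vm --name hosttmp --hostpath /tmp
boot2docker up
# The share still has to be mounted at the same path inside the VM:
boot2docker ssh "sudo mount -t vboxsf hosttmp /tmp"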
Could not open a connection to your authentication agent.
This error occurs when $SSH_AUTH_SOCK env var is set incorrectly on the host or not set at all. There are various workarounds you could try. My suggestion, however, is to dual-boot Linux and macOS.
Additional resources:
Using SSH keys inside docker container - Related Question
SSH and docker-compose - Blog post
Build secrets and SSH forwarding in Docker 18.09 - Blog post
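For completeness, the Docker 18.09 route from that last link avoids mounting agent sockets into docker run at all; a hedged sketch of the build-time form (older releases also need a # syntax=docker/dockerfile:experimental line at the top of the Dockerfile):
# The Dockerfile opts in per step, e.g.:
#   RUN --mount=type=ssh git clone git@github.com:example/private-repo.git
# Then forward the local agent only for the build:
DOCKER_BUILDKIT=1 docker build --ssh default -t my_image .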

Running a custom profile file on remote host

I have a custom profile script on a remote host.
I'd like it to be sourced automatically when I connect to the host.
I cannot change anything on the remote host.
Things I've tried but didn't work:
ssh -t myhost '. /local/scratch/scripts/rtc_profile ; bash -l'
ssh -t myhost 'bash -l /local/scratch/scripts/rtc_profile'
I've also tried the following (which works), but it skips all other rcfiles:
ssh -t myhost 'bash --rcfile /local/scratch/scripts/rtc_profile '
Please help :)
You can put the following at the end of your ~/.bashrc:
if [ ! -z "$SSH_CONNECTION" ] ; then
source /local/scratch/scripts/rtc_profile
fi
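If even ~/.bashrc is off-limits, a hedged variation of the --rcfile attempt from the question can keep the usual rc files as well, assuming the remote login shell is bash (process substitution won't work under plain sh, and the rc file paths vary by distro):
ssh -t myhost 'bash --rcfile <(cat /etc/bash.bashrc ~/.bashrc /local/scratch/scripts/rtc_profile)'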

Shell script grep rsa key from authorized_keys

This shell command isn't working:
ssh root@IP "if '$(cat ~/.ssh/authorized_keys | grep $KEY)' == '';then;echo $KEY >> ~/.ssh/authorized_keys;done"
$KEY contains my public RSA key. What I'm trying to do is to check if my key has been added to the authorized_keys file, if not, then to add it. IP of course is replaced with a real ip address.
Any idea what I'm doing wrong?
Edit:
In case anyone is curious, this is what I was doing:
#!/bin/sh
# Get your RSA key.
KEY=""
for line in $(cat ~/.ssh/id_rsa.pub)
do
KEY="$KEY $line"
done
# Add your RSA key to the machine's authorized_keys if it's not already there.
ssh root@$1 "grep -q '$KEY' ~/.ssh/authorized_keys || echo '$KEY' >> ~/.ssh/authorized_keys"
# Connect to the machine.
ssh root@$1
The idea was to ssh into a machine (using this script) and not have to enter the password the next time I log in. The IP address is passed as a command line argument.
You are expanding the $(...) on the client side (at least).
ssh root@IP "grep -q '$KEY' .ssh/authorized_keys || echo '$KEY' >>.ssh/authorized_keys"
looks like a shorter way to do the same.
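A slightly more defensive hedged variant: grep -F avoids treating the key as a regular expression, and the mkdir covers a freshly created account:
KEY="$(cat ~/.ssh/id_rsa.pub)"
ssh "root@$1" "mkdir -p ~/.ssh && chmod 700 ~/.ssh && grep -qF '$KEY' ~/.ssh/authorized_keys 2>/dev/null || echo '$KEY' >> ~/.ssh/authorized_keys"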
