How to call ssh so it will only use identity file - bash

My SSH server uses both password and identity file authentication. I don't want to change this behaviour.
I want to check whether the $THE_IDENTITY file is known by the server('s user) or not.
That's why I use this code:
echo "Check if server knows the SSH identity:"
if [ "$(ssh user#host -i ${SSH_ID_FILE} echo 'hello')" == 'hello' ]; then
echo "Server already knows me"
else
echo "Registering SSH ID Key to the server..."
ssh-copy-id -i $SSH_ID_FILE "user#host"
fi
But the problem is that, inside the "if" statement, if the server does not know the ID file, ssh asks for a password, which is why my code behaves wrongly.
How can I change my ssh line so that it exits immediately if authentication with the ID file fails?

The following may do the trick:
ssh user@host -o NumberOfPasswordPrompts=0 -i .....

There's a keyword option to tell ssh not to use password authentication:
ssh user@host -o PasswordAuthentication=no ...
On Unix systems, details of all the keyword options can be found by:
man ssh_config
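
Combining the two answers above, a sketch of the asker's check with password prompts disabled (the identity-file path at the top is an assumption for illustration):
#!/bin/bash
SSH_ID_FILE=$HOME/.ssh/id_rsa   # assumed path, adjust as needed

echo "Check if server knows the SSH identity:"
# With PasswordAuthentication=no, ssh fails instead of prompting
# when the key is not accepted, so the substitution comes back empty.
if [ "$(ssh -i "${SSH_ID_FILE}" \
            -o PasswordAuthentication=no \
            -o NumberOfPasswordPrompts=0 \
            user@host echo 'hello')" == 'hello' ]; then
    echo "Server already knows me"
else
    echo "Registering SSH ID Key to the server..."
    ssh-copy-id -i "$SSH_ID_FILE" user@host
fi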

Related

sshpass not working in shell script while giving username and password in variables

I am using the script below and I am getting this error: -p: command not found
Code:
echo "Enter the server name"
read -s server
echo "Enter the Remote Username"
read -s Rem_user
echo "Enter the Remote Password"
read -s rmtpasswrd
output=sshpass -p ${rmtpasswrd} ssh ${Rem_user}@${server} -T 'df -h;'
echo $output
Please let me know what the error in the script is.
You need to use a command substitution to run ssh and capture its output.
output=$(ssh "${Rem_user}#${server}" 'df -h')
There is no reason to use sshpass if you are going to prompt the user for a password. ssh already does that (and more securely than you can).
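Putting the answer together, a sketch of the corrected script; sshpass is dropped entirely and ssh is left to prompt for the password itself (read -r replaces read -s for the non-secret inputs so they echo while being typed):
#!/bin/bash
echo "Enter the server name"
read -r server
echo "Enter the Remote Username"
read -r Rem_user

# Command substitution runs ssh and captures the remote df output;
# ssh itself prompts for the password, and does so securely.
output=$(ssh "${Rem_user}@${server}" 'df -h')
echo "$output"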

how to prevent scp/ssh from trying to read standard input

I am passing a file from one server to another using authorized keys.
However, in the event the keys are no longer valid, the script is prompted for the password over and over.
I have tried
scp ${user}@${host}:/tmp ./file1 </dev/null
but I still get the prompt.
The only time this works is if I run it through the at scheduler, like this:
echo "scp ${user}@${host}:/tmp ./file1" | at now
In this case it will correctly error out if keys are no longer valid.
But how can I create a blank input stream, that will not be prompting the user if the script is run interactively?
@David:
echo "" | scp ${user}@${host}:/tmp ./file1 </dev/null
didn't help; same response. So it may need an stty command to zero the tty input; I'm guessing now, per Kenster's note.
But how can I create a blank input stream, that will not be prompting the user if the script is run interactively?
Instead of that, disable password authentication.
scp -o BatchMode=yes -o PasswordAuthentication=no -o PubkeyAuthentication=yes ...
This might work as well
scp ${user}@${host}:/tmp ./file1 /dev/tty0 > /dev/null
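
A sketch of the first answer's approach applied to the asker's command (the error handling and the assumption that user and host are already set are mine); the options make scp fail fast instead of prompting when the keys are no longer valid:
#!/bin/sh
# user and host are assumed to be set by the caller.
if scp -o BatchMode=yes \
       -o PasswordAuthentication=no \
       -o PubkeyAuthentication=yes \
       "${user}@${host}:/tmp" ./file1; then
    echo "copy succeeded"
else
    echo "copy failed; are the keys still valid?" >&2
    exit 1
fi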

how to check a ssh key is copied to remote server by script

I want to use a script like the one below to check if the ssh key on my host has been copied to the remote server:
#!/usr/bin/sh
ssh -q -o StrictHostKeyChecking=no user@server "ls >/dev/null </dev/null"
if [ $? -eq 0 ]; then
    echo "key copied to remote server"
else
    echo "key not copied to remote server"
fi
but in some cases it hangs forever pending password input:
user@server's password:
Is there any way to terminate this session and return an error immediately?
Add -o PubkeyAuthentication=yes and -o PasswordAuthentication=no to the ssh command in your script.
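A sketch of the asker's script with those options added; ssh now returns non-zero immediately instead of hanging at the password prompt:
#!/usr/bin/sh
# PasswordAuthentication=no makes ssh fail rather than prompt
# when the key is not installed on the remote side.
if ssh -q -o StrictHostKeyChecking=no \
       -o PubkeyAuthentication=yes \
       -o PasswordAuthentication=no \
       user@server "ls >/dev/null </dev/null"; then
    echo "key copied to remote server"
else
    echo "key not copied to remote server"
fi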

can ansible ask for passwords automatically and only if necessary

So ansible-playbook has --ask-pass and --ask-sudo-pass. Is there a way to get ansible to try ssh without a password first and then only prompt for a password if passwordless login fails? Similarly, can ansible try sudo without a password first and then only prompt if that doesn't work?
FYI I have a little shell function to try to figure this out by trial and error, but I'm hoping something like this is baked into ansible.
get_ansible_auth_args() {
    local base_check="ansible all --one-line --inventory-file=deploy/hosts/localhost.yml --args=/bin/true --sudo"
    ${base_check}
    if [[ $? -eq 0 ]]; then
        return
    fi
    local args="--ask-pass"
    ${base_check} ${args}
    if [[ $? -eq 0 ]]; then
        export ANSIBLE_AUTH_ARGS="${args}"
        return
    fi
    local args="--ask-pass --ask-sudo-pass"
    ${base_check} ${args}
    if [[ $? -eq 0 ]]; then
        export ANSIBLE_AUTH_ARGS="${args}"
        return
    fi
}
If you set ask_pass and ssh_args as shown below, then ansible should ask you for the password once at the beginning and use that password whenever public key auth doesn't work.
[defaults]
ask_pass = True
[ssh_connection]
ssh_args = -o PubkeyAuthentication=yes -o PasswordAuthentication=yes -o ControlMaster=auto -o ControlPersist=60s
This is still not the full solution. The catch is that (AFAIK) ansible uses sshpass, so the password it collected from you at the start would be the only password it would use; it won't work if you have different passwords for different machines. :-)
The only other hack I can think of is to replace /usr/bin/ssh (or whichever openssh ssh binary ansible uses) with a script of your own that wraps the logic of reading the password from some flat file when needed. I suspect ansible would hide the tty, so your script won't be able to 'read' the password from stdin.
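For what it's worth, a minimal sketch of that wrapper idea; the file locations are purely assumptions for illustration, and storing a plaintext password this way carries all of sshpass's usual security caveats:
#!/bin/sh
# Hypothetical wrapper installed as /usr/bin/ssh after moving the real
# binary to /usr/bin/ssh.real (both paths are assumptions).
# If a flat password file exists, let sshpass feed it to the real ssh;
# otherwise fall through to plain public key authentication.
PASSFILE=$HOME/.ansible_pass   # assumed location
if [ -r "$PASSFILE" ]; then
    exec sshpass -f "$PASSFILE" /usr/bin/ssh.real "$@"
fi
exec /usr/bin/ssh.real "$@"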

Temporarily remove the ssh private key password in a shell script

I am required to deploy some files from server A to server B. I connect to server A via SSH and from there, connect via ssh to server B, using a private key stored on server A, the public key of which resides in server B's authorized_keys file. The connection from A to B happens within a Bash shell script that resides on server A.
This all works fine, nice and simple, until a security-conscious admin pointed out that my SSH private key stored on server A is not passphrase protected, so that anyone who might conceivably hack into my account on server A would also have access to server B, as well as C, D, E, F, and G. He has a point, I guess.
He suggests a complicated scenario under which I would add a passphrase, then modify my shell script to add a line at the beginning that calls
ssh-keygen -p -f {private key file}
answering the prompt for my old passphrase with the passphrase and the (two) prompts for my new passphrase with just return, which gets rid of the passphrase; then at the end, after my scp command, calling
ssh-keygen -p -f {private key file}
again, to put the passphrase back.
To which I say "Yecch!".
Well I can improve that a little by first reading the passphrase ONCE in the script with
read -s PASS_PHRASE
then supplying it as needed using the -N and -P parameters of ssh-keygen.
It's almost usable, but I hate interactive prompts in shell scripts. I'd like to get this down to one interactive prompt, but the part that's killing me is the part where I have to press enter twice to get rid of the passphrase.
This works from the command line:
ssh-keygen -p -f {private key file} -P {pass phrase} -N ''
but not from the shell script. There, it seems I must remove the -N parameter and accept the need to type two returns.
That is the best I am able to do. Can anyone improve this? Or is there a better way to handle this? I can't believe there isn't.
Best would be some way of handling this securely without ever having to type in the passphrase but that may be asking too much. I would settle for once per script invocation.
Here is a simplified version of the whole script in skeleton form:
#! /bin/sh
KEYFILE=$HOME/.ssh/id_dsa
PASSPHRASE=''

unset_passphrase() {
    # params
    # oldpassword keyfile
    echo "unset_key_password()"
    cmd="ssh-keygen -p -P $1 -N '' -f $2"
    echo "$cmd"
    $cmd
    echo
}

reset_passphrase() {
    # params
    # oldpassword keyfile
    echo "reset_key_password()"
    cmd="ssh-keygen -p -N '$1' -f $2"
    echo "$cmd"
    $cmd
    echo
}

echo "Enter passphrase:"
read -s PASSPHRASE
unset_passphrase $PASSPHRASE $KEYFILE
# do something with ssh
reset_passphrase $PASSPHRASE $KEYFILE
Check out ssh-agent. It caches the passphrase so you can use the keyfile during a certain period regardless of how many sessions you have.
The ssh-agent man page has more details.
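A minimal sketch of the ssh-agent workflow (the key path and remote host are assumptions):
# Start an agent for this shell session; ssh-add prompts for the
# passphrase once and caches the decrypted key.
eval "$(ssh-agent -s)"
ssh-add "$HOME/.ssh/id_dsa"

# Subsequent connections reuse the cached key without prompting.
scp some_file user@serverB:some_path

# Kill the agent when the script is done.
ssh-agent -k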
OpenSSH supports what's called a "control master" mode, where you can connect once, leave it running in the background, and then have other ssh instances (including scp, rsync, git, etc.) reuse that existing connection. This makes it possible to only type the password once (when setting up the control master) but execute multiple ssh commands to the same destination.
Search for ControlMaster in man ssh_config for details.
Advantages over ssh-agent:
You don't have to remember to run ssh-agent
You don't have to generate an ssh public/private key pair, which is important if the script will be run by many users (most people don't understand ssh keys, so getting a large group of people to generate them is a tiring exercise)
Depending on how it is configured, ssh-agent might time out your keys part-way through the script; this won't
Only one TCP session is started, so it is much faster if you're connecting over and over again (e.g., copying many small files one at a time)
Example usage (forgive Stack Overflow's broken syntax highlighting):
REMOTE_HOST=server

log() { printf '%s\n' "$*"; }
error() { log "ERROR: $*" >&2; }
fatal() { error "$*"; exit 1; }
try() { "$@" || fatal "'$@' failed"; }

controlmaster_start() {
    CONTROLPATH=/tmp/$(basename "$0").$$.%l_%h_%p_%r
    # same as CONTROLPATH but with special characters (quotes,
    # spaces) escaped in a way that rsync understands
    CONTROLPATH_E=$(
        printf '%s\n' "${CONTROLPATH}" |
        sed -e 's/'\''/"'\''"/g' -e 's/"/'\''"'\''/g' -e 's/ /" "/g'
    )
    log "Starting ssh control master..."
    ssh -f -M -N -S "${CONTROLPATH}" "${REMOTE_HOST}" \
        || fatal "couldn't start ssh control master"
    # automatically close the control master at exit, even if
    # killed or interrupted with ctrl-c
    trap 'controlmaster_stop' 0
    trap 'exit 1' HUP INT QUIT TERM
}

controlmaster_stop() {
    log "Closing ssh control master..."
    ssh -O exit -S "${CONTROLPATH}" "${REMOTE_HOST}" >/dev/null \
        || fatal "couldn't close ssh control master"
}

controlmaster_start
try ssh -S "${CONTROLPATH}" "${REMOTE_HOST}" some_command
try scp -o ControlPath="${CONTROLPATH}" \
    some_file "${REMOTE_HOST}":some_path
try rsync -e "ssh -S ${CONTROLPATH_E}" -avz \
    some_dir "${REMOTE_HOST}":some_path
# the control master will automatically close once the script exits
I could point out an alternative solution. Instead of having the key stored on server A, I would keep the key locally. Then I would create a local port forward to server B on port 4000:
ssh -L 4000:B:22 username@A
And then, in a new terminal, connect through the tunnel to server B:
ssh -p 4000 -i key_copied_from_a user_on_b@localhost
I don't know how feasible this is for you, though.
Building up commands as a string is tricky, as you've discovered. Much more robust to use arrays:
cmd=( ssh-keygen -p -P "$1" -N "" -f "$2" )
echo "${cmd[@]}"
"${cmd[@]}"
Or even use the positional parameters
passphrase="$1"
keyfile="$2"
set -- ssh-keygen -p -P "$passphrase" -N "" -f "$keyfile"
echo "$#"
"$#"
The empty argument won't be echoed surrounded by quotes, but it's there
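A quick demonstration that the array form preserves the empty -N argument (the passphrase and key path are assumptions):
passphrase='old pass phrase'
keyfile=$HOME/.ssh/id_dsa
cmd=( ssh-keygen -p -P "$passphrase" -N "" -f "$keyfile" )
# Print each argument on its own line; the empty -N argument
# shows up as <> rather than disappearing.
printf '<%s>\n' "${cmd[@]}"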
