I wrote a deployAll.sh script which reads ip_host.list line by line and then adds a group on each of the remote hosts.
When I run sh deployAll.sh, the results are:
Group is added to 172.25.30.11
The expected results are:
Group is added to 172.25.30.11
Group is added to 172.25.30.12
Group is added to 172.25.30.13
Why does it execute only the first one? Please help, thanks a lot!
deployAll.sh
#!/bin/bash
function deployAll()
{
    while read line; do
        IFS=';' read -ra ipandhost <<< "$line"
        ssh "${ipandhost[0]}" "groupadd -g 1011 test"
        printf "Group is added to ${ipandhost[0]}\n"
    done < ip_host.list
}
deployAll
ip_host.list
172.25.30.11;test-30-11
172.25.30.12;test-30-12
172.25.30.13;test-30-13
That's a frequent problem, caused by the special behavior of ssh: it sucks up stdin, starving the loop (i.e. while read line; do ...; done).
Please see Bash FAQ 89 which discusses this subject in detail.
I also just answered (and solved) a similar question regarding ffmpeg, which shows the same stdin-swallowing behavior as ssh does here: When reading a file line by line, I only get to execute ffmpeg on the first line.
TL;DR:
There are three main options:
Using ssh's -n option. Quoted from man ssh:
-n Redirects stdin from /dev/null (actually, prevents reading from stdin). This must be used when ssh is run in the background. A common trick is to use this to run X11 programs on a remote machine. For example, ssh -n shadows.cs.hut.fi emacs & will start an emacs on shadows.cs.hut.fi, and the X11 connection will be automatically forwarded over an encrypted channel. The ssh program will be put in the background. (This does not work if ssh needs to ask for a password or passphrase; see also the -f option.)
Adding </dev/null at the end of ssh's line (i.e. ssh ... </dev/null) will fix the issue and make ssh behave as expected.
Letting read read from a file descriptor that is unlikely to be used by a random program:
while IFS= read -r line <&3; do
# Here read is reading from FD 3, to which 'ip_host.list' is redirected.
done 3<ip_host.list
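For completeness, here is a minimal sketch of the corrected script from the question using option 1 (ssh -n); the file name and group values are taken verbatim from the question:
#!/bin/bash
function deployAll()
{
    while read line; do
        IFS=';' read -ra ipandhost <<< "$line"
        # -n keeps ssh from draining the rest of ip_host.list
        ssh -n "${ipandhost[0]}" "groupadd -g 1011 test"
        printf "Group is added to ${ipandhost[0]}\n"
    done < ip_host.list
}
deployAll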
Without the ssh command (which wouldn't make sense on my network), I get the expected output so I suspect that the ssh command is swallowing the remaining standard input. You should use the -n flag to prevent ssh from reading from stdin (equivalent to redirecting stdin from /dev/null):
ssh -n "${ipandhost[0]}" "groupadd -g 1011 test"
or
ssh "${ipandhost[0]}" "groupadd -g 1011 test" < /dev/null
See also How to keep script from swallowing all of stdin?
I got this trick from the Solaris documentation for copying ssh public keys to remote hosts.
Note: ssh-copy-id isn't available on Solaris.
$ cat some_data_file | ssh user@host "cat >/tmp/some_data_file; some_shell_cmd"
I wanted to adapt it to do more involved things.
Specifically I wanted some_shell_command to be a script sent from the local host to execute on the remote host... a script would interact with the local keyboard (e.g. prompt user when the script was running on the remote host).
I experimented with ways of sending multiple things over stdin from multiple sources. But certain things that work in a local shell don't work over ssh, and some things, such as the following, didn't do what I wanted at all:
$ echo "abc" | cat <(echo "def") # echoes: def (I wanted abc\ndef)
$ echo "abc" | cat < <(echo "def") # echoes: def (I wanted abc\ndef)
$ echo "abc" | cat <<-EOF
> echo $(</dev/stdin) #echoes: echo abc (I wanted: abc)
> EOF
# messed with eval for the above but that was a problem too.
@chepner concluded it's not feasible to do all of that in a single ssh command. He suggested a theoretical alternative that didn't work as hoped, but I got it working after some research and tweaking, and documented the results and posted them as an answer to this question.
Without that solution, having to run multiple ssh and scp commands by default entails being prompted for a password multiple times, which is a major drag.
I can't expect all the users of a script I write in a multi-user environment to configure public key authorization, nor expect they will put up with having to enter a password over and over.
OpenSSH Session Multiplexing
This solution works even when using earlier versions of OpenSSH where the ControlPersist option isn't available. (Working bash example at the end of this answer.)
Note: OpenSSH 3.9 introduced session multiplexing over a "control master connection" in 2005; however, the ControlPersist option wasn't introduced until OpenSSH 5.6 (released in 2010).
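If you're not sure which version a given system runs, ssh -V reports it; the exact output varies by build, so the version string below is only an illustration:
$ ssh -V
OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013   # pre-5.6, so no ControlPersist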
ssh session multiplexing allows a script to authenticate once and do multiple ssh transactions over the authenticated connection. For example, if you have a script that runs several distinct tasks using ssh, scp, or sftp, each transaction can be carried out over an OpenSSH 'control master session' that refers to the location of its named socket in the filesystem.
The following one-time-password-authentication approach is useful when running a script that has to perform multiple ssh operations and you want to avoid users having to authenticate more than once. It is especially useful in cases where public key authentication isn't viable, e.g. not permitted, or at least not configured.
Most solutions I've seen entail using ControlPersist to tell ssh to keep the control master connection open, either indefinitely, or for some specific number of seconds.
Unfortunately, systems with OpenSSH prior to 5.6 don't have that option (and upgrading them might not be feasible), and there doesn't seem to be much documentation or discussion of that limitation online.
Reading through old release docs, I discovered that ControlPersist arrived late in the game for the ssh session multiplexing scene, implying there must have been a way to configure session multiplexing without relying on the ControlPersist option before it existed.
Initially, when trying to configure persistent sessions with command-line options rather than config parameters, I ran into a problem: either the ssh session terminated prematurely, closing the control connection's client sessions with it, or the connection was held open (keeping the ssh control master alive) but terminal I/O was blocked and the script would hang.
The following clarifies how to accomplish it.
OpenSSH option ssh flag Purpose
------------------- --------- -----------------------------
-o ControlMaster=yes -M Establishes sharable connection
-o ControlPath=path -S path Specifies path of connection's named socket
-o ControlPersist=600 Keep shareable connection open 10 min.
-o ControlPersist=yes Keep shareable connection open indefinitely
-N Don't create shell or run a command
-f Go into background after authenticating
-O exit Closes persistent connection
ControlPersist form Equivalent Purpose
------------------- ---------------- -------------------------
-o ControlPersist=yes ssh -Nf Keep control connection open indefinitely
-o ControlPersist=300 ssh -f sleep 300 Keep control connection open 5 min.
Note: scp and sftp implement the -S flag differently, and the -M flag not at all, so for those commands the -o option form is always required.
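For instance, on a pre-5.6 system the second ControlPersist equivalent above can be realized like this (the socket path and host are illustrative):
ctl=~/.ssh/cm_socket
ssh -f -M -S "$ctl" user@host sleep 300   # master stays up for 5 minutes
ssh -S "$ctl" user@host uptime            # multiplexed over that master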
Sketchy Overview of Operations:
Note: This incomplete example doesn't execute as shown.
ctl=<path to dir to store named socket>
ssh -fNMS $ctl user@host # open control master connection
ssh -S $ctl … # example of ssh over connection
scp -o ControlPath=$ctl … # example of scp over connection
sftp -o ControlPath=$ctl … # example of sftp over connection
ssh -S $ctl -O exit # close control master connection
Session Multiplexing Demo
(Try it. You'll like it. Working example - authenticates only once):
Running this script will probably help you understand it quicker than reading it, and it is fascinating.
Note: If you lack access to a remote host, just enter localhost at the "Host...?" prompt to try this demo script.
#!/bin/bash
# This script demonstrates ssh session multiplexing
trap '[ -z "$ctl" ] || ssh -S "$ctl" -O exit "$user@$host"' EXIT # closes conn, deletes fifo
read -p "Host to connect to? " host
read -p "User to login with? " user
BOLD="\n$(tput bold)"; NORMAL="$(tput sgr0)"
echo -e "${BOLD}Create authenticated persistent control master connection:${NORMAL}"
sshfifos=~/.ssh/controlmasters
[ -d $sshfifos ] || mkdir -p $sshfifos; chmod 755 $sshfifos
ctl=$sshfifos/$user@$host:22 # ssh stores named socket ctrl conn here
ssh -fNMS $ctl $user@$host # Control Master: prompts for passwd then persists in background
lcldir=$(mktemp -d /tmp/XXXX)
echo -e "\nLocal dir: $lcldir"
rmtdir=$(ssh -S $ctl $user@$host "mktemp -d /tmp/XXXX")
echo "Remote dir: $rmtdir"
echo -e "${BOLD}Copy self to remote with scp:${NORMAL}"
scp -o ControlPath=$ctl ${BASH_SOURCE[0]} $user@$host:$rmtdir
echo -e "${BOLD}Display 4 lines of remote script, with ssh:${NORMAL}"
echo "====================================================================="
echo $rmtdir | ssh -S $ctl $user@$host "dir=$(</dev/stdin); head -4 \$dir/*"
echo "====================================================================="
echo -e "${BOLD}Do some pointless things with sftp:${NORMAL}"
sftp -o ControlPath=$ctl $user@$host:$rmtdir <<EOF
pwd
ls
lcd $lcldir
get *
quit
EOF
Using a master control socket, you can use multiple processes without having to authenticate more than once. This is just a simple example; see man ssh_config under ControlPath for advice on using a more secure socket.
It's not quite clear what you mean by sourcing somecommand locally; I'm going to assume it is a local script that you want copied over to the remote host. The simplest thing to do is just copy it over and run it.
# Copy the first file, and tell ssh to keep the connection open
# in the background after scp completes
$ scp -o ControlMaster=yes -o ControlPersist=yes -o ControlPath=%C somefile user@host:/tmp/somefile
# Copy the script on the same connection
$ scp -o ControlPath=%C somecommand user@host:
# Run the script on the same connection
$ ssh -o ControlPath=%C user@host somecommand
# Close the connection
$ ssh -o ControlPath=%C -O exit user@host
Of course, the user could use public key authentication to avoid entering their credentials at all, but ssh would still go through the authentication process each time. Here, the authentication process is only done once, by the command using ControlMaster=yes. The other two processes reuse that connection. The last command, with -O exit, doesn't actually connect; it just tells the local connection to close itself.
$ echo "abc" | cat <(echo "def")
The expression <(echo "def") expands to a file name, typically something like /dev/fd/63, that names a (virtual) file containing the text "def". So let's simplify it a bit:
$ echo "def" > def.txt
$ echo "abc" | cat def.txt
This also prints just def.
The pipe does feed the line abc to the standard input of the cat command. But because cat is given a file name on its command line, it doesn't read from its standard input. The abc is just quietly ignored, and the cat command prints the contents of the named file -- which is exactly what you told it to do.
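Incidentally, cat can be told to read its standard input explicitly by naming it as - on the command line (a long-standing cat convention), which yields both inputs; a minimal illustration using the def.txt from above:
$ echo "abc" | cat - def.txt
abc
def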
The problem with echo abc | cat <(echo def) is that the <() wins the "providing the input" race. Luckily, bash will allow you to supply many inputs using multiple <() constructs. So the trick is, how do you get the output of your echo abc into the <()?
How about:
$ echo abc | cat <(echo def) <(cat)
def
abc
If you need to handle the input from the pipe first, just switch the order:
$ echo abc | cat <(cat) <(echo def)
abc
def
I have a requirement which should address the following points.
I have a file which contains a list of IP addresses, which I want to read line by line.
For each IP, I need to push the following commands over SSH (all are Mikrotik devices):
/ radius add service=login address=172.16.0.1 secret=aaaa
/ user aaa set use-radius=yes
Following is my code.
#!/bin/bash
filename="branch"
while IFS= read line; do
echo ${line//}
line1=${line//}
ok='@'
line3=$ok$line1
sshpass -p abc123 ssh -o StrictHostKeyChecking=no admin$line3 / radius add service=login address=172.16.0.1 secret=aaaa
sleep 3
sshpass -p abc123 ssh -o StrictHostKeyChecking=no admin$line3 / user aaa set use-radius=yes
sleep 3
echo $line3
echo $line
done <"$filename"
Branch text file:
192.168.100.1
192.168.101.2
192.168.200.1
Issue: Whatever changes I make, the while loop only runs once.
Troubleshooting/Observations:
Without the SSH command, if I run the while loop to read the file "branch", it works fine.
The problem is that a program in the loop also reads data on standard input. This will consume the 2nd and subsequent lines of what's in "$filename".
On the next iteration of the loop, there's nothing left to read and the loop terminates.
The solution is to identify the command reading stdin, probably sshpass, and change it to leave stdin alone. The answer by Cyrus shows one way to do that for ssh. If that doesn't work, try
sshpass [options and arguments here] < /dev/null
Another solution is to replace the while with a for loop. This works as long as the branch file only contains IP addresses:
for ip in $(cat branch); do
echo $ip
...
sshpass ...
done
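Alternatively, here is a minimal sketch of the original while loop with stdin redirected away from each remote command (same file, password, and Mikrotik commands as in the question; untested against a real device):
#!/bin/bash
filename="branch"
while IFS= read -r ip; do
    # </dev/null keeps sshpass/ssh from consuming the rest of "$filename"
    sshpass -p abc123 ssh -o StrictHostKeyChecking=no "admin@$ip" \
        "/ radius add service=login address=172.16.0.1 secret=aaaa" </dev/null
    sshpass -p abc123 ssh -o StrictHostKeyChecking=no "admin@$ip" \
        "/ user aaa set use-radius=yes" </dev/null
    echo "configured $ip"
done < "$filename"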
Can I run a here-document script over ssh on a remote machine in interactive mode?
Code example is:
ssh -t xijing#ggzshop.com 'bash -s' <<EOF
sudo ls
......Other big scripts......
EOF
Doubling the -t flag (ssh -tt) won't work properly either.
One possible solution:
After a lot of tries, I come up with following answers:
Script=`cat <<'EOF'
sudo ls
.....Big scripts.....
EOF`
ssh -t user@host ${Script}
which will allow user to type password in.
Xijing's solution appears to work OK for me. However, I made a couple of cosmetic changes. First, for readability I used "dollar-parentheses" instead of backticks. For another I offer no explanation: semicolons were needed to separate the multiple commands in the Script snippet, even though the commands are written on separate lines. My test:
Script=$( cat <<'HERE'
hostname;
cat /etc/issue;
sudo id
HERE
)
ssh -t user@host ${Script}
The sudo password will be asked for in the normal manner; no need to omit it.
No, I don't think you can run interactive scripts like that.
To achieve what you want, you could create dedicated users for your common admin tasks that can run admin commands with sudo without password. Next, setup ssh key authentication to login as the dedicated users and perform the necessary tasks.
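For example, a sudoers drop-in along these lines would let such a dedicated account run one specific command without a password (the user name and command path are placeholders for illustration):
# /etc/sudoers.d/deploy -- always edit with visudo
deployuser ALL=(root) NOPASSWD: /usr/local/bin/deploy-task.sh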
It is not necessary to use semicolons to separate multiple commands in Script if there are quotes around it.
Script="$( cat <<'HERE'
hostname;
cat /etc/issue;
sudo id
HERE
)"
- ssh -t user@host ${Script}
+ ssh -t user@host "${Script}"
# alternative (not recommended)
# set IFS variable to null string to avoid deletion of newlines \n in unquoted variable expansion
export IFS=''
ssh -t user@host ${Script}
I'm just starting out with bash and am trying to write a script to search specific files on a server remotely, based on (a) device name and (b) string. My goal is to get all output containing 'string' for the specified device. When I try it, the script below just hangs. However, when I run the command directly on the server (grep -i "router1" /var/log/router.log | grep -i "UPDOWN"), it works. Any ideas?
#!/bin/bash
#
read -p "Enter username: " user
read -p "Enter device name: " dev
read -p "Enter string: " str
while read /home/user1/syslogs
do
ssh "$user"#server1234 'grep -i "$dev" /var/log/"$syslogs" 2> /dev/null | grep -i "$str"'
done
You seem to be mis-using the read command. You don't specify the file to read from as an argument; read always reads from standard input. It's not clear what you want to do with the value you read from the file as a result, but you want something like this:
read -p "Enter username: " user
read -p "Enter device name: " dev
read -p "Enter string: " str
while read fileName; do
# Also: I'm borrowing sputnick's solution to the nested quote problem.
ssh $user@server1234 <<EOF
grep -i "$dev" /var/log/$fileName 2>/dev/null | grep -i "$str"
EOF
done < /home/user1/syslogs
The message Pseudo-terminal will not be allocated because stdin is not a terminal appears because the stdin of the remote host's shell is being redirected from a here document while no command is specified for the remote host to execute. In that situation, the remote host first assumes it will need to allocate a pseudo-terminal for an interactive login session because no command was given (see the synopsis in the ssh man page: ssh ... [user@]hostname [command]), but then realizes that the stdin of its shell is not a terminal, since it is redirected from a here document. The result is that the remote host refuses to allocate a pseudo-terminal.
The solution in the given case would be to just specify a shell as a command for the remote host to execute the commands given in the here document.
As an alternative to specifying a shell as a command the remote host could be told in advance that there is no need for the allocation of a pseudo-terminal using the -T switch.
The -t switch, on the other hand, would be necessary only if a specified command expects an interactive login shell session on the remote host (such as top or vim).
- ssh $user@server1234 <<EOF ...
+ ssh $user@server1234 /bin/sh <<EOF ...
+ ssh -T $user@server1234 <<EOF ...
I am required to deploy some files from server A to server B. I connect to server A via SSH and from there, connect via ssh to server B, using a private key stored on server A, the public key of which resides in server B's authorized_keys file. The connection from A to B happens within a Bash shell script that resides on server A.
This all works fine, nice and simple, until a security-conscious admin pointed out that my SSH private key stored on server A is not passphrase protected, so that anyone who might conceivably hack into my account on server A would also have access to server B, as well as C, D, E, F, and G. He has a point, I guess.
He suggests a complicated scenario under which I would add a passphrase, then modify my shell script to add a line at the beginning in which I would call
ssh-keygen -p -f {private key file}
answer the prompt for my old passphrase with the passphrase and the (two) prompts for my new passphrase with just Return (which gets rid of the passphrase), and then at the end, after my scp command, calling
ssh-keygen -p -f {private key file}
again, to put the passphrase back
To which I say "Yecch!".
Well I can improve that a little by first reading the passphrase ONCE in the script with
read -s PASS_PHRASE
then supplying it as needed using the -N and -P parameters of ssh-keygen.
It's almost usable, but I hate interactive prompts in shell scripts. I'd like to get this down to one interactive prompt, but the part that's killing me is having to press Enter twice to get rid of the passphrase.
This works from the command line:
ssh-keygen -p -f {private key file} -P {pass phrase} -N ''
but not from the shell script. There, it seems I must remove the -N parameter and accept the need to type two returns.
That is the best I am able to do. Can anyone improve this? Or is there a better way to handle this? I can't believe there isn't.
Best would be some way of handling this securely without ever having to type in the passphrase but that may be asking too much. I would settle for once per script invocation.
Here is a simplified version the whole script in skeleton form
#! /bin/sh
KEYFILE=$HOME/.ssh/id_dsa
PASSPHRASE=''
unset_passphrase() {
# params
# oldpassword keyfile
echo "unset_key_password()"
cmd="ssh-keygen -p -P $1 -N '' -f $2"
echo "$cmd"
$cmd
echo
}
reset_passphrase() {
# params
# oldpassword keyfile
echo "reset_key_password()"
cmd="ssh-keygen -p -N '$1' -f $2"
echo "$cmd"
$cmd
echo
}
echo "Enter passphrase:"
read -s PASSPHRASE
unset_passphrase $PASSPHRASE $KEYFILE
# do something with ssh
reset_passphrase $PASSPHRASE $KEYFILE
Check out ssh-agent. It caches the passphrase so you can use the keyfile for a certain period regardless of how many sessions you have.
Here are more details about ssh-agent.
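A minimal sketch of the usual ssh-agent workflow (the key path and host are just examples):
eval "$(ssh-agent -s)"            # start an agent for this shell
ssh-add "$HOME/.ssh/id_dsa"       # prompts for the passphrase once
scp somefile user@serverB:/tmp/   # no passphrase prompt now
ssh user@serverB 'some command'   # ditto
ssh-agent -k                      # kill the agent when finished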
OpenSSH supports what's called a "control master" mode, where you can connect once, leave it running in the background, and then have other ssh instances (including scp, rsync, git, etc.) reuse that existing connection. This makes it possible to only type the password once (when setting up the control master) but execute multiple ssh commands to the same destination.
Search for ControlMaster in man ssh_config for details.
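As a sketch of what that configuration can look like in ~/.ssh/config (the host alias and paths are placeholders, and ControlPersist requires OpenSSH 5.6 or later):
Host myserver
    HostName server.example.com
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 600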
Advantages over ssh-agent:
You don't have to remember to run ssh-agent
You don't have to generate an ssh public/private key pair, which is important if the script will be run by many users (most people don't understand ssh keys, so getting a large group of people to generate them is a tiring exercise)
Depending on how it is configured, ssh-agent might time out your keys part-way through the script; this won't
Only one TCP session is started, so it is much faster if you're connecting over and over again (e.g., copying many small files one at a time)
Example usage (forgive Stack Overflow's broken syntax highlighting):
REMOTE_HOST=server
log() { printf '%s\n' "$*"; }
error() { log "ERROR: $*" >&2; }
fatal() { error "$*"; exit 1; }
try() { "$@" || fatal "'$@' failed"; }
controlmaster_start() {
CONTROLPATH=/tmp/$(basename "$0").$$.%l_%h_%p_%r
# same as CONTROLPATH but with special characters (quotes,
# spaces) escaped in a way that rsync understands
CONTROLPATH_E=$(
printf '%s\n' "${CONTROLPATH}" |
sed -e 's/'\''/"'\''"/g' -e 's/"/'\''"'\''/g' -e 's/ /" "/g'
)
log "Starting ssh control master..."
ssh -f -M -N -S "${CONTROLPATH}" "${REMOTE_HOST}" \
|| fatal "couldn't start ssh control master"
# automatically close the control master at exit, even if
# killed or interrupted with ctrl-c
trap 'controlmaster_stop' 0
trap 'exit 1' HUP INT QUIT TERM
}
controlmaster_stop() {
log "Closing ssh control master..."
ssh -O exit -S "${CONTROLPATH}" "${REMOTE_HOST}" >/dev/null \
|| fatal "couldn't close ssh control master"
}
controlmaster_start
try ssh -S "${CONTROLPATH}" "${REMOTE_HOST}" some_command
try scp -o ControlPath="${CONTROLPATH}" \
some_file "${REMOTE_HOST}":some_path
try rsync -e "ssh -S ${CONTROLPATH_E}" -avz \
some_dir "${REMOTE_HOST}":some_path
# the control master will automatically close once the script exits
I could point out an alternative solution for this. Instead of having the key stored on server A, I would keep the key locally. Now I would create a local port forward to server B on port 4000.
ssh -L 4000:B:22 username@A
And then in a new terminal connect through the tunnel to server B.
ssh -p 4000 -i key_copied_from_a user_on_b@localhost
I don't know how feasible this is for you, though.
Building up commands as a string is tricky, as you've discovered. Much more robust to use arrays:
cmd=( ssh-keygen -p -P "$1" -N "" -f "$2" )
echo "${cmd[#]}"
"${cmd[#]}"
Or even use the positional parameters
passphrase="$1"
keyfile="$2"
set -- ssh-keygen -p -P "$passphrase" -N "" -f "$keyfile"
echo "$#"
"$#"
The empty argument won't be echoed surrounded by quotes, but it's there