How to properly iterate through a list using sshpass with a single ssh-login - bash

Situation: we're feeding a list of filenames to sshpass, which iterates over a remote folder to check whether files with the given names actually exist, then builds an updated list containing only the files that do exist; that list is reused later in the bash script.
Problem: the list sometimes comprises tens of thousands of files, which means tens of thousands of ssh logins. This is hurting performance and sometimes gets us blocked by our own security policies.
Intended solution: instead of starting the for-loop locally and calling sshpass on every iteration, turn it around and pass the whole loop to a single sshpass call.
My attempt at passing the list to the sshpass instruction is the test example below:
#!/bin/bash
all_paths=(`/bin/cat /home/user/filenames_to_be_tested.list`)
existing_paths=()
sshpass -p PASSWORD ssh -n USER@HOST bash -c "'
for (( i=0; i<${#all_paths[@]}; i++ ))
do
    echo ${all_paths[i]}
    echo \"-->\"$i
    if [[ -f ${all_paths[i]} ]]
    then
        echo ${all_paths[i]}
        existing_paths=(${all_paths[i]})
    fi
done
'"
printf '%s\n' "${existing_paths[@]}"
The issue here is that it appears to loop (you see a series of echoed lines), but i is not really being incremented: it keeps checking/printing the same line.
Can someone help spot the bug? Thanks!

The problem is that bash parses the string and substitutes the variables first, before anything is sent to the server. If you want to stop bash from doing that, you have to escape every variable that should be expanded on the server instead.
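As a minimal illustration (user@host is just a placeholder here):

ssh user@host "echo $HOSTNAME"     # expanded by the local shell before ssh runs, so it prints the local hostname
ssh user@host "echo \$HOSTNAME"    # the escaped \$HOSTNAME reaches the remote shell literally, so it prints the remote hostname

Applied to the script from the question, that gives: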
#! /bin/bash
all_paths=(rootfs.tar derp a)
read -sp "pass? " PASS
echo
sshpass -p $PASS ssh -n $USER@$SERVER "
files=(${all_paths[@]})
existing_paths=()
for ((i=0; i<\${#files[@]}; i++)); do
    echo -n \"\${files[@]} --> \$i\"
    if [[ -f \${files[\$i]} ]]; then
        echo \${files[\$i]}
        existing_paths+=(\${files[\$i]})
    else
        echo 'Not found'
    fi
done
printf '%s\n' \"\${existing_paths[@]}\"
"
This becomes hard to read very fast. However, there's an option I personally like to use: define functions locally and send them to the server to be executed there, which saves you from escaping a lot of stuff.
#! /bin/bash
all_paths=(rootfs.tar derp a)
function files_exist {
    local files=($@)
    local found=()
    for file in ${files[@]}; do
        echo -n "$file --> "
        if [[ -f $file ]]; then
            echo "exist"
            found+=("$file")
        else
            echo "missing"
        fi
    done
    printf '%s\n' "${found[@]}"
}
read -sp "pass? " PASS
echo
sshpass -p $PASS ssh -n $USER@$SERVER "
$(typeset -f files_exist)
files_exist ${all_paths[@]}
"

Related

How to change name of file if already present on remote machine?

I want to change the name of a file if it is already present on a remote server via SSH.
I tried this, from here (SuperUser):
ssh user@localhost -p 2222 'test -f /absolute/path/to/file' && echo 'YES' || echo 'NO'
This works well at a prompt: it echoes YES when the file exists and NO when it doesn't. But I want to launch this from a crontab, so it must be in a script.
Let's assume the file is called data.csv. A condition is set in a loop such that if there already is a data.csv file on the server, the file will be renamed data_1.csv, then data_2.csv, ... until the name is unique.
The renaming part works, but the detection part doesn't:
while [[ $fileIsPresent!='false' ]]
do
((appended+=1))
newFileName=${fileName}_${appended}.csv
remoteFilePathname=${remoteFolder}${newFileName}
ssh pi@localhost -p 2222 'test -f $remoteFilePathname' && fileIsPresent='true' || fileIsPresent='false'
done
It always returns fileIsPresent='true' for any data_X.csv. All the paths are absolute.
Do you have any ideas to help me?
This works:
$ cat replace.sh
#!/usr/bin/env bash
if [[ "$1" == "" ]]
then
echo "No filename passed."
exit
fi
if [[ ! -e "$1" ]]
then
echo "no such file"
exit
fi
base=${1%%.*} # get basename
ext=${1#*.} # get extension
for i in $(seq 1 100)
do
new="${base}_${i}.${ext}"
if [[ -e "$new" ]]
then
continue
fi
mv $1 $new
exit
done
$ ./replace.sh sample.csv
no such file
$ touch sample.csv
$ ./replace.sh sample.csv
$ ls
replace.sh
sample_1.csv
$ touch sample.csv
$ ./replace.sh sample.csv
$ ls
replace.sh
sample_1.csv
sample_2.csv
However, personally I'd prefer to use a timestamp instead of a number; note that this sample will run out of names after 100, while timestamps won't. Something like $(date +%Y%m%d_%H%M%S).
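For instance, a short sketch of the timestamp variant, reusing the base and ext variables from the script above:

# Timestamped rename instead of a numeric suffix; at one-second
# resolution a cron job will practically never collide.
new="${base}_$(date +%Y%m%d_%H%M%S).${ext}"
mv "$1" "$new"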
As you asked for ideas, it is worth mentioning that you probably don't want to start up to 100 ssh processes, each one logging into the remote machine. You might do better with a construct like this, which establishes a single ssh session that runs until complete:
ssh USER@REMOTE <<'EOF'
for ((i=0;i<10;i++)) ; do
echo $i
done
EOF
Alternatively, you can create and test a bash script locally and then run it remotely like this:
ssh USER@REMOTE 'bash -s' < LocallyTestedScript.bash
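Combining that idea with the original problem, here is a rough, untested sketch that finds a free name for data.csv in a single ssh session. It reuses fileName and remoteFolder from the question; the heredoc delimiter is deliberately left unquoted so those two variables are expanded locally, while the \$-escaped ones are expanded on the remote side:

newFileName=$(ssh pi@localhost -p 2222 bash -s <<EOF
name="${fileName}.csv"
n=0
# Append _1, _2, ... until the name is free on the remote side.
while test -f "${remoteFolder}\$name"; do
    n=\$((n + 1))
    name="${fileName}_\${n}.csv"
done
echo "\$name"
EOF
)
echo "Unique remote name: $newFileName"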

sshpass - connect to multiple servers from txt file? [duplicate]

This question already has answers here:
While loop stops reading after the first line in Bash
(5 answers)
Closed 6 years ago.
I use this bash code to upload files to a remote server; for normal files this works fine:
for i in `find devel/ -newer $UPLOAD_FILE`
do
echo "Upload:" $i
if [ -d $i ]
then
echo "Creating directory" $i
ssh $USER@$SERVER "cd ${REMOTE_PATH}; mkdir -p $i"
continue
fi
if scp -Cp $i $USER@$SERVER:$REMOTE_PATH/$i
then
echo "$i OK"
else
echo "$i NOK"
rm ${UPLOAD_FILE}_tmp
fi
done
The only problem is that the for-loop fails for files with a space in the name, so I replaced the first line like this:
find devel/ -newer $UPLOAD_FILE | while read i
do
echo "Upload:" $i
if [ -d $i ]
then
echo "Creating directory" $i
ssh $USER@$SERVER "cd ${REMOTE_PATH}; mkdir -p $i"
continue
fi
if scp -Cp $i $USER@$SERVER:$REMOTE_PATH/$i
then
echo "$i OK"
else
echo "$i NOK"
rm ${UPLOAD_FILE}_tmp
fi
done
For some strange reason, the ssh command breaks out of the while-loop: the first missing directory is created fine, but all subsequent missing files/directories are ignored.
I guess this has something to do with ssh writing something to stdout which confuses the "read" command. Commenting out the ssh-command makes the loop work as it should.
Does anybody know why this happens and how one can prevent ssh from breaking the while-loop?
The problem is that ssh reads from standard input, therefore it eats all your remaining lines. You can just connect its standard input to nowhere:
ssh $USER@$SERVER "cd ${REMOTE_PATH}; mkdir -p $i" < /dev/null
You can also use ssh -n instead of the redirection.
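So inside the loop the call would simply become (same variables as in the question):

# -n makes ssh take its stdin from /dev/null itself, so it no longer
# eats the output of find that the while loop is reading.
ssh -n $USER@$SERVER "cd ${REMOTE_PATH}; mkdir -p $i"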
Another approach is to loop over a FD other than stdin:
while IFS= read -u 3 -r -d '' filename; do
    if [[ -d $filename ]]; then
        printf -v cmd_str 'cd %q; mkdir -p %q' "$REMOTE_PATH" "$filename"
        ssh "$USER@$SERVER" "$cmd_str"
    else
        printf -v remote_path_str '%q@%q:%q/%q' "$USER" "$SERVER" "$REMOTE_PATH" "$filename"
        scp -Cp "$filename" "$remote_path_str"
    fi
done 3< <(find devel/ -newer "$UPLOAD_FILE" -print0)
The -u 3 and 3< operators are critical here, using FD 3 rather than the default FD 0 (stdin).
The approach given here -- using -print0, a cleared IFS value, and the like -- is also less buggy than the original code and the existing answer, which can't handle interesting filenames correctly. (Glenn Jackman's answer is close, but even that can't deal with filenames with newlines or filenames with trailing whitespace).
The use of printf %q is critical to generate commands which can't be used to attack the remote machine. Consider what would happen with a file named devel/$(rm -rf /)/hello with code which didn't have this paranoia.
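As a quick illustration of what %q produces for such a hostile name (the target directory here is made up):

printf -v cmd_str 'cd %q; mkdir -p %q' "/srv/upload" 'devel/$(rm -rf /)/hello'
echo "$cmd_str"
# prints: cd /srv/upload; mkdir -p devel/\$\(rm\ -rf\ /\)/hello
# The $(...) arrives on the remote shell as literal characters, not as a command substitution.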

Output of "df -h" to while loop, for list of host [duplicate]

This question already has answers here:
While loop stops reading after the first line in Bash
(5 answers)
Closed 6 years ago.
I use this bash-code to upload files to a remote server, for normal files this works fine:
for i in `find devel/ -newer $UPLOAD_FILE`
do
echo "Upload:" $i
if [ -d $i ]
then
echo "Creating directory" $i
ssh $USER#$SERVER "cd ${REMOTE_PATH}; mkdir -p $i"
continue
fi
if scp -Cp $i $USER#$SERVER:$REMOTE_PATH/$i
then
echo "$i OK"
else
echo "$i NOK"
rm ${UPLOAD_FILE}_tmp
fi
done
The only problem is that for files with a space in the name, the for-loop fails, so I replaced the first line like this:
find devel/ -newer $UPLOAD_FILE | while read i
do
echo "Upload:" $i
if [ -d $i ]
then
echo "Creating directory" $i
ssh $USER#$SERVER "cd ${REMOTE_PATH}; mkdir -p $i"
continue
fi
if scp -Cp $i $USER#$SERVER:$REMOTE_PATH/$i
then
echo "$i OK"
else
echo "$i NOK"
rm ${UPLOAD_FILE}_tmp
fi
done
For some strange reason, the ssh-command breaks out of the while-loop, therefore the first missing directory is created fine, but all subsequent missing files/directories are ignored.
I guess this has something to do with ssh writing something to stdout which confuses the "read" command. Commenting out the ssh-command makes the loop work as it should.
Does anybody know why this happens and how one can prevent ssh from breaking the while-loop?
The problem is that ssh reads from standard input, therefore it eats all your remaining lines. You can just connect its standard input to nowhere:
ssh $USER#$SERVER "cd ${REMOTE_PATH}; mkdir -p $i" < /dev/null
You can also use ssh -n instead of the redirection.
Another approach is to loop over a FD other than stdin:
while IFS= read -u 3 -r -d '' filename; do
if [[ -d $filename ]]; then
printf -v cmd_str 'cd %q; mkdir -p %q' "$REMOTE_PATH" "$filename"
ssh "$USER#$SERVER" "$cmd_str"
else
printf -v remote_path_str '%q#%q:%q/%q' "$USER" "$SERVER" "$REMOTE_PATH" "$filename"
scp -Cp "$filename" "$remote_path_str"
fi
done 3< <(find devel/ -newer "$UPLOAD_FILE" -print0)
The -u 3 and 3< operators are critical here, using FD 3 rather than the default FD 0 (stdin).
The approach given here -- using -print0, a cleared IFS value, and the like -- is also less buggy than the original code and the existing answer, which can't handle interesting filenames correctly. (Glenn Jackman's answer is close, but even that can't deal with filenames with newlines or filenames with trailing whitespace).
The use of printf %q is critical to generate commands which can't be used to attack the remote machine. Consider what would happen with a file named devel/$(rm -rf /)/hello with code which didn't have this paranoia.

Pseudo-terminal will not be allocated because stdin is not a terminal ssh bash

Okay, here's part of my code where I ssh to my servers from my server.txt list.
while read server <&3; do #read server names into the while loop
serverName=$(uname -n)
if [[ ! $server =~ [^[:space:]] ]] ; then #empty line exception
continue
fi
echo server on list = "$server"
echo server signed on = "$serverName"
if [ $serverName == $server ] ; then #makes sure a server doesnt try to ssh to itself
continue
fi
echo "Connecting to - $server"
ssh "$server" #SSH login
echo Connected to "$serverName"
exec < filelist.txt
while read updatedfile oldfile; do
# echo updatedfile = $updatedfile #use for troubleshooting
# echo oldfile = $oldfile #use for troubleshooting
if [[ ! $updatedfile =~ [^[:space:]] ]] ; then #empty line exception
continue # empty line exception
fi
if [[ ! $oldfile =~ [^[:space:]] ]] ; then #empty line exception
continue # empty line exception
fi
echo Comparing $updatedfile with $oldfile
if diff "$updatedfile" "$oldfile" >/dev/null ; then
echo The files compared are the same. No changes were made.
else
echo The files compared are different.
cp -f -v $oldfile /infanass/dev/admin/backup/`uname -n`_${oldfile##*/}_$(date +%F-%T)
cp -f -v $updatedfile $oldfile
fi
done
done 3</infanass/dev/admin/servers.txt
I keep getting this error, and the ssh doesn't actually connect and run the code on the server it's supposed to be ssh'd into.
Pseudo-terminal will not be allocated because stdin is not a terminal
I feel like everything the guy above just said is so wrong.
Expect?
It's simple:
ssh -i ~/.ssh/bobskey bob@10.10.10.10 << EOF
echo I am creating a file called Apples in the /tmp folder
touch /tmp/apples
exit
EOF
Everything in between the two "EOF"s will be run on the remote server.
The tags need to be the same: if you decide to replace "EOF" with "WayneGretzky", you must change the second EOF as well.
You seem to assume that when you run ssh to connect to a server, the rest of the commands in the file are passed to the remote shell running in ssh. They are not; instead they will be processed by the local shell once ssh terminates and returns control to it.
To run remote commands through ssh there are a couple of things you can do:
Write the commands you want to execute to a file. Copy the file to the remote server using scp, and execute it with ssh user@remote command
Learn a bit of TCL and use expect
Write the commands in a heredoc, but be careful with variable substitution: substitution happens in the client, not on the server. For example, this will output your local home directory, not the remote one:
ssh remote <<EOF
echo $HOME
EOF
To make it print the remote home directory you have to use echo \$HOME.
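A related trick worth knowing: quoting the heredoc delimiter suppresses local substitution altogether, so everything between the markers reaches the remote shell untouched:

ssh remote <<'EOF'
# With a quoted delimiter (<<'EOF') nothing is expanded locally,
# so this prints the remote home directory.
echo $HOME
EOF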
Also, remember that data files such as filelist.txt have to be explicitly copied if you want to read them on the remote side.

Resources