This question already has answers here:
While loop stops reading after the first line in Bash
(5 answers)
Closed 6 years ago.
I use this bash code to upload files to a remote server; for normal files this works fine:
for i in `find devel/ -newer $UPLOAD_FILE`
do
    echo "Upload:" $i
    if [ -d $i ]
    then
        echo "Creating directory" $i
        ssh $USER@$SERVER "cd ${REMOTE_PATH}; mkdir -p $i"
        continue
    fi
    if scp -Cp $i $USER@$SERVER:$REMOTE_PATH/$i
    then
        echo "$i OK"
    else
        echo "$i NOK"
        rm ${UPLOAD_FILE}_tmp
    fi
done
The only problem is that for files with a space in the name, the for-loop fails, so I replaced the first line like this:
find devel/ -newer $UPLOAD_FILE | while read i
do
    echo "Upload:" $i
    if [ -d $i ]
    then
        echo "Creating directory" $i
        ssh $USER@$SERVER "cd ${REMOTE_PATH}; mkdir -p $i"
        continue
    fi
    if scp -Cp $i $USER@$SERVER:$REMOTE_PATH/$i
    then
        echo "$i OK"
    else
        echo "$i NOK"
        rm ${UPLOAD_FILE}_tmp
    fi
done
For some strange reason, the ssh command breaks out of the while loop: the first missing directory is created fine, but all subsequent missing files/directories are ignored.
I guess this has something to do with ssh writing something to stdout which confuses the "read" command. Commenting out the ssh command makes the loop work as it should.
Does anybody know why this happens and how one can prevent ssh from breaking the while-loop?
The problem is that ssh reads from standard input and therefore eats all your remaining lines. You can simply connect its standard input to nowhere:
ssh $USER@$SERVER "cd ${REMOTE_PATH}; mkdir -p $i" < /dev/null
You can also use ssh -n instead of the redirection.
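Applied to the loop in the question, only the ssh line needs to change; a minimal sketch (the variables are assumed to be set as in the question, and robust quoting of the remote path is handled more carefully in the next answer):
# -n connects ssh's stdin to /dev/null, so it no longer
# consumes the lines the surrounding while loop should read
ssh -n $USER@$SERVER "cd ${REMOTE_PATH}; mkdir -p $i"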
Another approach is to loop over a FD other than stdin:
while IFS= read -u 3 -r -d '' filename; do
    if [[ -d $filename ]]; then
        printf -v cmd_str 'cd %q; mkdir -p %q' "$REMOTE_PATH" "$filename"
        ssh "$USER@$SERVER" "$cmd_str"
    else
        printf -v remote_path_str '%q@%q:%q/%q' "$USER" "$SERVER" "$REMOTE_PATH" "$filename"
        scp -Cp "$filename" "$remote_path_str"
    fi
done 3< <(find devel/ -newer "$UPLOAD_FILE" -print0)
The -u 3 and 3< operators are critical here, using FD 3 rather than the default FD 0 (stdin).
The approach given here -- using -print0, a cleared IFS value, and the like -- is also less buggy than the original code and the existing answer, which can't handle interesting filenames correctly. (Glenn Jackman's answer is close, but even that can't deal with filenames with newlines or filenames with trailing whitespace).
The use of printf %q is critical to generate commands which can't be used to attack the remote machine. Consider what would happen with a file named devel/$(rm -rf /)/hello with code which didn't have this paranoia.
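As a quick illustration of what %q buys you (a made-up hostile filename, not from the question; the exact quoting style can vary between bash versions):
$ f='devel/$(rm -rf /)/hello world'
$ printf '%q\n' "$f"
devel/\$\(rm\ -rf\ /\)/hello\ world
The command substitution is never executed; it is turned into an inert, shell-safe string before it ever reaches the remote side.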
Related
I want to rename a file, via SSH, if it is already present on a remote server.
I tried this, taken from here (SuperUser):
ssh user@localhost -p 2222 'test -f /absolute/path/to/file' && echo 'YES' || echo 'NO'
This works well from an interactive prompt: it echoes YES when the file exists and NO when it doesn't. But I want this to be launched from a crontab, so it must go in a script.
Let's assume the file is called data.csv. A condition is checked in a loop: if a data.csv file already exists on the server, the file is renamed data_1.csv, then data_2.csv, ... until the name is unique.
The renaming part works, but the detection part doesn't:
while [[ $fileIsPresent!='false' ]]
do
    ((appended+=1))
    newFileName=${fileName}_${appended}.csv
    remoteFilePathname=${remoteFolder}${newFileName}
    ssh pi@localhost -p 2222 'test -f $remoteFilePathname' && fileIsPresent='true' || fileIsPresent='false'
done
This always returns fileIsPresent='true' for any data_X.csv. All the paths are absolute.
Do you have any idea to help me?
This works:
$ cat replace.sh
#!/usr/bin/env bash
if [[ "$1" == "" ]]
then
    echo "No filename passed."
    exit
fi
if [[ ! -e "$1" ]]
then
    echo "no such file"
    exit
fi
base=${1%%.*}   # name without extension (everything before the first dot)
ext=${1#*.}     # extension (everything after the first dot)
for i in $(seq 1 100)
do
    new="${base}_${i}.${ext}"
    if [[ -e "$new" ]]
    then
        continue
    fi
    mv "$1" "$new"
    exit
done
$ ./replace.sh sample.csv
no such file
$ touch sample.csv
$ ./replace.sh sample.csv
$ ls
replace.sh
sample_1.csv
$ touch sample.csv
$ ./replace.sh sample.csv
$ ls
replace.sh
sample_1.csv
sample_2.csv
However, I'd personally prefer a timestamp instead of a number, something like $(date +%Y%m%d_%H%M%S): this sample runs out of names after 100, while timestamps won't.
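A minimal sketch of that variant (it replaces the for-loop in replace.sh above; the timestamp format is only an example):
# rename once, using a timestamp instead of a counter
base=${1%%.*}
ext=${1#*.}
mv "$1" "${base}_$(date +%Y%m%d_%H%M%S).${ext}"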
As you asked for ideas to help you, I thought it worth mentioning that you probably don't want to start up to 100 ssh processes, each one logging into the remote machine, so you might do better with a construct like this, which establishes a single ssh session that runs until complete:
ssh USER@REMOTE <<'EOF'
for ((i=0;i<10;i++)) ; do
    echo $i
done
EOF
Alternatively, you can create and test a bash script locally and then run it remotely like this:
ssh USER@REMOTE 'bash -s' < LocallyTestedScript.bash
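For the original problem, the whole search for a free name could then run in a single session; a rough, untested sketch (it reuses pi@localhost -p 2222 and the $fileName / $remoteFolder variables from the question, and assumes the names contain no whitespace or other special characters):
newFileName=$(ssh pi@localhost -p 2222 bash -s <<EOF
candidate=${fileName}.csv
appended=0
while test -f "${remoteFolder}\$candidate"; do
    appended=\$((appended+1))
    candidate=${fileName}_\$appended.csv
done
echo "\$candidate"
EOF
)
The unescaped variables are expanded locally before the here-document is sent; the ones escaped with \$ are evaluated on the remote side.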
Situation: we're feeding a list of filenames to sshpass, and it iterates correctly through a remote folder, checking whether files with the given names actually exist and building an updated list containing only the files that do exist, which is reused later in the bash script.
Problem: the list sometimes comprises tens of thousands of files, which means tens of thousands of ssh logins; this harms performance and sometimes gets us blocked by our own security policies.
Intended solution: instead of running the for-loop locally and calling sshpass on every iteration, turn it around and pass the loop to a single sshpass call.
I need to pass the list to the sshpass instruction, as in the test example below:
#!/bin/bash
all_paths=(`/bin/cat /home/user/filenames_to_be_tested.list`)
existing_paths=()
sshpass -p PASSWORD ssh -n USER@HOST bash -c "'
for (( i=0; i<${#all_paths[@]}; i++ ))
do
    echo ${all_paths[i]}
    echo \"-->\"$i
    if [[ -f ${all_paths[i]} ]]
    then
        echo ${all_paths[i]}
        existing_paths=(${all_paths[i]})
    fi
done
'"
printf '%s\n' "${existing_paths[@]}"
The issue here is that it appears to loop (you see a series of echoed lines), but it is not really iterating over i: it always checks/prints the same line.
Can someone help spot the bug? Thanks!
The problem is that bash first parses the string and substitutes the variables, and that happens before anything is sent to the server. If you want to stop bash from doing that, you have to escape every variable that should be expanded on the server instead.
#! /bin/bash
all_paths=(rootfs.tar derp a)
read -sp "pass? " PASS
echo
sshpass -p $PASS ssh -n $USER@$SERVER "
files=(${all_paths[@]})
existing_paths=()
for ((i=0; i<\${#files[@]}; i++)); do
    echo -n \"\${files[@]} --> \$i\"
    if [[ -f \${files[\$i]} ]]; then
        echo \${files[\$i]}
        existing_paths+=(\${files[\$i]})
    else
        echo 'Not found'
    fi
done
printf '%s\n' \"\${existing_paths[@]}\"
"
This becomes hard to read very quickly. However, there's an option I personally like to use: define the functions locally and send them to the server to be executed there, which avoids most of the escaping.
#! /bin/bash
all_paths=(rootfs.tar derp a)
function files_exist {
    local files=("$@")
    local found=()
    for file in "${files[@]}"; do
        echo -n "$file --> "
        if [[ -f $file ]]; then
            echo "exist"
            found+=("$file")
        else
            echo "missing"
        fi
    done
    printf '%s\n' "${found[@]}"
}
read -sp "pass? " PASS
echo
sshpass -p $PASS ssh -n $USER@$SERVER "
$(typeset -f files_exist)
files_exist ${all_paths[@]}
"
This question already has answers here:
While loop stops reading after the first line in Bash
(5 answers)
Closed 6 years ago.
I am writing a Bash file to execute two PhantomJS tasks.
I have two tasks written in external JS files: task1.js & task2.js.
Here's my Bash script so far:
#!/bin/bash
url=$1
cd $(cd $(dirname ${BASH_SOURCE}); pwd -P)
dir=../temp
mkdir -p $dir
file=$dir/file.txt
phantomjs "task1.js" $url > $file
while IFS="" read -r line || [[ -n $line ]]; do
    dir=../build
    file=$dir/$line.html
    mkdir -p $(dirname $file)
    phantomjs "task2.js" $url $line > $file
done < $file
For some unknown reason task2 is being run only once, then the script stops.
If I remove the PhantomJS command, the while loop runs normally until all lines are read from the file.
Maybe someone knows why that is?
Cheers.
Your loop is reading contents from stdin. If any other program you run consumes stdin, the loop will terminate.
Either fix any program that may be consuming stdin to read from /dev/null, or use a different FD for the loop.
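As a quick, self-contained illustration of the effect (not from the original post; here cat stands in for phantomjs and silently drains the loop's stdin, so the body runs only once):
$ printf '%s\n' one two three | while read -r line; do echo "got: $line"; cat >/dev/null; done
got: one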
The first approach looks like this:
phantomjs "task2.js" "$url" "$line" >"$file" </dev/null
The second looks like this (note the 3< on establishing the redirection, and the <&3 to read from that file descriptor):
while IFS="" read -r line <&3 || [[ -n $line ]]; do
    dir=../build
    file=$dir/$line.html
    mkdir -p "$(dirname "$file")"
    phantomjs "task2.js" "$url" "$line" >"$file"
done 3<"$file"
By the way, consider taking the temporary file out of the loop altogether, by having the loop read directly from the first phantomjs command's output:
while IFS="" read -r line <&3 || [[ -n $line ]]; do
    dir=../build
    file=$dir/$line.html
    mkdir -p "$(dirname "$file")"
    phantomjs "task2.js" "$url" "$line" >"$file"
done 3< <(phantomjs "task1.js" "$url")