This question already has answers here:
While loop stops reading after the first line in Bash
(5 answers)
Closed 6 years ago.
I use this Bash code to upload files to a remote server; for normal files this works fine:
for i in `find devel/ -newer $UPLOAD_FILE`
do
echo "Upload:" $i
if [ -d $i ]
then
echo "Creating directory" $i
ssh $USER@$SERVER "cd ${REMOTE_PATH}; mkdir -p $i"
continue
fi
if scp -Cp $i $USER@$SERVER:$REMOTE_PATH/$i
then
echo "$i OK"
else
echo "$i NOK"
rm ${UPLOAD_FILE}_tmp
fi
done
The only problem is that for files with a space in the name, the for-loop fails, so I replaced the first line like this:
find devel/ -newer $UPLOAD_FILE | while read i
do
echo "Upload:" $i
if [ -d $i ]
then
echo "Creating directory" $i
ssh $USER@$SERVER "cd ${REMOTE_PATH}; mkdir -p $i"
continue
fi
if scp -Cp $i $USER@$SERVER:$REMOTE_PATH/$i
then
echo "$i OK"
else
echo "$i NOK"
rm ${UPLOAD_FILE}_tmp
fi
done
For some strange reason, the ssh command breaks out of the while loop: the first missing directory is created fine, but all subsequent missing files/directories are ignored.
I guess this has something to do with ssh writing something to stdout which confuses the "read" command. Commenting out the ssh-command makes the loop work as it should.
Does anybody know why this happens and how one can prevent ssh from breaking the while-loop?
The problem is that ssh reads from standard input, therefore it eats all your remaining lines. You can just connect its standard input to nowhere:
ssh $USER@$SERVER "cd ${REMOTE_PATH}; mkdir -p $i" < /dev/null
You can also use ssh -n instead of the redirection.
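The effect is easy to reproduce without a server. In this local sketch, plain `cat` stands in for ssh as a command that drains stdin (the `list.txt`, `broken`, and `fixed` names are just for illustration):

```shell
printf 'a\nb\nc\n' > list.txt

# Without the redirection, the stand-in swallows the remaining lines
# and the loop body runs only once:
broken=0
while read -r i; do
    cat >/dev/null            # like ssh: reads everything left on stdin
    broken=$((broken+1))
done < list.txt

# With stdin pointed at /dev/null (what ssh -n does), all lines survive:
fixed=0
while read -r i; do
    cat </dev/null >/dev/null
    fixed=$((fixed+1))
done < list.txt

echo "$broken $fixed"         # 1 3
```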
Another approach is to loop over a FD other than stdin:
while IFS= read -u 3 -r -d '' filename; do
if [[ -d $filename ]]; then
printf -v cmd_str 'cd %q; mkdir -p %q' "$REMOTE_PATH" "$filename"
ssh "$USER@$SERVER" "$cmd_str"
else
printf -v remote_path_str '%q@%q:%q/%q' "$USER" "$SERVER" "$REMOTE_PATH" "$filename"
scp -Cp "$filename" "$remote_path_str"
fi
done 3< <(find devel/ -newer "$UPLOAD_FILE" -print0)
The -u 3 and 3< operators are critical here, using FD 3 rather than the default FD 0 (stdin).
The approach given here -- using -print0, a cleared IFS value, and the like -- is also less buggy than the original code and the existing answer, which can't handle interesting filenames correctly. (Glenn Jackman's answer is close, but even that can't deal with filenames with newlines or filenames with trailing whitespace).
The use of printf %q is critical to generate commands which can't be used to attack the remote machine. Consider what would happen with a file named devel/$(rm -rf /)/hello with code which didn't have this paranoia.
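To see what `%q` buys you, here is a minimal round-trip with the hostile filename from above (the `hostile`/`quoted`/`roundtrip` names are just for illustration):

```shell
hostile='devel/$(rm -rf /)/hello'

# %q escapes every shell metacharacter, so the command substitution
# can no longer execute when the string is re-parsed by the remote shell:
printf -v quoted '%q' "$hostile"
echo "$quoted"

# Re-evaluating the quoted form reproduces the original string verbatim:
eval "roundtrip=$quoted"
[ "$roundtrip" = "$hostile" ] && echo "round-trip OK"
```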
Related
I was trying to break a loop I created with Ctrl+C while reading a file. Instead, it was just stopping the specific iteration and not the whole loop.
How can I stop it completely instead of having to press Ctrl+C for every iteration?
The whole script can be found here
An example file is like that:
echo -e "SRR7637893\nSRR7637894\nSRR7637895\nSRR7637896" > filenames.txt
The specific code chunk that probably causes the issue is the while loop here (`set -xv;` was added afterwards, as suggested by markp-fuso in the comments):
set -xv;
while read -r line; do
echo -e "Now downloading ${line}\n"
docker run --rm -v "$OUTPUT_DIR":/data -w /data inutano/sra-toolkit:v2.9.2 fasterq-dump "${line}" -t /data/shm -e $PROCESSORS
if [[ -s $OUTPUT_DIR/${line}.fastq ]]; then
echo "Using pigz on ${line}.fastq"
pigz --best "$OUTPUT_DIR/${line}"*.fastq
else
echo "$OUTPUT_DIR/${line}.fastq not found"
fi
done < "$INPUT_txt"; set +xv
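One common fix (a sketch, not taken from the original post): a process killed by Ctrl+C exits with status 130 (128 + SIGINT), so the loop can check for a signal exit and break. `run_task` below is a hypothetical stand-in for the docker command:

```shell
# run_task pretends the second download was interrupted with Ctrl+C
run_task() { [ "$1" = "SRR7637894" ] && return 130; return 0; }

printf 'SRR7637893\nSRR7637894\nSRR7637895\n' > filenames.txt

processed=0
while read -r line; do
    run_task "$line"
    status=$?
    if [ "$status" -ge 128 ]; then   # killed by a signal: abort the whole loop
        echo "Interrupted, stopping."
        break
    fi
    processed=$((processed+1))
done < filenames.txt
echo "$processed"                    # 1: only the first item completed
```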
Situation: we're feeding a list of filenames to sshpass, and it iterates correctly through a remote folder to check whether files with the given names actually exist, then builds an updated list containing only the files that do exist, which is reused later in the bash script.
Problem: the list sometimes comprises tens of thousands of files, which means tens of thousands of ssh logins; this harms performance and sometimes gets us blocked by our own security policies.
Intended solution: instead of starting the for-loop and calling sshpass on each iteration, turn it around and pass the loop to a single sshpass call.
I've got to pass the list to the sshpass instruction in the example test below:
#!/bin/bash
all_paths=(`/bin/cat /home/user/filenames_to_be_tested.list`)
existing_paths=()
sshpass -p PASSWORD ssh -n USER@HOST bash -c "'
for (( i=0; i<${#all_paths[@]}; i++ ))
do
echo ${all_paths[i]}
echo \"-->\"$i
if [[ -f ${all_paths[i]} ]]
then
echo ${all_paths[i]}
existing_paths=(${all_paths[i]})
fi
done
'"
printf '%s\n' "${existing_paths[@]}"
The issue here is that it appears to loop (you see a series of echoed lines), but in the end it is not actually iterating i: it always checks/prints the same line.
Can someone help spot the bug? Thanks!
The problem is that bash parses the string and substitutes the variables locally, before anything is sent to the server. If you want to stop bash from doing that, you must escape every variable that should be expanded on the server.
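The two expansion times can be demonstrated without a real server. `remote_cmd` below is a hypothetical stand-in in which a fresh `bash -c` plays the remote shell:

```shell
location=local
remote_cmd() { location=remote bash -c "$1"; }   # stand-in for: ssh host "..."

remote_cmd "echo $location"     # unescaped: expanded locally, prints "local"
remote_cmd "echo \$location"    # escaped: expanded by the "remote" shell, prints "remote"
```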
#! /bin/bash
all_paths=(rootfs.tar derp a)
read -sp "pass? " PASS
echo
sshpass -p "$PASS" ssh -n "$USER@$SERVER" "
files=(${all_paths[@]})
existing_paths=()
for ((i=0; i<\${#files[@]}; i++)); do
echo -n \"\${files[@]} --> \$i\"
if [[ -f \${files[\$i]} ]]; then
echo \${files[\$i]}
existing_paths+=(\${files[\$i]})
else
echo 'Not found'
fi
done
printf '%s\n' \"\${existing_paths[@]}\"
"
This becomes hard to read very quickly. However, there's an option I personally like to use: create functions locally and send them to the server to be executed there, which saves you from escaping a lot of stuff.
#! /bin/bash
all_paths=(rootfs.tar derp a)
function files_exist {
local files=("$@")
local found=()
for file in "${files[@]}"; do
echo -n "$file --> "
if [[ -f $file ]]; then
echo "exist"
found+=("$file")
else
echo "missing"
fi
done
printf '%s\n' "${found[@]}"
}
read -sp "pass? " PASS
echo
sshpass -p "$PASS" ssh -n "$USER@$SERVER" "
$(typeset -f files_exist)
files_exist ${all_paths[@]}
"
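`typeset -f` simply prints the function's source as text, so it can be spliced into the remote command string. A minimal local demonstration with a hypothetical `greet` function, where a fresh `bash -c` again plays the remote shell:

```shell
greet() { echo "hello $1"; }

serialized=$(typeset -f greet)   # the function definition, as a string
result=$(bash -c "$serialized; greet world")
echo "$result"                   # hello world
```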
I am writing a Bash file to execute two PhantomJS tasks.
I have two tasks written in external JS files: task1.js & task2.js.
Here's my Bash script so far:
#!/bin/bash
url=$1
cd $(cd $(dirname ${BASH_SOURCE}); pwd -P)
dir=../temp
mkdir -p $dir
file=$dir/file.txt
phantomjs "task1.js" "$url" > "$file"
while IFS="" read -r line || [[ -n $line ]]; do
dir=../build
file=$dir/$line.html
mkdir -p $(dirname $file)
phantomjs "task2.js" $url $line > $file
done < $file
For some unknown reason task2 is being run only once, then the script stops.
If I remove the PhantomJS command, the while loop runs normally until all lines are read from the file.
Maybe someone knows why that is?
Cheers.
Your loop is reading contents from stdin. If any other program you run consumes stdin, the loop will terminate.
Either fix any program that may be consuming stdin to read from /dev/null, or use a different FD for the loop.
The first approach looks like this:
phantomjs "task2.js" "$url" "$line" >"$file" </dev/null
The second looks like this (note the 3< on establishing the redirection, and the <&3 to read from that file descriptor):
while IFS="" read -r line <&3 || [[ -n $line ]]; do
dir=../build
file=$dir/$line.html
mkdir -p "$(dirname "$file")"
phantomjs "task2.js" "$url" "$line" >"$file"
done 3< "$file"
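The FD-3 variant can also be exercised locally. Below, `cat` is a hypothetical stand-in for phantomjs: it is free to drain stdin without touching the list the loop reads on FD 3 (the `lines.txt`/`junk.txt` names are just for illustration):

```shell
printf 'one\ntwo\nthree\n' > lines.txt
printf 'junk\n' > junk.txt

count=0
while IFS= read -r line <&3; do
    cat >/dev/null               # consumes all of junk.txt on the first pass
    count=$((count+1))
done 3< lines.txt < junk.txt
echo "$count"                    # 3: the list on FD 3 is untouched
```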
By the way, consider taking file out of the loop altogether, by having the loop read directly from the first phantomjs program's output:
while IFS="" read -r line <&3 || [[ -n $line ]]; do
dir=../build
file=$dir/$line.html
mkdir -p "$(dirname "$file")"
phantomjs "task2.js" "$url" "$line" >"$file"
done 3< <(phantomjs "task1.js" "$url")