Bash while loop reading a file line by line exits on a certain line - bash

I'm writing a simple Bash script that simplifies calling HandBrakeCLI to render videos.
I also implemented a simple queue option: the queue file just stores the command line it has to call to start a render.
So I wrote a while loop to read one line at a time, eval $line, and repeat until the file ends.
if [[ ${QUEUE_MODE} = 'RUN' ]]; then
    QUEUE_LEN=`cat ${CONFIG_DIR}/queue | wc -l`
    QUEUE_POS='1'
    printf "Queue length:\t ${QUEUE_LEN}\n"
    while IFS= read line; do
        echo "--Running render ${QUEUE_POS} on ${QUEUE_LEN}..."
        echo "++" && echo "$line" && echo "++"
        eval "${line}"
        tail -n +2 "${CONFIG_DIR}/queue" > "${CONFIG_DIR}/queue.tmp" && mv "${CONFIG_DIR}/queue.tmp" "${CONFIG_DIR}/queue"
        echo "--Render ended"
        QUEUE_POS=`expr $QUEUE_POS + 1`
    done < "${CONFIG_DIR}/queue"
    exit 0
fi
The problem is that the loop works fine with any harmless entry in the queue (an empty line, echo "test"...), but as soon as a proper render is loaded, it is launched and finishes correctly, and then the loop also exits.
I am a newbie, so I tried some minor changes to see what effect I got, but nothing changed the result.
I commented out the command tail -n +2 "${CONFIG_DIR}/queue" > "${CONFIG_DIR}/queue.tmp" && mv "${CONFIG_DIR}/queue.tmp" "${CONFIG_DIR}/queue", added/removed IFS= in the while loop, and removed the -r from the read command.
Sorry if the question is trivial, but I'm clearly missing some major part of how this works, so I have no idea even how to search for the solution.
Here is a sample of a typical render command from the queue file:
HandBrakeCLI -i "/home/andrea/Videos/done/Rap dottor male e mini me.mp4" -o "/hdd/Render/Output/Rap dottor male e mini me.mkv" -e x265 -q 23 --encoder-preset faster --all-audio -E av_aac -6 dpl2 --all-subtitles -x pmode:pools='16' --verbose=0 2>/dev/null

HandBrakeCLI reads from standard input, which steals the rest of the queue file before read line can see it. My favorite solution to this is to pass the file over something other than standard input, like file descriptor #3:
...
while IFS= read line <&3; do      # The <&3 makes it read from FD #3
    ...
done 3< "${CONFIG_DIR}/queue"     # The 3< redirects the file into FD #3
Another way to avoid the problem is to redirect input to the HandBrakeCLI command:
...
eval "${line}" </dev/null
...
There's some more info about this in BashFAQ #89: I'm reading a file line by line and running ssh or ffmpeg, only the first line gets processed!
Also, I'm not sure I trust the way you're using tail to remove lines from the queue file as they're executed; I'm not sure it's really wrong, it just looks fragile to me. I'd also recommend using lower- or mixed-case variable names, since there are a bunch of all-caps names with special meanings, and re-using one of them by mistake can have weird consequences. Finally, I'd recommend running your script through shellcheck.net, as it'll make some other good recommendations.
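Pulling those suggestions together, here's a minimal, untested sketch of the queue loop with the FD #3 fix and lower-case names (queue_mode and config_dir stand in for your QUEUE_MODE and CONFIG_DIR):
if [[ ${queue_mode} = 'RUN' ]]; then
    queue_len=$(wc -l < "${config_dir}/queue")
    queue_pos=1
    printf 'Queue length:\t%s\n' "${queue_len}"
    while IFS= read -r line <&3; do              # read the queue over FD #3
        echo "--Running render ${queue_pos} of ${queue_len}..."
        eval "${line}" </dev/null                # belt and braces: the render can't read stdin either
        echo "--Render ended"
        queue_pos=$((queue_pos + 1))
    done 3< "${config_dir}/queue"
    exit 0
fi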
[BTW, this question is a duplicate of "Bash script do loop exiting early", but that doesn't have any upvoted or accepted answers.]

Related

while loop to ssh only take first element [duplicate]

I have the following shell script. The purpose is to loop through each line of the target file (whose path is the input parameter to the script) and do work against each line. Right now it seems to only work with the very first line of the target file and stops after that line is processed. Is there anything wrong with my script?
#!/bin/bash
# SCRIPT: do.sh
# PURPOSE: loop thru the targets
FILENAME=$1
count=0
echo "proceed with $FILENAME"
while read LINE; do
    let count++
    echo "$count $LINE"
    sh ./do_work.sh $LINE
done < $FILENAME
echo "\ntotal $count targets"
In do_work.sh, I run a couple of ssh commands.
The problem is that do_work.sh runs ssh commands, and by default ssh reads from stdin, which is your input file. As a result, you only see the first line processed, because the command consumes the rest of the file and your while loop terminates.
This happens not just for ssh, but for any command that reads stdin, including mplayer, ffmpeg, HandBrakeCLI, httpie, brew install, and more.
To prevent this, pass the -n option to your ssh command to make it read from /dev/null instead of stdin. Other commands have similar flags, or you can universally use < /dev/null.
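For instance (hosts.txt and the remote command here are hypothetical), the -n flag keeps ssh away from the loop's input:
# hosts.txt: a hypothetical file with one hostname per line
while read -r host; do
    ssh -n "$host" uptime    # -n: ssh takes its stdin from /dev/null instead of hosts.txt
done < hosts.txt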
A very simple and robust workaround is to change the file descriptor from which the read command receives input.
This is accomplished by two modifications: the -u argument to read, and a file-descriptor number in front of the < $FILENAME redirection.
In BASH, the default file descriptor values (i.e. values for -u in read) are:
0 = stdin
1 = stdout
2 = stderr
So just choose some other unused file descriptor, like 9 just for fun.
Thus, the following would be the workaround:
while read -u 9 LINE; do
    let count++
    echo "$count $LINE"
    sh ./do_work.sh $LINE
done 9< $FILENAME
Notice the two modifications:
read becomes read -u 9
< $FILENAME becomes 9< $FILENAME
As a best practice, I do this for all while loops I write in BASH.
If you have nested loops using read, use a different file descriptor for each one (9,8,7,...).
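For example (outer.txt and inner.txt are hypothetical), nested loops stay independent when each read owns its own descriptor:
while read -u 9 outer; do
    while read -u 8 inner; do
        echo "$outer / $inner"    # each read pulls only from its own FD
    done 8< inner.txt
done 9< outer.txt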
More generally, a workaround which isn't specific to ssh is to redirect standard input for any command which might otherwise consume the while loop's input.
while read -r line; do
    ((count++))
    echo "$count $line"
    sh ./do_work.sh "$line" </dev/null
done < "$filename"
The addition of </dev/null is the crucial point here, though the corrected quoting is also somewhat important for robustness; see also When to wrap quotes around a shell variable?. You will want to use read -r unless you specifically require the slightly odd legacy behavior you get for backslashes in the input without -r. Finally, avoid upper case for your private variables.
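A quick demonstration of what -r changes (the backslash-laden string is arbitrary):
printf '%s\n' 'C:\temp\new' | { read line;    echo "$line"; }    # prints C:tempnew -- read ate the backslashes
printf '%s\n' 'C:\temp\new' | { read -r line; echo "$line"; }    # prints C:\temp\new -- preserved verbatim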
Another workaround of sorts which is somewhat specific to ssh is to make sure any ssh command has its standard input tied up, e.g. by changing
ssh otherhost some commands here
to instead read the commands from a here document, which conveniently (for this particular scenario) ties up the standard input of ssh for the commands:
ssh otherhost <<'____HERE'
some commands here
____HERE
The ssh -n option prevents checking the exit status of ssh when using a heredoc while piping output to another program.
So the use of /dev/null as stdin is preferred:
#!/bin/bash
while read ONELINE ; do
    ssh ubuntu@host_xyz </dev/null <<EOF 2>&1 | filter_pgm
echo "Hi, $ONELINE. You come here often?"
process_response_pgm
EOF
    if [ ${PIPESTATUS[0]} -ne 0 ] ; then
        echo "aborting loop"
        exit ${PIPESTATUS[0]}
    fi
done < input_list.txt
This was happening to me because I had set -e, and a grep inside the loop was returning no output (which gives a non-zero exit code).
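If that's your variant of the problem, one common fix is to neutralize the expected failure inside the loop; here's a sketch with a hypothetical pattern and input file:
set -e
while read -r line; do
    # Under set -e, grep's exit status of 1 (no match) would abort the whole script;
    # '|| true' converts it into a success.
    matches=$(echo "$line" | grep 'pattern' || true)
    echo "$line -> $matches"
done < input.txt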

Need help in bash script for ffmpeg encoding [duplicate]


shell while loop is breaking abnormally [duplicate]


Monitor Change in File That's Not Being Appended To

I would like to write a shell script that monitors the changes of a file. That is, another program I've written writes either a 1 or 0 to a file depending on its state, and I would like to create a script that runs indefinitely and monitors the state of this file. So far, I've found a close solution online using tail -f. However, this command expects the file to be continually appended to; when I run the following piece of code, I get tail: test.log: file truncated. Also, when I test this program by running echo 1 > test.log and echo 0 > test.log back and forth in another terminal, it seems to periodically miss a change in the file completely. This is probably related to tail expecting to follow a file as it's appended to, rather than having a single character changed (and thus thinking the file has been truncated, I suppose).
Here's the code I've tried:
#!/bin/sh
# Monitor changes in file
tail -fn0 test.log | \
while read line; do
    if [ $line = 1 ]; then
        echo "TRUE!!!"
    elif [ $line = 0 ]; then
        echo "FALSE!!!"
    fi
done
The solution is probably incredibly easy, but I just can't manage to find it.
If you want to capture the state of the file in regular intervals, you could do something like this:
INTERVAL=2
while sleep "$INTERVAL"; do
    val="$(cat "test.log")"
    case "$val" in
        ...
    esac
done
Alternately, if you only want to act on the contents of the file whenever the file changes, you need to work with the "modification time". For example,
mtime () {
    ls "$1" -l --time-style=+%s | cut -d' ' -f6
}

FILE="test.log"
LAST_TIME=$(mtime "$FILE")
touch "$FILE"    # Force first update
while sleep 2; do
    if [[ $(mtime "$FILE") -gt $LAST_TIME ]]; then
        LAST_TIME=$(mtime "$FILE")
        val="$(cat "$FILE")"
        ...
    fi
done
If 2 seconds is too big a delay for your purposes, use a smaller number. Alternately, use true instead of sleep, for virtually zero delay.
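For example, the polling loop might become (a sketch reusing the question's test.log):
val_prev=""
while sleep 0.2; do    # fractional seconds need GNU/BSD sleep; 'while true; do' gives near-zero delay at the cost of busy-looping
    val="$(cat "test.log")"
    [ "$val" != "$val_prev" ] && echo "state changed to: $val"
    val_prev="$val"
done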

While loop stops reading after the first line in Bash

