While loop only operating on first line of file in bash [duplicate]

This question already has answers here:
Shell script while read loop executes only once
(6 answers)
While loop stops reading after the first line in Bash
(5 answers)
Bash script stops execution of ffmpeg in while loop - why?
(3 answers)
Closed 5 years ago.
I have a while loop that should iterate over a text file but stops at the first line and I can't figure out why. My code is below.
while read hadoop_accounts; do
  if ! grep "no lock no remove"; then
    echo "${hadoop_accounts%%:*}"
    echo "${hadoop_accounts%%:*}" >> missing_no_lock_no_remove.txt
  fi
done < hadoop_accounts.txt

When grep is run with no explicit redirection or file to read, it reads stdin. All of stdin. The same stdin your while read loop is reading from.
Thus, with all your stdin consumed by grep, there's nothing left for the next read command to consume.
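You can watch this happen directly; in this quick sketch, cat stands in for the bare grep (this is illustrative, not the OP's code):
$ printf '%s\n' one two three | while read -r line; do
>   echo "got: $line"
>   cat >/dev/null   # consumes the rest of stdin, just as the bare grep does
> done
got: one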
The easy approach (and much better for performance) is to do the substring check internal to the shell, and not bother starting up a new copy of grep per line processed at all:
while IFS= read -r hadoop_accounts; do
  if ! [[ $hadoop_accounts = *"no lock no remove"* ]]; then
    echo "${hadoop_accounts%%:*}"
    echo "${hadoop_accounts%%:*}" >&3
  fi
done < hadoop_accounts.txt 3>> missing_no_lock_no_remove.txt
Note also that we're only opening the output file once, not re-opening it every single time we want to write a single line.
If you really want to call grep over and over and over with only a single line of input each time, though, you can do that:
while IFS= read -r hadoop_accounts; do
  if ! grep -q "no lock no remove" <<<"$hadoop_accounts"; then
    echo "${hadoop_accounts%%:*}"
    echo "${hadoop_accounts%%:*}" >&3
  fi
done < hadoop_accounts.txt 3>> missing_no_lock_no_remove.txt
Or, even better than either of the above, you can just run grep a single time over the entire input file and read its output in the loop:
while IFS= read -r hadoop_accounts; do
  echo "${hadoop_accounts%%:*}"
  echo "${hadoop_accounts%%:*}" >&3
done < <(grep -v 'no lock no remove' <hadoop_accounts.txt) 3>>missing_no_lock_no_remove.txt

Related

bash loop iterates only once with remote tar [duplicate]

I have the following shell script. The purpose is to loop thru each line of the target file (whose path is the input parameter to the script) and do work against each line. Now, it seems to work only with the very first line in the target file, and it stops after that line gets processed. Is there anything wrong with my script?
#!/bin/bash
# SCRIPT: do.sh
# PURPOSE: loop thru the targets
FILENAME=$1
count=0
echo "proceed with $FILENAME"
while read LINE; do
  let count++
  echo "$count $LINE"
  sh ./do_work.sh $LINE
done < $FILENAME
echo "\ntotal $count targets"
In do_work.sh, I run a couple of ssh commands.
The problem is that do_work.sh runs ssh commands and by default ssh reads from stdin which is your input file. As a result, you only see the first line processed, because the command consumes the rest of the file and your while loop terminates.
This happens not just for ssh, but for any command that reads stdin, including mplayer, ffmpeg, HandBrakeCLI, httpie, brew install, and more.
To prevent this, pass the -n option to your ssh command to make it read from /dev/null instead of stdin. Other commands have similar flags, or you can universally use < /dev/null.
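For example, applied to the loop above (a sketch; user@host and the remote command are placeholders):
while read -r line; do
  # -n gives ssh /dev/null as its stdin, so it cannot swallow the loop's input
  ssh -n user@host "do_something_with '$line'"
done < "$FILENAME"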
A very simple and robust workaround is to change the file descriptor from which the read command receives input.
This is accomplished by two modifications: the -u argument to read, and the redirection operator used with $FILENAME.
In BASH, the default file descriptor values (i.e. values for -u in read) are:
0 = stdin
1 = stdout
2 = stderr
So just choose some other unused file descriptor, like 9 just for fun.
Thus, the following would be the workaround:
while read -u 9 LINE; do
  let count++
  echo "$count $LINE"
  sh ./do_work.sh $LINE
done 9< $FILENAME
Notice the two modifications:
read becomes read -u 9
< $FILENAME becomes 9< $FILENAME
As a best practice, I do this for all while loops I write in BASH.
If you have nested loops using read, use a different file descriptor for each one (9,8,7,...).
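A sketch of the nested case (the list file names are placeholders):
while read -u 9 outer; do
  while read -u 8 inner; do
    echo "$outer / $inner"
  done 8< inner_list.txt   # fd 8 feeds the inner loop
done 9< outer_list.txt     # fd 9 feeds the outer loop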
More generally, a workaround which isn't specific to ssh is to redirect standard input for any command which might otherwise consume the while loop's input.
while read -r line; do
  ((count++))
  echo "$count $line"
  sh ./do_work.sh "$line" </dev/null
done < "$filename"
The addition of </dev/null is the crucial point here, though the corrected quoting is also somewhat important for robustness; see also When to wrap quotes around a shell variable?. You will want to use read -r unless you specifically require the slightly odd legacy behavior you get for backslashes in the input without -r. Finally, avoid upper case for your private variables.
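A quick demo of the backslash behavior that -r disables:
$ read line <<<'one\ttwo'; echo "$line"
onettwo
$ read -r line <<<'one\ttwo'; echo "$line"
one\ttwo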
Another workaround of sorts which is somewhat specific to ssh is to make sure any ssh command has its standard input tied up, e.g. by changing
ssh otherhost some commands here
to instead read the commands from a here document, which conveniently (for this particular scenario) ties up the standard input of ssh for the commands:
ssh otherhost <<'____HERE'
some commands here
____HERE
Note that the ssh -n option prevents checking the exit status of ssh when using a HEREdoc while piping output to another program, so redirecting stdin from /dev/null is preferred in that case.
#!/bin/bash
while read -r ONELINE ; do
  # the <<EOF here document supplies ssh's stdin (overriding </dev/null),
  # so ssh can't consume the remaining lines of input_list.txt
  ssh ubuntu@host_xyz </dev/null <<EOF 2>&1 | filter_pgm
echo "Hi, $ONELINE. You come here often?"
process_response_pgm
EOF
  # capture the pipeline's status before the [ test overwrites PIPESTATUS
  ssh_status=${PIPESTATUS[0]}
  if [ "$ssh_status" -ne 0 ] ; then
    echo "aborting loop"
    exit "$ssh_status"
  fi
done < input_list.txt
This was happening to me because I had set -e, and a grep in the loop was finding no match (grep exits non-zero when it produces no output), which silently aborted the script.
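If you need set -e, guard the grep so a non-matching line doesn't kill the loop (a sketch; the pattern and file name are placeholders):
set -e
while IFS= read -r line; do
  # a failing grep would abort the script under set -e;
  # '|| true' neutralizes the non-zero exit status on a miss
  matches=$(grep -o 'some pattern' <<<"$line" || true)
  echo "line: $line, matches: $matches"
done < input.txt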

Script: echoing lines from one file to another doesn't print '\t'. Issue [duplicate]

This question already has answers here:
Preserving leading white space while reading>>writing a file line by line in bash
(5 answers)
Closed 6 years ago.
I need to create a file by modifying some lines of a source one.
I wrote a 'while read line; do' loop. Inside it, the lines I read and don't modify are written out with just:
echo -e "$line" >> "xxxx.c"
My issue is that some of those lines start with '\t', and it doesn't come through to the output file.
Example:
while read line; do
  if echo "$line" | grep -q 'timeval TIMEOUT = {25,0};'
  then
    echo "$line"
  fi
done
Any help? I've tried with the printf command also but without success.
In that case you could just remove the "-e" argument from the echo command.
From the echo man page:
-e enable interpretation of backslash escapes
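A sketch of the loop with the fix applied (the input file name is a placeholder):
while IFS= read -r line; do
  # IFS= preserves leading whitespace, -r leaves backslashes alone,
  # and plain echo (no -e) writes the line through unchanged
  echo "$line" >> "xxxx.c"
done < source.c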

while read line stops after first iteration [duplicate]

This question already has answers here:
While loop stops reading after the first line in Bash
(5 answers)
Closed 2 years ago.
I am trying to execute a simple script to capture details for multiple servers, using svmatch on server names read from a file.
#!/bin/sh
while read line; do
  svmatch $line
done < ~/svr_input
The svmatch command works with no problem when executed as a standalone command.
Redirect your inner command's stdin from /dev/null:
svmatch $line </dev/null
Otherwise, svmatch is able to consume stdin (which, of course, is the list of remaining lines).
The other approach is to use a file descriptor other than the default of stdin:
#!/bin/sh
while IFS= read -r line <&3; do
  svmatch "$line"
done 3<svr_input
...if using bash rather than /bin/sh, you have some other options as well; for instance, bash 4.1 or newer can allocate a free file descriptor, rather than requiring a specific FD number to be hardcoded:
#!/bin/bash
while IFS= read -r -u "$fd_num" line; do
  do-something-with "$line"
done {fd_num}<svr_input

Reading a file line by line using a for loop in a bash script [duplicate]

This question already has answers here:
Looping through the content of a file in Bash
(16 answers)
Closed 3 years ago.
Say, for example, I have a file called "tests"; it contains
a
b
c
d
I'm trying to read this file line by line and it should output
a
b
c
d
I create a bash script called "read" and try to read this file by using a for loop:
#!/bin/bash
for i in ${1}; do   # for the ith line of the first argument, do...
  echo $i           # prints the ith line
done
I execute it
./read tests
but it gives me
tests
Does anyone know what happened? Why does it print "tests" instead of the content of "tests"? Thanks in advance.
#!/bin/bash
while IFS= read -r line; do
  echo "$line"
done < "$1"
Unlike the other responses, this solution can handle files with special characters (like spaces or carriage returns) in the file name and in the lines themselves.
You need something like this instead:
#!/bin/bash
while read line || [[ $line ]]; do
  echo $line
done < ${1}
What you've written will, after expansion, become:
#!/bin/bash
for i in tests; do
  echo $i
done
If you still want a for loop, do something like:
#!/bin/bash
for i in $(cat ${1}); do
  echo $i
done
This works for me:
#!/bin/sh
for i in `cat $1`
do
  echo $i
done
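Note, though, that both cat-based loops iterate over whitespace-separated words rather than lines, so a line containing spaces is split up. A quick demo (assuming a two-line test file):
$ printf 'a b\nc\n' > tests
$ for i in $(cat tests); do echo "[$i]"; done
[a]
[b]
[c]
$ while IFS= read -r line; do echo "[$line]"; done < tests
[a b]
[c]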
