diff on two programs with same input? - shell

#!bin/sh
i=0;
while read inputline
do
array[$i]=$inputline;
i=`expr $i + 1`
done
j=i;
./a.out > test/temp;
while [$i -gt 0]
do
echo ${array[$i]}
i=`expr $i - 1`;
done
./test > test/temp1;
while [$j -gt 0]
do
echo ${array[$j]}
j=`expr $j - 1`;
done
diff test/temp1 test/temp
What's wrong with the above code? Essentially, it's meant to take some input from stdin, feed the same input to two separate programs, capture each program's output in a file, and then diff the two files. Why doesn't it work?

I see a few things that could be problems.
First, usually the path to sh is /bin/sh. I would expect the shebang line to be something like this:
#!/bin/sh
This may not be causing an error, however, if you're calling sh on the command line.
Second, the output from your while loops needs to be redirected to your executables:
{
while [ $i -lt $lines ]
do
echo ${array[$i]}
i=`expr $i + 1`;
done
} | ./a.out > test/temp;
Note: I tested this on Mac OS X and Linux, and sh is aliased to bash on both operating systems. I'm not entirely sure that this construct works in plain old sh.
Third, the indexing is off in your while loops: it should go from 0 to $i - 1 (or from $i - 1 to 0). As written in your example, it goes from $i to 1.
Finally, "test" is used as both your executable name and the output directory name. Here's what I ended up with:
#!/bin/sh
lines=0;
while read inputline
do
array[$lines]=$inputline;
lines=`expr $lines + 1`
done
i=0
{
while [ $i -lt $lines ]
do
echo ${array[$i]}
i=`expr $i + 1`;
done
} | ./a.out > test/temp;
i=0
{
while [ $i -lt $lines ]
do
echo ${array[$i]}
i=`expr $i + 1`;
done
} | ./b.out > test/temp1;
diff test/temp1 test/temp
Another way to do what you want would be to store your test input in a file and just use piping to feed the input to the programs. For example, if your input is stored in input.txt then you can do this:
cat input.txt | ./a.out > test/temp
cat input.txt | ./b.out > test/temp1
diff test/temp test/temp1

Another approach is to capture stdin like this:
#!/bin/sh
input=$(cat -)
printf "%s" "$input" | ./a.out > test/temp
printf "%s" "$input" | ./test > test/temp1
diff test/temp test/temp1
or, using bash process substitution and here-strings:
#!/bin/bash
input=$(cat -)
diff <(./a.out <<< "$input") <(./test <<< "$input")

What's wrong?
The semi-colons are not necessary, though they do no harm.
The initial input loop looks OK.
The assignment j=i is quite different from j=$i.
You run the program ./a.out without supplying it any input.
You then have a loop that was meant to echo the input. It provides the input backwards compared with the way it was read.
You repeat the program execution of ./test without supplying any input, followed by a repeat loop that was meant to echo the input, but this one fails because of the misassignment.
You then run diff on the two outputs produced from uncertain inputs.
You do not clean up the temporary files.
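The j=i mistake listed above is worth seeing in isolation; this tiny sketch shows what each form actually assigns:

```shell
#!/bin/sh
i=5
j=i          # assigns the literal string "i", not the value of i
echo "$j"    # prints: i
j=$i         # assigns the value stored in i
echo "$j"    # prints: 5
```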
How to do it
This script is simple - except that it ensures that temporary files are cleaned up.
tmp=${TMPDIR:-/tmp}/tester.$$
trap "rm -f $tmp.?; exit 1" 0 1 2 3 13 15
cat - > $tmp.1
./a.out < $tmp.1 > $tmp.2
./test < $tmp.1 > $tmp.3
diff $tmp.2 $tmp.3
rm -f $tmp.?
trap 0
exit 0
The first step is to capture the input in a file $tmp.1. Then run the two test programs, capturing the output in files $tmp.2 and $tmp.3. Then take the difference of the two files.
The first trap line ensures that the temporary files are removed when the shell exits, or if it receives a signal from the set { HUP, INT, QUIT, PIPE, TERM }. The second trap line cancels the 'exit' trap, so that the script can exit successfully. (You can relay the exit status of diff to the calling program (shell) by capturing its exit status status=$? and then using exit $status.)
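Relaying the exit status as suggested in the parentheses could be sketched like this (run_diff is a made-up helper, purely for illustration):

```shell
#!/bin/sh
# Capture diff's exit status before cleanup commands overwrite $?.
run_diff() {
    printf 'a\n' > "$1"
    printf 'b\n' > "$2"
    diff "$1" "$2" > /dev/null
    status=$?          # save immediately; the rm below resets $?
    rm -f "$1" "$2"
    return $status     # relay the diff result to the caller
}
run_diff /tmp/cmp1.$$ /tmp/cmp2.$$
echo "diff exit status: $?"    # prints: diff exit status: 1
```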

If all you want to do is supply the same stdin to two programs you might like to use process substitution together with tee. Assuming you can cat your input from a file (or just using the tee part, if you want it interactive-ish) you could use something like this:
cat input | tee >(./p1 > p1.out) >(./p2 > p2.out) && diff p1.out p2.out

Related

Count the Words in text file without using the 'wc' command in unix shell scripting

Here I could not find the number of words in the text file. What changes would I need to make?
What is the use of tty in this program?
echo "Enter File name:"
read filename
terminal=`tty`
exec < $filename
num_line=0
num_words=0
while read line
do
num_lines=`expr $num_lines + 1`
num_words=`expr $num_words + 1`
done
There is a simple way using arrays to read the number of words in a file:
#!/bin/bash
[ -n "$1" ] || {
printf "error: insufficient input. Usage: %s filename\n" "${0##*/}"
exit 1
}
fn="$1"
[ -r "$fn" ] || {
printf "error: file not found: '%s'\n" "$fn"
exit 1
}
declare -i cnt=0
while read -r line || [ -n "$line" ]; do # read line from file
tmp=( $line ) # create tmp array of words
cnt=$((cnt + ${#tmp[@]})) # add no. of words to count
done <"$fn"
printf "\n %s words in %s\n\n" "$cnt" "$fn" # show results
exit 0
input:
$ cat dat/wordfile.txt
Here I could not find the number of words in the text file. What
would be the possible changes do I need to make? What is the use
of tty in this program?
output:
$ bash wcount.sh dat/wordfile.txt
33 words in dat/wordfile.txt
wc -w confirmation:
$ wc -w dat/wordfile.txt
33 dat/wordfile.txt
tty?
The assignment terminal=`tty` stores the name of the terminal device connected to the current shell in the terminal variable. (It is a way to determine which tty device you are connected to, e.g. /dev/pts/4.)
The tty command prints the name of the terminal connected to standard input. In the context of your program it does nothing significant; you might as well remove that line and run the script.
Regarding the word count, you would need to split each line on whitespace and count the resulting words. Currently the program just computes the number of lines $num_lines and uses the same calculation for $num_words.
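A minimal sketch of that fix, using set -- to split each line into positional parameters (count_file is a made-up name):

```shell
#!/bin/sh
# Split each line into positional parameters with set --,
# then add $# (the number of words on that line) to the total.
count_file() {
    num_lines=0 num_words=0
    while read line || [ -n "$line" ]; do
        num_lines=$((num_lines + 1))
        set -- $line                    # word-split on whitespace
        num_words=$((num_words + $#))   # $# = words on this line
    done < "$1"
    echo "$num_lines lines, $num_words words"
}

printf 'one two\nthree four five\n' > /tmp/demo.$$
count_file /tmp/demo.$$        # prints: 2 lines, 5 words
rm -f /tmp/demo.$$
```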

How to write a tail script without the tail command

How would you achieve this in bash. It's a question I got asked in an interview and I could think of answers in high level languages but not in shell.
As I understand it, the real implementation of tail seeks to the end of the file and then reads backwards.
The main idea is to keep a fixed-size buffer and to remember the last lines. Here's a quick way to do a tail using the shell:
#!/bin/bash
SIZE=5
idx=0
while read line
do
arr[$idx]=$line
idx=$(( ( idx + 1 ) % SIZE ))
done < text
for ((i=0; i<SIZE; i++))
do
echo ${arr[$idx]}
idx=$(( ( idx + 1 ) % SIZE ))
done
If all not-tail commands are allowed, why not be whimsical?
#!/bin/sh
[ -r "$1" ] && exec < "$1"
tac | head | tac
Use wc -l to count the number of lines in the file. Subtract the number of lines you want from this, and add 1, to get the starting line number. Then use this with sed or awk to start printing the file from that line number, e.g.
sed -n "$start,\$p"
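Put together, a sketch of that approach might look like this (tail_n is a hypothetical name):

```shell
#!/bin/sh
# Emulate tail -n via wc -l + sed, as described above.
tail_n() {    # usage: tail_n <count> <file>
    total=$(wc -l < "$2")
    start=$((total - $1 + 1))
    [ "$start" -lt 1 ] && start=1    # fewer lines than requested: print all
    sed -n "${start},\$p" "$2"
}

printf '1\n2\n3\n4\n5\n' > /tmp/demo.$$
tail_n 2 /tmp/demo.$$     # prints the last two lines: 4 and 5
rm -f /tmp/demo.$$
```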
There's this:
#!/bin/bash
readarray file
lines=$(( ${#file[@]} - 1 ))
for (( line=$(($lines-$1)), i=${1:-$lines}; (( line < $lines && i > 0 )); line++, i-- )); do
echo -ne "${file[$line]}"
done
Based on this answer: https://stackoverflow.com/a/8020488/851273
You pass in the number of lines from the end of the file that you want to see, then send the file via stdin. The script reads the entire file into an array and prints only the last N lines of it.
The only way I can think of in “pure” shell is to do a while read linewise on the whole file into an array variable with indexing modulo n, where n is the number of tail lines (default 10) — i.e. a circular buffer, then iterate over the circular buffer from where you left off when the while read ends. It's not efficient or elegant, in any sense, but it'll work and avoids reading the whole file into memory. For example:
#!/bin/bash
incmod() {
let i=$1+1
n=$2
if [ $i -ge $2 ]; then
echo 0
else
echo $i
fi
}
n=10
i=0
buffer=
while read line; do
buffer[$i]=$line
i=$(incmod $i $n)
done < $1
j=$i
echo ${buffer[$i]}
i=$(incmod $i $n)
while [ $i -ne $j ]; do
echo ${buffer[$i]}
i=$(incmod $i $n)
done
This script somehow imitates tail:
#!/bin/bash
shopt -s extglob
LENGTH=10
while [[ $# -gt 0 ]]; do
case "$1" in
--)
FILES+=("${@:2}")
break
;;
-+([0-9]))
LENGTH=${1#-}
;;
-n)
if [[ $2 != +([0-9]) ]]; then
echo "Invalid argument to '-n': $1"
exit 1
fi
LENGTH=$2
shift
;;
-*)
echo "Unknown option: $1"
exit 1
;;
*)
FILES+=("$1")
;;
esac
shift
done
PRINTHEADER=false
case "${#FILES[@]}" in
0)
FILES=("/dev/stdin")
;;
1)
;;
*)
PRINTHEADER=true
;;
esac
IFS=
for I in "${!FILES[@]}"; do
F=${FILES[I]}
if [[ $PRINTHEADER == true ]]; then
[[ I -gt 0 ]] && echo
echo "==> $F <=="
fi
if [[ LENGTH -gt 0 ]]; then
LINES=()
COUNT=0
while read -r LINE; do
LINES[COUNT++ % LENGTH]=$LINE
done < "$F"
for (( I = COUNT >= LENGTH ? LENGTH : COUNT; I; --I )); do
echo "${LINES[--COUNT % LENGTH]}"
done
fi
done
Example run:
> bash script.sh -n 12 <(yes | sed 20q) <(yes | sed 5q)
==> /dev/fd/63 <==
y
y
y
y
y
y
y
y
y
y
y
y
==> /dev/fd/62 <==
y
y
y
y
y
> bash script.sh -4 <(yes | sed 200q)
y
y
y
y
Here's the answer I would give if I were actually asked this question in an interview:
What environment is this where I have bash but not tail? Early boot scripts, maybe? Can we get busybox in there so we can use the full complement of shell utilities? Or maybe we should see if we can squeeze a stripped-down Perl interpreter in, even without most of the modules that would make life a whole lot easier. You know dash is much smaller than bash and perfectly good for scripting use, right? That might also help. If none of that is an option, we should check how much space a statically linked C mini-tail would need, I bet I can fit it in the same number of disk blocks as the shell script you want.
If that doesn't convince the interviewer that it's a silly question, then I go on to observe that I don't believe in using bash extensions, because the only good reason to write anything complicated in shell script nowadays is if total portability is an overriding concern. By avoiding anything that isn't portable even in one-offs, I don't develop bad habits, and I don't get tempted to do something in shell when it would be better done in a real programming language.
Now the thing is, in truly portable shell, arrays may not be available. (I don't actually know whether the POSIX shell spec has arrays, but there certainly are legacy-Unix shells that don't have them.) So, if you have to emulate tail using only shell builtins and it's got to work everywhere, this is the best you can do, and yes, it's hideous, because you're writing in the wrong language:
#! /bin/sh
a=""
b=""
c=""
d=""
e=""
f=""
while read x; do
a="$b"
b="$c"
c="$d"
d="$e"
e="$f"
f="$x"
done
printf '%s\n' "$a"
printf '%s\n' "$b"
printf '%s\n' "$c"
printf '%s\n' "$d"
printf '%s\n' "$e"
printf '%s\n' "$f"
Adjust the number of variables to match the number of lines you want to print.
The battle-scarred will note that printf is not 100% available either. Unfortunately, if all you have is echo, you are up a creek: some versions of echo cannot print the literal string "-n", and others cannot print the literal string "\n", and even figuring out which one you have is a bit of a pain, particularly as, if you don't have printf (which is in POSIX), you probably don't have user-defined functions either.
(N.B. The code in this answer, sans rationale, was originally posted by user 'Nirk' but then deleted under downvote pressure from people whom I shall charitably assume were not aware that some shells do not have arrays.)

Bash script to automatically test program output - C

I am very new to writing scripts and I am having trouble figuring out how to get started on a bash script that will automatically test the output of a program against expected output.
I want to write a bash script that will run a specified executable on a set of test inputs, say in1 in2 etc., against corresponding expected outputs, out1, out2, etc., and check that they match. The file to be tested reads its input from stdin and writes its output to stdout. So executing the test program on an input file will involve I/O redirection.
The script will be invoked with a single argument, which will be the name of the executable file to be tested.
I'm having trouble just getting going on this, so any help at all (links to any resources that further explain how I could do this) would be greatly appreciated. I've obviously tried searching myself but haven't been very successful in that.
Thanks!
If I get what you want; this might get you started:
A mix of bash + external tools like diff.
#!/bin/bash
# If number of arguments less then 1; print usage and exit
if [ $# -lt 1 ]; then
printf "Usage: %s <application>\n" "$0" >&2
exit 1
fi
bin="$1" # The application (from command arg)
diff="diff -iad" # Diff command, or what ever
# An array, do not have to declare it, but is supposedly faster
declare -a file_base=("file1" "file2" "file3")
# Loop the array
for file in "${file_base[@]}"; do
# Pad file_base with suffixes
file_in="$file.in" # The in file
file_out_val="$file.out" # The out file to check against
file_out_tst="$file.out.tst" # The outfile from test application
# Validate infile exists (do the same for out validate file)
if [ ! -f "$file_in" ]; then
printf "In file %s is missing\n" "$file_in"
continue;
fi
if [ ! -f "$file_out_val" ]; then
printf "Validation file %s is missing\n" "$file_out_val"
continue;
fi
printf "Testing against %s\n" "$file_in"
# Run application, redirect in file to app, and output to out file
"./$bin" < "$file_in" > "$file_out_tst"
# Execute diff
$diff "$file_out_tst" "$file_out_val"
# Check exit code from previous command (ie diff)
# We need to add this to a variable else we can't print it
# as it will be changed by the if [
# If not 0 then the files differ (at least with diff)
e_code=$?
if [ $e_code != 0 ]; then
printf "TEST FAIL : %d\n" "$e_code"
else
printf "TEST OK!\n"
fi
# Pause by prompt
read -p "Enter a to abort, anything else to continue: " input_data
# If input is "a" then abort
[ "$input_data" == "a" ] && break
done
# Clean exit with status 0
exit 0
Edit.
Added exit code check; and a short walk-through:
This will in short do:
Check if argument is given (bin/application)
Use an array of "base names", loop this and generate real filenames.
I.e.: Having array ("file1" "file2") you get
In file: file1.in
Out file to validate against: file1.out
Out file: file1.out.tst
In file: file2.in
...
Execute application and redirect in file to stdin for application by <, and redirect stdout from application to out file test by >.
Use a tool like i.e. diff to test if they are the same.
Check exit / return code from tool and print message (FAIL/OK)
Prompt for continuance.
Any and all of which off course can be modified, removed etc.
Some links:
TLDP; Advanced Bash-Scripting Guide (can be a bit more readable with this)
Arrays
File test operators
Loops and branches
Exit-status
...
bash-array-tutorial
TLDP; Bash-Beginners-Guide
Expect could be a perfect fit for this kind of problem:
Expect is a tool primarily for automating interactive applications
such as telnet, ftp, passwd, fsck, rlogin, tip, etc. Expect really
makes this stuff trivial. Expect is also useful for testing these same
applications.
First take a look at the Advanced Bash-Scripting Guide chapter on I/O redirection.
Then I have to ask: why use a bash script at all? Do it directly from your makefile.
For instance I have a generic makefile containing something like:
# type 'make test' to run a test.
# for example this runs your program with jackjill.txt as input
# and redirects the stdout to the file jackjill.out
test: $(program_NAME)
./$(program_NAME) < jackjill.txt > jackjill.out
diff -q jackjill.out jackjill.expected
You can add as many tests as you want like this. You just diff the output file each time against a file containing your expected output.
Of course this is only relevant if you're actually using a makefile for building your program. :-)
Functions. Herestrings. Redirection. Process substitution. diff -q. test.
Expected outputs are a second kind of input.
For example, if you want to test a square function, you would have input like (0, 1, 2, -1, -2) and expected output as (0, 1, 4, 1, 4).
Then you would compare every result of input to the expected output and report errors for example.
You could work with arrays:
in=(0 1 2 -1 -2)
out=(0 1 4 2 4)
for i in $(seq 0 $((${#in[@]}-1)))
do
(( ${in[i]} * ${in[i]} - ${out[i]} )) && echo -n bad" " || echo -n fine" "
echo $i ": " ${in[i]}"² ?= " ${out[i]}
done
fine 0 : 0² ?= 0
fine 1 : 1² ?= 1
fine 2 : 2² ?= 4
bad 3 : -1² ?= 2
fine 4 : -2² ?= 4
Of course you can read both arrays from a file.
The (( ... )) construct evaluates arithmetic expressions; the test builtin handles string and file tests. Try
help test
for an overview.
Reading strings wordwise from a file:
for n in $(< f1); do echo $n "-" ; done
Read into an array:
arr=($(< file1))
Read file linewise:
for i in $(seq 1 $(cat file1 | wc -l ))
do
line=$(sed -n ${i}p file1)
echo $line"#"
done
Testing against program output sounds like string comparison and capturing of program output n=$(cmd param1 param2):
asux:~/prompt > echo -e "foo\nbar\nbaz"
foo
bar
baz
asux:~/prompt > echo -e "foo\nbar\nbaz" > file
asux:~/prompt > for i in $(seq 1 3); do line=$(sed -n ${i}p file); test "$line" = "bar" && echo match || echo fail ; done
fail
match
fail
Also useful: regular expression matching on strings with =~ in [[ ... ]] brackets:
for i in $(seq 1 3)
do
line=$(sed -n ${i}p file)
echo -n $line
if [[ "$line" =~ ba. ]]; then
echo " "match
else echo " "fail
fi
done
foo fail
bar match
baz match

How to resize progress bar according to available space?

I am looking to get an effect where the length of my progress bar resizes accordingly to my PuTTY window. This effect is accomplished with wget's progress bar.
Here is my program I use in my bash scripts to create a progress bar:
_progress_bar
#!/bin/bash
maxwidth=50 # line length (in characters)
filled_char="#"
blank_char="."
current=0 max=0 i=0
current=${1:-0}
max=${2:-100}
if (( $current > $max ))
then
echo >&2 "current value must be smaller max. value"
exit 1
fi
percent=`awk 'BEGIN{printf("%5.2f", '$current' / '$max' * 100)}'`
chars=($current*$maxwidth)/$max
echo -ne " ["
while (( $i < $maxwidth ))
do
if (( $i <= $chars ));then
echo -ne $filled_char
else
echo -ne $blank_char
fi
i=($i+1)
done
echo -ne "] $percent%\r"
if (( $current == $max )); then
echo -ne "\r"
echo
fi
Here is an example of how I use it, this example finds all Tor Onion proxies Exit nodes and bans the IP under a custom chain:
#!/bin/bash
IPTABLES_TARGET="DROP"
IPTABLES_CHAINNAME="TOR"
WORKING_DIR="/tmp/"
# get IP address of eth0 network interface
IP_ADDRESS=$(ifconfig eth0 | awk '/inet addr/ {split ($2,A,":"); print A[2]}')
if ! iptables -L "$IPTABLES_CHAINNAME" -n >/dev/null 2>&1 ; then #If chain doesn't exist
iptables -N "$IPTABLES_CHAINNAME" >/dev/null 2>&1 #Create it
fi
cd $WORKING_DIR
wget -q -O - http://proxy.org/tor_blacklist.txt -U NoSuchBrowser/1.0 > temp_tor_list1
sed -i 's|RewriteCond %{REMOTE_ADDR} \^||g' temp_tor_list1
sed -i 's|\$.*$||g' temp_tor_list1
sed -i 's|\\||g' temp_tor_list1
sed -i 's|Rewrite.*$||g' temp_tor_list1
wget -q -O - "https://check.torproject.org/cgi-bin/TorBulkExitList.py?ip=$IP_ADDRESS&port=80" -U NoSuchBrowser/1.0 > temp_tor_list2
wget -q -O - "https://check.torproject.org/cgi-bin/TorBulkExitList.py?ip=$IP_ADDRESS&port=9998" -U NoSuchBrowser/1.0 >> temp_tor_list2
sed -i 's|^#.*$||g' temp_tor_list2
iptables -F "$IPTABLES_CHAINNAME"
CMD=$(cat temp_tor_list1 temp_tor_list2 | uniq | sort)
UBOUND=$(echo "$CMD" | grep -cve '^\s*$')
for IP in $CMD; do
let COUNT=COUNT+1
_progress_bar $COUNT $UBOUND
iptables -A "$IPTABLES_CHAINNAME" -s $IP -j $IPTABLES_TARGET
done
iptables -A "$IPTABLES_CHAINNAME" -j RETURN
rm temp_tor*
Edit:
I realized that first example people may not want to use so here is a more simple concept:
#!/bin/bash
for i in {1..100}; do
_progress_bar $i 100
done
I made a few changes to your script:
Converted it to a function. If you want to keep it in a separate file so it's available to multiple scripts, just source the file in each of your scripts. Doing this eliminates the overhead of repeatedly calling an external script.
Eliminated the while loop (which should have been a for ((i=0; $i < $maxwidth; i++)) loop anyway) for a drastic speed-up.
Changed your arithmetic expressions so they evaluate immediately instead of setting them to strings for later evaluation.
Removed dollar signs from variable names where they appear in arithmetic contexts.
Changed echo -en to printf.
Made a few other changes
Changed the AWK output so "100.00%" is decimal aligned with smaller values.
Changed the AWK command to use variable passing instead of "inside-out" quoting.
Here is the result:
_progress_bar () {
local maxwidth=50 # line length (in characters)
local filled_char="#"
local blank_char="."
local current=0 max=0 i=0
local complete remain
current=${1:-0}
max=${2:-100}
if (( current > max ))
then
echo >&2 "current value must be smaller than max. value"
return 1
fi
percent=$(awk -v "c=$current" -v "m=$max" 'BEGIN{printf("%6.2f", c / m * 100)}')
(( chars = current * maxwidth / max))
# sprintf n zeros into the var named as the arg to -v
printf -v complete '%0*.*d' '' "$chars" ''
printf -v remain '%0*.*d' '' "$((maxwidth - chars))" ''
# replace the zeros with the desired char
complete=${complete//0/"$filled_char"}
remain=${remain//0/"$blank_char"}
printf ' [%s%s] %s%%\r' "$complete" "$remain" "$percent"
}
What was the question? Oh, see BashFAQ/091. Use tput or bash -i and $COLUMNS. If you use bash -i, however, be aware that it will have the overhead of processing your startup files.
After some google searching I did find the following:
tput cols will return the amount of columns, much like Sdaz's suggested COLUMNS var.
Therefore I am going with:
maxwidth=$(tput cols) unless someone else has a more bulletproof way without requiring tput
Bash exports the LINES and COLUMNS envvars to the window rows and column counts, respectively. Furthermore, when you resize the putty window, via the SSH or telnet protocols, there is sufficient logic in the protocol to send a WINCH signal to the active shell, which then resets these values to the new window dimensions.
In your bash script, use the COLUMNS variable to set the current dimensions, and divide 100 / progbarlen (progbarlen based on a portion of the COLUMNS variable) to get how many percentage points make up one character, and advance them as your progress moves along. To handle the resizing dynamically, add a handler for SIGWINCH (via trap) and have it reread the COLUMNS envvar, and redraw the progress bar using the new dimensions.
(I haven't tested this in a shell script, and there may be some additional logic required, but this is how bash detects/handles resizing.)
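A minimal sketch of that trap, assuming tput is available and falling back to 80 columns when not attached to a terminal (get_width is a made-up helper):

```shell
#!/bin/bash
# Re-read the terminal width on SIGWINCH, as described above.
get_width() { tput cols 2>/dev/null || echo 80; }   # assumed fallback: 80
maxwidth=$(get_width)
trap 'maxwidth=$(get_width)' WINCH   # window resized: refresh, then redraw
echo "drawing progress bar at width $maxwidth"
```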

bash: append newline when redirecting file

Here is how I read a file row by row:
while read ROW
do
...
done < file
I don't use the other syntax
cat file | while read ROW
do
...
done
because the pipe creates a subshell and makes me lose the environment variables.
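The loss is easy to demonstrate:

```shell
#!/bin/bash
# The pipeline runs the while loop in a subshell, so the
# parent shell's count is unchanged when the loop finishes.
count=0
printf 'a\nb\nc\n' | while read ROW; do
    count=$((count + 1))     # updates only the subshell's copy
done
echo "count is $count"       # prints: count is 0
```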
The problem arises if the file doesn't end with a newline: last line is not read. It is easy to solve this in the latter syntax, by echoing just a newline:
(cat file; echo) | while read ROW
do
...
done
How do I do the same in the former syntax, without opening a subshell nor creating a temporary file (the list is quite big)?
A way that works in all shells is the following:
#!/bin/sh
willexit=0
while [ $willexit -eq 0 ] ; do
read ROW || willexit=1
...
done < file
A direct while read will exit as soon as read encounters the EOF, so the last line will not be processed. By checking the return value outside the while, we can process the last line. An additional test for the emptiness of $ROW should be added after the read though, since otherwise a file whose last line ends with a newline will generate a spurious execution with an empty line, so make it
#!/bin/sh
willexit=0
while [ $willexit -eq 0 ] ; do
read ROW || willexit=1
if [ -n "$ROW" ] ; then
...
fi
done < file
#!/bin/bash
while read ROW
do
...
done < <(cat file ; echo)
The POSIX way to do this is via a named pipe.
#!/bin/sh
[ -p mypipe ] || mkfifo mypipe
(cat infile; echo) > mypipe &
while read line; do
echo "-->$line<--"
CNT=$((CNT+1))
done < mypipe
rm mypipe
echo "CNT is '$CNT'"
Input
$ cat infile
1
2
3
4
5$
Output
$ (cat infile;echo) > mypipe & while read line; do echo "-->$line<--"; CNT=$((CNT+1)); done < mypipe; echo "CNT is '$CNT'"
[1] 22260
-->1<--
-->2<--
-->3<--
-->4<--
-->5<--
CNT is '5'
[1]+ Done ( cat infile; echo ) > mypipe
From an answer to a similar question:
while IFS= read -r LINE || [ -n "${LINE}" ]; do
...
done <file
The IFS= part prevents read from stripping leading and trailing whitespace (see this answer).
If you need to react differently depending on whether the file has a trailing newline or not (e.g., warn the user) you'll have to make some changes to the while condition.
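One way to do that, sketched with a made-up read_all helper, is to set a flag inside the read condition itself when the final line arrives without a newline:

```shell
#!/bin/sh
# Process every line, and note whether the trailing newline was missing.
read_all() {
    missing_nl=0 count=0
    while IFS= read -r LINE || { [ -n "$LINE" ] && missing_nl=1; }; do
        count=$((count + 1))
    done < "$1"
    echo "$count lines, missing_nl=$missing_nl"
}

printf 'a\nb' > /tmp/demo.$$       # no trailing newline
read_all /tmp/demo.$$              # prints: 2 lines, missing_nl=1
rm -f /tmp/demo.$$
```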
