I'm trying to sum the results of multiple commands that run asynchronously; so far I have:
#!/usr/bin/env bash
sum=0
for i in `seq 1 10`;
do
sum+=$(calculationCommand) &
done
wait
echo $sum
But it outputs 0 every time. Can someone help me find the mistake and correct it? Thanks!
Here's ShellCheck:
Line 6:
sum+=$(calculationCommand) &
^-- SC2030: Modification of sum is local (to subshell caused by backgrounding &).
Line 10:
echo $sum
^-- SC2031: sum was modified in a subshell. That change might be lost.
You cannot update variables from other processes: each command backgrounded with & runs in its own subshell, so its changes to sum are lost. Instead, write the results to files, wait for the commands to complete, and then read the data back from those files.
Here's an example:
#!/bin/bash
calculationCommand() {
sleep 5
echo 2
}
for i in {1..10}
do
calculationCommand > tmp.$i &
done
wait
sum=0
for number in $(cat tmp.{1..10})
do
(( sum += number ))
done
echo "$sum"
Alternatives include using a fifo instead of 10 files.
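For completeness, here's a hedged sketch of the FIFO alternative. The calculationCommand function is a placeholder (assumed to print one integer per run), and opening the FIFO read/write on fd 3 is a Linux-friendly trick that avoids the usual open/early-EOF races:

```shell
#!/usr/bin/env bash
calculationCommand() { echo 2; }   # placeholder for the real command

fifo=$(mktemp -u) && mkfifo "$fifo"
exec 3<>"$fifo"   # read/write open: doesn't block, readers never see early EOF
rm "$fifo"        # the open descriptor keeps the pipe alive; the name isn't needed

for i in {1..10}; do
  calculationCommand >&3 &
done
wait              # all ten results are now buffered in the pipe

sum=0
for i in {1..10}; do   # read exactly as many results as jobs were started
  read -r number <&3
  (( sum += number ))
done
exec 3>&-
echo "$sum"
```

With the placeholder command this prints 20; reading a fixed count of lines sidesteps the question of when the pipe would report EOF.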
This question already has answers here:
In Bash, how to find the lowest-numbered unused file descriptor?
(6 answers)
Closed 1 year ago.
How can I figure out if a file descriptor is currently in use in Bash? For example, if I have a script that reads, writes, and closes fd 3, e.g.
exec 3< <(some command here)
...
cat <&3
exec 3>&-
what's the best way to ensure I'm not interfering with some other purpose for the descriptor that may have been set before my script runs? Do I need to put my whole script in a subshell?
If you do not care whether the file descriptor is above 9, you may ask the shell itself to provide one. The fd is then guaranteed by the shell itself to be free.
This feature, {varname}-style automatic file descriptor allocation, has been available since bash 4.1 (2009-12-31):
$ exec {var}>hellofile
$ echo "$var"
15
$ echo "this is a test" >&${var}
$ cat hellofile
this is a test
$ exec {var}>&- # to close the fd.
In fact, on Linux, you can see the open fds with:
$ ls /proc/$$/fd
0 1 2 255
In pure bash, you can use the following method to see if a given file descriptor (3 in this case) is available:
rco="$(true 2>/dev/null >&3; echo $?)"
rci="$(true 2>/dev/null <&3; echo $?)"
if [[ "${rco}${rci}" = "11" ]] ; then
echo "Cannot read or write fd 3, hence okay to use"
fi
This basically works by testing whether you can read or write to the given file handle. Assuming you can do neither, it's probably okay to use.
In terms of finding the first free descriptor, you can use something like:
exec 3>/dev/null # Testing, comment out to make
exec 4</dev/null # descriptor available.
found=none
for fd in {0..200}; do
rco="$(true 2>/dev/null >&${fd}; echo $?)"
rci="$(true 2>/dev/null <&${fd}; echo $?)"
[[ "${rco}${rci}" = "11" ]] && found=${fd} && break
done
echo "First free is ${found}"
Running that script gives 5 as the first free descriptor but you can play around with the exec lines to see how making an earlier one available will allow the code snippet to find it.
As pointed out in the comments, systems that provide procfs (the /proc file system) have another way in which they can detect free descriptors. The /proc/PID/fd directory will contain an entry for each open file descriptor as follows:
pax> ls -1 /proc/$$/fd
0
1
2
255
So you could use a script similar to the one above to find a free entry in there:
exec 3>/dev/null # Testing, comment out to make
exec 4</dev/null # descriptor available.
found=none
for fd in {0..200} ; do
[[ ! -e /proc/$$/fd/${fd} ]] && found=${fd} && break
done
echo "First free is ${found}"
Just keep in mind that not all systems providing bash necessarily have procfs (the BSDs and Cygwin being examples). It should be fine for Linux, if that's the OS you're targeting.
Of course, you do still have the option of wrapping your entire shell script as something like:
(
# Your current script goes here
)
In that case, the file handles will be preserved outside those parentheses and you can manipulate them within as you see fit.
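A small illustration of that isolation (the temp file is just for demonstration): fd 3 opened inside the parentheses does not affect fd 3 outside them:

```shell
tmp=$(mktemp)
(
  exec 3>"$tmp"    # fd 3 is ours to use freely inside the subshell
  echo inside >&3
  exec 3>&-
)
# Back outside, fd 3 still means whatever it meant before the subshell ran.
cat "$tmp"
```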
The other answer that uses pre-bash-4.1 syntax does a lot of unnecessary subshell spawning and redundant checks. It also has an arbitrary cut-off for the maximum FD number.
The following code should do the trick with no subshell spawning (other than for a ulimit call if we want to get a decent upper limit on FD numbers).
fd=2 max=$(ulimit -n) &&
while ((++fd < max)); do
! <&$fd && break
done 2>/dev/null &&
echo $fd
Basically we just iterate over possible FDs until we reach one we can't dupe.
In order to avoid the Bad file descriptor error message from the last loop iteration, we redirect stderr for the whole while loop.
For those who prefer one-liners and don't have bash 4.1+ available:
{ seq 0 255; ls -1 /proc/$$/fd; } | sort -n | uniq -u | head -1
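To see why this works, simulate the fd listing with a known set of open descriptors (0, 1, 2 and 255 here): uniq -u drops every number that appears twice, i.e. every open fd, leaving only the unused ones:

```shell
# Simulated /proc/$$/fd listing: descriptors 0, 1, 2 and 255 are "open".
first_free=$({ seq 0 255; printf '%s\n' 0 1 2 255; } | sort -n | uniq -u | head -1)
echo "$first_free"   # 3: the lowest number not in the simulated open-fd list
```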
I decided to summarize the brilliant answer given by @paxdiablo in a single shell function, with two auxiliary ones:
fd_used_sym() {
[ -e "/proc/$$/fd/$1" ]
}
fd_used_rw() {
: 2>/dev/null >&$1 || : 2>/dev/null <&$1
}
fd_free() {
local fd_check
if [ -e "/proc/$$/fd" ]
then
fd_check=fd_used_sym
else
fd_check=fd_used_rw
fi
for n in {0..255}
do
eval $fd_check $n || {
echo "$n"
return
}
done
}
It can be simplified somewhat -- the auxiliary functions can be inlined without losing the main functionality:
fd_free() {
local fd_check
if [ -e "/proc/$$/fd" ]
then
fd_check='[ -e "/proc/$$/fd/$n" ]'
else
fd_check=': 2>/dev/null >&$n || : 2>/dev/null <&$n'
fi
for n in {0..255}
do
eval $fd_check || {
echo "$n"
return
}
done
}
Both functions check file descriptor availability and output the number of the first free file descriptor found. The benefits:
both checking methods are implemented (via /proc/$$/fd/X and via R/W probes on a particular FD)
only builtins are used
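A hypothetical usage sketch (the R/W-probe variant is repeated in condensed form so the snippet runs standalone); since pre-4.1 bash cannot expand a variable on the left-hand side of a redirection, eval performs the exec:

```shell
fd_free() {   # condensed copy of the function above (R/W-probe check only)
  local n
  for n in {0..255}; do
    { : 2>/dev/null >&$n || : 2>/dev/null <&$n; } || { echo "$n"; return; }
  done
}

demo_file=$(mktemp)
fd=$(fd_free)                  # e.g. 3 in a plain shell
eval "exec $fd>'$demo_file'"   # open the file on the discovered descriptor
echo "hello" >&"$fd"
eval "exec $fd>&-"             # close it again
cat "$demo_file"
```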
When opening file descriptors in a bash script and reading files line by line, the script terminates with a memory allocation error after processing 70K lines:
xmalloc: cannot allocate 11541 bytes (0 bytes allocated)
Environment:
MINGW32
Bash: 3.1.20(4)-release (i686-pc-msys)
OS: Windows 7
The size of input files: 1.2 GB each
The script follows:
#!/bin/bash
echo Left: $1
echo Right: $2
echo >"$1.diff"
echo >"$2.diff"
exec 4<"$1"
exec 5<"$2"
LINECOUNT=0
while [ $? == 0 ]
do
exec 0<&4
read LEFTLINE
exec 0<&5
read RIGHTLINE
if [ $? != 0 ]
then
exit -1
fi
LINECOUNT=$(($LINECOUNT + 1))
LINEMOD=$(($LINECOUNT % 1000))
if [[ $LINEMOD == 0 ]]
then
echo Line: $LINECOUNT
fi
if [ $LEFTLINE != $RIGHTLINE ]
then
echo $LEFTLINE >> "$1.diff"
echo $RIGHTLINE >> "$2.diff"
echo Mismatch found
fi
done
As I said above the script works for a long time, processes about 70K lines and then terminates. I assume it terminates because it uses up all the memory that a 32 bit process can take.
The purpose of the script is to open two files of the same format and length and compare them line by line. It creates two output files into which it writes the mismatching lines. I had to write the script because all the comparison tools at my disposal either crashed with "out of memory" errors or hung. I was surprised when my own script also crashed, and I had to rewrite it in C++ to make it work. Now I am trying to understand why the bash script failed. In theory it should not accumulate the file contents in memory; it should just read one line at a time and advance the file pointer. Maybe there is another approach to my problem that you can recommend, one I could have implemented as a bash script.
Update: Tested the following modification. It also crashed.
while IFS= read -u4 -r LEFTLINE && IFS= read -u5 -r RIGHTLINE
do
LINECOUNT=$(($LINECOUNT + 1))
LINEMOD=$(($LINECOUNT % 1000))
With the valuable input from the people in the comments to the question, the solution was found. Petesh commented correctly that there was a bug (or several bugs) in previous versions of bash that caused memory leaks; he also provided the link to the ticket. Fortunately, the leak was fixed in more recent versions of bash, so the solution is to update bash. I installed Cygwin with bash version 4.1.17(9)-release (i686-pc-cygwin) and my script completed successfully, consuming only 1.5 MB of memory with no growth over time. John Zwinck also tested bash 4.1.5, x86_64, and confirmed that the bug was fixed in that version too.
While resolving the issue a few improvements to the script were suggested by Mark Setchell and John Zwinck. The modifications didn't fix the problem but made the script simpler and more reliable with different file formats. The final version of the script follows:
#!/bin/bash
echo Left: $1
echo Right: $2
>"$1.diff"
>"$2.diff"
LINECOUNT=0
while IFS= read -u4 -r LEFTLINE && IFS= read -u5 -r RIGHTLINE
do
LINECOUNT=$(($LINECOUNT + 1))
LINEMOD=$(($LINECOUNT % 1000))
if [[ $LINEMOD == 0 ]]
then
echo Line: $LINECOUNT
fi
if [ "$LEFTLINE" != "$RIGHTLINE" ]
then
echo "$LEFTLINE" >> "$1.diff"
echo "$RIGHTLINE" >> "$2.diff"
echo Mismatch found
fi
done 4<"$1" 5<"$2"
I want to execute a script and have it run a command every x minutes.
Also any general advice on any resources for learning bash scripting could be really cool. I use Linux for my personal development work, so bash scripts are not totally foreign to me, I just haven't written any of my own from scratch.
If you want to run a command periodically, there are three ways:
using the crontab command, e.g. * * * * * command (runs every minute)
using a loop, like: while true; do ./my_script.sh; sleep 60; done (not precise)
using a systemd timer
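For the systemd route, a minimal sketch (unit and file names here are illustrative, not from the original answer) is a oneshot service plus a timer that fires every 5 minutes:

```ini
# /etc/systemd/system/my_script.service
[Unit]
Description=Run my_script periodically

[Service]
Type=oneshot
ExecStart=/usr/local/bin/my_script.sh

# /etc/systemd/system/my_script.timer
[Timer]
OnCalendar=*:0/5
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with systemctl enable --now my_script.timer.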
See cron
Some pointers for best bash scripting practices :
http://mywiki.wooledge.org/BashFAQ
Guide: http://mywiki.wooledge.org/BashGuide
ref: http://www.gnu.org/software/bash/manual/bash.html
http://wiki.bash-hackers.org/
USE MORE QUOTES!: http://www.grymoire.com/Unix/Quote.html
Scripts and more: http://www.shelldorado.com/
In addition to @sputnick's answer, there is also watch. From the man page:
Execute a program periodically, showing output full screen
By default this is every 2 seconds. watch is useful for tailing logs, for example.
macOS users: here's a partial implementation of the GNU watch command (as of version 0.3.0) for interactive, periodic invocations aimed primarily at visual inspection:
It is syntax-compatible with the GNU version and fails with a specific error message if an unimplemented feature is used.
Notable limitations:
The output is not limited to one screenful.
Displaying output differences is not supported.
Using precise timing is not supported.
Colored output is always passed through (--color is implied).
Also implements a few non-standard features, such as waiting for success (-E) to complement waiting for error (-e) and showing the time of day of the last invocation as well as the total time elapsed so far.
Run watch -h for details.
Examples:
watch -n 1 ls # list current dir every second
watch -e 'ls *.lockfile' # list lock files and exit once none exist anymore.
Source code (paste into a script file named watch, make it executable, and place it in a directory in your $PATH):
#!/usr/bin/env bash
THIS_NAME=$(basename "$BASH_SOURCE")
VERSION='0.1'
# Helper function for exiting with error message due to runtime error.
# die [errMsg [exitCode]]
# Default error message states context and indicates that execution is aborted. Default exit code is 1.
# Prefix for context is always prepended.
# Note: An error message is *always* printed; if you just want to exit with a specific code silently, use `exit n` directly.
die() {
echo "$THIS_NAME: ERROR: ${1:-"ABORTING due to unexpected error."}" 1>&2
exit ${2:-1} # Note: If the argument is non-numeric, the shell prints a warning and uses exit code 255.
}
# Helper function for exiting with error message due to invalid parameters.
# dieSyntax [errMsg]
# Default error message is provided, as is prefix and suffix; exit code is always 2.
dieSyntax() {
echo "$THIS_NAME: PARAMETER ERROR: ${1:-"Invalid parameter(s) specified."} Use -h for help." 1>&2
exit 2
}
# Get the elapsed time since the specified epoch time in format HH:MM:SS.
# Granularity: whole seconds.
# Example:
# tsStart=$(date +'%s')
# ...
# getElapsedTime $tsStart
getElapsedTime() {
date -j -u -f '%s' $(( $(date +'%s') - $1 )) +'%H:%M:%S'
}
# Command-line help.
if [[ "$1" == '--help' || "$1" == '-h' ]]; then
cat <<EOF
SYNOPSIS
$THIS_NAME [-n seconds] [opts] cmd [arg ...]
DESCRIPTION
Executes a command periodically and displays its output for visual inspection.
NOTE: This is a PARTIAL implementation of the GNU \`watch\` command, for OS X.
Notably, the output is not limited to one screenful, and displaying
output differences and using precise timing are not supported.
Also, colored output is always passed through (--color is implied).
Unimplemented features are marked as [NOT IMPLEMENTED] below.
Conversely, features specific to this implementation are marked as [NONSTD].
Reference version is GNU watch 0.3.0.
CMD may be a simple command with separately specified
arguments, if any, or a single string containing one or more
;-separated commands (including arguments) - in the former case the command
is directly executed by bash, in the latter the string is passed to \`bash -c\`.
Note that GNU watch uses sh, not bash.
To use \`exec\` instead, specify -x (see below).
By default, CMD is re-invoked indefinitely; terminate with ^-C or
exit based on conditions:
-e, --errexit
exits once CMD indicates an error, i.e., returns a non-zero exit code.
-E, --okexit [NONSTD]
is the inverse of -e: runs until CMD returns exit code 0.
By default, all output is passed through; the following options modify this
behavior; note that suppressing output only relates to CMD's output, not the
messages output by this utility itself:
-q, --quiet [NONSTD]
suppresses stdout output from the command invoked;
-Q, --quiet-both [NONSTD]
suppresses both stdout and stderr output.
-l, --list [NONSTD]
list-style display; i.e., suppresses clearing of the screen
before every invocation of CMD.
-n secs, --interval secs
interval in seconds between the end of the previous invocation of CMD
and the next invocation - 2 seconds by default, fractional values permitted;
thus, the interval between successive invocations is the specified interval
*plus* the last CMD's invocation's execution duration.
-x, --exec
uses \`exec\` rather than bash to execute CMD; this requires
arguments to be passed to CMD to be specified as separate arguments
to this utility and prevents any shell expansions of these arguments
at invocation time.
-t, --no-title
suppresses the default title (header) that displays the interval,
and (NONSTD) a time stamp, the time elapsed so far, and the command executed.
-b, --beep
beeps on error (bell signal), i.e., when CMD reports a non-zero exit code.
-c, --color
IMPLIED AND ALWAYS ON: colored command output is invariably passed through.
-p, --precise [NOT IMPLEMENTED]
-d, --difference [NOT IMPLEMENTED]
EXAMPLES
# List files in home folder every second.
$THIS_NAME -n 1 ls ~
# Wait until all *.lockfile files disappear from the current dir, checking every 2 secs.
$THIS_NAME -e 'ls *.lockfile'
EOF
exit 0
fi
# Make sure that we're running on OSX.
[[ $(uname) == 'Darwin' ]] || die "This script is designed to run on OS X only."
# Preprocess parameters: expand compressed options to individual options; e.g., '-ab' to '-a -b'
params=() decompressed=0 argsReached=0
for p in "$@"; do
if [[ $argsReached -eq 0 && $p =~ ^-[a-zA-Z0-9]+$ ]]; then # compressed options?
decompressed=1
params+=(${p:0:2})
for (( i = 2; i < ${#p}; i++ )); do
params+=("-${p:$i:1}")
done
else
(( argsReached && ! decompressed )) && break
[[ $p == '--' || ${p:0:1} != '-' ]] && argsReached=1
params+=("$p")
fi
done
(( decompressed )) && set -- "${params[@]}"; unset params decompressed argsReached p # Replace "$@" with the expanded parameter set.
# Option-parameters loop.
interval=2 # default interval
runUntilFailure=0
runUntilSuccess=0
quietStdOut=0
quietStdOutAndStdErr=0
dontClear=0
noHeader=0
beepOnErr=0
useExec=0
while (( $# )); do
case "$1" in
--) # Explicit end-of-options marker.
shift # Move to next param and proceed with data-parameter analysis below.
break
;;
-p|--precise|-d|--differences|--differences=*)
dieSyntax "Sadly, option $1 is NOT IMPLEMENTED."
;;
-v|--version)
echo "$VERSION"; exit 0
;;
-x|--exec)
useExec=1
;;
-c|--color)
# a no-op: unlike the GNU version, we always - and invariably - pass color codes through.
;;
-b|--beep)
beepOnErr=1
;;
-l|--list)
dontClear=1
;;
-e|--errexit)
runUntilFailure=1
;;
-E|--okexit)
runUntilSuccess=1
;;
-n|--interval)
shift; interval=$1;
errMsg="Please specify a positive number of seconds as the interval."
interval=$(bc <<<"$1") || dieSyntax "$errMsg"
(( 1 == $(bc <<<"$interval > 0") )) || dieSyntax "$errMsg"
[[ $interval == *.* ]] || interval+='.0'
;;
-t|--no-title)
noHeader=1
;;
-q|--quiet)
quietStdOut=1
;;
-Q|--quiet-both)
quietStdOutAndStdErr=1
;;
-?|--?*) # An unrecognized switch.
dieSyntax "Unrecognized option: '$1'. To force interpretation as non-option, precede with '--'."
;;
*) # 1st data parameter reached; proceed with *argument* analysis below.
break
;;
esac
shift
done
# Make sure we have at least a command name
[[ -n "$1" ]] || dieSyntax "Too few parameters specified."
# Suppress output streams, if requested.
# Duplicate stdout and stderr first.
# This allows us to produce output to stdout (>&3) and stderr (>&4) even when suppressed.
exec 3<&1 4<&2
if (( quietStdOutAndStdErr )); then
exec &> /dev/null
elif (( quietStdOut )); then
exec 1> /dev/null
fi
# Set an exit trap to ensure that the duplicated file descriptors are closed.
trap 'exec 3>&- 4>&-' EXIT
# Start loop with periodic invocation.
# Note: We use `eval` so that compound commands - e.g. 'ls; bash --version' - can be passed.
tsStart=$(date +'%s')
while :; do
(( dontClear )) || clear
(( noHeader )) || echo "Every ${interval}s. [$(date +'%H:%M:%S') - elapsed: $(getElapsedTime $tsStart)]: $*"$'\n' >&3
if (( useExec )); then
(exec "$@") # run in *subshell*, otherwise *this* script will be replaced by the process invoked
else
if [[ $* == *' '* ]]; then
# A single argument with interior spaces was provided -> we must use `bash -c` to evaluate it properly.
bash -c "$*"
else
# A command name only or a command name + arguments were specified as separate arguments -> let bash run it directly.
"$@"
fi
fi
ec=$?
(( ec != 0 && beepOnErr )) && printf '\a'
(( ec == 0 && runUntilSuccess )) && { echo $'\n'"[$(date +'%H:%M:%S') - elapsed: $(getElapsedTime $tsStart)] Exiting as requested: exit code 0 reported." >&3; exit 0; }
(( ec != 0 && runUntilFailure )) && { echo $'\n'"[$(date +'%H:%M:%S') - elapsed: $(getElapsedTime $tsStart)] Exiting as requested: non-zero exit code ($ec) reported." >&3; exit 0; }
sleep $interval
done
If you need a command to run periodically so that you can monitor its output, use watch [options] command. For example, to monitor free memory, run:
watch -n 1 free -m
The -n 1 option sets the update interval to 1 second (the default is 2 seconds).
Check man watch or the online manual for details.
If you need to monitor changes in one or multiple files (typically, logs), tail is your command of choice, for example:
# monitor one log file
tail -f /path/to/logs/file.log
# monitor multiple log files concurrently
tail -f /path/to/logs/*.log
the -f (for “follow”) option tells tail to output new content as the file grows.
Check man tail or the online manual for details.
I want to execute the script and have it run a command every {time interval}
cron (https://en.wikipedia.org/wiki/Cron) was designed for this purpose. If you run man cron or man crontab you will find instructions for how to use it.
any general advice on any resources for learning bash scripting could be really cool. I use Linux for my personal development work, so bash scripts are not totally foreign to me, I just haven't written any of my own from scratch.
If you are comfortable working with bash, I recommend reading through the bash manpage first (man bash) -- there are lots of cool tidbits.
Avoiding Time Drift
Here's what I do to remove the time it takes for the command to run and still stay on schedule:
#One-liner to execute a command every 600 seconds avoiding time drift
#Runs the command at each multiple of :10 minutes
while sleep $(echo 600-`date "+%s"`%600 | bc); do ls; done
This will drift off by no more than one second, then snap back in sync with the clock. If you need less than one second of drift and your sleep command supports floating-point numbers, try including nanoseconds in the calculation, like this:
while sleep $(echo 6-`date "+%s.%N"`%6 | bc); do date '+%FT%T.%N'; done
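The arithmetic behind these one-liners, shown in plain bash (no bc needed when whole seconds are enough): sleep for the remainder until the next multiple of the period, rather than a fixed duration:

```shell
period=600
now=$(date +%s)
wait_s=$(( period - now % period ))   # seconds until the next :10-minute boundary
echo "next run in ${wait_s}s"
```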
Here is my solution to reduce drift from loop payload run time.
tpid=0
while true; do
wait ${tpid}
sleep 3 & tpid=$!
{ body...; }
done
This approximates a timer-object approach: the sleep command executes in parallel with all the other commands, including even the true in the condition check. I think it's the most precise variant that avoids drift without resorting to date arithmetic.
A true timer-object bash function could implement the timer event with just an echo call, piped to a loop with the read command, like this:
timer | { while read ev; do ...; done; }
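A hedged sketch of that idea (the function name and the finite tick count are illustrative; a real timer would loop forever, and fractional sleep assumes GNU sleep): the producer keeps its own schedule, and ticks simply queue in the pipe if the payload runs long:

```shell
timer() {                      # emit one "tick" line per interval, $2 times
  local interval=$1 count=$2 i
  for (( i = 0; i < count; i++ )); do
    sleep "$interval"
    echo tick
  done
}

ticks=0
while read -r ev; do
  ticks=$(( ticks + 1 ))       # the loop payload goes here
done < <(timer 0.2 3)
echo "received $ticks events"
```

Process substitution (rather than a plain pipe) keeps the while loop in the current shell, so ticks survives the loop.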
I was faced with this challenge recently. I wanted a way to execute a piece of a script every hour, within a bash script file that runs from crontab every 5 minutes, without having to use sleep or rely fully on crontab. If the TIME_INTERVAL is not met, the script fails at the first condition. If the TIME_INTERVAL is met but the TIME_CONDITION is not, the script fails at the second condition. The script below works hand-in-hand with crontab - adjust accordingly.
NOTE: touch -m "$0" changes the modification timestamp of the bash script file itself. You will have to create a separate file for storing the last run time if you don't want to modify the script's timestamp.
CURRENT_TIME=$(date "+%s")
LAST_SCRIPT_RUN_TIME=$(date -r "$0" "+%s")
TIME_INTERVAL='3600'
START_TIME=$(date -d '07:00:00' "+%s")
END_TIME=$(date -d '16:59:59' "+%s")
TIME_DIFFERENCE=$((${CURRENT_TIME} - ${LAST_SCRIPT_RUN_TIME}))
TIME_CONDITION=$((${START_TIME} <= ${CURRENT_TIME} && ${CURRENT_TIME} <= ${END_TIME}))
if [[ "$TIME_DIFFERENCE" -le "$TIME_INTERVAL" ]];
then
>&2 echo "[ERROR] FAILED - script failed to run because of time conditions"
elif [[ "$TIME_CONDITION" = '0' ]];
then
>&2 echo "[ERROR] FAILED - script failed to run because of time conditions"
elif [[ "$TIME_CONDITION" = '1' ]];
then
>&2 touch -m "$0"
>&2 echo "[INFO] RESULT - script ran successfully"
fi
Based on the answer from @david-h and its comment from @kvantour, I wrote a new version which uses bash only, i.e. without bc.
export intervalsec=60
while sleep $(LANG=C now="$(date -Ins)"; printf "%0.0f.%09.0f" $((${intervalsec}-1-$(date "+%s" -d "$now")%${intervalsec})) $((1000000000-$(printf "%.0f" $(date "+%9N" -d "$now"))))); do \
date '+%FT%T.%N'; \
done
bash arithmetic operations ($(())) can only operate on integers. That's why the seconds and the nanoseconds have to be calculated separately.
printf is used both to combine the two calculations and to remove and re-add leading zeros. Leading zeros from "+%N" must be removed because the value would otherwise be interpreted as octal instead of decimal, and they must be added back before merging into the floating-point value.
Because the calculation requires several separate date calls, the date is read once and reused to prevent the time rolling over between calls.
This question already has answers here:
Read a file line by line assigning the value to a variable [duplicate]
(10 answers)
Closed 3 years ago.
I am writing a script to read commands from a file and execute a specific command. I want my script to work either with single input arguments or when an argument is a filename containing a list of arguments.
My code below works except for one problem: it ignores the last line of the file. So, if the file were as follows:
file.txt
file1
file2
The script posted below only runs the command for file1; file2, the last line, is skipped.
for currentJob in "$#"
do
if [[ "$currentJob" != *.* ]] #single file input arg
then
echo $currentJob
serverJobName="$( tr '[A-Z]' '[a-z]' <<< "$currentJob" )" #Change to lowercase
#run cURL job
curl -o "$currentJob"PisaInterfaces.xml http://www.ebi.ac.uk/msd-srv/pisa/cgi-bin/interfaces.pisa?"$serverJobName"
else #file with list of pdbs
echo "Reading "$currentJob
while read line; do
echo "-"$line
serverJobName="$( tr '[A-Z]' '[a-z]' <<< "$line" )"
curl -o "$line"PisaInterfaces.xml http://www.ebi.ac.uk/msd-srv/pisa/cgi-bin/interfaces.pisa?"$serverJobName"
done < "$currentJob"
fi
done
There is, of course, the obvious workaround: after the while loop, repeat the steps from inside the loop for the last file. But this is not desirable, as any changes I make inside the while loop must then be repeated outside it as well. I have searched around online and could not find anyone asking this precise question. I am sure it is out there, but I have not found it.
The output I get is as follows.
>testScript.sh file.txt
Reading file.txt
-file1
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 642k 0 642k 0 0 349k 0 --:--:-- 0:00:01 --:--:-- 492k
My bash version is 3.2.48
It sounds like your input file is missing the newline character after its last line. When read encounters this, it sets the variable ($line in this case) to the last line, but then returns an end-of-file error, so the loop doesn't execute for that last line. To work around this, you can make the loop execute if read succeeds or if it reads anything into $line:
...
while read line || [[ -n "$line" ]]; do
...
EDIT: the || in the while condition is what's known as a short-circuit boolean -- it tries the first command (read line), and if that succeeds it skips the second ([[ -n "$line" ]]) and goes through the loop (basically, as long as the read succeeds, it runs just like your original script). If the read fails, it checks the second command ([[ -n "$line" ]]) -- if read read anything into $line before hitting the end of file (i.e. if there was an unterminated last line in the file), this'll succeed, so the while condition as a whole is considered to have succeeded, and the loop runs one more time.
After that last unterminated line is processed, it'll run the test again. This time, the read will fail (it's still at the end of file), and since read didn't read anything into $line this time the [[ -n "$line" ]] test will also fail, so the while condition as a whole fails and the loop terminates.
EDIT2: The [[ ]] is a bash conditional expression -- it's not a regular command, but it can be used in place of one. Its primary purpose is to succeed or fail, based on the condition inside it. In this case, the -n test means succeed if the operand ("$line") is NONempty. There's a list of other test conditions here, as well as in the man page for the test command.
Note that a conditional expression in [[ ... ]] is subtly different from a test command [ ... ] -- see BashFAQ #031 for differences and a list of available tests. And they're both different from an arithmetic expression with (( ... )), and completely different from a subshell with ( ... )...
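To reproduce the situation and verify the fix, create a file whose last line has no trailing newline and count iterations both ways:

```shell
jobs_file=$(mktemp)
printf 'file1\nfile2' > "$jobs_file"   # note: no newline after file2

plain=0
while read -r line; do plain=$(( plain + 1 )); done < "$jobs_file"

fixed=0
while read -r line || [[ -n $line ]]; do fixed=$(( fixed + 1 )); done < "$jobs_file"

echo "plain=$plain fixed=$fixed"   # the plain loop misses the unterminated last line
```

This prints plain=1 fixed=2: the plain loop sees only file1, while the || [[ -n $line ]] variant also processes file2.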
Your problem seems to be a missing newline at the end of your file.
If you cat your file, the last line should appear on its own line before the prompt; otherwise it is unterminated.
Otherwise, try adding:
echo "Reading "$currentJob
echo >> "$currentJob" # append a newline
while read line; do
to terminate the last line of the file.
Using grep with while loop:
while IFS= read -r line; do
echo "$line"
done < <(grep "" file)
Using grep . instead of grep "" will skip the empty lines.
Note:
Using IFS= keeps any line indentation intact.
You should almost always use the -r option with read.
I found some code that reads a file including the last line, and works without the [[ ]] command:
http://www.unix.com/shell-programming-and-scripting/161645-read-file-using-while-loop-not-reading-last-line.html
DONE=false
until $DONE; do
read s || DONE=true
# you code
done < FILE