I'm facing a strange race condition in my bash program. I tried reproducing it with a simple demo program but, as is usually the case with timing-related races, I couldn't.
Here's an abstracted version of the program that DOES NOT duplicate the issue, but let me still explain:
# Abstracted version of the original program
# that is NOT able to demo the race.
#
function foo() {
local instance=$1
# [A lot of logic here -
# all foreground commands, nothing in the background.]
echo "$instance: test" > /tmp/foo.$instance.log
echo "Instance $instance ended"
}
# Launch the process in background...
#
echo "Launching instance 1"
foo 1 &
# ... and wait for it to complete.
#
echo "Waiting..."
wait
echo "Waiting... done. (wait exited with: $?)"
# This ls command ALWAYS fails in the real
# program in the 1st while-iteration, complaining about
# missing files, but works in the 2nd iteration!
#
# It always works in the very 1st while-iteration of the
# abstracted version.
#
while ! ls -l /tmp/foo.*; do
:
done
In my original program (and NOT in the abstracted version above), I do see Waiting... done. (wait exited with: 0) on stdout, just as in the version above. Yet ls -l always fails in the original program, while it always succeeds in the very first while-loop iteration of the abstracted version.
Also, the ls command fails even though I see the Instance 1 ended message on stdout. The output is:
$ ./myProgram
Launching instance 1
Waiting...
Waiting... done. (wait exited with: 0)
Instance 1 ended
ls: cannot access '/tmp/foo.*': No such file or directory
/tmp/foo.1
$
I noticed that the while loop can be safely done away with if I put a sleep 1 right before ls in my original program, like so:
# This too works in the original program:
sleep 1
ls -l /tmp/foo.*
Question: Why isn't wait working as expected in my original program? Any suggestions to at least help troubleshoot the problem?
I'm using bash 4.4.19 on Ubuntu 18.04.
EDIT: I just also verified that the call to wait in the original, failing program is exiting with a status code of 0.
EDIT 2: Shouldn't the Instance 1 ended message appear BEFORE Waiting... done. (wait exited with: 0)? Could this be a 'flushing problem' with the OS's disk buffer/cache when dealing with background processes in bash?
EDIT 3: If, instead of the while loop or sleep 1 hacks, I issue a sync command, then, voila, it works! But why should I have to do a sync in one program but not the other?
I noticed that each of the following three hacks works, though I'm not quite sure why:
Hack 1
while ! ls -l /tmp/foo.*; do
:
done
Hack 2
sleep 1
ls -l /tmp/foo.*
Hack 3
sync
ls -l /tmp/foo.*
Could this be a 'flushing problem' with the OS's disk buffer/cache, especially when dealing with background processes, especially in bash? In other words, the call to wait seems to return BEFORE the disk cache is flushed (or BEFORE the OS, on its own, realizes it needs to flush the disk cache and finishes doing so).
EDIT: Thanks to @Jon, whose guess was very close and got me thinking in the right direction, along with the age-old, bit-by-bit tweaking advice from @chepner.
The Real Problem: I was starting foo not directly/plainly, as shown in the inaccurate abstracted version in my original question, but via another function, launchThread, which, after doing some bookkeeping, ran foo 1 & in its body. And the call to launchThread was itself suffixed with an &! So my wait was really waiting on launchThread and not on foo! The sleep, sync, and while hacks were merely buying more time for foo to complete, which is why introducing them appeared to work. The following is a more accurate demonstration of the problem, even though you may or may not be able to reproduce it on your own system (due to scheduling/timing variance across systems):
#!/bin/bash -u
function now() {
date +'%Y-%m-%d %H:%M:%S'
}
function log() {
echo "$(now) - $#" >> $logDir/log # Line 1
}
function foo() {
local msg=$1
log "$msg"
echo " foo ended"
}
function launchThread() {
local f=$1
shift
"$f" "$#" & # Line 2
}
logDir=/tmp/log
/bin/rm -rf "$logDir"
mkdir -p "$logDir"
echo "Launching foo..."
launchThread foo 'message abc' & # Line 3
echo "Waiting for foo to finish..."
wait
echo "Waiting for foo to finish... done. (wait exited with: $?)"
ls "$logDir"/log*
Output of the above buggy program:
Launching foo...
Waiting for foo to finish...
Waiting for foo to finish... done. (wait exited with: 0)
foo ended
ls: cannot access '/tmp/log/log*': No such file or directory
If I remove the & from EITHER Line 2 OR from Line 3, the program works correctly, with the following as output:
Launching foo...
Waiting for foo to finish...
foo ended
Waiting for foo to finish... done. (wait exited with: 0)
/tmp/log/log
The program also works correctly if I remove the $(now) part from Line 1.
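For completeness, if launchThread really must itself be backgrounded, one way to make the top-level wait meaningful is to have launchThread wait for its own child before returning. This is only a sketch of that idea, not code from my real program:
function launchThread() {
    local f=$1
    shift
    "$f" "$@" &   # start the worker in the background...
    wait $!       # ...but do not return until that worker has finished
}
With this, the outer wait effectively covers foo as well, because the backgrounded launchThread only exits once foo has.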
Related
I'm seeing some unexpected behavior with runit and not sure how to get it to do what I want without throwing an error during termination. I have a process that sometimes knows it should stop itself and not let itself be restarted (thus should call sv d on itself). This works if I never change the user but produces errors if I switch to a non-root user when running.
I'll use the same finish script for both examples:
#!/bin/bash -e
echo "downtest finished with exit code $1 and exit status $2"
The run script that works as expected (prints downtest finished with exit code 0 and exit status 0 to syslog):
#!/bin/bash -e
exec 2>&1
echo "running downtest"
sv d downtest
exit 0
The run script that doesn't work as expected (prints downtest finished with exit code -1 and exit status 15 to syslog):
#!/bin/bash -e
exec 2>&1
echo "running downtest"
chpst -u ubuntu sudo sv d downtest
exit 0
I get the same result if I use su ubuntu instead of chpst.
Any ideas on why I see this behavior and how to fix it so calling sudo sv d downtest results in a clean process exit rather than returning error status codes?
sv d sends a SIGTERM if the process is still running. That is signal 15, which is why your finish script reports exit code -1 and exit status 15.
By contrast, to tell runsv not to start the program up again after it exits on its own (thus giving it the opportunity to do so), use sv o (once) instead.
Alternately, you can trap SIGTERM in your script when you're expecting it:
trap 'exit 0' TERM
If you want to make this conditional:
trap 'if [[ $ignore_sigterm ]]; then exit 0; fi' TERM
...and then run
ignore_sigterm=1
before triggering sv d.
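Putting those pieces together, a run script might look like the following sketch (untested; it assumes the service is named downtest, as in the question):
#!/bin/bash -e
exec 2>&1

# Exit cleanly when runsv delivers the SIGTERM triggered by our own `sv d`.
ignore_sigterm=1
trap 'if [[ $ignore_sigterm ]]; then exit 0; fi' TERM

echo "running downtest"
sv d downtest    # ask runsv to take this service down; a SIGTERM will follow
exit 0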
As a workaround, try running the command in a subshell: (chpst -u ubuntu sudo sv d downtest). That lets the script reach the final exit 0, which currently never runs because the script exits before getting there.
#!/bin/sh
exec 2>&1
echo "running downtest"
(sudo sv d downtest)
exit 0
Indeed, you don't need chpst -u ubuntu just to stop the process. If you want to stop or control the service as another user, you only need to adjust the permissions on the ./supervise directory; that is probably why you are getting the exit code -1.
Checking the runsv man page:
Two arguments are given to ./finish. The first one is ./run’s exit code, or -1 if ./run didn’t exit normally. The second one is the least significant byte of the exit status as determined by waitpid(2); for instance it is 0 if ./run exited normally, and the signal number if ./run was terminated by a signal. If runsv cannot start ./run for some reason, the exit code is 111 and the status is 0.
And from the faq:
Is it possible to allow a user other than root to control a service
Using the sv program to control a service, or query its status informations, only works as root. Is it possible to allow non-root users to control a service too?
Answer: Yes, you simply need to adjust file system permissions for the ./supervise/ subdirectory in the service directory. E.g.: to allow the user burdon to control the service dhcp, change to the dhcp service directory, and do
# chmod 755 ./supervise
# chown burdon ./supervise/ok ./supervise/control ./supervise/status
If you would like a full stop/start, you could remove the symlink for your run service, but that means you will have to create it again when you want the service back up.
In case it helps with this and similar cases, I came up with immortal to simplify the stop/start/restart/retries of services without root privileges; it is fully based on daemontools & runit, just adapted to some new workflows.
I am trying to work through securing my scripts from parallel execution by incorporating flock. I have read a number of threads here and came across a reference to this: http://www.kfirlavi.com/blog/2012/11/06/elegant-locking-of-bash-program/ which incorporates many of the examples presented in the other threads.
My scripts will eventually run on Ubuntu (>14), OS X 10.7 and 10.11.4. I am mainly testing on OS X 10.11.4 and have installed flock via homebrew.
When I run the script below, locks are created, but I think I am forking the subscripts, and it is these subscripts that I am trying to ensure never run more than one instance each.
#!/bin/bash
#----------------------------------------------------------------
set -vx
set -euo pipefail
set -o errexit
IFS=$'\n\t'
readonly PROGNAME=$(basename "$0")
readonly LOCKFILE_DIR=/tmp
readonly LOCK_FD=200
subprocess1="/bash$/subprocess1.sh"
subprocess2="/bash$/subprocess2.sh"
lock() {
local prefix=$1
local fd=${2:-$LOCK_FD}
local lock_file=$LOCKFILE_DIR/$prefix.lock
# create lock file
eval "exec $fd>$lock_file"
# acquire the lock
flock -n $fd \
&& return 0 \
|| return 1
}
eexit() {
local error_str="$@"
echo $error_str
exit 1
}
main() {
lock $PROGNAME \
|| eexit "Only one instance of $PROGNAME can run at one time."
##My child scripts
sh "$subprocess1" #wait for it to finish then run
sh "$subprocess2"
}
main
$subprocess1 is a script that loads ncftpget and logs into a remote server to grab some files. Once finished, the connection closes. I want to run subprocess1 every 15 minutes via cron. I have done so with success, but sometimes there are many files to grab and the job takes longer than 15 minutes. It is rare, but it does happen. In such a case, I want to ensure a second instance of $subprocess1 can't be started. For clarity, a small example of such a subscript is:
#!/bin/bash
remoteftp="someftp.ftp"
ncftplog="somelog.log"
localdir="some/local/dir"
ncftpget -R -T -f "$remoteftp" -d "$ncftplog" "$localdir" "*.files"
EXIT_V="$?"
case $EXIT_V in
0) O="Success!";;
1) O="Could not connect to remote host.";;
2) O="Could not connect to remote host - timed out.";;
3) O="Transfer failed.";;
4) O="Transfer failed - timed out.";;
5) O="Directory change failed.";;
6) O="Directory change failed - timed out.";;
7) O="Malformed URL.";;
8) O="Usage error.";;
9) O="Error in login configuration file.";;
10) O="Library initialization failed.";;
11) O="Session initialization failed.";;
esac
if [ "$EXIT_V" = 0 ];
then
echo ""$O"
else
echo "There has been an error: "$O""
echo "Exiting now..."
exit
fi
echo "Goodbye"
and an example of subprocess2 is:
#!/bin/bash
...preamble script setup items etc and then:
java -jar /some/javaprog.java
When I execute the parent script with "sh lock.sh", it progresses through the script without error and exits. The first issue I have is that if I load up the script again I get an error indicating that only one instance of lock.sh can run. What should I have added to the script to indicate that the processes have not completed yet (rather than merely exiting and giving back the prompt)?
However, if subprocess1 were already running on its own, lock.sh would start a second instance of it, because subprocess1 itself is not locked. How would one go about locking the child scripts, and ideally ensuring that any forked processes are taken care of as well? If someone had run subprocess1 at the terminal, or there were a runaway instance, and cron then loaded lock.sh, I would want lock.sh to fail when it tries to start its own instances of subprocess1 and subprocess2, not merely exit the way it does when cron tries to load two instances of lock.sh.
My main concern is loading multiple instances of ncftpget, which is called by subprocess1, and, further, of a third script I hope to incorporate, "subprocess2," which launches a Java program that deals with the downloaded files; neither ncftpget nor the Java program can run in parallel with itself without breaking many things. But I'm at a loss on how to control them adequately.
I thought I could use something similar to this in the main() function of lock.sh:
#This is where I try to lock the subscript
pidfile="${subprocess1}"
# lock it
exec 200>$pidfile
flock -n 200 || exit 1
pid=$$
echo $pid 1>&200
but am not sure how to incorporate it.
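One possible way to incorporate it is sketched below; it is untested on OS X, and the helper name run_locked and the per-child lock-file paths are my own invention. It uses flock's command form, so each child holds its own lock for exactly as long as it runs, no matter who started it:
# Run a command under a per-name lock. A second invocation with the same
# name fails fast instead of starting a parallel instance.
run_locked() {
    local name=$1
    shift
    flock -n "$LOCKFILE_DIR/$name.lock" "$@" \
        || echo "Skipped $name: another instance is still running." >&2
}

main() {
    lock "$PROGNAME" \
        || eexit "Only one instance of $PROGNAME can run at one time."
    run_locked subprocess1 sh "$subprocess1"   # waits for it to finish, then continues
    run_locked subprocess2 sh "$subprocess2"
}
Because the lock wraps the child command itself, a runaway instance started at the terminal would also hold its lock, so a cron-launched lock.sh would skip that child rather than start a duplicate.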
Editor's note: The OP is ultimately looking to package the code from this answer
as a script. Said code creates a stay-open FIFO from which a background command reads data to process as it arrives.
It works if I type it in the terminal, but it won't work if I enter those commands in a script file and run it.
#!/bin/bash
cat >a&
pid=$!
it seems that the program is stuck at cat>a&
$pid has no value after running the script, but the cat process seems to exist.
cdarke's answer contains the crucial pointer: your script mustn't run in a child process, so you have to source it.
Based on the question you linked to, it sounds like you're trying to do the following:
Open a FIFO (named pipe).
Keep that FIFO open indefinitely.
Make a background command read from that FIFO whenever new data is sent to it.
See bottom for a working solution.
As for an explanation of your symptoms:
Running your script NOT sourced (NOT with .) means that the script runs in a child process, which has the following implications:
Variables defined in the script are only visible inside that script, and the variables cease to exist altogether when the script finishes running.
That's why you didn't see the script's $pid variable after running the script.
When the script finishes running, its background tasks (cat >a&) are killed (as cdarke explains, the SIGHUP signal is sent to them; any process that doesn't explicitly trap that signal is terminated).
This contradicts your claim that the cat process continues to exist, but my guess is that you mistook an interactively started cat process for one started by a script.
By contrast, any FIFO created by your script (with mkfifo) does persist after the script exits (a FIFO behaves like a file - it persists until you explicitly delete it).
However, when you write to that FIFO without another process reading from it, the writing command will block and thus appear to hang (the writing process blocks until another process reads the data from the FIFO).
That's probably what happened in your case: because the script's background processes were killed, no one was reading from the FIFO, causing an attempt to write to it to block. You incorrectly surmised that it was the cat >a& command that was getting "stuck".
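You can see that blocking behavior in isolation with a tiny experiment (assuming /tmp/demo.fifo does not already exist):
mkfifo /tmp/demo.fifo
echo hello > /tmp/demo.fifo   # blocks here: nothing has the FIFO open for reading
# running `cat /tmp/demo.fifo` from a second shell unblocks it and prints "hello"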
The following script, when sourced, adds functions to the current shell for setting up and cleaning up a stay-open FIFO with a background command that processes data as it arrives. Save it as file bgfifo_funcs:
#!/usr/bin/env bash
[[ $0 != "$BASH_SOURCE" ]] || { echo "ERROR: This script must be SOURCED." >&2; exit 2; }
# Set up a background FIFO with a command listening for input.
# E.g.:
# bgfifo_setup bgfifo "sed 's/^/# /'"
# echo 'hi' > bgfifo # -> '# hi'
# bgfifo_cleanup
bgfifo_setup() {
(( $# == 2 )) || { echo "ERROR: usage: bgfifo_setup <fifo-file> <command>" >&2; return 2; }
local fifoFile=$1 cmd=$2
# Create the FIFO file.
mkfifo "$fifoFile" || return
# Use a dummy background command that keeps the FIFO *open*.
# Without this, it would be closed after the first time you write to it.
# NOTE: This call inevitably outputs a job control message that looks
# something like this:
# [1]+ Stopped cat > ...
{ cat > "$fifoFile" & } 2>/dev/null
# Note: The keep-the-FIFO-open `cat` PID is the only one we need to save for
# later cleanup.
# The background processing command launched below will terminate
# automatically when the FIFO is closed, which happens when the `cat` process is killed.
__bgfifo_pid=$!
# Now launch the actual background command that should read from the FIFO
# whenever data is sent.
{ eval "$cmd" < "$fifoFile" & } 2>/dev/null || return
# Save the *full* path of the FIFO file in a global variable for reliable
# cleanup later.
__bgfifo_file=$fifoFile
[[ $__bgfifo_file == /* ]] || __bgfifo_file="$PWD/$__bgfifo_file"
echo "FIFO '$fifoFile' set up, awaiting input for: $cmd"
echo "(Ignore the '[1]+ Stopped ...' message below.)"
}
# Cleanup function that you must call when done, to remove
# the FIFO file and kill the background commands.
bgfifo_cleanup() {
[[ -n $__bgfifo_file ]] || { echo "(Nothing to clean up.)"; return 0; }
echo "Removing FIFO '$__bgfifo_file' and terminating associated background processes..."
rm "$__bgfifo_file"
kill $__bgfifo_pid # Note: We let the job control messages display.
unset __bgfifo_file __bgfifo_pid
return 0
}
Then, source script bgfifo_funcs, using the . shell builtin:
. bgfifo_funcs
Sourcing executes the script in the current shell (rather than in a child process that terminates after the script has run), and thus makes the script's functions and variables available to the current shell. Functions by definition run in the current shell, so any background commands started from functions stay alive.
Now you can set up a stay-open FIFO with a background process that processes input as it arrives as follows:
# Set up FIFO 'bgfifo' in the current dir. and process lines sent to it
# with a sample Sed command that simply prepends '# ' to every line.
$ bgfifo_setup bgfifo "sed 's/^/# /'"
# Send sample data to the FIFO.
$ echo 'Hi.' > bgfifo
# Hi.
# ...
$ echo 'Hi again.' > bgfifo
# Hi again.
# ...
# Clean up when done.
$ bgfifo_cleanup
The reason that cat >a "hangs" is because it is reading from the standard input stream (stdin, file descriptor zero), which defaults to the keyboard.
Adding the & causes it to run in the background, which disconnects it from the keyboard. Normally that would leave a suspended job in the background but, since you exit your script, its background tasks are killed (a SIGHUP signal is sent to them).
EDIT: although I followed the link in the question, it was not stated originally that the OP was actually using a FIFO at that stage. So thanks to @mklement0.
I don't understand what you are trying to do here, but I suspect you need to run it as a "sourced" file, as follows:
. gash.sh
Where gash.sh is the name of your script. Note the preceding .
You need to specify a file with "cat":
#!/bin/bash
cat SOMEFILE >a &
pid=$!
echo PID $pid
Although that seems a bit silly - why not just "cp" the file (cp SOMEFILE a)?
Q: What exactly are you trying to accomplish?
I realize that there are several other questions on SE about notifications upon completion of background tasks, and how to queue up jobs to start after others end, and questions like these, but I am looking for a simpler answer to a simpler question.
I want to start a very simple background job, and get a simple stdout text notification of its completion.
For example:
cp My_Huge_File.txt New_directory &
...and when it done, my bash shell would display a message. This message could just be the completed job's PID, but if I could program unique messages per background process, that would be cool too, so I could have numerous background jobs running without confusion.
Thanks for any suggestions!
EDIT: user000001's answer separates commands with ;. I separated commands with && in my original example. The only difference I notice is that you don't have to surround your base command with braces if you use &&. Semicolons are a bit more flexible, so I've updated my examples.
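To illustrate that difference with the cp command from my example (just a sketch; the file and directory names are placeholders):
# With &&, no braces are needed: the trailing & backgrounds the whole AND-list.
cp My_Huge_File.txt New_directory && echo "My_Huge_File.txt copy done" &

# With ;, braces are required so the & applies to both commands, not just the last.
{ cp My_Huge_File.txt New_directory; echo "My_Huge_File.txt copy done"; } &
Note the semantic difference, too: with && the message only prints if cp succeeds, whereas with ; it prints either way.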
The first thing that comes to mind is
{ sleep 2; echo "Sleep done"; } &
You can also suppress the accompanying stderr output from the above line:
{ { sleep 2; echo "Sleep done"; } & } 2>/dev/null
If you want to save your program output (stdout) to a log file for later viewing, you can use:
{ { sleep 2; echo "Sleep done"; } & } 2>/dev/null 1>myfile.log
Here's even a generic form you might use (You can even make an alias so that you can run it at any time without having to type so much!):
# don't hesitate to add semicolons for multiple commands
CMD="cp My_Huge_File.txt New_directory"
{ eval $CMD & } 2>/dev/null 1>myfile.log
You might also pipe stdout into another process using | in case you wish to process output in real time with other scripts or software. tee is also a helpful tool in case you wish to use multiple pipes. For reference, there are more examples of I/O redirection here.
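For instance, a sketch of the tee idea (myfile.log is just an illustrative name):
# Background the whole pipeline; tee prints the output and keeps a copy in a file.
{ sleep 2; echo "Sleep done"; } 2>&1 | tee myfile.log &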
You could use command grouping:
{ slow_program; echo ok; } &
or the wait command
slow_program &
wait
echo ok
The most reliable way is to simply have the output from the background process go to a temporary file and then consume the temporary file.
When you have background processes running, it can be difficult to capture the output into something useful, because multiple jobs will overwrite each other.
For example, if you have two processes which each print out a string with a number, "this is my string1" and "this is my string2", then it is possible to end up with output that looks like this:
"this is mthis is my string2y string1"
instead of:
this is my string1
this is my string2
By using temporary files you guarantee that the output will be correct.
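A minimal sketch of that temp-file approach (the strings and the mktemp usage are purely illustrative):
# Give each background job its own temporary file so the outputs never interleave.
out1=$(mktemp) out2=$(mktemp)

echo "this is my string1" > "$out1" &
echo "this is my string2" > "$out2" &
wait                        # block until both background jobs have finished

cat "$out1" "$out2"         # consume the results in a well-defined order
rm -f "$out1" "$out2"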
As I mentioned in my comment above, bash already does this kind of notification by default, as far as I know. Here's an example I just made:
$ sleep 5 &
[1] 25301
$ sleep 10 &
[2] 25305
$ sleep 3 &
[3] 25309
$ jobs
[1] Done sleep 5
[2]- Running sleep 10 &
[3]+ Running sleep 3 &
$ :
[3]+ Done sleep 3
$ :
[2]+ Done sleep 10
$
I'm adding some custom logging functionality to a bash script, and can't figure out why it won't take the output from one named pipe and feed it back into another named pipe.
Here is a basic version of the script (http://pastebin.com/RMt1FYPc):
#!/bin/bash
PROGNAME=$(basename $(readlink -f $0))
LOG="$PROGNAME.log"
PIPE_LOG="$PROGNAME-$$-log"
PIPE_ECHO="$PROGNAME-$$-echo"
# program output to log file and optionally echo to screen (if $1 is "-e")
log () {
if [ "$1" = '-e' ]; then
shift
"$@" > $PIPE_ECHO 2>&1
else
"$@" > $PIPE_LOG 2>&1
fi
}
# create named pipes if not exist
if [[ ! -p $PIPE_LOG ]]; then
mkfifo -m 600 $PIPE_LOG
fi
if [[ ! -p $PIPE_ECHO ]]; then
mkfifo -m 600 $PIPE_ECHO
fi
# cat pipe data to log file
while read data; do
echo -e "$PROGNAME: $data" >> $LOG
done < $PIPE_LOG &
# cat pipe data to log file & echo output to screen
while read data; do
echo -e "$PROGNAME: $data"
log echo $data # this doesn't work
echo -e $data > $PIPE_LOG 2>&1 # and neither does this
echo -e "$PROGNAME: $data" >> $LOG # so I have to do this
done < $PIPE_ECHO &
# clean up temp files & pipes
clean_up () {
# remove named pipes
rm -f $PIPE_LOG
rm -f $PIPE_ECHO
}
#execute "clean_up" on exit
trap "clean_up" EXIT
log echo "Log File Only"
log -e echo "Echo & Log File"
I thought the two commands marked above (log echo $data and echo -e $data > $PIPE_LOG) would take the $data from $PIPE_ECHO and output it to $PIPE_LOG. But neither works. Instead, I have to send that output directly to the log file, without going through $PIPE_LOG.
Why is this not working as I expect?
EDIT: I changed the shebang to "bash". The problem is the same, though.
SOLUTION: A.H.'s answer helped me understand that I wasn't using named pipes correctly. I have since solved my problem by not even using named pipes. That solution is here: http://pastebin.com/VFLjZpC3
It seems to me you do not understand what a named pipe really is. A named pipe is not one stream like a normal pipe. It is a series of normal pipes, because a named pipe can be closed, and a close on the producer side might show up as a close on the consumer side.
The "might" part is this: the consumer will read data until there is no more data. "No more data" means that, at the time of the read call, no producer has the named pipe open. This means that multiple producers can feed one consumer only if there is no point in time without at least one producer. Think of it as a door which closes automatically: if a steady stream of people keeps the door open, either by handing the doorknob to the next person or by squeezing through at the same time, the door stays open. But once the door is closed, it stays closed.
A little demonstration should make the difference a little clearer:
Open three shells. First shell:
1> mkfifo xxx
1> cat xxx
no output is shown because cat has opened the named pipe and is waiting for data.
Second shell:
2> cat > xxx
no output, because this cat is a producer which keeps the named pipe open until we tell it to close explicitly.
Third shell:
3> echo Hello > xxx
3>
This producer immediately returns.
First shell:
Hello
The consumer received the data, wrote it and, since a producer still keeps the door open, continues to wait.
Third shell
3> echo World > xxx
3>
First shell:
World
The consumer received the data, wrote it and, since a producer still keeps the door open, continues to wait.
Second Shell: write into the cat > xxx window:
And good bye!
(control-d key)
2>
First shell
And good bye!
1>
The ^D key closed the last producer, the cat > xxx, and hence the consumer also exits.
In your case which means:
Your log function will try to open and close the pipes multiple times. Not a good idea.
Both your while loops exit earlier than you think (check this with ( while ... done < $PIPE_X; echo FINISHED; ) &, as in the sketch at the end of this answer).
Depending on the scheduling of your various producers and consumers, the door might be slammed shut sometimes and sometimes not - you have a race condition built in. (For testing you can add a sleep 1 at the end of the log function.)
Your "testcases" only try each possibility once - try running them multiple times (you will block, especially with the sleeps), because your producer might not find any consumer.
So I can explain the problems in your code but I cannot tell you a solution because it is unclear what the edges of your requirements are.
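To make the 'exit earlier than you think' point concrete, here is a debugging sketch of that check applied to the first reader loop from the question (it only adds the FINISHED marker; the loop body is unchanged):
# The FINISHED line lands in the log as soon as the last writer closes
# $PIPE_LOG, showing how early the reader loop really exits.
(
    while read data; do
        echo -e "$PROGNAME: $data" >> $LOG
    done < $PIPE_LOG
    echo "$PROGNAME: FINISHED reading from $PIPE_LOG" >> $LOG
) &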
It seems the problem is in the "cat pipe data to log file" part.
Let's see: you use a "&" to put the loop in the background; I guess you mean it to run in parallel with the second loop.
But the problem is that you don't even need the "&", because as soon as no more data is available in the fifo, the while..read stops. (Still, you've got to have some data at first for the first read to work.) The next read doesn't hang if no more data is available (which would pose another problem: how does your program stop?).
I guess the while read checks if more data is available in the file before doing the read and stops if it's not the case.
You can check with this sample:
mkfifo foo
while read data; do echo $data; done < foo
This script will hang, until you write anything from another shell (or bg the first one). But it ends as soon as a read works.
Edit:
I've tested on RHEL 6.2 and it works as you say (i.e., badly!).
The problem is that, after running the script (let's say script "a"), you've got an "a" process remaining. So, yes, in some way the script hangs as I wrote before (not as stupid an answer as I thought then :) ), except if you write only one log entry (be it log-file-only or echo; in that case it works).
(It's the read loop from PIPE_ECHO that hangs when writing to PIPE_LOG and leaves a process running each time).
I've added a few debug messages, and here is what I see:
only one line is read from PIPE_LOG and after that, the loop ends
then a second message is sent to PIPE_LOG (after being received from PIPE_ECHO), but the process no longer reads from PIPE_LOG => the write hangs.
When you ls -l /proc/[pid]/fd, you can see that the fifo is still open (but deleted).
If fact, the script exits and removes the fifos, but there is still one process using it.
If you don't remove the log fifo during cleanup and instead cat it, that will free the hanging process.
Hope it will help...
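As a concrete sketch of that last suggestion: comment out the rm -f $PIPE_LOG line in clean_up, run the script, and then give the blocked writer a reader. The leftover fifo is named like <scriptname>-<pid>-log in the current directory, so for example:
# Substitute the real leftover fifo name (or a matching glob); reading it once
# lets the blocked writer finish and exit.
cat ./*-*-log > /dev/null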