Why can't I exit from an exit trap when I'm inside of a function in ZSH, unless I'm in a loop? - shell

I'm really trying to understand the difference in how ZSH and Bash are handling signal traps, but I'm having a very hard time grasping why ZSH is doing what it's doing.
In short, I'm not able to exit a script with exit in ZSH from within a trap if the execution point is within a function, unless it's also within a loop.
Here is an example of how exit in a trap action behaves in the global / file level scope.
#!/bin/zsh
trap 'echo "Trap SIGINT" ; exit 130' SIGINT
sleep 1
echo "1"
sleep 1
echo "2"
sleep 1
echo "3"
If I call the script, I can send an INT signal by pressing Ctrl+C at any time to echo "Trap SIGINT" and exit the script immediately.
If I hit Ctrl+C after I see the first 1, the output looks like this:
$ ./foobar
1
^CTrap SIGINT
But if I wrap the code in a function, the trap doesn't stop script execution until the function finishes. Using exit 130 from within the trap action just lets the code continue from the execution point within the function.
Here is an example of how using trap behaves in the function level scope.
#!/bin/zsh
trap 'echo "Trap SIGINT" ; exit 130' SIGINT
foobar() {
    sleep 1
    echo "1"
    sleep 1
    echo "2"
    sleep 1
    echo "3"
}
foobar
echo "Finished"
If I call the script, the only thing that an INT signal does is end the sleep command early. The script will just keep on going from the same execution point after that.
If I hit Ctrl+C repeatedly, the output looks like this:
$ ./foobar
^CTrap SIGINT
1
^CTrap SIGINT
2
^CTrap SIGINT
3
It doesn't echo "Finished" at the end, so the script does exit once the function finishes, but I can't seem to exit before then.
It doesn't make a difference if I set the trap in the global / file scope or from within the function.
If I change exit 130 to return 130, then it will jump out of that function early but continue script execution. This is expected behavior from what I could read in the ZSH documentation.
If I wrap the code inside of a for or while loop as shown in the code below, the exit in the trap then has no problem breaking out of the loop and ending the script.
#!/bin/zsh
trap 'echo "Trap SIGINT" ; exit 130' SIGINT
foobar() {
    for i in 1; do
        sleep 1
        echo "1"
        sleep 1
        echo "2"
        sleep 1
        echo "3"
    done
    sleep 1
    echo "Outside of loop"
}
foobar
echo "Finished"
Even if I have the loop in the global / file scope and call foobar from within the loop, it still has no problem exiting from the trap action.
The one thing that does work correctly is defining a TRAPINT function instead of using the trap builtin and returning a non-zero status from that function. However, calling exit from the TRAPINT function behaves the same way it does with the trap builtin.
I've tried to find an explanation for why it acts like this, but I couldn't find anything.
So what's actually happening here? Why is ZSH not letting me exit from the trap action when the execution point is inside a function?

One way to make this work as expected is setting the ERR_EXIT option.
From the documentation:
If a command has a non-zero exit status, execute the ZERR trap, if set, and exit. This is disabled while running initialization scripts.
There's also ERR_RETURN:
If a command has a non-zero exit status, return immediately from the enclosing function. The logic is similar to that for ERR_EXIT, except that an implicit return statement is executed instead of an exit. This will trigger an exit at the outermost level of a non-interactive script.
Both options have some caveats and notes; refer to the documentation.
Adding setopt localoptions err_exit as the first line of the foobar function in your script (you probably don't want to set this globally) produces:
$ ./foobar
1
^CTrap SIGINT
$
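For reference, a minimal sketch of that change, assuming the foobar body from the question stays the same:
foobar() {
    setopt localoptions err_exit  # enable ERR_EXIT only for the scope of this function
    sleep 1
    echo "1"
    sleep 1
    echo "2"
    sleep 1
    echo "3"
}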
Now, the interesting bit. In your demonstration script, if you change your exit value from 130 to some other number, and the echo lines to echo "1 - $?" etc., you get:
$ ./foobar
1 - 0
2 - 0
^CTrap SIGINT
3 - 130
The sleep is still exiting with 130, the normal value for a process killed by SIGINT. What happened to your exit in the trap and its value? Not a clue (I'll update the answer if I figure it out).
I'd just stick with the TRAPnal functions (TRAPINT, TRAPEXIT, and so on) when writing zsh scripts that care about signals.
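As a sketch of that approach, reusing the foobar function from the question (the return $(( 128 + $1 )) convention follows the trap-function example in the zsh manual; $1 is the signal number):
#!/bin/zsh
TRAPINT() {
    print "Trap SIGINT"
    # Returning a non-zero status from a TRAPnal function makes the shell act
    # as if it had received the signal itself, so the script stops here even
    # when the execution point is inside a function.
    return $(( 128 + $1 ))
}
foobar() {
    sleep 1; echo "1"
    sleep 1; echo "2"
    sleep 1; echo "3"
}
foobar
echo "Finished"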

Related

Why doesn't "trap EXIT" work in background jobs in Bash?

I am trying to create a script that starts a bunch of jobs in the background and then waits until all of them run to completion.
#!/bin/sh
cleanup() {
    wait
    echo cleanup
}
do_work() {
    sleep 2
    echo done "$@"
}
run() {
    trap cleanup EXIT
    do_work 1 &
    # ... some code that may fail ...
    do_work 2 &
    # I can't just call cleanup() here because of possible early exit
}
# The script itself runs in the background too.
run&
To ensure that this script will wait for all its child processes, even if something goes wrong while spawning them, I use trap cleanup EXIT instead of just cleanup at the end.
But when I run this script in different shells, I get the following results:
$ for sh in zsh dash 'busybox ash' bash; do echo "$sh:"; $sh script.sh; sleep 3; echo; done
zsh:
done 1
done 2
cleanup
dash:
done 1
done 2
cleanup
busybox ash:
done 2
done 1
cleanup
bash:
done 2
done 1
$
In Bash the trap command seems to be completely ignored. What might be the reason for that? Any way to fix it?
man bash-builtins says something about signals ignored upon entry to the shell that cannot be trapped, but I have no idea how that applies to this situation...
Bash does not appear to run the EXIT trap when the background subshell created by run & reaches the end of its body implicitly, but an explicit exit does trigger it. So just call exit at the end of run:
run() {
    trap cleanup EXIT
    do_work 1 &
    # ... some code that may fail ...
    do_work 2 &
    # I can't just call cleanup() here because of possible early exit
    exit
}

Prevent CTRL+C being sent to process called from a Bash script

Here is a simplified version of some code I am working on:
#!/bin/bash
term() {
    echo ctrl c pressed!
    # perform cleanup - don't exit immediately
}
trap term SIGINT
sleep 100 &
wait $!
As you can see, I would like to trap CTRL+C / SIGINT and handle these with a custom function to perform some cleanup operation, rather than exiting immediately.
However, upon pressing CTRL+C, what actually seems to happen is that, while ctrl c pressed! is echoed as expected, the wait command is also killed, which I would not like to happen (part of my cleanup operation kills sleep a bit later, but it does some other things first). Is there a way I can prevent this, i.e. stop the CTRL+C from being sent to the wait command?
You can prevent a process called from a Bash script from receiving sigint by first ignoring the signal with trap:
#!/bin/bash
# Cannot be interrupted
( trap '' INT; exec sleep 10; )
However, only a parent process can wait for its children, so wait is a shell builtin rather than a new process, and this trick therefore doesn't apply.
Instead, just restart the wait after it gets interrupted:
#!/bin/bash
n=0
term() {
    echo "ctrl c pressed!"
    n=$((n+1))
}
trap term INT
sleep 100 &
while
    wait "$!"
    [ "$?" -eq 130 ] # SIGINT results in exit code 128+2
do
    if [ "$n" -ge 3 ]
    then
        echo "Jeez, fine"
        exit 1
    fi
done
I ended up using a modified version of what @thatotherguy suggested:
#!/bin/bash
term() {
    echo ctrl c pressed!
    # perform cleanup - don't exit immediately
}
trap term SIGINT
sleep 100 &
pid=$!
while ps -p $pid > /dev/null; do
    wait $pid
done
This checks if the process is still running and, if so, runs wait again.

Bash: why wait returns prematurely with code 145

This problem is very strange and I cannot find any documentation about it online. In the following code snippet I am merely trying to run a bunch of sub-processes in parallel, printing something when they exit and collecting/printing their exit codes at the end. I find that without catching SIGCHLD things work as I would expect; however, things break when I catch the signal. Here is the code:
#!/bin/bash
# enabling job control
set -m
cmd_array=( "$@" )  # array of commands to run in parallel
cmd_count=$#        # number of commands to run
cmd_idx=0           # current index of command
cmd_pids=()         # array of child proc pids
trap 'echo "Child job exited"' SIGCHLD  # setting up signal handler on SIGCHLD
# running jobs in parallel
while [ $cmd_idx -lt $cmd_count ]; do
    cmd=${cmd_array[$cmd_idx]}  # retrieving the job command as a string
    eval "$cmd" &
    cmd_pids[$cmd_idx]=$!       # keeping track of the job pid
    echo "Job #$cmd_idx launched ['$cmd']"
    (( cmd_idx++ ))
done
# all jobs have been launched, collecting exit codes
idx=0
for pid in "${cmd_pids[@]}"; do
    wait $pid
    child_exit_code=$?
    if [ $child_exit_code -ne 0 ]; then
        echo "ERROR: Job #$idx failed with return code $child_exit_code. [job_command: '${cmd_array[$idx]}']"
    fi
    (( idx++ ))
done
You can tell something is wrong when you run the script with the following command:
./parallel_script.sh "sleep 20; echo done_20" "sleep 3; echo done_3"
The interesting thing here is that, as soon as the signal handler is called (when sleep 3 is done), the wait (which is waiting on sleep 20) is interrupted right away with return code 145. I can tell the sleep 20 is still running even after the script is done.
I can't find any documentation about such a return code from wait. Can anyone shed some light as to what is going on here?
(By the way if I add a while loop when I wait and keep on waiting while the return code is 145, I actually get the result I expect)
Thanks to @muru, I was able to reproduce the "problem" using much less code, which you can see below:
#!/bin/bash
set -m
trap "echo child_exit" SIGCHLD
function test() {
    sleep $1
    echo "'sleep $1' just returned now"
}
echo sleeping for 6 seconds in the background
test 6 &
pid=$!
echo sleeping for 2 second in the background
test 2 &
echo waiting on the 6 second sleep
wait $pid
echo "wait return code: $?"
If you run this you will get the following output:
linux:~$ sh test2.sh
sleeping for 6 seconds in the background
sleeping for 2 second in the background
waiting on the 6 second sleep
'sleep 2' just returned now
child_exit
wait return code: 145
linux:~$ 'sleep 6' just returned now
Explanation:
As @muru pointed out, "When a command terminates on a fatal signal whose number is N, Bash uses the value 128+N as the exit status." (c.f. Bash manual on Exit Status).
Now what misled me here was the "fatal" signal. I was looking for a command that failed somewhere, when nothing did.
Digging a little deeper in Bash manual on Signals: "When Bash is waiting for an asynchronous command via the wait builtin, the reception of a signal for which a trap has been set will cause the wait builtin to return immediately with an exit status greater than 128, immediately after which the trap is executed."
So there you have it, what happens in the script above is the following:
sleep 6 starts in the background
sleep 2 starts in the background
wait starts waiting on sleep 6
sleep 2 terminates and the SIGCHLD trap is fired, interrupting wait, which returns 128 + SIGCHLD (17) = 145
my script exits since it is not waiting anymore
the background sleep 6 terminates, hence the "'sleep 6' just returned now" after the script has already exited
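For completeness, here is a sketch of the re-wait workaround mentioned in the question; the kill -0 liveness check is an extra safeguard added here, not part of the original post:
wait "$pid"
status=$?
# A status > 128 can mean either "wait was interrupted by the trapped SIGCHLD"
# or "the child itself was killed by a signal"; only re-wait while the child
# is actually still alive.
while [ "$status" -gt 128 ] && kill -0 "$pid" 2>/dev/null; do
    wait "$pid"
    status=$?
done
echo "wait return code: $status"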

Using wait process-id on a bash if-condition returning error code 1 for successful process termination

I know a little about bash return codes for success and failure conditions, but I was experimenting with wait on background processes in an if condition across a couple of scripts, and I was surprised by the behavior of the return codes (0 for success, non-zero for failure).
My scripts:
$cat foo.sh
#!/bin/bash
sleep 5
$cat bar.sh
#!/bin/bash
sleep 10
$cat experiment.sh
./foo.sh &
pid1=$!
./bar.sh &
pid2=$!
if wait $pid1 && wait $pid2
then
    echo "Am getting screwed here!"
else
    echo "Am supposed to be screwed here!"
fi
Running the script as it is, I get the output Am getting screwed here! instead of Am supposed to be screwed here!
$./experiment.sh
Am getting screwed here!
Now I modify the scripts to force non-zero exit codes, using exit in both foo.sh and bar.sh:
$cat foo.sh
#!/bin/bash
sleep 5
exit 2
$cat bar.sh
#!/bin/bash
sleep 10
exit 17
And I am surprised to see the output:
$./experiment.sh
Am supposed to be screwed here!
Apologies for the detailed post, but any help is appreciated.
The man page for reference:- http://ss64.com/bash/wait.html
That's correct behavior. The exit status of wait (when called with a single process ID) is the exit status of the process being waited on. Since at least one of them has a non-zero exit status, the && list fails and the else branch is taken.
The rationale is that there is one way (0) for a command to succeed but many ways (any non-zero integer) for it to fail. Don't confuse bash's use of exit statuses with the standard Boolean interpretation of 0 as false and nonzero as true. The shell if statement checks if its command succeeds.
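If you also want to know which script failed and with what status, one option (a sketch, not part of the original answer) is to capture each exit status separately instead of relying on the && list:
#!/bin/bash
./foo.sh & pid1=$!
./bar.sh & pid2=$!
wait "$pid1"; status1=$?
wait "$pid2"; status2=$?
if [ "$status1" -eq 0 ] && [ "$status2" -eq 0 ]; then
    echo "Both scripts succeeded"
else
    echo "foo.sh exited with $status1, bar.sh exited with $status2"
fi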

SIGALRM waits for subshell processes?

Here is the unexpected situation: in the following script, SIGALRM doesn't invoke the function alarm() at the expected time.
#!/bin/sh -x
alarm() {
    echo "alarmed!!!"
}
trap alarm 14
OUTER=$(exec sh -c 'echo $PPID')
#for arg in `ls $0`; do
ls $0 | while read arg; do
    INNER=$(exec sh -c 'echo $PPID')
    # child A, the timer
    sleep 1 && kill -s 14 $$ &
    # child B, some other scripts
    sleep 60 &
    wait $!
done
Expectation:
After 1 second, the function alarm() should be called.
Actually:
alarm() is not called until the 60-second sleep ends, or until we hit Ctrl+C.
We know that in the script, $$ indicates the OUTER process, so I suppose we should see the string printed to the screen after 1 second. However, it is not until child B exits that we see alarm() called.
When the trap line is commented out, the whole program just terminates after 1 second. So... I suppose SIGALRM is at least received, but why doesn't it invoke the trap action?
And, as a side question, is the default behavior of SIGALRM to terminate? From here I am told that by default it is ignored, so why does OUTER exit after receiving it?
From the bash man page:
If bash is waiting for a command to complete and receives a signal for which a trap has been set, the trap will not be executed until the command completes. When bash is waiting for an asynchronous command via the wait builtin, the reception of a signal for which a trap has been set will cause the wait builtin to return immediately with an exit status greater than 128, immediately after which the trap is executed.
Your original script is in the first scenario. The subshell (the while read loop) has called wait, but the top level script is just waiting for the subshell, so when it receives the signal, the trap is not executed until the subshell completes. If you send the signal to the subshell with kill -s 14 $INNER, you get the behavior you expect.
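A sketch of that one-line change inside the loop body (everything else in the script stays as it was):
# child A, the timer: signal the subshell running the loop instead of the top-level $$
sleep 1 && kill -s 14 "$INNER" &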
