The following toy script (tmp.sh) exits with code 0 even if the process sent to the named pipe fails. How can I capture the non-zero exit code from the named pipe? Or more in general, the fact that something has gone wrong?
#!/bin/bash
set -eo pipefail
mkfifo mypipe
FOOBAR > mypipe &
cat mypipe
Run and check exit code:
bash tmp.sh
tmp.sh: line 6: FOOBAR: command not found
echo $? # <- Exit code is 0 despite the "command not found"!
The failure is invisible to the script: FOOBAR > mypipe is a redirection, not a pipeline, so pipefail does not apply, and the script's own last command (cat mypipe) exits 0. You need to capture the process ID of the background process and wait on it to collect the correct exit status:
#!/bin/bash
set -eo pipefail
rm -f mypipe
mkfifo mypipe
FOOBAR > mypipe &
# store process id of above process into pid
pid=$!
cat mypipe
# wait for background process to complete
wait $pid
Now when you run it:
bash tmp.sh
tmp.sh: line 6: FOOBAR: command not found
echo $?
127
If you need to catch errors and apply specific behavior, a trap can be your friend. This script prints itself, so I'll just post a run here:
$: tst
+ trap 'x=$?; echo "$x#$0:$LINENO"; exit $x' err
+ rm -f mypipe
+ mkfifo mypipe
+ pid=6404
+ cat mypipe
+ cat ./tst
#! /bin/env bash
set -x
trap 'x=$?; echo "$x#$0:$LINENO"; exit $x' err
#set -eo pipefail
rm -f mypipe
mkfifo mypipe
cat $0 >mypipe &
pid=$!
cat mypipe
wait $pid
fubar >mypipe &
pid=$!
cat mypipe
wait $pid
echo done
+ wait 6404
+ pid=7884
+ cat mypipe
+ fubar
./tst: line 16: fubar: command not found
+ wait 7884
++ x=127
++ echo 127#./tst:19
127#./tst:19
Note the trap 'x=$?; echo "$x#$0:$LINENO"; exit $x' err line.
It sets x to the exit code of whatever command triggered the trap, then prints that code, the filename, and the line number being executed when the trap fired, and exits the program with that code. Here it triggers on the wait, which makes the script bail before reaching the echo done at the bottom.
It works with or without set -eo pipefail.
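For readability, here is the same idea pulled out of the trace above as a minimal standalone script (same trap and mypipe setup; fubar stands in for any failing command):
#!/usr/bin/env bash
# On any failing command, report "status#file:line" and exit with that status.
trap 'x=$?; echo "$x#$0:$LINENO"; exit $x' ERR
rm -f mypipe
mkfifo mypipe
fubar >mypipe &   # fails with 127 (command not found)
pid=$!
cat mypipe        # sees EOF once the failed writer closes the pipe
wait $pid         # returns 127, which fires the ERR trap
echo done         # never reached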
Related
I have the below in a bash script:
python3 run_tests.py 2>&1 | tee tests.log
If I run python3 run_tests.py alone, I can do the below to exit the script:
python3 run_tests.py
status=$?
if [ $status -ne 0 ]; then
echo 'ERROR: pytest failed, exiting ...'
exit $status
fi
However, the above working code doesn't write the output of pytest to a file.
When I run python3 run_tests.py 2>&1 | tee tests.log, the output of pytest goes to the file, but the pipeline always returns status 0, since the tee job ran successfully.
I need a way to capture the return code of the python test script before the output is written to the file. Either that, or something that accomplishes the same end result: quitting the job if a test fails while still getting the failures in the output file.
Any help would be appreciated! :)
The exit status of a pipeline is the status of the last command, so $? is the status of tee, not pytest.
In bash you can use the $PIPESTATUS array to get the status of each command in the pipeline.
python3 run_tests.py 2>&1 | tee tests.log
status=${PIPESTATUS[0]} # status of run_tests.py
if [ $status -ne 0 ]; then
echo 'ERROR: pytest failed, exiting ...'
exit $status
fi
Note that you need to save the status in another variable, because $? and $PIPESTATUS are updated after each command.
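To see the clobbering in action, here is a minimal demonstration, using false | true as a stand-in for the real pipeline:
false | true
echo "${PIPESTATUS[@]}"   # prints: 1 0
echo "${PIPESTATUS[@]}"   # prints: 0 (the first echo already overwrote the array)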
I don't have python on my system, so I'm using awk to produce output and a specific exit status instead:
$ { awk 'BEGIN{print "foo"; exit 1}'; ret="$?"; } > >(tee tests.log)
foo
$ echo "$ret"
1
$ cat tests.log
foo
or if you want a script:
$ cat tst.sh
#!/usr/bin/env bash
#########
exec 3>&1 # save fd 1 (stdout) in fd 3 to restore later
exec > >(tee tests.log) # redirect stdout of this script to go to tee
awk 'BEGIN{print "foo"; exit 1}' # run whatever command you want
ret="$?" # save that command's exit status
exec 1>&3 3>&- # restore stdout and close fd 3
#########
echo "here's the result:"
echo "$ret"
cat tests.log
$ ./tst.sh
foo
here's the result:
1
foo
Obviously just test the value of ret to exit or not, e.g.:
if (( ret != 0 )); then
echo 'the sky is falling' >&2
exit "$ret"
fi
You could also wrap the command call in a function if you like:
$ cat tst.sh
#!/usr/bin/env bash
doit() {
local ret=0
exec 3>&1 # save fd 1 (stdout) in fd 3 to restore later
exec > >(tee tests.log) # redirect stdout of this script to go to tee
awk 'BEGIN{print "foo"; exit 1}' # run whatever command you want
ret="$?" # save that command's exit status
exec 1>&3 3>&- # restore stdout and close fd 3
return "$ret"
}
doit
echo "\$?=$?"
cat tests.log
$ ./tst.sh
foo
$?=1
foo
I would like to monitor the output of a process for a given period of time. The following does everything I want except give me the return value of the command that ran.
cmd='cat <<EOF
My
Three
Lines
EOF
exit 2
'
perl -pe "if (/Hello/) { print \$_; exit 1 }" <(echo "$cmd" | timeout 5 bash)
Does anyone have a way to get that return value? I've looked at other questions here, but none of the answers apply in this situation.
Bash 4.4-Only Answer: Use $! to collect PID, and wait on that PID
Bash only made it possible to collect the exit status of a process substitution in version 4.4. Since we need to have that version anyhow, might as well use automatic FD allocation too. :)
exec {psfd}< <(echo "hello"; exit 3); procsub_pid=$!
cat <&$psfd # read from the process substitution so it can exit
exec {psfd}<&- # close the FD
wait "$procsub_pid" # wait for the process to collect its exit status
echo "$?"
...properly returns:
3
In the context of your code, that might look like:
cmd() { printf '%s\n' My Three Lines; exit 2; }
export -f cmd
exec {psfd}< <(timeout 5 bash -c cmd); ps_pid=$!
perl -pe "if (/Hello/) { print \$_; exit 1 }" <&$psfd
echo "Perl exited with status $?"
wait "$ps_pid"; echo "Process substitution exited with status $?"
...emitting as output:
Perl exited with status 0
Process substitution exited with status 2
Easy Answer: Do Something Else
While it's possible to work around this in very recent shell releases, in general, process substitutions eat exit status. More to the point, there's just no need for them in the example given.
If you set the pipefail shell option, exit status from any component in a pipeline -- not just the last -- will be reflected in the pipeline's exit status; thus, you don't need to use a process substitution to have perl's exit status be honored as well.
#!/usr/bin/env bash
set -o pipefail
cmd() { printf '%s\n' My Three Lines; exit 2; }
export -f cmd
timeout 5 bash -c 'cmd' | perl -pe "if (/Hello/) { print \$_; exit 1 }"
printf '%s\n' \
"Perl exited with status ${PIPESTATUS[1]}" \
"Process substitution exited with status ${PIPESTATUS[0]}"
I have a bash script where I put a block of commands in the background and then want to kill them:
#!/bin/bash
{ sleep 117s; echo "test"; } &
ppid=$!
# do something important
<kill the subprocess somehow>
I need to find a way to kill the subprocess so if it still sleeps then it stops sleeping and "test" won't be printed. I need to do it automatically in the script, so I can't use another shell.
What I already tried so far:
kill $ppid - doesn't kill sleep at all (even with the -9 flag); sleep is reparented to PID 1, but test won't be printed
kill %1 - the same result as above
kill -- -$ppid - it complains kill: (-30847) - No such process (and the subprocess is still there)
pkill -P $ppid - kills sleep, but test still gets printed
How can I do it?
Just change your code to:
{ sleep 117s && echo "test"; } &
From the bash man page:
command1 && command2
command2 is executed if, and only if, command1 returns an exit status
of zero.
Demo:
$ { sleep 117s; echo "test"; } &
[1] 48013
$ pkill -P $!
-bash: line 102: 48014 Terminated sleep 117s
$ test
[1]+ Done { sleep 117s; echo "test"; }
$ { sleep 117s && echo "test"; } &
[1] 50763
$ pkill -P $!
-bash: line 106: 50764 Terminated sleep 117s
Run the command group in its own subshell. Use set -m so the subshell runs in its own process group, then kill that process group:
#!/bin/bash
set -m
( sleep 117s; echo "test"; ) &
ppid=$!
# do something important
kill -- -$ppid
This seems like a pretty trivial thing to do, but I'm very stuck.
To execute something in the background, use &:
>>> sleep 5 &
[1] 21763
>>> #hit enter
[1]+ Done sleep 5
But having a bashrc-sourced background script output job information is pretty frustrating, so you can do this to fix it:
>>> (sleep 5 &)
OK, so now I want to get the PID of sleep for wait or kill. Unfortunately it's running in a subshell, so the typical $! method doesn't work:
>>> echo $!
21763
>>> (sleep 5 &)
>>> echo $!
21763 #hasn't changed
So I thought, maybe I could get the subshell to print its PID in this way:
>>> sleep 5 & echo $!
[1] 21803 #annoying job-start message (stderr)
21803 #from the echo
But when I throw that into the subshell, no matter how I try to capture the subshell's stdout, it appears to block until sleep has finished.
>>> pid=$(sleep 5 & echo $!)
How can I run something in the background, get its PID and stop it from printing job information and "Done"?
Solution A
When summoning the process, redirect the shell's stderr to /dev/null for that invocation. We first duplicate fd 2 so the process itself can still use the original stderr. We do all of this inside a block to make the redirection temporary:
{ sleep 5 2>&3 & pid=$!; } 3>&2 2>/dev/null
Now to prevent the "Done" message from being shown later, we exclude the process from the job table and this is done with the disown command:
{ sleep 5 2>&3 & disown; pid=$!; } 3>&2 2>/dev/null
The disown is not necessary if job control is not enabled. Job control can be disabled with set +m or shopt -u -o monitor.
Solution B
We can also use command substitution to summon the process. The only problem is that the process stays hooked to the pipe created by $() to read stdout, but we can fix this by duplicating the original stdout beforehand and using that file descriptor for the process:
{ pid=$( sleep 200s >&3 & echo $! ); } 3>&1
It may not be necessary if we redirect the process' output somewhere like /dev/null:
pid=$( sleep 200s >/dev/null & echo $! )
Similarly with process substitution:
{ read pid < <(sleep 200s >&3 & echo $!); } 3>&1
Some may say that the redirection is not necessary with process substitution, but the problem is that a process still writing to its stdout dies quickly once the pipe's reader is gone. For example:
$ function x { for A in {1..100}; do echo "$A"; sleep 1s; done }
$ read pid < <(x & echo $!)
$ kill -s 0 "$pid" &>/dev/null && echo "Process active." || echo "Process died."
Process died.
$ read pid < <(x > /dev/null & echo $!)
$ kill -s 0 "$pid" &>/dev/null && echo "Process active." || echo "Process died."
Process active.
Optionally you can just create a permanent duplicate fd with exec 3>&1 so you can just have pid=$( sleep 200s >&3 & echo $! ) on the next lines.
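For example, a sketch of that variant (assuming nothing else in the script uses fd 3):
exec 3>&1                          # one-time permanent duplicate of stdout
pid=$( sleep 200s >&3 & echo $! )  # the background process writes to the real stdout
# ...later lines can reuse fd 3 the same way without duplicating again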
You can use the read builtin to capture the output:
read -r pid < <(sleep 10 & echo $!)
Then:
ps -p $pid
PID TTY TIME CMD
78541 ttys001 0:00.00 sleep 10
set +m disables monitor mode in bash. In other words, it gets rid of the annoying Done message. To enable it again, use set -m.
eg:
$ set +m
$ (sleep 5; echo some) &
[1] 23545 #still prints the job number
#after 5 secs
some
$ #no Done message...
Try this:
pid=$( (sleep 5 & echo $!) | sed 1q )
I found a great way that needs no subshell and keeps the parent-child relationship.
Since [1] 21763 and [1]+ Done sleep 5 both go to stderr, which is &2, we can redirect &2 to /dev/null. Here is the code:
exec 7>&2 2>/dev/null # back up fd 2 as fd 7, and redirect fd 2 to /dev/null
sleep 5 &
wait
exec 2>&7 7>&- # restore fd 2 from fd 7, and close fd 7
See: Using exec
To redirect (and append) stdout and stderr to a file, while also displaying it on the terminal, I do this:
command 2>&1 | tee -a file.txt
However, is there another way to do this such that I get an accurate value for the exit status?
That is, if I test $?, I want to see the exit status of command, not the exit status of tee.
I know that I can use ${PIPESTATUS[0]} here instead of $?, but I am looking for another solution that would not involve having to check PIPESTATUS.
Perhaps you could put the exit value from PIPESTATUS into $?
command 2>&1 | tee -a file.txt ; ( exit ${PIPESTATUS} )
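Since $? now carries the status of command rather than tee, the usual checks work unchanged, e.g. (a sketch using the same placeholder command):
command 2>&1 | tee -a file.txt ; ( exit ${PIPESTATUS} )
status=$?
if [ $status -ne 0 ]; then
echo "command failed with status $status" >&2
exit $status
fi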
Another possibility, with some bash flavours, is to turn on the pipefail option:
pipefail
If set, the return value of a pipeline is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands in the pipeline exit successfully. This option is disabled by default.
set -o pipefail
...
command 2>&1 | tee -a file.txt || echo "Command (or tee?) failed with status $?"
This having been said, the only way of achieving PIPESTATUS functionality portably (e.g. so it'd also work with POSIX sh) is a bit convoluted, i.e. it requires a temp file to propagate a pipe exit status back to the parent shell process:
{ command 2>&1 ; echo $? >"/tmp/~pipestatus.$$" ; } | tee -a file.txt
if [ "`cat \"/tmp/~pipestatus.$$\"`" -ne 0 ] ; then
...
fi
or, encapsulating for reuse:
log2file() {
LOGFILE="$1" ; shift
{ "$#" 2>&1 ; echo $? >"/tmp/~pipestatus.$$" ; } | tee -a "$LOGFILE"
MYPIPESTATUS="`cat \"/tmp/~pipestatus.$$\"`"
rm -f "/tmp/~pipestatus.$$"
return $MYPIPESTATUS
}
log2file file.txt command param1 "param 2" || echo "Command failed with status $?"
or, more generically perhaps:
save_pipe_status() {
STATUS_ID="$1" ; shift
"$#"
echo $? >"/tmp/~pipestatus.$$.$STATUS_ID"
}
get_pipe_status() {
STATUS_ID="$1" ; shift
return `cat "/tmp/~pipestatus.$$.$STATUS_ID"`
}
save_pipe_status my_command_id ./command param1 "param 2" | tee -a file.txt
get_pipe_status my_command_id || echo "Command failed with status $?"
...
rm -f "/tmp/~pipestatus.$$."* # do this in a trap handler, too, to be really clean
There is an arcane POSIX way of doing this:
exec 4>&1; R=$({ { command1; echo $? >&3 ; } | { command2 >&4; } } 3>&1); exec 4>&-
It will set the variable R to the return value of command1, and pipe the output of command1 to command2, whose output is redirected to the stdout of the parent shell.
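Unrolled with comments, the mechanics look like this (the same construct reformatted, with command1 and command2 as the placeholders from above):
exec 4>&1                  # save the shell's real stdout as fd 4
R=$(
  {
    # command1's stdout goes into the pipe; its exit status is written to fd 3
    { command1; echo $? >&3; } | { command2 >&4; }
    # command2's stdout went to fd 4, i.e. the real stdout
  } 3>&1                   # fd 3 feeds the command substitution, so R captures the status
)
exec 4>&-                  # close fd 4 again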
Use process substitution:
command > >( tee -a "$logfile" ) 2>&1
tee runs in a separate process (the process substitution), so $? holds the exit status of command, not tee.
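A sketch of how you might use it (the logfile variable and status check are just illustrative):
command > >( tee -a "$logfile" ) 2>&1
status=$?                  # status of command, not tee
if [ $status -ne 0 ]; then
echo "command failed with status $status" >&2
exit $status
fi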