Little Bash Script: Catch Errors?

I've written (well, remixed to arrive at) this Bash script:
# pkill.sh
trap onexit 1 2 3 15 ERR

function onexit() {
    local exit_status=${1:-$?}
    echo "Problem killing $kill_this"
    exit $exit_status
}

export kill_this=$1

for X in `ps acx | grep -i $1 | awk '{print $1}'`; do
    kill $X
done
It works fine, but any errors are shown on the display. I only want the echo Problem killing... message to show in case of error. How can I "catch" (hide) the error when executing the kill statement?
Disclaimer: Sorry for the long example, but when I make them shorter I inevitably have to explain "what I'm trying to do."

# pkill.sh
trap onexit 1 2 3 15 ERR

function onexit() {
    local exit_status=${1:-$?}
    echo "Problem killing $kill_this"
    exit $exit_status
}

export kill_this=$1

for X in `ps acx | grep -i $1 | awk '{print $1}'`; do
    kill $X 2>/dev/null
    status=$?    # capture kill's status before the test overwrites $?
    if [ $status -ne 0 ]
    then
        onexit $status
    fi
done

You can redirect stderr and stdout to /dev/null via something like pkill.sh > /dev/null 2>&1. If you only want to suppress the output from the kill command, apply it to that line only, e.g., kill $X > /dev/null 2>&1;
This sends the standard output (stdout) of kill $X to /dev/null (that's the > /dev/null), and then additionally sends stderr (FD 2) to wherever stdout (FD 1) now points, i.e. also to /dev/null.
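Note that the order of the two redirections matters: 2>&1 duplicates whatever FD 1 points at when that word is evaluated. A minimal sketch to illustrate (the path is assumed not to exist):

ls /no/such/path > /dev/null 2>&1   # silent: stdout goes to /dev/null first,
                                    # then stderr is pointed at the same place
ls /no/such/path 2>&1 > /dev/null   # error still prints: stderr was duplicated
                                    # from the terminal before stdout moved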

For my own notes, here's my new code using Paul Creasey's answer:
# pkill.sh: this is dangerous and should not be run as root!
trap onexit 1 2 3 15 ERR

#--- onexit() -----------------------------------------------------
# @param $1 integer (optional) Exit status. If not set, use `$?'
function onexit() {
    local exit_status=${1:-$?}
    echo "Problem killing $kill_this"
    exit $exit_status
}

export kill_this=$1

for X in `ps acx | grep -i "$1" | awk '{print $1}'`; do
    kill $X 2>/dev/null
done
Thanks all!
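Worth noting (an editorial aside, not from the thread): the final version needs no explicit if because the ERR trap already fires when kill fails; the 2>/dev/null only hides kill's own message. A minimal sketch of the mechanism, using a PID that presumably doesn't exist:

#!/bin/bash
trap 'echo "Problem killing process (status $?)"; exit 1' ERR
kill -0 99999999 2>/dev/null   # fails; its message is hidden, but ERR fires
echo "never reached"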

Related

Execute a command in a function displaying real-time output

In some Bash scripts I execute commands while preserving their real-time output, like this:
exec 5>&1
output=$(ls -1 2>&1 | tee /dev/fd/5; exit ${PIPESTATUS[0]})
status=$?
I moved this piece of code into a function to make it reusable, like this:
execute() {
    # 1 - Execute backup
    echo "Executing command 'very_long_command'..."
    exec 5>&1
    cmd="very_long_command"
    output=$($cmd 2>&1 | tee /dev/fd/5; exit ${PIPESTATUS[0]})
    status=$?
    echo $output
    echo "very_long_command exited with status $status."
    return $status
}
When I call the function with exec_output="$(execute)" I can of course get its output, but what I still need is to see the output of very_long_command while it executes, not all at once at the end.
Could you help me achieve this?
Thanks to @Charles Duffy I solved my problem by redirecting FD 5 to stderr:
execute() {
    exec 5>&2
    output=$(ls -1 2>&1 | tee /dev/fd/5; exit ${PIPESTATUS[0]})
    status=$?
    echo "$output"
    return $status
}
output="$(execute)"
echo "Function output:"
printf "%s\n" "$output"

trap ERR doesn't work with pipes

I am trying to make a system backup script that uses an ERR trap. I realized the trap doesn't get called when commands are part of a pipeline (|).
Here are some parts of my code that don't work with the ERR trap:
OpenFiles=$(lsof "$Source" | wc -l)
PackagesList=$(dpkg --get-selections | awk '!/deinstall|purge|hold/ {print $1}' | tee "$FilePackagesList")
How can I get this to work without using if [ "$?" -eq 0 ]; then or similar checks after every command? Avoiding those is the reason I declared the trap in the first place.
Here is the script ...
root@Lian-Li:~# cat /usr/local/bin/create_incremental_backup_of_system.sh
#!/bin/bash
# Create an incremental GNU-standard backup of important system files.
# This script works with Debian Jessie and newer systems.
# Created for my lian-li NAS 2016-11-27.

MailTo="admin@example.com" # Mail address of an admin
Source="boot etc root usr/local usr/lib/cgi-bin var/www"
BackupDirectory=/media/hdd1/backups/lian-li
SubDir="system.d"
FileTimeStamp=$(date "+%Y%m%d%H%M%S")
FileName=$(uname -n)
File="${BackupDirectory}/${SubDir}/${FileName}-${FileTimeStamp}.tgz"
FileIncremental="${BackupDirectory}/${SubDir}/${FileName}.gtar"
FilePackagesList="${BackupDirectory}/${SubDir}/installed_packages_on_${FileName}.txt"

# have2do ...
# Backup rotate

MailContent="None"
TimeStamp=$(date "+%F %T") # This format "2011-12-31 23:59:59" is needed to read the journal

exec 1> >(logger -i -s -t "$0" -p 3) 2>&1 # all error messages are redirected to the syslog journal and then to stdout
trap "BriefExit" ERR # inform an admin (via sendmail) when an error occurs, then exit the script

function BriefExit(){
    rm -f "$File"
    if [ "$MailContent" = "None" ]
    then
        case "$LANG" in
            de_DE.UTF-8)
                echo "Beende Skript aufgrund vorheriger Fehler." 1>&2
                ;;
            *)
                echo "Stopping script because of previous error(s)." 1>&2
                ;;
        esac
        MailContent=$(journalctl -p 3 -o "short" --since="$TimeStamp" --no-pager)
        ScriptName="${0##*/}"
        SystemName=$(uname -n)
        MailSubject="${SystemName}: ${ScriptName}"
        echo -e "Subject: ${MailSubject}\n\n${MailContent}\n" | sendmail "$MailTo"
    fi
    exit 1
}

if [ ! -d "${BackupDirectory}/${SubDir}" ]
then
    mkdir -p "${BackupDirectory}/${SubDir}"
fi

LoopCount=0
OpenFiles=1
cd /

while [ "$OpenFiles" -ne 0 ]
do
    if [ "$LoopCount" -le 180 ]
    then
        sleep 1
        OpenFiles=$(lsof $Source | wc -l)
        LoopCount=$(($LoopCount + 1))
    else
        echo "Closing script. Reason: can't create an incremental backup because some files are open." 1>&2
        BriefExit
    fi
done

tar -cpzf "$File" -g "$FileIncremental" $Source
chmod 0700 "$File"

PackagesList=$(dpkg --get-selections | awk '!/deinstall|purge|hold/ {print $1}' | tee "$FilePackagesList")

while read -r PackageName
do
    case "$PackageName" in
        minidlna)
            # Code ...
            ;;
        slapd)
            # Code ...
            ;;
    esac
done <<< "$PackagesList"

exit 0
This isn't a problem with ERR traps at all, or with command substitutions, but with pipelines.
false | true
returns true, unless the pipefail option is set.
Thus, in OpenFiles=$(lsof "$Source" | wc -l), only a failure in wc will cause the pipeline to be considered a failure, and in PackagesList=$(dpkg --get-selections | awk '!/deinstall|purge|hold/ {print $1}' | tee "$FilePackagesList"), only a failure in tee will cause the command as a whole to be considered failed.
Put the command set -o pipefail at the top of your script if you want a failure from any pipeline component (as opposed to the last component alone) to cause the command as a whole to be considered failed -- and note the other caveats for ERR traps given in BashFAQ #105.
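A minimal sketch of the difference (editorial; any failing left-hand command works here):

#!/bin/bash
trap 'echo "ERR trap fired, status $?"' ERR

false | true    # trap does not fire: the pipeline's status is true's (0)

set -o pipefail
false | true    # trap fires: the pipeline's status is now false's (1)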
Another alternative is to look at the status for each stage in the pipeline:
# cat test_bash_return.bash
true | true | false | true
echo "${PIPESTATUS[#]}"
# ./test_bash_return.bash
0 0 1 0
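To act on those values instead of just printing them, an editorial sketch (PIPESTATUS must be read immediately, before another command overwrites it):

#!/bin/bash
true | false | true
i=0
for rc in "${PIPESTATUS[@]}"; do    # expanded once, before the loop body runs
    [ "$rc" -ne 0 ] && echo "pipeline stage $i failed with status $rc" >&2
    i=$((i + 1))
done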

In bash how do I exit a script from a function that is piped by tee?

I'm trying to understand why, whenever I run function 2>&1 | tee -a $LOG, the function ends up in a subshell that can't be exited with a simple exit 1 (without tee it works fine). Here is an example:
#!/bin/bash
LOG=/root/log.log

function first()
{
    echo "Function 1 - I WANT to see this."
    exit 1
}

function second()
{
    echo "Function 2 - I DON'T WANT to see this."
    exit 1
}

first 2>&1 | tee -a $LOG
second 2>&1 | tee -a $LOG
Output:
[root@linuxbox ~]# ./1.sh
Function 1 - I WANT to see this.
Function 2 - I DON'T WANT to see this.
So, if I remove the | tee -a $LOG part, it works as expected (the script exits in the first function).
Can you please explain how to overcome this and exit properly from the function while still being able to tee the output?
If you create a pipeline, the function is run in a subshell, and if you exit from a subshell, only the subshell will be affected, not the parent shell.
printPid(){ echo $BASHPID; }
printPid #some value
printPid #same value
printPid | tee #an implicit subshell -- different value
( printPid ) #an explicit subshell -- also a different value
If, instead of aFunction | tee you do:
aFunction > >(tee)
it'll be essentially the same, except aFunction won't run in a subshell, and thus will be able to affect the current environment (set variables, call exit, etc.).
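Applied to the script above, a sketch (editorial; process substitution is bash-only, and tee's last lines may land after the prompt since the substituted process isn't waited for):

#!/bin/bash
LOG=/root/log.log

first()
{
    echo "Function 1 - I WANT to see this."
    exit 1
}

# first runs in the current shell, so its exit ends the whole script;
# only tee lives inside the process substitution
first > >(tee -a "$LOG") 2>&1
echo "never reached"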
Use PIPESTATUS to retrieve the exit status of the first command in the pipeline, saving it before the next command overwrites the array:
first 2>&1 | tee -a $LOG; rc=${PIPESTATUS[0]}; [ "$rc" -eq 0 ] || exit "$rc"
second 2>&1 | tee -a $LOG; rc=${PIPESTATUS[0]}; [ "$rc" -eq 0 ] || exit "$rc"
You can tell bash to fail if anything in the pipeline fails with set -e -o pipefail:
$ cat test.sh
#!/bin/bash
LOG=~/log.log

set -e -o pipefail

function first()
{
    echo "Function 1 - I WANT to see this."
    exit 1
}

function second()
{
    echo "Function 2 - I DON'T WANT to see this."
    exit 1
}

first 2>&1 | tee -a $LOG
second 2>&1 | tee -a $LOG
$ ./test.sh
Function 1 - I WANT to see this.

Bash variable change doesn't persist

I have a short bash script to check to see if a Python program is running. The program writes out a PID file when it runs, so comparing this to the current list of running processes gives me what I need. But I'm having a problem with a variable being changed and then apparently changing back! Here's the script:
#!/bin/bash
# Test whether Home Server is currently running

PIDFILE=/tmp/montSvr.pid
isRunning=0

# does a pid file exist?
if [ -f "$PIDFILE" ]; then
    # pid file exists
    # now get contents of pid file
    cat $PIDFILE | while read PID; do
        if [ $PID != "" ]; then
            PSGREP=$(ps -A | grep $PID | awk '{print $1}')
            if [ -n "$PSGREP" ]; then
                isRunning=1
                echo "RUNNING: $isRunning"
            fi
        fi
    done
fi

echo "Running: $isRunning"
exit $isRunning
The output I get, when the Python script is running, is:
RUNNING: 1
Running: 0
And the exit value of the bash script is 0. So isRunning is getting changed within all those if statements (i.e., the code is performing as expected), but then somehow isRunning reverts to 0 again. Confused...
Commands after a pipe | are run in a subshell. Changes to variable values in a subshell do not propagate to the parent shell.
Solution: change your loop to
while read PID; do
    # ...
done < $PIDFILE
It's the pipe that is the problem. Using a pipe in this way means that the loop runs in a sub-shell, with its own environment. Kill the cat and use this syntax instead:
while read PID; do
    if [ "$PID" != "" ]; then    # quote $PID so an empty line can't break the test
        PSGREP=$(ps -A | grep $PID | awk '{print $1}')
        if [ -n "$PSGREP" ]; then
            isRunning=1
            echo "RUNNING: $isRunning"
        fi
    fi
done < "$PIDFILE"

bash: redirect (and append) stdout and stderr to file and terminal and get proper exit status

To redirect (and append) stdout and stderr to a file, while also displaying it on the terminal, I do this:
command 2>&1 | tee -a file.txt
However, is there another way to do this such that I get an accurate value for the exit status?
That is, if I test $?, I want to see the exit status of command, not the exit status of tee.
I know that I can use ${PIPESTATUS[0]} here instead of $?, but I am looking for another solution that would not involve having to check PIPESTATUS.
Perhaps you could put the exit value from PIPESTATUS into $?:
command 2>&1 | tee -a file.txt ; ( exit ${PIPESTATUS} )
(Unsubscripted, ${PIPESTATUS} expands to its first element, i.e. the status of command.)
Another possibility, with some bash flavours, is to turn on the pipefail option:
pipefail
    If set, the return value of a pipeline is the value of the last
    (rightmost) command to exit with a non-zero status, or zero if all
    commands in the pipeline exit successfully. This option is disabled
    by default.
set -o pipefail
...
command 2>&1 | tee -a file.txt || echo "Command (or tee?) failed with status $?"
This having been said, the only way of achieving PIPESTATUS functionality portably (e.g. so it'd also work with POSIX sh) is a bit convoluted, i.e. it requires a temp file to propagate a pipe exit status back to the parent shell process:
{ command 2>&1 ; echo $? >"/tmp/~pipestatus.$$" ; } | tee -a file.txt
if [ "`cat \"/tmp/~pipestatus.$$\"`" -ne 0 ] ; then
...
fi
or, encapsulating for reuse:
log2file() {
    LOGFILE="$1" ; shift
    { "$@" 2>&1 ; echo $? >"/tmp/~pipestatus.$$" ; } | tee -a "$LOGFILE"
    MYPIPESTATUS="`cat \"/tmp/~pipestatus.$$\"`"
    rm -f "/tmp/~pipestatus.$$"
    return $MYPIPESTATUS
}
log2file file.txt command param1 "param 2" || echo "Command failed with status $?"
or, more generically perhaps:
save_pipe_status() {
    STATUS_ID="$1" ; shift
    "$@"
    echo $? >"/tmp/~pipestatus.$$.$STATUS_ID"
}

get_pipe_status() {
    STATUS_ID="$1" ; shift
    return `cat "/tmp/~pipestatus.$$.$STATUS_ID"`
}
save_pipe_status my_command_id ./command param1 "param 2" | tee -a file.txt
get_pipe_status my_command_id || echo "Command failed with status $?"
...
rm -f "/tmp/~pipestatus.$$."* # do this in a trap handler, too, to be really clean
There is an arcane POSIX way of doing this:
exec 4>&1; R=$({ { command1; echo $? >&3 ; } | { command2 >&4; }; } 3>&1); exec 4>&-
It will set the variable R to the return value of command1, and pipe the output of command1 to command2, whose output is redirected to the stdout of the parent shell.
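A commented sketch of the same FD juggling (editorial; ls and tee stand in for command1 and command2):

exec 4>&1    # FD 4 := the current stdout (the terminal)
R=$({ { ls /etc; echo $? >&3; } | { tee -a file.txt >&4; }; } 3>&1)
exec 4>&-    # close FD 4 again
# Inside the $( ): FD 3 points at the capture pipe, so only ls's exit
# status is captured into R; tee writes to FD 4, i.e. straight to the
# terminal, bypassing the capture.
echo "ls exited with status $R"    # command1's status, not tee's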
Use process substitution:
command > >( tee -a "$logfile" ) 2>&1
tee runs in a subshell so $? holds the exit status of command.
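A quick sketch of that in action (editorial; the log path is illustrative):

#!/bin/bash
logfile=/tmp/demo.log
ls /nonexistent/path > >(tee -a "$logfile") 2>&1
echo "status: $?"    # ls's (nonzero) status, not tee's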
