Try/catch-like behavior: trigger a URL on bash error - bash

I would like something like a big try/catch in a bash script (I want to trigger a URL if something goes wrong). Something like this:
Do task 1
Do task 2
...
Do task 99
If any of these tasks fails, stop the script (don't execute task >= 5 if task 4 failed), then trigger a URL (with curl or whatever)
I know set -e exists, but it just stops the script (that's only half the job). Maybe there is something with trap, but I did not understand what I read about it. Is there a simple example for this case?
My question is obviously not about triggering the URL, but about how to catch the error and then run another part of the script.

Use set -e and you can trap the ERR pseudo-signal and execute your curl statement if your script exits with an error. If all the tasks succeed, the ERR trap will not be triggered.
set -e

on_error () {
    curl "$some_url"
}
trap on_error ERR

task_1
task_2
# ...
task_last
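
A self-contained sketch of the same pattern, with placeholder tasks (the URL and task commands here are illustrative, not from the original script):

#!/usr/bin/env bash
set -e

some_url="https://example.com/alert"   # placeholder monitoring endpoint

on_error () {
    curl "$some_url"
}
trap on_error ERR

echo "task 1"    # succeeds
false            # fails: the ERR trap runs curl, then set -e stops the script
echo "task 99"   # never reached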

trap_exit() {
    CODE=$?
    if [ $CODE -ne 0 ]; then
        echo "Failed! Return code: $CODE"
    fi
}
trap trap_exit EXIT
This will execute trap_exit whenever your script ends (the EXIT argument) and check whether something broke (the $? part). Note that, unlike an ERR trap, an EXIT trap also fires on success, which is why the handler has to check $? itself.
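
The two traps compose, if you want an alert on failure plus a handler that runs either way; a minimal sketch (the URL is a placeholder):

set -e

on_error() { curl "$some_url"; }        # ERR: fires only when a command fails
on_exit()  { echo "exit status: $?"; }  # EXIT: fires on every exit, success or not

trap on_error ERR
trap on_exit EXIT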

Related

exit from shell script not working as expected

I have a function that is called twice with two different parameters. The function performs some checks and exits execution if a check fails; if the check succeeds, execution continues. However, in my case execution does not stop, and the function is called a second time.
I am calling the function each time and storing the returned value in variables. From what I have understood looking through answers on Google and Stack Overflow, the problem seems to be that when I call the function and store its output in a variable, it runs in a subshell, and the exit only exits that subshell while the script continues executing. I am providing the code below:
check_profile_path() {
    local profileToCheck=$1
    if [ -e "$pdfToolBoxPath/used_profiles/Check$profileToCheck.kfpx" ]; then
        return 0
    else
        outputArr[status]="failed"
        outputArr[message]="profile configuration path for $profileToCheck not found at specified path"
        exitCode=1
        end_execution
    fi
}
check_profile() {
    local profileName=$1
    check_profile_path $profileName
    local containedText=$($someApplicationPath $somePath/used_profiles/Check$profileName.kfpx $fileToCheck)
    echo "<<<<<<<<<< $profileName >>>>>>>>>>"
    echo "$containedText"
    echo ""
}
end_execution() {
    jsonResult=$(create_json)
    echo $jsonResult
    exit $exitCode
}
colorSpaceProfileName="ColorSpace"
resolutionProfileName="Resolution"
colorSpaceCheckResult=$(check_profile $colorSpaceProfileName)
echo "$colorSpaceCheckResult"
resolutionCheckResult=$(check_profile $resolutionProfileName)
echo "$resolutionCheckResult"
The output I receive from this is:
{"status":"failed","message":"profile configuration path for ColorSpace not found at specified path"}
<<<<<<<<<< Resolution >>>>>>>>>>
ProcessID 8..........
while I expect it to be just:
{"status":"failed","message":"profile configuration path for ColorSpace not found at specified path"}
I cannot figure out the proper syntax. Please suggest.
With exit, the current process is ended. You are invoking your functions as, e.g., $(check_profile $colorSpaceProfileName), which means they run in their own process, and hence the exit inside the function only leaves that process.
Here are two workarounds:
Don't collect the output of the function this way. Collect it inside the function and store it in a variable, which you can then read on the calling side.
Arrange for the functions to set an exit code depending on whether the caller should exit or not, and evaluate the exit code of the function at the calling side, e.g. something like:
colorSpaceCheckResult=$(check_profile $colorSpaceProfileName)
(( $? == 2 )) && exit
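
A minimal sketch of the second workaround, using a simplified, hypothetical check_profile (the path and the JSON message are placeholders, not the real logic):

check_profile() {
    local profileName=$1
    if [ ! -e "used_profiles/Check$profileName.kfpx" ]; then
        echo '{"status":"failed","message":"profile not found"}'
        return 2   # distinct code meaning "caller should abort"
    fi
    echo "<<<<<<<<<< $profileName >>>>>>>>>>"
}

colorSpaceCheckResult=$(check_profile "ColorSpace")
rc=$?                         # exit status of the command substitution
echo "$colorSpaceCheckResult"
(( rc == 2 )) && exit 1       # stop before the second check runs

resolutionCheckResult=$(check_profile "Resolution")
echo "$resolutionCheckResult"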

Bash - Log all commands and exit codes in a script

I have a long (~2,000 lines) script that I'm trying to log for future debugging. Right now I have:
function log_with_time()
{
    while read a; do
        echo `date +'%H:%M:%S.%4N '` " $a" >> $LOGFILE
    done
}
exec 7> >(log_with_time)
BASH_XTRACEFD=7
PS4=' exit($?)ln:$LINENO: '
set -x
echo "helloWorld 1"
which gives me very nice logging for any and all commands that are run:
15:18:03.6359 exit(0)ln:28: echo 'helloWorld 1'
The issue that I'm running into is that xtrace seems to be asynchronous. With longer scripts, the log times fall behind the actual time the commands are called, and the exit code doesn't match the logged command.
There has to be a better way to do this but I'd be happy if I could just synchronize xtrace.
...
tldr: How can I generally log the time, command and exit code for all commands in a script?
...
(First time posting, feedback appreciated)
UPDATE:
exec {BASH_XTRACEFD}>>$LOGFILE
PS4=' time:$(date +%H:%M:%S.%4N) ln:$LINENO: '
set -x
fail()
{
    echo "fail" >> $LOGFILE
    return 1
}
trap 'echo exit:$? >> $LOGFILE' DEBUG
fail
solves all of my synchronization issues. Exit codes and timestamps are working beautifully. My only issue now is one of formatting: the trap itself gets reported by xtrace.
time:18:30:07.6080 ln:27: fail
time:18:30:07.6089 ln:12: echo fail
fail
time:18:30:07.6126 ln:13: return 1
time:18:30:07.6134 ln:28: echo exit:1
exit:1
I've tried set +x in the trap, but then set +x itself gets logged. If I could find a way to omit one line from xtrace, this log would be perfect.
The async behavior is coming from the process substitution -- anything in >(...) is running in its own subshell on the other end of a FIFO. Since it's a separate process, it's inherently unsynchronized.
You don't need log_with_time here at all, though, and so you don't need BASH_XTRACEFD redirecting to a process substitution in the first place. Consider:
# aside: $(date ...) has a *huge* amount of performance overhead here. Personally, I'd
# advise against using it, unless you really need all that precision; $SECONDS will
# be orders-of-magnitude cheaper.
PS4=' prior-exit:$? time:$(date +%H:%M:%S.%4N) ln:$LINENO: '
...thereafter:
$ true
prior-exit:0 time:16:01:17.2509 ln:28: true
$ false
prior-exit:0 time:16:01:18.4242 ln:29: false
$ false
prior-exit:1 time:16:01:19.2963 ln:30: false
$ true
prior-exit:1 time:16:01:20.2159 ln:31: true
$ true
prior-exit:0 time:16:01:20.8650 ln:32: true
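
Following the comment in the snippet above, a cheaper variant (assuming whole-second resolution from bash's built-in $SECONDS counter is acceptable):

PS4=' prior-exit:$? t+${SECONDS}s ln:$LINENO: '
set -x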
Per conversation with Charles Duffy in the comments, to whom all credit is given:
Process substitution >(...) is asynchronous, allowing the log writing to fall behind and out of sync with the xtrace.
Instead use:
exec {BASH_XTRACEFD}>>$LOGFILE
PS4=' time:$(date +%H:%M:%S.%4N) ln:$LINENO: '
for synchronously logging the time and line.
Furthermore, xtrace is triggered before running the command, making it a bad candidate for capturing exit codes. Instead use:
trap 'echo exit:$? >> $LOGFILE' DEBUG
to log the exit codes. The DEBUG trap fires before each simple command, at which point $? still holds the exit code of the command that just completed. Note that, unlike xtrace, a DEBUG trap is not inherited by shell functions by default, so it won't report every step inside a function call the way xtrace does.
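If you do want the DEBUG trap to fire inside functions as well, set -o functrace makes it inherited by them; a minimal sketch (logging to stdout rather than $LOGFILE for brevity):

#!/usr/bin/env bash
set -o functrace   # functions inherit the DEBUG trap

trap 'echo "last-exit:$? about to run: $BASH_COMMAND"' DEBUG

inner() {
    true
    false
}
inner
echo done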
No solution yet for omitting the trap from xtrace, but it's good enough:
LOGFILE="SomeFile.log"
exec {BASH_XTRACEFD}>>$LOGFILE
PS4=' time:$(date +%H:%M:%S.%4N) ln:$LINENO: '
set -x
fail() # test function that returns 1
{
    echo "fail" >> $LOGFILE
    return 1
}
success() # test function that returns 0
{
    echo "success" >> $LOGFILE
    return 0
}
trap 'echo $? >> $LOGFILE' DEBUG
fail
success
echo "complete"
yields:
time:14:10:22.2686 ln:21: trap 'echo $? >> $LOGFILE' DEBUG
time:14:10:22.2693 ln:23: echo 0
0
time:14:10:22.2736 ln:23: fail
time:14:10:22.2741 ln:12: echo fail
fail
time:14:10:22.2775 ln:13: return 1
time:14:10:22.2782 ln:24: echo 1
1
time:14:10:22.2830 ln:24: success
time:14:10:22.2836 ln:17: echo success
success
time:14:10:22.2873 ln:18: return 0
time:14:10:22.2881 ln:26: echo 0
0
time:14:10:22.2912 ln:26: echo complete

How to get the real line number of a failing Bash command?

In the process of coming up with a way to catch errors in my Bash scripts, I've been experimenting with "set -e", "set -E", and the "trap" command. Along the way, I've discovered some strange behavior in how $LINENO is evaluated in the context of functions. First, here's a stripped-down version of how I'm trying to log errors:
#!/bin/bash

set -E
trap 'echo Failed on line: $LINENO at command: $BASH_COMMAND && exit $?' ERR
Now, the behavior is different based on where the failure occurs. For example, if I follow the above with:
echo "Should fail at: $((LINENO + 1))"
false
I get the following output:
Should fail at: 6
Failed on line: 6 at command: false
Everything is as expected. Line 6 is the line containing the single command "false". But if I wrap up my failing command in a function and call it like this:
function failure {
    echo "Should fail at $((LINENO + 1))"
    false
}
failure
Then I get the following output:
Should fail at 7
Failed on line: 5 at command: false
As you can see, $BASH_COMMAND contains the correct failing command, "false", but $LINENO reports the first line of the "failure" function definition as the current line. That makes no sense to me. Is there a way to get the line number of the line referenced in $BASH_COMMAND?
It's possible this behavior is specific to older versions of Bash. I'm stuck on 3.2.51 for the time being. If the behavior has changed in later releases, it would still be nice to know if there's a workaround to get the value I want on 3.2.51.
EDIT: I'm afraid some people are confused because I broke up my example into chunks. Let me try to clarify what I have, what I'm getting, and what I want.
This is my script:
#!/bin/bash

set -E

function handle_error {
    local retval=$?
    local line=$1
    echo "Failed at $line: $BASH_COMMAND"
    exit $retval
}
trap 'handle_error $LINENO' ERR
function fail {
    echo "I expect the next line to be the failing line: $((LINENO + 1))"
    command_that_fails
}
fail
Now, what I expect is the following output:
I expect the next line to be the failing line: 14
Failed at 14: command_that_fails
Now, what I get is the following output:
I expect the next line to be the failing line: 14
Failed at 12: command_that_fails
BUT line 12 is not command_that_fails. Line 12 is function fail {, which is somewhat less helpful. I have also examined the ${BASH_LINENO[@]} array, and it does not have an entry for line 14.
For bash releases prior to 4.1, a special level of awful, hacky, performance-killing hell is needed to work around an issue wherein, on errors, the system jumps back to the function definition point before invoking an error handler.
#!/bin/bash

set -E
set -o functrace

function handle_error {
    local retval=$?
    local line=${last_lineno:-$1}
    echo "Failed at $line: $BASH_COMMAND"
    echo "Trace: " "$@"
    exit $retval
}

if (( ${BASH_VERSION%%.*} <= 3 )) || [[ ${BASH_VERSION%.*} = 4.0 ]]; then
    trap '[[ $FUNCNAME = handle_error ]] || { last_lineno=$real_lineno; real_lineno=$LINENO; }' DEBUG
fi
trap 'handle_error $LINENO ${BASH_LINENO[@]}' ERR

fail() {
    echo "I expect the next line to be the failing line: $((LINENO + 1))"
    command_that_fails
}

fail
BASH_LINENO is an array. You can refer to different values in it: ${BASH_LINENO[1]}, ${BASH_LINENO[2]}, etc. to back up the stack. (Positions in this array line up with those in the BASH_SOURCE array, if you want to get fancy and actually print a stack trace).
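For example, a minimal stack-trace sketch using the caller builtin, which walks the same stack that BASH_LINENO and BASH_SOURCE expose (the function names here are illustrative):

#!/usr/bin/env bash
set -eE   # exit on error; functions inherit the ERR trap

# caller N prints "line function file" for stack frame N.
print_stack() {
    local frame=0
    while caller "$frame"; do
        frame=$((frame + 1))
    done
    return 0
}

trap 'echo "Failed: $BASH_COMMAND"; print_stack' ERR

inner() { false; }
outer() { inner; }
outer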
Even better, though, you can just inject the correct line number in your trap:
failure() {
    local lineno=$1
    echo "Failed at $lineno"
}
trap 'failure ${LINENO}' ERR
You might also find my prior answer at https://stackoverflow.com/a/185900/14122 (with a more complete error-handling example) interesting.
That behaviour is quite reasonable.
The whole picture of the call stack provides comprehensive information whenever an error occurs. Your example demonstrated a good error message; you could see where the error actually occurred and which line triggered the function, etc.
If the interpreter/compiler couldn't precisely indicate where the error actually occurs, you would be more easily confused.

Where does the exit status go after trap/return?

I was playing around with using trap inside a function because of this question, and came up with this secondary question. Given the following code:
d() {
    trap 'return' ERR
    false
    echo hi
}
If I run d, the trap causes the shell to return from the function without printing 'hi'. So far so good. But if I run it a second time, I get a message from the shell:
-bash: return: can only `return' from a function or sourced script
At first, I assumed this meant the ERR sig was happening twice: Once when false gave a nonzero exit status (inside the function) and again when the function itself returned with a nonzero exit status (outside the function). But that hypothesis doesn't hold up against this test:
e() {
    trap 'echo +;return' ERR
    false
    echo hi
}
If I run the above, no matter how often I run it, I no longer get the can only return from a function or sourced script warning from bash. Why does the shell treat a compound command differently from a simple command in a trap argument?
My goal was to maintain the actual exit status of the command that caused the function to exit, but I think whatever is causing the above behavior also makes capturing the exit status complicated:
f() {
    trap '
        local s=$?
        echo $s
        return $s' ERR
    false
    echo hi
}
bash> f; echo $?
1
0
Wat? Can someone please explain why $s expands to two different values here, and (if it turns out to have the same cause) the above difference between return and echo +; return?
Your first conclusion was right: the ERR trap fires twice.
During the first execution of d, you define a trap globally. This affects subsequent commands (the current call of d is not affected).
During the second execution of d, you define the trap again (not very useful); the call to false fails, so the handler defined by the trap executes. Then we return to the parent shell, where d fails too, so the trap executes again.
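You can see that the trap outlives the function call; for example, in an interactive shell:

d               # installs the ERR trap globally, then returns early
trap -p ERR     # prints: trap -- 'return' ERR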
Just a remark: ERR can be given as a 'sigspec' argument, but ERR is not a signal ;-) From the bash manual:
If a sigspec is ERR, the command arg is executed whenever a simple command has a non-zero exit status, subject to the following conditions. [...]
These are the same conditions obeyed by the errexit option.
With the function 'e', the ERR handler executes the 'echo' command, which succeeds. That's why the 'e' function doesn't fail, and why the ERR handler is not called twice in this case.
If you try "e; echo $?", you will read "0".
Then I tried your 'f' function, and I observed the same behavior (and I was surprised). The cause is NOT a bad expansion of "$s". If you hardcode a value, you will observe that the argument given to the 'return' statement is ignored when it is executed by the trap handler.
I don't know if this is normal behavior or a bug in bash... Or maybe a trick to avoid an infinite loop in the interpreter :-)
By the way, it's not a good use of trap in my opinion. We can avoid the side effects of trap by creating a subshell. In this case we avoid the error in the parent shell and keep the exit code of the inner function:
g() (
    trap 'return' ERR
    false
    echo hi
)
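With this version, g; echo $? should print 1 with no warning, however often you run it: the trap and the return both live inside the subshell, and the subshell's exit status (that of the failed false) becomes the function's exit status.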

Increment a global variable in Bash

Here's a shell script:
globvar=0
function myfunc {
    let globvar=globvar+1
    echo "myfunc: $globvar"
}
myfunc
echo "something" | myfunc
echo "Global: $globvar"
When called, it prints out the following:
$ sh zzz.sh
myfunc: 1
myfunc: 2
Global: 1
$ bash zzz.sh
myfunc: 1
myfunc: 2
Global: 1
$ zsh zzz.sh
myfunc: 1
myfunc: 2
Global: 2
The question is: why does this happen, and which behavior is correct?
P.S. I have a strange feeling that the function behind the pipe is called in a forked shell... So, is there a simple workaround?
P.P.S. This function is a simple test wrapper. It runs a test application and analyzes its output, then increments the $PASSED or $FAILED variable. Finally, you get the number of passed/failed tests in global variables. The usage is like:
test-util << EOF | myfunc
input for test #1
EOF
test-util << EOF | myfunc
input for test #2
EOF
echo "Passed: $PASSED, failed: $FAILED"
Korn shell gives the same results as zsh, by the way.
Please see BashFAQ/024. Pipes create subshells in Bash and variables are lost when subshells exit.
Based on your example, I would restructure it something like this:
globvar=0
function myfunc {
    echo $(($1 + 1))
}
myfunc "$globvar"
globvar=$(echo "something" | myfunc "$globvar")
Piping something into myfunc in sh or bash causes a new shell to spawn. You can confirm this by adding a long sleep in myfunc: while it's sleeping, call ps and you'll see a subprocess. When the function returns, that subshell exits without changing the value in the parent process.
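A quick way to see the fork without reaching for ps, assuming bash 4+ for $BASHPID (which, unlike $$, reports the PID of the executing subshell):

myfunc() {
    echo "myfunc in PID $BASHPID (script PID $$)"
}

myfunc                      # same PID: runs in the current shell
echo "something" | myfunc   # different PID: runs in a pipeline subshell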
If you really need that value to be changed, you'll need to return a value from the function and check $PIPESTATUS after, I guess, like this:
globvar=0
function myfunc {
    let globvar=globvar+1
    echo "myfunc: $globvar"
    return $globvar   # note: exit status is 8 bits, so this only works for counts < 256
}
myfunc
echo "something" | myfunc
globvar=${PIPESTATUS[1]}
echo "Global: $globvar"
The problem is 'which end of a pipeline using built-ins is executed by the original process?'
In zsh, it looks like the last command in the pipeline is executed by the main shell script when the command is a function or built-in.
In Bash (and sh is likely to be a link to Bash if you're on Linux), then either both commands are run in a sub-shell or the first command is run by the main process and the others are run by sub-shells.
Clearly, when the function is run in a sub-shell, it does not affect the variable in the parent shell (only the global in the sub-shell).
Consider adding an extra test:
echo Something | { myfunc; echo $globvar; }
echo $globvar
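
For what it's worth, bash 4.2 and newer can be made to behave like zsh here: with job control off (the default in non-interactive scripts), shopt -s lastpipe runs the last element of a pipeline in the current shell. A minimal sketch:

#!/usr/bin/env bash
shopt -s lastpipe   # bash 4.2+; requires job control to be off

globvar=0
myfunc() {
    globvar=$((globvar + 1))
}

echo "something" | myfunc   # myfunc now runs in the current shell
echo "Global: $globvar"     # prints: Global: 1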
