While debugging a bash shell script I spotted a mistake I had made in part of my code. The mistake could be a variable name, a variable value, or generally a line (or lines) of code.
Is it possible to correct it while still in debugging mode, or is the only option to exit debug mode, correct the mistake(s), and rerun the debugging process? An option for correcting mistakes "on the fly" would be very helpful, especially for scripts with long running times, where otherwise you have to repeat the whole run from the beginning multiple times (if there are many mistakes).
For example:
#!/bin/bash
set -x # debugging
trap read debug
a="1" # wrong value, should be 2
b="5"
sum=$(bc <<< "$a + $b")
set +x
The above script has a trap that executes one line of code at a time and continues to the next line after you press Enter.
During debugging, suppose I realize that a=1 but it should be something else, let's say a=2. The next command, b=5, has not been executed yet because of the trap, so I was thinking of something like inserting a=2 just below a=1 and then pressing Enter to continue debugging.
Something like the code below:
#!/bin/bash
set -x # debugging
trap read debug
a="1" # wrong value, should be 2
a="2" # <-This is the value that a should have
b="5"
sum=$(bc <<< "$a + $b")
set +x
This approach does not work, I guess because the whole script is read only once, at the beginning of the run. What would be a good way to handle such an issue in shell scripting?
Thank you
Just flesh out your parser a bit:
#!/bin/bash
function parser {
    IFS= read -r input
    printf "Going to do: >%s\n" "$input"
    eval "$input"
}
set -x # debugging
trap "parser" debug
a="1"
b="5"
sum=$(bc <<< "$a + $b")
set +x
This of course only works with one-liners, but it should get you started. When you see the debugger is on a=1 (you will see it on screen) you can just type a=3 and press Return. Hit Return on all other lines - you'll see the effect in the sum.
Note that with this method (overriding after you see the line in the debug output) your override always runs AFTER the faulty command ran, since that's when the debugger outputs it. If you want to see the command before it runs, use
echo $BASH_COMMAND
in your parser. Of course, running a=10 before a=1 ran is somewhat counter-productive.
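For instance, here is a minimal sketch of a parser that previews the pending command; it relies on the standard $BASH_COMMAND variable, which inside a DEBUG trap holds the command about to execute:
function parser {
    set +x
    # $BASH_COMMAND is the command that is about to run
    printf "About to run: %s\n" "$BASH_COMMAND"
    IFS= read -r input
    [ -n "$input" ] && eval "$input"
    set -x
}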
One improvement, for example, is to turn off debugging while parsing, and to skip the echo if no input is received:
function parser {
    set +x
    IFS= read -r input
    if [ -n "$input" ]; then
        printf "Going to do: >%s\n" "$input"
        eval "$input"
    fi
    set -x
}
Note this won't modify the script itself. To do that, you would need to add a sed command as well to actually edit the file (a rough sketch follows the code below), or perhaps echo each line to a new file, replacing a line with the new input received in debug - this is the safer option. That can be done like so (note the parser runs after the line has already run, so we always output the previous command, and override it if needed):
prev_cmd="#!/bin/bash"
function parser {
    set +x
    IFS= read -r input
    if [ -n "$input" ]; then
        printf "Going to do: >%s\n" "$input"
        eval "$input"
        prev_cmd=$input
    fi
    echo "$prev_cmd" >> debug_log.bash
    prev_cmd=$BASH_COMMAND
    set -x
}
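As for the sed route, a rough, hypothetical sketch (assuming the script is ./myscript.sh and the faulty line is unique) might look like:
# hypothetical: patch the faulty line in the script file itself
sed -i 's/^a="1"$/a="2"/' ./myscript.sh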
Your imagination is the limit here. And the syntax. I would also move the parser to a separate file and source it on demand.
Answers to questions in the comments, in order:
Note I have echo "$prev_cmd" >> debug_log.bash. prev_cmd will be empty on the first call to the parser if you don't set it beforehand. As the shebang was certainly never debugged and so won't end up in your new file, it's a good initial choice for the first line to be dumped there - the new file needs it anyway. You could of course set it empty or to some comment, whatever you prefer.
When you enter the debugging function, debug is turned on (by definition). To prevent debugging inside the debugger itself, I shut it off. Finally, when leaving the function, I need to re-activate it so debugging will continue. That's why the order is 'reversed' relative to the file - it shuts off your initial activation, and re-activates before continuing.
If you want to stop debugging before sum=..., put set +x before it - that's what shuts off debugging, just like I did in the parser.
Acknowledgements: Special thanks to Charles Duffy for making the code safer to use, and just better.
Related
So this is probably an easy question, but I am not much of a bash programmer and I haven't been able to figure this out.
We have a closed source program that calls a subprogram which runs until it exits, at which point the program will call the subprogram again. This repeats indefinitely.
Unfortunately the main program will sometimes spontaneously (and repeatedly) fail to call the subprogram after a random period of time. The eventual solution is to contact the original developers to get support, but in the meantime we need a quick hotfix for the issue.
I'm trying to write a bash script that will monitor the output of the program and when it sees a specific string, it will restart the machine (the program will run again automatically on boot). The bash script needs to pass all standard output through to the screen up until it sees the specific string. The program also needs to continue to handle user input.
I have tried the following with limited success:
./program1 | ./watcher.sh
watcher.sh is basically just the following:
while read line; do
echo $line
if [ $line == "some string" ]
then
#the reboot script works fine
./reboot.sh
fi
done
This seems to work OK, but leading whitespace is stripped by the echo statement, and the output hangs partway through until the subprogram exits, at which point the rest is printed to the screen. Is there a better way to accomplish what I need to do?
Thanks in advance.
I would do something along the lines of:
stdbuf -o0 ./program1 | grep --line-buffered "some string" | (read && reboot)
You need to quote your $line variable, i.e. "$line", for all references (except the read line bit).
Your program1 is probably the source of the 'paused' data. It needs to flush its output buffer. You probably don't have control of that, so
a. Check if your system has the unbuffer command available. If so, try unbuffer cmd1 | watcher. You may have to experiment with which command you wrap with unbuffer; maybe you will have to do cmd1 | unbuffer watcher.
b. OR you can try wrapping watcher in a process group (I think that is the right terminology), i.e.
./program1 | { ./watcher.sh ; printf "\n" ; }
I hope this helps.
P.S. as you appear to be a new user, if you get an answer that helps you please remember to mark it as accepted, and/or give it a + (or -) as a useful answer.
Use read's $REPLY variable; also, I'd suggest using printf instead of echo:
while read -r; do
    printf "%s\n" "$REPLY"
    # '[[' is Bash; the quotes are not necessary
    # use '[ "$REPLY" = "some string" ]' in other shells
    if [[ $REPLY == "some string" ]]
    then
        # the reboot script works fine
        ./reboot.sh
    fi
done
Is there a better way to save a command line before it is executed?
A number of my /bin/bash scripts construct a very long command line. I generally save the command line to a text file for easier debugging and (sometimes) execution.
My code is littered with this idiom:
echo >saved.txt cd $NEW_PLACE '&&' command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
cd $NEW_PLACE && command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
Obviously, updating code in two places is error-prone. Less obvious is that certain parts need to be quoted in the first line but not the second, so I cannot do the update by simple copy-and-paste. If the command includes quotes, it gets even more complicated.
There has got to be a better way! Suggestions?
How about creating a helper function which logs and then executes the command? "$@" will expand to whatever command you pass in.
log() {
    echo "$@" >> /tmp/cmd.log
    "$@"
}
Use it by simply prepending log to any existing command. It won't handle && or || though, so you'll have to log those commands separately.
log cd $NEW_PLACE && log command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
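If quoting matters (the question notes commands that include quotes), a hypothetical variant could log a shell-quoted form using bash's printf %q, so the logged line can be pasted back into a shell intact:
# hypothetical variant: log a re-quoted form of the command
log() {
    printf '%q ' "$@" >> /tmp/cmd.log   # %q shell-quotes each argument
    printf '\n' >> /tmp/cmd.log
    "$@"
}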
Are you looking for set -x (or bash -x)? This writes every command to standard error before executing it.
1. Use script and you will get everything archived.
2. Use -x for tracing your script, e.g. run it as bash -x script_name args....
3. Use set -x in your current bash (your commands will be echoed with globs and variables substituted).
4. Combine 2 and 3 with 1, as sketched below.
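A minimal sketch of that combination (assuming the util-linux version of script, whose -c option runs a single command, and a hypothetical ./myscript.sh):
# script(1) archives everything the terminal sees, while bash -x
# traces each command as it runs
script -c 'bash -x ./myscript.sh arg1 arg2' run_trace.log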
If you just execute the command file immediately after creating it, you will only need to construct the command once, with one level of escapes.
If that would create too many discrete little command files, you could create shell procedures and then run an individual one.
# each echo emits one line of a function definition into saved.txt
(echo fun123 '()' {
echo echo something important
echo }
) > saved.txt
. saved.txt   # source the file to define fun123
fun123        # then call the generated function
It sounds like your goal is to keep a good log of what your script did so that you can debug it when things go bad. I would suggest using the -x parameter in your shebang like so:
#!/bin/sh -x
# the -x above makes bash print out every command before it is executed.
# you can also use the -e option to make bash exit immediately if any command
# returns a non-zero return code.
Also, see my answer on a previous question about redirecting all of this debug output to a log when --log is passed into your shell script. This will redirect all stdout and stderr. Occasionally, you'll still want to write to the terminal to give the user feedback. You can do this by saving stdout to a new file descriptor and using that with echo (or other programs):
exec 3>&1 # save stdout to fd 3
# perform log redirection as per above linked answer
# now all stdout and stderr will be redirected to the file and console.
# remove the `tee` command if you want it to go just to the file.
# now if you want to write to the original stdout (i.e. terminal)
echo "Hello World" >&3
# "Hello World" will be written to the terminal and not the logs.
I suggest you look into the xargs command. It was made to solve the problem of programmatically building up argument lists and passing them off to executables for batch processing.
http://en.wikipedia.org/wiki/Xargs
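As an illustrative sketch (file name hypothetical), xargs turns a list of arguments read from stdin into a single invocation:
# args.txt holds one argument per line; xargs appends them all
# to one invocation of command.py
xargs command.py --flag < args.txt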
set -e (or a script starting with #!/bin/sh -e) is extremely useful to automatically bomb out if there is a problem. It saves me having to error check every single command that might fail.
How do I get the equivalent of this inside a function?
For example, I have the following script that exits immediately on error with an error exit status:
#!/bin/sh -e
echo "the following command could fail:"
false
echo "this is after the command that fails"
The output is as expected:
the following command could fail:
Now I'd like to wrap this into a function:
#!/bin/sh -e
my_function() {
    echo "the following command could fail:"
    false
    echo "this is after the command that fails"
}

if ! my_function; then
    echo "dealing with the problem"
fi
echo "run this all the time regardless of the success of my_function"
Expected output:
the following command could fail:
dealing with the problem
run this all the time regardless of the success of my_function
Actual output:
the following command could fail:
this is after the command that fails
run this all the time regardless of the success of my_function
(ie. the function is ignoring set -e)
This presumably is expected behaviour. My question is: how do I get the effect and usefulness of set -e inside a shell function? I'd like to be able to set something up such that I don't have to individually error check every call, but the script will stop on encountering an error. It should unwind the stack as far as is needed until I do check the result, or exit the script itself if I haven't checked it. This is what set -e does already, except it doesn't nest.
I've found the same question asked outside Stack Overflow but no suitable answer.
I eventually went with this, which apparently works. I tried the export method at first, but then found that I needed to export every global (constant) variable the script uses.
Disable set -e, then run the function call inside a subshell that has set -e enabled. Save the exit status of the subshell in a variable, re-enable set -e, then test the var.
f() { echo "a"; false; echo "Should NOT get HERE"; }
# Don't pipe the subshell into anything or we won't be able to see its exit status
set +e ; ( set -e; f ) ; err_status=$?
set -e
## cleaner syntax which POSIX sh doesn't support. Use bash/zsh/ksh/other fancy shells
if ((err_status)) ; then
    echo "f returned false: $err_status"
fi
## POSIX-sh features only (e.g. dash, /bin/sh)
if test "$err_status" -ne 0 ; then
    echo "f returned false: $err_status"
fi
echo "always print this"
You can't run f as part of a pipeline, or as part of a && or || command list (except as the last command in the pipe or list), or as the condition in an if or while, or in other contexts that ignore set -e. This code also can't be in any of those contexts, so if you use this in a function, callers have to use the same subshell / save-exit-status trickery. This use of set -e for semantics similar to throwing/catching exceptions is not really suitable for general use, given the limitations and hard-to-read syntax.
trap err_handler_function ERR has the same limitations as set -e, in that it won't fire for errors in contexts where set -e won't exit on failed commands.
You might think the following would work, but it doesn't:
if ! ( set -e; f );then ##### doesn't work, f runs ignoring -e
echo "f returned false: $?"
fi
set -e doesn't take effect inside the subshell because it remembers that it's inside the condition of an if. I thought being a subshell would change that, but only being in a separate file and running a whole separate shell on it would work.
From the documentation of set -e:
When this option is on, if a simple command fails for any of the reasons listed in Consequences of Shell Errors or returns an exit status value > 0, and is not part of the compound list following a while, until, or if keyword, and is not a part of an AND or OR list, and is not a pipeline preceded by the ! reserved word, then the shell shall immediately exit.
In your case, false is a part of a pipeline preceded by ! and a part of if. So the solution is to rewrite your code so that it isn't.
In other words, there's nothing special about functions here. Try:
set -e
! { false; echo hi; }
You may directly use a subshell as your function definition and set it to exit immediately with set -e. This limits the scope of set -e to the function subshell only, and avoids later switching back and forth between set +e and set -e.
In addition, you can use a variable assignment in the if test and then echo the result in an additional else statement.
# use a subshell for the function definition
f() (
    set -exo pipefail
    echo a
    false
    echo Should NOT get HERE
    exit 0
)
# the next line also works for the non-subshell function given by agsamek above
#if ret="$( set -e && f )" ; then
if ret="$( f )" ; then
    true
else
    echo "$ret"
fi
# prints
# ++ echo a
# ++ false
# a
This is a bit of a kludge, but you can do:
export -f f
if sh -ec f; then
...
This will work if your shell supports export -f (bash does).
Note that this will not terminate the script. The echo after the false in f will not execute, nor will the body of the if, but statements after the if will be executed.
If you are using a shell that does not support export -f, you can get the semantics you want by running sh in the function:
f() { sh -ec '
echo This will execute
false
echo This will not
'
}
Note/Edit: As a commenter pointed out, this answer uses bash, and not sh as the OP used in his question. I missed that detail when I originally posted this answer. I will leave it up anyway, since it might be of interest to some passerby.
Y'aaaaaaaaaaaaaaaaaaallll ready for this?
Here's a way to do it by leveraging the DEBUG trap, which runs before each command, and it sort of makes errors behave like the exception/try/catch idioms from other languages. Take a look. I've made your example one more 'call' deep.
#!/bin/bash
# Get rid of that disgusting set -e. We don't need it anymore!
# functrace allows RETURN and DEBUG traps to be inherited by each
# subshell and function. Plus, it doesn't suffer from that horrible
# erasure problem that -e and -E suffer from when the command
# is used in a conditional expression.
set -o functrace
# A trap to bubble up the error unless our magic command is encountered
# ('catch=$?' in this case) at which point it stops. Also don't try to
# bubble the error if we're not in a function.
trap '{
    code=$?
    if [[ $code != 0 ]] && [[ $BASH_COMMAND != '\''catch=$?'\'' ]]; then
        # if we are in a function, return; else exit
        [[ $FUNCNAME ]] && return $code || exit $code
    fi
}' DEBUG
my_function() {
    my_function2
}

my_function2() {
    echo "the following command could fail:"
    false
    echo "this is after the command that fails"
}
# the || isn't necessary, but the 'catch=$?' is.
my_function || catch=$?
echo "Dealing with the problem with errcode=$catch (⌐■_■)"
echo "run this all the time regardless of the success of my_function"
and the output:
the following command could fail:
Dealing with the problem with errcode=1 (⌐■_■)
run this all the time regardless of the success of my_function
I haven't tested this in the wild, but off the top of my head, there are a bunch of pros:
It's actually not that slow. I've run the script in a tight loop with and without the functrace option, and the times are very close to each other over 10,000 iterations.
You could expand on this DEBUG trap to print a stack trace without doing that whole looping over $FUNCNAME and $BASH_LINENO nonsense. You kinda get it for free (besides actually doing an echo line).
Don't have to worry about that shopt -s inherit_errexit gotcha.
Join all commands in your function with the && operator. It's not too much trouble and will give the result you want.
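A minimal sketch of that approach, applied to the question's function:
my_function() {
    echo "the following command could fail:" &&
    false &&
    echo "this is after the command that fails"
}

if ! my_function; then
    echo "dealing with the problem"
fi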
This is by design, per the POSIX specification. We can read in man bash:
If a compound command or shell function executes in a context where -e is being ignored, none of the commands executed within the compound command or function body will be affected by the -e setting, even if -e is set and a command returns a failure status. If a compound command or shell function sets -e while executing in a context where -e is ignored, that setting will not have any effect until the compound command or the command containing the function call completes.
therefore you should avoid relying on set -e within functions.
Given the following example (from the Austin Group):
set -e
start() {
    some_server
    echo some_server started successfully
}
start || echo >&2 some_server failed
the set -e is ignored within the function, because the function is a command in an AND-OR list other than the last.
The above behaviour is specified and required by POSIX (see: Desired Action):
The -e setting shall be ignored when executing the compound list following the while, until, if, or elif reserved word, a pipeline beginning with the ! reserved word, or any command of an AND-OR list other than the last.
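In other words, calling the function outside any of those ignored contexts restores the expected behaviour. A minimal sketch:
# calling start outside any AND-OR list keeps -e in effect inside it;
# if some_server fails, the script aborts right here
set -e
start() {
    some_server
    echo "some_server started successfully"
}
start
echo "only reached if some_server succeeded"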
I know this isn't what you asked, but you may or may not be aware that the behavior you seek is built into "make". Any part of a "make" process that fails aborts the run. It's a wholly different way of "programming", though, than shell scripting.
You will need to call your function in a subshell (inside parentheses ()) to achieve this.
I think you want to write your script like this:
#!/bin/sh -e
my_function() {
    echo "the following command could fail:"
    false
    echo "this is after the command that fails"
}

(my_function)
if [ $? -ne 0 ] ; then
    echo "dealing with the problem"
fi
echo "run this all the time regardless of the success of my_function"
Then the output is (as desired):
the following command could fail:
dealing with the problem
run this all the time regardless of the success of my_function
If a subshell isn't an option (say you need to do something crazy like set a variable) then you can just check every single command that might fail and deal with it by appending || return $?. This causes the function to return the error code on failure.
It's ugly, but it works
#!/bin/sh
set -e
my_function() {
    echo "the following command could fail:"
    false || return $?
    echo "this is after the command that fails"
}

if ! my_function; then
    echo "dealing with the problem"
fi
echo "run this all the time regardless of the success of my_function"
gives
the following command could fail:
dealing with the problem
run this all the time regardless of the success of my_function
I have a bash script that sources contents from another file. The contents of the other file are commands I would like to execute so I can compare their return values. Some of the commands contain multiple commands separated by either a semicolon (;) or ampersands (&&), and I can't seem to make this work. To work on this, I created some test scripts as shown:
test.conf is the file being sourced by test.sh.
Example 1 (this works); my two outputs are 2 seconds apart:
test.conf
CMD[1]="date"
test.sh
. test.conf
i=2
echo "$(${CMD[$i]})"
sleep 2
echo "$(${CMD[$i]})"
Example-2 (this does not work)
test.conf (test.sh is the same as above)
CMD[1]="date;date"
Example-3 (tried this, it does not work either)
test.conf (test.sh is the same as above)
CMD[1]="date && date"
I don't want my variable, CMD, to be inside backticks, because then the commands would be executed at the time the file is sourced, and I see no way of re-evaluating the variable.
This script essentially evaluates CMD on pass 1 to check something; if pass 1 gives a false reading, I do some work in the script to correct it, then re-execute and re-evaluate the output of CMD on pass 2.
Here is an example: I'm checking to see if sshd is running. If it's not running when I evaluate CMD[1] on pass 1, I will start it and re-evaluate CMD[1] again.
test.conf
CMD[1]=`pgrep -u root -d , sshd 1>/dev/null; echo $?`
So if I modify this for my test script, then test.conf becomes:
CMD[1]=`date;date` or CMD[1]=`date && date`
My script looks like this (to handle the backticks):
. test.conf
i=2
echo "${CMD[$i]}"
sleep 2
echo "${CMD[$i]}"
I get the same date/time printed twice despite the 2-second delay. As such, CMD is not getting re-evaluated.
First of all, you should never use backticks unless you need to be compatible with an old shell that doesn't support $() - and only then.
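For instance (a minimal illustration), $() nests without the escaping that backticks require:
files=`ls \`pwd\``     # backticks need escaping to nest
files=$(ls $(pwd))     # $() nests cleanly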
Secondly, I don't understand why you're setting CMD[1] but then calling CMD[$i] with i set to 2.
Anyway, this is one way (and it's similar to part of Barry's answer):
CMD[1]='$(date;date)' # no backticks (remember - they carry Lyme disease)
eval echo "${CMD[1]}" # or $i instead of 1
From the couple of lines of your question, I would have expected some approach like this:
#!/bin/bash
while read -r line; do
    # munge $line
    if eval "$line"; then
        : # success
    else
        : # fail
    fi
done
Where you have backticks in the source, you'll have to escape them to avoid evaluating them too early. Also, backticks aren't the only way to evaluate code - there is eval, as shown above. Maybe it's eval that you were looking for?
For example, this line:
CMD[1]=`pgrep -u root -d , sshd 1>/dev/null; echo $?`
It probably ought to look more like this:
CMD[1]='`pgrep -u root -d , sshd 1>/dev/null; echo $?`'
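A hypothetical usage sketch that re-evaluates the stored command on each pass (eval echo runs the embedded backticks at that moment, not at source time):
. test.conf
# pass 1: the backticks inside CMD[1] execute now
status=$(eval echo "${CMD[1]}")
echo "pass-1 sshd check returned: $status"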