I have two shell scripts:
test.sh
function func() {
echo $1
exit 1
}
run.sh
source ./test.sh
func "Hello"
exitCode=$?
echo "Output: ${exitCode}"
Result:
Hello
The current problem I'm facing is that when the function func returns 1, my run.sh script breaks and nothing after it gets executed. So, is there any way I can effectively capture the exit code without breaking run.sh? I know there is a way to invoke a subshell using ( func "Hello" ), but I want to do it without invoking a subshell. I looked for a reference example but couldn't find anything close to it.
Two ideas that are "pushing the boundary". Use them with care, as future changes to the sourced script(s) might break the logic. Recommended only if there is a way to monitor that the script is working OK - e.g. when executing interactively, etc.
I would not use this kind of solution in any production/critical system.
Option 1: Alias 'exit' to 'return' when sourcing the file.
Assuming that ALL 'exit' statements in test.sh are to be replaced with 'return', and assuming that you are willing to take the risk of future changes to test.sh, consider using an alias before sourcing:
alias exit=return
source test.sh
unalias exit
func "foo"
Option 2: automatically update the function that is using 'exit' to use return.
source test.sh
body=$(type func | tail -n +2 | sed -e 's/exit/return/g')
eval "$body"
Related
I am calling a bash script (say child.sh) within my bash script (say parent.sh), and I would like to have the errors in that script (child.sh) trapped in my parent script (parent.sh).
I read through the Medium article and the Stack Exchange post. Based on that, I thought I should use set -E in my parent script so that the traps are inherited by subshells. Accordingly, my code is as follows:
parent.sh
#!/bin/bash
set -E
error() {
echo -e "$0: \e[0;33mERROR: The Zero Touch Provisioning script failed while running the command $BASH_COMMAND at line $BASH_LINENO.\e[0m" >&2
exit 1
}
trap error ERR
./child.sh
child.sh
#!/bin/bash
ls -al > /dev/null
cd non_exisiting_dir #To simulate error
echo "$0: I am still continuing after error"
Output
./child.sh: line 5: cd: non_exisiting_dir: No such file or directory
./child.sh: I am still continuing after error
Can you please let me know what I am missing so that I can inherit the traps defined in the parent script?
./child.sh does not run in a "subshell".
A subshell is not any child process of your shell that happens to be a shell too, but a special environment in which the commands inside (...), $(...) or ...|... are run; it is usually implemented by forking the current shell without executing another shell.
If you want to run child.sh in a subshell, then source that script from a subshell you can create with (...):
(. ./child.sh)
which will inherit your ERR trap because of set -E.
Notes:
There are other places where bash runs the commands in a subshell: process substitutions (<(...)), coprocesses, the command_not_found_handle function, etc.
In some shells (but not in bash) the leftmost command from a pipeline is not run in a subshell. For instance ksh -c ':|a=2; echo $a' will print 2. Also, not all shells implement subshells by forking a separate process.
Even if bash infamously allows functions to be exported to other bash scripts via the environment (with export -f funcname), that's AFAIK not possible with traps ;-)
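For instance, using the scripts from the question, parent.sh becomes the following sketch; the failing cd in child.sh now fires error() inside the subshell (and the subshell's nonzero exit status fires the parent's trap as well):
#!/bin/bash
set -E
error() {
    echo "ERROR: the command $BASH_COMMAND failed at line $BASH_LINENO" >&2
    exit 1
}
trap error ERR
(. ./child.sh)   # sourced in a subshell, which inherits the ERR trap via set -E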
I have a scenario in which I need to fetch some data by triggering another external bash script from my shell script. If I get any error output from the external script, my shell script should handle it and go through the fallback approach. But I am facing an issue with that external bash script: it returns (exit 1) in failure cases, which causes my script to exit as well, never executing the fallback approach. Can anyone explain how to handle the exit from the external script and run my fallback approach?
Not sure if this works in sh, but it works in bash.
I made a try / except tool out of this, but it will work here too I believe.
#! /bin/bash
try() {
    # silence stderr while the block runs
    exec 2> /dev/null
    # main block (the function to attempt)
    input_function="$1"
    # fallback code ($2 is expected to be the literal word 'except')
    catch_function="$3"
    # open a subshell
    (
        # tell it to exit upon encountering an error
        set -e
        # run the main block
        $input_function
    )
    # if the exit code of the subshell is > 0, then run the fallback code
    if [ "$?" != 0 ]; then
        $catch_function
    else
        : # no-op: success, it ran with no errors
    fi
    # point stderr back at the terminal
    exec 2> /dev/tty
}
An example of using this would be:
try [function 1] except [function 2]
Function 1 would be the main block of code, and function 2 would be the fallback function/block of code.
Your first function could be:
run() {
/path/to/external/script
}
And your second can be whatever you want to fall back on.
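Putting it together, a minimal sketch (run and fallback are hypothetical names; note that stderr is silenced inside try, so the fallback reports on stdout):
run() {
    /path/to/external/script
}
fallback() {
    # stderr goes to /dev/null inside try(), so report on stdout
    echo "external script failed, running fallback approach"
}
try run except fallback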
Hope this helps.
I have a shell script which writes all output to a logfile and the terminal; this part works fine. But if I execute the script, a new shell prompt only appears if I press Enter. Why is that, and how do I fix it?
#!/bin/bash
exec > >(tee logfile)
echo "output"
First, when I'm testing this, there always is a new shell prompt; it's just that sometimes the string output comes after it, so the prompt isn't last. Did you happen to overlook it? If so, there seems to be a race where the shell prints the prompt before the tee in the background completes.
Unfortunately, that cannot be fixed by waiting in the shell for tee; see this question on unix.stackexchange. Fragile workarounds aside, the easiest way to solve this that I see is to put your whole script inside a group command:
{
your-code-here
} | tee logfile
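Applied to the script from the question, that looks like this (a sketch; the group's body is whatever your script does):
#!/bin/bash
{
    echo "output"
    # ...rest of the script...
} | tee logfile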
If I run the following script (suppressing the newline from the echo), I see the prompt, but not "output". The string is still written to the file.
#!/bin/bash
exec > >(tee logfile)
echo -n "output"
What I suspect is this: you have three different file descriptors trying to write to the same file (that is, the terminal): standard output of the shell, standard error of the shell, and the standard output of tee. The shell writes synchronously: first the echo to standard output, then the prompt to standard error, so the terminal is able to sequence them correctly. However, the third file descriptor is written to asynchronously by tee, so there is a race condition. I don't quite understand how my modification affects the race, but it appears to upset some balance, allowing the prompt to be written at a different time and appear on the screen. (I expect output buffering to play a part in this).
You might also try running your script after running the script command, which will log everything written to the terminal; if you wade through all the control characters in the file, you may notice the prompt in the file just prior to the output written by tee. In support of my race condition theory, I'll note that after running the script a few times, it was no longer displaying "abnormal" behavior; my shell prompt was displayed as expected after the string "output", so there is definitely some non-deterministic element to this situation.
@chepner's answer provides great background information.
Here's a workaround - works on Ubuntu 12.04 (Linux 3.2.0) and on OS X 10.9.1:
#!/bin/bash
exec > >(tee logfile)
echo "output"
# WORKAROUND - place LAST in your script.
# Execute an executable (as opposed to a builtin) that outputs *something*
# to make the prompt reappear normally.
# In this case we use the printf *executable* to output an *empty string*.
# Use of `$ec` is to ensure that the script's actual exit code is passed through.
ec=$?; $(which printf) ''; exit $ec
Alternatives:
@user2719058's answer shows a simple alternative: wrapping the entire script body in a group command ({ ... }) and piping it to tee logfile.
An external solution, as @chepner has already hinted at, is to use the script utility to create a "transcript" of your script's output in addition to displaying it:
script -qc yourScript /dev/null > logfile # Linux syntax
This, however, will also capture stderr output; if you wanted to avoid that, use:
script -qc 'yourScript 2>/dev/null' /dev/null > logfile
Note, however, that this will suppress stderr output altogether.
As others have noted, it's not that there's no prompt printed -- it's that the last of the output written by tee can come after the prompt, making the prompt no longer visible.
If you have bash 4.4 or newer, you can wait for your tee process to exit, like so:
#!/usr/bin/env bash
case $BASH_VERSION in ''|[0-3].*|4.[0-3]) echo "ERROR: Bash 4.4+ needed" >&2; exit 1;; esac
exec {orig_stdout}>&1 {orig_stderr}>&2 # make a backup of original stdout and stderr
exec > >(tee -a "_install_log"); tee_pid=$! # track PID of tee after starting it
cleanup() { # define a function we'll call during shutdown
retval=$?
exec >&$orig_stdout # Copy your original stdout back to FD 1, overwriting the pipe to tee
exec 2>&$orig_stderr # If something overwrites stderr to also go through tee, fix that too
wait "$tee_pid" # Now, wait until tee exits
exit "$retval" # and complete exit with our original exit status
}
trap cleanup EXIT # configure the function above to be called during cleanup
echo "Writing something to stdout here"
I'm trying to learn how to write some basic functions in Ubuntu, and I've found that some of them work, and some do not, and I can't figure out why.
Specifically, the following function addseq2.sh will work when I source it, but when I just try to run it with bash addseq2.sh it doesn't work. When I check with $? I get a 0: command not found. Does anyone have an idea why this might be the case? Thanks for any suggestions!
Here's the code for addseq2.sh:
#!/usr/bin/env bash
# File: addseq2.sh
function addseq2 {
local sum=0
for element in $@
do
let sum=sum+$element
done
echo $sum
}
Thanks everyone for all the useful advice and help!
To expand on my original question, I have two simple functions already written. The first one, hello.sh looks like this:
#!/usr/bin/env bash
# File: hello.sh
function hello {
echo "Hello"
}
hello
hello
hello
When I call this function, without having done anything else, I would type:
$ bash hello.sh
Which seems to work fine. After I source it with $ source hello.sh, I'm then able to just type hello and it also runs as expected.
So what has been driving me crazy is the first function I mentioned here, addseq2.sh. If I try to repeat the same steps, calling it with $ bash addseq2.sh 1 2 3, I don't see any result. I can see after checking, as you suggested, with $ echo $? that I get a 0 and it executed correctly, but nothing prints to the screen.
After I source it with $ source addseq2.sh and then call it by typing $ addseq2 1 2 3, it returns 6 as expected.
I don't understand why the two functions are behaving differently.
When you do bash foo.sh, it spawns a new instance of bash, which then reads and executes every command in foo.sh.
In the case of hello.sh, the commands are:
function hello {
echo "Hello"
}
This command has no visible effects, but it defines a function named hello.
hello
hello
hello
These commands call the hello function three times, each printing Hello to stdout.
Upon reaching the end of the script, bash exits with a status of 0. The hello function is gone (it was only defined within the bash process that just stopped running).
In the case of addseq2.sh, the commands are:
function addseq2 {
local sum=0
for element in $@
do
let sum=sum+$element
done
echo $sum
}
This command has no visible effects, but it defines a function named addseq2.
Upon reaching the end of the script, bash exits with a status of 0. The addseq2 function is gone (it was only defined within the bash process that just stopped running).
That's why bash addseq2.sh does nothing: It simply defines (and immediately forgets) a function without ever calling it.
The source command is different. It tells the currently running shell to execute commands from a file as if you had typed them on the command line. The commands themselves still execute as before, but now the functions persist because the bash process they were defined in is still alive.
If you want bash addseq2.sh 1 2 3 to automatically call the addseq2 function and pass it the list of command line arguments, you have to say so explicitly: Add
addseq2 "$#"
at the end of addseq2.sh.
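The complete script then reads:
#!/usr/bin/env bash
# File: addseq2.sh
function addseq2 {
    local sum=0
    for element in $@
    do
        let sum=sum+$element
    done
    echo $sum
}
addseq2 "$@"   # forward the script's command-line arguments to the function
and bash addseq2.sh 1 2 3 prints 6.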
When I check with $? I get a 0: command not found
This is because of the way you are checking it, for example:
(the leading $ is the convention for showing the command-line prompt)
$ $?
-bash: 0: command not found
Instead you could do this:
$ echo $?
0
By convention, 0 indicates success. A better way to test in a script is something like this:
if ./addseq2.sh
then
echo 'script worked'
else
# Redirect error message to stderr
echo 'script failed' >&2
fi
Now, why might your script not "work" even though it returned 0? You have a function but you are not calling it. With your code I appended a call:
#!/usr/bin/env bash
# File: addseq2.sh
function addseq2 {
local sum=0
for element in $@
do
let sum=sum+$element
done
echo $sum
}
addseq2 1 2 3 4 # <<<<<<<
and I got:
10
By the way, an alternative way of saying:
let sum=sum+$element
is:
sum=$((sum + element))
I want to execute some commands every time I exit from bash, but cannot find a way to do it.
There is a ~/.bash_logout file for when you log out, but normally we use an interactive shell rather than a login shell, so this is not very useful for this purpose.
Is there a way to do this? Thanks!
You can trap the EXIT signal.
exit_handler () {
# code to run on exit
}
trap 'exit_handler' EXIT
Technically, trap exit_handler EXIT would work as well. I quoted it to emphasize that the first argument to trap is a string that is essentially passed to eval, and not necessarily a single function name. You could just as easily write
trap 'do_this; do_that; if [[ $PICKY == yes ]]; then one_more_thing; fi' EXIT
rather than gather your code into a single function.
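For the interactive-shell case in the question, you could put the trap in ~/.bashrc so it runs whenever that shell exits (a minimal sketch; ~/.bash_exit_log is a hypothetical file):
# in ~/.bashrc
exit_handler () {
    # replace with your own cleanup commands
    echo "shell exited at $(date)" >> "$HOME/.bash_exit_log"
}
trap 'exit_handler' EXIT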