Trap bash errors from child script - bash

I am calling a bash script (say child.sh) from within my bash script (say parent.sh), and I would like to have the errors in the child script (child.sh) trapped in my parent script (parent.sh).
I read through a Medium article and a Stack Exchange post. Based on those, I thought I should use set -E in my parent script so that the traps are inherited by the subshell. Accordingly, my code is as follows:
parent.sh
#!/bin/bash
set -E
error() {
    echo -e "$0: \e[0;33mERROR: The Zero Touch Provisioning script failed while running the command $BASH_COMMAND at line $BASH_LINENO.\e[0m" >&2
    exit 1
}
trap error ERR
./child.sh
child.sh
#!/bin/bash
ls -al > /dev/null
cd non_exisiting_dir #To simulate error
echo "$0: I am still continuing after error"
Output
./child.sh: line 5: cd: non_exisiting_dir: No such file or directory
./child.sh: I am still continuing after error
Can you please let me know what I am missing so that I can inherit the traps defined in the parent script?

./child.sh does not run in a "subshell".
A subshell is not simply a child process of your shell that happens to be a shell, too; it is a special environment in which the commands inside (...), $(...) or ...|... are run, and it is usually implemented by forking the current shell without executing another shell.
If you want to run child.sh in a subshell, then source that script from a subshell you can create with (...):
(. ./child.sh)
which will inherit your ERR trap because of set -E.
Notes:
There are other places where bash runs the commands in a subshell: process substitutions (<(...)), coprocesses, the command_not_found_handle function, etc.
In some shells (but not in bash) the leftmost command from a pipeline is not run in a subshell. For instance ksh -c ':|a=2; echo $a' will print 2. Also, not all shells implement subshells by forking a separate process.
Although bash infamously allows functions to be exported to other bash scripts via the environment (with export -f funcname), that is, AFAIK, not possible with traps ;-)
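If sourcing the child script is not an option, a hedged alternative (not part of the answer above) is to make child.sh abort on its own errors with set -e; the non-zero exit status of ./child.sh is then itself a failing command in parent.sh, which fires your ERR trap:
#!/bin/bash
# child.sh, adjusted so it exits non-zero on the first failing command
set -e
ls -al > /dev/null
cd non_exisiting_dir     # the failed cd now terminates child.sh with a non-zero status
echo "$0: I am still continuing after error"     # never reached
With the parent left unchanged, the failing ./child.sh command triggers trap error ERR in parent.sh.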

Related

Not able to exit from docker container with bash script containing exit command

I'm able to exit when I enter the exit command in the container environment. But if I try to run a script file containing the exit command, I'm not able to exit from the container.
1. Working
ubuntu@iot-docker:/repo$ exit
exit
root@iot-docker:/repo# exit
exit
ubuntu@ubuntu-***-Twr:~/shirisha/plo-***-snt-sp_u103a3$
2. Not working
script.sh
#!/bin/bash
exit
exit
exit is not a command to exit your container; it just exits the current shell interpreter.
When you run your script, a new shell interpreter is started according to the first line of your script (here /bin/bash). When it encounters the exit command, the interpreter stops and you get back to the command line (the previous shell).
You can try this experiment:
$ bash # Starts a new shell
$ exit # Exits the new shell; we come back to the old one
exit
$
See? Running bash on the command line is similar to running your script, and exiting from it brings you back to your previous shell. You did not exit your container.
Solution:
exec script.sh param1 ... paramN
exec will replace your current shell with the command being started (script.sh). When that command exits, you will exit your container because your old shell no longer exists.
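For illustration, a minimal sketch of that (script.sh being the example script from the question):
exec ./script.sh     # replaces the current shell with script.sh; no new process is created
# nothing after the exec line is ever reached; when script.sh runs exit,
# there is no shell left to fall back to, so the container session ends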
When you run a script without "sourcing" it, the script is started in a new subprocess. The exit does work, but it only finishes that subprocess.
It is important to remember that a script runs in a new environment.
Look at this script, example.sh:
#!/bin/bash
my_value=high
cd /tmp
Call this script with
cd $HOME
my_value="low"
./example.sh
pwd
echo "My value is now ${my_value}"
Afterwards nothing has changed: all changes made in the subprocess are gone.
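Concretely, that run would print roughly the following (assuming a home directory of /home/user):
/home/user
My value is now low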
You can instead call this script with source ./example.sh (or, for short, . ./example.sh),
and then the changes do persist.
When you don't want to source your script, a function (in .bashrc) might help:
example() {
    my_value=high
    cd /tmp
}
Now you can call the function:
cd $HOME
my_value="low"
example
pwd
echo "My value is now ${my_value}"

How to exec bash script w/o exiting shell

I want to execute a bash script in the current shell, not a subshell/subprocess. I believe using exec allows you to do that.
But if I run:
exec ./my_script.sh
the shell will exit and it will say "process completed". Is there a way to execute a script in the same shell somehow without exiting the process?
Note: the only thing my_script.sh does is:
export foo=bar
but if there was some error in my script, it didn't appear in the console.
As @Snakienn has said, you can / should use the "." builtin command to "source" the file containing commands; e.g.
. ./my_script.sh
What this does is to temporarily change where the shell is reading commands from. Commands and other shell input are read from the file as if they were being read from the terminal. When the shell gets to the end-of-file, it switches back to taking input from the console.
This is called "sourcing". (Not "execing".) And indeed the shell accepts source as an alternative name for the . command.
The exec command does something completely different. As the bash manual entry says:
exec [-cl] [-a name] [command [arguments]]
If command is specified, it replaces the shell. No new process is created. The arguments become the arguments to command.
The concept and terminology of exec come from early UNIX (i.e. Unix V6 or earlier), where the syscalls for running a child command were fork and exec. The procedure was:
fork the current process, creating a clone of the current process (the child)
in the child process, exec the new command
in the parent process, wait for the child process to complete
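Bash follows the same pattern whenever it runs an external command; as a loose illustration in shell terms (a sketch, not the actual syscalls):
sleep 2 &                  # "fork": the shell creates a child process to run sleep
child_pid=$!
wait "$child_pid"          # "wait": the parent shell blocks until the child exits
echo "child $child_pid finished with status $?"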
You can try . ./my_script.sh.
The leading dot is the source builtin; it runs the script in the current shell.
The man page says:
exec [-cl] [-a name] [command [arguments]]
If command is specified, it replaces the shell. No new process
is created. The arguments become the arguments to command. If
the -l option is supplied, the shell places a dash at the
beginning of the zeroth argument passed to command. This is
what login(1) does. The -c option causes command to be executed
with an empty environment.
(By contrast, bash my_script.sh or ./my_script.sh starts a new shell process, so changes such as the export will not be visible in your current shell.)
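A quick way to see the difference, assuming my_script.sh contains only the export foo=bar from the question and you start from a fresh shell:
bash ./my_script.sh     # a child shell exports foo and then exits
echo "$foo"             # prints nothing: the variable never reached this shell
. ./my_script.sh        # sourced: the script runs in the current shell
echo "$foo"             # prints "bar"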

Writing a bash script, how do I stop my session from exiting when my script exits?

bash scripting noob here. I've found this article: https://www.shellhacks.com/print-usage-exit-if-arguments-not-provided/ that suggests putting
[ $# -eq 0 ] && { echo "Usage: $0 argument"; exit 1; }
at the top of a script to ensure arguments are passed. Seems sensible.
However, when I do that and test that the line does indeed work (by running the script without supplying any arguments: . myscript.sh), the script does indeed exit, but so does the bash session that I was calling the script from. This is very irritating.
Clearly I'm doing something wrong but I don't know what. Can anyone put me straight?
. myscript.sh is a synonym for source myscript.sh, which runs the script in the current shell (rather than as a separate process). So exit terminates your current shell. (return, on the other hand, wouldn't; it has special behaviour for sourced scripts.)
Use ./myscript.sh to run it "the normal way" instead. If that gives you a permission error, make it executable first, using chmod a+x myscript.sh. To inform the kernel that your script should be run with bash (rather than /bin/sh), add the following as the very first line in the script:
#!/usr/bin/env bash
You can also use bash myscript.sh if you can't make it executable, but this is slightly more error-prone (somebody might do sh myscript.sh instead).
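If you also want the script to stay safe to source, a common pattern (a sketch, not from the linked article) is to fall back from return to exit:
# exits when executed, returns when sourced; the error message from the
# failed return (in the executed case) is discarded
[ $# -eq 0 ] && { echo "Usage: $0 argument" >&2; return 1 2>/dev/null || exit 1; }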
The question isn't entirely clear. If you are sourcing the script (source script_name or . script_name), it is interpreted in the current bash process; running a function is the same, it runs in the same process. Otherwise, when you call a script normally, the calling bash forks a new bash process and waits until it terminates, so an exit in the script does not exit the caller. Running the exit builtin in the current bash, however, exits the current process.

Is pipeline guaranteed to create a subshell in any POSIX shell?

This shell script behaves as expected.
trap 'echo exit' EXIT
foo()
{
exit
}
echo begin
foo
echo end
Here is the output.
$ sh foo.sh
begin
exit
This shows that the script exits while executing foo.
Now see the following script.
trap 'echo exit' EXIT
foo()
{
exit
}
echo begin
foo | cat
echo end
The only difference here is that the output of foo is being piped into cat. Now the output looks like the following.
begin
end
exit
This shows that the script does not exit while executing foo because end is printed.
I believe this happens because in bash a pipeline causes a subshell to be opened, so foo | cat is equivalent to (foo) | cat.
Is this behaviour guaranteed in any POSIX shell? I could not find anything in the POSIX standard at http://pubs.opengroup.org/onlinepubs/9699919799/ that implies that a pipeline must lead to a subshell. Can someone confirm if this behaviour can be relied upon?
In 2.12 Shell Execution Environment you find this quote:
A subshell environment shall be created as a duplicate of the shell environment, except that signal traps that are not being ignored shall be set to the default action. Changes made to the subshell environment shall not affect the shell environment. Command substitution, commands that are grouped with parentheses, and asynchronous lists shall be executed in a subshell environment. Additionally, each command of a multi-command pipeline is in a subshell environment; as an extension, however, any or all commands in a pipeline may be executed in the current environment. All other commands shall be executed in the current shell environment.
Where the key sentence for this question is
Additionally, each command of a multi-command pipeline is in a subshell environment; as an extension, however, any or all commands in a pipeline may be executed in the current environment
So without the extension (which bash uses for lastpipe, and which I thought it also used for the leftmost element of a pipeline, though apparently not, or at least not always), each command of a multi-command pipeline runs in a subshell environment; but because that extension is allowed, you can't quite count on it.
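As a concrete example of that extension, bash's lastpipe option runs the last command of a pipeline in the current shell; a small sketch (it only takes effect when job control is off, i.e. in a non-interactive script):
#!/bin/bash
shopt -s lastpipe
echo hello | read line      # read runs in the current shell, not a subshell
echo "the variable survived: $line"     # prints "the variable survived: hello"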

Exiting a shell script with an error

Basically, I have written a shell script for a homework assignment that works fine; however, I am having issues with exiting. Essentially, the script reads numbers from the user until it reads a negative number and then produces some output. I have the script set to exit and output an error code when it receives anything but a number, and that's where the issue is.
The code is as follows:
if test $number -eq $number >/dev/null 2>&1
then
    "do stuff"
else
    echo "There was an error"
    exit 1
fi
The problem is that we have to turn in our programs as text files recorded with script, and whenever I run my program under script and test the error cases, it exits out of script as well. Is there a better way to do this?
The script is being run with the following command in the terminal
script "insert name of program here"
Thanks
If the program you're testing is invoked as a subprocess, then any exit command will only exit the command itself. The fact that you're seeing contrary behavior means you must be invoking it differently.
When invoking your script from the parent testing program, use:
# this runs "yourscript" as its own, external process.
./yourscript
...to invoke it as a subprocess, not
# this is POSIX-compliant syntax to run the commands in "yourscript" in the current shell.
. yourscript
...or...
# this is bash-extended syntax to run the commands in "yourscript" in the current shell.
source yourscript
...as either of the latter will run all the commands -- including exit -- inside your current shell, modifying its state or, in the case of exit, exec or similar, telling it to cease execution.
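In terms of the asker's workflow, that looks roughly like this (file names are illustrative):
script homework_session.txt     # script(1) starts a fresh shell and records it
./myprogram.sh                  # run the homework script as a subprocess;
                                # its exit only terminates myprogram.sh
exit                            # end the recording shell; homework_session.txt is saved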
