How to exec bash script w/o exiting shell - bash

I want to execute a bash script in the current shell, not a subshell/subprocess. I believe using exec allows you to do that.
But if I run:
exec ./my_script.sh
the shell will exit and it will say "process completed". Is there a way to execute a script in the same shell somehow, w/o exiting the process?
Note: the only thing my_script.sh does is:
export foo=bar
But if there was an error in my script, it didn't appear in the console.

As @Snakienn has said, you can / should use the "." builtin command to "source" the file containing commands; e.g.
. ./my_script.sh
What this does is temporarily change where the shell reads commands from. Commands and other shell input are read from the file as if they were being typed at the terminal. When the shell reaches the end of the file, it switches back to taking input from the console.
This is called "sourcing". (Not "execing".) And indeed the shell accepts source as an alternative name for the . command.
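For example, a minimal check (assuming my_script.sh contains only the export line from the question, and foo is not already set in your shell):

bash ./my_script.sh          # runs in a child process
echo "${foo:-unset}"         # prints "unset": the child could not change this shell

. ./my_script.sh             # sourced: read by the current shell itself
echo "${foo:-unset}"         # prints "bar"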
The exec command does something completely different. As the bash manual entry says:
exec [-cl] [-a name] [command [arguments]]
If command is specified, it replaces the shell. No new process is created. The arguments become the arguments to command.
The concept and terminology of exec come from early UNIX (i.e. Unix V6 or earlier), where the syscalls for running a child command were fork and exec. The procedure was:
fork the current process, creating a clone of it (the child)
in the child process, exec the new command
in the parent process, wait for the child process to complete
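The same three steps can be sketched in shell syntax (illustrative only; sleep 5 stands in for an arbitrary command, and the shell performs the fork for you when it sees & or a subshell):

( exec sleep 5 ) &     # fork: the subshell is the child; exec replaces it with sleep
child_pid=$!           # PID of the child
wait "$child_pid"      # the parent waits for the child to complete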

You can try . ./my_script.sh.
The first dot is the . (source) builtin, which makes the current shell read the script; ./my_script.sh is simply the path to the file.

The man page says:
exec [-cl] [-a name] [command [arguments]]
If command is specified, it replaces the shell. No new process is created. The arguments become the arguments to command. If the -l option is supplied, the shell places a dash at the beginning of the zeroth argument passed to command. This is what login(1) does. The -c option causes command to be executed with an empty environment.
If you want to execute a bash script in the current shell, use . my_script.sh or source my_script.sh. (Running bash my_script.sh or ./my_script.sh starts a new bash process, so changes such as exported variables will not affect the current shell.)

Related

Shell script closes iterm2 on exit

I need some help:
(On macOS, bash shell)
If I run a .sh file which calls e.g. exit 1 (any exit code), my terminal session ends (and the iTerm2 tab/window closes).
I'm calling the script like this: $ . myscript.sh
I'm pretty sure it should not behave like that, or at least it did not a while ago.
Using:
. myscript.sh
You are actually running the script in the existing shell, i.e. "sourcing" it. With exit at the end of the script, this means that the terminal session will also exit.
Alternatively:
./myscript.sh
or
bash myscript.sh
will run the script in a separate bash process, so the terminal session will not exit.
Instead of . myscript.sh you can run ./myscript.sh which will run it in a separate bash shell and will not exit the current session.
If you control the content of this .sh file and you do want to source the script, simply return 1 instead of exit 1, and use proper error handling.
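A minimal sketch of that approach (the directory and function name are made up for illustration; the file is meant to be sourced, not executed):

# myscript.sh - use with:  . myscript.sh
do_work() {
    cd /some/dir || return 1    # propagate the failure instead of exiting
    echo "working in $PWD"
}

if ! do_work; then
    echo "myscript.sh: something went wrong" >&2
    return 1                    # ends the sourced script, not the interactive shell
fi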

Trap bash errors from child script

I am calling a bash script (say child.sh) within my bash script (say parent.sh), and I would like have the errors in the script (child.sh) trapped in my parent script (parent.sh).
I read through the Medium article and the Stack Exchange post. Based on that, I thought I should set -E in my parent script so that the traps are inherited by subshells. Accordingly, my code is as follows:
parent.sh
#!/bin/bash
set -E
error() {
    echo -e "$0: \e[0;33mERROR: The Zero Touch Provisioning script failed while running the command $BASH_COMMAND at line $BASH_LINENO.\e[0m" >&2
    exit 1
}
trap error ERR
./child.sh
child.sh
#!/bin/bash
ls -al > /dev/null
cd non_exisiting_dir #To simulate error
echo "$0: I am still continuing after error"
Output
./child.sh: line 5: cd: non_exisiting_dir: No such file or directory
./child.sh: I am still continuing after error
Can you please let me know what I am missing so that the traps defined in the parent script are inherited?
./child.sh does not run in a "subshell".
A subshell is not just any child process of your shell that happens to be a shell too; it is the special environment in which the commands inside (...), $(...) or ...|... are run, and it is usually implemented by forking the current shell without executing a new shell.
If you want to run child.sh in a subshell, then source that script from a subshell you can create with (...):
(. ./child.sh)
which will inherit your ERR trap because of set -E.
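A minimal sketch of the modified parent.sh (the trap is the one from the question with a shortened message; only the last line changes):

#!/bin/bash
set -E

error() {
    echo "$0: ERROR: command \"$BASH_COMMAND\" failed at line $BASH_LINENO" >&2
    exit 1
}
trap error ERR

(. ./child.sh)    # sourced inside a (...) subshell, so the ERR trap is inherited via set -E
# note: the failing subshell can also trigger the parent's own ERR trap afterwards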
Notes:
There are other places where bash runs the commands in a subshell: process substitutions (<(...)), coprocesses, the command_not_found_handle function, etc.
In some shells (but not in bash by default) the last command of a pipeline is not run in a subshell. For instance ksh -c ':|a=2; echo $a' will print 2 (bash can opt into similar behaviour; see the sketch after these notes). Also, not all shells implement subshells by forking a separate process.
Even if bash infamously allows functions to be exported to other bash scripts via the environment (with export -f funcname), that's AFAIK not possible with traps ;-)
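For comparison, bash can opt into running the last pipeline command in the current shell with the lastpipe option; a minimal sketch (assuming bash 4.2 or later, in a non-interactive shell where job control is off):

#!/usr/bin/env bash
shopt -s lastpipe    # run the last pipeline command in the current shell
: | a=2              # the assignment is no longer lost in a subshell
echo "$a"            # prints 2, like the ksh example above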

Writing a bash script, how do I stop my session from exiting when my script exits?

bash scripting noob here. I've found this article: https://www.shellhacks.com/print-usage-exit-if-arguments-not-provided/ that suggests putting
[ $# -eq 0 ] && { echo "Usage: $0 argument"; exit 1; }
at the top of a script to ensure arguments are passed. Seems sensible.
However, when I do that and test that the line does indeed work (by running the script without supplying any arguments: . myscript.sh), the script exits, but so does the bash session I was calling the script from. This is very irritating.
Clearly I'm doing something wrong but I don't know what. Can anyone put me straight?
. myscript.sh is a synonym for source myscript.sh, which runs the script in the current shell (rather than as a separate process). So exit terminates your current shell. (return, on the other hand, wouldn't; it has special behaviour for sourced scripts.)
Use ./myscript.sh to run it "the normal way" instead. If that gives you a permission error, make it executable first, using chmod a+x myscript.sh. To inform the kernel that your script should be run with bash (rather than /bin/sh), add the following as the very first line in the script:
#!/usr/bin/env bash
You can also use bash myscript.sh if you can't make it executable, but this is slightly more error-prone (somebody might do sh myscript.sh instead).
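To illustrate the parenthetical about return: if the script really is meant to be sourced, the guard can use return instead of exit. A minimal sketch (note that $0 expands to the shell's name rather than the script's when sourced, so the usage message is hard-coded here):

# myscript.sh - intended to be sourced:  . myscript.sh some_argument
[ $# -eq 0 ] && { echo "Usage: myscript.sh argument" >&2; return 1; }
echo "got argument: $1"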
The question is not entirely clear, but: if you source the script (source script_name or . script_name), it is interpreted in the current bash process; running a function is the same, it runs in the same process. Otherwise, when you call a script, the calling bash forks a new bash process and waits until it terminates (so an exit in the script does not exit the caller); but running the exit builtin in the current bash exits the current process.

Understanding exec command

Looking for some basic help in shell programming.
Suppose we have a command known as foobar, then what is the effect of shell invocation
exec foobar
exec 2> /var/log/foobar.log
The first exec command should only be used in a script — not at a command line terminal. It replaces the shell with the program foobar, instead of running it as a separate child process. Any commands in the script after the exec foobar will not be executed (even if the shell fails to find foobar to execute); if it is an interactive terminal session, it will report the error and continue.
exec [-cl] [-a name] [command [arguments]]
If command is supplied, it replaces the shell without creating a new process. If the -l option is supplied, the shell places a dash at the beginning of the zeroth argument passed to command. This is what the login program does. The -c option causes command to be executed with an empty environment. If -a is supplied, the shell passes name as the zeroth argument to command. If command cannot be executed for some reason, a non-interactive shell exits, unless the execfail shell option is enabled. In that case, it returns failure. An interactive shell returns failure if the file cannot be executed.
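This makes exec useful at the end of wrapper scripts; a minimal sketch (foobar and the environment variable are placeholders):

#!/usr/bin/env bash
# Hypothetical wrapper: set up the environment, then hand the process over to foobar.
export FOOBAR_MODE=fast
exec foobar "$@"
echo "never reached"    # not executed: foobar replaced the shell, or the script exited because exec failed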
The second exec (with I/O redirection but no command) changes things so that the standard error stream goes to the file /var/log/foobar.log. Any further error messages from the shell, or from commands executed by the shell, go to the log file (unless there's another lot of I/O redirection).
If no command is specified, redirections may be used to affect the current shell environment. If there are no redirection errors, the return status is zero; otherwise the return status is non-zero.
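A minimal sketch of the redirection-only form inside a script (the log path is just an example):

#!/usr/bin/env bash
exec 2> /tmp/foobar-errors.log    # from here on, stderr goes to the log file

echo "this still goes to stdout"
ls /no/such/dir                   # the error message lands in the log, not on the terminal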
exec foobar
will replace your shell process with foobar. I do not think you mean exec 2> /var/log/foobar.log but rather exec foobar 2> /var/log/foobar.log, which does the same while also sending file descriptor 2, i.e. standard error messages, to the specified log file. See the man page for details.
The exec(1) command is similar to the exec(3) library call. It replaces the code segment of the calling process with that of the called program. The 1 and 3 refer to man page sections.

Shell logout after exec redirection for stdin

As described in the Advanced Bash-Scripting Guide, exec can be used to redirect I/O.
So I tried a few cases in my shell. Redirecting stdout or stderr works well, but redirecting stdin makes the shell log out. Any explanation?
Commands:
exec < file
The shell exits when it reaches EOF on its standard input (that's why you type Control-D to logout). When it has finished reading from file, it will exit as there is no more input to come.
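Inside a script the same redirection is harmless; it just changes where reads come from until EOF. A minimal sketch (the input file is made up):

#!/usr/bin/env bash
printf 'first line\nsecond line\n' > /tmp/input.txt

exec < /tmp/input.txt       # stdin of this script now comes from the file
while read -r line; do
    echo "got: $line"
done
# The loop ends at EOF; an interactive login shell in the same situation
# would exit instead, which is what looks like a logout.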
From bash's BASH_BUILTINS man page (man exec):
exec [-cl] [-a name] [command [arguments]]
If command is specified, it replaces the shell. No new process is created. The arguments become the arguments to command. If the -l option is supplied, the shell places a dash at the beginning of the zeroth argument passed to command. This is what login(1) does. The -c option causes command to be executed with an empty environment. If -a is supplied, the shell passes name as the zeroth argument to the executed command. If command cannot be executed for some reason, a non-interactive shell exits, unless the shell option execfail is enabled, in which case it returns failure. An interactive shell returns failure if the file cannot be executed. If command is not specified, any redirections take effect in the current shell, and the return status is 0. If there is a redirection error, the return status is 1.
So, as you can see: when the command given to exec finishes, the process is gone, and when exec fails, a non-interactive shell exits too.
Redirecting a file into the shell's stdin with exec will likewise end the session once the file has been read, unless the file contains something valid that keeps running until you quit it (otherwise its contents run, and then the shell exits).
