Handling temporary files in Bash

I need to copy and execute a bash script from within a parent bash script; when the job is done (or if it fails), I need the parent script to remove the child script file that it copied.
Here's the code snippet that I'm working on:
if [ -e $repo_path/install ]; then
    cp $repo_path/install $install_path
    exec $install_path/install
    rm $install_path/install
fi
This fails for some reason; the parent script seems to exit altogether when the child process ends.
Is it correct to use exec in this example?

exec replaces your current process, so the statements after that will never be reached.
You may replace exec with sh or bash, or just remove it if the child script is executable.
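A minimal sketch of the corrected snippet, following the answer's suggestion to use bash (the variables are assumed to be set earlier in the parent script):
if [ -e "$repo_path/install" ]; then
    cp "$repo_path/install" "$install_path"
    bash "$install_path/install"
    rm "$install_path/install"
fi
Because rm now runs after the child finishes, the copy is removed whether the child succeeds or fails.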
See also: The Bash Reference Manual for exec

Related

Not able to exit from docker container with bash script containing exit command

I'm able to exit when I enter the exit command in the container environment. But if I try to run a script file having the exit command, I'm not able to exit from the container.
1. working
ubuntu@iot-docker:/repo$ exit
exit
root@iot-docker:/repo# exit
exit
ubuntu@ubuntu-***-Twr:~/shirisha/plo-***-snt-sp_u103a3$
2. not working
script.sh
#!/bin/bash
exit
exit is not a command to exit your container; it just exits the current shell interpreter.
When you run your script, a new shell interpreter is started according to the first line of your script (here /bin/bash). When it encounters the exit command, the interpreter stops and you get back to the command line (the previous shell).
You can try this experiment:
$ bash # Starts a new shell
$ exit # Exits the new shell; we come back to the old one
exit
$
See? Running bash in command line is similar to running your script, and exiting from it brings you back to your previous shell. You didn't exit your container.
Solution:
exec script.sh param1 ... paramN
exec will replace your current shell with the command being started (script.sh). When that command exits, you will exit your container because your old shell no longer exists.
When you run a script without "sourcing" it, the script is started in a new subprocess. The exit works: it finishes that subprocess.
It is important to remember that a script starts a new environment.
Look at the script example.sh
#!/bin/bash
my_value=high
cd /tmp
Call this script with
cd $HOME
my_value="low"
./example.sh
pwd
echo "My value is now ${my_value}"
Nothing has changed: pwd still prints your home directory and my_value is still low, because all changes made in the subprocess are gone.
If you instead call the script with source ./example.sh (or, for short, . ./example.sh),
the changes persist.
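For example (an expected session, assuming example.sh is in the current directory):
$ cd $HOME
$ my_value=low
$ source ./example.sh
$ pwd
/tmp
$ echo "My value is now ${my_value}"
My value is now high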
When you don't want to source your script, a function (in .bashrc) might help:
example() {
    my_value=high
    cd /tmp
}
Now you can call the function:
cd $HOME
my_value="low"
example
pwd
echo "My value is now ${my_value}"

Trap bash errors from child script

I am calling a bash script (say child.sh) within my bash script (say parent.sh), and I would like to have errors in the script (child.sh) trapped in my parent script (parent.sh).
I read through the medium article and the stack exchange post. Based on those I thought I should use set -E in my parent script so that traps are inherited by subshells. Accordingly, my code is as follows:
parent.sh
#!/bin/bash
set -E
error() {
    echo -e "$0: \e[0;33mERROR: The Zero Touch Provisioning script failed while running the command $BASH_COMMAND at line $BASH_LINENO.\e[0m" >&2
    exit 1
}
trap error ERR
./child.sh
child.sh
#!/bin/bash
ls -al > /dev/null
cd non_exisiting_dir #To simulate error
echo "$0: I am still continuing after error"
Output
./child.sh: line 5: cd: non_exisiting_dir: No such file or directory
./child.sh: I am still continuing after error
Can you please let me know what I am missing so that the child script inherits the traps defined in the parent script?
./child.sh does not run in a "subshell".
A subshell is not just any child process of your shell that happens to be a shell too; it is the special environment in which the commands inside (...), $(...) or ...|... run, and it is usually implemented by forking the current shell without executing another shell.
If you want to run child.sh in a subshell, then source that script from a subshell you can create with (...):
(. ./child.sh)
which will inherit your ERR trap because of set -E.
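Applied to the question's code, parent.sh becomes (a sketch; only the last line changes):
#!/bin/bash
set -E
error() {
    echo -e "$0: \e[0;33mERROR: The Zero Touch Provisioning script failed while running the command $BASH_COMMAND at line $BASH_LINENO.\e[0m" >&2
    exit 1
}
trap error ERR
# source child.sh inside a (...) subshell instead of executing it;
# set -E makes the subshell inherit the ERR trap
(. ./child.sh)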
Notes:
There are other places where bash runs the commands in a subshell: process substitutions (<(...)), coprocesses, the command_not_found_handle function, etc.
In some shells (not in bash, unless shopt -s lastpipe is set) the last command of a pipeline is not run in a subshell. For instance ksh -c ':|a=2; echo $a' will print 2. Also, not all shells implement subshells by forking a separate process.
Even if bash infamously allows functions to be exported to other bash scripts via the environment (with export -f funcname), that's AFAIK not possible with traps ;-)

How to exec bash script w/o exiting shell

I want to execute a bash script in the current shell, not a subshell/subprocess. I believe using exec allows you to do that.
But if I run:
exec ./my_script.sh
the shell will exit and it will say "process completed". Is there a way to execute a script in the same shell somehow, without exiting the process?
note the only thing my_script.sh does is:
export foo=bar
but if there was some error in my script, it didn't appear in the console.
As @Snakienn has said, you can / should use the "." builtin command to "source" the file containing commands; e.g.
. ./my_script.sh
What this does is to temporarily change where the shell is reading commands from. Commands and other shell input are read from the file as if they were being read from the terminal. When the shell gets to the end-of-file, it switches back to taking input from the console.
This is called "sourcing". (Not "execing".) And indeed the shell accepts source as an alternative name for the . command.
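A quick check that sourcing affects the current shell, using the question's my_script.sh (which only does export foo=bar):
$ . ./my_script.sh
$ echo "$foo"
bar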
The exec command does something completely different. As the bash manual entry says:
exec [-cl] [-a name] [command [arguments]]
If command is specified, it replaces the shell. No new process is created. The arguments become the arguments to command.
The concept and terminology of exec come from early UNIX (i.e. Unix V6 or earlier), where the syscalls for running a child command were fork and exec. The procedure was:
fork the current process, creating a clone of it (the child)
in the child process, exec the new command
in the parent process, wait for the child process to complete
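You can see the same two steps at the shell level (a rough sketch; /bin/date is just a stand-in command):
# fork: the parentheses create a subshell, a clone of the current shell
# exec: the builtin replaces that subshell with the command
( exec /bin/date )
# the parent shell waits for the child and then carries on
echo "still here"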
You can try . ./my_script.sh.
The first dot is shorthand for the source builtin, which makes the script run in the current shell.
The man page says:
exec [-cl] [-a name] [command [arguments]]
    If command is specified, it replaces the shell. No new process is created. The arguments become the arguments to command. If the -l option is supplied, the shell places a dash at the beginning of the zeroth argument passed to command. This is what login(1) does. The -c option causes command to be executed with an empty environment.
Note that bash my_script.sh and ./my_script.sh run the script in a new process, not the current shell; to execute it in the current shell, use . ./my_script.sh or source my_script.sh.

Writing a bash script, how do I stop my session from exiting when my script exits?

bash scripting noob here. I've found this article: https://www.shellhacks.com/print-usage-exit-if-arguments-not-provided/ that suggests putting
[ $# -eq 0 ] && { echo "Usage: $0 argument"; exit 1; }
at the top of a script to ensure arguments are passed. Seems sensible.
However, when I do that and test that the line works (by running the script without supplying any arguments: . myscript.sh), the script does indeed exit, but so does the bash session that I was calling the script from. This is very irritating.
Clearly I'm doing something wrong but I don't know what. Can anyone put me straight?
. myscript.sh is a synonym for source myscript.sh, which runs the script in the current shell (rather than as a separate process). So exit terminates your current shell. (return, on the other hand, wouldn't; it has special behaviour for sourced scripts.)
Use ./myscript.sh to run it "the normal way" instead. If that gives you a permission error, make it executable first, using chmod a+x myscript.sh. To inform the kernel that your script should be run with bash (rather than /bin/sh), add the following as the very first line in the script:
#!/usr/bin/env bash
You can also use bash myscript.sh if you can't make it executable, but this is slightly more error-prone (somebody might do sh myscript.sh instead).
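If the usage check must also behave sensibly when the script is sourced, one common idiom (a sketch) is to try return first and fall back to exit:
# return works when sourced; when executed it fails (error silenced), so exit runs
[ $# -eq 0 ] && { echo "Usage: $0 argument"; return 1 2>/dev/null || exit 1; }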
The question is not entirely clear. If you source the script (source script_name or . script_name), it is interpreted in the current bash process; running a function is the same, it runs in the same process. Otherwise, when calling a script, the calling bash forks a new bash process and waits until it terminates, so an exit in the script doesn't exit the caller process. But running the exit builtin in the current bash exits the current process.

Any reason not to exec in shell script?

I have a bunch of wrapper shell scripts which manipulate command line arguments and do some stuff before invoking another binary at the end. Is there any reason to not always exec the binary at the end? It seems like this would be simpler and more efficient, but I never see it done.
If you check /usr/bin, you will likely find many many shell scripts that end with an exec command. Just as an example, here is /usr/bin/ps2pdf (debian):
#!/bin/sh
# Convert PostScript to PDF.
# Currently, we produce PDF 1.4 by default, but this is not guaranteed
# not to change in the future.
version=14
ps2pdf="`dirname \"$0\"`/ps2pdf$version"
if test ! -x "$ps2pdf"; then
    ps2pdf="ps2pdf$version"
fi
exec "$ps2pdf" "$@"
exec is used because it eliminates the need for keeping the shell process active after it is no longer needed.
My /usr/bin directory has over 150 shell scripts that use exec. So, the use of exec is common.
A reason not to use exec would be if there was some processing to be done after the binary finished executing.
I disagree with your assessment that this is not a common practice. That said, it's not always the right thing.
The most common scenario where I end a script with the execution of another command, but can't reasonably use exec, is if I need a cleanup hook to be run after the command at the end finishes. For instance:
#!/bin/sh
# create a temporary directory
tempdir=$(mktemp -t -d myprog.XXXXXX)
cleanup() { rm -rf "$tempdir"; }
trap cleanup 0
# use that temporary directory for our program
exec myprog --workdir="$tempdir" "$@"
...won't actually clean up tempdir after execution! Changing that exec myprog to merely myprog has some disadvantages -- continued memory usage from the shell, an extra process-table entry, signals being potentially delivered to the shell rather than to the program that it's executing -- but it also ensures that the shell is still around on myprog's exit to run any traps required.
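A version that keeps the cleanup hook working (a sketch; myprog stands in for the question's binary, as above):
#!/bin/sh
# create a temporary directory
tempdir=$(mktemp -t -d myprog.XXXXXX)
cleanup() { rm -rf "$tempdir"; }
trap cleanup 0
# no exec: the shell survives myprog, runs the EXIT trap, and passes on its exit status
myprog --workdir="$tempdir" "$@"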
