Is there any mechanism in shell scripts like the "include guard" in C++?

Let's look at an example: in my main.sh, I'd like to source a.sh and b.sh. a.sh, however, might have already sourced b.sh, which would cause the code in b.sh to be executed twice. Is there any mechanism like the "include guard" in C++?

If you're sourcing scripts, you are usually using them to define functions and/or variables.
That means you can test whether the script has been sourced before by testing for (one of) the functions or variables it defines.
For example (in b.sh):
if [ -z "$B_SH_INCLUDED" ]
then
  B_SH_INCLUDED=yes
  # ...rest of original contents of b.sh
fi
There is no other way to do it that I know of. In particular, you can't do early exits or returns because that would affect the shell sourcing the file. You don't have to invent a name used solely for this purpose; you could test for a name that the file always defines anyway.
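For instance, if b.sh always defines a certain function, you can guard on that function's existence instead of a dedicated flag variable (a sketch; the function name here is hypothetical):
if ! declare -F setup_widgets > /dev/null
then
  setup_widgets() {
    echo "doing the setup"
  }
  # ...rest of original contents of b.sh
fi
declare -F prints the name of a defined function and returns 0, or returns 1 if the function is not defined, so the body runs only on the first source.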

In bash, an early return does not affect the sourcing file; it returns to it as if the current file were a function. I prefer this method because it avoids wrapping the entire contents of the file in if ... fi.
if [ -n "$_for_example" ]; then return; fi
_for_example=`date`
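A quick demonstration of the early-return guard, assuming the two lines above live in b.sh:
$ source ./b.sh          # first time: guard variable is unset, so the body runs
$ source ./b.sh          # second time: returns immediately at the guard line
$ echo "$_for_example"   # still holds the timestamp captured by the first source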

TL;DR:
Bash lets you build a "source guard" which decides what to do depending on whether a script is executed or sourced.
Longer version:
Over the years of working with Bash sourcing I found that a different approach works excellently, which I will discuss below.
The problem for me was similar to the one of the original poster:
sourcing other scripts led to double script execution
additionally, scripts are less testable with unit test frameworks like BATS
The main idea of my solution is to write scripts in such a way that they can safely be sourced multiple times. A major part of this is extracting functionality into functions (as opposed to one large script, which does not test well).
So, only functions and global variables are defined, other scripts can be sourced at will.
As an example, consider the following three bash scripts:
main.sh
#!/usr/bin/env bash
source script2.sh
source script3.sh

GLOBAL_VAR=value

function_1() {
  echo "do something"
  function_2 "completely different"
}

run_main() {
  echo "starting..."
  function_1
}

# Enter: the source guard
# (make the script only run when executed, not when sourced)
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
  run_main "$@"
fi
script2.sh
#!/usr/bin/env bash
source script3.sh

ALSO_A_GLOBAL_VAR=value2

function_2() {
  echo "do something ${1}"
}

# this file can be sourced, or be executed if called directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
  function_2 "$@"
fi
script3.sh
#!/usr/bin/env bash
export SUPER_USEFUL_VAR=/tmp/path

function_3() {
  echo "hello again"
}

# No source guard here: this script only defines a variable and a function.
# Nothing is executed when it is called, because no code in it invokes the function.
Note that script3.sh is sourced twice. But since only functions and variables are (re-)defined, no functional code is executed during the sourcing.
Execution starts with running main.sh, as one would expect.
There might be a drawback when it comes to dependency cycles (in general a bad idea): I have no idea how Bash reacts if files source each other, directly or indirectly.
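Since testability with BATS was one of the stated goals, here is a minimal sketch of how such a test could look, assuming the BATS framework is installed (the test file name and assertion are hypothetical):
#!/usr/bin/env bats
# script2_test.bats - sourcing is safe because script2.sh only defines things

setup() {
  source ./script2.sh
}

@test "function_2 embeds its argument" {
  run function_2 "here"
  [ "$status" -eq 0 ]
  [ "$output" = "do something here" ]
}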

Personally, I usually use
set -o nounset # same as set -u
in most of my scripts, so I have to turn it off and back on around the guard check.
#!/usr/bin/env bash
set +u
if [ -n "$PRINTF_SCRIPT_USAGE_SH" ] ; then
  set -u
  return
else
  set -u
  readonly PRINTF_SCRIPT_USAGE_SH=1
fi
If you do not use nounset, you can do this instead:
[[ -n "$PRINTF_SCRIPT_USAGE_SH" ]] && return || readonly PRINTF_SCRIPT_USAGE_SH=1
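Alternatively, a default expansion makes the guard safe under nounset, so the set +u / set -u dance is not needed at all (a sketch):
[ -n "${PRINTF_SCRIPT_USAGE_SH:-}" ] && return
readonly PRINTF_SCRIPT_USAGE_SH=1
The ${VAR:-} form substitutes an empty string when VAR is unset, which nounset permits.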

Related

Passing parameter from one script to another- Shell Scripting

I have a main_script.sh which will call sub_script.sh.
I have a variable in sub_script.sh which I would like to access in the main script.
I tried "export" and "env" with the variable, but when I try to echo it in main_script.sh I don't get the value.
for example:
sub_script.sh
export a=hello
echo $a
main_script.sh
$PGMHOME/sub_script.sh > output_file
echo $a
FYI : sub_script.sh is executing properly because I'm getting value of 'a' in output_file
But when I'm echoing the value of a in main_script, I'm not getting it.
P.S.: I know I can assign the variable directly in main_script.sh, but this is just an example; a lot of processing is done in sub_script.sh.
Environments (export-ed variables) are passed only "downwards" (from parent to child process), not upwards.
This means that if you want to run the sub-script from the main-script as a process, the sub-script must write the names-and-values somewhere so that the parent process (the main script) can read and process them.
There are many ways to do this, including simply printing them to standard output and having the parent script eval the result:
eval $(./sub_script)
There are numerous pitfalls to this (including, of course, that the sub-script could print rm -rf $HOME and the main script would execute it; the sub-script could simply do that directly, but it's even easier to accidentally print something bad than to accidentally do something bad, so this serves as an illustration). Note that the sub-script must carefully quote things:
#! /bin/sh
# sub-script
echo a=value for a
When evaled, this fails because value for a gets split on word boundaries: it evals to running the command for a with a=value set in its environment. The sub-script must use something more like:
echo a=\'value for a\'
so that the main script's eval $(./sub_script) sees a quoted assignment.
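One way to produce such quoting reliably is the bash printf %q format, which escapes a string so it can safely be reused as shell input (a sketch; this requires bash rather than plain sh):
#!/bin/bash
# sub_script: emit a safely quoted assignment on stdout
a='value for a'
printf 'a=%q\n' "$a"
The main script's eval $(./sub_script) then sees a single, correctly quoted assignment no matter what the value contains.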
If the sub-script needs to send output to standard output, it will need to write its variable settings elsewhere (perhaps to a temporary file, perhaps to a file descriptor set up in the main script). Note that if the output is sent to a file—this includes stdout, really—the main script can read the file carefully (rather than using a simple eval).
Another alternative (usable only in some, not all, cases) is to source the sub-script from the main script. This allows the sub-script to access everything from the main script directly. This is usually the simplest method, and therefore often the best. To source a sub-script you can use the . command:
#! /bin/sh
# main script
# code here
. ./sub_script # run commands from sub_script
# more code here
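Finally, if the sub-script exists to produce a single value, you can sidestep eval and sourcing entirely with command substitution, capturing the sub-script's stdout into a variable (a sketch using the question's file names):
#!/bin/sh
# main_script.sh: capture sub_script's output directly
a=$(./sub_script.sh)
echo "$a"
This only carries the script's stdout back, though, so it does not help when the sub-script must set many variables or also print regular output.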
Parameters are passed to a script as $1, $2, etc. You can call sub_script.sh from main_script.sh with a parameter, and sub_script.sh can in turn call main_script.sh again.
main_script.sh
#!/bin/sh
echo "main_script"
./sub_script.sh "hello world!"
sub_script.sh
#!/bin/sh
if [ "${1}" = "" ]; then
echo "calling main_script"
./main_script.sh
else
echo "sub_script called with parameter ${1}"
fi
./main_script.sh
calling main_script
main_script
sub_script called with parameter hello world!

What is the bash equivalent to Python's `if __name__ == '__main__'`?

In Bash, I would like to be able to both source a script and execute the file. What is Bash's equivalent to Python's if __name__ == '__main__'?
I didn't find a readily available question/solution about this topic on Stackoverflow (I suspect I am asking in such a way that doesn't match an existing question/answer, but this is the most obvious way I can think to phrase the question because of my Python experience).
P.S. regarding the possible duplicate question (if I had more time, I would have written a shorter response):
The linked-to question asks "How to detect if a script is being sourced", but this question asks "How do you create a bash script that can be both sourced AND run as a script?". The answer to this question may use some aspects of the previous question, but it has additional requirements/questions as follows:
Once you detect the script is being sourced, what is the best way to not run the script (and avoid unintended side effects, other than importing the functions of interest, like adding/removing/modifying the environment/variables)?
Once you detect the script is being run instead of sourced, what is the canonical way of implementing your script? (Put it in a function? Or maybe just put it after the if statement? If you put it after the if statement, will it have side effects?)
Most Google searches I found on Bash do not cover this topic (a bash script that can be both sourced and executed). What is the canonical way to implement this? Is the topic not covered because it is discouraged or bad to do? Are there gotchas?
Solution:
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
main "$#"
fi
I added this answer because I wanted an answer that was written in a style to mimic Python's if __name__ == '__main__' but in Bash.
Regarding the usage of BASH_SOURCE vs $_. I use BASH_SOURCE because it appears to be more robust than $_ (link1, link2).
Here is an example that I tested/verified with two Bash scripts.
script1.sh with xyz() function:
#!/bin/bash

xyz() {
  echo "Entering script1's xyz()"
}

main() {
  xyz
  echo "Entering script1's main()"
}

if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
  main "$@"
fi
script2.sh that tries to call function xyz():
#!/bin/bash
source script1.sh
xyz
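For reference, running script2.sh directly should print only script1's xyz() message: inside the sourced script1.sh, BASH_SOURCE[0] (script1.sh) differs from $0 (script2.sh), so script1's main() is not invoked:
$ ./script2.sh
Entering script1's xyz()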
Use FUNCNAME
FUNCNAME is an array that's only available within a function, where FUNCNAME[0] is the name of the function, and FUNCNAME[1] is the name of the caller. So for a top-level function, FUNCNAME[1] will be main in an executed script, or source in a sourced script.
#!/bin/bash

get_funcname_1(){
  printf '%s\n' "${FUNCNAME[1]}"
}

_main(){
  echo hello
}

if [[ $(get_funcname_1) == main ]]; then
  _main
fi
Example run:
$ bash funcname_test.sh
hello
$ source funcname_test.sh
$ _main
hello
I'm not sure how I stumbled across this. man bash doesn't mention the source value.
Alternate method: Use FUNCNAME[0] outside a function
This only works in Bash 4.3 and 4.4, and it is not documented.
if [[ ${FUNCNAME[0]} == main ]]; then
  _main
fi
There is none. I usually use this:
#!/bin/bash

main()
{
  # validate parameters
  echo "In main: $@"
  # your code here
}

main "$@"
If you'd like to know whether this script is being source'd, just wrap your main call in
if [[ "$_" != "$0" ]]; then
echo "Script is being sourced, not calling main()"
else
echo "Script is a subshell, calling main()"
main "$#"
fi
Reference: How to detect if a script is being sourced
return 2> /dev/null
For an idiomatic Bash way to do this, you can use return like so:
_main(){
  echo hello
}
# End sourced section
return 2> /dev/null
_main
Example run:
$ bash return_test.sh
hello
$ source return_test.sh
$ _main
hello
If the script is sourced, return will return to the parent (of course), but if the script is executed, return will produce an error which gets hidden, and the script will continue execution.
I have tested this on GNU Bash 4.2 to 5.0, and it's my preferred solution.
Warning: This doesn't work in most other shells.
This is based on part of mr.spuratic's answer on How to detect if a script is being sourced.
I have been using the following construct at the bottom of all my scripts:
[[ "$(caller)" != "0 "* ]] || main "$#"
Everything else in the script is defined in a function or is a global variable.
caller is documented to "Return the context of the current subroutine call." When a script is sourced, the result of caller starts with the line number of the script sourcing this one. If this script is not being sourced, it starts with "0 "
The reason I use != and || instead of = and && is that the latter will cause the script to return false when sourced. This may cause your outer script to exit if it is running under set -e.
Note that I only know this works with bash. It won't work in a POSIX shell. I don't know about other shells such as ksh or zsh.
I really think this is the most Pythonic and beautiful way to accomplish the equivalent of if __name__ == "__main__" in Bash:
From hello_world_best.sh in my eRCaGuy_hello_world repo:
#!/usr/bin/env bash
main() {
  echo "Running main."
  # Add your main function code here
}

if [ "${BASH_SOURCE[0]}" = "$0" ]; then
  # This script is being run.
  __name__="__main__"
else
  # This script is being sourced.
  __name__="__source__"
fi

# Only run `main` if this script is being **run**, NOT sourced (imported)
if [ "$__name__" = "__main__" ]; then
  main
fi
To run it, run ./hello_world_best.sh. Here is a sample run and output:
eRCaGuy_hello_world/bash$ ./hello_world_best.sh
Running main.
To source (import) it, run . ./hello_world_best.sh, . hello_world_best.sh, or source hello_world_best.sh, etc. When you source it, you'll see no output in this case, but it will give you access to the functions therein, so you can manually call main for instance. If you want to learn more about what "sourcing" means, see my answer here: Unix: What is the difference between source and export?. Here is a sample run and output, including manually calling the main function after sourcing the file:
eRCaGuy_hello_world/bash$ . hello_world_best.sh
eRCaGuy_hello_world/bash$ main
Running main.
Important:
I used to believe that using a technique based on "${FUNCNAME[-1]}" was better, but that is not the case! It turns out that technique does not gracefully handle nested scripts, where one script calls or sources another. To make that work, you have to 1) put "${FUNCNAME[-1]}" inside a function, and 2) change the index into the FUNCNAME array based on the level of script nesting, which isn't practical. So, do not use the "${FUNCNAME[-1]}" technique I used to recommend. Instead, use the if [ "${BASH_SOURCE[0]}" = "$0" ] technique I now recommend, as it perfectly handles nested scripts!
References
The main answer here where I first learned about "${BASH_SOURCE[0]}" = "$0"
Where I first learned about the ${FUNCNAME[-1]} trick: mr.spuratic's answer on How to detect if a script is being sourced; he learned it from Dennis Williamson, apparently.
A lot of personal study, trial, and effort over the last year.
My code: eRCaGuy_hello_world/bash/if__name__==__main___check_if_sourced_or_executed_best.sh
My other related but longer answer: How to detect if a script is being sourced

Pass all variables from one shell script to another?

Let's say I have a shell / bash script named test.sh with:
#!/bin/bash
TESTVARIABLE=hellohelloheloo
./test2.sh
My test2.sh looks like this:
#!/bin/bash
echo ${TESTVARIABLE}
This does not work. I do not want to pass all variables as parameters, since IMHO this is overkill.
Is there a different way?
You have basically two options:
Make the variable an environment variable (export TESTVARIABLE) before executing the 2nd script.
Source the 2nd script, i.e. . test2.sh and it will run in the same shell. This would let you share more complex variables like arrays easily, but also means that the other script could modify variables in the source shell.
UPDATE:
To use export to set an environment variable, you can either use an existing variable:
A=10
# ...
export A
This ought to work in both bash and sh. bash also allows it to be combined like so:
export A=10
This also works in my sh (which happens to be bash; you can use echo $SHELL to check). But I don't believe that that's guaranteed to work in all sh, so best to play it safe and separate them.
Any variable you export in this way will be visible in scripts you execute, for example:
a.sh:
#!/bin/sh
MESSAGE="hello"
export MESSAGE
./b.sh
b.sh:
#!/bin/sh
echo "The message is: $MESSAGE"
Then:
$ ./a.sh
The message is: hello
The fact that these are both shell scripts is also just incidental. Environment variables can be passed to any process you execute, for example if we used python instead it might look like:
a.sh:
#!/bin/sh
MESSAGE="hello"
export MESSAGE
./b.py
b.py:
#!/usr/bin/python
import os
print 'The message is:', os.environ['MESSAGE']
Sourcing:
Instead we could source like this:
a.sh:
#!/bin/sh
MESSAGE="hello"
. ./b.sh
b.sh:
#!/bin/sh
echo "The message is: $MESSAGE"
Then:
$ ./a.sh
The message is: hello
This more or less "imports" the contents of b.sh directly and executes it in the same shell. Notice that we didn't have to export the variable to access it. This implicitly shares all the variables you have, as well as allows the other script to add/delete/modify variables in the shell. Of course, in this model both your scripts should be the same language (sh or bash). To give an example how we could pass messages back and forth:
a.sh:
#!/bin/sh
MESSAGE="hello"
. ./b.sh
echo "[A] The message is: $MESSAGE"
b.sh:
#!/bin/sh
echo "[B] The message is: $MESSAGE"
MESSAGE="goodbye"
Then:
$ ./a.sh
[B] The message is: hello
[A] The message is: goodbye
This works equally well in bash. It also makes it easy to share more complex data which you could not express as an environment variable (at least without some heavy lifting on your part), like arrays or associative arrays.
Fatal Error gave a straightforward possibility: source your second script! If you're worried that this second script may alter some of your precious variables, you can always source it in a subshell:
( . ./test2.sh )
The parentheses will make the source happen in a subshell, so that the parent shell will not see the modifications test2.sh could perform.
There's another possibility that should definitely be referenced here: use set -a.
From the POSIX set reference:
-a: When this option is on, the export attribute shall be set for each variable to which an assignment is performed; see the Base Definitions volume of IEEE Std 1003.1-2001, Section 4.21, Variable Assignment. If the assignment precedes a utility name in a command, the export attribute shall not persist in the current execution environment after the utility completes, with the exception that preceding one of the special built-in utilities causes the export attribute to persist after the built-in has completed. If the assignment does not precede a utility name in the command, or if the assignment is a result of the operation of the getopts or read utilities, the export attribute shall persist until the variable is unset.
From the Bash Manual:
-a: Mark variables and function which are modified or created for export to the environment of subsequent commands.
So in your case:
set -a
TESTVARIABLE=hellohelloheloo
# ...
# Here put all the variables that will be marked for export
# and that will be available from within test2 (and all other commands).
# If test2 modifies the variables, the modifications will never be
# seen in the present script!
set +a
./test2.sh
# Here, even if test2 modifies TESTVARIABLE, you'll still have
# TESTVARIABLE=hellohelloheloo
Observe that the specs only specify that with set -a the variable is marked for export. That is:
set -a
a=b
set +a
a=c
bash -c 'echo "$a"'
will echo c and not an empty line nor b (that is, set +a doesn't unmark for export, nor does it “save” the value of the assignment only for the exported environment). This is, of course, the most natural behavior.
Conclusion: using set -a/set +a can be less tedious than exporting manually all the variables. It is superior to sourcing the second script, as it will work for any command, not only the ones written in the same shell language.
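A common application of set -a is exporting everything from a plain KEY=VALUE configuration file before running another script (a sketch; config.env is a hypothetical file containing only assignments):
set -a
. ./config.env   # every assignment in config.env is now marked for export
set +a
./test2.sh       # sees all the variables defined in config.env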
There's actually an easier way than exporting and unsetting or sourcing again (at least in bash, as long as you're ok with passing the environment variables manually):
let a.sh be
#!/bin/bash
secret="winkle my tinkle"
echo Yo, lemme tell you \"$secret\", b.sh!
Message=$secret ./b.sh
and b.sh be
#!/bin/bash
echo I heard \"$Message\", yo
Observed output is
[rob#Archie test]$ ./a.sh
Yo, lemme tell you "winkle my tinkle", b.sh!
I heard "winkle my tinkle", yo
The magic lies in the last line of a.sh, where Message, for only the duration of the invocation of ./b.sh, is set to the value of secret from a.sh.
Basically, it's a little like named parameters/arguments. More than that, though, it even works for variables like $DISPLAY, which controls which X Server an application starts in.
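The same one-shot prefix works for any command, not just shell scripts; the values below are purely illustrative:
DISPLAY=:1 xterm &       # start xterm on a different X display
EDITOR=vim crontab -e    # use vim for this single crontab invocation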
Remember, the length of the list of environment variables is not infinite. On my system with a relatively vanilla kernel, xargs --show-limits tells me the maximum size of the arguments buffer is 2094486 bytes. Theoretically, you're using shell scripts wrong if your data is any larger than that (pipes, anyone?)
In Bash if you export the variable within a subshell, using parentheses as shown, you avoid leaking the exported variables:
#!/bin/bash
TESTVARIABLE=hellohelloheloo
(
export TESTVARIABLE
source ./test2.sh
)
The advantage here is that after you run the script from the command line, you won't see a $TESTVARIABLE leaked into your environment:
$ ./test.sh
hellohelloheloo
$ echo $TESTVARIABLE
#empty! no leak
$
Adding to the answer of Fatal Error, there is one more way to pass the variables to another shell script.
The solutions suggested above have some drawbacks:
using export: it will cause the variable to be present outside of its scope, which is not a good design practice.
using source: it may cause name collisions or accidental overwriting of a predefined variable in some other shell script file which has sourced another file.
There is another simple solution available for us to use.
Considering the example posted by you,
test.sh
#!/bin/bash
TESTVARIABLE=hellohelloheloo
./test2.sh "$TESTVARIABLE"
test2.sh
#!/bin/bash
echo $1
output
hellohelloheloo
Also it is important to note that "" are necessary if we pass multiword strings.
Taking one more example
master.sh
#!/bin/bash
echo in master.sh
var1="hello world"
sh slave1.sh $var1
sh slave2.sh "$var1"
echo back to master
slave1.sh
#!/bin/bash
echo in slave1.sh
echo value :$1
slave2.sh
#!/bin/bash
echo in slave2.sh
echo value : $1
output
in master.sh
in slave1.sh
value :"hello
in slave2.sh
value :"hello world"
It happens because of the reasons aptly described in this link
Another option is using eval. This is only suitable if the strings are trusted. The first script can echo the variable assignments:
echo "VAR=myvalue"
Then:
eval $(./first.sh) ./second.sh
This approach is of particular interest when the second script you want to set environment variables for is not in bash and you also don't want to export the variables, perhaps because they are sensitive and you don't want them to persist.
Another way, which is a little bit easier for me, is to use named pipes. Named pipes provide a way to synchronize and send messages between different processes.
A.bash:
#!/bin/bash
msg="The Message"
echo $msg > A.pipe
B.bash:
#!/bin/bash
msg=`cat ./A.pipe`
echo "message from A : $msg"
Usage:
$ mkfifo A.pipe #You have to create it once
$ ./A.bash & ./B.bash # you have to run your scripts at the same time
B.bash will wait for message and as soon as A.bash sends the message, B.bash will continue its work.

Can a shell script set environment variables of the calling shell? [duplicate]

I'm trying to write a shell script that, when run, will set some environment variables that will stay set in the caller's shell.
setenv FOO foo
in csh/tcsh, or
export FOO=foo
in sh/bash only set it during the script's execution.
I already know that
source myscript
will run the commands of the script rather than launching a new shell, and that can result in setting the "caller's" environment.
But here's the rub:
I want this script to be callable from either bash or csh. In other words, I want users of either shell to be able to run my script and have their shell's environment changed. So 'source' won't work for me, since a user running csh can't source a bash script, and a user running bash can't source a csh script.
Is there any reasonable solution that doesn't involve having to write and maintain TWO versions on the script?
Use the "dot space script" calling syntax. For example, here's how to do it using the full path to a script:
. /path/to/set_env_vars.sh
And here's how to do it if you're in the same directory as the script:
. set_env_vars.sh
These execute the script under the current shell instead of loading another one (which is what would happen if you did ./set_env_vars.sh). Because it runs in the same shell, the environmental variables you set will be available when it exits.
This is the same thing as calling source set_env_vars.sh, but it's shorter to type and might work in some places where source doesn't.
Your shell process has a copy of the parent's environment and no access to the parent process's environment whatsoever. When your shell process terminates any changes you've made to its environment are lost. Sourcing a script file is the most commonly used method for configuring a shell environment, you may just want to bite the bullet and maintain one for each of the two flavors of shell.
You're not going to be able to modify the caller's shell because it's in a different process context. When child processes inherit your shell's variables, they're
inheriting copies themselves.
One thing you can do is to write a script that emits the correct commands for tcsh
or sh based on how it's invoked. If your script is "setit", then do:
ln -s setit setit-sh
and
ln -s setit setit-csh
Now either directly or in an alias, you do this from sh
eval `setit-sh`
or this from csh
eval `setit-csh`
setit uses $0 to determine its output style.
This is reminiscent of how people used to get the TERM environment variable set.
The advantage here is that setit is just written in whichever shell you like as in:
#!/bin/bash
arg0=$0
arg0=${arg0##*/}
for nv in \
    NAME1=VALUE1 \
    NAME2=VALUE2
do
  if [ x$arg0 = xsetit-sh ]; then
    echo 'export '$nv' ;'
  elif [ x$arg0 = xsetit-csh ]; then
    echo 'setenv '${nv%%=*}' '${nv##*=}' ;'
  fi
done
with the symbolic links given above, and the eval of the backquoted expression, this has the desired result.
To simplify invocation for csh, tcsh, or similar shells:
alias dosetit 'eval `setit-csh`'
or for sh, bash, and the like:
alias dosetit='eval `setit-sh`'
One nice thing about this is that you only have to maintain the list in one place.
In theory you could even stick the list in a file and put cat nvpairfilename between "in" and "do".
This is pretty much how login shell terminal settings used to be done: a script would output statements to be executed in the login shell. An alias would generally be used to make invocation simple, as in "tset vt100". As mentioned in another answer, there is also similar functionality in the INN UseNet news server.
In my .bash_profile I have :
# No Proxy
function noproxy
{
  /usr/local/sbin/noproxy  # turn off proxy server
  unset http_proxy HTTP_PROXY https_proxy HTTPS_PROXY
}

# Proxy
function setproxy
{
  sh /usr/local/sbin/proxyon  # turn on proxy server
  http_proxy=http://127.0.0.1:8118/
  HTTP_PROXY=$http_proxy
  https_proxy=$http_proxy
  HTTPS_PROXY=$https_proxy
  export http_proxy https_proxy HTTP_PROXY HTTPS_PROXY
}
So when I want to disable the proxy, the functions run in the login shell and set the variables as expected and wanted.
It's "kind of" possible through using gdb and setenv(3), although I have a hard time recommending actually doing this. (Additionally, i.e. the most recent ubuntu won't actually let you do this without telling the kernel to be more permissive about ptrace, and the same may go for other distros as well).
$ cat setfoo
#! /bin/bash
gdb /proc/${PPID}/exe ${PPID} <<END >/dev/null
call setenv("foo", "bar", 0)
END
$ echo $foo
$ ./setfoo
$ echo $foo
bar
This works — it isn't what I'd use, but it 'works'. Let's create a script teredo to set the environment variable TEREDO_WORMS:
#!/bin/ksh
export TEREDO_WORMS=ukelele
exec $SHELL -i
It will be interpreted by the Korn shell, exports the environment variable, and then replaces itself with a new interactive shell.
Before running this script, we have SHELL set in the environment to the C shell, and the environment variable TEREDO_WORMS is not set:
% env | grep SHELL
SHELL=/bin/csh
% env | grep TEREDO
%
When the script is run, you are in a new shell, another interactive C shell, but the environment variable is set:
% teredo
% env | grep TEREDO
TEREDO_WORMS=ukelele
%
When you exit from this shell, the original shell takes over:
% exit
% env | grep TEREDO
%
The environment variable is not set in the original shell's environment. If you use exec teredo to run the command, then the original interactive shell is replaced by the Korn shell that sets the environment, and then that in turn is replaced by a new interactive C shell:
% exec teredo
% env | grep TEREDO
TEREDO_WORMS=ukelele
%
If you type exit (or Control-D), then your shell exits, probably logging you out of that window, or taking you back to the previous level of shell from where the experiments started.
The same mechanism works for Bash or Korn shell. You may find that the prompt after the exit commands appears in funny places.
Note the discussion in the comments. This is not a solution I would recommend, but it does achieve the stated purpose of a single script to set the environment that works with all shells (that accept the -i option to make an interactive shell). You could also add "$@" after the option to relay any other arguments, which might then make the shell usable as a general 'set environment and execute command' tool. You might want to omit the -i if there are other arguments, leading to:
#!/bin/ksh
export TEREDO_WORMS=ukelele
exec $SHELL "${@-'-i'}"
The "${@-'-i'}" bit means 'if the argument list contains at least one argument, use the original argument list; otherwise, substitute -i for the non-existent arguments'.
You should use modules; see http://modules.sourceforge.net/
EDIT: The modules package has not been updated since 2012, but it still works OK for the basics. All the new features, bells and whistles happen in lmod these days (which I like more): https://www.tacc.utexas.edu/research-development/tacc-projects/lmod
Another workaround that I don't see mentioned is to write the variable value to a file.
I ran into a very similar issue where I wanted to be able to run the last set test (instead of all my tests). My first plan was to write one command for setting the env variable TESTCASE, and then have another command that would use this to run the test. Needless to say, I had the exact same issue as you did.
But then I came up with this simple hack:
First command ( testset ):
#!/bin/bash
if [ $# -eq 1 ]
then
  echo $1 > ~/.TESTCASE
  echo "TESTCASE has been set to: $1"
else
  echo "Come again?"
fi
Second command (testrun ):
#!/bin/bash
TESTCASE=$(cat ~/.TESTCASE)
drush test-run $TESTCASE
You can instruct the child process to print its environment variables (by calling "env"), then loop over the printed environment variables in the parent process and call "export" on those variables.
The following code is based on Capturing output of find . -print0 into a bash array
If the parent shell is bash, you can use:
while IFS= read -r -d $'\0' line; do
  export "$line"
done < <(bash -s <<< 'export VARNAME=something; env -0')
echo $VARNAME
If the parent shell is dash, then read does not provide the -d flag and the code gets more complicated:
TMPDIR=$(mktemp -d)
mkfifo $TMPDIR/fifo
(bash -s << "EOF"
export VARNAME=something
while IFS= read -r -d $'\0' line; do
echo $(printf '%q' "$line")
done < <(env -0)
EOF
) > $TMPDIR/fifo &
while read -r line; do export "$(eval echo $line)"; done < $TMPDIR/fifo
rm -r $TMPDIR
echo $VARNAME
Under OS X bash you can do the following:
Create the bash script file to unset the variable
#!/bin/bash
unset http_proxy
Make the file executable
sudo chmod 744 unsetvar
Create alias
alias unsetvar='source /your/path/to/the/script/unsetvar'
It should be ready to use as long as you have the folder containing your script file appended to your PATH.
It's not what I would call outstanding, but this also works if you need to call the script from the shell anyway. It's not a good solution, but for a single static environment variable, it works well enough.
1.) Create a script with a condition that exits either 0 (successful) or 1 (not successful):
if [[ $foo == "True" ]]; then
  exit 0
else
  exit 1
fi
2.) Create an alias that is dependent on the exit code (the alias name here is arbitrary):
alias myalias='myscript.sh && export MyVariable'
You call the alias, which calls the script, which evaluates the condition, which is required to exit zero via the '&&' in order to set the environment variable in the parent shell.
This is flotsam, but it can be useful in a pinch.
You can invoke another Bash with a different bash_profile.
Also, you can create a special bash_profile for use in a multi-bash_profile environment.
Remember that you can use functions inside of bash_profile, and that those functions will be available globally.
For example, "function user { export USER_NAME=$1; }" can set a variable at runtime, for example: user olegchir && env | grep olegchir
Another option is to use "Environment Modules" (http://modules.sourceforge.net/). This unfortunately introduces a third language into the mix. You define the environment in Tcl, but there are a few handy commands for typical modifications (prepend vs. append vs. set). You will also need to have environment modules installed. You can then use module load *XXX* to name the environment you want. The module command is basically a fancy alias for the eval mechanism described above by Thomas Kammeyer. The main advantage here is that you can maintain the environment in one language and rely on "Environment Modules" to translate it to sh, ksh, bash, csh, tcsh, zsh, python (?!?!!), etc.
I created a solution using pipes, eval and signal.
parent() {
  if [ -z "$G_EVAL_FD" ]; then
    die 1 "Run parent_setup in the parent process first"
  fi
  if [ $(ppid) = "$$" ]; then
    "$@"
  else
    kill -SIGUSR1 $$
    echo "$@" >&$G_EVAL_FD
  fi
}

parent_setup() {
  G_EVAL_FD=99
  tempfile=$(mktemp -u)
  mkfifo "$tempfile"
  eval "exec $G_EVAL_FD<>'$tempfile'"
  rm -f "$tempfile"
  trap "read CMD <&$G_EVAL_FD; eval \"\$CMD\"" USR1
}

parent_setup # on parent shell context

( A=1 ); echo $A        # prints nothing
( parent A=1 ); echo $A # prints 1
It might work with any command.
I don't see any answer documenting how to work around this problem with cooperating processes. A common pattern with things like ssh-agent is to have the child process print an expression which the parent can eval.
bash$ eval $(ssh-agent)
For example, ssh-agent has options to select Csh or Bourne-compatible output syntax.
bash$ ssh-agent
SSH2_AUTH_SOCK=/tmp/ssh-era/ssh2-10690-agent; export SSH2_AUTH_SOCK;
SSH2_AGENT_PID=10691; export SSH2_AGENT_PID;
echo Agent pid 10691;
(This causes the agent to start running, but doesn't allow you to actually use it, unless you now copy-paste this output to your shell prompt.) Compare:
bash$ ssh-agent -c
setenv SSH2_AUTH_SOCK /tmp/ssh-era/ssh2-10751-agent;
setenv SSH2_AGENT_PID 10752;
echo Agent pid 10752;
(As you can see, csh and tcsh use setenv to set variables.)
Your own program can do this, too.
bash$ foo=$(makefoo)
Your makefoo script would simply calculate and print the value, and let the caller do whatever they want with it -- assigning it to a variable is a common use case, but probably not something you want to hard-code into the tool which produces the value.
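A makefoo along those lines could be as small as this (a sketch; the script name comes from the answer, and the value is purely illustrative):
#!/bin/sh
# makefoo: compute the value and print it; the caller decides what to do with it
printf '%s\n' "foo-$(hostname)"
The caller then picks it up with foo=$(makefoo) and keeps it, exports it, or writes it to a file as it sees fit.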
Technically, that is correct -- only 'eval' doesn't fork another shell. However, from the point of view of the application you're trying to run in the modified environment, the difference is nil: the child inherits the environment of its parent, so the (modified) environment is conveyed to all descending processes.
Ipso facto, the changed environment variable 'sticks' -- as long as you are running under the parent program/shell.
If it is absolutely necessary for the environment variable to remain after the parent (Perl or shell) has exited, it is necessary for the parent shell to do the heavy lifting. One method I've seen in the documentation is for the current script to spawn an executable file with the necessary 'export' language, and then trick the parent shell into executing it, always being cognizant of the fact that you need to preface the command with 'source' if you're trying to leave a non-volatile version of the modified environment behind. A kludge at best.
The second method is to modify the script that initiates the shell environment (.bashrc or whatever) to contain the modified parameter. This can be dangerous -- if you hose up the initialization script it may make your shell unavailable the next time it tries to launch. There are plenty of tools for modifying the current shell; by affixing the necessary tweaks to the 'launcher' you effectively push those changes forward as well.
Generally not a good idea; if you only need the environment changes for a particular application suite, you'll have to go back and return the shell launch script to its pristine state (using vi or whatever) afterwards.
In short, there are no good (and easy) methods. Presumably this was made difficult to ensure the security of the system was not irrevocably compromised.
The short answer is no, you cannot alter the environment of the parent process, but it seems like what you want is an environment with custom environment variables and the shell that the user has chosen.
So why not simply something like
#!/usr/bin/env bash
FOO=foo $SHELL
Then when you are done with the environment, just exit.
You could always use aliases
alias your_env='source ~/scripts/your_env.sh'
I did this many years ago. If I remember correctly, I included an alias in each of .bashrc and .cshrc, with parameters, aliasing the respective forms of setting the environment to a common form.
Then the script that you source in either of the two shells uses that common form, which is suitably aliased in each shell.
If I find the concrete aliases, I will post them.
Other than writing conditionals depending on what $SHELL/$TERM is set to, no. What's wrong with using Perl? It's pretty ubiquitous (I can't think of a single UNIX variant that doesn't have it), and it'll spare you the trouble.
