Set an environment variable and retrieve it later - shell

I would like to make an executable shell script that performs a certain action when launched, and undoes that action when launched a second time. I've tried to define an environment variable to determine whether the action has already been executed, but I can't make it work.
#!/bin/bash
if [ -z ${CUSTOM_VARIABLE} ]
then
    CUSTOM_VARIABLE=true
    export CUSTOM_VARIABLE
    # do stuff here
else
    # undo stuff here
    unset CUSTOM_VARIABLE
fi
Is this a correct approach? How can I fix the code?
Thanks in advance

Note jdv's comment.
You cannot run your script as a standalone process and have it alter the environment of the parent. If you call it x.sh, then running
x.sh
will never change your env. On the other hand, you can source it:
. x.sh
which reads it and executes its contents in the current scope, so that would work. You aren't really creating a separate program that way - just storing a scripted list of commands and using source as a shorthand to execute them in bulk, so to speak.
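A quick way to see the difference, assuming the question's script is saved as x.sh and made executable:
$ ./x.sh                           # runs in a child process; the export dies with it
$ echo "${CUSTOM_VARIABLE:-unset}"
unset
$ . x.sh                           # runs in the current shell, so the export sticks
$ echo "${CUSTOM_VARIABLE:-unset}"
true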
You could also define it as a function for similar result -
$: x() {
    if [[ -z "${CUSTOM_VARIABLE}" ]]
    then export CUSTOM_VARIABLE=true
    else unset CUSTOM_VARIABLE
    fi
}
YMMV.
Good luck.

How can I change the environment of my bash function without affecting the environment it was called from?

I am working on a bash script that needs to operate on several directories. In each directory it needs to source a setup script unique to that directory and then run some commands. I need the environment set up when that script is sourced to only persist inside the function, as if it had been called as an external script with no persistent effects on the calling script.
As a simplified example if I have this script, sourced.sh:
export foo=old
And I have this as my driver.sh:
export foo=young
echo $foo
source_and_run() {
    source ./sourced.sh
    echo $foo
}
source_and_run
echo $foo
I want to know how to change the invocation of source_and_run in driver so it will print:
young
old
young
I need to be able to collect the return value from the function as well. I'm pretty sure there's a simple way to accomplish this but I haven't been able to track it down so far.
Creating another script like external.sh:
source ./sourced.sh; echo $foo
and defining source_and_run like
source_and_run() { ./external.sh; return $?; }
would work but managing that extra layer of scripts seems like it shouldn't be necessary.
You said
Creating another script like external.sh:
source ./sourced.sh; echo $foo
and defining source_and_run like
source_and_run() { ./external.sh; return $?; }
would work but managing that extra layer of scripts seems like it shouldn't be necessary.
You can get the same behavior by using a subshell. Note the () instead of {}.
But note that return $? is not necessary in your case. By default, a function returns the exit status of its last command. In your case, that command is echo. Therefore, the return value will always be 0.
source_and_run() (
    source ./sourced.sh
    echo "$foo"
)
By the way: A better solution would be to rewrite sourced.sh such that it prints $foo. That way you could call the script directly instead of having to source it and then using echo $foo.
The very purpose of a bash function is that it runs in the same process as the invoker.
Since the environment is accessible (for instance, via the printenv command), you could save the environment at the entry of the function and restore it at the end. However, the easier and more natural approach is to not use a function at all, but to make it a separate shell script, which is executed in its own process and hence has its own environment, so it no longer affects the environment of the caller.
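For completeness, a minimal sketch of that save-and-restore idea, limited to the single variable the example sourced.sh touches (a real setup script may change many variables, which is why the subshell answer above scales better):
source_and_run() {
    local saved_foo=$foo          # remember the caller's value

    source ./sourced.sh
    echo "$foo"                   # prints "old"

    export foo=$saved_foo         # put the caller's value back
}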

Watch for environment variable change - ZSH

Is there a way to watch for changes to an environment variable in zsh/bash? When switching my kubernetes environment, for example, I would like to be able to read the variable that's set and make changes to my terminal window if I'm in production vs. development etc.
The way we switch environments is part of our tooling. I'd like to be able to extend that on my own machine without having to update any tooling. If watching for an environment variable change isn't possible, I'm also looking for a way to use something similar to the builtin command.
Example: create a function with the same name as an alias, call that alias from within the function, then do some other action afterward.
Both shells provide a way to execute arbitrary code prior to displaying a prompt; you can use this to check the value of a specific variable and take an appropriate action.
In .bashrc:
# The function name doesn't matter; it's just easier
# to set PROMPT_COMMAND to the name of a function than
# to arbitrary code.
pre_prompt () {
    if [[ $SOME_VAR == "prod" ]]; then
        doSomething
    else
        doSomethingElse
    fi
}
PROMPT_COMMAND=pre_prompt
In .zshrc:
precmd () {
    if [[ $SOME_VAR == "prod" ]]; then
        doSomething
    else
        doSomethingElse
    fi
}
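As a concrete (hypothetical) example of what doSomething could be, the zsh hook might recolor the prompt whenever the variable points at production; KUBE_ENV is an invented variable name here:
precmd () {
    if [[ $KUBE_ENV == "prod" ]]; then
        PROMPT='%F{red}%n@%m %~ %# %f'      # red prompt as a warning
    else
        PROMPT='%F{green}%n@%m %~ %# %f'
    fi
}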

can I have the PATH variable evaluate to what directory I'm in?

On Linux using bash, let's say I made two programs, both called print_report.
(They are in different directories.)
Inside my .bashrc file, I have:
PATH="path/to/print_report1/:$PATH"
This allows me to type print_report anywhere and it will run one of the programs.
How can I have bash decide to use one or the other depending on the working directory?
for example,
If I'm currently in ~/project1/ and type print_report it will use /bin/foo/print_report
If I'm currently in ~/project2/ and type print_report it will use /bin/bar/print_report
You can't do that as such. Instead, write a wrapper script or function that checks the current directory and invokes the right command:
#!/bin/bash
if [[ $PWD == $HOME/project1/* ]]
then
    /bin/foo/print_report "$@"
elif [[ $PWD == $HOME/project2/* ]]
then
    /bin/bar/print_report "$@"
else
    echo "Don't know how to print_report for $PWD"
fi
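Assuming the wrapper is saved as ~/bin/print_report (a hypothetical location), it just needs to be executable and found before the real programs:
chmod +x ~/bin/print_report
export PATH="$HOME/bin:$PATH"    # e.g. in .bashrc, so the wrapper wins the PATH lookup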
You can emulate preexec hooks à la zsh, using the DEBUG trap.
In that way, every time a command is executed, you can run a preexec hook to check $PWD, and adjust $PATH accordingly.
You can include a preexec hook doing what you want in your .bashrc.
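A rough sketch of that approach (adjust_path and the directory names are made up for illustration; a real version might want to skip the work when $PWD has not changed):
BASE_PATH=$PATH            # remember the untouched PATH once
adjust_path () {
    case $PWD in
        "$HOME/project1"*) PATH="/bin/foo:$BASE_PATH" ;;
        "$HOME/project2"*) PATH="/bin/bar:$BASE_PATH" ;;
        *)                 PATH=$BASE_PATH ;;
    esac
}
trap adjust_path DEBUG     # fires before each command, emulating a preexec hook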
This is a security disaster waiting to happen (which is to say you really don't want to do this), but you can certainly do something like:
cd() {
    dir=${1-.}
    case $dir in
        path1) PATH=/path/for/working/in/path1 ;;
        path2) PATH=/path/for/working/in/path2 ;;
        *)     PATH=/bin:/usr/bin ;;
    esac
    command cd "$dir"
}
(Put that in your .bashrc or just define it in the current shell.)
Everything presented here so far strikes me as overly complicated and needlessly complex hackery. I would just place a Makefile in each directory, with
report:
	/bin/foo/print_report
in ~/project1/Makefile, and
report:
	/bin/bar/print_report
in ~/project2/Makefile (note that the recipe line must be indented with a tab). This extends easily to as many directories and programs as you want. And you only need to type make instead of those long-winded command names :-)
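Then, in either project directory, a plain make is enough (report is the only target, so it is the default; make also echoes the recipe line before running it):
$ cd ~/project1
$ make
/bin/foo/print_report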

Is it possible to "unsource" in bash?

I have sourced a script in bash source somescript.sh. Is it possible to undo this without restarting the terminal? Alternatively, is there a way to "reset" the shell to the settings it gets upon login without restarting?
EDIT: As suggested in one of the answers, my script sets some environment variables. Is there a way to reset to the default login environment?
It is typically sufficient to simply re-exec a shell:
$ exec bash
This is not guaranteed to undo anything (sourcing the script may remove files, or execute any arbitrary command), but if your setup scripts are well written you will get a relatively clean environment. You can also try:
$ su - $(whoami)
Note that both of these solutions assume that you are talking about resetting your current shell, and not your terminal as (mis?)stated in the question. If you want to reset the terminal, try
$ reset
No. Sourcing a script executes the commands contained therein. There is no guarantee that the script doesn't do things that can't be undone (like remove files or whatever).
If the script only sets some variables and/or runs some harmless commands, then you can "undo" its action by unsetting the same variables, but even then the script might have replaced variables that already had values before with new ones, and to undo it you'd have to remember what the old values were.
If you source a script that sets some variables for your environment but you want this to be undoable, I suggest you start a new (sub)shell first and source the script in the subshell. Then to reset the environment to what it was before, just exit the subshell.
The best option seems to be to use unset to unset the environment variables that sourcing produces. Adding OLD_PATH=$PATH; export OLD_PATH to your .bashrc saves a backup of the login PATH in case you need to revert $PATH.
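A minimal sketch of that idea (OLD_PATH is the name used above):
# in ~/.bashrc
OLD_PATH=$PATH
export OLD_PATH

# later, after sourcing something that changed PATH
export PATH=$OLD_PATH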
Not the most elegant solution, but this appears to do what you want:
exec $SHELL -l
My favorite approach for this would be to use a subshell, within () parentheses:
#!/bin/bash
(
    source some_script.sh
    # do something
)
# the environment from before the previous subshell should be restored here
# ...
(
    source other_script.sh
    # do something else
)
# the environment from before the previous subshell should be restored here
see also
https://unix.stackexchange.com/questions/138463/do-parentheses-really-put-the-command-in-a-subshell
I don't think undoing executed commands is possible in bash. You can try tset or reset for terminal initialization.
Depending on what you're sourcing, you can make the script source/unsource itself.
#!/bin/bash
if [ "$IS_SOURCED" == true ] ; then
    unset -f foo
    export IS_SOURCED=false
else
    foo () { echo bar ; }
    export IS_SOURCED=true
fi
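Sourcing it twice then acts as a toggle (assuming the file above is saved as toggle.sh):
$ . toggle.sh      # first time: defines foo
$ foo
bar
$ . toggle.sh      # second time: removes foo again
$ foo
bash: foo: command not found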

Is there any mechanism in Shell script alike "include guard" in C++?

Let's see an example: in my main.sh, I'd like to source a.sh and b.sh. a.sh, however, might have already sourced b.sh. Thus the code in b.sh will be executed twice. Is there any mechanism like an "include guard" in C++?
If you're sourcing scripts, you are usually using them to define functions and/or variables.
That means you can test whether the script has been sourced before by testing for (one of) the functions or variables it defines.
For example (in b.sh):
if [ -z "$B_SH_INCLUDED" ]
then
    B_SH_INCLUDED=yes
    ...rest of original contents of b.sh
fi
There is no other way to do it that I know of. In particular, you can't do early exits or returns because that will affect the shell sourcing the file. You don't have to use a name that is solely for the file; you could use a name that the file always has defined.
In bash, an early return does not affect the sourcing file; it returns to it as if the current file were a function. I prefer this method because it avoids wrapping the entire content in if...fi.
if [ -n "$_for_example" ]; then return; fi
_for_example=`date`
TL;DR:
Bash lets a script detect whether it is being executed or sourced, so you can build a source guard and decide what to do in each case.
Longer version:
Over the years of working with Bash sourcing, I have found that a different approach works excellently; I will describe it below.
The problem for me was similar to the one of the original poster:
sourcing other scripts led to double script execution
additionally, scripts are less testable with unit test frameworks like BATS
The main idea of my solution is to write scripts in a way that they can safely be sourced multiple times. A major part of that is extracting functionality into functions (as opposed to one large script, which would not be very testable).
So only functions and global variables are defined at the top level; other scripts can then be sourced at will.
As an example, consider the following three bash scripts:
main.sh
#!/usr/bin/env bash
source script2.sh
source script3.sh

GLOBAL_VAR=value

function_1() {
    echo "do something"
    function_2 "completely different"
}

run_main() {
    echo "starting..."
    function_1
}

# Enter: the source guard
# (make the script only run when executed, not when sourced)
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    run_main "$@"
fi
script2.sh
#!/usr/bin/env bash
source script3.sh

ALSO_A_GLOBAL_VAR=value2

function_2() {
    echo "do something ${1}"
}

# this file can be sourced, or executed if called directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    function_2 "$@"
fi
script3.sh
#!/usr/bin/env bash
export SUPER_USEFUL_VAR=/tmp/path

function_3() {
    echo "hello again"
}

# no source guard here: this script only defines the variable and the function;
# nothing is executed, because no code inside calls the function
Note that script3.sh is sourced twice. But since only functions and variables are (re-)defined, no functional code is executed during the sourcing.
The execution starts by running main.sh, as one would expect.
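For reference, running the three files above produces:
$ bash main.sh
starting...
do something
do something completely different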
There might be a drawback when it comes to dependency cycles (in general a bad idea): I have no idea how Bash reacts if files source (directly or indirectly) each other.
Personally, I usually use
set -o nounset # same as set -u
on most of my scripts, therefore I always turn it off and back on.
#!/usr/bin/env bash
set +u
if [ -n "$PRINTF_SCRIPT_USAGE_SH" ] ; then
    set -u
    return
else
    set -u
    readonly PRINTF_SCRIPT_USAGE_SH=1
fi
If you do not use nounset, you can do this instead:
[[ -n "$PRINTF_SCRIPT_USAGE_SH" ]] && return || readonly PRINTF_SCRIPT_USAGE_SH=1
