Difference between launching a script with ./script.sh and . ./script.sh - bash

Please tell me what is the difference in bash shell between launching a script with
./script.sh and . ./script.sh?

As klausbyskov says, the first form requires that the file have its executable permission bit set.
But more importantly, the first form executes the script in a separate process (distinct from, independent of, and unable to make changes in the shell that launched it). The second form causes the initial shell to directly run the commands from the file (as if you had typed them into the shell, or as if they were included in the script that does the ‘sourcing’).
A script that contains FOO=bar; export FOO will not create an exported FOO environment variable in the shell that runs the first variant, but it will create such a variable in a shell that runs the second variant.
The second form (‘sourcing’) is a bit like a #include in C.
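A quick illustration, using a hypothetical script.sh containing only FOO=bar; export FOO:
./script.sh       # runs in a child process; the exported FOO dies with that process
echo "$FOO"       # prints an empty line
. ./script.sh     # runs in the current shell
echo "$FOO"       # prints "bar"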

The first requires the file to have the +x flag set. The second uses the . command, also known as source.

Related

Running a bash script stored in a variable

I want to send the path of a bash script that sources some environment variables as an argument to another bash script, so that the second script can run it and use those environment variables. With no arguments it works well: if I hard-code the path to the bash script, it runs and I can retrieve the environment variables in the main script. The problem happens when I send the path as an argument; then it does not run.
For example, if the path is /path/script.bash and I send the path as an argument, I get the error /path/env_set: No such file or directory.
I run the script by this line
. $1 (this doesn't work)
. /path/script.bash (this works)
if I use
bash -c $1
the bash file runs, but it does not set the environment variables for use in the main script.
I don't know why env_set replaces the script name when I use arguments. Is there any approach or workaround to achieve my goal?
It sounds like the problem could be either with your quoting, or with relative paths.
Quoting isn't just about spaces; it's also about pathname expansion (i.e. the []?* characters).
Do
. "$1"
(not . $1)
And remember, if you're giving a relative path for the environment script (or that script uses some relative paths), you will have a problem. Those paths are relative to the pwd - which is wherever you happen to be when you execute the main script (not where any of the script files themselves happen to be located, for example).
Finally, you can debug this problem by throwing echo at the start, and running the script (if it's safe to do that):
echo . "$1"
exit # Add exit here if you don't want to run w/o the vars.
Now you can see what you're actually trying to source.
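If the environment file actually lives next to the main script rather than under your current working directory, one common workaround (a sketch, not part of the original answer) is to resolve the argument against the script's own directory:
# Resolve the path relative to this script's location instead of $PWD
script_dir=$(cd "$(dirname "$0")" && pwd)
. "$script_dir/$1"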
In script 1, in your main code, you can call and run script 2 (assuming its file is named script2) with
. ./script2
The first . is the dot (source) command, which runs the file in the current shell; the ./ prefix simply means the file is in the current directory.
This will create the environment variables for you, and configure any other settings as well, in the same terminal.
Afterwards, when script 2 has finished running, script 1 continues to run, and the environment variables that were created in script 2 are accessible to script 1 in the same session.
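A minimal sketch of that arrangement (the variable name is just illustrative). Suppose script2 contains only:
export MY_SETTING=some_value
and script1 sources it and then uses the variable in the same session:
#!/bin/bash
. ./script2
echo "MY_SETTING is $MY_SETTING"   # prints: MY_SETTING is some_value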

Is there a good way to preload or include a script prior to executing another script?

I am looking to execute a script but have it include another script before it executes. The problem is, the included script would be generated and the executed script would be unmodifiable. One solution I came up with was to reverse the include: have the include script act as a wrapper, calling set to set the arguments for the executed script and then dotting/sourcing it. E.g.
#!/bin/bash
# Generated wrapper or include script.
: Performing some setup...
target_script=$1 ; shift   # the first argument is the script to source
set -- "$@"                # the remaining arguments become its positional parameters
. "$target_script"
Where target_script is the script I actually want to run, importing settings from the wrapper.
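Invoked, hypothetically, as:
./wrapper.sh /path/to/target_script.sh arg1 arg2
so that target_script.sh sees arg1 and arg2 as $1 and $2, while $0 remains the wrapper's path.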
However, the potential problem I face is that callers of the target script, or even the target script itself, may expect $0 to be set to the path of its location on the file system. But because with this wrapper approach $0 refers to the wrapper rather than the target, the value of $0 may be unexpected and could produce undefined behaviour.
Is there another way to perform what is, in effect, an LD_PRELOAD, but in scripted form, through bash, without interfering with its runtime parameters?
I have looked at --init-file or --rcfile, but these only seem to be included for interactive shells.
Forcing interactive mode does seem to allow me to specify --rcfile:
$ bash --rcfile /tmp/x-include.sh -i /tmp/xx.sh
include_script: $0=bash, $BASH_SOURCE=/tmp/x-include.sh
target_script: $0=/tmp/xx.sh, $BASH_SOURCE=/tmp/xx.sh
Content of the x-include.sh script:
#!/bin/bash
echo "include_script: \$0=$0, \$BASH_SOURCE=$BASH_SOURCE"
Content of the xx.sh script:
#!/bin/bash
echo "target_script: \$0=$0, \$BASH_SOURCE=$BASH_SOURCE"
From the bash documentation:
When bash is started non-interactively, to run a shell script, for example, it looks for the variable BASH_ENV in the environment, expands its value if it appears there, and uses the expanded value as the name of a file to read and execute. Bash behaves as if the following command were executed:
if [ -n "$BASH_ENV" ]; then . "$BASH_ENV"; fi
but the value of the PATH variable is not used to search for the file name.
So that settles it then:
BASH_ENV=/tmp/x-include.sh /bin/bash /tmp/xx.sh
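Applied to the scenario above (paths hypothetical), the generated include script goes into BASH_ENV and the unmodifiable target script runs untouched, keeping its own $0 and arguments:
BASH_ENV=/path/to/generated-include.sh /bin/bash /path/to/target_script.sh arg1 arg2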

How can I store and execute the command "export PATH=$PREFIX/bin" from a script?

I would like to write a script that has several commands of the kind
> export PATH=$PREFIX/bin
Where
> $PREFIX = /home/usr
or something else. Instead of typing it into the shell (/bin/bash) I would run the script to execute the commands.
Tried it with sh and then with a .py script having the line,
> commands.getstatusoutput('export PATH=$PREFIX/bin')
but these result in the error "bad variable name".
Would be thankful for some ideas!
If you need to adjust PATH (or any other environment variable) via a script after your .profile and equivalents have been run, you need to 'dot' or 'source' the file containing the script:
. file_setting_path
source file_setting_path
The . notation applies to all Bourne shell derivatives, and is standardized by POSIX. The source notation is used in C shell and has infected Bash completely unnecessarily.
Note that the file (file_setting_path) can be specified as a pathname, or if it lives in a directory listed on $PATH, it will be found. It only needs to be readable; it does not have to be executable.
The way the dot command works is that it reads the named file as part of the current shell environment, rather than executing it in a sub-shell the way a normal script would be. Normally, the sub-shell sets its environment happily, but that doesn't affect the calling script.
The bad variable name is probably just a complaint that $PREFIX is undefined.
Usually a setting of PATH would look something like
export PATH=$PATH:/new/path/to/programs
so that you retain the old PATH but add something onto the end.
You are best off putting such things in your .bashrc so that they get run every time you log in.
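A minimal sketch of such a file, using the /home/usr prefix from the question (the file name is just illustrative):
# file_setting_path - source this (. file_setting_path), don't execute it
PREFIX=/home/usr
export PATH="$PATH:$PREFIX/bin"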

"Command not found" inside shell script

I have a shell script on a mac (OSX 10.9) named msii810161816_TMP_CMD with the following content.
matlab
When I execute it, I get
./msii810161816_TMP_CMD: line 1: matlab: command not found
However, when I type matlab into the shell directly it starts as normal. How can it be that the same command works inside the shell but not inside a shell script? I copy-pasted the command directly from the script into the shell and it worked ...
PS: When I replace the content of the script with
echo matlab
I get the desired result, so I can definitely execute the shell script (I use ./msii810161816_TMP_CMD)
Thanks guys!
By default, aliases are not expanded in non-interactive shells, which is what shell scripts are. Aliases are intended to be used by a person at the keyboard as a typing aid.
If your goal is to not have to type the full path to matlab, instead of creating an alias you should modify your $PATH. Add /Applications/MATLAB_R2014a.app/bin to your $PATH environment variable and then both you and your shell scripts will be able to simply say
matlab
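For example, a line like this in your ~/.bash_profile (the MATLAB install path is assumed here; check where your copy actually lives):
export PATH="$PATH:/Applications/MATLAB_R2014a.app/bin"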
This is because, as commenters have stated, the PATH variable inside of the shell executing the script does not include the directory containing the matlab executable.
When a command name is used, like "matlab", your shell looks at every directory in the PATH in order, searching for one containing an executable file with the name "matlab".
Without going into too much detail, the PATH is determined by the shell being invoked.
When you execute bash, it combines a global setting for basic directories that must be in the PATH with any settings in your ~/.bashrc which alter the PATH.
Most likely, you are not running your script in a shell where the PATH includes matlab's directory.
To verify this, you can take the following steps:
Run which matlab. This will show you the path to the matlab executable.
Run echo "$PATH". This will show you your current PATH settings. Note that the directory containing matlab is included in the colon-separated list.
Add a line at the beginning of your script that does echo "$PATH". Note that this time the directory containing matlab is not included.
To resolve this, ensure that your script is executed in a shell that has the needed directory in the PATH.
You can do this a few ways, but the two most highly recommended ones would be
Add a shebang line to the start of your script. Assuming that you want to run it with bash, do #!/bin/bash or whatever the path to your bash interpreter is.
The shebang line is not actually fully standardized by POSIX, so BSD-derived systems like OSX will happily handle multiple arguments to the shebanged executable, while Linux systems pass at most one argument.
In spite of this, the shebang is an easy and simple way to document what should be used to execute the script, so it's a good solution.
Explicitly invoke your script with a shell as its interpreter, as in bash myscript.sh or tcsh myscript.sh or even sh myscript.sh
This is not incompatible with using a shebang line, and using both is a common practice.
I believe that the default shell on OSX is always bash, so you should start by trying with that.
If these instructions don't help, then you'll have to dig deeper to find out why or how the PATH is being altered between the calling context and the script's internal context.
Ultimately, this is almost certainly the source of your issue.
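Putting that together, a minimal sketch of the script with a shebang and the needed directory added to PATH (MATLAB path assumed from the earlier answer):
#!/bin/bash
# Ensure the directory containing the matlab executable is on PATH
export PATH="$PATH:/Applications/MATLAB_R2014a.app/bin"
matlab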

Positional parameters in a script read with the source builtin in zsh

I have noticed some weird behavior when sourcing another script within my shell script. The script that I am sourcing to setup the environment in my shell script takes an optional argument, e.g.
source setup.sh version1
However, in my shell script I also have command line arguments. For example:
./myscript.sh TEST 1
Inside myscript.sh:
#!/bin/zsh
source setup.sh
echo ROOT version setup $ROOT_SYS
...more of the script
The problem that I have noticed with my script above is that the $1 argument (TEST in this example) is used in the source setup.sh command. This causes the command to become
source setup.sh TEST
which of course fails as setup.sh does not have a version TEST.
I solved this problem by editing my script to below.
#!/bin/zsh
source setup.sh version1
echo ROOT version setup $ROOT_SYS
...more of the script
The source command now does not pick up the $1 argument.
Why/How does the source command pick up the $1 argument when I am running my shell script?
Historically, unix shells didn't allow any arguments to be passed to scripts called with the . built-in (source is an alias of . available in bash, ksh and zsh). The . built-in means “act as if this file was actually included here”.
In bash, ksh and zsh, if you pass extra arguments to the . built-in, they become positional parameters ($1 and so on) in the sourced script. If you pass zero arguments, the positional parameters of the main script remain in effect. In those shells, . behaves rather like calling a function, though not perfectly so (in particular, in bash, if you modify the positional parameters inside the sub-script, the modification is seen by the main script).
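A small demonstration of that behaviour, with hypothetical files. Suppose lib.sh contains only:
echo "lib.sh sees \$1 = $1"
and main.sh, run as ./main.sh TEST 1, contains:
#!/bin/zsh
. ./lib.sh              # no arguments passed: prints "lib.sh sees $1 = TEST"
. ./lib.sh version1     # explicit argument: prints "lib.sh sees $1 = version1"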
A simple way of avoiding this kind of difficulty is to only ever define functions (and perhaps variables) in the subscript. Treat it as a code library, such that merely sourcing it has no effect, and then call functions from the sub-script to actually do something.
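A sketch of that library style (function and variable names are just illustrative):
# setup.sh - only defines a function; sourcing it has no side effects
setup_root() {
    export ROOT_SYS="${1:-version1}"
}
# in myscript.sh
source setup.sh        # safe: setup.sh never reads $1, so the caller's arguments don't matter
setup_root version1    # do the actual work, with an explicit argument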
This is because source executes the code of setup.sh as if it were in place, so when setup.sh accesses, say, $1, the value it has is that of the first argument of the actual script. If you want to avoid that, you could either execute it:
setup.sh
or, if you need to get back some variables or values from it, change it to return the result in the form of output, something like:
ROOT_SYS=`setup.sh`
Finally, as you figured out, the source keyword also allows providing arguments to the sourced script, but it falls back to the caller's arguments if you don't provide any.
