Across multiple shells, the value of SHELL remains mostly constant:
bash$ echo $SHELL
/bin/bash
bash$ csh
csh$ echo $SHELL
/bin/bash
csh$ exec tcsh
tcsh$ echo $SHELL
/bin/bash
tcsh$ exec ksh
$ echo $SHELL
/bin/bash
$ exec dash
$ echo $SHELL
/bin/bash
$ exec zsh
zsh$ echo $SHELL
banana
zsh$ exec bash
bash$ echo $SHELL
banana
Since $SHELL is not useful for determining the currently running shell, what is its purpose?
As pointed out in the question, SHELL is (almost) completely worthless for determining the currently running shell. Although there is some correlation between the value of $SHELL and the user's login shell, that relationship is tenuous at best and $SHELL cannot be used to reliably determine which shell you are currently running.
Instead, the purpose of SHELL is to allow the user to communicate a preference to the system, similar to the use of PAGER or EDITOR. If a process needs the user to edit a file, it should examine EDITOR and open an editor that the user likes. If a process needs to present textual information to the user, it should check the value of PAGER to determine which program to use. If a process needs to invoke a SHELL to execute commands, it should execute the shell specified in SHELL. SHELL should probably not be used in quite the same way as PAGER or EDITOR, which really are just user preferences. It is more likely to be used to work around certain behaviors. For instance, if /bin/sh is a bloated shell, or some not-strictly-standards conforming shell that causes some obscure error, or even if /bin/sh actually fixes bugs that older scripts rely on for correct behavior, the user (or the system, via default startup files) may prefer to set SHELL to something like /usr/xpg4/bin/sh to side-step those issues.
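For illustration, here is a minimal sketch of how a program might honor these preference variables; the fallback values and the file name are assumptions for the example, not mandated defaults:
file=/tmp/example.txt                  # hypothetical file the program wants handled
"${EDITOR:-vi}" "$file"                # let the user edit it with their preferred editor
"${PAGER:-more}" "$file"               # page through it with their preferred pager
"${SHELL:-/bin/sh}" -c 'echo hello'    # run a command via their preferred shell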
Note that ksh documents a slightly different usage of SHELL, stating "The pathname of the shell is kept in the environment." Inspecting the code (https://github.com/ksh2020/ksh/blob/master/src/lib/libast/path/pathshell.c#L104), we see that the documentation is not quite accurate, as reflected in the behavior seen above. The bash documentation states "This variable expands to the full pathname to the shell. If it is not set when the shell starts, bash assigns to it the full pathname of the current user's login shell."
It is very likely that most users will set SHELL to the value of their login shell, so bash's behavior is reasonable. After all, if you have a favorite shell, it makes sense to use it as your login shell and to set it in SHELL. On the other hand, if you are using
tclsh or python as your login shell, it would make sense to set SHELL to /bin/sh or similar. Since many users will set SHELL to their login shell there is a correlation between the value of SHELL and the shell you are currently using, but this relationship is certainly not guaranteed.
In the question, note the value of $SHELL in zsh is set to banana, and that value persists into the next invocation of bash. This is a bit pathological, but might be instructive. What is happening here is simply that the value of SHELL in the $HOME/.zshrc was set to the string banana. When bash was invoked, that value was retained. It is the prerogative of the user to set SHELL to any value they like, and it does not need to be in any way related to the current shell nor even to make any sense.
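For concreteness, the behavior above would be produced by a line like the following in $HOME/.zshrc; the value is deliberately nonsensical, per the question:
export SHELL=banana   # inherited by every process started from this zsh, including bash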
I'm trying to write a shell script that, when run, will set some environment variables that will stay set in the caller's shell.
setenv FOO foo
in csh/tcsh, or
export FOO=foo
in sh/bash only set it during the script's execution.
I already know that
source myscript
will run the commands of the script rather than launching a new shell, and that can result in setting the "caller's" environment.
But here's the rub:
I want this script to be callable from either bash or csh. In other words, I want users of either shell to be able to run my script and have their shell's environment changed. So 'source' won't work for me, since a user running csh can't source a bash script, and a user running bash can't source a csh script.
Is there any reasonable solution that doesn't involve having to write and maintain two versions of the script?
Use the "dot space script" calling syntax. For example, here's how to do it using the full path to a script:
. /path/to/set_env_vars.sh
And here's how to do it if you're in the same directory as the script:
. set_env_vars.sh
These execute the script under the current shell instead of launching another one (which is what would happen if you did ./set_env_vars.sh). Because it runs in the same shell, the environment variables you set will be available when it exits.
This is the same thing as calling source set_env_vars.sh, but it's shorter to type and might work in some places where source doesn't.
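An illustrative session, assuming set_env_vars.sh is executable and contains a single export:
$ cat set_env_vars.sh
#!/bin/sh
export FOO=foo
$ ./set_env_vars.sh; echo "${FOO:-unset}"   # runs in a child process; the change is lost
unset
$ . set_env_vars.sh; echo "${FOO:-unset}"   # runs in the current shell; the change persists
foo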
Your shell process has a copy of the parent's environment and no access to the parent process's environment whatsoever. When your shell process terminates, any changes you've made to its environment are lost. Sourcing a script file is the most commonly used method for configuring a shell environment; you may just want to bite the bullet and maintain one such script for each of the two flavors of shell.
You're not going to be able to modify the caller's shell because it's in a different process context. When child processes inherit your shell's variables, they're inheriting copies themselves.
One thing you can do is to write a script that emits the correct commands for tcsh or sh based on how it's invoked. If your script is "setit", then do:
ln -s setit setit-sh
and
ln -s setit setit-csh
Now either directly or in an alias, you do this from sh
eval `setit-sh`
or this from csh
eval `setit-csh`
setit uses $0 to determine its output style.
This is reminiscent of how people used to get the TERM environment variable set.
The advantage here is that setit is just written in whichever shell you like as in:
#!/bin/bash
arg0=$0
arg0=${arg0##*/}    # strip the directory part, leaving setit-sh or setit-csh
for nv in \
    NAME1=VALUE1 \
    NAME2=VALUE2
do
    if [ "$arg0" = setit-sh ]; then
        echo 'export '$nv' ;'
    elif [ "$arg0" = setit-csh ]; then
        echo 'setenv '${nv%%=*}' '${nv##*=}' ;'
    fi
done
With the symbolic links given above, and the eval of the backquoted expression, this has the desired result.
To simplify invocation for csh, tcsh, or similar shells:
alias dosetit 'eval `setit-csh`'
or for sh, bash, and the like:
alias dosetit='eval `setit-sh`'
One nice thing about this is that you only have to maintain the list in one place.
In theory you could even stick the list in a file and put cat nvpairfilename between "in" and "do".
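A sketch of that file-driven variant, where nvpairfilename is a hypothetical file holding one NAME=VALUE pair per line (values must not contain whitespace):
#!/bin/bash
arg0=${0##*/}
for nv in `cat nvpairfilename`
do
    if [ "$arg0" = setit-sh ]; then
        echo 'export '$nv' ;'
    elif [ "$arg0" = setit-csh ]; then
        echo 'setenv '${nv%%=*}' '${nv##*=}' ;'
    fi
done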
This is pretty much how login shell terminal settings used to be done: a script would output statements to be executed in the login shell. An alias would generally be used to make invocation simple, as in "tset vt100". As mentioned in another answer, there is also similar functionality in the INN UseNet news server.
In my .bash_profile I have:
# No Proxy
function noproxy
{
    /usr/local/sbin/noproxy  # turn off proxy server
    unset http_proxy HTTP_PROXY https_proxy HTTPS_PROXY
}
# Proxy
function setproxy
{
    sh /usr/local/sbin/proxyon  # turn on proxy server
    http_proxy=http://127.0.0.1:8118/
    HTTP_PROXY=$http_proxy
    https_proxy=$http_proxy
    HTTPS_PROXY=$https_proxy
    export http_proxy https_proxy HTTP_PROXY HTTPS_PROXY
}
So when I want to disable the proxy, the function runs in the login shell and sets the variables as expected and wanted.
It's "kind of" possible through using gdb and setenv(3), although I have a hard time recommending actually doing this. (Additionally, i.e. the most recent ubuntu won't actually let you do this without telling the kernel to be more permissive about ptrace, and the same may go for other distros as well).
$ cat setfoo
#! /bin/bash
gdb /proc/${PPID}/exe ${PPID} <<END >/dev/null
call setenv("foo", "bar", 0)
END
$ echo $foo
$ ./setfoo
$ echo $foo
bar
This works; it isn't what I'd use, but it 'works'. Let's create a script teredo to set the environment variable TEREDO_WORMS:
#!/bin/ksh
export TEREDO_WORMS=ukelele
exec $SHELL -i
It is interpreted by the Korn shell, exports the environment variable, and then replaces itself with a new interactive shell.
Before running this script, we have SHELL set in the environment to the C shell, and the environment variable TEREDO_WORMS is not set:
% env | grep SHELL
SHELL=/bin/csh
% env | grep TEREDO
%
When the script is run, you are in a new shell, another interactive C shell, but the environment variable is set:
% teredo
% env | grep TEREDO
TEREDO_WORMS=ukelele
%
When you exit from this shell, the original shell takes over:
% exit
% env | grep TEREDO
%
The environment variable is not set in the original shell's environment. If you use exec teredo to run the command, then the original interactive shell is replaced by the Korn shell that sets the environment, and then that in turn is replaced by a new interactive C shell:
% exec teredo
% env | grep TEREDO
TEREDO_WORMS=ukelele
%
If you type exit (or Control-D), then your shell exits, probably logging you out of that window, or taking you back to the previous level of shell from where the experiments started.
The same mechanism works for Bash or Korn shell. You may find that the prompt after the exit commands appears in funny places.
Note the discussion in the comments. This is not a solution I would recommend, but it does achieve the stated purpose of a single script to set the environment that works with all shells (that accept the -i option to make an interactive shell). You could also add "$@" after the option to relay any other arguments, which might then make the shell usable as a general 'set environment and execute command' tool. You might want to omit the -i if there are other arguments, leading to:
#!/bin/ksh
export TEREDO_WORMS=ukelele
exec $SHELL "${@-'-i'}"
The "${#-'-i'}" bit means 'if the argument list contains at least one argument, use the original argument list; otherwise, substitute -i for the non-existent arguments'.
You should use modules; see http://modules.sourceforge.net/
EDIT: The modules package has not been updated since 2012, but it still works fine for the basics. All the new features, bells and whistles happen in lmod these days (which I like more): https://www.tacc.utexas.edu/research-development/tacc-projects/lmod
Another workaround that I don't see mentioned is to write the variable value to a file.
I ran into a very similar issue where I wanted to be able to run the last set test (instead of all my tests). My first plan was to write one command for setting the env variable TESTCASE, and then have another command that would use this to run the test. Needless to say that I had the same exact issue as you did.
But then I came up with this simple hack:
First command (testset):
#!/bin/bash
if [ $# -eq 1 ]
then
    echo $1 > ~/.TESTCASE
    echo "TESTCASE has been set to: $1"
else
    echo "Come again?"
fi
Second command (testrun):
#!/bin/bash
TESTCASE=$(cat ~/.TESTCASE)
drush test-run $TESTCASE
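Usage then looks like this (the messages match the scripts above):
$ testset MyTestCase
TESTCASE has been set to: MyTestCase
$ testrun                  # reads ~/.TESTCASE and runs that test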
You can instruct the child process to print its environment variables (by calling "env"), then loop over the printed environment variables in the parent process and call "export" on those variables.
The following code is based on Capturing output of find . -print0 into a bash array
If the parent shell is bash, you can use
while IFS= read -r -d $'\0' line; do
export "$line"
done < <(bash -s <<< 'export VARNAME=something; env -0')
echo $VARNAME
If the parent shell is dash, then read does not provide the -d flag and the code gets more complicated:
TMPDIR=$(mktemp -d)
mkfifo $TMPDIR/fifo
(bash -s << "EOF"
export VARNAME=something
while IFS= read -r -d $'\0' line; do
    echo $(printf '%q' "$line")
done < <(env -0)
EOF
) > $TMPDIR/fifo &
while read -r line; do export "$(eval echo $line)"; done < $TMPDIR/fifo
rm -r $TMPDIR
echo $VARNAME
Under OS X bash you can do the following:
Create the bash script file to unset the variable
#!/bin/bash
unset http_proxy
Make the file executable
sudo chmod 744 unsetvar
Create alias
alias unsetvar='source /your/path/to/the/script/unsetvar'
It should be ready to use as long as the folder containing your script file is appended to your PATH.
It's not what I would call outstanding, but this also works if you need to call the script from the shell anyway. It's not a good solution, but for a single static environment variable, it works well enough.
1.) Create a script with a condition that exits either 0 (successful) or 1 (not successful):
if [[ $foo == "True" ]]; then
    exit 0
else
    exit 1
fi
2.) Create an alias that is dependent on the exit code (the alias name here is arbitrary):
alias setmyvar='myscript.sh && export MyVariable'
You call the alias, which calls the script, which evaluates the condition, which is required to exit zero via the '&&' in order to set the environment variable in the parent shell.
This is flotsam, but it can be useful in a pinch.
You can invoke another Bash with a different bash_profile. You can also create a special bash_profile for use in a multi-bash_profile environment. Remember that you can use functions inside bash_profile, and those functions will be available globally. For example, function user { export USER_NAME=$1; } can set a variable at runtime, for example: user olegchir && env | grep olegchir
Another option is to use "Environment Modules" (http://modules.sourceforge.net/). This unfortunately introduces a third language into the mix. You define the environment in Tcl, but there are a few handy commands for typical modifications (prepend vs. append vs. set). You will also need to have Environment Modules installed. You can then use module load XXX to name the environment you want. The module command is basically a fancy alias for the eval mechanism described above by Thomas Kammeyer. The main advantage here is that you can maintain the environment in one language and rely on Environment Modules to translate it to sh, ksh, bash, csh, tcsh, zsh, python (?!?!!), etc.
I created a solution using pipes, eval, and signals.
# Note: die and ppid are helper functions assumed to be defined elsewhere.
parent() {
    if [ -z "$G_EVAL_FD" ]; then
        die 1 "Run parent_setup in the parent process first"
    fi
    if [ $(ppid) = "$$" ]; then
        "$@"
    else
        kill -SIGUSR1 $$
        echo "$@">&$G_EVAL_FD
    fi
}
parent_setup() {
    G_EVAL_FD=99
    tempfile=$(mktemp -u)
    mkfifo "$tempfile"
    eval "exec $G_EVAL_FD<>'$tempfile'"
    rm -f "$tempfile"
    trap "read CMD <&$G_EVAL_FD; eval \"\$CMD\"" USR1
}
parent_setup #on parent shell context
( A=1 ); echo $A # prints nothing
( parent A=1 ); echo $A # prints 1
It might work with any command.
I don't see any answer documenting how to work around this problem with cooperating processes. A common pattern with things like ssh-agent is to have the child process print an expression which the parent can eval.
bash$ eval $(ssh-agent)
For example, ssh-agent has options to select Csh or Bourne-compatible output syntax.
bash$ ssh-agent
SSH2_AUTH_SOCK=/tmp/ssh-era/ssh2-10690-agent; export SSH2_AUTH_SOCK;
SSH2_AGENT_PID=10691; export SSH2_AGENT_PID;
echo Agent pid 10691;
(This causes the agent to start running, but doesn't allow you to actually use it, unless you now copy-paste this output to your shell prompt.) Compare:
bash$ ssh-agent -c
setenv SSH2_AUTH_SOCK /tmp/ssh-era/ssh2-10751-agent;
setenv SSH2_AGENT_PID 10752;
echo Agent pid 10752;
(As you can see, csh and tcsh use setenv to set variables.)
Your own program can do this, too.
bash$ foo=$(makefoo)
Your makefoo script would simply calculate and print the value, and let the caller do whatever they want with it -- assigning it to a variable is a common use case, but probably not something you want to hard-code into the tool which produces the value.
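A makefoo along those lines could be as simple as the following sketch, where the computed value is a placeholder:
#!/bin/sh
# makefoo: compute a value and print it; the caller decides what to do with it
printf '%s\n' "some-computed-value"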
Technically, that is correct -- only 'eval' doesn't fork another shell. However, from the point of view of the application you're trying to run in the modified environment, the difference is nil: the child inherits the environment of its parent, so the (modified) environment is conveyed to all descending processes.
Ipso facto, the changed environment variable 'sticks' -- as long as you are running under the parent program/shell.
If it is absolutely necessary for the environment variable to remain after the parent (Perl or shell) has exited, it is necessary for the parent shell to do the heavy lifting. One method I've seen in the documentation is for the current script to spawn an executable file with the necessary 'export' language, and then trick the parent shell into executing it -- always being cognizant of the fact that you need to preface the command with 'source' if you're trying to leave a non-volatile version of the modified environment behind. A kludge at best.
The second method is to modify the script that initiates the shell environment (.bashrc or whatever) to contain the modified parameter. This can be dangerous -- if you hose up the initialization script it may make your shell unavailable the next time it tries to launch. There are plenty of tools for modifying the current shell; by affixing the necessary tweaks to the 'launcher' you effectively push those changes forward as well.
Generally not a good idea; if you only need the environment changes for a particular application suite, you'll have to go back and return the shell launch script to its pristine state (using vi or whatever) afterwards.
In short, there are no good (and easy) methods. Presumably this was made difficult to ensure the security of the system was not irrevocably compromised.
The short answer is no, you cannot alter the environment of the parent process, but it seems like what you want is an environment with custom environment variables and the shell that the user has chosen.
So why not simply something like
#!/usr/bin/env bash
FOO=foo $SHELL
Then when you are done with the environment, just exit.
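For example, saving the script above under the hypothetical name myenv and making it executable:
$ ./myenv        # starts your preferred shell with FOO set
$ echo $FOO
foo
$ exit           # back to the original, unmodified environment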
You could always use aliases
alias your_env='source ~/scripts/your_env.sh'
I did this many years ago. If I remember correctly, I included an alias in each of .bashrc and .cshrc, with parameters, aliasing the respective forms of setting the environment to a common form.
Then the script that you will source in either of the two shells has a command with that last form, suitably aliased in each shell.
If I find the concrete aliases, I will post them.
Other than writing conditionals depending on what $SHELL/$TERM is set to, no. What's wrong with using Perl? It's pretty ubiquitous (I can't think of a single UNIX variant that doesn't have it), and it'll spare you the trouble.
My scripts are developed using a shebang line of "#!/bin/ksh", and the default shell is
$ echo $SHELL
/bin/ksh
I am moving all these scripts, without changing the shebang line, to a new machine where the default shell is
$ echo $SHELL
/bin/bash
Should I worry about this?
I am guessing there should not be any issue, as the shebang line will override the interpreter and use ksh as defined in the scripts, just as I want it to.
Please share your thoughts.
The default shell does not affect how scripts are executed (unless you're using a shell that does something very strange).
An executable script with no #! line will be executed with /bin/sh. Actually that doesn't appear to be correct, but in any case you don't have to worry about that.
As long as your scripts start with #!/bin/ksh and you execute them normally, the system will execute them by passing them to /bin/ksh.
One thing you might have to worry about is whether /bin/ksh exists, and if it does, just what it is. On my system (Linux Mint 17), /bin/ksh is a symlink to /etc/alternatives/ksh, which in turn is a symlink to /bin/ksh93.
Scripts with #!/bin/ksh are probably common enough that almost all UNIX-like systems will cater to them, and will install something that behaves like ksh at that location.
Note that what you call the "default shell", specified by $SHELL, is not a system-wide default. It's just the value of a particular environment variable. That variable is set for each user on login, based on the shell specified in /etc/passwd or equivalent; thus different users can have different default shells. You can change the value of $SHELL after logging in. The entry in /etc/passwd or equivalent is set when the account is created and can be changed later. Most systems have a default user shell that's set for new accounts if no shell is specified (for example, most Linux systems use /bin/bash).
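For example, on most Linux systems you can change your own login-shell entry with chsh; the change takes effect at the next login:
$ chsh -s /bin/ksh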
The supposition given is correct: The shebang line is honored on any execve() call. Only if your scripts are sourced (. yourscript or source yourscript) or lack a valid shebang do you need to care which interpreter they're called from.
If this were not true, scripts in non-shell languages wouldn't work as expected (as the Python interpreter, for instance, is never a system's default shell).
The kernel will use the shebang line to select the appropriate interpreter to use when the script is executed in the default manner, whether it is sh, bash, ksh, expect, python, or whatever. The only real issue to be wary of is scripts written on a system where sh is one specific shell (e.g. bash) that are then moved to another system where sh is a different shell (e.g. dash) since they may use shell features found in the former that do not exist in the latter.
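As a concrete example of that hazard, here is a sketch of a script that works where sh is bash but fails where sh is dash (the exact error text varies by shell):
#!/bin/sh
# [[ ]] is a bash/ksh extension, not POSIX; dash reports something like "[[: not found"
if [[ -n "$1" ]]; then
    echo "got an argument"
fi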
I am using Bash
$ echo $SHELL
/bin/bash
and starting about a year ago I stopped using shebangs with my Bash scripts. Can
I benefit from using #!/bin/sh or #!/bin/bash?
Update: In certain situations a file is only treated as a script if it has the
shebang; for example:
$ cat foo.sh
ls
$ cat bar.sh
#!/bin/sh
ls
$ file foo.sh bar.sh
foo.sh: ASCII text
bar.sh: POSIX shell script, ASCII text executable
On UNIX-like systems, you should always start scripts with a shebang line. The system call execve (which is responsible for starting programs) relies on an executable having either an executable header or a shebang line.
From FreeBSD's execve manual page:
The execve() system call transforms the calling process into a new
process. The new process is constructed from an ordinary file, whose
name is pointed to by path, called the new process file.
[...]
This file is
either an executable object file, or a file of data for an interpreter.
[...]
An interpreter file begins with a line of the form:
#! interpreter [arg]
When an interpreter file is execve'd, the system actually execve's the
specified interpreter. If the optional arg is specified, it becomes the
first argument to the interpreter, and the name of the originally
execve'd file becomes the second argument
Similarly from the Linux manual page:
execve() executes the program pointed to by filename. filename must be
either a binary executable, or a script starting with a line of the
form:
#! interpreter [optional-arg]
In fact, if a file doesn't have the right "magic number" in its header (like an ELF header or #!), execve will fail with the ENOEXEC error (again from FreeBSD's execve manpage):
[ENOEXEC] The new process file has the appropriate access
permission, but has an invalid magic number in its
header.
If the file has executable permissions but no shebang line, yet does seem to be a text file, the behaviour depends on the shell that you're running in.
Most shells seem to start a new instance of themselves and feed it the file, see below.
Since there is no guarantee that the script was actually written for that shell, this can work or fail spectacularly.
From tcsh(1):
On systems which do not understand the `#!' script interpreter conven‐
tion the shell may be compiled to emulate it; see the version shell
variable. If so, the shell checks the first line of the file to see if
it is of the form `#!interpreter arg ...'. If it is, the shell starts
interpreter with the given args and feeds the file to it on standard
input.
From FreeBSD's sh(1):
If the program is not a normal executable file (i.e., if it
does not begin with the “magic number” whose ASCII representation is
“#!”, resulting in an ENOEXEC return value from execve(2)) but appears to
be a text file, the shell will run a new instance of sh to interpret it.
From bash(1):
If this execution fails because the file is not in executable format,
and the file is not a directory, it is assumed to be a shell script, a
file containing shell commands. A subshell is spawned to execute it.
You cannot always depend on the location of a non-standard program like bash. I've seen bash in /usr/bin, /usr/local/bin, /opt/fsf/bin and /opt/gnu/bin to name a few.
So it is generally a good idea to use env:
#!/usr/bin/env bash
If you want your script to be portable, use sh instead of bash.
#!/bin/sh
While standards like POSIX do not guarantee the absolute paths of standard utilities, most UNIX-like systems seem to have sh in /bin and env in /usr/bin.
Scripts should always begin with a shebang line. If a script doesn't start with one, it may be executed by the current shell. But that means that if someone who uses your script is running a different shell than you do, the script may behave differently. Also, it means the script can't be run directly by a program (e.g. the C exec() system call, or find -exec); it has to be run from a shell.
You might be interested in an early description by Dennis M. Ritchie (dmr), who invented the #!:
From uucp Thu Jan 10 01:37:58 1980
.>From dmr Thu Jan 10 04:25:49 1980 remote from research
The system has been changed so that if a file
being executed begins with the magic characters #! , the rest of the
line is understood to be the name of an interpreter for the executed
file. Previously (and in fact still) the shell did much of this job;
it automatically executed itself on a text file with executable mode
when the text file's name was typed as a command. Putting the facility
into the system gives the following benefits.
1) It makes shell scripts more like real executable files, because
they can be the subject of 'exec.'
2) If you do a 'ps' while such a command is running, its real name
appears instead of 'sh'. Likewise, accounting is done on the basis of
the real name.
3) Shell scripts can be set-user-ID.
4) It is simpler to have alternate shells available; e.g. if you like
the Berkeley csh there is no question about which shell is to
interpret a file.
5) It will allow other interpreters to fit in more smoothly.
To take advantage of this wonderful opportunity, put
#! /bin/sh
at the left margin of the first line of your shell scripts. Blanks
after ! are OK. Use a complete pathname (no search is done). At the
moment the whole line is restricted to 16 characters but this limit
will be raised.
Hope this helps
If you write bash scripts, i.e. non-portable scripts containing bashisms, you should keep using the #!/bin/bash shebang just to be sure the correct interpreter is used. You should not replace the shebang with #!/bin/sh, as bash would then run in POSIX mode and some of your scripts might behave differently.
If you write portable scripts, i.e. scripts only using POSIX utilities and their supported options, you might keep using #!/bin/sh on your system (i.e. one where /bin/sh is a POSIX shell).
If you write strictly conforming POSIX scripts to be distributed on various platforms, and you are sure they will only be launched from a POSIX-conforming system, you might, and probably should, remove the shebang, as stated in the POSIX standard:
As it stands, a strictly conforming application must not use "#!" as the first two characters of the file.
The rationale is that the POSIX standard doesn't mandate /bin/sh to be the POSIX-compliant shell, so there is no portable way to specify its path in a shebang. In this third case, to be able to use the find -exec syntax on systems unable to run a shebangless (but still executable) script, you can simply specify the interpreter in the find command itself, e.g.:
find /tmp -name "*.foo" -exec sh -c 'myscript "$@"' sh {} +
Here, as sh is specified without a path, the POSIX shell will be run.
The header is useful since it specifies which shell to use when running the script. For example, #!/bin/zsh would change the shell to zsh instead of bash, where you can use different commands.
For example, this page specifies the following:
Using #!/bin/sh, the default Bourne shell in most commercial variants
of UNIX, makes the script portable to non-Linux machines, though you
sacrifice Bash-specific features ...
TL;DR: always in scripts; please not in source'd scripts
Always in your parent script
Note that #!/bin/sh does not guarantee bash: on many systems /bin/sh is a stricter POSIX shell such as dash, so bash-specific syntax can break under it.
You want to declare the interpreter explicitly so that nothing else overrides the one your script was written for.
You don't want a user at a zsh terminal to have trouble running a script that was written for bash.
You don't want source, which bash accepts but plain sh does not, to break the script for someone in an sh terminal; sh expects the simple POSIX . for including source'd files.
You don't want e.g. zsh features, not available in other interpreters, to make their way into your bash code. So put #!/bin/bash in all your script headers. Then any of your zsh habits in your scripts will break, so you know to remove them before your roll-out.
It's probably best anyway, so your scripts don't break when invoked from a different interactive shell such as zsh.
Not expected for included source scripts
FYI: the POSIX-compliant way of sourcing text in a bash script is ., not source
You can use either for sourcing, but I'll do POSIX.
Standard "shebanging" for all scripting:
parent.sh:
#!/bin/bash
echo "My script here"
. sourced.sh # child/source script, below
sourced.sh:
echo "I am a sourced child script"
But, you are allowed to do this...
sourced.sh: (optional)
#!/bin/bash
echo "I am a sourced child script"
There, the #!/bin/bash "shebang" will be ignored. The main reason I would use it is for syntax highlighting in my text editor. However, in the "proper" scripting world, it is expected that your rolled-out source'd script will not contain the shebang.
In addition to what the others said, the shebang also enables syntax highlighting in some text editors, for example vim.
$SHELL and #!/bin/bash or #!/bin/sh are different.
To start, /bin/sh is a symlink to /bin/bash on most Linux systems (on Ubuntu it now points to /bin/dash).
But on whether to start with /bin/sh or /bin/bash:
Bash and sh are two different shells. Basically bash is sh, with more
features and better syntax. Most commands work the same, but they are
different.
If you're writing a bash script, stick with #!/bin/bash and not #!/bin/sh, because problems can arise.
$SHELL does not necessarily reflect the currently running shell.
Instead, $SHELL is the user's preferred shell, which is typically the
one set in /etc/passwd. If you start a different shell after logging
in, you can not necessarily expect $SHELL to match the current shell
anymore.
This is mine, for example, but it could also be /root:/bin/dash or /root:/bin/sh depending on which shell is set in passwd. So to avoid any problems, keep the shell in the passwd file set to /bin/bash, and then choosing between $SHELL and #!/bin/bash wouldn't matter as much.
root#kali:~/Desktop# cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
Sources:
http://shebang.mintern.net/bourne-is-not-bash-or-read-echo-and-backslash/
https://unix.stackexchange.com/questions/43499/difference-between-echo-shell-and-which-bash
http://man.cx/sh
http://man.cx/bash
I've recently switched to the ksh93 shell. I did this by adding the following two lines to my .profile file
export SHELL=/usr/local/bin/ksh93
exec $SHELL
Since I did that, some simple scripts have started misbehaving in a way I don't understand. I narrowed it down to the following simple script, called say test.sh:
#!/bin/ksh
echo $0 $1
If I type the command test.sh fred, I would expect to see the output test.sh fred. Instead I see test.sh noglob. If I remove the shebang, or if I change it to read #!/usr/local/bin/ksh93, then the script works as expected.
Can anyone explain what's going on, or what to do about it? I'm stumped.
I'm using Solaris 5.9 if it makes any difference.
I notice from the comments that your .kshrc has a set noglob. The set command with arguments and no option flags sets the positional (command-line) parameters, which is why $1 is noglob; it should be set -o noglob.
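You can see the difference directly in the shell:
$ set noglob       # no option flag: sets the positional parameters
$ echo $1
noglob
$ set -o noglob    # what was intended: disable filename globbing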
By the way, setting noglob is weird; are you sure you want that?
I suspect (as others have mentioned) that /bin/ksh is Korn shell 88.
There is an important difference between ksh88 and ksh93 with regards to .kshrc. On ksh88 .kshrc is executed for every korn shell process, even non-interactive ones (scripts). In ksh93 .kshrc is not executed for shell scripts, only for interactive login shells.
When you do exec $SHELL, the result is not a login shell; it is better to change your entry in /etc/passwd instead. By the way, relying on the variable SHELL is a bad idea, since it is set by the login shell.
There's probably an alias for ksh on your system with noglob set as an option, or noglob is being passed as a default parameter by your old shell. You should also check which ksh you're really calling (check whether /bin/ksh is a link to another shell). ksh --version should give some insight as well.
As a last point, instead of calling the shell directly, I'd recommend using
#!/usr/bin/env ksh
I've noticed echo behaves slightly differently when called directly
root$ echo "line1\nline2"
and when called via a script:
#! /bin/sh
echo "line1\nline2"
[...]
The first case would print:
line1\nline2
while the latter would print:
line1
line2
So is echo always assumed to have the -e flag when used in a script?
Your script has a shebang line of #! /bin/sh, while your interactive shell is probably Bash. If you want Bash behavior, change the script to #! /bin/bash instead.
The two programs sh and bash are different shells. Even on systems where /bin/sh and /bin/bash are the same program, it behaves differently depending on which command you use to invoke it. (Though echo doesn't act like it has -e turned on by default in Bash's sh mode, so you're probably on a system whose sh is really dash.)
If you type sh at your prompt, you'll find that echo behaves the same way as in your script. But sh is not a very friendly shell for interactive use.
If you're trying to write maximally portable shell scripts, you should stick to vanilla sh syntax, but if you aren't writing for distribution far and wide, you can use Bash scripts - just make sure to tell the system that they are, in fact, Bash scripts, by putting bash in the shebang line.
The echo command is probably the most notoriously inconsistent command across shells, by the way. So if you are going for portability, you're better off avoiding it entirely for anything other than straight up, no-options, no-escapes, always-a-newline output. One pretty portable option is printf.
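For example, these printf forms behave consistently across POSIX shells:
$ printf 'line1\nline2\n'          # escapes in the format string are interpreted
line1
line2
$ printf '%s\n' "line1" "line2"    # the format is reused for each argument
line1
line2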