I would like to avoid bugs/behaviours of old versions of the bash interpreter. Is there a way to bundle a recent (like >4.3) bash interpreter along with the script?
It is a bad idea to write a script that depends on a specific version of a shell. Write POSIX-compliant shell scripts; it is not that difficult.
However, you can declare what interpreter is needed for the script at the top of your file with a shebang:
#!/path/to/interpreter
A common (and recommended) shebang for shell scripts is:
#!/bin/sh, which points to the system shell.
If you want a particular shell, like bash, you would write:
#!/bin/bash
If you require a specific version of bash, you need to add a check that verifies the version of bash that is actually running.
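A minimal sketch of such a check (assuming, as in the question, that the script needs bash 4.3 or newer) could look like this:

#!/bin/bash
# Abort unless the running bash is at least 4.3
# (BASH_VERSINFO[0] is the major version, BASH_VERSINFO[1] the minor).
if (( BASH_VERSINFO[0] < 4 || (BASH_VERSINFO[0] == 4 && BASH_VERSINFO[1] < 3) )); then
    echo "This script needs bash >= 4.3, but found ${BASH_VERSION}" >&2
    exit 1
fi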
I've seen in a number of places, including recommendations on this site (What is the preferred Bash shebang?), to use #!/usr/bin/env bash in preference to #!/bin/bash. I've even seen one enterprising individual suggest using #!/bin/bash was wrong and bash functionality would be lost by doing so.
All that said, I use bash in a tightly controlled test environment where every drive in circulation is essentially a clone of a single master drive. I understand the portability argument, though it is not necessarily applicable in my case. Is there any other reason to prefer #!/usr/bin/env bash over the alternatives and, assuming portability was a concern, is there any reason using it could break functionality?
#!/usr/bin/env searches PATH for bash, and bash is not always in /bin, particularly on non-Linux systems. For example, on my OpenBSD system, it's in /usr/local/bin, since it was installed as an optional package.
If you are absolutely sure bash is in /bin and will always be, there's no harm in putting it directly in your shebang—but I'd recommend against it because scripts and programs all have lives beyond what we initially believe they will have.
The standard location of bash is /bin, and I suspect that's true on all systems. However, what if you don't like that version of bash? For example, I want to use bash 4.2, but the bash on my Mac is at 3.2.5.
I could try reinstalling bash in /bin but that may be a bad idea. If I update my OS, it will be overwritten.
However, I could install bash in /usr/local/bin/bash, and set up my PATH to:
PATH="/usr/local/bin:/bin:/usr/bin:$HOME/bin"
Now, if I specify bash, I don't get the old cruddy one at /bin/bash, but the newer, shinier one at /usr/local/bin. Nice!
Except my shell scripts have that #!/bin/bash shebang. Thus, when I run my shell scripts, I get that old and lousy version of bash that doesn't even have associative arrays.
Using /usr/bin/env bash will use the version of bash found in my PATH. If I set up my PATH so that /usr/local/bin/bash is executed, that's the bash that my scripts will use.
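As a quick sanity check (the paths and version shown here are just illustrative), you can ask the shell which bash copies it sees and which one env would pick:

$ type -a bash
bash is /usr/local/bin/bash
bash is /bin/bash
$ /usr/bin/env bash -c 'echo "$BASH_VERSION"'
4.2.53(1)-release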
It's rare to see this with bash, but it is a lot more common with Perl and Python:
Certain Unix/Linux releases which focus on stability are sometimes way behind with the release of these two scripting languages. Not long ago, RHEL's Perl was at 5.8.8 -- an eight year old version of Perl! If someone wanted to use more modern features, you had to install your own version.
Programs like Perlbrew and Pythonbrew allow you to install multiple versions of these languages. They depend upon scripts that manipulate your PATH to get the version you want. Hard coding the path means I can't run my script under brew.
It wasn't that long ago (okay, it was long ago) that Perl and Python were not standard packages included in most Unix systems. That meant you didn't know where these two programs were installed. Was it under /bin? /usr/bin? /opt/bin? Who knows? Using #! /usr/bin/env perl meant I didn't have to know.
And Now Why You Shouldn't Use #! /usr/bin/env bash
When the path is hardcoded in the shebang, I have to run with that interpreter. Thus, #!/bin/bash forces me to use the default installed version of bash. Since bash features are very stable (unlike, say, Python -- try running a Python 2.x script under Python 3.x), it's very unlikely that my particular bash script will not work. And since my bash script is probably used by this system and other systems, using a non-standard version of bash may have undesired effects. It is very likely I want to make sure that the stable, standard version of bash is used with my shell script. Thus, I probably want to hard-code the path in my shebang.
There are a lot of systems that don't have Bash in /bin, FreeBSD and OpenBSD just to name a few. If your script is meant to be portable to many different Unices, you may want to use #!/usr/bin/env bash instead of #!/bin/bash.
Note that this does not hold true for sh; for Bourne-compliant scripts I exclusively use #!/bin/sh, since I think pretty much every Unix in existence has sh in /bin.
For invoking bash it is a little bit of overkill. Unless you have multiple bash binaries, like your own in ~/bin, it buys you nothing, and it also means your code depends on $PATH containing the right things.
It is handy for things like python though. There are wrapper scripts and environments which lead to alternative python binaries being used.
But nothing is lost by using the exact path to the binary as long as you are sure it is the binary you really want.
#!/usr/bin/env bash
is definitely better because it finds the bash executable via your PATH environment variable.
Go to your Linux shell and type
env
It will print all your environment variables.
Then, still in your shell, type
echo $BASH
It will print the bash path (according to the environment variable list), which you can use to build the correct shebang path in your script.
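For example, in an interactive bash session this might look like the following (the path shown is purely illustrative):

$ echo $BASH
/usr/local/bin/bash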
I would prefer wrapping the main program in a script like the one below, which checks all the bash interpreters available on the system. That gives more control over the version it uses.
#! /usr/bin/env bash
# This script just chooses the appropriate bash
# installed on the system and executes main_program
readonly DESIRED_VERSION="5"
declare all_bash_installed_on_this_system
declare bash=""

if [ "${BASH_VERSINFO}" -ne "${DESIRED_VERSION}" ]
then
    found=0
    all_bash_installed_on_this_system="$(\
        awk -F'/' '$NF == "bash"{print}' "/etc/shells"\
    )"
    for bash in $all_bash_installed_on_this_system
    do
        versinfo="$( $bash -c 'echo ${BASH_VERSINFO}' )"
        [ "${versinfo}" -eq "${DESIRED_VERSION}" ] && { found=1 ; break; }
    done
    if [ "${found}" -ne 1 ]
    then
        echo "bash ${DESIRED_VERSION} not available"
        exit 1
    fi
else
    bash="${BASH}"    # the running bash already matches the desired version
fi

"$bash" main_program "$@"
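Saved as, say, run_with_bash5.sh (the file name is just a placeholder) and made executable, the wrapper is then invoked instead of the main program:

chmod +x run_with_bash5.sh
./run_with_bash5.sh arg1 arg2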
Normally #!/path/to/command will cause the kernel to prepend the interpreter path to the invoking script when it is executed. Example:
# file.sh
#!/usr/bin/bash
echo hi
./file.sh will start a new process and the script will get executed like /usr/bin/bash ./file.sh
Now
# file.sh
#!/usr/bin/env bash
echo hi
will get executed as /usr/bin/env bash ./file.sh. Quoting from the man page of env:
env - run a program in a modified environment
So env will look for the command bash in its PATH environment variable and execute it in a separate environment, where extra environment values can be passed to env as NAME=VALUE pairs.
You can test this with other scripts using different interpreters like python, etc.
#!/usr/bin/env python
# python commands
Your question is biased because it assumes that #!/usr/bin/env bash is superior to #!/bin/bash. This assumption is not true, and here's why:
env is useful in two cases:
when there are multiple versions of the interpreter that are incompatible.
For example python 2/3, perl 4/5, or php 5/7
when the location depends on the PATH, for instance with a python virtual environment.
But bash doesn't fall under any of these two cases because:
bash is quite stable, especially on modern systems like Linux and BSD which form the vast majority of bash installations.
there's typically only one version of bash installed under /bin.
This has been the case for the past 20+ years; only very old unices (that nobody uses any longer) had a different location.
Consequently going through the PATH variable via /usr/bin/env is not useful for bash.
Add to that these three good reasons to use #!/bin/bash:
for system scripts (when not using sh) for which the PATH variable may not contain /bin.
For example, cron defaults to a very strict PATH of /usr/bin:/bin, which is fine, sure, but other contexts/environments may not include /bin for some peculiar reason.
when the user has screwed up his PATH, which is very common with beginners.
for security when for example you're calling a suid program that invokes a bash script. You don't want the interpreter to be found via the PATH variable which is entirely under the user's control!
Finally, one could argue that there is one legitimate use case of env to spawn bash: when one needs to pass extra environment variables to the interpreter using #!/usr/bin/env -S VAR=value bash.
But this is not a thing with bash because when you're in control of the shebang, you're also in control of the whole script, so just add VAR=value inside the script instead and avoid the aforementioned problems introduced by env with bash scripts.
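For example, instead of passing the variable through env in the shebang, a minimal sketch (VAR=value is just the placeholder from above) would be:

#!/bin/bash
# Set and export the variable inside the script itself,
# rather than via "env -S VAR=value" in the shebang line.
export VAR=value

echo "VAR is set to ${VAR}"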
I rarely touch shell scripts; we have another department that writes them, so I have some understanding of writing them but no experience. However, they all appear rather useless with my issue.
I am trying to execute some KornShell (ksh) scripts on a Windows-based machine using Cygwin; we use these to launch our Oracle WebLogic servers. Now they simply will not execute. I used to be able to execute these exact same scripts fine on my old machine.
Now I have narrowed this down to the 'magic number', or whatever it is called, at the start of the script, where it specifies the script interpreter path:
i.e.:
#!/bin/ksh
If I change it to execute with the plain system shell, it works, i.e.:
#!/bin/sh
I went through checking the packages installed for cygwin - now the shells I installed are:
mksh - the MirBSD Korn Shell
bash - the Bourne Again Shell
zsh - the Z shell
Should I expect to see a ksh.exe in my cygwin/bin directory? There is a system file 'ksh' which I assume somehow associates it with one of the other shell exes, like mksh.exe.
I understand my explanation may well be naff. But that being said, any help would be very much appreciated.
Thanks.
I believe the MirBSD korn shell is called mksh. You can verify this and look for the correct path by typing
% which mksh
% which ksh
or if you have no which,
% type -p mksh
% type -p ksh
or if that fails too, check /etc/shells which should list all valid shells on a system:
% grep ksh /etc/shells
You need to put the full path after the #! in the first line. It will probably be /bin/mksh, so your line needs to look like:
#!/bin/mksh
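As a quick check that the shebang now points at a real interpreter (yourscript.ksh is just a placeholder for your own script), you can run:

% head -1 yourscript.ksh
#!/bin/mksh
% /bin/mksh yourscript.ksh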
You've probably fixed it by now, but the answer was no, your Cygwin does not (yet) know about ksh.
I solved this problem by launching the cygwin setup in command-line mode with the -P ksh attribute (as described in http://www.ehow.com/how_8611406_install-ksh-cygwin.html).
You can run a ksh script using a .bat file:
C:\cygwin\bin\dos2unix kshfilename.ksh
C:\cygwin\bin\bash kshfilename.ksh
Running a shell script through Cygwin on Windows
Install KornShell (ksh) into Cygwin by the following process:
Download: ksh.2012-08-06.cygwin.i386.gz
Install ksh via Cygwin setup.
Execute Cygwin setup.exe
Choose: Install from Local Directory
Select the ksh.2012-08-06.cygwin.i386.gz as the Local Package Directory.
Complete Cygwin setup.
Restart Cygwin.
I have a Ruby script which declares the Ruby path in the first line:
#! /usr/bin/ruby
But I need it to run on different systems, and the path to Ruby is different on each of them. How do I handle this?
On Unix systems you can get away with
#! /usr/bin/env ruby
This has the effect of using the ruby found on the path. env is a core binary on pretty much every unix.
I have a relatively strange problem with bash shell scripts and tcl scripts, invoked from within the shell scripts.
To put things in perspective: I have a shell script that generates some other shell scripts, as well as Tcl scripts. Some of the generated shell scripts invoke the Tcl scripts with the tclsh command.
The files are created and stored in a directory that is also created by the initial shell script, which generates both the files and the folders where they are stored.
The problem is with the generated shell scripts, which invoke tclsh to run a Tcl script. Even though the files are generated and the shell scripts have execute permissions, the response from the shell is that the Tcl file referenced in the shell script cannot be found.
However, the file exists and I can open it with both vi and gedit, on RHEL 9.0 and CentOS 5.7 platforms. But when I take the same shell script out of the created directory, this error does not appear. Can you suggest anything? I also checked the directory permissions, but they seem OK, and I checked the shell script for extra characters but did not find anything.
It's hard to tell what exactly is going wrong from your description; you leave out all the information actually required to diagnose the problem precisely. However…
You have a tclsh on your PATH, but your script isn't running despite being chmodded to be executable? That means that there's a problem with your #! line. Current best practice is that you use something like this:
#!/usr/bin/env tclsh
That will search your PATH for tclsh and use that, and it's so much easier than any of the alternative contortions.
The other thing that might be causing a problem is if your Tcl program contains:
package require Tcl 8.5
And yet the version of Tcl used by the tclsh on the path is 8.4. I've seen this a number of times, and if that's your problem you need to make sure that the right Tcl rpm is installed and to update your #! line to this:
#!/usr/bin/env tclsh8.5
Similarly for Tcl 8.6, but in that case you might need to build your own from source as well and install that in a suitable location. (Tcl 8.6 is still only really for people who are specialists.)
(The issue is that RHEL — and Centos too, which tracks RHEL — is very conservative when it comes to Tcl. The reasons for this aren't really germane to this answer though.)
Could the problem be as simple as the fact that you don't have "." in your PATH? What if instead of calling your script like "myscript.tcl" you call it like "./myscript.tcl" or "/absolute/path/to/myscript.tcl"?
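For example, with a script called myscript.tcl in the current directory, the difference would look something like this (output is illustrative):

$ myscript.tcl          # fails when "." is not on PATH
-bash: myscript.tcl: command not found
$ ./myscript.tcl        # works, because the path is given explicitly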