How can I check if a program exists from a Bash script? - bash

How would I validate that a program exists, in a way that will either return an error and exit, or continue with the script?
It seems like it should be easy, but it's been stumping me.

Answer
POSIX compatible:
command -v <the_command>
Example use:
if ! command -v <the_command> &> /dev/null
then
echo "<the_command> could not be found"
exit
fi
For Bash specific environments:
hash <the_command> # For regular commands. Or...
type <the_command> # To check built-ins and keywords
Explanation
Avoid which. Not only is it an external process you're launching for doing very little (meaning builtins like hash, type or command are way cheaper), you can also rely on the builtins to actually do what you want, while the effects of external commands can easily vary from system to system.
Why care?
Many operating systems have a which that doesn't even set an exit status, meaning if which foo won't even work there and will always report that foo exists, even if it doesn't (note that some POSIX shells appear to do this for hash too).
Many operating systems make which do custom and evil stuff like change the output or even hook into the package manager.
So, don't use which. Instead use one of these:
command -v foo >/dev/null 2>&1 || { echo >&2 "I require foo but it's not installed. Aborting."; exit 1; }
type foo >/dev/null 2>&1 || { echo >&2 "I require foo but it's not installed. Aborting."; exit 1; }
hash foo 2>/dev/null || { echo >&2 "I require foo but it's not installed. Aborting."; exit 1; }
(Minor side note: some will suggest that 2>&- is the same as 2>/dev/null but shorter – this is untrue. 2>&- closes FD 2, which causes an error in the program when it tries to write to stderr. That is very different from successfully writing to it and discarding the output (and dangerous!).)
If your shebang is /bin/sh, then you should care about what POSIX says. The exit codes of type and hash aren't terribly well defined by POSIX, and hash has been seen to exit successfully when the command doesn't exist (I haven't seen this with type yet). command's exit status is well defined by POSIX, so that one is probably the safest to use.
If your script uses bash though, POSIX rules don't really matter anymore and both type and hash become perfectly safe to use. type now has a -P to search just the PATH and hash has the side-effect that the command's location will be hashed (for faster lookup next time you use it), which is usually a good thing since you probably check for its existence in order to actually use it.
As a simple example, here's a function that runs gdate if it exists, otherwise date:
gnudate() {
if hash gdate 2>/dev/null; then
gdate "$@"
else
date "$@"
fi
}
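If you also want the resolved path rather than just a yes/no, Bash's type -P prints it and still sets a useful exit status (a small sketch reusing the gdate example; the variable name is arbitrary):
if gdate_path=$(type -P gdate); then
    echo "gdate found at $gdate_path"
else
    echo "gdate not found; falling back to date" >&2
fi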
Alternative with a complete feature set
You can use scripts-common to meet your need.
To check if something is installed, you can do:
checkBin <the_command> || errorMessage "This tool requires <the_command>. Install it please, and then run this tool again."

The following is a portable way to check whether a command exists in $PATH and is executable:
[ -x "$(command -v foo)" ]
Example:
if ! [ -x "$(command -v git)" ]; then
echo 'Error: git is not installed.' >&2
exit 1
fi
The executable check is needed because bash returns a non-executable file if no executable file with that name is found in $PATH.
Also note that if a non-executable file with the same name as the executable exists earlier in $PATH, dash returns the former, even though the latter would be executed. This is a bug and is in violation of the POSIX standard. [Bug report] [Standard]
Edit: This seems to be fixed as of dash 0.5.11 (Debian 11).
In addition, this will fail if the command you are looking for has been defined as an alias.
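If you need several tools, the same executable check drops straight into a loop (a sketch; git and curl are just placeholder names):
for cmd in git curl; do
    if ! [ -x "$(command -v "$cmd")" ]; then
        echo "Error: $cmd is not installed." >&2
        exit 1
    fi
done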

I agree with lhunath about discouraging the use of which, and his solution is perfectly valid for Bash users. However, to be more portable, command -v should be used instead:
$ command -v foo >/dev/null 2>&1 || { echo "I require foo but it's not installed. Aborting." >&2; exit 1; }
The command utility is POSIX-compliant. See its specification: command - execute a simple command
Note: type is POSIX compliant, but type -P is not.

It depends on whether you want to know whether it exists in one of the directories in the $PATH variable or whether you know the absolute location of it. If you want to know if it is in the $PATH variable, use
if which programname >/dev/null; then
echo exists
else
echo does not exist
fi
otherwise use
if [ -x /path/to/programname ]; then
echo exists
else
echo does not exist
fi
The redirection to /dev/null in the first example suppresses the output of the which program.

I have a function defined in my .bashrc that makes this easier.
command_exists () {
type "$1" &> /dev/null ;
}
Here's an example of how it's used (from my .bash_profile.)
if command_exists mvim ; then
export VISUAL="mvim --nofork"
fi
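The same helper works for any other conditional setup in your dotfiles, for example enabling an alias only when the command is present (a sketch; htop is just an example name):
if command_exists htop ; then
    alias top='htop'
fi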

Expanding on @lhunath's and @GregV's answers, here's the code for the people who want to easily put that check inside an if statement:
exists()
{
command -v "$1" >/dev/null 2>&1
}
Here's how to use it:
if exists bash; then
echo 'Bash exists!'
else
echo 'Your system does not have Bash'
fi

Try using:
test -x filename
or
[ -x filename ]
From the Bash manpage under Conditional Expressions:
-x file
True if file exists and is executable.

To use hash, as @lhunath suggests, in a Bash script:
hash foo &> /dev/null
if [ $? -eq 1 ]; then
echo >&2 "foo not found."
fi
This script runs hash and then checks if the exit code of the most recent command, the value stored in $?, is equal to 1. If hash doesn't find foo, the exit code will be 1. If foo is present, the exit code will be 0.
&> /dev/null redirects standard error and standard output from hash so that it doesn't appear onscreen and echo >&2 writes the message to standard error.
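The same check reads a little more tersely if you test the command directly in the if, instead of inspecting $? afterwards (behavior is the same):
if ! hash foo &> /dev/null; then
    echo >&2 "foo not found."
fi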

command -v works fine if the POSIX_BUILTINS option is set for the <command> to test for, but it can fail if not. (It has worked for me for years, but I recently ran into a case where it didn't.)
I find the following to be more failproof:
test -x "$(which <command>)"
It tests for three things at once: path, existence, and execute permission.

There are a ton of options here, but I was surprised no quick one-liners. This is what I used at the beginning of my scripts:
[[ "$(command -v mvn)" ]] || { echo "mvn is not installed" 1>&2 ; exit 1; }
[[ "$(command -v java)" ]] || { echo "java is not installed" 1>&2 ; exit 1; }
This is based on the selected answer here and another source.
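If the list of prerequisites grows, the same one-liner generalizes to a loop (a sketch reusing the same two example commands):
for cmd in mvn java; do
    [[ "$(command -v "$cmd")" ]] || { echo "$cmd is not installed" 1>&2; exit 1; }
done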

If you check for program existence, you are probably going to run it later anyway. Why not try to run it in the first place?
if foo --version >/dev/null 2>&1; then
echo Found
else
echo Not found
fi
It's a more trustworthy check that the program runs than merely looking at PATH directories and file permissions.
Plus you can get some useful result from your program, such as its version.
Of course the drawbacks are that some programs can be heavy to start and some don't have a --version option to immediately (and successfully) exit.
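When the program does support --version, capturing the output lets you report the version as well (a sketch; foo and its --version flag are assumptions about your particular program):
if foo_version=$(foo --version 2>/dev/null); then
    echo "Found: $foo_version"
else
    echo "foo not found (or it does not support --version)" >&2
fi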

Check for multiple dependencies and inform status to end users
for cmd in latex pandoc; do
printf '%-10s' "$cmd"
if hash "$cmd" 2>/dev/null; then
echo OK
else
echo missing
fi
done
Sample output:
latex     OK
pandoc    missing
Adjust the 10 to the maximum command length. It is not automatic, because I don't see a non-verbose POSIX way to do it:
How can I align the columns of a space separated table in Bash?
Check if some apt packages are installed with dpkg -s and install them otherwise.
See: Check if an apt-get package is installed and then install it if it's not on Linux
It was previously mentioned at: How can I check if a program exists from a Bash script?
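If the script should stop when anything is missing, the hash loop above extends naturally (a sketch reusing the same two example commands):
missing=0
for cmd in latex pandoc; do
    hash "$cmd" 2>/dev/null || { echo "missing: $cmd" >&2; missing=1; }
done
[ "$missing" -eq 0 ] || exit 1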

I never did get the previous answers to work on the box I have access to. For one, type has been installed (doing what more does). So the builtin directive is needed. This command works for me:
if [ `builtin type -p vim` ]; then echo "TRUE"; else echo "FALSE"; fi

I wanted the same question answered but to run within a Makefile.
install:
@if [[ ! -x "$(shell command -v ghead)" ]]; then \
echo 'ghead does not exist. Please install it.'; \
exit -1; \
fi

It could be simpler, just:
#!/usr/bin/env bash
set -x
# if local program 'foo' returns 1 (doesn't exist) then...
if ! type -P foo; then
echo 'crap, no foo'
else
echo 'sweet, we have foo!'
fi
Change foo to vi to get the other condition to fire.

hash foo 2>/dev/null: works with Z shell (Zsh), Bash, Dash and ash.
type -p foo: it appears to work with Z shell, Bash and ash (BusyBox), but not Dash (it interprets -p as an argument).
command -v foo: works with Z shell, Bash, Dash, but not ash (BusyBox) (-ash: command: not found).
Also note that builtin is not available with ash and Dash.
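If you need one helper that copes with all of these shells, a conservative sketch (assumption: at least one of command -v or hash exists in the shell at hand) is to try command -v first and fall back to hash:
have() {
    command -v "$1" >/dev/null 2>&1 || hash "$1" 2>/dev/null
}
have foo || echo "foo not found" >&2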

zsh only, but very useful for zsh scripting (e.g. when writing completion scripts):
The zsh/parameter module gives access to, among other things, the internal commands hash table. From man zshmodules:
THE ZSH/PARAMETER MODULE
The zsh/parameter module gives access to some of the internal hash tables used by the shell by defining some special parameters.
[...]
commands
This array gives access to the command hash table. The keys are the names of external commands, the values are the pathnames of the files that would be executed when the command would be invoked. Setting a key in this array defines a new entry in this table in the same way as with the hash builtin. Unsetting a key as in `unset "commands[foo]"' removes the entry for the given key from the command hash table.
Although it is a loadable module, it seems to be loaded by default, as long as zsh is not used with --emulate.
example:
martin@martin ~ % echo $commands[zsh]
/usr/bin/zsh
To quickly check whether a certain command is available, just check if the key exists in the hash:
if (( ${+commands[zsh]} ))
then
echo "zsh is available"
fi
Note though that the hash will contain any files in $PATH folders, regardless of whether they are executable or not. To be absolutely sure, you have to spend a stat call on that:
if (( ${+commands[zsh]} )) && [[ -x $commands[zsh] ]]
then
echo "zsh is available"
fi

The which command might be useful. man which
It returns 0 if the executable is found and returns 1 if it's not found or not executable:
NAME
which - locate a command
SYNOPSIS
which [-a] filename ...
DESCRIPTION
which returns the pathnames of the files which would be executed in the current environment, had its arguments been given as commands in a strictly POSIX-conformant shell. It does this by searching the PATH for executable files matching the names of the arguments.
OPTIONS
-a print all matching pathnames of each argument
EXIT STATUS
0 if all specified commands are found and executable
1 if one or more specified commands is nonexistent or not executable
2 if an invalid option is specified
The nice thing about which is that it figures out if the executable is available in the environment that which is run in - it saves a few problems...

Use Bash builtins if you can:
which programname
...
type -P programname

For those interested, none of the methodologies in previous answers work if you wish to detect an installed library. I imagine you are left either with physically checking the path (potentially for header files and such), or something like this (if you are on a Debian-based distribution):
dpkg --status libdb-dev | grep -q not-installed
if [ $? -eq 0 ]; then
apt-get install libdb-dev
fi
As you can see from the above, a "0" answer from the query means the package is not installed. This is a function of "grep" - a "0" means a match was found, a "1" means no match was found.
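A variant that leans on dpkg's exit status rather than grepping its output (my understanding is that dpkg -s exits non-zero when the package isn't installed, but removed-but-not-purged packages can still report a status, so treat this as a sketch to verify on your system):
if ! dpkg -s libdb-dev >/dev/null 2>&1; then
    apt-get install libdb-dev
fi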

This will tell, based on the location, whether the program exists or not:
if [ -x /usr/bin/yum ]; then
echo "This is Centos"
fi

I'd say there isn't any portable and 100% reliable way due to dangling aliases. For example:
alias john='ls --color'
alias paul='george -F'
alias george='ls -h'
alias ringo=/
Of course, only the last one is problematic (no offence to Ringo!). But all of them are valid aliases from the point of view of command -v.
In order to reject dangling ones like ringo, we have to parse the output of the shell built-in alias command and recurse into them (command -v isn't superior to alias here). There isn't any portable solution for it, and even a Bash-specific solution is rather tedious.
Note that a solution like this will unconditionally reject alias ls='ls -F':
test() { command -v "$1" | grep -qv alias; }

If you can't get the approaches in the answers here to work and are pulling your hair out, try running the same command using bash -c. Just look at this somnambular delirium; this is what's really happening when you run $(sub-command):
First. It can give you completely different output.
$ command -v ls
alias ls='ls --color=auto'
$ bash -c "command -v ls"
/bin/ls
Second. It can give you no output at all.
$ command -v nvm
nvm
$ bash -c "command -v nvm"
$ bash -c "nvm --help"
bash: nvm: command not found

#!/bin/bash
a=$(apt-cache show program 2>/dev/null)
if [[ -z $a ]]
then
echo "the program doesn't exist"
else
echo "the program exists"
fi
#program is not literal, you can change it to the program's name you want to check

The hash variant has one pitfall: on the command line you can, for example, type in
one_folder/process
to have process executed. For this, one_folder must be under your current directory (a command name containing a slash is never looked up in $PATH). But when you try to hash this command, it will always succeed:
hash one_folder/process; echo $? # will always output '0'

I second the use of "command -v". E.g. like this:
md=$(command -v mkdirhier) ; alias md=${md:=mkdir} # bash
emacs="$(command -v emacs) -nw" || emacs=nano
alias e=$emacs
[[ -z $(command -v jed) ]] && alias jed=$emacs

I had to check if Git was installed as part of deploying our CI server. My final Bash script was as follows (Ubuntu server):
if ! builtin type -p git &>/dev/null; then
sudo apt-get -y install git-core
fi
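The same pattern scales to a small command-to-package map if the CI box needs more than one tool (a sketch; the names are illustrative, and it assumes Bash 4 associative arrays plus an apt-based system):
declare -A pkg_for=( [git]=git-core [curl]=curl )
for cmd in "${!pkg_for[@]}"; do
    if ! builtin type -p "$cmd" &>/dev/null; then
        sudo apt-get -y install "${pkg_for[$cmd]}"
    fi
done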

To mimic Bash's type -P cmd, we can use the POSIX compliant env -i type cmd 1>/dev/null 2>&1.
man env
# "The option '-i' causes env to completely ignore the environment it inherits."
# In other words, there are no aliases or functions to be looked up by the type command.
ls() { echo 'Hello, world!'; }
ls
type ls
env -i type ls
cmd=ls
cmd=lsx
env -i type $cmd 1>/dev/null 2>&1 || { echo "$cmd not found"; exit 1; }

If there isn't any external type command available (as taken for granted here), we can use POSIX compliant env -i sh -c 'type cmd 1>/dev/null 2>&1':
# Portable version of Bash's type -P cmd (without output on stdout)
typep() {
command -p env -i PATH="$PATH" sh -c '
export LC_ALL=C LANG=C
cmd="$1"
cmd="`type "$cmd" 2>/dev/null || { echo "error: command $cmd not found; exiting ..." 1>&2; exit 1; }`"
[ $? != 0 ] && exit 1
case "$cmd" in
*\ /*) exit 0;;
*) printf "%s\n" "error: $cmd" 1>&2; exit 1;;
esac
' _ "$1" || exit 1
}
# Get your standard $PATH value
#PATH="$(command -p getconf PATH)"
typep ls
typep builtin
typep ls-temp
At least on Mac OS X v10.6.8 (Snow Leopard) using Bash 4.2.24(2) command -v ls does not match a moved /bin/ls-temp.

My setup for a Debian server:
I had the problem when multiple packages contained the same name.
For example apache2. So this was my solution:
function _apt_install() {
apt-get install -y $1 > /dev/null
}
function _apt_install_norecommends() {
apt-get install -y --no-install-recommends $1 > /dev/null
}
function _apt_available() {
if [ `apt-cache search $1 | grep -o "$1" | uniq | wc -l` = "1" ]; then
echo "Package is available : $1"
PACKAGE_INSTALL="1"
else
echo "Package $1 is NOT available for install"
echo "We cannot continue without this package..."
echo "Exiting now..."
exit 0
fi
}
function _package_install {
_apt_available $1
if [ "${PACKAGE_INSTALL}" = "1" ]; then
if [ "$(dpkg-query -l $1 | tail -n1 | cut -c1-2)" = "ii" ]; then
echo "package is already_installed: $1"
else
echo "installing package : $1, please wait.."
_apt_install $1
sleep 0.5
fi
fi
}
function _package_install_no_recommends {
_apt_available $1
if [ "${PACKAGE_INSTALL}" = "1" ]; then
if [ "$(dpkg-query -l $1 | tail -n1 | cut -c1-2)" = "ii" ]; then
echo "package is already_installed: $1"
else
echo "installing package : $1, please wait.."
_apt_install_norecommends $1
sleep 0.5
fi
fi
}

How to detect if a script is being sourced

I have a script where I do not want it to call exit if it's being sourced.
I thought of checking if $0 == bash but this has problems if the script is sourced from another script, or if the user sources it from a different shell like ksh.
Is there a reliable way of detecting if a script is being sourced?
Robust solutions for bash, ksh, zsh, including a cross-shell one, plus a reasonably robust POSIX-compliant solution:
Version numbers given are the ones on which functionality was verified - likely, these solutions work on much earlier versions, too - feedback welcome.
Using POSIX features only (such as in dash, which acts as /bin/sh on Ubuntu), there is no robust way to determine if a script is being sourced - see below for the best approximation.
Important:
The solutions determine whether the script is being sourced by its caller, which may be a shell itself or another script (which may or may not be sourced itself).
Detecting the latter case adds complexity; if you do not need to detect when your script is being sourced by another script, you can use the following, relatively simple POSIX-compliant solution:
# Helper function
is_sourced() {
if [ -n "$ZSH_VERSION" ]; then
case $ZSH_EVAL_CONTEXT in *:file:*) return 0;; esac
else # Add additional POSIX-compatible shell names here, if needed.
case ${0##*/} in dash|-dash|bash|-bash|ksh|-ksh|sh|-sh) return 0;; esac
fi
return 1 # NOT sourced.
}
# Sample call.
is_sourced && sourced=1 || sourced=0
All solutions below must run in the top-level scope of your script, not inside functions.
One-liners follow - explanation below; the cross-shell version is complex, but it should work robustly:
bash (verified on 3.2.57, 4.4.19, and 5.1.16)
(return 0 2>/dev/null) && sourced=1 || sourced=0
ksh (verified on 93u+)
[[ "$(cd -- "$(dirname -- "$0")" && pwd -P)/$(basename -- "$0")" != "$(cd -- "$(dirname -- "${.sh.file}")" && pwd -P)/$(basename -- "${.sh.file}")" ]] && sourced=1 || sourced=0
zsh (verified on 5.0.5)
[[ $ZSH_EVAL_CONTEXT =~ :file$ ]] && sourced=1 || sourced=0
cross-shell (bash, ksh, zsh)
(
[[ -n $ZSH_VERSION && $ZSH_EVAL_CONTEXT =~ :file$ ]] ||
[[ -n $KSH_VERSION && "$(cd -- "$(dirname -- "$0")" && pwd -P)/$(basename -- "$0")" != "$(cd -- "$(dirname -- "${.sh.file}")" && pwd -P)/$(basename -- "${.sh.file}")" ]] ||
[[ -n $BASH_VERSION ]] && (return 0 2>/dev/null)
) && sourced=1 || sourced=0
POSIX-compliant; not a one-liner (single pipeline) for technical reasons and not fully robust (see bottom):
sourced=0
if [ -n "$ZSH_VERSION" ]; then
case $ZSH_EVAL_CONTEXT in *:file) sourced=1;; esac
elif [ -n "$KSH_VERSION" ]; then
[ "$(cd -- "$(dirname -- "$0")" && pwd -P)/$(basename -- "$0")" != "$(cd -- "$(dirname -- "${.sh.file}")" && pwd -P)/$(basename -- "${.sh.file}")" ] && sourced=1
elif [ -n "$BASH_VERSION" ]; then
(return 0 2>/dev/null) && sourced=1
else # All other shells: examine $0 for known shell binary filenames.
# Detects `sh` and `dash`; add additional shell filenames as needed.
case ${0##*/} in sh|-sh|dash|-dash) sourced=1;; esac
fi
Explanations
bash
(return 0 2>/dev/null) && sourced=1 || sourced=0
Note: The technique was adapted from user5754163's answer, as it turned out to be more robust than the original solution, [[ $0 != "$BASH_SOURCE" ]] && sourced=1 || sourced=0.[1]
Bash allows return statements only from functions and, in a script's top-level scope, only if the script is sourced.
If return is used in the top-level scope of a non-sourced script, an error message is emitted, and the exit code is set to 1.
(return 0 2>/dev/null) executes return in a subshell and suppresses the error message; afterwards the exit code indicates whether the script was sourced (0) or not (1), which is used with the && and || operators to set the sourced variable accordingly.
Use of a subshell is necessary, because executing return in the top-level scope of a sourced script would exit the script.
Tip of the hat to @Haozhun, who made the command more robust by explicitly using 0 as the return operand; he notes: per bash help for return [N]: "If N is omitted, the return status is that of the last command." As a result, the earlier version [which used just return, without an operand] produces an incorrect result if the last command in the user's shell had a non-zero return value.
ksh
[[ "$(cd -- "$(dirname -- "$0")" && pwd -P)/$(basename -- "$0")" != "$(cd -- "$(dirname -- "${.sh.file}")" && pwd -P)/$(basename -- "${.sh.file}")" ]] && sourced=1 || sourced=0
Special variable ${.sh.file} is somewhat analogous to $BASH_SOURCE; note that ${.sh.file} causes a syntax error in bash, zsh, and dash, so be sure to execute it conditionally in multi-shell scripts.
Unlike in bash, $0 and ${.sh.file} are NOT guaranteed to be the same - at different times either may be a relative path or mere file name, while the other may be a full one; therefore, both $0 and ${.sh.file} must be resolved to full paths before comparing. If the full paths differ, sourcing is implied.
zsh
[[ $ZSH_EVAL_CONTEXT =~ :file$ ]] && sourced=1 || sourced=0
$ZSH_EVAL_CONTEXT contains information about the evaluation context: substring file, separated with :, is only present if the script is being sourced.
In a sourced script's top-level scope, $ZSH_EVAL_CONTEXT ends with :file, and that's what this test is limited to. Inside a function, :shfunc is appended to :file; inside a command substitution, :cmdsubst is appended.
Using POSIX features only
If you're willing to make certain assumptions, you can make a reasonable, but not fool-proof guess as to whether your script is being sourced, based on knowing the binary filenames of the shells that may be executing your script.
Notably, this means that this approach doesn't detect the case when your script is being sourced by another script.
The section "How to handle sourced invocations" in this answer discusses the edge cases that cannot be handled with POSIX features only in detail.
Examining the binary filename relies on the standard behavior of $0, which zsh, for instance, does not exhibit.
Thus, the safest approach is to combine the robust, shell-specific methods above - which do not rely on $0 - with a $0-based fallback solution for all remaining shells.
In short: the following solution works robustly in the shells that are covered by shell-specific tests; in all other shells, it works as expected only when the script is being sourced directly from such a shell, as opposed to from another script.
Tip of the hat to Stéphane Desneux and his answer for the inspiration (transforming my cross-shell statement expression into a sh-compatible if statement and adding a handler for other shells).
sourced=0
if [ -n "$ZSH_VERSION" ]; then
case $ZSH_EVAL_CONTEXT in *:file) sourced=1;; esac
elif [ -n "$KSH_VERSION" ]; then
[ "$(cd -- "$(dirname -- "$0")" && pwd -P)/$(basename -- "$0")" != "$(cd -- "$(dirname -- "${.sh.file}")" && pwd -P)/$(basename -- "${.sh.file}")" ] && sourced=1
elif [ -n "$BASH_VERSION" ]; then
(return 0 2>/dev/null) && sourced=1
else # All other shells: examine $0 for known shell binary filenames.
# Detects `sh` and `dash`; add additional shell filenames as needed.
case ${0##*/} in sh|-sh|dash|-dash) sourced=1;; esac
fi
Note that, for robustness, each shell binary filename (e.g. sh) is represented twice - once as-is and a second time prefixed with -. This is to account for environments, such as macOS, where interactive shells are launched as login shells with a custom $0 value that is the (path-less) shell filename prefixed with -. Thanks, t7e.
(While sh and dash are perhaps unlikely to be used as interactive shells, others that you may add to the list may be.)
[1] user1902689 discovered that [[ $0 != "$BASH_SOURCE" ]] yields a false positive when you execute a script located in the $PATH by passing its mere filename to the bash binary; e.g., bash my-script, because $0 is then just my-script, whereas $BASH_SOURCE is the full path. While you normally wouldn't use this technique to invoke scripts in the $PATH - you'd just invoke them directly (my-script) - it is helpful when combined with -x for debugging.
If your Bash version knows about the BASH_SOURCE array variable, try something like:
# man bash | less -p BASH_SOURCE
#[[ ${BASH_VERSINFO[0]} -le 2 ]] && echo 'No BASH_SOURCE array variable' && exit 1
[[ "${BASH_SOURCE[0]}" != "${0}" ]] && echo "script ${BASH_SOURCE[0]} is being sourced ..."
This seems to be portable between Bash and Korn:
[[ $_ != $0 ]] && echo "Script is being sourced" || echo "Script is a subshell"
A line similar to this or an assignment like pathname="$_" (with a later test and action) must be on the first line of the script or on the line after the shebang (which, if used, should be for ksh in order for it to work under most circumstances).
After reading @DennisWilliamson's answer, there are some issues; see below:
As this question applies to both ksh and bash, there is another part in this answer concerning ksh... see below.
Simple bash way
[ "$0" = "$BASH_SOURCE" ]
Let's try it (on the fly, because bash can do that ;-):
source <(echo $'#!/bin/bash
[ "$0" = "$BASH_SOURCE" ] && v=own || v=sourced;
echo "process $$ is $v ($0, $BASH_SOURCE)" ')
process 29301 is sourced (bash, /dev/fd/63)
bash <(echo $'#!/bin/bash
[ "$0" = "$BASH_SOURCE" ] && v=own || v=sourced;
echo "process $$ is $v ($0, $BASH_SOURCE)" ')
process 16229 is own (/dev/fd/63, /dev/fd/63)
I use source instead of . for readability (as . is an alias of source):
. <(echo $'#!/bin/bash
[ "$0" = "$BASH_SOURCE" ] && v=own || v=sourced;
echo "process $$ is $v ($0, $BASH_SOURCE)" ')
process 29301 is sourced (bash, /dev/fd/63)
Note that the process number doesn't change while the process stays sourced:
echo $$
29301
Why not use the $_ == $0 comparison
To cover many cases, I begin by writing a true script:
#!/bin/bash
# As $_ could be used only once, uncomment one of two following lines
#printf '_="%s", 0="%s" and BASH_SOURCE="%s"\n' "$_" "$0" "$BASH_SOURCE"
[[ "$_" != "$0" ]] && DW_PURPOSE=sourced || DW_PURPOSE=subshell
[ "$0" = "$BASH_SOURCE" ] && BASH_KIND_ENV=own || BASH_KIND_ENV=sourced;
echo "proc: $$[ppid:$PPID] is $BASH_KIND_ENV (DW purpose: $DW_PURPOSE)"
Copy this to a file called testscript:
cat >testscript
chmod +x testscript
Now we could test:
./testscript
proc: 25758[ppid:24890] is own (DW purpose: subshell)
That's ok.
. ./testscript
proc: 24890[ppid:24885] is sourced (DW purpose: sourced)
source ./testscript
proc: 24890[ppid:24885] is sourced (DW purpose: sourced)
That's ok.
But, for testing a script before adding the -x flag:
bash ./testscript
proc: 25776[ppid:24890] is own (DW purpose: sourced)
Or to use pre-defined variables:
env PATH=/tmp/bintemp:$PATH ./testscript
proc: 25948[ppid:24890] is own (DW purpose: sourced)
env SOMETHING=PREDEFINED ./testscript
proc: 25972[ppid:24890] is own (DW purpose: sourced)
This won't work anymore.
Moving the comment from the 5th line to the 6th would give a more readable answer:
./testscript
_="./testscript", 0="./testscript" and BASH_SOURCE="./testscript"
proc: 26256[ppid:24890] is own
. testscript
_="_filedir", 0="bash" and BASH_SOURCE="testscript"
proc: 24890[ppid:24885] is sourced
source testscript
_="_filedir", 0="bash" and BASH_SOURCE="testscript"
proc: 24890[ppid:24885] is sourced
bash testscript
_="/bin/bash", 0="testscript" and BASH_SOURCE="testscript"
proc: 26317[ppid:24890] is own
env FILE=/dev/null ./testscript
_="/usr/bin/env", 0="./testscript" and BASH_SOURCE="./testscript"
proc: 26336[ppid:24890] is own
Harder: ksh now...
As I don't use ksh a lot, after some reading of the man page, here are my tries:
#!/bin/ksh
set >/tmp/ksh-$$.log
Copy this in a testfile.ksh:
cat >testfile.ksh
chmod +x testfile.ksh
Then run it two times:
./testfile.ksh
. ./testfile.ksh
ls -l /tmp/ksh-*.log
-rw-r--r-- 1 user user 2183 avr 11 13:48 /tmp/ksh-9725.log
-rw-r--r-- 1 user user 2140 avr 11 13:48 /tmp/ksh-9781.log
echo $$
9725
and see:
diff /tmp/ksh-{9725,9781}.log | grep ^\> # OWN SUBSHELL:
> HISTCMD=0
> PPID=9725
> RANDOM=1626
> SECONDS=0.001
> lineno=0
> SHLVL=3
diff /tmp/ksh-{9725,9781}.log | grep ^\< # SOURCED:
< COLUMNS=152
< HISTCMD=117
< LINES=47
< PPID=9163
< PS1='$ '
< RANDOM=29667
< SECONDS=23.652
< level=1
< lineno=1
< SHLVL=2
There are some variables inherited in a sourced run, but nothing really related...
You could even check that $SECONDS is close to 0.000, but that covers only manually sourced cases...
You could even try to check what the parent is:
Place this into your testfile.ksh:
ps $PPID
Then:
./testfile.ksh
PID TTY STAT TIME COMMAND
32320 pts/4 Ss 0:00 -ksh
. ./testfile.ksh
PID TTY STAT TIME COMMAND
32319 ? S 0:00 sshd: user@pts/4
or ps ho cmd $PPID, but this works only for one level of subsessions...
Sorry, I couldn't find a reliable way of doing that, under ksh.
The BASH_SOURCE[] answer (bash-3.0 and later) seems simplest, though BASH_SOURCE[] is not documented to work outside a function body (it currently happens to work, in disagreement with the man page).
The most robust way, as suggested by Wirawan Purwanto, is to check FUNCNAME[1] within a function:
function mycheck() { declare -p FUNCNAME; }
mycheck
Then:
$ bash sourcetest.sh
declare -a FUNCNAME='([0]="mycheck" [1]="main")'
$ . sourcetest.sh
declare -a FUNCNAME='([0]="mycheck" [1]="source")'
This is the equivalent to checking the output of caller, the values main and source distinguish the caller's context. Using FUNCNAME[] saves you capturing and parsing caller output. You need to know or calculate your local call depth to be correct though. Cases like a script being sourced from within another function or script will cause the array (stack) to be deeper. (FUNCNAME is a special bash array variable, it should have contiguous indexes corresponding to the call stack, as long as it is never unset.)
function issourced() {
[[ ${FUNCNAME[@]: -1} == "source" ]]
}
(In bash-4.2 and later you can use the simpler form ${FUNCNAME[-1]} instead for the last item in the array. Improved and simplified thanks to Dennis Williamson's comment below.)
However, your problem as stated is "I have a script where I do not want it to call 'exit' if it's being sourced". The common bash idiom for this situation is:
return 2>/dev/null || exit
If the script is being sourced then return will terminate the sourced script and return to the caller.
If the script is being executed, then return will return an error (redirected), and exit will terminate the script as normal. Both return and exit can take an exit code, if required.
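For example, to propagate a failure from a setup step whether the script is sourced or executed (a sketch; some_required_setup is a hypothetical command):
some_required_setup || return 1 2>/dev/null || exit 1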
Sadly, this doesn't work in ksh (at least not in the AT&T-derived version I have here); it treats return as equivalent to exit if invoked outside a function or a dot-sourced script.
Updated: What you can do in contemporary versions of ksh is to check the special variable .sh.level which is set to the function call depth. For an invoked script this will initially be unset, for a dot-sourced script it will be set to 1.
function issourced {
[[ ${.sh.level} -eq 2 ]]
}
issourced && echo this script is sourced
This is not quite as robust as the bash version; you must invoke issourced() from the top level of the file you are testing, or at a known function depth.
(You may also be interested in this code on github which uses a ksh discipline function and some debug trap trickery to emulate the bash FUNCNAME array.)
The canonical answer here: http://mywiki.wooledge.org/BashFAQ/109 also offers $- as another indicator (though imperfect) of the shell state.
Notes:
it is possible to create bash functions named "main" and "source" (overriding the builtin); these names may appear in FUNCNAME[], but as long as only the last item in that array is tested, there is no ambiguity.
I don't have a good answer for pdksh. The closest thing I can find applies only to pdksh, where each sourcing of a script opens a new file descriptor (starting with 10 for the original script). Almost certainly not something you want to rely on...
Editor's note: This answer's solution works robustly, but is bash-only. It can be streamlined to
(return 2>/dev/null).
TL;DR
Try to execute a return statement. If the script isn't sourced, that will raise an error. You can catch that error and proceed as you need.
Put this in a file and call it, say, test.sh:
#!/usr/bin/env sh
# Try to execute a `return` statement,
# but do it in a sub-shell and catch the results.
# If this script isn't sourced, that will raise an error.
$(return >/dev/null 2>&1)
# What exit code did that give?
if [ "$?" -eq "0" ]
then
echo "This script is sourced."
else
echo "This script is not sourced."
fi
Execute it directly:
shell-prompt> sh test.sh
output: This script is not sourced.
Source it:
shell-prompt> source test.sh
output: This script is sourced.
For me, this works in zsh and bash.
Explanation
The return statement will raise an error if you try to execute it outside of a function or if the script is not sourced. Try this from a shell prompt:
shell-prompt> return
output: ...can only `return` from a function or sourced script
You don't need to see that error message, so you can redirect the output to /dev/null:
shell-prompt> return >/dev/null 2>&1
Now check the exit code. 0 means OK (no errors occurred), 1 means an error occurred:
shell-prompt> echo $?
output: 1
You also want to execute the return statement inside of a sub-shell. When the return statement runs it . . . well . . . returns. If you execute it in a sub-shell, it will return out of that sub-shell, rather than returning out of your script. To execute in the sub-shell, wrap it in $(...):
shell-prompt> $(return >/dev/null 2>&1)
Now, you can see the exit code of the sub-shell, which should be 1, because an error was raised inside the sub-shell:
shell-prompt> echo $?
output: 1
FWIW, after reading all of the other answers, I came up with following solution for me:
Update: Actually, somebody spotted a since-corrected error in another answer which affected mine, too. I think the update here also is an improvement (see edits if you are curious).
This works for all scripts which start with #!/bin/bash but might be sourced by different shells as well, to learn some information (like settings) which is kept outside the main function.
According to the comments below, this answer apparently does not work for all bash variants, nor for systems where /bin/sh is based on bash. I.e., it fails for bash v3.x on macOS. (Currently I do not know how to solve this.)
#!/bin/bash
# Function definitions (API) and shell variables (constants) go here
# (This is what might be interesting for other shells, too.)
# this main() function is only meant to be meaningful for bash
main()
{
# The script's execution part goes here
}
BASH_SOURCE=".$0" # cannot be changed in bash
test ".$0" != ".$BASH_SOURCE" || main "$@"
Instead of the last 2 lines you can use the following (in my opinion less readable) code to not set BASH_SOURCE in other shells and allow set -e to work in main:
if ( BASH_SOURCE=".$0" && exec test ".$0" != ".$BASH_SOURCE" ); then :; else main "$@"; fi
This script-recipe has following properties:
If executed by bash the normal way, main is called. Please note that this does not include a call like bash -x script (where script does not contain a path), see below.
If sourced by bash, main is only called if the calling script happens to have the same name. (For example, if it sources itself, or via bash -c 'someotherscript "$@"' main-script args.., where main-script must be what test sees as $BASH_SOURCE.)
If sourced/executed/read/evaled by a shell other than bash, main is not called (BASH_SOURCE is always different to $0).
main is not called if bash reads the script from stdin, unless you set $0 to be the empty string like so: ( exec -a '' /bin/bash ) <script
If evaluated by bash with eval (eval "`cat script`", all quotes are important!) from within some other script, this calls main. If eval is run from the command line directly, this is similar to the previous case, where the script is read from stdin. (BASH_SOURCE is blank, while $0 usually is /bin/bash if not forced to something completely different.)
If main is not called, it does return true ($?=0).
This does not rely on unexpected behavior (previously I wrote undocumented, but I found no documentation that you cannot unset nor alter BASH_SOURCE either):
BASH_SOURCE is a bash reserved array. But allowing BASH_SOURCE=".$0" to change it would open a very dangerous can of worms, so my expectation is, that this must have no effect (except, perhaps, some ugly warning shows up in some future version of bash).
There is no documentation that BASH_SOURCE works outside functions. However the opposite (that it only works in functions) is neither documented. The observation is, that it works (tested with bash v4.3 and v4.4, unfortunately I have no bash v3.x anymore) and that quite too many scripts would break, if $BASH_SOURCE stops working as observed. Hence my expectation is, that BASH_SOURCE stays as is for future versions of bash, too.
In contrast (nice find, BTW!) consider ( return 0 ), which gives 0 if sourced and 1 if not sourced. This comes a bit unexpected, not only for me, and (according to the readings there) POSIX says that return from a subshell is undefined behavior (and the return here is clearly from a subshell). Perhaps this feature eventually gets enough widespread use that it can no longer be changed, but AFAICS there is a much higher chance that some future bash version accidentally changes the return behavior in that case.
Unfortunately bash -x script 1 2 3 does not run main. (Compare script 1 2 3 where script has no path.) The following can be used as a workaround:
bash -x "`which script`" 1 2 3
bash -xc '. script' "`which script`" 1 2 3
That bash script 1 2 3 does not run main can be considered a feature.
Note that ( exec -a none script ) calls main (bash does not pass its $0 to the script; for this you need to use -c as shown in the last point).
Thus, except for some corner cases, main is only called when the script is executed the usual way. Normally this is what you want, especially because it lacks complex, hard-to-understand code.
Note that it is very similar to the Python code:
if __name__ == '__main__': main()
Which also prevents calling of main, except for some corner cases, as
you can import/load the script and enforce that __name__='__main__'
Why I think this is a good general way to solve the challenge
If you have something which can be sourced by multiple shells, it must be compatible. However (read the other answers), as there is no (easy to implement) portable way to detect the sourcing, you should change the rules.
By enforcing that the script must be executed by /bin/bash, you exactly do this.
This solves all cases but following in which case the script cannot run directly:
/bin/bash is not installed or is dysfunctional (e.g. in a boot environment)
If you pipe it to a shell like in curl https://example.com/script | $SHELL
(Note: This is only true if your bash is recent enough. This recipe is reported to fail for certain variants. So be sure to check it works for your case.)
However, I cannot think of any real reason where you need that and also the ability to source the exact same script in parallel! Usually you can wrap it to execute main by hand, like this:
$SHELL -c '. script && main'
{ curl https://example.com/script && echo && echo main; } | $SHELL
$SHELL -c 'eval "`curl https://example.com/script`" && main'
echo 'eval "`curl https://example.com/script`" && main' | $SHELL
Notes
This answer would not have been possible without the help of all the other answers! Even the wrong ones - which initially made me posting this.
Update: Edited due to the new discoveries found in https://stackoverflow.com/a/28776166/490291
This works later on in the script and doesn't depend on the _ variable:
## Check to make sure it is not sourced:
Prog=myscript.sh
if [ $(basename $0) = $Prog ]; then
exit 1 # not sourced
fi
or
[ $(basename $0) = $Prog ] && exit
I will give a BASH-specific answer. Korn shell, sorry. Suppose your script name is include2.sh ; then make a function inside the include2.sh called am_I_sourced. Here's my demo version of include2.sh:
am_I_sourced()
{
if [ "${FUNCNAME[1]}" = source ]; then
if [ "$1" = -v ]; then
echo "I am being sourced, this filename is ${BASH_SOURCE[0]} and my caller script/shell name was $0"
fi
return 0
else
if [ "$1" = -v ]; then
echo "I am not being sourced, my script/shell name was $0"
fi
return 1
fi
}
if am_I_sourced -v; then
echo "Do something with sourced script"
else
echo "Do something with executed script"
fi
Now try to execute it in many ways:
~/toys/bash $ chmod a+x include2.sh
~/toys/bash $ ./include2.sh
I am not being sourced, my script/shell name was ./include2.sh
Do something with executed script
~/toys/bash $ bash ./include2.sh
I am not being sourced, my script/shell name was ./include2.sh
Do something with executed script
~/toys/bash $ . include2.sh
I am being sourced, this filename is include2.sh and my caller script/shell name was bash
Do something with sourced script
So this works without exception, and it is not using the brittle $_ stuff. This trick uses BASH's introspection facility, i.e. built-in variables FUNCNAME and BASH_SOURCE; see their documentation in bash manual page.
Only two caveats:
1) The call to am_I_sourced must take place in the sourced script, but not within any function, lest ${FUNCNAME[1]} returns something else. Yeah... you could have checked ${FUNCNAME[2]} -- but you'd just be making your life harder.
2) The function am_I_sourced must reside in the sourced script if you want to find out the name of the file being included.
I would like to suggest a small correction to Dennis' very helpful answer, to make it slightly more portable, I hope:
[ "$_" != "$0" ] && echo "Script is being sourced" || echo "Script is a subshell"
because [[ isn't recognized by the (somewhat anal retentive IMHO) Debian POSIX compatible shell, dash. Also, one may need the quotes to protect against filenames containing spaces, again in said shell.
The most beautiful way to detect if a Bash script is being executed or sourced (imported)
I really think this is the most beautiful way to do it:
From my if__name__==__main___check_if_sourced_or_executed_best.sh file in my eRCaGuy_hello_world repo:
#!/usr/bin/env bash
main() {
echo "Running main."
# Add your main function code here
}
if [ "${BASH_SOURCE[0]}" = "$0" ]; then
# This script is being run.
__name__="__main__"
else
# This script is being sourced.
__name__="__source__"
fi
# Only run `main` if this script is being **run**, NOT sourced (imported)
if [ "$__name__" = "__main__" ]; then
echo "This script is being run."
main
else
echo "This script is being sourced."
fi
References:
See also my other answer here for additional details on the above technique, including showing the run output: What is the bash equivalent to Python's if __name__ == '__main__'?
This answer where I first learned about "${BASH_SOURCE[0]}" = "$0"
You can also explore the following alternatives if you like, but I prefer to use the code chunk above.
Important: Using the "${FUNCNAME[-1]}" technique does not properly handle nested scripts, where one script calls or sources another, whereas the if [ "${BASH_SOURCE[0]}" = "$0" ] technique does. That's another huge reason to use if [ "${BASH_SOURCE[0]}" = "$0" ] instead.
4 ways to determine whether a bash script is being sourced or executed
I have read a bunch of answers all over the place on this and a few other questions, and have come up with 4 ways I'd like to summarize and put in one place.
if __name__ == "__main__":
See: What does if __name__ == "__main__": do? for what that does in Python.
You can see a full demonstration of all 4 techniques below in my check_if_sourced_or_executed.sh script in my eRCaGuy_hello_world repo.
You can see one of the techniques in-use in my advanced bash program with help menu, argument parsing, main function, automatic execute vs source detection (akin to if __name__ == "__main__": in Python), etc, see my demo/template program in this list here. It is currently called argument_parsing__3_advanced__gen_prog_template.sh, but if that name changes in the future I'll update it in the list at the link just above
Anyway, here are the 4 Bash techniques:
Technique 1 (can be placed anywhere; handles nested scripts):
See: https://unix.stackexchange.com/questions/424492/how-to-define-a-shell-script-to-be-sourced-not-run/424495#424495
if [ "${BASH_SOURCE[0]}" -ef "$0" ]; then
echo " This script is being EXECUTED."
run="true"
else
echo " This script is being SOURCED."
fi
Technique 2 [My favorite technique] (can be placed anywhere; handles nested scripts):
See this type of technique in-use in my most-advanced bash demo script yet, here: argument_parsing__3_advanced__gen_prog_template.sh, near the bottom.
Modified from: What is the bash equivalent to Python's `if __name__ == '__main__'`?
if [ "${BASH_SOURCE[0]}" == "$0" ]; then
echo " This script is being EXECUTED."
run="true"
else
echo " This script is being SOURCED."
fi
Technique 3 (requires another line which MUST be outside all functions):
Modified from: How to detect if a script is being sourced
# A. Place this line OUTSIDE all functions:
(return 0 2>/dev/null) && script_is_being_executed="false" || script_is_being_executed="true"
# B. Place these lines anywhere
if [ "$script_is_being_executed" == "true" ]; then
echo " This script is being EXECUTED."
run="true"
else
echo " This script is being SOURCED."
fi
Technique 4 [Limitation: does not handle nested scripts!] (MUST be inside a function):
Modified from: How to detect if a script is being sourced
and Unix & Linux: How to define a shell script to be sourced not run.
if [ "${FUNCNAME[-1]}" == "main" ]; then
echo " This script is being EXECUTED."
run="true"
elif [ "${FUNCNAME[-1]}" == "source" ]; then
echo " This script is being SOURCED."
else
echo " ERROR: THIS TECHNIQUE IS BROKEN"
fi
This is where I first learned about the ${FUNCNAME[-1]} trick: @mr.spuratic: How to detect if a script is being sourced - he learned it from Dennis Williamson apparently.
See also:
[my answer] What is the bash equivalent to Python's if __name__ == '__main__'?
[my answer] Unix & Linux: How to define a shell script to be sourced not run
$_ is quite brittle. You have to check it as the first thing you do in the script. And even then, it is not guaranteed to contain the name of your shell (if sourced) or the name of the script (if executed).
For example, if the user has set BASH_ENV, then at the top of a script, $_ contains the name of the last command executed in the BASH_ENV script.
The best way I have found is to use $0 like this:
name="myscript.sh"
main()
{
echo "Script was executed, running main..."
}
case "$0" in *$name)
main "$@"
;;
esac
Unfortunately, this way doesn't work out of the box in zsh due to the functionargzero option doing more than its name suggests, and being on by default.
To work around this, I put unsetopt functionargzero in my .zshenv.
Not exactly what the OP wanted, but I often find myself needing to source a script just to load its functions (i.e. as a library). For example, for benchmarking or testing purposes.
Here's a design that works in all shells (including POSIX):
Wrap all your top-level actions in a run_main() function.
Have your sourced script check for an initial --no-run argument which doesn't perform any actions; without --no-run, it can call run_main.
source the script using:
set -- --no-run "$#"
. script.sh
shift
The problem with . or source is that it's impossible to pass arguments to the script portably. POSIX shells ignore arguments to . and pass the caller's "$@" no matter what.
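A minimal sketch of what the library script itself might look like under this convention (run_main and the --no-run flag are just the names described above; greet is a placeholder function):
#!/bin/sh
# Library part: function definitions only.
greet() { printf 'hello from %s\n' "$1"; }

run_main() {
    # Top-level actions live here, not in the script body.
    greet "library"
}

# When sourced with the --no-run convention, skip the actions.
if [ "${1:-}" = "--no-run" ]; then
    shift
else
    run_main "$@"
fi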
I don't think there is any portable way to do this in both ksh and bash. In bash you could detect it using caller output, but I don't think an equivalent exists in ksh.
I followed mklement0's compact expression.
It's neat, but I noticed that it can fail in the case of ksh when invoked like this:
/bin/ksh -c ./myscript.sh
(it thinks it's sourced and it's not because it executes a subshell)
But the expression will work to detect this:
/bin/ksh ./myscript.sh
Also, even if the expression is compact, the syntax is not compatible with all shells.
So I ended up with the following code, which works for bash, zsh, dash and ksh:
SOURCED=0
if [ -n "$ZSH_EVAL_CONTEXT" ]; then
[[ $ZSH_EVAL_CONTEXT =~ :file$ ]] && SOURCED=1
elif [ -n "$KSH_VERSION" ]; then
[[ "$(cd $(dirname -- $0) && pwd -P)/$(basename -- $0)" != "$(cd $(dirname -- ${.sh.file}) && pwd -P)/$(basename -- ${.sh.file})" ]] && SOURCED=1
elif [ -n "$BASH_VERSION" ]; then
[[ $0 != "$BASH_SOURCE" ]] && SOURCED=1
elif grep -q dash /proc/$$/cmdline; then
case $0 in *dash*) SOURCED=1 ;; esac
fi
Feel free to add exotic shells support :)
The fix for this issue is not to write code that needs to know such a thing in order to behave correctly. And the way to do that is to put the code into a function, and not into the mainline of a script that needs to be sourced.
Code inside a function can just return 0 or return 1. This terminates just the function, so that control returns to whatever invoked the function.
This works whether the function is called from the mainline of a sourced script, from the mainline of a top-level script, or from another function.
Use sourcing to bring in "library" scripts that only define functions and perhaps variables, but don't actually execute any other top-level commands:
. path/to/lib.sh # defines libfunction
libfunction arg
or else:
path/to/script.sh arg # call script as a child process
and not:
. path/to/script.sh arg # shell programming anti-pattern
A small addition to @mklement0's answer. This is the custom function I used in my script to determine whether it is sourced or not:
replace_shell(){
if [ -n "$ZSH_EVAL_CONTEXT" ]; then
case $ZSH_EVAL_CONTEXT in *:file*) echo "Zsh is sourced";; esac
else
case ${0##*/} in sh|dash|bash) echo "Bash is sourced";; esac
fi
}
In a function, the output of "$ZSH_EVAL_CONTEXT" for zsh is toplevel:file:shfunc and not just toplevel:file during sourcing; thus, *:file* should fix this issue.
I needed a one-liner that works on [mac, linux] with bash.version >= 3 and none of these answers fit the bill.
[[ ${BASH_SOURCE[0]} = $0 ]] && main "$@"
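In context, that line typically sits at the very bottom of a script that defines main first (a sketch; main's body is just a placeholder):
#!/usr/bin/env bash
main() {
    echo "running as a standalone script with args: $*"
}
[[ ${BASH_SOURCE[0]} = $0 ]] && main "$@"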
Straight to the point: you must evaluate whether the variable "$0" is equal to the name of your shell.
Like this:
#!/bin/bash
echo "First Parameter: $0"
echo
if [[ "$0" == "bash" ]] ; then
echo "The script was sourced."
else
echo "The script WAS NOT sourced."
fi
Via SHELL:
$ bash check_source.sh
First Parameter: check_source.sh
The script WAS NOT sourced.
Via SOURCE:
$ source check_source.sh
First Parameter: bash
The script was sourced.
It's pretty hard to have a 100% portable way of detecting if a script was sourced or not.
In my experience (7 years with shell scripting), the only safe way (not relying on environment variables with PIDs and so on, which is not safe because those values are VARIABLE) is to:
extend the possibilities from your if
using switch/case, if you want to.
Both options cannot be auto scaled, but it is the safer way.
For example:
when you source a script via an SSH session, the value returned by the variable "$0" (when using source), is -bash.
#!/bin/bash
echo "First Parameter: $0"
echo
if [[ "$0" == "bash" || "$0" == "-bash" ]] ; then
echo "The script was sourced."
else
echo "The script WAS NOT sourced."
fi
OR
#!/bin/bash
echo "First Parameter: $0"
echo
if [[ "$0" == "bash" ]] ; then
echo "The script was sourced."
elif [[ "$0" == "-bash" ]] ; then
echo "The script was sourced via SSH session."
else
echo "The script WAS NOT sourced."
fi
I ended up with checking [[ $_ == "$(type -p "$0")" ]]
if [[ $_ == "$(type -p "$0")" ]]; then
echo I am invoked from a sub shell
else
echo I am invoked from a source command
fi
When using curl ... | bash -s -- ARGS to run a remote script on the fly, $0 will be just bash instead of the normal /bin/bash you get when running an actual script file, so I use type -p "$0" to show the full path of bash.
test:
curl -sSL https://github.com/jjqq2013/bash-scripts/raw/master/common/relpath | bash -s -- /a/b/c/d/e /a/b/CC/DD/EE
source <(curl -sSL https://github.com/jjqq2013/bash-scripts/raw/master/common/relpath)
relpath /a/b/c/d/e /a/b/CC/DD/EE
wget https://github.com/jjqq2013/bash-scripts/raw/master/common/relpath
chmod +x relpath
./relpath /a/b/c/d/e /a/b/CC/DD/EE
This is a spin off from some other answers, regarding "universal" cross shell support. This is admittedly very similar to https://stackoverflow.com/a/2942183/3220983 in particular, though slightly different. The weakness with this, is that a client script must respect how to use it (i.e. by exporting a variable first). The strength is that this is simple and should work "anywhere". Here's a template for your cut & paste pleasure:
# NOTE: This script may be used as a standalone executable, or callable library.
# To source this script, add the following *prior* to including it:
# export ENTRY_POINT="$0"
main()
{
echo "Running in direct executable context!"
}
if [ -z "${ENTRY_POINT}" ]; then main "$@"; fi
Note: I use export just to be sure this mechanism can be extended into sub-processes.
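The caller side of this convention would then look something like this (a sketch; lib.sh stands for whatever file contains the template above):
#!/usr/bin/env bash
export ENTRY_POINT="$0"   # signal to the library that it is being sourced
. ./lib.sh                # defines main but does not run it
main "extra" "args"       # call the library's functions explicitly when you want them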
Use a shebang line and check if it is being executed instead.
Your script should have a shebang line #!/path/to/shell saying what shell it should run in. Otherwise, you will have other cross-shell compatibility issues as well.
Therefore, you only need to check if it's being executed, by attempting a command that only works when the script is being sourced.
eg. For a Bash script:
#!/usr/bin/env bash
if (return 0 2>/dev/null); then
echo "Script was sourced."
fi
This method also works for zsh and sh just change the shebang line.

Customize "command not found" message in Bash

Is there someway to alter the Bash system error message template so that you can print something in addition to the original message? For example:
Macbook Air:~/Public]$ lfe
-bash: lfe: WTF command not found
or
Macbook Air:~/Public]$ lfe
-bash: lfe: #!&**! command not found
Since Bash 4.0, if the search for a command is unsuccessful, the shell searches for a function called command_not_found_handle. If it doesn't exist, Bash prints a message like this and exits with status 127:
$ foo
-bash: foo: command not found
$ echo $?
127
If it does exist, it is called with the command and its arguments as arguments, so if you have something like
command_not_found_handle () {
echo "It's my handle!"
echo "Arguments: $@"
}
in your .bashrc, Bash will react like this:
$ foo bar
It's my handle!
Arguments: foo bar
Most systems have something much more sophisticated in place, though. My Ubuntu, for example, has this in /etc/bash.bashrc:
# if the command-not-found package is installed, use it
if [ -x /usr/lib/command-not-found -o -x /usr/share/command-not-found/command-not-found ]; then
function command_not_found_handle {
# check because c-n-f could've been removed in the meantime
if [ -x /usr/lib/command-not-found ]; then
/usr/lib/command-not-found -- "$1"
return $?
elif [ -x /usr/share/command-not-found/command-not-found ]; then
/usr/share/command-not-found/command-not-found -- "$1"
return $?
else
printf "%s: command not found\n" "$1" >&2
return 127
fi
}
fi
and this is sourced from /etc/profile. /usr/lib/command-not-found is a Python script that uses some more Python (CommandNotFound) to basically look up packages that are named like the unknown command, or sound similar:
$ sl
The program 'sl' is currently not installed. You can install it by typing:
sudo apt install sl
$ sedd
No command 'sedd' found, did you mean:
Command 'sed' from package 'sed' (main)
Command 'seedd' from package 'bit-babbler' (universe)
Command 'send' from package 'nmh' (universe)
Command 'send' from package 'mailutils-mh' (universe)
sedd: command not found
So if you want simple customization, you can provide your own command_not_found_handle, and if you want to customize the existing system, you can modify the Python scripts.
But, as mentioned, this requires Bash 4.0 or higher.
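For the exact cosmetic change asked about (keep a message in the stock format, just with something extra in it), a minimal handler for your .bashrc could look like this (a sketch; the added text is of course up to you):
command_not_found_handle () {
    printf '%s: %s: WTF command not found\n' "${0##*/}" "$1" >&2
    return 127
}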
Maybe something like:
curl https://raw.githubusercontent.com/rcaloras/bash-preexec/master/bash-preexec.sh -o ~/.bash-preexec.sh
echo '[[ -f ~/.bash-preexec.sh ]] && source ~/.bash-preexec.sh' >> ~/.bashrc
then add the following to .bashrc too
preexec() { type "$1" >/dev/null 2>&1 || echo -n 'WTF??? '; }
reload your shell, then try enter some nonexistent command, like bububu
$ bububu
will print
WTF??? -bash: bububu: command not found
Important: read https://github.com/rcaloras/bash-preexec

Create shell sub commands by hierarchy

I'm trying to create a system for my scripts -
Each script will be located in a folder, which is the command itself.
The script itself will act as a sub-command.
For example, a script called "who" inside a directory called "git",
will allow me to run the script using git who in the command line.
Also, I would like to create a sub-command for a pseudo-command, meaning a command not currently available. E.g. some-arbitrary-command sub-command.
Is that somehow possible?
I thought of somehow extending https://github.com/basecamp/sub to accomplish the task.
EDIT 1
#!/usr/bin/env bash
command=`basename $0`
subcommand="$1"
case "$subcommand" in
"" | "-h" | "--help" )
echo "$command: Some description here" >&2
;;
* )
subcommand_path="$(command -v "$command-$subcommand" || true)"
if [[ -x "$subcommand_path" ]]; then
shift
exec "$subcommand_path" "${@}"
return $?
else
echo "$command: no such command \`$subcommand'" >&2
exit 1
fi
;;
esac
This is currently the script I run for new custom-made commands.
Since it's so generic, I just copy-paste it.
I still wonder though -
can it be generic enough to just recognize the folder name and create the script by its folder name?
One issue, though, is that it doesn't seem to override the default command name if it is supposed to replace it (e.g. git).
EDIT 2
After tinkering around a bit, this is what I eventually came to:
#!/usr/bin/env bash
COMMAND=`basename $0`
SUBCOMMAND="$1"
COMMAND_DIR="$HOME/.zsh/scripts/$COMMAND"
case "$SUBCOMMAND" in
"" | "-h" | "--help" )
cat "$COMMAND_DIR/help.txt" 2>/dev/null ||
command $COMMAND "${@}"
;;
* )
SUBCOMMAND_path="$(command -v "$COMMAND-$SUBCOMMAND" || true)"
if [[ -x "$SUBCOMMAND_path" ]]; then
shift
exec "$SUBCOMMAND_path" "${@}"
else
command $COMMAND "${@}"
fi
;;
esac
This is a generic script called "helper-sub" I symlink to all the script directories I have (E.g. ln -s $HOME/bin/helper-sub $HOME/bin/ssh).
In my zshrc I created this to call all the scripts:
#!/usr/bin/env bash
PATH=${PATH}:$(find $HOME/.zsh/scripts -type d | tr '\n' ':' | sed 's/:$//')
export PATH
typeset -U path
for aliasPath in `find $HOME/.zsh/scripts -type d`; do
aliasName=`echo $aliasPath | awk -F/ '{print $NF}'`
alias ${aliasName}=${aliasPath}/${aliasName}
done
unset aliasPath
Examples can be seen here: https://github.com/iwfmp/zsh/tree/master/scripts
You can't make a directory executable as a script, but you can create a wrapper that calls the scripts in the directory.
You can do this either with a function (in your profile script or a file in your FPATH) or with a wrapper script.
A simple function might look like:
git() {
local subPath='/path/to/your/git'
local sub="${1}" ; shift
if [[ -x "${subPath}/${sub}" ]]; then
"${subPath}/${sub}" "${@}"
return $?
else
printf '%s\n' "git: Unknown sub-command '${sub}'." >&2
return 1
fi
}
(This is the same way that the sub project you linked works, just simplified.)
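For example, assuming an executable script at /path/to/your/git/who (the placeholder path used above), running git who --stat would execute that script with --stat as its only argument.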
Of course, if you actually want to create a sub-command for git specifically (and that wasn't just an example), you'll need to make sure that the built-in git commands still work. In that case you could do something like this:
git() {
local subPath='/path/to/your/git'
local sub="${1}"
if [[ -x "${subPath}/${sub}" ]]; then
shift
"${subPath}/${sub}" "${@}"
return $?
else
command git "${@}"
return $?
fi
}
But it might be worth pointing out in that case that git supports adding arbitrary aliases via git config:
git config --global alias.who '!/path/to/your/git/who'
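With that alias defined, git who behaves like a built-in subcommand: Git runs the external script and passes any further arguments through to it.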

Check if a file is executable

I am wondering what's the easiest way to check if a program is executable with bash, without executing it? It should at least check whether the file has execute rights and is of the same architecture as the current system (for example, not a Windows executable or another unsupported architecture, not 64-bit if the system is 32-bit, ...).
Take a look at the various test operators (this is for the test command itself, but the built-in BASH and TCSH tests are more or less the same).
You'll notice that -x FILE says FILE exists and execute (or search) permission is granted.
BASH, Bourne, Ksh, Zsh Script
if [[ -x "$file" ]]
then
echo "File '$file' is executable"
else
echo "File '$file' is not executable or not found"
fi
TCSH or CSH Script:
if ( -x "$file" ) then
echo "File '$file' is executable"
else
echo "File '$file' is not executable or not found"
endif
To determine the type of file it is, try the file command. You can parse the output to see exactly what type of file it is. Word 'o Warning: Sometimes file will return more than one line. Here's what happens on my Mac:
$ file /bin/ls
/bin/ls: Mach-O universal binary with 2 architectures
/bin/ls (for architecture x86_64): Mach-O 64-bit executable x86_64
/bin/ls (for architecture i386): Mach-O executable i386
The file command returns different output depending upon the OS. However, the word executable will appear for executable programs, and usually the architecture will appear too.
Compare the above to what I get on my Linux box:
$ file /bin/ls
/bin/ls: ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV), for GNU/Linux 2.6.9, dynamically linked (uses shared libs), stripped
And a Solaris box:
$ file /bin/ls
/bin/ls: ELF 32-bit MSB executable SPARC Version 1, dynamically linked, stripped
In all three, you'll see the word executable and the architecture (x86-64, i386, or 32-bit SPARC).
Addendum
Thank you very much, that seems the way to go. Before I mark this as my answer, can you please guide me as to what kind of shell-script check I would have to perform (i.e., what kind of parsing) on the output of 'file' in order to check whether I can execute a program? If such a test is too difficult to make on a general basis, I would at least like to check whether it's a Linux executable or OS X (Mach-O).
Off the top of my head, you could do something like this in BASH:
if [ -x "$file" ] && file "$file" | grep -q "Mach-O"
then
echo "This is an executable Mac file"
elif [ -x "$file" ] && file "$file" | grep -q "GNU/Linux"
then
echo "This is an executable Linux File"
elif [ -x "$file" ] && file "$file" | grep -q "shell script"
then
echo "This is an executable Shell Script"
elif [ -x "$file" ]
then
echo "This file is merely marked executable, but what type is a mystery"
else
echo "This file isn't even marked as being executable"
fi
Basically, I'm running the test, then if that is successful, I do a grep on the output of the file command. The grep -q means don't print any output, but use the exit code of grep to see if I found the string. If your system's grep doesn't support -q, you can try grep "regex" > /dev/null 2>&1.
Again, the output of the file command may vary from system to system, so you'll have to verify that these will work on your system. Also, I'm checking the executable bit. If a file is a binary executable, but the executable bit isn't on, I'll say it's not executable. This may not be what you want.
It seems nobody noticed that the -x operator does not distinguish a regular file from a directory.
So to precisely check for an executable regular file, you may use
[[ -f SomeFile && -x SomeFile ]]
Testing files, directories and symlinks
The solutions given here fail on either directories or symlinks (or both). On Linux, you can test files, directories and symlinks with:
if [[ -f "$file" && -x "$(realpath "$file")" ]]; then ...; fi
On OS X, you should be able to install coreutils with homebrew and use grealpath.
Defining an isexec function
You can define a function for convenience:
isexec() {
if [[ -f "$1" && -x $(realpath "$1") ]]; then
true;
else
false;
fi;
}
Or simply
isexec() { [[ -f "$1" && -x $(realpath "$1") ]]; }
Then you can test using:
if isexec "$file"; then ...; fi
It also seems nobody noticed how the -x operator behaves on symlinks: it follows the link, so a symlink (or chain of symlinks) pointing to a regular file that is not executable fails the test.
First you need to remember that in Unix and Linux, everything is a file, even directories. For a file to have the rights to be executed as a command, it needs to satisfy 3 conditions:
It needs to be a regular file
It needs to have read-permissions
It needs to have execute-permissions
So this can be done simply with:
[ -f "${file}" ] && [ -r "${file}" ] && [ -x "${file}" ]
If your file is a symbolic link to a regular file, the test command will operate on the target and not the link-name. So the above command distinguishes if a file can be used as a command or not. So there is no need to pass the file first to realpath or readlink or any of those variants.
If the file can be executed on the current OS, that is a different question. Some answers above already pointed to some possibilities for that, so there is no need to repeat it here.
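Wrapped up as a helper (the function name here is my own), a minimal sketch looks like this:
# True if "$1" can be used as a command: a readable, executable regular file.
# test follows symlinks, so a link to such a file passes as well.
can_run_as_command() {
    [ -f "$1" ] && [ -r "$1" ] && [ -x "$1" ]
}
can_run_as_command /bin/ls && echo "/bin/ls is usable as a command"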
To test whether a file itself has the ACL_EXECUTE bit set in any of its permission sets (user, group, others), regardless of where it resides (i.e. even on a tmpfs mounted with the noexec option), use stat -c '%A' to get the permission string and then check whether it contains at least a single "x" letter:
if [[ "$(stat -c '%A' 'my_exec_file')" == *'x'* ]] ; then
echo 'Has executable permission for someone'
fi
The right-hand side of the comparison may be modified to fit more specific cases, such as *x*x*x* to check whether all kinds of users are able to execute the file when it is placed on a volume mounted with the exec option.
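For instance, the stricter variant just mentioned would look like this (the file name is only a placeholder):
# Require the execute bit in all three permission sets (user, group, others).
if [[ "$(stat -c '%A' 'my_exec_file')" == *x*x*x* ]]; then
    echo 'Everyone has execute permission'
fi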
This might not be obvious, but sometimes it is necessary to test whether a file is a native executable in order to call it appropriately, without spawning an external shell process:
function tkl_is_file_os_exec()
{
[[ ! -x "$1" ]] && return 255
local exec_header_bytes
case "$OSTYPE" in
cygwin* | msys* | mingw*)
# CAUTION:
# The bash version 3.2+ might require a file path together with the extension,
# otherwise will throw the error: `bash: ...: No such file or directory`.
# So we make a guess to avoid the error.
#
{
read -r -n 4 exec_header_bytes 2> /dev/null < "$1" ||
{
[[ -x "${1%.exe}.exe" ]] && read -r -n 4 exec_header_bytes 2> /dev/null < "${1%.exe}.exe"
} ||
{
[[ -x "${1%.com}.com" ]] && read -r -n 4 exec_header_bytes 2> /dev/null < "${1%.com}.com"
}
} &&
if [[ "${exec_header_bytes:0:3}" == $'MZ\x90' ]]; then
# $'MZ\x90\00' for bash version 3.2.42+
# $'MZ\x90\03' for bash version 4.0+
[[ "${exec_header_bytes:3:1}" == $'\x00' || "${exec_header_bytes:3:1}" == $'\x03' ]] && return 0
fi
;;
*)
read -r -n 4 exec_header_bytes < "$1"
[[ "$exec_header_bytes" == $'\x7fELF' ]] && return 0
;;
esac
return 1
}
# executes script in the shell process in case of a shell script, otherwise executes as usual
function tkl_exec_inproc()
{
if tkl_is_file_os_exec "$1"; then
"$@"
else
. "$@"
fi
return $?
}
myscript.sh:
#!/bin/bash
echo 123
return 123
In Cygwin:
> tkl_exec_inproc /cygdrive/c/Windows/system32/cmd.exe /c 'echo 123'
123
> tkl_exec_inproc /cygdrive/c/Windows/system32/chcp.com 65001
Active code page: 65001
> tkl_exec_inproc ./myscript.sh
123
> echo $?
123
In Linux:
> tkl_exec_inproc /bin/bash -c 'echo 123'
123
> tkl_exec_inproc ./myscript.sh
123
> echo $?
123

Detect if executable file is on user's PATH [duplicate]

This question already has answers here:
How can I check if a program exists from a Bash script?
In a bash script, I need to determine whether an executable named foo is on the PATH.
You could also use the Bash builtin type -P:
help type
cmd=ls
[[ $(type -P "$cmd") ]] && echo "$cmd is in PATH" ||
{ echo "$cmd is NOT in PATH" 1>&2; exit 1; }
You can use which:
path_to_executable=$(which name_of_executable)
if [ -x "$path_to_executable" ] ; then
echo "It's here: $path_to_executable"
fi
TL;DR:
In bash:
function is_bin_in_path {
builtin type -P "$1" &> /dev/null
}
Example usage of is_bin_in_path:
% is_bin_in_path ls && echo "found in path" || echo "not in path"
found in path
In zsh:
Use whence -p instead.
For a version that works in both {ba,z}sh:
# True if $1 is an executable in $PATH
# Works in both {ba,z}sh
function is_bin_in_path {
if [[ -n $ZSH_VERSION ]]; then
builtin whence -p "$1" &> /dev/null
else # bash:
builtin type -P "$1" &> /dev/null
fi
}
To test that ALL given commands are executables in $PATH:
# True iff all arguments are executable in $PATH
function is_bin_in_path {
if [[ -n $ZSH_VERSION ]]; then
builtin whence -p "$1" &> /dev/null
else # bash:
builtin type -P "$1" &> /dev/null
fi
[[ $? -ne 0 ]] && return 1
if [[ $# -gt 1 ]]; then
shift # We've just checked the first one
is_bin_in_path "$@"
fi
}
Example usage:
is_bin_in_path ssh-agent ssh-add && setup_ssh_agent
Non-solutions to avoid
This is not a short answer because the solution must correctly handle:
Functions
Aliases
Builtin commands
Reserved words
Examples which fail with plain type (note the token after type changes):
$ alias foo=ls
$ type foo && echo "in path" || echo "not in path"
foo is aliased to `ls'
in path
$ type type && echo "in path" || echo "not in path"
type is a shell builtin
in path
$ type if && echo "in path" || echo "not in path"
if is a shell keyword
in path
Note that in bash, which is not a shell builtin (it is in zsh):
$ PATH=/bin
$ builtin type which
which is /bin/which
This answer says why to avoid using which:
Avoid which. Not only is it an external process you're launching for doing very little (meaning builtins like hash, type or command are way cheaper), you can also rely on the builtins to actually do what you want, while the effects of external commands can easily vary from system to system.
Why care?
Many operating systems have a which that doesn't even set an exit status, meaning the if which foo won't even work there and will always report that foo exists, even if it doesn't (note that some POSIX shells appear to do this for hash too).
Many operating systems make which do custom and evil stuff like change the output or even hook into the package manager.
In this case, also avoid command -v
The answer I just quoted from suggests using command -v, however this doesn't apply to the current "is the executable in $PATH?" scenario: it will fail in exactly the ways I've illustrated with plain type above.
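For instance, in an interactive bash session (the alias name is arbitrary):
$ alias foo=ls
$ command -v foo && echo "in path" || echo "not in path"
alias foo='ls'
in path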
Correct solutions
In bash we need to use type -P:
-P force a PATH search for each NAME, even if it is an alias,
builtin, or function, and returns the name of the disk file
that would be executed
In zsh we need to use whence -p:
-p Do a path search for name even if it is an alias,
reserved word, shell function or builtin.
You can use the command builtin, which is POSIX compatible:
if [ -x "$(command -v "$cmd")" ]; then
echo "$cmd is in \$PATH"
fi
The executable check is needed because command -v detects functions and aliases as well as executables.
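A quick way to see this (the function name is just an illustration):
# command -v happily reports the function, but its output is not a file path.
myfunc() { :; }
command -v myfunc                      # prints "myfunc", exit status 0
[ -x "$(command -v myfunc)" ] || echo "myfunc is not an executable file in \$PATH"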
In Bash, you can also use type with the -P option, which forces a PATH search:
if type -P "$cmd" &>/dev/null; then
echo "$cmd is in \$PATH"
fi
As already mentioned in the comments, avoid which as it requires launching an external process and might give you incorrect output in some cases.
if command -v foo ; then foo ; else echo "foo unavailable" ; fi
Use which
$ which myprogram
We can define a function for checking whether an executable exists by using which:
function is_executable() {
which "$@" &> /dev/null
}
The function is called just like you would call an executable. "$@" ensures that which gets exactly the same arguments as are given to the function.
&> /dev/null ensures that whatever which writes to stdout or stderr is redirected to the null device (a special device that discards the data written to it) rather than being written to the function's stdout or stderr.
Since the function doesn't explicitly return with a return code, the exit code of the last executed command (in this case which) becomes the return code of the function. which exits with a success code if it finds the executable specified by the argument, and with a failure code otherwise, and is_executable automatically replicates that behavior.
We can then use that function to conditionally do something:
if is_executable name_of_executable; then
echo "name_of_executable was found"
else
echo "name_of_executable was NOT found"
fi
Here, if executes the command(s) written between it and then—which in our case is is_executable name_of_executable—and chooses the branch to execute based on the return code of the command(s).
Alternatively, we can skip defining the function and use which directly in the if-statement:
if which name_of_executable &> /dev/null; then
echo "name_of_executable was found"
else
echo "name_of_executable was NOT found"
fi
However, I think this makes the code slightly less readable.

Resources