Optimizing a script that lists available commands with manual pages - bash

I'm using this script to generate a list of the available commands with manual pages on the system. Running this with time shows an average of about 49 seconds on my computer.
#!/usr/local/bin/bash
for x in $(for f in $(compgen -c); do which $f; done | sort -u); do
    dir=$(dirname $x)
    cmd=$(basename $x)
    if [[ ! $(man --path "$cmd" 2>&1) =~ 'No manual entry' ]]; then
        printf '%b\n' "${dir}:\n${cmd}"
    fi
done | awk '!x[$0]++'
Is there a way to optimize this for faster results?
This is a small sample of my current output. The goal is to group commands by directory. This will later be fed into an array (a sketch of that step follows the sample).
/bin: # directories generated by $dir
[ # commands generated by $cmd (compgen output)
cat
chmod
cp
csh
date
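For what it's worth, here is a minimal sketch of that later step, reading the output above into a Bash associative array (the name list-commands.sh and the array layout are illustrative, not from the question):

#!/usr/local/bin/bash
# Sketch: map each directory header ("/bin:") to its commands.
declare -A cmds_by_dir
dir=
while IFS= read -r line; do
    if [[ $line == *: ]]; then
        dir=${line%:}                 # a header line starts a new group
    else
        cmds_by_dir[$dir]+=" $line"   # append the command to its directory
    fi
done < <(./list-commands.sh)          # hypothetical name for the script above
printf '%s\n' "${cmds_by_dir[/bin]}"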

Going for a complete disregard of built-ins here. That's what which does, anyway. Script not thoroughly tested.
#!/bin/bash
shopt -s nullglob # need this for "empty" checks below
MANPATH=${MANPATH:-/usr/share/man:/usr/local/share/man}
IFS=: # chunk up PATH and MANPATH, both colon-delimited
# just look at the directory!
has_man_p() {
    local needle=$1 manp manpp result=()
    for manp in $MANPATH; do
        # man? should match man0..man9 and a bunch of single-char things
        # do we need 'man?*' for longer suffixes here?
        for manpp in "$manp"/man?; do
            # assumption made for filename formats. section not checked.
            result=("$manpp/$needle".*)
            if (( ${#result[@]} > 0 )); then
                return 0
            fi
        done
    done
    return 1
}
unset seen
declare -A seen # for deduplication
for p in $PATH; do
    printf '%b:\n' "$p" # print the path first
    for exe in "$p"/*; do
        cmd=${exe##*/} # the sloppy basename
        if [[ ! -x $exe || ${seen[$cmd]} == 1 ]]; then
            continue
        fi
        seen["$cmd"]=1
        if has_man_p "$cmd"; then
            printf '%b\n' "$cmd"
        fi
    done
done
Time on Cygwin with a truncated PATH (the full one with Windows has too many misses for the original version):
$ export PATH=/usr/local/bin:/usr/bin
$ time (sh ./opti.sh &>/dev/null)
real 0m3.577s
user 0m0.843s
sys 0m2.671s
$ time (sh ./orig.sh &>/dev/null)
real 2m10.662s
user 0m20.138s
sys 1m5.728s
(Caveat for both versions: most stuff in Cygwin's /usr/bin comes with a .exe extension)

Test -d directory true - subdirectory false (POSIX)

I'm trying to print all directories/subdirectories from a given start directory.
for i in $(ls -A -R -p); do
    if [ -d "$i" ]; then
        printf "%s/%s \n" "$PWD" "$i"
    fi
done
This script returns all of the directories found in the . directory and all of the files in that directory, but for some reason the test fails for subdirectories. All of the directories end up in $i and the output looks exactly the same.
Let's say I have the following structure:
foo/bar/test
echo $i prints
foo/
bar/
test/
While the contents of the folders are listed like this:
./foo:
file1
file2
./bar:
file1
file2
However the test statement just prints:
PWD/TO/THIS/DIRECTORY/foo
For some reason it returns true for the first level directories, but false for all of the subdirectories.
(ls is probably not a good way of doing this and I would be glad for a find statement that solves all of my issues, but first I want to know why this script doesn't work the way you'd think.)
As pointed out in the comments, the issue is that the directory names include a :, so -d is false.
I guess that this command gives you the output you want (although it requires Bash):
# enable globstar for **
# disabled in non-interactive shell (e.g. a script)
shopt -s globstar
# print each path ending in a / (all directories)
# ** expands recursively
printf '%s\n' **/*/
The standard way would be either to do the recursion yourself, or to use find:
find . -type d
Consider your output:
dir1:
dir1a
Now, the following will be true:
[ -d dir1/dir1a ]
but that's not what your code does; instead, it runs:
[ -d dir1a ]
To avoid this, don't attempt to parse ls; if you want to implement recursion in baseline POSIX sh, do it yourself:
callForEachEntry() {
    # because calling this without any command provided would try to execute all found files
    # as commands, checking for safe/correct invocation is essential.
    if [ "$#" -lt 2 ]; then
        echo "Usage: callForEachEntry starting-directory command-name [arg1 arg2...]" >&2
        echo "  ...calls command-name once for each file recursively found" >&2
        return 1
    fi
    # try to declare variables local, swallow/hide error messages if this fails; code is
    # defensively written to avoid breaking if recursion changes either, but may be faulty if
    # the command passed as an argument modifies "dir" or "entry" variables.
    local dir entry 2>/dev/null || : "not strict POSIX, but available in dash"
    dir=$1; shift
    for entry in "$dir"/*; do
        # skip if the glob matched nothing
        [ -e "$entry" ] || [ -L "$entry" ] || continue
        # invoke user-provided callback for the entry we found
        "$@" "$entry"
        # recurse last, in case we're on a baseline platform where the "local" above failed
        if [ -d "$entry" ]; then
            callForEachEntry "$entry" "$@"
        fi
    done
}
# call printf '%s\n' for each file we recursively find; replace this with the code you
# actually want to call, wrapped in a function if appropriate.
callForEachEntry "$PWD" printf '%s\n'
find can also be used safely, but not as a drop-in replacement for the way ls was used in the original code -- for dir in $(find . -type d) is just as buggy. Instead, see the "Complex Actions" and "Actions In Bulk" sections of Using Find; two safe patterns follow.
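For reference (sketches, not drop-in replacements for the code above):

# Let find itself invoke the command, in batches:
find . -type d -exec printf '%s\n' {} +

# Or, in bash, read NUL-delimited names so any filename is handled safely:
while IFS= read -r -d '' dir; do
    printf 'found: %s\n' "$dir"
done < <(find . -type d -print0)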

How to find latest modified files and delete them with SHELL code

I need some help with a shell code. Now I have this code:
find $dirname -type f -exec md5sum '{}' ';' | sort | uniq --all-repeated=separate -w 33 | cut -c 35-
This code finds duplicated files (with same content) in a given directory. What I need to do is to update it - find out latest (by date) modified file (from duplicated files list), print that file name and also give opportunity to delete that file in terminal.
Doing this in pure bash is a tad awkward; it would be a lot easier to write this in Perl or Python.
Also, if you were looking to do this with a bash one-liner, it might be feasible,
but I really don't know how.
Anyhoo, if you really want a pure bash solution below is an attempt at doing
what you describe.
Please note that:
I am not actually calling rm, just echoing it - don't want to destroy your files
There's a "read -u 1" in there that I'm not entirely happy with.
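(A common alternative, untested against this script, is to read from the controlling terminal instead, which sidesteps stdin being consumed by the pipeline:)

read -p "Do you wish to delete the last file $lastFile (y/n/q)? " answer < /dev/tty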
Here's the code:
#!/bin/bash

buffer=''
function process {
    if test -n "$buffer"
    then
        nbFiles=$(printf "%s" "$buffer" | wc -l)
        echo "================================================================================="
        echo "The following $nbFiles files are byte identical and sorted from oldest to newest:"
        ls -lt -c -r $buffer
        lastFile=$(ls -lt -c -r $buffer | tail -1)
        echo
        while true
        do
            read -u 1 -p "Do you wish to delete the last file $lastFile (y/n/q)? " answer
            case $answer in
                [Yy]* ) echo rm $lastFile; break;;
                [Nn]* ) echo skipping; break;;
                [Qq]* ) exit;;
                * ) echo "please answer yes, no or quit";;
            esac
        done
        echo
    fi
}

find . -type f -exec md5sum '{}' ';' |
sort |
uniq --all-repeated=separate -w 33 |
cut -c 35- |
while read -r line
do
    if test -z "$line"
    then
        process
        buffer=''
    else
        buffer=$(printf "%s\n%s" "$buffer" "$line")
    fi
done
process
echo "done"
Here's a "naive" solution implemented in bash (except for two external commands: md5sum, of course, and stat used only for user's comfort, it's not part of the algorithm). The thing implements a 100% Bash quicksort (that I'm kind of proud of):
#!/bin/bash

# Finds similar (based on md5sum) files (recursively) in given
# directory. If several files with same md5sum are found, sort
# them by modified (most recent first) and prompt user for deletion
# of the oldest

die() {
    printf >&2 '%s\n' "$@"
    exit 1
}

quicksort_files_by_mod_date() {
    if ((!$#)); then
        qs_ret=()
        return
    fi
    # the return array is qs_ret
    local first=$1
    shift
    local newers=()
    local olders=()
    qs_ret=()
    for i in "$@"; do
        if [[ $i -nt $first ]]; then
            newers+=( "$i" )
        else
            olders+=( "$i" )
        fi
    done
    quicksort_files_by_mod_date "${newers[@]}"
    newers=( "${qs_ret[@]}" )
    quicksort_files_by_mod_date "${olders[@]}"
    olders=( "${qs_ret[@]}" )
    qs_ret=( "${newers[@]}" "$first" "${olders[@]}" )
}

[[ -n $1 ]] || die "Must give an argument"
[[ -d $1 ]] || die "Argument must be a directory"

dirname=$1

shopt -s nullglob
shopt -s globstar

declare -A files
declare -A hashes

for file in "$dirname"/**; do
    [[ -f $file ]] || continue
    read md5sum _ < <(md5sum -- "$file")
    files[$file]=$md5sum
    ((hashes[$md5sum]+=1))
done

has_found=0
for hash in "${!hashes[@]}"; do
    ((hashes[$hash]>1)) || continue
    files_with_same_md5sum=()
    for file in "${!files[@]}"; do
        [[ ${files[$file]} = $hash ]] || continue
        files_with_same_md5sum+=( "$file" )
    done
    has_found=1
    echo "Found ${hashes[$hash]} files with md5sum=$hash, sorted by modified (most recent first):"
    # sort them by modified date (using quicksort :p)
    quicksort_files_by_mod_date "${files_with_same_md5sum[@]}"
    for file in "${qs_ret[@]}"; do
        printf " %s %s\n" "$(stat --printf '%y' -- "$file")" "$file"
    done
    read -p "Do you want to remove the oldest? [yn] " answer
    if [[ ${answer,,} = y ]]; then
        echo rm -fv -- "${qs_ret[@]:1}"
    fi
done

if ((!has_found)); then
    echo "Didn't find any similar files in directory \`$dirname'. Yay."
fi
I guess the script is self-explanatory (you can read it like a story). It uses the best practices I know of, and is 100% safe regarding any silly characters in file names (e.g., spaces, newlines, file names starting with hyphens, file names ending with a newline, etc.).
It uses bash's globs, so it might be a bit slow if you have a bloated directory tree.
There are a few error checks, but many are missing, so don't use it as-is in production! (It's a trivial but rather tedious task to add them.)
The algorithm is as follows: scan each file in the given directory tree; for each file, compute its md5sum and store it in two associative arrays:
files, with keys the file names and values the md5sums.
hashes, with keys the hashes and values the number of files having that md5sum.
After this is done, we scan through all the found md5sums, select only the ones that correspond to more than one file, then select all files with this md5sum, then quicksort them by modified date, and prompt the user.
A sweet effect when no dups are found: the script nicely informs the user about it.
I would not say it's the most efficient way of doing things (might be better in, e.g., Perl), but it's really a lot of fun, surprisingly easy to read and follow, and you can potentially learn a lot by studying it!
It uses a few bashisms and features that are only in Bash version ≥ 4.
Hope this helps!
Remark. If on your system date has the -r switch, you can replace the stat command by:
date -r "$file"
Remark. I left the echo in front of rm. Remove it if you're happy with how the script behaves. Then you'll have a script that uses 3 external commands :).

Can i cache the output of a command on Linux from CLI?

I'm looking for an implementation of a 'cacheme' command, which 'memoizes' the output (stdout) of whatever it has in ARGV. If it has never run the command, it will run it and memorize the output. If it has run it before, it will just copy the cached output (or, even better, replay both output and error to &1 and &2 respectively).
Let's suppose someone wrote this command, it would work like this.
$ time cacheme sleep 1 # first time it takes one sec
real 0m1.228s
user 0m0.140s
sys 0m0.040s
$ time cacheme sleep 1 # second time it looks for stdout in the cache (dflt expires in 1h)
#DEBUG# Cache version found! (1 minute old)
real 0m0.100s
user 0m0.100s
sys 0m0.040s
This example is a bit silly because it has no output. Ideally it would be tested on a script like sleep-1-and-echo-hello-world.sh.
I created a small script that creates a file in /tmp/ with hash of full command name and username, but I'm pretty sure something already exists.
Are you aware of any of this?
Note. Why would I do this? Occasionally I run commands that are network- or compute-intensive; they take minutes to run and the output doesn't change much. If I know this in advance, I'd just prepend cacheme <cmd>, go for dinner, and when I'm back I can rerun the SAME command over and over on the same machine and get the same answer in an instant.
Improved solution above somewhat by also adding expiry age as optional argument.
#!/bin/sh
# save as e.g. $HOME/.local/bin/cacheme
# and then chmod u+x $HOME/.local/bin/cacheme
VERBOSE=false
PROG="$(basename $0)"
DIR="${HOME}/.cache/${PROG}"
mkdir -p "${DIR}"
EXPIRY=600 # default to 10 minutes
# check if first argument is a number, if so use it as expiration (seconds)
[ "$1" -eq "$1" ] 2>/dev/null && EXPIRY=$1 && shift
[ "$VERBOSE" = true ] && echo "Using expiration $EXPIRY seconds"
CMD="$@"
HASH=$(echo "$CMD" | md5sum | awk '{print $1}')
CACHE="$DIR/$HASH"
test -f "${CACHE}" && [ $(expr $(date +%s) - $(date -r "$CACHE" +%s)) -le $EXPIRY ] || eval "$CMD" > "${CACHE}"
cat "${CACHE}"
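Assuming the script is saved on PATH as cacheme (per the comments above), usage looks like this; the commands shown are just examples:

cacheme date                          # cached with the default 600-second expiry
cacheme 30 date                       # a leading number overrides the expiry (here 30 seconds)
cacheme curl -s https://example.com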
I've implemented a simple caching script for bash, because I wanted to speed up plotting from piped shell command in gnuplot. It can be used to cache output of any command. Cache is used as long as the arguments are the same and files passed in arguments haven't changed. System is responsible for cleaning up.
#!/bin/bash

# hash all arguments
KEY="$@"

# hash last modified dates of any files
for arg in "$@"
do
    if [ -f "$arg" ]
    then
        KEY+=`date -r "$arg" +\ %s`
    fi
done

# use the hash as a name for temporary file
FILE="/tmp/command_cache.`echo -n "$KEY" | md5sum | cut -c -10`"

# use cached file or execute the command and cache it
if [ -f "$FILE" ]
then
    cat "$FILE"
else
    "$@" | tee "$FILE"
fi
You can name the script cache, set executable flag and put it in your PATH. Then simply prefix any command with cache to use it.
Author of bash-cache here with an update. I recently published bkt, a CLI and Rust library for subprocess caching. Here's a simple example:
# Execute and cache an invocation of 'date +%s.%N'
$ bkt -- date +%s.%N
1631992417.080884000
# A subsequent invocation reuses the same cached output
$ bkt -- date +%s.%N
1631992417.080884000
It supports a number of features such as asynchronous refreshing (--stale and --warm), namespaced caches (--scope), and optionally keying off the working directory (--cwd) and select environment variables (--env). See the README for more.
It's still a work in progress but it's functional and effective! I'm using it already to speed up my shell prompt and a number of other common tasks.
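For instance, combining the flags mentioned above (a sketch; check the README for exact semantics):

# Serve cached output, refreshing it in the background once it is 30s old,
# and keep this invocation's cache separate from other uses of bkt:
bkt --stale=30s --scope=myproject -- some-expensive-command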
I created bash-cache, a memoization library for Bash, which works exactly how you're describing. It's designed specifically to cache Bash functions, but obviously you can wrap calls to other commands in functions.
It handles a number of edge-case behaviors that many simpler caching mechanisms miss. It reports the exit code of the original call, keeps stdout and stderr separately, and retains any trailing whitespace in the output ($() command substitutions will truncate trailing whitespace).
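The trailing-whitespace point is easy to demonstrate in any shell:

$ out=$(printf 'data\n\n\n')   # three trailing newlines
$ printf '%s' "$out" | wc -c   # $() stripped all of them; only "data" remains
4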
Demo:
# Define function normally, then decorate it with bc::cache
$ maybe_sleep() {
    sleep "$@"
    echo "Did I sleep?"
} && bc::cache maybe_sleep
# Initial call invokes the function
$ time maybe_sleep 1
Did I sleep?
real 0m1.047s
user 0m0.000s
sys 0m0.020s
# Subsequent call uses the cache
$ time maybe_sleep 1
Did I sleep?
real 0m0.044s
user 0m0.000s
sys 0m0.010s
# Invocations with different arguments are cached separately
$ time maybe_sleep 2
Did I sleep?
real 0m2.049s
user 0m0.000s
sys 0m0.020s
There's also a benchmark function that shows the overhead of the caching:
$ bc::benchmark maybe_sleep 1
Original: 1.007
Cold Cache: 1.052
Warm Cache: 0.044
So you can see the read/write overhead (on my machine, which uses tmpfs) is roughly 1/20th of a second. This benchmark utility can help you decide whether it's worth caching a particular call or not.
How about this simple shell script (not tested)?
#!/bin/sh
mkdir -p cache
cachefile=cache/cache
for i in "$@"
do
    cachefile=${cachefile}_$(printf %s "$i" | sed 's/./\\&/g')
done
test -f "$cachefile" || "$@" > "$cachefile"
cat "$cachefile"
Improved upon solution from error:
Pipes output into the "tee" command, which allows it to be viewed in real time as well as stored in the cache.
Preserves colors (for example in commands like "ls --color") by using "script --flush --quiet /dev/null --command $CMD".
Avoids calling "exec", by using script as well.
Uses bash and [[.
#!/usr/bin/env bash
CMD="$@"
[[ -z $CMD ]] && echo "usage: EXPIRY=600 cache cmd arg1 ... argN" && exit 1
# set -e -x
VERBOSE=false
PROG="$(basename $0)"
EXPIRY=${EXPIRY:-600} # default to 10 minutes, can be overriden
EXPIRE_DATE=$(date -Is -d "-$EXPIRY seconds")
[[ $VERBOSE = true ]] && echo "Using expiration $EXPIRY seconds"
HASH=$(echo "$CMD" | md5sum | awk '{print $1}')
CACHEDIR="${HOME}/.cache/${PROG}"
mkdir -p "${CACHEDIR}"
CACHEFILE="$CACHEDIR/$HASH"
if [[ -e $CACHEFILE ]] && [[ $(date -Is -r "$CACHEFILE") > $EXPIRE_DATE ]]; then
    cat "$CACHEFILE"
else
    script --flush --quiet --return /dev/null --command "$CMD" | tee "$CACHEFILE"
fi
The solution I came up with in Ruby is this. Does anybody see any optimization?
#!/usr/bin/env ruby
VER = '1.2'
$time_cache_secs = 3600
$cache_dir = File.expand_path("~/.cacheme")
require 'rubygems'
begin
  require 'filecache' # gem install ruby-cache
rescue Exception => e
  puts 'gem filecache requires installation, sorry. trying to install myself'
  system 'sudo gem install -r filecache'
  puts 'Try re-running the program now.'
  exit 1
end

=begin
# create a new cache called "my-cache", rooted in /home/simon/caches
# with an expiry time of 30 seconds, and a file hierarchy three
# directories deep
=end

def main
  cache = FileCache.new("cache3", $cache_dir, $time_cache_secs, 3)
  cmd = ARGV.join(' ').to_s # caching on full command, note that quotes are stripped
  cmd = 'echo give me an argument' if cmd.length < 1
  # caches the command and retrieves it
  if cache.get('output' + cmd)
    #deb "Cache found! (for '#{cmd}')"
  else
    #deb "Cache not found! Recalculating and setting for the future"
    cache.set('output' + cmd, `#{cmd}`)
  end
  #deb 'anyway calling the cache now'
  print(cache.get('output' + cmd))
end

main
An implementation exists here: https://bitbucket.org/sivann/runcached/src
Caches executable path, output, exit code, remembers arguments. Configurable expiration. Implemented in bash, C, python, choose whatever suits you.

make alias for ls so that it doesn't show files of the pattern *~

Is there a series of commands that does ls then removes backup files? I want to do something like
ls | grep -v *~
but this shows all the files on different lines; is there any way to make the output identical to ls?
When I type "man ls", my man page for ls has this option for -B:
-B Force printing of non-printable characters (as defined by ctype(3)
and current locale settings) in file names as \xxx, where xxx is the
numeric value of the character in octal.
It is not identical to the one you showed, and I searched for "ignored" but no results popped up. Btw, I am on a Mac, which might have a different version of ls?
Alternatively, can I tell a directory to stop making backup files?
Assuming ls from GNU coreutils,
-B, --ignore-backups
do not list implied entries ending with ~
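With GNU ls, the alias the question asks for is then simply:

alias ls='ls -B'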
You can also set GLOBIGNORE='*~' in Bash so that * never expands to filenames ending in ~. (FIGNORE, by contrast, only affects tab completion; note also that setting GLOBIGNORE implicitly enables dotglob.)
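A quick interactive sketch of the GLOBIGNORE approach:

$ touch file file~ file.txt~
$ echo *
file file~ file.txt~
$ GLOBIGNORE='*~'
$ echo *
file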
You can list all files not ending in ~ with:
ls -d *[^~]
The *[^~] specifies all files that don't end in ~. The -d flag tells ls not to show the directory contents for any directories that it matches (as with the default ls command).
Edit: If you alias your ls to use the command above, it will break the standard ls usage, so you're better off using ephemient's solution if you want your ls usage to always exclude backup files.
For people forced to use ls that doesn't have -B (e.g., using BSD ls in Mac OS X), you can create an alias to a bash function that is based on Mansoor Siddiqui's suggestion. If you add the following function to your bash profile where you keep your aliases (.bash_profile, .profile, .bashrc, .bash_aliases, or equivalent):
ls_no_hidden() {
    nonflagcount=0
    ARG_ARRAY=("$@")
    flags="-l"
    curdir=`pwd`
    shopt -s nullglob

    # Iterate through args, find all flags (arg starting with dash (-))
    for (( i = 0; i < $# ; i++ )); do
        if [[ ${ARG_ARRAY[${i}]} == -* ]]; then
            flags="${flags} ${ARG_ARRAY[${i}]}";
        else
            ((nonflagcount++));
        fi
    done

    if [[ $nonflagcount -eq 0 ]]; then
        # ls current directory if no non-flag args provided
        FILES=`echo *[^~#]`
        # check if files are present, before calling ls
        # to suppress errors if no matches.
        if [[ -n $FILES ]]; then
            ls $flags -d *[^~#]
        fi
    else
        # loop through all args, and run ls for each non-flag
        for (( i = 0; i < $# ; i++ )); do
            if [[ ${ARG_ARRAY[${i}]} != -* ]]; then
                # if directory, enter the directory
                if [[ -d ${ARG_ARRAY[${i}]} ]]; then
                    cd ${ARG_ARRAY[${i}]}
                    # check that the cd was successful before calling ls
                    if [[ $? -eq 0 ]]; then
                        pwd # print directory you are listing (feel free to comment out)
                        FILES=`echo *[^~#]`
                        if [[ -n $FILES ]]; then
                            ls $flags -d *[^~#]
                        fi
                        cd $curdir
                    fi
                else
                    # if file, list the file
                    if [[ -f ${ARG_ARRAY[${i}]} ]]; then
                        ls $flags ${ARG_ARRAY[${i}]}
                    else
                        echo "Directory/File not found: ${ARG_ARRAY[${i}]}"
                    fi
                fi
            fi
        done
    fi
}
alias l=ls_no_hidden
Then l will be mapped to ls but not show files that end in ~ or #.

How to resolve symbolic links in a shell script

Given an absolute or relative path (in a Unix-like system), I would like to determine the full path of the target after resolving any intermediate symlinks. Bonus points for also resolving ~username notation at the same time.
If the target is a directory, it might be possible to chdir() into the directory and then call getcwd(), but I really want to do this from a shell script rather than writing a C helper. Unfortunately, shells have a tendency to try to hide the existence of symlinks from the user (this is bash on OS X):
$ ls -ld foo bar
drwxr-xr-x 2 greg greg 68 Aug 11 22:36 bar
lrwxr-xr-x 1 greg greg 3 Aug 11 22:36 foo -> bar
$ cd foo
$ pwd
/Users/greg/tmp/foo
$
What I want is a function resolve() such that when executed from the tmp directory in the above example, resolve("foo") == "/Users/greg/tmp/bar".
readlink -f "$path"
Editor's note: The above works with GNU readlink and FreeBSD/PC-BSD/OpenBSD readlink, but not on OS X as of 10.11.
GNU readlink offers additional, related options, such as -m for resolving a symlink whether or not the ultimate target exists.
Note since GNU coreutils 8.15 (2012-01-06), there is a realpath program available that is less obtuse and more flexible than the above. It's also compatible with the FreeBSD util of the same name. It also includes functionality to generate a relative path between two files.
realpath $path
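For example, the relative-path feature looks like this with GNU realpath (paths illustrative):

$ realpath --relative-to=/usr/bin /usr/share/man
../share/man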
For Mac OS X (through at least 10.11.x), use readlink without the -f option:
readlink $path
Editor's note: This will not resolve symlinks recursively and thus won't report the ultimate target; e.g., given symlink a that points to b, which in turn points to c, this will only report b (and won't ensure that it is output as an absolute path).
Use the following perl command on OS X to fill the gap of the missing readlink -f functionality:
perl -MCwd -le 'print Cwd::abs_path(shift)' "$path"
According to the standards, pwd -P should return the path with symlinks resolved.
C function char *getcwd(char *buf, size_t size) from unistd.h should have the same behaviour.
getcwd
pwd
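Using the foo -> bar setup from the question, the difference is visible directly:

$ cd foo
$ pwd
/Users/greg/tmp/foo
$ pwd -P
/Users/greg/tmp/bar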
"pwd -P" seems to work if you just want the directory, but if for some reason you want the name of the actual executable I don't think that helps. Here's my solution:
#!/bin/bash
# get the absolute path of the executable
SELF_PATH=$(cd -P -- "$(dirname -- "$0")" && pwd -P) && SELF_PATH=$SELF_PATH/$(basename -- "$0")
# resolve symlinks
while [[ -h $SELF_PATH ]]; do
    # 1) cd to directory of the symlink
    # 2) cd to the directory of where the symlink points
    # 3) get the pwd
    # 4) append the basename
    DIR=$(dirname -- "$SELF_PATH")
    SYM=$(readlink "$SELF_PATH")
    SELF_PATH=$(cd "$DIR" && cd "$(dirname -- "$SYM")" && pwd)/$(basename -- "$SYM")
done
One of my favorites is realpath foo
realpath - return the canonicalized absolute pathname
realpath expands all symbolic links and resolves references to '/./', '/../' and extra '/' characters in the null terminated string named by path and
stores the canonicalized absolute pathname in the buffer of size PATH_MAX named by resolved_path. The resulting path will have no symbolic link, '/./' or
'/../' components.
readlink -e [filepath]
seems to be exactly what you're asking for
- it accepts an arbirary path, resolves all symlinks, and returns the "real" path
- and it's "standard *nix" that likely all systems already have
Another way:
# Gets the real path of a link, following all links
myreadlink() { [ ! -h "$1" ] && echo "$1" || (local link="$(expr "$(command ls -ld -- "$1")" : '.*-> \(.*\)$')"; cd $(dirname $1); myreadlink "$link" | sed "s|^\([^/].*\)\$|$(dirname $1)/\1|"); }
# Returns the absolute path to a command, maybe in $PATH (which) or not. If not found, returns the same
whereis() { echo $1 | sed "s|^\([^/].*/.*\)|$(pwd)/\1|;s|^\([^/]*\)$|$(which -- $1)|;s|^$|$1|"; }
# Returns the realpath of a called command.
whereis_realpath() { local SCRIPT_PATH=$(whereis $1); myreadlink ${SCRIPT_PATH} | sed "s|^\([^/].*\)\$|$(dirname ${SCRIPT_PATH})/\1|"; }
Putting some of the given solutions together, knowing that readlink is available on most systems, but needs different arguments, this works well for me on OSX and Debian. I'm not sure about BSD systems. Maybe the condition needs to be [[ $OSTYPE != darwin* ]] to exclude -f from OSX only.
#!/bin/bash
MY_DIR=$( cd $(dirname $(readlink `[[ $OSTYPE == linux* ]] && echo "-f"` $0)) ; pwd -P)
echo "$MY_DIR"
Here's how one can get the actual path to the file in MacOS/Unix using an inline Perl script:
FILE=$(perl -e "use Cwd qw(abs_path); print abs_path('$0')")
Similarly, to get the directory of a symlinked file:
DIR=$(perl -e "use Cwd qw(abs_path); use File::Basename; print dirname(abs_path('$0'))")
Common shell scripts often have to find their "home" directory even if they are invoked through a symlink. The script thus has to find its "real" position from just $0.
cat `which mvn`
on my system prints a script containing the following, which should be a good hint at what you need.
if [ -z "$M2_HOME" ] ; then
    ## resolve links - $0 may be a link to maven's home
    PRG="$0"

    # need this for relative symlinks
    while [ -h "$PRG" ] ; do
        ls=`ls -ld "$PRG"`
        link=`expr "$ls" : '.*-> \(.*\)$'`
        if expr "$link" : '/.*' > /dev/null; then
            PRG="$link"
        else
            PRG="`dirname "$PRG"`/$link"
        fi
    done

    saveddir=`pwd`

    M2_HOME=`dirname "$PRG"`/..

    # make it fully qualified
    M2_HOME=`cd "$M2_HOME" && pwd`
fi
# (rest of the mvn script omitted)
Note: I believe this to be a solid, portable, ready-made solution, which is invariably lengthy for that very reason.
Below is a fully POSIX-compliant script / function that is therefore cross-platform (works on macOS too, whose readlink still doesn't support -f as of 10.12 (Sierra)) - it uses only POSIX shell language features and only POSIX-compliant utility calls.
It is a portable implementation of GNU's readlink -e (the stricter version of readlink -f).
You can run the script with sh or source the function in bash, ksh, and zsh:
For instance, inside a script you can use it as follows to get the running's script true directory of origin, with symlinks resolved:
trueScriptDir=$(dirname -- "$(rreadlink "$0")")
rreadlink script / function definition:
The code was adapted with gratitude from this answer.
I've also created a bash-based stand-alone utility version here, which you can install with
npm install rreadlink -g, if you have Node.js installed.
#!/bin/sh

# SYNOPSIS
#   rreadlink <fileOrDirPath>
# DESCRIPTION
#   Resolves <fileOrDirPath> to its ultimate target, if it is a symlink, and
#   prints its canonical path. If it is not a symlink, its own canonical path
#   is printed.
#   A broken symlink causes an error that reports the non-existent target.
# LIMITATIONS
#   - Won't work with filenames with embedded newlines or filenames containing
#     the string ' -> '.
# COMPATIBILITY
#   This is a fully POSIX-compliant implementation of what GNU readlink's
#   -e option does.
# EXAMPLE
#   In a shell script, use the following to get that script's true directory of origin:
#     trueScriptDir=$(dirname -- "$(rreadlink "$0")")
rreadlink() ( # Execute the function in a *subshell* to localize variables and the effect of `cd`.

  target=$1 fname= targetDir= CDPATH=

  # Try to make the execution environment as predictable as possible:
  # All commands below are invoked via `command`, so we must make sure that
  # `command` itself is not redefined as an alias or shell function.
  # (Note that `which` is too inconsistent across shells, so we don't use it.)
  # `command` is a *builtin* in bash, dash, ksh, zsh, and some platforms do not
  # even have an external utility version of it (e.g., Ubuntu).
  # `command` bypasses aliases and shell functions and also finds builtins
  # in bash, dash, and ksh. In zsh, option POSIX_BUILTINS must be turned on for
  # that to happen.
  { \unalias command; \unset -f command; } >/dev/null 2>&1
  [ -n "$ZSH_VERSION" ] && options[POSIX_BUILTINS]=on # make zsh find *builtins* with `command` too.

  while :; do # Resolve potential symlinks until the ultimate target is found.
      [ -L "$target" ] || [ -e "$target" ] || { command printf '%s\n' "ERROR: '$target' does not exist." >&2; return 1; }
      command cd "$(command dirname -- "$target")" # Change to target dir; necessary for correct resolution of target path.
      fname=$(command basename -- "$target") # Extract filename.
      [ "$fname" = '/' ] && fname='' # !! curiously, `basename /` returns '/'
      if [ -L "$fname" ]; then
          # Extract [next] target path, which may be defined
          # *relative* to the symlink's own directory.
          # Note: We parse `ls -l` output to find the symlink target
          #       which is the only POSIX-compliant, albeit somewhat fragile, way.
          target=$(command ls -l "$fname")
          target=${target#* -> }
          continue # Resolve [next] symlink target.
      fi
      break # Ultimate target reached.
  done

  targetDir=$(command pwd -P) # Get canonical dir. path
  # Output the ultimate target's canonical path.
  # Note that we manually resolve paths ending in /. and /.. to make sure we have a normalized path.
  if [ "$fname" = '.' ]; then
      command printf '%s\n' "${targetDir%/}"
  elif [ "$fname" = '..' ]; then
      # Caveat: something like /var/.. will resolve to /private (assuming /var -> /private/var), i.e. the '..' is applied
      # AFTER canonicalization.
      command printf '%s\n' "$(command dirname -- "${targetDir}")"
  else
      command printf '%s\n' "${targetDir%/}/$fname"
  fi
)

rreadlink "$@"
A tangent on security:
jarno, in reference to the function ensuring that builtin command is not shadowed by an alias or shell function of the same name, asks in a comment:
What if unalias or unset and [ are set as aliases or shell functions?
The motivation behind rreadlink ensuring that command has its original meaning is to use it to bypass (benign) convenience aliases and functions often used to shadow standard commands in interactive shells, such as redefining ls to include favorite options.
I think it's safe to say that unless you're dealing with an untrusted, malicious environment, worrying about unalias or unset - or, for that matter, while, do, ... - being redefined is not a concern.
There is something that the function must rely on to have its original meaning and behavior - there is no way around that.
That POSIX-like shells allow redefinition of builtins and even language keywords is inherently a security risk (and writing paranoid code is hard in general).
To address your concerns specifically:
The function relies on unalias and unset having their original meaning. Having them redefined as shell functions in a manner that alters their behavior would be a problem; redefinition as an alias is
not necessarily a concern, because quoting (part of) the command name (e.g., \unalias) bypasses aliases.
However, quoting is not an option for shell keywords (while, for, if, do, ...). While shell keywords take precedence over shell functions, in bash and zsh aliases have the highest precedence, so to guard against shell-keyword redefinitions you must run unalias with their names (although in non-interactive bash shells (such as scripts) aliases are not expanded by default - only if shopt -s expand_aliases is explicitly called first).
To ensure that unalias - as a builtin - has its original meaning, you must use \unset on it first, which requires that unset have its original meaning:
unset is a shell builtin, so to ensure that it is invoked as such, you'd have to make sure that it itself is not redefined as a function. While you can bypass an alias form with quoting, you cannot bypass a shell-function form - catch 22.
Thus, unless you can rely on unset to have its original meaning, from what I can tell, there is no guaranteed way to defend against all malicious redefinitions.
Is your path a directory, or might it be a file? If it's a directory, it's simple:
(cd "$DIR"; pwd -P)
However, if it might be a file, then this won't work:
DIR=$(cd $(dirname "$FILE"); pwd -P); echo "${DIR}/$(readlink "$FILE")"
because the symlink might resolve into a relative or full path.
On scripts I need to find the real path, so that I might reference configuration or other scripts installed together with it, I use this:
SOURCE="${BASH_SOURCE[0]}"
while [ -h "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symlink
    DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
    SOURCE="$(readlink "$SOURCE")"
    [[ $SOURCE != /* ]] && SOURCE="$DIR/$SOURCE" # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located
done
DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )" # final step: the physical directory of the resolved file
You could set SOURCE to any file path. Basically, for as long as the path is a symlink, it resolves that symlink. The trick is in the last line of the loop. If the resolved symlink is absolute, it will use that as SOURCE. However, if it is relative, it will prepend the DIR for it, which was resolved into a real location by the simple trick I first described.
function realpath {
    local r=$1; local t=$(readlink $r)
    while [ $t ]; do
        r=$(cd $(dirname $r) && cd $(dirname $t) && pwd -P)/$(basename $t)
        t=$(readlink $r)
    done
    echo $r
}

# example usage
SCRIPT_PARENT_DIR=$(dirname $(realpath "$0"))/..
In cases where pwd can't be used (e.g. calling a script from a different location), use realpath (with or without dirname):
$(dirname $(realpath $PATH_TO_BE_RESOLVED))
Works both when calling through (multiple) symlinks and when calling the script directly - from any location.
This is a symlink resolver in Bash that works whether the link is a directory or a non-directory:
function readlinks {(
    set -o errexit -o nounset
    declare n=0 limit=1024 link="$1"

    # If it's a directory, just skip all this.
    if cd "$link" 2>/dev/null
    then
        pwd -P
        return 0
    fi

    # Resolve until we are out of links (or recurse too deep).
    while [[ -L $link ]] && [[ $n -lt $limit ]]
    do
        cd "$(dirname -- "$link")"
        n=$((n + 1))
        link="$(readlink -- "${link##*/}")"
    done
    cd "$(dirname -- "$link")"

    if [[ $n -ge $limit ]]
    then
        echo "Recursion limit ($limit) exceeded." >&2
        return 2
    fi

    printf '%s/%s\n' "$(pwd -P)" "${link##*/}"
)}
Note that all the cd and set stuff takes place in a subshell.
Try this:
cd $(dirname $([ -L $0 ] && readlink -f $0 || echo $0))
Since I've run into this many times over the years, and this time around I needed a pure bash portable version that I could use on OSX and linux, I went ahead and wrote one:
The living version lives here:
https://github.com/keen99/shell-functions/tree/master/resolve_path
but for the sake of SO, here's the current version (I feel it's well tested..but I'm open to feedback!)
Might not be difficult to make it work for plain bourne shell (sh), but I didn't try...I like $FUNCNAME too much. :)
#!/bin/bash

resolve_path() {
    # I'm bash only, please!
    # usage: resolve_path <a file or directory>
    # follows symlinks and relative paths, returns a full real path
    #
    local owd="$PWD"
    #echo "$FUNCNAME for $1" >&2
    local opath="$1"
    local npath=""
    local obase=$(basename "$opath")
    local odir=$(dirname "$opath")
    if [[ -L "$opath" ]]
    then
        #it's a link.
        #file or directory, we want to cd into its dir
        cd "$odir"
        #then extract where the link points.
        npath=$(readlink "$obase")
        #have to -L BEFORE we -f, because -f includes -L :(
        if [[ -L $npath ]]
        then
            #the link points to another symlink, so go follow that.
            resolve_path "$npath"
            #and finish out early, we're done.
            return $?
            #done
        elif [[ -f $npath ]]
        #the link points to a file.
        then
            #get the dir for the new file
            nbase=$(basename "$npath")
            npath=$(dirname "$npath")
            cd "$npath"
            ndir=$(pwd -P)
            retval=0
            #done
        elif [[ -d $npath ]]
        then
            #the link points to a directory.
            cd "$npath"
            ndir=$(pwd -P)
            retval=0
            #done
        else
            echo "$FUNCNAME: ERROR: unknown condition inside link!!" >&2
            echo "opath [[ $opath ]]" >&2
            echo "npath [[ $npath ]]" >&2
            return 1
        fi
    else
        if ! [[ -e "$opath" ]]
        then
            echo "$FUNCNAME: $opath: No such file or directory" >&2
            return 1
            #and break early
        elif [[ -d "$opath" ]]
        then
            cd "$opath"
            ndir=$(pwd -P)
            retval=0
            #done
        elif [[ -f "$opath" ]]
        then
            cd "$odir"
            ndir=$(pwd -P)
            nbase=$(basename "$opath")
            retval=0
            #done
        else
            echo "$FUNCNAME: ERROR: unknown condition outside link!!" >&2
            echo "opath [[ $opath ]]" >&2
            return 1
        fi
    fi

    #now assemble our output
    echo -n "$ndir"
    if [[ "x${nbase:=}" != "x" ]]
    then
        echo "/$nbase"
    else
        echo
    fi

    #now return to where we were
    cd "$owd"
    return $retval
}
here's a classic example, thanks to brew:
%% ls -l `which mvn`
lrwxr-xr-x 1 draistrick 502 29 Dec 17 10:50 /usr/local/bin/mvn -> ../Cellar/maven/3.2.3/bin/mvn
use this function and it will return the -real- path:
%% cat test.sh
#!/bin/bash
. resolve_path.inc
echo
echo "relative symlinked path:"
which mvn
echo
echo "and the real path:"
resolve_path `which mvn`
%% test.sh
relative symlinked path:
/usr/local/bin/mvn
and the real path:
/usr/local/Cellar/maven/3.2.3/libexec/bin/mvn
To work around the Mac incompatibility, I came up with
echo `php -r "echo realpath('foo');"`
Not great, but cross-OS.
Here I present what I believe to be a cross-platform (Linux and macOS, at least) solution that is working well for me currently.
crosspath()
{
    local ref="$1"
    if [ -x "$(which realpath)" ]; then
        path="$(realpath "$ref")"
    else
        path="$(readlink -f "$ref" 2> /dev/null)"
        if [ $? -gt 0 ]; then
            if [ -x "$(which readlink)" ]; then
                if [ ! -z "$(readlink "$ref")" ]; then
                    ref="$(readlink "$ref")"
                fi
            else
                echo "realpath and readlink not available. The following may not be the final path." 1>&2
            fi
            if [ -d "$ref" ]; then
                path="$(cd "$ref"; pwd -P)"
            else
                path="$(cd $(dirname "$ref"); pwd -P)/$(basename "$ref")"
            fi
        fi
    fi
    echo "$path"
}
Here is a macOS (only?) solution. Possibly better suited to the original question.
mac_realpath()
{
    local ref="$1"
    if [[ ! -z "$(readlink "$ref")" ]]; then
        ref="$(readlink "$1")"
    fi
    if [[ -d "$ref" ]]; then
        echo "$(cd "$ref"; pwd -P)"
    else
        echo "$(cd $(dirname "$ref"); pwd -P)/$(basename "$ref")"
    fi
}
My answer is here: Bash: how to get real path of a symlink?
In short, it's very handy in scripts:
script_home=$( dirname $(realpath "$0") )
echo Original script home: $script_home
These are part of GNU coreutils, suitable for use in Linux systems.
To test everything, we put a symlink into /home/test2/, amend some additional things, and run/call it from the root directory:
/$ /home/test2/symlink
/home/test
Original script home: /home/test
Where
Original script is: /home/test/realscript.sh
Called script is: /home/test2/symlink
My 2 cents. This function is POSIX-compliant, and both the source and the destination can contain ->. However, I have not gotten it to work with filenames that contain newlines or tabs, as ls in general has issues with those.
resolve_symlink() {
    test -L "$1" && ls -l "$1" | awk -v SYMLINK="$1" '{ SL=(SYMLINK)" -> "; i=index($0, SL); s=substr($0, i+length(SL)); print s }'
}
I believe the solution here is the file command, with a custom magic file that only outputs the destination of the provided symlink.
This is the best solution, tested in Bash 3.2.57:
# Read a path (similar to `readlink`) recursively, until the physical path without any links (like `cd -P`) is found.
# Accepts any existing path, prints its physical path and exits `0`, exits `1` if some contained links don't exist.
# Motivation: `${BASH_SOURCE[0]}` often contains links; using it directly to extract your project's path may fail.
#
# Example: Safely `source` a file located relative to the current script
#
#     source "$(dirname "$(rreadlink "${BASH_SOURCE[0]}")")/relative/script.sh"
#
# Inspiration: https://stackoverflow.com/a/51089005/6307827
rreadlink () {
    declare p="$1" d l

    while :; do
        d="$(cd -P "$(dirname "$p")" && pwd)" || return $? # absolute path without symlinks
        p="$d/$(basename "$p")"
        if [ -h "$p" ]; then
            l="$(readlink "$p")" || break
            # A link must be resolved from its fully resolved parent dir.
            d="$(cd "$d" && cd -P "$(dirname "$l")" && pwd)" || return $?
            p="$d/$(basename "$l")"
        else
            break
        fi
    done

    printf '%s\n' "$p"
}
