Escape slashes in bash complete

I'm trying to use the bash complete builtin to show the different options for a command.
I run into problems when an option contains a path, as in -F/dev/null.
Currently I'm using
#!/bin/bash
_xyz-completion ()
{
local cur
COMPREPLY=() # Array variable storing the possible completions.
cur=${COMP_WORDS[COMP_CWORD]}
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "-oOption1 -F/dev/null" -- $cur ) )
;;
esac
return 0
}
complete -F _xyz-completion -o filenames xyz
If -F was already typed, then a Tab completes it successfully.
But if only - was typed, then a Tab shows
null -oOption1
But I expect to see
-F/dev/null -oOption1
I already tried -F\/dev\/null, -F//dev//null, "-F/dev/null" and -F\\\/dev\\\/null
It seems to be only a display problem, as the completion itself works as expected.
I can't see how to appropriately escape the slashes in `-F/dev/null`.
To address the comments:
1)
Never mind, it's a problem also if -F is replaced by a non-option such as -Q. – Benjamin W.
The problem is not that -F looks like an option for complete itself; it fails even if I change the words to xOPTION1 xF/dev/null.
2)
I'm wondering what compgen -W "-oOption1 -F/dev/null" -- - displays for you.
It displays (as expected)
-oOption1
-F/dev/null
As mentioned, -F completes successfully to -F/dev/null

If you remove the -o filenames option from complete, your example works as expected, which makes some sense, as the completions aren't filenames. This is with bash version 5.0.2(1).
So:
#!/bin/bash
_xyz-completion ()
{
local cur
COMPREPLY=() # Array variable storing the possible completions.
cur=${COMP_WORDS[COMP_CWORD]}
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "-oOption1 -F/dev/null" -- $cur ) )
;;
esac
return 0
}
complete -F _xyz-completion xyz
It definitely seems like a bug that it truncates part of the completion when there are slashes, and only when displaying the choices; the actual completion itself works correctly.
EDIT:
After looking into it a little more: the filenames option is used for escaping strings that could contain spaces or other breaking characters, basically cleaning up file names for the shell. From the Programmable Completion Builtins section of the man page:
-o filenames:
Tell Readline that the compspec generates filenames, so it can perform any filename-specific processing (like adding a slash to directory names, quoting special characters, or suppressing trailing spaces). This option is intended to be used with shell functions specified with -F.
Apparently that includes stripping everything before and including the last slash.
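If you need the filename-specific processing for real paths but not for option words, one possibility (a sketch of my own, not from the original answers; compopt requires bash >= 4) is to leave -o filenames off the complete registration and switch it on from inside the function only when actually completing paths:
_xyz-completion ()
{
local cur=${COMP_WORDS[COMP_CWORD]}
case "$cur" in
-*)
# Option words: no filename post-processing wanted.
COMPREPLY=( $( compgen -W "-oOption1 -F/dev/null" -- "$cur" ) )
;;
*)
# Real paths: enable filename processing for this completion only.
compopt -o filenames
COMPREPLY=( $( compgen -f -- "$cur" ) )
;;
esac
return 0
}
complete -F _xyz-completion xyz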
EDIT2:
Here's a comment from the readline source that bash uses for file name completion. I got this from the bash repo at https://git.savannah.gnu.org/git/bash.git (master branch, so 5.0 patch 3 at the time of writing):
./lib/readline/complete.c line 697
/* Return the portion of PATHNAME that should be output when listing
possible completions. If we are hacking filename completion, we
are only interested in the basename, the portion following the
final slash. Otherwise, we return what we were passed. Since
printing empty strings is not very informative, if we're doing
filename completion, and the basename is the empty string, we look
for the previous slash and return the portion following that. If
there's no previous slash, we just return what we were passed. */
static char *
printable_part (char *pathname)
For filename completion, it only wants to print the basename, everything after the last slash.
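For a quick way to see the display effect in isolation (a sketch; exact display behavior may vary across bash/readline versions), the same word list can be registered with and without -o filenames:
complete -W "-oOption1 -F/dev/null" foo
complete -o filenames -W "-oOption1 -F/dev/null" bar
Typing foo -<Tab><Tab> should list -F/dev/null and -oOption1, while bar -<Tab><Tab> should list only the basename null next to -oOption1.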

Related

bash removing slashes in front of directory names

I'm trying to recapture some of the simplicity of the c-shell and tcsh. I had a simple alias that allowed me to list directories (alias lsdd 'ls | grep /'). I found a post with several solutions, none of which were particularly satisfying. For instance,
ls -d */
works well unless there are no sub-directories, in which case you get an error message--not exactly elegant.
echo */
doesn't give that error, but the list is not as easily readable as a single column.
So, I have been rooting around in /etc to find where bash defines its ls command so that it uses color, and so that it strips the / following the directory name. That seems to be a great place to do some bud nipping. In what startup file does bash strip slashes from directory names in an ls command?
If you want to list only directories, without the trailing slash and in a single column, ls is not necessarily the best utility. Also, aliases are somewhat obsolete; functions are recommended nowadays.
lsdd() {
local -a list=( */ )
printf '%s\n' "${list[@]%/}"
}
If there are no sub-directories, by default a single * is printed. To get rid of it, we can temporarily set the nullglob option. In the following, we record the option's initial state and restore it afterwards:
lsdd() {
local tmp=$(shopt -p nullglob)
shopt -s nullglob
local -a list=( */ )
printf '%s\n' "${list[@]%/}"
eval "$tmp"
}
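Another sketch (my variation, not from the original answer): running the function body in a subshell, by using parentheses instead of braces, scopes the option change automatically, so no save/restore is needed:
lsdd() (                  # parentheses: the body runs in a subshell
shopt -s nullglob         # dies with the subshell
local -a list=( */ )
(( ${#list[@]} )) && printf '%s\n' "${list[@]%/}"
)
The (( ${#list[@]} )) guard also avoids printing a blank line when there are no subdirectories, since printf '%s\n' with no arguments still prints one empty line.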

Bash command completion with full path expansion injected into history for vim

I've spent a solid week searching online and trying many different ways to solve a tricky problem. Basically, I would like to use vim to edit custom commands/scripts that are in my $PATH without having to actually cd to their given directories first or manually type their full paths on the command line.
In essence, I'd love to be able to combine stock bash command completion (compgen -c) with simultaneous path expansion when specifying scripts in my $PATH as vim FILE ARGUMENTS. BTW, I'm using the caps to make clear what can be a tricky subject, not shouting.
It's probably easier to show you what I'm trying to do than to explain it. Let's say I have scripts in directories that are on my $PATH:
~/bin/x/y/cmd1.sh
~/bin/a/b/cmd2.sh
/ppp/n/m/cmd3.sh
Sometimes these scripts provide functionality on files that exist in other directories, so I'd like to be able to edit them easily from anywhere in the file system. Sometimes I just want to be able to edit those scripts from other directories because it's more convenient. Let's say I'm currently in the following directory:
/completely/different/dir
but now I need to vim edit
~/bin/a/b/cmd2.sh
My options for achieving this solely with default bash functionality are to do one of the following, which takes a long time:
cd ~/bin/a/b/; vim cmd2.sh
vim ~/<tab-complete-my-way-to-file>
open a new terminal window, plus some combination of the above
Since I know the names of my custom scripts, it would be so much easier to just do the following, which requires no tab completion of the full path to the file or directory and no cd'ing to a different directory to change my context!
vim cmd2.sh
but this won't work by default because vim needs the full path to the script.
My first thought was to write a vim wrapper function which basically uses which to do the $PATH expansion for me, and then tie bash command completion to my vc function like this:
vc () { vim $(which "$@"); }
complete -c vc
I can run the following in the shell to complete partial script names that start with "c" from the choices of cmd1.sh, cmd2.sh, cmd3.sh:
vc c<tab>
until I get what I want here, which is great:
vc cmd2.sh
When I hit enter and execute the command, it all works fine, BUT it doesn't inject the expanded path into the READLINE command line, and thus the FULL EXPANDED PATH of 'cmd2.sh' never winds up in my command history! My history will show this:
vc cmd2.sh
instead of
vc ~/bin/a/b/cmd2.sh
or
vim ~/bin/a/b/cmd2.sh
I want that expanded path in my command history because it makes future operations on that script file super easy when reusing command history, i.e. I can ls, file, diff, mv, cp that expanded path much more easily by reusing history than by writing more wrapper scripts for ls, file, diff, mv, cp, etc. like I had to do with vc above.
QUESTIONS:
OPTION 1
Is there a way to re-inject the full expanded path provided by which in my vc function directly back into the original vc READLINE, or to inject the entire "vim " command that actually gets executed in vc into READLINE as a replacement for the original vc command? Any method that allows me to get the expanded vim command into the history, even if it is in addition to the original vc command, is OK by me.
Basically, how do you access and edit the current READLINE programmatically in bash?
OPTION 2
Note that I can also do something like this DIRECTLY on the command line in real time:
vim $(which cmd2.sh) C-x-e
This gives me what I want (it expands the path, which will then put it into history), but I have to type the extra subshell and which text, as well as the C-x-e (to expand the line), on every iteration of the command, while losing the command completion functionality, which basically makes this useless. Put another way: is there any way to automate the above using a bound key, so that
vc cmd2.sh
is automatically transformed first into
vim $(which cmd2.sh)
and then automatically follows up with C-x-e so that it gets expanded to
vim ~/bin/a/b/cmd2.sh
but has all the editing movement, text insertion, and final command-line expansion happen in the same bindkey macro? This might be the best solution of all.
OPTION 3
Alternatively, since bash command completion automatically winds up in the READLINE and thus the history, a custom completion function would solve my problem. Is there a way to make vc use a completion function that would BOTH complete commands in $PATH when used as vim arguments as described above AND ALSO SIMULTANEOUSLY EXPAND THEM TO THEIR FULL PATHS?
I know how to write a basic completion function. Countless hours of attempts (which I am choosing not to put here to keep confusion / post length down) have failed, for the simple reason that I'm not sure command completion is compatible with simultaneous full path expansion, because it breaks traditional completion.
With a custom completion function, here's what happens when I try to find one of my scripts, "cmd2.sh", living at "~/bin/a/b/cmd2.sh", but start with a "c" and hit Tab:
vim c<tab>
instead of getting these completions to choose from:
cmd1.sh
cmd2.sh
cmd3.sh
it completes the first one it finds in the $PATH and inserts it into the READLINE, which might be
/ppp/n/m/cmd3.sh
when i really want
~/bin/a/b/cmd2.sh
This effectively kills the completion lookup, because the word before my cursor in the READLINE now starts with /ppp/n/m/cmd3.sh and there's no way of getting back to cmd2.sh.
I hope that's clear.
Thanks.
This requires some boilerplate in your .bashrc file, but might work for you. It makes use of the directory stack (some might say it abuses the directory stack, but if you aren't using it for anything else, it might be OK).
In your .bashrc, add each directory of interest to your directory stack. End the list with your home directory, as pushd also changes your current working directory.
pushd ~/bin/x/y
pushd ~/bin/a/b
pushd /ppp/n/m
pushd ~
Yes, it duplicates your PATH entry a bit, but I contend you don't really need access to every directory in your PATH, just the ones where you have files you intend to edit. (Are you really going to try to edit anything in /bin or /usr/bin?)
Now, in your interactive shell, you can run dirs -v to see, along with its index, the directories in your stack:
$ dirs -v
0 ~
1 /ppp/n/m
2 ~/bin/a/b
3 ~/bin/x/y
4 ~
Now, no matter where you are, if you want to edit ~/bin/x/y/cmd1.sh, you can use
$ vi ~3/cmd1.sh
As long as you don't use popd or pushd elsewhere to modify the stack, the indices will stay the same. (Using pushd will add a new directory to the top of the stack, increasing each index; popd will decrease each index after it removes the top directory.)
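As a side note (my addition, not part of the original answer): pushd also accepts -n, which manipulates the stack without changing the current working directory, so the trailing pushd ~ workaround can be avoided:
pushd -n ~/bin/x/y > /dev/null
pushd -n ~/bin/a/b > /dev/null
pushd -n /ppp/n/m > /dev/null
The redirection just silences the stack listing that pushd prints on each call; the resulting indices match the dirs -v output shown above, minus the duplicate trailing ~.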
A much simpler process would be to simply define some variables whose values are the desired directories:
binab=~/bin/a/b
binxy=~/bin/x/y
ppp=/ppp/n/m
and simply expand them
$ vi $ppp/cmd3.sh
The shell performs parameter name completion, so the variable names don't have to be particularly short, but the dirstack approach guarantees you only need 2 or 3 characters. (Also, it doesn't pollute the global namespace with additional variables.)
Interestingly, I found myself wanting to do something similar a while back. I hacked together the following bash script. It's pretty self-explanatory. If I want to edit one of my scripts (this one, for example, is ~/bin/vm), I just run vm vm. I can open several files in my path, either in buffers, or in vertical/horizontal splits, etc.
Do with it what you like, pasting it here because it's all ready to use:
#!/usr/bin/env bash
Usage() {
cat <<-__EOF_
${0##*/} Opens scripts in PATH from any location (vim -O)
Example: ${0##*/} ${0##*/}
opens this script in vim editor
-o: Change default behaviour (vim -O) to -o
-b: Change default behaviour to open in buffers (vim file1 file2)
-h: Display this message
__EOF_
}
flag="O"
vimopen() {
local wrapped
local located
local found
found=false
[ $# -lt 1 ] && echo "No script given" && return
wrapped=""
for arg in "$@"; do
if located=$(which "${arg}" 2> /dev/null); then
found=true
wrapped="${wrapped} ${located}"
else
echo "${arg} not found!"
fi
done
$found || return
# We WANT word splitting to occur here
# shellcheck disable=SC2086
case ${flag} in
O)
vim $wrapped -O
;;
o)
vim $wrapped -o
;;
*)
vim $wrapped
esac
}
while getopts :boh f; do
case $f in
h)
Usage
exit 0
;;
o)
flag="o"
;;
b)
flag=""
;;
*)
echo "Unknown option ${f}-${OPTARG}"
Usage
exit 1
;;
esac
done
shift $((OPTIND - 1))
vimopen "$@"
Let me share something that answers the OPTION 3 part of your question:
Behavior of this solution
The solutions that I will show offer up basenames of commands (i.e. what compgen -c ${cur} returns, where cur is the last word on the command line) until there is only one candidate, in which case it is replaced by the full path of the command.
$ vc c<TAB><TAB>
Display all 216 possibilities? (y or n)
$ vc cm<TAB>
cmake cmake-gui cmcprompt cmd1.sh cmd2.sh cmd3.sh cmp cmpdylib cmuwmtopbm
$ vc cmd<TAB>
cmd1.sh cmd2.sh cmd3.sh
$ vc cmd1<TAB>
$ vc /Users/pcarphin/vc/bin/cmd1.sh
which I think is what you want.
And for your vc function, you can still do
vc(){
vim "$(which "${1}")"
}
since which /Users/pcarphin/vc/bin/cmd3.sh returns /Users/pcarphin/vc/bin/cmd3.sh, it will work whether you do vc cmd3.sh<ENTER> or vc cmd3.sh<TAB><ENTER>
Basic solution
So here it is. It's as simple as using compgen -c to get command basename candidates, checking whether there is only a single candidate, and if so, replacing it with the full path:
_vc(){
local cur prev words cword
_init_completion || return;
COMPREPLY=( $(compgen -c ${cur}) )
#
# If there is only one candidate for completion, replace it with the
# full path returned by which.
#
if ((${#COMPREPLY[@]} == 1)) ; then
COMPREPLY[0]=$(which ${COMPREPLY[0]})
fi
}
complete -F _vc vc
Solution that filters out shell functions
The compgen -c command will include the names of shell functions, and if you want to leave those out (maybe because your vc function would fail on them, which would be inelegant for an argument supplied by a completion function), here is what you can do:
_vc(){
local cur prev words cword
_init_completion || return;
local candidates=($(compgen -c ${cur}))
#
# Put in COMPREPLY only the command names that are files in PATH
# and leave out shell functions
#
local i=0
for cmd in "${candidates[@]}" ; do
if which "${cmd}" >/dev/null 2>&1 ; then
COMPREPLY[i++]=${cmd}
fi
done
#
# If there is only one candidate for completion, replace it with the
# full path returned by which.
#
if ((${#COMPREPLY[@]} == 1)) ; then
COMPREPLY[0]=$(which ${COMPREPLY[0]})
fi
}
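A possible simplification (my sketch, not from the original answer): bash's type -P performs a PATH search only, so it already ignores shell functions, aliases, and builtins, and being a builtin itself it avoids forking which once per candidate:
_vc(){
local cur prev words cword
_init_completion || return
local cmd
COMPREPLY=()
# type -P prints a path only for executables found in PATH,
# so functions and builtins drop out naturally.
for cmd in $(compgen -c "${cur}") ; do
type -P "${cmd}" >/dev/null && COMPREPLY+=( "${cmd}" )
done
if ((${#COMPREPLY[@]} == 1)) ; then
COMPREPLY[0]=$(type -P "${COMPREPLY[0]}")
fi
}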
Solution that handles shell functions
If we want to handle shell functions, then we can get rid of the part that filters them out and enhance the part that replaces the command name by a full path when COMPREPLY contains only one candidate. This is based on turning on extdebug which causes declare -F shell_function to output the file where shell_function was defined:
cmd_location(){
local location
if location=$(which "${1}" 2>/dev/null) ; then
echo "${location}"
else
# If extdebug is off, remember that and turn it on
local user_has_extdebug
if ! shopt extdebug ; then
user_has_extdebug=no
shopt -s extdebug
fi
local info=$(declare -F "${1}")
if [[ -n "${info}" ]] ; then
echo ${info} | cut -d ' ' -f 3
fi
# Turn extdebug back off if it was off before
if [[ "${user_has_extdebug}" == no ]] ; then
shopt -u extdebug
fi
fi
}
_vc(){
local cur prev words cword
_init_completion || return;
COMPREPLY=( $(compgen -c ${cur}) )
if ((${#COMPREPLY[@]} == 1)) ; then
COMPREPLY[0]=$(cmd_location ${COMPREPLY[0]})
fi
}
And in this case, your vc function would need the same kind of logic, or you could just remember to always use the shell completion so that you end up calling it with a full path.
That's why I factored out the cmd_location function:
vc(){
if [[ "${1}" == /* ]] ; then
vim "${1}"
else
vim $(cmd_location "${1}")
fi
}
I was looking for something else, but I found this question, which inspired me to do this for myself, so thank you! Now I'll have a neat vc function with a cool completion function. Personally, I'm going to use the last version, which handles shell functions.
The declare -F command with extdebug prints out the function name, the line number, and the file, so I'll see if I can adapt the solution so that in the case of shell functions, it opens the file at the location.
For that, I'd have to get rid of the part that puts a full path on the command line, so what I'm going to do for myself won't be an answer to your question. Note the use of parentheses for the body of open_shell_function, which makes it run in a subshell so I don't have to do the whole save-and-restore dance with user_has_extdebug.
open_shell_function()(
# Use subshell so as not to turn on extdebug in the user's shell
# and avoid doing this remembering stuff
shopt -s extdebug
local info=$(declare -F ${1})
if [[ -z "${info}" ]] ; then
echo "No info from 'declare -F' for '${1}'"
return 1
fi
local lineno
if ! lineno=$(echo ${info} | cut -d ' ' -f 2) ; then
echo "Error getting line number from info '${info}' on '${1}'"
return 1
fi
local file
if ! file=$(echo ${info} | cut -d ' ' -f 3) ; then
echo "Error getting filename from info '${info}' on '${1}'"
return 1
fi
vim "${file}" "+${lineno}"
)
vc(){
local file
if file=$(which "${1}" 2>/dev/null) ; then
vim "${file}"
else
echo "no '${1}' found in path, looking for shell function"
open_shell_function "${1}"
fi
}
complete -c vc

Wildcard that executes command once for each match

Alternate title: How to loop without a loop or xargs.
Recently, I switched to zsh because of its many features. I'm curious: is there a feature which expands wildcards such that the command is executed once for each match, instead of only once for all matches at once?
Example
The command ebook-convert input_file output_file [options] accepts just one input file. When I want to convert multiple files, I have to execute the command multiple times manually or use a loop, for instance:
for i in *.epub; do
ebook-convert "$i" .mobi
done
What I'd like is a wildcard that functions like the loop so that I can save a few keystrokes. Let said wildcard be ⁂. The command
ebook-convert ⁂.epub .mobi
should expand to
ebook-convert 1stMatch.epub .mobi
ebook-convert 2ndMatch.epub .mobi
ebook-convert 3rdMatch.epub .mobi
...
Still interested in other answers
I accepted an answer that works for me (thanks to Grisha Levit). But if you know other shells with such a feature, alternative commands which are shorter than writing a loop, or even a way to extend zsh with the wanted wildcard, your answers are appreciated.
so that I can save a few keystrokes
OK, so let's say you typed out
ebook-convert *.epub .mobi
…and now you realized that this isn't going to work — you need to write a loop. What would you normally do? Probably something like:
add ; done to the end of the line
hit CtrlA to go the beginning of the line
type for i in…
etc…
This looks like a good fit for readline keyboard macro:
Let's write out the steps in terms of readline commands and regular keypresses:
end-of-line # (start from the end for consistency)
; done # type in the loop closing statement
character-search-backward * # go back to where the glob is
shell-backward-word # (in case the glob is in the mid-word)
shell-kill-word # "cut" the word with the glob
"$i" # type the loop variable
beginning-of-line # go back to the start of the line
for i in # type the beginning of the loop opening
yank # "paste" the word with the glob
; do # type the end of the loop opening
Creating the binding:
For any readline command used above that does not have a key-binding, we need to create one. We also need to create a binding for the new macro that we are creating.
Unless you've already done a lot of readline customization, running the commands below will set the bindings up for the current shell. This uses default bindings like \C-e ➙ end-of-line.
bind '"\eB": shell-backward-word'
bind '"\eD": shell-kill-word'
bind '"\C-i": "\C-e; done\e\C-]*\eB\eD \"$i\"\C-afor i in\C-y; do "'
The bindings can also go into the inputrc file for persistence.
Using the shortcut:
After setting things up:
Type in something like
ebook-convert *.epub .mobi
Press CtrlI
The line will transform into
for i in *.epub; do ebook-convert "$i" .mobi; done
If you want to run the command right away, you can modify the macro to append a \C-j as the last keypress, which will trigger accept-line (same as hitting Return).
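For example (the same macro as above with the extra keypress appended; an untested sketch):
bind '"\C-i": "\C-e; done\e\C-]*\eB\eD \"$i\"\C-afor i in\C-y; do \C-j"'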
You could check out zargs in zsh.
This function has a similar purpose to GNU xargs. Instead of reading lines of arguments from the standard input, it takes them from the command line
zshcontrib(1): OTHER FUNCTIONS, zargs
So, we could write:
autoload -Uz zargs
zargs -I⁂ -- *.epub -- ebook-convert ⁂ .mobi
PS: you may find zmv handy if you need to capture some portions of the patterns for building commands.
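For instance (a sketch based on zshcontrib(1); double-check the flags before relying on it), zmv's -p option substitutes an arbitrary program for mv, and -n prints the commands instead of running them:
autoload -Uz zmv
zmv -n -p ebook-convert '(*).epub' '$1.mobi'
Here (*) captures the basename of each match and $1 reuses it in the second argument.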
The for loop has a shortened form that you might like:
for f (*.epub) ebook-convert $f .mobi
You could make yourself a script that does this:
#!/bin/bash
command="$1"
shift
if [[ $# -lt 3 ]]
then
echo "Usage: ${0##*/} command files... -- arg1 arg2..."
exit 1
fi
declare -a files=()
while [ $# -gt 0 ] && [ "$1" != "--" ]
do
[ "$1" ] || { shift; continue; } # skip empty arguments
files+=("$1")
shift
done
if
[ "$1" != "--" ]
then
echo "Separator not found : end file list with --"
exit 1
fi
shift
for file in "${files[@]}"
do
"$command" "$file" "$@"
done
You can call it like this (this assumes the script is called apply_to):
apply_to command /dir/* -- arg1 arg2...
EDIT
I modified the code to insert filenames at the beginning of the command.

Bash command to see if any files in dir - test if a directory is empty [duplicate]

This question already has answers here:
Checking from shell script if a directory contains files
(30 answers)
Closed 2 years ago.
I have the following bash script:
if ls /Users/david/Desktop/empty > /dev/null
then
echo 'yes -- files'
else
echo 'no -- files'
fi
How would I modify the top line such that it evaluates true if there are one or more files in the /Users/david/Desktop/empty dir?
This is covered in detail in BashFAQ #004. Notably, use of ls for this purpose is an antipattern and should be avoided.
shopt -s dotglob # if including hidden files is desired
files=( "$dir"/* )
[[ -e $files || -L $files ]] && echo "Directory is not empty"
[[ -e $files ]] doesn't actually check if the entire array's contents exist; rather, it checks the first name returned -- which handles the case when no files match, wherein the glob expression itself is returned as the sole result.
Notably:
This is far faster than invoking ls, which requires using fork() to spawn a subshell, execve() to replace that subshell with /bin/ls, the operating system's dynamic linker to load shared libraries used by the ls binary, etc, etc. [An exception to this is extremely large directories, of tens of thousands of files -- a case in which ls will also be slow; see the find-based solution below for those].
This is more correct than invoking ls: the list of files returned by globbing is guaranteed to exactly match the literal names of files, whereas ls can munge names with hidden characters. If the first entry is a valid filename, "${files[@]}" can be safely iterated over with the assurance that each returned value will be a name, and there's no need to worry about file names with literal newlines inflating the count if the local ls implementation does not escape them.
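As a concrete illustration of the same point, the match count can be read straight off the array, with no ls output to parse (a small sketch; dir is assumed to hold the directory of interest):
shopt -s dotglob nullglob
files=( "$dir"/* )
echo "${#files[@]} entries in $dir"
With nullglob set, a non-matching glob expands to nothing, so an empty directory yields a count of 0.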
That said, an alternative approach is to use find, if you have one with the -empty extension (available both from GNU find and from modern BSDs including Mac OS):
[[ $(find -H "$dir" -maxdepth 0 -type d -empty) ]] || echo "Directory is not empty"
...if any result is given, the directory is nonempty. While slower than globbing on directories which are not unusually large, this is faster than either ls or globbing for extremely large directories not present in the direntry cache, as it can return results without a full scan.
Robust pure Bash solutions:
For background on why a pure Bash solution with globbing is superior to using ls, see Charles Duffy's helpful answer, which also contains a find-based alternative, which is much faster and less memory-intensive with large directories.[1]
Also consider anubhava's equally fast and memory-efficient stat-based answer, which, however, requires distinct syntax forms on Linux and BSD/OSX.
Updated to a simpler solution, gratefully adapted from this answer.
# EXCLUDING hidden files and folders - note the *quoted* use of glob '*'
if compgen -G '*' >/dev/null; then
echo 'not empty'
else
echo 'empty, but may have hidden files/dirs.'
fi
compgen -G is normally used for tab completion, but it is useful in this case as well:
Note that compgen -G does its own globbing, so you must pass it the glob (filename pattern) in quotes for it to output all matches. In this particular case, even passing an unquoted pattern up front would work, but the difference is worth noting.
if nothing matches, compgen -G always produces no output (irrespective of the state of the nullglob option), and it indicates via its exit code whether at least 1 match was found, which is what the conditional takes advantage of (while suppressing any stdout output with >/dev/null).
# INCLUDING hidden files and folders - note the *unquoted* use of glob *
if (shopt -s dotglob; compgen -G * >/dev/null); then
echo 'not empty'
else
echo 'completely empty'
fi
compgen -G never matches hidden items (irrespective of the state of the dotglob option), so a workaround is needed to find hidden items too:
(...) creates a subshell for the conditional; that is, the commands executed in the subshell don't affect the current shell's environment, which allows us to set the dotglob option in a localized way.
shopt -s dotglob causes * to match hidden items too (except for . and ..).
compgen -G * with unquoted *, thanks to up-front expansion by the shell, is either passed at least one filename, whether hidden or not (additional filenames are ignored), or the empty string if neither hidden nor non-hidden items exist. In the former case the exit code is 0 (signaling success and therefore a nonempty directory), in the latter 1 (signaling a truly empty directory).
[1]
This answer originally falsely claimed to offer a Bash-only solution that is efficient with large directories, based on the following approach: (shopt -s nullglob dotglob; for f in "$dir"/*; do exit 0; done; exit 1).
This is NOT more efficient, because, internally, Bash still collects all matches in an array first before entering the loop - in other words: for * is not evaluated lazily.
Here is a solution based on the stat command, which can return the number of hard links when run against a directory (or a link to a directory). It starts counting hard links from 3, as the first two are the . and .. entries, so subtracting 2 from this number gives the actual number of entries in the given directory (this includes symlinks as well).
So putting it all together:
(( ($(stat -Lc '%h' "$dir") - 2) > 0)) && echo 'not empty' || echo 'empty'
As per man stat, the options used are:
%h number of hard links
-L --dereference, follow links
EDIT: To make it BSD/OSX compatible use:
(( ($(stat -Lf '%l' "$dir") - 2) > 0)) && echo 'not empty' || echo 'empty'
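If the same script has to run on both flavors, one option (a sketch, assuming only the GNU and BSD variants shown above) is to probe which syntax the local stat accepts:
dir_entries() {
# Echo the entry count of the directory "$1" (GNU stat first, BSD fallback).
local links
links=$(stat -Lc '%h' "$1" 2>/dev/null) || links=$(stat -Lf '%l' "$1")
echo $(( links - 2 ))
}
(( $(dir_entries "$dir") > 0 )) && echo 'not empty' || echo 'empty'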

Bash script not working as expected. Wrong file listing behaviour

I've been following this tutorial (the idea can also be found in other posts of SO)
http://www.cyberciti.biz/faq/bash-loop-over-file/
This is my test script:
function getAllTests {
allfiles=$TEST_SCRIPTS/*
# Getting all stests in the
if [[ $1 == "s" ]]; then
for f in $allfiles
do
echo $f
done
fi
}
The idea is to print all files (one per line) in the directory found in TEST_SCRIPTS.
Instead, this is the output I get:
/path/to/dir/*
(The actual path obviously, but this is to convey the idea).
I have tried the following experiment in bash:
a=(./*)
This reads all files in the current directory into a as an array. However, if anything other than ./ is used, it does not work.
How can I use this procedure with a directory other than ./?
When there are no matches, the wildcard is not expanded.
I speculate that TEST_SCRIPTS contains a path which does not exist; but without access to your code, there is obviously no way to diagnose this properly.
Common solutions include shopt -s nullglob, which causes the shell to replace the wildcard with nothing when there are no matches; and explicitly checking whether the expanded value is equal to the wildcard. (In theory, the latter could misfire if there is a single file literally named *, so it is not completely bulletproof!)
By the by, the allfiles variable appears to be superfluous, and you should generally be much more meticulous about quoting. See When to wrap quotes around a shell variable? for details.
function getAllTests {
local nullglob
shopt -q nullglob || nullglob=reset
shopt -s nullglob
# Getting all stests in the # fix sentence fragment?
if [[ $1 == "s" ]]; then
for f in "$TEST_SCRIPTS"/*; do # notice quotes
echo "$f" # ditto
done
fi
# Unset if it wasn't set originally
case $nullglob in 'reset') shopt -u nullglob;; esac
}
Setting and unsetting nullglob inside a single function is probably excessive; most commonly, you would set it once at the beginning of your script, and then write the script accordingly.
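For example, a minimal sketch of the script-level approach:
#!/bin/bash
shopt -s nullglob # set once, near the top of the script
getAllTests() {
if [[ $1 == "s" ]]; then
for f in "$TEST_SCRIPTS"/*; do
echo "$f"
done
fi
}
With nullglob in effect for the whole script, the loop body is simply skipped when the directory is empty or missing.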
