Test -d directory true - subdirectory false (POSIX) - shell

I'm trying to print all directories/subdirectories from a given start directory.
for i in $(ls -A -R -p); do
    if [ -d "$i" ]; then
        printf "%s/%s \n" "$PWD" "$i"
    fi
done;
This script prints all of the directories found in the . directory (ls also lists all of the files there), but for some reason the test fails for the subdirectories. All of the directory names do end up in $i, yet the output looks exactly the same.
Let's say I have the following structure:
foo/bar/test
echo $i prints
foo/
bar/
test/
While the contents of the folders are listed like this:
./foo:
file1
file2
./bar:
file1
file2
However, the test statement just prints:
PWD/TO/THIS/DIRECTORY/foo
For some reason it returns true for the first level directories, but false for all of the subdirectories.
(ls is probably not a good way of doing this and I would be glad for a find statement that solves all of my issues, but first I want to know why this script doesn't work the way you'd think.)

As pointed out in the comments, the issue is that the directory names in the ls -R output include a trailing : (for example ./foo:), so -d is false for them.
I guess that this command gives you the output you want (although it requires Bash):
# enable globstar for **
# disabled in non-interactive shell (e.g. a script)
shopt -s globstar
# print each path ending in a / (all directories)
# ** expands recursively
printf '%s\n' **/*/
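With the example structure from the question (foo containing bar, which contains test), that would print something like:
$ shopt -s globstar
$ printf '%s\n' **/*/
foo/
foo/bar/
foo/bar/test/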
The standard way would be either to do the recursion yourself, or to use find:
find . -type d

Consider your output:
dir1:
dir1a
Now, the following will be true:
[ -d dir1/dir1a ]
but that's not what your code does; instead, it runs:
[ -d dir1a ]
To avoid this, don't attempt to parse ls; if you want to implement recursion in baseline POSIX sh, do it yourself:
callForEachEntry() {
    # because calling this without any command provided would try to execute all found files
    # as commands, checking for safe/correct invocation is essential.
    if [ "$#" -lt 2 ]; then
        echo "Usage: callForEachEntry starting-directory command-name [arg1 arg2...]" >&2
        echo " ...calls command-name once for each file recursively found" >&2
        return 1
    fi
    # try to declare variables local, swallow/hide error messages if this fails; code is
    # defensively written to avoid breaking if recursion changes either, but may be faulty if
    # the command passed as an argument modifies the "dir" or "entry" variables.
    local dir entry 2>/dev/null ||: "not strict POSIX, but available in dash"
    dir=$1; shift
    for entry in "$dir"/*; do
        # skip if the glob matched nothing
        [ -e "$entry" ] || [ -L "$entry" ] || continue
        # invoke the user-provided callback for the entry we found
        "$@" "$entry"
        # recurse last, in case we're on a baseline platform where the "local" above failed
        if [ -d "$entry" ]; then
            callForEachEntry "$entry" "$@"
        fi
    done
}
# call printf '%s\n' for each file we recursively find; replace this with the code you
# actually want to call, wrapped in a function if appropriate.
callForEachEntry "$PWD" printf '%s\n'
find can also be used safely, but not as a drop-in replacement for the way ls was used in the original code -- for dir in $(find . -type d) is just as buggy. Instead, see the "Complex Actions" and "Actions In Bulk" sections of Using Find.
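For reference, one safe pattern in that spirit (a sketch; the printf is a stand-in for whatever per-directory action you actually need):
# runs an inline sh script once per batch of directories found by find;
# names are passed as arguments, so spaces and newlines are handled correctly
find . -type d -exec sh -c '
    for dir in "$@"; do
        printf "%s\n" "$dir"   # replace with the real per-directory work
    done
' _ {} +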


For files in directory Bash [duplicate]

I'm trying to loop through files in a directory, where the directory is passed through as an argument. I currently have the following script saved in test.sh:
#!/bin/bash
for filename in "$1"/*; do
    echo "File:"
    echo $filename
done
And I am running the above using:
sh test.sh path/to/loop/over
However, the above doesn't output the files at the directory path/to/loop/over, it instead outputs:
File:
path/to/loop/over/*
I'm guessing it's interpreting path/to/loop/over/* as a string and not a directory. My expected output is the following:
File:
foo.txt
File:
bar.txt
Where foo.txt and bar.txt are files in the path/to/loop/over/ directory. I found this answer which suggested to add a /* after the $1, however, this doesn't seem to help (neither do these suggestions)
Iterate over content of directory
Compatible answer (not only bash)
As this question is tagged shell, there is a POSIX compatible way:
#!/bin/sh
for file in "$1"/* ;do
[ -f "$file" ] && echo "Process '$file'."
done
This will be enough (and it works with filenames containing spaces):
$ myscript.sh /path/to/dir
Process '/path/to/dir/foo'.
Process '/path/to/dir/bar'.
Process '/path/to/dir/foo bar'.
This works well under any POSIX shell. Tested with bash, ksh, dash, zsh and busybox sh.
#!/bin/sh
cd "$1" || exit 1
for file in * ;do
    [ -f "$file" ] && echo "Process '$file'."
done
This version won't print the path:
$ myscript.sh /path/to/dir
Process 'foo'.
Process 'bar'.
Process 'foo bar'.
Some bash ways
Introduction
I don't like to use shopt when it's not needed... (it changes standard
bash behaviour and makes scripts less readable).
There is an elegant way of doing this in standard bash, without requiring shopt.
Of course, the previous answer works fine under bash, but there are some
interesting ways of making your script more powerful, flexible, pretty, detailed...
Sample
#!/bin/bash
die() { echo >&2 "$0 ERROR: $@";exit 1;}          # Emergency exit function
[ "$1" ] || die "Argument missing."               # Exit unless argument submitted
[ -d "$1" ] || die "Arg '$1' is not a directory." # Exit if argument is not dir
cd "$1" || die "Can't access '$1'."               # Exit unless access dir.
files=(*)                                         # All file names in array $files
[ -f "$files" ] || die "No files found."          # Exit if no files found
for file in "${files[@]}";do                      # foreach file:
    echo Process "$file"                          # Process file
done
Explanation: considering globbing vs real files
When doing:
files=(/path/to/dir/*)
variable $files becomes an array containing all files contained under /path/to/dir/:
declare -p files
declare -a files=([0]="/path/to/dir/bar" [1]="/path/to/dir/baz" [2]="/path/to/dir/foo")
But if nothing match glob pattern, star won't be replaced and array become:
declare -p files
declare -a files=([0]="/path/to/dir/*")
From there, looking for $files is like looking for ${files[0]}, i.e. the first field in the array. So
[ -f "$files" ] || die "No files found."
will execute the die function unless the first field of the files array is a file ([ -e "$files" ] to check for an existing entry, [ -d "$files" ] to check for an existing directory, and so on... see man bash or help test).
But you could replace this filesystem test with some string-based test, like:
[ "$files" = "/path/to/dir/*" ] && die "No files found."
or, using array length:
((${#files[@]}==1)) && [ "${files##*/}" = "*" ] && die "No files found."
Dropping paths by using Parameter expansion:
To suppress the path from the filenames, instead of cd $path you could do:
targetPath=/path/to/dir
files=($targetPath/*)
[ -f "$files" ] || die "No files found."
Then:
declare -p files
declare -a files=([0]="/path/to/dir/bar" [1]="/path/to/dir/baz" [2]="/path/to/dir/foo")
You could then do:
printf 'File: %s\n' "${files[@]#$targetPath/}"
File: bar
File: baz
File: foo
This would happen if the directory is empty, or misspelled. The shell (in its default configuration) simply doesn't expand a wildcard if it has no matches. (You can control this in Bash with shopt -s nullglob; with this option, wildcards which don't match anything are simply removed.)
You can verify this easily for yourself. In a directory with four files,
sh$ echo *
a file or two
sh$ echo [ot]*
or two
sh$ echo n*
n*
And in Bash,
bash$ echo n*
n*
bash$ shopt -s nullglob
bash$ echo n*
I'm guessing you are confused about how the current working directory affects the resolution of directory names; maybe read Difference between ./ and ~/
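If you want to keep the original loop from the question, a minimal guarded version might look like this (a sketch; it simply skips the body when the glob did not match anything):
#!/bin/bash
for filename in "$1"/*; do
    [ -e "$filename" ] || continue   # nothing matched: the literal pattern is not a file
    echo "File:"
    echo "$filename"
done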

Prevent "mv" command from raising error if no file matches the glob. eg" mv *.json /dir/

I want to move all JSON files created by a Jenkins job to a different folder.
It is possible that the job does not create any JSON file.
In that case the mv command raises an error, so the job fails.
How do I prevent the mv command from raising an error when no file is found?
Welcome to SO.
Why do you not want the error?
If you just don't want to see the error, then you could always just throw it away with 2>/dev/null, but PLEASE don't do that. Not every error is the one you expect, and this is a debugging nightmare. You could write it to a log with 2>$logpath and then build in logic to read that to make certain it's ok, and ignore or respond accordingly --
mv *.json /dir/ 2>$someLog
executeMyLogParsingFunction # verify expected err is the ONLY err
If it's because you have set -e or a trap in place, and you know it's ok for the mv to fail (which might not be because there is no file!), then you can use this trick -
mv *.json /dir/ || echo "(Error ok if no files found)"
or
mv *.json /dir/ ||: # : is a no-op synonym for "true" that returns 0
see https://www.gnu.org/software/bash/manual/html_node/Conditional-Constructs.html
(If it's failing simply because the mv is returning a nonzero as the last command, you could also add an explicit exit 0, but don't do that either - fix the actual problem rather than patching the symptom. Any of these other solutions should handle that, but I wanted to point out that unless there's a set -e or a trap that catches the error, it shouldn't cause the script to fail unless it's the very last command.)
Better would be to specifically handle the problem you expect without disabling error handling on other problems.
shopt -s nullglob # globs with no match do not eval to the glob as a string
for f in *.json; do mv "$f" /dir/; done # no match means no loop entry
c.f. https://www.gnu.org/software/bash/manual/html_node/The-Shopt-Builtin.html
or if you don't want to use shopt,
for f in *.json; do [[ -e "$f" ]] && mv "$f" /dir/; done
Note that I'm only testing existence, so that will include any match, including directories, symlinks, named pipes... you might want [[ -f "$f" ]] && mv "$f" /dir/ instead.
c.f. https://www.gnu.org/software/bash/manual/html_node/Bash-Conditional-Expressions.html
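Putting the two suggestions together, a short sketch that only moves regular files and stays silent when there are none:
shopt -s nullglob                       # an unmatched glob expands to nothing
for f in *.json; do
    [[ -f "$f" ]] && mv -- "$f" /dir/   # only regular files; skip dirs, pipes, etc.
done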
This is expected behavior -- the shell leaves *.json unexpanded when there are no matches precisely so that mv can show a useful error.
If you don't want that, though, you can always check the list of files yourself, before passing it to mv. As an approach that works with all POSIX-compliant shells, not just bash:
#!/bin/sh
# using a function here gives us our own private argument list.
# that's useful because minimal POSIX sh doesn't provide arrays.
move_if_any() {
    dest=$1; shift # shift makes the old $2 be $1, the old $3 be $2, etc.
    # so, we then check how many arguments were left after the shift;
    # if it's only one, we need to also check whether it refers to a filesystem
    # object that actually exists.
    if [ "$#" -gt 1 ] || [ -e "$1" ] || [ -L "$1" ]; then
        mv -- "$@" "$dest"
    fi
}
# put destination_directory/ in $1 where it'll be shifted off
# $2 will be either nonexistent (if we were really running in bash with nullglob set)
# ...or the name of a legitimate file or symlink, or the string '*.json'
move_if_any destination_directory/ *.json
...or, as a more bash-specific approach:
#!/bin/bash
files=( *.json )
if (( ${#files[@]} > 1 )) || [[ -e ${files[0]} || -L ${files[0]} ]]; then
    mv -- "${files[@]}" destination/
fi
Loop over all JSON files and move each of them, if it exists, in a one-liner:
for X in *.json; do [[ -e $X ]] && mv "$X" /dir/; done

Checking the input arguments to script to be empty failed in bash script

This is a small bash program that is tasked with looking through a directory and counting how many files are in it. It should ignore other directories and only count the files.
Below is my bash code, which seems to fail to count the files in the directory. I say this because if I remove the if statement and just increment the counter, the for loop iterates and prints 4 in the counter (though that count includes directories). With the if statement it prints this to the console:
folder1 has files
Looking at other questions I think the expression in my if statement is right and I am getting no compilation errors for syntax or another problems.
So I just simply dumbfounded as to why it is not counting the files.
#!/bin/bash
folder=$1
if [ $1 = empty ]; then
    folder=empty
    counter=0
    echo $folder has $counter files
    exit
fi
for d in $(ls $folder); do
    if [[ -f $d ]]; then
        let 'counter++'
    fi
done
echo $folder has $counter files
Thank you.
Your entire script could be simplified as below, with a few enhancements. Never use the output of ls programmatically; it should be used only on the command line. The -z construct allows you to assert whether the parameter following it is empty or non-empty.
For looping over files, use the default glob expansion provided by the shell. Note that && is a short-hand to perform an action when the left-hand side of the operator returned a true condition, roughly a short-hand equivalent of if <condition>; then <action>; fi
#!/usr/bin/env bash
[ -z "$1" ] && { printf 'invalid argument passed\n' >&2 ; exit 1 ; }
shopt -s nullglob
count=0   # initialise so the printf below always gets a number
for file in "$1"/*; do
    [ -f "$file" ] && ((count++))
done
printf 'folder %s had %d files\n' "$1" "$count"
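If you are happy to rely on find instead, a hedged alternative (note that -maxdepth is a GNU/BSD extension rather than strict POSIX) that counts regular files without a shell loop:
#!/bin/sh
# emit one character per regular file directly inside "$1", then count the characters;
# this stays correct even for filenames that contain newlines
count=$(find "$1" -maxdepth 1 -type f -exec printf x \; | wc -c)
printf 'folder %s has %d files\n' "$1" "$count"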

How do I check whether a file or file directory exist in bash?

I currently have this bash script (which is located in my home directory, i.e., /home/username/ and I am running it as root as it's necessary for the icon copying lines):
cd /home/username/Pictures/Icon*
declare -a A={Arch,Debian,Fedora,Mageia,Manjaro,OpenSUSE}
declare -a B={Adwaita,Faenza,gnome,Humanity}
for i in $A; do
    for j in $B; do
        if test -e /usr/share/icons/$j/scalable ; else
            mkdir /usr/share/icons/$j/scalable/
        fi
        if test -e /usr/share/icons/$j/scalable/$i.svg ; else
            cp -a $i*.svg /usr/share/icons/$j/scalable/$i.svg
        fi
    done
done
What I want this script to do is to copy icons from my Pictures/Icons and logos directory to the scalable theme (specified in $B) subdirectories in /usr/share/icons. Before it does this, however, I'd like it to create a scalable directory in these theme subdirectories if it does not already exist. The problem is that the else part of the conditionals is not being read properly, as I keep receiving this error:
./copyicon.sh: line 8: syntax error near unexpected token `else'
./copyicon.sh: line 8: ` if test -e /usr/share/icons/$j/scalable ; else'
If you're wondering about the test -e ... in the conditional: it's based on a textbook on bash scripting I've been following.
Checking file and/or directory existence
To check whether a file exists in bash, you use the -f operator. For directories, use -d. Example usage:
$ mkdir dir
$ [ -d dir ] && echo exists!
exists!
$ rmdir dir
$ [ -d dir ] && echo exists!
$ touch file
$ [ -f file ] || echo "doesn't exist..."
$ rm file
$ [ -f file ] || echo "doesn't exist..."
doesn't exist...
For more information simply execute man test.
A note on -e: this test operator checks whether a file exists. While this may seem like a good choice, it's better to use -f, which will return false if the file isn't a regular file. /dev/null, for example, is a file but not a regular file. Having the check return true is undesired in this case.
A note on variables
Be sure to quote variables too; once a space or any other special character is contained in a variable, it can have undesired side effects. So when you test for the existence of files and directories, wrap the file/dir in double quotes. Something like [ -f "/path/to/some/${dir}/" ] will work, while the following would fail if there is a space in dir: [ -f /path/to/some/${dir}/ ].
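To see why the quoting matters, compare (using a hypothetical directory whose name contains a space):
$ mkdir "my dir"
$ dir="my dir"
$ [ -d "$dir" ] && echo exists!     # quoted: tested as a single path
exists!
$ [ -d $dir ] && echo exists!       # unquoted: splits into "my" and "dir" and the test errors out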
Fixing the syntax error
You are experiencing a syntax error in the control statements. A bash if clause is structured as following:
if ...; then
...
fi
Or optional with an else clause:
if ...; then
...
else
...
fi
You cannot omit the then clause. If you wish to only use the else clause you should negate the condition, resulting in the following code:
if [ ! -f "/usr/share/icons/$j/scalable" ]; then
mkdir "/usr/share/icons/$j/scalable/"
fi
Here we add an exclamation point (!) to flip the expression's evaluation. If the expression evaluates to true, the same expression preceded by ! will return false and the other way around.
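As an aside, mkdir -p only creates the directory when it is missing (and creates parent directories as needed), so it can replace the existence test entirely:
mkdir -p "/usr/share/icons/$j/scalable/"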
You can't skip the then part of the if statement; the easiest solution would be to just negate the test:
if [[ ! -e /usr/share/icons/${j}/scalable ]] ; then
    mkdir /usr/share/icons/${j}/scalable/
fi
if [[ ! -e /usr/share/icons/${j}/scalable/${i}.svg ]] ; then
    cp -a ${i}*.svg /usr/share/icons/${j}/scalable/${i}.svg
fi
I left it with -e (exists), but you might consider using -d for directories or -f for files and some error handling to catch stuff (e.g. /usr/share/icons/$j/scalable/ exists, but is a file and not a directory for whatever reason.)
I also noticed that in your original code you are potentially trying to copy multiple files into one:
cp -a $i*.svg /usr/share/icons/$j/scalable/$i.svg
I left it that way in my example in case you are sure that it is always only one file and are intentionally renaming it. If not I'd suggest only specifying a target directory.
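A sketch of that variant, keeping the original file names and creating the target directory first if needed:
mkdir -p "/usr/share/icons/$j/scalable/"
cp -a "$i"*.svg "/usr/share/icons/$j/scalable/"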

bash - recursive script can't see files in sub directory

I have a recursive script which iterates over a list of names, some of which are files and some of which are directories.
If a name is a (non-empty) directory, the script should call itself with all of the files in that directory and check whether they are legal.
The part of the code making the recursive call:
if [[ -d $var ]] ; then
    if [ "$(ls -A $var)" ]; then
        ./validate `ls $var`
    fi
fi
The part of code checking if the files are legal:
if [[ -f $var ]]; then
    some code
fi
But after making the recursive call, I can no longer check any of the files inside that directory: they are not in the same directory as the main script, so the -f $var test cannot see them.
Any suggestions on how I can still see them and use them?
Why not use find? It's a simple and easy solution to the problem.
Always quote variables; you never know when you will find a file or directory name with spaces.
shopt -s nullglob
if [[ -d "$path" ]] ; then
    contents=( "$path"/* )
    if (( ${#contents[@]} > 0 )); then
        "$0" "${contents[@]}"
    fi
fi
you're re-inventing find
of course, var is a lousy variable name
if you're recursively calling the script, you don't need to hard-code the script name.
you should consider putting the logic into a function in the script, and the function can recursively call itself, instead of having to spawn a new process to invoke the shell script each time. If you do this, use $FUNCNAME instead of "$0"; a sketch of that approach follows.
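A minimal sketch of that function-based approach (the function name and the placeholder for the legality checks are illustrative, not from the original script):
validate_tree() {
    local entry
    for entry in "$1"/*; do
        if [[ -d "$entry" ]]; then
            "$FUNCNAME" "$entry"          # recurse within the same process
        elif [[ -f "$entry" ]]; then
            : # ... run the legality checks on "$entry" here ...
        fi
    done
}
validate_tree "$1"    # e.g. start from the directory given as the first argument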
A few people have mentioned how find might solve this problem; I just wanted to show how that might be done:
find /yourdirectory -type f -exec ./validate {} +
This will find all regular files in yourdirectory and recursively in all its sub-directories, and pass their paths as arguments to ./validate. The {} is expanded to the paths of the files that find locates within yourdirectory. The + at the end means that each call to validate is made with a large batch of files, instead of calling it once per file (which is what happens when the -exec action is terminated with \; instead of +); this sometimes provides a huge speedup.
One option is to change directory (carefully) into the sub-directory:
if [[ -d "$var" ]] ; then
if [ "$(ls -A $var)" ]; then
(cd "$var"; exec ./validate $(ls))
fi
fi
The outer parentheses start a new shell so the cd command does not affect the main shell. The exec replaces the original shell with (a new copy of) the validate script. Using $(...) instead of back-ticks is sensible. In general, it is sensible to enclose variable names in double quotes when they refer to file names that might contain spaces (but see below). The $(ls) will list the files in the directory.
Heaven help you with the ls commands if any file names or directory names contain spaces; you should probably be using * glob expansion instead. Note that a directory containing a single file with a name such as -n would trigger a syntax error in your script.
Corrigendum
As Jens noted in a comment, the location of the shell script (validate) has to be adjusted as you descend the directory hierarchy. The simplest mechanism is to have the script on your PATH, so you can write exec validate or even exec $0 instead of exec ./validate. Failing that, you need to adjust the value of $0 — assuming your shell leaves $0 as a relative path and doesn't mess around with converting it to an absolute path. So, a revised version of the code fragment might be:
# For validate on PATH or absolute name in $0
if [[ -d "$var" ]] ; then
if [ "$(ls -A $var)" ]; then
(cd "$var"; exec $0 $(ls))
fi
fi
or:
# For validate not on PATH and relative name in $0
if [[ -d "$var" ]] ; then
if [ "$(ls -A $var)" ]; then
(cd "$var"; exec ../$0 $(ls))
fi
fi
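Another way to sidestep adjusting $0 at each level (a sketch, not part of the original answer) is to compute an absolute path to the script once, near the top, and reuse it for every recursive call:
# resolve the script's absolute path once, before any cd happens
script_abs=$(cd "$(dirname -- "$0")" && pwd)/$(basename -- "$0")
if [[ -d "$var" ]] ; then
    if [ "$(ls -A $var)" ]; then
        (cd "$var"; exec "$script_abs" $(ls))
    fi
fi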
