I'm trying to loop through files in a directory, where the directory is passed as an argument. I currently have the following script saved in test.sh:
#!/bin/bash
for filename in "$1"/*; do
echo "File:"
echo $filename
done
And I am running the above using:
sh test.sh path/to/loop/over
However, the above doesn't output the files in the directory path/to/loop/over; instead it outputs:
File:
path/to/loop/over/*
I'm guessing it's interpreting path/to/loop/over/* as a string and not a directory. My expected output is the following:
File:
foo.txt
File:
bar.txt
Where foo.txt and bar.txt are files in the path/to/loop/over/ directory. I found this answer which suggested to add a /* after the $1, however, this doesn't seem to help (neither do these suggestions)
Iterate over content of directory
Compatible answer (not only bash)
As this question is tagged shell, there is a POSIX-compatible way:
#!/bin/sh
for file in "$1"/*; do
    [ -f "$file" ] && echo "Process '$file'."
done
This will be enough, and it works with filenames containing spaces:
$ myscript.sh /path/to/dir
Process '/path/to/dir/foo'.
Process '/path/to/dir/bar'.
Process '/path/to/dir/foo bar'.
This works well in any POSIX shell. Tested with bash, ksh, dash, zsh and busybox sh.
#!/bin/sh
cd "$1" || exit 1
for file in *; do
    [ -f "$file" ] && echo "Process '$file'."
done
This version won't print the path:
$ myscript.sh /path/to/dir
Process 'foo'.
Process 'bar'.
Process 'foo bar'.
Some bash ways
Introduction
I don't like to use shopt when it's not needed... (it changes standard bash behaviour and makes scripts less readable).
There is an elegant way of doing this in standard bash, without requiring shopt.
Of course, the previous answer works fine under bash, but there are some interesting ways of making your script more powerful, flexible, pretty and detailed...
Sample
#!/bin/bash
die() { echo >&2 "$0 ERROR: $@"; exit 1; }        # Emergency exit function
[ "$1" ] || die "Argument missing."               # Exit unless argument submitted
[ -d "$1" ] || die "Arg '$1' is not a directory." # Exit if argument is not a dir
cd "$1" || die "Can't access '$1'."               # Exit unless dir is accessible
files=(*)                                         # All file names in array $files
[ -f "$files" ] || die "No files found."          # Exit if no files found
for file in "${files[@]}"; do                     # For each file:
    echo Process "$file"                          #   process it
done
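For illustration, assuming a directory containing files named bar, baz and foo (hypothetical names, matching the declare -p output below), a run would look like:
$ ./myscript.sh /path/to/dir
Process bar
Process baz
Process foo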
Explanation: considering globbing vs real files
When doing:
files=(/path/to/dir/*)
the variable $files becomes an array containing all the file names under /path/to/dir/:
declare -p files
declare -a files=([0]="/path/to/dir/bar" [1]="/path/to/dir/baz" [2]="/path/to/dir/foo")
But if nothing matches the glob pattern, the star won't be replaced, and the array becomes:
declare -p files
declare -a files=([0]="/path/to/dir/*")
From there, looking at $files is like looking at ${files[0]}, i.e. the first field of the array. So
[ -f "$files" ] || die "No files found."
will execute the die function unless the first field of the files array is a file (use [ -e "$files" ] to check for an existing entry, [ -d "$files" ] to check for an existing directory, and so on... see man bash or help test).
But you could replace this filesystem test with a string-based test, like:
[ "$files" = "/path/to/dir/*" ] && die "No files found."
or, using array length:
((${#files[@]}==1)) && [ "${files##*/}" = "*" ] && die "No files found."
Dropping paths by using parameter expansion:
To strip the path from the filenames, instead of cd "$path" you could do:
targetPath=/path/to/dir
files=($targetPath/*)
[ -f "$files" ] || die "No files found."
Then:
declare -p files
declare -a files=([0]="/path/to/dir/bar" [1]="/path/to/dir/baz" [2]="/path/to/dir/foo")
You could then print them with:
printf 'File: %s\n' "${files[@]#$targetPath/}"
File: bar
File: baz
File: foo
This would happen if the directory is empty, or misspelled. The shell (in its default configuration) simply doesn't expand a wildcard if it has no matches. (You can control this in Bash with shopt -s nullglob; with this option, wildcards which don't match anything are simply removed.)
You can verify this easily for yourself. In a directory with four files,
sh$ echo *
a file or two
sh$ echo [ot]*
or two
sh$ echo n*
n*
And in Bash,
bash$ echo n*
n*
bash$ shopt -s nullglob
bash$ echo n*
I'm guessing you are confused about how the current working directory affects the resolution of directory names; maybe read Difference between ./ and ~/
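Putting the pieces together, a minimal sketch of the original test.sh using nullglob (run it as bash test.sh rather than sh test.sh, since nullglob is a bash option; the ${filename##*/} expansion strips the directory prefix to match the expected output):
#!/bin/bash
shopt -s nullglob            # an unmatched glob now expands to nothing, not itself
for filename in "$1"/*; do
    echo "File:"
    echo "${filename##*/}"   # print the bare file name, as in the expected output
done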
Related
I'm trying to print all directories/subdirectories from a given start directory.
for i in $(ls -A -R -p); do
    if [ -d "$i" ]; then
        printf "%s/%s \n" "$PWD" "$i"
    fi
done
This script returns all of the directories found in the . directory and all of the files in that directory, but for some reason the test fails for subdirectories. All of the directories end up in $i and the output looks exactly the same.
Let's say I have the following structure:
foo/bar/test
echo $i prints
foo/
bar/
test/
While the contents of the folders are listed like this:
./foo:
file1
file2
./bar:
file1
file2
However the test statement just prints:
PWD/TO/THIS/DIRECTORY/foo
For some reason it returns true for the first level directories, but false for all of the subdirectories.
(ls is probably not a good way of doing this and I would be glad for a find statement that solves all of my issues, but first I want to know why this script doesn't work the way you'd think.)
As pointed out in the comments, the issue is that in the ls -R output the subdirectory names appear as headers with a trailing : (e.g. ./foo:), so the -d test is false for them.
I guess that this command gives you the output you want (although it requires Bash):
# enable globstar for ** (off by default; note that a script
# does not inherit interactive shopt settings)
shopt -s globstar
# print each path ending in a / (all directories)
# ** expands recursively
printf '%s\n' **/*/
The standard way would be either to do the recursion yourself, or to use find:
find . -type d
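Since the question wants each directory printed with its absolute path, you can also pass an absolute starting point, so find prints absolute paths directly (a sketch):
find "$PWD" -type d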
Consider your output:
dir1:
dir1a
Now, the following will be true:
[ -d dir1/dir1a ]
but that's not what your code does; instead, it runs:
[ -d dir1a ]
To avoid this, don't attempt to parse ls; if you want to implement recursion in baseline POSIX sh, do it yourself:
callForEachEntry() {
    # Because calling this without any command provided would try to execute all found
    # files as commands, checking for safe/correct invocation is essential.
    if [ "$#" -lt 2 ]; then
        echo "Usage: callForEachEntry starting-directory command-name [arg1 arg2...]" >&2
        echo " ...calls command-name once for each file recursively found" >&2
        return 1
    fi
    # Try to declare variables local, swallowing/hiding error messages if this fails;
    # the code is defensively written to avoid breaking if recursion changes either
    # variable, but may be faulty if the command passed as an argument modifies the
    # "dir" or "entry" variables.
    local dir entry 2>/dev/null || : "not strict POSIX, but available in dash"
    dir=$1; shift
    for entry in "$dir"/*; do
        # skip if the glob matched nothing
        [ -e "$entry" ] || [ -L "$entry" ] || continue
        # invoke the user-provided callback for the entry we found
        "$@" "$entry"
        # recurse last, in case we are on a baseline platform where "local" failed
        if [ -d "$entry" ]; then
            callForEachEntry "$entry" "$@"
        fi
    done
}
# call printf '%s\n' for each file we recursively find; replace this with the code you
# actually want to call, wrapped in a function if appropriate.
callForEachEntry "$PWD" printf '%s\n'
find can also be used safely, but not as a drop-in replacement for the way ls was used in the original code -- for dir in $(find . -type d) is just as buggy. Instead, see the "Complex Actions" and "Actions In Bulk" sections of Using Find.
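For example, one safe pattern (a sketch, assuming bash and a find that supports -print0) reads NUL-delimited results, so names with spaces or newlines survive intact:
while IFS= read -r -d '' dir; do
    printf '%s\n' "$dir"
done < <(find . -type d -print0)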
This is a small bash program that is tasked with looking through a directory and counting how many files are in it. It should ignore other directories and only count the files.
Below is my bash code, which seems to fail to count the files in the directory. I say this because if I remove the if statement and just increment the counter, the for loop iterates and the counter prints 4 (though this includes directories). With the if statement it prints this to the console:
folder1 has files
Looking at other questions, I think the expression in my if statement is right, and I am getting no errors for syntax or other problems.
So I am simply dumbfounded as to why it is not counting the files.
#!/bin/bash
folder=$1
if [ $1 = empty ]; then
    folder=empty
    counter=0
    echo $folder has $counter files
    exit
fi
for d in $(ls $folder); do
    if [[ -f $d ]]; then
        let 'counter++'
    fi
done
echo $folder has $counter files
Thank you.
Your entire script could be simplified as below, with enhancements made. Never use the output of ls programmatically; it should be used only on the command line. The -z construct allows you to test whether the parameter following it is empty or non-empty.
For looping over files, use the default glob expansion provided by the shell. Note that && is a shorthand for performing an action when the left-hand operand evaluates true, roughly equivalent to if <condition>; then <action>; fi.
#!/usr/bin/env bash
[ -z "$1" ] && { printf 'invalid argument passed\n' >&2; exit 1; }
shopt -s nullglob
count=0
for file in "$1"/*; do
    [ -f "$file" ] && ((count++))
done
printf 'folder %s had %d files\n' "$1" "$count"
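A hypothetical run (script saved as count.sh, with folder1 containing two regular files and one subdirectory):
$ ./count.sh folder1
folder folder1 had 2 files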
I want to know if my file exists in any of the subdirectories below. The subdirectories are created in the steps above in my shell script; the code below always tells me the file does not exist (even if it does), and I want the path to be printed as well.
#!/bin/bash
....
if ! [[ -e [ **/**/somefile.txt && -s **/**/somefile.txt ]]; then
    echo "===> Warn: somefile.txt was not created in the following path: "
    # I want to be able to print the path in which the file is not generated
    exit 1
fi
I know the file name is somefile.txt, which is to be created in all subdirectories, but the subdirectory names change a lot... hence the globbing.
#!/bin/bash
shopt -s globstar ## enable **, which by default has no special behavior
for d in **/; do
    if ! [[ -s "$d/somefile.txt" ]]; then
        echo "===> WARN: somefile.txt was not created (or is empty) in $d" >&2
        exit 1
    fi
done
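If you would rather report every subdirectory that is missing the file instead of stopping at the first, a small variation (a sketch) collects the failures and exits once at the end:
#!/bin/bash
shopt -s globstar
missing=0
for d in **/; do
    if ! [[ -s "$d/somefile.txt" ]]; then
        echo "===> WARN: somefile.txt was not created (or is empty) in $d" >&2
        missing=1
    fi
done
exit "$missing"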
I have written a bash script just to display the names of all the files in a given directory, but when I run it, it breaks file names which have spaces.
if [ $# -eq 0 ]
then
    echo "give a source directory in the command line argument in order to rename the jpg file"
    exit 1
fi
if [ ! -d "$1" ]; then
    exit 2
fi
if [ -d "$1" ]
then
    for i in $(ls "$1")
    do
        echo "$i"
    done
fi
I am getting the following output when I run the bash script:
21151991jatinkhurana_image
(co
py).jpg
24041991jatinkhurana_im
age.jpg
35041991jatinkhurana_image
.jpg
What I have tried so far is resetting the IFS variable, like IFS=$(echo -en "\t\n\0"), but I found no change...
If anyone knows, please help me.
Do not loop through the result of ls. Parsing ls makes the world worse (good read: Why you shouldn't parse the output of ls).
Instead, you can make use of the *, which expands to the existing contents of a given directory:
for file in /your/dir/*
do
    echo "this is my file: $file"
done
Using variables (quote the expansion so that a directory name with spaces also works):
for file in "$dir"/*
do
    echo "this is my file: $file"
done
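As discussed earlier in this thread, an unmatched glob is passed through literally, so if the directory may be empty you can either enable nullglob (bash) or guard each iteration (a sketch):
for file in "$dir"/*
do
    [ -e "$file" ] || continue    # skip the literal pattern when nothing matched
    echo "this is my file: $file"
done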
This work is being done on a test VirtualBox machine.
In my /root dir, I have created the following:
"/root/foo"
"/root/bar"
"/root/i have multiple words"
Here is the (relevant)code I currently have
if [ ! -z "$BACKUP_EXCLUDE_LIST" ]
then
    TEMPIFS=$IFS
    IFS=:
    for dir in $BACKUP_EXCLUDE_LIST
    do
        if [ -e "$3/$dir" ] # $3 is the backup source
        then
            BACKUP_EXCLUDE_PARAMS="$BACKUP_EXCLUDE_PARAMS --exclude='$dir'"
        fi
    done
    IFS=$TEMPIFS
fi
tar $BACKUP_EXCLUDE_PARAMS -cpzf $BACKUP_PATH/$BACKUP_BASENAME.tar.gz -C $BACKUP_SOURCE_DIR $BACKUP_SOURCE_TARGET
This is what happens when I run my script with sh -x:
+ IFS=:
+ [ -e /root/foo ]
+ BACKUP_EXCLUDE_PARAMS= --exclude='foo'
+ [ -e /root/bar ]
+ BACKUP_EXCLUDE_PARAMS= --exclude='foo' --exclude='bar'
+ [ -e /root/i have multiple words ]
+ BACKUP_EXCLUDE_PARAMS= --exclude='foo' --exclude='bar' --exclude='i have multiple words'
+ IFS=
# So far so good
+ tar --exclude='foo' --exclude='bar' --exclude='i have multiple words' -cpzf /backup/root/daily/root_20130131.071056.tar.gz -C / root
tar: have: Cannot stat: No such file or directory
tar: multiple: Cannot stat: No such file or directory
tar: words': Cannot stat: No such file or directory
tar: Exiting with failure status due to previous errors
# WHY? :(
The check completes successfully, but the --exclude='i have multiple words' does not work.
Mind you that it DOES work when I type it in my shell manually:
tar --exclude='i have multiple words' -cf /somefile.tar.gz /root
I know that this would work in bash when using arrays, but I want this to be POSIX.
Is there a solution to this?
Consider this script ('with whitespace' and 'examples.desktop' are sample files):
#!/bin/bash
arr=("with whitespace" "examples.desktop")
for file in ${arr[@]}
do
    ls $file
done
This outputs exactly the same as yours:
21:06 ~ $ bash test.sh
ls: cannot access with: No such file or directory
ls: cannot access whitespace: No such file or directory
examples.desktop
You can set IFS to the newline character so that the spaces in file names no longer cause word splitting:
#!/bin/bash
arr=("with whitespace" "examples.desktop")
(IFS=$'\n';
for file in ${arr[@]}
do
    ls $file
done
)
The output of the second version should be:
21:06 ~ $ bash test.sh
with whitespace
examples.desktop
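That said, with bash arrays the more robust fix is simply to quote the expansion, which preserves each element intact regardless of IFS:
#!/bin/bash
arr=("with whitespace" "examples.desktop")
for file in "${arr[@]}"
do
    ls "$file"
done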
David the H. from the LinuxQuestions forums steered me in the right direction.
First of all, in my question, I did not keep IFS=: in effect all the way through to the tar command.
Second of all, I included set -f for safety.
BACKUP_EXCLUDE_LIST="foo:bar:i have multiple words"
# Grouping our parameters
if [ ! -z "$BACKUP_EXCLUDE_LIST" ]
then
    IFS=:  # Here we set our temp $IFS
    set -f # Disable globbing
    for dir in $BACKUP_EXCLUDE_LIST
    do
        if [ -e "$3/$dir" ] # $3 is the directory that contains the directories defined in $BACKUP_EXCLUDE_LIST
        then
            BACKUP_EXCLUDE_PARAMS="$BACKUP_EXCLUDE_PARAMS:--exclude=$dir"
        fi
    done
fi
# We are ready to tar
tar $BACKUP_EXCLUDE_PARAMS \
    -cpzf "$BACKUP_PATH/$BACKUP_BASENAME.tar.gz" \
    -C "$BACKUP_SOURCE_DIR" \
    "$BACKUP_SOURCE_TARGET"
unset IFS # our custom IFS has done its job. Let's unset it!
set +f    # Globbing is back on
I advise against using the TEMPIFS variable, like I did, because that method does not set IFS back correctly. It's best to unset IFS when you are done with it.
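For comparison, if POSIX compatibility were ever dropped, bash arrays would avoid the IFS and set -f juggling entirely (a sketch using the same variables as above):
#!/bin/bash
# split the colon-separated list into an array, then build tar arguments safely
IFS=: read -r -a excludes <<< "$BACKUP_EXCLUDE_LIST"
exclude_params=()
for dir in "${excludes[@]}"
do
    [ -e "$3/$dir" ] && exclude_params+=("--exclude=$dir")
done
tar "${exclude_params[@]}" \
    -cpzf "$BACKUP_PATH/$BACKUP_BASENAME.tar.gz" \
    -C "$BACKUP_SOURCE_DIR" \
    "$BACKUP_SOURCE_TARGET"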