Loop over find result in bash

I have a bash script written by a previous colleague at my company. Its shellcheck result is horrible, and I, a zsh user, can't run the script. He seems to use the notorious find-with-for-loop pattern in bash, but I can't figure out how to do it better.
At the moment I have a temporary fix.
This is his code:
#!/bin/bash
releases=$(for d in $(find ${DELIVERIES} -maxdepth 1 -type d -name "*_delivery_33_SR*" | sort) ; do echo ${d##*_} ; done)
for sr in ${releases[@]}
do
    echo "Release $sr"
    deliveries=$(find ${deliveries_path}/*${sr}/ -type f -name "*.ear" -o -name "*.war" | sort)
    if [ ! -e ${sr}.txt ]
    then
        for d in ${deliveries[@]}
        do
            echo "$(basename $d)" | tee -a ${sr}.txt
        done
    fi
    echo
done
And this is my code, which at least manages to loop over the first part:
#!/bin/bash
for release in $(for d in $(find "${DELIVERIES}" -maxdepth 1 -type d -name "*_delivery_33_SR*" | sort) ; do echo "${d##*_}" ; done)
do
    echo "Release $release"
done
As you can see, I needed to put the find inside the loop, and I can't save its output in a variable: when I store it and try to loop over it, the newlines get mangled and it behaves like a single element. Could anyone suggest how I should solve this problem? This previous colleague uses this kind of find search a lot.
EDIT:
The script went into each folder with a specific name, created a file X.X.X.txt with the version number in the X part, and appended the filenames inside the subfolder to that X.X.X.txt.

Blindly refactoring gets me something like
#!/bin/bash
for d in "$DELIVERIES"/*_delivery_33_SR*/; do
    d=${d%/}                    # drop the trailing slash the */ glob leaves behind
    sr=${d##*_}
    echo "Release $sr"
    if [ ! -e "${sr}.txt" ]
    then
        # parenthesize the -name tests so -type f applies to both extensions
        find "${deliveries_path}"/*"${sr}"/ -type f \( -name "*.ear" -o -name "*.war" \) |
            sort |
            xargs -n 1 basename |
            tee -a "$sr.txt"
    fi
    echo
done
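For cases where the names really do have to come from find (to honour -maxdepth, -type and friends), here is a minimal sketch of the null-delimited loop pattern, assuming GNU find and sort for -print0/-z support:

#!/bin/bash
# find emits NUL-terminated names, sort -z keeps them NUL-terminated,
# and read -r -d '' consumes them, so spaces and newlines in names are safe
while IFS= read -r -d '' d; do
    sr=${d##*_}
    echo "Release $sr"
done < <(find "$DELIVERIES" -maxdepth 1 -type d -name "*_delivery_33_SR*" -print0 | sort -z)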

Related

How to stop Bash expansion of '*.h' in a function?

When trying to run the following function, Bash expands my variable in an unexpected way, which prevents me from getting my expected result.
It comes down to the way Bash deals with the "*.h" that I am passing into the function.
Here is the function I call:
link_files_of_type_from_directory "*.h" ./..
I would expect this variable to stay this way all the way through, but by the time it hits the echo $command_to_run; part of my Bash script, the variable has expanded to...
MyHeader1.h MyHeader2.h MyHeader3.h
and so on.
What I want is for Bash to not expand my files so that my code runs the following:
find ./.. -type f -name '*.h'
Instead of
find ./.. -type f -name MyHeader1.h MyHeader2.h MyHeader3.h
This is the code:
function link_files_of_type_from_directory {
    local file_type=$1;
    local directory_to_link=$2;
    echo "File type $file_type";
    echo "Directory to link $directory_to_link";
    command="find $directory_to_link -type f -name $file_type";
    echo $command;
    #for i in $(find $directory_to_link -type f -name $file_type);
    for i in $command;
    do
        echo $i;
        if test -e $(basename $i); then
            echo $i exists;
        else
            echo Linking: $i;
            ln -s $i;
        fi
    done;
}
How can I prevent the expansion so that Bash does search for files that end in *.h in the directory I want to pass in?
UPDATE 1:
So I've updated the call to be
link_files_of_type_from_directory "'*.h'" ..
And the function now assembles the string of the command to be evaluated like so:
mmd="find $directory_to_link -type f -name $file_type";
When I echo it out—it's correct :)
find .. -type f -name '*.h'
But I can't seem to get the find command to actually run. Here are the errors / mistakes I'm getting while trying to correctly assemble the for loop:
# for i in $mmd; # LOOPS THROUGH STRINGS IN COMMAND
# for i in '$(mmd)'; # RUNS MMD LITERALLY
# for i in ${!mmd}; # Errors out with: INVALID VARIABLE NAME — find .. -type f -name '*.h':
Would love help on this part—even though it is a different question :)
With your variables quoted, the stray semicolons removed, and your loop wrapped into an -exec action to prevent problems with spaces, tabs, and newlines in filenames, your function looks like this:
function link_files_of_type_from_directory {
    local file_type=$1
    local directory_to_link=$2
    echo "File type $file_type"
    echo "Directory to link $directory_to_link"
    find "$directory_to_link" -type f -name "$file_type" -exec sh -c '
        for i do
            echo "$i"
            if test -e "$(basename "$i")"; then
                echo "$i exists"
            else
                echo "Linking: $i"
                ln -s "$i"
            fi
        done
    ' sh {} +
}
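For what it's worth, the call site would then quote the pattern once, so the shell passes it to the function verbatim:

# single quotes stop the shell from expanding *.h before find ever sees it
link_files_of_type_from_directory '*.h' ./..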

Count the number of files in a directory with a blank in its name

If you want a breakdown of how many files are in each dir under your current dir:
for i in $(find . -maxdepth 1 -type d) ; do
    echo -n $i": " ;
    (find $i -type f | wc -l) ;
done
It does not work when the directory name has a blank in it. Can anyone here tell me how I must edit this shell script so that such directory names are also accepted when counting their file contents?
Thanks
Your code suffers from a common issue described in http://mywiki.wooledge.org/BashPitfalls#for_i_in_.24.28ls_.2A.mp3.29.
In your case you could do this instead:
for i in */; do
    echo -n "${i%/}: "
    find "$i" -type f | wc -l
done
This will work with all types of file names:
find . -maxdepth 1 -type d -exec sh -c 'printf "%s: %i\n" "$1" "$(find "$1" -type f | wc -l)"' Counter {} \;
How it works
find . -maxdepth 1 -type d
This finds the directories just as you were doing
-exec sh -c 'printf "%s: %i\n" "$1" "$(find "$1" -type f | wc -l)"' Counter {} \;
This feeds each directory name to a shell script which counts the files, similarly to what you were doing.
There are some tricks here: Counter {} are passed as arguments to the shell script. Counter becomes $0 (which is only used if the shell script generates an error). find replaces {} with the name of a directory it found, and this becomes available to the shell script as $1. This is done in a way that is safe for all types of file names.
Note that, wherever $1 is used in the script, it is inside double quotes. This protects it from word splitting and other unwanted shell expansions.
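A minimal standalone sketch of how those two arguments land in the inline script (the directory name here is just an illustration):

# $0 receives "Counter", $1 receives the name substituted for {}
sh -c 'printf "argv0=%s argv1=%s\n" "$0" "$1"' Counter "some dir"
# prints: argv0=Counter argv1=some dir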
I found the solution; this is what I have to consider:
#!/bin/bash
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")
for i in $(find . -maxdepth 1 -type d); do
    echo -n " $i: ";
    (find $i -type f | wc -l) ;
done
IFS=$SAVEIFS

Print the content of all the files in the newest directory in BASH [duplicate]

Is there any sort option available in the find command to get the directory with the oldest access date/time?
find . -type d -printf "%A# %p\n" | sort -n | tail -n 1 | cut -d " " -f 2-
If you prefer the filename without leading path, replace %p by %f.
The Linux command below displays the access and modified times along with the size:
stat -f
find -type d -printf '%T+ %p\n' | sort | head -1
find -type d -printf '%T+ %p\n' | sort
This sounds like more of a job for ls:
ls -ultd *|grep ^d
The problem with using find, at least on my system (cygwin/bash), is that find accesses the dirs, so all access-times result in current time, defeating your apparent purpose.
A simple shell script will also do:
unset -v oldest
for i in "$dir"/*; do
    [ "$i" -ot "$oldest" -o "$oldest" = "" ] && oldest="$i"
done
note: to find the oldest directory use "$dir"/*/ above (thanks Cyrus) and -type d below with the find command.
In bash, if you need a recursive solution, you can rewrite it as a while loop with process substitution using find:
unset -v oldest
while IFS= read -r i; do
    [ "$i" -ot "$oldest" -o "$oldest" = "" ] && oldest="$i"
done < <(find "$dir" -type f)
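For comparison, a one-liner sketch in the spirit of the -printf answers above, assuming GNU find; %T@ prints seconds since the epoch, so a numeric sort puts the oldest file first (filenames containing newlines would still break this):

find "$dir" -type f -printf '%T@ %p\n' | sort -n | head -n 1 | cut -d ' ' -f 2-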

How to make a variable in front of a pipeline in bash?

I would like to save the output of the first command in a pipeline as a variable, while still sending it down the pipe.
For example: find -type d | grep -E '^\./y', where my variable should hold the output of find -type d.
Thanks for help
EDIT
Maybe I can solve this problem another way, but now I am facing another problem: how do I call my own function with a parameter from a pipeline?
EX: find -type d | MyFunction
RE:
EDIT Maybe I can solve this problem another way, but now I am facing another problem: how do I call my own function with a parameter from a pipeline?
EX: find -type d | MyFunction
The following all work:
$ cat ./blah.sh
#!/bin/bash
function blah {
    while read i; do
        echo $i
    done
}
find ~/opt -type d | blah
blah <<< $(find ~/opt -type d)
blah < <(find ~/opt -type d)
$ ./blah.sh
/home/me/opt
/home/me/opt/bin
/home/me/opt /home/me/opt/bin
/home/me/opt
/home/me/opt/bin
So I'd imagine if find -type d | MyFunction doesn't work, then the function is probably not looking for input on stdin.
Based on Andrew's comments to Perry, perhaps
find . -type d -print0 | while IFS= read -r -d "" dir; do
    # do something with $dir
    case "$dir" in
        ./y*) echo "$dir" ;;
        *) : ;;
    esac
done
You can easily capture the output of any command in a variable using the $() (used to be `) syntax, like this:
VARIABLE=$(command)
You could then just pipe the output of "echo $VARIABLE" into the next command.
However, please keep in mind that the length of the values of shell variables is restricted and not guaranteed to be large enough to hold arbitrary values -- in general, it isn't a good idea to try what you are attempting.
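Concretely, a minimal sketch of that capture-then-pipe idea for the find example from the question (subject to the same size caveat):

dirs=$(find -type d)             # save the output first
grep -E '^\./y' <<< "$dirs"      # then send the same output down the pipe
# "$dirs" is still available for any later use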
You can feed variables like this into an array with bash, without any loop:
$ read -a array <<< $(find 2>/dev/null -type d | grep -E 'test_[0-9]+')
$ echo ${array[@]}
./test_003.t ./test_002.t ./test_001.t
$ echo ${array[1]}
./test_002.t
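Note that read -a splits its input on whitespace, so directory names containing spaces would break this. A hedged alternative, assuming bash 4+, is mapfile, which makes one array element per output line instead:

# one array element per line, so names with spaces survive intact
mapfile -t array < <(find . -type d 2>/dev/null | grep -E 'test_[0-9]+')
printf '%s\n' "${array[@]}"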

Bash script to list files not found

I have been looking for a way to list files that do not exist, from a list of files that are required to exist. The files can exist in more than one location. What I have now:
#!/bin/bash
fileslist="$1"
while read fn
do
    if [ ! -f `find . -type f -name $fn ` ];
    then
        echo $fn
    fi
done < $fileslist
If a file does not exist, the find command prints nothing and the test does not work. Removing the not and creating an if/then/else condition does not resolve the problem.
How can I print the filenames that are not found from a list of file names?
New script:
#!/bin/bash
fileslist="$1"
foundfiles="~/tmp/tmp`date +%Y%m%d%H%M%S`.txt"
touch $foundfiles
while read fn
do
    find . -type f -name $fn | sed 's:./.*/::' >> $foundfiles
done < $fileslist
cat $fileslist $foundfiles | sort | uniq -u
rm $foundfiles
#!/bin/bash
fileslist="$1"
while read fn
do
    FPATH=`find . -type f -name $fn`
    if [ "$FPATH." = "." ]
    then
        echo $fn
    fi
done < $fileslist
You were close!
Here is test.bash:
#!/bin/bash
fn=test.bash
exists=`find . -type f -name $fn`
if [ -n "$exists" ]
then
    echo Found it
fi
It sets $exists to the result of the find. The -n test checks whether the result is non-empty.
Try replacing the loop body with [[ -z "$(find . -type f -name $fn)" ]] && echo $fn (note that this code is bound to have problems with filenames containing spaces).
More efficient bashism:
diff <(sort $fileslist|uniq) <(find . -type f -printf %f\\n|sort|uniq)
I think you can handle diff output.
Give this a try:
find -type f -print0 | grep -Fzxvf - requiredfiles.txt
The -print0 and -z protect against filenames which contain newlines. If your utilities don't have these options and your filenames don't contain newlines, you should be OK.
The repeated find to filter one file at a time is very expensive. If your file list is directly compatible with the output from find, run a single find and remove any matches from your list:
find . -type f |
fgrep -vxf - "$1"
If not, maybe you can massage the output from find in the pipeline before the fgrep so that it matches the format in your file; or, conversely, massage the data in your file into find-compatible.
I use this script and it works for me
#!/bin/bash
fileslist="$1"
found="Found:"
notfound="Not found:"
len=`cat $1 | wc -l`
n=0
while read fn
do
    # don't worry about this, i use it to display the file list progress
    n=$((n + 1))
    echo -en "\rLooking $(echo "scale=0; $n * 100 / $len" | bc)% "
    if [ $(find / -name $fn | wc -l) -gt 0 ]
    then
        found=$(printf "$found\n\t$fn")
    else
        notfound=$(printf "$notfound\n\t$fn")
    fi
done < $fileslist
printf "\n$found\n$notfound\n"
The line counts the number of matches, and if it's greater than 0, the find was a success. This searches everything on the hdd. You could replace / with . to search just the current directory.
$(find / -name $fn | wc -l) -gt 0
Then I simply run it with the files in the file list separated by newlines:
./search.sh files.list
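Counting every match on the whole disk just to test for existence is expensive. A sketch of a cheaper test, assuming GNU find, whose -print -quit stops at the first hit:

# -print -quit makes find exit after the first match;
# -n then tests whether anything was printed at all
if [ -n "$(find / -name "$fn" -print -quit)" ]
then
    found=$(printf "$found\n\t$fn")
else
    notfound=$(printf "$notfound\n\t$fn")
fi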
