Print the content of all the files in the newest directory in BASH [duplicate]

Is there a sort option available in the find command to get the directory with the oldest access date/time?

find . -type d -printf "%A@ %p\n" | sort -n | tail -n 1 | cut -d " " -f 2-
If you prefer the filename without the leading path, replace %p with %f.
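Since the question title asks to print the contents of all the files in the newest directory, the same printf/sort idea extends naturally. A minimal sketch, assuming GNU find and using modification time as the criterion:
# pick the most recently modified directory, then dump its files
newest=$(find . -mindepth 1 -type d -printf '%T@ %p\n' | sort -n | tail -n 1 | cut -d ' ' -f 2-)
cat "$newest"/*   # any subdirectory inside it will make cat print an error; harmless for a sketch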

On Linux, the stat command displays the access and modified times along with the size:
stat filename
(Note that stat -f reports filesystem status, not file times.)
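With GNU coreutils you can also pick those fields explicitly via stat's format string, e.g.:
# GNU stat: %x = access time, %y = modify time, %s = size in bytes
stat -c 'atime: %x  mtime: %y  size: %s' somefile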

find -type d -printf '%T+ %p\n' | sort | head -1

find -type d -printf '%T+ %p\n' | sort

This sounds like more of a job for ls:
ls -ultd * | grep ^d
The problem with using find, at least on my system (Cygwin/bash), is that find accesses the directories, so all access times end up as the current time, defeating your apparent purpose.

A simple shell script will also do:
unset -v oldest
for i in "$dir"/*; do
  [ -z "$oldest" ] || [ "$i" -ot "$oldest" ] && oldest="$i"
done
Note: to find the oldest directory, use "$dir"/*/ above (thanks Cyrus) and -type d below with the find command.
In bash, if you need a recursive solution, you can rewrite it as a while loop with process substitution using find:
unset -v oldest
while IFS= read -r i; do
  [ -z "$oldest" ] || [ "$i" -ot "$oldest" ] && oldest="$i"
done < <(find "$dir" -type f)
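If the tree may contain names with embedded newlines, a NUL-delimited variant of the same loop is safer. A sketch, assuming GNU find and bash (swap in -type d for the oldest directory, as noted above):
unset -v oldest
while IFS= read -r -d '' i; do
  [ -z "$oldest" ] || [ "$i" -ot "$oldest" ] && oldest="$i"
done < <(find "$dir" -type f -print0)
printf '%s\n' "$oldest"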

Related

Loop over find result in bash

I have a bash script written by a previous colleague at my company. Its shellcheck result is horrible, and I, using zsh, can't run the script. He seems to use the notorious find-with-for-loop pattern in bash, and I can't figure out how to do it better. At the moment I have a temporary fix.
This is his code:
#!/bin/bash
releases=$(for d in $(find ${DELIVERIES} -maxdepth 1 -type d -name "*_delivery_33_SR*" | sort) ; do echo ${d##*_} ; done)
for sr in ${releases[@]}
do
    echo "Release $sr"
    deliveries=$(find ${deliveries_path}/*${sr}/ -type f -name "*.ear" -o -name "*.war" | sort)
    if [ ! -e ${sr}.txt ]
    then
        for d in ${deliveries[@]}
        do
            echo "$(basename $d)" | tee -a ${sr}.txt
        done
    fi
    echo
done
And this is my code, which at least manages to loop over the first part:
#!/bin/bash
for release in $(for d in $(find "${DELIVERIES}" -maxdepth 1 -type d -name "*_delivery_33_SR*" | sort) ; do echo "${d##*_}" ; done)
do
    echo "Release $release"
done
As you can see, I needed to put the find inside the loop; I can't save its output in a variable, because when I try to loop over it, the embedded newlines make it behave like a single element. Could anyone suggest how I should solve this problem? This previous colleague uses this kind of find search a lot.
EDIT:
The script went into each folder with a specific name, created a file X.X.X.txt (with the version number in the X parts), and appended the filenames inside the subfolder to that X.X.X.txt.
Blindly refactoring gets me something like:
#!/bin/bash
for d in "$DELIVERIES"/*_delivery_33_SR*/; do
    d=${d%/}        # drop the trailing slash the glob leaves on directories
    sr=${d##*_}     # version suffix after the last underscore
    echo "Release $sr"
    if [ ! -e "${sr}.txt" ]
    then
        find "${deliveries_path}"/*"${sr}"/ -type f \( -name "*.ear" -o -name "*.war" \) |
            sort |
            xargs -n 1 basename |
            tee -a "$sr.txt"
    fi
    echo
done
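If you do want the find results in a variable, an array filled by mapfile sidesteps the word-splitting trouble described in the question. A sketch, assuming bash 4+ and newline-free filenames:
# one array element per output line, instead of one string with embedded \n
mapfile -t deliveries < <(find "${deliveries_path}"/*"${sr}"/ -type f \( -name '*.ear' -o -name '*.war' \) | sort)
for d in "${deliveries[@]}"; do
    basename "$d"
done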

Select parent directory if non-unique directory is found

Hello, I am trying to figure out how I can parse directories using built-in bash functionality.
The directory structure looks something like:
/home/mikal/PluginSDK/vendor_name1/ver1/plugin_name/plugin-config.json
/home/mikal/PluginSDK/vendor_name1/ver2/plugin_name/plugin-config.json
/home/mikal/PluginSDK/vendor_name2/ver1/plugin_name/plugin-config.json
/home/mikal/PluginSDK/vendor_name3/plugin_name/plugin-config.json
So far I have narrowed it down to the name of the plugin, which covers most of what I need for the rest of the script.
find /home/mikal/PluginSDK -type f -name plugin-config.json | sed -r 's|/[^/]+$||' | awk -F "/" '{print $NF}'
The problem I am running into is when the same vendor has different versions of the plugin available for the same release. We may not always want to run a newer version of the plugin, due to compatibility or performance, so having these show as something like ver1-plugin_name would be preferable. I can't find anything that can pick out the non-unique plugin/version so that I can build an array with all of the options.
This is the entirety of what I have written so far for this section of the script, which makes configuration changes to the system.
options=()
while IFS= read -r line; do
    options+=( "$line" )
done < <( find /home/mikal/PluginSDK -type f -name plugin-config.json | sed -r 's|/[^/]+$||' | awk -F "/" '{print $NF}' )
select opt_number in "${options[@]}" "Quit";
do
    if [[ $opt_number == "Quit" ]];
    then
        echo "Quitting"
        break;
    else
        find /home/mikal/PluginSDK -type f -name plugin-config.json -exec sed -i 's/"preferred": true/"preferred": false/g' {} \;
        find /home/mikal/PluginSDK/${options[$(($REPLY-1))]} -type f -name plugin-config.json -exec sed -i 's/"preferred": false/"preferred": true/g' {} \;
        break;
    fi
done
Desired output for the entire thing would be something like:
1.) Ver1-Plugin_name
2.) Ver2-Plugin_name
3.) Plugin_name
4.) Plugin_name
5.) Quit
I apologize if my formatting is bad. First time posting.
Maybe
lst=( Quit
      $( find /home/mikal/PluginSDK -type f -name plugin-config.json |
         awk -F/ '{ if (7==NF) { print $6 } else { print $6"-"$7 } }' ) )
select opt_number in "${lst[@]}"
. . .
You might want to cf. BashFAQ 20 if your filenames could have any weirdness like embedded spaces.
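Along the lines of BashFAQ 20, a NUL-safe way to build the same menu could look like the sketch below; note the ver* test is an assumption about how the version directories are named:
lst=( Quit )
while IFS= read -r -d '' f; do
    dir=${f%/plugin-config.json}            # .../vendor/ver/plugin or .../vendor/plugin
    plugin=${dir##*/}                       # plugin directory name
    parent=${dir%/*}; parent=${parent##*/}  # version dir or vendor dir
    if [[ $parent == ver* ]]; then          # assumption: version dirs start with "ver"
        lst+=( "$parent-$plugin" )
    else
        lst+=( "$plugin" )
    fi
done < <(find /home/mikal/PluginSDK -type f -name plugin-config.json -print0)
# then: select opt_number in "${lst[@]}" as above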

How to get a list of certain strings in a list of files using bash?

The title is maybe not really descriptive, but I couldn't find a more concise way to describe the problem.
I have a directory containing different files whose names look, e.g., like this:
{some text}2019Q2{some text}.pdf
So the filenames have, somewhere in the name, a year followed by a capital Q and then another number. The other text can be anything, but it won't contain anything matching the year-Q-number format, and there will be no digits directly before or after it.
I can work something out to get this from one filename, but I actually need a list I can run a for loop over in bash.
So, if my directory contains the files:
costumerA_2019Q2_something.pdf
costumerB_2019Q2_something.pdf
costumerA_2019Q3_something.pdf
costumerB_2019Q3_something.pdf
costumerC_2019Q3_something.pdf
costumerA_2020Q1_something.pdf
costumerD2020Q2something.pdf
I want a for loop that goes over 2019Q2, 2019Q3, 2020Q1, and 2020Q2.
EDIT:
This is what I have so far. It is able to extract the substrings, but it still contains duplicates, and since I'm already in the loop I don't see how I can remove them.
find original/*.pdf -type f -print0 | while IFS= read -r -d '' line; do
    echo "$line" | grep -oP '[0-9]{4}Q[0-9]'
done
# list all _filenames_ that end with .pdf in the folder original
find original -maxdepth 1 -name '*.pdf' -type f -printf '%f\n' |
# extract the pattern
sed -E 's/.*([0-9]{4}Q[0-9]).*/\1/' |
# iterate
while IFS= read -r file; do
    echo "$file"
done
I used -printf '%f\n' to print just the filename instead of the full path. GNU sed has a -z option that you can use with a zero-separated stream (-printf '%f\0').
Given how you wanted to do this, if your files have no newlines in their names there is no need to loop over the list in bash (as a rule of thumb, try to avoid while read line; it's very slow):
find original -maxdepth 1 -name '*.pdf' -type f | grep -oP '[0-9]{4}Q[0-9]'
or with a zero-separated stream:
find original -maxdepth 1 -name '*.pdf' -type f -print0 |
grep -zoP '[0-9]{4}Q[0-9]' | tr '\0' '\n'
If you want to remove duplicate elements from the list, pipe it to sort -u.
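Putting it together with the loop the asker wanted; the extracted tags contain no whitespace, so plain word splitting is safe here:
for quarter in $(find original -maxdepth 1 -name '*.pdf' -type f | grep -oP '[0-9]{4}Q[0-9]' | sort -u); do
    echo "processing $quarter"   # 2019Q2, 2019Q3, 2020Q1, 2020Q2
done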
Try this, in bash:
~ > $ ls
costumerA_2019Q2_something.pdf costumerB_2019Q2_something.pdf
costumerA_2019Q3_something.pdf other.pdf
costumerA_2020Q1_something.pdf someother.file.txt
~ > $ for x in `(ls)`; do [[ ${x} =~ [0-9]Q[1-4] ]] && echo $x; done;
costumerA_2019Q2_something.pdf
costumerA_2019Q3_something.pdf
costumerA_2020Q1_something.pdf
costumerB_2019Q2_something.pdf
~ > $ (for x in *; do [[ ${x} =~ ([0-9]{4}Q[1-4]).+pdf ]] && echo ${BASH_REMATCH[1]}; done;) | sort -u
2019Q2
2019Q3
2020Q1
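If you would rather deduplicate inside the loop instead of piping to sort -u, an associative array works too. A sketch, assuming bash 4+ (output follows glob order rather than being sorted):
declare -A seen
for x in *; do
    if [[ $x =~ ([0-9]{4}Q[1-4]).+pdf ]] && [[ -z ${seen[${BASH_REMATCH[1]}]} ]]; then
        seen[${BASH_REMATCH[1]}]=1
        echo "${BASH_REMATCH[1]}"
    fi
done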

How to use the find utility with logical operators and post-processing

Is there a way to find all directories that contain an executable file matching a partial name of the parent directory?
Situation
/distribution/software_a_v1.0.0/software_a
/distribution/software_a_v1.0.1/software_a
/distribution/software_a_v1.0.2/config.cfg
I need result
/distribution/software_a_v1.0.0/software_a
/distribution/software_a_v1.0.1/software_a
I've only gotten this far:
find /distribution -maxdepth 1 -type d #and at depth 2 -type f -perm /u=x and binary name matches directory name, minus version
Another way using awk:
find /path -type f -perm -u=x -print | awk -F/ '{ rec=$0; sub(/_v[0-9].*$/,"",$(NF-1)); if( $NF == $(NF-1) ) print rec }'
The awk part is based on your sample and stated condition ... name matches directory name, minus version. Modify it if needed.
I would use grep, but the lazy quantifiers and \d require Perl-compatible regex (-P), and the executables sit one level deeper than -maxdepth 1 reaches:
find /distribution -maxdepth 2 -type f | grep -P '/distribution/software_\w+_v\d+\.\d+\.\d+/software_\w+$'
I don't know if this is the most efficient, but here's one way you could do it, using just bash...
for f in /distribution/*/*
do
    if [[ -f "${f}" && -x "${f}" ]]   # it's a file and executable
    then
        b="${f##*/}"                  # get just the filename
        [[ "${f}" == /distribution/"${b}"*/"${b}" ]] && echo "${f}"
    fi
done
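For completeness, the comparison can also be pushed through find itself. A sketch, assuming GNU find for -perm /u=x:
# executable regular files two levels down whose name equals the
# parent directory name with the _v<version> suffix removed
find /distribution -mindepth 2 -maxdepth 2 -type f -perm /u=x -exec sh -c '
    for f; do
        b=${f##*/}                # binary name
        d=${f%/*}; d=${d##*/}     # parent directory name
        [ "${d%_v*}" = "$b" ] && printf "%s\n" "$f"
    done' sh {} +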

Bash script to list files not found

I have been looking for a way to list files that do not exist, from a list of files that are required to exist. The files can exist in more than one location. What I have now:
#!/bin/bash
fileslist="$1"
while read fn
do
    if [ ! -f `find . -type f -name $fn ` ];
    then
        echo $fn
    fi
done < $fileslist
If a file does not exist, the find command prints nothing and the test does not work. Removing the not and creating an if/then/else condition does not resolve the problem.
How can I print the filenames that are not found from a list of file names?
New script:
#!/bin/bash
fileslist="$1"
foundfiles="$HOME/tmp/tmp`date +%Y%m%d%H%M%S`.txt"
touch $foundfiles
while read fn
do
    find . -type f -name $fn | sed 's:./.*/::' >> $foundfiles
done < $fileslist
cat $fileslist $foundfiles | sort | uniq -u
rm $foundfiles
#!/bin/bash
fileslist="$1"
while read fn
do
    FPATH=`find . -type f -name $fn`
    if [ "$FPATH." = "." ]
    then
        echo $fn
    fi
done < $fileslist
You were close!
Here is test.bash:
#!/bin/bash
fn=test.bash
exists=`find . -type f -name $fn`
if [ -n "$exists" ]
then
    echo Found it
fi
It sets $exists to the result of the find; the -n test checks that the result is not null.
Try replacing the loop body with [[ -z "$(find . -type f -name $fn)" ]] && echo $fn. (Note that this code will have problems with filenames containing spaces.)
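Quoting $fn addresses the space problem just noted. A sketch of the full loop:
while IFS= read -r fn; do
    [[ -z "$(find . -type f -name "$fn")" ]] && printf '%s\n' "$fn"
done < "$fileslist"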
A more efficient bashism:
diff <(sort $fileslist|uniq) <(find . -type f -printf %f\\n|sort|uniq)
I think you can handle diff output.
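If only the missing names are wanted, diff's output is easy to filter, since lines unique to the list are prefixed with <. A sketch:
diff <(sort "$fileslist" | uniq) <(find . -type f -printf '%f\n' | sort | uniq) | sed -n 's/^< //p'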
Give this a try:
find . -type f -printf '%f\n' | grep -Fxvf - requiredfiles.txt
grep takes the found filenames on stdin as fixed-string patterns (-F -f -), and -vx prints the lines of requiredfiles.txt that exactly match none of them. If your filenames can contain newlines, you would need NUL-separated variants on both sides (-printf '%f\0', grep -z, and a NUL-separated list).
The repeated find to filter one file at a time is very expensive. If your file list is directly compatible with the output from find, run a single find and remove any matches from your list:
find . -type f |
fgrep -vxf - "$1"
If not, maybe you can massage the output from find in the pipeline before the fgrep so that it matches the format in your file; or, conversely, massage the data in your file into a find-compatible format.
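One way to do that massaging, assuming the list holds bare filenames and GNU find for -printf; comm -13 keeps only the lines unique to the second input (the required list), i.e. the files that were not found:
comm -13 <(find . -type f -printf '%f\n' | sort -u) <(sort -u "$1")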
I use this script and it works for me:
#!/bin/bash
fileslist="$1"
found="Found:"
notfound="Not found:"
len=`wc -l < $1`
n=0
while read fn
do
    # don't worry about this, I use it to display the file list progress
    n=$((n + 1))
    echo -en "\rLooking $(echo "scale=0; $n * 100 / $len" | bc)% "
    if [ $(find / -name $fn | wc -l) -gt 0 ]
    then
        found=$(printf "$found\n\t$fn")
    else
        notfound=$(printf "$notfound\n\t$fn")
    fi
done < $fileslist
printf "\n$found\n$notfound\n"
The line below counts the number of matches, and if it's greater than 0 the find was a success. This searches the whole disk; replace / with . to search just the current directory.
$(find / -name $fn | wc -l) -gt 0
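A cheaper test (GNU find) stops at the first match instead of counting every one. A sketch:
if [ -n "$(find / -name "$fn" -print -quit)" ]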
Then I simply run it, with the files in the file list separated by newlines:
./search.sh files.list
