Bash: List directories with a type of file, but missing another type of file

I'm new(ish) to using Bash and I'm trying to figure out how to combine a few different things into one script.
I'm looking for file transfers that were interrupted. These folders contain image files (either jpgs or pngs), but are missing another specific file (finished.txt).
Here is what I'm using to find folders with images (from here):
for f in */incoming/ ; do
    log_f="${f//\//}"
    echo "searching $f"
    find "$f" -iname "*jpg*" -o -iname "*png*" > "/output/${log_f}.txt"
    echo "$f finished"
done
Then, I'm running this command to find folders that are missing the finished.txt file (from here):
find -mindepth 2 -maxdepth 2 -type d '!' -exec test -e "{}/finished.txt" ';' -print
Is there a way to combine them so I have a list of folders which have jpg or png files, but don't have finished.txt? Also, if I want to add -mtime, where do I put that?
Alternatively, if there's a better/faster way to do this, I'm interested in that too.
Thanks!

From the first pass, when you find the jpg/png files, you can get each containing directory with dirname. You can then iterate over that list of directories looking for the finished.txt file: if it is found, skip the directory; if not, print it out.
Something like the below should do the job:
for i in `find "$f" \( -iname "*jpg*" -o -iname "*png*" \) -exec dirname {} \;`
do
    ls "$i" | grep finished >/dev/null
    if [ $? -eq 1 ]; then
        echo "$i"
    fi
done
Add " | sort | uniq" at the end of find command to perhaps remove the duplicates. Something like
find "$f" -iname "jpg" -o -iname "png" -exec dirname {} \; | sort | uniq

Related

Iterate over files in a subfolder

New here, learning bash for the first time.
I'm trying to iterate over files named "list.txt" placed in subfolders, manipulate them, and create new files under the same subfolder. The directory tree could look like this:
inventory/product_names1/list.txt
inventory/product_names2/list.txt
As product_names is completely random, I would like to iterate over all list.txt files with unix commands like sed/grep/cut and create a new file under the same random product_names folders.
for f in $( find . -name 'list.txt'); do for list in $f; do cat $f | cut -d']' -f2- > "$f/new_file.txt" ; done ; done
I can access files into the nest using find command. How can I redirect output in the right subfolder if the product_names is random?
inventory/product_names1/list.txt
inventory/product_names1/new_file.txt
inventory/product_names2/list.txt
inventory/product_names2/new_file.txt
This script is intended to run from the root folder, working with the entire "inventory" path. $f gets me to inventory/product_names1/list.txt, but I need the output to land in inventory/product_names1. How can I redirect correctly if I don't know the right value/variable?
You can either use parameter expansion to remove the file name from the path, or you can iterate over all the directories and only work on them if they contain the list.txt file.
#!/bin/bash
for list in inventory/*/list.txt ; do
    # strip everything from the last slash onward to get the directory
    new=${list%/*}/new_list.txt
    echo "$list" "$new"
done

# OR
for dir in inventory/* ; do
    if [[ -f $dir/list.txt ]] ; then
        echo "$dir"/list.txt "$dir"/new_list.txt
    fi
done
find can not only find files but also execute commands when a file is found:
find . -type f -name 'list.txt' -execdir sh -c 'cut -d"]" -f2 list.txt > new_file.txt' \;
Explanations:
-type f condition added to skip directories named list.txt. If some of your list.txt files can be symbolic links and you want to consider them too, use -type f,l with GNU find. With other find implementations you may need to use \( -type f -o -type l \).
-execdir runs the command in the directory where the file was found.
By default find does not print the found paths when -execdir is used. If you need them, add the -print action:
find . -type f -name 'list.txt' -execdir sh -c 'cut -d"]" -f2 list.txt > new_file.txt' \; -print
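If your find lacks -execdir, a rough equivalent (a sketch, not from the original answer) passes each found path to sh as a positional parameter and builds the output path with parameter expansion:
find . -type f -name 'list.txt' -exec sh -c 'cut -d"]" -f2 "$1" > "${1%/*}/new_file.txt"' sh {} \;
The literal sh fills $0 and the found path lands in $1, so names containing spaces survive the quoting.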

Use the find command to copy files named in a list.txt

I have a list.txt with different filenames, and I want to find all those 3600 filenames in subdirectories and then copy them to /destination_folder.
Can I use something like find /path/ {file.txt} and then copy the results to /destination_folder?
The list.txt should have the following filenames/lines:
test_20180724004008_4270.txt.bz2
test_20180724020008_4278.txt.bz2
test_20180724034009_4288.txt.bz2
test_20180724060009_4302.txt.bz2
test_20180724061009_4303.txt.bz2
test_20180724062010_4304.txt.bz2
test_20180724063010_4305.txt.bz2
test_20180724065010_4307.txt.bz2
test_20180724070010_4308.txt.bz2
test_20180724071010_4309.txt.bz2
test_20180724072010_4310.txt.bz2
test_20180724072815_4311.txt.bz2
test_20180724073507_4312.txt.bz2
test_20180724074608_4314.txt.bz2
test_20180724075041_4315.txt.bz2
test_20180724075450_4316.txt.bz2
test_20180724075843_4317.txt.bz2
test_20180724075843_4317.txt.bz2
test_20180724080207_4318.txt.bz2
test_20180724080522_4319.txt.bz2
test_20180724080826_4320.txt.bz2
test_20180724081121_4321.txt.bz2
................................
You will probably want to make a list of all of the files under the directory, then use your list.txt to iterate through the files found.
First, save the list of found files to a file:
find . -type f > foundFiles.txt
Then use your list.txt to search through it:
cat list.txt | while read -r line
do
    if grep -q "${line}" foundFiles.txt
    then
        cp -v $(grep "${line}" foundFiles.txt) /destination_folder/
    fi
done
I'll let you take this base and turn it into a script you can reuse.
You could use echo and sed:
echo $(sed "s/.*/\"&\"/;s/^/ -name /;s/$/ -o/;$ s/-o//" list.txt)
This outputs a list of files to be used in find command:
-name "file1.txt.bz2" -o -name "file2.txt.bz2" -o -name "file3.txt.bz2"
Then use -exec cp -t targetDir {} + in find to copy the files:
find \( $(eval echo $(sed "s/.*/\"&\"/;s/^/ -name /;s/$/ -o/;$ s/-o//" list.txt)) \) -exec cp -t targetDir {} +
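If the eval makes you nervous, a sketch of the same idea (assuming bash 4.3+ for the negative array index, and a non-empty list.txt) builds find's argument list in an array instead:
args=()
while IFS= read -r name; do
    args+=( -name "$name" -o )   # one -name test per line, joined by -o
done < list.txt
unset 'args[-1]'                 # drop the trailing -o
find . \( "${args[@]}" \) -exec cp -t targetDir {} +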
Loop through the file and copy each result to your destination folder:
for i in `cat list.txt`;
do cp `find * -name "$i"` destination_folder/;
done
This finds all the files named in list.txt and copies them to destination_folder/.
The for i in `cat list.txt` loops the variable i over every line of the file.
The cp `find * -name "$i"` destination_folder/ finds the path to each file and copies it to destination_folder/.
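A more defensive variant of the same loop (a sketch, untested against your tree) reads list.txt line by line and lets find do the copying, so paths containing spaces survive:
while IFS= read -r name; do
    find . -type f -name "$name" -exec cp -v {} /destination_folder/ \;
done < list.txt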

Rename files in several subdirectories

I want to rename a file present in several subdirectories using bash script.
my files are in folders:
./FolderA/ABCD/ABCD_Something.ctl
./FolderA/EFGH/EFGH_Something.ctl
./FolderA/WXYZ/WXYZ_Something.ctl
I want to rename all of the .ctl files to the same name (name.ctl).
I tried several commands using mv or rename but they didn't work.
Working from FolderA:
find . -name '*.ctl' -exec rename *.ctl name.ctl '{}' \;
or
for f in ./*/*.ctl; do mv "$f" "${f/*.ctl/name .ctl}"; done
or
for f in $(find . -type f -name '*.ctl'); do mv $f $(echo "$f" | sed 's/*.ctl/name.ctl/'); done
Can you help me do this using bash?
Thanks!
You can do this with one line with:
find . -name '*.ctl' -exec sh -c 'mv "$1" `dirname "$1"`/name.ctl' x {} \;
The x just fills $0 so that the filename lands in positional parameter 1 rather than 0, which (in my opinion) would be wrong to use as a parameter.
Try this:
find . -name '*.ctl' | while read -r f; do
    dn=$(dirname "${f}")
    # remove the echo after you sanity check the output
    echo mv "${f}" "${dn}/name.ctl"
done
find should get all the files you want, dirname will get just the directory name, and mv will perform the rename. You can remove the quotes if you're sure that you'll never have spaces in the names.
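With GNU find you can also avoid dirname and the helper shell entirely (a sketch): -execdir runs mv inside each file's own directory.
find . -name '*.ctl' -execdir mv {} name.ctl \;
Note that if one directory ever contains two .ctl files, the second rename will overwrite the first.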

delete a file present in multiple directories based on the status of find command in unix

I need to delete a file present in multiple directories if it is found else ignore. I tried the following snippet.
ls $dir/"$input.xml" 2> /dev/null
var = `echo$?`
if [[ $var == 0 ]]; then
echo -e "\n Deleting...\n"
rm $dir/"$input.xml"
It failed.
Can anyone suggest me a better solution or modify the above snippet to suit the solution?
Not 100% sure what you mean by "delete a file present in multiple directories if it is found else ignore". Assuming that you simply want to delete some files found somewhere under $dir, do this:
Use find to find the files, and pipe to xargs rm:
find "$dir" -type f -name "*.xml" | xargs rm
If your filename is likely to contain spaces then do this:
find "$dir" -type f -name "*.xml" -print0 | xargs -0 rm
To suppress the rm error message in case there are no files:
find "$dir" -type f -name "*.xml" -print0 | xargs -0 rm 2>/dev/null
To make your code work, insert a space:
`echo $?`
instead of
`echo$?`
Note that the assignment must not have spaces around the equals sign either: var=`echo $?`.
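For the literal task in the question, deleting one specific $input.xml wherever it exists, a sketch that skips the status check entirely (assuming $dir is the top of the tree to search):
find "$dir" -type f -name "$input.xml" -exec rm -f {} +
find only runs rm on actual matches, so the "else ignore" case needs no extra code.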

How can I list all unique file names without their extensions in bash?

I have a task where I need to move a bunch of files from one directory to another. I need move all files with the same file name (i.e. blah.pdf, blah.txt, blah.html, etc...) at the same time, and I can move a set of these every four minutes. I had a short bash script to just move a single file at a time at these intervals, but the new name requirement is throwing me off.
My old script is:
find ./ -maxdepth 1 -type f | while read line; do mv "$line" ~/target_dir/; echo "$line"; sleep 240; done
For the new script, I basically just need to replace find ./ -maxdepth 1 -type f
with a list of unique file names without their extensions. I can then just replace do mv "$line" ~/target_dir/; with do mv "$line*" ~/target_dir/;.
So, with all of that said, what's a good way to get a unique list of file names without their extensions in a bash script? I was thinking about using a regex to grab file names and then throwing them in a hash to get uniqueness, but I'm hoping there's an easier/better/quicker way. Ideas?
A weird-named files tolerant one-liner could be:
find . -maxdepth 1 -type f -and -iname 'blah*' -print0 | xargs -0 -I {} mv {} ~/target/dir
If the files can start with multiple prefixes, you can use logic operators in find. For example, to move blah.* and foo.*, use:
find . -maxdepth 1 -type f -and \( -iname 'blah.*' -or -iname 'foo.*' \) -print0 | xargs -0 -I {} mv {} ~/target/dir
EDIT
Updated after comment.
Here's how I'd do it:
find ./ -type f -printf '%f\n' | sed 's/\..*//' | sort | uniq | ( while read filename ; do find . -type f -iname "$filename"'*' -exec mv {} /dest/dir \; ; sleep 240; done )
Perhaps it needs some explanation:
find ./ -type f -printf '%f\n': find all files and print just their name, followed by a newline. If you don't want to look in subdirectories, this can be substituted by a simple ls;
sed 's/\..*//': strip the file extension by removing everything after the first dot. Both foo.tar and foo.tar.gz are transformed into foo;
sort | uniq: sort the filenames just found and remove duplicates;
(: open a subshell:
while read filename: read a line and put it into the $filename variable;
find . -type f -iname "$filename"'*' -exec mv {} /dest/dir \;: find in the current directory (find .) all the files (-type f) whose name starts with the value in filename (-iname "$filename"'*', this works also for files containing whitespaces in their name) and execute the mv command on each one (-exec mv {} /dest/dir \;)
sleep 240: sleep
): end of subshell.
Add -maxdepth 1 as argument to find as you see fit for your requirements.
Nevermind, I'm dumb. There's a uniq command. Duh. New working script is: find ./ -maxdepth 1 -type f | sed -e 's/\.[a-zA-Z]*$//' | sort | uniq | while read -r line; do mv "$line"* ~/target_dir/; echo "$line"; sleep 240; done
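The same script spread over a few lines with comments, as a readability sketch (it assumes no file name contains a newline):
find ./ -maxdepth 1 -type f |
sed -e 's/\.[a-zA-Z]*$//' |   # strip the extension
sort | uniq |                 # uniq only drops adjacent duplicates, hence the sort
while read -r line; do
    mv "$line"* ~/target_dir/ # the * sits outside the quotes so the glob expands
    echo "$line"
    sleep 240
done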
