Shell script file copy checker - bash

I copied and re-sorted nearly 1TB of files on a Drobo using a find . -name \*.%ext% -print0 | xargs -I{} -0 cp -v {} %File%s command. I need to make sure all the files copied correctly. This is what I have so far:
#!/bin/sh
find . -type f -exec basename {} \; > Filelist.txt
sort -o Filelist.txt Filelist.txt
uniq -c Filelist.txt Duplist.txt
I need to find a way to get the checksum for each file as well as to make sure every file has been duplicated. The source folder is in the same directory as the copies; it is arranged as follows:
_Stock
_Audio
_CG
_Images
_Source (the files in all other folders come from here)
_Videos
I'm working on OSX.

#!/bin/sh
find . \( ! -regex '.*/\..*' \) -type f -exec shasum {} \; -exec basename {} \; | cut -c -40 | sed 'N;s/\n/ /' > Filelist.txt
sort -o Filelist.txt Filelist.txt
uniq -c Filelist.txt Duplist.txt
sort -o Duplist.txt Duplist.txt
The regex excludes hidden files; the shasum and basename -exec actions produce two separate output lines per file, so we pipe to cut (which trims each line to the 40-character checksum width) and then to sed to merge each pair onto a single line so that the sort and uniq commands can parse them. The script is messy, but it got the job done quite nicely.
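With the list sorted, any file that failed to copy shows up as a checksum that occurs only once (every correctly copied file should contribute at least two identical checksum lines). A minimal sketch of that check, assuming the "checksum filename" format produced above; MissingSums.txt and Missing.txt are just illustrative names:
# keep only the checksums that appear exactly once
cut -c -40 Filelist.txt | sort | uniq -u > MissingSums.txt
# map the lone checksums back to their file names for inspection
grep -F -f MissingSums.txt Filelist.txt > Missing.txt
Note that two different files which happen to have identical contents would share a checksum and slip past this check.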

Related

Terminal find, directories last instead of first

I have a makefile that concatenates JavaScript files together and then runs the file through uglify-js to create a .min.js version.
I'm currently using this command to find and concat my files
find src/js -type f -name "*.js" -exec cat {} >> ${jsbuild}$# \;
But it lists files in subdirectories first. That makes heaps of sense, but I'd like it to list the .js files directly in src/js before those in the subdirectories, to avoid my undefined JS error.
Is there any way to do this? I've had a google around and seen the sort command and the -s flag for find, but it's a bit above my understanding at this point!
[EDIT]
The final solution is slightly different from the accepted answer, but it is marked as accepted because it brought me to the answer. Here is the command I used:
cat `find src/js -type f -name "*.js" -print0 | xargs -0 stat -f "%z %N" | sort -n | sed -e "s|[0-9]*\ \ ||"` > public/js/myCleverScript.js
Possible solution:
use find to get filenames and directory depth, i.e. find ... -printf "%d\t%p\n"
sort list by directory depth with sort -n
remove directory depth from output to use filenames only
test:
without sorting:
$ find folder1/ -depth -type f -printf "%d\t%p\n"
2 folder1/f2/f3
1 folder1/file0
with sorting:
$ find folder1/ -type f -printf "%d\t%p\n" | sort -n | sed -e "s|[0-9]*\t||"
folder1/file0
folder1/f2/f3
The command you need looks like:
cat $(find src/js -type f -name "*.js" -printf "%d\t%p\n" | sort -n | sed -e "s|[0-9]*\t||")>min.js
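Note that -printf is specific to GNU find; on OS X (BSD find, as in the original question) you can compute the depth yourself instead, for example with awk. A rough sketch of the same idea, assuming the paths contain no newlines or tabs:
# prefix each path with its depth (awk's NF = number of /-separated components),
# sort numerically, strip the depth again, then concatenate
find src/js -type f -name '*.js' |
awk -F/ '{ print NF "\t" $0 }' |
sort -n | cut -f2- |
while IFS= read -r f; do cat "$f"; done > min.js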
Mmmmm...
find src/js -type f
shouldn't find ANY directories at all, and doubly so as your directory names will probably not end in ".js". The brackets around your "-name" parameter are superfluous too; try removing them:
find src/js -type f -name "*.js" -exec cat {} >> ${jsbuild}$# \;
find can get the first directory level already expanded on the command line, which enforces the order of directory tree traversal. This solves the problem only for the top directory (unlike the already accepted solution by Sergey Fedorov), but it should answer your question too, and more options are always welcome.
Using GNU coreutils ls, you can sort directories before regular files with the --group-directories-first option. From reading the Mac OS X ls manpage, it seems that directories are always grouped on OS X, so you could just drop the option.
ls -A --group-directories-first -r | tac | xargs -I'%' find '%' -type f -name '*.js' -exec cat '{}' + > ${jsbuild}$#
If you do not have the tac command, you can easily implement it using sed; it reverses the order of lines. See the tac example in the GNU sed documentation (info sed).
tac(){
    sed -n '1!G;$p;h'
}
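A quick sanity check of that replacement (the output shown is what the sed one-liner should produce):
$ printf '%s\n' one two three | tac
three
two
one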
You could do something like this...
First create a variable holding the name of our output file:
OUT="$(pwd)/theLot.js"
Then, get all "*.js" in top directory into that file:
cat *.js > "$OUT"
Then have "find" grab all other "*.js" files below the current directory:
find . -type d ! -name . -exec sh -c "cd {} ; cat *.js >> $OUT" \;
Just to explain the "find" command, it says:
find
. = starting at current directory
-type d = all directories, not files
! -name . = except the current one
-exec sh -c = and for each one you find execute the following
"..." = go to that directory and concatenate all "*.js" files there onto end of $OUT
\; = and that's all for today, thank you!
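Since {} is spliced into a double-quoted sh -c string, directory names containing spaces or quotes can trip it up. A slightly more defensive sketch of the same idea (exporting OUT so the inner shell can see it, and passing the directory as an argument instead of embedding it):
OUT="$(pwd)/theLot.js"
export OUT
# top-level files first, then every subdirectory in turn
cat *.js > "$OUT"
find . -type d ! -name . -exec sh -c 'cd "$1" && cat *.js >> "$OUT"' sh {} \;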
I'd get the list of all the files:
$ find src/js -type f -name "*.js" > list.txt
Sort them by depth, i.e. by the number of '/' in them, using the following ruby script:
sort.rb:
files=[]; while gets; files<<$_; end
files.sort! {|a,b| a.count('/') <=> b.count('/')}
files.each {|f| puts f}
Like so:
$ ruby sort.rb < list.txt > sorted.txt
Concatenate them:
$ cat sorted.txt | while read FILE; do cat "$FILE" >> output.txt; done
(All this assumes that your file names don't contain newline characters.)
EDIT:
I was aiming for clarity. If you want conciseness, you can absolutely condense it to something like:
find src/js -name '*.js'| ruby -ne 'BEGIN{f=[];}; f<<$_; END{f.sort!{|a,b| a.count("/") <=> b.count("/")}; f.each{|e| puts e}}' | xargs cat >> concatenated

bash shell script not working as intended using cmp with output redirection

I am trying to write a bash script that removes duplicate files from a folder, keeping only one copy.
The script is the following:
#!/bin/sh
for f1 in `find ./ -name "*.txt"`
do
    if test -f $f1
    then
        for f2 in `find ./ -name "*.txt"`
        do
            if [ -f $f2 ] && [ "$f1" != "$f2" ]
            then
                # if cmp $f1 $f2 &> /dev/null # DOES NOT WORK
                if cmp $f1 $f2
                then
                    rm $f2
                    echo "$f2 purged"
                fi
            fi
        done
    fi
done
I want to redirect the output and stderr to /dev/null to avoid printing them to the screen, but with the commented statement the script does not work as intended and removes all files but the first.
I'll give more information if needed.
Thanks
A few comments:
First, the:
for f1 in `find ./ -name "*.txt"`
do
if test -f $f1
then
is the same as (finding only plain files with the txt extension):
for f1 in `find ./ -type f -name "*.txt"`
A better syntax (bash only) is:
for f1 in $(find ./ -type f -name "*.txt")
and finally, the whole construct is wrong, because if a filename contains a space, the f1 variable will not get the full path name. So instead of the for, do:
find ./ -type f -name "*.txt" -print | while read -r f1
and as #Sir Athos pointed out, a filename can contain \n, so it is best to use:
find . -type f -name "*.txt" -print0 | while IFS= read -r -d '' f1
Second:
Use "$f1" instead of $f1 - again, because the $f1 can contain space.
Third:
Doing N*N comparisons is not very efficient. You should make a checksum (md5, or better, sha256) of every txt file. When the checksums are identical, the files are dups.
If you don't trust checksums, simply compare only the files that have identical checksums. Files with different checksums are SURELY not duplicates. ;)
Making checksums is slow too, so you should first compare only files with the same size. Files of different sizes are not duplicates...
You can skip empty txt files - they are all duplicates of each other :).
So the final command can be:
find -not -empty -type f -name \*.txt -printf "%s\n" | sort -rn | uniq -d |\
xargs -I% -n1 find -type f -name \*.txt -size %c -print0 | xargs -0 md5sum |\
sort | uniq -w32 --all-repeated=separate
Commented:
#find all non-empty files with the txt extension and print their size (in bytes)
find . -not -empty -type f -name \*.txt -printf "%s\n" |\
#sort the sizes numerically, and keep only duplicated sizes
sort -rn | uniq -d |\
#for each size that is duplicated, find all files of that size and print their names (paths)
xargs -I% -n1 find . -type f -name \*.txt -size %c -print0 |\
#make an md5 checksum for them
xargs -0 md5sum |\
#sort the checksums and keep duplicated files separated with an empty line
sort | uniq -w32 --all-repeated=separate
Now you can simply edit the output file and decide which files you want to remove and which you want to keep.
&> is bash syntax, you'll need to change the shebang line (first line) to #!/bin/bash (or the appropriate path to bash.
Or if you're really using the Bourne Shell (/bin/sh), then you have to use old-style redirection, i.e.
cmp ... >/dev/null 2>&1
Also, I think the &> was only introduced in bash 4, so if you're using bash, 3.X you'll still need the old-style redirections.
IHTH
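Putting the quoting advice above and the redirection fix together, a corrected version of the original loop could look roughly like this (still the slow N*N approach, just cleaned up; note the bash shebang for the read -d '' trick):
#!/bin/bash
find . -type f -name "*.txt" -print0 |
while IFS= read -r -d '' f1
do
    find . -type f -name "*.txt" -print0 |
    while IFS= read -r -d '' f2
    do
        if [ -f "$f1" ] && [ -f "$f2" ] && [ "$f1" != "$f2" ]
        then
            # quiet compare; the old-style redirection also works under plain sh
            if cmp "$f1" "$f2" >/dev/null 2>&1
            then
                rm "$f2"
                echo "$f2 purged"
            fi
        fi
    done
done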
Credit to #kobame for this answer: this is really a comment, but posted as an answer for the sake of formatting.
You don't need to call find twice; print out the size and the filename in a single find command:
find . -not -empty -type f -name \*.txt -printf "%8s %p\n" |
# find the files that have duplicate sizes
sort -n | uniq -Dw 8 |
# strip off the size and get the md5 sum
cut -c 10- | xargs md5sum
An example
$ cat a.txt
this is file a
$ cat b.txt
this is file b
$ cat c.txt
different contents
$ cp a.txt d.txt
$ cp b.txt e.txt
$ find . -not -empty -type f -name \*.txt -printf "%8s %p\n" |
sort -n | uniq -Dw 8 | cut -c 10- | xargs md5sum
76fd4c1589ef708d9203f3cf09cfd032 ./a.txt
e2d75fd6a1080efb6230d0608b1f9014 ./b.txt
76fd4c1589ef708d9203f3cf09cfd032 ./d.txt
e2d75fd6a1080efb6230d0608b1f9014 ./e.txt
To keep one and delete the rest, I would pipe the output into:
... | awk '++seen[$1] > 1 {print $2}' | xargs echo rm
rm ./d.txt ./e.txt
Remove the echo if your testing is satisfactory.
Like many complex pipelines, filenames containing newlines will break it.
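If that matters, here is a rough bash sketch that avoids parsing a text listing altogether (it assumes bash 4 associative arrays and GNU md5sum, and keeps the echo as a dry run, as above):
#!/bin/bash
declare -A seen
while IFS= read -r -d '' f; do
    sum=$(md5sum < "$f" | cut -d' ' -f1)
    if [[ -n ${seen[$sum]} ]]; then
        echo rm -- "$f"    # drop the echo once the list looks right
    else
        seen[$sum]="$f"
    fi
done < <(find . -not -empty -type f -name '*.txt' -print0)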
All nice answers, so only one short suggestion: you can install and use the
fdupes -r .
From the man page:
Searches the given path for duplicate files. Such files are found by
comparing file sizes and MD5 signatures, followed by a byte-by-byte
comparison.
Added by #Francesco
fdupes -rf . | xargs rm -f
to remove the dupes (the -f flag makes fdupes omit the first occurrence of each file, so it lists only the dupes).

Bash script to change file names recursively

I have a script for changing the file names of mht files, but it does not traverse through dirs and subdirs. I asked a question on a local forum and got an answer saying this is a solution:
find . -type f -name "*.mhtml" -o -type f -name "*.mht" | xargs -I item sh -c '{ echo item; echo item | sed "s/[:?|]//g"; }' | xargs -n2 mv
But it generates an error. From some experimenting it turns out that sh -c breaks file names with spaces and that this generates the error. How can I fix this?
#!/bin/bash
# renames.sh
# basic file renamer
for i in . *.mht
do
    j=`echo $i | sed 's/|/ /g' | sed 's/:/ /g' | sed 's/?//g' | sed 's/"//g'`
    mv "$i" "$j"
done
#! /bin/bash
find . -type f \( -name "*.mhtml" -o -name "*.mht" \) -print0 |
while IFS= read -r -d '' source; do
    target="${source//[:?|]/}"
    [ "X$source" != "X$target" ] &&
        mv -nv "$source" "$target"
done
Update: Do the rename according to the original question, and added support for .mht.
Use rename. With rename you can specify a renaming pattern:
find . -type f \( -name "*.mhtml" -o -name "*.mht" \) -print0 | xargs -0 -I'{}' rename 's/[:?|]//g' "{}"
This way you can properly handle names with spaces. xargs will replace {} with each file name provided by the find command. Also note the use of -print0 and -0: these use \0 as the separator, which avoids problems with filenames containing \n (newline).
The -o was not working the way it was intended to; you must use parentheses to group the conditions.
You may also consider using -iname instead of -name if you have to deal with files ending in ".mHtml".
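Most Perl-style rename implementations also understand -n (dry run), which is handy for previewing what the substitution will do before letting it loose:
# -n: only show what would be renamed, without touching anything
find . -type f \( -name "*.mhtml" -o -name "*.mht" \) -print0 | xargs -0 -I'{}' rename -n 's/[:?|]//g' "{}"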

Find and rename all pictures with incorrect file extension

I'm looking for a way to automate renaming all images with a wrong filename extension. So far I at least found out how to get the list of all these files:
find /media/folder/ -name *.jpg -exec file {} \; | grep 'PNG\|GIF' > foobar.txt
find /media/folder/ -name *.png -exec file {} \; | grep 'JPEG\|GIF' >> foobar.txt
find /media/folder/ -name *.gif -exec file {} \; | grep 'JPEG\|PNG' >> foobar.txt
However, I would also like to automate the renaming. I tried things like
find /media/folder/ -name *.jpg -exec file {} \; | grep -l PNG | rename s/.jpg/.png/
but in this case grep -l and grep -lH don't list only the filenames as I thought they would.
The -l and -H flags of grep are not useful in your example. They cannot report the image file names when grep reads from standard input, as it does here coming from the pipe; they only work if you pass grep the files to search (or directories together with the -r flag for recursion), for example:
grep -rl PNG path/to/dir1 file2 file3
In your example, -l would at best print "(standard input)"; without it, the output is the complete lines that matched PNG, which probably look something like this:
icon.png: PNG image, 512 x 512, 8-bit/color RGBA, non-interlaced
To get only the filename, maybe you can cut off everything after the colon like this:
find /media/folder/ -name *.jpg -exec file {} \; | grep PNG | sed -e s/:.*// | rename s/.jpg/.png/
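Note that this relies on the Perl rename reading file names from standard input when none are given on the command line. If your rename doesn't do that, feeding the names through xargs should work as well (same pipeline, same caveats about spaces in names):
find /media/folder/ -name *.jpg -exec file {} \; | grep PNG | sed -e s/:.*// | xargs rename s/.jpg/.png/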

grep only text files

find . -type f | xargs file | grep text | cut -d':' -f1 | xargs grep -l "TEXTSEARCH" {}
Is this a good solution for finding TEXTSEARCH recursively in only textual files?
You can use the -r (recursive) and -I (ignore binary) options in grep:
$ grep -rI "TEXTSEARCH" .
-I Process a binary file as if it did not contain matching data; this is equivalent to the --binary-files=without-match option.
-r Read all files under each directory, recursively; this is equivalent to the -d recurse option.
Another solution, less elegant than kev's, is to chain -exec commands in find together, without xargs and cut:
find . -type f -exec bash -c "file -bi {} | grep -q text" \; -exec grep TEXTSEARCH {} ";"
If you know which file extension you want to search, then a very simple way to search all *.txt files from the current dir, recursively through all subdirs, case-insensitively, is:
grep -ri --include=*.txt "sometext" *
