Essentially what I want to do is search the working directory recursively, then use the paths given to resize the images. For example, find all *.jpg files, resize them to 300x300 and rename to whatever.jpg.
Should I be doing something along the lines of $(find | grep *.jpg) to get the paths? When I do that, the output is a list of paths not enclosed in quotation marks, meaning that I would have to add the quotes myself before it would be useful, right?
I use mogrify with find.
Let's say I need everything inside my nested folder/another/folder/*.jpg to be converted to *.png
find . -name "*.jpg" -print0|xargs -I{} -0 mogrify -format png {}
And with a bit of explanation:
find . -name "*.jpg" -- to find all the JPEGs inside the nested folders.
-print0 -- to print each filename without any nasty surprises (e.g. filenames containing spaces).
xargs -0 -I{} -- to pass the files one by one to mogrify.
and lastly, {} is just a placeholder for each filename that find returns.
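If the goal is the 300x300 resize from the original question rather than a format change, a hedged variant of the same pipeline (test on a copy first, since mogrify overwrites files in place) could be:
# Resize every .jpg under the current directory to fit within 300x300;
# append '!' (i.e. 300x300!) to force exact dimensions and ignore aspect ratio.
find . -name "*.jpg" -print0 | xargs -0 -I{} mogrify -resize 300x300 {}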
You can use something like this with GNU find:
find . -iname \*jpg -exec /your/image/conversion/script.sh {} +
This will be safer in terms of quoting, and spawn fewer processes. As long as your script can handle the length of the argument list, this solution should be the most efficient option.
If you need to handle really long file lists, you may have to pay the price and spawn more processes. You can modify find to handle each file separately. For example:
find . -iname \*jpg -exec /your/image/conversion/script.sh {} \;
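The conversion script itself is whatever you need; as a hedged sketch (the script path is a placeholder and the 300x300 geometry is just the figure from the question), it could be as small as:
#!/bin/bash
# /your/image/conversion/script.sh (hypothetical): resize every file passed
# as an argument to fit within 300x300, overwriting the originals.
mogrify -resize 300x300 "$@"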
I'd like to find and remove an image in a series of folders. The problem is that the image names are not necessarily the same.
What I did was to copy an arbitrary string from an image's binary content and use it like
grep -ir 'YA'uu�KU���^H2�Q�W^YSp��.�^H^\^Q��P^T' .
But since there are thousands of images, this method takes forever. Also, some images are ImageMagick-generated copies of the originals, so I cannot use the file size to find them all.
So I'm wondering: what is the most efficient way to do this?
Updated Answer
If you have the checksum of a specific file in mind that you want to compare with, you can checksum all files in all subdirectories and find the one that is the same:
find . -name \*.jpg -exec bash -c 's=$(md5 < {}); echo $s {}' \; | grep "94b48ea6e8ca3df05b9b66c0208d5184"
Or this may work for you too:
find . -name \*.jpg -exec md5 {} \; | grep "94b48ea6e8ca3df05b9b66c0208d5184"
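If your system has GNU md5sum rather than BSD md5, an equivalent (a hedged variant of the same idea) would be:
find . -name \*.jpg -exec md5sum {} + | grep "94b48ea6e8ca3df05b9b66c0208d5184"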
Original Answer
The easiest way is to generate an md5 checksum once for each file. Depending on how your md5 program works, you would do something like this:
find . -name \*.jpg -exec bash -c 's=$(md5 < {}); echo $s {}' \;
94b48ea6e8ca3df05b9b66c0208d5184 ./a.jpg
f0361a81cfbe9e4194090b2f46db5dad ./b.jpg
c7e4f278095f40a5705739da65532739 ./c.jpg
Or maybe you can use
md5 -r *.jpg
94b48ea6e8ca3df05b9b66c0208d5184 a.jpg
f0361a81cfbe9e4194090b2f46db5dad b.jpg
c7e4f278095f40a5705739da65532739 c.jpg
Now you can use uniq to find all duplicates.
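One hedged way to do that: sort the "checksum filename" output on the checksum, list the checksums that occur more than once, and show the files they belong to (/tmp/sums is just a scratch file name chosen for the example):
# Collect checksums, sorted so identical hashes end up adjacent
find . -name \*.jpg -exec bash -c 's=$(md5 < {}); echo $s {}' \; | sort > /tmp/sums
# Print only the lines whose checksum appears more than once
awk '{print $1}' /tmp/sums | uniq -d | grep -F -f - /tmp/sums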
I have created an ImageMagick command to join images with certain names:
convert -append *A_SLIDER.jpg *B_SLIDER.jpg out.jpg
I have lots of folders with files named *A_SLIDER.jpg and *B_SLIDER.jpg next to each other (only ever one pair in a directory).
I would like to recursively search a directory with many folders and execute the command to join the images.
If it is possible to name the output image based on the input images that would be great e.g.
=> DOGS_A_SLIDER.jpg and DOGS_B_SLIDER.jpg would combine to DOGS_SLIDER.jpg
Something like this, but back up first and try on a sample directory only!
#!/bin/bash
find . -name "*A_SLIDER*" -execdir bash -c ' \
out=$(ls *A_SLIDER*);
out=${out/_A/}; \
convert -append "*A_SLIDER*" "*B_SLIDER*" $out' \;
Find all files containing the letters "A_SLIDER" and go to the containing directory and start bash there. While you are there, get the name of the file, and remove the _A part to form the output filename. Then execute ImageMagick convert with the _A_ and the corresponding _B_ files to form the output file.
Or, a slightly more concise suggestion from @gniourf_gniourf... thank you.
#!/bin/bash
find . -name "*A_SLIDER.jpg" -type f -execdir bash -c 'convert -append "$1" "${1/_A_/_B_}" "${1/_A/}"' _ {} \;
The "find" command will recursively search folders:
$ find . -name "*.jpg" -print
That will display all the filenames. You might instead want "-iname" which does case-insensitive filename matching.
You can add a command line with "-exec", in which "{}" is replaced by the name of the file. You must terminate the command line with "\;":
$ find . -name "*.jpg" -exec ls -l {} \;
You can use sed to edit the name of a file:
$ echo DOGS_A_SLIDER.jpg | sed 's=_.*$=='
DOGS
Can you count on all of your "B" files being named the same as the corresponding "A" files? That is, you will not have "DOGS_A_SLIDER.jpg" and "CATS_A_SLIDER.jpg" in the same directory. If so, something like the following isn't everything you need, but will contribute to your solution:
$ find . -type f -name "*.jpg" -exec sh -c 'echo "$1" | sed "s=_.*=="' _ {} \;
That particular sed script will do the wrong thing if you have any directory names with underscores in them.
"find . -type f" finds regular files; it runs modestly faster than without the -type. Use "-d" to find directories.
Running the following script:
for i in $(find dir -name "*.jpg"); do
ln -s $i
done
incredibly makes symbolic links for 90% of the files and makes a copy of the remaining 10%. How is that possible?
Edit: what happens afterwards is relevant:
Those are links to images that I rotate through mogrify e.g.
mogrify -rotate 90 link_to_image
It seems that mogrify, when run on a link, silently replaces the link with a copy of the image; a debatable choice, but that's how it behaves.
Skip the first paragraph if you only want to read about processing files with spaces in their names.
It was not clear what the root of the problem was, and our assumption was that it lay in the spaces in the filenames: that files containing them were not being processed correctly.
The real problem was mogrify, which, applied to the created links, processed them and replaced them with real files.
It was not about spaces in the filenames at all.
Processing of files with spaces in their names
That problem is caused by spaces in the names of the files.
You can write something like this:
find dir -name \*.jpg | while IFS= read -r i
do
    ln -s "$i"
done
(IFS= is used here to avoid stripping of leading spaces; thanks to @Alfe for the tip).
Or use xargs.
If it is possible that names contain newlines, it's better to use -print0:
find dir -name \*.jpg -print0 | xargs -0 -n1 ln -s
Of course, you can use other methods also, for example:
find dir -name '*.jpg' -exec ln -s "{}" \;
ln -s "$(find dir -name '*.jpg')" .
(ImageMagick) mogrify applied to a link deletes the link and makes a copy of the image
Try with single quotes:
find dir -name '*.jpg' -exec ln -s "{}" \;
I have M1.jpg M2.jpg ....... M100.jpg in /Users/KanZ/Desktop/Project/Test/
I would like to apply Flip Canvas Vertical to them, saving the results over the old files. How can I write a script for this?
You can do that with convert, with a little help from find so you don't have to write a loop:
find /Users/KanZ/Desktop/Project/Test/ -type f -name "M*.jpg" -exec convert {} -flip {} \;
Explanation:
find /Users/KanZ/Desktop/Project/Test/ - Invoke the find tool and specify the base directory in which to search for files recursively.
-type f - Find only files
-name "M*.jpg" - Find only files with names that start with M and end with .jpg
-exec ... \; - For each such file found, perform the command in ...
convert {} -flip {} - This is the actual command that flips your images. The {} is find syntax; it marks where each found filename is substituted. So here we are telling convert to flip each image vertically with the -flip option while keeping the filename unchanged.
Alternatively:
You can also do it with a loop and globbing:
for file in /Users/KanZ/Desktop/Project/Test/M*.jpg; do convert "$file" -flip "$file"; done
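Since mogrify edits files in place by design, a hedged one-liner alternative (back up first, as it overwrites the originals) would be:
mogrify -flip /Users/KanZ/Desktop/Project/Test/M*.jpg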
So basically, I have a folder with a bunch of subfolders all with over 100 files in them. I want to take all of the mp3 files (really generic extension since I'll have to do this with jpg, etc.) and move them to a new folder in the original directory. So basically the file structure looks like this:
/.../dir/recup1/file1.mp3
/.../dir/recup2/file2.mp3
... etc.
and I want it to look like this:
/.../dir/music/file1.mp3
/.../dir/music/file2.mp3
... etc.
I figured I would use a bash script that looked along these lines:
#!/bin/bash
STR=`find ./ -type f -name \*.mp3`
FILES=(echo $STR | tr ".mp3 " "\n")
for x in $FILES
do
echo "> [$x]"
done
I just have it echo for now, but eventually I would want to use mv to get it to the correct folder. Obviously this doesn't work though because tr sees each character as a delimiter, so if you guys have a better idea I'd appreciate it.
(FYI, I'm running netbook Ubuntu, so if there's a GUI way akin to Windows' search, I would not be against using it)
If the music folder exists, then the following should work:
find /path/to/search -type f -iname "*.mp3" -exec mv {} /path/to/music \;
A -exec command must be terminated with a ; (so you usually need to type \; or ';' to avoid interpretation by the shell) or a +. The difference is that with ;, the command is called once per file; with +, it is called as few times as possible (usually once, but there is a maximum length for a command line, so it might be split up) with all the filenames at once.
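With GNU mv, the -t option names the target directory first, which lets you use the + form here (a hedged example; the ; form works everywhere):
find /path/to/search -type f -iname "*.mp3" -exec mv -t /path/to/music {} +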
You can do it like this:
find /some/dir -type f -iname '*.mp3' -exec mv \{\} /where/to/move/ \;
The \{\} part will be replaced by the found file name/path. The \; part marks the end of the -exec command; it can't be left out.
If you want to print what was found, just add a -print flag like:
find /some/dir -type f -iname '*.mp3' -print -exec mv \{\} /where/to/move/ \;
HTH