I have a large and messy collection of files--hey, who doesn't--and some of these are large JPGs (large being an arbitrary threshold, say 2.5MB) that I want to rename: I want to change the extension from *.jpg to *.jpeg.
I'd love to do this with a shell script. I'm running BASH 3.2.39(1), and I have a feeling this is a "simple" task with find; alas, I find find's syntax difficult to remember and the man page impossible to read.
Any and all help will be most appreciated.
Finding and renaming large files could be done like this:
find . -size +2500k -exec rename -s .jpg .jpeg '{}' ';'
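Note that the -s flag belongs to some rename implementations (such as the one Homebrew ships) but not others. If your rename lacks it, a minimal sketch using only find and sh, no rename needed, might look like this:
find . -size +2500k -name '*.jpg' -exec sh -c 'mv "$1" "${1%.jpg}.jpeg"' _ {} \;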
What OS are you using? In most repositories there is an app called mmv which is perfect for these kinds of things.
usage:
mmv \*.jpg \#1.jpeg
Install rename (a standard tool on your Linux installation, or via Homebrew on a Mac), then:
rename -s .jpg .jpeg *
or, if you have files in subdirectories too:
rename -s .jpg .jpeg $(find . -name '*.jpg')
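Note that the command-substitution form above breaks on filenames containing spaces. A safer sketch, still assuming a rename that understands -s, hands the names to rename directly from find:
find . -name '*.jpg' -exec rename -s .jpg .jpeg {} +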
for i in *.jpg
do
    new_name=$(echo "$i" | sed 's/\.jpg$/.jpeg/')
    mv "$i" "$new_name"
done
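As an aside, bash parameter expansion can do the same rename without spawning sed; a minimal sketch:
for i in *.jpg
do
    mv "$i" "${i%.jpg}.jpeg"
done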
I've found a solution that claims to handle one folder, but I have a deep folder hierarchy of sheet music that I'd like to batch convert from png to pdf. What would a solution look like?
I will run into a further problem down the line, which may complicate things. Maybe I should write a script? (I'm a total n00b, fyi.)
The "further problem" is that some of my sheet music spans more than one page, so if the script could parse filenames that include "1of2" and "2of2" and turn them into a single pdf, that'd be neat.
What are my options here?
Thank you so much.
Updated Answer
As an alternative, the following should be faster (as it does the conversions in parallel) and also able to handle larger numbers of files:
find . -name \*.png -print0 | parallel -0 convert {} {.}.pdf
It uses GNU Parallel which is readily available on Linux/Unix and which can be simply installed on OSX with homebrew using:
brew install parallel
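If you want to preview what would run before committing to it, GNU Parallel's --dry-run flag prints the generated commands instead of executing them:
find . -name \*.png -print0 | parallel --dry-run -0 convert {} {.}.pdf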
Original Answer (as accepted)
If you have bash version 4 or better, you can use extended globbing to recurse directories and do your job very simply:
First enable extended globbing with:
shopt -s globstar
Then recursively convert PNGs to PDFs:
mogrify -format pdf **/*.png
You can loop over png files in a folder hierarchy, and process each one as follows:
find /path/to/your/files -name '*.png' |
while IFS= read -r f; do
g=$(basename "$f" .png).pdf
your_conversion_program <"$f" >"$g"
done
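Note that basename strips the directory part, so all the PDFs land in the current directory. If you'd rather keep each PDF next to its source PNG, a small variation of the same loop (your_conversion_program is still a placeholder) is:
find /path/to/your/files -name '*.png' |
while IFS= read -r f; do
    your_conversion_program <"$f" >"${f%.png}.pdf"
done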
To merge PDFs, you could use pdftk. You need to find all pdf files that have 1of2 and 2of2 in their names, and run pdftk on those:
find /path/to/your/files -name '*1of2*.pdf' |
while IFS= read -r f1; do
f2=${f1/1of2/2of2} # name of second file
([ -f "$f1" ] && [ -f "$f2" ]) || continue # check both exist
g=${f1/1of2/} # name of output file (drop the "1of2")
(! [ -f "$g" ]) || continue # if output exists, skip
pdftk "$f1" "$f2" output "$g"
done
See:
bash string substitution
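For example, here is how the ${variable/pattern/replacement} form used above behaves (filenames invented for illustration):
f1="sonata_1of2.pdf"
echo "${f1/1of2/2of2}"    # sonata_2of2.pdf
echo "${f1/1of2/}"        # sonata_.pdf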
Regarding a deep folder hierarchy, you may use find with the -exec option.
First you find all the PNGs in every subfolder and convert them to PDF:
find . -name '*.png' -exec convert {} {}.pdf \;
You'll get new PDF files with the extension ".png.pdf" (image.png would be converted to image.png.pdf, for example).
To correct the extensions you may run find again, this time with rename after the -exec option:
find . -name '*.png.pdf' -exec rename 's/\.png\.pdf$/.pdf/' {} \;
If you want to delete the source PNG files, you may use this command, which recursively deletes all files with the ".png" extension in every subfolder:
find . -name '*.png' -exec rm {} \;
If I understand correctly:
you want to concatenate all your png files from a deep folder structure into one single pdf.
so...
ensure your PNGs are ordered as you want within your folders
be aware that you can feed the output of a command (say, a search one ;) ) to convert as its input, and tell convert to output a single pdf.
General syntax of convert:
convert 1.png 2.png ... global_png.pdf
The following command:
convert `find . -name '*.png' -print` global_png.pdf
searches for png files in the folders below the current directory
feeds the output of find to convert as its argument list; this is done by backquoting the find command
convert does its work and outputs to a single pdf file
(this very simple command line only works with unspaced filenames; don't forget to quote the wildcard, and to backquote the find command ;) )
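If your filenames do contain spaces, one hedged workaround is to pass them NUL-separated (sort -z needs GNU sort, and this assumes the whole list fits into a single xargs invocation):
find . -name '*.png' -print0 | sort -z | xargs -0 sh -c 'convert "$@" global_png.pdf' _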
[edit] Care...
be sure of what you are doing.
if you delete your png files, you will simply lose your original sources...
that could be very bad practice...
using convert without a judicious -quality output option can create an enormous pdf file... and you might have to re-convert with -quality "60", for instance...
so keep your original sources until you no longer need them
Running the following script:
for i in $(find dir -name "*.jpg"); do
ln -s $i
done
incredibly makes symbolic links for 90% of the files and makes a copy of the remaining 10%. How is that possible?
Edit: what happens afterwards is relevant:
Those are links to images that I rotate with mogrify, e.g.
mogrify -rotate 90 link_to_image
It seems that mogrify on a link silently makes a copy of the image (a debatable choice, but that's what it is).
Skip the first paragraph if you just want to know about processing files with spaces in their names.
It was not clear what the root of the problem was, and our assumption was that spaces in the filenames were to blame: that files containing them were not being processed correctly.
The real problem was mogrify, which, when applied to the created links, processed them and replaced them with real files. It was not about spaces in filenames.
Processing of files with spaces in their names
Such failures are caused by spaces in the names of the files.
You can write something like this:
find dir -name \*.jpg | while IFS= read -r i
do
ln -s "$i"
done
(IFS= is used here to avoid stripping leading spaces, and -r keeps backslashes literal; thanks to @Alfe for the tip).
Or use xargs.
If it is possible that names contain newlines ("\n"), it's better to use -print0:
find dir -name \*.jpg -print0 | xargs -0 -n1 ln -s
Of course, you can use other methods as well, for example:
find dir -name '*.jpg' -exec ln -s "{}" \;
ln -s "$(find dir -name '*.jpg')" .
(note that the latter only behaves sensibly when find matches a single file, since the entire result becomes one argument)
(ImageMagick) mogrify, applied to a link, deletes the link and makes a copy of the image
Try with single quotes:
find dir -name '*.jpg' -exec ln -s "{}" \;
Recently I frigged up my external hard drive with my photos on it (most are on DVD anyway, but..) with some partition friggery.
Fortunately I was able to put things back together with PhotoRec, another Unix partition utility, and PDisk.
PhotoRec returned over one thousand folders chock-full of anything from .txt files to important .NEFs.
So I tried to make the sorting easier by using unix, since the OSX Finder would simply crumble under such requests as selecting and deleting a billion .txt files.
But I ran into some BS when I tried to find and delete txt files, or find and move all jpegs recursively into a new folder called jpegs. I am a unix noob, so I need some assistance please.
Here is what I did in bash. (I am in the directory that ls would list all the folders and files I need to act upon).
find . -name *.txt | rm
or
sudo find . -name *.txt | rm -f
So it's giving me some BS that I need to unlink the files. Whatever.
I need to find all .txt files recursively and delete them, preferably verbosely.
You can't pipe filenames to rm. You need to use xargs instead. Also, remember to quote the file pattern "*.txt", or the shell will expand it before find sees it.
find . -name "*.txt" | xargs rm
find . -name "*.txt" -exec rm {} \;
$ find . -name "*.txt" -type f -delete
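Since the question asked for verbose deletion, one option (assuming a find that supports -delete, as on GNU and modern BSDs/macOS) is to print each name as it is removed; a NUL-separated xargs pipeline with rm -v works too:
find . -name "*.txt" -type f -print -delete
find . -name "*.txt" -type f -print0 | xargs -0 rm -v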
I have a stack of hundreds of pictures and I want to use pngcrush to reduce their file sizes.
I know how to crush one file in the terminal, but all over the web I find fragments of explanations that assume prior knowledge.
Can someone please explain how to do it clearly?
Thanks
Shani
You can use the following script:
#!/bin/bash
# uncomment the following line for more aggressive but slower compression
# pngcrush_options="-reduce -brute -l9"
find . -name '*.png' -print | while IFS= read -r f; do
    # -e writes the crushed copy next to the original with a .pngcrushed extension
    pngcrush $pngcrush_options -e '.pngcrushed' "$f"
    # replace the original with the crushed version
    mv "${f%.png}.pngcrushed" "$f"
done
Current versions of pngcrush support this functionality out of the box.
(I am using pngcrush 1.7.81)
pngcrush -dir outputFolder inputFolder/*.png
will create "outputFolder" if it does not exist and process all the .png files in "inputFolder", placing them in "outputFolder".
Obviously you can add other options e.g.
pngcrush -dir outputFolder -reduce -brute -l9 inputFolder/*.png
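Note that the inputFolder/*.png glob only covers a single directory level. To recurse, one hedged option is to drive pngcrush from find; be aware that -dir flattens the output, so identically named files from different subfolders would collide:
find inputFolder -name '*.png' -exec pngcrush -dir outputFolder -reduce -brute -l9 {} +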
As of 2023, there are better tools to optimize png images, such as OptiPNG
install
sudo apt-get install optipng
use for one picture
optipng image.png
use for all pictures in a folder
find /path/to/files/ -name '*.png' -exec optipng -o7 {} \;
Optionally, the -o option sets the optimization level, from 1 to 7, where 7 is the maximum compression level for the image:
-o7
The highest-rated fix appears dangerous to me; it started compressing all png files on my iMac. What is needed is a command restricted to a specified directory. I am no UNIX expert; I undid the new files by searching for all files ending in .pngcrushed and deleting them.
I know this is probably elementary to unix people, but I haven't found a straightforward answer online.
I have a directory with sub-directories. Some of these sub-dirs have .mov files in them. I want to consolidate all the movs to a single directory. I don't need to worry about file naming conflicts because the files are from a digital camera and it names the files incrementally, but divides them into daily folders.
What is the Unix-fu for grabbing all these files and copying (or even better, moving them) to a directory in my home folder?
Thanks.
How about this?
find "$SOURCE_DIRECTORY" -type f -name '*.mov' -exec mv '{}' "$TARGET_DIRECTORY" ';'
If the source and target directories do not overlap this should work fine.
EDIT:
BTW, if you have mixed-case extensions (x.mov, y.Mov, Z.MOV) as is the case with many cameras, this would be better. It uses -iname which is case-insensitive when matching:
find "$SOURCE_DIRECTORY" -type f -iname '*.mov' -exec mv '{}' "$TARGET_DIRECTORY" ';'
Make sure to replace the $SOURCE_DIRECTORY and $TARGET_DIRECTORY variables with the actual directories, and make sure they do not overlap (i.e. the target must not be somewhere under the source).
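If you want extra safety against silently overwriting files that happen to share a name, both GNU and BSD mv support -n (no-clobber):
find "$SOURCE_DIRECTORY" -type f -iname '*.mov' -exec mv -n '{}' "$TARGET_DIRECTORY" ';'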
EDIT 2:
PS: I just noticed that khachik caught this one with his edit
mv `find . -name "*.mov" | xargs` OUTPUTDIR/
Update after thkala's comment:
find . -iname "*.mov" | while read line; do mv "$line" OUTPUTDIR/; done
If you need to cope with weird filenames (spaces, special characters), try this:
$ cd <source parent directory>
$ find -name '*.mov' -print0 | xargs -0 echo mv -v -t <target directory>
Remove the "echo" above to actually do the move, rather than print what would happen.
"mv -v" gives verbose output, "mv -t ..." specifies the target directory (possibly GNU-specific).
"-print0" and "-0" are extensions to cope with weird filenames. On non-GNU systems you might need to remove those options, which will result in newline-separated data. This will still work on filenames with spaces, but not filenames with newlines (yes, it's possible).