bash batch image resize

How can I change this command:
find . -type f -name "*.jpg" | while read i; do convert "$i" -resize 50% "${i%%.jpg*}_tn.jpg"; done
so that it produces tn_FILENAME.jpg files instead of FILENAME_tn.jpg?
Thank you!

You mean like this?
find . -type f -name "*.jpg" | while read i; do [[ "${i##*/}" =~ ^tn_ ]] || convert "$i" -resize 50% "${i%/*}/tn_${i##*/}"; done
${i%/*} is the path with everything after the last slash stripped (i.e. the directory the file is in),
/tn_ adds the tn_ prefix to the filename, and
${i##*/} strips everything up to and including the last slash (i.e. the bare filename).
Paste these three together and you get your result.
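A quick way to convince yourself, with an invented example path:

```shell
i="./photos/vacation/beach.jpg"   # invented example path

dir="${i%/*}"     # strip from the last slash onward -> ./photos/vacation
base="${i##*/}"   # strip through the last slash     -> beach.jpg

echo "${dir}/tn_${base}"    # ./photos/vacation/tn_beach.jpg
```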

Parameter expansion to remove strings with multiple patterns

I'm building a script that sends the "find" command output to a temp file; from there I use "read" to iterate through all the paths and print two fields into a CSV file, one field for the name of the file and the other for the complete path.
find -type f \( -iname \*.mp4 -o -iname \*.mkv \) >> $tempfile
while read -r file; do
printf '%s\n' ${file##*/} ${file} | paste -sd ' ' >> $csvfile
done < $tempfile
rm $tempfile
The problem is in the name field, ${file##*/}. Some files have spaces in their names, which keeps them from being printed correctly into the CSV file. I know I could use ${file//[[:blank:]]/} to remove the spaces, but I also need to keep ${file##*/}, since that expansion strips everything but the bare filename (which goes in the first field of the CSV).
I was looking for a way to somehow combine the two parameter expansions ${file##*/} and ${file//[[:blank:]]/}, but I didn't find anything related. Is it possible to solve this using only parameter expansion? If not, what other solutions could fix this, maybe a regex?
Edit: I will also need to add a 3rd field whose value will depend on a variable.
If you're using GNU find (and possibly other implementations?), it can be simplified a lot:
find dir/ -type f \( -iname "*.mp4" -o -iname "*.mkv" \) \
-printf '"%f","'"${newvar//%/%%}"'","%p"\n' > "$csvfile"
I put quotes around the fields of the CSV output to handle filenames that might contain commas. It'll still have an issue with filenames containing double quotes, though.
If using some other version of find... well, there's no need for a temporary file. Just pipe the output directly to your while loop:
find test1/ -type f \( -iname "*.mp4" -o -iname "*.mkv" \) -print0 |
while IFS= read -d '' -r file; do
name=$(basename "$file")
printf '"%s","%s","%s"\n' "${name//\"/\"\"}" "$newvar" "${file//\"/\"\"}"
done > "$csvfile"
This one will escape double quotes appearing in the filename, so if that's the case with your files, prefer it.
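The ${file//\"/\"\"} substitution is what implements the CSV escaping rule (double every embedded quote). A small sketch with a made-up filename:

```shell
name='my "best" clip.mp4'    # made-up filename containing double quotes

escaped="${name//\"/\"\"}"   # every " becomes "" (bash pattern substitution)

printf '"%s"\n' "$escaped"   # "my ""best"" clip.mp4"
```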

find files and delete by filename parameter

I have a folder with lots of images. In this folder are subfolders containing high resolution images. Images can be .png, .jpg or .gif.
Some images are duplicates, named like a.jpg and a.hi.jpg, or a.b.c.gif and a.b.c.hi.gif. The base names are always different; there will never be both a.gif and a.jpg or a.png, so I guess I don't have to take care of the extension.
These are the same images at different resolutions.
Now I want to write a script to delete all lower-resolution images. But some files, like b.png, have no high-resolution version, so I only want to delete a file if a high-resolution image exists too.
I guess I have to do something like this, but can't figure out how exactly:
find . -type f -name "*" if {FILENAME%hi*} =2 --delete smallest else keep file
Could anyone help? Thanks
Something like the following could do the job:
#!/bin/bash
while IFS= read -r -d '' hi
do
d=$(dirname "$hi")
b=$(basename "$hi")
low="${b//.hi./}"
[[ -e "$d/$low" ]] && echo rm -- "$d/$low" #dry run - if satisfied, remove the echo
done < <(find /some/path -type f -name \*.hi.\* -print0)
how it works:
finds all files with .hi. in their names (not only images; you can extend the find to be more restrictive),
then for each found image:
gets the directory it is in,
and gets the name of the file (without the directory),
in the name, replaces the string .hi. with a single dot (making the "lowres" name),
checks for the existence of the lowres image,
and deletes it if it exists.
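The name surgery can be checked in isolation (the path below is invented). Note that .hi. has to come back as a single dot; if it were deleted outright, a.hi.jpg would map to ajpg rather than a.jpg:

```shell
hi="/some/path/pics/a.b.c.hi.gif"   # invented example path

d=$(dirname "$hi")     # /some/path/pics
b=$(basename "$hi")    # a.b.c.hi.gif
low="${b/.hi./.}"      # a.b.c.gif

echo "$d/$low"         # /some/path/pics/a.b.c.gif
```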
You can use bash extended glob features for this, which you can enable first by
shopt -s extglob
and using the pattern
!(pattern-list)
Matches anything except one of the given patterns.
Now to store the files not containing the string hi
shopt -s extglob
fileList=()
fileList+=( !(*hi*).jpg )
fileList+=( !(*hi*).gif )
fileList+=( !(*hi*).png )
You can print once the array to see if it lists all the files you need as
printf "%s\n" "${fileList[#]}"
and to delete those files do
for eachfile in "${fileList[@]}"; do
rm -v -- "$eachfile"
done
(or) as Benjamin.W suggested in the comments below, do
rm -v -- "${fileList[@]}"
Now I want to write a script to delete all lower resolution images
This script could be used for that:
find /path/to/dir -type f \( -iname '*.hi.png' -or -iname '*.hi.gif' -or -iname '*.hi.jpg' \) | while read F; do LOWRES="$(echo "$F" | rev | cut -c7- | rev)$(echo "$F" | rev | cut -c 1-3 | rev)"; if [ -f "$LOWRES" ]; then echo rm -fv -- "$LOWRES"; fi; done
You can run it to see what files will be removed first. If you're ok with results then remove echo before rm command.
Here is the non-one line version, but a script:
#!/bin/sh
find /path/to/dir -type f \( -iname '*.hi.png' -or -iname '*.hi.gif' -or -iname '*.hi.jpg' \) |
while read F; do
NAME="$(echo "$F" | rev | cut -c7- | rev)"
EXTENSION="$(echo "$F" | rev | cut -c 1-3 | rev)"
LOWRES="$NAME$EXTENSION"
if [ -f "$LOWRES" ]; then
echo rm -fv -- "$LOWRES"
fi
done
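The rev | cut pair works because every matched name, reversed, starts with the reversed extension plus ".ih" (6 characters, e.g. gnp.ih); cut -c7- drops those and cut -c1-3 keeps the extension. Checking with an invented name:

```shell
F="dir/a.hi.png"                            # invented path

NAME=$(echo "$F" | rev | cut -c7- | rev)    # dir/a.  (drops the trailing "hi.png")
EXT=$(echo "$F" | rev | cut -c1-3 | rev)    # png

echo "$NAME$EXT"                            # dir/a.png
```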

Bash convert resize recursively preserving filenames

I have images in subfolders that need to be limited in size (720 max width or 1100 max height). Their filenames must be preserved. I started with:
for img in *.jpg; do filename=${img%.*}; convert -resize 720x1100\> "$filename.jpg" "$filename.jpg"; done
which works within each directory, but have a lot of subfolders with these images. Tried find . -iname "*.jpg" -exec cat {} but it did not create a list as expected.
This also didn't work:
grep *.jpg | while read line ; do `for img in *.jpg; do filename=${img%.jpg}; convert -resize 720x1100\> "$filename.jpg" "$filename.jpg"`; done
Neither did this:
find . -iname '*jpg' -print0 | while IFS= read -r -d $'\0' line; do convert -resize 720x1100\> $line; done
which gives me error message "convert: no images defined." And then:
find . -iname '*.jpg' -print0 | xargs -0 -I{} convert -resize 720x1100\> {}
gives me the same error message.
It seems you're looking for simply this:
find /path/to/dir -name '*.jpg' -exec mogrify -resize 720x1100\> {} \;
In your examples, you strip the .jpg extension and then add it back. There's no need to strip it at all, which simplifies things a lot.
Also, convert filename filename is really the same as mogrify filename. mogrify is part of ImageMagick; it's useful for modifying files in place, overwriting the original file. convert is useful for creating new files, preserving the originals.
Since all of the subdirectories are two levels down, I found this worked:
for img in **/*/*.jpg ; do filename=${img%.*}; convert -resize 720x1100\> "$filename.jpg" "$filename.jpg"; done
Thanks to @pjh for getting me started. This also worked:
shopt -s globstar ; for img in */*/*.jpg ; do filename=${img%.*}; convert -resize 720x1100\> "$filename.jpg" "$filename.jpg"; done
I got the error message "-bash: shopt: globstar: invalid shell option name", but all of the images larger than specified were still resized, with filenames preserved.
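For reference, globstar is a bash 4.0+ option (the "invalid shell option name" error is typical of the bash 3.2 that macOS ships), and it is what makes ** recurse to any depth; without it, ** behaves like a plain *. A disposable check:

```shell
shopt -s globstar                 # needs bash >= 4.0

tmp=$(mktemp -d)                  # throwaway directory tree
mkdir -p "$tmp/x/y"
touch "$tmp/top.jpg" "$tmp/x/y/deep.jpg"

# **/*.jpg matches at every depth, not just a fixed number of levels
( cd "$tmp" && printf '%s\n' **/*.jpg )   # top.jpg, then x/y/deep.jpg
rm -r "$tmp"
```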

How to automate conversion of images

I can convert an image like this:
convert -resize 50% foo.jpg foo_50.jpg
How can I automate such a command to convert all the images in a folder?
You can assume every image has .jpg extension.
A solution easily adaptable to automate the conversion of all the images inside the subdirectories of the working directory is preferable.
You can use a for loop with pattern expansion:
for img in */*.jpg ; do
convert -resize 50% "$img" "${img%.jpg}"_50.jpg
done
${variable%pattern} removes the pattern from the right side of the $variable.
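A one-minute check of that expansion (names invented), including the %% variant for comparison:

```shell
img="summer/beach.jpg"        # invented name

echo "${img%.jpg}"_50.jpg     # summer/beach_50.jpg

f="archive.tar.gz"
echo "${f%.*}"    # archive.tar  (% removes the shortest matching suffix)
echo "${f%%.*}"   # archive      (%% removes the longest matching suffix)
```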
You can use find -exec:
find -type f -name '*.jpg' -exec \
bash -c 'convert -resize 50% "$0" "${0%.jpg}"_50.jpg' {} \;
find -type f -name '*.jpg' finds all .jpg files (including those in subdirectories) and hands it to the command after -exec, where it can be referenced using {}.
Because we want to use parameter expansion, we can't use -exec convert -resize directly; we have to call bash -c and supply {} as a positional parameter to it ($0 inside the command). \; marks the end of the -exec command.
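You can try the positional-parameter trick without find at all; the path here is invented:

```shell
# {} is passed as $0 to the inner shell, where parameter expansion works on it
bash -c 'echo "${0%.jpg}"_50.jpg' ./photos/cat.jpg    # ./photos/cat_50.jpg
```

Some prefer bash -c '... "$1" ...' _ {} so that $0 keeps its conventional role as the script name, but both forms work here.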
You can also try this (less elegant) one-liner using ls+awk:
ls *.jpg | awk -F '.' '{print "convert -resize 50% "$0" "$1"_50.jpg"}' | sh
This assumes that all the .jpg files are in the current directory. Before running it, try removing the | sh to see what commands would be executed.

How can I list all unique file names without their extensions in bash?

I have a task where I need to move a bunch of files from one directory to another. I need to move all files with the same base name (i.e. blah.pdf, blah.txt, blah.html, etc.) at the same time, and I can move one such set every four minutes. I had a short bash script to move a single file at a time at these intervals, but the new name requirement is throwing me off.
My old script is:
find ./ -maxdepth 1 -type f | while read line; do mv "$line" ~/target_dir/; echo "$line"; sleep 240; done
For the new script, I basically just need to replace find ./ -maxdepth 1 -type f
with a list of unique file names without their extensions. I can then just replace do mv "$line" ~/target_dir/; with do mv "$line"* ~/target_dir/; (the * must stay outside the quotes so it still globs).
So, with all of that said: what's a good way to get a unique list of file names without their extensions in a bash script? I was thinking about using a regex to grab the names and throwing them into a hash for uniqueness, but I'm hoping there's an easier/better/quicker way. Ideas?
A weird-named files tolerant one-liner could be:
find . -maxdepth 1 -type f -and -iname 'blah*' -print0 | xargs -0 -I {} mv {} ~/target/dir
If the files can start with multiple prefixes, you can use logic operators in find. For example, to move blah.* and foo.*, use:
find . -maxdepth 1 -type f -and \( -iname 'blah.*' -or -iname 'foo.*' \) -print0 | xargs -0 -I {} mv {} ~/target/dir
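The -print0 | xargs -0 pairing is what keeps names with spaces (or even newlines) intact. A disposable test with invented names:

```shell
tmp=$(mktemp -d)                 # throwaway source and target dirs
mkdir "$tmp/src" "$tmp/dst"
touch "$tmp/src/blah one.pdf" "$tmp/src/blah two.txt"

# NUL-delimited pipeline: each name arrives whole, spaces and all
find "$tmp/src" -maxdepth 1 -type f -iname 'blah*' -print0 |
    xargs -0 -I {} mv {} "$tmp/dst"

ls "$tmp/dst"    # blah one.pdf  blah two.txt
rm -r "$tmp"
```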
EDIT
Updated after comment.
Here's how I'd do it:
find ./ -type f -printf '%f\n' | sed 's/\..*//' | sort | uniq | ( while read filename ; do find . -type f -iname "$filename"'*' -exec mv {} /dest/dir \; ; sleep 240; done )
Perhaps it needs some explanation:
find ./ -type f -printf '%f\n': find all files and print just their name, followed by a newline. If you don't want to look in subdirectories, this can be substituted by a simple ls;
sed 's/\..*//': strip the file extension by removing everything after the first dot. Both foo.tar and foo.tar.gz are transformed into foo;
sort | uniq: sort the filenames just found and remove duplicates;
(: open a subshell:
while read filename: read a line and put it into the $filename variable;
find . -type f -iname "$filename"'*' -exec mv {} /dest/dir \;: find in the current directory (find .) all the files (-type f) whose name starts with the value in filename (-iname "$filename"'*', this works also for files containing whitespaces in their name) and execute the mv command on each one (-exec mv {} /dest/dir \;)
sleep 240: sleep
): end of subshell.
Add -maxdepth 1 as argument to find as you see fit for your requirements.
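The sed step deserves a test on a double-extension name, since it deletes from the first dot onward (which is why foo.tar and foo.tar.gz collapse to the same entry):

```shell
printf '%s\n' foo.tar foo.tar.gz report.pdf |
    sed 's/\..*//' | sort | uniq
# foo
# report
```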
Nevermind, I'm dumb. There's a uniq command. Duh. New working script is: find ./ -maxdepth 1 -type f | sed -e 's/\.[a-zA-Z]*$//' | sort | uniq | while read line; do mv "$line"* ~/target_dir/; echo "$line"; sleep 240; done
