for parentDir in *
do
    cd "$parentDir"
    for subDir in *
    do
        cd "$subDir"
        for file in *.*
        do
            convert "$file" -crop 120x95 summary_"$file"
            convert "$file" -crop 160x225 detail_"$file"
        done
        mkdir detail
        mkdir summary
        mv summary_* summary/
        mv detail_* detail/
        cd ..
    done
    cd ..
done
Here is my script. I need a way to crop the image without resizing, i.e. get rid of the extra surrounding area.
For example: a 1200 x 1500 image ----> 120px x 90px, taken from the center.
If you are just trying to crop each image to one centered part, then use
convert input.suffix -gravity center -crop WxH+0+0 +repage output.suffix
Otherwise, you will get many WxH crops for each image.
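Folded into the inner loop of the script from the question, the center crop could look like this (a sketch, assuming the same 120x95 and 160x225 sizes):
for file in *.*
do
    # One centered WxH crop per image; +repage discards the page
    # offset left over from cropping
    convert "$file" -gravity center -crop 120x95+0+0 +repage summary_"$file"
    convert "$file" -gravity center -crop 160x225+0+0 +repage detail_"$file"
done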
Thanks to @fmw42's answer, I've made this script for use with my file manager Dolphin; it can be adapted for others as well:
#!/usr/bin/env bash
# DEPENDS: imagemagick (inc. convert)
OLDIFS=$IFS
IFS="
"
# Get dimensions
WH="$(kdialog --title "Image Dimensions" --inputbox "Enter image width and height - e.g. 300x400:")"
# If no size was provided, bail out
if [ -z "$WH" ]
then
    exit 1
fi
# The file manager passes the selected files as arguments
for filename in "$@"
do
    name=${filename%.*}
    ext=${filename##*.}
    convert "$filename" -gravity center -crop "$WH"+0+0 +repage "${name}"_cropped."${ext}"
done
IFS=$OLDIFS
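Saved as, say, crop-center.sh (a name chosen here for illustration) and made executable, it can also be invoked directly from a shell, with the files to crop passed as arguments:
chmod +x crop-center.sh
./crop-center.sh photo1.jpg photo2.png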
Another imagemagick-based solution.
Here is a script version using mogrify for bulk image manipulation, instead of convert, which works on individual images:
for parentDir in *
do
    cd "$parentDir"
    for subDir in *
    do
        cd "$subDir"
        mkdir detail
        cp * detail/
        mogrify -gravity center -crop 160x225+0+0 +repage detail/*
        mkdir summary
        # (cp warns about skipping the detail/ directory; harmless)
        cp * summary/
        mogrify -gravity center -crop 120x95+0+0 +repage summary/*
        cd ..
    done
    cd ..
done
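As an aside, mogrify also has a -path option that writes its results to a separate directory, which would avoid the cp step inside each subdirectory (a sketch under that assumption, same sizes as above):
mkdir detail summary
mogrify -path detail -gravity center -crop 160x225+0+0 +repage *.*
mogrify -path summary -gravity center -crop 120x95+0+0 +repage *.*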
I managed (with the help of SO) to make perfect png snippets from a pdf file with graphicsmagick. My pdf contains text and formulas, each "snippet" on a single page. My command trims the content of a page down to the actual content and finally scales it up to 2000 pixels width.
Until now, I have needed to repeat that command for each single page in every pdf. I am wondering how to automate this. I think I could try a loop that repeats the command for every page i until the last page.
Assume file1.pdf is in my current working directory.
gm convert -density 300x300 file1.pdf[0] -trim -resize 2000x file1_page1.png
gm convert -density 300x300 file1.pdf[1] -trim -resize 2000x file1_page2.png
gm convert -density 300x300 file1.pdf[2] -trim -resize 2000x file1_page3.png
...
How can I set a counter and run a loop for every page in my document?
You are in luck. GraphicsMagick knows how to do that for you:
gm convert -density 300x300 input.pdf -trim -resize 2000x +adjoin output-%d.png
If you are OK using ImageMagick instead, you can set the starting output file number to 1 instead of 0, and you don't need the +adjoin:
convert -density 300x300 input.pdf -scene 1 -trim -resize 2000x output-%d.png
Or, if you want them all done in parallel, use GNU Parallel:
parallel gm convert -density 300x300 {} -trim -resize 2000x output-{#}.png ::: $(identify input.pdf | awk '{print $1}')
for file in *.pdf
do
    pages=$(identify "$file" | wc -l)
    for (( i=0; i<pages; i++ ))
    do
        name=$(sed "s/\.pdf$/$i.png/" <<< "$file")
        gm convert -density 300x300 "$file[$i]" -trim -resize 2000x "$name"
    done
done
Try this one.
It will convert every page in every *.pdf file to a .png file.
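One caveat: the sed substitution above turns file1.pdf, page 0 into file10.png, which is ambiguous. A variant with printf (a sketch, using the file1_page1.png naming from the question) avoids that:
for file in *.pdf
do
    pages=$(identify "$file" | wc -l)
    for (( i=0; i<pages; i++ ))
    do
        # Name pages starting from 1, e.g. file1_page1.png
        printf -v name '%s_page%d.png' "${file%.pdf}" "$((i+1))"
        gm convert -density 300x300 "$file[$i]" -trim -resize 2000x "$name"
    done
done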
I am trying to convert an entire folder to grayscale, using ImageMagick.
convert *.jpg -colorspace Gray -separate -average
is met with this error:
convert: `-average' # error/convert.c/ConvertImageCommand/3290.
What is the correct command for this?
If you have lots of files to process, use mogrify:
magick mogrify -colorspace gray *.jpg
If you have tens of thousands of images and a multi-core CPU, you can get them all done in parallel with GNU Parallel:
parallel -X magick mogrify -colorspace gray ::: *.jpg
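If you want to keep the originals untouched, mogrify's -path option can write the grayscale copies to a separate directory instead of overwriting in place (a sketch, assuming an output directory named gray):
mkdir -p gray
magick mogrify -path gray -colorspace gray *.jpg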
Also, the following can be used in a script - for the context menu of file managers like Dolphin, Nautilus, Nemo, Thunar, etc.:
for filename in "$@"; do
    name="${filename%.*}"
    ext="${filename##*.}"
    # Work on a copy so the original file is preserved
    cp "$filename" "$name"-grayscale."$ext"
    mogrify -colorspace gray "$name"-grayscale."$ext"
done
I have a script that watermarks my images using imagemagick. I have set it up as a bash job, but it watermarks every picture every time. I want to exclude pictures that are already watermarked, but I don't have the option to move all my watermarked pictures out of a certain folder. Folder A contains the original images. The script scans folder A for png, jpg and gif images, watermarks them, then moves the original pictures to a subfolder. Each time my script scans folder A, it re-watermarks the files that are already watermarked. And I cannot change the names of the files. Is there a way to track the watermarked files, by adding them to a file database or something? My script is as follows:
#!/bin/bash
savedir=".originals"
for image in *png *jpg *gif; do
    if [ -s "$image" ]; then  # non-zero file size
        width=$(identify -format %w "$image")
        convert -background '#0008' -fill white -gravity center \
                -size "${width}x30" caption:'watermark' \
                "$image" +swap -gravity south -composite new-"$image"
        mv -f "$image" "$savedir"
        mv -f new-"$image" "$image"
        echo "watermarked $image successfully"
    fi
done
Personally, I would prefer not to require some other, external database of names of images I have watermarked - what if that file gets separated from the images, what if they are moved to a different folder hierarchy, or renamed?
My preference would be to set a comment inside the image itself that identifies it as watermarked - then the information travels around with the image. So, if I watermark an image, I set its comment to say so:
convert image.jpg -set comment "Watermarked" image.[jpg|gif|png]
Then, before I watermark, I can check with ImageMagick's identify to see if it is done or not:
identify -verbose image.jpg | grep "comment:"
Watermarked
Obviously, you could be a bit more sophisticated, and extract the current comment and add the "Watermarked" part in without overwriting anything that may already be in there. Or you could set the IPTC author/copyright holder or copyrighted information of the image when you watermark it, and use that as a marker of whether or not the image is watermarked.
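A minimal sketch of how the check and the marker could be folded into the question's script (assuming identify's %c format escape for the comment, and the same watermark command as above):
#!/bin/bash
savedir=".originals"
for image in *png *jpg *gif; do
    [ -s "$image" ] || continue
    # %c prints the embedded comment; skip images we already marked
    if identify -format '%c' "$image" | grep -q "Watermarked"; then
        continue
    fi
    width=$(identify -format %w "$image")
    convert -background '#0008' -fill white -gravity center \
            -size "${width}x30" caption:'watermark' \
            "$image" +swap -gravity south -composite \
            -set comment "Watermarked" new-"$image"
    mv -f "$image" "$savedir"
    mv -f new-"$image" "$image"
done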
The following is an example of how you can modify your current script to add a kind of local database file to keep track of the processed files:
#!/bin/bash
savedir=".originals"
PROCESSED_FILES=.processed
# Create the database file the first time if it doesn't
# exist yet, avoiding "file not found" problems
touch "$PROCESSED_FILES"
for image in *png *jpg *gif; do
    # non-zero file size
    if [ -s "$image" ]; then
        # Look the file up in the database (-F fixed string,
        # -x whole line, -q no output)
        grep -Fxq "$image" "$PROCESSED_FILES"
        # Check the result of the previous command ($? is a built-in bash
        # variable that holds it); if grep returned non-zero, the file
        # has not been processed yet
        if [ $? -ne 0 ]; then
            # Process/watermark the file...
            width=$(identify -format %w "$image")
            convert -background '#0008' -fill white -gravity center -size "${width}x30" caption:'watermark' "$image" +swap -gravity south -composite new-"$image"
            mv -f "$image" "$savedir"
            mv -f new-"$image" "$image"
            echo "watermarked $image successfully"
            # Append the file name to the list of processed files
            echo "$image" >> "$PROCESSED_FILES"
        fi
    fi
done
I'm trying to execute a command on each .jpg file in a folder:
for i in ls *.jpg; do convert $i -resize 400x511 -gravity center -background white -extent 400x511 $i; done
But only the first .jpg is "done". What is wrong?
First of all, you don't need ls here (the word ls just becomes a literal item in the loop's word list; it is never executed), and second, you need to quote the variable.
for i in *.jpg; do
    convert "$i" -resize 400x511 -gravity center -background white -extent 400x511 "$i"
done
I want to resize multiple .jpg and .png images using a bash shell script.
The following script works fine, but I don't want to write the same thing twice.
for image in *.jpg; do
    mogrify -resize x1000 "${image}"
done
for image in *.png; do
    mogrify -resize x1000 "${image}"
done
How can I filter jpg and png images at once?
shopt -s nullglob  # make unmatched patterns expand to nothing rather than themselves
for image in *.jpg *.png; do
    mogrify -resize x1000 "${image}"
done
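If the images are spread across nested directories, a find-based variant (a sketch, not part of the original answer) handles both extensions, case-insensitively, in one pass:
find . -type f \( -iname '*.jpg' -o -iname '*.png' \) -exec mogrify -resize x1000 {} +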