I have a series of images (tens of thousands) that are all object-on-white-background compositions. The images are, let's say, 1000x1000 and the objects within them can be any size. I want to get the actual dimensions of the object, which could be something like 400x500 or 350x520. The rest of the image is just white space (though there is some dirtiness to the white, e.g. 2% off-white).
I just need analysis of the images; I don't need to export or save data.
QUESTION: Using ImageMagick, what's the most performant way to get the actual object dimensions on a clean background? I just wonder if there is a better ImageMagick tool to use than trimImage().
Here is one working method, roughly:
(PHP/Laravel project)
$path = 'images/myTestImage.jpg';
$imagick = new \Imagick(realpath($path));
$originalDimensions = $imagick->getImageWidth()."x".$imagick->getImageHeight();
// Using imagemagick trimImage(), with fuzz value to take care of dirty white pixels.
$imagick->trimImage(5000);
$newDimensions = $imagick->getImageWidth()."x".$imagick->getImageHeight();
Example image:
Here are some ideas. I don't know enough about your environment or images to say which, if any, will work best or be appropriate for you.
You may be able to use the JPEG "shrink-on-load" feature to reduce the demand on I/O, memory and CPU cycles while loading your images - especially if the edges of your shapes are fuzzy by a couple of percent, the reduced size will not make much difference to the finding of the shape outlines. I'll demonstrate by creating a test image to trim:
convert -size 2000x2000 xc:black -fill white -draw "rectangle 100,100 200,200" image.jpg
Now get trim box:
time convert image.jpg -format %@ info:
112x112+96+96
real 0m0.128s
user 0m0.803s
sys 0m0.097s
Now do same thing again but with "shrink-on-load" to 1/4 the width and height:
time convert -define jpeg:size=500x500 image.jpg -format %@ info:
28x28+24+24
real 0m0.040s
user 0m0.054s
sys 0m0.014s
Note that it is 3-4x faster, and also that the features (including the trim box) are correspondingly reduced.
You may be able to use vips, though you would have to add the corresponding tag to get John's (the author) input. If vips can do this, it may be an excellent option. I think you would need to use find_trim() which is documented here.
You may be able to "shell out" using system() to get a tool such as GNU Parallel to use multiple CPU cores to process your images in parallel. It's a quick, easy install. You can just try the following in your Terminal, without PHP, to get an idea how fast it is:
parallel --bar 'magick -format "%f:%@\n" {} info:' ::: *.jpg
If you have too many images for your command line, you can pump the names into it this way:
find . -name \*.jpg -print0 | parallel -0 --bar 'magick -format "%f:%@\n" {} info:'
You could write some Python that takes a load of filenames to check with OpenCV which will be really fast, and then get GNU Parallel to call that Python with as many filenames as your shell can handle (-X option) to amortise the overhead of starting the Python interpreter over as many images as possible. I mean this:
parallel -X script.py ::: *.jpg
Then your Python would be of the form:
for f in sys.argv[1:]:
    image = cv2.imread(... probably greyscale)
    cv2.findContours ... à la Rotem https://stackoverflow.com/a/60833042/2836621
Or, rather than findContours() you can probably do it faster as suggested by Divakar here.
Related
Goal
I have hundreds of images that all look similar to this one here:
I simply want to use the green screen to create a mask for each image that looks like this one here (the border should preferably be smoothed out a little bit):
Here is the original image if you want to do tests: https://mega.nz/#!0YJnzAJR!GRYI4oNWcsKztHGoK7e4uIv_GvXBjMvyry7cPmyRpRA
What I've tried
I found this post where the user used ImageMagick to achieve chroma keying.
for i in *; do convert $i -colorspace HSV -separate +channel \
\( -clone 0 -background none -fuzz 3% +transparent grey43 \) \
\( -clone 1 -background none -fuzz 10% -transparent grey100 \) \
-delete 0,1 -alpha extract -compose Multiply -composite \
-negate mask_$i; done;
But no matter how I tweak the numbers, the results are not perfect:
I feel really dumb that I cannot find a solution to such a simple problem myself. Also note that I am using Linux. So no Photoshop or After Effects! :)
But I am sure that there has to be a solution to this problem.
Update 1
I've just tried using this greenscreen script by fmw42 by running ./greenscreen infile.jpg outfile.png and I am rather satisfied with the result.
But it takes around 40 seconds to process one image, which results in a total of 8 hours for all my images (although I have a rather powerful workstation, see specs below).
Maybe this has something to do with the errors that occur while processing:
convert-im6.q16: width or height exceeds limit `black' @ error/cache.c/OpenPixelCache/3911.
convert-im6.q16: ImageSequenceRequired `-composite' @ error/mogrify.c/MogrifyImageList/7995.
convert-im6.q16: no images defined `./GREENSCREEN.6799/lut.png' @ error/convert.c/ConvertImageCommand/3258.
convert-im6.q16: unable to open image `./GREENSCREEN.6799/lut.png': No such file or directory @ error/blob.c/OpenBlob/2874.
convert-im6.q16: ImageSequenceRequired `-clut' @ error/mogrify.c/MogrifyImageList/7870.
convert-im6.q16: profile 'icc': 'RGB ': RGB color space not permitted on grayscale PNG `mask.png' @ warning/png.c/MagickPNGWarningHandler/1667.
Workstation specs
Memory: 125,8 GiB
Processor: AMD® Ryzen 9 3900x 12-core processor × 24
Graphics: GeForce GTX 970/PCIe/SSE2 (two of them)
We know that the background is green and is distinguishable from the object by its color, so I suggest using color thresholding. For this, I have written a simple OpenCV Python code to demonstrate the results.
First, we need to install OpenCV.
sudo apt update
pip3 install opencv-python
# verify installation
python3 -c "import cv2; print(cv2.__version__)"
Then, we create a script named skull.py in the same directory as the images.
import cv2
import numpy as np
def show_result(winname, img, wait_time):
    scale = 0.2
    disp_img = cv2.resize(img, None, fx=scale, fy=scale)
    cv2.imshow(winname, disp_img)
    cv2.waitKey(wait_time)
img = cv2.imread('skull.jpg')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# define range of green color in HSV
lower_green = np.array([70, 200, 100])
upper_green = np.array([90, 255, 255])
# Threshold the HSV image to extract green color
mask = cv2.inRange(hsv, lower_green, upper_green)
mask = cv2.bitwise_not(mask)
#cv2.imwrite('mask.png', mask)
show_result('mask', mask, 0)
cv2.destroyAllWindows()
You can easily find a tutorial about HSV color operations using OpenCV. I will not go over the functions used here, but one part is important. Image operations are generally done in RGB color space, which holds red, green and blue components. However, HSV is more like the human visual system, working with hue, saturation and value components. You can find the conversion here. Since we separate color based on our perception, HSV is more suitable for this task.
The essential part is to choose the threshold values appropriately. I chose by inspection around 80 for hue (which is max. 180), and above 200 and 100 for saturation and value (max. 255), respectively. You can print the values of a particular pixel by the following lines:
rows, cols, channels = hsv.shape
print(hsv[row, column])  # row, column: coordinates of the pixel you want to inspect
Note that the origin is the upper-left corner.
Here is the result:
Two things may be needed. One is doing the operation for a set of images, which is trivial using for loops. The other is that if you do not like some portion of the result, you may want to know the pixel location and change the threshold accordingly. This is possible using mouse events.
for i in range(1, 100):
    img = cv2.imread(str(i) + '.jpg')

def mouse_callback(event, x, y, flags, params):
    if event == cv2.EVENT_LBUTTONDOWN:
        row = y
        column = x
        print(row, column)

winname = 'img'
cv2.namedWindow(winname)
cv2.setMouseCallback(winname, mouse_callback)
Keep in mind that the show_result function resizes the image by a scale factor.
If you do not want to deal with pixel positions, but would rather have smooth results, you can apply morphological transformations. In particular, opening and closing will get the work done.
kernel = np.ones((11,11), np.uint8)
opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
Result with opening (kernel=11x11):
I can't really fit this in a comment, so I've put it as an answer. If you want to use Fred's greenscreen script, you can hopefully use GNU Parallel to speed it up.
Say you use the commands:
mkdir out
greenscreen image.png out/image.png
to process one image, and you have thousands, you can do the following to keep all your CPU cores busy in parallel till they are all processed:
mkdir out
parallel greenscreen {} out/{} ::: *.png
If you are on a Unix-like system, you can try my greenscreen script, which makes calls to ImageMagick and is written in Bash. For example:
Input:
greenscreen img.jpg result.png
Result (green turned transparent):
The result has been reduced in size by 50%, just so that StackOverflow will not object to the original result being too large. However, StackOverflow has changed the image from transparent PNG to white background JPG.
Note that other images may need argument values other than the defaults. You can get my script at http://www.fmwconcepts.com/imagemagick/. Note that for commercial use, you will need to contact me about licensing.
I'm currently trying to convert a batch of images to greyscale using:
convert "* .jpg" -set colorspace Gray -separate -average "*.jpg"
Right now I'm working on a couple of hundred images. When I run the command I get copies of all the images, but only the 1st is actually converted to greyscale. Anyone know what the issue might be? Also, if anyone has a better way of working through a very large quantity of images (ultimately I'll need to convert several thousand) at a time, I'd appreciate it.
Thanks!
As pointed out in the comments, you cannot have wildcards in the output. If you want to overwrite the original files with the updated colorspace, you can try the mogrify utility.
mogrify -set colorspace Gray -separate -average *.jpg
But that can be risky, as you're destroying the originals. A simple for-loop might be easy to schedule & manage.
for filename in *.jpg
do
convert "$filename" -set colorspace Gray -separate -average "output_${filename}"
done
ultimately I'll need to convert several thousand
If you really are facing large quantities of tasks, I would suggest spreading the tasks over multiple CPU cores. Perhaps with GNU Parallel.
parallel convert {} -set colorspace Gray -separate -average output_{.} ::: *.jpg
Of course I'm assuming you're working with Bash on a *nix system. YMMV elsewhere.
Dear colleagues,
Could you please help me with the following question?
I want to resize a huge number of images and replace the originals with the resized versions to save disk space. But before the substitution I want to be sure that the resized image is really the same image as the original, just with different dimensions (not a blank white sheet, not Malevich's square, and so on).
Is there a way to check such similarity to be sure that resizing was successful?
Thanks.
One idea might be to downscale your image into a tentative down-resed version, then scale it back up to the original size and compare that with the original. If they seem pretty similar, overwrite the original with the tentative conversion; if not, report an error.
Here's how you might do that in bash with comments. It can be rehashed into other languages of course, or you can use a system() to shell out and use this command line version from another language.
#!/bin/bash
# Downscale an image and check if correct
# Supply image name as parameter
original="$1"
tentative="t-$$-$original"
echo DEBUG: tentative filename=$tentative
# Get size of original so we can resize back up to that size
origsize=$(identify -format "%G" "$original")
echo DEBUG: origsize=$origsize
# Do downsizing of image, saving result tentatively
convert "$original" -resize 800x800 "$tentative"
# Test quality/success of conversion by looking at PSNR
PSNR=$(convert "$tentative" -resize $origsize\! "$original" -metric PSNR -format "%[distortion]" -compare info:)
echo DEBUG: PSNR=$PSNR
# PSNR above 20 is pretty indicative of good similarity - use "bc" as shell doesn't do floats
if [ $(echo "$PSNR>20" | bc) -eq 1 ]; then
echo $original looks good
else
echo $original something wrong
fi
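To run the check over a whole folder you could combine it with GNU Parallel, as suggested in other answers here. A minimal sketch, assuming you saved the script above as check-resize.sh (a hypothetical name):
chmod +x check-resize.sh
parallel ./check-resize.sh ::: *.jpg    # one image per job, one job per CPU core by default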
One thing to beware of is transparency - don't convert from a GIF or PNG (both of which support transparency) to a JPEG (which doesn't) and then resize and compare - you are asking for trouble. You will see in my script above that I retain the image extension and pre-pend bits to the front rather than the end of the filename.
Well, as far as the size is concerned, you can simply check the dimensions.
In MATLAB:
[x,y]=size(im);
This will let you know if the image has been resized.
To check whether the image content is the same, you can use feature extraction such as SURF features.
Extract the SURF features and match them; if you get a 90% to 100% match, you have the same image!
I am working on creating a video timelapse. All the photos I took are .jpg images shot at a 4:3 aspect ratio, 2592x1944 resolution. I want them all to be 16:9 at 1920x1080.
I have written a little script to do this, but the process is not very fast. It took about 17 minutes for me to crop and resize 750 images. I have a total of about 300,000 to deal with, and will probably be doing them in batches of about 50,000. That is 18 hours 45 minutes per batch, and over 4.5 days of computing total.
So does anyone know a way I can speed up this program?
here is the bash script I have written:
#!/bin/bash
mkdir cropped
for f in *.JPG
do
convert $f -resize 1920x1440 -set filename:name '%t' cropped/'%[filename:name].JPG' #Resize Photo, maintain aspect ratio
convert cropped/$f -crop 1920x1080+0+$1 -set filename:name '%t' cropped/'%[filename:name].JPG' #Crop to 16:9 aspect ratio, takes in $1 argument for where to begin crop
done
echo Cropping Complete!
Putting some echo commands before and after each line within the loop reveals that resizing takes much more time than cropping, which I guess is not surprising. I have tried using mogrify -path cropped -resize 1920x1440! $f in place of convert $f -resize, but there does not seem to be much of a difference in speed.
So, any way I can speed up the runtime on this?
BONUS POINTS if you can show me an easy way to give a simple indication of progress as the program runs (something like "421 of 750 files, 56.13% complete").
EXTRA BONUS POINTS if you can add a command to output a .mp4 file from these frames that can be edited in a software program like SONY Vegas. I have managed to make video files (.avi) using mencoder from these photos, but the resulting video won't work in any video editors I have tried.
A few things spring to mind...
Firstly, don't start ImageMagick twice per image, once to resize it and once to crop it when it should be possible to do both operations in one go. So, instead of your two convert commands, I would do just one
convert image.jpg -resize 1920x1440 -crop 1920x1080+0+$1 cropped/image.jpg
Secondly, I don't see what you are doing with the set command, something with the filename, but you can just do that in the shell.
Thirdly, I would suggest you use GNU Parallel (I regularly process upwards of 65,000 images per day with it). It is easy to install and will ensure all those lovely CPU cores you paid for are kept busy. The easiest way to use it is, instead of running commands, just echo them and pipe them into parallel
#!/bin/bash
mkdir cropped
for f in *.jpg
do
echo convert \"$f\" -resize 1920x1440 -crop 1920x1080+0+$1 cropped/\"$f\"
done | parallel
echo Cropping Complete!
Finally, if you want a progress meter, or indication of how much is done and what is left to do, use the --eta option (eta=Estimated Time of Arrival) to parallel and it tells you how many jobs and how much time is remaining.
When you get confident with parallel you will maybe run your entire process like this:
parallel --eta convert {} -resize 1920x1440 -crop 1920x1080+0+32 cropped/{} ::: *.jpg
I created 750 images the same size as yours and ran them this way and it takes my medium spec iMac 55 seconds to resize and crop the lot - YMMV. Please add a comment and say how you got on - how long the processing time is with parallel.
Firstly, in order to speed up, don't echo stuff to the screen; echo it to a file, and if you want to know the status, read the file (easily done with the tail command). Seriously, this will already be faster. However, this doesn't seem like the real bottleneck of your program.
The main thing I can recommend is to run it in parallel. Is there any reason why you can't crop+resize pic #1000 before pic #4? If not, then modify the script to receive some parameter that specifies which files it should work on, and then run it a few times with different parameters; this should cut down the time by about as many CPU cores as you have (minus some hard-drive I/O time).
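As a rough sketch of that idea (not your exact script), you could let xargs keep several convert processes busy at once; the process count of 8 and the crop offset of 32 below are just example values to replace with your own:
mkdir -p cropped
# -0 copes with awkward filenames, -P 8 runs up to 8 convert processes at a time
printf '%s\0' *.JPG | xargs -0 -P 8 -I {} convert {} -resize 1920x1440 -crop 1920x1080+0+32 cropped/{}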
Regarding your first bonus question, you can do a variant of this code:
TOTAL=`ls -1|wc -l` #get the total number of files (you can change this to the files parameter I mentioned above)
SOFAR=0 #How many files you've done so far
for f in *.JPG
do
((SOFAR++))
echo "done so far $SOFAR out of $TOTAL"
done
Use the
-define jpeg:size=1920x1440
option along with -resize. If you have an older version of ImageMagick (sorry, I don't know exactly when the syntax changed), use the
-size 1920x1440
option along with -resize.
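For example, combined with the single resize-and-crop command from the earlier answer, it might look like this (input.JPG and the crop offset of 32 are placeholders):
convert -define jpeg:size=1920x1440 input.JPG -resize 1920x1440 -crop 1920x1080+0+32 cropped/input.JPG
The define must come before the input filename so the JPEG library can decode at reduced size while reading.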
In my application I need to resize PNG files and make their quality poorer.
In full size the PNGs are 3100x4400px using 2,20MB disk space.
When running the following command:
convert -resize 1400 -quality 10 input.png output.png
the images are resized to 1400x2000 using 5,33MB disk space.
So my question is: How can I reduce the file size?
You can further reduce the quality of a PNG by using posterization:
https://github.com/pornel/mediancut-posterizer (Mac GUI)
This is a lossy operation that allows zlib to compress better.
Convert image to PNG8 using pngquant.
It reduces images to 256 colors, so quality depends on the type of image, but pngquant makes very good palettes, so you might be surprised how often it works.
Use Zopfli-png or AdvPNG to re-compress images better.
This is lossless and recommended for all images if you have CPU cycles to spare.
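A minimal sketch of the last two suggestions (treat the exact flags as assumptions to check against your installed versions of pngquant and zopflipng):
# quantise to a 256-colour PNG8; by default this writes input-fs8.png
pngquant 256 input.png
# losslessly re-compress the quantised file
zopflipng input-fs8.png output.png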
After using ImageMagick to resize, you can compress the image using pngquant.
On a Mac (with Homebrew), brew install pngquant, then:
pngquant <filename.png>
This will create a new image filename-fs8.png that is normally much smaller in size.
The help page says that the -quality option used with PNG sets the compression level for zlib, where (roughly) 0 is the worst compression and 100 is the best (the default is 75). So try setting -quality to 100, or even remove the option.
Another method is to specify PNG:compression-level=N, PNG:compression-strategy=N and PNG:compression-filter=N to achieve even better results.
http://www.imagemagick.org/script/command-line-options.php#quality
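A minimal sketch of both approaches, with placeholder filenames (the png: defines are documented on the page linked above):
# let zlib compress harder instead of asking for a low "quality"
convert input.png -resize 1400 -quality 100 output.png
# or control the PNG encoder directly
convert input.png -resize 1400 -define png:compression-level=9 -define png:compression-strategy=1 -define png:compression-filter=5 output.png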
For lazy people that arrived here wanting to paste in a one liner:
mogrify -resize 50% -quality 50 *.png && pngquant *.png --ext .png --force
This modifies all of the PNGs in the current directory in place, so make sure you have a backup. Modify your resize and quality parameters as suits your needs. In a quick experiment, running mogrify first and then pngquant resulted in a significantly smaller image size.
The Ubuntu package for pngquant is called "pngquant", but I already had it installed on 20.04 LTS, so it seems like it may be there by default.
I found that the best way was to use the
-density [value]
parameter.