I would like to automate the process of creating thumbnails/contact sheets for videos. These are usually m×n matrices of pictures, e.g. 6x11 or 8x12. Randomly selected frames are sometimes of bad quality: they contain motion blur, camera pans (also blurry), are too dark or completely black, completely white, lack detail, etc. Currently I am using the JPEG file size as an image metric: a bigger file size means more detail in the picture. I combine it with the number of colors (which can be determined with the ImageMagick "identify -format %k" command). I normalize both to the 0.0-1.0 interval by dividing by the largest value in the group of pictures, and then I compute the following metric:
gamma * number_of_colors^2 + (1 - gamma) * file_size^2
Where gamma is a weighting parameter in the interval 0.0-1.0. What other approaches or image metrics could be used for this purpose?
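For reference, the metric above can be sketched in a few lines of Python (the file sizes and colour counts here are made-up values standing in for real measurements):

```python
def quality_scores(file_sizes, colour_counts, gamma=0.5):
    """Score images by gamma*colours^2 + (1-gamma)*size^2, after
    normalising each measure by the largest value in the group."""
    max_size = max(file_sizes)
    max_colours = max(colour_counts)
    scores = []
    for size, colours in zip(file_sizes, colour_counts):
        s = size / max_size          # normalised to 0.0-1.0
        c = colours / max_colours    # normalised to 0.0-1.0
        scores.append(gamma * c ** 2 + (1 - gamma) * s ** 2)
    return scores

# Hypothetical values for three candidate frames
print(quality_scores([120_000, 40_000, 90_000], [50_000, 2_000, 30_000]))
```

The frame with the highest score would be the one kept for the contact sheet.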
If you are interested in sharpness/blurriness, you could convert to greyscale and run an edge detector (e.g. Canny), which gives you a generally black image with white areas where sharp edges were detected. If you take the mean brightness of such an image (or count the white pixels and divide by the image area in pixels), the images with higher brightness are the ones with more sharp edges.
convert image.jpg -colorspace gray -canny 0x1+10%+30% -format "%[fx:mean]" info:
So, by way of example... using this sharp image:
I test the sharpness:
convert sharp.jpg -colorspace gray -canny 0x1+10%+30% -format "%[fx:mean]" info:
0.00485202
Now, with a blurry version:
I now get this:
convert blurred.jpg -colorspace gray -canny 0x1+10%+30% -format "%[fx:mean]" info:
0.00261855
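A small driver for ranking candidate frames might look like this (my own sketch, not from the answer: `canny_mean` assumes ImageMagick's `convert` is on the PATH, and the example reuses the two measured values above rather than re-running it):

```python
import subprocess

def canny_mean(path):
    """Edge density of one image, using the convert command shown above.
    Requires ImageMagick on the PATH."""
    cmd = ["convert", path, "-colorspace", "gray",
           "-canny", "0x1+10%+30%", "-format", "%[fx:mean]", "info:"]
    return float(subprocess.check_output(cmd).decode())

def rank_by_sharpness(scores):
    """Sort filenames sharpest-first, given filename -> edge density."""
    return sorted(scores, key=scores.get, reverse=True)

# Using the two example values from above instead of calling canny_mean:
print(rank_by_sharpness({"sharp.jpg": 0.00485202, "blurred.jpg": 0.00261855}))
# -> ['sharp.jpg', 'blurred.jpg']
```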
I'm trying to convert images to a very specific color scheme, and I have an image that consists of 6684 unique colors:
I want to specifically ONLY use colors from this image, but it's too big to use as a color palette (limited to 256).
In addition to this, any image I convert should have a maximum of 16 indexed colors (i.e. any image I convert should display only 16 colors, like a 4-bit image, and all 16 of these colors must also be present in the image I have posted above).
Is it possible to do this in imagemagick?
Assuming the image you provide is called palette.png, I think you want to remap all the colours in an input image to that palette, and then reduce to 16 colours using nearest-neighbour interpolation, which gives a command like this:
magick +dither input.png -remap palette.png -interpolate nearest -colors 16 result.png
If this is not what you want, you can debug the above by breaking it down into steps and looking at the partial results:
magick +dither input.png -remap palette.png partial.png
And:
magick partial.png -interpolate nearest -colors 16 result.png
You may want to experiment with replacing +dither by -dither Riemersma or by -dither FloydSteinberg.
Note that using a "Nearest Neighbour" interpolation avoids introducing any new colours into an image - it just uses the nearest neighbour from the existing image rather than averaging or calculating new pixels.
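Conceptually, that nearest-neighbour remapping amounts to replacing each pixel with the closest palette entry. Here is a minimal illustration of the idea in Python (my own sketch, not ImageMagick's actual algorithm, which also handles dithering and colour-space details):

```python
def nearest_palette_colour(pixel, palette):
    """Closest palette entry by squared RGB distance."""
    return min(palette, key=lambda p: sum((a - b) ** 2 for a, b in zip(pixel, p)))

def remap(pixels, palette):
    """Map every pixel to its nearest palette colour - no new colours appear."""
    return [nearest_palette_colour(px, palette) for px in pixels]

palette = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
print(remap([(250, 10, 5), (20, 200, 30)], palette))
# -> [(255, 0, 0), (0, 255, 0)]
```

Because every output pixel is taken verbatim from the palette, the result can never contain a colour outside it.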
I have two questions:
Firstly, how to detect the area of bar code target in an image (like the sample images), which may have a few noises.
Secondly, how to do the detection efficiently - for instance, within 1/30 of a second.
Squash (resize) the image till it is only 1 pixel tall, then normalise it to the full range 0-255 and threshold. I am using ImageMagick at the command line here - it is installed on most Linux distros and is also available for OSX and Windows, with Python, PHP, Ruby and C/C++ bindings.
convert barcode.png -resize x1! -scale x10! -normalize -threshold 50% result.png
I have then scaled it to 10 pixels tall so you can actually see it on here - but you would keep the original width and have a height of one pixel. Then just find the first white pixel in your single row of pixels.
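The final lookup on the 1-pixel-tall row can be sketched like this (a plain list of 0/255 values standing in for the thresholded row of pixels):

```python
def barcode_extent(row, white=255):
    """Return (first, last) indices of white pixels in the thresholded
    1-pixel-tall row, or None if no white pixel is present."""
    whites = [i for i, v in enumerate(row) if v == white]
    if not whites:
        return None
    return whites[0], whites[-1]

row = [0, 0, 255, 0, 255, 255, 0, 0]
print(barcode_extent(row))
# -> (2, 5)
```

The first and last white indices give the horizontal extent of the barcode in the original image, since the row kept the original width.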
Your recently added, smaller barcode gives this:
Dribbble has a great feature that lets you browse shots by similar colors:
What is the easiest way to generate something like this in Ruby? Are there libraries or services that can manage this kind of processing? I currently have 26k images which I'll need to process and I'm evaluating the best way to do so.
This will most likely rely on the imagemagick utility in some capacity. A quick search of available libraries turned up the Miro gem available here: https://github.com/jonbuda/miro.
Not sure which aspect of the problem you need help with - generating the swatches of colour or sorting by similar colours. Anyway, here is how you can use ImageMagick to generate the 6 colours that best represent an image and make them into a colour swatch of a reasonable size:
convert input.png -colors 6 -unique-colors -scale 5000% swatch.png
If you want the colours as RGB triplets, just change the command to this:
convert input.png -colors 6 -unique-colors +matte -colorspace RGB txt:
# ImageMagick pixel enumeration: 6,1,255,rgb
0,0: (1623,1472,1531) #060606 rgb(6,6,6)
1,0: (10693,4106,4231) #2A1010 rgb(42,16,16)
2,0: (23082,8867,9471) #5A2325 rgb(90,35,37)
3,0: (8667,28247,37488) #226E92 rgb(34,110,146)
4,0: (40714,34524,37545) #9E8692 rgb(158,134,146)
5,0: (59611,58620,58816) #E8E4E5 rgb(232,228,229)
And if you want to find the distance between two colours, e.g. the first and last colours listed above, you can use this:
compare -metric RMSE xc:"rgb(232,228,229)" xc:"rgb(6,6,6)" null:
57484 (0.87715)
The number in parentheses means that the colour distance in the RGB colour cube is about 88% of the distance between black and white - i.e. normalised so that the diagonal of the colour cube is 100%. As one colour is nearly white, i.e. rgb(255,255,255), and the other is nearly black, i.e. rgb(0,0,0), the distance between them is close to the full diagonal.
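That normalised RMSE figure can be reproduced by hand - here is the same calculation in Python (dividing by 255 so that black vs white comes out as 1.0):

```python
import math

def rmse_distance(c1, c2):
    """Normalised RMSE between two 8-bit RGB colours, matching the
    parenthesised value printed by `compare -metric RMSE`
    (0.0 = identical, 1.0 = black vs white)."""
    mse = sum((a - b) ** 2 for a, b in zip(c1, c2)) / len(c1)
    return math.sqrt(mse) / 255

print(round(rmse_distance((232, 228, 229), (6, 6, 6)), 5))
# -> 0.87715
```

Sorting a list of such pairwise distances is one straightforward way to group images by similar colour.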
There are Ruby bindings for ImageMagick - here and here.
I have two versions of the same image: the original and a smoothed version of it. I want to know how much edge information each image contains, as a numerical value rather than as an image (like a perceptual quality metric, etc.). Is there a method to calculate edge information?
You can do this easily from the commandline with Imagemagick which is installed on most Linux distros and available for OSX and Windows.
First convert to grayscale, then do a Canny edge detection, then count the white pixels.
I'm not at a proper computer, just an iPhone, so I can't check but it will look like this:
convert image.jpg -colorspace gray \
-canny 0x1+5%+10% \
\( +clone -evaluate set 0 \) \
-metric AE -compare \
-format "%[distortion]" info:
287
Remove the last 3 lines and replace with a simple image filename to see the resulting edge detected image rather than count the white pixels.
Divide the number of white pixels by the product of the image height and width to normalize results for differently sized images.
There is no absolute measure of edge information, since what constitutes an edge depends on the thresholds you apply to the luminance gradients. What I would do is consider the distribution of gradients: take e.g. the Sobel operator in the x- and y-directions, so that the magnitude at each point is sqrt(Gradient_x^2 + Gradient_y^2). From the distribution of gradients you can then take a quantile (e.g. 70%, which is a typical value in the Canny case), which will for sure be lower for a smoothed picture than for an unsmoothed one.
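That idea can be sketched in pure Python on a small grayscale array (Sobel magnitudes plus a simple rank-based quantile; a real implementation would use NumPy or OpenCV):

```python
import math

def sobel_magnitudes(img):
    """Gradient magnitude sqrt(Gx^2 + Gy^2) at each interior pixel
    of a 2-D grayscale image given as a list of rows."""
    h, w = len(img), len(img[0])
    mags = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            mags.append(math.sqrt(gx * gx + gy * gy))
    return mags

def quantile(values, q):
    """q-quantile by simple rank lookup (no interpolation)."""
    s = sorted(values)
    return s[min(int(q * len(s)), len(s) - 1)]

# A hard vertical edge vs. a smoothed version of the same edge
sharp = [[0, 0, 0, 255, 255, 255]] * 5
smooth = [[0, 0, 64, 191, 255, 255]] * 5
print(quantile(sobel_magnitudes(sharp), 0.7))   # higher for the sharp image
print(quantile(sobel_magnitudes(smooth), 0.7))
```

The 70% quantile comes out lower for the smoothed edge, which is exactly the comparison suggested above.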
Problem description:
In imagemagick, it is very easy to diff two images using compare, which produces an image with the same size as the two images being diff'd, with the diff data. I would like to use the diff data and crop that part from the original image, while maintaining the image size by filling the rest of the space with alpha.
The approach I am taking:
I am now trying to figure out the bounding box of the diff, without luck. For example, below is the script I am using to produce the diff image. Now I need to find the bounding box of the red part of the image. The bounding box is demonstrated below, too. Note that the numbers in the image are arbitrary and not the actual values I am seeking.
compare -density 300 -metric AE -fuzz 10% ${image} ${otherImage} -compose src ${OUTPUT_DIR}/diff${i}-${j}.png
You asked quite a while ago - I found the question just today. Since I think the answer might still be of interest, I propose the following.
The trim option of convert removes any edges that are the same color as the corner pixels. The page or virtual canvas information of the image is preserved. Therefore, if you run
convert -trim edPdf.png - | identify -
it gives you:
PNG 157x146 512x406+144+32 8-bit PseudoClass 2c 1.08KB 0.000u 0:00.000
and the values you are looking for are (144,228), where the latter is 406-146-32: the trim information (+144+32) gives the upper-left corner, whereas you were looking for the lower-left corner.
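That arithmetic can be automated by parsing identify's geometry string. A small helper of my own, assuming the `WxH CWxCH+X+Y` layout shown above:

```python
import re

def lower_left_corner(geometry):
    """Parse identify output like '157x146 512x406+144+32' and return the
    lower-left corner of the trimmed region on the original canvas.

    W x H   - size of the trimmed region
    CW x CH - size of the original (virtual) canvas
    +X +Y   - offset of the region's upper-left corner
    """
    m = re.match(r"(\d+)x(\d+) (\d+)x(\d+)\+(\d+)\+(\d+)", geometry)
    w, h, cw, ch, x, y = map(int, m.groups())
    return (x, ch - h - y)

print(lower_left_corner("157x146 512x406+144+32"))
# -> (144, 228)
```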