How can I calculate the overall amount of detail in an image?

I'm working with a library of photos, and I'd like some way of calculating a 'detail' value for each image as a whole. By detail, I mean the amount of contrasting colours, shades and edges, such that an image that's a single flat colour would have 0 detail, and a photo filled with lots of small elements would have a high detail value.
This doesn't have to be super-precise - I'd like to identify images that have a detail value higher than some threshold, and treat them differently. A reasonable guess is good enough.
I have a few ideas that might be feasible:
Save the image as a JPG at a given size and compression level, and check the resulting file size. This basically uses the compression algorithm as the check - detailed images make large files. Seems slow, expensive, and crude, but it wouldn't require a lot of custom work.
Sub-divide the image into a grid, sample points within each square, and compare how unique their values are. It seems like it could work, but would require a fine grid and a lot of samples in order to be useful.
Use an edge-detecting filter like unsharp-mask: Take the original, and a copy sharpened by a known amount, then take the average colour of each. If they are very different, the filter has done a 'lot of work' and therefore the image has a lot of edges (and so a lot of detail). This seems promising, but I'm not sure if it would actually work!
Processing will be done out-of-band, so performance isn't a huge issue. If it takes a few seconds per image, that's fine. I'm using RMagick (ImageMagick) and Ruby.
Am I missing something? Is there an easier way?

You could try measuring the entropy and see how that works for your images - it's hard to tell how well it will perform for your needs though.
You get ImageMagick to measure it like this:
identify -format '%[entropy]' input.jpg
You can also sanity-check the metric by measuring a synthetic test image (a white block appended to a red-blue gradient) with the convert tool, like this:
convert -size 100x100 xc:white gradient:red-blue +append -print '%[entropy]' null:
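If you would rather compute the number inside your own pipeline than shell out, here is a minimal sketch of the same idea in Python with Pillow and NumPy (the approach translates to RMagick as well). It is my own approximation of the metric - entropy of the greyscale histogram, in bits per pixel - and ImageMagick may normalise its value differently, so only compare numbers produced by the same method.

import numpy as np
from PIL import Image

def image_entropy(path):
    # Shannon entropy of the 8-bit greyscale histogram: 0 for a flat
    # colour, up to 8 bits/pixel for a maximally varied image.
    grey = np.asarray(Image.open(path).convert("L"))
    hist = np.bincount(grey.ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before taking logs
    return float(-(p * np.log2(p)).sum())

print(image_entropy("input.jpg"))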
Or you could do, say, a Canny edge detection and then calculate the mean of the resulting black-and-white image, which tells you what percentage of the pixels are edges, as an integer between 0 and 100, like this:
convert input.jpg -canny 0x1+10%+30% -format "%[fx:int(mean*100)]" info:
12
As suggested by Kurt, you could also look at the number of unique colours divided by the number of pixels, which will obviously give small results for large blocks of constant colour and larger results for photograph-type images.
identify -format "%k different colors, %w (width), %h (height)\n" image.jpg
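And a hedged sketch of that unique-colours-per-pixel ratio in Python with Pillow, assuming the image fits comfortably in memory:

from PIL import Image

def colour_ratio(path):
    # Unique colours divided by total pixels: near 0 for flat images,
    # much higher for busy photograph-type images.
    img = Image.open(path).convert("RGB")
    pixels = img.width * img.height
    # getcolors() returns None if there are more colours than maxcolors,
    # so pass the pixel count as the upper bound.
    colours = img.getcolors(maxcolors=pixels)
    return len(colours) / pixels

print(colour_ratio("image.jpg"))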

I liked the edge detection suggestion -
convert input.jpg -canny 0x1+10%+30% -format "%[fx:int(mean*100)]" info:
However, this has some big drawbacks. The biggest is that the values you get are hugely dependent on the size of the image.
For reference, I am interested in using this to get an objective measure of the level of detail in Tibetan thangkas. The price of a thangka depends heavily on the amount of time the artist takes to paint it, and that, in turn, depends heavily on the amount of detail in the painting. The more painstaking little details there are, the longer it takes to paint the thangka. One thangka with a lot of detail can take over a year; another of the same size but with much less detail can take just a week.
To be useful, this measure has to be scaled based on the size of the photograph - it should give the same answer if you have a gigapixel photograph as if you have a megapixel photograph as long as you have enough resolution to see the smallest details in the painting.
My first pass with ImageMagick is:
magick convert pic.jpg -canny 0x1+10%+30% -format "%[fx:mean*sqrt(w*h)]" info:
When I test with this image, it works very well.
multi-star picture
I get the same value for this image when I scale it by a factor of 2, and a pretty close value when I scale it down by 10% - the number is a little smaller, which you would expect, because some of the detail is eliminated when you scale it down.
Unfortunately, when I try it on an actual thangka image it does not work as well:
Actual thangka
When I scale this up I get very different results - off by as much as a factor of 1.5. I need to find a better measure.
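For anyone experimenting outside ImageMagick, here is a rough Python/OpenCV sketch of the same scale-normalised measure (edge fraction times sqrt(w*h)). The blur and Canny thresholds are guesses, not a faithful translation of 0x1+10%+30%:

import cv2

def edge_detail(path, low=50, high=150):
    # Fraction of edge pixels times sqrt(w*h): roughly total edge length
    # per linear dimension, so it should be fairly stable under scaling.
    grey = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    grey = cv2.GaussianBlur(grey, (3, 3), 1)   # stand-in for the 0x1 blur
    edges = cv2.Canny(grey, low, high)         # 0 or 255 per pixel
    h, w = edges.shape
    return (edges > 0).mean() * (w * h) ** 0.5

print(edge_detail("pic.jpg"))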

Related

How to produce a unitary thumbnail for a billion of png images?

In the application there are about 1 billion PNG images (1024*1024, about 1MB each). They need to be combined into one huge image, and then a single 1024*1024 thumbnail of that combined image must be produced. Or maybe we don't need to actually combine the images into a huge one, but can use some clever algorithm to produce the thumbnail directly in memory? Either way, this process needs to be done as fast as possible - ideally in seconds, or at least within a few minutes. Does anyone have an idea?
The idea of loading a billion images into a single montage process is ridiculous. Your question is unclear, but your approach should be to determine how many pixels each original image will amount to in your final image, then extract the necessary number of pixels from each image in parallel. Then assemble those pixels into a final image.
So, if each image will be represented by one pixel in your final image, you need to get the mean of each image which you can do like this:
convert image1.png image2.png ... -format "%[fx:mean.r],%[fx:mean.g],%[fx:mean.b]:%f\n" info:
Sample Output
0.423529,0.996078,0:image1.png
0.0262457,0,0:image2.png
You can then do that very fast in parallel with GNU Parallel, using something like:
find . -name \*.png -print0 | parallel -0 convert {} -format "%[fx:mean.r],%[fx:mean.g],%[fx:mean.b]:%f\n" info:
Then you can make a final image and put the individual pixels in.
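Assembling those means into the final thumbnail is then trivial. A hedged sketch, assuming you redirected the parallel output into a file (called means.txt here), that its lines are already in the order you want the tiles laid out, and that the image count is a perfect square:

import numpy as np
from PIL import Image

# Each line looks like "0.42,0.99,0.00:image1.png" with channel values in 0..1.
lines = [line for line in open("means.txt") if line.strip()]
side = int(len(lines) ** 0.5)                  # e.g. 1024 for ~a million images
pixels = np.zeros((side, side, 3), dtype=np.uint8)

for i, line in enumerate(lines):
    r, g, b = (float(v) for v in line.split(":")[0].split(","))
    pixels[i // side, i % side] = (round(r * 255), round(g * 255), round(b * 255))

Image.fromarray(pixels, "RGB").save("thumbnail.png")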
Scanning even 1,000,000 PNG files is likely to take many hours...
You don't say how big your images are, but if they are of the order of 1MB each, and you have 1,000,000,000 then you need to do a petabyte of I/O to read them, so even with a 500MB/s ultra-fast SSD, you will be there 23 days.
ImageMagick can do that:
montage *.png -geometry +0+0 tiled.png
If you don't want to use an external helper for whatever reason, you can still use the sources.
A randomized algorithm, such as random sampling, may be feasible.
Considering how large the combined image is, any linear algorithm may be too slow, let alone anything of higher complexity.
A quick calculation shows that each thumbnail pixel depends on roughly 1000 source images, so the error from any single sample does not affect the result much.
The algorithm could look like this:
For each thumbnail pixel coordinate, randomly choose N of the images that fall at the corresponding location, sample M pixels from each, and average them. Do the same for every other thumbnail pixel.
However, if your images are combined in a random order, the result will tend towards a flat mid-grey image: by the Central Limit Theorem, the variance of each thumbnail pixel tends to zero. So you have to make sure the combined image has some structure of its own.
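A rough sketch of that sampling scheme; the grid layout, the file-naming scheme and the N/M values below are all invented for illustration:

import random
import numpy as np
from PIL import Image

GRID = 32768                 # assumed: source images tiled GRID x GRID (~1e9 images)
THUMB = 1024                 # thumbnail resolution
SPAN = GRID // THUMB         # source tiles behind each thumbnail pixel (32x32 = ~1000)
N, M = 8, 16                 # sample N of those tiles, M random pixels from each

def tile_path(row, col):
    return f"tiles/{row:05d}_{col:05d}.png"    # assumed naming scheme

thumb = np.zeros((THUMB, THUMB, 3), dtype=np.uint8)
for ty in range(THUMB):
    for tx in range(THUMB):
        samples = []
        for _ in range(N):
            row = ty * SPAN + random.randrange(SPAN)
            col = tx * SPAN + random.randrange(SPAN)
            img = np.asarray(Image.open(tile_path(row, col)).convert("RGB"))
            ys = np.random.randint(0, img.shape[0], M)
            xs = np.random.randint(0, img.shape[1], M)
            samples.append(img[ys, xs].mean(axis=0))
        thumb[ty, tx] = np.mean(samples, axis=0)

Image.fromarray(thumb, "RGB").save("thumbnail.png")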
PS: using OpenCV would be a good choice

Equalize contrast and brightness across multiple images

I have roughly 160 images for an experiment. Some of the images, however, have clearly different levels of brightness and contrast compared to others. For instance, I have something like the two pictures below:
I would like to equalize the two pictures in terms of brightness and contrast (probably find some level in the middle and not equate one image to another - though this could be okay if that makes things easier). Would anyone have any suggestions as to how to go about this? I'm not really familiar with image analysis in Matlab so please bear with my follow-up questions should they arise. There is a question for Equalizing luminance, brightness and contrast for a set of images already on here but the code doesn't make much sense to me (due to my lack of experience working with images in Matlab).
Currently, I use Gimp to manipulate images but it's time consuming with 160 images and also just going with subjective eye judgment isn't very reliable. Thank you!
You can use histeq to perform histogram specification where the algorithm will try its best to make the target image match the distribution of intensities / histogram of a source image. This is also called histogram matching and you can read up about it on my previous answer.
In effect, the distribution of intensities between the two images should hopefully be the same. If you want to take advantage of this using histeq, you can specify an additional parameter that specifies the target histogram. Therefore, the input image would try and match itself to the target histogram. Something like this would work assuming you have the images stored in im1 and im2:
out = histeq(im1, imhist(im2));
However, imhistmatch is the better version to use. You call it almost the same way as histeq, except that you don't have to compute the histogram manually - you just specify the reference image directly:
out = imhistmatch(im1, im2);
Here's a running example using your two images. Note that I'll opt to use imhistmatch instead. I read the two images directly from StackOverflow, perform histogram matching so that the first image matches the intensity distribution of the second image, and show the results all in one window.
im1 = imread('http://i.stack.imgur.com/oaopV.png');
im2 = imread('http://i.stack.imgur.com/4fQPq.png');
out = imhistmatch(im1, im2);
figure;
subplot(1,3,1);
imshow(im1);
subplot(1,3,2);
imshow(im2);
subplot(1,3,3);
imshow(out);
This is what I get:
Note that the first image now more or less matches in distribution with the second image.
We can also flip it around and make the first image the source and we can try and match the second image to the first image. Just flip the two parameters with imhistmatch:
out = imhistmatch(im2, im1);
Repeating the above code to display the figure, I get this:
That looks a little more interesting. We can definitely see the shape of the second image's eyes, and some of the facial features are more pronounced.
As such, what you can finally do in the end is choose a good representative image that has the best brightness and contrast, then loop over each of the other images and call imhistmatch each time using this source image as the reference so that the other images will try and match their distribution of intensities to this source image. I can't really write code for this because I don't know how you are storing these images in MATLAB. If you share some of that code, I'd love to write more.
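For readers doing this outside MATLAB, the underlying histogram-matching step is only a few lines of NumPy. This is a hedged, single-channel sketch of the general technique (run it per channel for RGB), not a drop-in replacement for imhistmatch:

import numpy as np

def match_histogram(source, reference):
    # Remap each source intensity so the source's cumulative distribution
    # follows the reference image's. Expects 2-D uint8 arrays.
    src_vals, src_idx, src_counts = np.unique(source.ravel(),
                                              return_inverse=True,
                                              return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source quantile, pick the reference intensity at that quantile.
    matched = np.interp(src_cdf, ref_cdf, ref_vals)
    return matched[src_idx].reshape(source.shape).astype(source.dtype)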

scale and reduce colors to reduce file size of scan

I need to reduce the file size of a color scan.
So far I think the following steps should be taken:
selective blur (or similar) to reduce noise
scale to ~120dpi
reduce colors
So far we use convert (ImageMagick) and the netpbm tools.
The scans are invoices, not photos.
Any hints appreciated.
Update
example:
http://www.thomas-guettler.de/tbz/example.png 11M
http://www.thomas-guettler.de/tbz/example_0800_pnmdepth009.png pnmscale, pnmdepth 110K
http://www.thomas-guettler.de/tbz/example_1000_pnmdepth006.png pnmscale, pnmdepth 116K
Bounty
The smallest and good readable reduced file of example.png with a reproduce-able solution gets the bounty. The solution needs to use open source software only.
The file format is not important, as long as you can convert it to PNG again. Processing time is not important. I can optimize later.
Update
I got very good results for black-and-white output (thank you). Color reducing to about 16 or 32 colors would be interesting.
This is a rather open ended question since there's still possible room for flex between image quality and image size... after all, making it black and white and compressing it with CCITT T.6 black and white (fax-style) compression is going to beat the pants off most if not all color-capable compression algorithms.
If you're willing to go black and white (not grayscale), do that! It makes documents very small.
Otherwise I recommend a series of minor image transformations and Adaptive Prediction Trees (see here). The APT software package is open source or public domain and very easy to compile and use. Its advantages are that it performs well on a wide variety of image types, especially text, and it lets you trade image size against image quality without losing readability. (I found myself squishing an example_1000-sized color version down to 48KB on the threshold of readability, and 64KB with obvious artifacts but easy readability.)
I combined APT with imagemagick tweakery:
convert example.png -resize 50% -selective-blur 0x4+10% -brightness-contrast -5x30 -resize 80% example.ppm
./capt example.ppm example.apt 20 # The 20 means quality in the range [0,100]
And to reverse the process
./dapt example.apt out_example.ppm
convert out_example.ppm out_example.png
To explain the imagemagick settings:
-resize 50% Make it half as small to make processing faster. Also hides some print and scan artifacts.
-selective-blur 0x4+10%: Sharpening actually creates more noise. What you actually want is a selective blur (like in Photoshop) which blurs when there's no "edge".
-brightness-contrast -5x30: Here we increase the contrast a good bit to clip the bad coloration caused by the page outline (leading to less compressible data). We also darken slightly to make the blacks blacker.
-resize 80% Finally, we resize to a little bigger than your example_1000 image size. (Close enough.) This also reduces the number of obvious artifacts since they're somewhat hidden when the pixels are merged together.
At this point you're going to have a fine looking image in this example -- nice, smooth colors and crisp text. Then we compress. The quality value of 20 is a pretty low setting and it's not as spiffy looking anymore, but the document is very legible. Even at a quality value of 0 it's still mostly legible.
Again, using APT isn't necessarily going to lead to the best results for this image, but it won't turn into an entirely unrecognizable mess on photograph-like content such as gradients, so you should be covered better on more (or unexpected) types of documents.
Results:
88kb
76kb
64kb
48kb
Processed image before compression
If you truly don't care about the number of colors, we may as well go to black-and-white and use a bilevel coder. I ended up using the DJVU format because it compares well to JBIG2 and has open source encoders. In this case I used the didjvu encoder because it achieved the best results. (On Ubuntu you can apt-get install didjvu, perhaps on other distributions as well.)
The magic I ended up with looks like this to encode:
convert example.png -resize 50% -selective-blur 0x4+10% -normalize -brightness-contrast -20x100 -dither none -type bilevel example_djvu.pgm
didjvu encode -o example.djvu example_djvu.pgm --lossless
Note that this is actually a superior color blur to 0x2+10% at full resolution - it will end up making the image about as nice as possible before it's converted to a bilevel image.
Decoding works as follows:
convert example.djvu out_example.png
Even at the larger resolution (which is much easier to read), the size weighs in at 24KB. When reduced to the same size, it's still 24KB! Lastly, reducing to only 75% of the original image size with a 0x5+10% blur weighs in at 32KB.
See here for the visual results: http://img29.imageshack.us/img29/687/exampledjvu.png
If you already have it doing the right thing with the ImageMagick utility "convert" then it might be a good idea to look at the ImageMagick libraries first.
A quick look at my Ubuntu package list shows bindings for Perl, Python, Ruby, C++ and Java.

How to estimate the size of JPEG image which will be scaled down

For example, I have a 1024*768 JPEG image. I want to estimate the size of the image after it is scaled down to 800*600 or 640*480. Is there an algorithm to calculate the size without actually generating the scaled image?
I took a look at the resize dialog in Photoshop. The size it shows is basically (width in pixels * height in pixels * bits per pixel), which is hugely different from the actual file size.
I have a mobile image browser application which allows the user to send an image through email, with options to scale down the image. We provide check boxes for the user to choose a down-scale resolution along with the estimated size. For large images (> 10MB), we have 3 down-scale sizes to choose from. If we generate a cached image for each option, it may hurt memory. We are trying to find the best solution that avoids that memory consumption.
I have successfully estimated the scaled size based on the DQT - the quantization table, from which the quality factor can be derived.
I conducted some experiments and found that if we use the same quality factor as in the original JPEG image, the scaled image will have a size roughly equal to (scale factor * scale factor) times the original image size. The quality factor can be estimated from the DQT defined in every JPEG image; an algorithm has been defined to estimate it based on the standard quantization tables shown in Annex K of the JPEG spec.
Although other factors like color subsampling, different compression algorithms and the image content itself will contribute to error, the estimation is pretty accurate.
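In other words, the estimate boils down to scaling the original byte count by the pixel ratio (the scale factor squared). A trivial sketch of that arithmetic, assuming the quality factor and subsampling stay the same:

def estimate_scaled_size(original_bytes, old_w, old_h, new_w, new_h):
    # Bytes scale roughly with pixel count when quality is unchanged.
    return original_bytes * (new_w * new_h) / (old_w * old_h)

# e.g. a 2.4 MB 1024x768 photo scaled down to 640x480
print(estimate_scaled_size(2_400_000, 1024, 768, 640, 480))   # ~937 kB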
P.S. Examining JPEGSnoop and its source code helped me a lot :-)
Cheers!
Like everyone else said, the best algorithm to determine what sort of JPEG compression you'll get is the JPEG compression algorithm.
However, you could also calculate the Shannon entropy of your image, in order to try and understand how much information is actually present. This might give you some clues as to the theoretical limits of your compression, but is probably not the best solution for your problem.
This concept will help you measure the difference in information between an all-white image and a photo of a crowd, which is related to its compressibility.
-Brian J. Stinar-
Why estimate what you can measure?
In essence, it's impossible to provide any meaningful estimate due to the fact that different types of images (in terms of their content) will compress very differently using the JPEG algorithm. (A 1024x768 pure white image will be vastly smaller than a photograph of a crowd scene for example.)
As such, if you're after an accurate figure it would make sense to simply carry out the re-size.
Alternatively, you could just provide a range such as "40KB to 90KB", based on an "average" set of images.
I think what you want is something weird and difficult to do. Depending on the JPEG compression level, some images are heavier (larger in size) than others.
My hunch for JPEG images: given two images at the same resolution, compressed at the same quality setting, the image taking less memory will (in general) shrink proportionally more when its resolution is reduced.
Why? From experience: many times when working with a set of images, I have seen that if a thumbnail occupies significantly more memory than most of the others, reducing its resolution changes its size (memory) hardly at all. On the other hand, reducing the resolution of one of the average-sized thumbnails reduces its size significantly (all parameters, like original/final resolution and JPEG quality, being the same in both cases).
Roughly speaking: the higher the entropy, the smaller the impact on file size from changing the resolution (at the same JPEG quality).
If you can verify this with experiments, maybe you can use this as a quick method to estimate the size. If my language is confusing, I can explain with some mathematical notation or a pseudo-formula.
An 800*600 image file should be roughly (800*600)/(1024*768) times as large as the 1024*768 image file it was scaled down from. But this is really a rough estimate, because the compressibility of original and scaled versions of the image might be different.
Before I attempt to answer your question, I'd like to join the ranks of people that think it's simpler to measure rather than estimate. But it's still an interesting question, so here's my answer:
Look at the block DCT coefficients of the input JPEG image. Perhaps you can find some sort of relationship between the number of higher frequency components and the file size after shrinking the image.
My hunch: all other things (e.g. quantization tables) being equal, the more high-frequency components you have in your original image, the bigger the difference in file size between the original and the shrunk image will be.
I think that by shrinking the image, you will reduce some of the higher frequency components during interpolation, increasing the possibility that they will be quantized to zero during the lossy quantization step.
If you go down this path, you're in luck: I've been playing with JPEG block DCT coefficients and put some code up to extract them.

Image fingerprint to compare similarity of many images

I need to create fingerprints of many images (about 100.000 existing, 1000 new per day, RGB, JPEG, max size 800x800) to compare every image to every other image very fast. I can't use binary compare methods because also images which are nearly similar should be recognized.
Best would be an existing library, but also some hints to existing algorithms would help me a lot.
Normal hashing or CRC calculation algorithms do not work well with image data. The dimensional nature of the information must be taken into account.
If you need extremely robust fingerprinting, such that affine transformations (scaling, rotation, translation, flipping) are accounted for, you can use a Radon transformation on the image source to produce a normative mapping of the image data - store this with each image and then compare just the fingerprints. This is a complex algorithm and not for the faint of heart.
a few simple solutions are possible:
Create a luminosity histogram for the image as a fingerprint
Create scaled down versions of each image as a fingerprint
Combine technique (1) and (2) into a hybrid approach for improved comparison quality
A luminosity histogram (especially one that is separated into RGB components) is a reasonable fingerprint for an image - and can be implemented quite efficiently. Subtracting one histogram from another produces a new histogram which you can process to decide how similar two images are. Histograms, because they only evaluate the distribution and occurrence of luminosity/color information, handle affine transformations quite well. If you quantize each color component's luminosity information down to an 8-bit value, 768 bytes of storage are sufficient for the fingerprint of an image of almost any reasonable size. Luminosity histograms produce false negatives when the color information in an image is manipulated: if you apply transformations like contrast/brightness, posterize or color shifting, the luminosity information changes. False positives are also possible with certain types of images, such as landscapes and images where a single color dominates the others.
Using scaled images is another way to reduce the information density of the image to a level that is easier to compare. Reductions below 10% of the original image size generally lose too much of the information to be of use - so an 800x800 pixel image can be scaled down to 80x80 and still provide enough information to perform decent fingerprinting. Unlike histogram data, you have to perform anisotropic scaling of the image data when the source resolutions have varying aspect ratios. In other words, reducing a 300x800 image into an 80x80 thumbnail deforms the image, so that comparing it with a (very similar) 300x500 image will cause false negatives. Thumbnail fingerprints also often produce false negatives when affine transformations are involved: if you flip or rotate an image, its thumbnail will be quite different from the original's and may result in a false negative.
Combining both techniques is a reasonable way to hedge your bets and reduce the occurrence of both false positives and false negatives.
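A minimal sketch of that histogram fingerprint in Python (3 x 256 bins, one set per RGB channel). The normalised L1 distance at the end is just one reasonable way to compare two fingerprints:

import numpy as np
from PIL import Image

def histogram_fingerprint(path):
    # 768 values: a normalised 256-bin histogram for each of R, G and B.
    rgb = np.asarray(Image.open(path).convert("RGB"))
    return np.concatenate([
        np.bincount(rgb[..., c].ravel(), minlength=256) / rgb[..., c].size
        for c in range(3)
    ])

def histogram_distance(fp_a, fp_b):
    return float(np.abs(fp_a - fp_b).sum())    # 0 means identical histograms

print(histogram_distance(histogram_fingerprint("a.jpg"),
                         histogram_fingerprint("b.jpg")))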
There is a much less ad-hoc approach than the scaled down image variants that have been proposed here that retains their general flavor, but which gives a much more rigorous mathematical basis for what is going on.
Take a Haar wavelet of the image. Basically the Haar wavelet is the succession of differences from the lower resolution images to each higher resolution image, but weighted by how deep you are in the 'tree' of mipmaps. The calculation is straightforward. Then once you have the Haar wavelet appropriately weighted, throw away all but the k largest coefficients (in terms of absolute value), normalize the vector and save it.
If you take the dot product of two of those normalized vectors it gives you a measure of similarity with 1 being nearly identical. I posted more information over here.
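Here is a simplified sketch of that signature in Python/NumPy: a plain 2-D Haar decomposition (the per-level weighting mentioned above is left out), keeping only the k largest coefficients and normalising. The resize size and k are arbitrary choices:

import numpy as np
from PIL import Image

def haar_2d(a):
    # Standard Haar decomposition: repeatedly replace the top-left block
    # with pairwise averages (left/top) and differences (right/bottom).
    a = a.astype(float)
    h, w = a.shape
    while h > 1 and w > 1:
        block = a[:h, :w]
        a[:h, :w] = np.hstack([(block[:, 0::2] + block[:, 1::2]) / 2,
                               (block[:, 0::2] - block[:, 1::2]) / 2])
        block = a[:h, :w]
        a[:h, :w] = np.vstack([(block[0::2, :] + block[1::2, :]) / 2,
                               (block[0::2, :] - block[1::2, :]) / 2])
        h //= 2
        w //= 2
    return a

def haar_signature(path, size=128, k=60):
    grey = Image.open(path).convert("L").resize((size, size))
    coeffs = haar_2d(np.asarray(grey)).ravel()
    keep = np.argsort(np.abs(coeffs))[-k:]     # indices of the k largest
    sig = np.zeros_like(coeffs)
    sig[keep] = coeffs[keep]
    return sig / np.linalg.norm(sig)

# Dot product of two signatures: close to 1.0 means nearly identical.
print(float(haar_signature("a.jpg") @ haar_signature("b.jpg")))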
You should definitely take a look at phash.
For image comparison there is this PHP project:
https://github.com/kennethrapp/phasher
And my little javascript clone:
https://redaktor.me/phasher/demo_js/index.html
Unfortunately this is "bitcount"-based, but it will recognize rotated images.
Another approach in JavaScript was to build a luminosity histogram from the image with the help of a canvas. You can visualize a polygon histogram on the canvas and compare that polygon against your database (e.g. MySQL spatial ...).
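If you just want to experiment with the idea before pulling in one of those projects, an "average hash" - a much cruder relative of pHash - is only a few lines. Note that, unlike the projects above, it will not recognize rotated images:

from PIL import Image

def average_hash(path, hash_size=8):
    # Shrink to 8x8 greyscale and set one bit per pixel that is brighter
    # than the mean; similar images produce hashes with few differing bits.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    return bin(a ^ b).count("1")               # small distance = similar images

print(hamming(average_hash("a.jpg"), average_hash("b.jpg")))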
A long time ago I worked on a system that had some similar characteristics, and this is an approximation of the algorithm we followed:
Divide the picture into zones. In our case we were dealing with 4:3 resolution video, so we used 12 zones. Doing this takes the resolution of the source images out of the picture.
For each zone, calculate an overall color - the average of all pixels in the zone
For the entire image, calculate an overall color - the average of all zones
So for each image, you're storing n + 1 integer values, where n is the number of zones you're tracking.
For comparisons, you also need to look at each color channel individually.
For the overall image, compare the color channels for the overall colors to see if they are within a certain threshold - say, 10%
If the images are within the threshold, next compare each zone. If all zones also are within the threshold, the images are a strong enough match that you can at least flag them for further comparison.
This lets you quickly discard images that are not matches; you can also use more zones and/or apply the algorithm recursively to get stronger match confidence.
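A compact sketch of that zone scheme: the 12 zones and the 10% per-channel threshold come from the answer above, while the library and the exact zone split are filled in for illustration:

import numpy as np
from PIL import Image

ROWS, COLS = 3, 4            # 12 zones, as used for 4:3 material above
THRESHOLD = 0.10             # 10% of the 0-255 range, per channel

def zone_signature(path):
    # Row 0 is the overall colour, rows 1..12 are the zone averages.
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    h, w = rgb.shape[:2]
    zones = [rgb[r*h//ROWS:(r+1)*h//ROWS, c*w//COLS:(c+1)*w//COLS].mean(axis=(0, 1))
             for r in range(ROWS) for c in range(COLS)]
    return np.vstack([np.mean(zones, axis=0)] + zones)

def probably_match(sig_a, sig_b):
    diff = np.abs(sig_a - sig_b) / 255.0
    if (diff[0] > THRESHOLD).any():            # compare overall colour first
        return False
    return bool((diff[1:] <= THRESHOLD).all()) # then every zone, per channel

print(probably_match(zone_signature("a.jpg"), zone_signature("b.jpg")))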
Similar to Ic's answer - you might try comparing the images at multiple resolutions. So each image gets saved as 1x1, 2x2, 4x4 ... 800x800. If the lowest resolution doesn't match (subject to a threshold), you can immediately reject it. If it does match, you can compare them at the next higher resolution, and so on.
Also - if the images share any similar structure, such as medical images, you might be able to extract that structure into a description that is easier/faster to compare.
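A hedged sketch of that coarse-to-fine comparison; the ladder of resolutions and the rejection threshold are arbitrary choices for illustration:

import numpy as np
from PIL import Image

def similar(path_a, path_b, max_side=64, threshold=20):
    # Compare at 1x1, 2x2, 4x4, ... and bail out as soon as the mean
    # per-channel difference exceeds the threshold (cheap early rejection).
    a = Image.open(path_a).convert("RGB")
    b = Image.open(path_b).convert("RGB")
    side = 1
    while side <= max_side:
        da = np.asarray(a.resize((side, side)), dtype=float)
        db = np.asarray(b.resize((side, side)), dtype=float)
        if np.abs(da - db).mean() > threshold:
            return False
        side *= 2
    return True

print(similar("a.jpg", "b.jpg"))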
As of 2015 (back to the future... on this 2009 question which is now high-ranked in Google) image similarity can be computed using Deep Learning techniques. The family of algorithms known as Auto Encoders can create a vector representation which is searchable for similarity. There is a demo here.
One way you can do this is to resize the image and drop the resolution significantly (to 200x200 maybe?), storing a smaller (pixel-averaged) version for doing the comparison. Then define a tolerance threshold and compare each pixel. If the RGB of all pixels are within the tolerance, you've got a match.
Your initial run through is O(n^2) but if you catalog all matches, each new image is just an O(n) algorithm to compare (you only have to compare it to each previously inserted image). It will eventually break down however as the list of images to compare becomes larger, but I think you're safe for a while.
After 400 days of running, you'll have 500,000 images, which means (discounting the time to resize the image down) 200(H)*200(W)*500,000(images)*3(RGB) = 60,000,000,000 comparisons. If every image is an exact match, you're going to be falling behind, but that's probably not going to be the case, right? Remember, you can discount an image as a match as soon as a single comparison falls outside your threshold.
Do you literally want to compare every image against the others? What is the application? Maybe you just need some kind of indexing and retrieval of images based on certain descriptors? Then for example you can look at MPEG-7 standard for Multimedia Content Description Interface. Then you could compare the different image descriptors, which will be not that accurate but much faster.
So you want to do "fingerprint matching", which is pretty different from "image matching". Fingerprint analysis has been studied in depth over the past 20 years, and several interesting algorithms have been developed to ensure the right detection rate (with respect to FAR and FRR measures - False Acceptance Rate and False Rejection Rate).
I suggest you look at the LFA (Local Feature Analysis) class of detection techniques, mostly built on minutiae inspection. Minutiae are specific characteristics of any fingerprint and have been classified into several classes. Mapping a raster image to a minutiae map is what most public authorities actually do to file criminals or terrorists.
See here for further references
For iPhone image comparison and image similarity development check out:
http://sites.google.com/site/imagecomparison/
To see it in action, check out eyeBuy Visual Search on the iTunes AppStore.
It seems that specialised image hashing algorithms are an area of active research but perhaps a normal hash calculation of the image bytes would do the trick.
Are you seeking byte-identical images, rather than images that are derived from the same source but may be in a different format or resolution (which strikes me as a rather hard problem)?
