Analysing how well an image is represented?

Is there some algorithm that I can use to analyse image representation accuracies? Do people such as compression algorithm designers have some sort of objective way of comparing two image representations?
Say I'm trying to display a circle as a raster image; the higher the resolution, the closer the image comes to a perfect circle, so the representations clearly become more accurate as the resolution increases.
Now, how can I measure how close a particular representation of the circle is to the circle?
One method I came up with was to measure the area of the bits that didn't match between the high-res and low-res images (the XOR of the two); for the examples above this gave mismatched areas of 4.12% and 1.15%.
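For what it's worth, a minimal sketch of that XOR measure in Python, assuming the low-res rendering and a high-res reference are available as PIL images, and that the mismatch is normalised by the reference shape's area (that normalisation is my assumption):

    import numpy as np
    from PIL import Image

    def xor_mismatch(low_res, reference):
        """Fraction of the reference shape's area made up of mismatched pixels."""
        # Upscale the low-res rendering with nearest neighbour so each coarse
        # pixel becomes a block of identical pixels at the reference size.
        upscaled = low_res.resize(reference.size, Image.NEAREST)
        a = np.asarray(upscaled.convert("1"), dtype=bool)
        b = np.asarray(reference.convert("1"), dtype=bool)
        return np.logical_xor(a, b).sum() / b.sum()   # e.g. 0.0412 for 4.12%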
But how would I apply this to a non-silhouette image such as a photo or an anti-aliased image?

I assume that you are not thinking of mosaic images, which are easy to detect from the pattern of repeated values.
For a natural image, the question does not really make sense: the image is as accurate as it can be, given area sampling at capture, and in any case you have no ground truth to compare against.
This is your antialiased image:

Related

Detecting a Partially Blurred Image

How would one go about creating an algorithm that detects the unblurred parts of a picture? For example, it would look at this picture:
and realize the non blurred portion is:
I saw over here how to measure the blur of the whole picture. For this problem, should I just set a threshold on the maximum absolute second derivative at each pixel, and treat whichever pixels exceed it as the non-blurred region?
A simple solution is to detect high frequency content.
If there's no high frequency content in an area, it may be because it is blurred.
How to detect areas with no high frequency content? You can do it in the frequency domain (for example, with DCT), or you can do it in the spatial domain.
As a first approach, I recommend the spatial-domain method.
You'll need some kind of high-pass filter. The easiest method is to blur the image (for example, with a Gaussian filter), then subtract it from the original, then convert to grayscale:
Blurred:
Subtracted:
As you see, all the blurred pixels become dark, and high frequency content is bright. Now, you may want to blur this image, and apply a threshold, to get this:
Note: this process was done by hand, with GIMP. Your algorithm can easily follow the same steps, but it needs some parameters specified (such as the blur radius and the threshold value).
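A rough OpenCV version of those steps (the blur radii and the threshold of 8 are placeholders you would need to tune):

    import cv2

    img = cv2.imread("photo.jpg")                        # hypothetical input file
    blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=5)    # low-pass copy
    highpass = cv2.absdiff(img, blurred)                 # subtract: keeps high-frequency content
    gray = cv2.cvtColor(highpass, cv2.COLOR_BGR2GRAY)

    # Blur the residue so isolated noisy pixels don't survive, then threshold.
    smoothed = cv2.GaussianBlur(gray, (0, 0), sigmaX=10)
    _, mask = cv2.threshold(smoothed, 8, 255, cv2.THRESH_BINARY)
    cv2.imwrite("sharp_regions_mask.png", mask)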

Methods to detect a common area in a series of images

Suppose we have a series of digital images D1,...,Dn. For simplicity, we consider these images to be of the same size. The problem is to find the largest common area -- the largest area that all of the input images share.
I suppose that if we have an algorithm to detect such an area in two input images A and B, we can generalize it to the case of n images.
The main difficulty in this problem is that this area in image A doesn't have to be identical, pixel for pixel, to the same area in image B. For example, say we take two shots of a building using a phone camera; our hand shook and the second picture turned out to be slightly shifted. And the noise that's present in every picture adds uncertainty as well.
What algorithms should I look into to solve this kind of problem?
A simple but approximate solution, to begin with (a code sketch follows these steps).
Rescale the images so that the amplitude of the shaking becomes smaller than a pixel.
Compute the standard deviation of every pixel across all images.
Consider the pixels with a deviation below a threshold.
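A minimal sketch of those three steps, assuming the shots are already loaded as a list of equally sized grayscale numpy arrays:

    import numpy as np

    def stable_region_mask(images, scale=4, threshold=10.0):
        """images: list of equally sized 2-D grayscale arrays."""
        # 1. Downscale by block averaging so sub-pixel shake disappears.
        h, w = images[0].shape
        h, w = h - h % scale, w - w % scale
        small = [im[:h, :w].reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
                 for im in images]
        # 2. Per-pixel standard deviation across the whole stack.
        deviation = np.std(np.stack(small), axis=0)
        # 3. Pixels that barely change from shot to shot form the common area.
        return deviation < threshold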
As a second approximation, you can use the image at the full resolution as a template, but only in the areas obtained as above. Then register the other images with respect to it. The registration model can be translational only, but allowing rotation would be better.
Unfortunately, registration isn't an easy task. For your small displacements, Lucas-Kanade or Shi-Tomasi might be appropriate.
After registration, you can redo the deviation test to get better delineated regions.
I would use a method like SURF (or SIFT): you compute the SURF descriptors on each image and see if there are common interest points. The common interest points will be the zone you are looking for. Thanks to SURF, the area does not have to be in the same place or at the same scale.
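For example, with OpenCV (assuming a build where cv2.SIFT_create is available):

    import cv2

    img1 = cv2.imread("shot1.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
    img2 = cv2.imread("shot2.jpg", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Lowe's ratio test keeps only distinctive matches; the matched keypoints
    # in img1 outline the zone that both images share.
    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
    shared_points = [kp1[m.queryIdx].pt for m in good]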

Detect the vein pattern in leaves?

My aim is to detect the vein pattern in leaves, which characterizes the various species of plants.
I have already done the following:
Original image:
After Adaptive thresholding:
However, the veins aren't that clear and get distorted. Is there any way I could get a better output?
EDIT:
I tried color thresholding; my results are still unsatisfactory. I get the following image:
Please help
The fact that it's a JPEG image is going to give "block" artifacts, which in the example you posted cause most square areas around the veins to have lots of noise, so ideally work on an image that hasn't been through lossy compression. If that's not possible, then try filtering the image to remove some of the noise.
The veins you are wanting to extract have a different colour from the background, leaf and shadow so some sort of colour based threshold might be a good idea. There was a recent S.O. question with some code that might help here.
After that some sort of adaptive normalisation would help increase the contrast before you threshold it.
[edit]
Maybe thresholding isn't an intermediate step that you want to do. I made the following by filtering to remove JPEG artifacts, doing some CMYK channel math (more cyan and black), then applying adaptive equalisation. I'm pretty sure you could then go on to produce (maybe subpixel) edge points using image gradients and non-maxima suppression, and maybe use the brightness at each point and the properties of the vein structure (mostly joining at a tangent) to join the points into lines.
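A rough Python/OpenCV sketch along those lines; the substitutions are my own assumptions rather than the exact GIMP steps: a non-local-means denoise stands in for the JPEG clean-up, the inverted green channel stands in for the CMYK math, and CLAHE provides the adaptive equalisation:

    import cv2

    img = cv2.imread("leaf.jpg")                                       # hypothetical input
    img = cv2.fastNlMeansDenoisingColored(img, None, 10, 10, 7, 21)    # soften JPEG blocks

    # Rough stand-in for the CMYK channel math: use the channel where the veins
    # contrast most against the blade (here the inverted green channel).
    _, g, _ = cv2.split(img)
    veins = cv2.bitwise_not(g)

    # Adaptive (local) histogram equalisation before any thresholding/edge step.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    cv2.imwrite("veins_enhanced.png", clahe.apply(veins))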
In the past I have had good experiences with the difference-of-Gaussians edge-detection algorithm, which basically works like this:
You blur the image twice with a Gaussian blur, but with two different blur radii.
Then you calculate the difference between the two blurred images.
Pixels of the same colour next to each other produce the same blurred colour in both images.
Pixels of different colours next to each other create a gradient that depends on the blur radius: for a bigger radius the gradient stretches further, for a smaller one it doesn't.
So basically this is a band-pass filter. If the selected radii are too small, a vein will create two "parallel" lines; but since the veins of a leaf are small compared with the extent of the image, you can usually find radii where a vein results in one line.
Here I added the processed picture.
Steps I did on this picture:
desaturate (convert to grayscale)
difference of Gaussians: here I blurred the first image with a radius of 10 px and the second image with a radius of 2 px. You can see the result below.
This is only a quick result; I would guess that by optimizing the parameters you can get even better ones.
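The same difference-of-Gaussians step is only a few lines in Python with OpenCV (the 10 px and 2 px radii come from the answer above and are used here as the Gaussian sigmas, which is an approximation):

    import cv2

    gray = cv2.imread("leaf.jpg", cv2.IMREAD_GRAYSCALE)        # desaturate
    wide = cv2.GaussianBlur(gray, (0, 0), sigmaX=10).astype(float)
    narrow = cv2.GaussianBlur(gray, (0, 0), sigmaX=2).astype(float)

    # Band-pass: structures between the two radii (the veins) survive the subtraction.
    dog = cv2.normalize(narrow - wide, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
    cv2.imwrite("leaf_dog.png", dog)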
This sounds like something I did back in college with neural networks. The neural network stuff is a bit hard so I won't go there. Anyways, patterns are perfect candidates for the 2D Fourier transform! Here is a possible scheme:
You have training data and input data
Represent your data as its 2D Fourier transform
If your database is large, run PCA on the transform results to reduce each 2D spectrum to a compact 1D feature vector
Compare the Hamming distance between the spectrum (after PCA) of one image and those of all the images in your dataset.
You should expect ~70% recognition with such primitive methods as long as the images are of approximately the same rotation. If the images are not of the same rotation, you may have to use SIFT. To get better recognition you will need more intelligent models such as a Hidden Markov Model or a neural net. The truth is that getting good results for this kind of problem may be quite a lot of work.
Check out: https://theiszm.wordpress.com/2010/07/20/7-properties-of-the-2d-fourier-transform/
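A rough numpy/scikit-learn sketch of that scheme; binarising the PCA output against its median before taking the Hamming distance is my own reading of the last step:

    import numpy as np
    from sklearn.decomposition import PCA

    def spectra(images):
        """images: array of shape (n, H, W); returns flattened log-magnitude 2D spectra."""
        mags = np.abs(np.fft.fftshift(np.fft.fft2(images), axes=(-2, -1)))
        return np.log1p(mags).reshape(len(images), -1)

    def fingerprints(images, n_components=64):
        reduced = PCA(n_components=n_components).fit_transform(spectra(images))
        # Binarise each reduced spectrum against its median so Hamming distance applies.
        return reduced > np.median(reduced, axis=1, keepdims=True)

    def hamming(fp_a, fp_b):
        return int(np.count_nonzero(fp_a != fp_b))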

Algorithm for culling pixels in a graphical data view?

I'm writing a wxPython widget which shows the state of several objects over time (x cycles). Right now I have it working with 1 pixel/cycle and zooming in and back out to 1:1, but I would like to allow zooming out beyond that. I wanted to see if there are any go-to algorithms for throwing away/combining data before I start rolling my own using only my own feeble heuristics. Is there any such algorithm, or should I just start coding my own solution?
Depends a lot on what type of images you're resizing. See The myth of infinite detail: Bilinear vs. Bicubic and Better Image Resizing by our very own Jeff! There you can compare results of naive nearest neighbor, bilinear filtering, bicubic filtering, bicubic sharper and genuine fractals.
Jeff's conclusion:
Reducing images is a completely safe and rational operation. You're simply reducing precision and resolution by discarding information. Make the image as small as you want, and you have complete fidelity -- within the bounds of the number of pixels you've allowed. You'll get good results no matter which algorithm you pick. (Well, unless you pick the naive Pixel Resize or Nearest Neighbor algorithms.)
Enlarging images is risky. Beyond a certain point, enlarging images is a fool's errand; you can't magically synthesize an infinite number of new pixels out of thin air. And interpolated pixels are never as good as real pixels. That's why it's more than a little artificial to upsize the 512x512 Lena image by 500%. It'd be smarter to find a higher resolution scan or picture of whatever you need than it would be to upsize it in software.
But when you can't avoid enlarging an image, that's when it pays to know the tradeoffs between bicubic, bilinear, and more advanced resizing algorithms. At least arm yourself with enough knowledge to pick the best of the bad options you have.
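For the wxPython view, one cheap way to experiment with those trade-offs is to render the state at full resolution and let Pillow do the reduction (a sketch; the 4x zoom-out factor and the file name are placeholders):

    from PIL import Image

    full = Image.open("timeline_full_res.png")              # hypothetical rendered view
    target = (full.width // 4, full.height // 4)             # zoomed out 4x

    nearest = full.resize(target, Image.NEAREST)    # fast, but drops whole cycles
    bilinear = full.resize(target, Image.BILINEAR)  # averages neighbouring cycles
    bicubic = full.resize(target, Image.BICUBIC)    # smoother weighting of neighbours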

Image fingerprint to compare similarity of many images

I need to create fingerprints of many images (about 100,000 existing, 1,000 new per day, RGB, JPEG, max size 800x800) to compare every image to every other image very fast. I can't use binary comparison methods, because images that are only nearly identical should also be recognized.
Best would be an existing library, but also some hints to existing algorithms would help me a lot.
Normal hashing or CRC calculation algorithms do not work well with image data. The dimensional nature of the information must be taken into account.
If you need extremely robust fingerprinting, such that affine transformations (scaling, rotation, translation, flipping) are accounted for, you can use a Radon transformation on the image source to produce a normative mapping of the image data - store this with each image and then compare just the fingerprints. This is a complex algorithm and not for the faint of heart.
A few simple solutions are possible:
Create a luminosity histogram for the image as a fingerprint
Create scaled-down versions of each image as a fingerprint
Combine techniques (1) and (2) into a hybrid approach for improved comparison quality
A luminosity histogram (especially one that is separated into RGB components) is a reasonable fingerprint for an image, and can be implemented quite efficiently. Subtracting one histogram from another will produce a new histogram which you can process to decide how similar two images are. Histograms, because they only evaluate the distribution and occurrence of luminosity/color information, handle affine transformations quite well. If you quantize each color component's luminosity information down to an 8-bit value, 768 bytes of storage are sufficient for the fingerprint of an image of almost any reasonable size. Luminosity histograms produce false negatives when the color information in an image is manipulated: if you apply transformations like contrast/brightness, posterize, or color shifting, the luminosity information changes. False positives are also possible with certain types of images, such as landscapes and images where a single color dominates the others.
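A sketch of such a per-channel histogram fingerprint in Python (256 bins per channel gives the 768 values mentioned above; the L1 distance used for the comparison is one reasonable choice among several):

    import numpy as np
    from PIL import Image

    def histogram_fingerprint(path, bins=256):
        rgb = np.asarray(Image.open(path).convert("RGB"))
        hists = [np.histogram(rgb[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
        fp = np.concatenate(hists).astype(float)
        return fp / fp.sum()                       # normalise so image size doesn't matter

    def histogram_distance(fp_a, fp_b):
        return float(np.abs(fp_a - fp_b).sum())    # 0 = identical distributions, 2 = disjoint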
Using scaled images is another way to reduce the information density of the image to a level that is easier to compare. Reductions below 10% of the original image size generally lose too much of the information to be of use, so an 800x800 pixel image can be scaled down to 80x80 and still provide enough information to perform decent fingerprinting. Unlike with histogram data, you have to perform anisotropic scaling of the image data when the source images have varying aspect ratios. In other words, reducing a 300x800 image into an 80x80 thumbnail causes deformation of the image, such that comparing it with a very similar 300x500 image will produce false negatives. Thumbnail fingerprints also often produce false negatives when affine transformations are involved: if you flip or rotate an image, its thumbnail will be quite different from the original and may result in a false negative.
Combining both techniques is a reasonable way to hedge your bets and reduce the occurrence of both false positives and false negatives.
There is a much less ad-hoc approach than the scaled down image variants that have been proposed here that retains their general flavor, but which gives a much more rigorous mathematical basis for what is going on.
Take a Haar wavelet of the image. Basically the Haar wavelet is the succession of differences from the lower resolution images to each higher resolution image, but weighted by how deep you are in the 'tree' of mipmaps. The calculation is straightforward. Then once you have the Haar wavelet appropriately weighted, throw away all but the k largest coefficients (in terms of absolute value), normalize the vector and save it.
If you take the dot product of two of those normalized vectors it gives you a measure of similarity with 1 being nearly identical. I posted more information over here.
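A sketch of that idea, assuming PyWavelets is available for the Haar transform (the level weighting, the 128x128 working size and the k = 60 kept coefficients are arbitrary choices, and colour handling is skipped):

    import numpy as np
    import pywt
    from PIL import Image

    def haar_fingerprint(path, size=128, k=60):
        gray = np.asarray(Image.open(path).convert("L").resize((size, size)), dtype=float)
        coeffs = pywt.wavedec2(gray, "haar")
        # Flatten every level, weighting the coarser (deeper) levels more heavily.
        parts = [coeffs[0].ravel() * len(coeffs)]
        for depth, details in enumerate(coeffs[1:], start=1):
            weight = len(coeffs) - depth
            parts.extend(d.ravel() * weight for d in details)
        vec = np.concatenate(parts)
        # Keep only the k largest-magnitude coefficients, then normalise the vector.
        sparse = np.zeros_like(vec)
        keep = np.argsort(np.abs(vec))[-k:]
        sparse[keep] = vec[keep]
        return sparse / np.linalg.norm(sparse)

    def similarity(fp_a, fp_b):
        return float(np.dot(fp_a, fp_b))           # close to 1.0 means nearly identical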
You should definitely take a look at phash.
For image comparison there is this PHP project:
https://github.com/kennethrapp/phasher
And my little JavaScript clone:
https://redaktor.me/phasher/demo_js/index.html
Unfortunately this is "bitcount"-based, but it will recognize rotated images.
Another approach in JavaScript was to build a luminosity histogram from the image with the help of canvas. You can visualize a polygon histogram on the canvas and compare that polygon against the ones in your database (e.g. MySQL spatial ...).
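In Python, the same kind of perceptual hash is available through the imagehash package (a sketch; imagehash is not the pHash library itself, but its phash function implements a comparable DCT-based hash):

    from PIL import Image
    import imagehash

    hash_a = imagehash.phash(Image.open("a.jpg"))   # hypothetical file names
    hash_b = imagehash.phash(Image.open("b.jpg"))

    # Subtracting two hashes gives a Hamming distance on the 64-bit hash;
    # small values mean perceptually similar images. The cut-off needs tuning.
    if hash_a - hash_b <= 10:
        print("probably the same picture")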
A long time ago I worked on a system that had some similar characteristics, and this is an approximation of the algorithm we followed:
Divide the picture into zones. In our case we were dealing with 4:3 resolution video, so we used 12 zones. Doing this takes the resolution of the source images out of the picture.
For each zone, calculate an overall color - the average of all pixels in the zone
For the entire image, calculate an overall color - the average of all zones
So for each image, you're storing n + 1 integer values, where n is the number of zones you're tracking.
For comparisons, you also need to look at each color channel individually.
For the overall image, compare the color channels for the overall colors to see if they are within a certain threshold - say, 10%
If the images are within the threshold, next compare each zone. If all zones also are within the threshold, the images are a strong enough match that you can at least flag them for further comparison.
This lets you quickly discard images that are not matches; you can also use more zones and/or apply the algorithm recursively to get stronger match confidence.
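A sketch of that zone scheme in Python (a 3x4 grid gives the 12 zones mentioned above; the 10% tolerance is the example figure from the answer):

    import numpy as np
    from PIL import Image

    def zone_fingerprint(path, grid=(3, 4)):
        """Per-zone mean RGB plus the overall mean, as an (n_zones + 1, 3) array."""
        rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float)
        rows, cols = grid
        h = rgb.shape[0] // rows * rows
        w = rgb.shape[1] // cols * cols
        zones = (rgb[:h, :w]
                 .reshape(rows, h // rows, cols, w // cols, 3)
                 .mean(axis=(1, 3))
                 .reshape(-1, 3))
        return np.vstack([zones, rgb.mean(axis=(0, 1))])

    def could_match(fp_a, fp_b, tolerance=0.10):
        diff = np.abs(fp_a - fp_b) / 255.0
        if np.any(diff[-1] > tolerance):              # overall colour first, per channel
            return False
        return bool(np.all(diff[:-1] <= tolerance))   # then every zone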
Similar to Ic's answer -- you might try comparing the images at multiple resolutions. So each image gets saved as 1x1, 2x2, 4x4 ... 800x800. If the lowest resolution doesn't match (subject to a threshold), you can immediately reject it. If it does match, you can compare them at the next higher resolution, and so on.
Also - if the images share any similar structure, such as medical images, you might be able to extract that structure into a description that is easier/faster to compare.
As of 2015 (back to the future... on this 2009 question which is now high-ranked in Google) image similarity can be computed using Deep Learning techniques. The family of algorithms known as Auto Encoders can create a vector representation which is searchable for similarity. There is a demo here.
One way you can do this is to resize the image and drop the resolution significantly (to 200x200 maybe?), storing a smaller (pixel-averaged) version for doing the comparison. Then define a tolerance threshold and compare each pixel. If the RGB of all pixels are within the tolerance, you've got a match.
Your initial run through is O(n^2) but if you catalog all matches, each new image is just an O(n) algorithm to compare (you only have to compare it to each previously inserted image). It will eventually break down however as the list of images to compare becomes larger, but I think you're safe for a while.
After 400 days of running, you'll have 500,000 images, which means (discounting the time to resize the image down) 200(H)*200(W)*500,000(images)*3(RGB) = 60,000,000,000 comparisons. If every image is an exact match, you're going to be falling behind, but that's probably not going to be the case, right? Remember, you can discount an image as a match as soon as a single comparison falls outside your threshold.
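A sketch of that approach with Pillow and numpy (the 200x200 size comes from the answer; the per-channel tolerance of 16 levels is a placeholder to tune):

    import numpy as np
    from PIL import Image

    def thumbnail_array(path, size=(200, 200)):
        # Pixel-averaged reduction (box filter); store this alongside each image.
        small = Image.open(path).convert("RGB").resize(size, Image.BOX)
        return np.asarray(small, dtype=np.int16)   # int16 so differences don't wrap

    def is_match(thumb_a, thumb_b, tolerance=16):
        # A match only if every pixel of every channel is within the tolerance.
        return bool(np.all(np.abs(thumb_a - thumb_b) <= tolerance))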
Do you literally want to compare every image against the others? What is the application? Maybe you just need some kind of indexing and retrieval of images based on certain descriptors? Then for example you can look at MPEG-7 standard for Multimedia Content Description Interface. Then you could compare the different image descriptors, which will be not that accurate but much faster.
So you want to do "fingerprint matching"; that's pretty different from "image matching". Fingerprint analysis has been studied in depth over the past 20 years, and several interesting algorithms have been developed to ensure the right detection rate (with respect to FAR and FRR measures -- False Acceptance Rate and False Rejection Rate).
I suggest you look at the LFA (Local Feature Analysis) class of detection techniques, mostly built on minutiae inspection. Minutiae are specific characteristics of any fingerprint and have been classified into several classes. Mapping a raster image to a minutiae map is what most public authorities actually do to file criminals or terrorists.
See here for further references
For iPhone image comparison and image similarity development check out:
http://sites.google.com/site/imagecomparison/
To see it in action, check out eyeBuy Visual Search on the iTunes AppStore.
It seems that specialised image hashing algorithms are an area of active research, but perhaps a normal hash calculation of the image bytes would do the trick.
Are you seeking byte-identical images, rather than looking for images that are derived from the same source but may be in a different format or resolution (which strikes me as a rather hard problem)?
