How can I calculate the frequency of pixel change within a region of interest in an AVI file?

I have a number of AVI files for which I can identify certain regions of interest.
I am interested in calculating the frequency of pixel change (e.g. from black to white, or some other change) within each region of interest. Is there any tool or workflow I should be looking at?

I think you would have to do the following:
Extract each frame of the AVI into a bitmap (you could use something like FFmpeg).
Read each pixel of the region of interest into some type of array.
Compare each array to the first frame's (or the previous frame's, depending on your needs) to find differences, then calculate the frequency of change.

OpenCV is a library with a lot of algorithms already implemented for such tasks. It works with Python and C++, among other languages.
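Putting the two suggestions above together, here's a minimal sketch using OpenCV's Python bindings (reading frames directly instead of extracting bitmaps first); the file name, ROI coordinates, and change threshold are assumptions you'd adjust for your data:

import cv2
import numpy as np

VIDEO_PATH = "input.avi"       # hypothetical file name
X, Y, W, H = 100, 50, 64, 64   # hypothetical region of interest
THRESHOLD = 30                 # grey-level change that counts as a "change"

cap = cv2.VideoCapture(VIDEO_PATH)
change_counts = np.zeros((H, W), dtype=np.int32)
prev_roi = None
n_transitions = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    roi = gray[Y:Y+H, X:X+W].astype(np.int16)
    if prev_roi is not None:
        # Count a pixel as "changed" if it moved more than THRESHOLD grey levels.
        change_counts += np.abs(roi - prev_roi) > THRESHOLD
        n_transitions += 1
    prev_roi = roi
cap.release()

if n_transitions:
    # Per-pixel fraction of frame-to-frame transitions in which the pixel changed.
    print("mean change frequency in ROI:", (change_counts / n_transitions).mean())

Comparing against the first frame instead of the previous one is just a matter of never updating prev_roi after the first iteration.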

Related

Clustering of colors in a thermal image

I am working on detecting dental issues using thermography. I need to separate all the colours in the image into separate clusters (4-7 of them) so that the high-temperature zones (seen as white in the image) come out separately, which can be followed by thresholding if need be.
I am also attaching a sample of the images I will be working on. I am looking for a suitable program to carry out the execution in MATLAB.
I've already worked on this (the program is attached in my previous question), but it gives only 3 clusters. Since I'm a beginner, I need help establishing more clusters.
[Image: thermal-camera image on which clustering is to be carried out]
[Image: the closest clustering I could get; here the green-blue cluster and the white cluster are in the same image, which I want in separate clusters, hence the need for more clusters]
[Image: expected result after clustering and thresholding]
Rather than hoping that clustering happens to do what you need, I'd just use the ground truth you already have...
In case you haven't noticed: there is a colour index to the right of the image. It is easy to detect, easy to use, and ordered (a big advantage over clustering, in particular for setting thresholds): a ready-made key for interpreting these images without depending on a random initialisation.
Note that you will likely also need to read off the numbers that label the colour scale, in order to compare images.
You can read your image file into MATLAB and then convert the data from RGB to HSV with the built-in function rgb2hsv().
HSL and HSV are two alternative representations of the RGB colour model. You can then easily make your discrimination on the value of H, which stands for hue.
For more information, take a look at the following link: HSL and HSV
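The question asks for MATLAB, but just to illustrate the hue-based idea, here's a minimal sketch in Python using skimage; the file name, the number of hue bins, and the saturation cutoff for "white" pixels are all assumptions:

from skimage import io, color
import numpy as np

img = io.imread("thermal.png")[:, :, :3]   # hypothetical file; drop alpha if present
hsv = color.rgb2hsv(img)                   # H, S, V channels, each scaled to [0, 1]

# Bin the hue channel into 6 clusters. White/hot pixels have no meaningful hue,
# so pixels with low saturation are assigned to their own seventh cluster.
hue_bins = np.minimum((hsv[..., 0] * 6).astype(int), 5)
labels = np.where(hsv[..., 1] < 0.2, 6, hue_bins)   # 0-5 = hue clusters, 6 = white
hot_mask = labels == 6                              # candidate high-temperature zone

The same few lines translate almost directly to MATLAB's rgb2hsv() and logical indexing.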

Match two images of different intensity, size, and taken from different sources and computing difference between them

Problem statement:
Given an input image, find and extract a similar image from a cluttered scene. Then find the differences between the extracted image and the input image.
My Approach:
So far I have used SIFT features for feature matching and an affine transform to extract the image from the cluttered scene.
But I have not been able to find a method that is good enough and feasible for finding the differences between the input image and the extracted image.
I don't think there is one particular technique for your problem. If the traditional methods don't suit your needs, maybe you can use the keypoints (SIFT) again to estimate the difference.
You have already done most of the work by matching the images using SIFT.
Next, use the corresponding matched SIFT points to estimate the affine warp. Apply the required warp to the second image and crop so that the images are superimposable.
Now you can calculate the absolute difference of the two images and use SAD or SSD as a difference indication.
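A sketch of that pipeline with OpenCV's Python bindings (file names are placeholders, and you'd want a healthy number of good matches before trusting the estimated warp):

import cv2
import numpy as np

img1 = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)      # hypothetical file names
img2 = cv2.imread("extracted.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Ratio-test matching, then estimate the affine warp from the matched points.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
M, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)

# Warp the extracted image onto the input image's frame, then take the difference.
aligned = cv2.warpAffine(img2, M, (img1.shape[1], img1.shape[0]))
diff = cv2.absdiff(img1, aligned)
print("SAD:", int(diff.sum()), "SSD:", int((diff.astype(np.int64) ** 2).sum()))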

Is there any Algorithm for converting Image histograms into original image?

So we have histograms... Is there any algorithm to generate the original image from them?
[Image: example histograms (source: petrileskinen.fi)]
No, because the histograms simply plot the number of pixels of various tones, not their locations.
That's like saying: "Can you reconstruct a specific painting (not knowing which one) from a couple of pots of paint?"
It's not possible to reconstruct an unknown picture from a histogram, but that doesn't mean there's nothing you can do. If you have a database of possible pictures, you can "fingerprint" each picture, by generating its histogram, and then use the histogram you have to search over that database of fingerprints to identify which picture it is. If you find a decent distance metric, you could possibly even use this to find pictures that are "similar" (in some very rough sense) to the picture you have.
You can't use this to say "here's a picture of the Tower of London; now find me other pictures of the Tower of London" but you could use it to say "here's a picture of a sunset; find me pictures that contain a similar set of colours", which might end up being useful to some extent.
Of course it might turn out that your evening landscape picture has a very similar histogram to something completely irrelevant, and may have a completely different histogram to a picture that, to a human, looks similar. So it's not a robust approach. But if all you have is the histogram, then it may be worth looking into what can be achieved.
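As a rough sketch of that fingerprinting idea, assuming a handful of hypothetical file names and an 8x8x8 colour histogram as the fingerprint:

import cv2
import numpy as np

def fingerprint(path):
    # Normalised 8x8x8 colour histogram, flattened into a 512-vector.
    img = cv2.imread(path)
    hist = cv2.calcHist([img], [0, 1, 2], None, [8, 8, 8], [0, 256, 0, 256, 0, 256])
    return hist.ravel() / hist.sum()

# Hypothetical "database": file name -> fingerprint.
db = {p: fingerprint(p) for p in ["a.jpg", "b.jpg", "c.jpg"]}
query = fingerprint("query.jpg")

# Rank by histogram intersection; higher means more similar.
best = max(db, key=lambda p: np.minimum(db[p], query).sum())
print("closest match:", best)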
No. Histograms are lossy.
A histogram doesn't carry any spatial information: it's not possible to recover the x,y positions of the pixels that contributed to a particular histogram bin. The histogram only represents the image's global brightness information.
A histogram only carries information about the distribution of tones in an image. It is an aggregation of the discrete information encoded in the original image (how many pixels have particular values), so it's not possible to regenerate the original image without additional details such as where those pixels are located.

Detecting image equality at different resolutions

I'm trying to build a script to go through my original, high-res photos and replace the old, low-res ones I uploaded to Flickr before I had a pro account.
For many of them I can just use Exif info such as date taken to determine a match. But some are really old, and either the original file didn't have Exif info, or it got clobbered by whatever stupid resizing software I used at the time.
So, unable to rely on metadata, I'm forced to resort to the content itself. The problem is that the originals are in different resolutions than the ones on Flickr (which is the whole point of this endeavour). So is there a way for me to compare them with some sort of fuzzy similarity measure that would allow me to set a threshold for requiring human input or not?
I guess knowing one image is a resized version of the other can yield better results than general similarity. A solution in any language will do, but Ruby would be a plus :)
Interesting problem, btw :)
Slow-ish solution - excellent chance of success
Use a scale-invariant feature detector to find corresponding features in both images. If the features are matched with a high score at similar locations, then you have your match.
I'd recommend SIFT, which generates a scale- and rotation-invariant 128-dimensional descriptor for each feature found in an image. SURF (available in OpenCV) is another (faster) feature point detector.
You can match features across two images by brute force (compare each descriptor to every descriptor in the other image), which is O(n^2) but pretty fast (especially in the VLFeat SIFT implementation). But if you need to compare the features in one image to several images (which you might have to), you should build a tree of the features to query with the other image's features. K-d trees are useful, and OpenCV has a nice implementation.
Fast solution - might work
Downsample your high-res image to the low-res dimensions and use a similarity measure like SAD (where the sum of the differences between block of, say, 3x3 pixels around a pixel in both images is the score) to determine a match.
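For example (hypothetical file names; INTER_AREA is a reasonable "combining" choice for downsampling):

import cv2
import numpy as np

hi = cv2.imread("original.jpg")   # hypothetical file names
lo = cv2.imread("flickr.jpg")

# Downsample the high-res image to the low-res dimensions (INTER_AREA averages
# the pixels being combined), then score the pair with the sum of absolute differences.
hi_small = cv2.resize(hi, (lo.shape[1], lo.shape[0]), interpolation=cv2.INTER_AREA)
sad = np.abs(hi_small.astype(np.int16) - lo.astype(np.int16)).sum()
print("SAD per pixel:", sad / lo.size)   # near zero suggests a match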
I'd recommend scripting a solution off of ImageMagick. The following (from the documentation on comparing images with IM) would output a comparative value that you can use.
convert image1 image2 \
-compose difference -composite -colorspace gray miff:- |\
identify -verbose - |\
sed -n '/^.*Mean: */{s//scale=2;/;s/(.*)//;s/$/*100\/32768/;p;q;}' | bc
Compute the normalized color histograms of both images and compare them using some method (histogram intersection, for example - see the link above). Note that normalized histograms are needed because the images have different resolutions. If the histograms are very dissimilar, the images are not the same picture. But if they are similar, you have one of two cases: (i) they are the same picture, or (ii) they are different pictures that happen to have similar global color distributions.
For case (ii), split the images into rectangular tiles and repeat the process, comparing corresponding tiles (see the sketch below). You are trying to account for local properties of the image. Rank the results and pick the best match.
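A sketch of the tiled second stage, assuming a 4x4 grid and 32-bin grey-level histograms (file names are hypothetical):

import cv2
import numpy as np

def tile_histograms(path, rows=4, cols=4, bins=32):
    # One normalized grey-level histogram per rectangular tile.
    g = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    h, w = g.shape
    hists = []
    for r in range(rows):
        for c in range(cols):
            tile = g[r*h//rows:(r+1)*h//rows, c*w//cols:(c+1)*w//cols]
            hist, _ = np.histogram(tile, bins=bins, range=(0, 256))
            hists.append(hist / hist.sum())
    return np.concatenate(hists)

a = tile_histograms("img_a.jpg")
b = tile_histograms("img_b.jpg")
# Histogram intersection per tile, averaged over the 16 tiles; 1.0 = every tile matches.
print("tiled intersection score:", np.minimum(a, b).sum() / 16)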

How can I measure the similarity between two images? [closed]

I would like to compare a screenshot of one application (could be a Web page) with a previously taken screenshot to determine whether the application is displaying itself correctly. I don't want an exact match comparison, because the aspect could be slightly different (in the case of a Web app, depending on the browser, some element could be at a slightly different location). It should give a measure of how similar are the screenshots.
Is there a library / tool that already does that? How would you implement it?
This depends entirely on how smart you want the algorithm to be.
For instance, here are some issues:
cropped images vs. an uncropped image
images with a text added vs. another without
mirrored images
The easiest and simplest algorithm I've seen for this is just to do the following steps to each image:
scale to something small, like 64x64 or 32x32, disregard aspect ratio, use a combining scaling algorithm instead of nearest pixel
scale the color ranges so that the darkest is black and lightest is white
rotate and flip the image so that the lightest corner is top-left, then top-right is next darker, bottom-left next darker (as far as possible, of course)
Edit: A combining scaling algorithm is one that, when scaling 10 pixels down to one, does so with a function that takes the colour of all 10 of those pixels and combines them into one. This can be done with simple averaging or with more complex algorithms like bicubic splines.
Then calculate the mean distance pixel-by-pixel between the two images.
To look up a possible match in a database, store the pixel colors as individual columns in the database, index a bunch of them (but not all, unless you use a very small image), and do a query that uses a range for each pixel value, ie. every image where the pixel in the small image is between -5 and +5 of the image you want to look up.
This is easy to implement, and fairly fast to run, but of course won't handle most advanced differences. For that you need much more advanced algorithms.
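A condensed sketch of steps 1 and 2 above plus the mean-distance comparison (the rotate/flip canonicalisation of step 3 is omitted; the file names and the 32x32 size are assumptions):

import cv2
import numpy as np

def tiny_fingerprint(path, size=32):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # INTER_AREA is a "combining" scaler: each output pixel averages a block of inputs.
    small = cv2.resize(img, (size, size), interpolation=cv2.INTER_AREA)
    # Stretch the tonal range so the darkest pixel becomes 0 and the lightest 255.
    return cv2.normalize(small, None, 0, 255, cv2.NORM_MINMAX)

a = tiny_fingerprint("a.png").astype(np.int16)   # hypothetical file names
b = tiny_fingerprint("b.png").astype(np.int16)
print("mean pixel distance:", np.abs(a - b).mean())   # small = likely the same image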
The 'classic' way of measuring this is to break the image up into some canonical number of sections (say a 10x10 grid), compute a histogram of RGB values inside each cell, and compare corresponding histograms. This type of algorithm is preferred because of both its simplicity and its invariance to scaling and (small!) translation.
Use a normalised colour histogram (read the section on applications here). They are commonly used in image retrieval/matching systems and are a standard way of matching images that is very reliable, relatively fast, and very easy to implement.
Essentially a colour histogram will capture the colour distribution of the image. This can then be compared with another image to see if the colour distributions match.
This type of matching is pretty resilient to scaling (once the histogram is normalised) and to rotation/shifting/movement etc.
Avoid pixel-by-pixel comparisons as if the image is rotated/shifted slightly it may lead to a large difference being reported.
Histograms would be straightforward to generate yourself (assuming you can get access to pixel values), but if you don't feel like it, the OpenCV library is a great resource for doing this kind of stuff. Here is a powerpoint presentation that shows you how to create a histogram using OpenCV.
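For instance, a minimal histogram comparison with OpenCV (file names are placeholders; HISTCMP_INTERSECT is one of several metrics cv2.compareHist offers):

import cv2

def colour_hist(path):
    # Normalised colour histogram, so that image size doesn't matter.
    img = cv2.imread(path)
    hist = cv2.calcHist([img], [0, 1, 2], None, [8, 8, 8], [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist)

h1 = colour_hist("shot_expected.png")   # hypothetical file names
h2 = colour_hist("shot_actual.png")
# Higher intersection = more similar; cv2 also offers correlation, chi-square, etc.
print(cv2.compareHist(h1, h2, cv2.HISTCMP_INTERSECT))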
Don't video encoding algorithms like MPEG compute the difference between each frame of a video so they can just encode the delta? You might look into how video encoding algorithms compute those frame differences.
Look at this open source image search application http://www.semanticmetadata.net/lire/. It describes several image similarity algorithms, including three from the MPEG-7 standard (ScalableColor, ColorLayout, EdgeHistogram) plus Auto Color Correlogram.
You could use a pure mathematical approach of O(n^2), but it will be useful only if you are certain that there's no offset or the like. (Although if you have a few objects with homogeneous colouring, it will still work pretty well.)
Anyway, the idea is to compute the normalized dot product of the two matrices:
C = sum(Pij*Qij) / sqrt(sum(Pij^2) * sum(Qij^2))
This quantity is the cosine of the angle between the two matrices viewed as vectors (weird, but true).
If the images are proportional to each other (say Pij = Qij), C = 1; if they are "orthogonal" (every product Pij*Qij is zero, i.e. the images have no overlapping non-zero pixels), C = 0. Note that the measure is scale-invariant: two images differing only by a uniform brightness factor still give C = 1.
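A short NumPy version of this measure:

import numpy as np

def cosine_similarity(p, q):
    # Normalised dot product of the two images viewed as flat vectors.
    p = p.astype(np.float64).ravel()
    q = q.astype(np.float64).ravel()
    return p.dot(q) / np.sqrt(p.dot(p) * q.dot(q))

img = np.random.randint(0, 256, (64, 64))
print(cosine_similarity(img, img))   # 1.0: identical images are parallel vectors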
You'll need pattern recognition for that. To determine small differences between two images, Hopfield nets work fairly well and are quite easy to implement. I don't know any available implementations, though.
A Ruby solution can be found here
From the readme:
Phashion is a Ruby wrapper around the pHash library, "perceptual hash", which detects duplicate and near duplicate multimedia files
How to measure the similarity between two images entirely depends on what you would like to measure, for example contrast, brightness, modality, or noise... then choose the similarity measure that best suits it. There are MAD (mean absolute difference) and MSD (mean squared difference), which are good for measuring brightness differences; the correlation coefficient, which is good at representing the correlation between two images; histogram-based measures like SDH (standard deviation of the difference image histogram); and multi-modality measures like MI (mutual information) or NMI (normalized mutual information).
Because these similarity measures are computationally expensive, it is advisable to scale the images down before applying them.
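As an illustration of one of these, here's a minimal mutual-information estimate built from a joint grey-level histogram (the bin count is an assumption, and the two images must have the same number of pixels):

import numpy as np

def mutual_information(a, b, bins=32):
    # Joint grey-level histogram -> joint and marginal probabilities -> MI (in nats).
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal over image b
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())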
I wonder (and I'm really just throwing the idea out there to be shot down) if something could be derived by subtracting one image from the other and then compressing the resulting image as a JPEG or GIF, taking the file size as a measure of similarity.
If you had two identical images, the difference would be a completely flat image, which would compress really well. The more the images differed, the more complex the difference would be to represent, and hence the less compressible.
Probably not an ideal test, and probably much slower than necessary, but it might work as a quick and dirty implementation.
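A quick sketch of that idea with Pillow (hypothetical file names; PNG is used here rather than JPEG/GIF so that lossy artefacts don't distort the size signal):

import io
from PIL import Image, ImageChops

a = Image.open("a.png").convert("L")                 # hypothetical file names
b = Image.open("b.png").convert("L").resize(a.size)

diff = ImageChops.difference(a, b)                   # flat black where the images agree
buf = io.BytesIO()
diff.save(buf, format="PNG")                         # lossless: size tracks residual detail
print("compressed difference:", buf.tell(), "bytes") # smaller = more similar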
You might look at the code for the open source tool findimagedupes, though it appears to have been written in Perl, so I can't say how easy it will be to parse...
Reading the findimagedupes page that I linked, I see that there is a C++ implementation of the same algorithm. Presumably this will be easier to understand.
And it appears you can also use gqview.
Well, not to answer your question directly, but I have seen this happen. Microsoft recently launched a tool called PhotoSynth which does something very similar to determine overlapping areas in a large number of pictures (which could be of different aspect ratios).
I wonder if they have any available libraries or code snippets on their blog.
To expand on Vaibhav's note, Hugin is an open-source 'autostitcher' which should have some insight into the problem.
There's software for content-based image retrieval, which does (partially) what you need. All references and explanations are linked from the project site and there's also a short text book (Kindle): LIRE
You can use a Siamese network to tell whether two images are similar or dissimilar, following this tutorial. The tutorial clusters similar images; you can use the L2 distance between the learned embeddings to measure the similarity of two images.
Beyond Compare has pixel-by-pixel comparison for images.
If this is something you will be doing on an occasional basis and doesn't need automating, you can do it in an image editor that supports layers, such as Photoshop or Paint Shop Pro (probably GIMP or Paint.Net too, but I'm not sure about those). Open both screen shots, and put one as a layer on top of the other. Change the layer blending mode to Difference, and everything that's the same between the two will become black. You can move the top layer around to minimize any alignment differences.
Well, a really basic method would be to go through every pixel colour in one image and compare it with the corresponding pixel colour in the second image, but that's probably a very, very slow solution.
