Detecting a Partially Blurred Image - algorithm

How would one go about creating an algorithm that detects the unblurred parts of a picture? For example, it would look at this picture:
and realize that the non-blurred portion is:
I saw over here how to measure the blur of the whole picture. For this problem, should I just pick a threshold for the maximal absolute second derivative over the pixels, so that whichever region exceeds it is considered non-blurred?

A simple solution is to detect high frequency content.
If there's no high frequency content in an area, it may be because it is blurred.
How do you detect areas with no high-frequency content? You can do it in the frequency domain (for example, with the DCT), or in the spatial domain.
Of the two, I recommend the spatial domain method.
You'll need some kind of high-pass filter. The easiest method is to blur the image (for example, with a Gaussian filter), then subtract it from the original, then convert to grayscale:
Blurred:
Subtracted:
As you can see, all the blurred pixels become dark, while high-frequency content stays bright. Now you may want to blur this image and apply a threshold, to get this:
Note: this process was done by hand, with GIMP. Your algorithm can easily follow these steps, but it needs some parameters specified (like the blur radius and the threshold value).
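If it helps, here is a minimal sketch of that pipeline in Python with OpenCV. The file names, blur radii, and threshold value are all assumptions you would tune for your own images:

    # High-pass approach sketched above: blur, subtract, smooth, threshold.
    import cv2
    import numpy as np

    img = cv2.imread("input.jpg")  # hypothetical input path
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)

    # High-pass: subtract a blurred copy from the original.
    low = cv2.GaussianBlur(gray, (0, 0), sigmaX=3)  # blur radius is a guess
    highpass = np.abs(gray - low)

    # Smooth the high-pass response so isolated noisy pixels don't dominate,
    # then threshold to get a binary mask of the sharp (non-blurred) region.
    response = cv2.GaussianBlur(highpass, (0, 0), sigmaX=10)
    _, mask = cv2.threshold(response, 4.0, 255, cv2.THRESH_BINARY)

    cv2.imwrite("sharp_mask.png", mask.astype(np.uint8))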

Related

What is the correct way to calculate the SNR of these images in MATLAB?

Currently I am trying to figure out the signal-to-noise ratio of a set of images as a way of gauging the performance of my deconvolution (filtering) algorithms. I have a set of images like the one below, which show the image before and after the algorithm:
Now, I have discovered quite a few ways of judging the performance. One of these is to use the formula for the SNR of an image, where the signal is the original image and the noise is the filtered image. Another method, described by this question, works out the SNR from a single image by itself. This way, I can compare the SNR values that I get for both images.
Therefore, my question is this: the resources on the internet are confusing, and I do not know the "correct" way of measuring the SNR of these images and using it as a performance metric.
It really depends on what you are trying to compare, and what you deem as "signal" and "noise". In your first method, you are effectively calculating the error (or difference) between image 1 and image 2, where you assume image 2 was tainted by noise but image 1 was not (this is also a sort of signal-to-distortion ratio). This measurement is therefore relative: it measures the performance of your method of transformation from original to target (or of the distortion technique), but not the image itself. For example, a new type of encrypting filter generated image 2 from image 1, and you want to measure how different the images are in order to work out the performance of your filter.
In the second method, based on the link you posted, you are assuming that noise is present in both images but at different levels, and you are measuring it against each individual image -- in other words, you are measuring the standard deviation of each individual image, which is not relative. The second measurement is usually used to compare results generated from the same source, i.e. an experiment produces N images of the same object in a controlled environment and you want to measure, for example, the amount of noise present in the scene (you would use this method to work out the covariance of the noise, to enable you to control the experiment environment).
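To make the distinction concrete, here is a rough sketch of both measurements in Python/NumPy (the question is about MATLAB, but the formulas carry over directly; the single-image estimate below uses the mean-over-standard-deviation convention, which may differ from the one in the linked question):

    import numpy as np

    def snr_reference(original, filtered):
        """Method 1: the difference from a reference image is the 'noise'."""
        original = original.astype(np.float64)
        noise = original - filtered.astype(np.float64)
        return 10.0 * np.log10(np.mean(original ** 2) / np.mean(noise ** 2))  # dB

    def snr_single(image):
        """Method 2: a single-image estimate, here mean over standard deviation."""
        image = image.astype(np.float64)
        return np.mean(image) / np.std(image)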

Methods to detect a common area in a series of images

Suppose we have a series of digital images D1,...,Dn. For certainty, we consider these images to be of the same size. The problem is to find the largest common area -- the largest area that all of the input images share.
I suppose that if we have an algorithm to detect such an area in two input images A and B, we can generalize it to the case of n images.
The main difficulty in this problem is that this area in image A doesn't have to be identical, pixel for pixel, to the same area in image B. For example, we take two shots of a building using a phone camera. Our hand shook and the second picture turned out slightly displaced. The noise that's present in every picture adds uncertainty as well.
What algorithms should I look into to solve this kind of problem?
Simple but approximate solution, to begin with.
Rescale the images so that the amplitude of the shaking becomes smaller than a pixel.
Compute the standard deviation of every pixel across all images.
Consider the pixels with a deviation below a threshold.
As a second approximation, you can use the image at the full resolution as a template, but only in the areas obtained as above. Then register the other images with respect to it. The registration model can be translational only, but allowing rotation would be better.
Unfortunately, registration isn't an easy task. For your small displacements, Lucas-Kanade or Shi-Tomasi might be appropriate.
After registration, you can redo the deviation test to get better delineated regions.
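A rough sketch of that deviation test in Python with NumPy and OpenCV; the scale factor and threshold are assumptions you would tune so that the shaking stays below one downscaled pixel:

    import cv2
    import numpy as np

    def common_area_mask(images, scale=0.25, threshold=10.0):
        """images: equal-sized grayscale arrays. Returns a boolean mask of
        pixels whose deviation across the downscaled stack is small."""
        small = [cv2.resize(im, None, fx=scale, fy=scale,
                            interpolation=cv2.INTER_AREA).astype(np.float64)
                 for im in images]
        deviation = np.stack(small).std(axis=0)  # per-pixel std across the stack
        return deviation < threshold             # low deviation = likely common area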
I would use a method like SURF (or SIFT): you compute the SURF features on each image and see if there are common interest points. The common interest points will be the zone you are looking for. Thanks to SURF, the area does not have to be in the same place or at the same scale.
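For illustration, a minimal interest-point sketch with OpenCV; SIFT is used here because SURF requires the non-free contrib build, and the 0.75 ratio is the usual Lowe's-test rule of thumb:

    import cv2

    img_a = cv2.imread("a.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical inputs
    img_b = cv2.imread("b.jpg", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    # Match descriptors and keep only clearly good matches (Lowe's ratio test).
    matches = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    # The matched keypoint locations outline the shared zone in each image.
    points_a = [kp_a[m.queryIdx].pt for m in good]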

Algorithm to detect the change in visible luminosity in an image

I want a formula to detect/calculate the change in visible luminosity in a part of the image, provided I can calculate the RGB, HSV, HSL and CMYK color spaces.
E.g.: in the picture above, the left side of the image is brighter than the right side, which is beneath a shade.
I have had a little think about this, and done some experiments in Photoshop, though you could just as well use ImageMagick, which is free. Here is what I came up with.
Step 1 - Convert to Lab mode and discard the a and b channels since the Lightness channel holds most of the brightness information which, ultimately, is what we are looking for.
Step 2 - Stretch the contrast of the remaining L channel (using Levels) to accentuate the variation.
Step 3 - Perform a Gaussian blur on the image to remove local, high frequency variations in the image. I think I used 10-15 pixels radius.
Step 4 - Turn on the Histogram window and take a single row marquee and watch the histogram change as different rows are selected.
Step 5 - Look out for a strongly bimodal histogram (two distinct peaks) to identify the illumination variations.
This is not a complete, general-purpose solution, but it may hold some pointers and cause people who know better to suggest improvements! Note that the method requires the image to have some areas of high uniformity, like the whitish horizontal bar across your input image. However, nearly any algorithm is going to have a hard time telling the difference between a sheet of white paper with a shadow of uneven light across it and the same sheet of paper with a grey sheet of paper laid on top of it...
In the images below, I have superimposed the histogram top right. In the first one, you can see the histogram is not narrow and bimodal because the dotted horizontal selection marquee is across the bar-code area of the image.
In the subsequent images, you can see a strong bimodal histogram because the dotted selection marquee is across a uniform area of image.
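If you want to automate the eyeballing, here is a rough Python/OpenCV sketch of steps 1-5; the blur sigma, bin count, and the crude "two separated peaks" test are all assumptions to tune:

    import cv2
    import numpy as np

    img = cv2.imread("input.jpg")                        # hypothetical input
    L = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)[:, :, 0]    # keep Lightness only
    L = cv2.normalize(L, None, 0, 255, cv2.NORM_MINMAX)  # stretch the contrast
    L = cv2.GaussianBlur(L, (0, 0), sigmaX=12)           # remove local variation

    for row in range(L.shape[0]):
        hist, _ = np.histogram(L[row], bins=32, range=(0, 255))
        # Occupied bins above half the peak, widely separated, stand in for
        # "strongly bimodal" as judged by eye in the Histogram window.
        strong = np.flatnonzero(hist > hist.max() * 0.5)
        if strong.size and strong.max() - strong.min() > 8:
            print(f"row {row}: possible illumination step")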
The first problem is in "visible luminosity". It may mean one of several things. This discussion should be a good start. (Yes, it has incomplete and contradictory answers, as well.)
Formula to determine brightness of RGB color
You should make sure you operate on the linear image, which does not have any gamma correction applied to it. AFAIK Photoshop does not degamma and regamma images during filtering, which may produce erroneous results. It all depends on how accurate you want the results to be. Photoshop wants things to look good, not be precise.
In principle you should first pick a formula to convert your RGB values to some luminosity value which fits your use. Then you have a single-channel image which you'll need to filter with a Gaussian filter, sliding average, or some other suitable filter. Unfortunately, this may require special tools as photoshop/gimp/etc. type programs tend to cut corners.
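As a sketch of that pipeline (the sRGB degamma constants and Rec. 709 luma weights are standard; the filter sigma is arbitrary):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def brightness_map(rgb):  # rgb: float array in [0, 1], shape HxWx3
        # Approximate sRGB degamma so we operate on linear intensities.
        linear = np.where(rgb <= 0.04045, rgb / 12.92,
                          ((rgb + 0.055) / 1.055) ** 2.4)
        # Rec. 709 weights reduce the three channels to one luminosity value.
        lum = (0.2126 * linear[..., 0] + 0.7152 * linear[..., 1]
               + 0.0722 * linear[..., 2])
        return gaussian_filter(lum, sigma=15)  # sigma chosen arbitrarily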
But then there is one thing you would probably like to consider. If you have an even brightness gradient across an image, the eye is happy and does not perceive it. Rather large differences go unnoticed if the contrast in the image is constant across the image. Unfortunately, the definition of contrast is not very meaningful if you do not know at least something about the content of the image. (If you have scanned/photographed documents, then the contrast is clearly between ink and paper.) In your sample image the brightness changes quite abruptly, which makes the change visible.
Just to show you how strange the human vision is in determining "brightness", see the classical checker shadow illusion:
http://en.wikipedia.org/wiki/Checker_shadow_illusion
So, my impression is that talking about the conversion formulae is probably the second or third step in the process of finding suitable image processing methods. The first step would be to try to define the problem in more detail. What do you want to accomplish?

How do I deal with brightness rescaling after FFT'ing and spatially filtering images?

Louise here. I've recently started experimenting with Fourier transforming images and spatially filtering them. For example, here's one of a fireplace, high-pass filtered to remove everything below ten cycles per image:
http://imgur.com/ECa306n,NBQtMsK,Ngo8eEY#0 - first image (sorry, I can't post images on Stack Overflow because I haven't got enough reputation).
As we can see, the image is very dark. However, if we rescale it to [0,1] we get
http://imgur.com/ECa306n,NBQtMsK,Ngo8eEY#0 - second image
and if we raise everything in the image to the power of -0.5 (we can't raise to powers greater than 1, as the image data is all between 0 and 1 and would thus get smaller), we get this:
same link - third image
My question is: how should we deal with reductions in dynamic range due to high/low-pass filtering? I've seen lots of filtered images online, and they all seemed to have similar brightness profiles to the original image, without manipulation.
Should I be leaving the centre pixel of the frequency domain (the DC value) alone, and not removing it when low-pass filtering?
Is there a commonplace transform (like histogram equalisation) that I should be using after the filtering?
Or should I just interpret the brightness reduction as normal, because some of the information in the image has been removed?
Thanks for the advice :)
Best,
Louise
I agree with Connor: the best way to preserve brightness is to keep the origin (DC) value unchanged. It is common practice. This way you will get an image similar to your second one, because you do not change the average grey level of the image. Removing it with high-pass filtering will set its value to 0, and some rescaling is needed afterwards.
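A small NumPy sketch of that practice, assuming a grayscale image and the ten-cycle cutoff from the question:

    import numpy as np

    def highpass_keep_dc(image, cutoff=10):
        F = np.fft.fftshift(np.fft.fft2(image))
        h, w = image.shape
        yy, xx = np.mgrid[0:h, 0:w]
        dist = np.hypot(yy - h / 2, xx - w / 2)  # distance from the origin
        mask = dist >= cutoff                    # pass only high frequencies
        mask[h // 2, w // 2] = True              # but leave the DC value alone
        return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))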

Get dominant colors from image discarding the background

What is the best (result, not performance) algorithm to fetch dominant colors from an image. The algorithm should discard the background of the image.
I know I can build an array of colors and count how many times each appears in the image, but I need a way to determine what is the background and what is the foreground, and keep only the latter (the foreground) in mind while reading the dominant colors.
The problem is very hard, especially for gradient backgrounds or backgrounds with patterns (i.e. not plain).
Isolating the foreground from the background is beyond the scope of this particular answer, but...
I've found that applying a pixelation filter to an image will draw out a really good set of 'average' colours.
Before
After
I sometimes use this approach to derive a palette of colours with a particular mood. I first find a photograph with the general tones I'm after, pixelate, and then sample from the resulting image.
(Thanks to Pietro De Grandi for the image, found on unsplash.com)
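In case it's useful, a tiny Pillow sketch of the pixelation trick; the 16x16 block count is just a guess to play with:

    from PIL import Image

    img = Image.open("photo.jpg")                 # hypothetical input
    small = img.resize((16, 16), Image.LANCZOS)   # each pixel is a block average
    pixelated = small.resize(img.size, Image.NEAREST)
    palette = small.getcolors(16 * 16)            # list of (count, colour) pairs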
The Colour Summarizer is a pretty sweet spot for info on this subject, not to mention its seemingly free XML web API that will produce descriptive colour statistics for an image of your choosing, reporting back the following, formatted with swatches in HTML or as XML...
what is the average color hue, saturation and value in my image?
what is the RGB colour that is most representative of the image?
what do the RGB and HSV histograms look like?
what is the image's human readable colour description (e.g. dark pure blue)?
The purpose of this utility is to generate metadata that summarizes an image's colour characteristics for inclusion in an image database, such as Flickr. In particular this tool is being used to generate metadata for Flickr's Color Fields group.
In my experience, though, this tool still misses the "human-readable" / obvious "main" colour a LOT of the time. Silly machines!
I would say this problem is closer to "impossible" than "very hard". The only approach to it that I can think of would be to make the assumption that the background of an image is likely to consist of solid blocks of similar colors, while the foreground is likely to consist of smaller blocks of dissimilar colors.
If this assumption is generally true, then you could scan through the whole image and weight pixels according to how similar or dissimilar they are to neighboring pixels. In other words, if a pixel's neighbors (within some arbitrary radius, perhaps) were all similar colors, you would not incorporate that pixel into the overall estimate. If the neighbors tend to be very different colors, you would weight the pixel heavily, perhaps in proportion to the degree of difference.
This may not work perfectly, but it would definitely at least tend to exclude large swaths of similar colors.
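A rough sketch of that weighting in Python with SciPy; the 9x9 neighbourhood is arbitrary, and a weighted average colour stands in here for a full weighted histogram:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def weighted_dominant_color(rgb):  # rgb: float HxWx3 array in [0, 255]
        # Local mean of each channel over an arbitrary 9x9 neighbourhood.
        local_mean = np.stack([uniform_filter(rgb[..., c], size=9)
                               for c in range(3)], axis=-1)
        # Pixels that differ a lot from their neighbours get high weight, so
        # large swaths of similar colours contribute almost nothing.
        weight = np.linalg.norm(rgb - local_mean, axis=-1).ravel()
        flat = rgb.reshape(-1, 3)
        return (flat * weight[:, None]).sum(axis=0) / (weight.sum() + 1e-9)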
As far as my knowledge of image processing algorithms extends, there is no certain way to get the "foreground"; it is only possible to get the borders between objects. You'll probably have to make do with an average, or your proposed array-count method. In that case, you'll want to give colours with higher saturation a higher "score", as they're much more prominent.
