An image is given to us that has been corrupted by:
Gaussian blur
Gaussian noise
Motion blur
in that order. The parameters of all the above (filter size, variance, SNR, etc) are known to us.
How can we restore the image?
I have tried to compute the aggregate degradation function by convolving the above and then used the Wiener filter to restore, but the attempts have failed so far, since the blur still remains.
Could anyone please shed some light?
For Gaussian and motion blur, it is a matter of deducing the convolution kernel. Once it is known, deconvolution can be done in Fourier space. The Fourier transform of the image, divided by the Fourier transform of the kernel, gives the Fourier transform of a (hopefully) improved image.
Gaussians transform into other Gaussians, so there is no problem with divide by zero. But Gaussians do fall off rather fast, as exp(-x^2), so you'd be dividing by small numbers and obtaining large, wacky high-frequency amplitudes. So, some sort of constant bias or other way of keeping the FT of the kernel from getting too small must be applied. That's where the Wiener filter comes in. The bias is usually chosen in relation to random noise levels or quantization.
For motion blur, a typical case is when the clean image is convolved with a short line segment. Unfortunately, sharply cut-off line segments have plenty of zeros. Again, Wiener filter to the rescue.
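If it helps, here is a minimal sketch of that Fourier-domain division with a noise bias (a Wiener-style deconvolution). It assumes the blur kernel is known, padded to the image size, and centered, and that `nsr` is a guessed noise-to-signal power ratio - all of these are assumptions, not details from the question:

```python
import numpy as np

# Wiener deconvolution sketch: img is a 2-D grayscale float array, psf is the
# known blur kernel padded to the same shape and centered, nsr is an assumed
# noise-to-signal power ratio that keeps the denominator away from zero.
def wiener_deconvolve(img, psf, nsr=0.01):
    H = np.fft.fft2(np.fft.ifftshift(psf))   # transfer function of the blur
    G = np.fft.fft2(img)                     # spectrum of the degraded image
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F_hat))
```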
Additive Gaussian noise cannot be removed, but it can be averaged out. The simplest, quickest way is to blur the image with a Gaussian, box, or other filter. The biggest problem with that: you end up with a blurred image! Median filters are somewhat better at preserving edges and details, provided those details are not too small. There are many noise reduction techniques out there.
Sometimes noise reduction is easy for certain types of images. For Cassini imaging work, most image features were either high-contrast hard edges (planet edges, craters) or softly varying (cloud details in atmospheres), so I used an edge detector, fattened (dilated) its output, blurred it, and used that as a mask to protect parts of the image from a small-radius blur filter. In effect, this applies different filters to different parts of the image.
There's a Signal Processing Stack Exchange site (in beta for now) which may have questions and answers about restoring corrupted images. https://dsp.stackexchange.com/questions
Related
How would one go about creating an algorithm that detects the unblurred parts of a picture? For example, it would look at this picture:
and realize the non-blurred portion is:
I saw over here how to measure the blur of the whole picture. For this problem, should I just set a threshold on the maximal absolute second derivative of the pixels, and consider whichever pixels exceed it to be part of a non-blurred region?
A simple solution is to detect high frequency content.
If there's no high frequency content in an area, it may be because it is blurred.
How to detect areas with no high frequency content? You can do it in the frequency domain (for example, with DCT), or you can do it in the spatial domain.
Of the two, I'd recommend the spatial domain method.
You'll need some kind of high-pass filter. The easiest method is to blur the image (for example, with a Gaussian filter), subtract it from the original, then convert to grayscale:
Blurred:
Subtracted:
As you can see, all the blurred pixels become dark and the high-frequency content is bright. Now you may want to blur this image and apply a threshold, to get this:
Note: this process was done by hand with GIMP. Your algorithm can easily follow this, but needs some parameters specified (like the blur radius and threshold value).
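A rough sketch of the same pipeline in code, assuming OpenCV; the file name, the two blur sigmas, and the threshold value are guesses that would need tuning per image:

```python
import cv2

# High-pass by "original minus blurred", then smooth the response and threshold
# it to separate sharp regions (bright) from blurred regions (dark).
img = cv2.imread("photo.jpg")
blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=5)
highpass = cv2.absdiff(img, blurred)
gray = cv2.cvtColor(highpass, cv2.COLOR_BGR2GRAY)

response = cv2.GaussianBlur(gray, (0, 0), sigmaX=10)
_, mask = cv2.threshold(response, 10, 255, cv2.THRESH_BINARY)
cv2.imwrite("sharp_regions.png", mask)
```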
I have an image which may contain some blobs. The blobs can be any size, and some will yield a very strong signal, while others are very weak. In this question I will focus on the weak ones because they are the difficult ones to detect.
Here is an example with 4 blobs.
The blob at (480, 180) is the most difficult one to detect. Running a Gaussian filter followed by an opening operation increases the contrast a bit, but not a lot:
The tricky part of this problem is that the natural noise in the background will result in (many) pixels which have a stronger signal than the blob I want to detect. What makes the blob a blob is that it's either a large area with a modest average increase in intensity, or a small area with a very strong increase in intensity (the latter is not relevant here).
How can I include this spatial information in order to detect my blob?
It is obvious that I first need to filter the image with a Gaussian and/or median filter in order to incorporate the nearby region of each pixel into each single pixel value. However, no amount of blurring is enough to make it easy to segment the blobs from the background.
EDIT: Regarding thresholding: Thresholding is very tempting, but also problematic by itself. I do not have a region of "pure background", and the larger a blob is, the weaker its signal can be while still being detectable.
I should also note that the typical image will not have any blobs at all, but will just be pure background.
You could try an h-minima transform. It removes any minima whose depth is less than h and raises the floor of all other troughs by h. It's defined as the morphological reconstruction by erosion of the image increased by the height h. Here are the results with h = 35:
It should be a lot easier to work with. Like thresholding, it still needs an input value, but it is more robust: underestimating h by a relatively large amount only brings you back closer to the original problem image instead of failing completely.
You could try to characterize the background noise to get an estimate of h, assuming that whatever your application is produces a relatively constant amount of it.
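Here is a minimal sketch of the h-minima transform using scikit-image's grayscale reconstruction; the file name and the h = 35 value (rescaled for a float image) are placeholders, not details from the question:

```python
import numpy as np
from skimage import io
from skimage.morphology import reconstruction

# Load the image as float in [0, 1]; h = 35 on the 8-bit scale becomes 35/255.
img = io.imread("blobs.png", as_gray=True).astype(float)
h = 35 / 255.0

# h-minima: reconstruction by erosion of (img + h) constrained by img.
# Minima shallower than h vanish; deeper minima have their floor raised by h.
suppressed = reconstruction(img + h, img, method='erosion')
```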
Note the one blue dot between the two large bottom blobs; further processing is still needed. You could try continuing with the morphology. Something that I have found to work in some 'ink-blot' segmentation cases like this is running through every connected component, calculating their convex hulls, and finally taking the union of all the convex hulls in the image. It usually makes further morphological operations much easier and provides a good estimate for the label.
In my experience, if you can see your Gaussian filter size (those little circles), then your filter width is too small. Although it's computationally expensive, try bumping up the radius of your Gaussian; it should keep improving your results until its radius matches the radius of the smallest object you are trying to find.
Following that (heavy Gaussian), I would do a peak search across the whole image. Cut out any peaks that are too low and/or have too little contrast with the nearest valley/background.
Don't be afraid to split this into two isolated processing pipelines: one filtration-and-extraction pass for low-contrast, spread-out blobs, and a completely different one to isolate high-contrast spikes (which are much easier to find). That said, a high-contrast spike "should" survive even a fairly aggressive filter. Another thing to keep in mind is iterative subtraction: if some blobs can be found easily from the start, pull them out of the image and then do a contrast stretch (but be careful, since with too much stretching you can make the image look like whatever you want).
Maybe try an iterative approach using thresholding and edge detection:
Start with a very high threshold (say 90% of the signal), then run a Canny filter (or any binary edge filter you like) on the thresholded image. Count and store the number of edge pixels generated.
Proceed to repeat this step for lower and lower thresholds. At a certain point you are going to see a massive spike in edges detected (i.e. your textured background). Then pull the threshold back up a little and run closing and flood fill on the resulting edge image.
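A rough sketch of that loop, assuming an 8-bit grayscale input and interpreting "90% of the signal" as the 90th intensity percentile (both are my assumptions, not part of the suggestion above):

```python
import cv2
import numpy as np

img = cv2.imread("blobs.png", cv2.IMREAD_GRAYSCALE)
edge_counts = []
for percent in range(90, 10, -5):
    level = float(np.percentile(img, percent))
    _, binary = cv2.threshold(img, level, 255, cv2.THRESH_BINARY)
    edges = cv2.Canny(binary, 100, 200)
    edge_counts.append((percent, int(np.count_nonzero(edges))))

# Look for the first large jump in edge pixels (the textured background kicking
# in), step the threshold back up one notch, then apply morphological closing
# and flood fill to that edge mask.
print(edge_counts)
```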
I have been thinking about this for quite some time, but have never really performed a detailed analysis of it. Does foreground segmentation using the GrabCut [1] algorithm depend on the size of the input image? Intuitively, it appears to me that since GrabCut is based on color models, color distributions should not change as the size of the image changes, but aliasing artifacts in smaller images might play a role.
Any thoughts or existing experiments on how image size affects segmentation with GrabCut would be highly appreciated.
Thanks
[1] C. Rother, V. Kolmogorov, and A. Blake, GrabCut: Interactive foreground extraction using iterated graph cuts, ACM Trans. Graph., vol. 23, pp. 309–314, 2004.
Size matters.
The objective function of GrabCut balances two terms:
The unary term that measures the per-pixel fit to the foreground/background color model.
The smoothness term (pair-wise term) that measures the "complexity" of the segmentation boundary.
The first term (unary) scales with the area of the foreground while the second (smoothness) scales with the perimeter of the foreground.
So, if you scale your image by a factor of 2, you increase the area by a factor of 4, while the perimeter only scales by roughly a factor of 2.
Therefore, if you tuned (or learned) the parameters of the energy function for a specific image size / scale, these parameters may not work for you in different image sizes.
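To make the scaling argument concrete, here is a sketch of the usual form of such an energy (not the exact notation of the GrabCut paper): the unary term sums over pixels and the pairwise term over neighbouring pairs along the segmentation boundary, so scaling the image by a factor s multiplies the first by roughly s^2 and the second by roughly s, which shifts the balance point of lambda.

```latex
E(\alpha) = \underbrace{\sum_{i \in \text{pixels}} U(\alpha_i, z_i)}_{\propto\ \text{area}\ \sim\ s^2}
          \;+\; \lambda \, \underbrace{\sum_{(i,j) \in \mathcal{N}} V(\alpha_i, \alpha_j, z_i, z_j)}_{\propto\ \text{boundary length}\ \sim\ s}
```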
PS
Did you know that Office 2010 "foreground selection tool" is based on GrabCut algorithm?
Here's a PDF of the GrabCut paper, courtesy of Microsoft Research.
The two main effects of image size will be run time and the scale of details in the image which may be considered significant. Of these two, run time is the one which will bite you with GrabCut - graph cutting methods are already rather slow, and GrabCut uses them iteratively.
It's very common to start by downsampling the image to a smaller resolution, often in combination with a low-pass filter (e.g. sampling the source image through a Gaussian kernel). This significantly reduces the n over which the algorithm runs, while also reducing the effect of small details and noise on the result.
You can also use masking to restrict processing to only specific portions of the image. You're already getting some of this in GrabCut as the initial "grab" or selection stage, and again later during the brush-based refinement stage. This stage also gives you some implicit information about scale, i.e. the feature of interest is probably filling most of the selection region.
Recommendation:
Display the image at whatever scale is convenient and downsample the selected region to roughly the n = 100k to 200k range per their example. If you need to improve the result quality, use the result of the initial stage as the starting point for a following iteration at higher resolution.
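A minimal sketch of that recommendation using OpenCV's GrabCut, assuming a hypothetical input file and a placeholder selection rectangle (in practice the rectangle comes from the user's "grab"):

```python
import cv2
import numpy as np

img = cv2.imread("photo.jpg")

# Downsample so the working image has roughly 150k pixels; INTER_AREA acts as
# a box low-pass filter during the resize.
scale = (150_000 / (img.shape[0] * img.shape[1])) ** 0.5
small = cv2.resize(img, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)

rect = (10, 10, small.shape[1] - 20, small.shape[0] - 20)  # placeholder "grab" box
mask = np.zeros(small.shape[:2], np.uint8)
bgd = np.zeros((1, 65), np.float64)
fgd = np.zeros((1, 65), np.float64)
cv2.grabCut(small, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)

# Keep sure/probable foreground, then upsample the mask to the original size.
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
fg_full = cv2.resize(fg, (img.shape[1], img.shape[0]), interpolation=cv2.INTER_NEAREST)
```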
I'm writing an application that averages/combines/stacks a series of exposures. This is commonly used to reduce noise in the resultant image.
However, it seems that to optimize the average/stack, the exposures are usually normalized first. It seems that this process assigns weights to each of the exposures and then proceeds to combine them. I am guessing that the process computes the overall intensity of each image, as the purpose is to match the intensities of all the images in the stack.
My question is: how can I implement an algorithm that will allow me to normalize a series of images? I guess the question can be generalized by instead asking, "How can I normalize a series of readings?"
An outline in my head appears as follows:
Compute the average of a reference image.
Divide the average of each frame by the average of the reference frame.
The result of each division is the weight for each frame.
Scale/Multiply each pixel in a frame by the weight found for that particular frame.
Does this seem to make sense to anyone? I have tried to google for the past hour but didn't find anything. I also looked through the indexes of various image processing books on Amazon, but that didn't turn up anything either.
Each integration consists of signal and assorted noise - some is time-independent (e.g. bias or CCD readout noise), some time-dependent (e.g. dark current), and some is random (shot noise). The aim is to remove the noise and leave the signal. So you would first subtract the 'fixed' sources using dark frames (which will include dark current, readout and bias), leaving signal plus shot noise. Signal scales as flux times exposure time; shot noise scales as the square root of the signal:
http://en.wikipedia.org/wiki/Shot_noise
so overall your signal/noise scales as the square root of the integration time (assuming your integrations are short enough that they are not saturated). So by adding frames you are simply increasing the exposure time, and hence the signal/noise ratio. You don't need to normalize first.
To complicate matters, transient non-Gaussian noise is also present (e.g. cosmic ray hits). There are many techniques for dealing with these, but a common one is 'sigma-clipping', where you have an extra pass to calculate the mean and standard deviation of each pixel, and then reject outliers that are many standard deviations from the mean. Real signal will show Gaussian fluctuations around the mean value, whereas transients will show a large deviation in one frame of the stack. Maybe that's what you are thinking of?
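A minimal sketch of per-pixel sigma-clipping over a stack, assuming the frames are already dark-subtracted and aligned, and are stored as a 3-D NumPy array (these are assumptions for the sake of the example):

```python
import numpy as np

def sigma_clip_stack(frames, n_sigma=3.0):
    """frames: array of shape (n_frames, height, width)."""
    mean = frames.mean(axis=0)
    std = frames.std(axis=0)
    # Reject samples more than n_sigma standard deviations from the per-pixel
    # mean (cosmic-ray hits and other transients), then average what survives.
    clipped = np.where(np.abs(frames - mean) <= n_sigma * std, frames, np.nan)
    return np.nanmean(clipped, axis=0)
```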
I need to create fingerprints of many images (about 100,000 existing, 1,000 new per day, RGB, JPEG, max size 800x800) to compare every image to every other image very quickly. I can't use binary comparison methods, because images which are only nearly similar should also be recognized.
Best would be an existing library, but also some hints to existing algorithms would help me a lot.
Normal hashing or CRC calculation algorithms do not work well with image data. The dimensional nature of the information must be taken into account.
If you need extremely robust fingerprinting, such that affine transformations (scaling, rotation, translation, flipping) are accounted for, you can use a Radon transformation on the image source to produce a normative mapping of the image data - store this with each image and then compare just the fingerprints. This is a complex algorithm and not for the faint of heart.
A few simple solutions are possible:
Create a luminosity histogram for the image as a fingerprint
Create scaled down versions of each image as a fingerprint
Combine techniques (1) and (2) into a hybrid approach for improved comparison quality
A luminosity histogram (especially one that is separated into RGB components) is a reasonable fingerprint for an image - and can be implemented quite efficiently. Subtracting one histogram from another produces a new histogram which you can process to decide how similar two images are. Histograms handle affine transformations quite well, because they only evaluate the distribution and occurrence of luminosity/color information. If you quantize each color component's luminosity information down to an 8-bit value, 768 bytes of storage are sufficient for the fingerprint of an image of almost any reasonable size. Luminosity histograms produce false negatives when the color information in an image is manipulated: if you apply transformations like contrast/brightness adjustments, posterization, or color shifting, the luminosity information changes. False positives are also possible with certain types of images, such as landscapes and images where a single color dominates the others.
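A minimal sketch of such a per-channel histogram fingerprint, assuming Pillow and NumPy are available; 256 bins per RGB channel gives the 768-value fingerprint mentioned above:

```python
import numpy as np
from PIL import Image

def histogram_fingerprint(path):
    pixels = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
    # One 256-bin histogram per channel, concatenated and normalized so that
    # image size does not affect the comparison.
    hist = [np.histogram(pixels[:, c], bins=256, range=(0, 256))[0] for c in range(3)]
    fp = np.concatenate(hist).astype(float)
    return fp / fp.sum()

def histogram_distance(fp_a, fp_b):
    return float(np.abs(fp_a - fp_b).sum())   # smaller means more similar
```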
Using scaled-down images is another way to reduce the information density of the image to a level that is easier to compare. Reductions below 10% of the original image size generally lose too much information to be of use - so an 800x800 pixel image can be scaled down to 80x80 and still provide enough information for decent fingerprinting. Unlike histogram data, you have to perform anisotropic scaling of the image data when the source resolutions have varying aspect ratios. In other words, reducing a 300x800 image to an 80x80 thumbnail deforms the image, so that comparing it with a (very similar) 300x500 image will cause false negatives. Thumbnail fingerprints also often produce false negatives when affine transformations are involved: if you flip or rotate an image, its thumbnail will be quite different from the original and may result in a false negative.
Combining both techniques is a reasonable way to hedge your bets and reduce the occurrence of both false positives and false negatives.
There is a much less ad-hoc approach than the scaled down image variants that have been proposed here that retains their general flavor, but which gives a much more rigorous mathematical basis for what is going on.
Take a Haar wavelet of the image. Basically the Haar wavelet is the succession of differences from the lower resolution images to each higher resolution image, but weighted by how deep you are in the 'tree' of mipmaps. The calculation is straightforward. Then once you have the Haar wavelet appropriately weighted, throw away all but the k largest coefficients (in terms of absolute value), normalize the vector and save it.
If you take the dot product of two of those normalized vectors it gives you a measure of similarity with 1 being nearly identical. I posted more information over here.
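A rough sketch of that idea using PyWavelets, without the depth weighting described above and with a guessed k; it assumes the images have first been resized to a common resolution so the vectors are comparable:

```python
import numpy as np
import pywt  # PyWavelets

def haar_fingerprint(img, k=128):
    """img: 2-D grayscale float array."""
    coeffs = pywt.wavedec2(img, 'haar')
    flat, _ = pywt.coeffs_to_array(coeffs)      # flatten the coefficient pyramid
    flat = flat.ravel()
    keep = np.argsort(np.abs(flat))[-k:]        # indices of the k largest magnitudes
    fp = np.zeros_like(flat)
    fp[keep] = flat[keep]
    return fp / np.linalg.norm(fp)              # unit length so dot product ~ similarity

def similarity(fp_a, fp_b):
    return float(np.dot(fp_a, fp_b))            # close to 1 means nearly identical
```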
You should definitely take a look at phash.
For image comparison, there is this PHP project:
https://github.com/kennethrapp/phasher
And my little javascript clone:
https://redaktor.me/phasher/demo_js/index.html
Unfortunately this is "bitcount"-based but will recognize rotated images.
Another approach in JavaScript is to build a luminosity histogram from the image with the help of canvas. You can visualize a polygon histogram on the canvas and compare that polygon against your database (e.g. MySQL spatial ...)
A long time ago I worked on a system that had some similar characteristics, and this is an approximation of the algorithm we followed:
Divide the picture into zones. In our case we were dealing with 4:3 resolution video, so we used 12 zones. Doing this takes the resolution of the source images out of the picture.
For each zone, calculate an overall color - the average of all pixels in the zone
For the entire image, calculate an overall color - the average of all zones
So for each image, you're storing n + 1 integer values, where n is the number of zones you're tracking.
For comparisons, you also need to look at each color channel individually.
For the overall image, compare the color channels for the overall colors to see if they are within a certain threshold - say, 10%
If the images are within the threshold, next compare each zone. If all zones also are within the threshold, the images are a strong enough match that you can at least flag them for further comparison.
This lets you quickly discard images that are not matches; you can also use more zones and/or apply the algorithm recursively to get stronger match confidence.
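A rough sketch of the zone-average fingerprint and threshold comparison described above, assuming a 4x3 grid (12 zones) and Pillow for loading; the 10% tolerance is the example value from the answer:

```python
import numpy as np
from PIL import Image

def zone_fingerprint(path, cols=4, rows=3):
    img = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    h, w = img.shape[:2]
    zones = []
    for r in range(rows):
        for c in range(cols):
            zone = img[r * h // rows:(r + 1) * h // rows,
                       c * w // cols:(c + 1) * w // cols]
            zones.append(zone.reshape(-1, 3).mean(axis=0))   # average RGB per zone
    zones = np.array(zones)
    return zones, zones.mean(axis=0)                         # per-zone + overall color

def roughly_matches(fp_a, fp_b, tolerance=0.10):
    zones_a, overall_a = fp_a
    zones_b, overall_b = fp_b
    # Compare the overall colors first (cheap reject), then every zone,
    # channel by channel, against the same relative threshold.
    if np.any(np.abs(overall_a - overall_b) > tolerance * 255):
        return False
    return not np.any(np.abs(zones_a - zones_b) > tolerance * 255)
```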
Similar to Ic's answer - you might try comparing the images at multiple resolutions. So each image gets saved as 1x1, 2x2, 4x4 ... 800x800. If the lowest resolution doesn't match (subject to a threshold), you can immediately reject it. If it does match, you can compare them at the next higher resolution, and so on.
Also - if the images share any similar structure, such as medical images, you might be able to extract that structure into a description that is easier/faster to compare.
As of 2015 (back to the future... on this 2009 question, which is now high-ranked in Google), image similarity can be computed using deep learning techniques. The family of algorithms known as autoencoders can create a vector representation which is searchable for similarity. There is a demo here.
One way you can do this is to resize the image and drop the resolution significantly (to 200x200 maybe?), storing a smaller (pixel-averaged) version for doing the comparison. Then define a tolerance threshold and compare each pixel. If the RGB of all pixels are within the tolerance, you've got a match.
Your initial run through is O(n^2) but if you catalog all matches, each new image is just an O(n) algorithm to compare (you only have to compare it to each previously inserted image). It will eventually break down however as the list of images to compare becomes larger, but I think you're safe for a while.
After 400 days of running, you'll have 500,000 images, which means (discounting the time to resize the image down) 200(H)*200(W)*500,000(images)*3(RGB) = 60,000,000,000 comparisons. If every image is an exact match, you're going to be falling behind, but that's probably not going to be the case, right? Remember, you can discount an image as a match as soon as a single comparison falls outside your threshold.
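A minimal sketch of that approach, assuming Pillow, a 200x200 working size, and a guessed per-channel tolerance of 16 out of 255 (both values are placeholders):

```python
import numpy as np
from PIL import Image

def thumbprint(path, size=(200, 200)):
    # Downscaled copy stored and used only for comparison.
    return np.asarray(Image.open(path).convert("RGB").resize(size), dtype=np.int16)

def is_match(a, b, tolerance=16):
    # Match only if every channel of every pixel is within the tolerance.
    return bool(np.all(np.abs(a - b) <= tolerance))
```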
Do you literally want to compare every image against the others? What is the application? Maybe you just need some kind of indexing and retrieval of images based on certain descriptors? Then for example you can look at MPEG-7 standard for Multimedia Content Description Interface. Then you could compare the different image descriptors, which will be not that accurate but much faster.
So you want to do "fingerprint matching"; that's pretty different from "image matching". Fingerprint analysis has been studied in depth over the past 20 years, and several interesting algorithms have been developed to ensure the right detection rate (with respect to the FAR and FRR measures - False Acceptance Rate and False Rejection Rate).
I suggest you look into the LFA (Local Feature Analysis) class of detection techniques, mostly built on minutiae inspection. Minutiae are specific characteristics of any fingerprint and have been classified into several classes. Mapping a raster image to a minutiae map is what most public authorities actually do to file criminals and terrorists.
See here for further references
For iPhone image comparison and image similarity development check out:
http://sites.google.com/site/imagecomparison/
To see it in action, check out eyeBuy Visual Search on the iTunes AppStore.
It seems that specialised image hashing algorithms are an area of active research, but perhaps a normal hash calculation of the image bytes would do the trick.
Are you seeking byte-identical images, or looking for images that are derived from the same source but may be in a different format or resolution (which strikes me as a rather hard problem)?