Linearly Normalizing Stack of Images (data?) Prior to Averaging? - algorithm

I'm writing an application that averages/combines/stacks a series of exposures. This is commonly used to reduce noise in the resultant image.
However, it seems that to optimize the average/stack, the exposures are usually normalized first. This process assigns weights to each of the exposures and then combines them. I am guessing that it computes the overall intensity of each image, since the purpose is to match the intensities of all the images in the stack.
My question is: how can I implement an algorithm that will normalize a series of images? I guess the question can be generalized by instead asking, "How can I normalize a series of readings?"
An outline in my head appears as follows:
Compute the average of a reference image.
Divide the average of each frame by the average of the reference frame.
The result of each division is the weight for each frame.
Scale/Multiply each pixel in a frame by the weight found for that particular frame.
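For what it's worth, here is a minimal NumPy sketch of that outline (the frame array shape and the choice of the first frame as reference are assumptions; note that to bring each frame to the reference level, the weight is conventionally the reference average divided by the frame average):

```python
import numpy as np

def normalize_stack(frames):
    """Scale each frame so its mean intensity matches the reference frame's mean.

    `frames` is assumed to be a 3-D array of shape (num_frames, height, width).
    The first frame is used as the reference (an arbitrary choice).
    """
    frames = np.asarray(frames, dtype=float)
    ref_mean = frames[0].mean()
    weights = ref_mean / frames.mean(axis=(1, 2))   # one weight per frame
    return frames * weights[:, None, None]

# Averaging the normalized stack:
# stacked = normalize_stack(frames).mean(axis=0)
```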
Does this seem to make sense to anyone? I have tried to Google for the past hour but didn't find anything. I also looked at the indexes of various image processing books on Amazon, but that didn't turn up anything either.

Each integration consists of signal plus assorted noise: some is time-independent (e.g. bias or CCD readout noise), some is time-dependent (e.g. dark current), and some is random (shot noise). The aim is to remove the noise and leave the signal. So you would first subtract the 'fixed' sources using dark frames (which include dark current, readout noise and bias), leaving signal plus shot noise. Signal scales as flux times exposure time, and shot noise as the square root of the signal
http://en.wikipedia.org/wiki/Shot_noise
so overall your signal/noise scales as the square root of the integration time (assuming your integrations are short enough that they are not saturated). So by adding frames you are simply increasing the exposure time, and hence the signal/noise ratio. You don't need to normalize first.
To complicate matters, transient non-Gaussian noise is also present (e.g. cosmic ray hits). There are many techniques for dealing with these, but a common one is 'sigma-clipping', where you have an extra pass to calculate the mean and standard deviation of each pixel, and then reject outliers that are many standard deviations from the mean. Real signal will show Gaussian fluctuations around the mean value, whereas transients will show a large deviation in one frame of the stack. Maybe that's what you are thinking of?
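A minimal NumPy sketch of a sigma-clipped mean stack, assuming the frames are already dark-subtracted and registered (the 3-sigma cutoff is just a common default):

```python
import numpy as np

def sigma_clipped_stack(frames, n_sigma=3.0):
    """Average a stack of frames per pixel, rejecting outliers beyond n_sigma.

    `frames` is assumed to have shape (num_frames, height, width).
    """
    frames = np.asarray(frames, dtype=float)
    mean = frames.mean(axis=0)
    std = frames.std(axis=0)
    # Mask samples that deviate too far from the per-pixel mean (e.g. cosmic rays).
    keep = np.abs(frames - mean) <= n_sigma * std
    clipped = np.where(keep, frames, np.nan)
    return np.nanmean(clipped, axis=0)
```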

Related

What algorithm can find noise in a signal?

In class, the lecturer asked us a question: during a concert, the conductor hears a false note (noise). Translating this into signal fundamentals, how does the conductor detect this noise?
My guess is that it may be related to the Fourier transform, but I'm not sure I'm even close to the answer.
Check out a spectrogram. It's a 3D representation of the frequency domain over time.
As a general routine:
divide your time-domain recording of a musical piece into suitably sized time segments, small enough that they can capture the shortest musical note you want to represent (preferably smaller than this amount as you want more granular measurements of when the note starts/stops).
take the Fourier transform of each time segment, and represent this information in a spectrogram (X-axis for time, Y-axis for frequency, and Z-axis (colour) for signal power).
do appropriate filtering on each time segment to keep only frequencies with significant signal power.
compare this against your sheet music. Sheet music is essentially a spectrogram, telling you which notes (frequencies) should be played at which times (using the BPM or time signature of the music). If a note is present in the spectrogram but not in the sheet music, it's spurious or accidental (or the result of a badly formed spectrogram).
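A minimal sketch of that routine using SciPy (the segment length, sample rate, and power threshold are placeholders to tune against your recording and sheet music):

```python
import numpy as np
from scipy.signal import spectrogram

def significant_notes(recording, sample_rate, segment_len=2048, power_frac=0.1):
    """Return, for each time segment, the frequencies carrying significant power."""
    freqs, times, power = spectrogram(recording, fs=sample_rate,
                                      nperseg=segment_len)
    notes = []
    for i, t in enumerate(times):
        column = power[:, i]
        threshold = power_frac * column.max()
        notes.append((t, freqs[column >= threshold]))
    return notes  # compare these (time, frequencies) pairs against the sheet music
```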

Detecting weak blobs in a noise image

I have an image which may contain some blobs. The blobs can be any size, and some will yield a very strong signal, while others are very weak. In this question I will focus on the weak ones because they are the difficult ones to detect.
Here is an example with 4 blobs.
The blob at (480, 180) is the most difficult one to detect. Running a Gaussian filter followed by an opening operation increases the contrast a bit, but not a lot:
The tricky part of this problem is that the natural noise in the background will result in (many) pixels which have a stronger signal than the blob I want to detect. What makes the blob a blob is that it's either a large area with an average increase in intensity, or a small area with a very strong increase in intensity (not relevant here).
How can I include this spatial information in order to detect my blob?
It is obvious that I first need to filter the image with a Gaussian and/or median filter in order to incorporate the nearby region of each pixel into each single pixel value. However, no amount of blurring is enough to make it easy to segment the blobs from the background.
EDIT: Regarding thresholding: thresholding is very tempting, but also problematic by itself. I do not have a region of "pure background", and the larger a blob is, the weaker its signal can be while still being detectable.
I should also note that the typical image will not have any blobs at all, but will just be pure background.
You could try an h-minima transform. It removes any minima shallower than h and raises the height of all other troughs by h. It's defined as the morphological reconstruction by erosion of the image increased by the height h. Here are the results with h = 35:
It should be a lot easier to manipulate. It still needs an input parameter, like segmentation does. The difference is that this is more robust: underestimating h by a relatively large amount will only bring you back closer to the original problem image instead of failing completely.
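A minimal scikit-image sketch of the h-minima transform via grayscale reconstruction (the input image and the value h = 35 from above are assumed):

```python
from skimage.morphology import reconstruction

def h_minima_transform(image, h=35):
    """Suppress all minima shallower than h.

    Reconstruction by erosion of (image + h) over the original image; any
    regional minimum whose depth is less than h is filled in.
    """
    image = image.astype(float)
    return reconstruction(image + h, image, method='erosion')
```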
You could try to characterize the background noise to get an estimate, assuming that whatever your application is would have a relatively constant amount of it.
Note the single blue dot between the two large bottom blobs; even further processing is needed. You could try continuing with the morphology. Something that I have found to work in some 'ink-blot' segmentation cases like this is running through every connected component, calculating their convex hulls, and finally taking the union of all the convex hulls in the image. It usually makes further morphological operations much easier and provides a good estimate for the label.
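A sketch of that convex-hull idea with scikit-image, assuming you already have a binary segmentation from the previous step (the stand-in array below is synthetic):

```python
import numpy as np
from skimage.morphology import convex_hull_object

# Stand-in for the binary segmentation obtained after the h-minima step.
binary = np.zeros((64, 64), dtype=bool)
binary[10:20, 10:20] = True
binary[40:50, 35:55] = True

# convex_hull_object computes the convex hull of each connected component
# separately, so the result is the union of all the per-blob hulls.
hulls = convex_hull_object(binary)
```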
In my experience, if you can see your Gaussian filter size (those little circles), then your filter width is too small. Although it's terribly expensive, try bumping up the radius of your Gaussian; it should continue to improve your results until its radius matches the radius of the smallest object you are trying to find.
Following that (heavy Gaussian), I would do a peak search across the whole image. Cut out any peaks that are too low and/or have too little contrast to the nearest valley/background.
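A sketch of that pipeline with SciPy and scikit-image (the blob radius and peak-height threshold are application-dependent guesses):

```python
from scipy.ndimage import gaussian_filter
from skimage.feature import peak_local_max

def find_weak_blobs(image, blob_radius=10, min_peak_height=5.0):
    """Blur with a Gaussian roughly matched to the smallest blob, then search
    for peaks, rejecting those that are too low or too close together."""
    smoothed = gaussian_filter(image.astype(float), sigma=blob_radius)
    peaks = peak_local_max(smoothed, min_distance=blob_radius,
                           threshold_abs=min_peak_height)
    return peaks, smoothed
```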
Don't be afraid to split this into two isolated processing pipelines: one filtration and extraction pass for low-contrast, spread-out blobs, and a completely different one to isolate high-contrast spikes (much, much easier to find). That being said, a high-contrast spike "should" survive even a pretty aggressive filter. Another thing to keep in mind is iterative subtraction: if there are some blobs that can be found easily from the get-go, pull them out of the image and then do a stretch (but be careful, as you can make the image be whatever you want it to be with too much stretching).
Maybe try an iterative approach using thresholding and edge detection:
Start with a very high threshold (say 90% of the signal range), then run a Canny filter (or any binary edge filter you like) on the thresholded image. Count and store the number of edge pixels generated.
Proceed to repeat this step for lower and lower thresholds. At a certain point you are going to see a massive spike in the number of edges detected (i.e. your cool textured background). Then pull the threshold back a little higher and run closing and flood-fill on the resulting edge image.
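A rough sketch of that sweep with scikit-image (the threshold fractions are placeholders, judging what counts as a "massive spike" is left to you, and the closing/flood-fill step is omitted):

```python
import numpy as np
from skimage.feature import canny

def edge_counts_vs_threshold(image, fractions=np.linspace(0.9, 0.1, 17)):
    """Sweep the threshold from high to low, counting edge pixels at each step.

    A sharp jump in the count marks the point where the textured background
    starts being included; back the threshold off to just above that point.
    """
    lo, hi = float(image.min()), float(image.max())
    counts = []
    for f in fractions:
        mask = (image >= lo + f * (hi - lo)).astype(float)
        edges = canny(mask)
        counts.append((f, int(edges.sum())))
    return counts
```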

Does GrabCut Segmentation depend on the size of the image?

I have been thinking about this for quite some time, but never really performed a detailed analysis. Does foreground segmentation using the GrabCut [1] algorithm depend on the size of the input image? Intuitively, it appears to me that since GrabCut is based on color models, color distributions should not change as the size of the image changes, but aliasing artifacts in smaller images might play a role.
Any thoughts or existing experiments on the dependence of GrabCut segmentation on image size would be highly appreciated.
Thanks
[1] C. Rother, V. Kolmogorov, and A. Blake, GrabCut: Interactive foreground extraction using iterated graph cuts, ACM Trans. Graph., vol. 23, pp. 309–314, 2004.
Size matters.
The objective function of GrabCut balances two terms:
The unary term that measures the per-pixel fit to the foreground/background color model.
The smoothness term (pair-wise term) that measures the "complexity" of the segmentation boundary.
The first term (unary) scales with the area of the foreground while the second (smoothness) scales with the perimeter of the foreground.
So, if you scale your image up by a factor of 2, you increase the area by a factor of 4 while the perimeter only scales by roughly a factor of 2.
Therefore, if you tuned (or learned) the parameters of the energy function for a specific image size/scale, these parameters may not work for you at different image sizes.
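As a rough sketch of that scaling argument, using the standard two-term energy (the exact form of the terms depends on the implementation):

$$E(\alpha) = \sum_{p} U(\alpha_p, z_p) + \lambda \sum_{(p,q) \in \mathcal{N}} V(\alpha_p, \alpha_q, z_p, z_q)$$

Upscaling the image by a factor of 2 multiplies the number of terms in the unary sum by roughly 4 (area), but the number of boundary terms in the pairwise sum only by roughly 2 (perimeter), so a value of λ tuned at one scale is effectively off by about a factor of 2 at the other.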
PS
Did you know that the Office 2010 "foreground selection tool" is based on the GrabCut algorithm?
Here's a PDF of the GrabCut paper, courtesy of Microsoft Research.
The two main effects of image size will be run time and the scale of details in the image which may be considered significant. Of these two, run time is the one which will bite you with GrabCut - graph cutting methods are already rather slow, and GrabCut uses them iteratively.
It's very common to start by downsampling the image to a smaller resolution, often in combination with a low-pass filter (i.e. you sample the source image with a Gaussian kernel). This significantly reduces the n which the algorithm runs over while reducing the effect of small details and noise on the result.
You can also use masking to restrict processing to only specific portions of the image. You're already getting some of this in GrabCut as the initial "grab" or selection stage, and again later during the brush-based refinement stage. This stage also gives you some implicit information about scale, i.e. the feature of interest is probably filling most of the selection region.
Recommendation:
Display the image at whatever scale is convenient and downsample the selected region to roughly the n = 100k to 200k range per their example. If you need to improve the result quality, use the result of the initial stage as the starting point for a following iteration at higher resolution.
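A sketch of that recommendation with OpenCV (the target pixel count and iteration count are the kind of numbers you would tune; cv2.grabCut itself is the standard API):

```python
import cv2
import numpy as np

def grabcut_downsampled(img, rect, target_pixels=150_000, iters=5):
    """Run GrabCut on a downsampled copy, then upsample the foreground mask.

    `rect` is the initial (x, y, w, h) selection in original-image coordinates.
    """
    h, w = img.shape[:2]
    scale = min(1.0, (target_pixels / (h * w)) ** 0.5)
    small = cv2.resize(img, (int(w * scale), int(h * scale)),
                       interpolation=cv2.INTER_AREA)
    small_rect = tuple(int(v * scale) for v in rect)

    mask = np.zeros(small.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(small, mask, small_rect, bgd, fgd, iters, cv2.GC_INIT_WITH_RECT)

    # Keep definite and probable foreground, then bring the mask back to full size.
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
    return cv2.resize(fg, (w, h), interpolation=cv2.INTER_NEAREST)
```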

What type of smoothing to use?

Not sure if this may or may not be valid here on SO, but I was hoping someone can advise of the correct algorithm to use.
I have the following RAW data.
In the image you can see "steps". Essentially I wish to get these steps, but then get a moving average of all the data between. In the following image, you can see the moving average:
However you will notice that at the "steps", the moving average decreases the gradient where I wish to keep the high vertical gradient.
Is there any smoothing technique that will take into account a large vertical "offset", but smooth the other data?
Yup, I had to do something similar with images from a spacecraft.
Simple technique #1: use a median filter with a modest width - say about 5 samples, or 7. This provides an output value that is the median of the corresponding input value and several of its immediate neighbors on either side. It will get rid of those spikes, and do a good job preserving the step edges.
The median filter is provided in all number-crunching toolkits that I know of such as Matlab, Python/Numpy, IDL etc., and libraries for compiled languages such as C++, Java (though specific names don't come to mind right now...)
Technique #2, perhaps not quite as good: use a Savitzky-Golay smoothing filter. This works by effectively making least-squares polynomial fits to the data at each output sample, using the corresponding input sample and a neighborhood of points (much like the median filter). The SG smoother is known for being fairly good at preserving peaks and sharp transitions.
The SG filter is usually provided by most signal processing and number crunching packages, but might not be as common as the median filter.
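A quick SciPy sketch of techniques #1 and #2 on a synthetic step series (the window sizes are guesses to tune against your data):

```python
import numpy as np
from scipy.signal import medfilt, savgol_filter

# Synthetic stand-in for the raw series: two flat levels with a step, plus noise.
raw = np.concatenate([np.full(100, 1.0), np.full(100, 5.0)])
raw += 0.2 * np.random.randn(raw.size)

despiked = medfilt(raw, kernel_size=7)                         # technique #1: median filter
smoothed = savgol_filter(raw, window_length=11, polyorder=3)   # technique #2: Savitzky-Golay
```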
Technique #3, the most work and requiring the most experience and judgement: Go ahead and use a smoother - moving box average, Gaussian, whatever - but then create an output that blends between the original with the smoothed data. The blend, controlled by a new data series you create, varies from all-original (blending in 0% of the smoothed) to all-smoothed (100%).
To control the blending, start with an edge detector to detect the jumps. You may want to median-filter the data first to get rid of the spikes. Then broaden (dilation, in image processing jargon) or smooth and renormalize the edge detector's output, and flip it around so it gives 0.0 at and near the jumps and 1.0 everywhere else. Perhaps you want a smooth transition joining them. It is an art to get this right, and it depends on how the data will be used - for me, it's usually images to be viewed by humans. An automated embedded control system might work best if tweaked differently.
The main advantage of this technique is you can plug in whatever kind of smoothing filter you like. It won't have any effect where the blend control value is zero. The main disadvantage is that the jumps, the small neighborhood defined by the manipulated edge detector output, will contain noise.
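A sketch of technique #3 for a 1-D series; the edge threshold and the width of the protected region around each jump are exactly the judgement calls described above:

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter1d, binary_dilation

def blend_smooth(raw, smooth_width=15, edge_thresh=1.0, guard=5):
    """Smooth everywhere except near detected jumps (0% smoothing at jumps)."""
    raw = np.asarray(raw, dtype=float)
    despiked = median_filter(raw, size=5)                  # kill spikes first
    jumps = np.abs(np.diff(despiked, prepend=despiked[0])) > edge_thresh
    near_jump = binary_dilation(jumps, iterations=guard)   # broaden the edge mask
    blend = np.where(near_jump, 0.0, 1.0)                  # 0.0 near jumps, 1.0 elsewhere
    smoothed = uniform_filter1d(raw, size=smooth_width)    # moving box average
    return blend * smoothed + (1.0 - blend) * raw
```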
I recommend first detecting the steps and then smoothing each step individually.
You know how to do the smoothing, and edge/step detection is pretty easy also (see here, for example). A typical edge detection scheme is to smooth your data and then convolve/cross-correlate it with some filter (for example the array [-1, 1], which will show you where the steps are). In a mathematical context this can be viewed as studying the derivative of your plot to find inflection points (for some of the filters).
An alternative "hackish" solution would be to do a moving average but exclude outliers from the smoothing. You can decide what an outlier is by using some threshold t. In other words, for each point p with value v, take x points surrounding it and find the subset of those points which are between v - t and v + t, and take the average of these points as the new value of p.

Anti-aliasing: Preferred ways of determining maximum frequency?

I've been reading up a bit on anti-aliasing and it seems to make sense, but there is one thing I'm not too sure of. How exactly do you find the maximum frequency of a signal (in the context of graphics).
I realize there's more than one case so I assume there is more than one answer. But first let me state a simple algorithm that I think would represent maximum frequency so someone can tell me if I'm conceptualizing it the wrong way.
Let's say it's for a one-dimensional, finite, greyscale image (in pixels). Am I correct in assuming you could simply scan the entire pixel line (in the spatial domain) looking for the minimum oscillation, and the inverse of that smallest oscillation would be the maximum frequency?
Example values: {23, 26, 28, 22, 48, 49, 51, 49}
Frequency : pertaining to set
(1/2) = 0.5 : {28, 22}
(1/4) = 0.25 : {22, 48, 49, 51}
So would 0.5 be the maximum frequency?
And what would be the ideal way to calculate this for a similar pixel line as the one above?
And on a more theoretical note, what if your sampling input was infinite (more like the real world)? Would a valid process be sort of like:
Predetermine a decent interval for point sampling
Determine max frequency from point sampling
while(2*maxFrequency > pointSamplingInterval)
{
pointSamplingInterval*=2
Redetermine maxFrequency from point sampling (with new interval)
}
I know these algorithms are fraught with inefficiencies, so what are some of the preferred ways? (Not looking for something crazy-optimized, just fundamentally better concepts)
The proper way to approach this is using a Fourier transform (in practice, an FFT, or fast Fourier transform).
The theory works as follows: if you have a set of pixels with color/grayscale values, then we can say that the image is represented by pixels in the "spatial domain"; that is, each individual number specifies the image at a particular spatial location.
However, what we really want is a representation of the image in the "frequency domain". Instead of each individual number specifying each pixel, each number represents the amplitude of a particular frequency in the image as a whole.
The tool which converts from the "spatial domain" to the "frequency domain" is the Fourier Transform. The output of the FT will be a sequence of numbers specifying the relative contribution of different frequencies.
In order to find the maximum frequency, you perform the FT, and look at the amplitudes that you get for the high frequencies - then it is just a matter of searching from the highest frequency down until you hit your "minimum significant amplitude" threshold.
You can code your own FFT, but it is much easier in practice to use a pre-packaged library such as FFTW
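For a quick experiment, a minimal NumPy sketch of the "search down from the top of the spectrum" idea, with frequencies in cycles per pixel (the "minimum significant amplitude" is taken here as a fraction of the strongest component, which is just one possible choice):

```python
import numpy as np

def max_significant_frequency(line, min_amplitude_frac=0.01):
    """Return the highest frequency whose amplitude exceeds a threshold."""
    line = np.asarray(line, dtype=float)
    spectrum = np.abs(np.fft.rfft(line - line.mean()))
    freqs = np.fft.rfftfreq(len(line), d=1.0)   # cycles per pixel
    if spectrum.max() == 0:
        return 0.0                              # flat line: no oscillation at all
    threshold = min_amplitude_frac * spectrum.max()
    return freqs[spectrum >= threshold].max()
```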
You don't scan a signal for the highest frequency and then choose your sampling frequency: You choose a sampling frequency that's high enough to capture the things you want to capture, and then you filter the signal to remove high frequencies. You throw away everything higher than half the sampling rate before you sample it.
"Am I correct in assuming you could simply scan the entire pixel line (in the spatial domain) looking for the minimum oscillation and the inverse of that smallest oscillation would be the maximum frequency?"
If you have a line of pixels, then the sampling is already done. It's too late to apply an antialiasing filter. The highest frequency that could be present is half the sampling frequency ("1/2px", I guess).
"And on a more theoretical note, what if your sampling input was infinite (more like the real world)?"
Yes, that's when you use the filter. First, you have a continuous function, like a real-life image (infinite sampling rate). Then you filter it to remove everything above fs/2, then you sample it at fs (digitize the image into pixels). Cameras don't actually do any filtering, which is why you get Moire patterns when you photograph bricks, etc.
If you're anti-aliasing computer graphics, you have to think of the ideal continuous mathematical function first, and think through how you would filter it and digitize it to produce the output on the screen.
For instance, if you want to generate a square wave with a computer, you can't just naively alternate between maximum and minimum values. That would be just like sampling a real life signal without filtering first. The higher harmonics wrap back into the baseband and cause lots of spurious spikes in the spectrum. You need to generate points as if they were sampled from a filtered continuous mathematical function:
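A small NumPy sketch of that idea: build the square wave from its Fourier series, but keep only the harmonics that fall below the Nyquist frequency, so nothing wraps back into the baseband (the sample rate and fundamental frequency are placeholders):

```python
import numpy as np

def bandlimited_square(freq_hz, sample_rate_hz, num_samples):
    """Square wave synthesized from odd harmonics below the Nyquist frequency."""
    t = np.arange(num_samples) / sample_rate_hz
    signal = np.zeros(num_samples)
    k = 1
    while k * freq_hz < sample_rate_hz / 2:     # odd harmonics only, below Nyquist
        signal += np.sin(2 * np.pi * k * freq_hz * t) / k
        k += 2
    return (4 / np.pi) * signal
```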
I think this article from the O'Reilly site might also be useful to you ... http://www.onlamp.com/pub/a/python/2001/01/31/numerically.html ... in there they're referring to frequency analysis of sound files, but it gives you the idea.
I think what you need is an application of Fourier analysis (http://en.wikipedia.org/wiki/Fourier_analysis). I've studied this but never used it, so take it with a pinch of salt, but I believe that if you apply it correctly to your set of numbers you will get a set of frequencies which are components of the series, and then you can pick off the highest one.
I can't point you at a piece of code that does this, but I'm sure it is out there somewhere.
