What is the base of the logarithm in Log Transformation in image processing?

I am reading the Gonzalez image processing book, and the log transformation is defined in the book as follows:
s = c*log(1+r)
Now I have one question:
Is the logarithm base 10, or is it the natural logarithm, whose base is Napier's number e?

The log transform is used to enhance dark pixels. The dark pixel values in an image are expanded relative to the higher pixel values, so the base can be any number, depending on the desired visualization effect.
I think log10 is often used because it is related to the decibel scale in signal processing, such as the one used in the definition of signal-to-noise ratio.

If this is log() from math.h, then it's the natural logarithm.
That is, its base is e, which is approximately 2.71828.
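In practice the base does not matter much: changing it only multiplies the result by a constant, which you can fold into c. A minimal MATLAB sketch (the sample image name and the normalizing choice of c are my assumptions):
% Log transform s = c*log(1 + r). Changing the base only rescales c,
% because log_b(x) = log(x) / log(b).
r = im2double(imread('cameraman.tif'));  % assumed sample image, values in [0,1]
c = 1 / log(1 + max(r(:)));              % chosen so the output spans [0,1]
s = c .* log(1 + r);                     % natural log here; log10 would only change c
imshow(s);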

Related

How to change dynamic range of an RGB image?

I have a 16-bit raw image (12 effective bits). I convert it to RGB and now I want to change the dynamic range. I created 2 map functions; you can see them visualized below. As you can see, the first function maps the values 0-500 to 0-100 and the second one maps the remaining values to 101-255.
Now I want to apply the map functions to the RGB image. What I'm doing is iterating over each pixel, finding the appropriate function for each channel, and applying it to the channel. For example, say a pixel is RGB = [100 2000 4000]. To the R channel I'll apply the first function, since 100 is in the 0-500 range, but to the G and B channels I'll apply the second function, since their values are in the 501-4095 range.
But in doing it this way I'm actually changing the color of the pixel, since I apply different functions to different channels of the same pixel.
Can you suggest how to do it or at least give me a direction or show some articles?
What you're doing is a very straightforward imaging operation, frequently applied in image and video processing. Sometimes it's (imprecisely) called a lookup table (LUT), even though it's not always implemented via an actual lookup table. Examples of this are gamma adjustment or log encoding.
For instance, an example of this kind of encoding is sRGB, which is a gamma encoding from linear light. You can read about it here: http://en.wikipedia.org/wiki/SRGB. You'll see that it has a nonlinear adjustment.
The name LUT implies a good way of doing it. If your image is uint8 or uint16 valued, you can create a vector of desired output values for every possible input value. The lookup table has the same number of elements as the range of the variable type; for a uint8 image, you'd have a lookup table of 256 values. Then the lookup is easy: you just use the image value as an index into your LUT to get the resulting value. That computational efficiency is why LUTs are so widely used.
In your case, since you're working in RGB space, it is acceptable to apply the curves in exactly the same way to each of the three color channels. RGB space is nice for that reason. However, for various reasons, sometimes different LUTs are implemented per-channel.
So if you had an image (we'll use one included in MATLAB and pretend it's 12 bit by scaling it):
someimage = uint16(imread('autumn.tif')).*16;
image(someimage.*16); % Need to multiply again to display 16 bit data scaled properly
For your LUT, you would implement this as:
lut = uint8([(0:500).*(1/5), (501:4095).*((255-101)/(4095-501)) + 79.5326]);
plot(lut); %Take a look at the lut
This makes the piecewise calculation you described in your question.
You could make a new image this way:
convertedimage = lut(double(someimage)+1);
image(convertedimage);
Note that because MATLAB indexes with doubles, one-based, you need to cast properly and add one. This doesn't slow things down as much as you may think; MATLAB is made to do this. I've been using MATLAB for decades and this still looks odd to me.
This method lets you get fancy with the LUT creation (logs, exp, whatever) and it still runs very fast.
In your case, your LUT only needs 4096 elements, since your input data is only 12 bits. You may want to be careful with the bounds, since a uint16 can hold higher values. One clean way to bound the index is to combine min with end:
convertedimage = lut(min(double(someimage)+1, end));
Now, this has implemented your function, but perhaps you want a slightly different function. For instance, a common function of this type is a simple gamma adjustment. A gamma of 2.2 means that the incoming image values are scaled by taking them to the 1/2.2 power (if scaled between 0 and 1). We can create such a LUT as follows:
lutgamma = uint8(255.*(((0:4095)./4095).^(1/2.2)));
plot(lutgamma);
Again, we apply the LUT with a simple indexing:
convertedimage = lutgamma(min(double(someimage)+1, end));
And we get the following image:
Using a smooth LUT will usually improve overall image quality. A piecewise linear LUT will tend to cause the resulting image to have odd discontinuities in the shaded regions.
These are so common in many imaging systems that LUTs have file formats. To see what I mean, look at this LUT generator from a major camera company. LUTs are a big deal, and it looks like you're on the right track.
I think you are referring to something that Photoshop calls "Enhance Monochromatic Contrast", which is described here - look at "Step 3: Try Out The Different Algorithms".
Basically, I think you find a single min and a single max across all 3 channels and apply the same scaling to all the channels, rather than scaling each channel individually with its own min and max.
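As a rough MATLAB sketch of that idea (reusing the someimage variable from the answer above; the variable names are mine):
% One global min/max across R, G and B, so channel ratios are preserved.
lo = double(min(someimage(:)));                     % single min over all three channels
hi = double(max(someimage(:)));                     % single max over all three channels
stretched = (double(someimage) - lo) ./ (hi - lo);  % global stretch to [0,1]
image(uint8(255 .* stretched));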
Alternatively, you can convert to Lab mode (Lightness plus a and b), apply your function to the Lightness channel only (leaving the a and b channels, which hold the colour information, untouched), then transform back to RGB with the colour unaffected.
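With the Image Processing Toolbox, the Lab route could look roughly like this; the gamma curve here is only a stand-in for your piecewise map, rescaled to the 0-100 Lightness range:
lab = rgb2lab(im2double(someimage));      % L in [0,100]; a and b hold the colour
L = lab(:,:,1);
lab(:,:,1) = 100 .* (L ./ 100).^(1/2.2);  % placeholder curve; substitute your map
image(lab2rgb(lab));                      % back to RGB, colour unaffected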

Calculate SNR in single image in MATLAB

I have this image:
I want to calculate the SNR in it. For this I used the following code:
img=imread('noicy.JPG');
img=double(img(:));
ima=max(img(:));
imi=min(img(:));
ims=std(img(:));
snr=20*log10((ima-imi)./ims)
Is that correct code to calculate SNR?
The definition of SNR can be found here or here:
Both the standard and the industry definition can be used (10*log(x) and 20*log(x)). Check this.
Now, the signal is equal to the mean of the pixel values (mean(img(:))) and the noise is the standard deviation of the pixel values (std(img(:))).
You may use either the plain ratio or SNR = 10*log10(signal/noise) to express the result in decibels.
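Putting that together in MATLAB (keeping the questioner's file name):
img = double(imread('noicy.JPG'));
img = img(:);                            % flatten all pixels into one vector
signal = mean(img);                      % signal: mean of the pixel values
noise  = std(img);                       % noise: standard deviation
snr_ratio = signal / noise               % as a plain ratio
snr_db = 10 * log10(signal / noise)      % in decibels, per the definition above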

Does GrabCut Segmentation depend on the size of the image?

I have been thinking about this for quite some time, but never really performed a detailed analysis. Does foreground segmentation using the GrabCut [1] algorithm depend on the size of the input image? Intuitively, it appears to me that since GrabCut is based on color models, the color distributions should not change as the size of the image changes, but aliasing artifacts in smaller images might play a role.
Any thoughts on, or existing experiments about, the dependence of GrabCut segmentation on image size would be highly appreciated.
Thanks
[1] C. Rother, V. Kolmogorov, and A. Blake, GrabCut: Interactive foreground extraction using iterated graph cuts, ACM Trans. Graph., vol. 23, pp. 309–314, 2004.
Size matters.
The objective function of GrabCut balances two terms:
The unary term that measures the per-pixel fit to the foreground/background color model.
The smoothness term (pair-wise term) that measures the "complexity" of the segmentation boundary.
The first term (unary) scales with the area of the foreground while the second (smoothness) scales with the perimeter of the foreground.
So, if you scale your image by a factor of 2, you increase the area by a factor of 4, while the perimeter grows only by roughly a factor of 2.
Therefore, if you tuned (or learned) the parameters of the energy function for a specific image size/scale, these parameters may not work for you at other image sizes.
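To see why, write the objective schematically (my notation, simplified from the paper; lambda is the weight balancing the two terms):

E(\alpha) = \sum_{p} U_p(\alpha_p) + \lambda \sum_{(p,q)} V_{pq}(\alpha_p, \alpha_q)

The first sum has one term per pixel (area), the second one term per neighboring pair along the boundary (perimeter). Scaling the image by a factor s multiplies the first sum by roughly s^2 and the second by roughly s, so the effective balance behaves like lambda/s: a lambda tuned at one resolution is implicitly retuned at another.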
PS
Did you know that the Office 2010 "foreground selection tool" is based on the GrabCut algorithm?
Here's a PDF of the GrabCut paper, courtesy of Microsoft Research.
The two main effects of image size will be run time and the scale of details in the image which may be considered significant. Of these two, run time is the one which will bite you with GrabCut: graph-cut methods are already rather slow, and GrabCut uses them iteratively.
It's very common to start by downsampling the image to a smaller resolution, often in combination with a low-pass filter (i.e. you sample the source image with a Gaussian kernel). This significantly reduces the number of pixels n the algorithm runs over, while also reducing the effect of small details and noise on the result.
You can also use masking to restrict processing to only specific portions of the image. You're already getting some of this in GrabCut as the initial "grab" or selection stage, and again later during the brush-based refinement stage. This stage also gives you some implicit information about scale, i.e. the feature of interest is probably filling most of the selection region.
Recommendation:
Display the image at whatever scale is convenient and downsample the selected region to roughly the n = 100k to 200k range, per their example. If you need to improve the result quality, use the result of this initial stage as the starting point for a following iteration at higher resolution.
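A MATLAB sketch of that recommendation (imresize applies an anti-aliasing filter by default when shrinking; selectedRegion and the 150k target are my placeholders):
target = 150000;                          % somewhere in the suggested 100k-200k range
[h, w, ~] = size(selectedRegion);         % selectedRegion: your cropped ROI
scale = min(1, sqrt(target / (h * w)));   % shrink only, never upsample
small = imresize(selectedRegion, scale);  % bicubic with anti-aliasing by default
% run GrabCut on `small`, then refine at full resolution if needed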

How can I choose an image with higher contrast in PHP?

For a thumbnail-engine I would like to develop an algorithm that takes x random thumbnails (crop, no resize) from an image, analyzes them for contrast and chooses the one with the highest contrast. I'm working with PHP and Imagick but I would be glad for some general tips about how to compute contrast of imagery.
It seems that many things are easier to compute than contrast, for example counting colors, computing luminosity, etc.
What are your experiences with the analysis of picture material?
I'd do it this way (pseudocode):
L[256] = {0, 0, 0, ...}
loop over each pixel:
    luminance = avg(R, G, B)
    increment L[luminance] by 1
for i = 0 to 255:
    if L[i] < C: L[i] = 0   // C = threshold of your choice
find the indices of the first and last non-zero values of L[]
contrast = last - first
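A direct MATLAB translation of that pseudocode, for comparison (the file name and the threshold value C = 5 are only examples):
img = imread('thumb.jpg');                    % assumed thumbnail crop
lum = round(255 .* mean(im2double(img), 3));  % luminance = avg(R,G,B), 0..255
L = histcounts(lum(:), 0:256);                % 256-bin luminance histogram
C = 5;                                        % threshold of your choice
L(L < C) = 0;                                 % zero out near-empty bins
idx = find(L > 0);                            % surviving intensity levels
contrast = idx(end) - idx(1)                  % spread between first and last bin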
In looking for the image "with the highest contrast," you will need to be very careful in how you define contrast for the image. In the simplest way, contrast is the difference between the lowest intensity and the highest intensity in the image. That is not going to be very useful in your case.
I suggest you use a histogram approach to describe the contrast of a given image and then compare the properties of the histograms to determine the image with the highest contrast as you define it. You could use a variety of well known containers to represent the histogram in code, or construct a class to meet your specific needs. (I am not implying that you need to create a histogram in the form of a chart – just a statistical representation of the intensity values.) You could use the variance of each histogram directly as a measure of contrast, or use the standard deviation if that is easier to work with.
The key really lies in how you define the contrast of the image. In general, I would define a high contrast image as one with values present for all, or nearly all, the possible values. And I would further add that in this definition of a high contrast image, the intensity values of the image will tend to be distributed across the range of possible values in a uniform way.
Using this approach, a low contrast image would tend to have relatively few discrete intensity values and they would tend to be closely grouped together rather than uniformly distributed. (As a general rule, they will also tend to be grouped toward the center of the range.)
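Under that definition the measurement itself is short; a MATLAB sketch (PHP's Imagick exposes similar per-channel statistics):
img = im2double(imread('thumb.jpg'));     % assumed thumbnail crop
lum = mean(img, 3);                       % intensity channel
contrast = std(lum(:))                    % larger std = broader histogram = more contrast
% keep the crop with the largest value of `contrast`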

Anti-aliasing: Preferred ways of determining maximum frequency?

I've been reading up a bit on anti-aliasing and it seems to make sense, but there is one thing I'm not too sure of: how exactly do you find the maximum frequency of a signal (in the context of graphics)?
I realize there's more than one case so I assume there is more than one answer. But first let me state a simple algorithm that I think would represent maximum frequency so someone can tell me if I'm conceptualizing it the wrong way.
Let's say it's for a one-dimensional, finite, greyscale image (in pixels). Am I correct in assuming you could simply scan the entire pixel line (in the spatial domain) looking for the minimum oscillation, and that the inverse of that smallest oscillation would be the maximum frequency?
Example values: {23, 26, 28, 22, 48, 49, 51, 49}
Frequency : pertaining set
1/2 = 0.5  : {28, 22}
1/4 = 0.25 : {22, 48, 49, 51}
So would 0.5 be the maximum frequency?
And what would be the ideal way to calculate this for a similar pixel line as the one above?
And on a more theoretical note, what if your sampling input was infinite (more like the real world)? Would a valid process be sort of like:
Predetermine a decent interval for point sampling
Determine max frequency from point sampling
while (2*maxFrequency > pointSamplingInterval)
{
    pointSamplingInterval *= 2
    Redetermine maxFrequency from point sampling (with new interval)
}
I know these algorithms are fraught with inefficiencies, so what are some of the preferred ways? (Not looking for something crazy-optimized, just fundamentally better concepts)
The proper way to approach this is using a Fourier transform (in practice, an FFT, or fast Fourier transform).
The theory works as follows: if you have a set of pixels with color/grayscale values, then we can say that the image is represented by pixels in the "spatial domain"; that is, each individual number specifies the image at a particular spatial location.
However, what we really want is a representation of the image in the "frequency domain". Instead of each individual number specifying each pixel, each number represents the amplitude of a particular frequency in the image as a whole.
The tool which converts from the "spatial domain" to the "frequency domain" is the Fourier Transform. The output of the FT will be a sequence of numbers specifying the relative contribution of different frequencies.
In order to find the maximum frequency, you perform the FT and look at the amplitudes you get for the high frequencies; then it is just a matter of searching down from the highest frequency until you hit your "minimum significant amplitude" threshold.
You can code your own FFT, but it is much easier in practice to use a pre-packaged library such as FFTW.
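For the one-dimensional pixel line from the question, a MATLAB sketch might look like this (the 1% amplitude threshold is an arbitrary choice):
x = [23 26 28 22 48 49 51 49];            % the example pixel line
N = numel(x);
A = abs(fft(x - mean(x)));                % amplitude spectrum with DC removed
f = (0:N-1) ./ N;                         % frequencies in cycles per pixel
half = 1:floor(N/2) + 1;                  % non-redundant half, up to Nyquist (0.5)
sig = A(half) > 0.01 * max(A(half));      % "minimum significant amplitude" test
fmax = max(f(half(sig)))                  % highest significant frequency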
You don't scan a signal for the highest frequency and then choose your sampling frequency: You choose a sampling frequency that's high enough to capture the things you want to capture, and then you filter the signal to remove high frequencies. You throw away everything higher than half the sampling rate before you sample it.
"Am I correct in assuming you could simply scan the entire pixel line (in the spatial domain) looking for the minimum oscillation, and that the inverse of that smallest oscillation would be the maximum frequency?"
If you have a line of pixels, then the sampling is already done. It's too late to apply an antialiasing filter. The highest frequency that could be present is half the sampling frequency ("1/2px", I guess).
"And on a more theoretical note, what if your sampling input was infinite (more like the real world)?"
Yes, that's when you use the filter. First, you have a continuous function, like a real-life image (infinite sampling rate). Then you filter it to remove everything above fs/2, and then you sample it at fs (digitize the image into pixels). Cameras don't always filter adequately, which is why you get Moiré patterns when you photograph bricks, etc.
If you're anti-aliasing computer graphics, you have to think of the ideal continuous mathematical function first, and think through how you would filter it and digitize it to produce the output on the screen.
For instance, if you want to generate a square wave with a computer, you can't just naively alternate between maximum and minimum values. That would be just like sampling a real-life signal without filtering first. The higher harmonics wrap back into the baseband and cause lots of spurious spikes in the spectrum. You need to generate points as if they were sampled from a filtered continuous mathematical function.
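For example, a band-limited square wave can be built by summing only the odd harmonics that fall below fs/2; a minimal MATLAB sketch (the sample rate and frequency are arbitrary choices):
fs = 8000; f0 = 200;                      % sample rate and square-wave frequency
t = (0:fs-1) ./ fs;                       % one second of samples
y = zeros(size(t));
for k = 1:2:floor((fs/2) / f0)            % odd harmonics only, all below Nyquist
    y = y + sin(2*pi*k*f0.*t) ./ k;       % Fourier series of a square wave
end
y = (4/pi) .* y;                          % standard square-wave amplitude scaling
plot(t(1:100), y(1:100));                 % ripples (Gibbs), but no aliasing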
I think this article from the O'Reilly site might also be useful to you: http://www.onlamp.com/pub/a/python/2001/01/31/numerically.html. There they're referring to frequency analysis of sound files, but it gives you the idea.
I think what you need is an application of Fourier analysis (http://en.wikipedia.org/wiki/Fourier_analysis). I've studied this but never used it, so take it with a pinch of salt, but I believe that if you apply it correctly to your set of numbers you will get a set of frequencies which are components of the series, and then you can pick off the highest one.
I can't point you at a piece of code that does this, but I'm sure it is out there somewhere.
