Fitting an image to a Gaussian distribution

I'm implementing a Gaussian PDF to which I've already fitted (matched) a histogram.
However, when I change the mean and SD values there should be changes in the output image, and I don't seem to be getting any.
Can someone please explain, in the context of images, how varying the SD and mean would affect it?
For example, if mean = 30 and SD = 10, would the image be lighter (more bright) compared to mean = 30, SD = 80?

The mean corresponds to the overall average brightness of the image, and the SD corresponds to the contrast, that is, the difference between the brightest and darkest parts of the image. So if the mean remains the same, as in your example, but the SD is increased, then the overall average brightness remains the same, but the darkest parts of the image get darker and the brightest parts get brighter, increasing the contrast.
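Here is a minimal sketch of that relationship in Python/NumPy (the file name and target values are illustrative assumptions), remapping an image to a chosen mean and SD:

import cv2
import numpy as np

# Hypothetical input file; any grayscale image will do.
img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE).astype(np.float64)

# Illustrative targets: same mean, two different SDs to compare.
target_mean, target_sd = 30.0, 80.0

# Standardize to zero mean / unit SD, then map to the target statistics.
z = (img - img.mean()) / img.std()
out = np.clip(z * target_sd + target_mean, 0, 255).astype(np.uint8)

cv2.imwrite('output.png', out)

With target_sd = 10 the output clusters tightly around gray level 30 and looks flat; with target_sd = 80 the mean is unchanged but the values spread out, so the contrast increases (values are clipped at 0 and 255).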

Related

Detecting a Partially Blurred Image

How would one go about creating an algorithm that detects the unblurred parts of a picture? For example, it would look at this picture:
and realize the non-blurred portion is:
I saw over here how to measure the blur of the whole picture. For this problem, should I just set a threshold on the maximal absolute second derivative of the pixels, and consider whichever regions exceed it to be non-blurred?
A simple solution is to detect high frequency content.
If there's no high frequency content in an area, it may be because it is blurred.
How to detect areas with no high frequency content? You can do it in the frequency domain (for example, with DCT), or you can do it in the spatial domain.
First, I recommend the spatial domain method.
You'll need some kind of high-pass filter. The easiest method is to blur the image (for example, with a Gaussian filter), then subtract it from the original, then convert to grayscale:
Blurred:
Subtracted:
As you see, all the blurred pixels become dark, and high frequency content is bright. Now, you may want to blur this image, and apply a threshold, to get this:
Note: this process was done by hand with GIMP. Your algorithm can easily follow these steps, but it needs some parameters specified (like the blur radius and the threshold value).
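A rough Python/OpenCV sketch of this pipeline (the file name, blur sigmas, and threshold are illustrative parameters to be tuned):

import cv2

# Hypothetical input file.
gray = cv2.imread('photo.jpg', cv2.IMREAD_GRAYSCALE)

# High-pass: subtract a blurred copy from the original.
blurred = cv2.GaussianBlur(gray, (0, 0), sigmaX=5)
highpass = cv2.absdiff(gray, blurred)

# Smooth the high-frequency map, then threshold it into a sharp/blurred mask.
smoothed = cv2.GaussianBlur(highpass, (0, 0), sigmaX=10)
_, mask = cv2.threshold(smoothed, 8, 255, cv2.THRESH_BINARY)

Pixels that are white in mask mark regions with high-frequency content, i.e. the non-blurred parts.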

Applying an image as a mask in matlab

I am a new user of image processing in Matlab. My first aim is to implement the article below and compare my results with the authors' results.
The article can be found here: http://arxiv.org/ftp/arxiv/papers/1306/1306.0139.pdf
First problem, image quality: in Figure 7, masks are defined, but I couldn't find the mask data set, so I used a screenshot and the image quality is low. In my view, this can affect the results. Are there any suggestions?
Second problem, merging images: I want to apply mask 1 to the Lena image, but I don't want to use Paint =). In other words, is it possible to merge the images while keeping Lena?
You need to create the mask array. The first step is probably to turn your captured image from Figure 7 into a black and white image:
Mask = im2bw(Figure7, 0.5);
Now the background (white) is all 1 and the black line (or text) is 0.
Let's make sure your image of Lena that you got from imread is actually grayscale:
LenaGray = rgb2gray(Lena);
Finally, apply your mask on Lena:
LenaAndMask = LenaGray.*Mask;
Of course, this last line won't work if Lena and Figure7 don't have the same size, but this should be an easy fix.
First of all, you should know that this paper was published on arXiv. When a paper is only published on arXiv, it is always a good idea to learn more about the author and/or the university that published it.
Trust me on that: you do not need to waste your time on this paper.
I understand what you want, but it is not a good idea to get the mask from a screenshot. The pixel values captured from a screenshot may not be the same as the original values, and zooming may change the size, so you need to make sure the sizes match.
If you still want to do it:
1. Take the screenshot and paste the image.
2. Crop the mask.
3. Convert RGB to grayscale.
4. Threshold the grayscale image to get the binary mask.
Note that if you save the image as JPEG, compression distortions around high-frequency edges will change the edge shape.
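If you do go the screenshot route, a minimal sketch of steps 2-4 in Python/OpenCV (the file name and threshold value are illustrative):

import cv2

# Hypothetical cropped screenshot of the mask.
shot = cv2.imread('figure7_crop.png')
gray = cv2.cvtColor(shot, cv2.COLOR_BGR2GRAY)

# Threshold at mid-gray: background becomes 255, the line/text becomes 0.
_, mask = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

Save the result in a lossless format such as PNG to avoid the JPEG edge distortions mentioned above.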

detect white areas with sharp boundary

In the grayscale image shown below, how can I accurately detect the white region having sharp boundary (marked with red color)?
In this particular image, simple thresholding might work; however, I have several images in which there are similar areas around the corners of the image which I want to ignore.
Also, there might be more than one region of interest, each with a different intensity. One can be as bright as in the example image; another can be of medium intensity.
However, the only difference between the interesting and non-interesting areas is as follows:
The areas of interest have sharp, well-defined boundaries.
Non-interesting areas don't have sharp boundaries; they tend to merge gradually with neighbouring areas.
Image without mark for testing:
When you say sharp boundaries, you have to think gradient. The sharper the boundary, the bigger the gradient. Therefore, apply a gradient operator and you will see that its response is stronger around the shapes you want to segment.
But in your case, you can also observe that the area you want to segment is the brightest. So I would also try noise reduction (a median filter) plus a convolution filter (a simple average) to homogenize the different zones, then threshold by keeping only the brightest/rightmost peak of the histogram.
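A rough OpenCV sketch of that suggestion (the kernel sizes and threshold value are illustrative and would need tuning):

import cv2

img = cv2.imread('o2XfN.jpg', cv2.IMREAD_GRAYSCALE)

# Noise reduction, then a simple average to homogenize the zones.
den = cv2.medianBlur(img, 5)
avg = cv2.blur(den, (9, 9))

# Gradient magnitude is strongest around the sharp boundaries.
gx = cv2.Sobel(avg, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(avg, cv2.CV_32F, 0, 1)
grad = cv2.magnitude(gx, gy)

# Keep only the brightest peak of the homogenized image.
_, bright = cv2.threshold(avg, 200, 255, cv2.THRESH_BINARY)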
Here is one way to do it in MATLAB, keeping only the highest-valued pixels:
im = imread('o2XfN.jpg');
figure
imshow(im)
smooth = imgaussfilt(im, .8);     % "blur" the image to take out noisy pixels
big = double(smooth);             % some functions don't work with uint8
maxiRow = quantile(big, .99);     % 99th percentile of each column
maxiCol = quantile(maxiRow, .98); % 98th percentile across those values
pixels = find(big >= maxiCol);    % which pixels have the highest values
logicMat = false(size(big));      % initialize a logical matrix of zeros
logicMat(pixels) = true;          % mark the pixels that passed the threshold
figure
imshow(logicMat)
It is not entirely clear what you want to do with the regions you are finding. Also, a few more sample images would be helpful for debugging the code. What I posted above may work for that one image, but it is unlikely to work for every image you are processing.

Image Equalization to compensate for light sources

Currently I am involved in an image processing project dealing with human faces, but I am facing problems with images where the light source is on either the left or right side of the face. In those cases, the portion of the face away from the light source is darker. I want to distribute the brightness over the image more evenly, so that the brightness of darker pixels is increased and the brightness of overly bright pixels is decreased at the same time.
I have used gamma correction techniques for this, but the results are not desirable. What I actually want is an output in which the brightness is independent of the light source; in other words, the brightness of the darker part is increased and the brightness of the brighter part is decreased. I am not sure if I have stated the problem correctly, but this seems like a very common problem and I haven't found anything useful about it on the web.
1. Image with light source on the right side
2. Image after increasing the brightness of the darker pixels [img = cv2.pow(img, 0.5)]
3. Image after decreasing the brightness of the bright pixels [img = cv2.pow(img, 2.0)]
I was thinking of taking the mean of images 2 and 3, but as you can see, the overly bright pixels still persist in image 3, and I want to get rid of those pixels. Any suggestions?
In the end I need an image with homogeneous brightness, independent of the light source.
Take a look at homomorphic filtering applied to image enhancement, in which you can selectively filter the reflectance and illumination components of an image.
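A minimal homomorphic-filtering sketch in Python/NumPy (the file name, sigma, and gamma values are assumptions for illustration, not values from any paper):

import numpy as np
import cv2

# Hypothetical input; grayscale, shifted by 1 to avoid log(0).
img = cv2.imread('face.jpg', cv2.IMREAD_GRAYSCALE).astype(np.float64) + 1.0

# In log space, illumination and reflectance become additive components.
log_img = np.log(img)

# Gaussian high-emphasis filter: attenuate low frequencies (illumination),
# boost high frequencies (reflectance). All values are illustrative.
rows, cols = img.shape
u = np.arange(rows) - rows // 2
v = np.arange(cols) - cols // 2
V, U = np.meshgrid(v, u)
D2 = U**2 + V**2
sigma = 30.0
gamma_l, gamma_h = 0.5, 1.5
H = gamma_l + (gamma_h - gamma_l) * (1.0 - np.exp(-D2 / (2.0 * sigma**2)))

# Filter in the frequency domain, invert the log, rescale to [0, 255].
F = np.fft.fftshift(np.fft.fft2(log_img))
out = np.exp(np.real(np.fft.ifft2(np.fft.ifftshift(F * H))))
out = cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)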
I found this paper: http://www.mirlab.org/conference_papers/International_Conference/ICASSP%202010/pdfs/0001374.pdf. I think it addresses exactly the concern you have.
You will need to compute the "gradient" of an image, i.e. its Laplacian derivatives, for which you can read this: http://docs.opencv.org/trunk/doc/py_tutorials/py_imgproc/py_gradients/py_gradients.html
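For reference, those derivatives in OpenCV look like this (the input file name is hypothetical; ksize is a tunable parameter):

import cv2

gray = cv2.imread('face.jpg', cv2.IMREAD_GRAYSCALE)

lap = cv2.Laplacian(gray, cv2.CV_64F)                # second-derivative response
sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=5)  # horizontal gradient
sobely = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=5)  # vertical gradient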
I'd be very interested to know about your implementation. If you run into trouble, post a comment here and I can try to help.

How do I deal with brightness rescaling after FFT'ing and spatially filtering images?

Louise here. I've recently started experimenting with Fourier transforming images and spatially filtering them. For example, here's one of a fireplace, high-pass filtered to remove everything below ten cycles per image:
http://imgur.com/ECa306n,NBQtMsK,Ngo8eEY#0 - first image (sorry, I can't post images on Stack Overflow because I haven't got enough reputation).
As we can see, the image is very dark. However, if we rescale it to [0,1] we get
http://imgur.com/ECa306n,NBQtMsK,Ngo8eEY#0 - second image
and if we raise everything in the image to the power of -0.5 (we can't raise it to powers greater than 1, as the image data is all between 0 and 1 and would thus only get smaller), we get this:
same link - third image
My question is: how should we deal with reductions in dynamic range due to high-/low-pass filtering? I've seen lots of filtered images online, and they all seemed to have brightness profiles similar to the original image, without manipulation.
Should I be leaving the centre pixel of the frequency domain (the DC value) alone, and not removing it when high-pass filtering?
Is there a commonplace transform (like histogram equalisation) that I should be applying after the filtering?
Or should I just accept the brightness reduction as normal, because some of the information in the image has been removed?
Thanks for the advice :)
Best,
Louise
I agree with Connor: the best way to preserve brightness is to keep the origin (DC) value unchanged. It is common practice. This way you will get an image similar to your second one, because you do not change the average gray level of the image. Removing the DC term with a high-pass filter sets its value to 0, and some rescaling is needed afterwards.
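A minimal NumPy sketch of keeping the DC term while high-pass filtering (the function name and default cutoff are illustrative):

import numpy as np

def highpass_keep_dc(img, cutoff=10.0):
    # img: 2-D grayscale float array; cutoff in cycles per image.
    F = np.fft.fftshift(np.fft.fft2(img))
    rows, cols = img.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    V, U = np.meshgrid(v, u)
    mask = np.sqrt(U**2 + V**2) > cutoff  # pass only the high frequencies
    dc = F[rows // 2, cols // 2]          # save the DC (average brightness) term
    F = F * mask
    F[rows // 2, cols // 2] = dc          # restore the DC term
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))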
