Image noise removal after HSV thresholding

I am currently trying to remove the noise on this image.
The image was obtained using cv2 HSV thresholding. Unfortunately, there are a lot of random pixels and fragments that need to be filtered out. I've already tried OpenCV's fastNlMeansDenoisingColored function, but it did not work. Is there anything else I could try?

You can try multiple things; I would try them in this order:
1. Blur the image before calculating the threshold.
2. Change the threshold value.
3. Erode and dilate the image before calculating the threshold (eroding and dilating after thresholding does not work as well).
4. Or go all in: use connectedComponentsWithStats and remove all components with a small area.
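The connected-components idea can be sketched in plain Python (in practice you would use cv2.connectedComponentsWithStats, which is far faster; the function name, grid format, and min_area value below are illustrative):

```python
from collections import deque

def remove_small_components(binary, min_area):
    """Zero out 4-connected components smaller than min_area pixels.

    binary: 2D list of 0/1 values, as produced by thresholding.
    Returns a new 2D list with the small (noise) components erased.
    """
    h, w = len(binary), len(binary[0])
    out = [row[:] for row in binary]
    seen = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if out[y][x] == 1 and not seen[y][x]:
                # Flood-fill one component and collect its pixels.
                queue, pixels = deque([(y, x)]), []
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and out[ny][nx] == 1 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(pixels) < min_area:
                    for py, px in pixels:  # erase the small (noise) component
                        out[py][px] = 0
    return out

# A 2x2 blob survives, the isolated speck at (1, 3) is removed:
mask = [[1, 1, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
cleaned = remove_small_components(mask, min_area=2)
```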


MATLAB Kinect sensor depth image (too dark)

I am using the Kinect2 with MATLAB; however, the depth images shown in the video stream are much brighter than when I save them in MATLAB. Do you know the solution to this problem?
Firstly, you should provide the code you are using at the moment so we can see where you are going wrong. This is generic advice for posting on any forum: provide all your information so others can help.
If you use the histogram to check your depth values, you will see that the image is a uint8 image with values from 0 to 255. Since the depth distances are scaled to grayscale values, the values are rescaled, and displaying them with imshow will not provide enough contrast.
An easy workaround for displaying the images is to apply some form of histogram equalization, such as:
figure(1);
C = adapthisteq(A, 'clipLimit', 0.02, 'Distribution', 'rayleigh');
imshow(C);
The image will be contrast-adjusted for display.
I used mat2gray and it solved the problem.
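What mat2gray does is a linear min-max rescale to [0, 1], which restores display contrast when the depth values occupy only a narrow slice of the uint8 range. A minimal Python sketch of that rescaling (the function name is mine; the flat-input behaviour here is a simplifying assumption, not MATLAB's exact edge case):

```python
def mat2gray_like(values):
    """Rescale values linearly so min -> 0.0 and max -> 1.0,
    analogous to MATLAB's mat2gray on a flattened image."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # flat input: nothing to stretch
    return [(v - lo) / (hi - lo) for v in values]

# Depth values crowded into [50, 150] get stretched to the full [0, 1] range:
stretched = mat2gray_like([50, 100, 150])
```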

Image segmentation by pixel intensity in MATLAB

I'm trying to segment a part of an image in MATLAB. I'm using CT images, and I would like to segment the teeth that contain metal, because these metal artifacts compromise the image quality. Can someone help me?
1. What I want to segment
2. Original image
As a simple start, you can use a simple threshold that you set manually based on the histogram:
imhist(img)              % inspect the intensities to pick a threshold
threshold = 120;
binaryImage = img > threshold;
imshow(binaryImage)
(The variable is named img here because image shadows a built-in MATLAB function.)
Next, find the boundary of the binary image using a function such as bwtraceboundary. Finally, combine the original image and the boundary image to produce the final result.
I think the hardest part will be choosing the threshold.
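The thresholding step itself translates to any language; a minimal Python sketch (the function name and the threshold of 120 are illustrative, taken from the MATLAB lines above):

```python
def threshold_image(img, threshold=120):
    """Binarize a 2D grayscale image: 1 where pixel > threshold, else 0."""
    return [[1 if px > threshold else 0 for px in row] for row in img]

# Bright (metal) pixels map to 1, everything else to 0:
binary = threshold_image([[100, 130],
                          [200, 20]], threshold=120)
```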

Applying an image as a mask in MATLAB

I am a new user of image processing in MATLAB. My first aim is to implement the article and compare my results with the authors' results.
The article can be found here: http://arxiv.org/ftp/arxiv/papers/1306/1306.0139.pdf
First problem, image quality: masks are defined in Figure 7, but I couldn't find the mask data set, so I used a screenshot and the image quality is low. In my view, this can affect the results. Are there any suggestions?
Second problem, merging images: I want to apply mask 1 to the Lena image, but I don't want to use Paint =) Also, is it possible to merge the images while keeping Lena?
You need to create the mask array. The first step is probably to turn your captured image from Figure 7 into a black and white image:
Mask = im2bw(Figure7, 0.5);
Now the background (white) is all 1 and the black line (or text) is 0.
Let's make sure your image of Lena that you got from imread is actually grayscale:
LenaGray = rgb2gray(Lena);
Finally, apply your mask on Lena:
LenaAndMask = LenaGray.*Mask;
Of course, this last line won't work if Lena and Figure7 don't have the same size, but this should be an easy fix.
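The element-wise product in that last MATLAB line (LenaGray .* Mask) can be sketched in plain Python, including the size check the answer mentions (the function name and nested-list image format are mine):

```python
def apply_mask(gray, mask):
    """Element-wise product of a grayscale image and a 0/1 mask,
    like LenaGray .* Mask in MATLAB: mask-0 pixels are blacked out,
    mask-1 pixels pass through unchanged."""
    if len(gray) != len(mask) or len(gray[0]) != len(mask[0]):
        raise ValueError("image and mask must have the same size")
    return [[g * m for g, m in zip(grow, mrow)]
            for grow, mrow in zip(gray, mask)]

masked = apply_mask([[10, 20],
                     [30, 40]],
                    [[1, 0],
                     [0, 1]])
```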
First of all, you should know that this paper was published on arXiv. When a paper is only published on arXiv, it is always a good idea to find out more about the author and/or the university behind it.
Trust me on this: you do not need to waste your time on this paper.
I understand what you want, but it is not a good idea to get the mask from a print screen. The pixel values obtained from a print screen may not be the same as the original values, and zooming may change the size, so you need to be sure the sizes are the same.
If you do use a print screen: paste the image, crop the mask, convert RGB to grayscale, and threshold the grayscale image to get the binary mask.
If you save the image as JPEG, distortions from the high-frequency edges will change the edge shapes.

Equalize contrast and brightness across multiple images

I have roughly 160 images for an experiment. Some of the images, however, have clearly different levels of brightness and contrast compared to others. For instance, I have something like the two pictures below:
I would like to equalize the two pictures in terms of brightness and contrast (probably find some level in the middle and not equate one image to another - though this could be okay if that makes things easier). Would anyone have any suggestions as to how to go about this? I'm not really familiar with image analysis in Matlab so please bear with my follow-up questions should they arise. There is a question for Equalizing luminance, brightness and contrast for a set of images already on here but the code doesn't make much sense to me (due to my lack of experience working with images in Matlab).
Currently, I use Gimp to manipulate images but it's time consuming with 160 images and also just going with subjective eye judgment isn't very reliable. Thank you!
You can use histeq to perform histogram specification, where the algorithm will try its best to make the target image match the distribution of intensities / histogram of a source image. This is also called histogram matching, and you can read about it in my previous answer.
In effect, the distribution of intensities between the two images should hopefully be the same. If you want to take advantage of this using histeq, you can specify an additional parameter that specifies the target histogram. Therefore, the input image would try and match itself to the target histogram. Something like this would work assuming you have the images stored in im1 and im2:
out = histeq(im1, imhist(im2));
However, imhistmatch is the better function to use. You call it almost the same way as histeq, except you don't have to compute the histogram manually; you just pass the image to match directly:
out = imhistmatch(im1, im2);
Here's a running example using your two images. Note that I'll opt to use imhistmatch instead. I read the two images in directly from Stack Overflow, perform a histogram matching so that the first image matches the intensity distribution of the second image, and show the results in one window.
im1 = imread('http://i.stack.imgur.com/oaopV.png');
im2 = imread('http://i.stack.imgur.com/4fQPq.png');
out = imhistmatch(im1, im2);
figure;
subplot(1,3,1);
imshow(im1);
subplot(1,3,2);
imshow(im2);
subplot(1,3,3);
imshow(out);
This is what I get:
Note that the first image now more or less matches in distribution with the second image.
We can also flip it around and make the first image the source and we can try and match the second image to the first image. Just flip the two parameters with imhistmatch:
out = imhistmatch(im2, im1);
Repeating the above code to display the figure, I get this:
That looks a little more interesting. We can definitely see the shape of the second image's eyes, and some of the facial features are more pronounced.
As such, what you can finally do in the end is choose a good representative image that has the best brightness and contrast, then loop over each of the other images and call imhistmatch each time using this source image as the reference so that the other images will try and match their distribution of intensities to this source image. I can't really write code for this because I don't know how you are storing these images in MATLAB. If you share some of that code, I'd love to write more.
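The CDF-matching idea underlying imhistmatch can be sketched in plain Python on flattened intensity lists (this is the classic algorithm, not MATLAB's exact implementation; the function names are mine):

```python
def histogram_match(source, reference, levels=256):
    """Remap source intensities so their distribution approximates
    the reference's, via cumulative-histogram (CDF) matching."""
    def cdf(pixels):
        hist = [0] * levels
        for p in pixels:
            hist[p] += 1
        total, running, out = len(pixels), 0, []
        for c in hist:
            running += c
            out.append(running / total)
        return out

    src_cdf, ref_cdf = cdf(source), cdf(reference)
    # For each source level, pick the reference level whose CDF is closest.
    lut = [min(range(levels), key=lambda r: abs(ref_cdf[r] - src_cdf[s]))
           for s in range(levels)]
    return [lut[p] for p in source]

# A dark "image" is pulled toward the bright reference's intensity range:
dark = [10, 10, 20, 30]
bright = [200, 210, 220, 230]
matched = histogram_match(dark, bright)
```

Looping this over your 160 images, with one well-exposed image as the fixed reference, is exactly the batch strategy described above.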

How do I deal with brightness rescaling after FFT'ing and spatially filtering images?

Louise here. I've recently started experimenting with Fourier transforming images and spatially filtering them. For example, here's one of a fireplace, high-pass filtered to remove everything below ten cycles per image:
http://imgur.com/ECa306n,NBQtMsK,Ngo8eEY#0 - first image (sorry, I can't post images on Stack Overflow because I haven't got enough reputation).
As we can see, the image is very dark. However, if we rescale it to [0,1] we get
http://imgur.com/ECa306n,NBQtMsK,Ngo8eEY#0 - second image
and if we raise everything in the image to the power of -0.5 (we can't raise to positive powers as the image data is all between 0 and 1, and would thus get smaller), we get this:
same link - third image
My question is: how should we deal with reductions in dynamic range due to high- or low-pass filtering? I've seen lots of filtered images online, and they all seemed to have brightness profiles similar to the original image, without manipulation.
Should I be leaving the centre pixel of the frequency domain (the DC value) alone, and not removing it when high-pass filtering?
Is there a commonplace transform (like histogram equalisation) that I should be using after the filtering?
Or should I just interpret the brightness reduction as normal, because some of the information in the image has been removed?
Thanks for the advice :)
Best,
Louise
I agree with Connor: the best way to preserve brightness is to keep the origin (DC) value unchanged, which is common practice. This way you will get an image similar to your second one, because you do not change the average gray level of the image. Removing the DC value with high-pass filtering sets it to 0, and some rescaling is needed afterwards.
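The reason this works is that the DC coefficient of the discrete Fourier transform is the sum of the signal (i.e. proportional to its mean), so zeroing it forces the filtered image's average brightness to 0. A 1-D demonstration with a naive DFT (the helper names are mine; real code would use an FFT library):

```python
import cmath

def dft(x):
    """Naive forward DFT of a real 1-D signal."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT, returning the real part of the reconstruction."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

signal = [3.0, 5.0, 4.0, 8.0]   # mean brightness 5.0
spectrum = dft(signal)           # spectrum[0] (the DC bin) equals sum(signal) = 20

# Zeroing the DC bin, as a naive high-pass filter does, drops the mean to 0...
no_dc = idft([0] + spectrum[1:])
# ...while leaving the DC bin untouched preserves the original brightness.
kept_dc = idft(spectrum)
```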
