MATLAB Kinect sensor depth image (too dark) - image

I am using Kinect2 with Matlab; however, the depth images shown in the video stream are much brighter than when I save them in Matlab.
Do you know the solution to this problem?

First, you should provide the code you are using at the moment so we can see where you are going wrong. This is generic advice for posting on any forum: provide all your information so others can help.
If you check your depth values with a histogram, you will see that the image is a uint8 image with values from 0 to 255. Since the depth distances are scaled down to grayscale values, most of them end up in a narrow band, and displaying the image with imshow will not provide enough contrast.
An easy workaround for display is to apply some form of adaptive histogram equalization, such as:
figure(1);
% contrast-limited adaptive histogram equalization (CLAHE) on the
% grayscale depth image A
C = adapthisteq(A, 'ClipLimit', 0.02, 'Distribution', 'rayleigh');
imshow(C);
The image will be contrast-adjusted for display.

I used mat2gray and it solved the problem.
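For reference, a minimal sketch of that fix (assuming the raw depth frame is in a variable named Depth, which is my naming, not the asker's):
figure(2);
% mat2gray rescales the data from [min, max] to [0, 1] for display
J = mat2gray(Depth);
imshow(J);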

Related

Crack detection OpenCV C++

I am new to image processing. I want to detect cracks; can anyone please help me?
From the description you're presenting, I believe the best way to do this is to build a binary mask and apply it with the cv2.bitwise_and() function.
You can do this color segmentation by using two thresholds that act as the minimum and the maximum color values for the color of the cracks.
Another solution may be Otsu's threshold method, which will probably generate the best values for your mask.
After masking the image, try to extract the contours of the cracks with the cv2.findContours() function. (Check this link, which describes how you can implement this function.)
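Since the rest of this page uses MATLAB, here is a rough sketch of the same pipeline (Otsu threshold, then boundary tracing as a stand-in for cv2.findContours) in MATLAB; 'crack.png' is a placeholder filename:
% Otsu threshold plus boundary tracing, the MATLAB counterpart of the
% thresholding / cv2.findContours pipeline described above
I = imread('crack.png');        % placeholder filename
G = rgb2gray(I);
level = graythresh(G);          % Otsu's method picks the threshold
BW = ~imbinarize(G, level);     % cracks are dark, so invert the mask
B = bwboundaries(BW);           % one traced boundary per crack region
imshow(I); hold on;
for k = 1:numel(B)
    b = B{k};                   % b holds [row, col] coordinates
    plot(b(:,2), b(:,1), 'r', 'LineWidth', 1);
end
hold off;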

Imagesc conversion formula

I have a .png image that was created from some grayscale numbers with Matlab's imagesc using the standard colormap.
For some reason, I am unable to recover the raw data. Is there a way of recovering the raw data from the image? I tried rgb2gray, which more or less worked, but if I feed the new image back into imagesc, it gives me a slightly different result. Also, the pixel with the highest intensity differs between the two images.
So, to clarify: I would love to know how Matlab applies the RGB colormap to the grayscale values when using the standard colormap.
This is the image we are talking about:
http://imgur.com/qFsGrWw.png
Thank you!
No, you will not get the right data back if you used the standard colormap, jet.
Generally, it's a very bad idea to try to reverse engineer plots, as they never contain the entirety of the information. This is true in general, but even more so if you use colormaps that do not change uniformly with the data. The range of values mapped to blue in jet is massively bigger than the range mapped to orange or any other color. The color changes are non-linear in the data, and this will make you lose a lot of resolution: you may know which value orange corresponds to, but blue will cover a very wide range of possible values.
In short:
Trying to get data back from a representation of the data (i.e. a plot) is a terrible idea.
jet is a terrible idea.
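As for the "conversion formula" part of the question: imagesc scales the data linearly between the color limits (by default the data minimum and maximum) onto the rows of the current colormap. If you still want an approximate inversion despite the caveats above, here is a sketch that snaps every pixel to its nearest jet entry; it assumes the original data spanned the full colormap, which is unknowable from the image alone:
% Approximate inversion: map each RGB pixel to the nearest jet entry.
% Resolution is lost in the blue range, as explained above.
RGB = im2double(imread('qFsGrWw.png'));   % the linked image
map = jet(256);
[h, w, ~] = size(RGB);
P = reshape(RGB, h*w, 3);
% squared Euclidean distance from every pixel to every colormap row
D = sum(P.^2, 2) + sum(map.^2, 2).' - 2 * (P * map.');
[~, idx] = min(D, [], 2);
recovered = reshape((idx - 1) / 255, h, w);  % back onto a [0, 1] scale
imagesc(recovered); colormap(jet); colorbar;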

Applying an image as a mask in matlab

I am new to image processing in Matlab. My first aim is to implement the article and compare my results with the authors' results.
The article can be found here: http://arxiv.org/ftp/arxiv/papers/1306/1306.0139.pdf
First problem, image quality: in Figure 7, masks are defined, but I couldn't get hold of the mask data set, so I use a screenshot and the image quality is low. In my view, this can affect the results. Are there any suggestions?
Second problem, merging images: I want to apply mask 1 to Lena, but I don't want to use Paint =) On the other hand, is it possible to merge the images while keeping Lena?
You need to create the mask array. The first step is probably to turn your captured image from Figure 7 into a black and white image:
Mask = im2bw(Figure7, 0.5);
Now the background (white) is all 1 and the black line (or text) is 0.
Let's make sure your image of Lena that you got from imread is actually grayscale:
LenaGray = rgb2gray(Lena);
Finally, apply your mask on Lena:
LenaAndMask = LenaGray.*Mask;
Of course, this last line won't work if Lena and Figure7 don't have the same size, but this should be an easy fix.
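If the sizes do differ, here is a minimal sketch of that easy fix (nearest-neighbor interpolation keeps the mask binary):
% resize the mask to Lena's size before multiplying
Mask = imresize(Mask, size(LenaGray), 'nearest');
LenaAndMask = LenaGray .* uint8(Mask);   % cast so the classes match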
First of all, you should know that this paper was published on arXiv. When a paper is published on arXiv, it is always a good idea to find out more about the author and/or the university that published it.
Trust me on that: you do not need to waste your time on this paper.
I understand your aim, but it is not a good idea to get the mask from a screenshot. The pixel values obtained from a screenshot may not be the same as the original values, and the zoom may change the size, so you need to be sure that the sizes are the same.
If you do use a screenshot anyway, as sketched below:
take the screenshot and paste the image;
crop the mask;
convert RGB to grayscale;
threshold the grayscale image to get the binary mask.
If you save the image as JPEG, compression distortions around high-frequency edges will change the edge shapes, so prefer a lossless format such as PNG.
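A sketch of those steps in MATLAB ('screenshot.png' is a placeholder filename, and the crop rectangle is drawn interactively):
S = imread('screenshot.png');   % the pasted screenshot
region = imcrop(S);             % draw the crop rectangle around the mask
G = rgb2gray(region);
Mask = imbinarize(G);           % global Otsu threshold by default
imshow(Mask);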

How to find entropy of Depth images in Matlab?

I want to compute the depth entropy of a depth image in Matlab (the same as in this work). Unfortunately, the authors don't reply to my emails. There is a function, entropyfilt, that computes the local entropy of a grayscale image. I've used this function with a depth input image captured by a Kinect, but it hasn't worked properly. Here is my input depth image:
Here is the code used for entropy computing:
J = entropyfilt(Depth);   % default 9-by-9 neighborhood
imshow(mat2gray(J))
Sorry, my reputation isn't enough, so I can't upload my result image.
How can I compute entropy image of a depth image? I want to acquire an image same as figure 4 in above paper.
Thanks in advance.
It is written in the paper: for each pixel, you first extract two patches from the image and then calculate the entropy of each patch. The formula for this is also in the paper and is well known in statistics.
If you want to use the function entropyfilt, you need to provide as a second argument a neighborhood that describes the patch (all pixels within the patch need to be 1, the others 0). This is detailed in the documentation of that function.
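For example, a sketch with an explicit square patch (the 9-by-9 size is my assumption; the paper defines its own patch):
% scale the 16-bit Kinect depth into [0, 1] before computing entropy
D = mat2gray(Depth);
nhood = ones(9);           % patch mask: 1 inside the patch, 0 outside
J = entropyfilt(D, nhood);
imshow(mat2gray(J));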
The authors generate one color image from two entropy images; how they do so, they seemingly forgot to mention.
I think the paper is of low quality.
image1 = imread('where the image is located');
entropy(image1)   % note: this returns a single scalar for the whole image
imshow(image1)

Need advice on image processing and binarization

In this picture:
http://i.stack.imgur.com/RfPqv.png
I have to reduce texture noise and sharpen the borders of both squares so that binarization can be applied to recognize the squares. I tried median filters and sharpening filters with different kernels but had no success. Can you please advise something useful for this situation? Maybe you know some binarization methods that will help even without filters. Thanks.
Well, in that picture the noise can be described as saturation and/or luminosity, and the information can be described as hue, so you probably want to convert RGB to HSB. See: convert hsl to hsb
If you do that and then equalize your saturation and brightness/lightness, you should take the noise out of the image.
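A sketch of that idea in MATLAB (rgb2hsv implements HSV, the same model as HSB; 'RfPqv.png' is the linked picture):
RGB = imread('RfPqv.png');
HSV = rgb2hsv(RGB);
HSV(:,:,2) = 1;                % flatten saturation
HSV(:,:,3) = 1;                % flatten value/brightness
hueOnly = hsv2rgb(HSV);        % only the hue information remains
imshow(hueOnly);
% or binarize directly on the untouched hue channel
BW = imbinarize(HSV(:,:,1));
figure; imshow(BW);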
