I am new to image processing and I want to detect cracks. Can anyone help me?
From your description, I believe the best way to do this is with a binary mask applied via the cv2.bitwise_and() function.
You can do this color segmentation using two thresholds: the minimum and the maximum color values for the color of the cracks.
Another solution may be Otsu's thresholding method, which will probably generate the best values for your mask.
After masking the image, you'll have to extract the contours of the cracks. You can use the cv2.findContours() function for this. (Check this link that describes how to implement it.)
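As a concrete sketch of the Otsu route, here is a pure-numpy version (the toy 8x8 "crack" image and its pixel values are made up for illustration; in OpenCV itself, cv2.threshold with cv2.THRESH_OTSU plus cv2.bitwise_and do the same job):

```python
import numpy as np

def otsu_threshold(gray):
    """Pick the threshold that maximizes between-class variance,
    the same criterion cv2.THRESH_OTSU uses, for a uint8 image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0, sum0 = 0.0, 0.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0                    # mean of the "background" class
        m1 = (sum_all - sum0) / w1        # mean of the "crack" class
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Toy image: dark background with one brighter "crack" row.
img = np.full((8, 8), 30, dtype=np.uint8)
img[3, :] = 200

t = otsu_threshold(img)
mask = (img > t).astype(np.uint8) * 255          # binary mask of the cracks
crack_only = np.where(mask == 255, img, 0)       # same idea as cv2.bitwise_and(img, img, mask=mask)
```

The mask can then be fed to cv2.findContours to get the crack outlines.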
How can I detect cracks like the ones you can see in the attached images? I have tried some OpenCV algorithms like blob detection (cv::SimpleBlobDetector) but couldn't get any results.
It is a cropped image; the full image has some other features as well, so I am not sure thresholding can work, because I have to get the bounding box of the detected crack. One way is to assign several regions of interest (ROIs) and try to detect within each ROI, but this crack doesn't appear at the same location in the image. Any idea?
Can this problem be solved with machine/deep learning (like object detection) if I train a model on a crack dataset? Because the crack part of the image doesn't have many features, I am not sure this method will work. Please guide me.
Thanks.
These cracks are difficult to detect because the image is noisy (presumably X-ray) and the contrast poor, so the signal-to-noise ratio is low.
I would start by applying a Gaussian filter for denoising, but only in the horizontal direction, to preserve the horizontal edges. Then detect the horizontal edges.
This is about what a Gabor filter does. You can try different orientations.
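A minimal numpy sketch of building such a Gabor kernel (all parameter values here are illustrative, not tuned for any real image; OpenCV users can get an equivalent kernel from cv2.getGaborKernel and apply it with cv2.filter2D):

```python
import numpy as np

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel: a Gaussian envelope times a cosine wave.

    theta rotates the filter; sweep it over several orientations and keep
    the strongest response to find edges in the matching direction.
    """
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    x_t = x * np.cos(theta) + y * np.sin(theta)      # rotated coordinates
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / lambd + psi)
    return envelope * carrier

# Try a few orientations, as suggested above.
kernels = [gabor_kernel(theta=t) for t in (0.0, np.pi / 4, np.pi / 2)]
```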
Use mathematical morphology operations.
For example, in Matlab:
a = imread('in.png');
se = strel('disk', 7);
b = imgaussfilt(a, 1.3);    % denoise
c = b - imopen(b, se);      % top-hat: keep structures smaller than the disk
c = 3*c;                    % boost contrast
d = imclearborder(c);       % drop structures touching the image border
imwrite(d, 'out.png');
I am using Kinect2 with Matlab; however, the depth images shown in the video stream are much brighter than the ones I save in Matlab.
Does anyone know the solution to this problem?
Firstly, you should provide the code that you are using at the moment so we can see where you are going wrong. This is generic advice for posting on any forum: provide all your information so others can help.
If you use the histogram to check your depth values, you will see that the image is a uint8 image with values from 0 to 255. Since the depth distances are rescaled to grayscale values, the result often doesn't span the full range, so displaying it with imshow will not provide enough contrast.
An easy workaround for display is to apply some form of histogram equalization, such as:
figure(1);
C= adapthisteq(A, 'clipLimit',0.02,'Distribution','rayleigh');
imshow(C);
The image will be contrast adjusted for display.
I used mat2gray and it solved the problem.
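For reference, mat2gray simply rescales the data to the [0, 1] range, which is why it fixes the contrast. A numpy equivalent might look like this (the depth values below are made up):

```python
import numpy as np

def mat2gray_like(a):
    """Numpy equivalent of Matlab's mat2gray: linearly rescale to [0, 1]."""
    a = a.astype(np.float64)
    lo, hi = a.min(), a.max()
    if hi == lo:
        return np.zeros_like(a)     # flat image: nothing to stretch
    return (a - lo) / (hi - lo)

# Toy depth map in millimeters.
depth = np.array([[500, 1500],
                  [2500, 4500]])
g = mat2gray_like(depth)            # full-contrast image for display
```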
If anybody is familiar with classification in remote sensing: you know that we first choose a region on the image and use the information in that region to extract statistical parameters.
How can I select this area of the image in Matlab?
I think I found the answer to my own question.
As our friend user2466766 said, I used roipoly to get a mask image, and then I multiplied this mask with my image using '.*'.
Then I extracted the nonzero elements of the resulting matrix with the function nonzeros.
Now I have the digital numbers of the region within the polygon in a column vector that can be used to calculate statistical parameters such as the mean and variance.
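The mask, multiply, nonzeros workflow above can be sketched in numpy as follows (a hand-made rectangular mask stands in for the polygon roipoly would return; the image values are made up):

```python
import numpy as np

# Toy "band" of a scene.
img = np.arange(1, 26, dtype=np.float64).reshape(5, 5)

# Stand-in for the roipoly output: 1 inside the region, 0 outside.
mask = np.zeros((5, 5))
mask[1:4, 1:4] = 1.0

roi = img * mask                 # the '.*' step from the answer
values = roi[roi != 0]           # the nonzeros() step
mean, var = values.mean(), values.var()
```

One caveat of the nonzeros approach: genuinely zero-valued pixels inside the polygon get dropped too. Boolean indexing, `img[mask.astype(bool)]`, keeps them.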
Try roipoly. It allows you to create a mask image. If you are looking for more flexibility you can use poly2mask.
I am new to OpenCV. What I am trying to do is measure liquid level using a single digital camera. After searching a lot, I found a research paper with the algorithm and steps at the link below.
http://personnel.sju.edu.tw/改善師資研究成果/98年度/著作/22.pdf
But this algorithm has a step where I need to apply chrominance filtering to the captured image. OpenCV doesn't come with such built-in functionality, so how can I implement chrominance filtering? Is there any way to do this? Please help me, thanks.
I didn't manage to access the link you provided, but I guess what you want to do is segment your image based on color (chroma). To do that, you first have to convert your BGR image (RGB but with the red and blue channels swapped, which is the natural ordering for OpenCV) to a chrominance-based space. You can do this using the function cv::cvtColor() with an argument such as CV_BGR2YUV or CV_BGR2YCrCb.
See the full doc of the conversion function here.
After that, you can split the channels of your image with cv::split(), e.g.:
cv::Mat yuvImage;
std::vector<cv::Mat> yuvChannels;
cv::split(yuvImage, yuvChannels);
Then work on the image of the resulting vector that contains your channel of interest.
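If you want to see the arithmetic behind the conversion, here is a numpy sketch of BGR to YCrCb using the standard JPEG-style transform (the same one applied for 8-bit images by the conversion above; the test pixel is arbitrary):

```python
import numpy as np

def bgr_to_ycrcb(bgr):
    """JPEG-style BGR -> YCrCb on float data.

    Y  = 0.299 R + 0.587 G + 0.114 B   (luma)
    Cr = (R - Y) * 0.713 + 128          (red-difference chroma)
    Cb = (B - Y) * 0.564 + 128          (blue-difference chroma)
    """
    b, g, r = bgr[..., 0], bgr[..., 1], bgr[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = (r - y) * 0.713 + 128.0
    cb = (b - y) * 0.564 + 128.0
    return np.stack([y, cr, cb], axis=-1)

# Sanity check with one pure-red pixel (BGR order).
px = np.array([[[0.0, 0.0, 255.0]]])
y, cr, cb = bgr_to_ycrcb(px)[0, 0]
```

Thresholding the Cr or Cb channel then gives you the chrominance filtering from the paper.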
I need to remove the blur from this image:
Image source: http://www.flickr.com/photos/63036721#N02/5733034767/
Any Ideas?
Although previous answers are right when they say that you can't recover lost information, you could investigate a little and make a few guesses.
I downloaded your image in what seems to be the original size (75x75) and you can see here a zoomed segment (one little square = one pixel)
It seems a pretty linear grayscale! Let's verify it by plotting the intensities of the central row. In Mathematica:
ListLinePlot[First /@ ImageData[i][[38]][[1 ;; 15]]]
So, it is effectively linear, starting at zero and ending at one.
So you may guess it was originally a B&W image, linearly blurred.
The easiest way to deblur that (not always giving good results, but enough in your case) is to binarize the image with a 0.5 threshold. Like this:
And this is a possible way. Just remember we are guessing a lot here!
HTH!
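In code, that 0.5-threshold binarization is a one-liner; a numpy sketch (the values below are made up to stand in for the linear blurred edge):

```python
import numpy as np

# A linear ramp standing in for the linearly blurred edge in the answer.
edge = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])

# Binarize at 0.5: everything above the threshold becomes white.
binary = (edge > 0.5).astype(np.float64)
```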
You cannot generally retrieve missing information.
If you know what it is an image of (in this case a Gaussian or Airy profile, so probably an out-of-focus image of a point source), you can determine the characteristics of the point.
Another technique is to try to determine the characteristics of the blurring, especially if you have many images from the same blurred system. Then iteratively create a possible source image, blur it by that convolution, and compare it to the blurred image.
This is the general technique used to make radio astronomy source maps (images), and it was used for the flawed Hubble Space Telescope images.
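One classic instance of that guess, blur, compare, correct loop is Richardson-Lucy deconvolution (the answer doesn't name a specific scheme, so this choice is illustrative). A 1-D numpy sketch, assuming a known, normalized PSF and non-negative data:

```python
import numpy as np

def conv(x, k):
    """'same'-size 1-D convolution."""
    return np.convolve(x, k, mode="same")

def richardson_lucy(observed, psf, iters=50):
    """Iteratively refine an estimate: blur it with the PSF, compare
    against the observation, and multiply in the correction ratio."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    estimate = np.full_like(observed, observed.mean())  # flat initial guess
    for _ in range(iters):
        blurred = conv(estimate, psf)
        ratio = observed / np.maximum(blurred, 1e-12)   # avoid divide-by-zero
        estimate = estimate * conv(ratio, psf_mirror)
    return estimate

# Toy example: a single bright point blurred by a small PSF.
truth = np.zeros(21)
truth[10] = 1.0
psf = np.array([0.25, 0.5, 0.25])
observed = conv(truth, psf)
restored = richardson_lucy(observed, psf, iters=100)
```

The restored signal re-concentrates the flux that the PSF spread out, which is exactly what the radio-astronomy mapping described above does in 2-D.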
When working with images one of the most common things is to use a convolution filter. There is a "sharpen" filter that does what it can to remove blur from an image. An example of a sharpen filter can be found here:
http://www.panoramafactory.com/sharpness/sharpness.html
Some programs like Matlab make convolution really easy: conv2(A,B).
And most decent photo editing software has these filters under some name or another (usually "sharpen").
But keep in mind that filters can only do so much. In theory, the actual information has been lost by the blurring process and it is impossible to perfectly reconstruct the initial image (no matter what TV will lead you to believe).
In this case it seems like you have a very simple image with only black and white. Knowing this about your image you could always use a simple threshold. Set everything above a certain threshold to white, and everything below to black. Once again most photo editing software makes this really easy.
You cannot retrieve missing information, but under certain assumptions you can sharpen.
Try unsharp masking.
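A numpy sketch of unsharp masking in 1-D (the kernel, amount, and edge values are all illustrative): add back the difference between the signal and a blurred copy of it.

```python
import numpy as np

def blur1d(x, k):
    """'same'-size convolution with a normalized kernel."""
    return np.convolve(x, k / k.sum(), mode="same")

def unsharp(x, amount=1.0):
    """Unsharp mask: boost the detail the blur removed.
    amount controls how strongly the detail is added back."""
    smooth = blur1d(x, np.array([1.0, 2.0, 1.0]))
    return x + amount * (x - smooth)

# A soft step edge gets steeper after unsharp masking.
edge = np.array([0.0, 0.0, 0.25, 0.5, 0.75, 1.0, 1.0])
sharp = unsharp(edge)
```

Note the characteristic halos it creates: values just below the ramp undershoot 0 and values just above it overshoot 1, which is why real implementations clip or limit the amount.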