What is a good way to explain a boxplot? Is it necessary to elaborate in the report, or can we simply write a one-line answer for the findings? It should not be mandatory to explain it at length. How should I explain the attached graph?
For correlation: what does it mean in the given figure?
I am new to image processing and I want to detect cracks. Can anyone help me?
From the description you're presenting, I believe the best way to do this is to build a binary mask and apply it with the cv2.bitwise_and() function.
You can do this colour segmentation using two thresholds: the minimum and maximum colour values for the colour of the cracks.
Another solution is to use Otsu's thresholding method, which will probably generate the best values for your mask.
After masking the image, try to extract the contours of the cracks with the cv2.findContours() function. (Check this link, which describes how to implement that function.)
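To make the steps concrete, here is a minimal sketch of that pipeline in OpenCV Python, not a definitive implementation; it assumes OpenCV 4 and a placeholder file name, and that the cracks are darker than the surrounding surface:

    import cv2

    img = cv2.imread("crack.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name
    blurred = cv2.GaussianBlur(img, (5, 5), 0)

    # Otsu picks the threshold automatically; THRESH_BINARY_INV assumes the cracks
    # are darker than the surrounding surface.
    _, mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)

    # Outline the detected cracks on a colour copy of the input.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    output = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
    cv2.drawContours(output, contours, -1, (0, 0, 255), 2)
    cv2.imwrite("cracks_outlined.jpg", output)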
I have some doubts about the findHomography algorithm, so I wrote a program to test it. In this program I rotate an image and look for descriptors in the original and rotated images. After matching, I use findHomography to retrieve the transformation and compute the squared error for the RANSAC, LMEDS and RHO methods. I also wrote my own Levenberg-Marquardt implementation (using Numerical Recipes). I can add some noise to the point locations. The NR algorithm is best without noise. The problem is when the noise increases: NR stays best, while the other methods (RANSAC, LMEDS and RHO) become completely wrong. I fit only six parameters using NR; I think it is the same in findHomography (see the original post).
Anyone can check my code here. If you want to test with NR, you can download the full code on GitHub.
Is my code correct? Why are the OpenCV results always worse than mine?
PS: The original post is on answers.opencv.org, with three links, under this title:
FindHomography algorithm : some doubt
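For reference, a minimal sketch (my own illustration, not the poster's code) of the kind of experiment described in this question: rotate an image, match ORB descriptors, estimate the homography with each robust method, and measure the reprojection error after adding synthetic noise to the matched points. The file name, noise level, and RANSAC threshold are placeholders.

    import cv2
    import numpy as np

    img = cv2.imread("test.png", cv2.IMREAD_GRAYSCALE)  # placeholder input image
    h, w = img.shape
    R = cv2.getRotationMatrix2D((w / 2, h / 2), 30.0, 1.0)
    rotated = cv2.warpAffine(img, R, (w, h))

    # Detect and match ORB descriptors between the original and rotated images.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img, None)
    kp2, des2 = orb.detectAndCompute(rotated, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    noisy = dst + np.random.normal(0, 2.0, dst.shape).astype(np.float32)  # add noise

    for name, method in [("RANSAC", cv2.RANSAC), ("LMEDS", cv2.LMEDS), ("RHO", cv2.RHO)]:
        H, _ = cv2.findHomography(src, noisy, method, 3.0)
        if H is None:
            continue
        proj = cv2.perspectiveTransform(src, H)
        err = np.mean(np.sum((proj - dst) ** 2, axis=2))  # squared error vs. clean points
        print(name, err)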
I am new to Matlab and I would like to use Matlab to compare the following pictures and find out the differences between them.
Real picture:
Reference picture:
First, the system should match the real picture against the reference picture.
Second, the system should match the modified picture against the reference picture and highlight the differences.
Please advise on my doubts:
How can I measure the similarity between two totally different images? Should I selectively compare parts of both images? I have an idea of using a normalized histogram to find the peak match.
Thank you.
There are many things people do. This is an active research area called image matching.
Sometimes people do template matching, which means matching the entire reference picture against the real picture at many locations and many scales. You can read more about this particular technique here: http://en.wikipedia.org/wiki/Template_matching
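Here is a minimal sketch of that idea in OpenCV Python (the question mentions Matlab, but the technique carries over; the file names and the scale range are placeholders, not a tested pipeline):

    import cv2
    import numpy as np

    scene = cv2.imread("real_picture.png", cv2.IMREAD_GRAYSCALE)
    template = cv2.imread("reference_picture.png", cv2.IMREAD_GRAYSCALE)

    best = None
    # Slide the template over the scene at several scales, keeping the best
    # normalized correlation score.
    for scale in np.linspace(0.5, 1.5, 11):
        resized = cv2.resize(template, None, fx=scale, fy=scale)
        if resized.shape[0] > scene.shape[0] or resized.shape[1] > scene.shape[1]:
            continue
        result = cv2.matchTemplate(scene, resized, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if best is None or max_val > best[0]:
            best = (max_val, max_loc, resized.shape)

    score, (x, y), (th, tw) = best
    print("best match score %.3f at (%d, %d), size %dx%d" % (score, x, y, tw, th))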
I don't know whether I should post this question here or not, but if someone knows the answer, please reply.
What are the algorithms for determining which region of an image is text and which is graphics (a figure or diagram)? That is, how can such regions be separated?
Most OCR software, e.g., Ocropus, supports layout analysis, which is what you need.
Mao, Rosenfeld & Kanungo (2003) Document structure analysis algorithms: a literature survey provides a fairly recent survey of layout analysis algorithms.
A first step would probably be to isolate the sharp contrast between text and graphics. This can be done by taking the derivative (gradient) of the image: it shows where the intensity changes rapidly, and the high-valued regions can then be compared against text-like shapes.
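As a rough illustration of that gradient idea (my own sketch, with heuristic thresholds and a placeholder file name that would need tuning per document):

    import cv2
    import numpy as np

    img = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)  # placeholder scanned page

    # Morphological gradient highlights sharp intensity changes (edges of glyphs).
    grad = cv2.morphologyEx(img, cv2.MORPH_GRADIENT, np.ones((3, 3), np.uint8))
    _, edges = cv2.threshold(grad, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

    # Close horizontally so neighbouring characters merge into text-line blobs.
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((1, 15), np.uint8))
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    text_boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        density = cv2.countNonZero(edges[y:y + h, x:x + w]) / float(w * h)
        if w > 2 * h and density > 0.3:  # heuristic: wide, short, edge-dense regions
            text_boxes.append((x, y, w, h))
    print("candidate text regions:", text_boxes)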
So we have histograms... Is there any algorithm to generate the original image from them?
No, because the histograms simply plot the number of pixels of various tones, not their locations.
That's like saying: "Can you reconstruct a specific painting (not knowing which one) from a couple of pots of paint?"
It's not possible to reconstruct an unknown picture from a histogram, but that doesn't mean there's nothing you can do. If you have a database of possible pictures, you can "fingerprint" each picture, by generating its histogram, and then use the histogram you have to search over that database of fingerprints to identify which picture it is. If you find a decent distance metric, you could possibly even use this to find pictures that are "similar" (in some very rough sense) to the picture you have.
You can't use this to say "here's a picture of the Tower of London; now find me other pictures of the Tower of London" but you could use it to say "here's a picture of a sunset; find me pictures that contain a similar set of colours", which might end up being useful to some extent.
Of course it might turn out that your evening landscape picture has a very similar histogram to something completely irrelevant, and may have a completely different histogram to a picture that, to a human, looks similar. So it's not a robust approach. But if all you have is the histogram, then it may be worth looking into what can be achieved.
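A minimal sketch of that fingerprinting idea with OpenCV colour histograms (the file names are placeholders, and the 8x8x8 binning and Bhattacharyya distance are just one reasonable choice):

    import cv2

    def fingerprint(path):
        img = cv2.imread(path)
        # 8x8x8 bins over the BGR channels, normalized so image size doesn't matter.
        hist = cv2.calcHist([img], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        return cv2.normalize(hist, hist).flatten()

    database = {name: fingerprint(name) for name in ["a.jpg", "b.jpg", "c.jpg"]}
    query = fingerprint("query.jpg")

    # Smaller Bhattacharyya distance means more similar colour distributions.
    ranked = sorted(database, key=lambda n: cv2.compareHist(database[n], query,
                                                            cv2.HISTCMP_BHATTACHARYYA))
    print("closest match:", ranked[0])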
No. Histograms are lossy.
A histogram doesn't carry any spatial information: it's not possible to recover the x, y position of the pixels that contributed to a particular histogram bin. The histogram only represents the image's global brightness information.
A histogram only carries information about the distribution of tones in an image. It is an aggregation of the discrete information encoded in the original image: how many pixels have particular values. Thus, it's not possible to regenerate the original image without additional details such as the locations of those pixels.
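A tiny demonstration of why the mapping cannot be inverted (my own example, with a placeholder file name): shuffling an image's pixels changes the picture completely but leaves its histogram untouched.

    import cv2
    import numpy as np

    img = cv2.imread("photo.png", cv2.IMREAD_GRAYSCALE)  # placeholder input

    # Shuffle the pixel positions; the set of pixel values stays exactly the same.
    shuffled = img.flatten()
    np.random.shuffle(shuffled)
    shuffled = shuffled.reshape(img.shape)

    hist_a = cv2.calcHist([img], [0], None, [256], [0, 256])
    hist_b = cv2.calcHist([shuffled], [0], None, [256], [0, 256])
    print("histograms identical:", np.array_equal(hist_a, hist_b))  # True
    print("images identical:", np.array_equal(img, shuffled))       # almost certainly False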