I'm playing with the SIFT and SURF algorithms. I'm trying to figure out why both SIFT and SURF detect a keypoint at the center of the circle shown in the image below. Any ideas? The first photo is Harris corner detection, the second SIFT, and the third SURF.
SIFT (Distinctive Image Features from Scale-Invariant Keypoints) detects
stable keypoint locations using scale-space extrema in the
difference-of-Gaussian function.
From what I understand, a blob at the current scale can appear as a small dot at another scale in scale space.
The same reasoning applies to SURF (Speeded Up Robust Features).
Hi, the first picture shows the damage and delamination in question.
What I want to do is remove the intact area and visualize only the damage area marked by black curves (I want everything white, or blank, except the damage area).
I tried a thresholding method, but it doesn't seem to be effective.
I found that histogram equalization and the Laplacian of Gaussian filter are useful for edge detection.
Are there any other image processing tools that would get me what I want?
Would histogram equalization or a Laplacian of Gaussian filter be good enough?
Any tips are welcome!
Thanks in advance guys
In the following image, having detected all corners, I want to determine how to move each corner to restore an undistorted checkerboard while minimizing the total distance of the moves. Any suggestions?
One reliable way is to use the OpenCV library: study the algorithms it uses for camera calibration and apply them.
After getting an edge image using Canny, what is the edge image actually used for?
Are there any use cases for an edge image?
Finding an object and segmenting it from the image? Or getting the shape, area, and perimeter of the object?
As Wikipedia puts it,
Edge detection is the name for a set of mathematical methods which
aim at identifying points in a digital image at which the image
brightness changes sharply or, more formally, has discontinuities. The
points at which image brightness changes sharply are typically
organized into a set of curved line segments termed edges.
You can use this to find the area of interest in an image programmatically. For example, if you have a laser scan of an indoor floor map and you want to detect the actual area a robot can visit, edge detection is useful. You can search for more real-world examples; this is just one.
I am currently building an augmented reality application and am stuck on a problem that seems quite easy but is very hard for me. The problem is as follows:
My device's camera is calibrated and detects a 2D marker (such as a QR code). I know the focal length, the sensor's position, the distance between my camera and the center of the marker, the real size of the marker, and the coordinates of the marker's 4 corners and of its center on the 2D image I got from the camera. See the following image:
On the image, we know the distances a, b, c, d and the coordinates of the red dots.
What I need to know is the position and orientation of the camera with respect to the marker (as represented in the image, the origin is the center of the marker).
Is there an easy and fast way to do this? I tried some methods of my own (using Al-Kashi's formulas), but they ended with too much error :(. Could someone point out a way to get me out of this?
You can find example code for the EPnP algorithm on this webpage. The code consists of one header file and one source file, plus one file for the usage example, so it shouldn't be too hard to include in your project.
Note that this code is released for research/evaluation purposes only, as mentioned on this page.
EDIT:
I just realized that this code needs OpenCV to work. Incidentally, although it adds a fairly big dependency to your project, the current version of OpenCV has a built-in function called solvePnP, which does exactly what you want.
You can compute the homography between the image points and the corresponding world points. From the homography you can then compute the rotation and translation mapping a point from the marker's coordinate system into the camera's coordinate system. The math is described in Zhang's paper on camera calibration.
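The decomposition itself is short. For the plane Z = 0 the homography satisfies H = K [r1 r2 t] up to scale, so dividing out K and normalizing the first column recovers the pose. A sketch with synthetic values (K and the demo pose are made up); with noisy data you would additionally re-orthogonalize R, e.g. via SVD:

```python
import numpy as np

def pose_from_homography(H, K):
    """Recover (R, t) mapping marker (Z = 0 plane) coordinates to camera
    coordinates, given H = s * K [r1 r2 t]."""
    B = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(B[:, 0])  # scale fixed by ||r1|| = 1
    r1 = lam * B[:, 0]
    r2 = lam * B[:, 1]
    r3 = np.cross(r1, r2)                # third column completes R
    t = lam * B[:, 2]
    return np.column_stack([r1, r2, r3]), t

# Demo: build H from a known pose, then recover that pose.
K = np.array([[800.0, 0, 320],
              [0, 800.0, 240],
              [0, 0, 1]])
theta = 0.2
R_true = np.array([[np.cos(theta), 0, np.sin(theta)],
                   [0, 1, 0],
                   [-np.sin(theta), 0, np.cos(theta)]])
t_true = np.array([0.05, -0.02, 0.6])
H = K @ np.column_stack([R_true[:, 0], R_true[:, 1], t_true])

R, t = pose_from_homography(H, K)
```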
Here's an example in MATLAB using the Computer Vision System Toolbox that does most of what you need. It uses the extrinsics function, which computes a 3D rotation and translation from matching image and world points. The points need not come from a checkerboard.
I want to mask an arbitrary convex polygonal area of an image and put another image into that area. I found this posting, but it wasn't clear to me whether it applies only to rectangular areas or also to arbitrary polygons.
The basic flow I am talking about is to have an (x, y) coordinate on the screen serving as the center of my polygon (center in terms of an arbitrary point that is consistent for me). I would like to mask this area, where the new polygonal image would be displayed, while leaving the rest of the screen as is.
Can I do this easily and quickly?
You have to use the stencil buffer. It's basically another type of buffer with a plethora of great applications, and one of the simplest is masking. While I can't recommend an OpenGL ES specific tutorial off the top of my head, I highly recommend reading general tutorials, since it's not that different and is genuinely fascinating.
Try glScissor... it might be enough if a rectangle is all you need.