Looking for method to restore distorted checkerboard image - algorithm

In the following image, having detected all corners, I want to determine how to move each corner so as to restore an undistorted checkerboard while minimizing the total distance of the moves. Any suggestions?

One reliable way is to use the OpenCV library: study the algorithms it uses for camera calibration and apply them.
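If the distortion comes from the lens, OpenCV's calibration pipeline can estimate and undo it directly from the detected corners. A minimal Python sketch, assuming a 9x6 inner-corner board; the file name and board size are placeholders:

```python
# Sketch: detect checkerboard corners, estimate lens distortion, undistort.
import cv2
import numpy as np

img = cv2.imread("checkerboard.png")  # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

pattern_size = (9, 6)  # inner corners per row and column (assumption)
found, corners = cv2.findChessboardCorners(gray, pattern_size)

if found:
    # Ideal board coordinates: a flat grid with z = 0.
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

    # Estimate intrinsics and distortion from this single view
    # (more views give a far more stable estimate).
    ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        [objp], [corners], gray.shape[::-1], None, None)

    undistorted = cv2.undistort(img, K, dist)
```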

Related

What is the most robust way to detect projected rectangular regions in images?

iOS 11 includes a document scanner and uses the class VNDetectRectanglesRequest to find the four corners of a rectangle in an image.
That is the first step for further processing, like warping the projected rectangle to get a straight image of the scanned document where all inner angles are 90°.
What is the algorithm used for that computer vision problem? First detect lines using Hough Transform and then detect corners using FAST? What strategy is used to make it fast and robust?
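Apple does not document the internals, but a common baseline pipeline for this kind of problem (an assumption, not necessarily what the Vision framework does) looks like this in Python/OpenCV: edges, then contours, then a four-point polygon approximation.

```python
# A hedged baseline for projected-rectangle detection, not Apple's actual
# implementation: Canny edges, contours, then 4-point polygon approximation.
import cv2
import numpy as np

img = cv2.imread("document.jpg")  # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in sorted(contours, key=cv2.contourArea, reverse=True):
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) == 4:        # first large 4-sided contour wins
        corners = approx.reshape(4, 2)
        break
```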

Why do SIFT and SURF detect a keypoint in a white circle?

I'm playing with the SIFT and SURF algorithms and trying to figure out why SIFT and SURF detect a keypoint at the center of the circle shown in the image below. Any ideas? The first photo is Harris corner detection, the second SIFT, and the third SURF.
SIFT (Distinctive Image Features from Scale-Invariant Keypoints) detects stable keypoint locations using scale-space extrema in the difference-of-Gaussian function.
From what I have understood, a blob at the current scale can be seen as a small dot at another scale, which is why the circle's center shows up as a difference-of-Gaussian extremum.
It should be the same thing with SURF (SURF: Speeded Up Robust Features).
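A quick way to see this blob behavior (a sketch using OpenCV's SIFT on a synthetic image):

```python
# Demonstrates that SIFT fires on a plain circle: the filled disc is a
# blob, i.e. a scale-space extremum of the difference-of-Gaussian.
import cv2
import numpy as np

img = np.zeros((200, 200), np.uint8)
cv2.circle(img, (100, 100), 30, 255, -1)  # white filled circle on black

sift = cv2.SIFT_create()
keypoints = sift.detect(img, None)
for kp in keypoints:
    print(kp.pt, kp.size)  # expect a keypoint near (100, 100), size ~ diameter
```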

Looking for algorithm to map an image to 4 sides polygon

This is more a math question than a programming question, apart from the fact that I must implement it in Delphi inside a graphics application.
Assume I have a picture of a sheet of paper. The actual sheet is of course a rectangular area. When the picture is shown on a computer screen, that area is no longer rectangular, because the camera was not positioned perfectly above the sheet when the picture was taken. There are all kinds of perspective effects that result in deformations.
My application needs to tweak the image so that the original rectangular area is displayed as a rectangular area on screen.
Most photo-processing applications have an interactive tool for this: the user draws a quadrilateral on screen around the rectangular object and then drags each corner to deform the displayed area until the real area looks rectangular. What I'm looking for is the algorithm behind that computation.
You need to split the problem into two steps: find the edges or corners of the sheet, then remap the pixels.
Finding the corners or edges is a really hard problem in general, since they might be invisible, outside the picture, obstructed, bent, or deformed. Assuming you have a very simple setup (uniform black background, white paper, very little distortion), you could run an edge-detection kernel over the image and then find the four outer edges. If you find the edges you can intersect them to find the corners, and the other way around.
Once you have the corners, run an interpolation over the image to map the pixels onto the rectangle you want. You should be able to get the graphics engine to do this for you if you provide the corner coordinates as texture coordinates for the rectangle and map the image as a texture.
I made it sound simple, but you will encounter many parameters to set and experiment with.
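As a sketch of the first step under those simple-setup assumptions (Python/OpenCV here rather than Delphi; the thresholds are guesses to tune): detect the dominant lines, then intersect them to get candidate corners.

```python
# Sketch: find dominant edges with Canny + Hough, then intersect line pairs
# to get candidate corners. Assumes a high-contrast sheet on a dark background.
import cv2
import numpy as np

gray = cv2.imread("sheet.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder name
edges = cv2.Canny(gray, 50, 150)
lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=120)

def intersect(l1, l2):
    # Each line is (rho, theta): x*cos(theta) + y*sin(theta) = rho.
    (r1, t1), (r2, t2) = l1[0], l2[0]
    A = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    if abs(np.linalg.det(A)) < 1e-6:   # near-parallel lines: no intersection
        return None
    return np.linalg.solve(A, np.array([r1, r2]))

corners = [p for i in range(len(lines)) for j in range(i + 1, len(lines))
           if (p := intersect(lines[i], lines[j])) is not None]
```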
It seems (since you mentioned bilinear interpolation) that you need perspective transformations.
There is an implementation of perspective transformations (mapping an arbitrary convex quad to a rectangle and vice versa) in the Anti-Grain Geometry library (exe example). There is also a Delphi port.
With agg_trans_perspective one can calculate the matrix of the perspective transformation and then apply it to map coordinates from one quad to another.
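The same quad-to-rectangle mapping can be sketched with OpenCV instead of AGG (an illustration, not the Delphi port's API): compute the 3x3 matrix from four point pairs, then warp the image or map individual coordinates.

```python
# Map the photographed quad back to an upright rectangle. The corner
# coordinates here are placeholders; in practice they come from step 1.
import cv2
import numpy as np

src = np.float32([[110, 80], [520, 60], [560, 420], [90, 390]])   # quad corners
dst = np.float32([[0, 0], [500, 0], [500, 400], [0, 400]])        # target rect

H = cv2.getPerspectiveTransform(src, dst)

img = cv2.imread("sheet.jpg")                        # placeholder name
rectified = cv2.warpPerspective(img, H, (500, 400))  # remap all pixels

# Or map a single coordinate, as agg_trans_perspective does:
pt = cv2.perspectiveTransform(np.float32([[[300, 250]]]), H)
```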

What's the use of edge detection in an image?

After getting an edge image using Canny, what is the edge image useful for?
Are there use cases for it, such as finding an object and segmenting it from the image, or getting the shape, area, and perimeter of the object?
As Wikipedia puts it:
Edge detection is the name for a set of mathematical methods which aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. The points at which image brightness changes sharply are typically organized into a set of curved line segments termed edges.
You can use this to programmatically find the regions of interest in an image. For example, if you have a laser scan of an indoor floor map and you want to detect the actual area a robot can visit, edge detection is useful. You can search for more on this; it's just one example of real-world usage.
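To make the measurement use case from the question concrete, here is a minimal sketch: extract contours from the Canny edge image and read off each object's area and perimeter.

```python
# From edge image to object measurements: contours give shape, area,
# and perimeter, which covers the use case asked about above.
import cv2

gray = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)  # placeholder name
edges = cv2.Canny(gray, 100, 200)

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, closed=True)
    print(area, perimeter)
```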

Find my camera's 3D position and orientation according to a 2D marker

I am currently building an augmented-reality application and am stuck on a problem that seems quite easy but is very hard for me... The problem is as follows:
My device's camera is calibrated and detects a 2D marker (such as a QR code). I know the focal length, the sensor's position, the distance between my camera and the center of the marker, the real size of the marker, and the coordinates of the marker's four corners and of its center on the 2D image I got from the camera. See the following image:
On the image, we know the distances a, b, c, d and the coordinates of the red dots.
What I need to know is the position and orientation of the camera with respect to the marker (as represented in the image, the origin is the center of the marker).
Is there an easy and fast way to do this? I tried a method of my own (using Al-Kashi's formula, i.e. the law of cosines), but it ended with too many errors :(. Could someone point me to a way out of this?
You can find some example code for the EPnP algorithm on this webpage. The code consists of one header file and one source file, plus one file for the usage example, so it shouldn't be too hard to include in your project.
Note that this code is released for research/evaluation purposes only, as mentioned on this page.
EDIT:
I just realized that this code needs OpenCV to work. That said, although it would add a pretty big dependency to your project, the current version of OpenCV has a built-in function called solvePnP, which does what you want.
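A minimal solvePnP sketch (using the Python binding; the marker size, corner coordinates, and intrinsics below are placeholders): the four marker corners in marker coordinates plus their detected image projections give the camera pose.

```python
# Pose from 4 marker corners with cv2.solvePnP. The camera position in
# marker coordinates is then -R^T * t, which is what the question asks for.
import cv2
import numpy as np

s = 0.05  # marker half-size in meters (placeholder)
object_points = np.float32([[-s, s, 0], [s, s, 0],
                            [s, -s, 0], [-s, -s, 0]])       # marker frame, z = 0
image_points = np.float32([[310, 210], [420, 205],
                           [425, 330], [305, 335]])         # detected corners (placeholder)

K = np.float32([[800, 0, 320],   # placeholder intrinsics from calibration
                [0, 800, 240],
                [0, 0, 1]])
dist = np.zeros(5)               # assume negligible lens distortion here

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)
camera_position = -R.T @ tvec    # camera center in the marker's frame
```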
You can compute the homography between the image points and the corresponding world points. From the homography you can then compute the rotation and translation that map a point from the marker's coordinate system into the camera's coordinate system. The math is described in Zhang's paper on camera calibration.
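A sketch of that decomposition (assuming known intrinsics K and planar marker points with z = 0; the normalization follows Zhang's formulation, where H = K [r1 r2 t] up to scale):

```python
# Pose from a plane-to-image homography, as in Zhang's calibration paper.
import cv2
import numpy as np

# world_xy: Nx2 marker points (z = 0); img_pts: Nx2 image points; K: intrinsics.
def pose_from_homography(world_xy, img_pts, K):
    H, _ = cv2.findHomography(world_xy, img_pts)
    B = np.linalg.inv(K) @ H
    scale = 1.0 / np.linalg.norm(B[:, 0])   # ||K^-1 h1|| should be 1
    r1 = scale * B[:, 0]
    r2 = scale * B[:, 1]
    r3 = np.cross(r1, r2)                   # complete the rotation
    t = scale * B[:, 2]
    R = np.column_stack([r1, r2, r3])
    # Re-orthogonalize R via SVD, since noise breaks orthonormality.
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt, t
```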
Here's an example in MATLAB using the Computer Vision System Toolbox, which does most of what you need. It uses the extrinsics function, which computes a 3D rotation and a translation from matching image and world points. The points need not come from a checkerboard.
