I am trying to understand 3D reconstruction of an object using a 3D structured-light scanner, and I am stuck at the point where a decoded set of camera-projector correspondences is used to reconstruct a 3D point cloud. How exactly is the 3D point cloud obtained from those correspondences? I want to understand the mathematical implementation, not the code implementation.
Assuming you used a structured-light method that projects some sort of lines (vertical or horizontal, like binary coding or De Bruijn patterns), the idea is as follows:
A light plane goes through the projector's perspective center and the line in the pattern.
The light plane normal needs to be rotated by the projector's rotation matrix relative to the camera (or the world, depending on the calibration). The rotation of the light plane can be avoided if you treat the projector's perspective center as the system origin.
Using the correspondences, you find a pixel in the image that matches the light plane. Now you need to define a ray that goes from the camera's perspective center through that pixel in the image, and then rotate this ray by the camera rotation (relative to the projector or the world, again depending on the calibration).
Intersect the light plane with that ray; the intersection is the reconstructed 3D point. How to compute it is described on Wikipedia: https://en.wikipedia.org/wiki/Line%E2%80%93plane_intersection
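If it helps to see the intersection numerically, here is a minimal sketch of that last step; the camera center, pixel direction, projector center and plane normal below are placeholders for the values you would obtain from your own calibration and decoded pattern.

```python
import numpy as np

def intersect_ray_plane(ray_origin, ray_dir, plane_point, plane_normal):
    """Intersect the camera ray with the light plane.
    The ray is ray_origin + s * ray_dir; the plane passes through plane_point
    with normal plane_normal. Returns the 3D intersection point."""
    denom = np.dot(plane_normal, ray_dir)
    if abs(denom) < 1e-9:
        return None  # ray is (nearly) parallel to the light plane
    s = np.dot(plane_normal, plane_point - ray_origin) / denom
    return ray_origin + s * ray_dir

# Placeholders: camera center, back-projected pixel direction (already rotated
# into the common frame), and a light plane through the projector center.
cam_center   = np.array([0.0, 0.0, 0.0])
pixel_dir    = np.array([0.02, -0.01, 1.0])   # direction through the matched pixel
proj_center  = np.array([0.2, 0.0, 0.0])      # projector perspective center
plane_normal = np.array([1.0, 0.0, -0.3])     # normal of the decoded light plane

point_3d = intersect_ray_plane(cam_center, pixel_dir, proj_center, plane_normal)
```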
As you can see, the mathematical problem (the 3D reconstruction itself) is very simple. The hard parts are recognizing the projected pattern in the image (easier than regular stereo matching, but still hard) and the calibration (finding the relative orientation between the camera and the projector).
I want to match pixels between calibrated 3D lidar data and 2D camera data. I will use this to train a network. Can the result of this matching be considered labeled data? If so, can anyone help me to achieve this? Any suggestions will be appreciated.
At a high level, assuming you have some transformation (rotation/translation) between your camera and your lidar, and the calibration matrix of the camera, you have a 3D point cloud and a 2D projection of it.
That is, if you project the 3D point cloud onto the image plane of the camera, you get an (x, y)_camera point (in the image frame) for every (x, y, z)_world point, together with its RGB/depth value.
Whether this is helpful to train on depends on what you're trying to achieve. If you're trying to find where the camera is, or calibrate it, given the (RGB-)D data and image(s), that could be done better with a Perspective-n-Point (PnP) algorithm (the lidar could make it easier, perhaps, if it built up a "real" view of the world to compare against). Whether it would be considered labeled data depends on how you are trying to label it.
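A rough sketch of that projection, assuming a known lidar-to-camera rotation R, translation t, and camera matrix K (the numeric values below are placeholders, not from any real calibration):

```python
import numpy as np

# Placeholder extrinsics (lidar -> camera) and intrinsics.
R = np.eye(3)
t = np.array([[0.0], [0.0], [0.1]])
K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])

# N x 3 lidar points in the lidar frame (placeholder data).
points_lidar = np.array([[ 1.0, 0.2, 5.0],
                         [-0.5, 0.1, 8.0]])

# Transform into the camera frame, keep points in front of the camera, project.
points_cam = (R @ points_lidar.T + t).T
in_front = points_cam[:, 2] > 0
uvw = (K @ points_cam[in_front].T).T
pixels = uvw[:, :2] / uvw[:, 2:3]

# Each row of `pixels` is the (x, y)_camera location paired with the
# corresponding (x, y, z)_world point (and its intensity/RGB, if available).
```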
I am currently building an Augmented Reality application and am stuck on a problem that seems quite easy but is very hard for me. The problem is as follows:
My device's camera is calibrated and detects a 2D marker (such as a QR code). I know the focal length, the sensor's position, the distance between my camera and the center of the marker, the real size of the marker, and the coordinates of the 4 corners of the marker and of its center in the 2D image I got from the camera. See the following image:
In the image, we know the distances a, b, c, d and the coordinates of the red dots.
What I need to know is the position and the orientation of the camera relative to the marker (as represented in the image, the origin is the center of the marker).
Is there an easy and fast way to do this? I tried a method I came up with myself (using Al-Kashi's formula, i.e. the law of cosines), but it ended up with too many errors :(. Could someone point me to a way out of this?
You can find some example code for the EPnP algorithm on this webpage. The code consists of one header file and one source file, plus one file for a usage example, so it shouldn't be too hard to include in your project.
Note that this code is released for research/evaluation purposes only, as mentioned on this page.
EDIT:
I just realized that this code needs OpenCV to work. By the way, although it would add a pretty big dependency to your project, the current version of OpenCV has a built-in function called solvePnP, which does what you want.
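To give an idea of what a solvePnP-based solution could look like, here is a minimal Python/OpenCV sketch; the marker size, corner pixel coordinates and intrinsics below are made-up placeholders, so substitute your own calibration and detection results.

```python
import numpy as np
import cv2

# Hypothetical 10 cm square marker, centered at its own origin (Z = 0 plane).
s = 0.05
object_points = np.array([[-s,  s, 0],   # top-left corner in marker coordinates
                          [ s,  s, 0],   # top-right
                          [ s, -s, 0],   # bottom-right
                          [-s, -s, 0]],  # bottom-left
                         dtype=np.float64)

# The 4 detected corners in the image (pixels), in the same order -- placeholders.
image_points = np.array([[320, 200],
                         [480, 210],
                         [470, 370],
                         [330, 360]], dtype=np.float64)

# Camera intrinsics and distortion -- use your own calibration values.
K = np.array([[800, 0, 320],
              [0, 800, 240],
              [0,   0,   1]], dtype=np.float64)
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)

# rvec/tvec map marker coordinates into camera coordinates.
# The camera position expressed in the marker's frame is the inverse transform:
R, _ = cv2.Rodrigues(rvec)
camera_position_in_marker = -R.T @ tvec
```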
You can compute the homography between the image points and the corresponding world points. Then from the homography you can compute the rotation and translation mapping a point from the marker's coordinate system into the camera's coordinate system. The math is described in the paper on camera calibration by Zhang.
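As a rough illustration of that homography route (a sketch following Zhang's derivation, not a production implementation; the corner coordinates and K below are placeholders):

```python
import numpy as np
import cv2

def pose_from_homography(H, K):
    """Recover R, t from a homography mapping marker-plane points (Z = 0)
    to image pixels. H and K are 3x3; returns R (3x3), t (3x1) marker -> camera."""
    A = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(A[:, 0])   # scale so the first column has unit norm
    t = lam * A[:, 2]
    if t[2] < 0:          # the decomposition is defined up to sign; pick the one
        lam = -lam        # that places the marker in front of the camera
        t = -t
    r1 = lam * A[:, 0]
    r2 = lam * A[:, 1]
    r3 = np.cross(r1, r2)
    R = np.column_stack([r1, r2, r3])
    # Re-orthonormalize R: with noisy points it is only approximately a rotation.
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt, t.reshape(3, 1)

# Usage with placeholder points: marker corners on its Z = 0 plane and their pixels.
world_xy = np.array([[-0.05, 0.05], [0.05, 0.05], [0.05, -0.05], [-0.05, -0.05]])
pixels   = np.array([[320, 200], [480, 210], [470, 370], [330, 360]], dtype=np.float64)
H, _ = cv2.findHomography(world_xy, pixels)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)
R, t = pose_from_homography(H, K)
```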
Here's an example in MATLAB using the Computer Vision System Toolbox, which does most of what you need. It uses the extrinsics function, which computes a 3D rotation and a translation from matching image and world points. The points need not come from a checkerboard.
I am wondering: how can the extrinsic parameters of a camera be constant?
I know that the rotation matrix aligns the axes of the world coordinate system with those of the camera coordinate system, and the translation vector brings the two origins on top of each other.
But how can these parameters be constant? Wouldn't I somehow need to know the orientation of the camera in world space, e.g. from an accelerometer or something similar?
I hope someone can help me wrap my head around this.
As you have correctly pointed out, camera extrinsics consist of a rotation and a translation of the camera's coordinate system relative to some world coordinate system. So, the extrinsics are constant only as long as the camera does not move relative to the world coordinates. As soon as your camera moves, its extrinsics change.
When you calibrate your camera, you typically use multiple images of a planar calibration pattern. During the calibration process the extrinsics of each location of the calibration pattern are computed. Once the camera is calibrated, you can compute the extrinsics by detecting some reference points with known world coordinates in the image. See this example in MATLAB.
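To make the "constant only while the camera does not move" point concrete, here is a tiny sketch (with made-up numbers) of how the extrinsics map a world point into camera coordinates:

```python
import numpy as np

# Extrinsics: rotation R and translation t expressing the world frame in the
# camera frame (made-up values, purely for illustration).
R = np.eye(3)                         # camera axes aligned with world axes
t = np.array([[0.0], [0.0], [2.0]])   # world origin 2 m in front of the camera

X_world = np.array([[0.5], [0.1], [0.0]])   # some point in world coordinates
X_cam = R @ X_world + t                     # the same point in camera coordinates

# If the camera moves, this (R, t) pair no longer describes its pose and the
# extrinsics have to be re-estimated (e.g. from reference points, as above).
```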
I am trying to build a simple camera matching (or match moving) application. The functionality is the same as that in most 3d applications like 3ds Max or Maya. Given an image of a cube and a 3d model of the cube, the user selects points on the image corresponding to each vertex of the model. The application must then generate a camera view that displays the 3d cube model from the same angle as shown in the image.
Can anyone point me in the direction of an algorithm for that?
PS: The camera is calibrated and the camera calibration matrix is available to the program
You can try the algorithm illustrated step by step at http://www.offbytwo.net/camera-matching/. The Octave source code is provided, too.
As a plus, you don't need to start with a cube: any two edges parallel to the x axis and two in the y direction are enough.
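Since the question says the calibration matrix is already available, another option (not the method from that page) is a PnP-style fit between the clicked image points and the model vertices. A rough Python/OpenCV sketch under that assumption, with placeholder vertex and pixel coordinates:

```python
import numpy as np
import cv2

# Cube model vertices in its own coordinate system (unit cube, hypothetical).
model_vertices = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                           [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]],
                          dtype=np.float64)

# Image points the user clicked for each vertex, in the same order (placeholders).
clicked_points = np.array([[100, 400], [300, 420], [330, 250], [120, 230],
                           [110, 300], [310, 320], [340, 150], [130, 130]],
                          dtype=np.float64)

# Known camera calibration matrix (placeholder values).
K = np.array([[1000, 0, 320], [0, 1000, 240], [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(model_vertices, clicked_points, K, np.zeros(5))

# rvec/tvec place the model in front of the virtual camera; to render the model
# from "the same angle as the photo", use them as the model-view transform.
R, _ = cv2.Rodrigues(rvec)
modelview = np.hstack([R, tvec])   # 3x4 [R | t]
```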
In a 3x3 camera matrix, what does the principal point do? How is its location determined? Can we visualize it?
I was told that the principal point is the intersection of the optical axis with the image plane, but why is it not always at the center of the image?
We use OpenCV.
The 3x3 camera intrinsics matrix maps between coordinates in the image and normalized coordinates in the camera's 3D frame (and, together with the extrinsics, the physical world). Within this matrix, the principal point is the pixel location of "the intersection of the optical axis with the image plane". Ideally the principal point lies at the center of the image, and for most cameras it is close to that, but this is not always exactly the case in practice. The principal point may be slightly off center due to imperfect centering of the lens components, tangential distortion and other manufacturing imperfections; a calibrated intrinsics matrix accounts for this offset (the lens distortion itself is modeled by separate distortion coefficients).
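One way to visualize the role of the principal point is a small numeric sketch of the pinhole projection (the focal length and offsets below are made-up values for a 640x480 sensor):

```python
import numpy as np

# Intrinsics with a slightly off-center principal point; the exact image
# center would be (320, 240).
fx, fy = 800.0, 800.0
cx, cy = 317.2, 243.8
K = np.array([[fx, 0, cx],
              [0, fy, cy],
              [0,  0,  1]])

# A 3D point in camera coordinates (X right, Y down, Z forward).
X = np.array([0.1, -0.05, 2.0])

# Pinhole projection: u = fx*X/Z + cx, v = fy*Y/Z + cy.
uvw = K @ X
u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]

# A point on the optical axis (X = Y = 0) projects exactly to (cx, cy) --
# that is where the optical axis pierces the image plane.
```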
I have found this site helpful when learning about camera calibration. Although it is in MATLAB, it is based on the same camera calibration model used in OpenCV.
The principal point in the 3x3 camera calibration matrix can also, perhaps more usefully, account for image cropping. If the image has been cropped around an object, then mapping pixel coordinates to world coordinates requires a translation, which shows up as an off-center principal point in the matrix.
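For example (a minimal sketch with made-up numbers): cropping the image shifts the principal point by the crop offset, while the focal lengths stay the same.

```python
import numpy as np

# Original intrinsics (placeholder values).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Crop the image so that its new top-left corner sits at pixel (x0, y0)
# of the original image.
x0, y0 = 150, 90

K_cropped = K.copy()
K_cropped[0, 2] -= x0   # cx shifts left by the crop offset
K_cropped[1, 2] -= y0   # cy shifts up by the crop offset
# Only the principal point moves, which is why a cropped image shows up as an
# off-center principal point in the matrix.
```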