Getting extrinsic from Kinect V2 RGB and depth camera - camera-calibration

I'm trying to do offline registration from a series of color and depth frames I obtained from the Kinect V2 sensor. To properly do the registration, the first step is to get the intrinsics of both cameras and the extrinsics (rotation and translation) between them. I found a couple of sources teaching how to do the registration, but many of them skip the step of getting the extrinsics between the color and depth cameras:
https://www.codefull.org/2016/03/align-depth-and-color-frames-depth-and-rgb-registration/
kinect V2 get real xyz points from raw depth image in matlab without visionKinectDepthToSkeleton or depthToPointCloud or pcfromkinect
http://traumabot.blogspot.com/2013/02/kinect-rgb-depth-camera-calibration.html
Many of those links assume a Kinect sensor that outputs color and depth frames of the same size (512×424), so that the calibration between color and depth can be done easily.
But with the Kinect V2 sensor, we have the IR frame (512×424) and the color frame (1920×1080) for the chessboard calibration process. How exactly can I get the extrinsics between them?
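Below is a minimal Python/OpenCV sketch of the route I am considering (pattern size, square size and file names are placeholder assumptions): calibrate each camera at its own resolution, then let cv2.stereoCalibrate recover the rotation and translation between them. The different resolutions should be fine because each camera keeps its own intrinsic matrix, but I am not sure this is the right approach:

    # Sketch: extrinsics between the Kinect v2 IR (512x424) and color
    # (1920x1080) cameras. Assumes pairs of chessboard images were already
    # captured and that the IR frames were saved as 8-bit grayscale.
    import cv2
    import numpy as np
    import glob

    pattern = (9, 6)   # inner chessboard corners (assumption)
    square = 0.025     # square size in meters (assumption)

    # 3D chessboard points in the board's own frame, identical for every view
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

    obj_pts, ir_pts, rgb_pts = [], [], []
    for ir_file, rgb_file in zip(sorted(glob.glob('ir_*.png')),
                                 sorted(glob.glob('rgb_*.png'))):
        ir = cv2.imread(ir_file, cv2.IMREAD_GRAYSCALE)
        rgb = cv2.imread(rgb_file, cv2.IMREAD_GRAYSCALE)
        ok_ir, c_ir = cv2.findChessboardCorners(ir, pattern)
        ok_rgb, c_rgb = cv2.findChessboardCorners(rgb, pattern)
        if ok_ir and ok_rgb:
            obj_pts.append(objp)
            ir_pts.append(c_ir)
            rgb_pts.append(c_rgb)

    # Calibrate each camera at its own resolution first
    _, K_ir, d_ir, _, _ = cv2.calibrateCamera(obj_pts, ir_pts, ir.shape[::-1], None, None)
    _, K_rgb, d_rgb, _, _ = cv2.calibrateCamera(obj_pts, rgb_pts, rgb.shape[::-1], None, None)

    # stereoCalibrate recovers R, T mapping points from the IR frame to the
    # RGB frame; the mismatched image sizes don't matter since each camera
    # keeps its own intrinsic matrix
    _, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, ir_pts, rgb_pts, K_ir, d_ir, K_rgb, d_rgb, ir.shape[::-1],
        flags=cv2.CALIB_FIX_INTRINSIC)
    print('Rotation:\n', R, '\nTranslation (m):\n', T)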

Related

Using ARCore for Measurement

I would like to know if it is possible to measure the dimensions of an object by just pointing the camera at the object without moving the camera from left to right like we do in Google Measurement.
A depth map cannot be calculated from just a 2D camera image. A smartphone does not have a distance sensor, but it does have motion sensors, so by combining the movement of the device with changes in the input from the camera(s), ARCore can calculate depth. To put it simply, objects close to the camera move around on screen more than objects further away.
To get depth data from a fixed position would require different technologies than found on current phones, such as LiDAR or an infrared beam projector and infrared camera.
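As a toy illustration of that parallax principle (all numbers below are made up), the sideways movement of the phone acts like a stereo baseline, so the usual relation depth = focal_length × baseline / disparity applies:

    # Toy example: a nearby point shifts more pixels on screen than a
    # distant one when the device moves sideways by a known baseline.
    fx = 1000.0        # focal length in pixels (assumption)
    baseline = 0.05    # device moved 5 cm to the side (assumption)

    for disparity_px in (50.0, 10.0, 2.0):   # pixel shift of a tracked feature
        depth_m = fx * baseline / disparity_px
        print(f'feature shifted {disparity_px:4.0f} px -> ~{depth_m:.1f} m away')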

Color space to Camera space transformation matrix

I am looking for a transformation matrix to convert color space to camera space.
I know that the point conversion can be done using CoordinateMapper, but I am not using the official Kinect v2 APIs.
I would really appreciate it if someone could share the transformation matrix that converts color space to camera space.
As always, thank you very much.
Important: the raw Kinect RGB image is distorted. Remove the distortion first.
Short answer
The "transformation matrix" you are searching is called projection matrix.
rgb.cx:959.5
rgb.cy:539.5
rgb.fx:1081.37
rgb.fy:1081.37
Long answer
First understand how the color image is generated by the Kinect.
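It follows the standard pinhole projection (written out here from the symbols defined below):

    s · [u, v, 1]^T = K · (R|t) · [X, Y, Z, 1]^T,   with   K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]

where: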
X, Y, Z : coordinates of the given point in a coordinate space whose origin is the Kinect sensor, AKA camera space. Note that camera space is 3D.
u, v : coordinates of the corresponding color pixel in color space. Note that color space is 2D.
fx, fy : focal lengths
cx, cy : principal point (you can take the principal point of the Kinect RGB camera to be the center of the image)
(R|t) : extrinsic camera matrix. For the Kinect you can take this to be (I|0), where I is the identity matrix.
s : scale factor; you can set it to 1.
To get the most accurate values for fx, fy, cx, cy, you need to calibrate the RGB camera of your Kinect using a chessboard.
The fx, fy, cx, cy values above are from my own calibration of my Kinect. They differ from one Kinect to another by a very small margin.
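With those values, the mapping in both directions is only a few lines of code. A minimal Python sketch (function names are mine; note that going from a color pixel back to camera space additionally requires the depth Z of that pixel):

    # Kinect v2 color intrinsics from the calibration above
    fx, fy = 1081.37, 1081.37
    cx, cy = 959.5, 539.5

    def camera_to_color(X, Y, Z):
        """Project a 3D camera-space point (meters) to a color pixel (u, v)."""
        u = fx * X / Z + cx
        v = fy * Y / Z + cy
        return u, v

    def color_to_camera(u, v, Z):
        """Back-project a color pixel to camera space; needs the pixel's depth Z."""
        X = (u - cx) * Z / fx
        Y = (v - cy) * Z / fy
        return X, Y, Z

    u, v = camera_to_color(0.1, 0.0, 1.0)  # a point 10 cm right of center, 1 m away
    print(color_to_camera(u, v, 1.0))      # recovers (0.1, 0.0, 1.0)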
More info and implementation
All Kinect camera matrix
Distort
Registration
I implemented the registration process in CUDA, since a CPU is not fast enough to process that much data (1920 × 1080 × 30 matrix calculations per second) in real time.
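For reference, the per-pixel math itself is simple even though a CPU implementation is slow. A rough NumPy sketch (nowhere near real time; variable names are mine, with K_depth and K_rgb the two intrinsic matrices and R, T the extrinsics mapping depth-frame points into the color frame):

    import numpy as np

    def register_depth_to_color(depth, K_depth, K_rgb, R, T):
        """Map every depth pixel (depth in meters, shape 424x512) to color-pixel coords."""
        h, w = depth.shape
        us, vs = np.meshgrid(np.arange(w), np.arange(h))
        # Back-project each depth pixel into the depth camera's 3D space
        X = (us - K_depth[0, 2]) * depth / K_depth[0, 0]
        Y = (vs - K_depth[1, 2]) * depth / K_depth[1, 1]
        pts = np.stack([X, Y, depth], axis=-1).reshape(-1, 3)
        # Transform into the color camera's frame, then project with its intrinsics
        pts = pts @ R.T + T.reshape(1, 3)
        u = K_rgb[0, 0] * pts[:, 0] / pts[:, 2] + K_rgb[0, 2]
        v = K_rgb[1, 1] * pts[:, 1] / pts[:, 2] + K_rgb[1, 2]
        # Zero-depth pixels give meaningless coordinates; mask them in practice
        return u.reshape(h, w), v.reshape(h, w)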

Kinect v2 Color Camera Calibration Parameters

I am looking for the camera calibration parameters of the Kinect V2 color camera. Can someone provide these parameters or any reference?
I know that camera calibration parameters are lens specific, but I am fine with the default parameters which Kinect v2 is using.
As always, thank you very much.
Each Kinect's calibration values differ by a small margin. If you need to do very precise calculations with them, you will have to calibrate your Kinect with a chessboard using OpenCV. Otherwise you can use the following values, which I calibrated myself.
All the Kinect v2 calibration parameters

Is it possible to use Tango fisheye camera and rgb sensor at the same time?

As far as I have researched, it seems that it is not possible to use the Tango fisheye camera and the RGB sensor at the same time.
So, is it then possible to take a color picture with the fisheye camera on Tango?
It is not possible to take a color picture with the fisheye camera, simply because the RGB camera and the fisheye camera are two different hardware devices, as you can see here.

Camera matching application

I am trying to build a simple camera matching (or match moving) application. The functionality is the same as that in most 3D applications like 3ds Max or Maya. Given an image of a cube and a 3D model of the cube, the user selects points on the image corresponding to each vertex of the model. The application must then generate a camera view that displays the 3D cube model from the same angle as shown in the image.
Can anyone point me in the direction of an algorithm for that?
PS: The camera is calibrated and the camera calibration matrix is available to the program
You can try the algorithm illustrated step-by-step at http://www.offbytwo.net/camera-matching/. The Octave source code is provided, too.
As a plus, you don't need to start with a cube: any two edges parallel to the x axis and two in the y direction will do.
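If you would rather lean on a library than implement the algorithm by hand, OpenCV's solvePnP solves the same pose-from-correspondences problem. A minimal sketch (the clicked pixels and the calibration matrix below are made-up placeholders):

    import cv2
    import numpy as np

    # 3D model: six vertices of a unit cube (any >= 4 non-coplanar points work)
    object_points = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                              [1, 1, 0], [1, 0, 1]], dtype=np.float64)
    # Pixels the user clicked, one per vertex above (made-up example values)
    image_points = np.array([[320, 240], [420, 250], [310, 140], [330, 255],
                             [415, 150], [432, 265]], dtype=np.float64)
    # Known calibration matrix (fx, fy, cx, cy are placeholders)
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix from the rotation vector
    # (R, tvec) is the camera pose; feed it to your renderer to match the view
    print(R, tvec)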
