Kinect v2 Color Camera Calibration Parameters - camera-calibration

I am looking for the camera calibration parameters of the Kinect v2 color camera. Can someone provide these parameters or any reference?
I know that camera calibration parameters are lens specific, but I am fine with the default parameters that the Kinect v2 uses.
As always, thank you very much.

Each Kinect's calibration values differ by small margins. If you need to do very precise calculations with them, you will have to calibrate your Kinect with a chessboard using OpenCV. Otherwise you can use the following values, which I calibrated myself.
All the Kinect v2 calibration parameters
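
For completeness, a minimal sketch of that chessboard calibration with OpenCV in Python (the board size, square size and file pattern below are placeholders to adapt to your own captures):

```python
import glob
import cv2
import numpy as np

BOARD_SIZE = (9, 6)    # inner corners of the printed chessboard (placeholder)
SQUARE_SIZE = 0.025    # square edge length in meters (placeholder)

# 3D coordinates of the board corners in the board's own frame
objp = np.zeros((BOARD_SIZE[0] * BOARD_SIZE[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD_SIZE[0], 0:BOARD_SIZE[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, img_points = [], []
for fname in glob.glob("kinect_color_*.png"):   # hypothetical capture file names
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, BOARD_SIZE)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsic matrix and distortion coefficients of the color camera
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("Camera matrix:\n", K)
print("Distortion coefficients:", dist.ravel())
```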

Related

Getting extrinsic from Kinect V2 RGB and depth camera

I'm trying to do offline registration from a series of color and depth frames I obtained from the Kinect V2 sensor. To properly do the registration, the first step is to get the intrinsics of both cameras and the extrinsics (rotation and translation) between them. I found a couple of sources teaching how to do the registration, but many of them skip the step of getting the extrinsics between the color and depth cameras:
https://www.codefull.org/2016/03/align-depth-and-color-frames-depth-and-rgb-registration/
kinect V2 get real xyz points from raw depth image in matlab without visionKinectDepthToSkeleton or depthToPointCloud or pcfromkinect
http://traumabot.blogspot.com/2013/02/kinect-rgb-depth-camera-calibration.html
Many of those links assume a Kinect sensor whose color and depth outputs have the same size (512*424), so the calibration between color and depth can be done easily.
But with the Kinect V2 sensor, we have the IR frame (512*424) and the color frame (1920*1080) for the chessboard calibration process. How exactly can I get the extrinsics between them?
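
One common way to handle the different resolutions is to calibrate each camera's intrinsics separately at its native resolution, then estimate only the rotation and translation between them with cv2.stereoCalibrate while holding the intrinsics fixed. A minimal sketch, assuming the per-view corner detections and per-camera intrinsics are already available:

```python
import cv2

def color_ir_extrinsics(obj_points, img_points_ir, img_points_color,
                        K_ir, dist_ir, K_color, dist_color):
    """Estimate R, T mapping points from the IR camera frame to the color camera frame.

    obj_points: list of (N, 3) float32 board-corner coordinates, one array per view
    img_points_ir / img_points_color: matching corner detections of the same views
        in the 512*424 IR images and the 1920*1080 color images
    K_*, dist_*: per-camera intrinsics estimated beforehand with cv2.calibrateCamera
    """
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 1e-6)
    rms, _, _, _, _, R, T, E, F = cv2.stereoCalibrate(
        obj_points, img_points_ir, img_points_color,
        K_ir, dist_ir, K_color, dist_color,
        (512, 424),  # only used to seed intrinsics; not refined with CALIB_FIX_INTRINSIC
        flags=cv2.CALIB_FIX_INTRINSIC, criteria=criteria)
    return R, T, rms
```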

Matching 2D image pixels in corresponding 3D point cloud

I want to match pixels of calibrated 3D lidar and 2D camera data. I will use this to train a network. Can this be considered labeled data with this matching? If it is, can anyone help me achieve this? Any suggestions will be appreciated.
On a high level, assuming you have some transformation (rotation/translation) between your camera and your lidar, and the calibration matrix of the camera, you have a 3D image and a 2D projection of it.
That is, if you project the 3D point cloud onto the image plane of the camera, you will have an (x,y)_camera point (a point in the camera frame) for every (RGB)D_world == (x,y,z)_world point.
Whether this is helpful to train on depends on what you're trying to achieve; if you're trying to find where the camera is or to calibrate it, given (RGB)D data and image(s), that could be done better with a Perspective-n-Point algorithm (the lidar could make it easier, perhaps, if it built up a "real" view of the world to compare against). Whether it would be considered labeled data depends on how you are trying to label it.
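
A minimal sketch of that projection, assuming a pinhole model with a known intrinsic matrix K and a rotation/translation (R, t) taking points from the lidar frame into the camera frame:

```python
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    """Project (N, 3) lidar points into pixel coordinates of the camera.

    R (3x3), t (3,): transformation from the lidar frame to the camera frame
    K (3x3): camera calibration (intrinsic) matrix
    Returns (N, 2) pixel coordinates and a mask of points in front of the camera.
    """
    pts_cam = points_lidar @ R.T + t          # lidar frame -> camera frame
    in_front = pts_cam[:, 2] > 0              # keep only points with positive depth
    proj = pts_cam @ K.T                      # pinhole projection: (u*z, v*z, z)
    pixels = proj[:, :2] / proj[:, 2:3]       # divide by depth to get (u, v)
    return pixels, in_front
```

Points whose projections land inside the image bounds (and in front of the camera) can then be paired with the RGB values at those pixels, which is the pixel-to-point matching the question asks about.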

How to calibrate cameras which are completely facing towards each other as in figure?

I know how to calibrate a single camera and a stereo camera.
But I am not sure how to calibrate when two cameras are facing each other, as shown in the figure below.
For my application, I am using two Intel Realsense cameras.
Thank you for your suggestions.

Is it possible to use Tango fisheye camera and rgb sensor at the same time?

As far as I researched, it seems that it is not possible to use the Tango fisheye camera and the rgb sensor at the same time.
Thus is it possible to take a color picture with the fisheye camera on Tango?
It is not possible to take a color picture with the fisheye camera, simply because the RGB camera and the fisheye camera are two different hardware devices, as you can see here

Need of Extrinsic parameter Camera calibration tips

I am new to camera calibration (extrinsic parameters). Please give me a clear idea of how to start computing the extrinsic parameters. I really need some answers; I am in a critical position.
Try the Camera Calibrator app in MATLAB.
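If you prefer OpenCV, a minimal sketch of extrinsic estimation with cv2.solvePnP, assuming the intrinsics (K, dist) are already known from a prior calibration and you have matching 3D world points and their 2D image detections (e.g. checkerboard corners):

```python
import cv2

def estimate_extrinsics(object_points, image_points, K, dist):
    """Recover the camera pose (extrinsic parameters) from 3D-2D correspondences.

    object_points: (N, 3) float32 points in the world/target frame
    image_points:  (N, 2) float32 detections of the same points in the image
    K, dist:       intrinsic matrix and distortion coefficients from a prior calibration
    Returns the 3x3 rotation matrix R and translation vector t (world -> camera).
    """
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
    if not ok:
        raise RuntimeError("solvePnP failed to find a pose")
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    return R, tvec
```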
