Is it possible to use Tango fisheye camera and rgb sensor at the same time? - google-project-tango

As far as I have researched, it seems that it is not possible to use the Tango fisheye camera and the RGB sensor at the same time.
So is it possible to take a color picture with the fisheye camera on Tango?

It is not possible to take a color picture with the fisheye camera, simply because the RGB camera and the fisheye camera are two different hardware devices, as you can see here.

Related

Getting extrinsic from Kinect V2 RGB and depth camera

I'm trying to do offline registration from a series of color and depth frames I obtained from the Kinect V2 sensor. To properly do the registration, the first step is to get the intrinsics of both cameras and the extrinsics (rotation and translation) between them. I found a couple of sources teaching how to do the registration, but many of them skip the step of getting the extrinsics between the color and depth cameras:
https://www.codefull.org/2016/03/align-depth-and-color-frames-depth-and-rgb-registration/
kinect V2 get real xyz points from raw depth image in matlab without visionKinectDepthToSkeleton or depthToPointCloud or pcfromkinect
http://traumabot.blogspot.com/2013/02/kinect-rgb-depth-camera-calibration.html
Many of those links assume a Kinect sensor that outputs the same size (512*424) for the color and depth sensors, so that the calibration between color and depth can be done easily.
But with the Kinect V2 sensor, we have the IR frame (512*424) and the color frame (1920*1080) for the chessboard calibration process. How exactly can I get the extrinsics between them?
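For reference, a minimal sketch of one common approach (not taken from this thread): calibrate each camera at its native resolution, then feed the matching chessboard corners from synchronized IR/color pairs to cv2.stereoCalibrate with the intrinsics held fixed. The two frame sizes do not need to match, since the corner coordinates live in each camera's own pixel space. File names, image count and board geometry below are placeholders.

```python
import cv2
import numpy as np

# Placeholder chessboard geometry (inner corners and square size in meters).
PATTERN = (9, 6)
SQUARE = 0.025

# One set of 3D board points, reused for every view.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_points, ir_points, color_points = [], [], []
for i in range(20):  # assumes 20 synchronized IR/color chessboard pairs on disk
    ir = cv2.imread(f"ir_{i:02d}.png", cv2.IMREAD_GRAYSCALE)        # 512x424
    color = cv2.imread(f"color_{i:02d}.png", cv2.IMREAD_GRAYSCALE)  # 1920x1080
    if ir is None or color is None:
        continue
    ok_ir, c_ir = cv2.findChessboardCorners(ir, PATTERN)
    ok_co, c_co = cv2.findChessboardCorners(color, PATTERN)
    if ok_ir and ok_co:
        obj_points.append(objp)
        ir_points.append(c_ir)
        color_points.append(c_co)

# Intrinsics per camera, each at its own resolution.
_, K_ir, d_ir, _, _ = cv2.calibrateCamera(obj_points, ir_points, (512, 424), None, None)
_, K_co, d_co, _, _ = cv2.calibrateCamera(obj_points, color_points, (1920, 1080), None, None)

# Extrinsics: R, T map points from the IR camera frame to the color camera frame.
_, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
    obj_points, ir_points, color_points,
    K_ir, d_ir, K_co, d_co, (1920, 1080),
    flags=cv2.CALIB_FIX_INTRINSIC)
print("R =", R, "\nT =", T)
```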

Using ARCore for Measurement

I would like to know if it is possible to measure the dimensions of an object by just pointing the camera at the object without moving the camera from left to right like we do in Google Measurement.
A depth map cannot be calculated from a single 2D camera image. A smartphone does not have a distance sensor, but it does have motion sensors, so by combining the movement of the device with changes in the input from the camera(s), ARCore can calculate depth. To put it simply, objects close to the camera move around on screen more than objects further away.
To get depth data from a fixed position would require different technologies than found on current phones, such as LiDAR or an infrared beam projector and infrared camera.
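Not part of the original answer, just a rough numeric sketch of the triangulation idea behind this: once the device has moved sideways by a known baseline, the depth of a feature follows from how far it shifts on screen (its disparity).

```python
# Hypothetical numbers, only to illustrate the parallax relation exploited by depth-from-motion.
focal_px = 500.0    # focal length in pixels (assumed)
baseline_m = 0.05   # how far the phone moved sideways, in meters (assumed)

def depth_from_disparity(disparity_px: float) -> float:
    # Classic two-view triangulation for a sideways translation.
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(10.0))  # nearby feature, large shift  -> 2.5 m
print(depth_from_disparity(2.0))   # distant feature, small shift -> 12.5 m
```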

How is point cloud data acquired from the structured light 3D scanning?

I am trying to understand the 3D reconstruction of an object using a structured light 3D scanner, and I am stuck at the point of decoding the set of camera-projector correspondences in order to reconstruct a 3D point cloud. How exactly is the 3D point cloud acquired from those correspondences? I want to understand the mathematical implementation, not the code implementation.
Assuming you used a structured light method that projects some sort of lines (vertical or horizontal, like binary coding or De Bruijn), the idea is as follows:
A light plane goes through the projector's perspective center and the line in the pattern.
The light plane normal needs to be rotated by the projector's rotation matrix relative to the camera (or to the world, depending on the calibration). The rotation part for the light plane can be avoided if you treat the projector's perspective center as the system origin.
Using the correspondences, you find a pixel in the image that matches the light plane. Now you define a vector that goes from the camera's perspective center through that pixel, and rotate this vector by the camera rotation (relative to the projector or to the world, again depending on the calibration).
Intersect the light plane with that vector. How to compute this is described on Wikipedia: https://en.wikipedia.org/wiki/Line%E2%80%93plane_intersection
As you can see, the mathematical problem here (the 3D reconstruction itself) is very simple. The hard part is recognizing the projected pattern in the image (easier than regular stereo, but still hard) and calibrating (finding the relative orientation between camera and projector).
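Purely as an illustrative sketch (not from the original answer; the calibration, plane normal and pixel below are made-up numbers): build the viewing ray from the camera pixel and intersect it with the decoded light plane using the line-plane formula linked above. It assumes the camera frame is the world frame (identity rotation) and the projector is only translated.

```python
import numpy as np

# Assumed calibration: camera at the origin with identity rotation,
# projector translated along x (the baseline).
K_cam = np.array([[600.0, 0.0, 320.0],
                  [0.0, 600.0, 240.0],
                  [0.0, 0.0, 1.0]])          # camera intrinsics (made up)
proj_center = np.array([0.2, 0.0, 0.0])      # projector perspective center, camera frame
plane_normal = np.array([8.0, 0.0, 1.0])     # normal of the decoded vertical light plane
                                             # (need not be unit length for the formula)

# Pixel that was decoded as lying on that light plane.
u, v = 345.0, 255.0

# Viewing ray: from the camera center (origin) through the pixel.
ray_dir = np.linalg.inv(K_cam) @ np.array([u, v, 1.0])

# Line-plane intersection: find t so that (t * ray_dir - proj_center) . n = 0.
t = np.dot(plane_normal, proj_center) / np.dot(plane_normal, ray_dir)
point_3d = t * ray_dir
print(point_3d)  # ~[0.05, 0.03, 1.2]; repeat per correspondence to build the cloud
```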

Circular fisheye distort using opencv3 fisheye model

I use the OpenCV fisheye model functions to perform fisheye calibration. My image is a circular fisheye image (example), but I'm getting this result from the OpenCV fisheye model functions.
I have the following problems:
I don't know why the result is an oval and not a perfect circle. Is this as expected?
Can the OpenCV fisheye model be calibrated for a circular fisheye image?
I don't understand why the image is not centered when using the cv::fisheye::calibrate function to get the Cx, Cy parameters in K.
What tips (number of pictures, angles and positions...) can be used on the checkerboard to get the correct camera matrix and distortion coefficients?
Expected Result
My Result
First of all, cv::fisheye uses a very simple idea. To remove radial distortion, it moves points of the fisheye circle in the direction from the circle center towards the circle edge.
Points near the center are moved only a little. Points near the edges are moved a much larger distance.
In other words, the distance a point is moved is not constant. It is a function f(x) = x + K1*x^3 + K2*x^5 + K3*x^7 + K4*x^9, where K1-K4 are the coefficients of radial distortion of the OpenCV fisheye undistortion model. In the normal case the undistorted image is always larger than the initial image.
As you can see, your undistorted image is smaller than the initial fisheye image. I think the source of the problem is bad calibration.
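As an illustration only (not part of the original answer; the coefficients are made up), evaluating that mapping shows how much further points away from the center are moved:

```python
# Radial mapping of the OpenCV fisheye model:
# theta_d = theta * (1 + k1*theta^2 + k2*theta^4 + k3*theta^6 + k4*theta^8)
def theta_distorted(theta, k1, k2, k3, k4):
    t2 = theta * theta
    return theta * (1 + k1*t2 + k2*t2**2 + k3*t2**3 + k4*t2**4)

# Made-up coefficients just to show the trend.
k = (0.1, 0.02, 0.0, 0.0)
for theta in (0.1, 0.5, 1.0):  # angle from the optical axis, radians
    print(theta, "->", round(theta_distorted(theta, *k), 4))
# 0.1 -> 0.1001, 0.5 -> 0.5131, 1.0 -> 1.12  (edge rays are displaced far more than central ones)
```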
I don't know why the result is an oval and not a perfect circle. Is this as expected?
-> The tangential parameter of the calibration model can make it look like an oval. Either your actual lens is tilted or the calibration is incorrect. Just try turning off the tangential parameter option.
Can the OpenCV fisheye model be calibrated for a circular fisheye image?
-> No problem as far as I know. Try ocam as well.
I don't understand why the image is not centered when using the cv::fisheye::calibrate function to get the Cx, Cy parameters in K.
-> It is normal that the optical center does not align with the image center; it is a matter of degree, however. Cx, Cy represent the actual optical center. Low-quality fisheye camera manufacturers do not control the quality of this parameter.
What tips (number of pictures, angles and positions...) can be used on the checkerboard to get the correct camera matrix and distortion coefficients?
-> clear images only, different distances, different angles, different positions. As many as possible.
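For illustration (not from the original question or answer; the file pattern, board size and flags are assumptions), a typical cv2.fisheye.calibrate workflow looks roughly like this:

```python
import glob
import cv2
import numpy as np

PATTERN = (9, 6)  # inner chessboard corners (assumed)
objp = np.zeros((1, PATTERN[0] * PATTERN[1], 3), np.float64)
objp[0, :, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points, img_size = [], [], None
for path in glob.glob("calib_*.jpg"):            # placeholder file pattern
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img_size = gray.shape[::-1]
    ok, corners = cv2.findChessboardCorners(gray, PATTERN)
    if ok:
        obj_points.append(objp)
        # cv2.fisheye expects float64 points shaped (1, N, 2)
        img_points.append(corners.reshape(1, -1, 2).astype(np.float64))

K = np.zeros((3, 3))
D = np.zeros((4, 1))
flags = (cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC |
         cv2.fisheye.CALIB_FIX_SKEW)
rms, K, D, _, _ = cv2.fisheye.calibrate(
    obj_points, img_points, img_size, K, D, flags=flags)
print("RMS:", rms, "\nK:", K, "\nD:", D.ravel())

# Undistort one image; balance=1 keeps the whole fisheye circle in view.
new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
    K, D, img_size, np.eye(3), balance=1.0)
img = cv2.imread("calib_00.jpg")                 # placeholder image
undistorted = cv2.fisheye.undistortImage(img, K, D, Knew=new_K)
cv2.imwrite("undistorted.jpg", undistorted)
```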

Kinect v2 Color Camera Calibration Parameters

I am looking for camera calibration parameters of Kinect V2 color camera. Can someone provide these parameters or any reference?
I know that the camera calibration parameters are lens specific but I am fine with default parameters which Kinect v2 is using.
As always, thank you very much.
Each Kinect's calibration values differ by small margins. If you need very precise calculations, you will have to calibrate your Kinect with a chessboard using OpenCV. Otherwise you can use the following values, which I calibrated myself.
All the Kinect v2 calibration parameters
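For illustration only: whether the parameters come from the link above or from your own calibration, they are applied as a camera matrix plus a distortion vector. The numbers and file names below are placeholders, not the real Kinect v2 values.

```python
import cv2
import numpy as np

# Placeholder intrinsics -- substitute the real Kinect v2 color-camera values here.
K = np.array([[1050.0, 0.0, 960.0],
              [0.0, 1050.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.array([0.03, -0.1, 0.0, 0.0, 0.05])   # k1, k2, p1, p2, k3

color = cv2.imread("kinect_color.png")           # 1920x1080 color frame (placeholder path)
undistorted = cv2.undistort(color, K, dist)
cv2.imwrite("kinect_color_undistorted.png", undistorted)
```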