Google Tango Lenovo Phab 2 Camera Intrinsics - google-project-tango

I was trying to extract the camera intrinsics and distortion coefficients from my Lenovo Phab 2 via the documented call:
ret = TangoService_getCameraIntrinsics(TANGO_CAMERA_COLOR, &ccIntrinsics);
Weirdly enough, the distortion coefficients all come back as 0. The intrinsics do contain data, though with what I think is very low precision.
I thought at first it could have been a casting error, but with the %f, %lf and %E format specifiers (via LOGE()), the values don't change.
I know that on the previous Google Tango Tablet dev kit, the calibration coefficients and distortion model were stored in a file called calibration.xml. Is this also true of the Lenovo Phab 2?
EDIT: After dumping the contents of the camera intrinsics struct to a file, there are definitely no distortion coefficients being returned for the device, i.e. all distortion entries are 0.0000.
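For reference, this is roughly how the full struct can be dumped; a minimal sketch, assuming the standard Tango C API header and an Android LOGE-style logging macro (the helper name is just for illustration):

#include <android/log.h>
#include <tango_client_api.h>

#define LOGE(...) __android_log_print(ANDROID_LOG_ERROR, "tango_intrinsics", __VA_ARGS__)

// Hypothetical helper: queries the color camera intrinsics and logs every field.
static void DumpColorCameraIntrinsics() {
  TangoCameraIntrinsics cc_intrinsics;
  TangoErrorType ret =
      TangoService_getCameraIntrinsics(TANGO_CAMERA_COLOR, &cc_intrinsics);
  if (ret != TANGO_SUCCESS) {
    LOGE("getCameraIntrinsics failed: %d", static_cast<int>(ret));
    return;
  }
  // %.17g prints the full precision of a double, which rules out "low
  // precision" that is really just %f truncating the output.
  LOGE("size: %ux%u  fx: %.17g  fy: %.17g  cx: %.17g  cy: %.17g",
       cc_intrinsics.width, cc_intrinsics.height,
       cc_intrinsics.fx, cc_intrinsics.fy, cc_intrinsics.cx, cc_intrinsics.cy);
  for (int i = 0; i < 5; ++i) {
    LOGE("distortion[%d] = %.17g", i, cc_intrinsics.distortion[i]);
  }
}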

This was an issue with my device! It was resolved by receiving a replacement device; somehow the calibration data was missing.
Make sure to check your device for the calibration.xml file. If this file is not in place, contact customer support!

Related

low depth resolution on google-project-tango

I see that the resolution of the depth camera is 320x180; however, each depth capture frame produces only 10K to 15K points. Am I missing a setting?
I looked at the transformation matrices while keeping the device fixed, using the area_learn update method with no ADF loaded. I see non-zero offsets in the translation values; I expected zero offsets.
Is there a published motion estimation performance document for Tango that specifies latency and performance of the IMU + ADF? I am looking for detailed test information.
Thanks
You are right about the resolution of the depth camera, and your results align with mine. Depending on where the depth camera is pointing, I'll get between 5K and 12K points (an easy way to log the per-frame count is sketched after this answer). Scanning the floor surface would generate more points since it is flat and uniform.
I think that you are experiencing drift. This is expected when not using Area Learning (no ADF loaded). There is a known issue of drift occurring because of Android 4.4 (source: https://plus.google.com/+ChuckKnowledge/posts/dVk4ZgVikgT).
Loading an ADF should help with this, but I wouldn't expect it to be perfect.
I don't know about this. Sorry!
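For anyone who wants to check the per-frame count themselves, here is a minimal sketch using the Tango C API depth callback; the logging macro and the registration call shown in the comment are assumptions based on the standard client API:

#include <android/log.h>
#include <tango_client_api.h>

#define LOGI(...) __android_log_print(ANDROID_LOG_INFO, "depth_stats", __VA_ARGS__)

// Logs how many valid points each depth frame actually contains.
static void OnXyzIjAvailable(void* /*context*/, const TangoXYZij* xyz_ij) {
  // 320 x 180 = 57,600 is only an upper bound; low-confidence returns are
  // dropped, so counts in the 5K-15K range on ordinary scenes are normal.
  LOGI("t=%.6f  points=%u (%.1f%% of 320x180)",
       xyz_ij->timestamp, xyz_ij->xyz_count,
       100.0 * xyz_ij->xyz_count / (320.0 * 180.0));
}

// Registered once after connecting to the service, e.g.:
//   TangoService_connectOnXYZijAvailable(OnXyzIjAvailable);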

Transforming and registering point clouds

I’m starting to develop with Project Tango API.
I need to save the PointCloud data that I get in the OnXyzIjAvailable event;
to do this, I started from your "PointCloudJava" example and wrote the PointCloud coordinates to separate files (an AsyncTask is started for this purpose).
So I have one file with xyz coordinates for each event. On the same event I also get the corresponding transformation matrix (mRenderer.getModelMatCalculator().GetPointCloudModelMatrixCopy()).
Point clouds
Then I've imported all of this data (the xyz point clouds with their corresponding transformation matrices, each matrix applied to its point cloud), but the point clouds don't match exactly; they are close to each other but do not overlap exactly. (A sketch of the transformation step follows this question.)
My questions are:
- Why don't the individual point clouds match?
- What should I do to get them to match?
I've also noticed the following, which is probably related to the above problem: using the Project Tango Explore application (Area Learning), I can see my position, but it is constantly in motion even when I don't move.
What is the problem? Is calibration necessary?
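For what it's worth, the per-frame step of applying a saved model matrix to its cloud is just a 4x4 multiply per point. A minimal C++ sketch, assuming the matrix was exported in OpenGL-style column-major float[16] order (as android.opengl.Matrix-based helpers normally produce):

#include <array>
#include <vector>

struct Point { float x, y, z; };

// Applies a column-major 4x4 model matrix to every point of one cloud.
std::vector<Point> TransformCloud(const std::vector<Point>& cloud,
                                  const std::array<float, 16>& m) {
  std::vector<Point> out;
  out.reserve(cloud.size());
  for (const Point& p : cloud) {
    // Column-major indexing: m[col * 4 + row]; w stays 1 for a rigid transform.
    out.push_back({m[0] * p.x + m[4] * p.y + m[8]  * p.z + m[12],
                   m[1] * p.x + m[5] * p.y + m[9]  * p.z + m[13],
                   m[2] * p.x + m[6] * p.y + m[10] * p.z + m[14]});
  }
  return out;
}

If the clouds are close but consistently misaligned even with the correct matrix order, the remaining mismatch is usually pose drift, which is what the answer below discusses.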
Poses delivered by Tango have a non-negligible amount of drift. Here is a sample graph of pose position when my tablet was in its stand observing a static scene (ideally the traces should be flat):
When this drift is coupled with tracking errors while the device is actually moving, it produces noticeable registration issues. I see this especially when the device is rolled, i.e. rotated about the view axis. The raw pose quality may be sufficient for some applications (e.g. location) but causes problems for others (e.g. 3D scanning, seamless augmented reality).
I was disappointed when I saw this. But if Tango is attempting to measure motion by using the fisheye camera to correct inertial motion prediction - and not by using stereo vision between the fisheye and color cameras - then that is a really hard problem. And the reason for doing that would be to stay within CPU/GPU/RAM/latency/battery budgets to leave something for applications. So after consideration, while I remain disappointed, I can understand it.
I am hopeful that Tango will improve their pose algorithm over time, but I suspect that applications that depend on precise tracking will still have to add their own corrections, e.g. via stereo, structure from motion, point cloud correlation, etc.
Point clouds should be viewed as statistically accurate, not exactly accurate: there is a distance estimation error range that is a function of distance and surface characteristics. A Tango fixed in a specific location will not return a constant point cloud, and rotating the device can cause apparent drift; it really isn't drift, though, it's just that the error rotates along with the Tango.

Project Tango Camera Specifications

I've been developing a virtual camera app for depth cameras and I'm extremely interested in the Tango project. I have several questions regarding the cameras on board. I can't seem to find these specs anywhere in the developer section or forums, so I understand completely if they can't be answered publicly. I thought I would ask regardless and see whether the current device is suitable for my app.
Are the depth and color images from the rgb/ir camera captured simultaneously?
What frame rates is the rgb/ir capable of? e.g. 30, 25, 24? And at what resolutions?
Does the motion tracking camera run in sync with the rgb/ir camera? If not what frame rate (or refresh rate) does the motion tracking camera run at? Also if they do not run on the same clock does the API expose a relative or an absolute time stamp for both cameras?
What manual controls (if any) are exposed for the color camera? Frame rate, gain, exposure time, white balance?
If the color camera is fully automatic, does it automatically drop its frame rate in low light situations?
Thank you so much for your time!
Edit: I'm specifically referring to the new tablet.
Some guessing
No, the actual image used to generate the point cloud is not the droid you want. I put up a picture on Google+ that shows what you get when you grab one of the images containing the IR pattern used to calculate depth (as an aside, it looks suspiciously like a Sierpinski curve to me).
Image frame rate is considerably higher than the point cloud frame rate, but it seems variable - probably a function of the load that Tango imposes.
Motion tracking, i.e. pose, is captured at a rate roughly 3x the point cloud rate.
Timestamps are done with the most fascinating double-precision number - in prior releases there were definitely artifacts/data in the LSBs of the double. I do a getPoseAtTime (callbacks are used for ADF localization) when I pick up a cloud, so supposedly I've got a pose aligned with the cloud (sketched after this answer). Images have very low timestamp correspondence with the pose and cloud data. It's very important to note that all three Tango streams (pose, image, cloud) return timestamps.
Don't know about camera controls yet - still wedging OpenCV into the cloud services :-) Low light will be interesting: anecdotal data indicates that Tango senses a wider visual spectrum than we do, which makes me wonder whether fiddling with the camera at the point of capture to change image quality, e.g. dropping the frame rate, might cause Tango problems.
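On the timestamp point above, here is a minimal sketch of pulling a pose aligned with a depth cloud via getPoseAtTime, assuming the standard Tango C API types (the helper name is just for illustration):

#include <tango_client_api.h>

// Queries the pose service at the cloud's own timestamp so the pose and the
// depth data refer to the same instant.
static bool GetPoseForCloud(const TangoXYZij* xyz_ij, TangoPoseData* pose) {
  TangoCoordinateFramePair frames;
  frames.base = TANGO_COORDINATE_FRAME_START_OF_SERVICE;
  frames.target = TANGO_COORDINATE_FRAME_DEVICE;
  // All three streams (pose, image, cloud) report timestamps on the same
  // clock, so the cloud timestamp can be passed straight through.
  if (TangoService_getPoseAtTime(xyz_ij->timestamp, frames, pose) != TANGO_SUCCESS ||
      pose->status_code != TANGO_POSE_VALID) {
    return false;
  }
  return true;
}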

Responding to tilt of iPhone in Sprite Kit

I have been building a Sprite Kit game for quite some time now. Just recently I have been adding gyro/tilt functionality. Using the CMMotionManager, I've been able to access the numbers surprisingly easily. However, my problem arises as a result of how the acceleration.x values are stored.
You see, the way my game works, when the game starts the phone quickly calibrates itself to how it's currently being held, and then I respond to changes in the acceleration.x value (holding your phone in landscape orientation, this is equivalent to tilting the screen towards or away from you). However, laying your phone flat gives 1.0 and tilting it straight towards you gives 0.0, and the value wraps back through that range if you go beyond either extreme. So, if someone is sitting upright and their phone calibrates at 0.1, and they tilt the phone 0.2 downwards, the results will not be what they expect.
Is there any easy way to counteract this?
Why are you trying to make your own system for this? You shouldn't really be using the accelerometer values directly.
There is a class called CMAttitude that contains all the information about the orientation of the device.
This orientation is not taken raw from accelerometer data but uses a combination of the accelerometers, gyroscopes and magnetometer to calculate the current attitude of the device.
From this you can then take the roll, pitch and yaw values and use those instead of having to calculate them yourself.
Class documentation for CMAttitude.

PCL object reconstruction

I have a square table with four cameras (Xtion Pro), one at each corner.
I'm trying to reconstruct the complete point-cloud of an arbitrary object that is on the table.
I've calibrated the cameras: intrinsic parameters with a chessboard, and extrinsic parameters with a tag, as in ARToolkit.
The problem is that when I transform the point clouds from each camera's frame into the tag-defined frame, I get quite large errors.
How can I correct this error? I tried registration with ICP, with poor results.
How can I use the transformed cloud obtained this way as the initial guess for a fine registration? (See the sketch after this question.)
Any suggestion is appreciated!
Edit after D.J.Duff's comment:
I'm using the Xtion PRO version without the RGB camera, so I'm calibrating the IR camera. To do so, I covered the IR projector and performed the calibration on the IR stream, using the ready-to-use ROS calibration tool for the intrinsic parameters and PiTag tags for the extrinsic parameters.
I should manually align the cloud with a known object, but can this be automated? That is: if I use something like an L-shaped object with no orientation ambiguities, can I automate the registration process to obtain a better transform matrix?
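One possible answer to the fine-registration part: instead of pre-transforming the cloud and then running ICP from identity, feed the tag-derived extrinsic to ICP as its initial guess and let it refine the transform. A minimal PCL sketch, assuming pcl::PointXYZ clouds and an Eigen::Matrix4f pose estimate (the helper name and thresholds are illustrative):

#include <Eigen/Core>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

// Refines a tag-derived camera-to-reference transform with ICP.
Eigen::Matrix4f RefineExtrinsic(const Cloud::Ptr& source,
                                const Cloud::Ptr& target,
                                const Eigen::Matrix4f& tag_guess) {
  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(source);
  icp.setInputTarget(target);
  // Keep correspondences tight so ICP only refines, rather than re-solves,
  // the alignment; tune to the expected calibration error (here ~5 cm).
  icp.setMaxCorrespondenceDistance(0.05);
  icp.setMaximumIterations(50);
  icp.setTransformationEpsilon(1e-8);

  Cloud aligned;
  icp.align(aligned, tag_guess);  // the extrinsic seeds the optimization

  return icp.hasConverged() ? icp.getFinalTransformation() : tag_guess;
}

Using an L-shaped reference object as suggested above would work the same way: segment it out of each view, run the refinement against a model cloud of the object, and keep the resulting matrices as the per-camera extrinsics.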
