Is (experimental) Drift-Correction working yet? - google-project-tango

I'm looking for more information on how to use drift-correction correctly (using Unity SDK).
The Tango website says that "Drift-corrected frames come through the Area Description reference frame", that the frame pair Start of Service -> Device "does not include drift correction", and that Area Description -> Start of Service "provides updates only when a localization event or a drift correction occurs".
The way I'd like to use a drift-corrected pose is as in the TangoPointCloud prefab, where depth points are multiplied by a matrix startServiceTDevice that results from the frame pair SoS -> Device. Assuming the drift-corrected frame is the AD frame, I'd need SoS -> AD. Since only AD -> SoS is available, I tried that one and its inverse. The resulting pose is too small to make any sense, though (even when used in the wrong direction, the translation shouldn't be close to zero after I had been walking around). Then I considered that the AD frame might actually be something like a drift-corrected Start of Service, but again I can't find any significant/visible difference between AD -> Device and SoS -> Device, and definitely no loop closures. I'm requesting and applying poses after finishing my scan, so drift should have been detected by then.
The Tango website further says that "There will be a period after Startup during which drift-corrected frames are not available.", yet the AD -> SoS pose is available (and valid) from the beginning, and I could not yet produce a situation where it wasn't (e.g. no motion, rapid motion...).
Is drift correction working at all? Or am I using it all wrong?
PS: The latest Stack Overflow post makes it sound as if drift correction were only for relocalization after tracking loss. However, I find this hard to believe, since the Tango website describes drift correction as: "When the device sees a place it knows it has seen earlier in your session, it realizes it has traveled in a loop and adjusts its path to be more consistent with its previous observations."

Drift correction is working as an experimental feature at the moment; there are corner cases in which it will break. I will go into more detail below.
To use a drift-corrected pose, you need the ADF_T_Device frame pair (ADF is the base frame, Device is the target frame). In the example of using a drift-corrected pose to project points into world space, you don't need the Adf_T_ss * ss_T_device composition; instead, use the ADF_T_device pose directly. In Unity, you can just check the "use area description pose" option on the PointCloud prefab.
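To make the frame algebra concrete, here is a minimal sketch with plain 4x4 homogeneous transforms (numpy, with made-up numbers; nothing here calls the Tango API). It shows why querying ADF -> Device directly is equivalent to composing ADF -> SoS with SoS -> Device, so the drift-corrected pose can replace startServiceTDevice in a single step:

```python
import numpy as np

def make_pose(rot_z_deg, translation):
    """Build a 4x4 homogeneous transform: rotation about Z plus a translation."""
    t = np.radians(rot_z_deg)
    m = np.eye(4)
    m[:2, :2] = [[np.cos(t), -np.sin(t)],
                 [np.sin(t),  np.cos(t)]]
    m[:3, 3] = translation
    return m

# Hypothetical poses (illustrative numbers only):
adf_T_ss = make_pose(10, [0.1, 0.0, 0.0])     # drift-correction offset of SoS in ADF
ss_T_device = make_pose(45, [1.0, 0.5, 0.2])  # raw motion-tracking pose

# Asking for ADF -> Device directly is equivalent to composing the two:
adf_T_device = adf_T_ss @ ss_T_device

# Projecting a depth point (given in the device frame) into world space:
point_device = np.array([0.0, 0.0, 2.0, 1.0])  # homogeneous coordinates
point_world = adf_T_device @ point_device
```

When drift correction has fired, adf_T_ss differs from the identity, so points projected through ADF -> Device land in corrected world coordinates while SoS -> Device keeps the accumulated drift.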
Corner cases that break drift correction:
User shakes the device right after starting the experience.
Under the hood, drift correction constructs a denser and more accurate version of an ADF. If the user covers the camera or shakes the device at the very beginning, no ADF (or features) will be saved in the buffer. The API can then get into a state where it never returns a valid pose from the ADF_T_Device frame pair.
The device loses tracking, and the user moves to a new space without relocalizing.
This is similar to the first case. If the user moves to a new space without relocalizing after losing tracking, the device will never relocalize, so no valid pose will be available through the ADF_T_device frame pair.
The drift correction API is still experimental; we are trying to address the above issues at the API level as well.

Related

Pose drifting in featureless environment

The Tango pose drifts while I hold the device still when the camera faces a region without many distinct features, e.g. a white wall. Typically the drift direction is away from the target the camera is facing. I understand it is hard for the device to localize itself under such conditions due to the lack of landmarks. However, is there a mechanism to let the device report that it is having difficulty getting a reliable pose? Then I could tell it to stop doing something until it relocalizes by going back to an area with rich landmarks or features.
Note: the pose status is still showing valid in this case.
Please check the tango_client_api header; you are looking for TANGO_POSE_INVALID.
Another option is to integrate the UX library; see the exception-handling section:
https://developers.google.com/project-tango/ux/ux-framework#exception_handling
Or you can write your own handler for each module.
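The pattern the answer points to can be sketched without the real API. The types below are stand-ins modeled on the Tango C API's pose status codes (TANGO_POSE_INVALID and friends); the idea is simply to drop pose updates whose status is not valid before your app consumes them:

```python
from dataclasses import dataclass
from enum import Enum

class PoseStatus(Enum):
    """Stand-in modeled on the Tango C API's pose status codes."""
    INITIALIZING = 0
    VALID = 1
    INVALID = 2
    UNKNOWN = 3

@dataclass
class PoseSample:
    """Stand-in for one pose update (timestamp, translation, tracker status)."""
    timestamp: float
    translation: tuple
    status: PoseStatus

def usable_poses(samples):
    """Keep only the samples the tracker itself marked as valid."""
    return [s for s in samples if s.status is PoseStatus.VALID]

stream = [
    PoseSample(0.0, (0.00, 0, 0), PoseStatus.INITIALIZING),
    PoseSample(0.1, (0.01, 0, 0), PoseStatus.VALID),
    PoseSample(0.2, (3.50, 0, 0), PoseStatus.INVALID),  # e.g. after losing tracking
    PoseSample(0.3, (0.02, 0, 0), PoseStatus.VALID),
]
good = usable_poses(stream)  # the INITIALIZING and INVALID samples are dropped
```

Note the caveat from the question, though: during featureless-wall drift the status can still read valid, so this check catches tracking loss but not slow drift.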

Multiple Tangos Looking at one location - IR Conflict

I am getting my first Tango in the next day or so; I've worked a little with Occipital's Structure Sensor, which is where my background in depth-perceiving cameras comes from.
Has anyone used multiple Tangos at once (let's say 6-10), looking at the same part of a room, using depth for identification and placement of 3D characters/content? I have been told that multiple devices looking at the same part of a room will confuse each Tango, as they will see the other Tangos' IR dots.
Thanks for your input.
Grisly
I have not tried to use several Tangos, but I have tried to use my Tango in a room where I had a Kinect 2 sensor, which caused the Tango to go bananas. The Tango seems to have a lower-intensity IR projector in comparison, but I would still say it is a reasonable assumption that it will not work.
It might work at certain angles, but I doubt you will be able to find a configuration of that many cameras without any of them interfering with each other. If you do make it work, however, I would be very interested to know how.
You could lower the depth camera rate (it defaults to 5 frames per second, I believe) to avoid conflicts, but that might not be desirable given what you're using the system for.
Alternatively, only enable the depth camera when placing your 3D models on surfaces, and disable it when it is not needed. This also helps conserve CPU and battery power.
It did not work. The Occipital Structure Sensor, on the other hand, did work (multiple devices in one place)!

Disabling inertia tracker

Is there any way to disable the "inertial motion sensors" for a program, so that my program does not track the device's acceleration? I've noticed that if a user suddenly moves and then suddenly stops the device, the motion tracking becomes inaccurate.
Even if you could, "disabling the sensors" is not a good idea. And when you say "disabling" I assume you mean setting them to an initialization state rather than just ignoring the data stream. The 3-axis accelerometer and gyroscope data are fused with position data to provide your relative location during motion tracking. You have no way of knowing which of these data streams is the source of the inaccuracy, and just turning off acceleration would likely require a re-calibration of all sensors so that tracking (data fusion) is accurate.
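To illustrate why the streams cannot simply be switched off independently, here is a toy 1-D complementary filter (this is not Tango's actual fusion pipeline; the 0.98 coefficient, 50 Hz rate, and bias value are arbitrary). The gyro term provides short-term responsiveness while the accelerometer term continually pulls the estimate back toward a bias-free reference:

```python
# Toy 1-D complementary filter: fuse a gyro rate (responsive, but its bias
# drifts) with an accelerometer-derived angle (noisy, but bias-free).
dt = 0.02      # 50 Hz sample interval (assumed)
alpha = 0.98   # typical but arbitrary blending coefficient

def fuse(angle, gyro_rate, accel_angle):
    """One filter step: integrate the gyro, then correct toward the accelerometer."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

angle = 0.0
for _ in range(500):
    # Device held still: the gyro reports a small bias, the accelerometer reads 0.
    angle = fuse(angle, gyro_rate=0.01, accel_angle=0.0)

# The accelerometer term keeps the gyro bias from accumulating without bound;
# remove either stream and the estimate either drifts away or turns noisy.
```

With both streams the error settles at a small bounded value (here about alpha * bias * dt / (1 - alpha) = 0.0098 rad); with the correction term removed it would grow without limit, which is the intuition behind the answer above.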
Replicate the error with as much data as possible (speed, stopping time, orientation of the tablet, distance to the nearest object, nearest object characteristics, etc.) and report it to the project Tango team.

Transforming and registering point clouds

I’m starting to develop with Project Tango API.
I need to save the point cloud data that I get in the OnXyzIjAvailable event. To do this, I started from the "PointCloudJava" example and wrote the point cloud coordinates to individual files (an AsyncTask is started for this purpose).
So I have one file with xyz coordinates for each event. On the same event I also get the corresponding transformation matrix (mRenderer.getModelMatCalculator().GetPointCloudModelMatrixCopy()).
Then I imported all this data (each xyz point cloud with its corresponding transformation matrix; the transformation matrix is applied to the point cloud), but the point clouds don't match exactly; they are close to each other but not overlapping exactly.
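The transform-and-merge step being described can be sketched in isolation (numpy; the toy matrices stand in for the output of GetPointCloudModelMatrixCopy()). With perfect poses, the same physical point observed from two frames lands at the same world coordinate, and any residual offset between the transformed clouds is pose error:

```python
import numpy as np

def transform_cloud(points_xyz, model_matrix):
    """Apply a 4x4 model matrix to an (N, 3) array of points."""
    n = points_xyz.shape[0]
    homogeneous = np.hstack([points_xyz, np.ones((n, 1))])  # (N, 4)
    return (homogeneous @ model_matrix.T)[:, :3]

# Two toy frames observing the same physical point; the second frame was
# captured after the device moved +0.5 m along x.
frame_a_points = np.array([[1.0, 0.0, 2.0]])
frame_b_points = np.array([[0.5, 0.0, 2.0]])

model_a = np.eye(4)       # frame A captured at the origin
model_b = np.eye(4)
model_b[0, 3] = 0.5       # frame B's pose: translated +0.5 m in x

merged_a = transform_cloud(frame_a_points, model_a)
merged_b = transform_cloud(frame_b_points, model_b)
# With perfect poses both observations land on the same world coordinate;
# pose drift shows up as a residual offset between the transformed clouds.
```

If your merged clouds are "close but not overlapping", the transform step itself is probably correct and the per-frame poses are slightly off, which matches the drift discussion in the answer.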
My questions are:
- Why don't the individual point clouds match?
- What should I do to make them match?
I've also noticed the following, which is probably related to the above problem: using the Project Tango Explore application (Area Learning), I can see my position, but it is constantly in motion even when I don't move.
What is the problem? Is calibration necessary?
Poses delivered by Tango have a non-negligible amount of drift. A sample graph of pose position while my tablet was in its stand observing a static scene shows this clearly (ideally the traces should be flat).
When we couple this drift with tracking errors while the device is actually moving, we get noticeable registration issues. I see this especially when the device is rolled, i.e. rotated about the view axis. The raw pose quality may be sufficient for some applications (e.g. location) but causes problems for others (e.g. 3D scanning, seamless augmented reality).
I was disappointed when I saw this. But if Tango is attempting to measure motion by using the fisheye camera to correct inertial motion prediction - and not by using stereo vision between the fisheye and color cameras - then that is a really hard problem. And the reason for doing that would be to stay within CPU/GPU/RAM/latency/battery budgets to leave something for applications. So after consideration, while I remain disappointed, I can understand it.
I am hopeful that Tango will improve their pose algorithm over time, but I suspect that applications that depend on precise tracking will still have to add their own corrections, e.g. via stereo, structure from motion, point cloud correlation, etc.
Point clouds should be viewed as statistically accurate, not exactly accurate: there is a distance-estimation error range that is a function of distance and surface characteristics. A Tango fixed in a specific location will not return a constant point cloud. Rotation of the device can cause apparent drift, but it really isn't drift; the error is simply rotating along with the Tango.

Using windows phone combined motion api to track device position

I'd like to track the position of the device with respect to an initial position with (ideally) high accuracy, for motions at a small scale (say < 1 meter). The best bet seems to be using motionReading.SensorReading.DeviceAcceleration. I tried this but ran into a few problems. Apart from the noisy readings (which I was expecting and can tolerate), I see some behavior that is conceptually wrong. For example, if I start from rest, move the phone around, and bring it back to rest, periodically updating the velocity vector along all dimensions in the process, I would expect the magnitude of the velocity to be very small (ideally 0), but I don't see that. I have extensively reviewed the available help, including the official MSDN pages, but I don't see any examples where the position/velocity of the device is updated using the acceleration vector. Is the acceleration vector that the API returns (at least in theory) supposed to be the rate of change of velocity, or something else? (FYI: my device does not have a gyroscope, so the API will be the low-accuracy version.)
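What you are seeing is the expected failure mode of double-integrating accelerometer data: any constant bias in the readings integrates into a velocity error that grows linearly with time, so the integrated velocity does not return to zero even when the device does. A minimal simulation (the sample rate and bias value are made up):

```python
# Simulate a move-and-return motion sampled at 50 Hz (assumed rate), then
# integrate acceleration into velocity exactly as described in the question.
dt = 0.02     # sample interval, s
bias = 0.01   # constant accelerometer bias, m/s^2 (made up, but plausible in scale)

# True acceleration: speed up for 0.5 s, slow down for 0.5 s, end at rest.
true_accel = [1.0] * 25 + [-1.0] * 25        # net velocity change is zero
measured = [a + bias for a in true_accel]    # the sensor adds a small bias

true_v = sum(a * dt for a in true_accel)     # ~0: the device really stopped
est_v = sum(a * dt for a in measured)        # bias * elapsed time = 0.01 m/s

# The velocity error grows linearly with time (and the position error
# quadratically), so the estimate never settles back to zero.
```

Without a gyroscope (or some external reference such as Tango's visual features) there is no way to observe and subtract this bias, which is why acceleration-only dead reckoning degrades so quickly.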
