Disabling inertia tracker - google-project-tango

Is there any way to disable the "inertial motion sensors" for a program, so that it does not track the device's acceleration? I've noticed that if a user suddenly moves and then suddenly stops with the device, the motion tracking becomes inaccurate.

Even if you could, disabling the sensors is not a good idea. (When you say "disabling", I assume you mean resetting them to an initialization state rather than just ignoring the data stream.) The 3-axis accelerometer and gyroscope data are fused with position data to provide your relative location during motion tracking. You have no way of knowing which of these data streams is the source of the inaccuracy, and turning off acceleration alone would likely require re-calibrating all the sensors so that tracking (data fusion) remains accurate.
Replicate the error with as much data as possible (speed, stopping time, orientation of the tablet, distance to the nearest object, nearest-object characteristics, etc.) and report it to the Project Tango team.


Is there any way to get live telemetry data from the DJI Mavic 2 Zoom to another unit, say a computer?

As part of a course at my university, we've been given the task of taking live wind telemetry from a drone and feeding it to a neural network, so that it gives better estimates than a sensor alone.
The research we've done so far tells us that our drone, the DJI Mavic 2 Zoom, is only compatible with the Windows SDK, not the Onboard SDK.
Our question is simply: is there any way for us to send the raw wind speed and direction data from the drone's sensors to a computer?
Create an Android application with the DJI Mobile SDK and send the data from the MSDK to your computer over Wi-Fi.
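A minimal sketch of what that could look like (DJI Mobile SDK 4.x on Android; the host and port are placeholders, and the exact callback names should be checked against the MSDK version you use):

```java
// Sketch: stream flight-controller telemetry to a computer over a TCP socket.
// HOST/PORT are placeholders for your own machine on the same network.
import java.io.PrintWriter;
import java.net.Socket;

import dji.sdk.flightcontroller.FlightController;
import dji.sdk.products.Aircraft;
import dji.sdk.sdkmanager.DJISDKManager;

public class TelemetryStreamer {
    private static final String HOST = "192.168.1.50"; // your computer
    private static final int PORT = 9000;

    public void start() {
        Aircraft aircraft = (Aircraft) DJISDKManager.getInstance().getProduct();
        if (aircraft == null) return; // no product connected yet
        FlightController fc = aircraft.getFlightController();

        new Thread(() -> { // network I/O off the UI thread
            try (Socket socket = new Socket(HOST, PORT);
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
                // The state callback fires roughly 10 times per second.
                fc.setStateCallback(state -> out.printf("%f,%f,%f,%f%n",
                        state.getVelocityX(), state.getVelocityY(),
                        state.getVelocityZ(), state.getAttitude().yaw));
                Thread.sleep(Long.MAX_VALUE); // keep the socket open for the demo
            } catch (Exception e) {
                e.printStackTrace();
            }
        }).start();
    }
}
```

On the computer, anything that accepts a TCP connection (even `nc -l 9000`) will receive the comma-separated samples.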
The SDK only provides the wind warning level (0, 1, or 2). It does not provide any information about the direction from which the wind is blowing or the actual wind speed.
The aircraft tries to stay in its current position on its own, even in moderate wind. However, the drone does not tell the user how hard it has to work, or in which direction, to counteract the wind.
You're probably better off accessing real-time wind information for your position from a weather service on the internet, if one is available in your country.
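If you do go the weather-service route, the fetch itself is simple; here's a sketch with a deliberately hypothetical URL and response format (substitute whichever service and API key you actually use):

```java
// Sketch: fetch current wind data from a weather service over HTTP.
// The endpoint below is a hypothetical placeholder, not a real API.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class WindFetcher {
    public static String fetchWindJson(double lat, double lon) throws Exception {
        URL url = new URL("https://api.example-weather.test/current"
                + "?lat=" + lat + "&lon=" + lon + "&fields=wind");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) body.append(line);
        } finally {
            conn.disconnect();
        }
        return body.toString(); // parse wind speed/direction out of this JSON
    }
}
```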
I've written a wind-meter app.
The best method is:
Fly against the wind.
In VirtualStick mode, use angle mode and set pitch and roll to 0. This lets the drone drift with the wind.
Slowly rotate the yaw.
Measure the speed; when it stops increasing, the GPS speed gives you the wind speed and direction (see the sketch below).
Warning: in strong wind you have to fly against the wind for quite a while.
The yaw rotation is needed because the drone is never exactly level, so it picks up speed in one direction; rotating cancels that out.
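A rough sketch of that virtual-stick setup (MSDK 4.x class names; treat the details as assumptions to verify against your SDK version):

```java
// Sketch: let the aircraft drift with the wind using virtual sticks.
// Angle mode with pitch/roll = 0 means "no lean", so the wind carries
// the drone; the GPS velocity then approximates the wind vector.
import dji.common.flightcontroller.virtualstick.FlightControlData;
import dji.common.flightcontroller.virtualstick.RollPitchControlMode;
import dji.common.flightcontroller.virtualstick.VerticalControlMode;
import dji.common.flightcontroller.virtualstick.YawControlMode;
import dji.sdk.flightcontroller.FlightController;

public class WindDrift {
    public void startDrift(FlightController fc) {
        fc.setVirtualStickModeEnabled(true, null);
        fc.setRollPitchControlMode(RollPitchControlMode.ANGLE);
        fc.setYawControlMode(YawControlMode.ANGULAR_VELOCITY);
        fc.setVerticalControlMode(VerticalControlMode.VELOCITY);

        // pitch = 0, roll = 0 (drift), yaw = 5 deg/s (slow rotation),
        // vertical = 0 (hold altitude). Virtual-stick commands must be
        // re-sent continuously (around 10 Hz) or the aircraft exits the mode.
        FlightControlData data = new FlightControlData(0f, 0f, 5f, 0f);
        fc.sendVirtualStickFlightControlData(data, null);
        // Re-send `data` on a timer; once the GPS speed from the state
        // callback stops increasing, velocityX/velocityY give the wind vector.
    }
}
```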
Send the info to a server over the internet/Wi-Fi.
I've done this on an Android phone connected to the controller.
The Windows API doesn't seem to support virtual sticks, which I find strange. In that case it has to be done on Android or iOS and transmitted to a server. I might be wrong, though, since I've never used the Windows API.

Is (experimental) Drift-Correction working yet?

I'm looking for more information on how to use drift correction correctly (using the Unity SDK).
On the Tango website it says "Drift-corrected frames come through the Area Description reference frame", that the frame pair Start Of Service -> Device "does not include drift correction" and for Area Description -> Start Of Service that it "provides updates only when a localization event or a drift correction occurs".
The way I'd like to use a drift-corrected pose is like in the TangoPointCloud prefab, where depth points are multiplied by a matrix startServiceTDevice that results from the frame pair SoS -> Device. Assuming that the drift-corrected frame is the AD frame, I'd need SoS -> AD. Since only AD -> SoS is available, I tried that one and its inverse. The resulting pose is too small to make any sense, though (even if I were using it in the wrong direction, the translation shouldn't be close to zero after I had been walking around). Then I considered that the AD frame might actually be something like a drift-corrected Start of Service, but again I can't find any significant/visible difference between AD -> Device and SoS -> Device, and definitely no loop closures in it. I'm requesting and applying poses after finishing my scan, so drift should have been detected by then.
The Tango website further says that "There will be a period after Startup during which drift-corrected frames are not available.", yet the AD -> SoS pose is available (and valid) from the beginning, and I couldn't yet produce a situation where it wasn't (e.g. no motion, rapid motion...).
Is drift correction working at all? Or am I using it all wrong?
PS: The latest Stack Overflow post on this makes it sound as if drift correction were only for relocalization after tracking loss. However, I find this hard to believe, since the Tango website describes drift correction as: "When the device sees a place it knows it has seen earlier in your session, it realizes it has traveled in a loop and adjusts its path to be more consistent with its previous observations."
Drift correction is working as an experimental feature at the moment; there are corner cases where it breaks. I will go into more detail below.
To use the drift-corrected pose, you need the ADF_T_Device frame pair (ADF is the base frame, Device is the target frame). When using the drift-corrected pose to project points into world space, you don't need to compose ADF_T_SS * SS_T_Device; you can use the ADF_T_Device frame pair directly. In Unity, just check "use area description pose" on the PointCloud prefab.
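Outside Unity, the equivalent query with the Java API would look roughly like this (a sketch; the frame-pair constants come from TangoPoseData, and drift correction is assumed to be enabled in the Tango config):

```java
// Sketch (Tango Java API): query the drift-corrected pose, i.e. the pose
// of the Device frame expressed in the Area Description (ADF) base frame.
import com.google.atap.tangoservice.Tango;
import com.google.atap.tangoservice.TangoCoordinateFramePair;
import com.google.atap.tangoservice.TangoPoseData;

public class DriftCorrectedPose {
    // timestamp 0.0 means "latest available"; pass a point-cloud timestamp
    // to align the pose with a specific depth frame instead.
    public static TangoPoseData get(Tango tango, double timestamp) {
        TangoCoordinateFramePair adfToDevice = new TangoCoordinateFramePair(
                TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION, // base
                TangoPoseData.COORDINATE_FRAME_DEVICE);          // target
        return tango.getPoseAtTime(timestamp, adfToDevice);
    }
}
```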
Corner cases that break drift correction:
The user shakes the device right after starting the experience.
Under the hood, drift correction constructs a denser, more accurate version of an ADF. If the user covers the camera or shakes the device at the very beginning, no ADF (or features) will be saved in the buffer. The API can then get into a state where it never returns a valid pose for the ADF_T_Device frame pair.
The device loses tracking and the user moves to a new space without relocalizing.
This is similar to the first case. If the user moves to a new space without relocalizing after tracking is lost, the device will never relocalize, so no valid pose will be available through the ADF_T_Device frame pair.
The drift correction API is still experimental; we are trying to address the above issues at the API level as well.
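Given those corner cases, it seems prudent to check pose validity and fall back to the uncorrected Start-of-Service pair while ADF_T_Device is invalid. A sketch, reusing the hypothetical DriftCorrectedPose helper from above:

```java
// Sketch: fall back to the uncorrected Start-of-Service pose while the
// drift-corrected ADF_T_Device pose is invalid (e.g. the corner cases above).
public static TangoPoseData bestPose(Tango tango) {
    TangoPoseData pose = DriftCorrectedPose.get(tango, 0.0);
    if (pose.statusCode != TangoPoseData.POSE_VALID) {
        TangoCoordinateFramePair sosToDevice = new TangoCoordinateFramePair(
                TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE,
                TangoPoseData.COORDINATE_FRAME_DEVICE);
        pose = tango.getPoseAtTime(0.0, sosToDevice); // uncorrected fallback
    }
    return pose;
}
```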

Transforming and registering point clouds

I'm starting to develop with the Project Tango API.
I need to save the point cloud data that I get in the OnXyzIjAvailable event; to do this, I started from your "PointCloudJava" example and wrote the point cloud coordinates to individual files (an AsyncTask is started for this purpose).
So I have one file with xyz coordinates for each event. In the same event I get the corresponding transformation matrix (mRenderer.getModelMatCalculator().GetPointCloudModelMatrixCopy()).
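For reference, the transform step itself might look like the following (a minimal sketch, assuming the column-major 4x4 model matrix that the renderer returns and points packed as x,y,z triples):

```java
// Sketch: apply a 4x4 column-major model matrix (OpenGL convention, as
// returned by getPointCloudModelMatrixCopy()) to each xyz point.
import android.opengl.Matrix;

public class CloudTransform {
    public static float[] transform(float[] modelMatrix, float[] xyz) {
        float[] out = new float[xyz.length];
        float[] in4 = new float[4];
        float[] out4 = new float[4];
        for (int i = 0; i < xyz.length; i += 3) {
            in4[0] = xyz[i]; in4[1] = xyz[i + 1]; in4[2] = xyz[i + 2];
            in4[3] = 1f; // homogeneous coordinate for a point
            Matrix.multiplyMV(out4, 0, modelMatrix, 0, in4, 0);
            out[i] = out4[0]; out[i + 1] = out4[1]; out[i + 2] = out4[2];
        }
        return out;
    }
}
```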
Then I imported all this data (the xyz point clouds with their corresponding transformation matrices; each transformation matrix is applied to its point cloud), but the point clouds don't match exactly; they are close to each other but do not overlap exactly.
My questions are:
- Why don't the individual point clouds match?
- What should I do to get them to match?
I've also noticed the following, which is probably related to the above problem: using the Project Tango Explore application (Area Learning), I can see my position, but it is constantly in motion even when I don't move.
What is the problem? Is calibration necessary?
Poses delivered by Tango have a non-negligible amount of drift. I captured a sample graph of pose position while my tablet was in its stand observing a static scene; ideally the traces should be flat, but they wander instead.
When we couple this drift with tracking errors while the device is actually moving, we get noticeable registration issues. I see this especially when the device is rolled, i.e. rotated about the view axis. The raw pose quality may be sufficient for some applications (e.g. location) but causes problems for others (e.g. 3D scanning, seamless augmented reality).
I was disappointed when I saw this. But if Tango is attempting to measure motion by using the fisheye camera to correct inertial motion prediction, rather than by using stereo vision between the fisheye and color cameras, then that is a really hard problem, and the reason for doing it that way would be to stay within CPU/GPU/RAM/latency/battery budgets and leave something for applications. So, after consideration, while I remain disappointed, I can understand it.
I am hopeful that Tango will improve its pose algorithm over time, but I suspect that applications that depend on precise tracking will still have to add their own corrections, e.g. via stereo, structure from motion, or point cloud correlation.
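To illustrate the point-cloud-correlation idea: even a crude correction pass (translation-only, brute-force nearest neighbors; a real pipeline would run full ICP with rotation and iterate) can shrink the residual offset between two roughly pre-aligned clouds. A toy sketch:

```java
// Toy sketch: estimate the residual translation between two roughly
// pre-aligned clouds by averaging nearest-neighbor offsets.
// Full ICP would also solve for rotation and iterate to convergence.
public class CrudeAlign {
    // Both clouds are packed as x,y,z triples.
    public static float[] residualTranslation(float[] ref, float[] src) {
        int n = src.length / 3;
        if (n == 0) return new float[3];
        double dx = 0, dy = 0, dz = 0;
        for (int i = 0; i < src.length; i += 3) {
            int best = 0;
            double bestD = Double.MAX_VALUE;
            for (int j = 0; j < ref.length; j += 3) { // brute-force NN search
                double ex = ref[j] - src[i];
                double ey = ref[j + 1] - src[i + 1];
                double ez = ref[j + 2] - src[i + 2];
                double d = ex * ex + ey * ey + ez * ez;
                if (d < bestD) { bestD = d; best = j; }
            }
            dx += ref[best] - src[i];
            dy += ref[best + 1] - src[i + 1];
            dz += ref[best + 2] - src[i + 2];
        }
        // Average offset to apply to src as a correction.
        return new float[] { (float) (dx / n), (float) (dy / n), (float) (dz / n) };
    }
}
```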
Point clouds should be viewed as statistically accurate, not exactly accurate: there is a distance-estimation error range that is a function of distance and surface characteristics. A Tango fixed in a specific location will not return a constant point cloud. Rotation of the device can cause apparent drift, but it really isn't drift; the error is simply rotating along with the Tango.

Project Tango Camera Specifications

I've been developing a virtual camera app for depth cameras and I'm extremely interested in the Tango project. I have several questions regarding the on-board cameras. I can't seem to find these specs anywhere in the developer section or forums, so I understand completely if they can't be answered publicly. I thought I would ask regardless and see whether the current device is suitable for my app.
Are the depth and color images from the rgb/ir camera captured simultaneously?
What frame rates is the rgb/ir camera capable of (e.g. 30, 25, 24)? And at what resolutions?
Does the motion tracking camera run in sync with the rgb/ir camera? If not, what frame rate (or refresh rate) does the motion tracking camera run at? Also, if they do not run on the same clock, does the API expose a relative or an absolute timestamp for both cameras?
What manual controls (if any) are exposed for the color camera? Frame rate, gain, exposure time, white balance?
If the color camera is fully automatic, does it automatically drop its frame rate in low light situations?
Thank you so much for your time!
Edit: I'm specifically referring to the new tablet.
Some guesses:
No, the actual image used to generate the point cloud is not the droid you want. I put up a picture on Google+ that shows what you get when you grab one of the images containing the IR pattern used to calculate depth (an aside: it looks suspiciously like a Sierpinski curve to me).
Image frame rate is considerably higher than point cloud frame rate, but seems variable - probably a function of the load that Tango imposes.
Motion tracking, i.e. pose, is captured at a rate roughly 3x the point cloud rate.
Timestamps are done with a fascinating double-precision number - in prior releases there were definitely artifacts/data in the LSBs of the double. I do a getPoseAtTime (callbacks are used for ADF localization) when I pick up a cloud, so supposedly I've got a pose aligned with the cloud; images have very low timestamp correspondence with pose and cloud data. It's very important to note that all three Tango streams (pose, image, cloud) return timestamps (see the sketch after this list).
I don't know about camera controls yet - I'm still wedging OpenCV into the cloud services :-) Low light will be interesting - anecdotal data indicates that Tango sees a wider visual spectrum than we do, which makes me wonder whether fiddling with the camera at the point of capture to change image quality, e.g. dropping the frame rate, might cause Tango problems.
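The timestamp matching mentioned above looks roughly like this (a sketch of the Tango Java API pattern; call the helper from your OnTangoUpdateListener):

```java
// Sketch: given a just-received point cloud, request the pose at the
// cloud's own timestamp so depth and pose refer to the same instant.
import com.google.atap.tangoservice.Tango;
import com.google.atap.tangoservice.TangoCoordinateFramePair;
import com.google.atap.tangoservice.TangoPoseData;
import com.google.atap.tangoservice.TangoXyzIjData;

public class CloudPoseMatcher {
    // Call this from onXyzIjAvailable(TangoXyzIjData xyzIj).
    public static TangoPoseData poseForCloud(Tango tango, TangoXyzIjData xyzIj) {
        TangoCoordinateFramePair pair = new TangoCoordinateFramePair(
                TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE,
                TangoPoseData.COORDINATE_FRAME_DEVICE);
        // Pose selected/interpolated for the exact depth timestamp:
        return tango.getPoseAtTime(xyzIj.timestamp, pair);
    }
}
```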

Using windows phone combined motion api to track device position

I'd like to track the position of the device with respect to an initial position, ideally with high accuracy, for motions at a small scale (say < 1 meter). The best bet seems to be motionReading.SensorReading.DeviceAcceleration. I tried this but ran into a few problems. Apart from the noisy readings (which I was expecting and can tolerate), I see some behavior that is conceptually wrong. For example, if I start from rest, move the phone around, and bring it back to rest, periodically updating the velocity vector along all dimensions in the process, I would expect the magnitude of the final velocity to be very small (ideally 0). But I don't see that. I have extensively reviewed the available help, including the official MSDN pages, but I don't see any examples where the position/velocity of the device is updated using the acceleration vector. Is the acceleration vector that the API returns (at least in theory) supposed to be the rate of change of velocity, or something else? (FYI: my device does not have a gyroscope, so the API will fall back to the low-accuracy version.)
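For what it's worth, the naive double integration described above looks like the sketch below (plain Java rather than the Windows Phone API, purely to illustrate the math). The reason the velocity never returns to zero is that integration accumulates every error: a constant accelerometer bias b alone produces a velocity error of b·t and a position error of b·t²/2, so without a gyroscope (to separate gravity from motion) or some external correction, drift is unavoidable.

```java
// Sketch (plain Java, not the Windows Phone API): naive dead reckoning by
// integrating acceleration twice. A constant bias b yields a velocity error
// of b*t and a position error of b*t^2/2, so tiny offsets blow up quickly.
public class DeadReckoning {
    double vx, vy, vz; // velocity estimate (m/s)
    double px, py, pz; // position estimate (m)

    // ax..az: linear acceleration sample (m/s^2); dt: seconds since last sample.
    public void update(double ax, double ay, double az, double dt) {
        vx += ax * dt; vy += ay * dt; vz += az * dt;
        px += vx * dt; py += vy * dt; pz += vz * dt;
    }
}
```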
