Training the area learning in Project Tango with other data sources

Is it possible to train the area-learning module on a Project Tango device with data other than what is automatically captured through the sensors?
I am asking because I want to teach the area-learning algorithm a preexisting 3D model, thereby making object recognition possible.
I am not asking for a high-level ability to convert any 3D model to an ADF. If I have to generate several point clouds and color buffers myself based on the 3D model, that would also work.
I am also not asking to know any Google secrets about the internal format of ADFs, only to have some way to put data in there.

Currently, there's no way to do that through the Tango public APIs. The entire pipeline, both learning and relocalization, has to run on the device.

Related

How to animate 3d Reconstructed face models

I'm making an app that is based on the Unity3D game engine and targeted at the iOS and Android platforms. The core function of this app is that users need to input a 2D frontal face photo, and the app will produce a 3D reconstructed face model that looks like the 2D face in the photo. I did some research and found this algorithm on GitHub:
https://github.com/patrikhuber/eos.
I implemented the algorithm in Unity3D and it looked good. But the face it produces can't be animated (because it's a triangle mesh). What I need for this app is an animated face that can show all kinds of human expressions. The best software I found for this kind of purpose is FaceGen, but their technology is not suitable for mobile devices. So I want to ask if there are any articles, references, or forums that discuss this kind of problem.
A possible way to render 3D data in real time is to use PCL (Point Cloud Library):
http://www.pointclouds.org/news/2012/05/29/pcl-goes-mobile-with-ves-and-kiwi/
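If you want to get a feel for PCL on a desktop first, a minimal viewer is only a few lines. The sketch below is my own illustration, not code from the linked article; it assumes PCL is installed and that the reconstructed points have been exported to a PCD file (the file name "face.pcd" is just a placeholder).
// Minimal desktop PCL viewer sketch. "face.pcd" is a placeholder file name.
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/visualization/cloud_viewer.h>

int main() {
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  // Load the exported point cloud; returns -1 if the file can't be read.
  if (pcl::io::loadPCDFile<pcl::PointXYZ>("face.pcd", *cloud) == -1) {
    return -1;
  }
  pcl::visualization::CloudViewer viewer("Reconstructed face");
  viewer.showCloud(cloud);
  while (!viewer.wasStopped()) {
    // keep the window open until the user closes it
  }
  return 0;
}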
Hope it helps.

Google Tango - Load 3D Reconstruction and Localize

Hi, I'm new to Google Tango and Stack Overflow, and I'm doing a group project for school using the Tango device. I've searched the site and the web in general, but can't seem to find anything quite like what I'm looking for.
We've successfully used the Area Learning feature to re-localize within a space and have used the Constructor to create a 3D model of a space. We want to load that model and localize within it so that we can use raycasting to measure the distance to walls.
Is there a way to do this or a better idea of how to obtain the wall (and other obstructions) distances?
thanks,
Ryan
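As a side note on the raycasting part (this is a general-purpose sketch, not a Tango API): once the reconstructed mesh is loaded and the device pose is known, the distance to a wall along a view direction is the nearest ray/triangle hit. The sketch below uses the Moller-Trumbore test; the Triangle list, device origin, and unit-length direction vector are assumed to come from your own mesh loading and pose code.
// Sketch: distance from the device position to the nearest mesh triangle
// along a unit direction, via Moller-Trumbore ray/triangle intersection.
#include <vector>

struct Vec3 { float x, y, z; };
struct Triangle { Vec3 v0, v1, v2; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
  return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns the hit distance t along the ray, or -1 if the ray misses.
float RayTriangle(Vec3 origin, Vec3 dir, const Triangle& tri) {
  const float kEps = 1e-6f;
  Vec3 e1 = sub(tri.v1, tri.v0);
  Vec3 e2 = sub(tri.v2, tri.v0);
  Vec3 h = cross(dir, e2);
  float a = dot(e1, h);
  if (a > -kEps && a < kEps) return -1.0f;  // ray parallel to triangle
  float f = 1.0f / a;
  Vec3 s = sub(origin, tri.v0);
  float u = f * dot(s, h);
  if (u < 0.0f || u > 1.0f) return -1.0f;
  Vec3 q = cross(s, e1);
  float v = f * dot(dir, q);
  if (v < 0.0f || u + v > 1.0f) return -1.0f;
  float t = f * dot(e2, q);
  return t > kEps ? t : -1.0f;
}

// Distance to the nearest obstruction: smallest positive hit over all triangles.
float DistanceToMesh(Vec3 origin, Vec3 dir, const std::vector<Triangle>& mesh) {
  float best = -1.0f;
  for (const Triangle& tri : mesh) {
    float t = RayTriangle(origin, dir, tri);
    if (t > 0.0f && (best < 0.0f || t < best)) best = t;
  }
  return best;  // -1 means nothing was hit in that direction
}
Because dir is a unit vector, the returned t is directly the distance in the mesh's units (meters for a Constructor export).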

Object tracking with Project Tango

As far as I know, the main features of Project Tango SDK are:
motion tracking
area learning
depth perception
But what about object recognition & tracking?
I didn't see anything about that in the SDK, though I assume that Tango hardware would be really good at it (compared with traditional smartphones). Why isn't it there?
Update 2017/06/05
A marker detection API has been introduced in the Hopak release.
There are already good libraries for object tracking in 2D images, and Project Tango's additional features would likely yield only marginal improvements to those existing functions, in exchange for major overhauls of the libraries to support a small set of evolving hardware.
How do you think project tango could improve on existing object recognition & tracking?
With a 3d model of the object to be tracked, and knowledge of the motion and pose of the camera, one could predict what the next image of the tracked object 'should' look like. If the next image is different than predicted, it could be assumed that the tracked object has moved from its prior position. And the actual new 3D image could indicate the tracked object's vectors. That certainly has uses in navigating a live environment.
But that sounds like the sort of solution a self-driving car might use, and that would be a valuable piece of tech worth keeping away from competitors, despite its value to the community.
This is all just speculation. I have no first hand knowledge.
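To make the prediction step above concrete: given the camera pose and pinhole intrinsics, predicting where a known 3D point should appear in the next frame is one rigid transform plus a projection. This is a generic illustration, not a Tango API call; the rotation/translation convention (world to camera) and all names are my own.
// Sketch: predict the pixel where a known 3D world point should appear,
// given a world-to-camera pose (R, t) and pinhole intrinsics (fx, fy, cx, cy).
#include <array>

struct Pixel { double u, v; };

Pixel ProjectPoint(const std::array<std::array<double, 3>, 3>& R,
                   const std::array<double, 3>& t,
                   const std::array<double, 3>& pointWorld,
                   double fx, double fy, double cx, double cy) {
  // Transform into the camera frame: Xc = R * Xw + t.
  std::array<double, 3> pc{};
  for (int i = 0; i < 3; ++i) {
    pc[i] = R[i][0] * pointWorld[0] + R[i][1] * pointWorld[1] +
            R[i][2] * pointWorld[2] + t[i];
  }
  // Pinhole projection (assumes pc[2] > 0, i.e. the point is in front of the camera).
  return {fx * pc[0] / pc[2] + cx, fy * pc[1] / pc[2] + cy};
}
Comparing predicted pixels like these against the features actually observed in the next frame is one way to decide whether the tracked object moved or only the camera did.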
I'm not really sure what you're expecting for an "open question", but I can tell you one common way that people exploit Tango's capabilities to aid object recognition & tracking. Tango's point cloud, image callbacks, and pose data can be used as input for a library like PCL (http://pointclouds.org/).
Simply browsing the documentation & tutorials will give you a good idea of what's possible and how it can be achieved.
http://pointclouds.org/documentation/
Beyond that, you might browse the pcl-users mail archives:
http://www.pcl-users.org/
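As an illustration of the kind of glue code involved, a common first step is to copy the Tango point cloud into a PCL cloud so PCL's filters and registration algorithms can run on it. This is a rough sketch, assuming the Tango C API header (tango_client_api.h) and PCL are both available in your native code; the XYZ + confidence layout follows the TangoPointCloud struct in the C API.
// Sketch: copy a TangoPointCloud (XYZ + confidence) into a PCL cloud.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <tango_client_api.h>  // defines TangoPointCloud

pcl::PointCloud<pcl::PointXYZ>::Ptr ToPclCloud(const TangoPointCloud* cloud) {
  pcl::PointCloud<pcl::PointXYZ>::Ptr out(new pcl::PointCloud<pcl::PointXYZ>);
  out->reserve(cloud->num_points);
  for (uint32_t i = 0; i < cloud->num_points; ++i) {
    // points[i] holds {X, Y, Z, confidence}; the confidence value is dropped here.
    out->push_back(pcl::PointXYZ(cloud->points[i][0],
                                 cloud->points[i][1],
                                 cloud->points[i][2]));
  }
  return out;
}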

Which features does project tango use

Does Project Tango extract any visual features per frame (such as ORB or SIFT/SURF), or is the point cloud just 3D points extracted from the depth camera? If so, is it possible to know which algorithm they are using? Is it just corners?
I would like to dump the 3D point cloud along with the corresponding features, and I'm wondering if that is possible in real time.
Unfortunately, they don't expose which features they use. All you get is XYZ plus confidence. Here's the real-time point cloud callback from the C API:
TangoErrorType TangoService_connectOnPointCloudAvailable(
    void (*TangoService_onPointCloudAvailable)(void *context,
                                               const TangoPointCloud *cloud),
    ...);
See:
https://developers.google.com/tango/apis/c/reference/group/depth
https://developers.google.com/tango/apis/java/reference/TangoPointCloudData
https://developers.google.com/tango/apis/unity/reference/class/tango/tango-unity-depth
TangoPointCloud is defined here:
https://developers.google.com/tango/apis/c/reference/struct/tango-point-cloud#struct_tango_point_cloud
https://developers.google.com/tango/apis/java/reference/TangoPointCloudData
https://developers.google.com/tango/apis/unity/reference/class/tango/tango-point-cloud-data
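As a usage sketch (my own illustration, not copied from the docs): the callback receives the cloud and can read num_points entries of {X, Y, Z, confidence}. The callback name OnPointCloud is mine; check the linked C reference for the exact arguments of the connect call.
// Sketch: log the first point of each incoming cloud from the Tango C API.
#include <android/log.h>
#include <tango_client_api.h>

void OnPointCloud(void* /*context*/, const TangoPointCloud* cloud) {
  if (cloud->num_points == 0) return;
  // Each entry of cloud->points is {X, Y, Z, confidence}.
  __android_log_print(ANDROID_LOG_INFO, "tango",
                      "t=%.3f first point: x=%.3f y=%.3f z=%.3f c=%.3f",
                      cloud->timestamp, cloud->points[0][0],
                      cloud->points[0][1], cloud->points[0][2],
                      cloud->points[0][3]);
}

// During setup, after the service is connected (see the linked reference):
//   TangoService_connectOnPointCloudAvailable(OnPointCloud);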
As an aside, if you regard Tango's objective as being a portable API that sits atop various different sensors and hardware platforms, then it makes sense that they wouldn't expose the details of the underlying depth estimation method. It might change from one device to the next.
BTW, they also keep the internals of their ADF (Area Description File) format secret.

Lightweight 3D animation driven by external data

I'm a structural engineering master's student working on a seismic evaluation of a temple structure in Portugal. For the evaluation, I have created a 3D block model of the structure and will use a discrete element code to analyze the behaviour of the structure under a variety of seismic (earthquake) records. The software that I will use for the analysis can produce snapshots of the structure at regular intervals, which can then be put together to make a movie of the response. However, producing the images slows down the analysis. Furthermore, since the pictures are 2D images from a specified angle, there is no way to rotate and view the response from other angles without re-running the model (a process that currently takes 3 days of computer time).
I am looking for an alternative method for creating a movie of the response of the structure. What I want is a very lightweight solution, where I can just bring in the block model that I have and then produce the animation on the fly by feeding in the location and the three principal axes of each block at regular intervals. The blocks are described as prisms, with the top and bottom planes defining all of the vertices. Since the model is produced as text files, I can modify the output so that it can be read and understood by the animation code. The model is composed of about 180 blocks with 24 vertices per block (so 4,320 vertices). The location and the three unit vectors describing the block axes are produced by the program, and I can write them out in whatever way I want.
The main issue is that the quality of the animation should be decent. If the system is vector based and allows for scaling, that would be great. I would like to be able to rotate the model in real time with simple mouse dragging without too much lag or other issues.
I have very limited time (in fact I am already very behind). That is why I wanted to ask the experts here so that I don't waste my time on something that will not work in the end. I have been using Rhino and Grasshopper to generate my model but I don't think it is the right tool for this purpose. I was thinking that Processing might be able to handle this but I don't have any experience with it. Another thing that I would like to be able to do is to maybe have a 3D PDF file for distribution. But I'm not sure if this can be done with 3D PDF.
Any insight or guidance is greatly appreciated.
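Whichever tool ends up doing the rendering, the per-frame data described above (a position plus three unit axis vectors per block) maps directly onto a standard 4x4 model matrix, which almost any GPU-based viewer can consume. A minimal sketch of that conversion (column-major layout as used by OpenGL-style viewers; all names are illustrative):
// Sketch: build a column-major 4x4 model matrix from a block's position
// and its three unit axis vectors.
#include <array>

struct Vec3 { float x, y, z; };

std::array<float, 16> BlockTransform(Vec3 pos, Vec3 ax, Vec3 ay, Vec3 az) {
  std::array<float, 16> m = {
      ax.x, ax.y, ax.z, 0.0f,    // first column: local X axis in world space
      ay.x, ay.y, ay.z, 0.0f,    // second column: local Y axis
      az.x, az.y, az.z, 0.0f,    // third column: local Z axis
      pos.x, pos.y, pos.z, 1.0f  // fourth column: translation
  };
  return m;
}
Each frame of the analysis output then just updates about 180 such matrices, and the fixed block geometry never has to be re-uploaded, which keeps interactive rotation cheap.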
Don't let the name fool you: BluffTitler DX9, a commercial package, may be what you're looking for.
Its simple interface provides a fast learning curve, with many quick tutorials to either watch or dissect. Depending on how fast your GPU is, real-time previews are scalable.
Reference:
Model Layer Page
User Submitted Gallery (3D models)
Jim Merry from tetra4D here. We make the 3D CAD conversion tools for Acrobat X to generate 3D PDFs. Acrobat has a 3D JavaScript API that lets you manipulate objects, i.e., you could drive translations, rotations, etc. of objects from your animation information after translating your model to 3D PDF. Not sure I would recommend this approach if you are in a hurry, however. Also, I don't think there are any commercial 3D PDF generation tools for the formats you are using (Rhino, Grasshopper, Processing).
If you are trying to animate geometric deformations, 3D PDF won't really help you at all. You could capture the animation, encode it as Flash video, and embed it in a PDF, but this is a function of the multimedia tool in Acrobat Pro, i.e., it is not specific to 3D.
