Tracking motion of an object (by itself or by colour) - google-project-tango

Is it possible to use the Google Tango Java API to track an object by itself (say, I tap on the object in the captured video feed and the device then tracks it continuously) or to track an object by colour (e.g. a black sphere against a white background)?

Unfortunately, the Tango API seems designed only to track the observer (i.e., the Tango tablet or phone) at present. If you'd like to track other objects, I recommend pairing it with a library like OpenCV:
See:
http://opencv.org/platforms/android.html
http://docs.opencv.org/2.4/doc/tutorials/introduction/android_binary_package/dev_with_OCV_on_Android.html
You'll need some way of detecting or selecting an object, and then some way of tracking it:
http://docs.opencv.org/3.2.0/d5/d54/group__objdetect.html
http://docs.opencv.org/3.2.0/d9/df8/group__tracking.html
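For the colour case, a crude frame-by-frame approach is just to threshold and take the largest blob. Here's a minimal sketch using OpenCV's C++ API (the Android Java API mirrors these calls); the threshold value and the TrackDarkObject helper name are illustrative, and real code would smooth the mask and feed the result into one of the trackers linked above:

    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>
    #include <vector>

    // Bounding box of the largest dark blob in a frame -- a crude stand-in
    // for "black sphere against a white background" tracking.
    cv::Rect TrackDarkObject(const cv::Mat& bgrFrame) {
      cv::Mat gray, mask;
      cv::cvtColor(bgrFrame, gray, cv::COLOR_BGR2GRAY);

      // Keep only dark pixels; 60 is a lighting-dependent guess.
      cv::threshold(gray, mask, 60, 255, cv::THRESH_BINARY_INV);

      std::vector<std::vector<cv::Point>> contours;
      cv::findContours(mask, contours, cv::RETR_EXTERNAL,
                       cv::CHAIN_APPROX_SIMPLE);

      cv::Rect best;
      double bestArea = 0.0;
      for (const auto& contour : contours) {
        double area = cv::contourArea(contour);
        if (area > bestArea) {
          bestArea = area;
          best = cv::boundingRect(contour);
        }
      }
      return best;  // empty rect if nothing dark was found
    }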

Does Cobalt support YouTube 360 Video (Spherical Video)?

Does Cobalt support YouTube 360 video (spherical video)? If yes, how has it been implemented, and is there any documentation for it? Does the platform need to do anything extra to support it?
Almost. There is still some small remaining work that prevents this from being a simple yes, but the vast bulk of the work is done and has been shown to function.
A document will soon be appearing in the source tree explaining all this, but here is a preview...
In order to support spherical video, a platform will have to support decode-to-texture, introduced in Starboard API version 4. Cobalt will choose between punch-out and decode-to-texture when creating an SbPlayer, based on whether a mesh transform has been applied to the video tag. In decode-to-texture mode, the video texture is then queried from the player every frame and rendered into the UI graphics plane with the current transform applied.
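As a very rough sketch of what the decode-to-texture side looks like from the platform's point of view (exact struct fields and ownership rules vary across Starboard API versions, and DrawTexturedQuad is a made-up stand-in for the renderer's own draw call):

    #include <stdint.h>

    #include "starboard/decode_target.h"
    #include "starboard/player.h"

    // Stand-in for the platform renderer's draw call (not a Starboard API).
    void DrawTexturedQuad(uint32_t texture, int width, int height);

    // Hypothetical per-frame hook in a platform's rasterizer.
    void RenderVideoFrame(SbPlayer player) {
      // In decode-to-texture mode, the player hands back its latest decoded
      // frame as an SbDecodeTarget each time the UI plane is rasterized.
      SbDecodeTarget target = SbPlayerGetCurrentFrame(player);
      if (!SbDecodeTargetIsValid(target)) {
        return;  // No frame decoded yet.
      }

      SbDecodeTargetInfo info = {};
      if (SbDecodeTargetGetInfo(target, &info)) {
        // For a one-plane RGBA target, plane 0 carries a GL texture that
        // can be drawn with the video tag's current (mesh) transform.
        DrawTexturedQuad(info.planes[0].texture,
                         info.planes[0].width, info.planes[0].height);
      }

      // The caller owns the returned target and must release it.
      SbDecodeTargetRelease(target);
    }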

Camera texture in Unity with multithreaded rendering

I'm trying to do pretty much what TangoARScreen does but with multithreaded rendering on in Unity. I did some experiments and I'm stuck.
I tried several things, such as letting Tango render into the OES texture that would then be blitted into a regular Texture2D in Unity, but OpenGL keeps complaining about an invalid display when I try to use it. Perhaps OnTangoCameraTextureAvailable isn't even called in the correct GL context? Hard to say when you have no idea how Tango Core works internally.
Is registering a YUV texture via TangoService_Experimental_connectTextureIdUnity the way to go? I assume I'd have to deal with YUV-to-RGB conversion. Or should I use OnTangoImageMultithreadedAvailable and deal with the buffer, rendering it with a custom shader, for instance? The documentation is pretty blank in these areas, and every experiment costs several wasted days at least. Did anyone get this working? Could you point me in the right direction? All I need is the live camera image rendered into Unity's camera background.
From the April 2017 Gankino release notes: "The C API now supports getting the latest camera image's timestamp outside a GL thread .... Unity multithreaded rendering support will get added in a future release." So I guess we need to wait a little bit.
Multithreaded rendering can still be used in applications without a camera feed (motion tracking only) by choosing "YUV Texture and Raw Bytes" as the overlay method in the Tango Application script.

How can I save a 3D object in Google Tango?

Is it possible to save a 3D object in Google Tango and then detect it?
For example, I want to save my car in it. Can Google Tango then detect every car of the same model as "my car"? I just want to know whether this is already possible with Google Tango, or whether I would have to write an application for it.
I don't think there is yet any Tango application available to the public that can capture a 3D object like a car, save it, and then recognize that same individual car again later.
The Constructor app is, I think, the most sophisticated 3D model capture app publicly available, and it only captures and saves whole 3D scenes. It can't separate an object like a car from the rest of the scene, such as the road it's parked on or the wall behind it. You could open the saved scene in a separate application, like a 3D studio, edit it like any other model, and then pass it on to other (non-Tango) applications, but that's a human/tool workflow.
The rest of the app you're describing would have to recognize 3D objects in later captures by matching at least some distinctive features of the earlier saved capture. I don't know of any app that does that, and I'm not sure the Tango hardware currently available to the public is fast enough to do it, nor the Google web services that supply more sophisticated intelligence about what Tango captures. Recognizing the same model of car with different colours/options, dirty or scraped, etc. is a really tall order for Tango at this time.
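To give a feel for the "match distinctive features" step, here is a deliberately simplified sketch in the 2D image domain using OpenCV's ORB features. Recognizing a specific 3D capture would need far more than this (3D descriptors, geometric verification, etc.), and CountFeatureMatches is just an illustrative name:

    #include <opencv2/core.hpp>
    #include <opencv2/features2d.hpp>
    #include <vector>

    // Counts mutual-best ORB descriptor matches between a saved reference
    // view and a new frame -- the 2D miniature of "recognize by features".
    int CountFeatureMatches(const cv::Mat& savedView, const cv::Mat& newFrame) {
      cv::Ptr<cv::ORB> orb = cv::ORB::create();

      std::vector<cv::KeyPoint> kp1, kp2;
      cv::Mat desc1, desc2;
      orb->detectAndCompute(savedView, cv::noArray(), kp1, desc1);
      orb->detectAndCompute(newFrame, cv::noArray(), kp2, desc2);
      if (desc1.empty() || desc2.empty()) return 0;

      // Hamming distance suits ORB's binary descriptors; cross-checking
      // keeps only mutual best matches, cutting down on outliers.
      cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
      std::vector<cv::DMatch> matches;
      matcher.match(desc1, desc2, matches);
      return static_cast<int>(matches.size());
    }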
But it does sound like a killer app. Somebody push the envelope and make this happen!

How to render and retrieve image data on Google Project Tango?

I'd like to render the live image data on a GL surface (as shown in various Project Tango samples), and at the same time record (encode) it via a MediaCodec.
(On an Android Lollipop device, I've accomplished that using the camera2 interface and multiple surface targets, which works fine, but thus far Tango is pre-Lollipop...)
From other answers, it appears that you have to use the C API to access the image data.
The C API provides two camera frame functions -- TangoService_connectTextureId() and TangoService_connectOnFrameAvailable(). However, the documentation states "Use either TangoService_connectTextureId() or TangoService_connectOnFrameAvailable() but not both."
Why not both?
How do I best render and retrieve the image data?
The Pythagoras release now allows simultaneous use of the color and color texture callbacks. That said, you want to use connectOnFrameAvailable if you want to process the image; you'd end up doing extra unnecessary work if you try to peel it back out of the texture.
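For reference, the connectOnFrameAvailable route looks roughly like this in the C API; the copy-out step is only sketched in comments, while the signatures themselves follow the documented API:

    #include <tango_client_api.h>

    // Called by Tango Core on its own thread for each new color frame. The
    // buffer is only valid for the duration of the callback, so copy out
    // what you need (e.g. into a MediaCodec input buffer) before returning.
    static void OnFrameAvailable(void* context, TangoCameraId id,
                                 const TangoImageBuffer* buffer) {
      // buffer->format is YCrCb 4:2:0 semi-planar (NV21) on Tango devices;
      // copy buffer->data using buffer->width, buffer->height, and
      // buffer->stride, and keep buffer->timestamp for A/V sync.
    }

    bool ConnectColorCamera() {
      TangoErrorType err = TangoService_connectOnFrameAvailable(
          TANGO_CAMERA_COLOR, /*context=*/nullptr, &OnFrameAvailable);
      return err == TANGO_SUCCESS;
    }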

Windows Phone 7 Image Looping

I would like to loop through a sequence of images. I have tried using a Pivot control, but I don't like the blank space in between image transitions. I would prefer to use something that will animate between images smoothly. I also looked at the LoopingSelector control, but I can't seem to set the orientation to horizontal.
I'm assuming you're interested in the kind of image viewer iOS offers, swiping right or left to navigate through the photos. If that's the case, I hate to say it, but I think you're looking at building your own control.
To implement it properly, I think these are the essential things you need to think about and address:
For performance's sake, load your images into MemoryStream objects and store the binary data (you can get creative with this and only store the first 10-15 images, depending on how large they are; doing so lets your control support thousands of images and still perform like a champ).
Once an image is about to come on-screen, set its source to the saved MemoryStream that already holds its bytes (this minimizes the work the UI thread does, keeping the control responsive).
Use manipulation events to track the delta-x of the swipe gesture so you can actually move the items.
Move the images by changing their Canvas.Left property (I think it can go negative; otherwise just make your canvas as wide as all of your images combined).
Look into the available libraries that support momentum, so you get a natural, smooth transition between images.
