Camera texture in Unity with multithreaded rendering - google-project-tango

I'm trying to do pretty much what TangoARScreen does but with multithreaded rendering on in Unity. I did some experiments and I'm stuck.
I tried several things, such as letting Tango render into an OES texture that would then be blitted into a regular Texture2D in Unity, but OpenGL keeps complaining about an invalid display when I try to use it. Perhaps OnTangoCameraTextureAvailable isn't even called in the correct GL context? Hard to say when you have no idea how Tango Core works internally.
Is registering a YUV texture via TangoService_Experimental_connectTextureIdUnity the way to go? I'd have to deal with YUV-to-RGB conversion, I assume. Or should I use OnTangoImageMultithreadedAvailable and deal with the raw buffer, rendering it with a custom shader for instance? The documentation is pretty sparse in these areas, and every experiment costs several days at least. Did anyone get this working? Could you point me in the right direction? All I need is the live camera image rendered into Unity's camera background.
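For what it's worth, if I do end up handling the raw buffer myself, I assume the colour conversion part is just the usual BT.601 transform. A rough sketch of what I have in mind, assuming the frame arrives as NV21 (a Y plane followed by interleaved V/U at half resolution) with stride equal to width; the helper name is mine and nothing here is Tango-specific:

    #include <algorithm>
    #include <cstdint>

    // Convert one NV21 frame to packed RGB using the common BT.601 integer
    // approximation. Assumes stride == width, which may not hold for real
    // TangoImageBuffer data.
    void Nv21ToRgb(const uint8_t* yuv, int width, int height, uint8_t* rgb) {
        const uint8_t* y_plane = yuv;
        const uint8_t* vu_plane = yuv + width * height;  // interleaved V,U pairs
        auto clamp = [](int x) -> uint8_t {
            return static_cast<uint8_t>(std::min(255, std::max(0, x)));
        };
        for (int j = 0; j < height; ++j) {
            for (int i = 0; i < width; ++i) {
                const int y = y_plane[j * width + i];
                const int vu = (j / 2) * width + (i / 2) * 2;
                const int v = vu_plane[vu];
                const int u = vu_plane[vu + 1];
                const int c = y - 16, d = u - 128, e = v - 128;
                uint8_t* out = rgb + (j * width + i) * 3;
                out[0] = clamp((298 * c + 409 * e + 128) >> 8);            // R
                out[1] = clamp((298 * c - 100 * d - 208 * e + 128) >> 8);  // G
                out[2] = clamp((298 * c + 516 * d + 128) >> 8);            // B
            }
        }
    }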

From the April 2017 Gankino release notes: "The C API now supports getting the latest camera image's timestamp outside a GL thread .... Unity multithreaded rendering support will get added in a future release." So I guess we need to wait a little bit.
Multithreaded rendering can still be used in applications without a camera feed (motion tracking only) by choosing "YUV Texture and Raw Bytes" as the overlay method in the Tango Application script.

Related

Saving a frame/picture using Tango Camera

Is there a way to take a picture or save the current frame while a Tango app is running, and how can I achieve this kind of behaviour?
I just used ReadPixels with the screen dimensions as parameters; it worked in Unity.

Why does the Tango Camera Interface have two separate update texture functions?

I am using the latest Tango release at the time of this question, which is Zaniah (Version 1.46, November 2016). I have two devices, a Project Tango development kit and a pre-release Lenovo phone.
Does anyone know why TangoService_updateTexture only works when a texture with the target GL_TEXTURE_EXTERNAL_OES is connected to the camera interface?
There is a separate TangoService_updateTextureExternalOes function which is stated for use with GL_TEXTURE_EXTERNAL_OES textures, so this gives the impression that TangoService_updateTexture should work with other types of textures, such as GL_TEXTURE_2D (why else have a separate function?). However, if you connect a texture with the GL_TEXTURE_2D target, a GL error is generated when TangoService_updateTexture is called, stating that the texture can't be bound. Without seeing the code, I'm guessing that the Tango API tries to bind the texture to the GL_TEXTURE_EXTERNAL_OES target regardless of which function is called.
So if this is the case, why are there two separate functions?
Has anybody else observed this? Is this intended behaviour, or is it a known issue?
I'm struggling to find any sort of information or documentation about it.
The API docs: https://developers.google.com/tango/apis/c/reference/group/camera
Both TangoService_updateTexture and TangoService_updateTextureExternalOes use OES textures. Unfortunately, Tango only supports OES textures through the C-API functions.
The major difference between the two functions is that TangoService_updateTexture requires a prior call to TangoService_connectTexture with a valid texture id. That means that when calling TangoService_connectTexture you must already have a valid texture id (and, of course, a GL context) set up, which ties the GL context's lifecycle very tightly to the Tango and Android lifecycles. This can be a little tricky to handle in some cases.
On the other hand, TangoService_updateTextureExternalOes doesn't require any texture id to be set up beforehand, so you can simply call it in your render() function, which guarantees that a GL context is available.
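To make the difference concrete, here is a rough sketch of the two call patterns. The exact signatures have changed between Tango releases, so treat this as an illustration to check against your tango_client_api.h rather than drop-in code:

    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>   // GL_TEXTURE_EXTERNAL_OES
    #include <tango_client_api.h>

    // Pattern A: connect a texture id up front, then update it each frame.
    // The OES texture (and therefore a GL context) must already exist when
    // the connect call is made, which couples the GL and Tango lifecycles.
    GLuint oes_texture = 0;

    void SetupPatternA() {
        glGenTextures(1, &oes_texture);
        glBindTexture(GL_TEXTURE_EXTERNAL_OES, oes_texture);
        // TangoService_connectTexture(TANGO_CAMERA_COLOR, oes_texture, ...);
        // (exact signature varies by release -- see your tango_client_api.h)
    }

    void RenderPatternA() {
        double timestamp = 0.0;
        TangoService_updateTexture(TANGO_CAMERA_COLOR, &timestamp);
        // ... draw with oes_texture ...
    }

    // Pattern B: no up-front connect; the texture id is handed over on every
    // update, so the call can simply live in render(), where a GL context is
    // guaranteed to be current.
    void RenderPatternB(GLuint texture_id) {
        double timestamp = 0.0;
        TangoService_updateTextureExternalOes(TANGO_CAMERA_COLOR, texture_id,
                                              &timestamp);
        // ... draw with texture_id ...
    }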

Scene2d tables turn black on one phone after a few game resets

My game screen uses both Scene2d and normal libGDX sprites. I use Scene2d for the pause menus, which contain some tables and text buttons. All is OK on the PC. All is also OK on two mobile phones I'm testing the game on, but I have a problem on a third phone. It seems that after a restart or two of the game level, all the Scene2d elements that are supposed to appear on the screen have turned black. They are still responsive, meaning the buttons do what they are supposed to do; they move, rotate and execute properly, but they are all black. What could be the issue here? I don't have this problem on the PC or on the other phones.
What you describe is a symptom of using a texture across a reset of the OpenGL context. Your app contains pointers (in the LibGDX Texture objects) into OpenGL state, and when the OpenGL device is given over to another app, those pointers become stale.
LibGDX generally does a good job of restoring state across simple resets, but there are several ways to cause problems. The most common is to store LibGDX OpenGL state (e.g., a Texture) in a static property. The JVM will get reused across application instances, so LibGDX cannot tell that this static object has become stale. See http://bitiotic.com/blog/2013/05/23/libgdx-and-android-application-lifecycle/ for details on how to trigger the different lifecycles.
See "In game Images disappear on Android device if i run from widget, but not when I install apk first time" and "Android static object lifecycle".
I know there is already a best answer for this post, but maybe this will help you too:
Texture is not displayed in the application
The main idea is to dispose of your assets and load them again when the application becomes visible.

Setting up OpenGL/Cuda interop in Windows

I am writing a DLL that needs to do some work in CUDA 3.2 and some work in OpenGL. OpenGL will render some grayscale images that my CUDA code needs to read in and modify, and then give back to OpenGL as a texture. I believe I need to create PBOs to do that. I have done some basic OpenGL work before but never worked with extensions, and that's where my problem is: I've been searching for two days and so far haven't been able to find a working example, despite wading through pages and pages of code. None of the samples I've tried work (and I'm sure my video card supports it, being a GTX 470).
Some specific questions:
1. I installed the NVIDIA OpenGL SDK. Should I be using glew.h and wglew.h to access the extensions?
2. My DLL does not have any UI - do I need to create a hidden window, or is there an easier way to create an off-screen rendering context?
3. Can I create a grayscale PBO by using the GL_RED_8UI format? Will both CUDA and GL be happy with that? I read the OpenGL interop section in the CUDA programming manual, and it said GL_RGBA_8UI was only usable by pixel shaders because it was an OpenGL 3.0 feature, but I didn't know if that applied to a 1-channel format. A 1-channel float would also work for my purposes.
4. I thought this would be fairly easy to do - does it really require hundreds of lines of code?
Edit:
I have code to create an OpenGL context attached to an HBITMAP. Should I create a bitmap-rendering context and then try to attach a PBO to that? Or will that slow me down by also rendering to CPU memory? Is it better to create an invisible window and attach the PBO to that? Also, does the pixel format of my PBO have to match the window/bitmap? What about the dimensions?
Thanks,
Alex
There's actually an example of how to use OpenGL and CUDA together in the CUDA SDK samples: look at the SimpleGL example.
You may want to take a look at this example:
https://github.com/nvpro-samples/gl_cuda_interop_pingpong_st
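In case it helps while digging through those samples, here is a minimal sketch of what the PBO round trip boils down to on the CUDA side (illustrative names, error checking omitted; assumes GLEW is initialised and a GL context is current on the CUDA device):

    #include <GL/glew.h>
    #include <cuda_gl_interop.h>
    #include <cuda_runtime.h>

    // Create a PBO on the GL side.
    GLuint CreatePbo(size_t size_bytes) {
        GLuint pbo = 0;
        glGenBuffers(1, &pbo);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
        glBufferData(GL_PIXEL_UNPACK_BUFFER, size_bytes, nullptr, GL_DYNAMIC_DRAW);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
        return pbo;
    }

    // Register the buffer with CUDA once, not every frame.
    cudaGraphicsResource* RegisterPbo(GLuint pbo) {
        cudaGraphicsResource* resource = nullptr;
        cudaGraphicsGLRegisterBuffer(&resource, pbo, cudaGraphicsRegisterFlagsNone);
        return resource;
    }

    // Per frame: map, get a device pointer, run the kernel, unmap.
    void RunKernelOnPbo(cudaGraphicsResource* resource) {
        cudaGraphicsMapResources(1, &resource, 0);
        void* device_ptr = nullptr;
        size_t mapped_bytes = 0;
        cudaGraphicsResourceGetMappedPointer(&device_ptr, &mapped_bytes, resource);
        // ... launch your kernel on device_ptr here ...
        cudaGraphicsUnmapResources(1, &resource, 0);
        // After unmapping, GL can use the PBO again, e.g. as the source of a
        // glTexSubImage2D upload into the texture handed back to OpenGL.
    }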

Multiple GLES contexts, one framebuffer (iOS)

I'm doing an iOS app where I have a black-box rendering library that has its own EAGLContext and framebuffer and does its own rendering. I also need to do additional rendering outside the black-box lib.
Up until now I've been doing that by carefully reading, setting and restoring all the pertinent states each frame. This works, but is fiddly and hard to maintain. Then it occurred to me, "Why not have a separate EAGLContext instead?"
I've implemented a second context, so now I'm switching contexts instead of setting/restoring all the states each frame. The only problem is that I'm getting lots of visual artifacts, and performance has gone from a rock-solid 30 FPS to about 5 FPS...
So apparently I'm not meant to render to the same framebuffer from several contexts. Can anyone confirm this?