I need to draw a decoded video frame into an android.view.Surface object, in Android native C++ code.
I understand that the common way to do this is to implement the GLSurfaceView.Renderer interface and call the native method from onDrawFrame(). But I cannot use that; I need to pass the Surface object to the native method and work on it from there.
I'm planning on using the OpenGL ES APIs to do all the rendering work, because they are cross-platform. But I have no idea where the Surface object comes in.
I've seen several examples where ANativeWindow_fromSurface() is used. I guess I could go that way if I were not compelled to use OpenGL.
So where do I "set" my Surface object inside native code, so that I am ready to render my scene frames?
EDIT: OK, I guess I just wasn't aware of the EGL standard, which seems to be what I need in this case.
However, I am still looking for a way to "map" or "convert" an android.view.Surface object into whatever structure the EGL context handles, which I assume to be EGLNativeWindowType, without needing to access any Android-specific APIs.
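For reference, the examples I mentioned above boil down to something like this minimal sketch (the helper name is mine; it still relies on the Android-specific ANativeWindow_fromSurface call, which is exactly the part I'd like to avoid):

```cpp
#include <android/native_window_jni.h>  // ANativeWindow_fromSurface / ANativeWindow_release
#include <EGL/egl.h>

// Illustrative helper: turn the java Surface passed over JNI into an EGLSurface.
EGLSurface createEglSurfaceFromJavaSurface(JNIEnv* env, jobject javaSurface,
                                           EGLDisplay display, EGLConfig config) {
    // The Android-specific step: Surface -> ANativeWindow*.
    ANativeWindow* window = ANativeWindow_fromSurface(env, javaSurface);

    // On Android, EGLNativeWindowType is ANativeWindow*, so the window can be
    // handed straight to EGL.
    EGLSurface surface = eglCreateWindowSurface(display, config, window, nullptr);

    // EGL keeps its own reference to the window, so ours can be released now.
    ANativeWindow_release(window);
    return surface;
}
```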
Thanks in advance.
I'm trying to do pretty much what TangoARScreen does, but with multithreaded rendering enabled in Unity. I did some experiments and I'm stuck.
I tried several things, such as letting Tango render into the OES texture that would then be blitted into a regular Texture2D in Unity, but OpenGL keeps complaining about an invalid display when I try to use it. Probably OnTangoCameraTextureAvailable isn't even called in the correct GL context? Hard to say when you have no idea how Tango Core works internally.
Is registering a YUV texture via TangoService_Experimental_connectTextureIdUnity the way to go? I'd have to deal with YUV-to-RGB conversion, I assume. Or should I use OnTangoImageMultithreadedAvailable and deal with the buffer? Render it with a custom shader, for instance? The documentation is pretty blank in these areas, and every experiment costs several wasted days at least. Did anyone get this working? Could you point me in the right direction? All I need is the live camera image rendered into Unity's camera background.
From the April 2017 (Gankino) release notes: "The C API now supports getting the latest camera image's timestamp outside a GL thread .... Unity multithreaded rendering support will get added in a future release." So I guess we need to wait a little bit.
Multithreaded rendering can still be used in applications without a camera feed (with motion tracking only) by choosing "YUV Texture and Raw Bytes" as the overlay method in the Tango Application Script.
I am using the latest Tango release at the time of this question, which is Zaniah (Version 1.46, November 2016). I have two devices, a Project Tango development kit and a pre-release Lenovo phone.
Does anyone know why TangoService_updateTexture only works when a texture with the target GL_TEXTURE_EXTERNAL_OES is connected to the camera interface?
There is a separate TangoService_updateTextureExternalOes function which is stated for use with GL_TEXTURE_EXTERNAL_OES textures, so this gives the impression that TangoService_updateTexture should work with other types of textures, such as GL_TEXTURE_2D (why else have a separate function?). However, if you connect a texture with the GL_TEXTURE_2D target, a GL error is generated when TangoService_updateTexture is called, stating that the texture can't be bound. Without seeing the code, I'm guessing that the Tango API tries to bind the texture to the GL_TEXTURE_EXTERNAL_OES target regardless of which function is called.
So if this is the case, why are there two separate functions?
Has anybody else observed this? Is this intended behaviour, or is it a known issue?
I'm struggling to find any sort of information or documentation about it.
The API docs: https://developers.google.com/tango/apis/c/reference/group/camera
Both TangoService_updateTexture and TangoService_updateTextureExternalOes use OES textures. Unfortunately, Tango only supports OES textures through its C-API functions.
The major difference between these two functions is that TangoService_updateTexture requires TangoService_connectTexture to have been called with a valid texture id beforehand. That means when calling TangoService_connectTexture you have to have a valid texture id (and, of course, a GL context) set up. This ties the GL context's lifecycle very tightly to the Tango and Android lifecycles, which can be a little tricky to handle in some cases.
On the other hand, TangoService_updateTextureExternalOes doesn't require any texture id to be set up before calling it, so you can simply call it in your render() function, which guarantees that a GL context is available.
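To make the difference concrete, here is a minimal sketch of the external-OES texture id that both paths expect (plain GLES2 code; the Tango calls themselves are only indicated in comments, see tango_client_api.h for their exact signatures):

```cpp
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>  // GL_TEXTURE_EXTERNAL_OES

// Create the external-OES texture id that the Tango camera feed renders into.
// Must be called with a current GL context.
GLuint createOesTexture() {
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_EXTERNAL_OES, tex);
    glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // Path 1: pass `tex` to TangoService_connectTexture once, up front, then call
    //         TangoService_updateTexture(...) each frame.
    // Path 2: skip the connect step and pass `tex` directly to
    //         TangoService_updateTextureExternalOes(...) from your render() function.
    return tex;
}
```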
I am learning about GLSL in order to use it in my iOS & Android C++ engine.
I have found a lot of documentation about syntax and GLSL programming, but I need some tutorials on how to manage it in a complete scene (How to apply a shader to only a specific object of the scene? How to combine several effects on an object?)
Do you have any links or book references to send me?
How to apply a shader to only a specific object of the scene?
It's the same way you apply a texture to a specific object: you call glUseProgram with the program you want to use. Any subsequent rendering commands will use that program, until another glUseProgram call is encountered.
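For example, a minimal sketch of switching programs per object (the DrawableSketch struct and its fields are made-up names for illustration):

```cpp
#include <GLES2/gl2.h>

// Hypothetical per-object draw record: each object carries the program it should
// be shaded with, so switching "effects" is just a glUseProgram call.
struct DrawableSketch {
    GLuint  program;      // linked shader program for this object's effect
    GLuint  vbo;          // vertex buffer with tightly packed positions
    GLuint  positionLoc;  // attribute location queried at load time
    GLsizei vertexCount;
};

void drawObject(const DrawableSketch& obj) {
    glUseProgram(obj.program);  // all draws until the next glUseProgram use this shader
    glBindBuffer(GL_ARRAY_BUFFER, obj.vbo);
    glEnableVertexAttribArray(obj.positionLoc);
    glVertexAttribPointer(obj.positionLoc, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    glDrawArrays(GL_TRIANGLES, 0, obj.vertexCount);
}
```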
How to combine several effects on an object?
In general, this means that you write a new shader. Shaders are not really things you can combine through the API. You can copy bits of them into other shaders, and you can use the features of the OpenGL shader-object model to change a program's behaviour based on which shader objects are linked into which programs.
But in the general case, if you want to combine several "effects", you have to write a new shader that contains those effects.
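For instance, "combining" a texture lookup with a grayscale fade just means writing one fragment shader that does both. Here is a sketch with the GLSL embedded as a C++ string (the uniform and varying names are illustrative, not standard):

```cpp
// One fragment shader that combines two "effects": texturing and a grayscale fade.
// The names u_texture, u_grayAmount and v_uv are made up for this example.
static const char* kCombinedFragmentShader = R"(
    precision mediump float;
    varying vec2 v_uv;
    uniform sampler2D u_texture;
    uniform float u_grayAmount;   // 0.0 = full colour, 1.0 = fully grayscale
    void main() {
        vec4 color = texture2D(u_texture, v_uv);
        float gray = dot(color.rgb, vec3(0.299, 0.587, 0.114));
        gl_FragColor = vec4(mix(color.rgb, vec3(gray), u_grayAmount), color.a);
    }
)";
```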
I am writing a DLL that needs to do some work in CUDA 3.2 and some work in OpenGL. OpenGL will render some grayscale images that my CUDA code needs to read in and modify, and then give back to OpenGL as a texture. I believe I need to create PBOs to do that. I have done some basic OpenGL work before but have never worked with extensions, and that's where my problem is: I've been searching for two days and so far haven't been able to find a working example, despite wading through pages and pages of code. None of the samples I've tried work (and I'm sure my video card supports it, being a GTX 470).
Some specific questions:
1. I installed the NVIDIA OpenGL SDK. Should I be using glew.h and wglew.h to access the extensions?
2. My DLL does not have any UI. Do I need to create a hidden window, or is there an easier way to create an off-screen rendering context?
3. Can I create a grayscale PBO by using the GL_RED_8UI format? Will both CUDA and GL be happy with that? I read the OpenGL interop section in the CUDA programming manual, and it said GL_RGBA_8UI was only usable by pixel shaders because it was an OpenGL 3.0 feature, but I didn't know if that applied to a one-channel format. A one-channel float would also work for my purposes.
4. I thought this would be fairly easy to do - does it really require hundreds of lines of code?
Edit:
I have code to create an OpenGL context attached to an HBITMAP. Should I create a bitmap-rendering context and then try to attach a PBO to that? Or will that slow me down by also rendering to CPU memory? Is it better to create an invisible window and attach the PBO to that? Also, does the pixel format of my PBO have to match the window/bitmap? What about the dimensions?
Thanks,
Alex
There's actually an example of how to use OpenGL and CUDA together in the CUDA SDK samples. Look at the SimpleGL example.
You may want to take a look at this example:
https://github.com/nvpro-samples/gl_cuda_interop_pingpong_st
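For orientation, the basic register/map/unmap pattern from the CUDA runtime's OpenGL interop API looks roughly like this (a sketch, not lifted from either sample; it assumes a current GL context and one byte per pixel for grayscale):

```cpp
#include <GL/glew.h>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>   // cudaGraphicsGLRegisterBuffer etc.

// Share one grayscale PBO between GL and CUDA.
// Assumes a current GL context and an initialized CUDA device on the same GPU.
void sharePboWithCuda(int width, int height) {
    // 1. Create the PBO on the GL side, one byte per pixel for grayscale.
    GLuint pbo = 0;
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, width * height, nullptr, GL_DYNAMIC_DRAW);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

    // 2. Register it with CUDA once.
    cudaGraphicsResource* resource = nullptr;
    cudaGraphicsGLRegisterBuffer(&resource, pbo, cudaGraphicsMapFlagsNone);

    // 3. Each frame: map, get a device pointer, let CUDA write into it, unmap.
    cudaGraphicsMapResources(1, &resource, 0);
    void* devPtr = nullptr;
    size_t size = 0;
    cudaGraphicsResourceGetMappedPointer(&devPtr, &size, resource);
    cudaMemset(devPtr, 0x80, size);   // stand-in for a real processing kernel
    cudaGraphicsUnmapResources(1, &resource, 0);

    // 4. Back on the GL side, the PBO can now feed glTexSubImage2D as a texture upload.
}
```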
I'm calling this method:
http://msdn.microsoft.com/en-us/library/dd371264(VS.85).aspx
The call fails with E_NOINTERFACE. The documentation is especially unhelpful as to why this may happen. I've enabled all of the DirectX 11 debug output, and that's the best I've got. I know that I have a valid IDXGISurface1* (I also tried IDXGISurface), and the other parameters are set correctly. Any ideas as to why this call may fail?
Edit:
I am also having problems creating D3D11 devices. If I pass nullptr as the IDXGIAdapter* argument to D3D11CreateDeviceAndSwapChain, it works fine, but if I enumerate the adapters myself and pass in a pointer (the only one returned), it fails with an invalid-argument error. The MSDN documentation explicitly says that if nullptr is passed, the system uses the first adapter returned by EnumAdapters1. I am running a DX11-capable system.
Direct2D only works when you create a Direct3D 10.1 device, but it can share surfaces with Direct3D 11. All you need to do is create both devices and render all of your Direct2D content to a texture that you share between them. I use this technique in my own applications to use Direct2D with Direct3D 11. It incurs a slight cost, but it is small and constant per frame.
A basic outline of the process you will need to use is:
Create your Direct3D 11 device like you do normally.
Create a texture with the D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX option in order to allow access to the IDXGIKeyedMutex interface.
Use IDXGIResource::GetSharedHandle to get a handle to the texture that can be shared among devices.
Create a Direct3D 10.1 device, ensuring that it is created on the same adapter.
Use the OpenSharedResource function on the Direct3D 10.1 device to get a version of the texture for Direct3D 10.1.
Get access to the IDXGIKeyedMutex interface for the Direct3D 10.1 version of the texture.
Use the Direct3D 10.1 version of the texture to create the RenderTarget using Direct2D.
When you want to render with D2D, use the keyed mutex to lock the texture for the D3D10 device. Then acquire it in D3D11 and render the texture as you were probably already trying to do.
It's not trivial, but it works well, and it is the way that they intended you to interoperate between them. Windows 8 looks like it will introduce full D3D11 compatibility, so it will be just as simple as you expect.
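A compressed sketch of the outline above (error checking omitted, the default adapter assumed, and sizes/formats chosen for illustration):

```cpp
#include <d3d11.h>
#include <d3d10_1.h>
#include <d2d1.h>
#include <dxgi.h>
#pragma comment(lib, "d3d10_1.lib")
#pragma comment(lib, "d2d1.lib")

// d3d11Device is the Direct3D 11 device you already created (step 1).
void createSharedD2DTarget(ID3D11Device* d3d11Device, UINT width, UINT height)
{
    // Step 2: shared, keyed-mutex texture on the D3D11 side.
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = width;
    desc.Height = height;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;          // a format D2D can render to
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX;
    ID3D11Texture2D* tex11 = nullptr;
    d3d11Device->CreateTexture2D(&desc, nullptr, &tex11);

    // Step 3: shared handle for the texture.
    IDXGIResource* dxgiResource = nullptr;
    tex11->QueryInterface(__uuidof(IDXGIResource), (void**)&dxgiResource);
    HANDLE sharedHandle = nullptr;
    dxgiResource->GetSharedHandle(&sharedHandle);

    // Step 4: a D3D10.1 device for Direct2D (BGRA support is required by D2D).
    // The default adapter is used here; on multi-GPU machines make sure both
    // devices end up on the same adapter.
    ID3D10Device1* d3d10Device = nullptr;
    D3D10CreateDevice1(nullptr, D3D10_DRIVER_TYPE_HARDWARE, nullptr,
                       D3D10_CREATE_DEVICE_BGRA_SUPPORT,
                       D3D10_FEATURE_LEVEL_10_1, D3D10_1_SDK_VERSION, &d3d10Device);

    // Step 5: open the same texture on the D3D10.1 device.
    ID3D10Texture2D* tex10 = nullptr;
    d3d10Device->OpenSharedResource(sharedHandle, __uuidof(ID3D10Texture2D),
                                    (void**)&tex10);

    // Step 6: keyed-mutex interfaces on both sides.
    IDXGIKeyedMutex* mutex10 = nullptr;
    IDXGIKeyedMutex* mutex11 = nullptr;
    tex10->QueryInterface(__uuidof(IDXGIKeyedMutex), (void**)&mutex10);
    tex11->QueryInterface(__uuidof(IDXGIKeyedMutex), (void**)&mutex11);

    // Step 7: a D2D render target on top of the D3D10.1 view of the texture.
    IDXGISurface* dxgiSurface = nullptr;
    tex10->QueryInterface(__uuidof(IDXGISurface), (void**)&dxgiSurface);
    ID2D1Factory* d2dFactory = nullptr;
    D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &d2dFactory);
    D2D1_RENDER_TARGET_PROPERTIES props = {};
    props.type = D2D1_RENDER_TARGET_TYPE_DEFAULT;
    props.pixelFormat.format = DXGI_FORMAT_UNKNOWN;      // inherit from the surface
    props.pixelFormat.alphaMode = D2D1_ALPHA_MODE_PREMULTIPLIED;
    ID2D1RenderTarget* d2dTarget = nullptr;
    d2dFactory->CreateDxgiSurfaceRenderTarget(dxgiSurface, &props, &d2dTarget);

    // Step 8, each frame: draw with D2D under the D3D10 key, then hand the
    // texture to D3D11 under the other key.
    mutex10->AcquireSync(0, INFINITE);
    d2dTarget->BeginDraw();
    D2D1_COLOR_F clearColor = { 0.39f, 0.58f, 0.93f, 1.0f };
    d2dTarget->Clear(&clearColor);
    d2dTarget->EndDraw();
    mutex10->ReleaseSync(1);

    mutex11->AcquireSync(1, INFINITE);
    // ... bind tex11 as a shader resource / render it in D3D11 ...
    mutex11->ReleaseSync(0);
}
```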
Direct2D works with D3D10 devices, not D3D11 devices. The D3D11 device is probably what is being reported as lacking the interface by that E_NOINTERFACE.