Using a D3D11 texture in a D3D12 application

My application is written in D3D12. We have a VR plugin that provides a D3D11 shared texture. Can a D3D11 texture be "converted" into a D3D12 texture?
The only solution I have found so far is to write D3D11On12 code that would make use of that texture together with the resources of the D3D12 app.

It seems the only way is to write D3D11 code that makes use of that texture, and then run that code on a D3D11On12 device layered over the D3D12 device.
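For reference, setting up the D3D11On12 layer might look roughly like the following minimal sketch (assuming the app's existing ID3D12Device and ID3D12CommandQueue are passed in; error handling omitted):

```cpp
#include <d3d11.h>
#include <d3d11on12.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Creates a D3D11-on-12 device layered over the app's existing D3D12
// device and command queue, so D3D11 code can run alongside D3D12.
void CreateD3D11On12(ID3D12Device* d3d12Device,
                     ID3D12CommandQueue* commandQueue,
                     ComPtr<ID3D11Device>& d3d11Device,
                     ComPtr<ID3D11DeviceContext>& d3d11Context)
{
    IUnknown* queues[] = { commandQueue };  // queues the wrapped device submits to

    D3D11On12CreateDevice(
        d3d12Device,                       // the existing D3D12 device
        D3D11_CREATE_DEVICE_BGRA_SUPPORT,  // device creation flags
        nullptr, 0,                        // default feature levels
        queues, _countof(queues),
        0,                                 // node mask (single GPU)
        &d3d11Device, &d3d11Context,
        nullptr);                          // chosen feature level (optional)
}
```

Once the wrapped device exists, the plugin's shared texture can be opened on it with the usual D3D11 sharing APIs (OpenSharedResource, or OpenSharedResource1 for NT handles) and consumed by D3D11 code running inside the D3D12 app.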

Related

Detecting AR markers in the fisheye camera?

I am experimenting with using AR markers in my Tango app. The example Java and C applications are great for getting this working with the color camera; however, I want to try this with the fisheye camera (for its added field of view).
I tried the naive approach of simply changing the camera callback so that I was getting the fisheye image, and passed this into the function TangoSupport.detectMarkers. This resulted in a TangoInvalid exception (presumably because the arguments I passed to the function were invalid).
Based on what I've tried so far, it appears that the fisheye image is not supported by the detectMarkers function. Can someone connected to the project verify this? I couldn't find this in the documentation.
Assuming it's not supported by detectMarkers, does anyone have an idea of how to proceed? I am currently streaming the fisheye camera data to my laptop, where I undistort the fisheye image using some OpenCV code I wrote. Using this undistorted image, I am able to quite successfully find AprilTags (a bit different from the Tango's tags) in the image.
Any pointers would be much appreciated.
I never found an easy way to do this, so I implemented my own version by using the OpenCV Android SDK to undistort the fisheye image and then running AprilTags (ported over to Android) on the result.
Here is a link to my code if anyone is interested: https://github.com/occamLab/MobilityGamesAndroid/tree/master/cane_game
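For anyone who wants the gist without digging through the repo, the undistort step looks roughly like this in C++ (the project itself uses the OpenCV Android SDK; the intrinsics K and D below are placeholders, not real calibration values):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>  // cv::fisheye

// Undistorts a fisheye frame so a standard tag detector can handle it.
// Substitute the calibration of your own device for K and D.
cv::Mat undistortFisheye(const cv::Mat& fisheyeFrame)
{
    cv::Matx33d K(254.0,   0.0, 320.0,    // fx,  0, cx
                    0.0, 254.0, 240.0,    //  0, fy, cy
                    0.0,   0.0,   1.0);
    cv::Vec4d D(0.2, -0.1, 0.01, -0.002); // fisheye distortion coefficients

    cv::Mat undistorted;
    // Remaps the fisheye image to a rectilinear one; the result can be
    // fed to a detector such as AprilTags.
    cv::fisheye::undistortImage(fisheyeFrame, undistorted, K, D, K);
    return undistorted;
}
```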

Camera texture in Unity with multithreaded rendering

I'm trying to do pretty much what TangoARScreen does, but with multithreaded rendering turned on in Unity. I did some experiments and I'm stuck.
I tried several things, such as letting Tango render into an OES texture that would then be blitted into a regular Texture2D in Unity, but OpenGL keeps complaining about an invalid display when I try to use it. Probably OnTangoCameraTextureAvailable isn't even called in the correct GL context? It's hard to say when you have no idea how Tango Core works internally.
Is registering a YUV texture via TangoService_Experimental_connectTextureIdUnity the way to go? I'd have to deal with YUV2RGB conversion, I assume. Or should I use OnTangoImageMultithreadedAvailable and deal with the buffer, rendering it with a custom shader, for instance? The documentation is pretty blank in these areas, and every experiment means several wasted days at least. Did anyone get this working? Could you point me in the right direction? All I need is the live camera image rendered into Unity's camera background.
From the April 2017 Gankino release notes: "The C API now supports getting the latest camera image's timestamp outside a GL thread .... Unity multithreaded rendering support will get added in a future release." So I guess we need to wait a little bit.
Multithreaded rendering can still be used in applications without a camera feed (with motion tracking only) by choosing "YUV Texture and Raw Bytes" as the overlay method in the Tango Application script.

Why does the Tango Camera Interface have two separate update texture functions?

I am using the latest Tango release at the time of this question, which is Zaniah (Version 1.46, November 2016). I have two devices: a Project Tango development kit and a pre-release Lenovo phone.
Does anyone know why TangoService_updateTexture only works when a texture with the target GL_TEXTURE_EXTERNAL_OES is connected to the camera interface?
There is a separate TangoService_updateTextureExternalOes function which is stated to be for use with GL_TEXTURE_EXTERNAL_OES textures, which gives the impression that TangoService_updateTexture should work with other types of textures, such as GL_TEXTURE_2D (why else have a separate function?). However, if you connect a texture with the GL_TEXTURE_2D target, a GL error is generated stating that the texture can't be bound when TangoService_updateTexture is called. Without seeing the code, I'm guessing that the Tango API tries to bind the texture to the GL_TEXTURE_EXTERNAL_OES target regardless of which function is called.
So if this is the case, why are there two separate functions?
Has anybody else observed this? Is this intended behaviour, or is it a known issue?
I'm struggling to find any sort of information or documentation about it.
The API docs: https://developers.google.com/tango/apis/c/reference/group/camera
Both TangoService_updateTexture and TangoService_updateTextureExternalOes use OES textures. Unfortunately, Tango only supports OES textures through the C API functions.
The major difference between these two functions is that TangoService_updateTexture requires a prior call to TangoService_connectTexture with a valid texture id. That means when calling TangoService_connectTexture you must already have a valid texture id (and, of course, a GL context) set up. This ties the GL context's lifecycle very tightly to the Tango and Android lifecycles, which can be a little tricky to handle in some cases.
On the other hand, TangoService_updateTextureExternalOes doesn't require any texture id to be set up before it is called, so you can simply call it in the render() function, which guarantees that a GL context is available.
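To make the difference concrete, here is a hedged sketch of the two call patterns (function signatures abbreviated from the API docs linked above and may vary across SDK versions; TangoService_connectTextureId is the C-API name for the connect step described above):

```cpp
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>      // GL_TEXTURE_EXTERNAL_OES
#include <tango_client_api.h>  // Tango C API

// Pattern 1: TangoService_updateTexture needs a texture connected up front,
// so a live GL context must already exist at connect time.
GLuint connectColorTexture() {
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_EXTERNAL_OES, tex);
    TangoService_connectTextureId(TANGO_CAMERA_COLOR, tex, nullptr, nullptr);
    return tex;
}

void renderWithConnectedTexture() {
    double timestamp = 0.0;  // receives the frame's timestamp
    TangoService_updateTexture(TANGO_CAMERA_COLOR, &timestamp);
}

// Pattern 2: TangoService_updateTextureExternalOes takes the texture id at
// call time, so there is no connect step; call it straight from render().
void renderWithOesTexture(GLuint tex) {
    double timestamp = 0.0;
    TangoService_updateTextureExternalOes(TANGO_CAMERA_COLOR, tex, &timestamp);
}
```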

Easiest way to import image file to OpenGL ES 2.0 cross platform

I am learning to use OpenGL ES 2.0 by using MoSync to write cross-platform C code. I have already managed to draw basic shapes such as a triangle, square and circle, so the next stage is to draw some text to the screen. After reading various books, tutorials and forum posts, I realise I have to create a texture atlas bitmap.
I have a file with the text I want to use, i.e. an image file containing 0-9 and a-z. Before I can bind it to a texture object, I first need to load the image into OpenGL. Various tutorials use UIImage or BitmapFactory to load the image, but I cannot use these as MoSync does not contain their header files. Could anyone suggest a way to load my image file into OpenGL?
To use MoSync on the Android platform, you are probably going to have to build a native library for MoSync and write your OpenGL ES code in C++. Most OpenGL ES projects on Android are done in native code, for reasons detailed in this article:
http://software.intel.com/en-us/articles/porting-opengl-games-to-android-on-intel-atom-processors-part-1/
I ended up using maOpenGLTexImage(MAHandle image), which works just like glTexImage2D() except that it takes an image resource instead and figures out pixel formats etc. itself.
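For illustration, a minimal sketch of that approach (FONT_ATLAS is a hypothetical image resource handle from the project's generated resource header; exact headers may differ between MoSync versions):

```cpp
#include <maapi.h>
#include <GLES2/gl2.h>
#include "MAHeaders.h"  // generated resource handles; FONT_ATLAS is assumed
                        // to be an image resource declared in the project

GLuint loadAtlasTexture()
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    // MoSync uploads the image resource into the currently bound texture,
    // decoding it and choosing the pixel format internally (its
    // counterpart to glTexImage2D).
    maOpenGLTexImage(FONT_ATLAS);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}
```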

Can't create Direct2D DXGI Surface

I'm calling this method:
http://msdn.microsoft.com/en-us/library/dd371264(VS.85).aspx
The call fails with E_NOINTERFACE. The documentation is especially unhelpful as to why this may happen. I've enabled all of the DirectX 11 debug output, and that's the best information I got. I know that I have a valid IDXGISurface1* (I also tried IDXGISurface) and that the other parameters are set correctly. Any ideas as to why this call may fail?
Edit:
I am also having problems creating D3D11 devices. If I pass nullptr as the IDXGIAdapter* argument to D3D11CreateDeviceAndSwapChain, it works fine, but if I enumerate the adapters myself and pass in a pointer (the only one returned), it fails with an invalid-argument error. The MSDN documentation explicitly says that if nullptr is passed, the system uses the first adapter returned from EnumAdapters1. I am running a DX11 system.
Direct2D only works when you create a Direct3D 10.1 device, but it can share surfaces with Direct3D 11. All you need to do is create both devices and render all of your Direct2D content to a texture that you share between them. I use this technique in my own applications to use Direct2D with Direct3D 11. It incurs a slight cost, but it is small and constant per frame.
A basic outline of the process you will need to use is (see the code sketch below):
Create your Direct3D 11 device like you do normally.
Create a texture with the D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX miscellaneous flag in order to allow access to the IDXGIKeyedMutex interface.
Use GetSharedHandle (on the texture's IDXGIResource interface) to get a handle to the texture that can be shared among devices.
Create a Direct3D 10.1 device, ensuring that it is created on the same adapter.
Use the OpenSharedResource function on the Direct3D 10.1 device to get a version of the texture for Direct3D 10.1.
Get access to the IDXGIKeyedMutex interface for the Direct3D 10.1 version of the texture.
Use the Direct3D 10.1 version of the texture to create the RenderTarget using Direct2D.
When you want to render with D2D, use the keyed mutex to lock the texture for the D3D10 device. Then acquire it in D3D11 and render the texture as you were probably already trying to do.
It's not trivial, but it works well, and it is the way that they intended you to interoperate between them. Windows 8 looks like it will introduce full D3D11 compatibility, so it will be just as simple as you expect.
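Here is a hedged sketch of that outline in code (error handling omitted; it assumes `d3d11Device` was created on `adapter`, and that an ID2D1Factory `d2dFactory` already exists):

```cpp
#include <d3d10_1.h>
#include <d3d11.h>
#include <d2d1.h>
#include <d2d1helper.h>
#include <dxgi.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void SetupD2DInterop(ID3D11Device* d3d11Device, IDXGIAdapter* adapter,
                     ID2D1Factory* d2dFactory)
{
    // 2. Shared texture on the D3D11 device with a keyed mutex.
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = 1024; desc.Height = 768;      // placeholder size
    desc.MipLevels = 1; desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;  // D2D-compatible format
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX;
    ComPtr<ID3D11Texture2D> tex11;
    d3d11Device->CreateTexture2D(&desc, nullptr, &tex11);

    // 3. Shared handle for cross-device access.
    ComPtr<IDXGIResource> dxgiRes;
    tex11.As(&dxgiRes);
    HANDLE sharedHandle = nullptr;
    dxgiRes->GetSharedHandle(&sharedHandle);

    // 4. Direct3D 10.1 device on the same adapter, with BGRA support for D2D.
    ComPtr<ID3D10Device1> d3d10Device;
    D3D10CreateDevice1(adapter, D3D10_DRIVER_TYPE_HARDWARE, nullptr,
                       D3D10_CREATE_DEVICE_BGRA_SUPPORT,
                       D3D10_FEATURE_LEVEL_10_1, D3D10_1_SDK_VERSION,
                       &d3d10Device);

    // 5./6. Open the texture on the 10.1 device and grab both keyed mutexes.
    ComPtr<ID3D10Texture2D> tex10;
    d3d10Device->OpenSharedResource(sharedHandle, IID_PPV_ARGS(&tex10));
    ComPtr<IDXGIKeyedMutex> mutex10, mutex11;
    tex10.As(&mutex10);
    tex11.As(&mutex11);

    // 7. Direct2D render target over the 10.1 view of the texture.
    ComPtr<IDXGISurface> surface;
    tex10.As(&surface);
    D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
        D2D1_RENDER_TARGET_TYPE_DEFAULT,
        D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM,
                          D2D1_ALPHA_MODE_PREMULTIPLIED));
    ComPtr<ID2D1RenderTarget> d2dRT;
    d2dFactory->CreateDxgiSurfaceRenderTarget(surface.Get(), &props, &d2dRT);

    // 8. Per frame: draw with D2D under the D3D10 mutex, then hand the
    //    texture over to D3D11 for compositing into the scene.
    mutex10->AcquireSync(0, INFINITE);
    d2dRT->BeginDraw();
    // ... Direct2D drawing ...
    d2dRT->EndDraw();
    mutex10->ReleaseSync(1);

    mutex11->AcquireSync(1, INFINITE);
    // ... bind tex11 as a shader resource and draw it with D3D11 ...
    mutex11->ReleaseSync(0);
}
```

The mutex keys (release with 1, acquire with 1, release back with 0) form a simple handshake so the two devices never touch the texture at the same time.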
Direct2D works with D3D10 devices, not D3D11 devices. The D3D11 device is probably what is being reported as lacking an interface by that E_NOINTERFACE.
