I'm getting a double image from the depth sensor, both in the Explorer and via the Unity SDK. The object (in this instance my hand) is about 40-100 cm away, which should be within spec. Does anybody know if this is "normal" behavior? See the screenshot here (build KOT49H.160920, and yes, the cameras are clean :)
The point cloud data callback provides a shared buffer.
You have to copy this buffer before processing it, because the buffer will be overwritten with new depth data and the callback will then be invoked again.
It seems to me that you are processing the buffer directly. That leads to corrupt data: you end up rendering a mix of new and old points!
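A minimal sketch of the copy, using the Tango C API's point-cloud callback (the Unity SDK callback follows the same pattern). The struct fields and the registration call TangoService_connectOnPointCloudAvailable are from memory, so verify them against tango_client_api.h:

#include <cstdint>
#include <cstring>
#include <mutex>
#include <vector>

// Private copy of the latest point cloud. The renderer reads only from this,
// never from the shared buffer handed to the callback.
static std::mutex g_cloud_mutex;
static std::vector<float> g_cloud_copy;   // x, y, z, confidence per point
static uint32_t g_cloud_num_points = 0;

// Called by the Tango service. The 'cloud' buffer is shared and will be
// overwritten by the next depth frame, so copy it before returning.
void OnPointCloudAvailable(void* /*context*/, const TangoPointCloud* cloud) {
    std::lock_guard<std::mutex> lock(g_cloud_mutex);
    g_cloud_num_points = cloud->num_points;
    g_cloud_copy.resize(static_cast<size_t>(cloud->num_points) * 4);
    std::memcpy(g_cloud_copy.data(), cloud->points,
                g_cloud_copy.size() * sizeof(float));
}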
Related
I'm trying to do pretty much what TangoARScreen does but with multithreaded rendering on in Unity. I did some experiments and I'm stuck.
I tried several things, such as letting Tango render into the OES texture that would then be blitted into a regular Texture2D in Unity, but OpenGL keeps complaining about an invalid display when I try to use it. Perhaps OnTangoCameraTextureAvailable isn't even called in the correct GL context? Hard to say when you have no idea how Tango Core works internally.
Is registering a YUV texture via TangoService_Experimental_connectTextureIdUnity the way to go? I'd have to deal with the YUV-to-RGB conversion, I assume. Or should I use OnTangoImageMultithreadedAvailable and deal with the buffer, rendering it with a custom shader for instance? The documentation is pretty blank in these areas and every experiment costs several days at least. Did anyone get this working? Could you point me in the right direction? All I need is the live camera image rendered as Unity's camera background.
From the April 2017 (Gankino) release notes: "The C API now supports getting the latest camera image's timestamp outside a GL thread .... Unity multithreaded rendering support will get added in a future release." So I guess we need to wait a little bit.
Multithreaded rendering can still be used in applications without a camera feed (motion tracking only) by choosing "YUV Texture and Raw Bytes" as the overlay method in the Tango Application script.
I have set up a WMF session (built an IMFTopology object with a source pointing to a webcam and a standard EVR for screen output), assigned it to an IMFMediaSession and started a preview. All is working great.
Now I stop the session (waiting for the actual stop), change the source's resolution (setting an appropriate IMFMediaType via its IMFMediaTypeHandler) and then build a new topology with that new source and a newly created IMFActivate object for the EVR. I also change the output window's size to match the new frame size.
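The resolution change itself is roughly this (a sketch in my own words; error handling is trimmed and typeIndex is a placeholder for the native format I want):

#include <mfidl.h>

// Pick a different native format on the webcam source (stream 0 assumed).
HRESULT SetSourceResolution(IMFMediaSource* pSource, DWORD typeIndex) {
    IMFPresentationDescriptor* pPD = nullptr;
    IMFStreamDescriptor* pSD = nullptr;
    IMFMediaTypeHandler* pHandler = nullptr;
    IMFMediaType* pType = nullptr;
    BOOL selected = FALSE;

    HRESULT hr = pSource->CreatePresentationDescriptor(&pPD);
    if (SUCCEEDED(hr)) hr = pPD->GetStreamDescriptorByIndex(0, &selected, &pSD);
    if (SUCCEEDED(hr)) hr = pSD->GetMediaTypeHandler(&pHandler);
    // typeIndex identifies the native media type with the resolution I want.
    if (SUCCEEDED(hr)) hr = pHandler->GetMediaTypeByIndex(typeIndex, &pType);
    if (SUCCEEDED(hr)) hr = pHandler->SetCurrentMediaType(pType);

    if (pType) pType->Release();
    if (pHandler) pHandler->Release();
    if (pSD) pSD->Release();
    if (pPD) pPD->Release();
    return hr;
}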
When I start that new session there's no image (or the image is garbled, or cut off at the bottom, depending on the change in resolution). It is almost as if the new topology is trying to reuse the previously set-up EVR and it is not working correctly.
I tried setting that new media type on the EVR when generating a new one, tried to force the new window size on the EVR (via a call to SetWindowPos()), tried to get that output node by the previously assigned streamID and set its preferred input format... Nothing worked - I get the same black (or garbled) image when I start the playback.
The only time the "new" session plays correctly is when I switch back to the original source format. Then it continues as if nothing bad had happened.
Why is that? How do I fix this?
Not providing the source code as there's no easy way to just provide the relevant parts. Generally my code closely follows the sample from MSDN's article on creating a Media Session for playing back a file.
According to Microsoft's documentation, IMFMediaSession manages starting/stopping the source, so I rely on that when changing the source's video format (otherwise the application fails).
If you want to build a genuinely new topology, you need to release all of the Media Foundation objects (source, sink, topology, and so on).
If you don't, things get complicated.
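Something along these lines (a rough sketch; the wait for MESessionClosed is simplified and pSession/pSource/pTopology are whatever you kept from the original setup):

#include <mfapi.h>
#include <mfidl.h>

// Tear the old graph down completely before building the new one.
void TearDownOldGraph(IMFMediaSession*& pSession,
                      IMFMediaSource*& pSource,
                      IMFTopology*& pTopology) {
    if (pSession) {
        pSession->Stop();
        pSession->Close();   // wait for the MESessionClosed event before continuing
    }
    if (pSource)   { pSource->Shutdown();  pSource->Release();   pSource   = nullptr; }
    if (pSession)  { pSession->Shutdown(); pSession->Release();  pSession  = nullptr; }
    if (pTopology) { pTopology->Release(); pTopology = nullptr; }

    // Then rebuild everything from scratch: MFCreateMediaSession, a fresh source
    // with the new IMFMediaType, MFCreateVideoRendererActivate for the (resized)
    // output window, MFCreateTopology, and finally IMFMediaSession::SetTopology.
}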
Is it possible to copy pixel data directly from a Windows device context into an OpenGL rendering context (an OpenGL texture, to be specific)? I know that I can copy the device context's pixel data into system memory (off the graphics card) and later upload it back into my OpenGL framebuffer, but what I'm looking for is a direct transfer where the data never has to leave the GPU.
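For reference, this is roughly the indirect path I mean (a rough sketch; hwnd, tex, width and height are placeholders and all error checking is omitted):

#include <windows.h>
#include <GL/gl.h>

#ifndef GL_BGRA_EXT
#define GL_BGRA_EXT 0x80E1
#endif

// Capture a window's client area with GDI into system memory, then upload it
// into an existing OpenGL texture. This is the CPU round trip I'd like to avoid.
void CopyWindowToTexture(HWND hwnd, GLuint tex, int width, int height) {
    HDC windowDC = GetDC(hwnd);
    HDC memDC = CreateCompatibleDC(windowDC);

    // 32-bit top-down DIB so the captured pixels land in a buffer OpenGL can read.
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
    bmi.bmiHeader.biWidth = width;
    bmi.bmiHeader.biHeight = -height;   // negative height = top-down rows
    bmi.bmiHeader.biPlanes = 1;
    bmi.bmiHeader.biBitCount = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    void* pixels = nullptr;
    HBITMAP dib = CreateDIBSection(memDC, &bmi, DIB_RGB_COLORS, &pixels, nullptr, 0);
    HGDIOBJ old = SelectObject(memDC, dib);

    BitBlt(memDC, 0, 0, width, height, windowDC, 0, 0, SRCCOPY);   // pixels leave the GPU here

    glBindTexture(GL_TEXTURE_2D, tex);                             // ...and go right back up here
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_BGRA_EXT, GL_UNSIGNED_BYTE, pixels);

    SelectObject(memDC, old);
    DeleteObject(dib);
    DeleteDC(memDC);
    ReleaseDC(hwnd, windowDC);
}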
I am trying to write an application that is essentially a virtual magnifier. It consists of one window, which displays the contents of any other windows that are open underneath it. The target for my application is machines running Windows 8 and higher, and I am using the basic Win32 API. The reason I want to use OpenGL to display the contents of my window is that I wish to perform various spatial transformations (distortions) on the GPU, not just magnification. Using OpenGL, I believe I can perform these transformations very fast.
Previously, I thought that all I had to do was "move" my OpenGL rendering context onto each third-party window that I wanted to steal pixel data from, use glReadPixels() to copy this data into a PBO, then switch my rendering context back to the magnifier window and proceed with rendering. However, I understand that this isn't possible, because OpenGL doesn't have access to any pixel data that wasn't rendered by OpenGL itself.
Any help would be greatly appreciated.
I have to grab the DirectX render (via shared memory or GPU memory) from a Windows application called "Myapp" and display that render (view) in four simple DirectX applications (showing exactly the same view as the first application, "Myapp").
Some people talk about the back buffer and others about GetFrontBufferData.
1) How can I easily get the DirectX render of a DirectX Windows application in C++?
2) How can I share this render easily and quickly with 4 other DirectX applications in C++?
Thanks in advance
You can never get the rendering data from the back buffer of a third-party application; the only interface Microsoft provides is GetFrontBufferData(). This function is the only way to take an antialiased screenshot, and it's very slow.
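Typical usage looks roughly like this (Direct3D 9, a sketch without error handling; width and height would be the desktop dimensions):

#include <d3d9.h>

// Copy the front buffer (the image currently on screen) into a system-memory
// surface. This captures content you didn't render yourself, but the GPU-to-CPU
// copy is what makes it slow.
IDirect3DSurface9* CaptureFrontBuffer(IDirect3DDevice9* device,
                                      UINT width, UINT height) {
    IDirect3DSurface9* surface = nullptr;
    // GetFrontBufferData needs a lockable D3DFMT_A8R8G8B8 surface in SYSTEMMEM.
    device->CreateOffscreenPlainSurface(width, height, D3DFMT_A8R8G8B8,
                                        D3DPOOL_SYSTEMMEM, &surface, nullptr);
    device->GetFrontBufferData(0, surface);   // 0 = first swap chain
    // Read the pixels with surface->LockRect(...), save them, or copy them
    // into shared memory for the other processes.
    return surface;
}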
The front buffer contains the data currently displayed on your screen.
The back buffer contains the data being drawn but not yet presented.
When you call Present, DirectX swaps the two buffers by simply exchanging the buffer pointers, so the front buffer becomes the back buffer and the back buffer becomes the front buffer. This is called surface flipping.
There are many ways to share memory between processes.
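For the cross-process part, one option is a named file mapping that the "Myapp" side writes each captured frame into and the four viewers read back out (a sketch; the name, frame size and synchronization scheme are placeholders):

#include <windows.h>

// Both the producer ("Myapp") and the four viewers call this; CreateFileMapping
// opens the existing mapping if another process already created it with this name.
const wchar_t* kSharedName = L"Local\\MyappFrameBuffer";   // placeholder name
const DWORD    kFrameBytes = 1920 * 1080 * 4;              // one ARGB frame, adjust

void* OpenSharedFrameBuffer(HANDLE* outMapping) {
    *outMapping = CreateFileMappingW(INVALID_HANDLE_VALUE, nullptr,
                                     PAGE_READWRITE, 0, kFrameBytes, kSharedName);
    if (!*outMapping) return nullptr;
    return MapViewOfFile(*outMapping, FILE_MAP_ALL_ACCESS, 0, 0, kFrameBytes);
}

// The producer memcpy's the locked front-buffer pixels into this view after each
// capture; the viewers memcpy them out into their own textures. Guard the copies
// with a named mutex or event so readers never see a half-written frame.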
Can I ask a question: what do you want to do with the rendering data?
Thanks for your answer.
I just want to show the render/view of the application "Myapp" in 4 other DirectX views, unchanged (in C++).
I am having some memory issues with our Android app when handling bitmaps (duh!).
We have multiple activities loading images from a server; this could be, for example, a background image for the activity.
This background image could be the same for multiple activities, and right now each activity is loading its own background image.
This means if the flow is ac1->ac2->ac3->ac4 the same image will be loaded 4 times and using 4x memory.
How do I optimize image handling for this scenario? Should I create an image cache where the image is stored, so that each activity asks the cache for images first? If so, how do I know when to evict an image from the cache?
Any suggestions, links to good tutorials, or similar are highly appreciated.
Regards
EDIT:
When downloading images for the device, the exact sizes are used, meaning that if a UI element needs a 100x100 pixel image it gets exactly that size, so there is no need for scaling. Because of that I am not sure downscaling the image when loading it into memory will help. Maybe images need to be unloaded in an activity when moving on to the next one and then reloaded when going back.
One thing you might want to try is scaling down your bitmaps (make a thumbnail) to a size that is more appropriate to your device. It's pretty easy to quickly use up all the RAM on an Android device with a bitmap if you don't scale it down. This recipe shows how to do so on the fly. You could adapt this to save the images to disk.
You could also use your own LruCache implementation to cache the images for your app.
After reading your update I will give you another tip, then.
I can still post the patch too if people want that.
What you need to do with those bitmaps is wrap them in a using block. That way the bitmap is disposed (and Android can unload it) as soon as the block finishes executing.
Example:
// Android.Graphics.Bitmap has no public constructor; decode one instead.
using (Bitmap usedBitmap = BitmapFactory.DecodeFile(path)) // 'path' is your image file
{
    // Do stuff with the bitmap here.
    // No need to call usedBitmap.Dispose() explicitly; the using block does it on exit.
}
With this code your app shouldn't keep all the used bitmaps in memory.