DirectX capture renderer in C++ - view

I need to capture (via shared memory or GPU memory) the DirectX render of a Windows application called "Myapp" and display that render (view) in four simple DirectX applications, each showing exactly the same view as the original "Myapp" window.
Some people mention the back buffer, and others mention GetFrontBufferData.
1) How can I easily get the DirectX render of a DirectX Windows application in C++?
2) How can I easily and quickly share this render with four other DirectX applications in C++?
Thanks in advance

You can never get the rendering data from the back buffer of a third-party application; the only interface Microsoft provides is GetFrontBufferData(). This function is the only way to take an antialiased screenshot, and it's very slow.
The front buffer contains the data currently displayed on your screen.
The back buffer contains the data being drawn, but not yet presented.
When you call Present, DirectX swaps the two buffers by simply exchanging the buffer pointers, so the front buffer becomes the back buffer and the back buffer becomes the front buffer. This is called surface flipping.
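For reference, a minimal capture sketch with Direct3D 9 might look like this (device is an assumed, already-initialized IDirect3DDevice9*, and the 1920x1080 size is illustrative; it must match your display mode):

IDirect3DSurface9* surface = nullptr;
// The destination must be a system-memory surface in D3DFMT_A8R8G8B8,
// sized to the current display mode.
device->CreateOffscreenPlainSurface(1920, 1080, D3DFMT_A8R8G8B8,
                                    D3DPOOL_SYSTEMMEM, &surface, nullptr);
device->GetFrontBufferData(0, surface);  // slow: copies GPU -> system memory
// ... read the pixels via surface->LockRect() ...
surface->Release();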
There are many ways to share memory between processes.
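For example, a named file mapping is one classic option; a sketch, where the mapping name and buffer size are illustrative:

// Producer and consumers open the same named mapping; one process writes
// the captured frame, the other four read it.
HANDLE mapping = CreateFileMappingW(INVALID_HANDLE_VALUE, nullptr,
                                    PAGE_READWRITE, 0, 1920 * 1080 * 4,
                                    L"Local\\MyappFrame");
void* view = MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS, 0, 0, 0);
// ... memcpy the locked surface bits into view (or read them out) ...
UnmapViewOfFile(view);
CloseHandle(mapping);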
Can I ask a question: what do you want to do with the rendering data?

Thanks for your answer.
I just want to show the render/view of the application "Myapp" in four other DirectX views, without changes (in C++).

Related

DirectX to OpenGL hot-swap, doesn't display on Win32 window

During the development of my engine, I'm trying to implement a feature that enables hot-swapping between OpenGL and DirectX. Currently I'm testing on the Win32 platform, and I came across the following problem:
I implemented both renderers (OpenGL 3.0 and Direct3D 11), and both work fine on their own. The swapping mechanism is the following:
Destroy the current rendering context, and build up the new one. For example: release all DirectX objects, then create an OpenGL context via WGL. I'm trying to implement this using only one window (HWND).
Swapping from OpenGL 3.0 to DirectX11 works. (After destroying OpenGL, DirectX renders fine)
Destroying OpenGL and then recreating OpenGL again, works. Same with DirectX.
When I try to swap from DirectX to OpenGL, the window stops displaying the newly drawn content and only shows the last DirectX frame.
To construct the OpenGL context I'm using WGL. The class for the window was created with the CS_OWNDC style. I'm using SwapBuffers to flip the window buffers. Before setting up the context, I use SetPixelFormat with the previously returned value from ChoosePixelFormat. The created context is version 3.0, ensured via wglCreateContextAttribsARB.
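For reference, that setup looks roughly like this (a sketch with error handling omitted; hwnd is assumed, and wglCreateContextAttribsARB must already have been loaded through a temporary context):

HDC dc = GetDC(hwnd);                 // CS_OWNDC keeps this DC private
PIXELFORMATDESCRIPTOR pfd = {};
pfd.nSize = sizeof(pfd);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
// Note: SetPixelFormat may only be called once per window.
SetPixelFormat(dc, ChoosePixelFormat(dc, &pfd), &pfd);
const int attribs[] = { WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
                        WGL_CONTEXT_MINOR_VERSION_ARB, 0,
                        0 };
HGLRC ctx = wglCreateContextAttribsARB(dc, nullptr, attribs);
wglMakeCurrent(dc, ctx);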
Additional information:
All of the DirectX references are released; this was checked by calling ReportLiveDeviceObjects and verifying that ID3D11Device1::Release returned 0. ID3D11DeviceContext1::ClearState and Flush were called to ensure object destruction.
None of the OpenGL methods report an error via glGetError; this is checked after every call. The same goes for all OS and WGL calls.
The OpenGL rendering calls are executing as expected, for example:
OpenGL rendering with 150 fps
Swap to DirectX, render with 60 fps (VSYNC)
Swap back to OpenGL, rendering again with 150 fps (not more)
There are other scenarios where OpenGL renders with more than 150 fps, so the rendering calls are executing properly.
My guess is that the flipping of the buffers somehow doesn't work, yet SwapBuffers returns TRUE anyway.
I tried using SaveDC and RestoreDC before and after using DirectX, but this brought no solution.
Using wglSwapLayerBuffers instead of SwapBuffers gives no change.
Can I somehow restore the HWND, or HDC to the original state, or do you guys have any idea why this might happen?
I guess I posted my question too soon; anyway, this is how I solved it.
I dug around the documentation for DirectX, and for the function CreateSwapChainForHwnd I found the following:
Because you can associate only one flip presentation model swap chain at a time with an HWND, the Microsoft Direct3D 11 policy of deferring the destruction of objects can cause problems if you attempt to destroy a flip presentation model swap chain and replace it with another swap chain.
I was using DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL in my swap chain descriptor, and this could mean that DirectX sets up a flip-model swap chain for the window, so when I try to use the window with OpenGL, swapping the buffers somehow fails.
The solution is to not use a FLIP mode when creating the swap chain:
DXGI_SWAP_CHAIN_DESC1 scd = {};                  // zero-initialize all fields
scd.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;       // bitblt model instead of flip
scd.Scaling = DXGI_SCALING_ASPECT_RATIO_STRETCH;
You have to set Scaling to something other than DXGI_SCALING_NONE, or the creation will fail.
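For completeness, a full non-flip descriptor along these lines might look like this (a sketch; device, factory, and hwnd are assumed to exist, and the field values are illustrative):

DXGI_SWAP_CHAIN_DESC1 scd = {};              // Width/Height 0 = use window size
scd.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
scd.SampleDesc.Count = 1;                    // no multisampling
scd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
scd.BufferCount = 1;                         // bitblt model allows one buffer
scd.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;   // not a flip model
scd.Scaling = DXGI_SCALING_STRETCH;          // anything but DXGI_SCALING_NONE
IDXGISwapChain1* swapChain = nullptr;
factory->CreateSwapChainForHwnd(device, hwnd, &scd,
                                nullptr, nullptr, &swapChain);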
The interesting part is that DirectX still does not properly destroy the flip model on the window, although I did everything the documentation suggested (the ClearState and Flush calls).
See the Remarks section of CreateSwapChainForHwnd.
Edit: I found this question again after some time. If anybody still has an idea how to revert to using GDI again instead of the DWM backbuffer, it would be greatly appreciated.

Copying pixel data directly from a Windows device context to an OpenGL rendering context

Is it possible to copy pixel data directly from a Windows device context into an OpenGL rendering context (an OpenGL texture, to be specific)? I know that I can copy the Windows device context pixel data into system memory (taking it off the graphics card) and later upload it back into my OpenGL framebuffer, but what I'm looking for is a direct transfer where the data doesn't have to leave the GPU.
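For reference, the indirect path through system memory that I mentioned might look roughly like this (a sketch; hwnd, width, height, and a bound GL_TEXTURE_2D are assumed):

HDC windowDC = GetDC(hwnd);
HDC memDC = CreateCompatibleDC(windowDC);
BITMAPINFO bmi = {};
bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
bmi.bmiHeader.biWidth = width;
bmi.bmiHeader.biHeight = -height;            // negative = top-down rows
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 32;
bmi.bmiHeader.biCompression = BI_RGB;
void* pixels = nullptr;
HBITMAP dib = CreateDIBSection(memDC, &bmi, DIB_RGB_COLORS, &pixels, nullptr, 0);
SelectObject(memDC, dib);
BitBlt(memDC, 0, 0, width, height, windowDC, 0, 0, SRCCOPY); // CPU-side copy
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, pixels);             // upload to GL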
I am trying to write an application that is essentially a virtual magnifier. It consists of one window, which displays the contents of any other windows that are open underneath it. The target for my application is machines running Windows 8 and higher, and I am using the basic Win32 API. I want to use OpenGL to display the contents of my window because I wish to perform various spatial transformations (distortions) on the GPU, not just magnification. Using OpenGL, I believe I can perform these transformations very quickly.
Previously, I thought all I had to do was "move" my OpenGL rendering context onto each "third-party" window I wanted to take pixel data from and use glReadPixels() to copy this data into a PBO. I could then switch my rendering context back to my magnifier window and proceed with rendering. However, I understand this isn't possible, because OpenGL doesn't have access to any pixel data that wasn't rendered by OpenGL itself.
Any help would be greatly appreciated.

Scene2d tables turn black on one phone after a few game resets

My game screen uses both Scene2d and normal libgdx sprites. I use Scene2d for the pause menus, which contain some tables and text buttons. Everything is fine on the PC. Everything is also fine on two mobile phones I'm testing the game on, but I have a problem on a third phone. It seems that after a restart or two of the game level, all the Scene2d elements that are supposed to appear on screen have turned black. They are still responsive, meaning the buttons do what they are supposed to do: they move, rotate, and execute properly, but they are all black. What could be the issue here? I don't have this problem on the PC or on the other phones.
What you describe is a symptom of using a texture across a reset of the OpenGL context. Your app holds pointers into OpenGL state inside its libgdx Texture objects, and when the OpenGL device is handed over to another app, those pointers become stale.
LibGDX generally does a good job of restoring state across simple resets, but there are several ways to cause problems. The most common is to store LibGDX OpenGL state (e.g., a Texture) in a static property. The JVM gets reused across application instances, so LibGDX cannot tell that this static object has become stale. See http://bitiotic.com/blog/2013/05/23/libgdx-and-android-application-lifecycle/ for details on how to trigger the different lifecycles.
See In game Images disappear on Android device if i run from widget, but not when I install apk first time and Android static object lifecycle
I know there is already an accepted answer for this post, but maybe this will help you too:
Texture is not displayed in the application
The main idea is to dispose of your assets and load them again when the application becomes visible.

Best practice for handling bitmaps in Android with Mono for Android / Xamarin.Android

I am having some memory issues with our Android app when handling bitmaps (duh!).
We have multiple activities that load images from a server; for example, a background image for the activity.
This background image can be the same for multiple activities, and right now each activity loads its own copy.
This means that if the flow is ac1->ac2->ac3->ac4, the same image will be loaded four times, using 4x the memory.
How do I optimize image handling for this scenario? Should I create an image cache, so that each activity asks the cache for the image first? If so, how do I know when to remove the image from the cache so it can be garbage-collected?
Any suggestions, links to good tutorials, or similar are highly appreciated.
Regards
EDIT:
When downloading images for the device, the exact sizes are requested, meaning that if a UI element needs a 100x100 pixel image, it gets exactly that size, so there is no need for scaling. So I am not sure about downscaling the images when loading them into memory. Maybe I need to unload images in the activity when moving on to the next one, and then reload them when going back.
One thing you might want to try is scaling your bitmaps down (making a thumbnail) to a size more appropriate for your device. It's pretty easy to quickly use up all the RAM on an Android device with a bitmap if you don't scale it down. This recipe shows how to do that on the fly; you could adapt it to save the images to disk.
You could also create your own implementation of an LruCache to cache the images for your app.
After reading your update, I will give you another tip then.
I can still post the patch too, if people want that.
What you need to do with those bitmaps is wrap them in a using block. That way the bitmap is unloaded as soon as that block has finished executing.
Example:
// Android.Graphics.Bitmap has no public constructor; create it through a
// factory method instead (the file path here is just an example)
using (Bitmap usedBitmap = BitmapFactory.DecodeFile("path/to/image")) {
    // Do stuff with the Bitmap here
    // You can call usedBitmap.Dispose() explicitly, but the using block
    // already does that on exit
}
With this code your app shouldn't keep all the used bitmaps in memory.

Rendering OpenGL just once rather than every frame

Nearly every OpenGL ES example I see updates every frame, even if the image itself is not moving in any way.
I did some tests and found that it works quite well to render (using drawArrays, etc.) and then present the render buffer (these two actions together) just once, and then do neither again until something changes on screen.
Is this "normal"? I just don't see it done much. Once drawn, the graphics stay on the screen without constant additional rendering.
Is this acceptable?
Yes, it is acceptable and completely valid. You also need to take into account that you must render again when the context is lost. To give an example: the standard Android OpenGL helper classes offer an option to draw only when needed rather than in a loop (RENDERMODE_WHEN_DIRTY).
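On the desktop, the equivalent idea is to redraw only in response to an invalidation event instead of in a continuous loop; a minimal Win32 sketch (renderScene is an assumed function that draws and presents one frame):

case WM_PAINT: {
    PAINTSTRUCT ps;
    BeginPaint(hwnd, &ps);
    renderScene();   // draw and present a single frame
    EndPaint(hwnd, &ps);
    // Nothing is drawn again until InvalidateRect triggers another WM_PAINT.
    return 0;
}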
