In my app I render my Direct3D content to the window, as recommended, using a swap chain in flip sequential mode (IDXGISwapChain1 with DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL), on a window created without a redirection bitmap (WS_EX_NOREDIRECTIONBITMAP).
However, for the purpose of smooth window resizing (without lag and flicker), I want to intercept live-resize enter/exit events and temporarily switch rendering to an ID2D1HwndRenderTarget on the window's redirection surface, which seems to offer the smoothest possible resize.
The question is: how do I render the Direct3D content to an ID2D1HwndRenderTarget?
The problem is that the Direct3D rendering and the ID2D1HwndRenderTarget obviously belong to different devices, and I can't find a way to make them interoperate.
Here are the ideas that come to mind:
A. Somehow make the ID2D1HwndRenderTarget the output frame buffer for the 3D rendering. This would be the best-case scenario, because it would minimize buffer copying.
1. Somehow obtain an ID3D11Texture2D, or at least an IDXGISurface, from the ID2D1HwndRenderTarget
2. Create a render-target view with ID3D11Device::CreateRenderTargetView from that DXGI surface / Direct3D texture
3. Bind it with ID3D11DeviceContext::OMSetRenderTargets
4. Render directly to the ID2D1HwndRenderTarget
The problem is step 1. The interfaces of ID2D1HwndRenderTarget and its ancestor ID2D1RenderTarget are fairly sparse and don't seem to provide this capability. Is there a way to obtain a DXGI or Direct3D surface from an ID2D1HwndRenderTarget?
B. Create an off-screen Direct3D frame-buffer texture, render the 3D content into it, and then copy it to the window render target.
1. Create an off-screen texture (ID3D11Device::CreateTexture2D)
2. Create an ID2D1Device using D2D1CreateDevice (from the same IDXGIDevice as my ID3D11Device)
3. Create an ID2D1DeviceContext using ID2D1Device::CreateDeviceContext
4. Create an ID2D1Bitmap using ID2D1DeviceContext::CreateBitmapFromDxgiSurface
5. Draw the bitmap using ID2D1HwndRenderTarget::DrawBitmap
Unfortunately, I get the error "An operation failed because a device-dependent resource is associated with the wrong ID2D1Device (resource domain)". Obviously, the bitmap comes from a different ID2D1Device. But how can I draw a texture bitmap from one Direct2D device onto another?
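For reference, the middle steps above (creating the D2D device, device context, and bitmap) can be sketched as follows. This is a hedged sketch, not a fix: `d3dDevice` and `offscreenTex` are placeholder ComPtrs assumed to be created elsewhere, the D3D11 device must have been created with D3D11_CREATE_DEVICE_BGRA_SUPPORT, and error handling is abbreviated.

```cpp
#include <wrl/client.h>
#include <d3d11.h>
#include <d2d1_1.h>
using Microsoft::WRL::ComPtr;

// Assumed to exist: d3dDevice (ComPtr<ID3D11Device>), offscreenTex (ComPtr<ID3D11Texture2D>)
ComPtr<IDXGIDevice> dxgiDevice;
d3dDevice.As(&dxgiDevice);

ComPtr<ID2D1Device> d2dDevice;
D2D1CreateDevice(dxgiDevice.Get(), nullptr, &d2dDevice);

ComPtr<ID2D1DeviceContext> d2dContext;
d2dDevice->CreateDeviceContext(D2D1_DEVICE_CONTEXT_OPTIONS_NONE, &d2dContext);

ComPtr<IDXGISurface> surface;
offscreenTex.As(&surface);  // the off-screen texture from step 1

D2D1_BITMAP_PROPERTIES1 props = D2D1::BitmapProperties1(
    D2D1_BITMAP_OPTIONS_NONE,
    D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED));
ComPtr<ID2D1Bitmap1> bitmap;
d2dContext->CreateBitmapFromDxgiSurface(surface.Get(), &props, &bitmap);

// Passing this bitmap to ID2D1HwndRenderTarget::DrawBitmap is what triggers the
// "wrong ID2D1Device (resource domain)" error: the HWND render target lives on
// its own internal Direct2D device, separate from d2dDevice.
```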
C. Out of desperation, I tried to map the content of my frame buffer to CPU memory using IDXGISurface::Map. However, this is tricky, because it requires the D3D11_USAGE_STAGING and D3D11_CPU_ACCESS_READ flags when creating the texture, which then seems to make it impossible to use this texture as an output frame buffer (or am I missing something?). And this technique will most likely be very slow anyway, because it involves synchronizing the CPU with the GPU and copying the whole texture at least twice.
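For what it's worth, the staging restriction in C is normally worked around with two textures: render into a DEFAULT-usage render target as usual, then copy into a separate STAGING texture on the GPU and map that one. A hedged sketch, with illustrative names (`d3dDevice`, `context`, `frameBufferTex`, `width`, `height` assumed to exist):

```cpp
#include <wrl/client.h>
#include <d3d11.h>
using Microsoft::WRL::ComPtr;

// Staging texture: CPU-readable, same size/format as the render target.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = width;
desc.Height = height;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_STAGING;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
ComPtr<ID3D11Texture2D> staging;
d3dDevice->CreateTexture2D(&desc, nullptr, &staging);

// GPU-to-GPU copy from the normal render target, then map the staging copy.
context->CopyResource(staging.Get(), frameBufferTex.Get());
D3D11_MAPPED_SUBRESOURCE mapped;
context->Map(staging.Get(), 0, D3D11_MAP_READ, 0, &mapped);  // stalls until the copy finishes
// mapped.pData / mapped.RowPitch now point at the pixel rows
context->Unmap(staging.Get(), 0);
```

This doesn't remove the CPU/GPU sync cost the question mentions, but it does lift the "can't render to a staging texture" restriction.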
Has anyone ever succeeded with the task of rendering 3D content to a ID2D1HwndRenderTarget? Please share your experience. Thanks!
Related
I have a 3D engine, and I'd like to be able to embed Helix-based content into it (like draw on a wall).
I know it is possible to use RenderTargetBitmap to render any WPF element, but that would introduce too many levels of indirection (especially considering WPF runs on DX9), and there is no official way to access the image apart from copying it to CPU memory first.
Ideally, I'd like to supply Viewport3DX with my own target color and depth buffers.
You can get the back buffer from
Viewport3DX.RenderHost.RenderBuffer.BackBuffer.Resource as global::SharpDX.Direct3D11.Texture2D
You can hook up the Viewport3DX.RenderHost.Rendered event and copy the buffer data into your own texture.
I'm using Direct2D for some simple accelerated image compositing/manipulation, and now I need to get the pixel data from the RenderTarget to pass it to an encoder.
So far, I've managed this by rendering to a BitmapRenderTarget, then finally drawing the bitmap from that onto a WicBitmapRenderTarget which allows me to Lock an area and get a pointer to the pixels.
However...
This only works if my initial RenderTarget uses D2D1_RENDER_TARGET_TYPE_SOFTWARE, because a hardware render target's bitmap can't be shared with the WicBitmapRenderTarget, which only supports software rendering. And software rendering is significantly slower than hardware.
Is there any way round this? Would I be better off using Direct3D instead?
I'm using OpenGL to speed up GUI rendering. It works fine, but when the user drags the window onto a different display (potentially connected to a different GPU), the window's interior starts rendering black. When the user moves the window back to the original display, it starts working again.
I have this report from Windows XP, I'm unfortunately unable to check on Win 7/8 and Mac OS X right now.
Any ideas what to do about it?
In the current Windows and Linux driver models OpenGL contexts are tied to a certain graphics scanout framebuffer. It's perfectly possible for the scanout framebuffer to span over several connected displays and even GPUs, if the GPU architecture allows for that (for example NVidia SLi and AMD CrossFire).
However, what does not work (with current driver architectures) is sharing a scanout buffer between GPUs of different vendors (e.g. NVidia and Intel), or between all-NVidia or all-AMD GPUs that have not been connected via SLI or CrossFire.
So if the monitors are connected to different graphics cards then this can happen.
This is, however, only a software design limitation. It is perfectly possible to separate graphics rendering from display scanout; this is in fact the technical basis for hybrid graphics, where a fast GPU renders into scanout buffer memory managed by a different GPU (NVidia Optimus, for example).
The low-hanging fruit would be to recreate the context when the window passes over to a screen connected to a different GPU. But this has problems: if the window is split among screens, it will stay black on one of them. Also, recreating a context, together with re-uploading all the data, can be a lengthy operation. And often, in situations like yours, the device on the other screen is incompatible with the feature set of the original context.
A workaround for this is to do all rendering on an off-screen framebuffer object (FBO), whose contents you then copy to CPU memory and from there to the target window using GDI operations. This method, however, has the major drawback of involving a full memory round trip and increased latency.
The steps to set this up would be:
1. Identify the screen with the GPU you want to use.
2. Create a hidden window centered on that screen (i.e. do not pass WS_VISIBLE as a style to CreateWindow, and do not call ShowWindow on it).
3. Create an OpenGL context on this window; its pixel format doesn't have to be double-buffered, but double buffering usually gives better performance.
4. Create the target, user-visible window; do not bind an OpenGL context to this window.
5. Set up a framebuffer object (FBO) on the OpenGL context:
   - the renderbuffer attachment of the FBO should be created to match the client-rect size of the target window; when the window is resized, resize the renderbuffer;
   - set up two renderbuffer objects for double-buffered operation.
6. Set up a pixel buffer object (PBO) matching the dimensions of the renderbuffers; when the renderbuffers are resized, the PBO must be resized as well.
7. Render to the FBO with OpenGL, then transfer the pixel contents to the PBO (glBindBuffer, glReadPixels).
8. Map the PBO into process memory using glMapBuffer, and use SetDIBitsToDevice to transfer the data from the mapped memory region to the target window's device context; then unmap the PBO.
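The readback path at the end of the steps above might look roughly like this. A sketch under assumptions: an extension loader (e.g. GLEW) is initialized, and `fbo`, `pbo`, `width`, `height`, and `hdcTarget` are set up elsewhere; GL_BGRA readback matches the 32-bit DIB layout, and the bottom-up row order of glReadPixels matches a positive biHeight.

```cpp
#include <windows.h>
#include <GL/glew.h>

// Render the scene into the off-screen FBO.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
/* ... draw ... */

// Asynchronously transfer the pixels into the PBO.
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, nullptr);  // offset 0 into the PBO

// Map the PBO and hand the rows to GDI.
void* pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);

BITMAPINFO bmi = {};
bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
bmi.bmiHeader.biWidth = width;
bmi.bmiHeader.biHeight = height;  // positive: bottom-up, matching glReadPixels
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 32;
bmi.bmiHeader.biCompression = BI_RGB;
SetDIBitsToDevice(hdcTarget, 0, 0, width, height,
                  0, 0, 0, height, pixels, &bmi, DIB_RGB_COLORS);

glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
```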
I have a realtime OpenGL application rendering some objects with textures. I have built a function to take an internal screenshot of the rendered scene by rendering it to a DIB via PFD_DRAW_TO_BITMAP and copying that into an image. It works quite well, except for one kind of texture: JPGs with 24 bpp (so 8 bits each for R, G, B). I can load them and they render correctly in real time, but not when rendered to the DIB. Other textures work fine.
I see the same behaviour when testing my application in a virtual machine (WinXP, no hardware acceleration!). There, these specific textures are not even shown in realtime rendering. Without hardware acceleration, I guess WinXP uses its own software implementation of OpenGL and falls back to OpenGL 1.1.
So are there any kinds of textures that can't be drawn without 3D hardware acceleration? Or is there a common pitfall?
PFD_DRAW_TO_BITMAP will always drop you into the fallback OpenGL-1.1 software rasterizer. So you should not use it. Create an off-screen FBO, render to that, retrieve the pixel data using glReadPixels and write it to a file using an image file I/O library.
Good evening,
I have several 32-bit images in memory buffers that I wish to "blit" to a device context, quickly. Speed is an issue here because the buffer will be manipulated constantly and need to be blitted to the DC repeatedly.
The color depth of the buffer is 32 bits, so it is already in the DIB format SetDIBits() expects. However, this is rather cumbersome, since the target bitmap of SetDIBits() cannot already be selected into the DC at the time of the call. So I would need to constantly swap the DC's bitmap out, call SetDIBits(), swap the bitmap back into the DC, and then blit the DC to the window's DC. That seems like a lot of CPU workload and far too many Windows API round trips for optimal performance.
I would be interested in using DirectX if it didn't force me to use Device Contexts for 2D operations, or uploading textures to video memory before displaying them, because the contents of the image are constantly changing.
My question is simple (despite the long writeup). What would be the fastest way for me to blit an image from a pixel buffer in memory onto the screen? Direct access to the pixel buffer of a DC would be great, but I know that's not going to happen.
Thanks for reading my long writeup.
There is an API function, CreateDIBSection, which creates a DIB that applications can write to directly. This lets you update the bitmap continuously (either via memcpy or by writing to it directly).
See the MSDN article for further details.
Access to the bitmap must be synchronized; do this by calling the GdiFlush function.
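A minimal sketch of that approach, with illustrative names (`hdcWindow` and `sourceBuffer` are assumed to exist; a top-down 32-bit DIB is used so rows match a typical in-memory buffer):

```cpp
#include <windows.h>
#include <cstring>

// One-time setup: a DIB section the application can write to directly.
BITMAPINFO bmi = {};
bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
bmi.bmiHeader.biWidth = width;
bmi.bmiHeader.biHeight = -height;  // negative height: top-down row order
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 32;
bmi.bmiHeader.biCompression = BI_RGB;

void* bits = nullptr;
HBITMAP dib = CreateDIBSection(nullptr, &bmi, DIB_RGB_COLORS, &bits, nullptr, 0);
HDC memDC = CreateCompatibleDC(nullptr);
HGDIOBJ oldBmp = SelectObject(memDC, dib);

// Per frame: write the pixels, synchronize with GDI, then blit to the window.
memcpy(bits, sourceBuffer, static_cast<size_t>(width) * height * 4);
GdiFlush();  // make sure GDI and the direct writes don't overlap
BitBlt(hdcWindow, 0, 0, width, height, memDC, 0, 0, SRCCOPY);

// Cleanup (when done): restore and delete GDI objects.
SelectObject(memDC, oldBmp);
DeleteDC(memDC);
DeleteObject(dib);
```

Compared with the SetDIBits() shuffle described in the question, the DIB section stays selected into the memory DC the whole time; only the memcpy and BitBlt happen per frame.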