OpenGL Win32 texture not shown in DrawToBitmap (DIB) - winapi

I have a realtime OpenGL application rendering some objects with textures. I have built a function to take an internal screenshot of the rendered scene by rendering it to a DIB via PFD_DRAW_TO_BITMAP and copying it to an image. It works quite well except for one kind of texture: JPGs with 24 bpp (i.e. 8 bits for each of R, G, B). I can load them and they render correctly in realtime, but not when rendered to the DIB. For other textures it works well.
I see the same behaviour when testing my application in a virtual machine (WinXP, no hardware acceleration!). There these specific textures are not even shown in realtime rendering. Without hardware acceleration I guess WinXP uses its own software implementation of OpenGL and falls back to OpenGL 1.1.
So are there any kinds of textures that can't be drawn without 3D hardware acceleration? Or is there a common pitfall?

PFD_DRAW_TO_BITMAP will always drop you into the fallback OpenGL 1.1 software rasterizer, so you should not use it. Instead, create an off-screen FBO, render to that, retrieve the pixel data using glReadPixels and write it to a file using an image file I/O library.
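For illustration, a minimal sketch of that path, assuming OpenGL 3.0+ (or ARB_framebuffer_object) and placeholder names such as width, height and drawScene():

```cpp
// Render into an off-screen FBO, then read the pixels back for saving.
GLuint fbo = 0, colorRb = 0, depthRb = 0;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

// Color renderbuffer
glGenRenderbuffers(1, &colorRb);
glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, colorRb);

// Depth renderbuffer
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRb);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
    glViewport(0, 0, width, height);
    drawScene();                                  // your existing render code

    std::vector<unsigned char> pixels(width * height * 4);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    // pixels now holds bottom-up RGBA data; hand it to libpng, stb_image_write, etc.
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```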

Related

How to render Direct3D content into ID2D1HwndRenderTarget?

In my app I render my Direct3D content to the window, as recommended, using a swap chain in flip sequential mode (IDXGISwapChain1, DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL), in a window without a redirection bitmap (WS_EX_NOREDIRECTIONBITMAP).
However, for the purpose of smooth window resizing (without lag and flicker), I want to intercept live-resize enter/exit events and temporarily switch rendering to an ID2D1HwndRenderTarget on the window's redirection surface, which seems to offer the smoothest possible resize.
The question is - how to render the Direct3D content to ID2D1HwndRenderTarget?
The problem is that the Direct3D rendering and the ID2D1HwndRenderTarget obviously belong to different devices, and I can't find a way to make them interoperate.
What ideas come to mind:
A. Somehow assign ID2D1HwndRenderTarget to be the output frame buffer for 3D rendering. This would be the best case scenario, because it would minimize buffer copying.
1. Somehow obtain an ID3D11Texture2D or at least an IDXGISurface from the ID2D1HwndRenderTarget
2. Create a render target view with ID3D11Device::CreateRenderTargetView from the DXGI surface / Direct3D texture
3. Set the render target with ID3D11DeviceContext::OMSetRenderTargets
4. Render directly to the ID2D1HwndRenderTarget
The problem is in step 1. ID2D1HwndRenderTarget and its ancestor ID2D1RenderTarget expose a pretty sparse interface and don't seem to provide this capability. Is there a way to obtain a DXGI or D3D surface from the ID2D1HwndRenderTarget?
B. Create an off-screen Direct3D frame buffer texture, render the 3D content there and then copy it to the window render target (a rough sketch follows the steps below):
1. Create an off-screen texture ID3D11Texture2D (ID3D11Device::CreateTexture2D)
2. Create an ID2D1Device using D2D1CreateDevice (from the same IDXGIDevice as my ID3D11Device)
3. Create an ID2D1DeviceContext using ID2D1Device::CreateDeviceContext
4. Create an ID2D1Bitmap using ID2D1DeviceContext::CreateBitmapFromDxgiSurface
5. Draw the bitmap using ID2D1HwndRenderTarget::DrawBitmap
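A rough sketch of steps 1–4 above, with hypothetical names (d3dDevice as a ComPtr<ID3D11Device>, width, height) and the usual headers (d3d11.h, d2d1_1.h, wrl/client.h):

```cpp
using Microsoft::WRL::ComPtr;

// 1. Off-screen D3D11 texture that can also be treated as a DXGI surface
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = width;
desc.Height = height;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
ComPtr<ID3D11Texture2D> tex;
d3dDevice->CreateTexture2D(&desc, nullptr, &tex);

// 2./3. D2D device and device context created from the same DXGI device
ComPtr<IDXGIDevice> dxgiDevice;
d3dDevice.As(&dxgiDevice);
ComPtr<ID2D1Device> d2dDevice;
D2D1CreateDevice(dxgiDevice.Get(), nullptr, &d2dDevice);
ComPtr<ID2D1DeviceContext> d2dCtx;
d2dDevice->CreateDeviceContext(D2D1_DEVICE_CONTEXT_OPTIONS_NONE, &d2dCtx);

// 4. Wrap the texture's DXGI surface in a D2D bitmap
ComPtr<IDXGISurface> surface;
tex.As(&surface);
D2D1_BITMAP_PROPERTIES1 props = D2D1::BitmapProperties1(
    D2D1_BITMAP_OPTIONS_NONE,
    D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED));
ComPtr<ID2D1Bitmap1> bitmap;
d2dCtx->CreateBitmapFromDxgiSurface(surface.Get(), &props, &bitmap);

// 5. This is where it breaks: 'bitmap' belongs to d2dCtx's device, not to the
//    ID2D1HwndRenderTarget's device, so DrawBitmap fails with the
//    "wrong resource domain" error.
// hwndRenderTarget->DrawBitmap(bitmap.Get());
```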
Unfortunately, I get the error message "An operation failed because a device-dependent resource is associated with the wrong ID2D1Device (resource domain)". Obviously, the resource comes from a different ID2D1Device. But how can I draw a texture bitmap from one Direct2D device onto another?
C. Out of desperation, I tried to map the content of my frame buffer to CPU memory using IDXGISurface::Map. However, this is tricky because it requires the D3D11_USAGE_STAGING and D3D11_CPU_ACCESS_READ flags when creating the texture, which then seems to make it impossible to use this texture as an output frame buffer (or am I missing something?). And generally this technique will most likely be very slow, because it involves syncing between CPU and GPU and copying the whole texture at least twice.
Has anyone ever succeeded with the task of rendering 3D content to a ID2D1HwndRenderTarget? Please share your experience. Thanks!

How to access bitmap data of Direct2D Hardware RenderTarget?

I'm using Direct2D for some simple accelerated image compositing/manipulation, and now need to get the pixel data of the RenderTarget to pass it to an encoder.
So far, I've managed this by rendering to a BitmapRenderTarget, then drawing the resulting bitmap onto a WicBitmapRenderTarget, which allows me to Lock an area and get a pointer to the pixels.
However...
This only works if my initial RenderTarget uses D2D1_RENDER_TARGET_TYPE_SOFTWARE, because a hardware render target's bitmap can't be 'shared' with the WicBitmapRenderTarget, which only supports software rendering. And software rendering seems significantly slower than hardware.
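For context, a simplified sketch of the WIC-target-plus-Lock part of that software path (d2dFactory, width and height are placeholders; COM is assumed to be initialized):

```cpp
using Microsoft::WRL::ComPtr;

// Create a WIC bitmap and a Direct2D render target on top of it
ComPtr<IWICImagingFactory> wicFactory;
CoCreateInstance(CLSID_WICImagingFactory, nullptr, CLSCTX_INPROC_SERVER,
                 IID_PPV_ARGS(&wicFactory));
ComPtr<IWICBitmap> wicBitmap;
wicFactory->CreateBitmap(width, height, GUID_WICPixelFormat32bppPBGRA,
                         WICBitmapCacheOnDemand, &wicBitmap);

D2D1_RENDER_TARGET_PROPERTIES rtProps = D2D1::RenderTargetProperties(
    D2D1_RENDER_TARGET_TYPE_SOFTWARE,   // WIC targets require software rendering
    D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED));
ComPtr<ID2D1RenderTarget> wicTarget;
d2dFactory->CreateWicBitmapRenderTarget(wicBitmap.Get(), rtProps, &wicTarget);

// ... BeginDraw / draw the composited content / EndDraw on wicTarget ...

// Lock the WIC bitmap to reach the pixel data
WICRect rect = { 0, 0, (INT)width, (INT)height };
ComPtr<IWICBitmapLock> lock;
wicBitmap->Lock(&rect, WICBitmapLockRead, &lock);
UINT stride = 0, bufferSize = 0;
BYTE* data = nullptr;
lock->GetStride(&stride);
lock->GetDataPointer(&bufferSize, &data);
// 'data' now points to stride-aligned premultiplied BGRA rows for the encoder
```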
Is there any way round this? Would I be better off using Direct3D instead?

Mixing OpenGL and software rendered GUI

I need to write an application where the main content will be OpenGL rendered (something like a game engine), but there is no good OpenGL based GUI library similar to what Qt Widgets offers (those are software rendered).
As I browsed the Qt source code, all painting is done via QPainter, and there is even an OpenGL implementation of QPainter, but the support for multiple graphics backends was dropped in Qt 5, so you can't render Qt Widgets with OpenGL anymore (I don't know why).
The problem is that you can't paint to a window surface using both software and hardware rendering. You can either have the window associated with an OpenGL context or use software rendering. That means if I want an app with a complex GUI and OpenGL based content, I either need to paint everything using OpenGL (which is hard because, as I said, there is no good GUI library for it), or I can render the GUI to an image using software rendering (for example Qt) and then load that image as an OpenGL texture (probably a big performance loss).
Does anyone know a good application that uses a software rendered GUI loaded as a texture into OpenGL? I need to be sure it will work without a big performance loss, but I can't find a good example showing that it works well even for apps like game engines.
If you take the "render the UI to a texture, then draw a textured quad over my game" route and are worried about performance, try to avoid transferring the whole texture each frame.
If you think about it:
60 fps is not necessary for the UI: 30 fps is enough, so update it only every other frame.
Most of the time the UI doesn't change between frames, and when it does, only a small portion of it changes.
UI frameworks often keep track of which parts of the UI are "dirty" and need to be redrawn. If you can get your hands on that information, you can stream only the parts that need to be updated to the texture (glTexSubImage2D), as sketched below.
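A minimal sketch of such a partial upload; the dirty rectangle (dirty) and the UI framework's CPU-side image (guiImage), as well as guiTexture, are placeholder names:

```cpp
// Upload only the dirty rectangle into the existing GUI texture instead of
// re-uploading the whole image each frame.
glBindTexture(GL_TEXTURE_2D, guiTexture);
glPixelStorei(GL_UNPACK_ROW_LENGTH, guiImage.width);   // stride of the full image
glPixelStorei(GL_UNPACK_SKIP_PIXELS, dirty.x);         // start of the sub-rect
glPixelStorei(GL_UNPACK_SKIP_ROWS,   dirty.y);
glTexSubImage2D(GL_TEXTURE_2D, 0,
                dirty.x, dirty.y, dirty.width, dirty.height,
                GL_RGBA, GL_UNSIGNED_BYTE, guiImage.pixels);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);                 // restore defaults
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
```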

OpenGL context stops working when user moves the window into a different display

I'm using OpenGL to speed up GUI rendering. It works fine, but when the user drags the window onto a different display (potentially connected to a different GPU), its interior starts rendering black. When the user moves the window back to the original display, it starts working again.
I have this report from Windows XP, I'm unfortunately unable to check on Win 7/8 and Mac OS X right now.
Any ideas what to do about it?
In the current Windows and Linux driver models OpenGL contexts are tied to a certain graphics scanout framebuffer. It's perfectly possible for the scanout framebuffer to span over several connected displays and even GPUs, if the GPU architecture allows for that (for example NVidia SLi and AMD CrossFire).
What does not work (with current driver architectures), however, is GPUs of different vendors (e.g. NVIDIA and Intel) sharing a scanout buffer. The same holds for all-NVIDIA or all-AMD setups if the GPUs have not been linked using SLI or CrossFire.
So if the monitors are connected to different graphics cards then this can happen.
This is, however, only a software design limitation. It is perfectly possible to separate graphics rendering from display scanout; in fact, this forms the technical basis for hybrid graphics, where a fast GPU renders into scanout buffer memory managed by a different GPU (NVIDIA Optimus, for example).
The low-hanging fruit to fix this would be to recreate the context when the window passes over to a screen connected to a different GPU. But this has a problem: if the window is split among screens, it will stay black on one of them. Also, recreating a context, together with re-uploading all the data, can be a lengthy operation. And often, in situations like yours, the device on the other screen is incompatible with the feature set of the original context.
A workaround is to do all rendering on an off-screen framebuffer object (FBO), whose contents you then copy to CPU memory and from there to the target window using GDI operations. This method, however, has the huge drawback of a full memory round trip and increased latency.
The steps to set this up would be (a code sketch follows the list):
Identify the screen with the GPU you want to use.
Create a hidden window centered on that screen (i.e. do not pass WS_VISIBLE as a style to CreateWindow and do not call ShowWindow on it).
Create an OpenGL context on this window; its pixel format doesn't have to be double buffered, but double buffering usually gives better performance.
Create the target, user-visible window; do not bind an OpenGL context to this window.
Set up a framebuffer object (FBO) on the OpenGL context.
The renderbuffer targets of this FBO are to be created to match the client rect size of the target window; when the window gets resized, resize the FBO renderbuffers.
Set up two renderbuffer objects for double buffered operation.
Set up a pixel buffer object (PBO) that matches the dimensions of the renderbuffers.
When the renderbuffers' size changes, the PBO needs to be resized as well.
With OpenGL, render to the FBO, then transfer the pixel contents to the PBO (glBindBuffer, glReadPixels).
Map the PBO into process memory using glMapBuffer and use the SetDIBitsToDevice function to transfer the data from the mapped memory region to the target window's device context; then unmap the PBO.
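A minimal sketch of the last two steps, assuming fbo, pbo, width, height and hTargetWnd are already set up as described above (names are placeholders):

```cpp
// Read the FBO's pixels into the PBO
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, nullptr); // offset 0 into the PBO

// Map the PBO and blit it to the target window with GDI
void* pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (pixels) {
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = height;      // positive = bottom-up, matches glReadPixels
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    HDC hdc = GetDC(hTargetWnd);
    SetDIBitsToDevice(hdc, 0, 0, width, height, 0, 0, 0, height,
                      pixels, &bmi, DIB_RGB_COLORS);
    ReleaseDC(hTargetWnd, hdc);
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
```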

OpenGL Render to texture

I know this has been asked before (I did search) but I promise you this one is different.
I am making an app for Mac OS X Mountain Lion, but I need to add a little bit of a bloom effect. I need to render the entire scene to a texture the size of the screen, reduce the size of the texture, pass it through a pixel buffer, then use it as a texture for a quad.
I ask this again because a few of the usual techniques do not seem to function. I cannot use the #version, layout, or out in my fragment shader, as they do not compile. If I just use gl_FragColor as normal, I get random pieces of the screen behind my app rather than the scene I am trying to render. The documentation doesn't say anything about such things.
So, basically, how can I render to a texture properly with the Mac implementation of OpenGL? Do you need to use extensions to do this?
I use the code from here
Rendering to a texture is best done using FBOs, which let you render directly into the texture. If your hardware/driver doesn't support OpenGL 3+, you will have to use the FBO functionality through the ARB_framebuffer_object core extension or the EXT_framebuffer_object extension.
If FBOs are not supported at all, you will either have to resort to a simple glCopyTexSubImage2D (which involves a copy though, even if just GPU-GPU) or use the more flexible but rather intricate (and deprecated) PBuffers.
This tutorial on FBOs provides a simple example for rendering to a texture and using this texture for rendering afterwards. Since the question lacks specific information about the particular problems you encountered with your approach, those rather general googlable pointers to the usual render-to-texture resources need to suffice for now.
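To make those pointers a bit more concrete, here is a minimal render-to-texture sketch using core OpenGL 3.0 names (with ARB/EXT fallbacks the entry points only differ in suffix); texWidth, texHeight, drawScene() and drawFullscreenQuad() are placeholders:

```cpp
GLuint tex = 0, fbo = 0, depth = 0;

// Texture that will receive the scene
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, texWidth, texHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Depth renderbuffer for the off-screen pass
glGenRenderbuffers(1, &depth);
glBindRenderbuffer(GL_RENDERBUFFER, depth);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, texWidth, texHeight);

// FBO with the texture as color attachment
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth);

// Pass 1: render the scene into the texture
glViewport(0, 0, texWidth, texHeight);
drawScene();

// Pass 2: draw a quad textured with the result (e.g. the bloom pass) to the window
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, windowWidth, windowHeight);
glBindTexture(GL_TEXTURE_2D, tex);
drawFullscreenQuad();
```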
