GetDC, ReleaseDC, CS_OWNDC with OpenGL and Gdi+ - winapi

Reading about GetDC/ReleaseDC, it appears I should always do the latter, and that CS_OWNDC on a window is considered evil:
https://blogs.msdn.microsoft.com/oldnewthing/20060601-06/?p=31003
Looking through my code I see I'm holding onto a DC retrieved from GetDC, which I've passed into wglCreateContextAttribsARB. I'm presuming the context is created on that DC, so it would be bad manners to subsequently release it from under the driver. Is my assumption correct? At the moment I'm calling ReleaseDC when I destroy my other OpenGL resources.
Additionally, elsewhere in my libraries I'm calling GetDC to instantiate a GDI+ Graphics object, then releasing it again when I've finished drawing. I'm thinking it would be nice to persist the DC and Graphics object between draw calls for performance reasons, only recreating it on WM_DISPLAYCHANGE etc.
So, is there a definitive guide to best practice in this area? The way I diligently release the GDI+ DC but persist the OpenGL DC seems somewhat inconsistent.

OpenGL and GDI+ behave differently.
In OpenGL you need a context, which is attached to a DC. This means that the DC must exist while the context exists. Thus, you do need the CS_OWNDC style for the window where OpenGL draws.
Call ReleaseDC after you have deleted the context.
GDI+ is used in MS Windows like any common DC: retrieve a DC, draw to it, release that DC. In this scenario the use of CS_OWNDC can be evil, as pointed out in the link you posted.
The way MS GDI+ uses the graphics hardware (i.e. creating a context or whatever) is irrelevant for you.
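As an illustration, the usual per-draw GDI+ pattern is a short sketch like this (it assumes GdiplusStartup has already been called; hWnd and the drawing calls are just placeholders):

#include <windows.h>
#include <gdiplus.h>

void DrawOnce(HWND hWnd)
{
    HDC hdc = GetDC(hWnd);
    {
        Gdiplus::Graphics graphics(hdc);                 // wraps the borrowed DC
        Gdiplus::Pen pen(Gdiplus::Color(255, 0, 0, 0));  // opaque black
        graphics.DrawLine(&pen, 0, 0, 100, 100);
    } // the Graphics object goes out of scope before the DC is released
    ReleaseDC(hWnd, hdc);
}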
EDIT due to Chris Becke's comment:
The CS_OWNDC usage is not required
Quoting https://msdn.microsoft.com/es-es/library/windows/desktop/dd374387(v=vs.85).aspx:
The hdc parameter must refer to a drawing surface supported by OpenGL.
It need not be the same hdc that was passed to wglCreateContext when
hglrc was created, but it must be on the same device and have the same
pixel format.
The CS_OWNDC usage is recommended.
In the old days of Windows 9x, acquiring and releasing a device context was expensive and slow. Having a fixed DC was much more efficient. Using the CS_OWNDC flag at window class registration was the way to have a fixed DC.
The CS_OWNDC usage provides a private device context (see https://msdn.microsoft.com/en-us/library/windows/desktop/ms633574(v=vs.85).aspx#class_styles).
Quoting from MS docs (https://msdn.microsoft.com/en-us/library/windows/desktop/dd162872(v=vs.85).aspx):
Although a private device context is convenient to use, it is
memory-intensive in terms of system resources, requiring 800 or more
bytes to store. Private device contexts are recommended when
performance considerations outweigh storage costs.
Be aware that you should avoid ReleaseDC with a private device context:
An application can retrieve a handle to the private device context by
using the GetDC function any time after the window is created. The
application must retrieve the handle only once. Thereafter, it can
keep and use the handle any number of times. Because a private device
context is not part of the display device context cache, an
application need never release the device context by using the
ReleaseDC function.
In the common scenario where you draw to a single window by retrieving a DC, setting the current context, drawing, swapping buffers and releasing the DC, using CS_OWNDC instead of GetDC/ReleaseDC is natural.
It can also be the case that wglGetCurrentDC() is used (e.g. by an external library) regardless of your GetDC/ReleaseDC code. Normally no issues arise, but if no context is current (as is the case right after you call ReleaseDC) then wglGetCurrentDC() returns NULL.
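For contrast, a minimal sketch of the CS_OWNDC approach (class registration details and the render call are placeholders) could look like this:

#include <windows.h>

ATOM RegisterGLWindowClass(HINSTANCE hInstance, WNDPROC wndProc)
{
    WNDCLASS wc = {};
    wc.style         = CS_OWNDC;                  // every window of this class gets a private DC
    wc.lpfnWndProc   = wndProc;
    wc.hInstance     = hInstance;
    wc.lpszClassName = TEXT("MyOwnDCGLWindow");
    return RegisterClass(&wc);
}

void RenderFrame(HWND hWnd, HGLRC glContext)
{
    HDC hdc = GetDC(hWnd);           // private DC: the same handle is returned every time
    wglMakeCurrent(hdc, glContext);
    // ... render ...
    SwapBuffers(hdc);
    // No ReleaseDC: a private DC is not part of the display DC cache.
}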
Code without CS_OWNDC, used to draw to two windows with the same pixel format, would look like this:
HGLRC myGLContext = wglCreateContext(...);   // created on a DC with the required pixel format

// Draw to window A
HDC hdcA = GetDC(hWndA);
wglMakeCurrent(hdcA, myGLContext);
// ... render ...
SwapBuffers(hdcA);
ReleaseDC(hWndA, hdcA);

// Draw to window B (same device, same pixel format)
HDC hdcB = GetDC(hWndB);
wglMakeCurrent(hdcB, myGLContext);
// ... render ...
SwapBuffers(hdcB);
wglMakeCurrent(NULL, NULL);   // unbind before releasing the DC
ReleaseDC(hWndB, hdcB);

Related

How do you synchronize an MTLTexture and IOSurface across processes?

What APIs do I need to use, and what precautions do I need to take, when writing to an IOSurface in an XPC process that is also being used as the backing store for an MTLTexture in the main application?
In my XPC service I have the following:
IOSurface *surface = ...;
CIRenderDestination *renderDestination = [... initWithIOSurface:surface];
// Send the IOSurface to the client using an NSXPCConnection.
// In the service, periodically write to the IOSurface.
In my application I have the following:
IOSurface *surface = // ... fetch the IOSurface from the NSXPCConnection.
id<MTLTexture> texture = [device newTextureWithDescriptor:... iosurface:surface];
// The texture is used in a fragment shader (Read-only)
I have an MTKView that is running its normal update loop. I want my XPC service to be able to periodically write to the IOSurface using Core Image and then have the new contents rendered by Metal on the app side.
What synchronization is needed to ensure this is done properly? A double or triple buffering strategy is one, but that doesn't really work for me because I might not have enough memory to allocate 2x or 3x the number of surfaces. (The example above uses one surface for clarity, but in reality I might have dozens of surfaces I'm drawing to. Each surface represents a tile of an image. An image can be as large as JPG/TIFF/etc allows.)
WWDC 2010-442 talks about IOSurface and briefly mentions that it all "just works", but that's in the context of OpenGL and doesn't mention Core Image or Metal.
I originally assumed that Core Image and/or Metal would be calling IOSurfaceLock() and IOSurfaceUnlock() to protect read/write access, but that doesn't appear to be the case at all. (And the comments in the header file for IOSurfaceRef.h suggest that the locking is only for CPU access.)
Can I really just let Core Image's CIRenderDestination write at-will to the IOSurface while I read from the corresponding MTLTexture in my application's update loop? If so, then how is that possible if, as the WWDC video states, all textures bound to an IOSurface share the same video memory? Surely I'd get some tearing of the surface's content if reading and writing occurred during the same pass.
The thing you need to do is ensure that the Core Image drawing has completed in the XPC service before the IOSurface is used to draw in the application. If you were using either OpenGL or Metal on both sides, you would either call glFlush() or [MTLCommandBuffer waitUntilScheduled]. I would assume that something in Core Image is making one of those calls.
It will likely be obvious if that's not happening, because you will get tearing, or images that are half new rendering and half old rendering. I've seen that happen when using IOSurfaces across XPC services.
One thing you can do is put some symbolic breakpoints on -waitUntilScheduled and -waitUntilCompleted and see if CI is calling them in your XPC (assuming the documentation doesn't explicitly tell you). There are other synchronization primitives in Metal, but I'm not very familiar with them. They may be useful as well. (It's my understanding that CI is all Metal under the hood now.)
Also, the IOSurface object has methods -incrementUseCount, -decrementUseCount, and -localUseCount. It might be worth checking those to see if CI sets them appropriately. (See <IOSurface/IOSurfaceObjC.h> for details.)

DirectX to OpenGL hot-swap, doesn't display on Win32 window

During the development of my engine, I'm trying to implement a feature that enables hot-swapping between OpenGL and DirectX. Currently I'm testing on the Win32 platform, and came across the following problem:
I implemented both renderer (OpenGL 3.0, and Direct3D11), both work fine alone. The swapping mechanism is the following:
Destroy the current rendering context, and build up the new one. For example: Release all DirectX objects, and then create OpenGL context, via WGL. I'm trying to implement this, using only one window (HWND).
Swapping from OpenGL 3.0 to DirectX11 works. (After destroying OpenGL, DirectX renders fine)
Destroying OpenGL and then recreating OpenGL again, works. Same with DirectX.
When I'm trying to swap from DirectX to OpenGL, the window stops displaying the newly drawn content and shows only the last drawn DirectX frame.
To construct the OpenGL context I'm using WGL. The class for the window was created with the CS_OWNDC style. I'm using SwapBuffers to flip the window buffers. Before setting up the context, I use SetPixelFormat with the previously returned value from ChoosePixelFormat. The created context is version 3.0, ensured via wglCreateContextAttribsARB.
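Roughly, the setup looks like this (a sketch; the pixel format fields and attribute list are representative rather than my exact values, and wglCreateContextAttribsARB is assumed to have been loaded via wglGetProcAddress beforehand):

#include <windows.h>
#include <GL/gl.h>

#define WGL_CONTEXT_MAJOR_VERSION_ARB 0x2091   // values from wglext.h
#define WGL_CONTEXT_MINOR_VERSION_ARB 0x2092

typedef HGLRC (WINAPI *PFNWGLCREATECONTEXTATTRIBSARBPROC)(HDC, HGLRC, const int *);
extern PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB;   // loaded elsewhere

HGLRC CreateGL30Context(HWND hWnd)   // hWnd belongs to a CS_OWNDC window class
{
    PIXELFORMATDESCRIPTOR pfd = {};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;

    HDC hdc = GetDC(hWnd);
    SetPixelFormat(hdc, ChoosePixelFormat(hdc, &pfd), &pfd);

    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 0,
        0
    };
    HGLRC ctx = wglCreateContextAttribsARB(hdc, NULL, attribs);
    wglMakeCurrent(hdc, ctx);
    return ctx;
}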
Additional information:
All of the DirectX references are released, this was checked by calling ReportLiveDeviceObjects and checking the return value of ID3D11Device1::Release (0). ID3D11DeviceContext1::ClearState and Flush were called to ensure object destruction.
None of the OpenGL calls report an error via glGetError; this is checked after every call. The same is true for all OS and WGL calls.
The OpenGL rendering calls are executing as expected, for example:
OpenGL rendering with 150 fps
Swap to DirectX, render with 60 fps (VSYNC)
Swap back to OpenGL, rendering again with 150 fps (not more)
There are other scenarios where OpenGL renders with more than 150 fps, so the rendering calls are executing properly.
My guess is that the flipping of the buffers doesn't work somehow, however SwapBuffers returns TRUE anyway.
I tried using SaveDC and RestoreDC before and after using DirectX, but this did not solve the problem.
Using wglSwapLayerBuffers instead of SwapBuffers gives no change.
Can I somehow restore the HWND, or HDC to the original state, or do you guys have any idea why this might happen?
Guess I posted my question too soon, but anyway, this is how I solved it.
I dug around the documentation for DirectX, and for the function CreateSwapChainForHwnd I found the following:
Because you can associate only one flip presentation model swap chain at a time with an HWND, the Microsoft Direct3D 11 policy of deferring the destruction of objects can cause problems if you attempt to destroy a flip presentation model swap chain and replace it with another swap chain.
I was using DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL in my swap chain descriptor, which could mean that DirectX sets up a flip swap chain for the window, and when I then try to use the window with OpenGL, swapping buffers somehow fails.
The solution for this, is to not use FLIP mode for creating the swap chain:
DXGI_SWAP_CHAIN_DESC1 scd = {};
scd.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;
scd.Scaling = DXGI_SCALING_ASPECT_RATIO_STRETCH;
You have to set Scaling to something other than DXGI_SCALING_NONE, or the creation will fail.
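For completeness, a fuller sketch of that creation call might look like this (device, factory, and hWnd are placeholders, and DXGI_SCALING_STRETCH is just one valid non-NONE choice):

DXGI_SWAP_CHAIN_DESC1 scd = {};
scd.Width            = 0;                              // 0 = use the window's size
scd.Height           = 0;
scd.Format           = DXGI_FORMAT_B8G8R8A8_UNORM;
scd.SampleDesc.Count = 1;
scd.BufferUsage      = DXGI_USAGE_RENDER_TARGET_OUTPUT;
scd.BufferCount      = 1;
scd.SwapEffect       = DXGI_SWAP_EFFECT_DISCARD;       // not a flip presentation model
scd.Scaling          = DXGI_SCALING_STRETCH;           // anything but DXGI_SCALING_NONE

IDXGISwapChain1 *swapChain = nullptr;
HRESULT hr = factory->CreateSwapChainForHwnd(
    device,      // the ID3D11Device
    hWnd,
    &scd,
    nullptr,     // windowed
    nullptr,     // no output restriction
    &swapChain);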
The interesting part is that DirectX still does not properly destroy the flip model on the window, although I did everything the documentation suggested (the ClearState and Flush calls).
See the Remarks section of the CreateSwapChainForHwnd documentation.
Edit: I found this question after some time. If anybody still has an idea how to revert back to using GDI again instead of the DWM backbuffer, it is greatly appreciated.

What does this bit of MSDN documentation mean?

The first parameter to the EnumFontFamiliesEx function, according to the MSDN documentation, is described as:
hdc [in]
A handle to the device context from which to enumerate the fonts.
What exactly does it mean?
What does device context mean?
Why should a device context be related to fonts?
Question (3) is a legitimately difficult thing to find an explanation for, but the reason is simple enough:
Some devices provide their own font support. For example, a PostScript printer will allow you to use PostScript fonts. But those same fonts won't be usable when rendering on-screen, or to another printer without PostScript support. Another example would be that a plotter (which is a motorized pen) requires vector fonts with a fixed stroke thickness, so raster fonts can't be used with such a device.
If you're interested in device-specific font support, you'll want to know about the GetDeviceCaps function.
The Windows API uses the concept of handles extensively. A handle is an integer value that you can use as a token to access an API resource. You can think of it as a kind of "this" pointer, although it is definitely not a pointer.
A device context is an object within the Windows API that represents something you can draw on or display graphics on. It might be a printer, a bitmap, a screen, or some other context in which creating graphics makes sense. In Windows, fonts must be selected into device contexts before they can be used. In order to find out what fonts are currently available in any given device context, you can enumerate them. That's where EnumFontFamiliesEx comes in.
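For example, a minimal enumeration sketch might look like this (the counting callback is just an illustration):

#include <windows.h>

static int CALLBACK CountFamilyProc(const LOGFONT *lpelfe, const TEXTMETRIC *lpntme,
                                    DWORD fontType, LPARAM lParam)
{
    ++*reinterpret_cast<int *>(lParam);   // a real callback would inspect lpelfe->lfFaceName
    return 1;                             // non-zero continues the enumeration
}

int CountFontFamilies(HDC hdc)
{
    LOGFONT lf = {};
    lf.lfCharSet = DEFAULT_CHARSET;       // enumerate fonts in every character set
    int count = 0;
    EnumFontFamiliesEx(hdc, &lf, CountFamilyProc, reinterpret_cast<LPARAM>(&count), 0);
    return count;
}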
Microsoft has other articles on device contexts:
https://learn.microsoft.com/en-us/windows/win32/gdi/about-device-contexts
An application must inform GDI to load a particular device driver and,
once the driver is loaded, to prepare the device for drawing
operations (such as selecting a line color and width, a brush pattern
and color, a font typeface, a clipping region, and so on). These tasks
are accomplished by creating and maintaining a device context (DC). A
DC is a structure that defines a set of graphic objects and their
associated attributes, and the graphic modes that affect output. The
graphic objects include a pen for line drawing, a brush for painting
and filling, a bitmap for copying or scrolling parts of the screen, a
palette for defining the set of available colors, a region for
clipping and other operations, and a path for painting and drawing
operations. Unlike most of the structures, an application never has
direct access to the DC; instead, it operates on the structure
indirectly by calling various functions.
Obviously, a font is one of those attributes.

Any quick and easy way to capture a part of the screen? GetPixel was slow and GetDIBits seemed a bit complicated as a start

I was trying some code to capture part of the screen using GetPixel on Windows, with the device context being NULL (to capture the screen instead of a window), but it was really slow. It seems that GetDIBits() can be fast, but it seems a bit complicated... I wonder if there is a library that can put the whole region into an array, so that pixel[x][y] returns the 24-bit color code of the pixel?
Or does such library exist on the Mac? Or if Ruby or Python already has such a library that can do that?
I've never done this but I'd try to:
1. Create a memory device context (DC) using CreateCompatibleDC, passing it the device context of the desktop (GetDC(NULL)).
2. Create a bitmap (using CreateCompatibleBitmap) the same size as the region you're capturing.
3. Select the bitmap into the DC you created (using SelectObject).
4. Do a BitBlt from the desktop DC to the DC you created (using the SRCCOPY flag).
Working with Device Contexts can cause GDI leaks if you do things in the wrong order so make sure that you read the documentation on all the GDI functions you use (e.g. SelectObject, GetDC, etc.).
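A minimal sketch of those steps (the capture region coordinates are placeholders) might be:

#include <windows.h>

HBITMAP CaptureScreenRegion(int x, int y, int width, int height)
{
    HDC screenDC = GetDC(NULL);                          // DC for the whole screen
    HDC memDC    = CreateCompatibleDC(screenDC);         // memory DC to copy into
    HBITMAP bmp  = CreateCompatibleBitmap(screenDC, width, height);

    HGDIOBJ oldBmp = SelectObject(memDC, bmp);           // select the bitmap into the memory DC
    BitBlt(memDC, 0, 0, width, height, screenDC, x, y, SRCCOPY);
    SelectObject(memDC, oldBmp);                         // restore before cleanup

    DeleteDC(memDC);
    ReleaseDC(NULL, screenDC);
    return bmp;   // caller owns the HBITMAP; read pixels with GetDIBits, then DeleteObject it
}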

Win32 CreatePatternBrush

MSDN displays the following for CreatePatternBrush:
You can delete a pattern brush without affecting the associated bitmap by using the DeleteObject function. Therefore, you can then use this bitmap to create any number of pattern brushes.
My question is the opposite. If the HBRUSH is long lived, can I delete the HBITMAP right after I create the brush? IE: does the HBRUSH store its own copy of the HBITMAP?
In this case, I'd like the HBRUSH to have object scope while the HBITMAP would have method scope (the method that creates the HBRUSH).
The HBRUSH and HBITMAP are entirely independent. The handles can be deleted entirely independent from each other, and, once created, no changes to either object will effect the other.
The brush does have its own copy of the bitmap. This is easily seen by deleting the bitmap after creating the brush and then using the brush (it works fine).
Using GetObject to fill a LOGBRUSH structure will return the original bitmap handle in the lbHatch member, though, and not the copy's handle, unfortunately. And using GetObject on the returned bitmap handle fails if the bitmap has been deleted.
Anyone any idea how to get the original bitmap dimensions from the brush in this case? I wish to create a copy of the pattern brush even though the original bitmap is deleted. I can get a copy of the original bitmap simply by painting with the brush, but I don't know its size. I tried using SetBrushOrgEx(hdc, -1, -1), hoping the -1's would be reduced modulo the bitmap's dimensions when the brush is selected into a device context, so I could retrieve the values with GetBrushOrgEx. That doesn't work.
I think the bitmap must outlive the brush: the brush just references the existing bitmap rather than copying it.
You could always try it and see what happens.
I doubt that the CreatePatternBrush() API copies the bitmap you give it, since an HBITMAP is a GDI handle (the maximum number of which is limited) and is potentially quite large.
Win32 and GDI tend to be conservative about creating internal copies of your data, if only because when most of their APIs were created (CreatePatternBrush() dates to Windows 95, and many functions are older still), memory and GDI handles were in much more limited supply than they are now. (For example, Windows 95 was required to run well on a system with only 4MB of RAM.)
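If in doubt, the conservative ordering is easy to follow: keep the HBITMAP alive at least as long as the HBRUSH, and delete the brush first. A sketch (the 8x8 checkerboard pattern is just an example):

#include <windows.h>

void FillWithPattern(HDC hdc, const RECT &rc)
{
    static const WORD bits[8] = { 0xAA, 0x55, 0xAA, 0x55, 0xAA, 0x55, 0xAA, 0x55 };
    HBITMAP pattern = CreateBitmap(8, 8, 1, 1, bits);   // monochrome checkerboard
    HBRUSH  brush   = CreatePatternBrush(pattern);

    FillRect(hdc, &rc, brush);

    DeleteObject(brush);     // delete the brush first...
    DeleteObject(pattern);   // ...then the bitmap it was created from
}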