any quick and easy way to capture a part of the screen? GetPixel was slow and GetDIBits seemed a bit complicated as a start - winapi

I was trying some code to capture part of the screen using GetPixel on Windows, with the device context being NULL (to capture the screen instead of a window), but it was really slow. It seems that GetDIBits() can be fast, but it looks a bit complicated... I wonder if there is a library that can put the whole region into an array, so that pixel[x][y] returns the 24-bit color code of that pixel?
Or does such a library exist on the Mac? Or do Ruby or Python already have a library that can do that?

I've never done this but I'd try to:
Create a memory Device Context (DC) using CreateCompatibleDC, passing it the Device Context of the desktop (GetDC(NULL)).
Create a Bitmap (using CreateCompatibleBitmap) the same size as the region you're capturing.
Select the bitmap into the DC you created (using SelectObject).
Do a BitBlt from the desktop DC to the DC you created (using the SRCCOPY flag).
Working with Device Contexts can cause GDI leaks if you do things in the wrong order so make sure that you read the documentation on all the GDI functions you use (e.g. SelectObject, GetDC, etc.).
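A rough sketch of those steps, assuming a fixed capture rectangle and omitting error handling (the function name CaptureRegion is just for illustration); it combines BitBlt with GetDIBits to pull the pixels into an array:

// Rough sketch: copy a screen region with BitBlt, then read the pixels
// back into an array with GetDIBits. No error handling.
#include <windows.h>
#include <cstdint>
#include <vector>

std::vector<uint32_t> CaptureRegion(int x, int y, int width, int height)
{
    HDC screenDC = GetDC(NULL);                        // DC for the whole screen
    HDC memDC    = CreateCompatibleDC(screenDC);       // memory DC to copy into
    HBITMAP bmp  = CreateCompatibleBitmap(screenDC, width, height);
    HGDIOBJ old  = SelectObject(memDC, bmp);           // select the bitmap into the memory DC

    // Copy the region from the screen into the memory bitmap.
    BitBlt(memDC, 0, 0, width, height, screenDC, x, y, SRCCOPY);
    SelectObject(memDC, old);                          // deselect before calling GetDIBits

    // Ask for 32 bpp, top-down rows so pixels[y * width + x] works directly.
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = -height;             // negative height = top-down
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    std::vector<uint32_t> pixels(width * height);
    GetDIBits(memDC, bmp, 0, height, pixels.data(), &bmi, DIB_RGB_COLORS);

    // Clean up in the reverse order of creation to avoid GDI leaks.
    DeleteObject(bmp);
    DeleteDC(memDC);
    ReleaseDC(NULL, screenDC);

    return pixels;  // each entry is 0x00RRGGBB on a little-endian machine
}

The pixel[x][y] access from the question then maps to pixels[y * width + x].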

Related

Screenshot of the specific window (HWND, HW accelerated)

I need to capture snapshots/screenshots of a specific window (HWND) that is using HW acceleration and record them to a video stream.
While using BitBlt or PrintWindow I'm able to capture image data only if this window is not HW accelerated; otherwise I get a black texture.
Tried using User32.dll's undocumented DwmGetDxSharedSurface to get the DirectX surface handle. But it fails with an error:
ERROR_GRAPHICS_PRESENT_REDIRECTION_DISABLED - desktop windowing management subsystem is off
(Edit: Fails for certain applications, e.g. "calculator.exe")
Tried using Dwmapi.dll's undocumented functions DwmpDxUpdateWindowSharedSurface and DwmpDxGetWindowSharedSurface. I've managed to retrieve what looks like a valid DirectX surface handle (its d3dFormat, width and height information was valid). DX's OpenSharedResource was not complaining and managed to create a valid ID3D11Texture2D. The problem is... all bytes are zeros (I'm getting a black texture). I might be doing something wrong here, or the undocumented DWM functions no longer work on Windows 10...
Edit: I'm able to get image data for some applications like Windows Explorer, Paint, etc., but for some, e.g. Slack, I get an all-zeros/black image.
Edit: When capturing i.e. VLC, I get this:
Question:
Is there any other way to capture image data of the HW accelerated window?
Note: I don't want to capture the entire desktop.
You can use PrintWindow with nFlags=2 (PW_RENDERFULLCONTENT, available since Windows 8.1).
Or use the Magnification API (it can exclude specific windows from capture).
Or try to hack dwm.exe.
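As a rough sketch of the PrintWindow route, assuming Windows 8.1 or later where flag 2 is defined as PW_RENDERFULLCONTENT (error handling omitted; CaptureWindow is just an illustrative name):

// Capture an HWND into a bitmap with PrintWindow and flag 2
// (PW_RENDERFULLCONTENT), which asks DWM to include DirectX-composed content.
#include <windows.h>

#ifndef PW_RENDERFULLCONTENT
#define PW_RENDERFULLCONTENT 0x00000002   // only declared in newer SDK headers
#endif

HBITMAP CaptureWindow(HWND hwnd)
{
    RECT rc;
    GetClientRect(hwnd, &rc);
    int width  = rc.right - rc.left;
    int height = rc.bottom - rc.top;

    HDC windowDC = GetDC(hwnd);
    HDC memDC    = CreateCompatibleDC(windowDC);
    HBITMAP bmp  = CreateCompatibleBitmap(windowDC, width, height);
    HGDIOBJ old  = SelectObject(memDC, bmp);

    // Ask the window to render itself (including HW-accelerated content) into memDC.
    PrintWindow(hwnd, memDC, PW_RENDERFULLCONTENT);

    SelectObject(memDC, old);
    DeleteDC(memDC);
    ReleaseDC(hwnd, windowDC);
    return bmp;   // caller owns the HBITMAP and must DeleteObject it
}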

GetDC, ReleaseDC, CS_OWNDC with OpenGL and Gdi+

Reading about GetDC/ReleaseDC, it appears I should always call the latter, and that CS_OWNDC on a window is considered evil:
https://blogs.msdn.microsoft.com/oldnewthing/20060601-06/?p=31003
Looking through my code I see I'm holding onto a DC retrieved from GetDC, which I've passed into wglCreateContextAttribsARB. I'm presuming the context is created on that DC, so it would be bad manners to subsequently release it from under the driver. Is my assumption correct? At the moment I'm calling ReleaseDC when I destroy my other OpenGL resources.
Additionally, elsewhere in my libraries I'm calling GetDC to instantiate a GDI+ Graphics object, then releasing it again when I've finished drawing. I'm thinking it would be nice to persist the DC and Graphics object between draw calls for performance reasons, only recreating it on WM_DISPLAYCHANGE etc.
So, is there a definitive guide to best practice in this area? The way I diligently release the GDI+ DC but persist the OpenGL DC seems somewhat inconsistent.
OpenGL and GDI+ behave differently.
In OpenGL you need a context, which is attached to a DC. This means that the DC must exist while the context exists. Thus, you do need the CS_OWNDC style for the window where OpenGL draws.
Call ReleaseDC after you have deleted the context.
GDI+ is used in MS Windows like any common DC: retrieve a DC, draw to it, release that DC. In this scenario the use of CS_OWNDC can be evil, as pointed out in the link you posted.
The way MS GDI+ uses the graphics hardware (i.e. creating a context or whatever) is irrelevant for you.
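A small sketch of that retrieve/draw/release pattern, assuming GdiplusStartup has already been called (the window handle and the drawing are placeholders):

#include <windows.h>
#include <gdiplus.h>

void DrawSomething(HWND hWnd)
{
    HDC hdc = GetDC(hWnd);
    {
        Gdiplus::Graphics graphics(hdc);                  // wraps the DC for this draw only
        Gdiplus::Pen pen(Gdiplus::Color(255, 0, 0, 255)); // opaque blue
        graphics.DrawLine(&pen, 0, 0, 100, 100);
    }                                                     // Graphics destroyed before the DC is released
    ReleaseDC(hWnd, hdc);
}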
EDIT due to Chris Becke's comment:
The CS_OWNDC usage is not required
Quoting https://msdn.microsoft.com/es-es/library/windows/desktop/dd374387(v=vs.85).aspx:
The hdc parameter must refer to a drawing surface supported by OpenGL.
It need not be the same hdc that was passed to wglCreateContext when
hglrc was created, but it must be on the same device and have the same
pixel format.
The CS_OWNDC usage is recommended.
In the old days of Windows 9x, acquiring and releasing a device context was expensive and slow. Having a fixed DC was much more efficient. Using the CS_OWNDC flag at window registration was the way to have a fixed DC.
The CS_OWNDC usage provides a private device context (see https://msdn.microsoft.com/en-us/library/windows/desktop/ms633574(v=vs.85).aspx#class_styles).
Quoting from MS docs (https://msdn.microsoft.com/en-us/library/windows/desktop/dd162872(v=vs.85).aspx):
Although a private device context is convenient to use, it is
memory-intensive in terms of system resources, requiring 800 or more
bytes to store. Private device contexts are recommended when
performance considerations outweigh storage costs.
Be aware that you must avoid ReleaseDC with a private device context:
An application can retrieve a handle to the private device context by
using the GetDC function any time after the window is created. The
application must retrieve the handle only once. Thereafter, it can
keep and use the handle any number of times. Because a private device
context is not part of the display device context cache, an
application need never release the device context by using the
ReleaseDC function.
In the common scenario where you draw to a single window by retrieving a DC, setting the current context, drawing, swapping buffers and releasing the DC, the usage of CS_OWNDC instead of GetDC & ReleaseDC is natural.
It can also be the case that wglGetCurrentDC() is used (e.g. by an external library) regardless of your GetDC/ReleaseDC code. Normally no issues will happen. But if the current GL context is NULL (as it would be right after ReleaseDC) then wglGetCurrentDC() will fail.
Code without CS_OWNDC, used in two windows with the same pixel format, would look like this:
myGLContext = wglCreateContext(...);

// Draw to window A
HDC hdcA = GetDC(hWndA);
wglMakeCurrent(hdcA, myGLContext);
// ... render ...
SwapBuffers(hdcA);
ReleaseDC(hWndA, hdcA);

// Draw to window B
HDC hdcB = GetDC(hWndB);
wglMakeCurrent(hdcB, myGLContext);
// ... render ...
SwapBuffers(hdcB);
ReleaseDC(hWndB, hdcB);       // release hdcB here, not hdcA
wglMakeCurrent(NULL, NULL);   // unbind the context (the hdc is ignored when hglrc is NULL)
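For contrast, a rough sketch of the CS_OWNDC pattern described above for a single OpenGL window (the class name, size, and use of DefWindowProc are placeholders):

// With CS_OWNDC every window of this class gets a private DC: GetDC is called
// once and the handle is kept for the window's lifetime; ReleaseDC is never needed.
#include <windows.h>

HDC CreateOwnDCWindow(HINSTANCE hInstance)
{
    WNDCLASS wc = {};
    wc.style         = CS_OWNDC | CS_HREDRAW | CS_VREDRAW;
    wc.lpfnWndProc   = DefWindowProc;                 // placeholder; a real app uses its own proc
    wc.hInstance     = hInstance;
    wc.lpszClassName = TEXT("MyGLWindowClass");
    RegisterClass(&wc);

    HWND hWnd = CreateWindow(TEXT("MyGLWindowClass"), TEXT("GL"),
                             WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                             800, 600, NULL, NULL, hInstance, NULL);

    HDC hdc = GetDC(hWnd);                            // fetched once, kept forever
    // ... SetPixelFormat(hdc, ...), wglCreateContext(hdc), wglMakeCurrent(hdc, ctx) ...
    // Per frame: render, then SwapBuffers(hdc).
    return hdc;
}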

How does Deep-Color (10-bit+ per channel) work in Win32?

I don't have any Deep-Color capable hardware attached to my computer, so I haven't been able to experiment myself, and searching online for "win32 deep-color" or "win32 10-bit color" (or 30-bit, 48-bit or 64-bit) yields nothing relevant or recent. The top result is still this NVIDIA PDF from 2009: https://www.nvidia.com/docs/IO/40049/TB-04701-001_v02_new.pdf - it describes using OpenGL and an NVIDIA API for displaying images with more than 8-bits per channel.
I understand how using OpenGL allows 30-bit color images to be displayed: it effectively bypasses the operating system and the OpenGL surface is rendered in deep-color on the GPU and sent directly to the monitor in an appropriate format over DisplayPort or HDMI.
But what options are there outside of OpenGL?
In Win32, after you create a window with CreateWindow, you render it by handling the WM_PAINT message and calling BeginPaint, which gives you a handle to a GDI device context that cannot be more than 32bpp (8 bits per channel).
While GDI ostensibly abstracts away implementation details of the rendering device, including color depth, it is impossible to specify a 10-bit-per-channel RGB value, for example: the COLORREF type is hardcoded as a 32-bit DWORD with 8 bits per channel, a leaky abstraction.
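A small illustration of that limit (the values are arbitrary):

// COLORREF is a 32-bit DWORD packed as 0x00bbggrr, so each channel gets
// exactly 8 bits; there is nowhere to put a 10-bit-per-channel value.
#include <windows.h>

COLORREF c = RGB(255, 128, 0);   // stored as 0x000080FF
BYTE r = GetRValue(c);           // 255 - anything finer than 8 bits is lost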
Does this mean it is impossible to display 30bpp / Deep-color content in the Windows Desktop using a program handling WM_PAINT and that OpenGL is the only way?
And what would happen if you attempted to blit from an in-memory OpenGL rendering buffer back to the window surface? (i.e. what happens if you press PrintScreen while displaying a Deep-Color BluRay disc in a BD player, or displaying 30-bit content in Photoshop?)

What does this bit of MSDN documentation mean?

The first parameter to the EnumFontFamiliesEx function, according to the MSDN documentation, is described as:
hdc [in]
A handle to the device context from which to enumerate the fonts.
What exactly does it mean?
What does device context mean?
Why should a device context be related to fonts?
Question (3) is a legitimately difficult thing to find an explanation for, but the reason is simple enough:
Some devices provide their own font support. For example, a PostScript printer will allow you to use PostScript fonts. But those same fonts won't be usable when rendering on-screen, or to another printer without PostScript support. Another example would be that a plotter (which is a motorized pen) requires vector fonts with a fixed stroke thickness, so raster fonts can't be used with such a device.
If you're interested in device-specific font support, you'll want to know about the GetDeviceCaps function.
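For example, a minimal sketch of querying a DC's text capabilities with GetDeviceCaps (CheckFontCaps is just an illustrative name):

// Ask a device context which kinds of fonts the underlying device can use.
#include <windows.h>

void CheckFontCaps(HDC hdc)
{
    int caps = GetDeviceCaps(hdc, TEXTCAPS);
    BOOL rasterOk = (caps & TC_RA_ABLE) != 0;   // device can use raster fonts
    BOOL vectorOk = (caps & TC_VA_ABLE) != 0;   // device can use vector fonts
    // A pen plotter, for instance, would typically report vector but not raster support.
}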
The Windows API uses the concept of handles extensively. A handle is an integer value that you can use as a token to access an API resource. You can think of it as a kind of "this" pointer, although it is definitely not a pointer.
A device context is an object within the Windows API that represents something you can draw on or display graphics on. It might be a printer, a bitmap, a screen, or some other context in which creating graphics makes sense. In Windows, fonts must be selected into a device context before they can be used. In order to find out what fonts are currently available in any given device context, you can enumerate them. That's where EnumFontFamiliesEx comes in.
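A minimal sketch of that enumeration, assuming a Unicode build (the callback simply prints each family name):

// Enumerate the font families available on a particular device context.
// The callback receives one LOGFONT per family.
#include <windows.h>
#include <cstdio>

static int CALLBACK FontEnumProc(const LOGFONT *lf, const TEXTMETRIC *tm,
                                 DWORD fontType, LPARAM lParam)
{
    printf("%ls\n", lf->lfFaceName);   // assumes a Unicode build (LOGFONTW)
    return 1;                          // non-zero = keep enumerating
}

void ListFonts(HDC hdc)                // e.g. GetDC(NULL) for the screen,
{                                      // or a printer DC for printer fonts
    LOGFONT lf = {};
    lf.lfCharSet = DEFAULT_CHARSET;    // enumerate all character sets
    EnumFontFamiliesEx(hdc, &lf, FontEnumProc, 0, 0);
}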
Microsoft has more articles on device contexts:
https://learn.microsoft.com/en-us/windows/win32/gdi/about-device-contexts
An application must inform GDI to load a particular device driver and,
once the driver is loaded, to prepare the device for drawing
operations (such as selecting a line color and width, a brush pattern
and color, a font typeface, a clipping region, and so on). These tasks
are accomplished by creating and maintaining a device context (DC). A
DC is a structure that defines a set of graphic objects and their
associated attributes, and the graphic modes that affect output. The
graphic objects include a pen for line drawing, a brush for painting
and filling, a bitmap for copying or scrolling parts of the screen, a
palette for defining the set of available colors, a region for
clipping and other operations, and a path for painting and drawing
operations. Unlike most of the structures, an application never has
direct access to the DC; instead, it operates on the structure
indirectly by calling various functions.
Obviously, fonts are part of drawing.

Can't create Direct2D DXGI Surface

I'm calling this method:
http://msdn.microsoft.com/en-us/library/dd371264(VS.85).aspx
The call fails with E_NOINTERFACE. The documentation is especially unhelpful as to why this may happen. I've enabled all of the DirectX 11 debug stuff and that's the best I got. I know that I have a valid IDXGISurface1* (also tried IDXGISurface) and the other parameters are set correctly. Any ideas as to why this call may fail?
Edit:
I also am having problems creating D3D11 devices. If I pass nullptr as the IDXGIAdapter* argument in D3D11CreateDeviceAndSwapChain, it works fine, but if I enumerate the adapters myself and pass in a pointer (the only one returned), it fails with invalid argument. The MSDN documentation explicitly says that if nullptr is passed, then the system uses the first return from EnumAdapters1. I am running a DX11 system.
Direct2D only works when you create a Direct3D 10.1 device, but it can share surfaces with Direct3D 11. All you need to do is create both devices and render all of your Direct2D content to a texture that you share between them. I use this technique in my own applications to use Direct2D with Direct3D 11. It incurs a slight cost, but it is small and constant per frame.
A basic outline of the process you will need to use is:
Create your Direct3D 11 device like you do normally.
Create a texture with the D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX option in order to allow access to the IDXGIKeyedMutex interface.
Use GetSharedHandle to get a handle to the texture that can be shared among devices.
Create a Direct3D 10.1 device, ensuring that it is created on the same adapter.
Use the OpenSharedResource function on the Direct3D 10.1 device to get a version of the texture for Direct3D 10.1.
Get access to the IDXGIKeyedMutex interface for the texture on the D3D10 side.
Use the Direct3D 10.1 version of the texture to create the RenderTarget using Direct2D.
When you want to render with D2D, use the keyed mutex to lock the texture for the D3D10 device. Then, acquire it in D3D11 and render the texture like you were probably already trying to do.
It's not trivial, but it works well, and it is the way that they intended you to interoperate between them. Windows 8 looks like it will introduce full D3D11 compatibility, so it will be just as simple as you expect.
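A hedged sketch of that outline follows, with all error handling omitted; the device, adapter and size parameters are assumed to be passed in by the caller:

#include <d3d11.h>
#include <d3d10_1.h>
#include <d2d1.h>
#include <dxgi.h>

void RenderD2DOntoSharedTexture(ID3D11Device *device11, IDXGIAdapter *adapter,
                                UINT width, UINT height)
{
    // Create the texture on the D3D11 device with the keyed-mutex sharing flag.
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = width;
    desc.Height           = height;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_B8G8R8A8_UNORM;   // a format Direct2D can render to
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_DEFAULT;
    desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    desc.MiscFlags        = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX;
    ID3D11Texture2D *sharedTex11 = nullptr;
    device11->CreateTexture2D(&desc, nullptr, &sharedTex11);

    // Get a handle to the texture that other devices can open.
    IDXGIResource *dxgiRes = nullptr;
    sharedTex11->QueryInterface(__uuidof(IDXGIResource), (void **)&dxgiRes);
    HANDLE sharedHandle = nullptr;
    dxgiRes->GetSharedHandle(&sharedHandle);

    // Create a Direct3D 10.1 device on the same adapter, with BGRA support for D2D.
    ID3D10Device1 *device10 = nullptr;
    D3D10CreateDevice1(adapter, D3D10_DRIVER_TYPE_HARDWARE, nullptr,
                       D3D10_CREATE_DEVICE_BGRA_SUPPORT,
                       D3D10_FEATURE_LEVEL_10_1, D3D10_1_SDK_VERSION, &device10);

    // Open the shared texture on the 10.1 side and grab both keyed mutexes.
    ID3D10Texture2D *sharedTex10 = nullptr;
    device10->OpenSharedResource(sharedHandle, __uuidof(ID3D10Texture2D),
                                 (void **)&sharedTex10);
    IDXGISurface *surface10 = nullptr;
    IDXGIKeyedMutex *mutex10 = nullptr, *mutex11 = nullptr;
    sharedTex10->QueryInterface(__uuidof(IDXGISurface), (void **)&surface10);
    sharedTex10->QueryInterface(__uuidof(IDXGIKeyedMutex), (void **)&mutex10);
    sharedTex11->QueryInterface(__uuidof(IDXGIKeyedMutex), (void **)&mutex11);

    // Create the D2D render target on the D3D10.1 view of the surface.
    ID2D1Factory *d2dFactory = nullptr;
    D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &d2dFactory);
    D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
        D2D1_RENDER_TARGET_TYPE_HARDWARE,
        D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED));
    ID2D1RenderTarget *d2dTarget = nullptr;
    d2dFactory->CreateDxgiSurfaceRenderTarget(surface10, &props, &d2dTarget);

    // Per frame: draw with D2D while holding the D3D10-side mutex...
    mutex10->AcquireSync(0, INFINITE);
    d2dTarget->BeginDraw();
    // ... Direct2D drawing ...
    d2dTarget->EndDraw();
    mutex10->ReleaseSync(1);

    // ...then hand the texture over to D3D11 and use it like any other texture.
    mutex11->AcquireSync(1, INFINITE);
    // ... bind sharedTex11 as a shader resource in the D3D11 scene ...
    mutex11->ReleaseSync(0);
}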
Direct2D works with D3D10 devices, not D3D11 devices. The D3D11 device is probably what is being reported as lacking the interface by that E_NOINTERFACE.
