Using DEFAULT_GUI_FONT in a high-DPI Windows application

I have a Windows application that I want to look good on high-DPI monitors. The application uses DEFAULT_GUI_FONT in lots of places, and the font created this way doesn't scale correctly.
Is there any simple way to fix this problem without too much pain?

You need to get NONCLIENTMETRICS via SystemParametersInfo(SPI_GETNONCLIENTMETRICS, ...) and then use its LOGFONT data to create your own font. Alternatively, you can query SystemParametersInfo(SPI_GETICONTITLELOGFONT) and use that.
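For instance, a minimal sketch of the SPI_GETICONTITLELOGFONT variant (error handling omitted):
LOGFONTW log_font = {};
if (SystemParametersInfoW(SPI_GETICONTITLELOGFONT, sizeof(log_font), &log_font, 0))
{
    HFONT font = CreateFontIndirectW(&log_font);
    // Use the font; call DeleteObject(font) when it is no longer needed.
}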

The recommended fonts for different purposes can be obtained from the NONCLIENTMETRICS structure.
For automatically DPI-scaled fonts (Windows 10 1607+, must be per-monitor DPI-aware):
// Your window's handle
HWND window;
// Get the DPI your window should scale to
UINT dpi = GetDpiForWindow(window);
// Obtain the recommended fonts, which are already correctly scaled for the current DPI
NONCLIENTMETRICSW non_client_metrics;
non_client_metrics.cbSize = sizeof(non_client_metrics);
if (!SystemParametersInfoForDpi(SPI_GETNONCLIENTMETRICS, sizeof(non_client_metrics), &non_client_metrics, 0, dpi))
{
    // Error handling
}
// Create the appropriate font(s)
HFONT message_font = CreateFontIndirectW(&non_client_metrics.lfMessageFont);
if (!message_font)
{
    // Error handling
}
For older Windows versions you can use the system-wide DPI and scale the font manually (Windows 7+, must be system DPI-aware):
// Your window's handle
HWND window;
// Obtain the recommended fonts (these are scaled for the system-wide DPI)
NONCLIENTMETRICSW non_client_metrics;
non_client_metrics.cbSize = sizeof(non_client_metrics);
if (!SystemParametersInfoW(SPI_GETNONCLIENTMETRICS, sizeof(non_client_metrics), &non_client_metrics, 0))
{
    // Error handling
}
// Get the system-wide DPI
HDC hdc = GetDC(nullptr);
if (!hdc)
{
    // Error handling
}
UINT dpi = GetDeviceCaps(hdc, LOGPIXELSY);
ReleaseDC(nullptr, hdc);
// Scale the font(s): point size * DPI / 72 gives the height in pixels
constexpr UINT font_size = 12;
non_client_metrics.lfMessageFont.lfHeight = -((font_size * dpi) / 72);
// Create the appropriate font(s)
HFONT message_font = CreateFontIndirectW(&non_client_metrics.lfMessageFont);
if (!message_font)
{
    // Error handling
}
NONCLIENTMETRICS also contains several other fonts. Make sure to choose the right one for your purpose.
For best compatibility, you should set the DPI-awareness level in your application manifest as described here.

WinForms in the .NET Framework internally converts the DEFAULT_GUI_FONT (which is in fact used to get the default font for WinForms forms and controls in most situations) by scaling its height from pixels (the unit GDI fonts use natively) to points (which is preferred by GDI+). Drawing text using points implies that the physical size of the rendered text depends on the monitor's DPI setting.
System.Drawing.Font.SizeInPoints:
float emHeightInPoints;
IntPtr screenDC = UnsafeNativeMethods.GetDC(NativeMethods.NullHandleRef);
try {
    using (Graphics graphics = Graphics.FromHdcInternal(screenDC)) {
        float pixelsPerPoint = (float)(graphics.DpiY / 72.0);
        float lineSpacingInPixels = this.GetHeight(graphics);
        float emHeightInPixels = lineSpacingInPixels * FontFamily.GetEmHeight(Style) / FontFamily.GetLineSpacing(Style);
        emHeightInPoints = emHeightInPixels / pixelsPerPoint;
    }
}
finally {
    UnsafeNativeMethods.ReleaseDC(NativeMethods.NullHandleRef, new HandleRef(null, screenDC));
}
return emHeightInPoints;
Obviously you cannot use this directly as it's C#. But besides that, this article suggests that you should scale pixel dimensions assuming a 96 DPI design, and use GetDpiForWindow to determine the actual DPI. Note that the "72" in the formula above has nothing to do with the monitor DPI setting; it comes from the fact that .NET likes to use fonts specified in points rather than pixels (otherwise just scale the LOGFONT's height by DPIy/96).
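A minimal sketch of that pixel-based scaling (assuming hwnd is your window's HWND and log_font is a LOGFONTW designed for 96 DPI; GetDpiForWindow needs Windows 10 1607+):
UINT dpi = GetDpiForWindow(hwnd);
log_font.lfHeight = MulDiv(log_font.lfHeight, (int)dpi, 96);
HFONT scaled_font = CreateFontIndirectW(&log_font);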
This site suggests something similar, but with GetDpiForMonitor.
I cannot say for sure whether the general approach of manually scaling the font size according to some DPI-dependent factor is robust and future-proof (it does seem to be the way to go for scaling non-font GUI elements, though). However, since .NET basically also just calculates some magic factor based on some sort of DPI value, it's probably a pretty good guess.
Also, you'll want to cache that HFONT. HFONT/LOGFONT conversions are not negligible.
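For example, a hypothetical per-DPI cache (the map and helper name are my own; it assumes a single-threaded UI and fonts that live for the process lifetime):
#include <unordered_map>
std::unordered_map<UINT, HFONT> g_message_fonts;

HFONT GetMessageFontForDpi(UINT dpi)
{
    auto it = g_message_fonts.find(dpi);
    if (it != g_message_fonts.end())
        return it->second; // reuse the font created earlier for this DPI
    NONCLIENTMETRICSW ncm = {};
    ncm.cbSize = sizeof(ncm);
    SystemParametersInfoForDpi(SPI_GETNONCLIENTMETRICS, sizeof(ncm), &ncm, 0, dpi);
    HFONT font = CreateFontIndirectW(&ncm.lfMessageFont);
    g_message_fonts[dpi] = font;
    return font;
}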
See also (references):
WinForms gets its default font using GetStockObject(DEFAULT_GUI_FONT) (there are a few exceptions, though, mostly obsolete):
IntPtr handle = UnsafeNativeMethods.GetStockObject(NativeMethods.DEFAULT_GUI_FONT);
try {
    Font fontInWorldUnits = null;
    // SECREVIEW : We know that we got the handle from the stock object,
    //           : so this is always safe.
    IntSecurity.ObjectFromWin32Handle.Assert();
    try {
        fontInWorldUnits = Font.FromHfont(handle);
    }
    finally {
        CodeAccessPermission.RevertAssert();
    }
    try {
        defaultFont = FontInPoints(fontInWorldUnits);
    }
    finally {
        fontInWorldUnits.Dispose();
    }
}
catch (ArgumentException) {
}
https://referencesource.microsoft.com/#System.Drawing/commonui/System/Drawing/SystemFonts.cs,355
The HFONT is converted to GDI+, and then the GDI+ font retrieved this way is transformed using FontInPoints:
private static Font FontInPoints(Font font) {
    return new Font(font.FontFamily, font.SizeInPoints, font.Style, GraphicsUnit.Point, font.GdiCharSet, font.GdiVerticalFont);
}
https://referencesource.microsoft.com/#System.Drawing/commonui/System/Drawing/SystemFonts.cs,452
The content of the SizeInPoints getter is already listed above.
https://referencesource.microsoft.com/#System.Drawing/commonui/System/Drawing/Advanced/Font.cs,992

Related

Direct2D fails when drawing a single-channel bitmap

I'm an experienced programmer specialized in Computer Graphics, mainly using Direct3D 9.0c, OpenGL and general algorithms. Currently, I am evaluating Direct2D as rendering technology for a professional application dealing with medical image data. As for rendering, it is a x64 desktop application in windowed mode (not fullscreen).
Already with my very initial steps I struggle with a task I thought would be a no-brainer: Rendering a single-channel bitmap on screen.
Running on a Windows 8.1 machine, I create an ID2D1DeviceContext with a Direct3D swap chain buffer surface as render target. The swap chain is created from a HWND and buffer format DXGI_FORMAT_B8G8R8A8_UNORM. Note: See also the code snippets at the end.
Afterwards, I create a bitmap with pixel format DXGI_FORMAT_R8_UNORM and alpha mode D2d1_ALPHA_MODE_IGNORE. When calling DrawBitmap(...) on the device context, a debug break point is triggered with the debug message "D2d DEBUG ERROR - This operation is not compatible with the pixel format of the bitmap".
I know that this output is quite clear. Also, when changing the pixel format to DXGI_FORMAT_R8G8B8A8_UNORM with DXGI_ALPHA_MODE_IGNORE, everything works well and I see the bitmap rendered. However, I simply cannot believe that! Graphics cards have supported single-channel textures practically forever; every 3D graphics application can use them without thinking twice. That goes without saying.
I tried to find anything here and on Google, without success. The only hint I could find was the MSDN Direct2D page listing the supported pixel formats. The documentation suggests, by not mentioning it, that DXGI_FORMAT_R8_UNORM is indeed not supported as a bitmap format. I also found posts talking about alpha masks (using DXGI_FORMAT_A8_UNORM), but that's not what I'm after.
What am I missing that I can't convince Direct2D to create and draw a grayscale bitmap? Or is it really true that Direct2D doesn't support drawing of R8 or R16 bitmaps??
Any help is really appreciated as I don't know how to solve this. If I can't get this trivial basics to work, I think I'd have to stop digging deeper into Direct2D :-(.
And here are the code snippets of relevance. Please note that they might not compile since I ported them on the fly from my C++/CLI code to plain C++. Also, I threw away all error checking and other noise:
Device, Device Context and Swap Chain Creation (D3D and Direct2D):
// Direct2D factory creation
D2D1_FACTORY_OPTIONS options = {};
options.debugLevel = D2D1_DEBUG_LEVEL_INFORMATION;
ID2D1Factory1* d2dFactory;
D2D1CreateFactory(D2D1_FACTORY_TYPE_MULTI_THREADED, options, &d2dFactory);
// Direct3D device creation
const auto type = D3D_DRIVER_TYPE_HARDWARE;
const auto flags = D3D11_CREATE_DEVICE_BGRA_SUPPORT;
ID3D11Device* d3dDevice;
D3D11CreateDevice(nullptr, type, nullptr, flags, nullptr, 0, D3D11_SDK_VERSION, &d3dDevice, nullptr, nullptr);
// Direct2D device creation
IDXGIDevice* dxgiDevice;
d3dDevice->QueryInterface(__uuidof(IDXGIDevice), reinterpret_cast<void**>(&dxgiDevice));
ID2D1Device* d2dDevice;
d2dFactory->CreateDevice(dxgiDevice, &d2dDevice);
// Swap chain creation
DXGI_SWAP_CHAIN_DESC1 desc = {};
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
desc.BufferCount = 2;
IDXGIAdapter* dxgiAdapter;
dxgiDevice->GetAdapter(&dxgiAdapter);
IDXGIFactory2* dxgiFactory;
dxgiAdapter->GetParent(__uuidof(IDXGIFactory2), reinterpret_cast<void **>(&dxgiFactory));
IDXGISwapChain1* swapChain;
dxgiFactory->CreateSwapChainForHwnd(d3dDevice, hwnd, &desc, nullptr, nullptr, &swapChain);
// Direct2D device context creation
const auto contextOptions = D2D1_DEVICE_CONTEXT_OPTIONS_NONE;
ID2D1DeviceContext* deviceContext;
d2dDevice->CreateDeviceContext(contextOptions, &deviceContext);
// create render target bitmap from swap chain
IDXGISurface* swapChainSurface;
swapChain->GetBuffer(0, __uuidof(swapChainSurface), reinterpret_cast<void **>(&swapChainSurface));
D2D1_BITMAP_PROPERTIES1 bitmapProperties;
bitmapProperties.dpiX = 0.0f;
bitmapProperties.dpiY = 0.0f;
bitmapProperties.bitmapOptions = D2D1_BITMAP_OPTIONS_TARGET | D2D1_BITMAP_OPTIONS_CANNOT_DRAW;
bitmapProperties.pixelFormat.format = DXGI_FORMAT_B8G8R8A8_UNORM;
bitmapProperties.pixelFormat.alphaMode = D2D1_ALPHA_MODE_IGNORE;
bitmapProperties.colorContext = nullptr;
ID2D1Bitmap1* swapChainBitmap = nullptr;
deviceContext->CreateBitmapFromDxgiSurface(swapChainSurface, &bitmapProperties, &swapChainBitmap);
// set swap chain bitmap as render target of D2D device context
deviceContext->SetTarget(swapChainBitmap);
D2D single-channel Bitmap Creation:
const D2D1_SIZE_U size = { 512, 512 };
const UINT32 pitch = 512;
D2D1_BITMAP_PROPERTIES1 d2dProperties;
ZeroMemory(&d2dProperties, sizeof(D2D1_BITMAP_PROPERTIES1));
d2dProperties.pixelFormat.alphaMode = D2D1_ALPHA_MODE_IGNORE;
d2dProperties.pixelFormat.format = DXGI_FORMAT_R8_UNORM;
char* sourceData = new char[512*512];
ID2D1Bitmap1* d2dBitmap;
deviceContext->CreateBitmap(size, sourceData, pitch, &d2dProperties, &d2dBitmap);
Bitmap drawing (FAILING):
deviceContext->BeginDraw();
D2D1_COLOR_F d2dColor = {};
deviceContext->Clear(d2dColor);
// THIS LINE FAILS WITH THE DEBUG BREAKPOINT IF SINGLE CHANNELED
deviceContext->DrawBitmap(d2dBitmap, nullptr, 1.0f, D2D1_INTERPOLATION_MODE_LINEAR, nullptr);
deviceContext->EndDraw();
swapChain->Present(1, 0);
From my little experience, Direct2D does indeed seem very limited.
Have you tried Direct2D effects (ID2D1Effect)? You can write your own (which seems comparatively complicated) or use one of the built-in effects (which is rather simple).
There is one called the Color matrix effect (CLSID_D2D1ColorMatrix). It might work to have your DXGI_FORMAT_R8_UNORM bitmap (or DXGI_FORMAT_A8_UNORM; any single-channel format would do) as input (inputs to effects are ID2D1Image, and ID2D1Bitmap inherits from ID2D1Image). Then set D2D1_COLORMATRIX_PROP_COLOR_MATRIX to copy the input channel to all output channels. I have not tried it, though.
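An untested sketch of that idea, using the deviceContext and R8 d2dBitmap from the question (needs d2d1_1.h and d2d1effects.h):
ID2D1Effect* colorMatrixEffect = nullptr;
deviceContext->CreateEffect(CLSID_D2D1ColorMatrix, &colorMatrixEffect);
colorMatrixEffect->SetInput(0, d2dBitmap);
// Copy the single red input channel into R, G and B; emit constant alpha 1.
D2D1_MATRIX_5X4_F matrix = D2D1::Matrix5x4F(
    1, 1, 1, 0,  // red input feeds all three color outputs
    0, 0, 0, 0,  // (no green input)
    0, 0, 0, 0,  // (no blue input)
    0, 0, 0, 0,  // (no alpha input)
    0, 0, 0, 1); // constant term: opaque alpha
colorMatrixEffect->SetValue(D2D1_COLORMATRIX_PROP_COLOR_MATRIX, matrix);
deviceContext->BeginDraw();
deviceContext->DrawImage(colorMatrixEffect);
deviceContext->EndDraw();
colorMatrixEffect->Release();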

How to compute the actual height of the text in a static control

My simple Win32 DialogBox contains two static text controls (IDC_STATIC_TITLE and IDC_STATIC_SECONDARY), here's what it looks like in the resource editor:
At run time, the first text string is updated dynamically. Also, the font of that text string is replaced such that it's bigger than the IDC_STATIC_SECONDARY string below it. The resulting text string might span a single line, two lines, or more.
I want the other static control, holding the secondary text, to be placed directly underneath the title string at run time. However, my attempt to re-position this control in the WM_INITDIALOG callback isn't working very well: the second string overlaps the first. I thought I could use DrawText with DT_CALCRECT to compute the height of the primary text string and then move the secondary string based on the result. My code is coming up a bit short, as seen here:
DrawText returns a RECT with coordinates {top=42 bottom=74 left=19 right=461}. Subtracting top from bottom gives 32. That seems a little short. I suspect I'm not invoking the API correctly and/or there's an issue with the mapping between logical and pixel units.
Here's the relevant ATL code. The CMainWindow class just inherits from ATL's CDialogImpl class.
CMainWindow::CMainWindow():
    _titleFont(NULL),
    _secondaryFont(NULL)
{
    LOGFONT logfont = {};
    logfont.lfHeight = 30;
    _titleFont = CreateFontIndirect(&logfont);
}

LRESULT CMainWindow::OnInitDialog(UINT uMsg, WPARAM wParam, LPARAM lParam, BOOL& bHandled)
{
    CString strTitle;
    RECT rectDrawText = {}, rectTitle = {}, rectSecondary = {};
    CWindow wndTitle = GetDlgItem(IDC_STATIC_TITLE);
    CWindow wndSecondary = GetDlgItem(IDC_STATIC_SECONDARY);
    this->GetDlgItemText(IDC_STATIC_TITLE, strTitle);
    wndTitle.SetFont(_titleFont); // font created with CreateFontIndirect
    wndTitle.GetWindowRect(&rectTitle);
    wndSecondary.GetWindowRect(&rectSecondary);
    ScreenToClient(&rectTitle);
    ScreenToClient(&rectSecondary);
    rectDrawText = rectTitle;
    // Compute the actual size of the text
    DrawText(wndTitle.GetDC(), strTitle, strTitle.GetLength(), &rectDrawText, DT_CALCRECT | DT_WORDBREAK);
    UINT height = rectSecondary.bottom - rectSecondary.top; // original height of the secondary text control
    rectSecondary.top = rectDrawText.bottom;                // position it directly below the bottom of the title control
    rectSecondary.bottom = rectSecondary.top + height;      // add the height back
    wndSecondary.MoveWindow(&rectSecondary);
    return 0;
}
What am I doing wrong?
Despite what its name may make it sound like, wndTitle.GetDC() doesn't return some pointer/reference that's part of the CWindow and that's the same every call. Instead, it retrieves a brand new device context for the window each time. (It's basically a thin wrapper for the GetDC() Windows API call, right down to returning an HDC instead of the MFC equivalent.)
This device context, despite being associated with the window, is loaded with default parameters, including the default font (which, IIRC, is that old "System" font from the 16-bit days).
So what you need to do is:
Call wndTitle.GetDC() to get the HDC.
Call SelectObject() to select the correct window font in (you can use WM_GETFONT to get this; not sure if MFC has a wrapper function for it), saving the return value, the previous font, for step 4
Call DrawText()
Call SelectObject() to select the previous font back in
Call wndTitle.ReleaseDC() to state that you are finished using the HDC
More details are on the MSDN page for CWindow::GetDC().
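Putting the steps together, a minimal sketch using the variable names from the question (ReleaseDC takes the HDC returned by GetDC):
HDC hdc = wndTitle.GetDC();
HFONT font = (HFONT)wndTitle.SendMessage(WM_GETFONT);
HFONT previousFont = (HFONT)SelectObject(hdc, font); // select the window's font in
rectDrawText = rectTitle;
DrawText(hdc, strTitle, strTitle.GetLength(), &rectDrawText, DT_CALCRECT | DT_WORDBREAK);
SelectObject(hdc, previousFont); // select the previous font back in
wndTitle.ReleaseDC(hdc);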

How can I off-screen render?

I have a problem with off-screen rendering with OpenGL.
I searched a lot about FBOs and PBOs, but nothing was helpful for me.
I guess the problem comes from memDC, which was made by CreateCompatibleDC.
Here is a part of my code
void COpenGLWnd::ShowinWnd(int ID)
{
    m_hDC = ::GetDC(m_hWnd);
    memDC = CreateCompatibleDC(m_hDC);
    SetDCPixelFormat(memDC);
    m_hRC = wglCreateContext(memDC);
    VERIFY(wglMakeCurrent(memDC, m_hRC));
    m_isitStart = 0;

    GLuint pbo;
    glGenBuffersARB(1, &pbo);
    glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pbo);
    glBufferDataARB(GL_PIXEL_PACK_BUFFER_ARB, (m_WndWidth * 3 + 3) / 4 * 4 * m_WndHeight, NULL, GL_STREAM_READ);
    glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pbo);

    switch (ID)
    {
    case T_FADEIN:
        GLFadeinRender();
        break;
    case T_PARANORAMAL:
        GLParanormalRender();
        break;
    case T_3DCUBE:
        GL3DcubeRender();
        break;
    default:
        break;
    }

    glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pbo);
    glReadBuffer(GL_BACK);
    glReadPixels(0, 0, m_WndWidth, m_WndHeight, GL_BGR_EXT, GL_UNSIGNED_BYTE, 0);
    BYTE* data = (BYTE*)glMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY_ARB);
    if (data)
    {
        SaveBitmapToDirectFile(data); // makes a bitmap file from the pixel BYTE array "data"
        glUnmapBufferARB(GL_PIXEL_PACK_BUFFER_ARB);
    }
    glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);

    SwapBuffers(memDC);
    glDeleteBuffers(1, &pbo);
    wglMakeCurrent(memDC, NULL);
    wglDeleteContext(m_hRC);
    DeleteDC(memDC);
    ::ReleaseDC(m_hWnd, m_hDC);
}
If I run this program without memDC, creating the context on m_hDC instead, everything is fine: it renders to the window and writes the bitmap file correctly. But I want to render off-screen and only save bitmap files. How can I handle this?
MemDCs will automatically drop you back to the old OpenGL-1.1 software rasterizer. This rasterizer is very limited and does not support any kind of modern features like FBOs, PBuffers and so on.
If you want a GPU-accelerated OpenGL context, you need to create it on a regular window (or on a PBuffer DC, but to get a PBuffer DC you need a window first). You only need the window for getting the context; you don't have to render to it, and it can stay hidden all the time (omit the ShowWindow call during creation). With a valid pixel format set for the window, create the OpenGL context on its HDC.
Since you already have a window, just go with that then.
With the OpenGL context from a regular window you can then use FBOs for off-screen rendering.
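A minimal FBO sketch (assumptions: a GPU-accelerated context is current on a regular window's HDC, an extension loader such as GLEW supplies the GL 3.0 framebuffer entry points, and m_WndWidth/m_WndHeight are the sizes from the question):
GLuint fbo = 0, colorTex = 0;
// Color attachment: a texture the scene will be rendered into
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, m_WndWidth, m_WndHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
// Framebuffer object wrapping that texture
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE)
{
    // Render the scene here, then read it back (glReadPixels or the PBO path above)
}
glBindFramebuffer(GL_FRAMEBUFFER, 0); // back to the default framebuffer
glDeleteFramebuffers(1, &fbo);
glDeleteTextures(1, &colorTex);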

load a ttf font with the Windows API

With CreateFont one can specify the font name and a bunch of other properties. However, what if I have a font.ttf file and I want that particular font to be loaded by Windows? How do I specify that specific file to be used?
I'm pretty sure you can't. All requests for fonts go through the font mapper, which picks out the font file that comes closest to meeting the specifications you've given. Though I'm not sure it ever does so in reality, it could at least theoretically use (for example) data from two entirely separate font files to create one logical font.
It's admittedly rather indirect, but you could utilize GDI interop with DWrite when running on Windows 7+.
#include <Windows.h>
#include <WindowsX.h>
#include <DWrite.h>
...
// Make the font file visible to GDI.
AddFontResourceEx(fontFileName, FR_PRIVATE, 0);
LOGFONT logFont = {};
if (SUCCEEDED(GetLogFontFromFileName(fontFileName, &logFont)))
{
    logFont.lfHeight = -long(desiredPpem);
    HFONT hf = CreateFontIndirect(&logFont);
    HFONT oldFont = SelectFont(hdc, hf);
    ...
    // Do stuff...
    ...
    SelectFont(hdc, oldFont);
    DeleteFont(hf);
}
RemoveFontResourceEx(fontFileName, FR_PRIVATE, 0);
...
HRESULT GetLogFontFromFileName(_In_z_ wchar_t const* fontFileName, _Out_ LOGFONT* logFont)
{
    // DWrite objects
    ComPtr<IDWriteFactory> dwriteFactory;
    ComPtr<IDWriteFontFace> fontFace;
    ComPtr<IDWriteFontFile> fontFile;
    ComPtr<IDWriteGdiInterop> gdiInterop;

    // Set up our DWrite factory and interop interface.
    IFR(DWriteCreateFactory(
        DWRITE_FACTORY_TYPE_SHARED,
        __uuidof(IDWriteFactory),
        reinterpret_cast<IUnknown**>(&dwriteFactory)
        ));
    IFR(dwriteFactory->GetGdiInterop(&gdiInterop));

    // Open the file and determine the font type.
    IFR(dwriteFactory->CreateFontFileReference(fontFileName, nullptr, &fontFile));
    BOOL isSupportedFontType = false;
    DWRITE_FONT_FILE_TYPE fontFileType;
    DWRITE_FONT_FACE_TYPE fontFaceType;
    UINT32 numberOfFaces = 0;
    IFR(fontFile->Analyze(&isSupportedFontType, &fontFileType, &fontFaceType, &numberOfFaces));
    if (!isSupportedFontType)
        return DWRITE_E_FILEFORMAT;

    // Set up a font face from the array of font files (just one).
    ComPtr<IDWriteFontFile> fontFileArray[] = {fontFile};
    IFR(dwriteFactory->CreateFontFace(
        fontFaceType,
        ARRAYSIZE(fontFileArray), // file count
        &fontFileArray[0],        // or GetAddressOf if WRL ComPtr
        0,                        // faceIndex
        DWRITE_FONT_SIMULATIONS_NONE,
        &fontFace
        ));

    // Get the necessary logical font information.
    IFR(gdiInterop->ConvertFontFaceToLOGFONT(fontFace, OUT logFont));
    return S_OK;
}
Where IFR is just a failure macro that returns on a FAILED HRESULT, and ComPtr is a helper smart pointer class (substitute with your own, or ATL CComPtr, WinRT ComPtr, VS2013 _com_ptr_t...).
One possibility is to call EnumFonts() and save the results. Then add your private font with AddFontResourceEx() and call EnumFonts() again; the difference is what you added. Note that TTF and bitmap fonts enumerate differently, but for this test that shouldn't matter.
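A hedged sketch of that enumerate-and-diff idea (the std::set collection and the helper names are my own invention):
#include <set>
#include <string>
#include <windows.h>

static int CALLBACK CollectFaceNames(const LOGFONTW* lf, const TEXTMETRICW*, DWORD, LPARAM lParam)
{
    reinterpret_cast<std::set<std::wstring>*>(lParam)->insert(lf->lfFaceName);
    return 1; // non-zero continues the enumeration
}

std::set<std::wstring> EnumerateFaceNames(HDC hdc)
{
    std::set<std::wstring> names;
    EnumFontsW(hdc, nullptr, CollectFaceNames, reinterpret_cast<LPARAM>(&names));
    return names;
}
// Snapshot before AddFontResourceEx, snapshot after; the set difference is
// the face name(s) contributed by the file.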
If you were using bitmap fonts, they could be easily parsed (.FNT and .FON). For TTF you'd likely have to build (or borrow; another commenter suggested FreeType) a parser to pull the "name" table out of the TTF file.
That seems like a lot of work for a font you're controlling or supplying with your app.
We use AddFontResourceEx() to add a private font, but since we control the font we're adding, we just hardcode the fontname passed to CreateFontIndirect() to match.
If you don't mind installing the font, you can do so with AddFontResource; then you can fetch the relationship between the physical .TTF file and its logical/family name by looking at the mappings in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Fonts.
I mentioned PrivateFontCollection in my comment because I thought you wanted to do this temporarily; you can load a TTF into a PFC with PrivateFontCollection::AddFontFile, fetch back the new FontFamily object from the collection and examine GetFamilyName. (I've done something similar with the .NET implementation of this, but not the raw API.)
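A rough GDI+ sketch of that approach (untested against the raw API, as noted above; it assumes GdiplusStartup has already been called):
#include <windows.h>
#include <gdiplus.h>

Gdiplus::PrivateFontCollection pfc;
pfc.AddFontFile(L"font.ttf");
// Fetch the family the file contributed and read its name.
Gdiplus::FontFamily family;
INT found = 0;
pfc.GetFamilies(1, &family, &found);
WCHAR familyName[LF_FACESIZE] = {};
if (found > 0)
    family.GetFamilyName(familyName);
// familyName now holds the family name; the family can also be used directly
// with Gdiplus::Font for temporary, process-private use of the file.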

Capturing a Window that is hidden or minimized

I followed this tutorial (there's a bit more than what's listed here because in my code I get a window via mouse click) for grabbing a window as a bitmap and then rendering that bitmap in a different window.
My question:
When that window is minimized or hidden (SW_HIDE) my screen capture doesn't work, so is it possible to capture a window when it is minimized or hidden?
The PrintWindow API works well; I use it for capturing thumbnails of hidden windows. Despite the name, it is different from WM_PRINT and WM_PRINTCLIENT, and it works with pretty much every window except Direct X / WPF windows.
I added some code (C#), but after reviewing how I used it, I realized that the window isn't actually hidden when I capture its bitmap; it's just off-screen, so this may not work for your case. Could you show the window off-screen, do a print, and then hide it again?
public static Bitmap PrintWindow(IntPtr hwnd)
{
    RECT rc;
    WinUserApi.GetWindowRect(hwnd, out rc);
    Bitmap bmp = new Bitmap(rc.Width, rc.Height, PixelFormat.Format32bppArgb);
    Graphics gfxBmp = Graphics.FromImage(bmp);
    IntPtr hdcBitmap = gfxBmp.GetHdc();
    bool succeeded = WinUserApi.PrintWindow(hwnd, hdcBitmap, 0);
    gfxBmp.ReleaseHdc(hdcBitmap);
    if (!succeeded)
    {
        gfxBmp.FillRectangle(new SolidBrush(Color.Gray), new Rectangle(Point.Empty, bmp.Size));
    }
    IntPtr hRgn = WinGdiApi.CreateRectRgn(0, 0, 0, 0);
    WinUserApi.GetWindowRgn(hwnd, hRgn);
    Region region = Region.FromHrgn(hRgn);
    if (!region.IsEmpty(gfxBmp))
    {
        gfxBmp.ExcludeClip(region);
        gfxBmp.Clear(Color.Transparent);
    }
    gfxBmp.Dispose();
    return bmp;
}
There are WM_PRINT and WM_PRINTCLIENT messages you can send to the window, which cause its contents to be rendered into the HDC of your choice.
However, these aren't perfect: while the standard Win32 controls handle these correctly, any custom controls in the app might not.
I am trying to get the bitmap of partially hidden controls.
I used code before that did the drawing, but it included the windows overlapping it. So maybe you want to try this.
WM_PRINTCLIENT should (in my understanding) redraw everything inside the control, even if it is not actually visible.
const int WM_PRINT = 0x317, WM_PRINTCLIENT = 0x318, PRF_CLIENT = 4,
PRF_CHILDREN = 0x10, PRF_NON_CLIENT = 2,
COMBINED_PRINTFLAGS = PRF_CLIENT | PRF_CHILDREN | PRF_NON_CLIENT;
SendMessage(handle, WM_PRINTCLIENT, (int)hdc, COMBINED_PRINTFLAGS);
//GDIStuff.BitBlt(hdc, 0, 0, width, height, hdcControl, 0, 0, (int)GDIStuff.TernaryRasterOperations.SRCCOPY);
The previous code is commented out above. It is based on the code found here: Pocket PC: Draw control to bitmap (accepted answer). It is basically the same as what Tim Robinson suggests in this thread.
Also, have a look here
http://www.tcx.be/blog/2004/paint-control-onto-graphics/
