Windows API `GetPixel()` always returns `CLR_INVALID`, but `SetPixel()` works fine? - winapi

My OS is Windows 7 64-bit with a dual-monitor display.
I use GetPixel(), but it always returns CLR_INVALID, like this:
COLORREF result = GetPixel(dc,x,y);
GetDeviceCaps(RASTERCAPS) reports that RC_BITBLT is supported.
GetDeviceCaps(COLORMGMTCAPS) returns CM_GAMMA_RAMP.
Most importantly, if I call SetPixel(dc,x,y,RGB(250,250,250)) first and GetPixel(dc,x,y) afterwards, I can ALWAYS retrieve the correct result, like this:
COLORREF result = SetPixel(dc,x,y,RGB(250,250,250));
COLORREF cr = GetPixel(dc,x,y);
So I think my coordinates should be alright. I have no idea why GetPixel() always returns CLR_INVALID while SetPixel() always works well. Any suggestions?

From the GetPixel documentation:
A bitmap must be selected within the device context, otherwise,
CLR_INVALID is returned on all pixels.
Try the below code and see if it works for your device context.
HDC dc = ... // <-- your device context

// Create a memory DC with a 1x1 bitmap selected into it, copy the pixel of
// interest into it, then read that pixel back with GetPixel().
HDC memDC = CreateCompatibleDC(dc);
HBITMAP memBM = CreateCompatibleBitmap(dc, 1, 1);
HBITMAP oldBM = (HBITMAP)SelectObject(memDC, memBM);

int x = ... // point's coordinates
int y = ...

BitBlt(memDC, 0, 0, 1, 1, dc, x, y, SRCCOPY);
COLORREF cr = GetPixel(memDC, 0, 0);
std::cout << cr << std::endl;

// Restore the original bitmap before destroying the DC, then free both.
SelectObject(memDC, oldBM);
DeleteDC(memDC);
DeleteObject(memBM);

Related

Call to GetDIBits() succeeds, but program terminates

When I call the following function in a Windows program, the program abruptly terminates.
The purpose of ScanRect() is to copy a rectangle at specified coordinates on the screen and load the pixel values into a memory buffer.
Every function call within ScanRect() succeeds, including both calls to GetDIBits(). The first call, with lpvBits set to NULL, causes it to fill the BITMAPINFOHEADER of bmInfo with information about the pixel data, reporting a value of 32 bits per pixel. The second call to GetDIBits() copies 80 lines of the rectangle into memory buffer pMem, returning the value 80 for the number of lines copied.
Everything seems to succeed, but then the program suddenly terminates. I inserted the line Sleep(8192) after the second call to GetDIBits(), and the program terminates after the 8 seconds have elapsed.
What is causing the program to terminate?
EDIT: the original code is revised per suggestions in this thread. No errors are detected when the function is run, but the program still terminates unexpectedly. I realize the memory buffer size is hard coded, but it is way bigger than needed for the rectangle used in the testing. That should not cause an error. Of course I will have the program compute the necessary buffer size after I find out why the program is terminating.
VOID ScanRect(int x, int y, int iWidth, int iHeight) // 992, 96, 64, 80
{
    HDC hDC = GetDC(NULL);
    if (!hDC)
    {
        cout << "!hDC" << endl; // error handling ...
    }
    else
    {
        HBITMAP hBitmap = CreateCompatibleBitmap(hDC, iWidth, iHeight);
        if (!hBitmap)
        {
            cout << "!hBitmap" << endl; // error handling ...
        }
        else
        {
            HDC hCDC = CreateCompatibleDC(hDC); // compatible with screen DC
            if (!hCDC)
            {
                cout << "!hCDC" << endl; // error handling ...
            }
            else
            {
                HBITMAP hOldBitmap = (HBITMAP) SelectObject(hCDC, hBitmap);
                BitBlt(hCDC, 0, 0, iWidth, iHeight, hDC, x, y, SRCCOPY);
                BITMAPINFO bmInfo = {0};
                bmInfo.bmiHeader.biSize = sizeof(bmInfo.bmiHeader);
                if (!GetDIBits(hCDC, hBitmap, 0, iHeight, NULL, &bmInfo, DIB_RGB_COLORS))
                {
                    cout << "!GetDIBits" << endl; // error handling ...
                }
                else
                {
                    HANDLE hHeap = GetProcessHeap();
                    LPVOID pMem = HeapAlloc(hHeap, HEAP_ZERO_MEMORY, 65536); // TODO: calculate a proper size based on bmInfo's pixel information ...
                    if (!pMem)
                    {
                        cout << "!pMem" << endl;
                    }
                    else
                    {
                        int i = GetDIBits(hCDC, hBitmap, 0, iHeight, pMem, &bmInfo, DIB_RGB_COLORS);
                        cout << "i returned by GetDIBits() " << i << endl;
                        HeapFree(hHeap, NULL, pMem);
                    }
                }
                SelectObject(hCDC, hOldBitmap);
                DeleteDC(hCDC);
            }
            DeleteObject(hBitmap);
        }
        ReleaseDC(NULL, hDC);
    }
}
The biCompression value returned by the first GetDIBits() is BI_BITFIELDS; before you call the second GetDIBits(), you need to set bmInfo.bmiHeader.biCompression = BI_RGB;. According to c++ read pixels with GetDIBits(), setting it to BI_RGB is essential in order to avoid an extra 3 DWORDs being written at the end of the structure.
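A minimal sketch of that sequence, reusing the names from the question's code (the first call fills the header, then biCompression is forced to BI_RGB before the second call):
BITMAPINFO bmInfo = {0};
bmInfo.bmiHeader.biSize = sizeof(bmInfo.bmiHeader);
GetDIBits(hCDC, hBitmap, 0, iHeight, NULL, &bmInfo, DIB_RGB_COLORS); // fills bmiHeader, reports BI_BITFIELDS
bmInfo.bmiHeader.biCompression = BI_RGB;                             // avoid the extra 3 DWORD colour masks
GetDIBits(hCDC, hBitmap, 0, iHeight, pMem, &bmInfo, DIB_RGB_COLORS); // copies the pixels into pMem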
More details
Like @BenVoigt said in the comments, you need to restore the old HBITMAP that you replaced with SelectObject() before you destroy the HDC that owns it. You are selecting hBitmap into hCDC, and then destroying hCDC before destroying hBitmap.
https://learn.microsoft.com/en-us/windows/win32/gdi/operations-on-graphic-objects
Each of these functions returns a handle identifying a new object. After an application retrieves a handle, it must call the SelectObject() function to replace the default object. However, the application should save the handle identifying the default object and use this handle to replace the new object when it is no longer needed. When the application finishes drawing with the new object, it must restore the default object by calling the SelectObject() function and then delete the new object by calling the DeleteObject() function. Failing to delete objects causes serious performance problems.
Also, you should free the GDI objects in the reverse order that you create them.
And, don't forget error handling.
Try something more like this instead:
VOID ScanRect(int x, int y, int iWidth, int iHeight) // 992, 96, 64, 80
{
    HDC hDC = GetDC(NULL);
    if (!hDC)
    {
        // error handling ...
    }
    else
    {
        HBITMAP hBitmap = CreateCompatibleBitmap(hDC, iWidth, iHeight);
        if (!hBitmap)
        {
            // error handling ...
        }
        else
        {
            HDC hCDC = CreateCompatibleDC(hDC); // compatible with screen DC
            if (!hCDC)
            {
                // error handling ...
            }
            else
            {
                HBITMAP hOldBitmap = (HBITMAP) SelectObject(hCDC, hBitmap);
                BitBlt(hCDC, 0, 0, iWidth, iHeight, hDC, x, y, SRCCOPY);
                SelectObject(hCDC, hOldBitmap);
                BITMAPINFO bmInfo = {0};
                bmInfo.bmiHeader.biSize = sizeof(bmInfo.bmiHeader);
                if (!GetDIBits(hCDC, hBitmap, 0, iHeight, NULL, &bmInfo, DIB_RGB_COLORS))
                {
                    // error handling ...
                }
                else
                {
                    HANDLE hHeap = GetProcessHeap();
                    LPVOID pMem = HeapAlloc(hHeap, HEAP_ZERO_MEMORY, 65536); // TODO: calculate a proper size based on bmInfo's pixel information ...
                    if (!pMem)
                    {
                        // error handling ...
                    }
                    else
                    {
                        int i = GetDIBits(hCDC, hBitmap, 0, iHeight, pMem, &bmInfo, DIB_RGB_COLORS);
                        HeapFree(hHeap, NULL, pMem);
                    }
                }
                DeleteDC(hCDC);
            }
            DeleteObject(hBitmap);
        }
        ReleaseDC(NULL, hDC);
    }
}
Note the TODO on the call to HeapAlloc(). You really should be calculating the buffer size based on the bitmap's actual width, height, pixel depth, scanline padding, etc., rather than using a hard-coded buffer size. I will leave that as an exercise for you. In this particular example, 64K is large enough for a 64x80 32bpp bitmap; it just wastes roughly 45K of memory.
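For reference, one way to fill in that TODO, as a sketch reusing bmInfo, iHeight and hHeap from the code above (DIB scanlines are padded to DWORD boundaries):
LONG stride  = ((bmInfo.bmiHeader.biWidth * bmInfo.bmiHeader.biBitCount + 31) / 32) * 4; // DWORD-aligned row size
SIZE_T cbMem = (SIZE_T)stride * iHeight;                                                 // whole rectangle
LPVOID pMem  = HeapAlloc(hHeap, HEAP_ZERO_MEMORY, cbMem);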

difference when rendering horizontal line using TextOut char by char vs all at once

I am writing a Win32 low-level GUI app that emulates a console app. I use a fixed-width font; my test uses Cascadia Mono, but I have the same issue with any fixed-width font.
The console app is trying to draw a horizontal line using the U+2500 character.
I output the characters the app passes me one by one. When I do that I get gaps between the horizontal-line characters; when I output them in one call to TextOut, the gaps are filled in.
I made this using the VS C++ Windows app template and added this code to the WM_PAINT handler:
auto nHeight = -MulDiv(48, GetDeviceCaps(hdc, LOGPIXELSY), 72);
auto hfont = CreateFont(
nHeight,
0,
0,
0,
100,//200,
0,
0,
0,
DEFAULT_CHARSET,
OUT_OUTLINE_PRECIS,
CLIP_DEFAULT_PRECIS,
CLEARTYPE_QUALITY,
FIXED_PITCH,
L"Cascadia Mono"
);
TEXTMETRIC tm;
SelectObject(hdc, hfont);
GetTextMetrics(hdc, &tm);
auto str = L"kkkkkk─────k";
// One call: the whole string at once, at the top of the window.
TextOut(hdc, 0, 0, str, 12);
// Second run: the same string, one character per call, one text row lower.
for (int i = 0; i < 12; i++)
{
    TextOut(hdc, i * tm.tmAveCharWidth, tm.tmHeight, &str[i], 1);
}
This displays both runs. You can see that this is not due to me miscalculating the character cell width: the strings are exactly aligned, there are just some added pixels in the upper one, and also some extra 'knobbiness' where the joins are. Very odd. Also note that the right edge of the last k before the line starts is slightly chopped off in the char-by-char run, but not in the all-at-once one.
So why am I doing it char by char? Because I need to specify the font weight, background, and foreground for each cell.
Instead of using TextOut, you can use DrawText, which is a bit more high-level, like this:
for (int i = 0; i < 12; i++)
{
RECT rc;
rc.left = i * tm.tmAveCharWidth;
rc.top = tm.tmHeight;
rc.right = rc.left + 50; // todo: make sure this is ok
rc.bottom = rc.top + 100;
DrawText(hdc, (LPWSTR)&str[i], 1, &rc, 0);
}
And it seems to fix the "lineness" of it, although it's not 100% exactly the same (there are some pixels that show a difference).
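Since the question also needs a different weight, foreground and background per cell, those can be set on the DC before each per-cell call. A hedged sketch, where cellFont, cellFg and cellBg are hypothetical per-cell values (weight has to come from a separate HFONT per weight):
for (int i = 0; i < 12; i++)
{
    RECT rc;
    rc.left = i * tm.tmAveCharWidth;
    rc.top = tm.tmHeight;
    rc.right = rc.left + tm.tmAveCharWidth;
    rc.bottom = rc.top + tm.tmHeight;
    HFONT oldFont = (HFONT)SelectObject(hdc, cellFont[i]); // per-cell weight via a per-cell font
    SetTextColor(hdc, cellFg[i]);                          // per-cell foreground
    SetBkColor(hdc, cellBg[i]);                            // per-cell background
    SetBkMode(hdc, OPAQUE);                                // draw the glyph background opaquely
    DrawText(hdc, &str[i], 1, &rc, 0);
    SelectObject(hdc, oldFont);
}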

Rendering Windows screenshot capture bitmap as DirectX texture

I'm making progress on a '3D desktop' DirectX 11 app that needs to display the current contents of a desktop window (e.g. "Calculator") as a 2D texture on a rectangular surface. I'm so close, but really struggling with the screenshot HBITMAP -> Texture2D step. I do have screenshot->HBITMAP and DDS file->rendered texture working, but can't complete screenshot->rendered texture.
So far I have working the 'capture the window as a screenshot' bit:
RECT user_window_rectangle;
HWND user_window = FindWindow(NULL, TEXT("Calculator"));
GetClientRect(user_window, &user_window_rectangle);
HDC hdcScreen = GetDC(NULL);
HDC hdc = CreateCompatibleDC(hdcScreen);
UINT screenshot_width = user_window_rectangle.right - user_window_rectangle.left;
UINT screenshot_height = user_window_rectangle.bottom - user_window_rectangle.top;
hbmp = CreateCompatibleBitmap(hdcScreen, screenshot_width, screenshot_height);
SelectObject(hdc, hbmp);
PrintWindow(user_window, hdc, PW_CLIENTONLY);
At this point I have the window bitmap referenced by HBITMAP hbmp.
Also working is my code to render a DDS file as a texture on a directx/3d rectangle:
ID3D11Device *dev;
ID3D11DeviceContext *dev_context;
...
dev_context->PSSetShaderResources(0, 1, &shader_resource_view);
dev_context->PSSetSamplers(0, 1, &tex_sampler_state);
...
DirectX::TexMetadata tex_metadata;
DirectX::ScratchImage image;
hr = LoadFromDDSFile(L"Earth.dds", DirectX::DDS_FLAGS_NONE, &tex_metadata, image);
hr = CreateShaderResourceView(dev, image.GetImages(), image.GetImageCount(), tex_metadata, &shader_resource_view);
Pixel shader is:
Texture2D ObjTexture;
SamplerState ObjSamplerState;
float4 PShader(float4 pos : SV_POSITION, float4 color : COLOR, float2 tex : TEXCOORD) : SV_TARGET
{
    return ObjTexture.Sample( ObjSamplerState, tex );
}
The samplerstate (defaulting to linear) is:
D3D11_SAMPLER_DESC sampler_desc;
ZeroMemory(&sampler_desc, sizeof(sampler_desc));
sampler_desc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
sampler_desc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
sampler_desc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
sampler_desc.MinLOD = 0;
sampler_desc.MaxLOD = D3D11_FLOAT32_MAX;
hr = dev->CreateSamplerState(&sampler_desc, &tex_sampler_state);
Question: how do I replace the LoadFromDDSFile bit with some equivalent that takes the HBITMAP from the Windows screen capture and ends up with it on the graphics card as ObjTexture?
Below is my best shot at bridging from the screenshot HBITMAP hbmp to the shader resource screenshot_texture, but it gives a memory access violation from the graphics driver (I think due to my "data.pSysMem = &bmp.bmBits", but no idea really):
GetObject(hbmp, sizeof(BITMAP), (LPSTR)&bmp)
D3D11_TEXTURE2D_DESC screenshot_desc = CD3D11_TEXTURE2D_DESC(DXGI_FORMAT_R8G8B8A8_UNORM, bmp.bmWidth, bmp.bmHeight, 1,
1,
D3D11_BIND_SHADER_RESOURCE
);
int bytes_per_pixel = 4;
D3D11_SUBRESOURCE_DATA data;
ZeroMemory(&data, sizeof(D3D11_SUBRESOURCE_DATA));
data.pSysMem = &bmp.bmBits; //pixel buffer
data.SysMemPitch = bytes_per_pixel * bmp.bmWidth;// line size in byte
data.SysMemSlicePitch = bytes_per_pixel * bmp.bmWidth * bmp.bmHeight;// total buffer size in byte
hr = dev->CreateTexture2D(
&screenshot_desc, //texture format
&data, // pixel buffer use to fill the texture
&screenshot_texture // created texture
);
:::::::::::::::::::::::::SOLUTION::::::::::::::::::::::::::::::::::::::::::
The main issue was that trying to use &bmp.bmBits directly as a pixel buffer caused memory conflicts within the graphics driver - this was resolved by using 'malloc' to allocate an appropriately sized block of memory to store the pixel data. Thanks to Chuck Walbourn for helping with my poking around in the dark to work out how the pixel data is actually stored (it was actually 32 bits/pixel by default). It's still possible/likely some of the code is relying on luck to read the pixel data correctly, but it's been improved with Chuck's input.
My basic technique was:
FindWindow to get the client window on the desktop
CreateCompatibleBitmap and SelectObject and PrintWindow to get a HBITMAP to the snapshot
malloc to allocate the correct amount of space for a (byte*)pixel buffer
GetDIBits to populate the (byte*)pixel buffer from the HBITMAP
CreateTexture2D to build the texture buffer
CreateShaderResourceView to map the texture to the graphics pixel shader
So, working code to screenshot a Windows desktop window and pass it as a texture to a Direct3D app is:
RECT user_window_rectangle;
HWND user_window = FindWindow(NULL, TEXT("Calculator")); //the window can't be min
if (user_window == NULL)
{
MessageBoxA(NULL, "Can't find Calculator", "Camvas", MB_OK);
return;
}
GetClientRect(user_window, &user_window_rectangle);
//create
HDC hdcScreen = GetDC(NULL);
HDC hdc = CreateCompatibleDC(hdcScreen);
UINT screenshot_width = user_window_rectangle.right - user_window_rectangle.left;
UINT screenshot_height = user_window_rectangle.bottom - user_window_rectangle.top;
hbmp = CreateCompatibleBitmap(hdcScreen, screenshot_width, screenshot_height);
SelectObject(hdc, hbmp);
//Print to memory hdc
PrintWindow(user_window, hdc, PW_CLIENTONLY);
BITMAPINFOHEADER bmih;
ZeroMemory(&bmih, sizeof(BITMAPINFOHEADER));
bmih.biSize = sizeof(BITMAPINFOHEADER);
bmih.biPlanes = 1;
bmih.biBitCount = 32;
bmih.biWidth = screenshot_width;
bmih.biHeight = 0-screenshot_height;
bmih.biCompression = BI_RGB;
bmih.biSizeImage = 0;
int bytes_per_pixel = bmih.biBitCount / 8;
BYTE *pixels = (BYTE*)malloc(bytes_per_pixel * screenshot_width * screenshot_height);
BITMAPINFO bmi = { 0 };
bmi.bmiHeader = bmih;
int row_count = GetDIBits(hdc, hbmp, 0, screenshot_height, pixels, &bmi, DIB_RGB_COLORS);
D3D11_TEXTURE2D_DESC screenshot_desc = CD3D11_TEXTURE2D_DESC(
DXGI_FORMAT_B8G8R8A8_UNORM, // format
screenshot_width, // width
screenshot_height, // height
1, // arraySize
1, // mipLevels
D3D11_BIND_SHADER_RESOURCE, // bindFlags
D3D11_USAGE_DYNAMIC, // usage
D3D11_CPU_ACCESS_WRITE, // cpuaccessFlags
1, // sampleCount
0, // sampleQuality
0 // miscFlags
);
D3D11_SUBRESOURCE_DATA data;
ZeroMemory(&data, sizeof(D3D11_SUBRESOURCE_DATA));
data.pSysMem = pixels; // texArray; // &bmp.bmBits; //pixel buffer
data.SysMemPitch = bytes_per_pixel * screenshot_width;// line size in byte
data.SysMemSlicePitch = bytes_per_pixel * screenshot_width * screenshot_height;
hr = dev->CreateTexture2D(
&screenshot_desc, //texture format
&data, // pixel buffer use to fill the texture
&screenshot_texture // created texture
);
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
srvDesc.Format = screenshot_desc.Format;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MostDetailedMip = 0;
srvDesc.Texture2D.MipLevels = screenshot_desc.MipLevels;
dev->CreateShaderResourceView(screenshot_texture, NULL, &shader_resource_view);
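One thing the snippet above still leaves out is cleanup once the texture has been created. A hedged sketch, assuming the HBITMAP returned by the earlier SelectObject(hdc, hbmp) call was saved as hOldBmp:
free(pixels);                // the CPU-side copy is no longer needed
SelectObject(hdc, hOldBmp);  // restore the memory DC's original bitmap (hOldBmp is assumed saved above)
DeleteObject(hbmp);
DeleteDC(hdc);
ReleaseDC(NULL, hdcScreen);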
You are making a lot of assumptions here that the BITMAP returned is actually in 32-bit RGBA form. It is likely not at all in that format, and in any case you need to validate the contents of bmPlanes to be 1 and bmBitsPixel to be 32 if you are assuming it is 4-bytes per pixel. You should read more about the BMP format.
BMPs use BGRA order, so you can use DXGI_FORMAT_B8G8R8A8_UNORM for the case of bmBitsPixel being 32.
Secondly, you need to derive pitch from bmWidthBytes and not bmWidth.
data.pSysMem = &bmp.bmBits; //pixel buffer
data.SysMemPitch = bmp.bmWidthBytes;// line size in byte
data.SysMemSlicePitch = bmp.bmWidthBytes * bmp.bmHeight;// total buffer size in byte
If bmBitsPixel is 24, there is no DXGI format equivalent to that. You have to copy the data to a 32-bit format such as DXGI_FORMAT_B8G8R8X8_UNORM.
If bmBitsPixel is 15 or 16, you can use DXGI_FORMAT_B5G5R5A1_UNORM on a system with Direct3D 11.1, but remember that 16-bit DXGI formats are not always supported depending on the driver. Otherwise you'll have to convert this data to something else.
For bmBitsPixel values of 1, 2, 4, or 8 you have to convert them as there are no DXGI texture formats that are equivalent.
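For the 24-bpp case mentioned above, a minimal sketch of that copy, assuming <vector> is available and that width, height, srcBits and srcPitch (taken from bmWidthBytes) describe the source bitmap:
std::vector<BYTE> bgrx(4 * width * height);   // destination laid out for DXGI_FORMAT_B8G8R8X8_UNORM
for (UINT y = 0; y < height; ++y)
{
    const BYTE* src = srcBits + y * srcPitch; // 3 bytes per pixel, rows padded to bmWidthBytes
    BYTE* dst = &bgrx[y * 4 * width];         // 4 bytes per pixel, tightly packed
    for (UINT x = 0; x < width; ++x)
    {
        dst[4 * x + 0] = src[3 * x + 0];      // B
        dst[4 * x + 1] = src[3 * x + 1];      // G
        dst[4 * x + 2] = src[3 * x + 2];      // R
        dst[4 * x + 3] = 0xFF;                // X (unused)
    }
}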

UpdateLayeredWindow and DrawText

I'm using UpdateLayeredWindow to display an application window. I have created my own custom buttons and I would like to create my own static text. The problem is that when I try to draw the text on the HDC, the DrawText or TextOut functions overwrite the alpha channel of my picture and the text becomes transparent. I tried to find a solution to this but could not find any. My custom controls are designed so that they do all their drawing in a member function called Draw(HDC hDc), so they only have access to the HDC, and I would like to keep this design. Can anyone help me? I am using MFC and I want to achieve the desired result without using GDI+.
I know this is an old post ... but I just had this very same problem ... and it was driving me CRAZY.
Eventually, I stumbled upon this post by Mike Sutton to the microsoft.public.win32.programmer.gdi newsgroup ... from almost 7 years ago!
Basically, DrawText (and TextOut) do not play nicely with the alpha channel and UpdateLayeredWindow ... you need to premultiply the R, G, and B channels with the alpha channel.
In Mike's post, he shows how he creates another DIB (device independent bitmap) upon which he draws the text ... and alpha blends that into the other bitmap.
After doing this, my text looked perfect!
Just in case the link to the newsgroup post dies ... I am going to include the code here. All credit goes to Mike Sutton (@mikedsutton).
Here is the code that creates the alpha blended bitmap with the text on it:
HBITMAP CreateAlphaTextBitmap(LPCSTR inText, HFONT inFont, COLORREF inColour)
{
int TextLength = (int)strlen(inText);
if (TextLength <= 0) return NULL;
// Create DC and select font into it
HDC hTextDC = CreateCompatibleDC(NULL);
HFONT hOldFont = (HFONT)SelectObject(hTextDC, inFont);
HBITMAP hMyDIB = NULL;
// Get text area
RECT TextArea = {0, 0, 0, 0};
DrawText(hTextDC, inText, TextLength, &TextArea, DT_CALCRECT);
if ((TextArea.right > TextArea.left) && (TextArea.bottom > TextArea.top))
{
BITMAPINFOHEADER BMIH;
memset(&BMIH, 0x0, sizeof(BITMAPINFOHEADER));
void *pvBits = NULL;
// Specify DIB setup
BMIH.biSize = sizeof(BMIH);
BMIH.biWidth = TextArea.right - TextArea.left;
BMIH.biHeight = TextArea.bottom - TextArea.top;
BMIH.biPlanes = 1;
BMIH.biBitCount = 32;
BMIH.biCompression = BI_RGB;
// Create and select DIB into DC
hMyDIB = CreateDIBSection(hTextDC, (LPBITMAPINFO)&BMIH, 0, (LPVOID*)&pvBits, NULL, 0);
HBITMAP hOldBMP = (HBITMAP)SelectObject(hTextDC, hMyDIB);
if (hOldBMP != NULL)
{
// Set up DC properties
SetTextColor(hTextDC, 0x00FFFFFF);
SetBkColor(hTextDC, 0x00000000);
SetBkMode(hTextDC, OPAQUE);
// Draw text to buffer
DrawText(hTextDC, inText, TextLength, &TextArea, DT_NOCLIP);
BYTE* DataPtr = (BYTE*)pvBits;
BYTE FillR = GetRValue(inColour);
BYTE FillG = GetGValue(inColour);
BYTE FillB = GetBValue(inColour);
BYTE ThisA;
for (int LoopY = 0; LoopY < BMIH.biHeight; LoopY++) {
for (int LoopX = 0; LoopX < BMIH.biWidth; LoopX++) {
ThisA = *DataPtr; // Move alpha and pre-multiply with RGB
*DataPtr++ = (FillB * ThisA) >> 8;
*DataPtr++ = (FillG * ThisA) >> 8;
*DataPtr++ = (FillR * ThisA) >> 8;
*DataPtr++ = ThisA; // Set Alpha
}
}
// De-select bitmap
SelectObject(hTextDC, hOldBMP);
}
}
// De-select font and destroy temp DC
SelectObject(hTextDC, hOldFont);
DeleteDC(hTextDC);
// Return DIBSection
return hMyDIB;
}
Here is the code that drives the CreateAlphaTextBitmap method:
void TestAlphaText(HDC inDC, int inX, int inY)
{
const char *DemoText = "Hello World!\0";
RECT TextArea = {0, 0, 0, 0};
HFONT TempFont = CreateFont(50, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, "Arial\0");
HBITMAP MyBMP = CreateAlphaTextBitmap(DemoText, TempFont, 0xFF);
DeleteObject(TempFont);
if (MyBMP)
{
// Create temporary DC and select new Bitmap into it
HDC hTempDC = CreateCompatibleDC(inDC);
HBITMAP hOldBMP = (HBITMAP)SelectObject(hTempDC, MyBMP);
if (hOldBMP)
{
// Get Bitmap image size
BITMAP BMInf;
GetObject(MyBMP, sizeof(BITMAP), &BMInf);
// Fill blend function and blend new text to window
BLENDFUNCTION bf;
bf.BlendOp = AC_SRC_OVER;
bf.BlendFlags = 0;
bf.SourceConstantAlpha = 0x80;
bf.AlphaFormat = AC_SRC_ALPHA;
AlphaBlend(inDC, inX, inY, BMInf.bmWidth, BMInf.bmHeight, hTempDC, 0, 0, BMInf.bmWidth, BMInf.bmHeight, bf);
// Clean up
SelectObject(hTempDC, hOldBMP);
DeleteObject(MyBMP);
DeleteDC(hTempDC);
}
}
}
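Since the question's controls do all their drawing in a Draw(HDC hDc) member, this approach fits that design directly; a hypothetical sketch (CMyStaticText, m_x and m_y are illustrative names):
void CMyStaticText::Draw(HDC hDc)
{
    // TestAlphaText() only needs the destination DC, so the existing
    // "controls draw through a single HDC" design can stay as-is.
    TestAlphaText(hDc, m_x, m_y);
}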

Capture screen shot with mouse cursor

I have used the following code to get screen shot on Windows.
hdcMem = CreateCompatibleDC (hdc) ;
int cx = GetDeviceCaps (hdc, HORZRES);
int cy = GetDeviceCaps (hdc, VERTRES);
HBITMAP hBitmap(NULL);
hBitmap = CreateCompatibleBitmap (hdc, cx, cy) ;
SelectObject (hdcMem, hBitmap) ;
BitBlt(hdcMem, 0, 0, cx, cy, hdc, 0, 0, SRCCOPY);
However, the mouse cursor doesn't show up.
How could I get the cursor? Or is there a library that can do that?
Thanks in advance.
After your BitBlt and before you select the bitmap back out of hdcMem, you can do this:
CURSORINFO cursor = { sizeof(cursor) };
::GetCursorInfo(&cursor);
if (cursor.flags == CURSOR_SHOWING) {
    // hwnd and rc are assumed to come from the surrounding capture code: the target
    // window and its client rectangle, used to convert the cursor's screen position
    // into coordinates relative to the captured area.
    RECT rcWnd;
    ::GetWindowRect(hwnd, &rcWnd);
    ICONINFOEXW info = { sizeof(info) };
    ::GetIconInfoExW(cursor.hCursor, &info);
    const int x = cursor.ptScreenPos.x - rcWnd.left - rc.left - info.xHotspot;
    const int y = cursor.ptScreenPos.y - rcWnd.top - rc.top - info.yHotspot;
    BITMAP bmpCursor = {0};
    ::GetObject(info.hbmColor, sizeof(bmpCursor), &bmpCursor);
    ::DrawIconEx(hdcMem, x, y, cursor.hCursor, bmpCursor.bmWidth, bmpCursor.bmHeight,
                 0, NULL, DI_NORMAL);
    // GetIconInfoExW creates copies of the cursor's bitmaps; free them when done.
    if (info.hbmColor) ::DeleteObject(info.hbmColor);
    if (info.hbmMask)  ::DeleteObject(info.hbmMask);
}
The code above figures out if the cursor is showing, using the global cursor state since you're probably taking a screenshot of a window (or windows) in another process. It then gets the target window coordinates for adjusting from screen. It gets specific info about the cursor, including its hotspot. It computes the drawing position of the icon. Finally, it gets the actual size of the cursor icon so that it can draw it without any stretching.
The only limitations to this approach that I know of are:
You don't get cursor shadows if you have them enabled.
If it's an animated cursor, this just shows the first frame. As far as I know, there's no way to determine the current frame.
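Putting the two pieces in this question together, here is a hedged end-to-end sketch of a full-screen capture with the cursor overlaid (no window offset is needed when capturing the whole screen):
HDC hdc = GetDC(NULL);                          // screen DC
HDC hdcMem = CreateCompatibleDC(hdc);
int cx = GetDeviceCaps(hdc, HORZRES);
int cy = GetDeviceCaps(hdc, VERTRES);
HBITMAP hBitmap = CreateCompatibleBitmap(hdc, cx, cy);
HBITMAP hOld = (HBITMAP)SelectObject(hdcMem, hBitmap);
BitBlt(hdcMem, 0, 0, cx, cy, hdc, 0, 0, SRCCOPY);
// Overlay the cursor on the captured image.
CURSORINFO cursor = { sizeof(cursor) };
if (GetCursorInfo(&cursor) && (cursor.flags & CURSOR_SHOWING))
{
    ICONINFO info = {0};
    if (GetIconInfo(cursor.hCursor, &info))
    {
        DrawIconEx(hdcMem,
                   cursor.ptScreenPos.x - (int)info.xHotspot,
                   cursor.ptScreenPos.y - (int)info.yHotspot,
                   cursor.hCursor, 0, 0, 0, NULL, DI_NORMAL);
        if (info.hbmColor) DeleteObject(info.hbmColor); // GetIconInfo creates copies that must be freed
        if (info.hbmMask)  DeleteObject(info.hbmMask);
    }
}
// hBitmap now holds the screenshot including the cursor.
SelectObject(hdcMem, hOld);
DeleteDC(hdcMem);
ReleaseDC(NULL, hdc);
// ... use hBitmap, then DeleteObject(hBitmap) when done.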

Resources