Call to GetDIBits() succeeds, but program terminates - winapi

When I call the following function in a Windows program, the program abruptly terminates.
The purpose of ScanRect() is to copy a rectangle at specified coordinates on the screen and load the pixel values into a memory buffer.
Every function call within ScanRect() succeeds, including both calls to GetDIBits(). The first call, with lpvBits set to NULL, causes it to fill the BITMAPINFOHEADER of bmInfo with information about the pixel data, reporting a value of 32 bits per pixel. The second call to GetDIBits() copies 80 lines of the rectangle into memory buffer pMem, returning the value 80 for the number of lines copied.
Everything seems to succeed, but then the program suddenly terminates. I inserted the line Sleep(8192) after the second call to GetDIBits(), and the program terminates after the 8 seconds have elapsed.
What is causing the program to terminate?
EDIT: the original code is revised per suggestions in this thread. No errors are detected when the function is run, but the program still terminates unexpectedly. I realize the memory buffer size is hard coded, but it is way bigger than needed for the rectangle used in the testing. That should not cause an error. Of course I will have the program compute the necessary buffer size after I find out why the program is terminating.
VOID ScanRect(int x, int y, int iWidth, int iHeight) // 992, 96, 64, 80
{
    HDC hDC = GetDC(NULL);
    if (!hDC)
    {
        cout << "!hDC" << endl; // error handling ...
    }
    else
    {
        HBITMAP hBitmap = CreateCompatibleBitmap(hDC, iWidth, iHeight);
        if (!hBitmap)
        {
            cout << "!hBitmap" << endl; // error handling ...
        }
        else
        {
            HDC hCDC = CreateCompatibleDC(hDC); // compatible with screen DC
            if (!hCDC)
            {
                cout << "!hCDC" << endl; // error handling ...
            }
            else
            {
                HBITMAP hOldBitmap = (HBITMAP) SelectObject(hCDC, hBitmap);
                BitBlt(hCDC, 0, 0, iWidth, iHeight, hDC, x, y, SRCCOPY);
                BITMAPINFO bmInfo = {0};
                bmInfo.bmiHeader.biSize = sizeof(bmInfo.bmiHeader);
                if (!GetDIBits(hCDC, hBitmap, 0, iHeight, NULL, &bmInfo, DIB_RGB_COLORS))
                {
                    cout << "!GetDIBits" << endl; // error handling ...
                }
                else
                {
                    HANDLE hHeap = GetProcessHeap();
                    LPVOID pMem = HeapAlloc(hHeap, HEAP_ZERO_MEMORY, 65536); // TODO: calculate a proper size based on bmInfo's pixel information ...
                    if (!pMem)
                    {
                        cout << "!pMem" << endl;
                    }
                    else
                    {
                        int i = GetDIBits(hCDC, hBitmap, 0, iHeight, pMem, &bmInfo, DIB_RGB_COLORS);
                        cout << "i returned by GetDIBits() " << i << endl;
                        HeapFree(hHeap, NULL, pMem);
                    }
                }
                SelectObject(hCDC, hOldBitmap);
                DeleteDC(hCDC);
            }
            DeleteObject(hBitmap);
        }
        ReleaseDC(NULL, hDC);
    }
}

The biCompression value returned by the first GetDIBits() call is BI_BITFIELDS. Before you make the second GetDIBits() call, you need to set bmInfo.bmiHeader.biCompression = BI_RGB;. According to "c++ read pixels with GetDIBits()", setting it to BI_RGB is essential in order to keep GetDIBits() from writing an extra 3 DWORDs (the color masks) past the end of the structure.
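A minimal sketch of the two-call sequence with that fix applied, reusing the names from the code above (reserving room for the three masks is my own extra precaution, not something this answer requires):

struct
{
    BITMAPINFOHEADER bmiHeader;
    DWORD            bmiMasks[3]; // room for the masks GetDIBits() may append when biCompression is BI_BITFIELDS
} bmInfo = {};
bmInfo.bmiHeader.biSize = sizeof(bmInfo.bmiHeader);

// first call: fill in the header only (lpvBits == NULL)
GetDIBits(hCDC, hBitmap, 0, iHeight, NULL, (BITMAPINFO*)&bmInfo, DIB_RGB_COLORS);

// force BI_RGB so the second call writes pixel data only, no extra masks
bmInfo.bmiHeader.biCompression = BI_RGB;
GetDIBits(hCDC, hBitmap, 0, iHeight, pMem, (BITMAPINFO*)&bmInfo, DIB_RGB_COLORS);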

As @BenVoigt said in the comments, you need to restore the old HBITMAP that you replaced with SelectObject() before you destroy the HDC that owns it. You are selecting hBitmap into hCDC, and then destroying hCDC while hBitmap is still selected into it.
https://learn.microsoft.com/en-us/windows/win32/gdi/operations-on-graphic-objects
Each of these functions returns a handle identifying a new object. After an application retrieves a handle, it must call the SelectObject() function to replace the default object. However, the application should save the handle identifying the default object and use this handle to replace the new object when it is no longer needed. When the application finishes drawing with the new object, it must restore the default object by calling the SelectObject() function and then delete the new object by calling the DeleteObject() function. Failing to delete objects causes serious performance problems.
Also, you should free the GDI objects in the reverse order that you create them.
And, don't forget error handling.
Try something more like this instead:
VOID ScanRect(int x, int y, int iWidth, int iHeight) // 992, 96, 64, 80
{
    HDC hDC = GetDC(NULL);
    if (!hDC)
    {
        // error handling ...
    }
    else
    {
        HBITMAP hBitmap = CreateCompatibleBitmap(hDC, iWidth, iHeight);
        if (!hBitmap)
        {
            // error handling ...
        }
        else
        {
            HDC hCDC = CreateCompatibleDC(hDC); // compatible with screen DC
            if (!hCDC)
            {
                // error handling ...
            }
            else
            {
                HBITMAP hOldBitmap = (HBITMAP) SelectObject(hCDC, hBitmap);
                BitBlt(hCDC, 0, 0, iWidth, iHeight, hDC, x, y, SRCCOPY);
                SelectObject(hCDC, hOldBitmap);
                BITMAPINFO bmInfo = {0};
                bmInfo.bmiHeader.biSize = sizeof(bmInfo.bmiHeader);
                if (!GetDIBits(hCDC, hBitmap, 0, iHeight, NULL, &bmInfo, DIB_RGB_COLORS))
                {
                    // error handling ...
                }
                else
                {
                    HANDLE hHeap = GetProcessHeap();
                    LPVOID pMem = HeapAlloc(hHeap, HEAP_ZERO_MEMORY, 65536); // TODO: calculate a proper size based on bmInfo's pixel information ...
                    if (!pMem)
                    {
                        // error handling ...
                    }
                    else
                    {
                        int i = GetDIBits(hCDC, hBitmap, 0, iHeight, pMem, &bmInfo, DIB_RGB_COLORS);
                        HeapFree(hHeap, NULL, pMem);
                    }
                }
                DeleteDC(hCDC);
            }
            DeleteObject(hBitmap);
        }
        ReleaseDC(NULL, hDC);
    }
}
Note the TODO on the call to HeapAlloc(). You really should be calculating the buffer size based on the bitmap's actual width, height, pixel depth, scanline padding, etc. Don't use a hard-coded buffer size. I will leave that as an exercise for you to figure out. In this particular example, 64K is large enough for a 64x80 32bpp bitmap, but it wastes roughly 45K of memory.
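For reference, a sketch of that calculation (a helper of my own, not part of the answer above); DIB scanlines are padded up to a DWORD boundary:

DWORD DibBufferSize(const BITMAPINFOHEADER& bih)
{
    // each scanline is padded to a multiple of 4 bytes (one DWORD)
    DWORD stride = ((bih.biWidth * bih.biBitCount + 31) / 32) * 4;
    LONG  height = (bih.biHeight < 0) ? -bih.biHeight : bih.biHeight; // top-down DIBs have a negative height
    return stride * height;
}

For the 64x80 32bpp rectangle above, this yields 256 * 80 = 20480 bytes.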

Related

How to set alpha value for all pixels in a bitmap using MFC or GDI or GDI+

I am in an MFC application. I created a bitmap using a memory DC, and I want to save it to a DIB file.
I found this code to be most elegant so far:
void Save(CBitmap * bitmap)
{
    CImage image;
    image.Attach((HBITMAP)bitmap->GetSafeHandle());
    image.Save("bla.bmp", Gdiplus::ImageFormatBMP);
}
The resulting file is 32 BPP with all alpha values set to 0.
Now I want to use the bitmap as a toolbar bitmap:
CMFCToolBar::GetImages()->Load("bla.bmp");
But all the icons are gone.
MFC internally calls PreMultiplyAlpha() when importing the bitmap.
The RGB components of all pixels then become 0; effectively the whole bitmap is zeroed.
How can I set the alpha value for each pixel to '0xFF' before saving?
I tried:
void Save(CBitmap * bitmap)
{
    CImage image;
    image.Attach((HBITMAP)bitmap->GetSafeHandle());
    image.SetHasAlphaChannel(true);
    image.AlphaBlend(myBitmapDC, 0, 0);
    image.Save("bla.bmp", Gdiplus::ImageFormatBMP);
}
But that affects only RGB values of the pixels.
So far I have resisted iterating over each pixel and modifying the bitmap's memory directly. I'm asking for an elegant solution, maybe a one-liner.
Use GetDIBits to read the 32-bit pixel data, then loop through the pixels and set each alpha byte to 0xFF.
bool Save(CBitmap *bitmap, LPCTSTR filename)
{
    if (!bitmap)
        return false;
    BITMAP bm;
    bitmap->GetBitmap(&bm);
    if (bm.bmBitsPixel < 16)
        return false; // palettized formats not handled
    DWORD size = bm.bmWidth * bm.bmHeight * 4;
    BITMAPINFOHEADER bih = { sizeof(bih), bm.bmWidth, bm.bmHeight, 1, 32, BI_RGB };
    BITMAPFILEHEADER bfh = { 'MB', 54 + size, 0, 0, 54 }; // 'MB' stores as "BM"; 54 = sizeof(bfh) + sizeof(bih)
    CClientDC dc(0);
    std::vector<BYTE> vec(size, 0xFF);
    GetDIBits(dc, *bitmap, 0, bm.bmHeight, &vec[0],
              (BITMAPINFO*)&bih, DIB_RGB_COLORS);
    // force every alpha byte (BGRA layout, so every 4th byte) to 0xFF
    for (DWORD i = 0; i < size; i += 4)
        vec[i + 3] = 0xFF;
    CFile fout;
    if (fout.Open(filename, CFile::modeCreate | CFile::modeWrite))
    {
        fout.Write(&bfh, sizeof(bfh));
        fout.Write(&bih, sizeof(bih));
        fout.Write(&vec[0], size);
        return true;
    }
    return false;
}
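A hypothetical call, assuming an existing CBitmap (note the filename parameter, which the original snippet used without declaring):

CBitmap bmp;
// ... create or load bmp ...
Save(&bmp, _T("bla.bmp"));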
As an alternative (though I am not sure how reliable this is), initialize the buffer with 0xFF. GetDIBits will fill in the RGB bytes but may leave the alpha bytes untouched:
std::vector<BYTE> vec(size, 0xFF); // alpha bytes start out as 0xFF
GetDIBits(dc, *bitmap, 0, bm.bmHeight, &vec[0],
          (BITMAPINFO*)&bih, DIB_RGB_COLORS);
// no fix-up loop needed if GetDIBits leaves the alpha bytes alone
Or using GDI+
bool Save(CBitmap *bitmap)
{
    if (!bitmap)
        return false;
    BITMAP bm;
    bitmap->GetBitmap(&bm);
    if (bm.bmBitsPixel < 16)
        return false; // needs palette
    Gdiplus::GdiplusStartupInput tmp;
    ULONG_PTR token;
    Gdiplus::GdiplusStartup(&token, &tmp, NULL);
    Gdiplus::Bitmap *src = Gdiplus::Bitmap::FromHBITMAP(*bitmap, NULL);
    Gdiplus::Bitmap *dst = src->Clone(0, 0, src->GetWidth(), src->GetHeight(),
                                      PixelFormat32bppARGB);
    LPCOLESTR clsid_bmp = L"{557cf400-1a04-11d3-9a73-0000f81ef32e}"; // CLSID of the built-in BMP encoder
    CLSID clsid;
    CLSIDFromString(clsid_bmp, &clsid);
    bool result = dst->Save(L"file.bmp", &clsid) == 0; // 0 == Gdiplus::Ok
    delete src;
    delete dst;
    Gdiplus::GdiplusShutdown(token);
    return result;
}

SwapBuffer Nvidia Crash

EDIT: I thought this was restricted to Attribute-Created GL contexts, but it isn't, so I rewrote the post.
Hey guys, whenever I call SwapBuffers(hDC), I get a crash. If I create the context with WGL_CONTEXT_DEBUG_BIT_ARB, I get a
Too many posts were made to a semaphore.
from Windows as I call SwapBuffers. What could be the cause of this?
Update: No crash occurs if I don't draw, just clear and swap.
Here's a bit of the code with the irrelevant bits cut out:
static PIXELFORMATDESCRIPTOR pfd = // pfd Tells Windows How We Want Things To Be
{
sizeof(PIXELFORMATDESCRIPTOR), // Size Of This Pixel Format Descriptor
1, // Version Number
PFD_DRAW_TO_WINDOW | // Format Must Support Window
PFD_SUPPORT_OPENGL | // Format Must Support OpenGL
PFD_DOUBLEBUFFER, // Must Support Double Buffering
PFD_TYPE_RGBA, // Request An RGBA Format
32, // Select Our Color Depth
0, 0, 0, 0, 0, 0, // Color Bits Ignored
0, // No Alpha Buffer
0, // Shift Bit Ignored
0, // No Accumulation Buffer
0, 0, 0, 0, // Accumulation Bits Ignored
24, // 24Bit Z-Buffer (Depth Buffer)
0, // No Stencil Buffer
0, // No Auxiliary Buffer
PFD_MAIN_PLANE, // Main Drawing Layer
0, // Reserved
0, 0, 0 // Layer Masks Ignored
};
if (!(hDC = GetDC(windowHandle)))
return false;
unsigned int PixelFormat;
if (!(PixelFormat = ChoosePixelFormat(hDC, &pfd)))
return false;
if (!SetPixelFormat(hDC, PixelFormat, &pfd))
return false;
hRC = wglCreateContext(hDC);
if (!hRC) {
std::cout << "wglCreateContext Failed!\n";
return false;
}
if (wglMakeCurrent(hDC, hRC) == NULL) {
std::cout << "Make Context Current Second Failed!\n";
return false;
}
... // OGL Buffer Initialization
glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
glBindVertexArray(vao);
glUseProgram(myprogram);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, (void *)indexStart);
SwapBuffers(GetDC(window_handle));
I found the answer. The SwapBuffers(hDC) call was simply where the crash surfaced; it was not the cause. I believe the error was due to something related to my indices, since drawing only the first mesh in the model works as intended. Nvidia's driver crashed on the error, while Intel's carried on and disregarded it.
Nonetheless, thank you to Chris Becke for pointing out a future memory leak from calling GetDC(hwnd) on every swap.
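For completeness, a sketch of what avoiding that leak looks like (my illustration, not code from this thread): acquire the DC once, reuse it every frame, and release it at shutdown.

// at initialization (the same hDC used for SetPixelFormat above)
HDC hDC = GetDC(windowHandle);

// per frame: draw, then swap with the saved DC; no GetDC() here, so nothing leaks
glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
// ... draw calls ...
SwapBuffers(hDC);

// at shutdown
wglMakeCurrent(NULL, NULL);
wglDeleteContext(hRC);
ReleaseDC(windowHandle, hDC);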

Windows API `GetPixel()` always returns `CLR_INVALID`, but `SetPixel()` works well?

My OS is Windows 7 64-bit with a two-monitor display.
I use GetPixel(), but it always returns CLR_INVALID:
COLORREF result = GetPixel(dc,x,y);
GetDeviceCaps(RASTERCAPS) reports that RC_BITBLT is supported, and GetDeviceCaps(COLORMGMTCAPS) reports CM_GAMMA_RAMP.
Most importantly, if I call SetPixel(dc,x,y,RGB(250,250,250)) first and GetPixel(dc,x,y) afterwards, I can ALWAYS retrieve the correct result:
COLORREF result = SetPixel(dc,x,y,RGB(250,250,250));
COLORREF cr = GetPixel(dc,x,y);
So I think my coordinates should be alright. I have no idea why GetPixel() always returns CLR_INVALID while SetPixel() always works. Any suggestions?
From GetPixel documentation
A bitmap must be selected within the device context, otherwise,
CLR_INVALID is returned on all pixels.
Try the code below and see if it works for your device context. Note that the old bitmap is restored before the memory DC and bitmap are deleted, per the GDI rules quoted in the first answer above.
HDC dc = ... // <-- your device context
HDC memDC = CreateCompatibleDC(dc);
HBITMAP memBM = CreateCompatibleBitmap(dc, 1, 1);
HGDIOBJ oldBM = SelectObject(memDC, memBM);

int x = ... // point's coordinates
int y = ...

// copy the single pixel of interest into the memory DC, then read it from there
BitBlt(memDC, 0, 0, 1, 1, dc, x, y, SRCCOPY);
COLORREF cr = GetPixel(memDC, 0, 0);
std::cout << cr << std::endl;

SelectObject(memDC, oldBM); // restore before deleting
DeleteObject(memBM);
DeleteDC(memDC);

Allegro, sprites leaving trail

My sprites leave a trail behind when I move them.
I tried drawing the background every refresh, but then it starts flickering.
This is what I do:
// ...
int main(int argc, char *argv[])
{
    BITMAP *buffer = NULL;
    BITMAP *graphics = NULL;
    buffer = create_bitmap(SCREEN_W, SCREEN_H);
    graphics = load_bitmap("my_graphics.bmp", NULL);
    clear_to_color(screen, makecol(0, 0, 0));
    clear_to_color(buffer, makecol(0, 0, 0));
    while (!key[KEY_ESC])
    {
        // ...
        render_map(100, 100);
        // ...
    }
}

void render_map(int w, int h)
{
    // ...
    for (int i = 0; i < w * h; i++)
    {
        masked_blit(graphics, buffer, 0, 0, pos_x, pos_y, 32, 32);
    }
    // ...
    blit(buffer, screen, camera_x, camera_y, 0, 0, SCREEN_W, SCREEN_H);
    clear_to_color(buffer, makecol(0, 0, 0));
}
Thanks in advance for any help
Your code is a little hard to read, and you've left out big pieces of it, so it's hard to say for sure, but this line looks suspicious:
blit(buffer, screen, camera_x,camera_y,0,0,SCREEN_W, SCREEN_H);
When using a buffer, you will typically always call it like:
blit(buffer, screen, 0,0, 0,0, SCREEN_W,SCREEN_H);
and that is the only time you ever draw to the screen. So the steps are (see the sketch after this list):
clear the buffer (by drawing a background image, tileset, color, etc)
draw everything to the buffer
copy the buffer to the screen
repeat
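A minimal sketch of that loop, assuming the variables from the question (pos_x, pos_y, camera_x, camera_y); note the camera offset is applied while drawing into the buffer, not in the final buffer-to-screen blit:

while (!key[KEY_ESC])
{
    clear_to_color(buffer, makecol(0, 0, 0));                // 1. clear the buffer
    masked_blit(graphics, buffer, 0, 0,
                pos_x - camera_x, pos_y - camera_y, 32, 32); // 2. draw everything, offset by the camera
    blit(buffer, screen, 0, 0, 0, 0, SCREEN_W, SCREEN_H);    // 3. copy the buffer to the screen
}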

How to edit a BITMAP once it has been loaded by win32

Once I have loaded a BITMAP from file, with LoadImage:
HBITMAP renderBMP = (HBITMAP)LoadImage( NULL, filePath, IMAGE_BITMAP, 0, 0, LR_DEFAULTSIZE | LR_LOADFROMFILE );
is there a way to easily access and edit the pixels individually?
I can use this to get the bitmap object, but it doesn't seem to help,
BITMAP bm;
GetObject(renderBMP, sizeof(bm), &bm);
because the value of bmBits in the structure is 0.
UPDATE:
Now I am getting a bug with this solution:
struct Pixel { unsigned char r, g, b, a; };

void Frame::PushMemory(HDC hdc)
{
    BITMAPINFO bi;
    ZeroMemory(&bi.bmiHeader, sizeof(BITMAPINFOHEADER));
    bi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    GetDIBits(hdc, renderBMP, 0, bi.bmiHeader.biHeight, NULL, &bi, DIB_RGB_COLORS);
    /* Allocate memory for bitmap bits */
    Pixel* pixels = new Pixel[bi.bmiHeader.biHeight * bi.bmiHeader.biWidth];
    int n = sizeof(Pixel) * bi.bmiHeader.biHeight * bi.bmiHeader.biWidth;
    int m = bi.bmiHeader.biSizeImage;
    GetDIBits(hdc, renderBMP, 0, bi.bmiHeader.biHeight, pixels, &bi, DIB_RGB_COLORS);
    // Recompute the output
    //ComputeOutput(pixels);
    // Push back to windows
    //SetDIBits(hdc, renderBMP, 0, bi.bmiHeader.biHeight, pixels, &bi, DIB_RGB_COLORS);
    //delete pixels;
}
I get this error:
Run-Time Check Failure #2 - Stack around the variable 'bi' was corrupted.
The last three lines don't seem to matter whether commented in or not.
Use GetDIBits to access the pixels. It copies all of the pixels into the buffer you specify. After modifying the pixels, you can use SetDIBits to write them back to the bitmap.
EDIT:
Example of code:
LPVOID lpvBits = NULL; // pointer to bitmap bits array
BITMAPINFO bi;
ZeroMemory(&bi.bmiHeader, sizeof(BITMAPINFOHEADER));
bi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);

// first call fills in the header (including biSizeImage)
if (!GetDIBits(hDC, hBmp, 0, height, NULL, &bi, DIB_RGB_COLORS))
    return NULL;

/* Allocate memory for bitmap bits */
if ((lpvBits = new char[bi.bmiHeader.biSizeImage]) == NULL)
    return NULL;

// force BI_RGB so the second call doesn't write bitfield masks
// past the end of bi (see the first question above)
bi.bmiHeader.biCompression = BI_RGB;
if (!GetDIBits(hDC, hBmp, 0, height, lpvBits, &bi, DIB_RGB_COLORS))
    return NULL;

/* do something with the bits */
::SetDIBits(hDC, hBmp, 0, height, (LPVOID)lpvBits, &bi, DIB_RGB_COLORS);
If you pass the LR_CREATEDIBSECTION flag to LoadImage, it creates a special kind of bitmap with a user-mode memory section containing the bits of the bitmap.
GetObject on a DIB-section bitmap will fill in the bmBits pointer of the BITMAP structure, and can even fill in a whole DIBSECTION struct with extra data.
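A minimal sketch of that approach, reusing filePath and renderBMP from the question:

// load as a DIB section so the pixel bits are directly accessible
HBITMAP renderBMP = (HBITMAP)LoadImage(NULL, filePath, IMAGE_BITMAP, 0, 0,
                                       LR_DEFAULTSIZE | LR_LOADFROMFILE | LR_CREATEDIBSECTION);
DIBSECTION ds = {};
if (GetObject(renderBMP, sizeof(ds), &ds) == sizeof(ds))
{
    void* bits = ds.dsBm.bmBits; // non-NULL for a DIB section; pixels can be edited in place
    // ds.dsBmih describes the width, height, and bit depth; scanlines are DWORD-aligned
}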
