SwapBuffers Nvidia Crash - Windows

EDIT: I thought this was restricted to Attribute-Created GL contexts, but it isn't, so I rewrote the post.
Hey guys, whenever I call SwapBuffers(hDC), I get a crash. If I create the context with WGL_CONTEXT_DEBUG_BIT_ARB, I get a
Too many posts were made to a semaphore.
from Windows as I call SwapBuffers. What could be the cause of this?
Update: No crash occurs if I don't draw, just clear and swap.
Here's a bit of the code with the irrelevant bits cut out:
static PIXELFORMATDESCRIPTOR pfd = // pfd Tells Windows How We Want Things To Be
{
sizeof(PIXELFORMATDESCRIPTOR), // Size Of This Pixel Format Descriptor
1, // Version Number
PFD_DRAW_TO_WINDOW | // Format Must Support Window
PFD_SUPPORT_OPENGL | // Format Must Support OpenGL
PFD_DOUBLEBUFFER, // Must Support Double Buffering
PFD_TYPE_RGBA, // Request An RGBA Format
32, // Select Our Color Depth
0, 0, 0, 0, 0, 0, // Color Bits Ignored
0, // No Alpha Buffer
0, // Shift Bit Ignored
0, // No Accumulation Buffer
0, 0, 0, 0, // Accumulation Bits Ignored
24, // 24Bit Z-Buffer (Depth Buffer)
0, // No Stencil Buffer
0, // No Auxiliary Buffer
PFD_MAIN_PLANE, // Main Drawing Layer
0, // Reserved
0, 0, 0 // Layer Masks Ignored
};
if (!(hDC = GetDC(windowHandle)))
return false;
unsigned int PixelFormat;
if (!(PixelFormat = ChoosePixelFormat(hDC, &pfd)))
return false;
if (!SetPixelFormat(hDC, PixelFormat, &pfd))
return false;
hRC = wglCreateContext(hDC);
if (!hRC) {
std::cout << "wglCreateContext Failed!\n";
return false;
}
if (wglMakeCurrent(hDC, hRC) == NULL) {
std::cout << "Make Context Current Second Failed!\n";
return false;
}
... // OGL Buffer Initialization
glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
glBindVertexArray(vao);
glUseProgram(myprogram);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, (void *)indexStart);
SwapBuffers(GetDC(window_handle));

I found the answer. The SwapBuffers(hDC) call was simply where the error surfaced; it had nothing to do with the cause. I believe the error was due to something related to my indices, since if I draw only the first mesh in the model, everything works as intended. The Nvidia driver crashed on this error, while the Intel driver carried on and ignored it.
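(For anyone hitting the same thing: a quick, hypothetical sanity check is to verify every index against the vertex count before uploading the index buffer. This is only a sketch; indices and vertexCount are placeholder names, not from my code.)
// Hypothetical check: every index must address an existing vertex.
for (size_t i = 0; i < indexCount; ++i)
    assert(indices[i] < vertexCount);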
Nonetheless, thank you to Chris Becke for pointing out a potential memory leak from calling GetDC(hwnd) every frame without a matching ReleaseDC().
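For reference, a minimal sketch of that fix: reuse the hDC obtained during setup instead of calling GetDC() in the render loop, and release it on shutdown (where exactly the cleanup lives is an assumption about my code):
glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
glBindVertexArray(vao);
glUseProgram(myprogram);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, (void *)indexStart);
SwapBuffers(hDC); // reuse the DC from setup; GetDC() per frame leaks a reference

// On shutdown:
wglMakeCurrent(NULL, NULL);
wglDeleteContext(hRC);
ReleaseDC(windowHandle, hDC);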

Related

Call to GetDIBits() succeeds, but program terminates

When I call the following function in a Windows program, the program abruptly terminates.
The purpose of ScanRect() is to copy a rectangle at specified coordinates on the screen and load the pixel values into a memory buffer.
Every function call within ScanRect() succeeds, including both calls to GetDIBits(). The first call, with lpvBits set to NULL, causes it to fill the BITMAPINFOHEADER of bmInfo with information about the pixel data, reporting a value of 32 bits per pixel. The second call to GetDIBits() copies 80 lines of the rectangle into memory buffer pMem, returning the value 80 for the number of lines copied.
Everything seems to succeed, but then the program suddenly terminates. I inserted the line Sleep(8192) after the second call to GetDIBits(), and the program terminates after the 8 seconds have elapsed.
What is causing the program to terminate?
EDIT: the original code has been revised per suggestions in this thread. No errors are detected when the function runs, but the program still terminates unexpectedly. I realize the memory buffer size is hard-coded, but it is far bigger than needed for the rectangle used in testing, so that should not cause an error. Of course I will have the program compute the necessary buffer size once I find out why it is terminating.
VOID ScanRect(int x, int y, int iWidth, int iHeight) // 992, 96, 64, 80
{ HDC hDC = GetDC(NULL);
if (!hDC)
{
cout << "!hDC" << endl; // error handling ...
}
else
{ HBITMAP hBitmap = CreateCompatibleBitmap(hDC, iWidth, iHeight);
if (!hBitmap)
{
cout << "!hBitmap" << endl; // error handling ...
}
else
{ HDC hCDC = CreateCompatibleDC(hDC); // compatible with screen DC
if (!hCDC)
{
cout << "!hCDC" << endl; // error handling ...
}
else
{ HBITMAP hOldBitmap = (HBITMAP) SelectObject(hCDC, hBitmap);
BitBlt(hCDC, 0, 0, iWidth, iHeight, hDC, x, y, SRCCOPY);
BITMAPINFO bmInfo = {0};
bmInfo.bmiHeader.biSize = sizeof(bmInfo.bmiHeader);
if (!GetDIBits(hCDC, hBitmap, 0, iHeight, NULL, &bmInfo, DIB_RGB_COLORS))
{
cout << "!GetDIBits" << endl; // error handling ...
}
else
{ HANDLE hHeap = GetProcessHeap();
LPVOID pMem = HeapAlloc(hHeap, HEAP_ZERO_MEMORY, 65536); // TODO: calculate a proper size based on bmInfo's pixel information ...
if (!pMem)
{
cout << "!pMem" << endl;
}
else
{ int i = GetDIBits(hCDC, hBitmap, 0, iHeight, pMem, &bmInfo, DIB_RGB_COLORS);
cout << "i returned by GetDIBits() " << i << endl;
HeapFree(hHeap, NULL, pMem);
}
}
SelectObject(hCDC, hOldBitmap);
DeleteDC(hCDC);
}
DeleteObject(hBitmap);
}
ReleaseDC(NULL, hDC);
}
}
The biCompression value returned by the first GetDIBits() call is BI_BITFIELDS; before the second GetDIBits() call you need to set bmInfo.bmiHeader.biCompression = BI_RGB;. According to "c++ read pixels with GetDIBits()", setting it to BI_RGB is essential in order to avoid an extra 3 DWORDs being written at the end of the structure.
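In code, the adjustment between the two GetDIBits() calls looks roughly like this (a minimal sketch using the variables from the function above):
// First call only fills bmInfo; the driver reports BI_BITFIELDS here.
GetDIBits(hCDC, hBitmap, 0, iHeight, NULL, &bmInfo, DIB_RGB_COLORS);
// Ask for plain 32bpp RGB so the second call doesn't append the three
// colour-mask DWORDs after the BITMAPINFOHEADER.
bmInfo.bmiHeader.biCompression = BI_RGB;
int i = GetDIBits(hCDC, hBitmap, 0, iHeight, pMem, &bmInfo, DIB_RGB_COLORS);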
More details
As @BenVoigt said in the comments, you need to restore the old HBITMAP that you replaced with SelectObject() before you destroy the HDC that owns it. You are selecting hBitmap into hCDC, and then destroying hCDC before destroying hBitmap.
https://learn.microsoft.com/en-us/windows/win32/gdi/operations-on-graphic-objects
Each of these functions returns a handle identifying a new object. After an application retrieves a handle, it must call the SelectObject() function to replace the default object. However, the application should save the handle identifying the default object and use this handle to replace the new object when it is no longer needed. When the application finishes drawing with the new object, it must restore the default object by calling the SelectObject() function and then delete the new object by calling the DeleteObject() function. Failing to delete objects causes serious performance problems.
Also, you should free the GDI objects in the reverse order that you create them.
And, don't forget error handling.
Try something more like this instead:
VOID ScanRect(int x, int y, int iWidth, int iHeight) // 992, 96, 64, 80
{
HDC hDC = GetDC(NULL);
if (!hDC)
{
// error handling ...
}
else
{
HBITMAP hBitmap = CreateCompatibleBitmap(hDC, iWidth, iHeight);
if (!hBitmap)
{
// error handling ...
}
else
{
HDC hCDC = CreateCompatibleDC(hDC); // compatible with screen DC
if (!hCDC)
{
// error handling ...
}
else
{
HBITMAP hOldBitmap = (HBITMAP) SelectObject(hCDC, hBitmap);
BitBlt(hCDC, 0, 0, iWidth, iHeight, hDC, x, y, SRCCOPY);
SelectObject(hCDC, hOldBitmap);
BITMAPINFO bmInfo = {0};
bmInfo.bmiHeader.biSize = sizeof(bmInfo.bmiHeader);
if (!GetDIBits(hCDC, hBitmap, 0, iHeight, NULL, &bmInfo, DIB_RGB_COLORS))
{
// error handling ...
}
else
{
HANDLE hHeap = GetProcessHeap();
LPVOID pMem = HeapAlloc(hHeap, HEAP_ZERO_MEMORY, 65536); // TODO: calculate a proper size based on bmInfo's pixel information ...
if (!pMem)
{
// error handling ...
}
else
{
int i = GetDIBits(hCDC, hBitmap, 0, iHeight, pMem, &bmInfo, DIB_RGB_COLORS);
HeapFree(hHeap, NULL, pMem);
}
}
DeleteDC(hCDC);
}
DeleteObject(hBitmap);
}
ReleaseDC(NULL, hDC);
}
}
Note the TODO on the call to HeapAlloc(). You really should be calculating the buffer size from the bitmap's actual width, height, pixel depth, scanline padding, and so on. Don't use a hard-coded buffer size; I will leave that as an exercise for you. Although 64K is large enough for a 64x80 32bpp bitmap in this particular example, it simply wastes about 45K of memory.
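If it helps, a minimal sketch of that calculation (the standard DIB stride formula, not code from the answer above):
// Each scanline of a DIB is padded to a multiple of 4 bytes.
int stride = ((bmInfo.bmiHeader.biWidth * bmInfo.bmiHeader.biBitCount + 31) / 32) * 4;
SIZE_T bufSize = (SIZE_T)stride * iHeight; // 64x80 at 32bpp -> 256 * 80 = 20480 bytes
LPVOID pMem = HeapAlloc(hHeap, HEAP_ZERO_MEMORY, bufSize);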

Why would TextOut() be using a different coordinate system than AlphaBlend()?

I'm trying to write a text overlay function that generates a semitransparent background with text on it in the top right hand corner of the viewport. I wrote a test MFC application project with mostly default settings (I don't remember exactly, but AFAIK, none of the settings should cause the problems I'm seeing).
Here is the code:
void DrawSemitransparentRect(CDC& destDC, CRect rect, float percentGrayBackground, COLORREF overlayColour, float overlayPercentOpaque)
{
rect.NormalizeRect();
CDC temp_dc; // Temp dc for semitransparent text background
temp_dc.CreateCompatibleDC(&destDC);
CBitmap layer; // Layer for semitransparent text background
layer.CreateCompatibleBitmap(&destDC, 1, 1);
CBitmap* pOldBitmap = temp_dc.SelectObject(&layer);
BLENDFUNCTION blendFunction = { AC_SRC_OVER, 0, 0, 0 };
auto DrawSemitransparentRectHelper = [&](COLORREF colour, float transparency)
{
temp_dc.SetPixel(0, 0, colour);
blendFunction.SourceConstantAlpha = BYTE(transparency * 255 / 100);
// Draw semitransparent background
VERIFY(destDC.AlphaBlend(rect.left, rect.top, rect.Width(), rect.Height()
, &temp_dc, 0, 0, 1, 1, blendFunction));
};
// Lighten up the area to make more opaque without changing overlay colour.
DrawSemitransparentRectHelper(RGB(255, 255, 255), percentGrayBackground);
// Draw overlay colour
DrawSemitransparentRectHelper(overlayColour, overlayPercentOpaque);
temp_dc.SelectObject(pOldBitmap);
}
void DrawOverlayText(CDC & dc, CFont &windowFont, CRect const& windowRectDP, CString const& overlayText, CRect* pBoundingRectDP)
{
static bool debug = true;
int savedDC = dc.SaveDC();
::SetMapMode(dc.GetSafeHdc(), MM_TWIPS);
// Reset the window and viewport origins to (0, 0).
CPoint windowOrg, viewportOrg;
::SetWindowOrgEx(dc.GetSafeHdc(), 0, 0, &windowOrg);
::SetViewportOrgEx(dc.GetSafeHdc(), 0, 0, &viewportOrg);
LOGFONT logFont;// = { 12 * 10, 0, 0, 0, 100, 0, 0, 0, 0, 0, 0, 0, 255, _T("Times New Roman") };
windowFont.GetLogFont(&logFont);
logFont.lfHeight = 12 * 10; // 12 point font? Why isn't this *20? TWIPS are 20ths of a point.
// Font for the overlay text
CFont font;
font.CreatePointFontIndirect(&logFont, &dc);
CFont* pOldFont = dc.SelectObject(&font);
// window rect in Logical Points
CRect windowRect(windowRectDP);
dc.DPtoLP(windowRect);
// Get text extent in Logical Points
CRect textRect;
dc.DrawText(overlayText, textRect, DT_CALCRECT);
// inflation rectangle to add pixels around text
CRect inflate(8, 0, 8, 4);
dc.DPtoLP(&inflate);
// Create the bounding rect on the right hand of the view, making it a few pixels wider.
CRect boundingRect(textRect);
if (!debug)
{
boundingRect.InflateRect(inflate);
}
boundingRect.NormalizeRect();
boundingRect += CPoint(windowRect.Width() - boundingRect.Width(), 0);
CRect boundingRectDP(boundingRect);
if (pBoundingRectDP || !debug)
{
// Get the bounding rect in device coordinates
dc.LPtoDP(boundingRectDP);
*pBoundingRectDP = boundingRectDP;
}
if (!debug)
{
// round the bottom corners of the text box by clipping it
CRgn clip;
boundingRectDP.NormalizeRect();
clip.CreateRoundRectRgn(
boundingRectDP.left + 1 // +1 needed to make the rounded corner match the bottom-right corner more closely
, boundingRectDP.top - boundingRectDP.Height() // Getting rid of top rounded corners
, boundingRectDP.right
, boundingRectDP.bottom + 1
, 16, 16 // rounding corner may have to be more dynamic for different DPI screens
);
::SelectClipRgn(dc.GetSafeHdc(), (HRGN)clip.GetSafeHandle());
clip.DeleteObject();
}
// Calculate centre position of text
CPoint centrePos(
boundingRect.left + (boundingRect.Width() - textRect.Width()) / 2 + 1
, boundingRect.top + (boundingRect.Height() - textRect.Height()) / 2 + 1);
if (debug)
{
// in debug mode, output text and then put semitransparent bounding rect over it.
dc.SetBkMode(debug ? OPAQUE : TRANSPARENT);
dc.SetBkColor(RGB(255, 0, 0));
dc.SetTextColor(RGB(0, 0, 0));
dc.TextOut(centrePos.x, centrePos.y, overlayText);
DrawSemitransparentRect(dc, boundingRect, 60, RGB(0, .25 * 255, .75 * 255), 40);
}
else
{
// 2 pixel offset in Logical Points
CPoint textShadowOffset(2, 2);
dc.DPtoLP(&textShadowOffset);
// in !debug mode, output semitransparent bounding rect and then put text over it.
DrawSemitransparentRect(dc, boundingRect, 60, RGB(0, .25 * 255, .75 * 255), 40);
dc.SetBkMode(debug ? OPAQUE : TRANSPARENT);
dc.SetTextColor(RGB(0, 0, 0));
dc.TextOut(centrePos.x, centrePos.y, overlayText);
dc.SetTextColor(RGB(255, 255, 255));
dc.TextOut(centrePos.x - textShadowOffset.x, centrePos.y - textShadowOffset.y, overlayText);
}
// Restore DC's state
dc.SelectObject(pOldFont);
dc.RestoreDC(savedDC);
}
// OnPaint() function for CView derived class.
void COverlayOnCViewView::OnPaint()
{
CPaintDC dc(this); // device context for painting
CString m_overlayText = _T("abcdefg ABCDEFG");
CFont windowFont;
LOGFONT logFont = { -12, 0, 0, 0, 400, 0, 0, 0, DEFAULT_CHARSET, 0, 0, CLEARTYPE_QUALITY, 0, _T("Segoe UI") };
windowFont.CreatePointFontIndirect(&logFont, &dc);
CRect windowRect;
GetClientRect(windowRect);
DrawOverlayText(dc, windowFont, windowRect, m_overlayText, nullptr);
}
Now, this works perfectly well in the default project, where I get the following:
But when I put it into another preexisting project, I get this:
You can see that the text is actually positioned above the translucent rectangle.
If I move the rectangle down the height of the text box, by changing
boundingRect += CPoint(windowRect.Width() - boundingRect.Width(), 0);
to
boundingRect += CPoint(windowRect.Width() - boundingRect.Width(), textRect.Height());
I get:
It's like the text function is specifying the bottom left corner rather than the top left corner for placement.
I wrote the free functions so that it should work with any DC, even if that DC has had its coordinate system manipulated, but perhaps I've forgotten to reset something?
The default project is using MFC 14.0.24212.0, but the project I tried to import this code into is using MFC 12.0.21005.1. Could that be an issue? I'm not sure how to change the default project to use the earlier version of MFC to test that.
Edit
Note that in the default project, I could have put the code into the OnDraw() function like this:
void COverlayOnCViewView::OnDraw(CDC* pDC)
{
COverlayOnCViewDoc* pDoc = GetDocument();
ASSERT_VALID(pDoc);
if (!pDoc)
return;
// TODO: add draw code for native data here
CString m_overlayText = _T("abcdefg ABCDEFG");
CFont windowFont;
LOGFONT logFont = { -12, 0, 0, 0, 400, 0, 0, 0, DEFAULT_CHARSET, 0, 0, CLEARTYPE_QUALITY, 0, _T("Segoe UI") };
windowFont.CreatePointFontIndirect(&logFont, pDC);
CRect windowRect;
GetClientRect(windowRect);
DrawOverlayText(*pDC, windowFont, windowRect, m_overlayText, nullptr);
}
The only reason I didn't is that the application I'm putting this into doesn't have one, and I wanted to mimic that project as closely as possible. If you create a default application to test this, remember either to put the ON_WM_PAINT() macro in the message map or to use the OnDraw() function shown instead. They both seem to produce the same results in the default project.

Windows API `GetPixel()` always returns `CLR_INVALID`, but `SetPixel()` works well?

My OS is Windows 7 64-bit with a two-monitor display.
I use GetPixel(), but it always returns CLR_INVALID, like this:
COLORREF result = GetPixel(dc,x,y);
GetDeviceCaps(RASTERCAPS) reports that RC_BITBLT is supported.
GetDeviceCaps(COLORMGMTCAPS) returns CM_GAMMA_RAMP.
Most importantly, if I call SetPixel(dc,x,y,RGB(250,250,250)) in advance and GetPixel(dc,x,y) later, I can ALWAYS retrieve the correct result, like this:
COLORREF result = SetPixel(dc,x,y,RGB(250,250,250));
COLORREF cr = GetPixel(dc,x,y);
So I think my coordinates should be alright. I have no idea why GetPixel() always returns CLR_INVALID while SetPixel() always works. Any suggestions?
From the GetPixel documentation:
A bitmap must be selected within the device context, otherwise,
CLR_INVALID is returned on all pixels.
Try the below code and see if it works for your device context.
HDC dc = ... // <-- your device context
HDC memDC = CreateCompatibleDC(dc);
HBITMAP memBM = CreateCompatibleBitmap(dc, 1, 1);
HGDIOBJ oldBM = SelectObject(memDC, memBM); // remember the default bitmap
int x = ... // point's coordinates
int y = ...
BitBlt(memDC, 0, 0, 1, 1, dc, x, y, SRCCOPY); // copy the pixel into the memory DC
COLORREF cr = GetPixel(memDC, 0, 0);          // now read it back
std::cout << cr << std::endl;
SelectObject(memDC, oldBM); // restore the default bitmap before cleanup
DeleteObject(memBM);
DeleteDC(memDC);

OpenGL ES 2.0 Convert int[] to GLubyte[]

The following works as a texture...
GLubyte bytePix[4 * 3] ={
255, 0, 0, //red
0, 255, 0, //green
0, 0, 255, //blue
255, 255, 0 //yellow
};
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, pixelWidth, pixelHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, bytePix);
The problem is that I am passing in my BMP as an int[], so I would need something more like this...
int bytePix[4 * 3] ={
255, 0, 0, //red
0, 255, 0, //green
0, 0, 255, //blue
255, 255, 0 //yellow
};
But this doesn't show the same result.
My question is: how do I convert the latter into a GLubyte[] or some other recognizable format?
On your platform, sizeof(int) clearly isn't equal to sizeof(GLubyte). I guess the immediate question is — why are you using int? It's likely just to be a huge waste of space if you're storing only values in the range 0–255.
You can't just use GL_INT or GL_UNSIGNED_INT in place of GL_UNSIGNED_BYTE, even if they are the same size as your int as you're using only a byte's range within each integer.
That aside, you'll notice that glTexImage2D doesn't have a stride parameter unlike glVertexAttribPointer and most of the other functions that exist primarily to provide data. So even though you have your values within bytes and those bytes are a predictable space apart, OpenGL can't pull them apart and repack them for you.
So the easiest option is to do it yourself:
void glTexImage2DWithStride(..., GLsizei stride, ...)
{
// the following is written to assume GL_RGB; adapt as necessary
GLubyte *byteBuffer = (GLubyte *)malloc(width * height * 3);
for(int c = 0; c < width * height * 3; c++)
byteBuffer[c] = (GLubyte)originalBuffer[c]; // narrow each int (0-255) to a byte
glTexImage2D(..., byteBuffer, ...);
free(byteBuffer);
}
Failing that, supposing your int is four times as large as a byte, you could upload the original as an RGBA texture that's four times as large as its real size, then shrink it down in a shader, combining the .r or .a components (depending on your endianness) into the correct output channels.
Since ubyte and int differ in size, I guess you have to create a new ubyte array and convert explicitly, element by element, with a for loop before passing it to OpenGL.
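For example, a minimal sketch of that conversion (assuming one int per colour channel, as in the arrays above; intPix is a placeholder name for the original int array):
GLubyte bytePix[4 * 3];
for (int i = 0; i < 4 * 3; ++i)
    bytePix[i] = (GLubyte)intPix[i]; // narrow each 0-255 int to one byte
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, pixelWidth, pixelHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, bytePix);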

Allegro, sprites leaving trail

My problem is that my sprites leave a trail behind when I move them.
I tried drawing the background on every refresh, but then it starts flickering.
This is what I do:
// ...
int main(int argc, char *argv[])
{
BITMAP *buffer = NULL;
BITMAP *graphics = NULL;
buffer = create_bitmap(SCREEN_W, SCREEN_H);
graphics = load_bitmap("my_graphics.bmp", NULL);
clear_to_color(screen, makecol(0, 0, 0));
clear_to_color(buffer, makecol(0, 0, 0));
while(!key[KEY_ESC])
{
// ...
render_map(100,100);
// ...
}
}
void render_map(int w, int h)
{
// ...
for(int i=0;i < w * h;i++)
{
masked_blit(graphics, buffer, 0, 0, pos_x, pos_y, 32, 32);
}
// ...
blit(buffer, screen, camera_x,camera_y,0,0,SCREEN_W, SCREEN_H);
clear_to_color(buffer, makecol(0, 0, 0));
}
Thanks in advance for any help
Your code is a little hard to read, and you've left out big pieces of it. So it's hard to say for sure, but this line looks suspicious:
blit(buffer, screen, camera_x,camera_y,0,0,SCREEN_W, SCREEN_H);
When using a buffer, you will typically always call it like:
blit(buffer, screen, 0,0, 0,0, SCREEN_W,SCREEN_H);
and that is the only time you ever draw to the screen. So the steps are:
clear the buffer (by drawing a background image, tileset, color, etc)
draw everything to the buffer
copy the buffer to the screen
repeat
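Putting those steps together, the loop might look roughly like this (a minimal sketch using the Allegro 4 calls from your code; it assumes render_map() is changed to draw only into buffer and no longer blits to the screen or clears anything itself):
while (!key[KEY_ESC])
{
    clear_to_color(buffer, makecol(0, 0, 0));              // 1. clear the buffer
    render_map(100, 100);                                   // 2. draw everything to the buffer
    blit(buffer, screen, 0, 0, 0, 0, SCREEN_W, SCREEN_H);   // 3. copy the buffer to the screen
}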
