My OpenGL game switches Aero DWM Glass off - windows

I wrote a free game a few years ago: http://www.walkover.org. For the lobby and menus it uses normal Win32 dialogs. When the actual game starts, it uses OpenGL.
Now, on Windows 7, when the actual game starts, it switches Windows Aero Glass off, and switches it back on when the game is over.
Is there something I can do to prevent this from happening? Some special flags that keep the glass on if it is on? (For newer projects I have been using DirectX, and this doesn't happen there.) Maybe some (new) flag I have to specify somewhere?
I'm using this pixelformatdescriptor:
static PIXELFORMATDESCRIPTOR pfd =
{
    sizeof(PIXELFORMATDESCRIPTOR),  // size of this pfd
    1,                              // version number
    PFD_DRAW_TO_WINDOW |            // support window
    PFD_SUPPORT_OPENGL |            // support OpenGL
    PFD_DOUBLEBUFFER,               // double buffered
    PFD_TYPE_RGBA,                  // RGBA type
    32,                             // 32-bit color depth
    0, 0, 0, 0, 0, 0,               // color bits ignored
    0,                              // no alpha buffer
    0,                              // shift bit ignored
    0,                              // no accumulation buffer
    0, 0, 0, 0,                     // accum bits ignored
    0,                              // no z-buffer
    0,                              // no stencil buffer
    0,                              // no auxiliary buffer
    PFD_MAIN_PLANE,                 // main layer
    0,                              // reserved
    0, 0, 0                         // layer masks ignored
};

This problem is caused by all of the following:
Windows 7
OpenGL
Creating a window that matches your screen rect exactly (Fullscreen)
Calling SwapBuffers for the second time
Old graphics card drivers
I have the same problem. I tested two AAA game engines that use OpenGL and can confirm that they both had the same problem; DirectX seems unaffected.
This means your window creation is fine; it is not a code problem.
I would recommend updating your graphics card drivers to solve the problem.
A short-term workaround I found was to create a window that is 1 pixel larger than your screen resolution (in either width or height).
This tricks Windows into not detecting that you have gone into fullscreen, so the problem is not triggered.
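A rough Win32 sketch of that oversize-window trick (the class name "GLWindowClass" is a placeholder, not from the original post):
#include <windows.h>

// Create a borderless window one pixel wider than the desktop, so its rect
// no longer matches the screen rect exactly and the fullscreen path is avoided.
HWND CreateOversizedFullscreenWindow(HINSTANCE hInstance)
{
    int screenW = GetSystemMetrics(SM_CXSCREEN);
    int screenH = GetSystemMetrics(SM_CYSCREEN);
    return CreateWindowEx(0, TEXT("GLWindowClass"), TEXT("Game"),
                          WS_POPUP | WS_VISIBLE,
                          0, 0, screenW + 1, screenH,
                          NULL, NULL, hInstance, NULL);
}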

I think this can happen if you create an OpenGL rendering context that is incompatible with Aero Glass. IIRC one of the cases that can cause this is if you create a window using 16-bit colour. Aero Glass can't render in 16-bit colour, so for some technical reason, Windows has to completely disable Aero Glass (even though only your app is using it). Try using 32-bit colour.
There might be other settings that disable it for similar reasons. In short, double check all the settings you're creating OpenGL with, and think "would Aero Glass switch off if the desktop was switched to this mode?".

Not sure exactly what you're doing that's causing this, but it's probably just as well -- Microsoft's OpenGL implementation in Windows 7 is not exactly perfect, to put it nicely. In fact, in most of my OpenGL code I've taken to explicitly turning off the DWM; otherwise, bugs in Microsoft's implementation basically stop it from working at all. Though I can work around that to some extent in other ways, the performance is so poor that it renders the code essentially unusable anyway (I haven't done serious measurements, but at an immediate guess I'd say at least a 5:1 performance loss).
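For reference, turning composition off programmatically goes through the DWM API; a minimal sketch, assuming Windows Vista/7 (from Windows 8 onward the call is documented as having no effect):
#include <windows.h>
#include <dwmapi.h>   // link with dwmapi.lib

// Disable desktop composition for the lifetime of the process;
// Windows restores it automatically when the process exits.
void DisableAeroComposition()
{
    DwmEnableComposition(DWM_EC_DISABLECOMPOSITION);
}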

I once encountered a problem in my application where GDI couldn't read OpenGL-rendered window contents when trying to BitBlt from the window DC. This was back when Vista had just come out. Using PFD_SUPPORT_GDI would work around this, but it would also disable Aero. I guess this issue broke a lot of old apps, so they forced it to be on in some cases. Not sure what your case is here; I would bet on a driver stub issue.
Just my own limited experience with OpenGL, Aero and GDI.
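A minimal sketch of what that PFD_SUPPORT_GDI workaround looks like (note that MSDN documents PFD_SUPPORT_GDI as incompatible with PFD_DOUBLEBUFFER, so this implies a single-buffered format, and, as said, it disables Aero):
PIXELFORMATDESCRIPTOR pfd = {0};
pfd.nSize      = sizeof(pfd);
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_SUPPORT_GDI; // no PFD_DOUBLEBUFFER
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.iLayerType = PFD_MAIN_PLANE;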

Differences between GetDC() and BeginPaint()?

I am having trouble with some of my owner drawn listboxes on High DPI monitors on Windows 10 in a dialog box. The text is chopped off at the bottom. We saw the problem on Windows 7 and were able to fix it. It is not necessarily High DPI, but when the user sets a different text scaling. I solved the problem, so I thought (!), by using a CClientDC (wrapper around GetDC()) and calling GetTextMetrics() to determine the text height. Previously, our icons had always been taller than our text so it was not a problem. With larger DPI monitors we saw some customers reporting problems when they scaled the text.
Now we are getting new reports under Windows 10. The former problem is fine under Windows 7--but Windows 7 only scales to 100, 125, and 150 percent. Windows 10 (and maybe 8? -- but no customer reports) allows user defined scaling.
So, I tracked down the problem somewhat... I knew what the font height was when I called GetTextMetrics() during WM_MEASUREITEM. I went and put some code in to debug what GetTextMetrics() was during my WM_DRAWITEM. Well, they were different--20 pixels high during WM_MEASUREITEM, and 25 pixels high during WM_DRAWITEM. Obviously, that is a problem. I want the GetTextMetrics() to have the same results in both places.
My thought was that the only real difference I could think of was that during WM_MEASUREITEM I am calling GetDC() via the CClientDC constructor, and that during WM_DRAWITEM I am using an already constructed HDC (which probably came from a BeginPaint() call inside GDI32.dll or another system DLL).
I thought maybe BeginPaint() does something like select the window's HFONT into the HDC...
So, inside my WM_MEASUREITEM after getting the DC, I select the font of the listbox into the HDC, and then I call GetTextMetrics(). Lo and behold, the numbers match now in WM_MEASUREITEM and WM_DRAWITEM.
However, I don't know if I just got lucky. It's all just guesswork at this point.
Does BeginPaint() select the window font into the DC whereas GetDC() does not? Does the default handler of WM_PAINT for an owner drawn LISTBOX or COMBOBOX do something like select the window font into the paint DC?
BOOL DpiAwareMeasureGraphItem(LPMEASUREITEMSTRUCT lpM, CWnd* pWnd)
{
    int iItemHeight = INTERG_BITMAP_HEIGHT + 4;
    if (pWnd)
    {
        CClientDC dc(pWnd);
        if (dc.GetSafeHdc())
        {
            CFont* pOldFont = dc.SelectObject(pWnd->GetFont()); // seems to fix it on Windows 10, but is it luck?
            TEXTMETRIC tm;
            memset(&tm, 0, sizeof(tm));
            dc.GetTextMetrics(&tm);
            LONG tmHeight = tm.tmHeight + 4; // pad
            iItemHeight = max(iItemHeight, tmHeight);
            dc.SelectObject(pOldFont);
        }
    }
    lpM->itemHeight = iItemHeight;
    return (TRUE);
}
Neither GetDC() nor BeginPaint() initialises the DC it returns with anything other than the default system font. But WM_DRAWITEM is different - it gives you an already-initialised DC to draw into.
The method you stumbled across is the right one. WM_MEASUREITEM doesn't supply a DC at all, so if you need one for size calculations you're responsible for obtaining it and setting it up with the appropriate font.
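For contrast, a bare Win32 sketch of the draw side (not the asker's code): the DC handed to you for WM_DRAWITEM already has the control's font selected, so GetTextMetrics() there reflects the scaled font without any SelectObject() call.
#include <windows.h>

// Sketch of an owner-draw handler: the DC in the DRAWITEMSTRUCT already has
// the control's font selected, so text metrics match what will be drawn.
void DrawListItem(const DRAWITEMSTRUCT* dis)
{
    TEXTMETRIC tm;
    GetTextMetrics(dis->hDC, &tm);   // same height the measure step should use

    RECT rc = dis->rcItem;
    DrawText(dis->hDC, TEXT("item text"), -1, &rc, DT_SINGLELINE | DT_VCENTER);
}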

glfwSwapBuffers() and vertical refresh on Windows

I want to do something that is very trivial with OpenGL and GLFW: I want to scroll a 100x100 white filled rectangle from left to right and back again. The rectangle should be moved by 1 pixel per frame and the scrolling should be perfectly smooth. This is my code:
#include <GLFW/glfw3.h>

int main(void)
{
    GLFWwindow* window;
    int i = 0, mode = 0;

    if(!glfwInit()) return -1;

    window = glfwCreateWindow(640, 480, "Hello World", NULL, NULL);
    if(!window) {
        glfwTerminate();
        return -1;
    }

    glfwMakeContextCurrent(window);
    glfwSwapInterval(1);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, 640, 0, 480, -1, 1);
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_2D);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glColor3f(1.0, 1.0, 1.0);

    while(!glfwWindowShouldClose(window)) {
        glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
        glRecti(i, 190, i + 100, 290);
        glfwSwapBuffers(window);
        glfwPollEvents();

        if(!mode) {
            i++;
            if(i >= 539) mode = 1;
        } else {
            i--;
            if(i <= 0) mode = 0;
        }
    }

    glfwTerminate();
    return 0;
}
On Mac OS X and Linux this code is working really fine. The scrolling is perfectly sync'ed with the vertical refresh and you cannot see any stuttering or flickering. It is really perfectly smooth.
On Windows, things are more difficult. By default, glfwSwapInterval(1) doesn't have any effect on Windows when desktop compositing is enabled. GLFW docs say that this has been done because enabling the swap interval with DWM compositing enabled can lead to severe jitter. This behaviour can be changed by compiling GLFW with GLFW_USE_DWM_SWAP_INTERVAL defined. In that case, the code above works really fine on Windows as well. The scrolling is perfectly sync'ed and there is no jitter. I tested it on a variety of different machines running XP, Vista, 7, and 8.
However, there has to be a very good reason why the GLFW authors disabled the swap interval on Windows by default, so I suppose there are many configurations where it will indeed lead to severe jitter; maybe I was just lucky that none of my machines showed it. Defining GLFW_USE_DWM_SWAP_INTERVAL is therefore not a solution I can live with, although it somewhat escapes me that the GLFW team didn't come up with a nicer solution, because as it stands, GLFW programs aren't really portable because of this issue. Take the code above as an example: it will be perfectly synced on Linux and OS X, but on Windows it will run at lightning speed. This somewhat defies GLFW's concept of portability in my eyes.
Given the situation that GLFW_USE_DWM_SWAP_INTERVAL cannot be used on Windows because the GLFW team explicitly warns about its use, I'm wondering what else I should do. The natural solution is of course a timer which measures the time and makes sure that glfwSwapBuffers() is not called more often than the monitor's vertical refresh rate.
However, this also is not as simple as it sounds, since I cannot use Sleep() because it is much too imprecise. Hence, I'd have to use a polling loop with QueryPerformanceCounter(). I tried this approach and it pretty much works, but CPU usage is of course up to 100% now because of the polling loop. When using GLFW_USE_DWM_SWAP_INTERVAL, on the other hand, CPU usage is at a mere 1%.
An alternative would be to set up a timer that fires at regular intervals but AFAIK the precision of CreateTimerQueueTimer() is not very satisfying and probably doesn't yield perfectly sync'ed results.
To cut a long story short: I'd like to ask what is the recommended way of dealing with this problem? The example code above is of course just for illustration purposes. My general question is that I'm looking for a clean way to make glfwSwapBuffers() swap buffers in sync with the monitor's vertical refresh on Windows. On Linux and Mac OS X this is already working fine but on Windows there is the problem with severe jitter that the GLFW docs talk about (but which I don't see here).
I'm still somewhat puzzled that GLFW doesn't provide an inbuilt solution to this problem and pretty much leaves it up to the programmer to workaround this. I'm still a newbie to OpenGL but from my naive point of view, I think that having a function that swaps buffers in sync with vertical refresh is a feature of fundamental importance so it escapes me why GLFW doesn't have it on Windows.
So once again my question is: How can I workaround the problem that glfwSwapInterval() doesn't work correctly on Windows? What is the suggested approach to solve this problem? Is there a nicer way than using a poll timer that will hog the CPU?
I think your issue has solved itself by a strange coincidence in timing. This commit was added to GLFW's master branch just a few days ago, and it removes GLFW_USE_DWM_SWAP_INTERVAL because GLFW now uses the DWM's DwmFlush() API to do the syncing when DWM is in use. The changelog for this commit includes:
[Win32] Removed GLFW_USE_DWM_SWAP_INTERVAL compile-time option
[Win32] Bugfix: Swap interval was ignored when DWM was enabled
So probably grabbing the newest git HEAD for GLFW is all you need to do.
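If updating GLFW isn't an option, a comparable hand-rolled sketch of the same idea (assuming you leave the WGL swap interval off while composition is active):
#include <windows.h>
#include <dwmapi.h>   // link with dwmapi.lib
#include <GLFW/glfw3.h>

// Swap, then let the DWM pace the frame when composition is enabled;
// with composition off, the normal swap interval takes over.
static void swapSynced(GLFWwindow* window)
{
    glfwSwapBuffers(window);

    BOOL composited = FALSE;
    if (SUCCEEDED(DwmIsCompositionEnabled(&composited)) && composited)
        DwmFlush();   // blocks until the next DWM composition pass
}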

OpenGL 3.1+ with Ruby

I followed this post to play with OpenGL (programmable pipeline) in Ruby.
Basically, I'm just trying to create a blue window, and here's the code.
Ray::GL.major_version = 3
Ray::GL.minor_version = 2
Ray::GL.core_profile = true # if you want/need one
window = Ray::Window.new("Test Window", [800, 600])
window.make_current
glClearColor(0, 0, 1, 1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
Instead, I got a white window created. This indicated that I was missing something, but I couldn't figure out what I was missing as the resources for OpenGL on Ruby seemed limited. I have been searching all over the web, but all I found was fixed-pipeline OpenGL stuff for Ruby.
Yes, I could use Ray's built-in functions to set the background color and draw stuff, but I didn't want to do that. I just wanted to use Ray to setup the window, then called OpenGL APIs directly. However, I couldn't figure out what I was missing in the code above.
I would greatly appreciate any hint or pointer on this (maybe I needed to swap the buffer? but then I didn't know how to do it with Ray). Is anybody familiar with Ray who can give me some hints on this?
Or, are there any other tools that would let me set up an OpenGL binding (for the non-fixed pipeline)?
It would appear that you set the clear color to blue, then cleared the back buffer to make it blue. But, as you said, you have not swapped the buffers to put the back buffer onto your screen. As far as swapping buffers goes, here's another answer from Stack Overflow:
"Swapping the front and back buffer of a double buffered window is a function provided by the underlying graphics system, i.e. Win32 GDI or X11 GLX. The functions you're looking for are wglSwapBuffers and/or glXSwapBuffers. On MacOS X, NSOpenGLViews are automatically swapped.
However, most likely you're using some framework, like GLUT, GLFW or Qt, which provides a portable wrapper around those functions. Read the framework's documentation."
I've never used Ray, so I'd say just keep rooting around in the documentation or look through example projects to see how buffer swapping is done.

Setting up a Win32 OpenGL Window with a GL_RGBA Color Buffer

I am trying to set up an OpenGL window with an alpha channel in its color buffer. Unfortunately, my current setup is creating a GL_RGB back and front buffer (as reported by gDEBugger, and as shown by my experiments).
I set up the window like so:
PIXELFORMATDESCRIPTOR pfd;
ZeroMemory(&pfd,sizeof(pfd));
pfd.nSize = sizeof(pfd);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 24; //note by docs that alpha doesn't go here (though putting 32 changes nothing)
pfd.cDepthBits = 24;
pfd.iLayerType = PFD_MAIN_PLANE;
I have also tried more specifically:
PIXELFORMATDESCRIPTOR pfd = {
    sizeof(PIXELFORMATDESCRIPTOR),
    1,
    PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
    PFD_TYPE_RGBA,
    24,
    8,0, 8,8, 8,16, 8,24, // A guess for little endian; usually these are 0 (making them 0 doesn't help)
    0, 0,0,0,0,
    24, // depth
    8,  // stencil
    0,  // aux
    PFD_MAIN_PLANE,
    0,
    0, 0, 0
};
My understanding is that when I call ChoosePixelFormat later, it's returning a format without an alpha channel (though why, I don't know).
I should clarify that when I say alpha buffer, I mean just a simple alpha buffer for rendering purposes (each color has an alpha value that fragments can be tested against, and so on). I do NOT mean a semi-transparent window or some other effect. [EDIT: So no, I am not at this time interested in making the window itself transparent. I just want an alpha channel for the default framebuffer.]
[EDIT: This is part of a cross-platform windowing backend I'm writing. My code is always portable. I am not using a library that provides this functionality since I need more control.]
There are two key points to consider here:
You can do this with the default framebuffer. However, you need to request it properly. Windows' default selection mechanism doesn't seem to weight having RGBA very highly. The best course seems to be to enumerate all possible pixel formats and then select the one you want "manually", as it were (see the sketch after these two points). By doing this, I was also able to specify that I wanted a 24-bit depth buffer, an accumulation buffer, and an 8-bit stencil buffer.
The OpenGL context must be fully valid for alpha blending, depth testing, and any even remotely advanced techniques to work. Curiously, I was able to get rendered triangles without having a fully valid OpenGL context, leading to my confusion! Maybe it was being emulated in software? I figured it out when glewInit (not to mention making a few VBOs) failed miserably. The key point is that the OpenGL context must be valid. In my case, I wasn't setting it up properly.
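A rough sketch of that enumeration approach (not the toolkit's actual code): walk every format the driver exposes with DescribePixelFormat() and keep the first one that really has an alpha channel.
#include <windows.h>

// Enumerate all pixel formats on the DC and return the index of the first
// double-buffered RGBA format with an 8-bit alpha channel (0 if none found).
int ChooseRgbaPixelFormat(HDC hdc)
{
    // With a NULL descriptor, DescribePixelFormat returns the number of formats.
    int count = DescribePixelFormat(hdc, 1, sizeof(PIXELFORMATDESCRIPTOR), NULL);

    for (int i = 1; i <= count; ++i)
    {
        PIXELFORMATDESCRIPTOR pfd;
        DescribePixelFormat(hdc, i, sizeof(pfd), &pfd);

        const DWORD required = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
        if ((pfd.dwFlags & required) == required &&
            pfd.iPixelType   == PFD_TYPE_RGBA &&
            pfd.cColorBits   >= 24 &&
            pfd.cAlphaBits   == 8  &&
            pfd.cDepthBits   >= 24 &&
            pfd.cStencilBits >= 8)
        {
            return i;   // pass to SetPixelFormat(hdc, i, &pfd)
        }
    }
    return 0;
}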
This problem was in the context of my currently writing a lightweight cross-platform windowing toolkit. Currently, it supports Windows through the Win32 API, Linux through X11, and I just started porting to Mac OS through X11.
At the request of some in the comments, I hereby present my humble effort thereof that others may benefit. Mac support doesn't work yet, menus aren't implemented on Linux, and user input on Linux is only half-there. As of 1/18/2013, it may temporarily be found here (people of the future, I may have put new versions on my website (look for "Portcullis")).

How to toggle fullscreen on Mac OSX using SDL 1.3 so that it actually works?

There is SDL_WM_ToggleFullScreen. However, on Mac, its implementation destroys the OpenGL context, which destroys your textures along with it. Ok, annoying, but I can reload my textures. However, when I reload my textures after toggling, it crashes on certain textures inside of a memcpy being called by glTexImage2D. Huh, it sure didn't crash when I loaded those textures the first time around. I even try deleting all my textures before the toggle, but I get the same result.
As a test, I reload textures without toggling fullscreen, and the textures reload fine. So, toggling the fullscreen does something funny to OpenGL. (And, by "toggling", I mean going in either direction: windowed->fullscreen or fullscreen->windowed - I get the same results.)
As an alternative, this code seems to toggle fullscreen as well:
SDL_Surface *surface = SDL_GetVideoSurface();
Uint32 flags = surface->flags;
flags ^= SDL_FULLSCREEN;
SDL_SetVideoMode(surface->w, surface->h, surface->format->BitsPerPixel, flags);
However, calling SDL_GetError after this code says that the "Invalid window" error was set during the SDL_SetVideoMode. And if I ignore it and try to load textures, there is no crashing, but my textures don't show up either (perhaps OpenGL is just immediately returning from glTexImage2D without actually doing anything). As a test, at startup time, I try calling SDL_SetVideoMode twice in a row to perform a toggle right before I even load my textures the first time. In that particular case, the textures do actually show up. Huh, what?
I am using SDL 1.3.0-6176 (posted on the SDL website January 7, 2012).
Update:
My texture uploading code is below (nothing surprising here). For clarification, this application (including the code below) is already working without any issues as a finished application for iOS, Android, PSP, and Windows. Windows is the only other version, other than Mac, that is using SDL under the hood.
unsigned int Texture::init(const void *data, int width, int height)
{
    unsigned int textureID;

    // Create a texture object and upload the raw RGBA pixel data.
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_2D, textureID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);

    return textureID;
}
it crashes on certain textures inside of a memcpy being called by glTexImage2D.
This shouldn't happen. There's some bug somewhere, either in your code or that of the OpenGL implementation. And I'd say it's yours. So please post your texture allocation and upload code for us to see.
However, on Mac, its implementation destroys the OpenGL context, which destroys your textures along with it.
Yes, unfortunately this is the way it works on MacOS X. And due to the design of its OpenGL driver model, there's little you can do about it (did I mention that MacOS X sucks; oh yes, I did. On occasion. Like over a hundred times). The same badly designed driver model also makes MacOS X so slow in catching up with OpenGL development.
