As described on MSDN, it is permissible to draw outside of WM_PAINT. Does this also apply when the window is minimized?
I did some tests: GetDC(hwnd) returns a device context even when the window is minimized, and drawing to it doesn't cause any problems, although in practice nothing is drawn, of course, because the window isn't visible.
This is perfectly fine with me. I'm only asking to learn whether it is harmless to draw to a minimized window outside WM_PAINT, or whether I have to write code that checks for the minimized state and skips drawing in that case.
Of course you can draw outside of WM_PAINT. WM_PAINT is sent to your window to indicate that some area of the window must be redrawn, for example when another window such as a dialog box is popped on top of yours and later removed.
There are many legitimate reasons to draw outside of WM_PAINT, for example when you need to update an animation of some sort.
Just keep in mind that all drawing commands are subject to clipping regions, outside of which they have no effect.
Note that this applies to drawing on actual windows; drawing to in-memory bitmaps and printer contexts can be done at any time.
It's very likely that a mature platform like Windows has defensive code that turns that sort of operation into a harmless no-op. But the documentation says that drawing should be done in response to a WM_PAINT message, and you should obey that. Otherwise you don't have a leg to stand on if the program develops a bug.
For hobby programming, that hardly matters. For commercial programming, it could mean the difference between being sued or not sued for damages.
When a window is resized by Aero Snap, User32.GetWindowPlacement(hWnd).rcNormalPosition still stores its original rectangle, while User32.GetWindowRect is affected.
Since Aero Snap seems independent of WINDOWPLACEMENT, we cannot collect complete information about the actual placement using user32.dll alone. So I'm wondering whether there's a way to get the Aero Snap state of a window, indicating whether the window is docked and to which side it is docked.
Aero Snap is a feature of the Shell, not the windowing system. Thus, the windowing system cannot provide that information, because it is not aware of those states.
And the Shell doesn't make this information available either. So, in essence, the system doesn't provide the Aero Snap state of any given window through a public API.
I like having my main windows remember all of their placement information so that it can be restored when they are restarted. In the past, it was enough to save a copy of the window placement structure and to set it back when recreating the window.
The introduction of snap required keeping some extra information. I detected whether a window appeared to be snapped by comparing its window rectangle to the work area rectangle of the monitor that contains the window. If it seemed to be snapped to one of the edges, I recorded that along with the placement information. Upon creating the window, I first restore the window placement, and then, if I have a snap state recorded, I change the window's size and position accordingly.
You can distinguish between a window that's been snapped to a monitor edge from one that's been carefully sized and placed there because the snapped window's rectangle won't match the one in the window placement.
This approach worked great in Windows 7. I recently discovered that Windows 10 added more flexibility to the snap locations and sizes, as well as playing more games to achieve the annoyingly invisible resize borders. So my detection code doesn't always recognize a snapped window, but that should be fixable.
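As a rough sketch of the detection described above (the Rect struct and all function names here are mine, not Win32 types; real code would fill these rectangles from GetWindowRect, GetMonitorInfo, and GetWindowPlacement's rcNormalPosition):

```cpp
#include <cassert>

// Minimal stand-in for the Win32 RECT (left/top/right/bottom, top-left origin).
struct Rect { long left, top, right, bottom; };

enum class SnapSide { None, Left, Right };

// A window looks snapped to a side when its rectangle spans the full height
// of the monitor's work area, hugs one of its vertical edges, and differs
// from the rectangle stored in the window placement (a carefully hand-placed
// window would match its placement rectangle).
SnapSide DetectSnap(const Rect& window, const Rect& workArea, const Rect& placement) {
    bool fullHeight = window.top == workArea.top && window.bottom == workArea.bottom;
    bool differsFromPlacement =
        window.left != placement.left || window.top != placement.top ||
        window.right != placement.right || window.bottom != placement.bottom;
    if (!fullHeight || !differsFromPlacement) return SnapSide::None;
    if (window.left == workArea.left)   return SnapSide::Left;
    if (window.right == workArea.right) return SnapSide::Right;
    return SnapSide::None;
}
```

Windows 10's more flexible snap sizes (quarter snapping, adjustable splits) would need looser comparisons than these exact-equality checks.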
Some popular software (like the major browsers) seem to remember that they were snapped, and I assume they use the same approach.
I'm trying to implement a cross-platform UI library that uses as few system resources as possible. I'm considering either using my own software renderer or OpenGL.
For stationary controls everything's fine: I can repaint only when needed. However, when it comes to implementing animations, especially animated blinking carets like the 'phase' caret in Sublime Text, I don't see an easy way to balance resource usage and performance.
For a blinking caret, the caret needs to be redrawn very frequently (15-20 times per second at least, I guess). On one hand, the software renderer supports partial redraw but is far too slow to be practical (3-4 fps for large redraw regions, say 1000x800, which makes animations impossible). On the other hand, OpenGL doesn't support partial redraw very well as far as I know, which means the whole screen needs to be rendered at 15-20 fps constantly.
So my question is:
How are carets usually implemented in various UI systems?
Is there any way to have OpenGL render to only a portion of the screen?
I know that glViewport enables rendering to part of the screen, but due to double buffering and other details, the rest of the screen isn't preserved, so I still need to render the whole screen again.
First you need to ask yourself:
Do I really need to partially redraw the screen?
OpenGL, or rather the GPU, can draw thousands of triangles with ease. So before you start fiddling with partial redrawing of the screen, you should benchmark and see whether it's worth looking into at all.
This doesn't, however, imply that you have to redraw the screen endlessly. You can still redraw only when changes happen.
Thus if you have a cursor blinking every 500 ms, then you redraw once every 500 ms. If you have an animation running, then you continuously redraw while that animation is playing (or every time the animation does a change that requires redrawing).
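The redraw-when-needed idea can be sketched as a tiny scheduler; the function names here are illustrative, not from any real toolkit. Each animation reports the time of its next required repaint, and the event loop sleeps until the earliest such deadline instead of redrawing at a fixed frame rate:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// A caret blinking with a fixed period needs its next redraw at the next
// multiple of that period after 'now' (all times in milliseconds).
long long NextBlinkRedraw(long long nowMs, long long periodMs) {
    return (nowMs / periodMs + 1) * periodMs;
}

// The event loop wakes at the earliest deadline reported by any active
// animation, rather than polling at a constant frame rate.
long long NextWakeup(const std::vector<long long>& deadlineMs) {
    return *std::min_element(deadlineMs.begin(), deadlineMs.end());
}
```

With only a 500 ms caret active, the loop ends up redrawing twice per second instead of at a constant 15-20 fps.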
This is what Chrome, Firefox, etc. do. You can see it if you open the Developer Tools (F12) and go to the Timeline tab.
Take a look at the first row of the timeline, which shows how often Chrome redraws the window.
The first section shows a lot of continuous redrawing, which was because I was scrolling around on the page.
The last section shows a single redraw every 500 ms, which was the cursor blinking in a textbox.
Note that it doesn't tell you whether Chrome fully redraws the window or only parts of it; it just shows the frequency of the redrawing. (If you want to see the redrawn regions, both Firefox and Chrome have a "Show Paint Rectangles" option.)
To circumvent the problem with double buffering and partial redrawing, you could instead draw to a framebuffer object. Then you can use glScissor() as much as you want. If you have mostly static content and only a few dynamic elements, you could use multiple framebuffer objects: draw the static content once and continuously update only the framebuffer containing the dynamic content.
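If you go the glScissor() route, one detail worth noting is the coordinate origin: dirty rectangles are usually tracked with a top-left origin, while glScissor() expects a bottom-left origin. A small sketch of the conversion (ScissorBox and ToScissor are illustrative names, not GL API):

```cpp
#include <cassert>

// The four values you would pass to glScissor(x, y, width, height).
struct ScissorBox { int x, y, w, h; };

// Convert a dirty rectangle given in top-left window coordinates
// (x, y, w, h) into GL scissor coordinates for a window of height winH.
ScissorBox ToScissor(int x, int y, int w, int h, int winH) {
    // Flip the vertical axis: GL's y counts up from the bottom edge,
    // so the box's bottom edge sits winH - (y + h) pixels above it.
    return ScissorBox{ x, winH - (y + h), w, h };
}
```

With glEnable(GL_SCISSOR_TEST) and glScissor(b.x, b.y, b.w, b.h), clears and draws are then confined to the dirty region.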
However (and I can't emphasize this enough), benchmark and check whether this is even needed. Having two framebuffer objects could be more expensive than just always redrawing everything. The same goes for, say, having a buffer for each rectangle, in contrast to packing all rectangles into a single buffer.
Lastly to give an example let's take NanoGUI (a minimalistic GUI library for OpenGL). NanoGUI continuously redraws the screen.
The problem with not just continuously redrawing the screen is that you now need a system for issuing redraws. Calling setText() on a label must now call back and tell the window to redraw. And what if the parent panel the label sits in isn't visible? Then setText() just issued a redundant redraw of the screen.
The point I'm trying to make is that a system for issuing redraws of the screen can be more prone to errors. So unless continuous redrawing is actually an issue, it is definitely the simpler starting point.
If the content of the last frame isn't changed on receiving WM_PAINT, is it possible to simply direct the operating system to redraw the window using the old back buffer instead of redrawing the whole scene again to the new back buffer and swapping it?
No. There is no such "backbuffer". And when drawing occurs, you don't know what areas may be covered by other windows; the clipping area isn't a really good indicator.
The only thing you know is that such areas need to be redrawn. Each window cares about its own client area. If you want to buffer something, you have to do it yourself.
The reason is simple: imagine you have hundreds of windows. Holding a buffer for each window would be inefficient when only a few on top are visible. So the Windows designers decided not to store any window's content and simply to notify the windows on top to redraw themselves.
OK, since we have the DWM (Desktop Window Manager), things have changed a lot. But the principle is still the same: you are responsible for drawing. If you want to buffer something, you have to do it yourself.
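A minimal, portable sketch of doing that buffering yourself (the BackBuffer type and its methods are illustrative, not a Win32 API): draw into an in-memory pixel array, then copy it to the window in one step inside WM_PAINT. In real Win32 code the buffer would typically be a memory DC with a DIB section, and the copy a single BitBlt.

```cpp
#include <cassert>
#include <vector>

// An owner-managed backbuffer: all drawing goes into 'pixels', and
// WM_PAINT becomes a single copy of this buffer to the window.
struct BackBuffer {
    int width, height;
    std::vector<unsigned> pixels;  // one 32-bit color per pixel, row-major

    BackBuffer(int w, int h) : width(w), height(h), pixels(w * h, 0) {}

    // Fill an axis-aligned rectangle with a solid color.
    void FillRect(int x, int y, int w, int h, unsigned color) {
        for (int row = y; row < y + h; ++row)
            for (int col = x; col < x + w; ++col)
                pixels[row * width + col] = color;
    }
};
```

Because the window's content lives in your own buffer, repainting after an overlap is a plain copy, which also avoids flicker.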
I've never quite understood why erasing the background has a separate Windows message. It looks a bit redundant to me. When I've created owner-drawn buttons, I've always ended up erasing the background from inside WM_PAINT. I have sometimes even done all the painting from inside WM_ERASEBKGND and left WM_PAINT empty. Both seem to work fine. Is there any advantage to separating the painting into two operations?
This is entirely guesswork:
Back in the olden days, filling a rectangle with colour was a relatively slow operation. But filling one big rectangle was still a lot quicker than filling lots of little rectangles.
I guess that if you had a window with a child window, and both had the same registered background brush, then Windows was smart enough to realise it didn't need to send a WM_ERASEBKGND to the child when it had already cleared the parent. With a moderately complex dialog box on a very slow PC, this might be a significant improvement.
Or to ask it another way, how does OnEraseBkgnd() work?
I'm building a custom control and I hit this problem. The children are rectangles, as usual. I had to disable OnEraseBkgnd() and use only OnPaint(). What I need is to efficiently clear the area behind the children, without flickering. Techniques like back buffers are not an option.
Edit: I am very interested in the algorithm that's under the hood of OnEraseBkgnd(). But any helpful answer will also be accepted.
Usually in Windows, the easiest (but not the most effective) way to reduce flicker is to turn off the WM_ERASEBKGND notification handling. This is because if you erase the background in the notification handler and then paint the window in the WM_PAINT handler, there is a short delay between the two, and this delay is seen as flicker.
Instead, if you do all of the erasing and drawing in the WM_PAINT handler, you will tend to see much less flicker, because the delay between the two operations is reduced. You will still see some flicker, especially when resizing, because there is still a small delay between the two actions, and you cannot always get all of the drawing in before the next occurrence of the monitor's vertical blanking interrupt. If you cannot use double buffering, this is probably the most effective method available to you.
You can get better drawing performance by following most of the usual recommendations around client area invalidation - do not invalidate the whole window unless you really need to. Try to invalidate only the areas that have changed. Also, you should use the BeginDeferWindowPos functions if you are updating the positions of a collection of child windows at the same time.
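A sketch of the "invalidate only what changed" idea, coalescing several small invalidations into one bounding rectangle (DirtyRect and Union are illustrative names, not Win32 APIs; Windows actually maintains an update region, which can be tighter than a bounding box):

```cpp
#include <algorithm>
#include <cassert>

// A dirty rectangle in client coordinates; 'empty' marks "nothing dirty yet".
struct DirtyRect { int left, top, right, bottom; bool empty; };

// Merge two dirty rectangles into the smallest rectangle covering both,
// in the spirit of how repeated InvalidateRect() calls are merged into a
// single update area before one WM_PAINT arrives.
DirtyRect Union(const DirtyRect& a, const DirtyRect& b) {
    if (a.empty) return b;
    if (b.empty) return a;
    return DirtyRect{ std::min(a.left, b.left),   std::min(a.top, b.top),
                      std::max(a.right, b.right), std::max(a.bottom, b.bottom),
                      false };
}
```

One InvalidateRect call with the coalesced rectangle then triggers a single WM_PAINT covering all changed areas instead of several separate repaints.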