I am looking to create a custom rounded frame for an application window (border-radius and shadow).
From a performance point of view, what would be the best technique for this?
a. Use regions (SetWindowRgn) for the rounded application window and a layered window (UpdateLayeredWindow) for the shadow.
b. Use layered windows for both the rounded application window and the shadow.
The docs for UpdateLayeredWindow specify:
For best drawing performance by the layered window and any underlying
windows, the layered window should be as small as possible.
I am asking this specifically for the application's main window, i.e. a large window that can be quite complex and is visible on screen most of the time.
Should I go with regions or layered window for the application window? Which one would be lighter on the CPU/memory?
SetWindowRgn disables DWM for the given window. DWM is the component responsible for drawing the window frame performantly using the available graphics hardware. That should pretty much rule out SetWindowRgn. Also, SetWindowRgn produces very "ancient" looking results because antialiasing is not possible: a pixel can be either fully transparent or fully opaque.
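For reference, the region-based approach being ruled out here amounts to little more than the following sketch (my own illustration, not a recommendation; the corner size is a placeholder):

```cpp
#include <windows.h>

// Sketch of the region approach discussed above (not recommended; see the
// drawbacks mentioned). Call after the window has been created or resized.
void ApplyRoundedRegion(HWND hwnd, int cornerDiameter /* e.g. 16 */)
{
    RECT rc;
    GetWindowRect(hwnd, &rc);
    HRGN rgn = CreateRoundRectRgn(0, 0, rc.right - rc.left, rc.bottom - rc.top,
                                  cornerDiameter, cornerDiameter);
    // After a successful call the system owns the region; do not delete it.
    SetWindowRgn(hwnd, rgn, TRUE);
}
```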
For best drawing performance by the layered window and any underlying
windows, the layered window should be as small as possible.
I believe that in 2018, this hint is less relevant. The documentation was written 18 years ago when the hardware was way more limited than today.
Still, UpdateLayeredWindow is not the fastest way to draw custom window frames, especially when you have to update the bitmap often (e.g. during window resize). The bottleneck is that these updates have to travel from system memory to graphics memory. To minimize window size, create four small windows which are only large enough to draw the borders/corners of your window. Visual Studio, for instance, pulls this trick: using Spy++ one can see four instances of "VisualStudioGlowWindow", which are layered windows that are just 9 pixels wide/tall (on my system).
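A rough sketch of one such thin layered border window (my own illustration; "GlowEdge" is a hypothetical window class registered elsewhere, and filling the shadow pixels is omitted):

```cpp
#include <windows.h>

// Creates one thin layered strip and pushes a premultiplied-alpha 32-bpp
// bitmap to it, in the spirit of the four-window trick described above.
HWND CreateGlowEdge(HINSTANCE hInst, int x, int y, int w, int h)
{
    HWND hwnd = CreateWindowExW(
        WS_EX_LAYERED | WS_EX_TOOLWINDOW | WS_EX_NOACTIVATE,
        L"GlowEdge", nullptr, WS_POPUP,
        x, y, w, h, nullptr, nullptr, hInst, nullptr);

    // 32-bpp top-down DIB section that will hold the shadow strip.
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
    bmi.bmiHeader.biWidth = w;
    bmi.bmiHeader.biHeight = -h;
    bmi.bmiHeader.biPlanes = 1;
    bmi.bmiHeader.biBitCount = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    void* bits = nullptr;
    HDC screenDC = GetDC(nullptr);
    HDC memDC = CreateCompatibleDC(screenDC);
    HBITMAP bmp = CreateDIBSection(screenDC, &bmi, DIB_RGB_COLORS, &bits, nullptr, 0);
    HGDIOBJ old = SelectObject(memDC, bmp);

    // ... fill 'bits' with premultiplied BGRA shadow pixels here ...

    POINT src = { 0, 0 };
    POINT dst = { x, y };
    SIZE  size = { w, h };
    BLENDFUNCTION blend = { AC_SRC_OVER, 0, 255, AC_SRC_ALPHA };

    // One call per update: the whole bitmap is copied across, which is
    // exactly why keeping these windows as thin strips pays off.
    UpdateLayeredWindow(hwnd, screenDC, &dst, &size, memDC, &src, 0, &blend, ULW_ALPHA);

    SelectObject(memDC, old);
    DeleteObject(bmp);      // safe once UpdateLayeredWindow has copied the pixels
    DeleteDC(memDC);
    ReleaseDC(nullptr, screenDC);
    ShowWindow(hwnd, SW_SHOWNOACTIVATE);
    return hwnd;
}
```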
If you want maximum performance, you may also look into Direct Composition, combined with the WS_EX_NOREDIRECTIONBITMAP extended window style, as explained in the article "High-Performance Window Layering Using the Windows Composition Engine". This technique requires Windows 8 at least.
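A very compressed sketch of that approach (my own condensation of the technique the article describes; error handling, the Direct2D drawing, and the CreateWindowEx call with WS_EX_NOREDIRECTIONBITMAP are omitted):

```cpp
#include <windows.h>
#include <d3d11.h>
#include <dxgi1_2.h>
#include <dcomp.h>
#pragma comment(lib, "d3d11.lib")
#pragma comment(lib, "dcomp.lib")

// Requires Windows 8 or later. hwnd is assumed to have been created with
// the WS_EX_NOREDIRECTIONBITMAP extended style.
void InitComposition(HWND hwnd, UINT width, UINT height)
{
    // D3D device; BGRA support is needed if Direct2D will draw into the swap chain.
    ID3D11Device* d3d = nullptr;
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                      D3D11_CREATE_DEVICE_BGRA_SUPPORT, nullptr, 0,
                      D3D11_SDK_VERSION, &d3d, nullptr, nullptr);

    IDXGIDevice* dxgiDevice = nullptr;
    d3d->QueryInterface(&dxgiDevice);

    IDXGIAdapter* adapter = nullptr;
    dxgiDevice->GetAdapter(&adapter);
    IDXGIFactory2* factory = nullptr;
    adapter->GetParent(__uuidof(IDXGIFactory2), reinterpret_cast<void**>(&factory));

    // Composition swap chains carry per-pixel (premultiplied) alpha, so no
    // UpdateLayeredWindow and no redirection bitmap are involved.
    DXGI_SWAP_CHAIN_DESC1 desc = {};
    desc.Width = width;
    desc.Height = height;
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.BufferCount = 2;
    desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL;
    desc.AlphaMode = DXGI_ALPHA_MODE_PREMULTIPLIED;

    IDXGISwapChain1* swapChain = nullptr;
    factory->CreateSwapChainForComposition(d3d, &desc, nullptr, &swapChain);

    // Bind the swap chain to the window through a DirectComposition visual tree.
    IDCompositionDevice* dcompDevice = nullptr;
    DCompositionCreateDevice(dxgiDevice, __uuidof(IDCompositionDevice),
                             reinterpret_cast<void**>(&dcompDevice));

    IDCompositionTarget* target = nullptr;
    dcompDevice->CreateTargetForHwnd(hwnd, TRUE, &target);

    IDCompositionVisual* visual = nullptr;
    dcompDevice->CreateVisual(&visual);
    visual->SetContent(swapChain);
    target->SetRoot(visual);
    dcompDevice->Commit();

    // Render into the swap chain (e.g. with Direct2D) and Present(); the shadow
    // and rounded corners are simply drawn with alpha.
}
```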
When a window is resized by Aero Snap, User32.GetWindowPlacement(hWnd).rcNormalPosition still stores its original rectangle, while User32.GetWindowRect is affected.
Since Aero Snap seems to be independent of WINDOWPLACEMENT, we cannot collect complete information about the actual placement simply by using user32.dll. So I'm wondering whether there's a way to get the Aero Snap state of a window, indicating whether the window is docked and to which side.
Aero Snap is a feature of the Shell, not the windowing system. Thus, the windowing system cannot provide that information, because it is not aware of those states.
And the Shell doesn't make this information available either. So, in essence, the system doesn't provide the Aero Snap state of any given window through a public API.
I like having my main windows remember all of their placement information so that it can be restored when they are restarted. In the past, it was enough to save a copy of the window placement structure and to set it back when recreating the window.
The introduction of snap required keeping some extra information. I detected whether a window appeared to be snapped by comparing its window rectangle to the work area rectangle of the monitor that contains the window. If it seemed to be snapped to one of the edges, I recorded that along with the placement information. Upon creating the window, I first restore the window placement, and then, if I have a snap state recorded, I change the window's size and position accordingly.
You can distinguish a window that's been snapped to a monitor edge from one that's been carefully sized and placed there because the snapped window's rectangle won't match the one in the window placement.
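A rough sketch of that detection (my own code, not the original author's; the edge checks cover only the classic half-screen snap):

```cpp
#include <windows.h>

// A window whose rectangle hugs a work-area edge but differs from its
// restored rectangle is assumed to be snapped there.
bool LooksSnapped(HWND hwnd)
{
    RECT rc;
    GetWindowRect(hwnd, &rc);

    WINDOWPLACEMENT wp = { sizeof(wp) };
    GetWindowPlacement(hwnd, &wp);

    MONITORINFO mi = { sizeof(mi) };
    GetMonitorInfo(MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST), &mi);

    // A carefully placed (unsnapped) window matches its restored rectangle.
    if (EqualRect(&rc, &wp.rcNormalPosition))
        return false;

    // Classic Windows 7 half-snap: full work-area height, sitting on the
    // left or right edge. Windows 10 allows more shapes than this.
    const RECT& wa = mi.rcWork;
    bool fullHeight = rc.top == wa.top && rc.bottom == wa.bottom;
    bool onEdge = rc.left == wa.left || rc.right == wa.right;
    return fullHeight && onEdge;
}
```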
This approach worked great in Windows 7. I recently discovered that Windows 10 added more flexibility to the snap locations and sizes, as well as playing more games to achieve the annoyingly invisible resize borders. So my detection code doesn't always recognize a snapped window, but that should be fixable.
Some popular software (like the major browsers) seem to remember that they were snapped, and I assume they use the same approach.
One of the things I've noticed (on Windows, at least) is that the mouse cursor is drawn with much less latency than even standard Windows elements.
A good example of this would be to start dragging on the desktop. You can easily notice that the drag rectangle is lagging significantly behind the cursor.
My first question is: why is this the case?
I can't imagine drawing a rectangle being so much more expensive than drawing the cursor. Certainly not by a frame or two.
And my second question is, would it ever be possible to match one's application rendering 1:1 with cursor input?
A good use case for this would be either this selection rectangle or drag previews for draggable items, both of which lag quite significantly behind the OS mouse pointer (independent of any framework or library used).
Selecting icons on the desktop with the selection rectangle is not that slow on my system (DWM on); it lags a little bit, but not enough for me to really care.
The "Show Window Contents while Dragging" option has always been rather slow which is why it was not on by default in older Windows versions.
The mouse cursor, on the other hand, can be rendered directly by your hardware. That is, Windows sends the cursor image to your graphics card, and after that Windows only has to tell the graphics card the cursor position. This is much faster than all the messages and user/kernel context switches involved when you resize and paint a window. The mouse driver probably uses hardware interrupts/timers with a higher priority than your normal software as well.
You can try to disable hardware cursors with a registry hack, but the HID/mouse driver and the raw input thread in win32k will still have a higher priority than your application.
In Windows 7, we have the glass-like windows where parts of other windows or the desktop shine through.
Somewhere, Windows must know which regions are translucent in order to render the window correctly.
Many test automation tools have the ability to use bitmaps for comparing expected results and the translucent parts of windows can cause problems.
I wonder whether it is possible to detect the translucent regions of a window programmatically, e.g. by an API call, in order to implement a screen comparison tool that is robust against glassy windows.
The usual workaround is to disable Aero, but even then, the window color can depend on other system settings which need to be considered. Detecting the transparent regions could be even more reliable than detecting control panel appearance colors.
Also, since we have semi-automated tests, I'd turn off Aero for a short time only and turn it back on when the automated part of the test is finished. This causes unwanted flickering.
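On Windows 7, that temporary toggle could look roughly like this (a sketch only; DwmEnableComposition has no effect on Windows 8 and later, and the capture/compare step is a placeholder):

```cpp
#include <windows.h>
#include <dwmapi.h>
#pragma comment(lib, "dwmapi.lib")

// Disable desktop composition for the automated part of a test, then restore it.
void RunTestWithoutAero()
{
    BOOL wasEnabled = FALSE;
    DwmIsCompositionEnabled(&wasEnabled);

    if (wasEnabled)
        DwmEnableComposition(DWM_EC_DISABLECOMPOSITION);  // causes the flicker noted above

    // ... capture and compare screenshots here ...

    if (wasEnabled)
        DwmEnableComposition(DWM_EC_ENABLECOMPOSITION);   // restore the previous state
}
```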
Note that I don't want to detect transparent regions in already captured images as discussed in Detecting transparent glass in images. I'd like to do it at a time where the OS can still distinguish transparency.
I want my program's window to be as big as possible without overlapping the window manager's various small windows e.g. the pager. Is there any way to ask the wm what the maximized window size is, before I create my window?
The _NET_WORKAREA property of the root window is probably the closest match. However, on a multi-headed system it will give you the combined work area across all monitors.
If that's what you want, fine (but see here on making a window span multiple monitors). If you want to maximize over a single monitor, then there's a problem as there's no per-monitor API like _NET_WORKAREA. Your best bet is creating a window in a maximized state and then querying its size. If that's not an option, I'm afraid you will have to query the number and sizes of available monitors, and then go and calculate the work area of each monitor by subtracting "struts" from the full area (see here about _NET_WM_STRUT and _NET_WM_STRUT_PARTIAL).
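For illustration, reading _NET_WORKAREA with plain Xlib might look like this (a sketch assuming only the first desktop entry is wanted; error handling omitted):

```cpp
#include <X11/Xlib.h>
#include <X11/Xatom.h>
#include <cstdio>

int main()
{
    Display* dpy = XOpenDisplay(nullptr);
    Atom workarea = XInternAtom(dpy, "_NET_WORKAREA", True);

    Atom actualType;
    int actualFormat;
    unsigned long nItems, bytesAfter;
    unsigned char* data = nullptr;

    // _NET_WORKAREA holds 4 CARD32 values (x, y, width, height) per desktop;
    // request only the first 4 here.
    XGetWindowProperty(dpy, DefaultRootWindow(dpy), workarea,
                       0, 4, False, XA_CARDINAL,
                       &actualType, &actualFormat, &nItems, &bytesAfter, &data);

    if (data && nItems >= 4) {
        long* geom = reinterpret_cast<long*>(data);
        std::printf("work area: %ldx%ld at (%ld,%ld)\n",
                    geom[2], geom[3], geom[0], geom[1]);
        XFree(data);
    }
    XCloseDisplay(dpy);
    return 0;
}
```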
I want to make a skinning engine capable of drawing custom-shaped windows with alpha blending. That is, it'll use layered windows (UpdateLayeredWindow). A typical window will contain, among its background, a couple dozen other bitmaps ranging from 10×10 to, say, 300×150 pixels. In the worst case most of these elements will have smooth animation up to 30 fps. Everything will be alpha-blended and I am going to use Direct2D for this (yes, I know older Windows versions don't support it). In general, Winamp's modern skin engine is the closest example.
Given all this and taking in account modern PCs performance, can I just redraw the whole window every single frame or do I have to constrain to some sort of clip rectangle?
D2D requires you to render in response to WM_PAINT messages.
Honestly, use the IAnimation interface and just let D2D and Windows worry about how often to redraw. Though I will let you know: Winamp is done with Adobe AIR, and layered windows with D2D cause issues. (I think you have to use a DXGI render target, but with the window being layered it needs a DC to be returned to an EndPaint call so it can update its alpha channel.)
I have some experience with this.
If you need to support Windows XP, using UpdateLayeredWindow is the only choice available for solving this problem. The documentation for this call says it copies the whole bitmap to the screen each time it is called and this bottleneck showed up in my benchmarking as the real limiting factor. If your window is 300x300 you pay that price on every update, even if you are careful to modify only a couple of pixels. It would be very easy to over-optimize the rendering side for no real benefit so implement something simple, measure, and then decide if you need to optimize.
If you can drop support for Windows XP then you can avoid UpdateLayeredWindow completely and use DwmExtendFrameIntoClientArea to create the same effect as a layered window. You'll write less code, avoid the UpdateLayeredWindow bottleneck, and D2D will be easier to work with.
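For example, the DwmExtendFrameIntoClientArea route boils down to something like this sketch (my own illustration; the actual painting with premultiplied alpha is only hinted at):

```cpp
#include <windows.h>
#include <uxtheme.h>
#include <dwmapi.h>
#pragma comment(lib, "dwmapi.lib")

// Extending the frame over the whole client area gives a per-pixel-alpha
// surface without UpdateLayeredWindow (Vista+ with DWM enabled).
void ExtendFrameIntoWholeWindow(HWND hwnd)
{
    MARGINS margins = { -1 };   // -1 = "sheet of glass" covering the client area
    DwmExtendFrameIntoClientArea(hwnd, &margins);
    // Paint with premultiplied alpha (e.g. Direct2D with
    // D2D1_ALPHA_MODE_PREMULTIPLIED); wherever alpha is 0 the window is transparent.
}
```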