Xlib: How to ask window manager for maximized window size?

I want my program's window to be as big as possible without overlapping the window manager's various small windows e.g. the pager. Is there any way to ask the wm what the maximized window size is, before I create my window?

The _NET_WORKAREA property of the root window is probably the closest match. However, on a multi-headed system it will give you the combined work area across all monitors.
If that's what you want, fine (but see here on making a window span multiple monitors). If you want to maximize over a single monitor, there's a problem: there is no per-monitor equivalent of _NET_WORKAREA. Your best bet is to create a window in the maximized state and then query its size. If that's not an option, I'm afraid you will have to query the number and sizes of the available monitors, and then calculate the work area of each monitor yourself by subtracting "struts" from the full area (see here about _NET_WM_STRUT and _NET_WM_STRUT_PARTIAL).
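For the simple case, here is a minimal sketch of reading _NET_WORKAREA for the first desktop with XGetWindowProperty (error handling trimmed; assumes an EWMH-compliant window manager):

    #include <X11/Xlib.h>
    #include <X11/Xatom.h>
    #include <stdio.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) return 1;

        Atom workarea = XInternAtom(dpy, "_NET_WORKAREA", True);
        Atom actual_type;
        int actual_format;
        unsigned long nitems, bytes_after;
        unsigned char *data = NULL;

        /* _NET_WORKAREA holds x, y, width, height quadruples of CARD32s,
           one quadruple per virtual desktop; read the first one. */
        if (workarea != None &&
            XGetWindowProperty(dpy, DefaultRootWindow(dpy), workarea,
                               0, 4, False, XA_CARDINAL,
                               &actual_type, &actual_format,
                               &nitems, &bytes_after, &data) == Success &&
            nitems >= 4) {
            long *geom = (long *)data;  /* Xlib returns 32-bit items as longs */
            printf("work area: %ldx%ld at %ld,%ld\n",
                   geom[2], geom[3], geom[0], geom[1]);
        }
        if (data) XFree(data);
        XCloseDisplay(dpy);
        return 0;
    }

If you need the work area of a desktop other than the first, pass a larger long_length and index into the array by desktop number.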

Related

How to get the aero snap state of a window?

When a window is resized by aero snap, User32.GetWindowPlacement(hWnd).rcNormalPosition still stores its original rectangle, while User32.GetWindowRect is affected.
Since Aero Snap appears to be independent of WINDOWPLACEMENT, we cannot collect complete information about the actual placement using user32.dll alone. So I'm wondering if there's a way to get the Aero Snap state of a window, indicating whether the window is docked and which side it is docked to.
Aero Snap is a feature of the Shell, not the windowing system. Thus, the windowing system cannot provide that information, because it is not aware of those states.
And the Shell doesn't make this information available either. So, in essence, the system doesn't provide the Aero Snap state of any given window through a public API.
I like having my main windows remember all of their placement information so that it can be restored when they are restarted. In the past, it was enough to save a copy of the window placement structure and to set it back when recreating the window.
The introduction of snap required keeping some extra information. I detected whether a window appeared to be snapped by comparing its window rectangle to the work area rectangle of the monitor that contains the window. If it seemed to be snapped to one of the edges, I recorded that along with the placement information. Upon creating the window, I first restore the window placement, and then, if I have a snap state recorded, I change the window's size and position accordingly.
You can distinguish a window that's been snapped to a monitor edge from one that's been carefully sized and placed there, because the snapped window's rectangle won't match the one in the window placement.
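As a rough illustration, a sketch of that comparison using the Windows 7 geometry (LooksSnapped is my naming, not an API; it only checks for a window snapped to a vertical half):

    #include <windows.h>

    /* Heuristic: a snapped window fills the work-area height, touches the
       left or right work-area edge, and its current rectangle differs from
       the restored rectangle saved in WINDOWPLACEMENT. */
    BOOL LooksSnapped(HWND hwnd)
    {
        WINDOWPLACEMENT wp = { sizeof(wp) };
        MONITORINFO mi = { sizeof(mi) };
        RECT rc;

        if (!GetWindowPlacement(hwnd, &wp) || !GetWindowRect(hwnd, &rc))
            return FALSE;
        if (wp.showCmd == SW_SHOWMAXIMIZED)  /* maximized is not snapped */
            return FALSE;
        if (!GetMonitorInfo(MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST), &mi))
            return FALSE;

        BOOL fillsHeight = rc.top == mi.rcWork.top && rc.bottom == mi.rcWork.bottom;
        BOOL atEdge = rc.left == mi.rcWork.left || rc.right == mi.rcWork.right;
        BOOL differs = !EqualRect(&rc, &wp.rcNormalPosition);
        return fillsHeight && atEdge && differs;
    }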
This approach worked great in Windows 7. I recently discovered that Windows 10 added more flexibility to the snap locations and sizes, and it plays more games to achieve the annoyingly invisible resize borders, so my detection code doesn't always recognize a snapped window. That should be fixable, though.
Some popular software (like the major browsers) seem to remember that they were snapped, and I assume they use the same approach.

Why is the mouse cursor drawn faster than applications?

One of the things that I've noticed (at least on Windows anyway), is that the mouse cursor is drawn with much less latency than even standard Windows elements.
A good example of this would be to start dragging on the desktop. You can easily notice that the drag rectangle is lagging significantly behind the cursor.
My first question is: why is this the case?
I can't imagine drawing a rectangle being so much more expensive than drawing the cursor. Certainly not by a frame or two.
And my second question is, would it ever be possible to match one's application rendering 1:1 with cursor input?
A good use case for this would be either this selection rectangle or drag previews for draggable items, both of which lag quite significantly behind the OS mouse pointer (independent of any framework or library used).
Selecting icons on the desktop with the selection rectangle is not that slow on my system (DWM on); it lags a little, but not enough for me to really care.
The "Show Window Contents while Dragging" option has always been rather slow which is why it was not on by default in older Windows versions.
The mouse cursor, on the other hand, can be rendered directly by your hardware. Windows sends the cursor image to your graphics card, and after that it only has to tell the graphics card the cursor position. This is much faster than all the messages and user/kernel context switches involved when you resize and paint a window. The mouse driver probably uses hardware interrupts/timers with a higher priority than your normal software as well.
You can try to disable hardware cursors with a registry hack but the HID/mouse driver and the raw input thread in win32k will still have a higher priority than your application.
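You cannot beat the hardware cursor, but you can shave a little perceived lag by painting with the freshest position from GetCursorPos rather than the possibly stale coordinates delivered with the mouse message. A minimal sketch:

    #include <windows.h>

    /* In your paint/update code, prefer the live cursor position over the
       coordinates that arrived with the last WM_MOUSEMOVE message. */
    void GetLatestCursorClientPos(HWND hwnd, POINT *pt)
    {
        GetCursorPos(pt);           /* current position in screen coordinates */
        ScreenToClient(hwnd, pt);   /* convert to this window's client space */
    }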

Window regions vs layered windows

I am looking to create a custom rounded frame for an application window (border-radius and shadow).
From a performance point of view, what would be the best technique for this?
a. Use regions (SetWindowRgn) for the rounded application window and a layered window (UpdateLayeredWindow) for the shadow.
b. Use layered windows for both the rounded application window and the shadow.
The docs for UpdateLayeredWindow specify:
For best drawing performance by the layered window and any underlying windows, the layered window should be as small as possible.
I am asking this specifically for the application's main window: a large window that can be quite complex and is visible on screen most of the time.
Should I go with regions or layered window for the application window? Which one would be lighter on the CPU/memory?
SetWindowRgn disables DWM for the given window. DWM is the component responsible for drawing the window frame efficiently using the available graphics hardware, which should pretty much rule out SetWindowRgn. Also, SetWindowRgn produces very "ancient"-looking results because antialiasing is not possible: a pixel can be either fully transparent or fully opaque.
For best drawing performance by the layered window and any underlying windows, the layered window should be as small as possible.
I believe that in 2018 this hint is less relevant. The documentation was written 18 years ago, when hardware was far more limited than it is today.
Still, UpdateLayeredWindow is not the fastest way to draw custom window frames, especially when you have to update the bitmap often (e.g. during window resize). The bottleneck is that these updates have to travel from system memory to graphics memory. To keep the layered windows small, create four thin windows that are only large enough to draw the borders/corners of your frame. Visual Studio pulls this trick, for instance: using Spy++ you can see four instances of "VisualStudioGlowWindow", layered windows that are just 9 pixels wide/tall (on my system).
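A compressed sketch of one such border strip (hypothetical naming and error handling omitted; it assumes "GlowStripClass" was already registered with RegisterClassEx, and the actual shadow drawing is left out):

    #include <windows.h>

    /* Create one thin layered strip of a custom frame; a real frame
       creates four of these and moves them with the main window. */
    HWND CreateGlowStrip(HINSTANCE hinst, int x, int y, int w, int h)
    {
        HWND strip = CreateWindowEx(
            WS_EX_LAYERED | WS_EX_TOOLWINDOW | WS_EX_NOACTIVATE,
            TEXT("GlowStripClass"), NULL, WS_POPUP,
            x, y, w, h, NULL, NULL, hinst, NULL);

        HDC screen = GetDC(NULL);
        HDC mem = CreateCompatibleDC(screen);

        /* 32bpp top-down DIB so ULW_ALPHA gets per-pixel alpha. */
        BITMAPINFO bi;
        ZeroMemory(&bi, sizeof(bi));
        bi.bmiHeader.biSize = sizeof(bi.bmiHeader);
        bi.bmiHeader.biWidth = w;
        bi.bmiHeader.biHeight = -h;   /* negative height = top-down */
        bi.bmiHeader.biPlanes = 1;
        bi.bmiHeader.biBitCount = 32;
        void *bits = NULL;
        HBITMAP bmp = CreateDIBSection(screen, &bi, DIB_RGB_COLORS, &bits, NULL, 0);
        HGDIOBJ old = SelectObject(mem, bmp);
        /* ... draw the premultiplied-alpha shadow into 'mem' here ... */

        SIZE size = { w, h };
        POINT src = { 0, 0 };
        POINT dst = { x, y };
        BLENDFUNCTION blend = { AC_SRC_OVER, 0, 255, AC_SRC_ALPHA };
        UpdateLayeredWindow(strip, screen, &dst, &size, mem, &src,
                            0, &blend, ULW_ALPHA);

        SelectObject(mem, old);
        DeleteObject(bmp);
        DeleteDC(mem);
        ReleaseDC(NULL, screen);
        return strip;
    }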
If you want maximum performance, you may also look into Direct Composition, combined with the WS_EX_NOREDIRECTIONBITMAP extended window style, as explained in the article "High-Performance Window Layering Using the Windows Composition Engine". This technique requires Windows 8 at least.

Windows device coordinates vs virtual coordinates

I've tried to find an answer for this on MSDN, but I'm not getting a clear picture of how this is intended to work. All of my work is on Windows 8.1.
Here is my issue. I am working on a laptop with a high-resolution monitor, 3200x1800. I've been using EnumDisplayMonitors to get the bounding rectangle of my screen.
This works fine if my display settings are at the default. But I've noticed that when I change the Windows display settings to provide larger text, the resolution returned by EnumDisplayMonitors changes: rather than getting 3200x1800 I get 2133x1200.
I'm guessing since I asked for larger text, Windows chooses to represent the screen as a smaller resolution.
It seems that if I look at the virtual screen properties, everything is represented in the actual coordinates of my screen, i.e. 3200x1800. But the APIs for getting the window and monitor rectangles seem to operate on this "other" coordinate space.
Is there any documentation, or are there Windows APIs, to handle the conversion between these "other coordinates" and the "virtual coordinates"? I.e., if I want EnumDisplayMonitors or GetMonitorInfo to give me the true screen coordinates, how could I convert 2133x1200 to 3200x1800?
You have increased the DPI of the video adapter to 150% (144 dots per inch) to keep text readable and avoid having windows the size of a postage stamp. Quite necessary on such high resolution displays. But you haven't told Windows that your program knows how to deal with it.
So it assumes your program is an old one that was never designed to run on such monitors. It helps and lies to you. It gets your program to render its output to a memory buffer, then takes that output, rescales it by 150% and copies it to the video adapter. This is something you can see, text looks fuzzier if you put your program's output next to a program that doesn't ask for this kind of scaling, like Notepad.
And of course, it lies to you when you ask for the size of the screen. It tells you that it is 150% smaller than it really is. So that, after rescaling, a window you create will fill the screen.
Which is all just fine, but of course not ideal; your program doesn't look as good as it should. You have to tell Windows that you know how to deal with the higher resolution. Do beware that this looks easier than it is in practice. Getting text to look crisp is trivial; it is bitmaps that are problematic. And it is in general a fertile source of bugs; even the big companies can get this wrong.
Before I start with an answer, let me ask: what are you really trying to do? Or more specifically, why do you need to know the monitor resolution? The standard way to do this is to call GetWindowRect(GetDesktopWindow(), &rect). I'm not sure if the screen coordinates change based on DPI settings, but you should try that instead of GetMonitorInfo, as the latter is for more advanced stuff. And if GetWindowRect still returns a scaled rect, just call DPtoLP, LPtoDP, or another coordinate-mapping function as appropriate.
When you adjust the display settings as you described, you are actually changing the DPI settings of the screen. As such, certain APIs go into compatibility mode so that they allow the app to create larger elements and windows without knowing anything about this setting.
Why do you need to know the actual screen resolution since most of the windowing APIs will behave accordingly when the DPI scaling changes?
I suspect you could call SetProcessDPIAware or the manifest file equivalent. But do read this MSDN article first to understand DPI scaling.
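A minimal sketch of the SetProcessDPIAware route (Vista and later; the call must happen before any window is created or any monitor metric is queried, and the manifest is the preferred equivalent):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Declare DPI awareness up front; otherwise the system keeps
           virtualizing coordinates by the scaling factor. */
        SetProcessDPIAware();

        /* The primary monitor now reports physical pixels, e.g. 3200x1800
           instead of the virtualized 2133x1200. */
        printf("%d x %d\n", GetSystemMetrics(SM_CXSCREEN),
                            GetSystemMetrics(SM_CYSCREEN));
        return 0;
    }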

PtInRect vs child windows

I've seen cases where people use DrawFrameControl along with PtInRect (where the mouse position is tested against the rectangle of the frame control) to simulate the effect of having a control (like a button). Why would you want to do this rather than using child windows?
An example where this technique is used is this docking framework, where the docking window's close button isn't a physical window.
For an application I'm writing, I'm using a list view control which will hold up to 1000 items. Each item will hold, let's say, 10 buttons. All buttons are custom drawn.
Would it be considered a more efficient (and faster) approach to use the PtInRect mechanism for this?
Each process has a limit of about 10,000 window handles. Not only would it be inefficient to create windows for 10 buttons on each of 1,000 items, it wouldn't necessarily be possible.
To answer your question: yes, creating "virtual" buttons by painting and hit-testing yourself is a much better solution.
Having separate child windows brings a certain overhead with it. Each child has its own attributes: window class, styles, window procedure, position, owning thread, etc. If you need to manage a large array of such children, many of their attributes will most probably be the same, wasting system resources. It may also become harder to manage them as separate windows:
You may notice painting glitches. Each window paints somewhat independently from the others, so when someone moves another window on top of your multi-children window and then moves that away, depending on how you organize things you may see unpleasant intermediate states like the background being redrawn with holes visible at the location of each child moments before children repaint. CS_PARENTDC might help with that.
Whenever you need to reposition these children, you need to use the DeferWindowPos family of functions or else suffer similar repainting issues.
All these children need separate messages to manipulate.
All in all, it may be even simpler to simulate these children. My feeling is that attempting to put 10,000 buttons on top of a list view will be much harder to implement and will suffer from the visual problems above. Using DrawFrameControl with owner-drawn list-view items would most probably give you a simpler implementation with a better visual result, as in the sketch below.
Besides there is a limit on the number of windows you can create, as #arx noted.
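A minimal sketch of one such virtual button (hypothetical fixed layout; real code would keep a rectangle per item/button and hit-test them all):

    #include <windows.h>
    #include <windowsx.h>

    /* One "button" that exists only as a rectangle, not as a child HWND. */
    static RECT g_button = { 10, 10, 90, 34 };
    static BOOL g_pressed = FALSE;

    /* Called from the parent's window procedure: paints and hit-tests the
       virtual button entirely within the parent window. */
    LRESULT HandleVirtualButton(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        switch (msg) {
        case WM_PAINT: {
            PAINTSTRUCT ps;
            HDC hdc = BeginPaint(hwnd, &ps);
            DrawFrameControl(hdc, &g_button, DFC_BUTTON,
                             DFCS_BUTTONPUSH | (g_pressed ? DFCS_PUSHED : 0));
            EndPaint(hwnd, &ps);
            return 0;
        }
        case WM_LBUTTONDOWN: {
            POINT pt = { GET_X_LPARAM(lp), GET_Y_LPARAM(lp) };
            if (PtInRect(&g_button, pt)) {
                g_pressed = TRUE;
                InvalidateRect(hwnd, &g_button, FALSE);
            }
            return 0;
        }
        case WM_LBUTTONUP:
            if (g_pressed) {
                g_pressed = FALSE;
                InvalidateRect(hwnd, &g_button, FALSE);
                /* act on the click here */
            }
            return 0;
        }
        return DefWindowProc(hwnd, msg, wp, lp);
    }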
