What is the proper calculation (in dialog units) to determine how high a Combo Box in a Win32 resource should be?
If I choose a number and all items fit, is that sufficient, even on different DPI scaling modes?
Recently, I read this excellent page about DPI on Win32:
DPI and device-independent pixels
However, I am confused about GetDeviceCaps(hdc, LOGPIXELSX/Y) vs GetDpiForSystem(). On systems where I tested, all three values always match.
Questions:
Is it possible for GetDeviceCaps(hdc, LOGPIXELSX) and GetDeviceCaps(hdc, LOGPIXELSY) to return different values? I assume that LOGPIXELS means DPI. (Please correct me if wrong!)
If the previous answer is yes, then is GetDeviceCaps(GetDC(NULL), LOGPIXELSX/Y) the same as GetDpiForSystem()?
When possible, I should be using GetDpiForWindow(hWnd), but I want to clarify my understanding about "system" DPI in my questions above.
As far as I'm concerned, GetDeviceCaps(hdc, LOGPIXELSX/Y) does not necessarily match GetDpiForSystem():
LOGPIXELSX: Number of pixels per logical inch along the screen width. In a system with multiple display monitors, this value is the same for all monitors.

LOGPIXELSY: Number of pixels per logical inch along the screen height. In a system with multiple display monitors, this value is the same for all monitors.
This function is not able to return a per-monitor DPI. For that, you should use GetDpiForWindow().
GetDpiForWindow() also returns a different value based on the DPI_AWARENESS value.
GetDpiForSystem() returns the system DPI. The return value depends on the calling context: if the current thread has a DPI_AWARENESS value of DPI_AWARENESS_UNAWARE, the return value will be 96.
Is it possible for GetDeviceCaps(hdc, LOGPIXELSX) != GetDeviceCaps(hdc, LOGPIXELSY)? I assume that "LOGPIXELS" means DPI. (Please correct me if wrong!)
For monitors, I believe LOGPIXELSX == LOGPIXELSY even if your display has non-square pixels (which is extremely rare nowadays). There are still many printers out there with different horizontal and vertical dot densities. In some cases the printer driver may hide that, giving the illusion of square pixels. Among the ones that don't, you not only have to be careful to use the right value for the context, but you should also be aware that some drivers forget to swap the horizontal and vertical values when you change the page orientation to landscape.
LOGPIXELSX and LOGPIXELSY refer to the number of pixels per logical inch, an idea that's been buried in recent versions of Windows. You used to be able to tell Windows that, when a program wants to display something that's 1-inch long, use my logical inch value rather than the actual DPI. (In the olden days of CRT monitors, it was usually impossible for Windows to discover the actual DPI anyway.)
Common values for logical inches were 96 and 120. If you had a really high-end monitor, you might've used 144. If you were into WYSIWYG applications or desktop publishing, you'd usually choose a value that was about 20% larger than an actual inch on your screen. I've heard various rationales for this, but I usually chose a higher value for easier reading.
When possible, I should be using GetDpiForWindow(hWnd)
That's fine. I use LOGPIXELSX and LOGPIXELSY because I'm old school and have always written high-DPI aware code.
but I want to clarify my understanding about "system" DPI in my questions above.
I believe the system DPI is the scaled DPI of the primary monitor. The scaling gives the programmer the same functionality as a logical inch, but it's presented to the user in a different way conceptually.
On systems where I tested, all three values always match.
Yes, it's very common for all of the methods to report the same DPI.
If your program is not high-DPI aware, you'll get 96 regardless of how you ask.
If it is aware, the system DPI is the DPI of the primary monitor. (Well, it's the possibly scaled native DPI of the monitor, but that's the same value you'll be told for the DPI of that monitor.)
That covers a lot of common cases. To truly and thoroughly test, you'd need a second monitor with a different native DPI than the primary.
A couple of points to keep in mind:
The GetDeviceCaps approach is specific to the device context you reference with the HDC parameter. Remember that you can have printer DCs, enhanced metafile DCs, and memory DCs in addition to screen DCs. For screens, the return value will depend on the DPI awareness.
DPI awareness comes into play for screens (not printers). Your program's UI thread can be:
DPI unaware, in which case all methods will return 96 dots per inch, and any actual difference is handled by the system scaling the output behind the scenes.
DPI aware, in which case most will return the system DPI. I believe the system DPI is the (scaled?) DPI of the primary monitor, but I'm not sure about that.
per-monitor DPI aware, in which case GetDeviceCaps and GetDpiForWindow will return the actual DPI of the monitor referenced by the DC. I don't recall what is returned if the DC spans multiple monitors: it might be the actual DPI if the spanned monitors share the same DPI, or the system DPI, or the DPI of one of the spanned monitors.
GetDpiForMonitor ignores DPI awareness altogether. I assume it remains for backward compatibility.
On a machine with a high-DPI monitor connected, when I try to get the cursor (through GetIconInfo or GetIconInfoEx) I get an HBITMAP which is 3 times the normal size.
Is there a way to get the cursor at its normal size so that I don't have to resize it myself?
I get artifacts when I resize it myself.
Since it was marked as a duplicate question (Load cursor with certain resolution), let me explain why it's not:
First of all, I am not loading any cursor; I'm using the system's default. Also, when I query the system for the cursor size, I always get the same value, 64 pixels, whether the cursor is on a high-DPI or a normal-DPI monitor. I get the same value whether the monitor's scaling factor in the control panel is set to 100% or more, and whether the cursor is set to small, medium or large (Control Panel, Mouse, Ease of Access).
You do not state what normal size refers to, so I'm going to assume that it's the cursor size that the hardware mouse pointer is displayed at (32×32 at 96 DPI, 100 % scale).
The dimensions of the bitmap returned by GetIconInfo (and of the cursor itself) are affected by the DPI scale specified in the control panel, which, depending on the Windows version, can be the same for the whole system or vary between monitors. Additionally, the bitmap size is affected by whether or not your application is marked as DPI-aware; otherwise, Windows scales everything for the application.
| DPI scale | Mouse cursor size | DPI aware | GetIconInfo bitmap |
|-----------|-------------------|-----------|--------------------|
| 100 %     | 32×32             | –         | 32×32              |
| 150 %     | 48×48             | No        | 72×72              |
| 150 %     | 48×48             | Yes       | 48×48              |
| 200 %     | 64×64             | No        | 128×128            |
| 200 %     | 64×64             | Yes       | 64×64              |
I want my program's window to be as big as possible without overlapping the window manager's various small windows e.g. the pager. Is there any way to ask the wm what the maximized window size is, before I create my window?
The _NET_WORKAREA property of the root window is probably the closest match. However, on a multi-headed system it will give you the combined work area across all monitors.
If that's what you want, fine (but see here on making a window span multiple monitors). If you want to maximize over a single monitor, then there's a problem as there's no per-monitor API like _NET_WORKAREA. Your best bet is creating a window in a maximized state and then querying its size. If that's not an option, I'm afraid you will have to query the number and sizes of available monitors, and then go and calculate the work area of each monitor by subtracting "struts" from the full area (see here about _NET_WM_STRUT and _NET_WM_STRUT_PARTIAL).
I'm implementing the CSS3 flexible box layout module as defined by the W3C, which is similar to Mozilla's box model for xul. While these standards specify how the model should behave, they don't give any details on how they should be implemented.
The parts of the model I'm interested in are:
1. Boxes have a width and height.
2. Boxes can contain other boxes.
3. Container boxes (parent boxes) are responsible for sizing and positioning the boxes they contain (child boxes).
4. Boxes have an orientation, which may be horizontal or vertical. The orientation determines how the child boxes are positioned and resized.
5. Child boxes may be flexible or inflexible. If a child box is inflexible, it is drawn at the size specified in its width and height parameters. If it is flexible, it is resized to fit into the available space in the parent container. Flexibility is relative to the other child boxes in the same container: boxes with higher flexibility are resized more than boxes with lower flexibility.
6. Child boxes can be constrained to a minimum or maximum size. If a child box is flexible, the parent box will never resize it below the minimum size or above the maximum size.
Features 1-5 can be implemented quite efficiently. Feature 6 is problematic, as the most efficient algorithm I can come up with is quite naive. It works as follows:
1. Place all the child boxes in a list.
2. Loop through each child box and resize it, using its flexibility to determine the amount to resize it by.
3. If the new size exceeds one of the limits, set the box size to the limit, remove the box from the list, and start again from the beginning of the list.
Step 3 is where the efficiency drops. For example, if there are ten items in the list and the last one has a constraint, the algorithm calculates the sizes of the first nine items, then has to redo all of those calculations when it reaches the tenth. I have considered keeping the list sorted and sizing all the constrained boxes first, but that adds complexity plus the overhead of sorting the list.
I expect there is a recognised optimal solution considering this is a fairly common feature in browsers and frameworks (XUL, .Net, Flex, etc).
Most box/container layout algorithms use a two-pass algorithm. In the case of .NET (WPF) the passes are called "Measure" and "Arrange". Each control measures its content and reports a "desired size" in the recursive measure pass.
During the second pass (arrange), if the children's desired sizes will not fit inside the parent, the parent uses its layout algorithm to assign the actual size to each child, for example by distributing the available size weighted by desired size. Minimum/maximum sizes, box flexibility, etc. come into play here.
More information on the WPF layout system: http://msdn.microsoft.com/en-us/library/ms745058.aspx
XUL layout: http://www-archive.mozilla.org/projects/xul/layout.html
WindowFromPhysicalPoint is new with Vista. Its documentation is almost identical to WindowFromPoint. What's the difference? Both seem to take an absolute point (offset from screen origin) and return the topmost (Z order) HWND that contains the point.
http://msdn.microsoft.com/en-us/library/ms633533(VS.85).aspx
Windows Vista introduces the concept of physical coordinates. Desktop Window Manager (DWM) scales non-dots per inch (dpi) aware windows when the display is high dpi. The window seen on the screen corresponds to the physical coordinates. The application continues to work in logical space. Therefore, the application's view of the window is different from that which appears on the screen. For scaled windows, logical and physical coordinates are different.
WindowFromPhysicalPoint operates in physical screen coordinates, while WindowFromPoint works with logical ones. To understand the difference, read this page.
The TL;DR version:

Suppose you design a dialog box with a button at coordinates (100, 48). When this dialog box is displayed at the default 96 dpi, the button is located at physical coordinates of (100, 48). At 120 dpi, it is located at physical coordinates of (125, 60). But the logical coordinates are the same at any dpi setting: (100, 48).
So unless you design your app to be DPI-aware, I would stick with logical coordinates, since most APIs and window messages operate in logical space. Another reason to use logical coordinates is to keep your app backward compatible with Windows XP.