CGDisplayCopyAllDisplayModes leaves out one valid mode - macos

When programmatically working with display modes in OS X (documentation), I've found that CGDisplayCopyAllDisplayModes() leaves out the rightmost option that is presented in System Preferences.
A simple utility that prints the size of the current display mode and all of the available display mode sizes outputs this:
current size: 1920x1200
available sizes:
2880x1800
1440x900
2560x1600
2048x1280
1024x768
800x600
640x480
1680x1050
1280x800
1920x1200 is a valid option, yet it is missing from the list.
All of the other options that System Preferences offers are represented in the list. Does anyone have any idea why 1920x1200 might not be included? I have tried switching to another one of the pre-defined values in System Preferences, but that did not cause 1920x1200 to appear.
Edit (the accepted answer is much better than these shenanigans, but I'm leaving this info just in case)
The "scaled" display modes can be found by referencing a private API.
You can create a header file that makes the private methods available: see this gist that I borrowed from this project.
Then you can see all modes, including the scaled ones, like this:
print("Private modes:\n")
// CGSGetNumberOfDisplayModes, CGSGetDisplayModeDescriptionOfLength and
// CGPrivDisplayMode are private symbols declared in the bridging header above.
let mainDisplayID = CGMainDisplayID()
var numDisplayModes: Int32 = 0
CGSGetNumberOfDisplayModes(mainDisplayID, &numDisplayModes)
print("Num modes \(numDisplayModes)")
for i in 0..<numDisplayModes {  // 0...(numDisplayModes-1) traps if the count is 0
    var pmode = CGPrivDisplayMode()
    // sizeof() is pre-Swift-3 syntax; newer toolchains use MemoryLayout<CGPrivDisplayMode>.size
    CGSGetDisplayModeDescriptionOfLength(mainDisplayID, CInt(i), &pmode, CInt(sizeof(CGPrivDisplayMode)))
    print("\t\(pmode.modeNumber): \(pmode.width)x\(pmode.height) -- \(pmode.density)")
}

There's a public API that's only documented in the header. CGDisplayCopyAllDisplayModes() takes an options parameter, which is a dictionary. The docs (and even the headers) say that it's unused and you must pass NULL, but you can pass a dictionary with the key kCGDisplayShowDuplicateLowResolutionModes and value kCFBooleanTrue.
The option name doesn't really describe what it does: passing it causes a bunch of extra modes, including the scaled ones, to be included in the returned array.
Also, you may need to use CGDisplayModeGetPixelWidth() and CGDisplayModeGetPixelHeight() to distinguish the point size from the pixel size of the backing store. (CGDisplayModeGetWidth() and CGDisplayModeGetHeight() return the point size; by comparing the two pairs of values you can determine whether a mode is scaled.)
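
For illustration, a minimal sketch of that call from plain C/C++ (the question's code is Swift, but the same CoreGraphics calls apply; error handling trimmed):

#include <CoreGraphics/CoreGraphics.h>
#include <stdio.h>

// Sketch: list all modes of the main display, including the extra
// "duplicate low resolution" (scaled) modes.
static void listAllModes(void) {
    CGDirectDisplayID display = CGMainDisplayID();

    const void *keys[]   = { kCGDisplayShowDuplicateLowResolutionModes };
    const void *values[] = { kCFBooleanTrue };
    CFDictionaryRef options = CFDictionaryCreate(kCFAllocatorDefault,
                                                 keys, values, 1,
                                                 &kCFTypeDictionaryKeyCallBacks,
                                                 &kCFTypeDictionaryValueCallBacks);

    CFArrayRef modes = CGDisplayCopyAllDisplayModes(display, options);
    CFRelease(options);
    if (!modes) return;

    for (CFIndex i = 0; i < CFArrayGetCount(modes); i++) {
        CGDisplayModeRef mode = (CGDisplayModeRef)CFArrayGetValueAtIndex(modes, i);
        size_t pointW = CGDisplayModeGetWidth(mode);       // points
        size_t pointH = CGDisplayModeGetHeight(mode);
        size_t pixelW = CGDisplayModeGetPixelWidth(mode);  // backing-store pixels
        size_t pixelH = CGDisplayModeGetPixelHeight(mode);
        // A scaled (HiDPI) mode reports a pixel size larger than its point size.
        printf("%zux%zu points, %zux%zu pixels%s\n",
               pointW, pointH, pixelW, pixelH,
               (pixelW != pointW || pixelH != pointH) ? "  (scaled)" : "");
    }
    CFRelease(modes);
}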

Related

How to calculate size of Windows bitmap font using FreeType

The Problem
I am loading the classic serife.fon file from Microsoft Windows using FreeType.
Here is how I set the size:
FT_Set_Pixel_Sizes(face, 0, fontHeight);
I use 0 for the fontWidth so that it will be auto-calculated based on the height.
How do I find the correct value for fontHeight such that the resulting font will be exactly 9 pixels tall?
Notes
Using trial and error, I know that the correct value is 32 - but I don't understand why.
I am not sure how relevant this is for bitmap fonts, but according to the docs:
pixel_size = point_size * resolution / 72
Substituting in the values:
point_size = 32
resolution = 96 (from FT_Get_WinFNT_Header)
gives:
pixel_size = 42.6666666
This is a long way from our target height of 9!
The docs do go on to say:
pixel_size computed in the above formula does not directly relate to the size of characters on the screen. It simply is the size of the EM square if it was to be displayed. Each font designer is free to place its glyphs as it pleases him within the square.
But again, I am not sure if this is relevant for bitmap fonts.
.fon files are .exe files with an .fnt payload, which can be either a vector or a raster font. If it is a raster font (which is most likely), then the dfPixHeight value in the FNT header tells you what size it's meant to be; FreeType2 exposes this as the pixel_height field of FT_WinFNT_Header.
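
A minimal sketch of reading that field with FreeType (the "serife.fon" filename is from the question; error handling trimmed):

#include <ft2build.h>
#include FT_FREETYPE_H
#include FT_WINFONTS_H
#include <stdio.h>

int main(void)
{
    FT_Library library;
    FT_Face    face;
    if (FT_Init_FreeType(&library) != 0 ||
        FT_New_Face(library, "serife.fon", 0, &face) != 0)
        return 1;

    // For a raster FNT payload, pixel_height corresponds to dfPixHeight,
    // i.e. the size the bitmaps were actually drawn at.
    FT_WinFNT_HeaderRec header;
    if (FT_Get_WinFNT_Header(face, &header) == 0)
        printf("native pixel height (dfPixHeight): %d\n", (int)header.pixel_height);

    // Bitmap faces also report their fixed strikes; selecting one of these with
    // FT_Select_Size avoids scaling, rather than guessing a value to pass to
    // FT_Set_Pixel_Sizes.
    for (FT_Int i = 0; i < face->num_fixed_sizes; i++)
        printf("strike %d: %dx%d px\n", i,
               (int)face->available_sizes[i].width,
               (int)face->available_sizes[i].height);

    FT_Done_Face(face);
    FT_Done_FreeType(library);
    return 0;
}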
(And of course, note that using any size other than "the actual raster-size of the FNT" is going to lead to hilarious headaches because bitmap scaling is the kind of madness that's so bad, OpenType instead went with "just embed as many bitmaps as you need, at however many sizes you need, because that's the only way your bitmaps are going to look good")
The FNT-specific FT2 documentation can be found over on https://www.freetype.org/freetype2/docs/reference/ft2-winfnt_fonts.html but you may need to read it in conjunction with https://jeffpar.github.io/kbarchive/kb/065/Q65123 (or https://web.archive.org/web/20120215123301/http://support.microsoft.com/kb/65123) to find any further mappings that you might need between names/fields as defined in the FNT spec and FT2's naming conventions.

ID2D1RenderTarget::GetSize returning physical pixels instead of DIPs

I'm currently getting started with Win32 and Direct2D and have reached the chapter on DPI and DIPs. At the very bottom it says ID2D1RenderTarget::GetSize returns the size in DIPs and ID2D1RenderTarget::GetPixelSize returns it in physical pixels. Their individual documentation confirms that.
However I cannot observe that ID2D1RenderTarget::GetSize actually returns DIP.
I tested it by
setting the scale of one of my two otherwise identical displays to 175%,
adding <dpiAwareness xmlns="http://schemas.microsoft.com/SMI/2016/WindowsSettings">PerMonitorV2</dpiAwareness> to my application manifest,
obtaining
D2D1_SIZE_U sizeP = pRenderTarget->GetPixelSize();
D2D1_SIZE_F size = pRenderTarget->GetSize();
in method MainWindow::CalculateLayout from this example (and printing the values),
and moving the window from one screen to the other, and arbitrarily resizing it.
I can see the window-border changing size when moving from one display to another. However, the values in both sizeP and size (besides being int and float) are always identical and correspond to the physical size of the ID2D1HwndRenderTarget.
Since I do not expect the documentation to be flawed, I wonder what I am missing to actually get the DIP of the window of the ID2D1HwndRenderTarget pRenderTarget.
The size returned by GetSize() is relative to the DPI of the render target itself, which is set using ID2D1RenderTarget::SetDpi. That value is not automatically tied to the DPI reported by the display system, which can be queried using ID2D1Factory::GetDesktopDpi or GetDpiForMonitor.
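
For illustration, a small sketch of wiring that up (pRenderTarget and the window handle follow the example's names; using GetDpiForWindow assumes Windows 10 1607 or later):

#include <windows.h>
#include <d2d1.h>

// Sketch: keep the render target's DPI in sync with the window it draws to.
// Call after creating the render target and again when WM_DPICHANGED arrives.
void UpdateRenderTargetDpi(ID2D1HwndRenderTarget* pRenderTarget, HWND hwnd)
{
    // In a per-monitor-aware process this returns the DPI of the monitor
    // the window currently lives on (96.0f corresponds to 100% scaling).
    const float dpi = static_cast<float>(GetDpiForWindow(hwnd));
    pRenderTarget->SetDpi(dpi, dpi);
    // From here on, GetPixelSize() still reports physical pixels while
    // GetSize() reports DIPs, i.e. roughly pixels * 96 / dpi.
}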

Detecting if a display supports 30-bit Color

Apple recently enabled 30-bit color support on OS X. They’ve posted some sample code that shows how to enable this. However, they don’t seem to provide an example for how you can detect when your app is running on a display that supports 30-bit color.
We’d like to detect when a display supports 30-bit color, enable 30-bit color only on those displays, and revert to 24-bit color otherwise.
Does anyone know how to do that?
So far I’ve tried using the CGDisplay APIs (CGDisplayCopyDisplayMode and CGDisplayModeCopyPixelEncoding) to query the display’s pixel encoding. But these seem to always return 24-bit encodings and CGDisplayModeCopyPixelEncoding was deprecated in Mac OS X 10.11. I’ve also tried using NSScreen’s “depth” property, but this also returns 24-bits per pixel.
The built-in System Information app is obviously able to get at this information, I just can’t figure out how they’re doing it. Any hints?
As of macOS 10.12 Apple has some new APIs that allow you to detect if a display is capable of wide gamut color (i.e. deep color). There are a few ways to do this:
Use NSScreen's - (BOOL)canRepresentDisplayGamut:(NSDisplayGamut)displayGamut
NSArray<NSScreen *> * screens = [NSScreen screens];
BOOL hasWideGamutScreen = NO;
for ( NSScreen * screen in screens )
{
    if ( [screen canRepresentDisplayGamut:NSDisplayGamutP3] )
    {
        hasWideGamutScreen = YES;
        break;
    }
}
Use CGColorSpaceIsWideGamutRGB(...):
hasWideGamutScreen = CGColorSpaceIsWideGamutRGB( screen.colorSpace.CGColorSpace );
NSWindow also has - (BOOL)canRepresentDisplayGamut:(NSDisplayGamut)displayGamut.
I don't know that you're guaranteed to be on a 30-bit capable display when the display is considered "wide gamut RGB" or capable of NSDisplayGamutP3, but this appears to be Apple's official way of determining if a display is capable of wide gamut color.
There are various bad options.
First, if you log a display mode (i.e. cast to id and pass to NSLog(@"%@", ...)), you'll find that the real pixel encoding is in there. That's interesting, but you really don't want to parse that description.
If you pass (__bridge CFDictionaryRef)@{ (__bridge NSString*)kCGDisplayShowDuplicateLowResolutionModes: @YES } as the options parameter to CGDisplayCopyAllDisplayModes(), you'll find that you get a bunch of additional display modes. This key is documented in the headers but not the reference documentation. For a Retina display, some of the extra modes are the 2x-scaled counterparts of unscaled display modes. Others are the 24-bit counterparts of the 30-bit-masquerading-as-24-bit modes. These are identical in every way you can query via the API, but the logging shows the difference. (By the way, attempting to switch to one of those modes will fail.)
I think, but you'd have to verify, that you don't get these pairs of seemingly-identical modes except for a 30-bit-color-capable display.
You may be able to get the information from IOKit. You'd have to use the deprecated function CGDisplayIOServicePort() to get the service port of the IOFramebuffer object representing the GPU-display pair. You could then use IORegistryEntrySearchCFProperty() to search up the containment hierarchy in the Service plane to find an object which has a property like "display-bpc" or "display-pixel-component-bits" and get its value. At least, there's such an object and properties on a couple of systems I'm able to test, although they both use AMD GPUs and the property is on an AMD-specific object, so it may not be reliable.
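
A hedged sketch of that IOKit lookup (the property names are the ones observed on those AMD systems and may not exist elsewhere; CGDisplayIOServicePort() is deprecated, as noted):

#include <ApplicationServices/ApplicationServices.h>
#include <IOKit/IOKitLib.h>
#include <stdio.h>

// Sketch: walk up the IORegistry from the framebuffer for a display and look
// for a bits-per-component property. The property names below were seen on
// AMD GPUs only, so treat a miss as "unknown" rather than "not 30-bit".
static void printDisplayBitsPerComponent(CGDirectDisplayID displayID) {
    // Deprecated since 10.9, but still the way to reach the IOFramebuffer
    // object for this GPU-display pair.
    io_service_t framebuffer = CGDisplayIOServicePort(displayID);
    if (framebuffer == IO_OBJECT_NULL) return;

    const CFStringRef keys[] = { CFSTR("display-bpc"), CFSTR("display-pixel-component-bits") };
    for (size_t i = 0; i < sizeof(keys) / sizeof(keys[0]); i++) {
        CFTypeRef value = IORegistryEntrySearchCFProperty(framebuffer,
                                                          kIOServicePlane,
                                                          keys[i],
                                                          kCFAllocatorDefault,
                                                          kIORegistryIterateRecursively |
                                                          kIORegistryIterateParents);
        if (value) {
            int bpc = 0;
            if (CFGetTypeID(value) == CFNumberGetTypeID())
                CFNumberGetValue((CFNumberRef)value, kCFNumberIntType, &bpc);
            printf("bits per component: %d\n", bpc);
            CFRelease(value);
            return;
        }
    }
}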
Finally, you can launch a subprocess to run system_profiler -xml SPDisplaysDataType and use the property-list-serialization API to build a property list object from the resulting XML. Then, you can find the information in there. You'd find the relevant display by matching _spdisplays_display-vendor-id with CGDisplayVendorNumber(), _spdisplays_display-product-id with CGDisplayModelNumber(), and _spdisplays_display-serial-number with CGDisplaySerialNumber(). Then, the depth is under the key spdisplays_depth, where a value of CGSThirtyBitColor means 30-bit color.
You should also file a bug report with Apple requesting a sane way to do this.
What worked for me is passing the depthLimit property of my NSWindow to NSBitsPerPixelFromDepth and checking that the return value is greater than 24. Caveat: I only have two iMacs to test on, one from 2012 and another from 2017, and they're both running High Sierra.

Windows CF_DIB to CF_BITMAP in clipboard

I have no practice with Windows programming at all, but now I have a problem I want to fix in some program. I need to place an image on the Windows clipboard, and I have a raw pointer to a valid DIB (device-independent bitmap); in my experiments the DIB header version is 3. The program uses delayed clipboard rendering, meaning that at first it calls SetClipboardData(CF_DIB, NULL), and then, on the WM_RENDERFORMAT message, it places the actual data on the clipboard with SetClipboardData(format, dibDataPointer).
When I open clipbrd.exe (on Windows XP) and choose the DIB view, I can see the image without any problem. But MSDN says that the system can automatically render CF_DIB as CF_BITMAP, and I think that's why clipbrd.exe shows two formats: DIB and BITMAP. When I select the bitmap format in clipbrd.exe I get an error. Looking at the code, I saw that there is no case for CF_BITMAP in the message handler, so when the system asks to render CF_BITMAP nothing valid is placed on the clipboard. So I added something like this:
switch (format) {
case CF_DIB:
case CF_BITMAP:              // new code
    if (format == CF_BITMAP) // new code
        format = CF_DIB;     // new code
    ....
    SetClipboardData(format, dibDataPointer);
    ....
and hoped (actually, I knew it wasn't going to work, but gave it a try) that the system would recognize that I was responding to CF_BITMAP with DIB data and convert it automatically.
So how can I supply proper data when the system sends WM_RENDERFORMAT for CF_BITMAP if all I have is DIB data? (Ideally I'd use the system's ability to convert DIB to BITMAP rather than create the BITMAP from the DIB manually.)
Update:
So I found how to fix the issue. First, register only CF_DIB for delayed rendering, with SetClipboardData(CF_DIB, NULL); the CF_BITMAP format will be added to the available clipboard formats by Windows automatically. Then, when WM_RENDERFORMAT arrives asking for CF_DIB (the system won't ask for CF_BITMAP because you didn't register it yourself), pass DIB data whose header is the first version, described by the BITMAPINFOHEADER structure (my data has a v3 header; I doubt v4 and v5 headers will work), with a positive biHeight (Y coordinate). In that particular case the system converts CF_DIB to CF_BITMAP automatically. I don't know whether this works with any of the bitmap compression methods, because I've only tested uncompressed BI_RGB images.
Every later version of the DIB header is backward compatible with BITMAPINFOHEADER and can simply be copied with memcpy, but don't forget to set biSize to sizeof(BITMAPINFOHEADER). The second part is ensuring a positive Y coordinate. (I really hope that DIBs with compressed data always have a positive height.) For uncompressed bitmaps biHeight can be negative and has to be made positive, which turns the image upside down, so the image rows need to be reversed as well. Note that the rows are aligned to 4 bytes.
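
A rough sketch of that flow (the function names and the dibData/dibSize parameters are placeholders for whatever the program already has; it assumes an uncompressed BI_RGB packed DIB as described above):

#include <windows.h>
#include <string.h>

// Sketch: announce the image with delayed rendering. Registering only CF_DIB
// lets Windows advertise a synthesized CF_BITMAP alongside it.
void AnnounceImageOnClipboard(HWND hwnd)
{
    if (!OpenClipboard(hwnd)) return;
    EmptyClipboard();
    // NULL handle: Windows will ask for the data later via WM_RENDERFORMAT.
    SetClipboardData(CF_DIB, NULL);
    CloseClipboard();
}

// Called from the window procedure on WM_RENDERFORMAT when wParam == CF_DIB.
// dibData must be a packed DIB: a BITMAPINFOHEADER (biSize == 40, positive
// biHeight), then the color table (if any), then the bottom-up pixel rows.
void RenderDibToClipboard(const void* dibData, SIZE_T dibSize)
{
    HGLOBAL hMem = GlobalAlloc(GMEM_MOVEABLE, dibSize);
    if (!hMem) return;
    memcpy(GlobalLock(hMem), dibData, dibSize);
    GlobalUnlock(hMem);
    // Per MSDN, do not open the clipboard when responding to WM_RENDERFORMAT;
    // just call SetClipboardData with the requested format.
    SetClipboardData(CF_DIB, hMem);
}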
And the worst thing is that all these header standards are described in Microsoft's documentation, and yet: Paint accepts a DIB with a v3 header and a negative height, clipbrd.exe accepts a v3 header with a positive height, WordPad wants only a v1 header with a positive height, and Windows converts DIB to BITMAP only for a v1 header with a positive height. These are all applications that ship with Windows (there is no clipbrd.exe in Vista or later). This is a terrible hell. I hope there won't be any more Windows programming in my entire life.

Enumerating devices and display modes for OpenGL rendering

I'm currently writing an OpenGL renderer and am part-way through writing some classes for enumerating display adaptors, devices and modes for use in drop-down lists.
I'm using EnumDisplayDevices to get the adaptors and then EnumDisplaySettings for each device, giving me bpp, width, height and refresh rate. However, I'm not sure how to find out which modes are available full-screen (there doesn't appear to be a flag for it in the DEVMODE structure). Can I assume all modes listed can, in principle, be instantiated full-screen?
As a follow up question, is this approach to device enumeration generally the best way of getting the required information on Windows?
OpenGL doesn't make this distinction between windowed and fullscreen mode. If you want an OpenGL program to be fullscreen, you just make the window top-level, borderless, without decoration, staying on top and at maximum size.
The above is actually a dumb question. By definition, windowed mode must use the current display settings. All other modes must be available full-screen (provided the OS supports them; e.g. 640x480 is not advisable on Vista/7).
Hmmph, not correct at all, and with an attitude too. You have a variety of functions that can be used.
SetPixelFormat, ChoosePixelFormat, ChangeDisplaySettings.
The pixel-format functions will let you enumerate the available modes. ChangeDisplaySettings will allow you to set whatever screen mode (including bit depth) your app wants. Look them up on MSDN.
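
To illustrate the enumeration side of the question, a sketch that lists the modes of the primary display device and asks Windows, via CDS_TEST, whether each one could be applied (without actually switching):

#include <windows.h>
#include <stdio.h>

// Sketch: enumerate display modes of the primary display device and test
// each one with CDS_TEST, which validates the mode without changing it.
void ListFullscreenCandidates()
{
    DEVMODE mode = {};
    mode.dmSize = sizeof(mode);

    for (DWORD i = 0; EnumDisplaySettings(NULL, i, &mode); ++i)
    {
        LONG result = ChangeDisplaySettings(&mode, CDS_TEST);
        printf("%lux%lu %lu bpp @ %lu Hz -- %s\n",
               mode.dmPelsWidth, mode.dmPelsHeight,
               mode.dmBitsPerPel, mode.dmDisplayFrequency,
               result == DISP_CHANGE_SUCCESSFUL ? "OK" : "rejected");
    }
}

When actually switching for a full-screen window you would pass CDS_FULLSCREEN instead of CDS_TEST, and restore the desktop afterwards with ChangeDisplaySettings(NULL, 0).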
