The first parameter to the EnumFontFamiliesEx function, according to the MSDN documentation, is described as:
hdc [in]
A handle to the device context from which to enumerate the fonts.
1. What exactly does it mean?
2. What does device context mean?
3. Why should a device context be related to fonts?
Question (3) is a legitimately difficult thing to find an explanation for, but the reason is simple enough:
Some devices provide their own font support. For example, a PostScript printer will let you use PostScript fonts, but those same fonts won't be usable when rendering on-screen, or on another printer without PostScript support. Another example: a plotter (which draws with a motorized pen) requires vector fonts with a fixed stroke thickness, so raster fonts can't be used with such a device.
If you're interested in device-specific font support, you'll want to know about the GetDeviceCaps function.
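For example, a minimal sketch (assuming you already have an HDC for the device in question) that asks a device what kinds of fonts it can handle might look like this; TEXTCAPS and NUMFONTS are standard GetDeviceCaps indices:

```
// Sketch: ask a device context what font support the device itself offers.
#include <windows.h>
#include <cstdio>

void ReportFontCaps(HDC hdc)
{
    int caps = GetDeviceCaps(hdc, TEXTCAPS);
    printf("device-specific fonts: %d\n", GetDeviceCaps(hdc, NUMFONTS));
    printf("can use raster fonts:  %s\n", (caps & TC_RA_ABLE) ? "yes" : "no");
    printf("can use vector fonts:  %s\n", (caps & TC_VA_ABLE) ? "yes" : "no");
}
```

A screen DC and a PostScript printer DC will typically report quite different answers here.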
The Windows API uses the concept of handles extensively. A handle is an integer value that you use as a token to access an API resource. You can think of it as a kind of "this" pointer, although it is definitely not a pointer.
A device context is an object within the Windows API that represents something you can draw on or display graphics on. It might be a printer, a bitmap, a screen, or some other context in which creating graphics makes sense. In Windows, fonts must be selected into a device context before they can be used, and to find out which fonts are currently available in a given device context, you enumerate them. That's where EnumFontFamiliesEx comes in.
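As a rough sketch, enumerating every family on a given DC looks something like this; the zeroed LOGFONT with DEFAULT_CHARSET asks for all character sets and families:

```
// Sketch: list every font family available on a device context.
// The callback runs once per family; returning 1 continues enumeration.
#include <windows.h>
#include <cstdio>

static int CALLBACK FamilyProc(const LOGFONTW* lf, const TEXTMETRICW*,
                               DWORD fontType, LPARAM)
{
    wprintf(L"%ls%ls\n", lf->lfFaceName,
            (fontType & DEVICE_FONTTYPE) ? L"  (device font)" : L"");
    return 1;
}

void ListFonts(HDC hdc)
{
    LOGFONTW lf = { };
    lf.lfCharSet = DEFAULT_CHARSET;   // all character sets, all families
    EnumFontFamiliesExW(hdc, &lf, FamilyProc, 0, 0);
}
```

Passing a screen DC (GetDC(NULL)) and a printer DC can give different lists, which is exactly why the function wants an hdc.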
Microsoft has other articles on device contexts:
https://learn.microsoft.com/en-us/windows/win32/gdi/about-device-contexts
An application must inform GDI to load a particular device driver and, once the driver is loaded, to prepare the device for drawing operations (such as selecting a line color and width, a brush pattern and color, a font typeface, a clipping region, and so on). These tasks are accomplished by creating and maintaining a device context (DC). A DC is a structure that defines a set of graphic objects and their associated attributes, and the graphic modes that affect output. The graphic objects include a pen for line drawing, a brush for painting and filling, a bitmap for copying or scrolling parts of the screen, a palette for defining the set of available colors, a region for clipping and other operations, and a path for painting and drawing operations. Unlike most of the structures, an application never has direct access to the DC; instead, it operates on the structure indirectly by calling various functions.
Font rendering is, of course, a kind of drawing, which is why fonts are tied to a device context.
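To make the "fonts must be selected into a device context" point concrete, here is a minimal sketch (the face name and height are arbitrary example values) of creating a font and drawing with it through a DC:

```
// Sketch: a font only affects output once it is selected into a device
// context; text drawn through that DC then uses it.
#include <windows.h>

void DrawCaption(HDC hdc)
{
    LOGFONTW lf = { };
    lf.lfHeight = -13;                                   // negative: character height in pixels
    lstrcpynW(lf.lfFaceName, L"Segoe UI", LF_FACESIZE);

    HFONT font = CreateFontIndirectW(&lf);
    HGDIOBJ old = SelectObject(hdc, font);               // the font is now part of the DC's state

    TextOutW(hdc, 10, 10, L"Hello, GDI", 10);

    SelectObject(hdc, old);                              // restore the previous font...
    DeleteObject(font);                                  // ...before deleting ours
}
```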
I am now converting a project's render engine from GDI to D2D. The GDI code uses CreateFontIndirect with font size -13 and font family "Segoe UI". The D2D code uses CreateTextFormat with font size 13 and font family "Segoe UI". The effect is shown in the following picture:
In the GDI case, the system didn't find the Chinese characters in "Segoe UI", so it looked in the registry's "SystemLink" key to locate a Chinese font; on my machine that is "YaHei". But in the D2D case, the system doesn't pick "YaHei". Which Chinese font will it choose to draw with, and how does that work?
It works according to DirectWrite's layout logic. See IDWriteTextLayout2::SetFontFallback(): it lets you provide your own fallback implementation if the default configuration is not satisfactory.
Basically, the layout object will call your custom fallback's methods to map characters to fonts; you can then decide which characters you want to map to which font, potentially reusing the system fallback implementation for the cases you don't care about.
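A rough sketch of that approach, assuming you have an IDWriteFactory2 and an IDWriteTextLayout2 and want CJK characters mapped to "Microsoft YaHei" (error handling omitted):

```
// Sketch: custom fallback that routes CJK Unified Ideographs to Microsoft
// YaHei and defers everything else to the system fallback list.
#include <dwrite_2.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void ApplyChineseFallback(IDWriteFactory2* factory, IDWriteTextLayout2* layout)
{
    ComPtr<IDWriteFontFallbackBuilder> builder;
    factory->CreateFontFallbackBuilder(&builder);

    // Map the CJK Unified Ideographs block to Microsoft YaHei.
    DWRITE_UNICODE_RANGE cjk = { 0x4E00, 0x9FFF };
    const WCHAR* families[] = { L"Microsoft YaHei" };
    builder->AddMapping(&cjk, 1, families, 1, nullptr, nullptr, nullptr, 1.0f);

    // Everything else falls through to the system fallback.
    ComPtr<IDWriteFontFallback> systemFallback;
    factory->GetSystemFontFallback(&systemFallback);
    builder->AddMappings(systemFallback.Get());

    ComPtr<IDWriteFontFallback> fallback;
    builder->CreateFontFallback(&fallback);
    layout->SetFontFallback(fallback.Get());
}
```

If you need full per-character control at runtime, you can instead implement the IDWriteFontFallback interface yourself and hand that object to SetFontFallback.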
I don't have any Deep Color-capable hardware attached to my computer, so I haven't been able to experiment myself, and searching online for "win32 deep-color" or "win32 10-bit color" (or 30-bit, 48-bit or 64-bit) yields nothing relevant or recent. The top result is still this NVIDIA PDF from 2009: https://www.nvidia.com/docs/IO/40049/TB-04701-001_v02_new.pdf - it describes using OpenGL and an NVIDIA API for displaying images with more than 8 bits per channel.
I understand how using OpenGL allows 30-bit color images to be displayed: it effectively bypasses the operating system and the OpenGL surface is rendered in deep-color on the GPU and sent directly to the monitor in an appropriate format over DisplayPort or HDMI.
But what options are there outside of OpenGL?
In Win32, after you create a window with CreateWindow, you render it by handling the WM_PAINT message and calling BeginPaint, which gives you a handle to a GDI device context, and that context cannot be more than 32 bpp (8 bits per channel).
While GDI ostensibly abstracts away implementation details of the rendering device, including color depth, it is impossible to specify, for example, a 10-bit-per-channel RGB value (the COLORREF type is hardcoded as a 32-bit DWORD holding 8 bits per channel), so the abstraction is leaky.
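For reference, a minimal sketch of the paint path being described (the window procedure and the color are just placeholders):

```
// Sketch: the classic GDI paint path. COLORREF packs a color as 0x00BBGGRR,
// so RGB() can only ever express 8 bits per channel.
#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_PAINT:
    {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);        // a GDI device context, at most 32 bpp

        HBRUSH brush = CreateSolidBrush(RGB(255, 128, 0));  // 8 bits per channel
        FillRect(hdc, &ps.rcPaint, brush);
        DeleteObject(brush);

        EndPaint(hwnd, &ps);
        return 0;
    }
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}
```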
Does this mean it is impossible to display 30 bpp / Deep Color content on the Windows desktop from a program handling WM_PAINT, and that OpenGL is the only way?
And what would happen if you attempted to blit from an in-memory OpenGL rendering buffer back to the window surface? (i.e. what happens if you press PrintScreen while displaying a Deep Color Blu-ray disc in a BD player, or while displaying 30-bit content in Photoshop?)
I'm thinking of creating a simple game that displays itself on the external monitor, if one is available.
I would like this to be as simple as possible: the program itself handles activating the external monitor and targets the game window there automatically on start (via a command-line tool, an API, ...?). A mirrored view would do fine as well.
Is this even possible? Is there a good alternative, besides making (non-technical) users set up their monitor and so on by themselves?
I do not have a preferred language to work with; Java, C(++), C#, anything would do as long as it runs on Windows 7+.
Here are just a few examples of APIs related to multiple monitors / displays (pretty much, first relevant results of a Google search):
http://vb.mvps.org/articles/vsm20090302.pdf
http://www.codeproject.com/KB/system/multiplemonitor.aspx
http://www.realtimesoft.com/multimon/programming/
EnumDisplayMonitors will be a common point for most of these, the documentation of which is available at http://msdn.microsoft.com/en-us/library/dd162610%28VS.85%29.aspx :
The EnumDisplayMonitors function enumerates display monitors (including invisible pseudo-monitors associated with the mirroring drivers) that intersect a region formed by the intersection of a specified clipping rectangle and the visible region of a device context. EnumDisplayMonitors calls an application-defined MonitorEnumProc callback function once for each monitor that is enumerated. Note that GetSystemMetrics (SM_CMONITORS) counts only the display monitors.
See also ChangeDisplaySettingsEx, which can be used to configure the displays, including "Position of the device in a multi-monitor configuration."
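As a sketch of how those pieces could fit together (hwndGame is assumed to be your game's top-level window; error handling omitted), you can enumerate the monitors, pick a non-primary one, and move the window onto it:

```
// Sketch: find a monitor other than the primary one and move a window there.
#include <windows.h>

struct SecondaryMonitor { RECT rc; bool found; };

static BOOL CALLBACK FindSecondary(HMONITOR hMon, HDC, LPRECT, LPARAM lParam)
{
    SecondaryMonitor* out = reinterpret_cast<SecondaryMonitor*>(lParam);
    MONITORINFO mi = { sizeof(mi) };
    if (GetMonitorInfo(hMon, &mi) && !(mi.dwFlags & MONITORINFOF_PRIMARY))
    {
        out->rc = mi.rcMonitor;
        out->found = true;
        return FALSE;              // stop enumerating; we have our monitor
    }
    return TRUE;                   // keep looking
}

void MoveToExternalMonitor(HWND hwndGame)
{
    SecondaryMonitor sm = {};
    EnumDisplayMonitors(NULL, NULL, FindSecondary, reinterpret_cast<LPARAM>(&sm));
    if (sm.found)
    {
        SetWindowPos(hwndGame, NULL, sm.rc.left, sm.rc.top,
                     sm.rc.right - sm.rc.left, sm.rc.bottom - sm.rc.top,
                     SWP_NOZORDER | SWP_SHOWWINDOW);
    }
}
```

If the external monitor is attached but not active, ChangeDisplaySettingsEx is the function that can enable it and set its position before you move the window.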
I'm currently writing an OpenGL renderer and am part-way through writing some classes for enumerating display adaptors, devices and modes for use in drop-down lists.
I'm using EnumDisplayDevices to get the adaptors and then EnumDisplaySettings for each device, giving me bpp, width, height and refresh rate. However, I'm not sure how to find out which modes are available full-screen (there doesn't appear to be a flag for it in the DEVMODE structure). Can I assume all modes listed can, in principle, be instantiated full-screen?
As a follow-up question, is this approach to device enumeration generally the best way to get the required information on Windows?
OpenGL does not have this distinction between windowed and fullscreen modes. If you want an OpenGL program to be fullscreen, you just make the window top-level, borderless, undecorated, always on top, and maximum size.
The above is actually a dumb question. By definition, windowed mode must use the current display settings. All other modes must be available full-screen (provided the OS supports them; e.g. 640x480 is not advisable on Vista/7).
Hmmph, not correct at all, and with an attitude too. You have a variety of functions that can be used:
SetPixelFormat, ChoosePixelFormat, ChangeDisplaySettings.
The PixelFormat functions let you enumerate the available pixel formats. ChangeDisplaySettings will let you set whatever screen mode (including bit depth) your app wants. Look them up in MSDN.
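As a rough sketch of the display-mode side of that (the 1280x720 values are only an example), EnumDisplaySettings lists the modes the driver reports and ChangeDisplaySettings switches to one of them full-screen:

```
// Sketch: enumerate the primary display's modes, then switch to a specific
// one. CDS_FULLSCREEN marks the change as temporary; passing NULL restores
// the desktop mode afterwards.
#include <windows.h>
#include <cstdio>

void ListAndSetMode()
{
    DEVMODE dm = { };
    dm.dmSize = sizeof(dm);
    for (DWORD i = 0; EnumDisplaySettings(NULL, i, &dm); ++i)
    {
        printf("%lux%lu %lu bpp @ %lu Hz\n",
               dm.dmPelsWidth, dm.dmPelsHeight,
               dm.dmBitsPerPel, dm.dmDisplayFrequency);
    }

    DEVMODE want = { };
    want.dmSize = sizeof(want);
    want.dmPelsWidth  = 1280;
    want.dmPelsHeight = 720;
    want.dmBitsPerPel = 32;
    want.dmFields = DM_PELSWIDTH | DM_PELSHEIGHT | DM_BITSPERPEL;

    if (ChangeDisplaySettings(&want, CDS_FULLSCREEN) == DISP_CHANGE_SUCCESSFUL)
    {
        // ... run full-screen, then restore the original mode:
        ChangeDisplaySettings(NULL, 0);
    }
}
```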
I was trying some code to capture part of the screen using GetPixel on Windows, with a NULL device context (to capture the screen instead of a window), but it was really slow. It seems that GetDIBits() can be fast, but it also seems a bit complicated... I wonder if there is a library that can put the whole region into an array, so that pixel[x][y] returns the 24-bit color code of that pixel?
Or does such a library exist on the Mac? Or do Ruby or Python already have a library that can do that?
I've never done this but I'd try to:
1. Create a memory device context (DC) using CreateCompatibleDC, passing it the device context of the desktop (GetDC(NULL)).
2. Create a bitmap (using CreateCompatibleBitmap) the same size as the region you're capturing.
3. Select the bitmap into the DC you created (using SelectObject).
4. Do a BitBlt from the desktop DC to the DC you created (using the SRCCOPY flag).
Working with device contexts can cause GDI leaks if you do things in the wrong order, so make sure you read the documentation on all the GDI functions you use (e.g. SelectObject, GetDC, etc.).
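Here is a hedged sketch of those steps, reading the pixels back into a 24-bit array with GetDIBits (the function name is mine; error handling omitted):

```
// Sketch: capture a region of the desktop into a top-down 24-bit BGR buffer.
#include <windows.h>
#include <vector>

std::vector<BYTE> CaptureScreen(int x, int y, int width, int height)
{
    HDC screenDC = GetDC(NULL);                       // DC for the whole screen
    HDC memDC    = CreateCompatibleDC(screenDC);      // memory DC to copy into
    HBITMAP bmp  = CreateCompatibleBitmap(screenDC, width, height);
    HGDIOBJ old  = SelectObject(memDC, bmp);

    // Copy the requested region of the desktop into the memory bitmap.
    BitBlt(memDC, 0, 0, width, height, screenDC, x, y, SRCCOPY);

    // GetDIBits requires the bitmap not to be selected into a DC.
    SelectObject(memDC, old);

    // Ask for top-down rows, 24 bits per pixel; rows are padded to 4 bytes.
    BITMAPINFO bi = { };
    bi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bi.bmiHeader.biWidth       = width;
    bi.bmiHeader.biHeight      = -height;             // negative = top-down
    bi.bmiHeader.biPlanes      = 1;
    bi.bmiHeader.biBitCount    = 24;
    bi.bmiHeader.biCompression = BI_RGB;

    int stride = ((width * 3 + 3) / 4) * 4;
    std::vector<BYTE> pixels(stride * height);
    GetDIBits(memDC, bmp, 0, height, pixels.data(), &bi, DIB_RGB_COLORS);

    // Clean up in the reverse order of creation to avoid GDI leaks.
    DeleteObject(bmp);
    DeleteDC(memDC);
    ReleaseDC(NULL, screenDC);
    return pixels;                                    // BGR bytes, row-major
}
```

The pixel at (px, py) then starts at pixels[py * stride + px * 3], stored as blue, green, red.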