Given an HFONT, how do I tell if it's a symbol font? A PDF library I'm using needs to treat symbol fonts differently, so I need a way to programmatically tell whether any given font is a symbol font or not.
Use GetObject to get the font's properties to a LOGFONT structure. Check the lfCharSet member; if it's SYMBOL_CHARSET, you have a symbol font.
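For example, a minimal sketch of that check (hFont here stands for whatever HFONT you were handed):

LOGFONT lf = {};
if (GetObject(hFont, sizeof(lf), &lf))
{
    // lfCharSet reflects the character set that was requested for this font.
    bool isSymbolFont = (lf.lfCharSet == SYMBOL_CHARSET);
}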
Mark Ransom's answer is going to work 99.999% of the time, but there's a theoretical possibility that it could give the wrong answer.
To avoid this possibility, you should use GetTextMetrics to get the TEXTMETRIC of the actual font and check whether tmCharSet is SYMBOL_CHARSET.
What's the difference between checking lfCharSet and tmCharSet?
When you create an HFONT, Windows makes an internal copy of the LOGFONT. That LOGFONT describes the font you want, which may be different from the font you actually get.
When you select the HFONT into a device (or information) context, the font mapper finds the actual font that best matches the LOGFONT associated with that HFONT. The best match, however, might not be an exact match. So when you need to find out something about the actual font, you should take care to query the HDC rather than the HFONT.
If you query the HFONT with GetObject, you just get the original LOGFONT back. GetObject doesn't tell you anything about the actual font because it doesn't know what actual font the font mapper chose (or will choose).
APIs that ask about the font selected into a particular DC, like GetTextMetrics, GetTextFace, etc., will give you information about the actual font.
For this problem, Mark's answer (using GetObject) is probably always going to work, because the odds of the font mapper choosing a symbol font when you want a textual font (or vice versa) are minuscule. In general, though, when you want to know something about the actual font, find a way to ask the HDC.
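A minimal sketch of that DC-based check, assuming hdc is a device (or information) context you can borrow and hFont is the handle in question:

HGDIOBJ oldFont = SelectObject(hdc, hFont);   // ask the DC, not the HFONT
TEXTMETRIC tm = {};
bool isSymbolFont = false;
if (GetTextMetrics(hdc, &tm))
    isSymbolFont = (tm.tmCharSet == SYMBOL_CHARSET);   // charset of the actual font
SelectObject(hdc, oldFont);                   // put the DC back the way it was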
FreeType has font metrics for the underline position, but I can't seem to find any metrics for the strikethrough position. How do text engines usually compute this value? Should I just put it at 1/3*ascent or whatever looks good? I suppose that for Latin at least this should be 1/2*height of "m" but I'm looking for a more general solution.
This information is not provided by all of the various font formats that FreeType supports, so it is not exposed on the "main" interface.
In the (common but not universal) case of TrueType or OpenType fonts, it can be retrieved from the TT_OS2 table, in the yStrikeoutSize and yStrikeoutPosition fields; be prepared for the table to be missing, or for yStrikeoutSize to be zero or negative and thus unusable.
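A sketch of that lookup, assuming face is an already-loaded FT_Face (the fallback heuristic is left to you):

#include <ft2build.h>
#include FT_FREETYPE_H
#include FT_TRUETYPE_TABLES_H

/* Returns true and fills size/position (in font units) when the face carries
   a usable OS/2 strikeout entry. */
static bool GetStrikeout(FT_Face face, FT_Short *size, FT_Short *position)
{
    TT_OS2 *os2 = (TT_OS2 *)FT_Get_Sfnt_Table(face, FT_SFNT_OS2);
    if (os2 == NULL || os2->yStrikeoutSize <= 0)
        return false;   /* table missing, or size zero/negative: unusable */
    *size = os2->yStrikeoutSize;
    *position = os2->yStrikeoutPosition;
    return true;
}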
I do not remember an equivalent for plain PostScript fonts (.pfb/.pfa; not even in the .afm).
The various bitmap formats might have the information available; an example is strike_out in Windows FNT. Note that this is only the position, while the size defaults to the same as the underline. Basically, every format is on its own here.
In Windows, the CreateFontIndirect() call can silently substitute compatible fonts if the requested font is not available. The GetObject() call does not reflect this substitution; it returns the same LOGFONT passed in. How can I find what font was actually created? Alternatively, how can I force Windows to only return the exact font requested?
In Windows, the CreateFontIndirect() call can silently substitute compatible fonts if the requested font is not available. The GetObject() call does not reflect this substitution; it returns the same LOGFONT passed in.
It's not CreateFontIndirect that's doing the substitution. The substitution happens when the font is selected into the DC. CreateFontIndirect just gives you a handle to an internal copy of the LOGFONT. That's why GetObject gives you the same LOGFONT back.
How can I find what font was actually created?
If you select the HFONT into the target DC, you can then ask the DC for the information about the font that was actually chosen as the best match to the LOGFONT.
The face name is available with GetTextFace.
You can get metrics with GetTextMetrics.
If the selected font is TrueType or OpenType, you can get additional metrics with GetOutlineTextMetrics.
That essentially tells you what font was actually created.
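A sketch of that sequence, assuming hdc is the target DC and hFont is the handle you got from CreateFontIndirect:

HGDIOBJ oldFont = SelectObject(hdc, hFont);

TCHAR face[LF_FACESIZE] = {};
GetTextFace(hdc, LF_FACESIZE, face);   // name of the font that was actually chosen

TEXTMETRIC tm = {};
GetTextMetrics(hdc, &tm);              // metrics of the actual font

SelectObject(hdc, oldFont);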
Aside:
When doing something like print preview, you can start with a LOGFONT, select it into the printer DC (or IC), grab the details of the actual font (printers often substitute fonts), and then create a new LOGFONT that's more representative of the actual font. Select that into the screen DC, and, with appropriate size conversions, you get a pretty good match of what the user will actually get.
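A rough sketch of that round trip; hdcPrinter, hdcScreen, and hFont are assumptions, and error handling is omitted:

HGDIOBJ oldFont = SelectObject(hdcPrinter, hFont);

TEXTMETRIC tm = {};
GetTextMetrics(hdcPrinter, &tm);

LOGFONT lf = {};
GetTextFace(hdcPrinter, LF_FACESIZE, lf.lfFaceName);   // the face the printer actually used
// Convert the printer's character height into screen pixels.
lf.lfHeight  = -MulDiv(tm.tmHeight - tm.tmInternalLeading,
                       GetDeviceCaps(hdcScreen, LOGPIXELSY),
                       GetDeviceCaps(hdcPrinter, LOGPIXELSY));
lf.lfWeight  = tm.tmWeight;
lf.lfItalic  = tm.tmItalic;
lf.lfCharSet = tm.tmCharSet;

SelectObject(hdcPrinter, oldFont);
HFONT hScreenFont = CreateFontIndirect(&lf);   // select this into the screen DC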
To get the appropriate font on different language versions of the OS, call EnumFontFamiliesEx with the desired font characteristics in the LOGFONT structure, then retrieve the appropriate typeface name and create the font using CreateFont or CreateFontIndirect.
While it's not a universal way to get the actual font name from an HFONT, you can check beforehand what CreateFontIndirect would (most likely) return.
Judging from how MSDN suggests this as a good way to get a font family from a set of attributes, it seems close to how Windows internally performs the substitution.
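As an illustration of that enumeration approach (a sketch only; it does not reproduce the font mapper's exact choice, and hdc is assumed): fill in just the characteristics you care about and take the first face the system enumerates as the likely candidate.

static int CALLBACK FirstFaceProc(const LOGFONT *lf, const TEXTMETRIC *, DWORD, LPARAM lParam)
{
    lstrcpyn(reinterpret_cast<LPTSTR>(lParam), lf->lfFaceName, LF_FACESIZE);
    return 0;   // stop after the first candidate
}

TCHAR face[LF_FACESIZE] = {};
LOGFONT probe = {};
probe.lfCharSet = SHIFTJIS_CHARSET;   // example: the character set you need covered
EnumFontFamiliesEx(hdc, &probe, FirstFaceProc, reinterpret_cast<LPARAM>(face), 0);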
I'm doing color transformations on glyphs rendered with CTFontDrawGlyphs, but I do not want to apply those transformations to emoji glyphs, since they already carry meaningful color information.
So, when I have a CTRun of glyphs, can I detect if it is actually emoji/color font?
I could do a string compare of the PostScript name against "AppleColorEmoji", but that seems awfully wasteful to do all the time, and somewhat hacky if there ever happens to be another font with the same features.
Ah, I can get the symbolic traits with CTFontGetSymbolicTraits, and check for kCTFontTraitColorGlyphs (or kCTFontColorGlyphsTrait), which, while undocumented, is available in the public headers.
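A minimal sketch of that check, given a CTRunRef (the run's attribute dictionary is not owned by the caller, so nothing needs releasing here):

#include <CoreText/CoreText.h>

static bool RunUsesColorFont(CTRunRef run)
{
    CFDictionaryRef attrs = CTRunGetAttributes(run);
    CTFontRef font = (CTFontRef)CFDictionaryGetValue(attrs, kCTFontAttributeName);
    if (font == NULL)
        return false;
    CTFontSymbolicTraits traits = CTFontGetSymbolicTraits(font);
    return (traits & kCTFontTraitColorGlyphs) != 0;
}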
I want to adjust a Static control's size to its content size, so I need to calculate the size of its text content first. I found a way to use GetTextExtentPoint32 to calculate the size, but I need to set the DC's font to the same as the control's font first. Is there a better way to do this? Since I've already set the Static control's font once, I'd think I shouldn't need to set the DC's font a second time.
What is the best way to calculate the size of a Static control's text content? And is there a better way to autosize the Static control?
It sounds to me like you've already figured out the correct way to do it. Call GetTextExtentPoint32 to figure out the ideal size of the control given the text that it contains, then resize the control to the calculated size.
It's a lot of work, but that's what happens when you're working with the raw Win32 API. You don't have a handy wrapper library that abstracts all this for you in a Control.AutoSize() function. You could easily write your own function and re-use it, but the Win32 standard controls do not expose an "auto-size" API.
As far as the font goes, you will definitely need to make sure that the device context is using the same font as the control, otherwise you'll calculate the wrong size. But you don't have to create a new device context, request a handle to the static control's font, and select that into your new DC. Instead, you can just use the static control's own DC, obtained by calling the GetDC function with the handle of the static control window. Make sure that if you call GetDC, you always follow up with a call to ReleaseDC when you're finished!
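Putting that together, a sketch (the 256-character buffer and the function name are arbitrary choices, not requirements):

// Measure a static control's text using the control's own DC and font.
SIZE MeasureStaticText(HWND hwndStatic)
{
    TCHAR text[256] = {};
    GetWindowText(hwndStatic, text, 256);

    HDC hdc = GetDC(hwndStatic);
    HFONT hFont = (HFONT)SendMessage(hwndStatic, WM_GETFONT, 0, 0);   // NULL means the system font
    HGDIOBJ oldFont = hFont ? SelectObject(hdc, hFont) : NULL;

    SIZE size = {};
    GetTextExtentPoint32(hdc, text, lstrlen(text), &size);

    if (oldFont) SelectObject(hdc, oldFont);
    ReleaseDC(hwndStatic, hdc);
    return size;
}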
However, do note some caveats of the GetTextExtentPoint32 function that may interfere with the accuracy of the size you calculate:
It ignores clipping.
It does not take into account new lines (\n) or carriage returns (\r\n) when computing the height.
It does not take into account prefix characters (those preceded in the string by an ampersand, used to denote keyboard mnemonics), which matters if your static control does not have the SS_NOPREFIX style.
It may not return an accurate result on devices that implement kerning automatically.
(This is all mentioned in the linked documentation, but does anyone actually read that?)
Perhaps an easier alternative is to draw the text the same way that the static control is already doing. Unless you have the SS_SIMPLE style set (which uses TextOut or ExtTextOut to draw text as an optimization), static controls draw their text by calling the DrawText function with the appropriate parameters, given the other control styles that are set (reference).
You can do exactly the same thing, and add the DT_CALCRECT flag in your call to the DrawText function, which causes it to determine the width and height of the rectangle required to draw the specified text without actually drawing the text.
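A sketch of that variant, with hdc and text set up the same way as in the earlier measuring sketch; pass the same prefix/wrapping flags you expect the control to use when it draws:

RECT rc = {0, 0, 0, 0};
DrawText(hdc, text, -1, &rc, DT_CALCRECT);   // computes the required rectangle, draws nothing
int width  = rc.right - rc.left;
int height = rc.bottom - rc.top;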
Most windows using static text controls are dialogs, where the static control's size is expressed in dialog units (DLU), which account roughly for the size of the font. In this way, dialog controls tend to have sensible sizes.
If you are not using dialogs, you can attempt to fake dialog behavior using MapDialogRect.
Otherwise, yes, you must use GetTextExtentPoint32.
There is no autosize for a static control as far as I know. You are doing it correctly.
Use GetWindowText to get the text of the static window
Use GetDC to get the DC for the window
Use WM_GETFONT to get the font for the window and select that font into the DC
Use one of the text-size calculation functions to calculate the text size
Restore the original font in the DC
Release the DC with ReleaseDC
You will always have to select the proper font into the DC to get an accurate result. Also, I personally prefer DrawText with DT_CALCRECT to calculate the size of a text. Refer to http://msdn.microsoft.com/en-us/library/windows/desktop/dd162498%28v=vs.85%29.aspx
With DrawText, you don't have to supply the character count if the text is NULL-terminated. Plus, you can combine various formatting options to adjust the calculation. For example, an ampersand (&) character in a static control's text underlines the next character. With DrawText you will be able to calculate the size properly, but GetTextExtentPoint32 has no provision for specifying this.
I'm enumerating Windows fonts like this:
LOGFONTW lf = {0};
lf.lfCharSet = DEFAULT_CHARSET;
lf.lfFaceName[0] = L'\0';
lf.lfPitchAndFamily = 0;
::EnumFontFamiliesEx(hdc, &lf,
reinterpret_cast<FONTENUMPROCW>(FontEnumCallback),
reinterpret_cast<LPARAM>(this), 0);
My callback function has this signature:
int CALLBACK FontEnumerator::FontEnumCallback(const ENUMLOGFONTEX *pelf,
const NEWTEXTMETRICEX *pMetrics,
DWORD font_type,
LPARAM context);
For TrueType fonts, I typically get each face name multiple times. For example, for multiple calls, I'll get pelf->elfFullName and pelf->elfLogFont.lfFaceName set as "Arial". Looking more closely at the other fields, I see that each call is for a different script. For example, on the first call pelf->elfScript will be "Western" and pelf->elfLogFont.lfCharSet will be the numeric equivalent of ANSI_CHARSET. On the second call, I get "Hebrew" and HEBREW_CHARSET. Third call "Arabic" and ARABIC_CHARSET. And so on. So far, so good.
But the font signature (pMetrics->ntmFontSig) field for all versions of Arial is identical. In fact, the font signature claims that all of these versions of Arial support Latin-1, Hebrew, Arabic, and others.
I know the character sets of the strings I'm trying to draw, so I'm trying to instantiate an appropriate font based on the font signatures. Because the font signatures always match, I always end up selecting the "Western" font, even when displaying Hebrew or Arabic text. I'm using low level Uniscribe APIs, so I don't get the benefit of Windows font linking, and yet my code seems to work.
Does lfCharSet actually carry any meaning or is it a legacy artifact? Should I just set lfCharSet to DEFAULT_CHARSET and stop worrying about all the script variations of each face?
For my purposes, I only care about TrueType and OpenType fonts.
I think I found the answer. Fonts that get enumerated multiple times are "big" fonts. Big fonts are single fonts that include glyphs for multiple scripts or code pages.
The Unicode portion of the FONTSIGNATURE (fsUsb) represents all the Unicode subranges that the font can handle. This is independent of the character set. If you use the wide character APIs, you can use all the included glyphs in the font, regardless of which character set was specified when you create the font.
The code page portion of the FONTSIGNATURE (fsCsb) represents the code pages that the font can handle. I believe this is only significant when the font is not a "big" font. In that case, the fsUsb masks will be all zeros, and the fsCsb will specify the appropriate character set(s). In those cases, it's important to get the lfCharSet correct in the LOGFONT.
When instantiating a "big" font and using the wide character APIs, it apparently doesn't matter which lfCharSet you specify.
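As a sketch of how that plays out inside a callback like the one above (the Unicode-subrange bit numbers follow the OpenType OS/2 ulUnicodeRange definition, e.g. bit 11 for Hebrew and bit 13 for Arabic):

const FONTSIGNATURE &sig = pMetrics->ntmFontSig;

bool hasUnicodeRanges =
    (sig.fsUsb[0] | sig.fsUsb[1] | sig.fsUsb[2] | sig.fsUsb[3]) != 0;
if (hasUnicodeRanges)
{
    // "Big" font: decide coverage from the Unicode subrange bits.
    bool coversHebrew = (sig.fsUsb[0] & (1u << 11)) != 0;
    bool coversArabic = (sig.fsUsb[0] & (1u << 13)) != 0;
}
else
{
    // No Unicode ranges reported: fall back to the code-page bits and honor lfCharSet.
    bool hasLatin1CodePage = (sig.fsCsb[0] & 1u) != 0;   // bit 0 = code page 1252
}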