In Windows, the CreateFontIndirect() call can silently substitute compatible fonts if the requested font is not available. The GetObject() call does not reflect this substitution; it returns the same LOGFONT passed in. How can I find out what font was actually created? Alternatively, how can I force Windows to return only the exact font requested?
In Windows, the CreateFontIndirect() call can silently substitute compatible fonts if the requested font is not available. The GetObject() call does not reflect this substitution; it returns the same LOGFONT passed in.
It's not CreateFontIndirect that's doing the substitution. The substitution happens when the font is selected into the DC. CreateFontIndirect just gives you a handle to an internal copy of the LOGFONT. That's why GetObject gives you the same LOGFONT back.
How can I find what font was actually created?
If you select the HFONT into the target DC, you can then ask the DC for the information about the font that was actually chosen as the best match to the LOGFONT.
The face name is available with GetTextFace.
You can get metrics with GetTextMetrics.
If the selected font is TrueType or OpenType, you can get additional metrics with GetOutlineTextMetrics.
That essentially tells you what font was actually created.
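A minimal sketch of that sequence (the requested face name, height, and the use of the screen DC are just placeholders):

    #include <windows.h>
    #include <iostream>

    int main() {
        LOGFONTW lf = {};
        lf.lfHeight = -16;
        wcscpy_s(lf.lfFaceName, L"Segoe UI");          // the font we *ask* for

        HFONT hFont = CreateFontIndirectW(&lf);
        HDC hdc = GetDC(nullptr);                      // screen DC
        HFONT hOld = (HFONT)SelectObject(hdc, hFont);  // mapping happens here

        wchar_t face[LF_FACESIZE];
        GetTextFaceW(hdc, LF_FACESIZE, face);          // face actually selected

        TEXTMETRICW tm;
        GetTextMetricsW(hdc, &tm);                     // metrics of the actual font

        std::wcout << L"Requested: " << lf.lfFaceName
                   << L", got: " << face
                   << L", height " << tm.tmHeight << L"\n";

        SelectObject(hdc, hOld);
        ReleaseDC(nullptr, hdc);
        DeleteObject(hFont);
    }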
Aside:
When doing something like print preview, you can start with a LOGFONT, select it into the printer DC (or IC), grab the details of the actual font (printers often substitute fonts), and then build a new LOGFONT that's more representative of the actual font. Select that into the screen DC and, with appropriate size conversions, you get a pretty good match of what the user will actually get.
To get the appropriate font on different language versions of the OS, call EnumFontFamiliesEx with the desired font characteristics in the LOGFONT structure, then retrieve the appropriate typeface name and create the font using CreateFont or CreateFontIndirect.
While it's not a universal way to get the actual font name from an HFONT, you can check beforehand what CreateFontIndirect would (most likely) return.
Judging from how MSDN suggests this as the way to get a typeface matching a set of attributes, it seems close to how Windows internally performs the substitution.
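A minimal sketch of that pre-flight check, assuming you only care about listing the candidate typeface names (the charset choice and callback are illustrative):

    #include <windows.h>
    #include <iostream>

    static int CALLBACK FamilyProc(const LOGFONTW *lf, const TEXTMETRICW *,
                                   DWORD /*fontType*/, LPARAM /*lParam*/) {
        std::wcout << lf->lfFaceName << L"\n";     // candidate typeface name
        return 1;                                  // non-zero: keep enumerating
    }

    int main() {
        LOGFONTW lf = {};
        lf.lfCharSet = DEFAULT_CHARSET;            // enumerate across charsets
        // lf.lfFaceName left empty: one entry per available family

        HDC hdc = GetDC(nullptr);
        EnumFontFamiliesExW(hdc, &lf, (FONTENUMPROCW)FamilyProc, 0, 0);
        ReleaseDC(nullptr, hdc);
    }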
I wrote a script which parses information from PDF files and outputs it to HTML. It's written in Python, using pdfminer.
On some text segments, the font style can have semantic significance. For instance, bold, italic, and color should trigger different behavior. Pdfminer exposes the font name to the script, but not the color, and it has a number of other issues; so I'm working on a Swift version of that program, using Apple's PDFKit, to extract the same features.
I now find that I have the opposite problem. While PDFKit makes it easy to retrieve color, retrieving the original font name seems to be non-obvious. PDFSelection objects have an attributedString property, but for fonts that are not installed on my computer, the NSFont object comes back as Helvetica. Of course, the fonts in question are fairly expensive, and acquiring a copy just for this purpose would be poor form.
Short of dropping to CGPDFContentStream (which is way too big of a hammer for what I want to get), is there a way of getting the original font name? I know in advance what the fonts are going to be, can I use that to my advantage?
PDFKit seems to use the standard font lookup system and then falls back on some default, so this can be resolved by spoofing the font to ensure that PDFKit doesn't need to fall back. Inspecting the document, I was able to identify that it uses the following fonts (referenced with their PostScript name):
"NeoSansIntel"
"NeoSansIntelMedium"
"NeoSansIntel,Italic"
I used a free font creation utility to create dummy fonts with these PostScript names, and I added them to my app bundle. I then used CTFontManagerRegisterFontsForURLs to load these fonts (in the .process scope), and now PDFKit uses these fonts for attributed strings that need them.
Of course, the fonts are bogus and this is useless for rendering. However, it works perfectly for the purpose of identifying text that uses these fonts.
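If it helps, here is a rough sketch of the registration step using the C-level Core Text call mentioned above (callable from C, C++, or Objective-C; Swift has a direct equivalent). The bundled file name NeoSansIntel.otf is an assumption; only the PostScript names come from the answer:

    #include <CoreFoundation/CoreFoundation.h>
    #include <CoreText/CoreText.h>

    static bool RegisterDummyFont() {
        // Locate the bogus font shipped in the app bundle (assumed name/extension).
        CFURLRef url = CFBundleCopyResourceURL(CFBundleGetMainBundle(),
                                               CFSTR("NeoSansIntel"), CFSTR("otf"), NULL);
        if (!url) return false;

        const void *urls[] = { url };
        CFArrayRef urlArray = CFArrayCreate(kCFAllocatorDefault, urls, 1,
                                            &kCFTypeArrayCallBacks);
        CFArrayRef errors = NULL;
        bool ok = CTFontManagerRegisterFontsForURLs(urlArray,
                                                    kCTFontManagerScopeProcess,
                                                    &errors);
        if (errors) CFRelease(errors);
        CFRelease(urlArray);
        CFRelease(url);
        return ok;
    }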
I have some old code that I want to port to Metro. The old code uses the GDI function GetFontData to get font data from a table whose tag is passed to it. I plan to replace it with IDWriteFontFace::TryGetFontTable. To do so I have to create an IDWriteFontFace object, which requires the path to the font file corresponding to the font it represents. But what I don't understand is how GetFontData figures out which font file it is supposed to fetch the table data from. Does it do so from the device context that is passed to it?
The font is the one currently selected in the Device Context. You can retrieve it by using GetCurrentObject with object type OBJ_FONT. You can then safely cast the returned HGDIOBJ to an HFONT.
As for retrieving the font file name, that's not easy. See this SO question.
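As a rough sketch, assuming an HDC with the desired font already selected (the 'cmap' tag is just an example):

    #include <windows.h>
    #include <vector>

    std::vector<BYTE> ReadFontTable(HDC hdc, DWORD tag) {
        // GetFontData reads from whatever font is selected into hdc;
        // GetCurrentObject recovers that handle if you need it elsewhere.
        HFONT hSelected = (HFONT)GetCurrentObject(hdc, OBJ_FONT);
        (void)hSelected;

        DWORD size = GetFontData(hdc, tag, 0, nullptr, 0);   // query the size
        if (size == GDI_ERROR) return {};
        std::vector<BYTE> data(size);
        GetFontData(hdc, tag, 0, data.data(), size);         // read the table
        return data;
    }

    // GetFontData expects the tag bytes in file (big-endian) order, so on x86
    // 'cmap' is written with the bytes reversed:
    //     std::vector<BYTE> cmap = ReadFontTable(hdc, 0x70616D63);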
There is always some font selected in a Device Context (I say always because there is a default font). So GetFontData reads from that font, depending on the HDC hdc parameter.
As manuell mentioned, the font GetFontData uses is the one returned by (HFONT)GetCurrentObject(hdc, OBJ_FONT).
Can anyone tell me how I get the HFONT handle to send a WM_SETFONT message? The font is Calibri, which should already be installed on Windows; all I can see is the add/create functions here:
http://msdn.microsoft.com/en-us/library/windows/desktop/dd144821%28v=vs.85%29.aspx
The CreateFont function states it returns a handle, but I'm wondering why I would need to create something that's already there.
The only thing that's "there" is a set of font files in the c:\windows\fonts directory. They contain the outline of the font. You really do have to call CreateFont(). At which point Windows actually accesses the file and creates the specific font you ask for. With the requested height, weight, etc.
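A minimal sketch, assuming a control window hwndCtl you own and a roughly 16-pixel character height:

    #include <windows.h>

    void UseCalibri(HWND hwndCtl) {
        HFONT hFont = CreateFontW(
            -16, 0, 0, 0,                    // height (negative = character height)
            FW_NORMAL, FALSE, FALSE, FALSE,  // weight, italic, underline, strikeout
            DEFAULT_CHARSET,
            OUT_DEFAULT_PRECIS, CLIP_DEFAULT_PRECIS,
            CLEARTYPE_QUALITY, DEFAULT_PITCH | FF_DONTCARE,
            L"Calibri");

        SendMessageW(hwndCtl, WM_SETFONT, (WPARAM)hFont, MAKELPARAM(TRUE, 0));
        // Keep hFont alive as long as the control uses it; DeleteObject it later.
    }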
I want to adjust a Static control's size to its content size, so I need to calculate the size of its text content first. I found a way to use GetTextExtentPoint32 to calculate the size, but I need to set the DC's font to the same as the control's font first. Is there a better way to do this? Since I've already set the Static control's font once, I'd hoped I wouldn't need to set the DC's font a second time.
What is the best way to calculate the size of a Static control's text content? And is there a better way to autosize the Static control?
It sounds to me like you've already figured out the correct way to do it: call GetTextExtentPoint32 to figure out the ideal size of the control given the text that it contains, and then resize the control to the calculated size.
It's a lot of work, but that's what happens when you're working with the raw Win32 API. You don't have a handy wrapper library that abstracts all this for you in a Control.AutoSize() function. You could easily write your own function and re-use it, but the Win32 standard controls do not expose an "auto-size" API.
As far as the font goes, you will definitely need to make sure that the device context is using the same font as the control, otherwise you'll calculate the wrong size. But you don't have to create a new device context, request a handle to the static control's font, and select that into your new DC. Instead, you can just use the static control's own DC, obtained by calling the GetDC function with the handle to your static control window. Make sure that if you call GetDC, you always follow up with a call to ReleaseDC when you're finished!
However, do note some caveats of the GetTextExtentPoint32 function that may interfere with the accuracy of the size you calculate:
It ignores clipping.
It does not take into account new lines (\n) or carriage returns (\r\n) when computing the height.
It does not take into account prefix characters (those preceded in the string by an ampersand, used to denote keyboard mnemonics), which the control strips when drawing unless it has the SS_NOPREFIX style.
It may not return an accurate result in light of the kerning that may be implemented automatically by some devices.
(This is all mentioned in the linked documentation, but does anyone actually read that?)
Perhaps an easier alternative is to draw the text the same way that the static control is already doing. Unless you have the SS_SIMPLE style set (which uses TextOut or ExtTextOut to draw text as an optimization), static controls draw their text by calling the DrawText function with the appropriate parameters, given the other control styles that are set (reference).
You can do exactly the same thing, and add the DT_CALCRECT flag in your call to the DrawText function, which causes it to determine the width and height of the rectangle required to draw the specified text without actually drawing the text.
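Something along these lines, as a sketch (hwndStatic is whatever static control you are sizing; the wrap behavior assumes a plain, multi-line static):

    #include <windows.h>

    void AutoSizeStatic(HWND hwndStatic) {
        wchar_t text[256];
        GetWindowTextW(hwndStatic, text, 256);

        HDC hdc = GetDC(hwndStatic);
        HFONT hFont = (HFONT)SendMessageW(hwndStatic, WM_GETFONT, 0, 0);
        if (!hFont) hFont = (HFONT)GetStockObject(SYSTEM_FONT);  // NULL means system font
        HFONT hOld = (HFONT)SelectObject(hdc, hFont);            // measure with the control's font

        RECT rc;
        GetClientRect(hwndStatic, &rc);          // keep the current width for wrapping
        DrawTextW(hdc, text, -1, &rc, DT_CALCRECT | DT_WORDBREAK);

        SelectObject(hdc, hOld);
        ReleaseDC(hwndStatic, hdc);

        SetWindowPos(hwndStatic, nullptr, 0, 0,
                     rc.right - rc.left, rc.bottom - rc.top,
                     SWP_NOMOVE | SWP_NOZORDER);
    }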
Most windows using static text controls are dialogs, where the static control's size is expressed in dialog units (DLU), which account roughly for the size of the font. In this way, dialog controls tend to have sensible sizes.
If you are not using dialogs, you can attempt to fake dialog behavior using MapDialogRect.
Otherwise yes you must use GetTextExtentPoint32.
There is no autosize for static control as far as I know. You are doing it correct.
Use GetWindowText to get the text of the static window
Use GetDC to get the DC for the window
Use WM_GETFONT to get the font for the window and select the font into the DC
Use one of the text size calculation functions to calculate the text size
Restore the original font into the DC
Release the DC
You will always have to select the proper font into the DC to get accurate results. Also, I personally prefer DrawText with DT_CALCRECT to calculate the size of a text. Refer to http://msdn.microsoft.com/en-us/library/windows/desktop/dd162498%28v=vs.85%29.aspx
With DrawText, you don't have to supply the character count if the text is NULL-terminated. Plus you can combine various formatting options to adjust the calculation. For example, an ampersand (&) character in a static control's text underlines the next character. With DrawText you will be able to calculate the size properly, but GetTextExtentPoint32 has no provision to specify this.
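A sketch of those steps with GetTextExtentPoint32, which is fine for single-line text without prefix characters (otherwise prefer the DrawText/DT_CALCRECT approach above):

    #include <windows.h>

    SIZE MeasureStaticText(HWND hwndStatic) {
        wchar_t text[256];
        int len = GetWindowTextW(hwndStatic, text, 256);         // the text

        HDC hdc = GetDC(hwndStatic);                             // the window's DC
        HFONT hFont = (HFONT)SendMessageW(hwndStatic, WM_GETFONT, 0, 0);
        if (!hFont) hFont = (HFONT)GetStockObject(SYSTEM_FONT);  // NULL means system font
        HFONT hOld = (HFONT)SelectObject(hdc, hFont);            // select the control's font

        SIZE size = {};
        GetTextExtentPoint32W(hdc, text, len, &size);            // measure

        SelectObject(hdc, hOld);                                 // restore the original font
        ReleaseDC(hwndStatic, hdc);                              // release the DC
        return size;
    }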
Given an HFONT, how do I tell if it's a symbol font? A PDF library I'm using needs to treat symbol fonts differently, so I need a way to programmatically tell whether any given font is a symbol font.
Use GetObject to get the font's properties to a LOGFONT structure. Check the lfCharSet member; if it's SYMBOL_CHARSET, you have a symbol font.
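For example (a minimal sketch):

    #include <windows.h>

    bool IsSymbolFont(HFONT hFont) {
        LOGFONTW lf = {};
        if (GetObjectW(hFont, sizeof(lf), &lf) == 0)
            return false;                        // not a valid font handle
        return lf.lfCharSet == SYMBOL_CHARSET;
    }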
Mark Ransom's answer is going to work 99.999% of the time, but there's a theoretical possibility that it could give the wrong answer.
To avoid this possibility, you should use GetTextMetrics to get the TEXTMETRIC of the actual font and check whether its tmCharSet is SYMBOL_CHARSET.
What's the difference between checking lfCharSet and tmCharSet?
When you create an HFONT, Windows makes an internal copy of the LOGFONT. It describes the font you want, which could be different from the font you get.
When you select the HFONT into a device (or information) context, the font mapper finds the actual font that best matches the LOGFONT associated with that HFONT. The best match, however, might not be an exact match. So when you need to find out something about the actual font, you should take care to query the HDC rather than the HFONT.
If you query the HFONT with GetObject, you just get the original LOGFONT back. GetObject doesn't tell you anything about the actual font because it doesn't know what actual font the font mapper chose (or will choose).
APIs that ask about the font selected into a particular DC, like GetTextMetrics, GetTextFace, etc., will give you information about the actual font.
For this problem, Mark's answer (using GetObject) is probably always going to work, because the odds of the font mapper choosing a symbol font when you want a textual font (or vice versa) are minuscule. In general, though, when you want to know something about the actual font, find a way to ask the HDC.
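A sketch of that approach, using a throwaway memory DC so no existing DC is disturbed:

    #include <windows.h>

    bool IsActualSymbolFont(HFONT hFont) {
        HDC hdc = CreateCompatibleDC(nullptr);           // screen-compatible memory DC
        HFONT hOld = (HFONT)SelectObject(hdc, hFont);    // font mapping happens here

        TEXTMETRICW tm = {};
        bool isSymbol = GetTextMetricsW(hdc, &tm) && tm.tmCharSet == SYMBOL_CHARSET;

        SelectObject(hdc, hOld);
        DeleteDC(hdc);
        return isSymbol;
    }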