How to get the height at which to draw a strikethrough from FreeType - freetype

FreeType has font metrics for the underline position, but I can't seem to find any metrics for the strikethrough position. How do text engines usually compute this value? Should I just put it at 1/3*ascent or whatever looks good? I suppose that for Latin at least this should be 1/2*height of "m" but I'm looking for a more general solution.

This information is not provided for all the various font formats supported by FreeType, so it is not exposed on the "main" interface.
In the (common but not universal) case of TrueType or OpenType fonts, it can be retrieved from the OS/2 table (exposed as TT_OS2), in the fields yStrikeoutSize and yStrikeoutPosition; be prepared for the table to be missing, or for yStrikeoutSize to be zero or negative and thus unusable.
I do not remember an equivalent for plain PostScript fonts (.pfb/.pfa; not even in the .afm).
The various bitmap formats might have the information available; an example is strike_out in Windows FNT. Note that this is the position, while the thickness defaults to the same as the underline's. Basically, every format is on its own here.
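For SFNT-based faces, a minimal sketch of reading those two fields through FreeType's public API could look like the following (assuming an already-loaded FT_Face with an active size; the helper name is made up, and the caller should fall back to a heuristic such as half the x-height when it returns 0):
#include <ft2build.h>
#include FT_FREETYPE_H
#include FT_TRUETYPE_TABLES_H

/* Returns 1 and fills position/thickness (in 26.6 pixel units) on success. */
static int get_strikeout_metrics(FT_Face face, FT_Pos *position, FT_Pos *thickness)
{
    TT_OS2 *os2 = (TT_OS2 *)FT_Get_Sfnt_Table(face, FT_SFNT_OS2);
    if (!os2 || os2->yStrikeoutSize <= 0)
        return 0;  /* table missing, or size unusable */

    /* Scale from font units to pixels using the currently selected size. */
    *position  = FT_MulFix(os2->yStrikeoutPosition, face->size->metrics.y_scale);
    *thickness = FT_MulFix(os2->yStrikeoutSize,     face->size->metrics.y_scale);
    return 1;
}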

Related

How to calculate size of Windows bitmap font using FreeType

The Problem
I am loading the classic serife.fon file from Microsoft Windows using FreeType.
Here is how I set the size:
FT_Set_Pixel_Sizes(face, 0, fontHeight);
I use 0 for the fontWidth so that it will be auto-calculated based on the height.
How do I find the correct value for fontHeight such that the resulting font will be exactly 9 pixels tall?
Notes
Using trial and error, I know that the correct value is 32 - but I don't understand why.
I am not sure how relevant this is for bitmap fonts, but according to the docs:
pixel_size = point_size * resolution / 72
Substituting in the values:
point_size = 32
resolution = 96 (from FT_Get_WinFNT_Header)
gives:
pixel_size = 42.6666666
This is a long way from our target height of 9!
The docs do go on to say:
pixel_size computed in the above formula does not directly relate to the size of characters on the screen. It simply is the size of the EM square if it was to be displayed. Each font designer is free to place its glyphs as it pleases him within the square.
But again, I am not sure if this is relevant for bitmap fonts.
.fon files are executable files with an FNT payload, where the FNT payload can be a vector or raster font. If it is a raster font (which is most likely), then the dfPixHeight value in the FNT header tells you what size it is meant to be; FreeType 2 exposes this as the pixel_height field of the FT_WinFNT_Header.
(And of course, note that using any size other than the actual raster size of the FNT is going to lead to hilarious headaches, because bitmap scaling is the kind of madness that's so bad that OpenType instead went with "just embed as many bitmaps as you need, at however many sizes you need", because that's the only way your bitmaps are going to look good.)
The FNT-specific FT2 documentation can be found over on https://www.freetype.org/freetype2/docs/reference/ft2-winfnt_fonts.html but you may need to read it in conjunction with https://jeffpar.github.io/kbarchive/kb/065/Q65123 (or https://web.archive.org/web/20120215123301/http://support.microsoft.com/kb/65123) to find any further mappings that you might need between names/fields as defined in the FNT spec and FT2's naming conventions.
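As a minimal sketch (the helper name is made up and error handling is minimal): read the FNT header to learn the design size, then select the bitmap strike directly instead of guessing a value for FT_Set_Pixel_Sizes:
#include <cstdio>
#include <ft2build.h>
#include FT_FREETYPE_H
#include FT_WINFONTS_H

static FT_Error select_fnt_strike(FT_Face face)
{
    FT_WinFNT_HeaderRec header;
    if (FT_Get_WinFNT_Header(face, &header) == 0)
        printf("FNT raster height (dfPixHeight): %u px\n", (unsigned)header.pixel_height);

    /* A raster FNT face exposes its strike(s) as fixed sizes; pick the first one. */
    if (face->num_fixed_sizes > 0)
        return FT_Select_Size(face, 0);
    return FT_Err_Invalid_Argument;
}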

Find out if font has monospaced numbers

There are proportional fonts (i.e. not monospaced) that nevertheless provide monospaced numbers. E.g. see this Excel screenshot using Arial:
Note how the numbers are nicely aligned. How can I find out programmatically (probably WinAPI) if a font supports this feature?
You won't find an API for that because there isn't any specific metadata value within the font file to indicate "the glyphs for digits in this font have fixed width". Some fonts provide both proportional and fixed-width (tabular) digits, in which case the font is likely to support the 'tnum' (Tabular Figures) OpenType Layout feature. You should pick a font that supports this feature and then explicitly activate it when drawing the text.
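If you draw through DirectWrite (or Direct2D on top of it), a minimal sketch of switching a layout to tabular figures could look like this; the function name is made up and error handling is omitted:
#include <dwrite.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Request fixed-width ('tnum') digits for the given range of an existing layout.
void UseTabularFigures(IDWriteFactory* factory, IDWriteTextLayout* layout, UINT32 textLength)
{
    ComPtr<IDWriteTypography> typography;
    factory->CreateTypography(&typography);

    DWRITE_FONT_FEATURE tabular = { DWRITE_FONT_FEATURE_TAG_TABULAR_FIGURES, 1 };
    typography->AddFontFeature(tabular);

    DWRITE_TEXT_RANGE range = { 0, textLength };
    layout->SetTypography(typography.Get(), range);
}
If the font does not support the feature, the request simply has no visible effect, so comparing the measured advances of two digits remains a reasonable sanity check.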

Direct2D: How to convert fallback to SystemLink mode?

I am converting a project's render engine from GDI to D2D. The GDI code uses CreateFontIndirect with font size -13 and font family "Segoe UI". The D2D code uses CreateTextFormat with font size 13 and font family "Segoe UI". The effect is shown in the following picture:
In the GDI case, when the system cannot find a Chinese character in "Segoe UI", it consults the "SystemLink" registry key to locate a Chinese font, which on my machine is "YaHei". But in the D2D case the system does not pick "YaHei". Which Chinese font will it choose to draw, and how does that work?
It works according to DirectWrite's layout logic. See IDWriteTextLayout2::SetFontFallback(); it lets you provide your own fallback implementation if the default configuration is not satisfactory.
Basically, the layout object will call your custom fallback's methods to map characters to fonts, so you can decide which characters you want mapped to which font, potentially reusing the system fallback implementation for the cases you don't care about.
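As a rough sketch of that custom-fallback route (the function name and the CJK range choice are illustrative; it requires IDWriteFactory2 / IDWriteTextLayout2, i.e. Windows 8.1 or later; error handling is omitted):
#include <dwrite_2.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void SetYaHeiFallback(IDWriteFactory2* factory2, IDWriteTextLayout2* layout2)
{
    ComPtr<IDWriteFontFallbackBuilder> builder;
    factory2->CreateFontFallbackBuilder(&builder);

    // Send the CJK Unified Ideographs block to Microsoft YaHei.
    DWRITE_UNICODE_RANGE cjk = { 0x4E00, 0x9FFF };
    WCHAR const* families[] = { L"Microsoft YaHei" };
    builder->AddMapping(&cjk, 1, families, 1, nullptr, nullptr, nullptr, 1.0f);

    // Defer everything else to the stock system fallback.
    ComPtr<IDWriteFontFallback> systemFallback;
    factory2->GetSystemFontFallback(&systemFallback);
    builder->AddMappings(systemFallback.Get());

    ComPtr<IDWriteFontFallback> fallback;
    builder->CreateFontFallback(&fallback);
    layout2->SetFontFallback(fallback.Get());
}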

Why does Core Text return Myriad Pro Semibold when requesting a bold version of Myriad Pro

I have the common Adobe Myriad Pro fonts installed. These include Myriad Pro Regular, Myriad Pro Bold and Myriad Pro Semibold. Assume that I have a CTFontRef baseFont that points to Myriad Pro Regular, and that the font size I desire is size. I run the following code:
CTFontRef boldFont = CTFontCreateCopyWithSymbolicTraits(baseFont, size, NULL, kCTFontBoldTrait, kCTFontBoldTrait);
The returned font is Myriad Pro Semibold, not Myriad Pro Bold.
Is there a way of coercing this to return Myriad Pro Bold instead, other than requesting the named style 'Bold'? I wanted to keep this code entirely generic without hard-wiring style names.
I have tried this in various permutations, including passing the bold trait as part of an attribute dictionary when I initially create my font, avoiding the two-step process described here, but it still returns the semibold font in preference to the normal bold. I've also poked around the fonts themselves a little. The full bold font has a weight of 700 in its <OS/2> table, and the semibold font has a weight of 600. The PANOSE weights correspond with this. However, the macStyle fields in the <head> table of the semibold and bold fonts both have the bold flag set, so presumably this is what Core Text is using. But is there any way to make it more discriminating?
Based on a reading of the documentation, backed up by some knowledge of font handling in general but not Core Text specifically, I'd say it may be possible, but it's not straightforward.
The CTFontCreateCopyWithSymbolicTraits() documentation specifies that the symTraitValue and symTraitMask parameters have type CTFontSymbolicTraits. The CTFontDescriptor documentation defines the "bold" value that you are using as
kCTFontBoldTrait = (1 << 1)
So this is clearly a boolean trait. However, as you've seen, font weight is a spectrum, not a boolean trait, even though decades of "bold" buttons in word processor UIs have presented it as a boolean trait. CTFontCreateCopyWithSymbolicTraits() doesn't have the expressive power you need.
One other approach which might work is to try calling CTFontDescriptorCreateMatchingFontDescriptors(). You pass this function a CTFontDescriptorRef to an initial font, and a CFSetRef with attributes which must be present. This function returns an array of font descriptors, all of which match the attributes you requested.
So, you could pass it a CTFontDescriptorRef for Myriad Pro Regular, and maybe a CFSetRef saying you want bold, and then look through every font descriptor in the returned array to find the one with the heaviest weight.
I haven't written this code, and my ignorance of Core Text means I may be missing something, but that seems like a plausible approach.
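As a rough, unverified sketch of that enumeration approach (Core Text C API; the helper name is made up and error handling is omitted):
#include <CoreText/CoreText.h>

// Return the heaviest descriptor in a family, judged by kCTFontWeightTrait.
CTFontDescriptorRef CopyHeaviestDescriptorInFamily(CFStringRef familyName)
{
    CFStringRef keys[]   = { kCTFontFamilyNameAttribute };
    CFTypeRef   values[] = { familyName };
    CFDictionaryRef attrs = CFDictionaryCreate(kCFAllocatorDefault,
        (const void **)keys, (const void **)values, 1,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CTFontDescriptorRef base = CTFontDescriptorCreateWithAttributes(attrs);

    /* Only the family name must match, so every weight comes back. */
    CFSetRef mandatory = CFSetCreate(kCFAllocatorDefault, (const void **)keys, 1, &kCFTypeSetCallBacks);
    CFArrayRef matches = CTFontDescriptorCreateMatchingFontDescriptors(base, mandatory);

    CTFontDescriptorRef heaviest = NULL;
    CGFloat bestWeight = -2.0;
    for (CFIndex i = 0; matches && i < CFArrayGetCount(matches); ++i) {
        CTFontDescriptorRef candidate = (CTFontDescriptorRef)CFArrayGetValueAtIndex(matches, i);
        CFDictionaryRef traits = (CFDictionaryRef)CTFontDescriptorCopyAttribute(candidate, kCTFontTraitsAttribute);
        CGFloat weight = 0.0;
        if (traits) {
            CFNumberRef n = (CFNumberRef)CFDictionaryGetValue(traits, kCTFontWeightTrait);
            if (n) CFNumberGetValue(n, kCFNumberCGFloatType, &weight);
            CFRelease(traits);
        }
        if (weight > bestWeight) {
            bestWeight = weight;
            if (heaviest) CFRelease(heaviest);
            heaviest = (CTFontDescriptorRef)CFRetain(candidate);
        }
    }

    if (matches) CFRelease(matches);
    CFRelease(mandatory);
    CFRelease(base);
    CFRelease(attrs);
    return heaviest;  /* caller releases; NULL if nothing matched */
}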
For the CTFontDescriptor you can specify the attribute kCTFontTraitsAttribute, which should be a CFDictionaryRef in which you can specify kCTFontWeightTrait. That takes a CFNumberRef representing a floating-point value between -1 and 1, giving you a spectrum of weights, with 1 being the boldest variant and 0 being regular/medium.
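A minimal sketch of that descriptor-based request (the family name, helper name, and the 1.0 weight are illustrative; whether the matcher hands back Bold or Semibold for a given value still has to be checked against the installed faces):
#include <CoreText/CoreText.h>

CTFontRef CreateHeaviestMyriad(CGFloat size)
{
    CGFloat weight = 1.0;  /* normalized: -1.0 lightest, 0.0 regular, 1.0 heaviest */
    CFNumberRef weightNumber = CFNumberCreate(kCFAllocatorDefault, kCFNumberCGFloatType, &weight);

    CFStringRef traitKeys[]   = { kCTFontWeightTrait };
    CFTypeRef   traitValues[] = { weightNumber };
    CFDictionaryRef traits = CFDictionaryCreate(kCFAllocatorDefault,
        (const void **)traitKeys, (const void **)traitValues, 1,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

    CFStringRef attrKeys[]   = { kCTFontFamilyNameAttribute, kCTFontTraitsAttribute };
    CFTypeRef   attrValues[] = { CFSTR("Myriad Pro"), traits };
    CFDictionaryRef attrs = CFDictionaryCreate(kCFAllocatorDefault,
        (const void **)attrKeys, (const void **)attrValues, 2,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

    CTFontDescriptorRef descriptor = CTFontDescriptorCreateWithAttributes(attrs);
    CTFontRef font = CTFontCreateWithFontDescriptor(descriptor, size, NULL);

    CFRelease(descriptor);
    CFRelease(attrs);
    CFRelease(traits);
    CFRelease(weightNumber);
    return font;
}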

How do you change the letter-spacing/tracking in core text?

This could probably also be asked as "Is kCTKernAttributeName a misnomer?"
I need to change the letter spacing/tracking of some text in iOS. (The font I'm using is a little too tight at small sizes.) There are core graphics routines that will change character spacing, but those routines don't handle Unicode. There are other core graphics routines that are defined in terms of glyphs but those seem like a world of hurt, among other things, not having the safety net of reverting back to system fonts for glyphs that don't exist in my font.
So Core Text seems like the way to do this, and Core Text supports kCTKernAttributeName on CFAttributedString. I think this will do what I want, though this really isn't kerning, since kerning is generally a character-pair attribute and this (from the docs, it appears) is just a uniform adjustment to the glyph advance for all glyphs, i.e., tracking.
If anyone knows before I go down the rather painful path of converting to the core text API ...
kCTKernAttributeName should do what you want. Setting it over a range of text adjusts the inter-glyph spacing consistently, irrespective of the specific glyphs.
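For example, a minimal sketch that applies a uniform tracking value over a whole CFAttributedString (the 0.5-point value and helper name are illustrative):
#include <CoreText/CoreText.h>

CFAttributedStringRef CreateTrackedString(CFStringRef text, CTFontRef font)
{
    CGFloat tracking = 0.5;  /* extra advance after each glyph, in points */
    CFNumberRef trackingNumber = CFNumberCreate(kCFAllocatorDefault, kCFNumberCGFloatType, &tracking);

    CFMutableAttributedStringRef attrString = CFAttributedStringCreateMutable(kCFAllocatorDefault, 0);
    CFAttributedStringReplaceString(attrString, CFRangeMake(0, 0), text);

    CFRange whole = CFRangeMake(0, CFAttributedStringGetLength(attrString));
    CFAttributedStringSetAttribute(attrString, whole, kCTFontAttributeName, font);
    CFAttributedStringSetAttribute(attrString, whole, kCTKernAttributeName, trackingNumber);

    CFRelease(trackingNumber);
    return attrString;
}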
I think part of the problem is that kerning seems to have been a virtual synonym of tracking (it's still just "adjust the spacing between (letters or characters) in a piece of text to be printed" in the dictionary that comes with OS X), and is now adopting just the meaning of pair kerning because of the redundancy. Probably an etymologist would be better placed to comment on that side of things...
