How to Programmatically Identify a Pi Font (a Dingbat) under OS X

There is a class of fonts called Pi fonts whose glyphs, under OS X, get mapped to the private-use Unicode range 0xF021-0xF0FF, such that if you subtract 0xF000 from each Unicode character you retrieve the 8-bit version of the character and can draw it as if it were a standard Roman character. For example, code point 0xF041 is drawn as if it were the character 0x41.
My question is: how do I recognize these fonts? It's obvious the system can do so, because there is a category on the Special Characters palette called "Pi Fonts" which apparently lists the various such fonts installed on my system. In my case they are BookshelfSymbolSeven, MSReferenceSpecialty, MT-Extras, Marlett, MonotypeSorts, Webdings, and various Wingdings. If I use the old-fashioned QuickDraw routines to ask for the TextEncoding of these fonts, I get a value of 0x20000, which I do not see in the system header file TextCommon.h. Am I supposed to treat any font with a TextEncoding of 0x20000 as a Pi font? And I'd rather not use any QuickDraw font-handling routines, for obvious reasons.

The closest thing I know of is NSFontDescriptor's symbolic trait NSFontSymbolicClass. The code
NSFontDescriptor *fontDesc = [NSFontDescriptor fontDescriptorWithFontAttributes:nil];
fontDesc = [fontDesc fontDescriptorWithSymbolicTraits:NSFontSymbolicClass];
NSArray *foo = [fontDesc matchingFontDescriptorsWithMandatoryKeys:[NSSet setWithObject:NSFontTraitsAttribute]];
NSLog(@"%@", foo);
gives me a list of Pi fonts + Braille + a bit more.
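If you need to test a single, named font rather than enumerate everything, the same stylistic class is exposed by Core Text through a font's symbolic traits. Below is a minimal sketch in C++ (compile with -framework CoreText -framework CoreFoundation); the helper name isPiFont is purely illustrative, and whether the "symbolic" class is exactly the set you want is an assumption, since as noted above it also picks up Braille and a few other fonts.

#include <CoreText/CoreText.h>
#include <cstdio>

// Illustrative helper: true if the font's stylistic class is the "symbolic"
// class, the Core Text counterpart of NSFontSymbolicClass.
static bool isPiFont(CFStringRef postScriptName)
{
    CTFontRef font = CTFontCreateWithName(postScriptName, 12.0, NULL);
    CTFontSymbolicTraits traits = CTFontGetSymbolicTraits(font);
    CFRelease(font);
    // The top four bits of the symbolic traits hold the stylistic class.
    return (traits & kCTFontTraitClassMask) == kCTFontClassSymbolic;
}

int main()
{
    printf("Webdings:  %d\n", isPiFont(CFSTR("Webdings")));
    printf("Helvetica: %d\n", isPiFont(CFSTR("Helvetica")));
    return 0;
}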

Related

Transform an SVG into a copy/paste-able emoticon like this one 🦄

Not sure if SO is the best place for this question, but I don't know where else to ask.
Is there any way to transform an SVG like this one, for example (https://svgsilh.com/image/1775543.html), into something that I can use inside an editor with copy/paste, like this one? 🦄
No, because the unicorn emoticon is one example of a character. And just as with letters, digits, and punctuation, the appearance of emoticons and other plain-text symbols is decided by fonts.
LSerni wrote the following:
The reason you can "copy and paste" that icon is that the icon already has a UTF-8 code and your editor is UTF-8 aware. And this is why the same emoticon is slightly different between Apple, Android and so on: it's because it's always code XYZ, but code XYZ is rendered with different icons on different platforms.
But that's not entirely correct. The difference in rendering lies more in the font than in the operating system that displays emoticons. Unless the current font supplies its own version of a symbol, that symbol will usually come from the operating system's default symbol font, and different operating systems supply different symbol fonts.

Italic and bold Latin, and Greek letters using custom unicode font in gnuplot to produce (e)ps or pdf

I would like to create a PostScript or PDF figure with enhanced notations, italic or bold Latin characters, and sometimes (regular) Greek characters. How can I do that in general?
Let's say I downloaded CMU Sans Serif, a font that has glyphs for all the strange characters I ever want to use. I converted the fonts to pfa with an online tool and copied the files into the working directory.
Expectations
Let's say I'd like to produce the following notation somewhere.
What I tried: original
I created a gnuplot script, encoded as a UTF-8 file (without BOM), with the content
set term postscript eps enhanced "CMUSansSerif" 15 fontfile add 'CMUSansSerif.pfa' fontfile add 'CMUSansSerif-Oblique.pfa' fontfile add 'CMUSansSerif-Bold.pfa'
set encoding utf8
set o "print.eps"
p x t "Label: {/CMUSansSerif-Bold important }{/CMUSansSerif-Oblique note}: ∫⟨α₂ + β²⟩ = äßű"
set o
and executed it with the newest gnuplot, version 5.2.6.
What I got
I used a vector graphics editor to open the eps file, and the relevant part looks like this:
What I also tried
Following Ethan's answer, I added adobeglyphnames to the terminal options. That made at least the letters available, but other Unicode symbols are still missing. The result is:
Question
What went wrong? How could I produce the desired output?
There are so many places where things can go wrong: Is the font not suitable for this task? Did I download a wrong version of it? Did the pfa converter do a bad job? Did I include the font files incorrectly? Was there something wrong with the set encoding? Am I using a bad vector graphics editor? Do I have the wrong fonts installed, so that the vector graphics editor tries to use them?
I am afraid that the answer is that, in general, PostScript is the wrong tool for this. If it is at all possible for you to work with PDF output instead, I suggest you do that. It may even be possible to translate the resulting PDF file into a PostScript file with standard tools (e.g. pdf2ps). That is likely to work if the non-ASCII characters are limited to Greek and other relatively common symbols, but I don't know how much of the full Unicode tables is covered by those standard tools.
If you really need to produce PostScript with additional Unicode characters directly from gnuplot, you can find full instructions and sample character encoding tables in the gnuplot distribution files:
.../term/PostScript/unicode_maps.README
.../term/PostScript/unicode_big.map
.../term/PostScript/unicode_small.map
I am not familiar with the online font-conversion tool you used, but it probably failed because it did not have, or at any rate did not use, suitable character-encoding tables for the desired conversion.
===
One other thought. There are two ways that a *.pfa font can encode Unicode characters that are common enough to have a name assigned by Adobe for use in PostScript. (1) It may use generic names like uni0439 for Unicode code points. (2) It may use Adobe-specific names from the list here:
agl-aglfn glyph list
When selecting PostScript output from gnuplot you can tell it which of these two conventions is used by the font you provide. The default is "noadobeglyphnames".
set term postscript {no}adobeglyphnames
===
(recipe for using "set term pdfcairo")
Font handling is unfortunately system-specific, so I cannot tell you how to install or configure fonts on all your target machines. I will show you a procedure that works on a Linux desktop that uses the fontconfig utilities for system font handling.
1. Create the directory /home/share/fonts/CMUSans
2. Add this directory to the search list in the file /etc/fonts/local.conf
3. Copy the *.ttf files from the CMU Sans Serif zip archive you link to in your original query into this directory. The fontconfig tools should now be able to find these fonts; by inspection they self-report as "CMU Sans Serif".
4. In gnuplot (tested with version 5.2.6):
set term pdfcairo font "CMU Sans Serif,15"
set output 'enhanced_utf8.pdf'
load 'enhanced_utf8.dem'
5. Convert the output PDF file to PostScript with the following command:
pdf2ps enhanced_utf8.pdf enhanced_utf8.ps
Screenshot of the result is shown below
It seems that CMU Sans Serif doesn't contain the Unicode characters you are asking for. Check the font with a font editor like Birdfont: although the webpage shows the symbols you want to use, the font itself does not contain them. Your browser may still display those symbols, but they are just fallback representations taken from other fonts.

How can a letter / character have color? Like this: ✔️

I found this letter / character on Facebook, but how can it have a color? It seems insane to me. Look at this: ✔️
Added image (from Firefox on Windows)
It's not an ASCII character; it's likely an emoji. Emoji are part of Unicode, and the actual glyph displayed to the user is open to interpretation by the platform displaying it. The spec suggests a name/description, but the implementation varies.
So while you may see a colored check mark, I see black & white. Other times, a single glyph will have multiple styles made available on a particular platform; for example, I can select multiple "skin" tones when I use a smiley face on my iPhone, but your Android device may only show a generic one.
Edit: The image edited into the original post is a perfect example. Using Chrome on Windows, I see a black check mark. The screenshot from Firefox shows green.
The symbols used here aren't ASCII-encoded; they come from the much larger range of Unicode. Extended ASCII is restricted to a 256-symbol set.
The rendering of such symbols/glyphs (small pictorial representations; these ticks aren't ordinary text characters) can vary between platforms, because Unicode assigns the code point but leaves the visual design of the glyph to each implementation.
That is why, although the Unicode encoding of the character stays the same on every device, its rendering is interpreted differently by different devices and online platforms, so you may see either a coloured or a black symbol.
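To make the "same code points, different rendering" point concrete, here is a minimal C++ sketch. The one factual note it relies on: the emoji-style check mark is normally the code point U+2714 HEAVY CHECK MARK followed by U+FE0F VARIATION SELECTOR-16, where the variation selector requests the colored emoji presentation if the platform supports it.

#include <cstdio>
#include <string>

int main()
{
    // The colored check mark is typically two code points:
    // U+2714 HEAVY CHECK MARK + U+FE0F VARIATION SELECTOR-16.
    // The code points are identical everywhere; only the glyphs that
    // platform fonts supply for them differ.
    std::u32string check = U"\u2714\uFE0F";
    for (char32_t cp : check)
        printf("U+%04X\n", static_cast<unsigned>(cp));
    return 0;
}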

Qt font OS X and FontBook

I noticed this problem using the Qt libraries but, to be honest, I then found that it is a general OS X issue. I have a music font, say the classic Petrucci.ttf or Marl.ttf... When I draw characters using the QPainter::drawText() function, I can see only standard letters.
This also happens with Asian fonts or other symbolic fonts. The same Qt code compiled on Windows renders the fonts correctly: so, for instance, the letter "a" is a note, a pause, or a clef...
On the Mac, the letter "a" remains an "a".
The strange thing is that if I open the font in Font Book, it is rendered correctly. I also tried to use it in OpenOffice, Pages and TextEdit: on the Mac, these symbolic fonts are always rendered as ordinary text, and typed letters remain letters.
Do you have any suggestions??
Thanks...
OK, one answer I found is to check the entire font map and grab the Unicode values of the range of interest. For example, for the font I'm using the range is 0xf021-0xf0fb, so you can draw a non-text symbol like this:
ushort c = 0xf021;  // note: a plain char cannot hold this value
QString str = QString( QChar( c ) );
painter.drawText( 10, 10, str );
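Building on that, and on the 0xF000 offset described in the Pi-font question above, here is a hedged sketch of a small helper that shifts an ordinary 8-bit string into the font's private-use range before drawing. The helper name toPiRange and the use of Marlett are purely illustrative:

#include <QByteArray>
#include <QChar>
#include <QString>

// Map each 8-bit character into the symbol font's private-use range
// by adding the 0xF000 offset used by these Pi fonts.
QString toPiRange( const QByteArray &latin )
{
    QString out;
    for ( char ch : latin )
        out.append( QChar( ushort( 0xF000 + static_cast<unsigned char>( ch ) ) ) );
    return out;
}

// usage, e.g. inside a paint handler:
// painter.setFont( QFont( "Marlett" ) );
// painter.drawText( 10, 30, toPiRange( "abc" ) );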

Is it possible to determine the fonts Windows chooses for font-linking?

Suppose you have a string with text in two or more scripts. When you use a GDI function like TextOut, (modern versions of) Windows will do "font-linking". That is, GDI will draw what it can with your selected font and draw the rest in an appropriate font that it chooses automagically. For example, if part of your text is in English (using the Roman alphabet), and part of it is Chinese (using CJK characters), and you have Arial selected, the English portion will be drawn in Arial, and the Chinese portion will be drawn in another font that has the CJK glyphs.
My question is, is there a way to determine which fonts TextOut will choose (or did choose) for the font linking?
I have to draw some text with the low-level Uniscribe API, which doesn't do automatic font-linking. I've implemented my own font-linking, but sometimes my algorithm chooses a different font than TextOut does for the same text. I'm trying to understand the Windows algorithm better, but I'm not very good at identifying fonts on sight (especially in unfamiliar scripts).
The font is selected by a registry entry. It is well described in this article. Quoting the relevant part:
If font linking is enabled on your device, you can examine the registry by enumerating the subkeys of the registry key at HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\FontLink\SystemLink to determine the mappings of linked fonts to base fonts. You can add links by using Regedit to create additional subkeys.
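If you want to read those mappings from code, here is a minimal C++ sketch (link with Advapi32.lib). One assumption beyond the quote above: on typical systems the links are stored as REG_MULTI_SZ values directly under the SystemLink key, one value per base font, each listing "file,face" entries for the linked fonts, so the sketch enumerates values rather than subkeys:

#include <windows.h>
#include <cstdio>
#include <cwchar>

int main()
{
    HKEY key;
    const wchar_t *path =
        L"SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\FontLink\\SystemLink";
    if (RegOpenKeyExW(HKEY_LOCAL_MACHINE, path, 0, KEY_READ, &key) != ERROR_SUCCESS)
        return 1;

    for (DWORD i = 0;; ++i) {
        wchar_t name[256];
        wchar_t data[4096];
        DWORD nameLen = 256;
        DWORD dataLen = sizeof(data);
        DWORD type = 0;
        if (RegEnumValueW(key, i, name, &nameLen, NULL, &type,
                          reinterpret_cast<LPBYTE>(data), &dataLen) != ERROR_SUCCESS)
            break;  // no more values
        wprintf(L"base font: %ls\n", name);
        if (type == REG_MULTI_SZ) {
            // REG_MULTI_SZ: NUL-separated strings, terminated by an empty string.
            for (const wchar_t *p = data; *p; p += wcslen(p) + 1)
                wprintf(L"    linked: %ls\n", p);
        }
    }
    RegCloseKey(key);
    return 0;
}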
