When I use https://www.fontcopypaste.com/ to change a font like this "𝚂𝙴𝙲𝚄𝚁𝙴 𝙳𝚁𝙸𝚅𝙴", how does it actually affect the letters? I'm trying to name an external drive like that, and it appears to break the path used by the program I'm making on that drive. The same thing happens with a folder I've named that way. In my program the path might be c:\SECURE DRIVE\DOCUMENTS, but it doesn't work. I even tried c:\𝚂𝙴𝙲𝚄𝚁𝙴 𝙳𝚁𝙸𝚅𝙴\ｄｏｃｕｍｅｎｔｓ\ with the same "font" I used on the actual drive and folder, and it still doesn't work. How can I make this work, or something like it?
EDIT:
This is what happens when I change the folder name with a copy-and-paste font: the paths in my code don't recognize it anymore.
VS Code shows a tooltip saying that the character U+1D411 "𝐑" could be confused with U+0052 "R" because of how similar they look in code.
File paths don't use fonts, and you are not really copying a font; you are copying text that uses various Unicode codepoints to emulate funny fonts.
These paths should work fine if you are using the Unicode functions like CreateFileW or _wfopen.
If you are using CreateFileA, fopen or std::string you are going to run into codepage conversion issues. This applies to any framework and 3rd-party code you might be using.
Some of these codepoints might be outside the basic multilingual plane (BMP) and will require correct handling of surrogate pairs when dealing with UTF-16LE.
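For example, here is a minimal sketch of the wide-character route on Windows. The folder layout and file name are hypothetical; the styled letter used is the U+1D411 "𝐑" from the question's edit:

#include <windows.h>
#include <stdio.h>
#include <wchar.h>

int main(void)
{
    /* U+1D411 lies outside the BMP, so inside this UTF-16 wide literal it
       becomes a surrogate pair; CreateFileW passes it through unchanged. */
    const wchar_t *path = L"C:\\SECU\U0001D411E DRIVE\\DOCUMENTS\\readme.txt";

    HANDLE h = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        wprintf(L"CreateFileW failed: %lu\n", GetLastError());
        return 1;
    }
    CloseHandle(h);

    /* _wfopen is the wide-character counterpart of fopen and works the same
       way. CreateFileA/fopen would push the path through the ANSI codepage
       first, and these codepoints have no representation there. */
    return 0;
}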
Related
Is there an option for me to ask Ghostscript to indent the PostScript it creates?
Everything starts at the beginning of a line and I find it difficult to follow.
Alternatively, I am using Emacs and ps-mode.
If anyone knows how to indent code in this mode I would appreciate a tip (apologies if this is not relevant to this Stack Exchange).
No, there is no option for indenting the output.
PostScript is pretty much regarded as a write-only language anyway, and the output of ps2write (which is what I assume you are using though you don't say) is particularly difficult since it fundamentally outputs PDF syntax with a PostScript program on the front to parse it into PostScript operations.
Why do you want to read it?
[EDIT]
You can always edit your question, you don't need to post a new answer.
I'm afraid what you want to do isn't as simple as you might think.
It might be possible for this use case if the PDF files you receive are always created the same way, but there are significant problems.
The font you use as a substitute for the missing font must be encoded the same way. Say for example the font in the PDF file is encoded so that 0x41 is 'A'; you need to make sure that the replacement font is also encoded so that 0x41 is an 'A'. So just the findfont, scalefont, setfont sequence is not always going to be sufficient; sometimes you will need to re-encode the font.
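For reference, the usual re-encoding idiom looks something like this. It is only a sketch: the font being copied is the one you plan to substitute (HelveticaMonospacedPro-RG), the new name is made up, and ISOLatin1Encoding is just one of the standard encodings; you need whichever encoding matches the character codes in the PostScript that ps2write produced.

% Copy the substitute font's dictionary, swap in a different /Encoding,
% and register the copy under a new name.
/HelveticaMonospacedPro-ISO            % new (made-up) font name
/HelveticaMonospacedPro-RG findfont    % the substitute font to copy
dup length dict begin
  { 1 index /FID ne { def } { pop pop } ifelse } forall
  /Encoding ISOLatin1Encoding def      % the encoding the text actually expects
  currentdict
end
definefont pop

/HelveticaMonospacedPro-ISO findfont 12 scalefont setfont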
CIDFonts will be a major stumbling block. Firstly because ps2write simply doesn't emit CIDFonts at all. These were not part of level 2 PostScript. As a result all text in a CIDFont will be embedded as bitmaps. If your original file doesn't contain the CIDFont then you'll get the fallback CIDFont bitmapped.
Secondly CIDFonts can use multiple-byte character codes, of variable length. You can't simply replace a CIDFont with a Font, it just won't work.
The best solution, obviously, is to have the PDF files created with the required fonts embedded. This is best practice. If you can't get that, then I'd suggest that rather than trying to hand-edit PostScript, you use the Fontmap.GS and cidfmap files which Ghostscript uses to find fonts.
Ghostscript already has a load of code to do font substitution automatically, using both Fonts and CIDFonts as substitutes, and it does all the hard work of re-encoding the fonts or building CMaps as required. If you are on Windows much of this may already be done for you: when you install Ghostscript it will ask if you want to create font mappings, and if you said yes then it will already have created mappings for the fonts installed on your system.
Add the font substitutions you want to use in those files (they have comments explaining the layout) and then use the pdfwrite device to make a new PDF file. Set EmbedAllFonts to true (you may need to add an AlwaysEmbed font array as well, listing the fonts specifically) and SubsetFonts to false.
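For example, a command line along these lines should do it (the file names are placeholders, and the AlwaysEmbed entry is only needed if a particular font still refuses to embed):

gs -sDEVICE=pdfwrite -o output.pdf -dEmbedAllFonts=true -dSubsetFonts=false -c "<< /AlwaysEmbed [ /HelveticaMonospacedPro-RG ] >> setdistillerparams" -f input.pdf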
That should create a new PDF file where the missing fonts have been replaced by your defined substitutes; those substitutes will have been embedded in the new PDF file and they will not have been subset (Acrobat will generally refuse to edit text in a subset font).
The switches I mentioned above are standard Adobe Distiller parameters, but they are documented for pdfwrite here. There's some documentation on adding fonts here and here and specifically for CIDFonts here.
Basically I'd suggest you define your substitutions and let Ghostscript do the work for you.
This is not an answer to the problem but rather an answer to KenS's question about "Why do you want to read it?"
I tried to put it in the comment box but it was too long.
I am a retired engineer with a strong programming background.
I would like to read and understand the postscript code for the reason shown below.
I play duplicate bridge as a hobby. I receive a PDF file of what is known as a convention card (a single-page document of bridge agreements).
Frequently I would like to edit these files.
When I open with Adobe Illustrator I have to spend a significant amount of time replacing fonts that are not on my system with fonts that I do have.
I can take the PDF and export it as a postscript file using Ghostscript.
I was going to write a little program to replace the embedded fonts with the fonts that I use to replace them.
I was going to leave the postscript file unaltered and insert things like
/HelveticaMonospacedPro-RG findfont
12 scalefont setfont
just above where the text is written.
I was planning on using the fonts that I have on my system (e.g., HelveticaMonospacedPro-RG).
Not sure if SO is the best place for this question, but don't know where else to ask.
Is there any way to transform an SVG like this one, for example (https://svgsilh.com/image/1775543.html), into something that I can use inside an editor with copy/paste, like this one? 🦄
No, because the unicorn emoticon is one example of a character. And just as with letters, digits, and punctuation, the appearance of emoticons and other plain-text symbols is decided by fonts.
LSerni wrote the following:
The reason you can "copy and paste" that icon is that the icon already has a UTF-8 code and your editor is UTF-8 aware. And this is why the same emoticon is slightly different between Apple, Android and so on: it's because it's always code XYZ, but code XYZ is rendered with different icons on different platforms.
But that's not entirely correct. The difference in rendering lies more in the font than in the operating system that displays emoticons. Unless the font supplies its own version of a symbol, that symbol will usually be supplied by the font specified by default by the operating system, and different operating systems supply different symbol fonts.
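To make that concrete: the unicorn is simply the code point U+1F984, so in an HTML page, for instance, the same character can be written either pasted directly or as a numeric character reference, and whichever font ends up being used decides how it is drawn:

<!-- Both lines produce the same character; the glyph comes from a font, not from an SVG. -->
<p>Pasted directly: 🦄</p>
<p>As a character reference: &#x1F984;</p>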
I would like to create a PostScript or PDF figure with enhanced notation, italic or bold Latin characters, and sometimes (regular) Greek characters. How can I do that in general?
Let's say I downloaded CMU Sans Serif, a font that has glyphs for all the strange characters I ever want to use. I converted them to pfa with an online tool and copied the files to the working directory.
Expectations
Let's say I'd like to produce the following notation somewhere.
What I tried: original
I create a gnuplot script encoded in a utf-8 file (without BOM) with the content
set term postscript eps enhanced "CMUSansSerif" 15 fontfile add 'CMUSansSerif.pfa' fontfile add 'CMUSansSerif-Oblique.pfa' fontfile add 'CMUSansSerif-Bold.pfa'
set encoding utf8
set o "print.eps"
p x t "Label: {/CMUSansSerif-Bold important }{/CMUSansSerif-Oblique note}: β«β¨Ξ±β + Ξ²Β²β© = Γ€ΓΕ±"
set o
and executed with the newest gnuplot, version 5.2.6.
What I got
I used a vector graphics editor to open the eps file and the relevant part looks like this:
What I also tried
According to Ethan's answer I added adobeglyphnames to the termoptions. It made at least the letters available but other Unicode symbols are still unavailable. The result is:
Question
What went wrong? How could I produce the desired output?
So many possibilities, where things can go wrong: Is the font not suitable for this task? Did I download a wrong version of it? Did the pfa converter do a bad job? Did I include the font files incorrectly? Was there something wrong with the set encoding? Do I use a bad vector graphics editor? Do I have wrong fonts installed and the vector graphics editor tries to use them?
I am afraid that the answer is that in general PostScript is the wrong tool for this. If it is at all possible for you to work with PDF output instead, I suggest you do that. It is even possible the resulting PDF file can be translated to a PostScript file by standard tools (e.g. pdf2ps). That is likely to work if the non-ASCII characters are limited to Greek and other relatively common symbols, but I don't know how much of the full Unicode tables is covered by those standard tools.
If you really need to produce PostScript with additional unicode characters directly from gnuplot, you can find full instructions and sample character encoding tables in the gnuplot distribution files:
.../term/PostScript/unicode_maps.README
.../term/PostScript/unicode_big.map
.../term/PostScript/unicode_small.map
I am not familiar with the online font conversion tool you used, but it probably failed because it did not have, or at any rate did not use, suitable character encoding tables for the desired conversion.
===
One other thought. There are two ways that a *.pfa font can encode unicode characters that are common enough to have a name assigned by Adobe for use in PostScript. (1) It may use generic names like uni0439 for Unicode code points. (2) It may use Adobe-specific names from the list here:
agl-aglfn glyph list
When selecting PostScript output from gnuplot you can tell it which of these two conventions is used by the font you provide. The default is "noadobeglyphnames".
set term postscript {no}adobeglyphnames
==
(recipe for using "set term pdfcairo")
Font handling is unfortunately system-specific, so I cannot tell you how to install or configure fonts on all your target machines. I will show you a procedure that works on a linux desktop that uses the fontconfig utilities for system font handling.
Create directory /home/share/fonts/CMUSans
Add this directory to the search list in file /etc/fonts/local.conf
Copy *.ttf files into this directory from the CMU Sans Serif zip archive you link to in your original query. The system fontconfig system tools should now be able to find these fonts. By inspection they self-report as "CMU Sans Serif"
in gnuplot (tested with version 5.2.6)
set term pdfcairo font "CMU Sans Serif,15"
set output 'enhanced_utf8.pdf'
load 'enhanced_utf8.dem'
Convert the output PDF file to PostScript with the following command:
pdf2ps enhanced_utf8.pdf enhanced_utf8.ps
Screenshot of the result is shown below
It seems that CMU Sans Serif doesn't contain the Unicode characters you are asking for. Check the font with a font editor like Birdfont. Although the webpage shows the symbols you want to use, the font itself does not contain them; your browser may display those symbols, but they are just fallback glyphs taken from other fonts.
I have an app that uses AngularJS along with angular-translate to provide localization. The app currently uses only English and German.
On the login page is a required field, an email. If there is an error, the app displays "A valid email is required" in English.
In German (and forgive me if this is mangled, this is Google Translate, I don't know any German) the phrase is "Eine gültige E-Mail erforderlich".
In the second word, you'll notice an international character, it looks like a "u" with two little dots over it. When the app is set to display in German, that character gets escaped and much weirdness happens on the screen.
Looking at the docs, it seems like using $translateProvider.useSanitizeValueStrategy() is supposed to handle this, but it's not. If I use $translateProvider.useSanitizeValueStrategy('escaped') then it looks like this onscreen:
If I use $translateProvider.useSanitizeValueStrategy('sanitize') (which I'd really prefer, of course) then it looks like this:
I also happened to come across this article which states that my *.js translation file needs to be UTF-8 encoded. I opened up that file in NotePad++, changed the encoding to UTF-8 Without BOM and saved it, but I'm still seeing the error. And VS really hates the file now.
I know it's a little late, but maybe others have similar problems.
Addressing the UI:
Are you using the attribute style e.g.
<span translate="key"></span>
or inline style
<span>{{key | translate}}</span>
in your view?
I am working with the second style without issues.
Addressing your problem with UTF-8:
I am not using Visual Studio nor Notepad++, so I don't know how Notepad++ handles the conversion. Possibly it does not convert the characters at all, but only changes the file to be seen as UTF-8.
Sublime Text 2 (1), on the other hand, offers 'Save with Encoding', which converts all characters accordingly. I have stress-tested this conversion quite a bit, so I can recommend this approach with a clean conscience.
(1) I have no relation to Sublime Text; this is not meant to be any form of commercial advertisement.
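If the file encoding keeps fighting you, one further option (not mentioned above) is to keep the translation file pure ASCII and write the umlaut as a JavaScript escape sequence, so the encoding of the file no longer matters. A rough sketch, inside your module's config block; the key name is made up:

$translateProvider.translations('de', {
  // '\u00FC' is the 'ü' from "gültige"; the file itself stays plain ASCII.
  EMAIL_REQUIRED: 'Eine g\u00FCltige E-Mail erforderlich'
});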
MATLAB cannot display the Arabic/Persian labels of my figure. Also, I cannot see my installed fonts, and I don't want to add the labels with another program. How can I fix this problem?
What you're looking for is a way to display unicode characters in axes labels.
It seems that this problem was encountered before, but there's no simple solution for it. See workarounds here and here.
One important thing though: do not edit .m files containing Unicode/UTF-8 characters (such as Arabic, Farsi, Hebrew, Chinese, etc.) in MATLAB, because it messes up the characters upon saving. Use an external editor (like Notepad++) to edit and save the files (as UTF-8 without BOM), and only run them in MATLAB.
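Along the lines of the workarounds linked above, one way to keep the .m file ASCII-only is to build the label from Unicode code points with char(). Whether the Arabic letters are then shaped and joined correctly still depends on the font and renderer, so treat this as a sketch rather than a full fix; the word and font below are just examples:

% Build the Farsi/Arabic label from code points so no non-ASCII bytes are saved in the .m file.
lbl = char([1587 1585 1593 1578]);   % U+0633 U+0631 U+0639 U+062A, i.e. the word for "speed"
plot(1:10, (1:10).^2);
xlabel(lbl, 'FontName', 'Tahoma');   % pick an installed font that has Arabic glyphs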