Windows: Getting glyph outlines for substitution characters from other fonts

I need to render fonts into a 3D game world, so I use the GetGlyphOutline function to get the glyph shapes to render into a texture. However, I want to handle the case where characters are not present in the given font (as is often the case for Asian or other international text). Windows text rendering will automatically substitute fonts that have the needed characters, but GetGlyphOutline will not. How can I detect this case and get the outlines for the substituted glyphs? Mac OS X Core Text has a function to get a matching substitution font for a given font and a string - is there anything similar on Windows?

Found out what I needed to know myself: the IMLangFontLink interface, especially its MapFont method, contains the functionality needed to find out which substitution fonts should be used on Windows.
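For reference, here is a minimal, untested sketch of how the newer IMLangFontLink2 variant from <mlang.h> might be used to map a single character to a substitution font (the helper name MapToSubstituteFont is mine; COM initialization and error handling are abbreviated):
#include <windows.h>
#include <mlang.h>

// Sketch: ask MLang which font can display a character the source font
// lacks. Assumes CoInitialize(Ex) has already been called.
HFONT MapToSubstituteFont(HDC hdc, HFONT hSrcFont, WCHAR ch)
{
    IMLangFontLink2 *pFontLink = NULL;
    HFONT hMapped = NULL;
    if (SUCCEEDED(CoCreateInstance(CLSID_CMultiLanguage, NULL, CLSCTX_ALL,
                                   IID_IMLangFontLink2, (void **)&pFontLink)))
    {
        DWORD codePages = 0;
        pFontLink->GetCharCodePages(ch, &codePages);
        HGDIOBJ hOld = SelectObject(hdc, hSrcFont);
        pFontLink->MapFont(hdc, codePages, ch, &hMapped);
        SelectObject(hdc, hOld);
        // NOTE: fonts returned by MapFont belong to MLang and should be
        // released with pFontLink->ReleaseFont(hMapped); real code would
        // keep pFontLink alive until then rather than releasing it here.
        pFontLink->Release();
    }
    return hMapped; // NULL if no substitution font was found
}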

I too have puzzled over GetGlyphOutline. I'm not sure if you were able to do the same, but I was able to get mixed-script text outlines by using TextOut() in combination with BeginPath(), EndPath() and GetPath().
For example, even with the Arial font, I am able to get the path of the Japanese text 「テスト」 (using C++, but this can easily be done in C as well):
SelectObject(hdc, hArialFont);
BeginPath(hdc);
TextOut(hdc, 100, 100, L"\u30c6\u30b9\u30c8", 3); // auto font subbing; TextOut needs the character count
EndPath(hdc);
// get number of points in path
int pc = GetPath(hdc, NULL, NULL, 0);
if (pc > 0)
{
    std::vector<POINT> points(pc);
    std::vector<BYTE> types(pc); // PT_MOVETO, PT_LINETO, PT_BEZIERTO
    GetPath(hdc, &points[0], &types[0], pc);
    // it seems the first four points are the bounding rect
    // (likely the opaque text background; SetBkMode(hdc, TRANSPARENT) before TextOut should avoid it)
    // subsequent points match up to their types
    for (int i = 4; i < pc; i++)
    {
        if (types[i] == PT_LINETO)
            LineTo(hdc, points[i].x, points[i].y); // etc.
    }
}
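To flesh that loop out, here is a sketch of a fuller decomposition of the data GetPath() returns (my own addition, assuming the same points/types vectors as above): PT_BEZIERTO records arrive in groups of three control points, and any record may have the PT_CLOSEFIGURE flag OR'd in to mark the end of a contour.
for (int i = 0; i < pc; i++)
{
    switch (types[i] & ~PT_CLOSEFIGURE) // strip the close-figure flag
    {
    case PT_MOVETO:   // start of a new contour
        MoveToEx(hdc, points[i].x, points[i].y, NULL);
        break;
    case PT_LINETO:   // straight segment
        LineTo(hdc, points[i].x, points[i].y);
        break;
    case PT_BEZIERTO: // cubic Bezier: three consecutive control points
        PolyBezierTo(hdc, &points[i], 3);
        i += 2;       // skip the two extra points consumed above
        break;
    }
}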

Related

Correct combining character positions with HarfBuzz

I am trying to render text with HarfBuzz and a signed distance field atlas.
The code is basically this:
void drawText(const std::wstring &str, Vec2 pos)
{
    // Init harfbuzz
    hb_buffer_t *hbBuf = hb_buffer_create();
    hb_buffer_set_direction(hbBuf, HB_DIRECTION_LTR);
    hb_buffer_set_script(hbBuf, HB_SCRIPT_LATIN);
    hb_buffer_set_language(hbBuf, hb_language_from_string("en", 2));
    // Process string (the cast assumes 32-bit wchar_t, as on Linux)
    hb_buffer_add_utf32(hbBuf, reinterpret_cast<const uint32_t*>(str.c_str()), -1, 0, -1);
    hb_shape(font.hb, hbBuf, nullptr, 0);
    // Display string
    unsigned int nbGlyphs;
    hb_glyph_info_t *glyphInfos = hb_buffer_get_glyph_infos(hbBuf, &nbGlyphs);
    hb_glyph_position_t *glyphPos = hb_buffer_get_glyph_positions(hbBuf, &nbGlyphs);
    for (unsigned int i = 0; i < nbGlyphs; i++)
    {
        // offsets and advances are in 26.6 fixed point, hence the / 64
        Vec2 drawPos = pos + Vec2(glyphPos[i].x_offset, glyphPos[i].y_offset) / 64.f;
        drawGlyph(glyphInfos[i].codepoint, drawPos);
        pos.x += glyphPos[i].x_advance / 64.f;
        pos.y += glyphPos[i].y_advance / 64.f;
    }
    hb_buffer_destroy(hbBuf); // release the buffer
}
The text looks correctly shaped for an English phrase, but when I test it with diacritics, they look misplaced.
I am testing it with aâa aâ̈a bb̂b bb̂̈b bb̧b bb͜b bb︠︡b. The Unicode string does not contain precomposed characters. HarfBuzz substitutes the precomposed character â, which makes that one look good. Most other diacritics are off.
(Screenshot: text with diacritics to the left of where they should be.)
When I multiply x_offset by 0.5, the combining characters are better placed. The accents and the cedilla are at the right x position. The accents do not stack and are too low on the b. The arc under the b's (U+035C) should join the last two letters instead of being centered on the 2nd b.
I also tried with U+FE20 and U+FE21 on the previous group of b. In my tests, U+FE21 is on the 2nd b, but it looks like it should be on the 3rd.
(Screenshot: test with glyphPos[i].x_offset * 0.5f; better, but still wrong.)
I tried with several fonts, but of those fonts, only NotoSansDisplay-Regular.ttf had combining characters. I did not manage to make a program display that string as expected on my Debian system (testing, with HarfBuzz 2.6.4-1).
On Windows, I got better results. This is close to what I expect: the accents are stacked and the combining double breve below is at the right place; only the cedilla is off.
(Screenshot: text rendering closer to what I expect.)
Am I doing something wrong with HarfBuzz, or am I testing niche cases that HarfBuzz does not support yet?
EDIT:
The actual problem was not described above.
I loaded a font with FreeType's FT_New_Face and then created a hb_font_t with hb_ft_font_create.
For every string drawn, I called FT_Set_Pixel_Sizes but kept the same hb_font_t.
You should try shaping the same text and font with hb-view / hb-shape. That would help you narrow down where the problem is. I'm making a wild guess that the problem is in how / whether you are accounting for glyph origin in your atlas.
Create a new hb_font_t with hb_ft_font_create every time the font size is changed with FT_Set_Pixel_Sizes.
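A minimal sketch of that fix (my own illustration; the helper name is hypothetical). Recreating the font works, and hb_ft_font_changed() from <hb-ft.h> should be an alternative that tells an existing hb_font_t that the underlying FT_Face has changed:
#include <hb-ft.h>

// Keep the hb_font_t in sync with the FT_Face after a size change.
void setFontPixelSize(FT_Face face, hb_font_t *&hbFont, unsigned int px)
{
    FT_Set_Pixel_Sizes(face, 0, px);
    // Either recreate the HarfBuzz font...
    hb_font_destroy(hbFont);
    hbFont = hb_ft_font_create(face, nullptr);
    // ...or keep the same hb_font_t and notify it instead:
    // hb_ft_font_changed(hbFont);
}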

TextView with long text invisible with LAYER_TYPE_HARDWARE or LAYER_TYPE_SOFTWARE

I'm having a problem rendering a long TextView in a hardware-accelerated activity (android:hardwareAccelerated="true"). The TextView has no background color (i.e., it is transparent). When the text is longer than a certain length, the TextView renders with a solid black background instead of a transparent one.
The text in the TextView can be edited by the user and is forced not to wrap except at actual newlines. I do this by calculating the width of the text like so:
int textWidth = 0;
String[] lines = string.split("\\n");
for (String line : lines) {
    int lineWidth = (int) tv.getPaint().measureText(line);
    if (lineWidth > textWidth) {
        textWidth = lineWidth;
    }
}
int width = m.getPaddingLeft() + tv.getPaddingLeft() + textWidth
        + tv.getPaddingRight() + m.getPaddingRight();
Then I override the onMeasure method of the ViewGroup to force the width to be at least as wide as the text:
@Override
protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
    super.onMeasure(widthMeasureSpec, heightMeasureSpec);
    int newWidth = Math.max(getMeasuredWidth(), width);
    setMeasuredDimension(newWidth, getMeasuredHeight());
}
All of this is working as expected, but it allows the text to get really big - too big apparently.
Attempted Solutions:
I guessed that the problem was with OpenGL being unable to render something that long, so I queried the GL_MAX_TEXTURE_SIZE OpenGL parameter and compared it to the width. Sure enough, the problem occurs when width > GL_MAX_TEXTURE_SIZE.
To solve this problem, I wrote some code to disable hardware acceleration on the view when the text is too long:
int[] maxGlTexSize = new int[1];
GLES20.glGetIntegerv(GLES20.GL_MAX_TEXTURE_SIZE, maxGlTexSize, 0);
if (width > maxGlTexSize[0]) {
    Log.e("Debug", "Too big for GL");
    tv.setLayerType(View.LAYER_TYPE_SOFTWARE, null);
} else {
    Log.i("Debug", "Small enough for GL");
    tv.setLayerType(View.LAYER_TYPE_NONE, null);
}
However, this code doesn't work for me. When the condition is met (the text is too long), the TextView becomes invisible. This also happens if I try to use LAYER_TYPE_HARDWARE. (I tried that because the Hardware Acceleration guide says to set the layer type to HARDWARE for large views with alpha.)
Also, I did try permanently setting the view layer type. The results were slightly different for the two types:
LAYER_TYPE_SOFTWARE: When the activity is created with text smaller than the limit, it renders fine. When text is added to surpass the limit, the view disappears. When the text is shortened to be within the limit again, it reappears.
LAYER_TYPE_HARDWARE: Identical to LAYER_TYPE_SOFTWARE except that the text does not reappear when shortened after being too long. The activity must be recreated in order for the text to reappear.
TL;DR
I'm having a view rendering problem caused by OpenGL limitations, but view.setLayerType(View.LAYER_TYPE_SOFTWARE, null); is making the view disappear rather than fixing the problem.
After thinking about this problem for a bit, I realized that it makes sense that changing the layer type of the view doesn't solve the problem. If the activity is hardware accelerated, the view still has to be stored in a GPU texture to be rendered to the screen, regardless of whether or not the view is hardware accelerated.
To solve the problem, I simply lowered the resolution (size) of the text until the view's width was less than the GL_MAX_TEXTURE_SIZE. This works well because the text doesn't need to be high resolution if the user is displaying a lot of it, because they will scale it down to fit all of the text on the screen.

GetObject() on bitmap handle from LoadImage() sometimes returns incorrect bitmap size

We are seeing an intermittent problem in which owner-drawn buttons under Windows XP that use a bitmap as a backdrop display the bitmap incorrectly. A window containing multiple buttons that use the same bitmap file for the button backdrop will display, and most of the buttons will be correct, though in some cases one or two buttons display the bitmap backdrop reduced to a smaller size.
If you exit the application and restart it, you may see the same incorrect display of the icon on the buttons, though not necessarily on the same buttons as before. Nor is the incorrect display always seen; sometimes it shows and sometimes it does not. Since we load an icon for a button once and keep it, a button that is displayed incorrectly will always be displayed incorrectly.
Using the debugger we have finally found that what appears to be happening is that when the GetObject() function is called, the data returned for the bitmap size is sometimes incorrect. For instance in one case the bitmap was 75x75 pixels and the size returned by GetObject() was 13x13 instead. Since this size is used as part of the drawing of the bitmap, the displayed backdrop becomes a small decoration on the button window.
The actual source area is as follows.
if (!hBitmapFocus) {
    CString iconPath;
    iconPath.Format(ICON_FILES_DIR_FORMAT, m_Icon);
    hBitmapFocus = (HBITMAP)LoadImage(NULL, iconPath, IMAGE_BITMAP, 0, 0, LR_LOADFROMFILE);
}
if (hBitmapFocus) {
    BITMAP bitmap;
    int iNoBytes = GetObject(hBitmapFocus, sizeof(BITMAP), &bitmap);
    if (iNoBytes < 1) {
        char xBuff[128];
        sprintf(xBuff, "GetObject() failed. GetLastError = %d", GetLastError());
        NHPOS_ASSERT_TEXT((iNoBytes > 0), xBuff);
    }
    cxSource = bitmap.bmWidth;
    cySource = bitmap.bmHeight;
    // Bitmaps cannot be drawn directly to the screen, so a
    // compatible memory DC is created to draw to, then the image is
    // transferred to the screen
    CDC hdcMem;
    hdcMem.CreateCompatibleDC(pDC);
    HGDIOBJ hpOldObject = hdcMem.SelectObject(hBitmapFocus);
    int xPos;
    int yPos;
    // The horizontal and vertical alignment for images
    // are set in the Layout Manager; the proper attribute
    // will have to be checked against. For now the image
    // is centered on the button.
    // Horizontal alignment
    if (btnAttributes.horIconAlignment == IconAlignmentHLeft) {           // image to left
        xPos = 2;
    } else if (btnAttributes.horIconAlignment == IconAlignmentHRight) {   // image to right
        xPos = myRect.right - cxSource - 5;
    } else {                                                              // horizontal center
        xPos = ((myRect.right - cxSource) / 2) - 1;
    }
    // Vertical alignment
    if (btnAttributes.vertIconAlignment == IconAlignmentVTop) {           // image to top
        yPos = 2;
    } else if (btnAttributes.vertIconAlignment == IconAlignmentVBottom) { // image to bottom
        yPos = myRect.bottom - cySource - 5;
    } else {                                                              // vertical center
        yPos = ((myRect.bottom - cySource) / 2) - 1;
    }
    pDC->BitBlt(xPos, yPos, cxSource, cySource, &hdcMem, 0, 0, SRCCOPY);
    hdcMem.SelectObject(hpOldObject);
}
Using the debugger we can see that the iconPath string is correct and the bitmap is loaded as hBitmapFocus is not NULL. Next we can see that the call to GetObject() is made and the value returned for iNoBytes equals 24. For those buttons that display correctly the values in bitmap.bmWidth and bitmap.bmHeight are correct however for those that do not the values are much too small leading to an incorrect sizing when drawing the bitmap.
The variable is defined in the class header as
HBITMAP hBitmapFocus;
As part of researching this I found the Stack Overflow question GetObject returns strange size, and I am wondering if there is some kind of alignment issue here.
Does the bitmap variable used in the call to GetObject() need to be on some kind of alignment boundary? While we use packing for some of our data, we use pragma directives so that only specific structs in specific include files are packed on one-byte boundaries.
Please read this Microsoft KB article on how to load a bitmap with palette information. It has a great example as well.
On a side note: I do not see anywhere in your code where you call ::DeleteObject(hBitmapFocus). It is very important to call this, as you can run out of GDI objects very quickly.
It is always a good idea to use the Windows Task Manager to check that your program does not exhaust GDI resources. Just add the "GDI Objects" column to the Task Manager and verify that the number of objects is not constantly increasing in your app but stays within an expected range, similar to other programs.
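As a small illustration of that point (the class name CMyButton is hypothetical; the member mirrors the question's code), the cached handle could be released when the button is destroyed:
// Release the cached GDI bitmap so repeated loads do not
// exhaust the GDI object pool.
CMyButton::~CMyButton()
{
    if (hBitmapFocus) {
        ::DeleteObject(hBitmapFocus);
        hBitmapFocus = NULL;
    }
}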

Unwanted padding in CATextLayer

I am having a problem drawing a character in a CATextLayer such that the layer is exactly the size of the character.
I use the code below to get the size of the glyph corresponding to the character in the string. For the moment I am neglecting diacritics, so I am assuming a one-to-one correspondence between glyphs and characters.
I also get nice bounding box values for several characters for the Helvetica font at size 128pt:
Character | x | y | width | height |
B | 9.4 | 0.0 | 70.8 | 91.8 |
y | 1.3 | -27.4 | 61.2 | 96.0 |
I am not sure where the origin of the coordinate system in which these coordinates are expressed is located. I am assuming that (0,0) is at the very left, vertically on the baseline of the font. That is why 'y' has a negative y value.
I am using this code to calculate the size of a capital B and resize its CATextLayer accordingly.
- (CATextLayer *) testTextLayer
{
    CATextLayer *l = [CATextLayer layer];
    l.string = @"B";
    NSUInteger len = [l.string length];
    l.fontSize = 128.f;
    CGColorRef blackColor = CGColorCreateGenericGray(0.f, 1.f);
    l.foregroundColor = blackColor;
    CGColorRelease(blackColor);
    // need to set CGFont explicitly to convert font property to a CGFontRef
    CGFontRef layerFont = CGFontCreateWithFontName((CFStringRef)@"Helvetica");
    l.font = layerFont;
    // get characters from NSString
    UniChar *characters = (UniChar *)malloc(sizeof(UniChar)*len);
    CFStringGetCharacters((__bridge CFStringRef)l.string, CFRangeMake(0, [l.string length]), characters);
    // Get CTFontRef from CGFontRef
    CTFontRef coreTextFont = CTFontCreateWithGraphicsFont(layerFont, l.fontSize, NULL, NULL);
    // allocate glyphs and bounding box arrays for holding the result
    // assuming that each character is only one glyph, which is wrong
    CGGlyph *glyphs = (CGGlyph *)malloc(sizeof(CGGlyph)*len);
    CTFontGetGlyphsForCharacters(coreTextFont, characters, glyphs, len);
    // get bounding boxes for glyphs
    CGRect *bb = (CGRect *)malloc(sizeof(CGRect)*len);
    CTFontGetBoundingRectsForGlyphs(coreTextFont, kCTFontDefaultOrientation, glyphs, bb, len);
    CFRelease(coreTextFont);
    l.position = CGPointMake(200.f, 100.f);
    l.bounds = bb[0];
    CGColorRef bgColor = CGColorCreateGenericRGB(0.f, .5f, .9f, 1.f);
    l.backgroundColor = bgColor;
    CGColorRelease(bgColor); // release to avoid leaking the color
    free(characters);
    free(glyphs);
    free(bb);
    return l;
}
This is the result I am getting from the above code. It seems to me that the size is correct, but some kind of padding is taking place around the character.
Now my questions
Am I right with the assumption of the origin of the bounding box of the glyph?
How can one draw the letter such that it fits neatly into the layer, without this padding? Or alternatively, how can one control this padding?
Maybe I am missing an obvious point here. Is there no way, after setting the size and the font of the layer, to shrink-wrap the layer around the character in a defined way (meaning with optional padding, a bit like in CSS)?
How about creating a CGPath from a glyph with CTFontCreatePathForGlyph and then getting its bounding box with CGPathGetBoundingBox?
An alternative would be to create a CTRun somehow and use the CTRunGetImageBounds function which also returns a "tight" bounding box, but this probably requires more lines of code than the aforementioned approach and you'd need to have a graphics context.
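A minimal sketch of the path-based approach (assuming coreTextFont and glyphs obtained as in the question's code):
// Tight bounds from the glyph's actual outline path.
CGPathRef glyphPath = CTFontCreatePathForGlyph(coreTextFont, glyphs[0], NULL);
CGRect tightBounds = CGPathGetBoundingBox(glyphPath);
CGPathRelease(glyphPath);
// tightBounds can then replace the typographic bounds from
// CTFontGetBoundingRectsForGlyphs when sizing the layer.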
I assume this has to do with the built-in space around letters. A glyph's bounding box usually includes some amount of space that serves as general spacing when glyphs are assembled into a line; this spacing is then refined with kerning tables.
And yes, extracting the actual bezier path is a very good method of getting a tight bounding box. I have been doing that for years, though I have no experience using CATextLayers.

JLabel not displaying all the characters even after dynamically changing font size

I am trying to fit a sentence that changes often into a few JLabels. The widths of my 3 JLabels stay unchanged all the time. What I am doing is changing the font size so that all the characters fit without any being outside the display range of the labels. I call the code snippet below whenever the sentence is changed.
Here is my code
String sentence = "Some long sentence";
int SentenceLength = sentence.length();
int FontSize = 0;
// sum of widths of the three labels
int TotalLblLength = lbl_0ValueInWords.getWidth() + lbl_1ValueInWords.getWidth() + lbl_2ValueInWords.getWidth();
/* decide the font size so that all the characters can be displayed
   without exceeding the (horizontal) display range of the 3 labels.
   Inconsolata -> monospace font;
   font size == width of the font * 2 (something I observed, not sure
   if this is always true) */
FontSize = (TotalLblLength / SentenceLength) * 2;
// max font size is 20 - based on label height
FontSize = (FontSize > 20) ? 20 : FontSize;
lbl_0ValueInWords.setFont(new java.awt.Font("Inconsolata", 0, FontSize));
lbl_1ValueInWords.setFont(new java.awt.Font("Inconsolata", 0, FontSize));
lbl_2ValueInWords.setFont(new java.awt.Font("Inconsolata", 0, FontSize));
int CharCount_lbl0 = width_lbl0 / (FontSize / 2);
int CharCount_lbl1 = width_lbl1 / (FontSize / 2);
int CharCount_lbl2 = width_lbl2 / (FontSize / 2);
/* Set the text of each label:
   if the sentence has more characters than can fit in the
   1st label, the excess characters move to the 2nd label; the same goes
   for the 2nd and 3rd labels */
if (SentenceLength > CharCount_lbl0) {
    lbl_0ValueInWords.setText(sentence.substring(0, CharCount_lbl0));
    if (SentenceLength > CharCount_lbl0 + CharCount_lbl1) {
        lbl_1ValueInWords.setText(sentence.substring(CharCount_lbl0, CharCount_lbl0 + CharCount_lbl1));
        lbl_2ValueInWords.setText(sentence.substring(CharCount_lbl0 + CharCount_lbl1, SentenceLength));
    } else {
        lbl_1ValueInWords.setText(sentence.substring(CharCount_lbl0, SentenceLength));
    }
} else {
    lbl_0ValueInWords.setText(sentence);
}
But even after resetting the font size, the last character sometimes goes out of the display range. I have removed margins from the JLabels that might cause this. It happens for sentences of random lengths. I can (hopefully) work around it in the application by reducing the label width used in the calculations.
Can anyone explain the reason? Could it be because of some defect in the font's symmetry?
There is no such thing as font symmetry.
There are two types of fonts relevant to what you are dealing with: monospace fonts and non-monospace fonts. Monospace fonts have exactly the same width for every single character you can type. The others do not.
On top of that, fonts are rendered differently across different OSes. Something that fits on Windows may be around 10-20% longer on a Mac because the fonts are spaced differently.
Whatever it is you are trying to do with JLabels, stop. You should not be using 3 JLabels to show 3 lines of text because they don't fit. Scrap them and use a JTextArea. It has text wrap, you can set the font, and you can remove the margin/border/padding and make it non-editable. You can customize it very easily so it is indistinguishable from a JLabel, but it will save you a ton of work.
Pick the right tool for the right job.
