Why is render glyph so slow?

I'm drawing Chinese text on the Linux framebuffer with FreeType 2. I compared my code against fbterm's
Font::Glyph *Font::getGlyph(u32 unicode)
void Screen::draw##bits(u32 x, u32 y, u32 w, u8 fc, u8 bc, u8 *pixmap)
and removed the glyph-caching part:
if (glyphCache[unicode]) return glyphCache[unicode];
But my program renders Chinese very slowly, even though my rendering code is almost the same as fbterm's.
All I know is that if I skip the FT_Load_Glyph(face, index, FT_LOAD_DEFAULT); call, rendering is very fast, but I guess that is not the key point.
Any suggestions?

It turned out to be the font type.
If I use unifont.pcf.gz, getting a glyph takes more time.
If I use bsmi00lp.ttf, getting a glyph takes less time, and rendering is very fast.
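fbterm avoids paying the FT_Load_Glyph cost more than once per code point by caching rendered glyphs, which is exactly the part that was removed. A minimal sketch of such a cache (the Glyph struct and its fields here are illustrative, not fbterm's actual definitions):

#include <ft2build.h>
#include FT_FREETYPE_H
#include <cstring>
#include <map>

// Illustrative glyph record; fbterm's real struct differs.
struct Glyph {
    int left, top;           // bitmap placement relative to the pen
    int width, height, pitch;
    unsigned char *pixmap;   // copy of the rendered coverage bitmap
};

static std::map<unsigned int, Glyph*> glyphCache;

Glyph *getGlyph(FT_Face face, unsigned int unicode) {
    // Return the cached glyph if this code point was already rendered.
    auto it = glyphCache.find(unicode);
    if (it != glyphCache.end()) return it->second;

    FT_UInt index = FT_Get_Char_Index(face, unicode);
    if (FT_Load_Glyph(face, index, FT_LOAD_RENDER)) return nullptr;

    FT_GlyphSlot slot = face->glyph;
    Glyph *g = new Glyph;
    g->left   = slot->bitmap_left;
    g->top    = slot->bitmap_top;
    g->width  = slot->bitmap.width;
    g->height = slot->bitmap.rows;
    g->pitch  = slot->bitmap.pitch;
    // The slot's bitmap is overwritten by the next load, so copy it out.
    g->pixmap = new unsigned char[g->pitch * g->height];
    memcpy(g->pixmap, slot->bitmap.buffer, g->pitch * g->height);

    glyphCache[unicode] = g;
    return g;
}

With the cache in place, FT_Load_Glyph runs once per distinct character, which matters most for slow-to-load fonts such as bitmap formats like unifont.pcf.gz.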

Related

Blob detection on embedded platform, memory restricted

I have an STM32H7 MCU with 1 MB of RAM and 1 MB of ROM. I need to run a blob detection algorithm on a binary image array of maximum size 1280x1024.
I have searched for blob detection algorithms and found that they mainly fall into two categories:
Algorithms based on label propagation (one component at a time):
They first search for an unlabeled object pixel and label it with a new label; then, in later processing, they propagate the same label to all object pixels connected to it. Demo code would look something like this:
void setLabels() {
    int m = 2;  // labels start at 2 so they cannot collide with pixel values 0/1
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            if (getPixel(x, y) == 1) compLabel(x, y, m++);
        }
    }
}

void compLabel(int i, int j, int m) {
    // Stop at the image border; without this check the recursion runs out of bounds.
    if (i < 0 || i >= width || j < 0 || j >= height) return;
    if (getPixel(i, j) == 1) {
        setPixel(i, j, m);  // assign label; also marks the pixel as visited
        compLabel(i - 1, j - 1, m);
        compLabel(i - 1, j,     m);
        compLabel(i - 1, j + 1, m);
        compLabel(i,     j - 1, m);
        compLabel(i,     j + 1, m);
        compLabel(i + 1, j - 1, m);
        compLabel(i + 1, j,     m);
        compLabel(i + 1, j + 1, m);
    }
}
Algorithms based on label-equivalence resolving (two-pass): These consist of two steps. In the first step, they assign a provisional label to each object pixel. In the second step, they merge all provisional labels assigned to each object, called the equivalent labels, into a unique label, called the representative label, and replace each object pixel's provisional label with its representative label. A sketch follows below.
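A minimal sketch of the two-pass idea with a small union-find table over the labels (4-connectivity; getPixel/setPixel, width, and height are assumed from the demo code above; MAX_LABELS is an arbitrary choice, and overflow of the label table is not handled here):

#define MAX_LABELS 256

extern int width, height;
extern int getPixel(int x, int y);          // assumed: 0 background, 1 object
extern void setPixel(int x, int y, int v);  // assumed: stores the label

static int parent[MAX_LABELS];  // union-find forest over provisional labels

static int findRoot(int a) {
    while (parent[a] != a) a = parent[a] = parent[parent[a]];  // path halving
    return a;
}

static void merge(int a, int b) {
    a = findRoot(a); b = findRoot(b);
    if (a < b) parent[b] = a; else parent[a] = b;  // keep the smaller root
}

void labelTwoPass(void) {
    int next = 2;  // provisional labels start at 2
    // First pass: assign provisional labels and record equivalences.
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            if (getPixel(x, y) != 1) continue;
            int west  = (x > 0) ? getPixel(x - 1, y) : 0;
            int north = (y > 0) ? getPixel(x, y - 1) : 0;
            if (west < 2 && north < 2) {           // no labeled neighbor yet
                parent[next] = next;
                setPixel(x, y, next++);
            } else if (west >= 2 && north >= 2) {  // two labels meet: merge them
                merge(west, north);
                setPixel(x, y, west < north ? west : north);
            } else {
                setPixel(x, y, west >= 2 ? west : north);
            }
        }
    }
    // Second pass: replace every provisional label by its representative.
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
            if (getPixel(x, y) >= 2)
                setPixel(x, y, findRoot(getPixel(x, y)));
}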
The downside of the first algorithm is that it uses recursive calls for all the pixels around the original pixel. I am afraid this will cause hard fault errors on the STM32 because of the limited stack.
The downside of the second algorithm is that it requires a lot of memory for the label image. For instance, at the maximum resolution of 1280x1024 with a maximum of 255 labels (0 meaning no label), the label image is 1.25 MB, way more than we have available.
I am looking for advice on how to proceed. How can I get the center coordinates and area of all blobs in the image without using too much memory? Any help is appreciated. I presume the second algorithm is out of the picture, since the memory simply isn't available.
First, go over your image with a scaling kernel to shrink it to something that can actually be processed; 4:1 or 9:1 are good possibilities. Otherwise you are going to have to get more RAM, because this situation seems unworkable. Bit access is not really fast and is going to kill your efficiency, and I don't even think you need an image that big (at least, that is my experience with vision systems).
You can then store the pixels in a plain unsigned char array, which can be labeled with the first method you named. It doesn't have to be a recursive process. You can also detect whether a blob was relabeled to another blob and set a flag to run the pass again.
This makes it possible to have an externally visible function run a while loop that keeps calling your labeling function without building up a big stack; a sketch of this idea follows below.
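A minimal sketch of such a non-recursive labeling pass, using a small fixed-size explicit stack instead of the call stack (getPixel/setPixel, width, and height are assumed from the question's demo code; MAX_STACK is an arbitrary choice):

#include <stdint.h>

#define MAX_STACK 2048  // bounded worst case, unlike recursion

extern int width, height;
extern int getPixel(int x, int y);          // assumed from the question
extern void setPixel(int x, int y, int v);  // assumed from the question

// Flood-fill one component iteratively with 8-connectivity.
// Returns 0 if the stack overflowed, in which case the caller can set
// a flag and rerun the pass from an outer while loop.
static int labelComponent(int sx, int sy, int m) {
    static int16_t stackX[MAX_STACK], stackY[MAX_STACK];
    int top = 0;
    stackX[top] = sx; stackY[top] = sy; top++;
    while (top > 0) {
        top--;
        int x = stackX[top], y = stackY[top];
        if (x < 0 || x >= width || y < 0 || y >= height) continue;
        if (getPixel(x, y) != 1) continue;
        setPixel(x, y, m);  // labeling also marks the pixel as visited
        for (int dy = -1; dy <= 1; dy++) {
            for (int dx = -1; dx <= 1; dx++) {
                if (dx == 0 && dy == 0) continue;
                if (top >= MAX_STACK) return 0;  // out of stack space
                stackX[top] = x + dx; stackY[top] = y + dy; top++;
            }
        }
    }
    return 1;
}

The static stack here costs a fixed 8 KB regardless of blob size, which is the point: the memory use is known at compile time instead of depending on the image content.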
Area determination is then done by going over the image and counting the pixels of every labeled blob.
The center of a given blob can be found by calculating the moments of the blob and then computing its center of mass. This is some pretty hefty math, so don't be discouraged; it is a tough apple to bite through, but it is a great solution. A sketch follows after the hint below.
(Small hint: you can take the C++ code from OpenCV and look through it to find out how it's done.)
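For the center of mass, only the zeroth and first moments are needed: m00 is the blob's area (its pixel count), and the centroid is (m10/m00, m01/m00). A minimal sketch over the labeled image (MAX_LABELS and the helpers are assumptions carried over from the sketches above):

#include <stdint.h>

#define MAX_LABELS 256

extern int width, height;
extern int getPixel(int x, int y);  // assumed: returns the label after labeling

// Per-label moment accumulators, filled in a single pass.
static uint32_t m00[MAX_LABELS];                    // zeroth moment: area
static uint64_t m10[MAX_LABELS], m01[MAX_LABELS];   // first moments

void computeBlobStats(void) {
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int label = getPixel(x, y);
            if (label >= 2 && label < MAX_LABELS) {
                m00[label] += 1;
                m10[label] += (uint64_t)x;
                m01[label] += (uint64_t)y;
            }
        }
    }
    // Centroid of blob k: ((double)m10[k] / m00[k], (double)m01[k] / m00[k]).
}

The accumulator tables cost about 5 KB for 256 labels, so this fits comfortably next to the image buffer.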

FreeType - help me understand glyph's advance.y property

I'm learning the basics of the FreeType API for use in OpenGL and I'm confused about one thing. You load the font, then you load each glyph one by one into the font's glyph slot. The glyph has a number of fields, including advance, which has an x and a y field. Now, I understand that y isn't used much, but on the off chance that I am in a situation where y is used, what I don't understand is this: each character is rendered in isolation into the glyph slot, so how can the glyph know that all subsequent characters should be rendered with a specific fractional offset? What if you were to render a lot of the same character in succession? Wouldn't you end up with a slow diagonal incline or decline in your final text block?
Historically, advance.y is mostly for vertical text, as used in Asia (FT_LOAD_VERTICAL_LAYOUT will trigger it). In a normal rendering case, you should not get non-zero values for both advance.x and advance.y at the same time.
But it is also useful for using FreeType in a more generic way. If you want to write Latin upright text along a 30° incline, you can still use the same structures: you apply (through FT_Set_Transform) the 30° rotation matrix to each glyph, but also to the advance vector, and the result will indeed have a diagonal incline, as intended!
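A minimal sketch of that approach (the helper name is mine; error handling omitted):

#include <ft2build.h>
#include FT_FREETYPE_H
#include <math.h>

// Install a rotation on the face; FT_Matrix uses 16.16 fixed-point
// coefficients, so each entry is scaled by 0x10000.
void setIncline(FT_Face face, double degrees) {
    double rad = degrees * M_PI / 180.0;
    FT_Matrix matrix;
    matrix.xx = (FT_Fixed)( cos(rad) * 0x10000L);
    matrix.xy = (FT_Fixed)(-sin(rad) * 0x10000L);
    matrix.yx = (FT_Fixed)( sin(rad) * 0x10000L);
    matrix.yy = (FT_Fixed)( cos(rad) * 0x10000L);
    FT_Set_Transform(face, &matrix, NULL);
}

After a call like setIncline(face, 30.0), each FT_Load_Glyph produces a slanted outline and a rotated advance vector with both .x and .y non-zero, so stepping the pen by the advance keeps the glyphs on the inclined baseline.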

Are pixels ever not a square on a monitor?

I'm working on making my application DPI sensitive using this MSDN guide where the technique for scaling uses X and Y logical pixels from a device context.
int _dpiX = 96, _dpiY = 96;
HDC hdc = GetDC(NULL);
if (hdc)
{
    _dpiX = GetDeviceCaps(hdc, LOGPIXELSX);
    _dpiY = GetDeviceCaps(hdc, LOGPIXELSY);
    ReleaseDC(NULL, hdc);
}
Then you can scale X and Y coordinates using
int ScaleX(int x) { return MulDiv(x, _dpiX, 96); }
int ScaleY(int y) { return MulDiv(y, _dpiY, 96); }
Is there ever a situation where GetDeviceCaps(hdc, LOGPIXELSX) and GetDeviceCaps(hdc, LOGPIXELSY) would return different numbers for a monitor? The only device I'm really concerned about is a monitor, so do I need separate ScaleX(int x) and ScaleY(int y) functions? Could I use a single Scale(int px) function? Would there be a downside to doing this?
Thanks in advance for the help.
It is theoretically possible, but I don't know of any recent monitor that uses non-square pixels. There are so many advantages to square pixels, and so much existing software assumes square pixels, that it seems unlikely for a mainstream monitor to come out with a non-square pixel mode.
In many cases, if you did have a monitor with non-square pixels, you probably could apply a transform to make it appear as though it has square pixels (e.g., by setting the mapping mode).
That said, it is common for printers to have non-square device units. Many of them have a much higher resolution in one dimension than in the other. Some drivers make this resolution available to the caller. Others will make it appear as though it has square pixels. If you ever want to re-use your code for printing, I'd advise you to not conflate your horizontal and vertical scaling.
Hardware pixels of LCD panels are always square. On a CRT you can have rectangular pixels, for example by using a 320x200 or 320x400 resolution on a 4:3 monitor (these resolutions were actually used). On an LCD you can get rectangular pixels by running the monitor at a non-native resolution, such as a widescreen resolution on a 5:4 monitor, and vice versa.

Can I hardcode glyph indices in my code?

Given that the Windows API function GetGlyphIndices() can translate a 2-byte Unicode character code into a glyph index, I intend to hardcode those glyph indices instead of the Unicode code points. Is that possible?
I understand that MS could later change the value returned by this function for a particular Unicode code point, but my expectation is that the current glyph index would still be kept in the glyph set in that situation.
In other words, my understanding is that if MS decides to associate a new glyph index with a Unicode code point, it will enlarge the glyph set while keeping the old glyphs.
Could someone confirm this?
There is no guarantee that new glyphs will always be appended. (And what if a glyph gets deleted?)
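Given that uncertainty, the robust alternative is to resolve the indices at runtime instead of hardcoding them. A minimal sketch using GetGlyphIndicesW (assumes the font of interest is already selected into hdc; the helper name is mine):

#include <windows.h>

// Translate a UTF-16 string into glyph indices at runtime.
// Returns FALSE if lookup fails or any character has no glyph in the font.
BOOL lookupGlyphIndices(HDC hdc, const wchar_t *text, int len, WORD *indices)
{
    DWORD n = GetGlyphIndicesW(hdc, text, len, indices,
                               GGI_MARK_NONEXISTING_GLYPHS);
    if (n == GDI_ERROR) return FALSE;
    for (int i = 0; i < len; i++) {
        if (indices[i] == 0xFFFF) return FALSE;  // missing-glyph marker
    }
    return TRUE;
}

Doing the lookup once at startup costs almost nothing and keeps the code correct even if the font file is updated and its glyph table is reorganized.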

What is rgbReserved?

Hi there,
I want to know what rgbReserved is in RGBQuadArray, and why, when I change the RGB colors, some ugly lines are displayed on the image. Is it related to rgbReserved?
In memory, a line of a bitmap is often stored in the format BBGGRR00BBGGRR00BBGGRR00... so that each pixel occupies exactly four bytes, or 32 bits. This simplifies a lot of things and can speed up computations and image manipulation. But if the bitmap specifies the red, green, and blue intensities as bytes (in the range 0..255) and doesn't contain an alpha channel, then each pixel only needs three bytes, so there is a fourth, unused byte per pixel. And in the pixel structure it has to be named something. Given that the usable members are called rgbRed, rgbGreen, and rgbBlue, rgbReserved feels rather OK. Maybe rgbUnused would be even more suitable, but there is a tradition in Win32 of naming (currently) unused parameters "Reserved", as in "reserved for future use". In fact, if your app works with transparent bitmaps containing an alpha channel, each pixel might be of the form BBGGRRAA, so you could use rgbReserved as an rgbAlpha.
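For illustration, a minimal sketch of addressing pixels through the Win32 RGBQUAD structure, whose field order matches the BBGGRR00 layout described above (the bits pointer is assumed to point at one row of a 32-bpp DIB):

#include <windows.h>

// Walk one row of a 32-bits-per-pixel DIB and set every pixel to opaque red.
// RGBQUAD is declared as {rgbBlue, rgbGreen, rgbRed, rgbReserved}, so the
// struct members line up with the BBGGRR00 byte order in memory.
void paintRowRed(void *bits, int width)
{
    RGBQUAD *row = (RGBQUAD *)bits;
    for (int x = 0; x < width; x++) {
        row[x].rgbBlue     = 0;
        row[x].rgbGreen    = 0;
        row[x].rgbRed      = 255;
        row[x].rgbReserved = 255;  // acts as alpha in BBGGRRAA bitmaps
    }
}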
The latter part of your question cannot be answered as it stands. I have no idea why your code doesn't work. Maybe the pixel intensities are overflowing? Maybe there is some silly bug somewhere?
As a final note: If you wonder what a member of a Win32 structure is, you can always consult the official documentation.
