I'm currently using the FreeType library to render text in my OpenGL ES 2.0 app (both iOS and Android), and I've reached the point where I seriously have to consider performance. I have a store of textures (one for each character), and whenever I render text, each character pulls from its own texture area, OpenGL binds to it and renders it to the screen... for every letter! Very inefficient. So I'm looking into ways to make this faster, and one of the ideas I had was to pre-render each word instead of each letter. So, now for the big question: how do I combine textures side by side? I know I need to create a buffer and that it will somehow come from FT_LOAD_CHAR, but I don't really know much else.
Actually, you have misunderstood how to use FreeType. You can load the font with the FT library and render all the characters into a single bitmap texture. Then you just use texture coordinates to point at the right UV region for each character.
This link may help to some extent: Draw FreeType Font in OpenGL ES
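A rough sketch of that approach is below, assuming OpenGL ES 2.0 and an ASCII-only font. The GlyphInfo struct, the 1024-pixel atlas size and the naive row-based packing are illustrative placeholders, and the face is assumed to already have its pixel size set with FT_Set_Pixel_Sizes.

```cpp
// Build a single glyph atlas with FreeType and upload it as one
// OpenGL ES 2.0 texture. Sketch only; packing is a simple row fill.
#include <ft2build.h>
#include FT_FREETYPE_H
#include <GLES2/gl2.h>

struct GlyphInfo {
    float u0, v0, u1, v1;   // texture coordinates inside the atlas
    int   width, height;    // glyph size in pixels
    int   bearingX, bearingY;
    int   advance;          // horizontal advance in pixels
};

GLuint buildAtlas(FT_Face face, GlyphInfo glyphs[128], int atlasSize = 1024) {
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);          // glyph rows are tightly packed
    glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, atlasSize, atlasSize, 0,
                 GL_ALPHA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    int penX = 0, penY = 0, rowHeight = 0;
    for (unsigned char c = 32; c < 128; ++c) {
        if (FT_Load_Char(face, c, FT_LOAD_RENDER) != 0)
            continue;
        FT_Bitmap& bmp = face->glyph->bitmap;
        if (penX + (int)bmp.width >= atlasSize) {   // start a new row
            penX = 0;
            penY += rowHeight + 1;
            rowHeight = 0;
        }
        glTexSubImage2D(GL_TEXTURE_2D, 0, penX, penY, bmp.width, bmp.rows,
                        GL_ALPHA, GL_UNSIGNED_BYTE, bmp.buffer);

        glyphs[c].u0 = penX / (float)atlasSize;
        glyphs[c].v0 = penY / (float)atlasSize;
        glyphs[c].u1 = (penX + bmp.width) / (float)atlasSize;
        glyphs[c].v1 = (penY + bmp.rows)  / (float)atlasSize;
        glyphs[c].width    = bmp.width;
        glyphs[c].height   = bmp.rows;
        glyphs[c].bearingX = face->glyph->bitmap_left;
        glyphs[c].bearingY = face->glyph->bitmap_top;
        glyphs[c].advance  = (int)(face->glyph->advance.x >> 6); // 26.6 fixed point

        penX += bmp.width + 1;
        if ((int)bmp.rows > rowHeight) rowHeight = bmp.rows;
    }
    return tex;
}
```

Drawing a string then becomes one glBindTexture plus one quad per character, all sharing the same texture, so the whole string can be submitted in a single draw call.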
My question may not seem new, but I searched for days and couldn't find an answer.
I'm trying to make a webpage with PIXI.js, which uses WebGL.
My webpage uses mouse-movement parallax; I mean each object only moves a few pixels when the user moves the mouse pointer.
Now my problem: I have some simple images and I don't know whether to use SVG or PNG.
My images are like these:
https://1drv.ms/i/s!Aj-BeFYyTnRzhTBSVEXXeJ2c-O7V
https://1drv.ms/i/s!Aj-BeFYyTnRzhTFeTzJLrWaq_VFh
https://1drv.ms/i/s!Aj-BeFYyTnRzhTIa9lAaS9dKX1DL
I want to make my webpage as smooth as possible, and I don't know whether to use PNG or SVG. I've searched a lot: some say it depends on the PNG and SVG (in my case my SVGs won't be too complex), some say that because SVG uses the CPU and WebGL uses the GPU, using them together causes a lack of performance, and some say that using SVG in PIXI makes no difference compared to PNG because PIXI makes a texture from them either way, so there won't be any difference...
I'm new to WebGL and PIXI, so with these answers I became confused. By the way, in my case the image size doesn't matter; I only want as much smoothness as possible.
Thanks a lot for your help.
It doesn't make a difference for runtime performance; the SVGs will be rasterized into textures either way. However, during initialization, when the browser needs to rasterize the SVGs to create textures from them, there may be a significant performance penalty depending on how complex your SVGs are.
However, since you're developing for the web, the aforementioned penalty is easily offset by the fact that you're loading the SVGs from a server, which introduces far more latency than rasterizing the SVG will, even more so if you consider the size difference between a rasterized PNG and an SVG (assuming you're not planning to create tiny textures from them).
So, final verdict: go with SVG. It's lossless and small, as well as resizable and editable from within client code. It also saves you from exporting your source assets to PNG every time you change something.
My app uses a texture atlas and reads parts of it to display items using glTexCoordPointer.
It works well with power-of-two textures, but I wanted to use NPOT to reduce the amount of memory used.
The picture itself loads fine with linear filtering and clamp-to-edge wrapping (the displayed content does come from the picture, alpha included), but the display is deformed.
The coordinates are not the correct ones, and the "shape" is more of a trapezoid than a rectangle.
I guessed I had to play with glEnable(), passing GL_TEXTURE_2D in the case of a POT texture and GL_APPLE_texture_2D_limited_npot in the other case, but I cannot find a way to do so.
Also, I do not have GL_TEXTURE_RECTANGLE_ARB; I don't know if that is an issue...
Has anyone had the same kind of problem?
Since OpenGL 2 (i.e. for about 10 years) there have been no constraints on the size of a regular texture. You can use whatever image size you want, and it will just work.
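For reference, a minimal sketch of creating such a non-power-of-two texture with the conservative parameter set (linear filtering, clamp-to-edge, no mipmaps), which also happens to be what the iPhone's GL_APPLE_texture_2D_limited_npot path requires; the 300x170 size and the pixels pointer are placeholders.

```cpp
// Create an NPOT texture with the parameters that are safe everywhere:
// no mipmaps and clamp-to-edge wrapping. Size and data are placeholders.
GLuint createNpotTexture(const void* pixels)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // no mipmaps
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 300, 170, 0,              // NPOT size
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return tex;
}
```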
I know this has been asked before (I did search) but I promise you this one is different.
I am making an app for Mac OS X Mountain Lion, and I need to add a little bit of a bloom effect. I need to render the entire scene to a texture the size of the screen, reduce the size of the texture, pass it through a pixel buffer, then use it as a texture for a quad.
I ask this again because a few of the usual techniques do not seem to work. I cannot use #version, layout, or out in my fragment shader, as they do not compile. If I just use gl_FragColor as normal, I get random pieces of the screen behind my app rather than the scene I am trying to render. The documentation doesn't say anything about such things.
So, basically, how can I render to a texture properly with the Mac implementation of OpenGL? Do you need to use extensions to do this?
I use the code from here
Rendering to a texture is best done using FBOs, which let you render directly into the texture. If your hardware/driver doesn't support OpenGL 3+, you will have to use the FBO functionality through the ARB_framebuffer_object core extension or the EXT_framebuffer_object extension.
If FBOs are not supported at all, you will either have to resort to a simple glCopyTexSubImage2D (which involves a copy though, even if just GPU-GPU) or use the more flexible but rather intricate (and deprecated) PBuffers.
This tutorial on FBOs provides a simple example for rendering to a texture and using this texture for rendering afterwards. Since the question lacks specific information about the particular problems you encountered with your approach, those rather general googlable pointers to the usual render-to-texture resources need to suffice for now.
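A minimal sketch of the FBO route is below, assuming core (or ARB) framebuffer objects are available; the function name, the size parameters and the choice of a depth renderbuffer are illustrative, not prescriptive.

```cpp
// Create a color texture plus depth renderbuffer and attach both to an FBO.
// Sketch only; error handling is reduced to a single completeness check.
GLuint createRenderTarget(int w, int h, GLuint* outColorTex)
{
    // Color texture that will receive the rendered scene.
    GLuint colorTex = 0;
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // Depth renderbuffer so the scene can still be depth-tested.
    GLuint depthRb = 0;
    glGenRenderbuffers(1, &depthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, w, h);

    // Framebuffer object tying the attachments together.
    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRb);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        // handle the error (wrong formats, missing attachment, ...)
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    *outColorTex = colorTex;
    return fbo;
}

// Per frame: bind the FBO, render the scene into the texture, then bind the
// default framebuffer again and draw a screen-sized quad sampling the color
// texture (e.g. for the downsized bloom pass).
```

With EXT_framebuffer_object the calls are the same except for the EXT suffixes (glGenFramebuffersEXT and so on).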
Simply using ID3DXFont::DrawText() alone produces jagged fonts, especially at larger font sizes. Is there a way to use DirectX9 to render fonts extremely smoothly like in the Windows 8 Metro UI?
I am sort of trying to avoid DirectWrite because it requires Direct2D, but I am using DirectX9.
DirectWrite doesn't actually require Direct2D, and can be used on its own (IDWriteBitmapRenderTarget). It's just much, much easier to use Direct2D to render DirectWrite text into an IWICBitmap (via ID2D1Factory::CreateWicBitmapRenderTarget()), and then draw that bitmap using DX (maybe by copying to a DX surface, or using some shared surface approach; I'm not familiar with the specifics here). You can create an IWICBitmap via IWICImagingFactory::CreateBitmap().
Metro doesn't appear to use ClearType, so grayscale should be just fine. Proper ClearType text would actually require per-component alpha and as such it generally doesn't work to render it into a bitmap with an alpha channel.
Are you using DX9 so that you can run on XP, or are you using it for another reason? If you are able to require Win7 or Vista SP2 + Platform Update as your minimum, then I highly recommend looking into using D2D+DW to render text into a bitmap and then use DX to draw the bitmap.
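As a hedged sketch of that route: all sizes, the font and the string below are placeholders, error handling and COM releases are omitted, and CoInitializeEx() is assumed to have been called already.

```cpp
// Render a string with DirectWrite via a Direct2D WIC-bitmap render target,
// then copy the resulting bitmap into a D3D9 texture yourself.
#include <windows.h>
#include <d2d1.h>
#include <dwrite.h>
#include <wincodec.h>
#pragma comment(lib, "d2d1.lib")
#pragma comment(lib, "dwrite.lib")
#pragma comment(lib, "windowscodecs.lib")

void renderTextToWicBitmap()
{
    IWICImagingFactory* wic = nullptr;
    CoCreateInstance(CLSID_WICImagingFactory, nullptr, CLSCTX_INPROC_SERVER,
                     IID_PPV_ARGS(&wic));

    IWICBitmap* bitmap = nullptr;                        // CPU-side bitmap target
    wic->CreateBitmap(512, 128, GUID_WICPixelFormat32bppPBGRA,
                      WICBitmapCacheOnDemand, &bitmap);

    ID2D1Factory* d2d = nullptr;
    D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &d2d);

    ID2D1RenderTarget* rt = nullptr;
    d2d->CreateWicBitmapRenderTarget(bitmap, D2D1::RenderTargetProperties(), &rt);

    IDWriteFactory* dwrite = nullptr;
    DWriteCreateFactory(DWRITE_FACTORY_TYPE_SHARED, __uuidof(IDWriteFactory),
                        reinterpret_cast<IUnknown**>(&dwrite));

    IDWriteTextFormat* format = nullptr;
    dwrite->CreateTextFormat(L"Segoe UI", nullptr, DWRITE_FONT_WEIGHT_NORMAL,
                             DWRITE_FONT_STYLE_NORMAL, DWRITE_FONT_STRETCH_NORMAL,
                             32.0f, L"en-us", &format);

    ID2D1SolidColorBrush* brush = nullptr;
    rt->CreateSolidColorBrush(D2D1::ColorF(D2D1::ColorF::White), &brush);

    rt->BeginDraw();
    rt->Clear(D2D1::ColorF(0, 0.0f));                    // transparent background
    rt->DrawText(L"Hello", 5, format,
                 D2D1::RectF(0, 0, 512, 128), brush);
    rt->EndDraw();

    // 'bitmap' now holds the rasterized text (premultiplied BGRA); lock it with
    // IWICBitmap::Lock and copy the pixels into a D3D9 texture.
}
```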
You could create a bitmap font renderer, which can range from "very simple" to "very complicated".
For a simple approach there is this example which, although written in Managed DirectX, can easily be reproduced in C++/DirectX 9: Bitmap Fonts. You can also easily create bitmap font sheets with the free app Bitmap Font Generator (there are also links there on how to write bitmap font renderers).
Then there are more complex ways to do fonts, such as using signed distance fields, as described in
Improved Alpha-Tested Magnification for Vector Textures.
ID3DXFont is not capable of drawing really cool and smooth fonts. To achieve this, I render my own texture atlas for the font and then draw it with textured quads.
A second way is to draw the font from an atlas in a similar manner, but using the distance field method.
A distance field with a simple pixel shader will give you very smooth and scalable fonts.
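For reference, a sketch of the per-pixel math such a pixel shader performs, written here as plain C++ rather than HLSL/GLSL; the 0.5 threshold marks the glyph outline in the distance-field texture, and the smoothing width is a placeholder that would normally come from screen-space derivatives (fwidth).

```cpp
// Per-pixel alpha computation of a distance-field text shader, in plain C++.
#include <algorithm>

// smoothstep as defined by HLSL/GLSL.
static float smoothstep(float edge0, float edge1, float x) {
    float t = std::clamp((x - edge0) / (edge1 - edge0), 0.0f, 1.0f);
    return t * t * (3.0f - 2.0f * t);
}

// 'dist' is the value sampled from the distance-field texture (0..1), where
// 0.5 lies on the glyph outline. 'smoothing' controls how soft the edge is.
float distanceFieldAlpha(float dist, float smoothing = 0.05f) {
    return smoothstep(0.5f - smoothing, 0.5f + smoothing, dist);
}
```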
At this point I achieved this: my distance field font
I'm planning on writing a game using JavaScript / canvas and I just had one question: what kind of performance considerations should I think about regarding loading images vs. just drawing with canvas methods? Because my game will use very simple geometry for the art (circles, squares, lines), either method will be easy to use. I also plan to implement a simple particle engine in the game, so I want to be able to draw lots of small objects without much of a performance hit.
Thoughts?
If you're drawing simple shapes with solid fills then drawing them procedurally is the best method for you.
If you're drawing more detailed entities with strokes, gradient fills and other performance-sensitive make-up, you'd be better off using image sprites. Generating graphics procedurally is not always efficient.
It is possible to get away with a mix of both. Draw graphical entities procedurally on the canvas once as your application starts up. After that you can reuse the same sprites by painting copies of them instead of generating the same drop-shadow, gradient and strokes repeatedly.
If you do choose to draw sprites you should read some of the tips and optimization techniques on this thread.
My personal suggestion is to just draw shapes. I've learned that if you're going to use images instead, then the more you use the slower things get, and the more likely you'll end up needing to do off-screen rendering.
This article discusses the subject and has several tests to benchmark the differences.
Conclusions
In brief: Canvas likes a small canvas size, and the DOM likes working with few elements (although the DOM in Firefox is so slow that this isn't always true).
And if you are planning to use particles, you might want to take a look at Doodle-js.
Loading an image from the cache is faster than generating it or loading it from the original resource, but then you have to preload the images so they get into the cache.
It really depends on the type of graphics you'll use, so I suggest you implement the easiest solution and solve the performance problems as they appear.
Generally I would expect copying a bitmap (drawing an image) to become faster than recreating it from primitives as the complexity of the image increases.
That is, drawing a couple of squares per scene should take about the same time with either method, but a complex image will be faster to copy from a bitmap.
As with most gaming considerations, you may want to look at what you need to do, and use a mixture of both.
For example, if you are using a background image, then loading the bitmap makes sense, especially if you will crop it to fit the canvas, but if you are making something dynamic then you will need to use the drawing API.
If you target IE9 and FF4, for example, then on Windows you should get good performance from drawing, as they take advantage of the graphics card; but for more general browsers you will perhaps want to look at using sprites, which will either be images you draw as part of initialization and then move, or bitmapped images you load.
It would help to know what type of game you are looking at, how dynamic the graphics need to be, how large the bitmapped images would be, and what kind of framerate you are hoping for.
The landscape is changing with each browser release. I suggest following the HTML5 Games initiative that Facebook has started, and the jsGameBench test suite. They cover a wide range of approaches from Canvas to DOM to CSS transforms, and their performance pros and cons.
http://developers.facebook.com/blog/post/454
http://developers.facebook.com/blog/archive
https://github.com/facebook/jsgamebench
If you are just drawing simple geometric objects, you can also use divs. They can be circles, squares and lines with a few lines of CSS, you can position them wherever you want, and almost all browsers support the styles (you may have some problems with mobile devices using Opera Mini or old Android Browser versions and, of course, with IE7 and below), but there would be almost no performance hit.