How to render formatted text in Direct3D9?

I'm writing an application which needs to draw a lot of text - several lines, maybe tens of lines - in Direct3D9. The text can be heavily formatted (different typefaces, styles, sizes) and can contain Unicode symbols from different charsets. Worst of all, it can change on the fly, so it needs to be dynamic (a render-once, display-forever approach won't do).
There are two problems with that. First, I'll probably need a lot of calls to D3DXCreateFont, which are supposedly costly (I'm not sure myself). Another approach is to create all the fonts before drawing the multi-part line, and then just switch between them - is this better? Well, I could also create font objects on the fly as I draw the line, add them to some kind of font-object cache, and then look in the cache before trying to create a new one. What do you think, which is the best approach?
The second problem is that ID3DXFont doesn't seem to understand underline/strike-through font styles. Although D3DXFONT_DESC is based on LOGFONT, it omits those fields (it does support italic, though). Let's say I really, really need underline/strike-through - what do I do? Is there a way to force ID3DXFont to do underline? Should I just draw those lines myself (and how do I do that fast)? Or maybe I should switch to drawing with GDI on an HDC and then copying those pixels into a texture - will that provide reasonable performance?

Well, I could also create font objects on the fly as I draw the line, add them to some kind of font-object cache, and then look in the cache before trying to create a new one. What do you think, which is the best approach?
The caching approach. Don't even try to recreate the fonts on the fly, with no caching, every time you need them. I don't know D3DXCreateFont's real cost, but the function is by design not meant to be called frequently.
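To make the caching idea concrete, here is a minimal sketch (in TypeScript only for brevity; the real code would be C++). `createFont` and `FontHandle` are hypothetical stand-ins for D3DXCreateFont and the ID3DXFont it returns; the point is just the look-up-before-create pattern keyed on the font's properties:

```typescript
// Hypothetical stand-ins: FontHandle plays the role of ID3DXFont,
// createFont the role of the (expensive) D3DXCreateFont call.
interface FontDesc { face: string; size: number; bold: boolean; italic: boolean; }
type FontHandle = { desc: FontDesc };

function createFont(desc: FontDesc): FontHandle {
  // ...the expensive creation call would go here...
  return { desc };
}

const fontCache = new Map<string, FontHandle>();

// Look in the cache first; only create (and remember) a font on a miss.
function getFont(desc: FontDesc): FontHandle {
  const key = `${desc.face}|${desc.size}|${desc.bold}|${desc.italic}`;
  let font = fontCache.get(key);
  if (!font) {
    font = createFont(desc);
    fontCache.set(key, font);
  }
  return font;
}
```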
Regarding underline/strikeout - you could render the lines yourself on top using IDirect3DDevice9::DrawPrimitiveUP with D3DPT_LINELIST and a pass-through shader that leaves the line vertices unchanged and untransformed.
Or maybe I should switch to drawing with GDI on an HDC and then copying those pixels into a texture - will that provide reasonable performance?
I wouldn't, because it requires transferring the texture contents from system RAM to VRAM.

Related

AnimatedSprite vs AnimatedImage in QML

In QML I have multiple ways of including animations. Among others, there are
AnimatedImage
AnimatedSprite
which both seem to serve a similar purpose. With the right tools, it is quite easy to transform a sprite sheet into an animated GIF or MNG file that could be handled by an AnimatedImage. The other way around is not that much harder.
In the documentation of Sprite they say:
The sprite engine internally copies and cuts up images to fit in an easier to read internal format, which leads to some graphics memory limitations. Because it requires all the sprites for a single engine to be in the same texture, attempting to load many different animations can run into texture memory limits on embedded devices. In these situations, a warning will be output to the console containing the maximum texture size.
On the other hand, the AnimatedImage usually caches the individual frames, especially when the animation should loop (which might also put the maximum texture size at risk?).
I know that the Sprite has some fancy state machine and stuff, but the AnimatedSprite seems to be stripped of this.
As producing content for either of them is the same amount of work, I want to know whether one of them is superior in any use case, or whether their use cases and performance are entirely the same and the choice between them is just a question of flavour.
Actually I did not find a single reference that mentioned both in the same context...

Grouping rectangles in iTextSharp

I have multiple rectangles and they all share the same spot color. Is there a way to merge / group them into one vector object so the generated PDF has a smaller size?
If you are creating the document from scratch, then the answer is trivial: yes!
It's sufficient to draw all the paths of the rectangles that share the same spot color and then use the operator that fills, strokes, or fills & strokes those paths.
If you are talking about optimizing an existing PDF document, you're in for some heavy programming. You would need to parse every content stream looking for rectangle operators (assuming that the rectangles aren't drawn using move-to and line-to operators), check where these shapes are filled and/or stroked, and then rearrange all these operators. This would require a lot of thought. I would know where to begin, but I can't predict where it would end. Maybe it would turn out that it makes more sense to define a single rectangle as a Form XObject and reuse that single external object, maybe not. It's hard to predict.
Moreover: you are talking about operators in a stream. These streams are compressed anyway, so you may be doing a lot of work to gain only a very small decrease in size.
I would say: what you are asking for may be possible, but it is unclear why you would do this, because it would result in only a limited decrease in file size.
If size is an issue, there may be other places where you are "wasting bytes" that would yield a more worthwhile reduction. I am very curious to hear why you think the rectangles using spot colors are the culprit. You are reusing the spot color instance, aren't you? If you are creating a new spot color instance for every rectangle you draw, you have found the real culprit and you can avoid having to group the rectangles.

Three.js How to increase canvas-text texture quality

What parameters, modes, tricks, etc. can be applied to get sharp text?
I'm going to draw a lot of it, so I can't use 3D text.
I'm using a canvas to write the text and some symbols. I'm creating something like information labels.
Thanks
This is no simple matter, since you'll run into memory issues with 100k "font textures". With 100k text elements you'll have several difficulties to manage. I had a similar problem once and tossed together a few techniques in order to make it work. Simply put, you need some sort of LOD ("Level of Detail") to make that work. The setup might look like the following:
A THREE.ParticleSystem built up with BufferGeometry where every position is one text-position
One "highres" TextureAtlas with 256 images on it which you allocate dynamically with those images that are around you (4096px x 4096px with 256x256px images)
At least one "lowres" TextureAtlas where you have 16x16px images. You prepare that one beforehand. Same size like previous, but there you have all preview images of your text and every image is 16x16px in size.
A kdtree data structure to use a nearestneighbour algorithm with to figure out which positions are near the camera (alike http://threejs.org/examples/#webgl_nearestneighbour)
The sub-imaging module to continually replace highres textures with directly on the GPU: https://github.com/mrdoob/three.js/pull/4661
An index for every position to tell it which position on the TextureAtlas it should use for display
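As mentioned in the list above, here is a rough sketch of the preview-atlas part: many small text previews are rendered into one canvas, which is then wrapped in a single THREE.Texture. The cell size, labels and layout are illustrative assumptions, not values taken from the original setup:

```typescript
import * as THREE from "three";

// Render one small preview per label into a single canvas-backed texture.
// cell = 16 and perRow = 256 give a 4096x4096 atlas like the one described above.
function buildTextAtlas(labels: string[], cell = 16, perRow = 256): THREE.Texture {
  const canvas = document.createElement("canvas");
  canvas.width = canvas.height = cell * perRow;
  const ctx = canvas.getContext("2d")!;
  ctx.font = `${cell - 4}px sans-serif`;
  ctx.textBaseline = "top";
  ctx.fillStyle = "#fff";
  labels.forEach((label, i) => {
    const x = (i % perRow) * cell;
    const y = Math.floor(i / perRow) * cell;
    ctx.fillText(label, x, y, cell); // clamp each preview to its cell width
  });
  const texture = new THREE.Texture(canvas);
  texture.needsUpdate = true; // upload the canvas to the GPU on next use
  return texture;
}

// Each position then only stores an index i that maps to its cell in the atlas.
```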
You see where I'm going. Here are some docs on my experiences:
The Stackoverflow post: Display many thousand images in three.js
The blog where I began to explain what I was doing: http://blogs.fhnw.ch/threejs/
This way it will take quite some time until you have satisfying results. The only way to make this simpler is to get rid of the 16x16px preview images, but I wouldn't recommend that... Or, of course, something depending on your setup. Maybe you have levels? Towns? Or any other structure where it would make sense to only display a portion of these texts? That might be worth a thought before tackling the big thing.
If you plan to really work on this and make it happen the way I described, I can help you with some already existing code and further explanations. Just tell me where you're heading :)

Adjusting hard values in processing for any screen size

So I'm making a game with my group in Processing for a project, and we all have different computers. The problem is we built the game on one computer, but at this point we have realized that the (1200, 800) size we used does not work on our professor's computer. Unfortunately we have hard-coded thousands of values to fit this resolution. Is there any way to make it fit on all computers?
From my own research I found you can use screen.width and screen.height to get the size of the screen, and I set the game window to about half the screen size. However, all the images I had loaded for the background and such are 1200x800, so I am unsure how to go about modifying ALL of my pictures (backgrounds) and hard values.
Is there any way to fix this without having to manually change the thousands of hard values? (Yes, I am fully aware how bad it is that I hard-coded the numbers.)
Any help would be greatly appreciated. As mentioned in the title, the language is Processing.
As I'm sure you have learned your lesson about hard-coding numbers, I won't say anything about it :)
You may have heard of embedding a Processing PApplet inside a traditional Java JFrame or similar. If you are okay with scaling the image that your PApplet draws (i.e. it draws at the resolution you've coded, and the resulting image is scaled up or down to match the screen), then you could embed your PApplet in a frame, capture the PApplet's output to an image, scale the image, then draw it to the screen. A quick googling yielded this SO question. It may make your game look funny if the resolutions are too different, but this is a quick and dirty way. It's possible that you'll want to have this done in a separate thread, as suggested here.
Having said that, I do not recommend it. One of the best things (IMO) about Processing is not having to mess directly with AWT/Swing. It's also a messy kludge, and the "right thing to do" is just to go back and change the hard-coded numbers to variables. For your images, you can use PImage's resize(). You say your code is several hundred lines long, but in reality that isn't a huge amount -- the best thing to do is just to suck it up and be unhappy for a few hours. Good luck!

HTML5 Canvas Performance: Loading Images vs Drawing

I'm planning on writing a game using JavaScript / canvas and I just had one question: what kind of performance considerations should I think about with regard to loading images vs. just drawing using the canvas methods? Because my game will use very simple geometry for the art (circles, squares, lines), either method will be easy to use. I also plan to implement a simple particle engine in the game, so I want to be able to draw lots of small objects without much of a performance hit.
Thoughts?
If you're drawing simple shapes with solid fills then drawing them procedurally is the best method for you.
If you're drawing more detailed entities with strokes, gradient fills and other performance-sensitive make-up, you'd be better off using image sprites. Generating graphics procedurally is not always efficient.
It is possible to get away with a mix of both. Draw graphical entities procedurally on the canvas once as your application starts up. After that you can reuse the same sprites by painting copies of them instead of generating the same drop-shadow, gradient and strokes repeatedly.
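For example (a minimal sketch of that idea, not code from the answer itself): render an expensive shape once to an off-screen canvas at startup, then stamp copies of it with drawImage instead of re-creating the gradient and shadow every frame. The names and sizes are illustrative:

```typescript
// Pre-render an "expensive" ball sprite (gradient + shadow) once at startup.
function makeBallSprite(size: number): HTMLCanvasElement {
  const sprite = document.createElement("canvas"); // off-screen canvas
  sprite.width = sprite.height = size;
  const c = sprite.getContext("2d")!;
  const g = c.createRadialGradient(size / 3, size / 3, size / 10, size / 2, size / 2, size / 2);
  g.addColorStop(0, "#ffffff");
  g.addColorStop(1, "#3366cc");
  c.fillStyle = g;
  c.shadowColor = "rgba(0, 0, 0, 0.4)";
  c.shadowBlur = size / 8;
  c.beginPath();
  c.arc(size / 2, size / 2, size / 2 - size / 8, 0, Math.PI * 2);
  c.fill();
  return sprite;
}

const ball = makeBallSprite(32); // expensive work happens exactly once
const ctx = (document.querySelector("canvas") as HTMLCanvasElement).getContext("2d")!;

// Afterwards, stamping copies is just a cheap bitmap blit.
for (let i = 0; i < 1000; i++) {
  ctx.drawImage(ball, Math.random() * 800, Math.random() * 600);
}
```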
If you do choose to draw sprites you should read some of the tips and optimization techniques on this thread.
My personal suggestion is to just draw shapes. I've learned that if you're going to use images instead, then the more you use the slower things get, and the more likely you'll end up needing to do off-screen rendering.
This article discusses the subject and has several tests to benchmark the differences.
Conclusions
In brief: canvas likes a small canvas size, and the DOM likes working with few elements (although the DOM in Firefox is so slow that this isn't always true).
And if you are planning to use particles, I thought you might want to take a look at Doodle-js.
Loading an image from the cache is faster than generating it or loading it from the original resource. But then you have to preload the images so they get into the cache.
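A minimal preloading sketch along those lines; the file names are made up, and the game is assumed to start only once every image has loaded:

```typescript
const sources = ["player.png", "enemy.png", "background.png"]; // hypothetical assets

// Resolve once every image has been fetched and decoded by the browser.
function preload(urls: string[]): Promise<HTMLImageElement[]> {
  return Promise.all(urls.map(url => new Promise<HTMLImageElement>((resolve, reject) => {
    const img = new Image();
    img.onload = () => resolve(img);
    img.onerror = reject;
    img.src = url;
  })));
}

preload(sources).then(images => {
  // Safe to start the game loop here: drawImage now hits cached, decoded images.
  const ctx = (document.querySelector("canvas") as HTMLCanvasElement).getContext("2d")!;
  images.forEach((img, i) => ctx.drawImage(img, i * 64, 0));
});
```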
It really depends on the type of graphics you'll use, so I suggest you implement the easiest solution and solve the performance problems as they appear.
Generally I would expect copying a bitmap (drawing an image) to become faster, compared to recreating it from primitives, as the complexity of the image gets higher.
That is, drawing a couple of squares per scene should take about the same time with either method, but a complex image will be faster to copy from a bitmap.
As with most gaming considerations, you may want to look at what you need to do, and use a mixture of both.
For example, if you are using a background image, then loading the bitmap makes sense, especially if you will crop it to fit the canvas; but if you are making something dynamic, then you will need to use the drawing API.
If you target IE9 and FF4, for example, then on Windows you should get good performance from drawing, as they take advantage of the graphics card; but for more general browsers you will perhaps want to look at using sprites, which will either be images you draw as part of initialization and then move around, or bitmapped images you load.
It would help to know what type of game you are looking at, how dynamic the graphics need to be, how large the bitmapped images would be, and what kind of frame rate you are hoping for.
The landscape is changing with each browser release. I suggest following the HTML5 Games initiative that Facebook has started, and the jsGameBench test suite. They cover a wide range of approaches from Canvas to DOM to CSS transforms, and their performance pros and cons.
http://developers.facebook.com/blog/post/454
http://developers.facebook.com/blog/archive
https://github.com/facebook/jsgamebench
If you are just drawing simple geometric objects you can also use divs. They can be circles, squares and lines in a few lines of CSS, you can position them wherever you want, and almost all browsers support the styles (you may have some problems with mobile devices using Opera Mini or old Android Browser versions and, of course, with IE7 and below), but there would be almost no performance hit.
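A quick sketch of that div-based approach; the helper name and the particular styles are just for illustration:

```typescript
// Create an absolutely positioned div and apply a handful of inline styles.
function makeShape(style: Partial<CSSStyleDeclaration>): HTMLDivElement {
  const el = document.createElement("div");
  el.style.position = "absolute";
  Object.assign(el.style, style);
  document.body.appendChild(el);
  return el;
}

// Square: just a width, a height and a background colour.
makeShape({ left: "10px", top: "10px", width: "40px", height: "40px", background: "teal" });

// Circle: the same box with a 50% border-radius.
makeShape({ left: "70px", top: "10px", width: "40px", height: "40px", background: "tomato", borderRadius: "50%" });

// Line: a thin box rotated with a CSS transform.
makeShape({ left: "10px", top: "70px", width: "120px", height: "2px", background: "black", transform: "rotate(20deg)" });
```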
