Font layout algorithm, Win32 - winapi

I'm trying to come up with a platform-independent way to render Unicode text to some platform-specific surface, but assuming that most platforms support something at least roughly similar, maybe we can talk in terms of the Win32 API. I'm most interested in rendering LARGE buffers and supporting rich text, so while I definitely don't want to ever look inside a Unicode buffer, I'd like to be told what to draw and hinted where to draw it, so that if the buffer is modified, I can properly ask for updates on partial regions of the buffer.
So, the actual questions. GetTextExtentExPointW clearly lets me get the width of each character, but how do I get the width of a non-breaking extent of text? If I have some long word, it should probably be put on a new line rather than splitting the word. How can I tell where to break the text? Will I need to actually look inside the Unicode buffer? That seems very dangerous. Also, how do I figure out how far apart each baseline should be while rendering?
Also, this is already looking like it's going to be extremely complicated. Are there alternative strategies for doing what I'm trying to do? I really would not like to re-render a HUGE buffer each time I change a tiny chunk of it. I want something between looking at individual glyphs and just being given a box to spit text into.
Finally, I'm not interested in using anything like Knuth's line-breaking algorithm. No hyphenation. Ideally I'd like to render justified text, but that's on me if the word positioning is given. A ragged right-hand margin is fine by me.

What you're trying to do is called shaping in Unicode jargon. Don't bother writing your own shaping engine; it's a full-time job that requires continuous updates to keep up with changes in the Unicode and OpenType standards. If you want to spend any time on the rest of your app, you'll need to delegate shaping to a third-party engine (harfbuzz-ng, Uniscribe, ICU, etc.).
As others wrote:
– Unicode font rendering is hellishly complex, much more than you expect
– winapi is not cross-platform at all
The three common strategies for rendering unicode text are:
1. write one backend per system (plugging into the system's native text stack), or
2. select one set of cross-platform libs (for example fribidi + harfbuzz-ng + freetype + fontconfig, or a framework like Qt) and recompile them for each target system, or
3. take compliance shortcuts
The only strategy I strongly advise against is the last one. You cannot control unicode.org normalization (e.g. the addition of the capital ß for German), you do not understand worldwide script usage (African languages and Vietnamese are both written with Latin variants, but they exercise unexpected Unicode properties), and you will underestimate font creators' ingenuity (oh, Indic users requested this OpenType property, but it turns out to be really handy for this English use case…).
The first two strategies have their own drawbacks. It's simpler to maintain a single text backend, but deploying a complete text stack on a foreign system is far from hassle-free. Almost every project that goes cross-platform has to get rid of MSVC first, since it only targets Windows, its language dialect won't work on other platforms, and cross-platform libs will typically only compile easily with GCC or LLVM.
I think harfbuzz-ng has reached parity with Uniscribe even on Windows, so that's the lib I'd target if I wanted cross-platform today (Chrome, Firefox and LibreOffice use it, at least on some platforms). However, LibreOffice at least uses the multi-backend strategy; I have no idea whether that reflects the current state of the libraries or some past historical assessment. There are not that many cross-platform apps with heavy text usage to look at, and most of them carry the burden of legacy choices.
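To give a feel for what delegating to such an engine looks like, here is a rough, untested sketch of shaping one run with harfbuzz-ng on top of FreeType (the font path, pixel size and sample string are placeholders; error handling is omitted):

    #include <hb.h>
    #include <hb-ft.h>
    #include <ft2build.h>
    #include FT_FREETYPE_H
    #include <cstdio>

    int main() {
        FT_Library ft;
        FT_Face face;
        FT_Init_FreeType(&ft);
        FT_New_Face(ft, "DejaVuSans.ttf", 0, &face);   // placeholder font path
        FT_Set_Pixel_Sizes(face, 0, 16);               // 16 px em size

        hb_font_t *font = hb_ft_font_create(face, nullptr);
        hb_buffer_t *buf = hb_buffer_create();
        hb_buffer_add_utf8(buf, "Hello, world", -1, 0, -1);
        hb_buffer_guess_segment_properties(buf);       // script, language, direction
        hb_shape(font, buf, nullptr, 0);               // the actual shaping step

        unsigned int n = 0;
        hb_glyph_info_t *info = hb_buffer_get_glyph_infos(buf, &n);
        hb_glyph_position_t *pos = hb_buffer_get_glyph_positions(buf, &n);
        long pen_x = 0;
        for (unsigned int i = 0; i < n; ++i) {
            // 'cluster' maps each glyph back to a position in the source text,
            // which is what you use later for carets, selections and partial updates.
            printf("glyph %u cluster %u x=%ld\n",
                   info[i].codepoint, info[i].cluster, pen_x + pos[i].x_offset);
            pen_x += pos[i].x_advance;                 // units follow the scale hb-ft sets up
                                                       // (typically 26.6 fixed point)
        }
        hb_buffer_destroy(buf);
        hb_font_destroy(font);
        FT_Done_Face(face);
        FT_Done_FreeType(ft);
        return 0;
    }

Note that glyph extents and line-break opportunities (UAX #14) are separate problems; HarfBuzz only does the shaping part, which is exactly the piece you don't want to write yourself.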

Unicode rendering is surprisingly complicated. Line breaking is just the beginning; there are many other subtleties that I don't think you've appreciated (vertical text, right-to-left text, glyph combining, and many more). Microsoft has several teams dedicated to doing nothing but implementing text rendering, for example.
Sounds like you're interested in DirectWrite. Note that this is NOT platform-independent, obviously. It's also possible to do a less accurate job if you don't care about being language-independent; many of the more unusual features only occur in rarer languages (Chinese being a notable exception).
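For what it's worth, DirectWrite will also answer the line-breaking and baseline questions for you: you give IDWriteTextLayout a wrap width and it reports per-line metrics. A rough, untested sketch (the font name, size and 400 px wrap width are arbitrary; error handling is omitted):

    #include <windows.h>
    #include <dwrite.h>
    #include <cstdio>
    #include <cwchar>
    #pragma comment(lib, "dwrite.lib")

    int main() {
        IDWriteFactory *factory = nullptr;
        DWriteCreateFactory(DWRITE_FACTORY_TYPE_SHARED, __uuidof(IDWriteFactory),
                            reinterpret_cast<IUnknown **>(&factory));

        IDWriteTextFormat *format = nullptr;
        factory->CreateTextFormat(L"Segoe UI", nullptr, DWRITE_FONT_WEIGHT_NORMAL,
                                  DWRITE_FONT_STYLE_NORMAL, DWRITE_FONT_STRETCH_NORMAL,
                                  16.0f, L"en-us", &format);

        const wchar_t *text = L"Some long paragraph that DirectWrite will wrap for us.";
        IDWriteTextLayout *layout = nullptr;
        factory->CreateTextLayout(text, (UINT32)wcslen(text), format,
                                  400.0f, 10000.0f, &layout);    // wrap at 400 px

        // Ask the layout where it put each line: character count, height, baseline.
        UINT32 lineCount = 0;
        layout->GetLineMetrics(nullptr, 0, &lineCount);          // first call yields the count
        DWRITE_LINE_METRICS *lines = new DWRITE_LINE_METRICS[lineCount];
        layout->GetLineMetrics(lines, lineCount, &lineCount);
        for (UINT32 i = 0; i < lineCount; ++i)
            printf("line %u: %u chars, height %.1f, baseline %.1f\n",
                   i, lines[i].length, lines[i].height, lines[i].baseline);

        delete[] lines;
        layout->Release();
        format->Release();
        factory->Release();
        return 0;
    }

Actually drawing the laid-out text is then a single call into Direct2D (DrawTextLayout) or a custom IDWriteTextRenderer.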

If you want perfect multi-platform results, there will be problems. If you draw one sentence with GDI, one with GDI+, one with Direct2D, one on Linux and one on Mac, all with the same font size on the same buffer, you'll see differences: for example, some round positions to int while others use floats.
There is not one problem but at least two. Drawing text and computing text positions, line breaks, etc. are very different things. Some libraries do both; some only do the computing part or the rendering part. A very simplified explanation: drawing only renders a single character at the position you ask for, with transformations such as zoom, rotation and anti-aliasing. Computing does everything else: choosing each character's position in a word, line breaks in sentences, paragraphs, and so on.
If you want to be platform-independent you could use FreeType to read font files and get all the information on each character. That library gives exactly the same result on each platform, and predictability in fonts is good. The main problem with fonts is the amount of bad, missing, or even wrong information in character descriptions. Nobody does text perfectly because it's very hard (tipping my hat to Word, Acrobat and every team that deals directly with fonts).
Once your font computation is good, there is a lot of work to do everything you see in a good word processor (space between characters, spaces between words, line breaks, alignment, rotation, weight, anti-aliasing...); only then do you get to the rendering, which should be the easier part. With the same computation you can drive a GDI, Direct2D, PDF or printing rendering path.
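As a small illustration of the "computing" half with FreeType, this untested sketch measures the advance width of a string and the suggested line spacing (the font path, pixel size and ASCII-only loop are simplifying assumptions; real text needs proper shaping first):

    #include <ft2build.h>
    #include FT_FREETYPE_H
    #include <cstdio>

    int main() {
        FT_Library ft;
        FT_Face face;
        if (FT_Init_FreeType(&ft) || FT_New_Face(ft, "DejaVuSans.ttf", 0, &face))
            return 1;
        FT_Set_Pixel_Sizes(face, 0, 16);                    // 16 px em size

        const char *text = "Hello";
        long pen_x = 0;
        for (const char *p = text; *p; ++p) {
            if (FT_Load_Char(face, (unsigned char)*p, FT_LOAD_DEFAULT))
                continue;
            pen_x += face->glyph->advance.x >> 6;           // advance is 26.6 fixed point
        }
        printf("width of \"%s\": %ld px\n", text, pen_x);

        // "How far apart should baselines be?" -- the scaled face metrics give a
        // suggested line height, also in 26.6 fixed point.
        printf("suggested line height: %ld px\n", (long)(face->size->metrics.height >> 6));

        FT_Done_Face(face);
        FT_Done_FreeType(ft);
        return 0;
    }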

Related

alternative to gdi when writing text editor in winapi

Is there some alternative to GDI when one wants to write a nice, fast-working text editor under winapi? I want something that would also work with older Windows versions, for example XP. I heard that GDI is slow; maybe there is something more appropriate than GDI for writing a text editor? Does anybody happen to know what various nice text editors use for that purpose?
GDI is not too fast, but for an editor it should probably be sufficient. It also depends on the intelligence of the paint algorithm. When a line is edited, for example, you should only re-render the affected line(s). Even when inserting new lines, you may just scroll most of the ones below with ScrollWindow() or ScrollWindowEx().
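Something along these lines (an untested sketch; lineIndex and lineHeight are assumed to be tracked by your editor, and a real editor would also clamp to the visible range):

    #include <windows.h>

    // When a new line is inserted at lineIndex, shift everything below it down by
    // one line height and repaint only the newly exposed line.
    void OnLineInserted(HWND hwnd, int lineIndex, int lineHeight)
    {
        RECT rcClient;
        GetClientRect(hwnd, &rcClient);

        RECT rcScroll = rcClient;                       // region that has to move
        rcScroll.top = lineIndex * lineHeight;
        ScrollWindowEx(hwnd, 0, lineHeight, &rcScroll, &rcScroll,
                       NULL, NULL, SW_INVALIDATE | SW_ERASE);

        RECT rcNewLine = rcClient;                      // the inserted line itself
        rcNewLine.top = lineIndex * lineHeight;
        rcNewLine.bottom = rcNewLine.top + lineHeight;
        InvalidateRect(hwnd, &rcNewLine, TRUE);         // triggers a normal WM_PAINT
    }
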
As an alternative you may look at Uniscribe (USP10.DLL), although I am not sure whether it relies on GDI or not. It is more or less a replacement for TextOut() and similar GDI functions that properly supports different scripting systems, including aspects like right-to-left reading and mixtures of left-to-right and right-to-left (e.g. Arabic with embedded European personal names, etc.).
Then there is also DirectWrite, which is supposed to be used together with Direct2D. That should be faster, as Direct2D offloads a lot of work to the graphics card, while GDI mainly eats CPU and system memory. Note, however, that these APIs are only available since Windows 7.
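To illustrate the Uniscribe route mentioned above, here is a rough, untested sketch using its higher-level "script string" helpers, which itemize, shape and draw in one go (the glyph-count estimate and flag choices are my assumptions; hdc, text and the rectangle would come from your paint handler):

    #include <windows.h>
    #include <usp10.h>
    #pragma comment(lib, "usp10.lib")

    void DrawComplexText(HDC hdc, const wchar_t *text, int len, const RECT &rc)
    {
        SCRIPT_STRING_ANALYSIS ssa = NULL;
        HRESULT hr = ScriptStringAnalyse(
            hdc, text, len,
            (3 * len) / 2 + 16,            // commonly suggested glyph-buffer estimate
            -1,                            // -1 = the string is Unicode
            SSA_GLYPHS | SSA_FALLBACK,     // shape now, allow font fallback
            rc.right - rc.left,            // requested width
            NULL, NULL, NULL, NULL, NULL,
            &ssa);
        if (SUCCEEDED(hr)) {
            ScriptStringOut(ssa, rc.left, rc.top, 0, &rc, 0, 0, FALSE);
            ScriptStringFree(&ssa);
        }
    }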

How to make math equations in Xcode?

I am a total beginner with Xcode and Objective-C, but I have some experience with OOP in C++. I bought this book. I read about how to make a simple app, and skimmed the rest of the book. What I want to do is make an iPhone app people can use to look up math equations such as the quadratic equation, the Pythagorean identity, etc. I plan to include a lot of stuff, and do a lot of things better than other apps I have seen. However, before I pay Apple $99 to be a full-fledged iOS developer, I want to know that it isn't too hard to make the Greek letters and math notation that we see in math books. So for example, what code is needed to make an iPhone app that displays . Of course I want to use features that I understand are included in Xcode for doing this sort of thing, rather than make a graphic with another program that my app would use when needed. Besides that specific example, where is the Apple documentation for making other math symbols and notation that my iPhone app will display? If this is the wrong place to ask, it would be great if you could tell me of a better place to post my question.
It's going to require a lot of code to get good layouts using the system frameworks. All the building blocks are there, but your program would need significant rendering customization to get the layouts you expect. In detail, the characters you need are there, but you will need to write a bunch of supporting code in order to resize, position, and lay out these characters correctly.
You may want to look for a suitably licensed library you can use which specializes in this purpose. Perhaps a LaTeX renderer would offer some good leads.
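For instance, this is the kind of markup a LaTeX-style renderer would take as input for the quadratic formula mentioned in the question; the renderer's job is turning it into positioned glyphs, fraction bars and radicals:

    % Quadratic formula as LaTeX math markup (input to a hypothetical renderer)
    x = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a}
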
Use Core Animation layers to construct the elements of a parsed equation. Use Quartz to draw the lines and symbols that make up the visual elements of the operations within the equation. Also consider Core Plot. Then, once the equation is parsed into a hierarchical data structure, you could eventually output to LaTeX. Also check out Graham Cox's GCMathParser.
Similar question: Drawing formulas with Quartz 2d

Why is GUI code so computationally expensive?

All you Stackoverflowers,
I was wondering why GUI code is responsible for sucking away many, many CPU cycles. In principle, the graphical rendering is far less complex than Doom (although most corporate GUIs will introduce lots of window dressing). The event-handling layer also seems to be a heavy cost; however, a well-written implementation should switch between contexts efficiently on modern processors with plenty of memory/cache.
If anybody has run a profiler on their big GUI application, or a common API itself, I'm interested in where the bottlenecks lie.
Possible explanations (that I imagine) may be:
High levels of abstraction between hardware and application interface
Lots of levels of indirection to the correct code to execute
Low priority (compared to other processes)
Misbehaving applications flooding API with calls
Excessive object orientation?
Outright poor design choices in the API (not just individual issues, but design philosophy)
Some GUI frameworks are much better than others, so I'd like to hear varied perspectives. For example, the Unix/X11 system is very different from Windows, and even from WinForms.
Edit: Now a community wiki - go for it. I have one more thing to add -- I'm an algorithms guy in school and would be interested to hear whether there are inefficient algorithms in GUI code and what they are. Then again, it's probably just the implementation overhead.
I've no idea generally, but I'd like to add another item to your list - font rendering and calculations. Finding vector glyphs in a font and converting them to bitmap representations with anti-aliasing is no small task. And often it needs to be done twice - first to calculate the width/height of the text for positioning, and then to actually draw the text at the right coordinates.
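In GDI terms the two passes look roughly like this (a minimal sketch; hdc would come from BeginPaint, and the text is assumed to be a single line):

    #include <windows.h>

    // Pass 1: measure the text; pass 2: draw it where the measurement says it fits.
    void DrawCentered(HDC hdc, const RECT &rc, const wchar_t *text, int len)
    {
        SIZE sz;
        GetTextExtentPoint32W(hdc, text, len, &sz);          // measure (glyph lookup, metrics)
        int x = rc.left + ((rc.right - rc.left) - sz.cx) / 2;
        int y = rc.top  + ((rc.bottom - rc.top) - sz.cy) / 2;
        TextOutW(hdc, x, y, text, len);                      // rasterize the same glyphs again
    }
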
Also, most drawing code today relies on clipping mechanisms to update just a part of the GUI. So, if just one part needs to be redrawn, the code actually redraws the whole window behind the scenes, and then takes just the needed part to actually update.
Added:
In the comments I found this:
I'm also very interested in this. It can't be that the gui is rendered using only the cpu because if you don't have proper drivers for your gfx-card, desktop graphics render incredibly slow. If you have gfx-drivers however desktop-gfx go kinda fast but never as fast as a directx/opengl app.
Here's the deal as I understand it: every graphic card out there today supports a generic interface for drawing. I'm not sure if it's called "VESA", "SVGA", or if those are just old names from the past. Anyway, this interface involves doing everything through interrupts. For every pixel there is an interrupt call. Or something like that. The proper VGA driver however is able to take advantage of DMA and other enhancements that make the whole process WAY less CPU-intensive.
Added 2: Ah, and for OpenGL/DirectX - that's another feature of today's graphics cards. They are optimized for 3D operations in exclusive mode. That's why the speed. The normal GUI just utilizes basic 2D drawing procedures. So it gets to send the contents of the whole screen every time it wants an update. 3D applications however send a bunch of textures and triangle definitions to the VRAM (video-RAM) and then just reuse them for drawing. They just say something like "take the triangle set #38 with the texture set #25 and draw them". All these things are cached in the VRAM so this is again way faster.
I'm not sure, but I would suspect that the modern 3D-accelerated GUIs (Vista Aero, compiz on Linux, etc.) also might take advantage of this. They could send common bitmaps to the VGA up front and then just reuse them directly from the VRAM. Any application-drawn surfaces however would still need to be sent directly every time for updates.
Added 3: More ideas. :) The modern GUIs for Windows, Linux, etc. are widget-oriented (that's control-oriented for Windows speakers). The problem with this is that each widget has its own drawing code and associated drawing surface (more or less). When the window needs to get redrawn, it calls the drawing code for all its child widgets, which in turn call the drawing code for their child widgets, and so on. Every widget redraws its whole surface, even though some of it is obscured by other widgets. With the above-mentioned clipping techniques some of this drawn information is immediately discarded to reduce flickering and other artifacts. But still it's lots of manual drawing code that includes bitmap blitting, stretching, skewing, drawing lines, text, flood-filling, etc. And all this gets translated to a series of putpixel calls that get filtered through clipping filters/masks and other stuff. Ah, yes, and alpha blending has also become popular today for nice effects, which means even more work. So... yes, you could say this is because of lots of abstraction and indirection. But... could you really do it any better? I don't think so. Only 3D techniques might help, because they take advantage of the GPU for alpha calculations and clipping.
Let's begin by saying that writing libraries is much harder than writing stand-alone code. The requirement that your abstraction be reusable in as many contexts as possible, including contexts which you haven't thought of yet, makes the task challenging even for experienced programmers.
Amongst libraries, writing a GUI toolkit library is a famously difficult problem. This is because the programs which use GUI libraries range over a very wide variety of domains with very different needs. Mr Why and Martin DeMollo discussed the requirements placed on GUI libraries a little while ago.
Writing GUI widgets themselves is difficult because computer users are very sensitive to minute details of the behavior of the interface. Non-native widgets never feel right, do they? In order to get non-native widgets right -- in order to get any widget right, in fact -- you need to spend an inordinate amount of time tweaking the details of the behavior.
So, GUIs are slow because of the inefficiencies introduced by the abstraction mechanisms used to create highly reusable components, added to the shortness of time available to optimize the code once so much time has been spent just getting the behavior right.
Uhm, that's quite a lot.
The simplest but probably most obvious answer is that the programmers behind these GUI apps are really bad programmers. You can go a long way by writing code which does the most bizarre things and it will be faster, but few people seem to care how to do this, or they deem it an expensive, unprofitable waste of effort.
To set things straight: off-loading computations to the GPU won't necessarily fix any problems. The GPU is just like the CPU except that it's less general-purpose and more of a data-parallel processor. It can do graphics computations exceptionally well. Whatever graphics API/OS and driver combination you have doesn't really matter that much... well, OK, with Vista as an example, they changed the desktop composition engine. This engine is far better at compositing only what has changed, and since the number-one bottleneck for GUI apps is redrawing, that is a neat optimization strategy: virtualize your computational needs and only update the smallest change each time.
Win32 sends WM_PAINT messages to windows when they need to be redrawn; this can be a result of windows occluding each other. However, it's up to the window itself to figure out what actually changed. More often than not nothing changed at all, or the change was trivial enough that it could just have been performed on top of whatever topmost surface you had.
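A minimal sketch of the cooperative side of that contract - repainting only the invalid rectangle instead of the whole client area (RedrawDirtyRegion is a hypothetical, app-specific helper):

    #include <windows.h>

    void RedrawDirtyRegion(HDC hdc, const RECT &dirty);   // hypothetical app-specific drawing

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        switch (msg) {
        case WM_PAINT: {
            PAINTSTRUCT ps;
            HDC hdc = BeginPaint(hwnd, &ps);
            // ps.rcPaint is the union of everything invalidated since the last
            // paint; drawing only inside it avoids repainting the whole window.
            RedrawDirtyRegion(hdc, ps.rcPaint);
            EndPaint(hwnd, &ps);
            return 0;
        }
        }
        return DefWindowProc(hwnd, msg, wp, lp);
    }
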
This kind of graphics handling doesn't necessarily exist today. I would say that people have refrained from writing really efficient, virtualizing rendering solutions because the benefit/cost ratio is rather poor.
Something Windows Presentation Foundation (WPF) does, which I think is far superior to most other GUI APIs, is that it splits layout updates and rendering updates into two separate passes. And while WPF is managed code, the rendering engine is not. What happens with rendering is that the managed WPF rendering engine builds a command queue (this is what DirectX and OpenGL do), which is then handed off to the native rendering engine. What's a bit more elegant here is that WPF will then try to retain any computation which didn't change the visual state. A trick, if you will, where you avoid costly rendering calls for things that don't have to be rendered (virtualizing).
In contrast to WM_PAINT, which tells a Win32 window to repaint itself, a WPF app checks what parts of the window require repainting and only repaints the smallest change.
Now, WPF is not supreme; it's a solid effort from Microsoft, but it's not the holy grail yet... the code which runs the pipeline could still be improved, and the memory footprint of any managed app is still more than I would want. But I hope this is the kind of answer you are looking for.
WPF is able to do some things asynchronously rather decently, which is a huge deal if you want to make a really responsive, low-latency/low-CPU UI. Asynchronous operation is more than just off-loading work onto a different thread.
To summarize: a slow and expensive GUI means too much repainting, and the kind of repainting which is very expensive, i.e. repainting the entire surface area.
It does to some degree depend on the language. You might have noticed that Java and REALbasic applications are a fair bit slower than their C-based (C++, C#, Objective-C) counterparts.
However, GUI applications are much more complex than command-line apps. A terminal window only needs to draw a simple window that doesn't support buttons.
There are also multiple loops for extra inputs and features.
I think that you can find some interesting thoughts on this topic in "Window System Design: If I had it to do over again in 2002" by James Gosling (the Java guy, also known for his work on pre-X11 windowing systems). Available online here[pdf].
The article focuses on the positive side (how to make it fast), not on the negative side (what's making it slow), but it is still a good read on the topic.

UI design and cultural sensitivity/awareness [closed]

When designing a user interface for an application that is going to be used internationally it is possible to accidentally design an aspect of the UI that is offensive to or inappropriate in another culture.
Have you ever encountered such an issue and if so, how did you resolve the design problem?
Some examples:
A GPS skyplot in a surveying application to be used in Northern Ireland. Satellites had to be in a different colour to indicate whether they were in ascent or descent in the sky. Lots of satellites in ascent are considered good, as it indicates that GPS coverage will be getting better in the next few hours. I chose green for ascent and orange for descent. I had not realised that these colours are associated with Irish Catholics and Irish Protestants. It was suggested that we change the colours. In the end blue and a deep pink were chosen.
For applications that are going to be translated into German, I've found that you should add about 50% extra space for the German text compared to the English text.
A friend was working on a battlefield planning application for a customer in the Middle East. It was mandated that all crosshairs should take the form of a diagonal cross, to avoid any religious significance.
(Edit - added this) In the UK a tick mark (something like √) means yes, whereas a cross (x) means no. In Windows 3.1 selected checkboxes used a cross, which confused me the first time I saw it. Since Windows 95 they've used (what I would call) a tick mark. As far as I can tell, both a tick and a cross are called a check mark in the US, and mean the same thing.
Edit
Please ensure that any reply you add to this question is as culturally sensitive as the user interfaces we're all trying to build! Thanks.
You should try to follow the i18n and l10n pointers provided by the look and feel guidelines for the UI library you're using, or platform you're delivering to. They often contain hints on how to avoid cultural issues, and may even contain icon libraries that have had extensive testing for such potential banana skins.
Windows User Experience Interaction Guidelines
Java Look and Feel Design Guidelines
Apple Human Interface Guidelines
GNOME Human Interface Guidelines
KDE User Interface Guidelines
I guess the most important thing is designing your application with i18n in mind from the ground up, so that your UI can be resized depending on the translated text; mnemonics are appropriate for different languages; labels are to the left for Latin-script languages, but on the right for Hebrew and Arabic; and so on.
Designing with i18n and l10n in mind means that shipping your product to a location with a different culture, language or currency will not require a re-write, or a different version, just different resources.
Generally speaking, I believe you'll run into more problems with graphics and icons than you will with text (apart from embarrassing translations), simply because people identify more strongly with symbols than with particular passages of text.
The idiom to use a big (green) checkmark symbol to mean OK/Yes/Correct is somewhat confusing in Sweden, where the checkmark is typically used to mean "wrong". I.e. when grading tests in school, a teacher will often use a capital R (from the Swedish word for "Right") for a correct answer, and a checkmark ("bock" in Swedish) for "wrong".
I find this issue interesting not only because I'm in the affected group (I'm Swedish), but also because it highlights that these kinds of issues can appear where you might not expect them. Sweden is a fairly generic Western culture, so you might assume that the usage of these kinds of symbols would be the same.
Another Yes/No example for Japan
I have to use an online database tool that has some user settings that can be toggled on and off. On is indicated by a green cross (×), and Off is indicated by a red circle (○).
In Japan this is confusing since Off (NG, stop, closed) in general would be indicated by a cross (× : batsu) and On (ok, open) by a circle (○ : maru).
Adding to that the green red color combination makes things very confusing.
There is a good reason why Windows resources (and not only those) contain more than just strings.
A lot of elements should be considered localizable:
- colors
- images (including icons, toolbars, etc.)
- sounds
- font and font sizes
- alignment
- control flags and attributes (think UI mirroring for Arabic & Hebrew)
- dialog sizes
- etc.
This way most of the problems can be addressed by the localizers, without any code changes.
For dialogs, resizing should either be done by the localizers (so leaving extra space is not necessary), or you should use auto-layout (available in frameworks like Java, .NET, Flex, wxWidgets, Qt, etc.).
This might also be a good read: http://msdn.microsoft.com/en-us/goglobal/bb688120.aspx
You will not be able to identify all issues on your own.
Specialists in UI/cultures will cost you lots of money.
Design first for your primary region, and then discover and fix (if you can) issues one by one.
/* Here was my opinion on the religious issues as applied to software design. Removed as the thread starter did not like it. */
Well, these issues begin to play a role when you grow to the big/international/corporate level. Until that happens, better not to bother. They call it "premature optimization".
On the German language, you're right: the content/markup ratio is noticeably lower compared to English. Another difference is that the words tend to be very long, which means not only that the text area will be longer, but also that you'll likely run into problems when words expand out of their container and don't wrap.
How would you like that one: Kesselsteinentfernungsmittelherstellungsbetrieb ?
Honestly, you won't be able to design it in such a way as to please everyone. World cultures are in many ways contradictory. As soon as you reach a certain level of expansion, you'll inevitably become a rip-off target for lots of jerks around the world who find the UI and colors of your application "offensive".

Old ASCII Protocol Avatar Question

For anyone that remembers the protocol Avatar (I'm pretty sure this was its name), I'm trying to find information on it. All I've found so far is that it's an ANSI-style compression protocol, done by compressing common ANSI escape sequences.
But, back in the day (the early '90s), I could swear I remembered that it was used to compress ASCII text for modems like early 2400 baud BIS modems. (I don't recall all the protocol versions, names, etc. from back then, sorry.)
Anyway, this made reading messages and using remote shells a lot nicer, due to the display speed. It didn't do anything for file transfers or whatnot; it was just a way of compressing ASCII text down as small as possible.
I'm trying to do research on this topic, and figured this is a good place to start looking. I think that the protocol used every trick in the book to compress ASCII, like common word replacement to a single byte, or maybe even a bit.
I don't recall the ratio you could get out of it, but as I recall, it was fairly decent.
Anyone have any info on this? Compressing ASCII text to fewer than 7 bits, or protocol information on Avatar, or maybe even an answer as to whether it even DID any of the ASCII compression I'm speaking of?
Wikipedia has something about the AVATAR protocol:

The AVATAR protocol (Advanced Video Attribute Terminal Assembler and Recreator) is a system of escape sequences occasionally used on Bulletin Board Systems (BBSes). It has largely the same functionality as the more popular ANSI escape codes, but has the advantage that the escape sequences are much shorter. AVATAR can thus render colored text and artwork much faster over slow connections. The protocol is defined by FidoNet technical standard proposal FSC-0025.

AVATAR was later extended in late 1989 to AVT/0 (sometimes referred to as AVT/0+), which included facilities to scroll areas of the screen (useful for split-screen chat, or full-screen mail-writing programs), as well as more advanced pattern compression. AVATAR was originally implemented in the Opus BBS, but later popularised by RemoteAccess. RemoteAccess came with a utility, AVTCONV, that allowed for easy translation of ANSI documents into AVATAR, helping its adoption.
Also:
FSC-0025 - AVATAR proposal at FidoNet Technical Standards Committee.
FSC-0037 - AVT/0 extensions
If I remember correctly, the Avatar compression scheme was some simple kind of RLE (Run-Length Encoding) that would compress repeated strings of the same characters to something smaller. Unfortunately, I don't remember the details either.
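Purely as an illustration of the run-length idea (this is NOT the actual Avatar/AVT/0 wire format; the escape byte and the minimum-run threshold are arbitrary choices for the sketch):

    #include <cstddef>
    #include <string>

    // Collapse runs of 4+ identical bytes into: <escape byte> <char> <count>.
    std::string rle_encode(const std::string &in)
    {
        const char kEscape = '\x19';               // arbitrary escape byte for this sketch
        std::string out;
        for (std::size_t i = 0; i < in.size(); ) {
            std::size_t run = 1;
            while (i + run < in.size() && in[i + run] == in[i] && run < 255)
                ++run;
            if (run >= 4) {                        // only worth encoding past a few repeats
                out += kEscape;
                out += in[i];
                out += static_cast<char>(run);
            } else {
                out.append(run, in[i]);            // short runs are cheaper left alone
            }
            i += run;
        }
        return out;
    }
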
Did you check out AVATAR on Wikipedia?
