My Mac laptop has 1,024,000 pixels.
What's the simplest way to turn my display completely black and go nuts with writing little programs to twiddle pixels to my heart's delight?
To make it more concrete, say I wanted to implement the Chaos Game to draw a Sierpinski triangle, at the pixel level, with nothing else on the screen.
What are ways to do that?
Perhaps pick Processing
Capture the display with Quartz and draw into the captured display's context.
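Interpreting that suggestion: a minimal sketch of the display-capture route using the CoreGraphics C API (macOS only; CGDisplayGetDrawingContext is an older call but it is the one that hands you a context for a captured display). Treat it as an outline, not a polished program.

#include <ApplicationServices/ApplicationServices.h>
#include <unistd.h>

int main(void)
{
    CGDirectDisplayID display = CGMainDisplayID();
    if (CGDisplayCapture(display) != kCGErrorSuccess) return 1;   // take over the whole screen

    CGContextRef ctx = CGDisplayGetDrawingContext(display);       // draw directly to the display
    CGRect bounds = CGDisplayBounds(display);

    CGContextSetRGBFillColor(ctx, 0, 0, 0, 1);                    // black it out
    CGContextFillRect(ctx, bounds);

    // ...twiddle individual pixels, e.g. with 1x1 fill rects...
    CGContextSetRGBFillColor(ctx, 1, 1, 1, 1);
    CGContextFillRect(ctx, CGRectMake(100, 100, 1, 1));

    sleep(10);                                                     // admire it for a bit
    CGDisplayRelease(display);
    return 0;
}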
Surely a Screen Saver would be a Serendipitous Solution ?
One approach would be to download sample code for a screen saver module and then use that as a template for your own screen saver. That way you don't have to write much beyond the actual drawing code, and you get your own custom screen saver module to boot.
If you're using C or C++ you can do this with SDL. It allows low-level access to the pixels of a single window or the full screen (plus the keyboard, mouse, and sound card), and it works on Windows, OS X, and Linux.
There are some excellent tutorials on how to use this library at http://lazyfoo.net/SDL_tutorials/index.php. I think you want tutorial #31 after the first few.
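If it helps, here is a rough sketch of the idea using the SDL2 API (the tutorials above cover SDL 1.2, but the shape is the same): a fullscreen black canvas you can plot pixels into, which exits on any key press.

#include <SDL2/SDL.h>

int main(int, char**)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window* win = SDL_CreateWindow("pixels", 0, 0, 0, 0,
                                       SDL_WINDOW_FULLSCREEN_DESKTOP);
    SDL_Renderer* ren = SDL_CreateRenderer(win, -1, 0);

    SDL_SetRenderDrawColor(ren, 0, 0, 0, 255);       // black background
    SDL_RenderClear(ren);
    SDL_SetRenderDrawColor(ren, 255, 255, 255, 255);
    SDL_RenderDrawPoint(ren, 100, 100);              // twiddle a pixel
    SDL_RenderPresent(ren);

    SDL_Event e;                                     // wait for a key press or quit
    while (SDL_WaitEvent(&e) && e.type != SDL_QUIT && e.type != SDL_KEYDOWN) {}

    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}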
A good way to go is GLUT, the (slightly) friendly multiplatform wrapper to OpenGL. Here's some code to twiddle some points:
#include <GL/glut.h>

void
reshape(int w, int h)
{
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, w, 0, h, -1, 1);
}

void
display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_POINTS);
    for (int i = 0; i < 399; ++i)
    {
        glVertex2i(i, (i*i) % 399);
    }
    glEnd();
    glFlush();
}

int
main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutCreateWindow("some points");
    glutDisplayFunc(display);
    glutReshapeFunc(reshape);
    glutMainLoop();
    return 0;
}
That's C++ but it should be nearly identical in Python and many other languages.
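To tie this back to the original question, here is a hedged sketch of a Chaos Game display() you could swap in for the one above. The corner coordinates, the iteration count, and the assumption of an 800x600 window (add glutInitWindowSize(800, 600) before glutCreateWindow) are all arbitrary choices.

#include <cstdlib>   // std::rand

void
display(void)
{
    const float vx[3] = { 400.0f,  20.0f, 780.0f };   // triangle corners in window coordinates
    const float vy[3] = { 580.0f,  20.0f,  20.0f };
    float x = 400.0f, y = 300.0f;                     // arbitrary starting point

    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_POINTS);
    for (int i = 0; i < 100000; ++i)
    {
        int v = std::rand() % 3;                      // pick a random corner...
        x = (x + vx[v]) * 0.5f;                       // ...and jump halfway toward it
        y = (y + vy[v]) * 0.5f;
        glVertex2f(x, y);                             // the visited points trace the Sierpinski triangle
    }
    glEnd();
    glFlush();
}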
I need to find the percentage of skin tone of a person in a given image.
I have been able to count all the pixels with skin colour so far, but I am having trouble ignoring the background of the person so I can count only the person's pixels for the percentage.
BackgroundSubtractorMOG2 bg;
bg.nmixtures =3;
bg.bShadowDetection=false;
bg.operator ()(img,fore);
bg.getBackgroundImage(back);
img is my image. I was trying to separate the back and fore Mat objects, but with the above code snippet back and fore end up with the same values as img - nothing is happening.
Can you point me in the right direction as to what changes I have to make to get it right?
I was able to run some similar code found here:
http://mateuszstankiewicz.eu/?p=189
I had to change a couple of things, but it ended up working properly (back and fore are not the same as img when displayed):
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main(int argc, char *argv[]) {
    Mat frame, back, fore;
    VideoCapture cap(0);
    BackgroundSubtractorMOG2 bg;
    vector<std::vector<Point> > contours;

    namedWindow("Frame");
    namedWindow("Background");
    namedWindow("Foreground");

    for(;;) {
        cap >> frame;
        bg.operator ()(frame, fore);          // update the model and get the foreground mask
        bg.getBackgroundImage(back);
        erode(fore, fore, Mat());             // clean up noise in the mask
        dilate(fore, fore, Mat());
        findContours(fore, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
        drawContours(frame, contours, -1, Scalar(0, 0, 255), 2);
        imshow("Frame", frame);
        imshow("Background", back);
        imshow("Foreground", fore);
        if(waitKey(1) == 27) break;           // Esc to quit
    }
    return 0;
}
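For the original question (skin percentage), once fore is a clean foreground mask you can count skin-coloured pixels only inside it. A sketch of that next step; the YCrCb thresholds are illustrative values, not tuned ones:

Mat ycrcb, skinMask, skinInForeground;
cvtColor(frame, ycrcb, CV_BGR2YCrCb);                                  // skin is easier to threshold in YCrCb
inRange(ycrcb, Scalar(0, 133, 77), Scalar(255, 173, 127), skinMask);   // rough skin-colour range
threshold(fore, fore, 128, 255, CV_THRESH_BINARY);                     // drop MOG2 shadow values (127)
bitwise_and(skinMask, fore, skinInForeground);                         // skin AND person

double personPixels = countNonZero(fore);
double skinPixels   = countNonZero(skinInForeground);
double skinPercent  = personPixels > 0 ? 100.0 * skinPixels / personPixels : 0.0;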
The problem
I have just now begun working with OpenGL using GLUT. The code below compiles and displays two wireframe cubes and a sphere. The problem is that when I attempt to drag or resize the window it induces a noticeable delay before following my mouse.
This problem does not occur on my colleague's computer with the same code.
I am working with Visual Studio 2012 C++ Express on a Windows 7 computer.
I am not an experienced programmer.
The code
// OpenGLHandin1.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <GL/glut.h>

void initView(int argc, char * argv[]){
    //init here
    glutInit(&argc, argv);
    //Simple buffer
    glutInitDisplayMode( GLUT_SINGLE | GLUT_RGBA );
    glutInitWindowPosition(100,100);
    glutInitWindowSize(800,400);
    glutCreateWindow("Handin 2");
}

void draw(){
    glClearColor(0,0,0,1);
    glClear(GL_COLOR_BUFFER_BIT);
    //Background color

    glPushMatrix();
    glLoadIdentity();
    glTranslatef(0.6, 0, 0);
    glColor3f(0.8,0,0);
    glutWireCube(1.1); //Draw the cube
    glPopMatrix();

    glPushMatrix();
    glLoadIdentity();
    glTranslatef(-0.5, 0, -0.2);
    glColor3f(0,0.8,0);
    glutWireCube(1.1); //Draw the cube
    glPopMatrix();

    glPushMatrix();
    glLoadIdentity();
    glTranslatef(0, 1.2, 0);
    glRotatef(90, 1, 0, 0);
    glColor3f(1,1,1);
    glutWireSphere(0.6, 20, 20); //Draw the sphere
    glPopMatrix();

    //draw here
    //glutSwapBuffers();
    glutPostRedisplay();
    glFlush();
}

void reshape (int w, int h){
    glViewport(0,0,w ,h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45, (float)w/(float)h, 1.5, 10);
    gluLookAt(1.5, 2.5, 4,
              0, 0.6, 0,
              0, 1, 0); //Orient the camera
    glRotatef(5, 0, 0, 1);
    glMatrixMode(GL_MODELVIEW);
}

int main(int argc, char * argv[])
{
    initView(argc,argv);
    glutDisplayFunc(draw);
    glutReshapeFunc(reshape);
    glutMainLoop();
}
Solution:
It seems that the simple solution of calling Sleep(1) in the render function worked. You've also asked why - I'm not sure I can explain it definitively, but here's my best guess:
Why does it even work?
Your fellow students may have VSync turned on by default in their drivers. This makes their code run only as fast as the screen can refresh, most probably 60 fps. That gives you around 16 milliseconds to render each frame, and if the code is efficient (taking, say, 2 ms per frame) it leaves plenty of time for the CPU to do other OS-related work, such as moving your window.
Now, if you disable vertical sync, the program will try to render as many frames as possible, effectively starving all other processes. I suggested Sleep because it reveals this particular issue. It doesn't really matter whether it sleeps for 1 or 3 ms; what it really does is tell the CPU, "hey, I'm not doing anything in particular right now, so you may do other things".
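To make that concrete, the suggestion amounts to something like this (a sketch; Sleep comes from <windows.h>, so it is Windows-specific):

#include <windows.h>

void draw(){
    // ... all the existing drawing code ...
    glutPostRedisplay();
    glFlush();
    Sleep(1);   // hand the remainder of the time slice back to the OS
}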
But isn't it slowing my program?
Using Sleep is a common technique. If you're concerned about that lost 1 ms every frame, you can also try Sleep(0), as it should act much the same - giving the spare time to the CPU. You could also try enabling vertical sync and verifying whether my guess was correct.
As a side note, you can also look at CPU usage graphs with and without the sleep. It should be 100% (or 50% on a dual-core CPU) without it (running as fast as possible), and much lower with it, depending on your program's requirements and your CPU's speed.
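If you want to test the VSync theory instead, swap-interval control on Windows goes through the WGL_EXT_swap_control extension. A sketch (availability depends on the driver, and it only becomes relevant once you switch to GLUT_DOUBLE with glutSwapBuffers):

#include <windows.h>
#include <GL/gl.h>

typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

void enableVSync()   // call after glutCreateWindow, once a GL context exists
{
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(1);   // 1 = wait for the vertical refresh before swapping
}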
Additional remarks about Sleep(0)
After the sleep interval has passed, the thread is ready to run. If you specify 0 milliseconds, the thread will relinquish the remainder of its time slice but remain ready. Note that a ready thread is not guaranteed to run immediately. Consequently, the thread may not run until some time after the sleep interval elapses. - it's from here.
Also note that the behaviour might be slightly different on Linux systems, but I'm not a Linux expert - perhaps a passer-by could clarify.
I'm trying to use the Qt framework (4.7.4) to implement a sliding display in which new pixel data is added to the first row of the screen and the previous pixels are scrolled down one row on every refresh.
It is refreshed 20 times per second, and on every refresh random green points (pixels) are drawn on a black background.
The problem is that there is highly noticeable flicker on every refresh. I have researched the web and optimized my code as much as possible. I tried raster rendering with both QPainter (on a QWidget) and QGraphicsScene (on a QGraphicsView), and I even tried OpenGL rendering on a QGLWidget. However, I still end up with the same flicker problem.
What may cause this flickering? I am beginning to suspect that my LCD monitor cannot refresh the display fast enough for black-to-green transitions. I have also noticed that if I select a gray background instead of black, there is no flicker.
The effect you're seeing is purely psychovisual. It's a human defect, not a software defect. I'm serious. You can verify by fixing the value of x - you'll still be repainting the entire pixmap on the window, there won't be any flicker - because there is no flicker per se.
The psychovisual flicker occurs when the scroll rate is not tied to the passage of real time. When occasionally the time between updates varies due to CPU load, or due to system timer inaccuracies, our visual system integrates two images and it appears as if the overall brightness is changed.
You've correctly noticed that the perceived flicker is reduced as you reduce the contrast ratio of the image by setting the background to grey. This is an additional clue that the effect is psychovisual.
Below is a way of preventing this effect. Notice how the scroll distance is tied to the elapsed time (here, 1 ms = 1 pixel).
#include <QElapsedTimer>
#include <QPaintEvent>
#include <QBasicTimer>
#include <QApplication>
#include <QPainter>
#include <QPixmap>
#include <QWidget>
#include <QDebug>

static inline int rand(int range) { return (double(qrand()) * range) / RAND_MAX; }

class Widget : public QWidget
{
    float fps;
    qint64 lastTime;
    QPixmap pixmap;
    QBasicTimer timer;
    QElapsedTimer elapsed;

    void timerEvent(QTimerEvent * ev) {
        if (ev->timerId() == timer.timerId()) update();
    }

    void paintEvent(QPaintEvent * ev) {
        qint64 time = elapsed.elapsed();
        qint64 delta = time - lastTime;
        lastTime = time;
        if (delta > 0) {
            const float weight(0.05);
            fps = (1.0-weight)*fps + weight*(1E3/delta);
            if (pixmap.size() != size()) {
                pixmap = QPixmap(size());
                pixmap.fill(Qt::black);
            }
            int dy = qMin((int)delta, pixmap.height());
            pixmap.scroll(0, dy, pixmap.rect());
            QPainter pp(&pixmap);
            pp.fillRect(0, 0, pixmap.width(), dy, Qt::black);
            for(int i = 0; i < 30; ++i){
                int x = rand(pixmap.width());
                pp.fillRect(x, 0, 3, dy, Qt::green);
            }
        }
        QPainter p(this);
        p.drawPixmap(ev->rect(), pixmap, ev->rect());
        p.setPen(Qt::yellow);
        p.fillRect(0, 0, 100, 50, Qt::black);
        p.drawText(rect(), QString("FPS: %1").arg(fps, 0, 'f', 0));
    }

public:
    explicit Widget(QWidget *parent = 0) : QWidget(parent), fps(0), lastTime(0), pixmap(size())
    {
        timer.start(1000/60, this);
        elapsed.start();
        setAttribute(Qt::WA_OpaquePaintEvent);
    }
};

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    Widget w;
    w.show();
    return a.exec();
}
I'd recommend you do not scroll the pixmap in-place, but create a second pixmap and use drawPixmap() to copy everything but one line from pixmap 1 to pixmap 2 (with the scroll offset). Then continue painting on pixmap 2. After the frame, exchange the references to both pixmaps, and start over.
The rationale is that copying from one memory area to a different one can be optimised more easily than modifying one memory area in-place.
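A rough sketch of that double-pixmap approach (the names current and spare are illustrative, and both pixmaps are assumed to already be the same size):

#include <QPixmap>
#include <QPainter>
#include <QtGlobal>

void scrollByOneLine(QPixmap &current, QPixmap &spare)
{
    QPainter p(&spare);
    p.drawPixmap(0, 1, current);                       // copy everything, shifted down one line
    p.fillRect(0, 0, spare.width(), 1, Qt::black);     // fresh top row, ready for new points
    p.end();
    qSwap(current, spare);                             // present 'current' this frame; reuse 'spare' next frame
}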
I'm playing around with cocos2d-iphone and it looks great!
But I want to draw another circle on screen on each update, and this lowers the framerate very quickly.
Can I draw multiple primitives in a somewhat faster way?
This is the code I currently use:
-(void) draw
{
    glLineWidth(1);
    glColor4ub(100,100,255,0);

    float angle = 0;
    float radius = 10.0f;
    int numSegments = 10;
    bool drawLineToCenter = NO;

    NSInteger point;
    for (point = 0; point < [points count]; point++)
    {
        ccDrawCircle([[points objectAtIndex:point] CGPointValue], radius, angle, numSegments, drawLineToCenter);
    }
}
Use sprites instead of primitives. Then you can use CCSpriteBatchNode.
The primitive draw methods of cocos2d are mainly there for debugging purposes, not for making up your game art. The main issue is that they are not batched operations, which means every new primitive you draw issues its own draw call - and that's expensive.
Is there a good way to display Unicode text in OpenGL under Windows? For example, when you have to deal with different languages. The most common approach, like
#define FONTLISTRANGE 128
GLuint list;
list = glGenLists(FONTLISTRANGE);
wglUseFontBitmapsW(hDC, 0, FONTLISTRANGE, list);
just won't do, because you can't create enough display lists for all Unicode characters.
You should also check out the FTGL library.
FTGL is a free cross-platform Open Source C++ library that uses Freetype2 to simplify rendering fonts in OpenGL applications. FTGL supports bitmaps, pixmaps, texture maps, outlines, polygon mesh, and extruded polygon rendering modes.
This project was dormant for a while, but it is recently back under development. I haven't updated my project to use the latest version, but you should check it out.
It allows you to use any TrueType font via the FreeType font library.
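For reference, basic FTGL usage looks roughly like this (a sketch; the font path is a placeholder, and it assumes an OpenGL context is already current):

#include <GL/gl.h>
#include <FTGL/ftgl.h>

void drawUnicodeLabel()
{
    static FTGLPixmapFont font("C:\\Windows\\Fonts\\arial.ttf");   // any TrueType/OpenType file
    if (font.Error()) return;                                      // font failed to load
    font.FaceSize(32);                                             // size in points at 72 dpi

    glRasterPos2f(50.0f, 50.0f);                                   // position in window coordinates
    font.Render(L"Hello, мир, 世界");                               // wchar_t overload handles non-ASCII text
}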
I recommend reading this OpenGL font tutorial. It's for the D programming language but it's a nice introduction to various issues involved in implementing a glyph caching system for rendering text with OpenGL. The tutorial covers Unicode compliance, antialiasing, and kerning techniques.
D is pretty comprehensible to anyone who knows C++ and most of the article is about the general techniques, not the implementation language.
I'd recommend FTGL as already recommended above; however, I have implemented a FreeType/OpenGL renderer myself and thought you might find the code handy if you want to reinvent this wheel yourself. I'd really recommend FTGL though, it's a lot less hassle to use. :)
/*
 * glTextRender class by Semi Essessi
 *
 * FreeType2 empowered text renderer
 *
 */
#include "glTextRender.h"
#include "jEngine.h"
#include "glSystem.h"
#include "jMath.h"
#include "jProfiler.h"
#include "log.h"
#include <windows.h>
FT_Library glTextRender::ftLib = 0;
//TODO::maybe fix this so it use wchar_t for the filename
glTextRender::glTextRender(jEngine* j, const char* fontName, int size = 12)
{
#ifdef _DEBUG
jProfiler profiler = jProfiler(L"glTextRender::glTextRender");
#endif
char fontName2[1024];
memset(fontName2,0,sizeof(char)*1024);
sprintf(fontName2,"fonts\\%s",fontName);
if(!ftLib)
{
#ifdef _DEBUG
wchar_t fn[128];
mbstowcs(fn,fontName,strlen(fontName)+1);
LogWriteLine(L"\x25CB\x25CB\x25CF Font: %s was requested before FreeType was initialised", fn);
#endif
return;
}
// constructor code for glTextRender
e=j;
gl = j->gl;
red=green=blue=alpha=1.0f;
face = 0;
// remember that for some weird reason below font size 7 everything gets scrambled up
height = max(6,(int)floorf((float)size*((float)gl->getHeight())*0.001666667f));
aHeight = ((float)height)/((float)gl->getHeight());
setPosition(0.0f,0.0f);
// look in base fonts dir
if(FT_New_Face(ftLib, fontName2, 0, &face ))
{
// if we dont have it look in windows fonts dir
char buf[1024];
GetWindowsDirectoryA(buf,1024);
strcat(buf, "\\fonts\\");
strcat(buf, fontName);
if(FT_New_Face(ftLib, buf, 0, &face ))
{
//TODO::check in mod fonts directory
#ifdef _DEBUG
wchar_t fn[128];
mbstowcs(fn,fontName,strlen(fontName)+1);
LogWriteLine(L"\x25CB\x25CB\x25CF Request for font: %s has failed", fn);
#endif
face = 0;
return;
}
}
// FreeType uses 64x size and 72dpi for default
// doubling size for ms
FT_Set_Char_Size(face, mulPow2(height,7), mulPow2(height,7), 96, 96);
// set up cache table and then generate the first 256 chars and the console prompt character
for(int i=0;i<65536;i++)
{
cached[i]=false;
width[i]=0.0f;
}
for(unsigned short i = 0; i < 256; i++) getChar((wchar_t)i);
getChar(CHAR_PROMPT);
#ifdef _DEBUG
wchar_t fn[128];
mbstowcs(fn,fontName,strlen(fontName)+1);
LogWriteLine(L"\x25CB\x25CB\x25CF Font: %s loaded OK", fn);
#endif
}
glTextRender::~glTextRender()
{
// destructor code for glTextRender
for(int i=0;i<65536;i++)
{
if(cached[i])
{
glDeleteLists(listID[i],1);
glDeleteTextures(1,&(texID[i]));
}
}
// TODO:: work out stupid freetype crashz0rs
try
{
static int foo = 0;
if(face && foo < 1)
{
foo++;
FT_Done_Face(face);
face = 0;
}
}
catch(...)
{
face = 0;
}
}
// return true if init works, or if already initialised
bool glTextRender::initFreeType()
{
if(!ftLib)
{
if(!FT_Init_FreeType(&ftLib)) return true;
else return false;
} else return true;
}
void glTextRender::shutdownFreeType()
{
if(ftLib)
{
FT_Done_FreeType(ftLib);
ftLib = 0;
}
}
void glTextRender::print(const wchar_t* str)
{
// store old stuff to set start position
glPushAttrib(GL_TRANSFORM_BIT);
// get viewport size
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
gluOrtho2D(viewport[0],viewport[2],viewport[1],viewport[3]);
glPopAttrib();
float color[4];
glGetFloatv(GL_CURRENT_COLOR, color);
glPushAttrib(GL_LIST_BIT | GL_CURRENT_BIT | GL_ENABLE_BIT | GL_TRANSFORM_BIT);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glEnable(GL_TEXTURE_2D);
//glDisable(GL_DEPTH_TEST);
// set blending for AA
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glTranslatef(xPos,yPos,0.0f);
glColor4f(red,green,blue,alpha);
// call display lists to render text
glListBase(0u);
for(unsigned int i=0;i<wcslen(str);i++) glCallList(getChar(str[i]));
// restore old states
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glPopAttrib();
glColor4fv(color);
glPushAttrib(GL_TRANSFORM_BIT);
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glPopAttrib();
}
void glTextRender::printf(const wchar_t* str, ...)
{
if(!str) return;
wchar_t* buf = 0;
va_list parg;
va_start(parg, str);
// allocate buffer
int len = (_vscwprintf(str, parg)+1);
buf = new wchar_t[len];
if(!buf) return;
vswprintf(buf, str, parg);
va_end(parg);
print(buf);
delete[] buf;
}
GLuint glTextRender::getChar(const wchar_t c)
{
int i = (int)c;
if(cached[i]) return listID[i];
// load glyph and get bitmap
if(FT_Load_Glyph(face, FT_Get_Char_Index(face, i), FT_LOAD_DEFAULT )) return 0;
FT_Glyph glyph;
if(FT_Get_Glyph(face->glyph, &glyph)) return 0;
FT_Glyph_To_Bitmap(&glyph, FT_RENDER_MODE_NORMAL, 0, 1);
FT_BitmapGlyph bitmapGlyph = (FT_BitmapGlyph)glyph;
FT_Bitmap& bitmap = bitmapGlyph->bitmap;
int w = roundPow2(bitmap.width);
int h = roundPow2(bitmap.rows);
// convert to texture in memory
GLubyte* texture = new GLubyte[2*w*h];
for(int j=0;j<h;j++)
{
bool cond = j>=bitmap.rows;
for(int k=0;k<w;k++)
{
texture[2*(k+j*w)] = 0xFFu;
texture[2*(k+j*w)+1] = ((k>=bitmap.width)||cond) ? 0x0u : bitmap.buffer[k+bitmap.width*j];
}
}
// store char width and adjust max height
// note .5f
float ih = 1.0f/((float)gl->getHeight());
width[i] = ((float)divPow2(face->glyph->advance.x, 7))*ih;
aHeight = max(aHeight,(.5f*(float)bitmap.rows)*ih);
glPushAttrib(GL_LIST_BIT | GL_CURRENT_BIT | GL_ENABLE_BIT | GL_TRANSFORM_BIT);
// create gl texture
glGenTextures(1, &(texID[i]));
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texID[i]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, texture);
glPopAttrib();
delete[] texture;
// create display list
listID[i] = glGenLists(1);
glNewList(listID[i], GL_COMPILE);
glBindTexture(GL_TEXTURE_2D, texID[i]);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
// adjust position to account for texture padding
glTranslatef(.5f*(float)bitmapGlyph->left, 0.0f, 0.0f);
glTranslatef(0.0f, .5f*(float)(bitmapGlyph->top-bitmap.rows), 0.0f);
// work out texcoords
float tx=((float)bitmap.width)/((float)w);
float ty=((float)bitmap.rows)/((float)h);
// render
// note .5f
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex2f(0.0f, .5f*(float)bitmap.rows);
glTexCoord2f(0.0f, ty);
glVertex2f(0.0f, 0.0f);
glTexCoord2f(tx, ty);
glVertex2f(.5f*(float)bitmap.width, 0.0f);
glTexCoord2f(tx, 0.0f);
glVertex2f(.5f*(float)bitmap.width, .5f*(float)bitmap.rows);
glEnd();
glPopMatrix();
// move position for the next character
// note extra div 2
glTranslatef((float)divPow2(face->glyph->advance.x, 7), 0.0f, 0.0f);
glEndList();
// char is successfully cached for next time
cached[i] = true;
return listID[i];
}
void glTextRender::setPosition(float x, float y)
{
float fac = ((float)gl->getHeight());
xPos = fac*x+FONT_BORDER_PIXELS; yPos = fac*(1-y)-(float)height-FONT_BORDER_PIXELS;
}
float glTextRender::getAdjustedWidth(const wchar_t* str)
{
float w = 0.0f;
for(unsigned int i=0;i<wcslen(str);i++)
{
if(cached[str[i]]) w+=width[str[i]];
else
{
getChar(str[i]);
w+=width[str[i]];
}
}
return w;
}
You may have to generate your own "glyph cache" in texture memory as you go, potentially with some sort of LRU policy to avoid exhausting texture memory. Not nearly as easy as your current method, but it may be the only way given the number of Unicode characters.
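As a sketch of what such a cache could look like (illustrative only - rasterize() is a placeholder for whatever uploads a glyph bitmap, e.g. FreeType plus glTexImage2D, and the capacity is up to you):

#include <list>
#include <unordered_map>
#include <GL/gl.h>

class GlyphCache {
    struct Entry { wchar_t ch; GLuint tex; };
    std::list<Entry> lru;                                        // front = most recently used
    std::unordered_map<wchar_t, std::list<Entry>::iterator> index;
    size_t capacity;

public:
    explicit GlyphCache(size_t cap) : capacity(cap) {}

    GLuint get(wchar_t ch) {
        auto it = index.find(ch);
        if (it != index.end()) {                                 // hit: move to front
            lru.splice(lru.begin(), lru, it->second);
            return it->second->tex;
        }
        if (lru.size() == capacity) {                            // full: evict least recently used
            glDeleteTextures(1, &lru.back().tex);
            index.erase(lru.back().ch);
            lru.pop_back();
        }
        GLuint tex = rasterize(ch);                              // placeholder: upload the glyph bitmap
        lru.push_front({ch, tex});
        index[ch] = lru.begin();
        return tex;
    }

private:
    GLuint rasterize(wchar_t ch);                                // e.g. FreeType + glTexImage2D
};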
You should consider using a Unicode rendering library (e.g. Pango) to render the text into a bitmap and put that bitmap on the screen or into a texture.
Rendering Unicode text is not simple, so you cannot just load 64K rectangular glyphs and use them.
Characters may overlap. E.g. in this smiley:
( ͡° ͜ʖ ͡°)
Some code points stack accents on the previous character. Consider this excerpt from this notable post:
...he com̡e̶s, ̕h̵is un̨ho͞ly radiańcé destro҉ying all
enli̍̈́̂̈́ghtenment, HTML tags lea͠ki̧n͘g fr̶ǫm ̡yo͟ur eye͢s̸ ̛l̕ik͏e
liquid pain, the song of re̸gular expression parsing will
extinguish the voices of mortal man from the sphere I can see it
can you see ̲͚̖͔̙î̩́t̲͎̩̱͔́̋̀ it is beautiful the final snuffing of
the lies of Man ALL IS LOŚ͖̩͇̗̪̏̈́T ALL IS LOST the pon̷y he comes
he c̶̮omes he comes the ichor permeates all MY FACE MY FACE ᵒh god no
NO NOO̼OO NΘ stop the an*̶͑̾̾̅ͫ͏̙̤g͇̫͛͆̾ͫ̑͆l͖͉̗̩̳̟̍ͫͥͨe̠̅s
͎a̧͈͖r̽̾̈́͒͑e not rè̑ͧ̌aͨl̘̝̙̃ͤ͂̾̆ ZA̡͊͠͝LGΌ ISͮ̂҉̯͈͕̹̘̱ TO͇̹̺ͅƝ̴ȳ̳
TH̘Ë͖́̉ ͠P̯͍̭O̚N̐Y̡ H̸̡̪̯ͨ͊̽̅̾̎Ȩ̬̩̾͛ͪ̈́̀́͘
̶̧̨̱̹̭̯ͧ̾ͬC̷̙̲̝͖ͭ̏ͥͮ͟Oͮ͏̮̪̝͍M̲̖͊̒ͪͩͬ̚̚͜Ȇ̴̟̟͙̞ͩ͌͝S̨̥̫͎̭ͯ̿̔̀ͅ
If you truly want to render Unicode correctly you should be able to render this one correctly too.
UPDATE: I looked at the Pango engine, and it's a case of the banana, the gorilla, and the entire jungle. First, it depends on GLib because it uses GObjects; second, it cannot render directly into a byte buffer. It has Cairo and FreeType backends, so you must use one of them to render the text and eventually export it into bitmaps. That doesn't look good so far.
In addition, if you want to store the result in a texture, use pango_layout_get_pixel_extents after setting the text to get the sizes of the rectangles to render the text into. The ink rectangle is the rectangle that contains the entire text; its top-left position is relative to the top-left of the logical rectangle. (The bottom line of the logical rectangle is the baseline.) Hope this helps.
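A rough sketch of that route using the Cairo backend mentioned above ("Sans 24", the white fill, and the surface size are arbitrary choices; error handling is omitted). The resulting premultiplied BGRA bytes could then be uploaded to a texture.

#include <pango/pangocairo.h>

unsigned char* renderTextToPixels(const char* utf8, int width, int height)
{
    cairo_surface_t* surface =
        cairo_image_surface_create(CAIRO_FORMAT_ARGB32, width, height);
    cairo_t* cr = cairo_create(surface);

    PangoLayout* layout = pango_cairo_create_layout(cr);
    PangoFontDescription* desc = pango_font_description_from_string("Sans 24");
    pango_layout_set_font_description(layout, desc);
    pango_font_description_free(desc);
    pango_layout_set_text(layout, utf8, -1);

    PangoRectangle ink, logical;                     // size of the rendered text, as discussed above
    pango_layout_get_pixel_extents(layout, &ink, &logical);

    cairo_set_source_rgba(cr, 1, 1, 1, 1);           // white text
    pango_cairo_show_layout(cr, layout);
    cairo_surface_flush(surface);

    unsigned char* pixels = cairo_image_surface_get_data(surface);   // premultiplied BGRA
    // NOTE: 'pixels' is owned by 'surface'; copy it out before tearing these down.
    g_object_unref(layout);
    cairo_destroy(cr);
    // cairo_surface_destroy(surface);               // keep the surface alive while 'pixels' is in use
    return pixels;
}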
QuesoGLC is great for this; I've used it to render Chinese and Cyrillic characters in 3D.
http://quesoglc.sourceforge.net/
The Unicode text sample it comes with should get you started.
You could also group the characters by language. Load each language table as needed, and when you need to switch languages, unload the previous language table and load the new one.
Unicode is supported in the title bar. I have just tried this on a Mac, and it ought to work elsewhere too. If you have (say) some imported data that includes text labels, and some of the labels might contain Unicode, you could add a tool that echoes the label in the title bar.
It's not a great solution, but it is very easy to do.