Say I have a floating point image, e.g. in 32FC1 format for a thermal image, and I want to display it using (preferably) ROS or OpenCV tools, while also being able to see the current pixel value (e.g. temperature) the mouse is hovering over. How would I do that? Rviz can display the image, but will not show any pixel values. image_view can also display the image, but only shows the pixel value as RGB.
Thank you!
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
using std::cout;
using std::endl;
// create a global Mat
cv::Mat img_32FC1;
// function to be called on mouse event
// displays values on console, it can be modified to print values on image
void mouseEventCallBack(int event, int x, int y, int flags, void *userdata)
{
    if (event == cv::EVENT_MOUSEMOVE)
    {
        cout << "x = " << x << ", y = " << y << " value = " << img_32FC1.at<float>(y, x) << endl;
    }
}
int main()
{
    // original color image, CV_8UC3
    cv::Mat img_8UC3 = cv::imread("image.jpg", cv::IMREAD_UNCHANGED), img_8UC1;
    // convert original image to gray, CV_8UC1
    cv::cvtColor(img_8UC3, img_8UC1, cv::COLOR_BGR2GRAY);
    // convert to float, CV_32FC1
    img_8UC1.convertTo(img_32FC1, CV_32FC1);
    img_32FC1 /= 255.0;
    // create a window
    cv::namedWindow("window", cv::WINDOW_AUTOSIZE);
    // set MouseCallback function
    cv::setMouseCallback("window", mouseEventCallBack);
    // display the 8-bit image; the callback reads values from the float Mat,
    // which has the same dimensions
    cv::imshow("window", img_8UC1);
    cv::waitKey(0);
    cv::destroyAllWindows();
    return 0;
}
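Since the question asks about ROS specifically: the same idea can be wired into a ROS node with cv_bridge. The following is a minimal sketch, not a drop-in solution; the topic name /thermal/image_raw is just a placeholder, and the min-max normalization is only there to make the float image visible in imshow.
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/opencv.hpp>
#include <iostream>

cv::Mat img_32FC1;

void onMouse(int event, int x, int y, int, void*)
{
    if (event == cv::EVENT_MOUSEMOVE && !img_32FC1.empty())
        std::cout << "x = " << x << ", y = " << y
                  << " value = " << img_32FC1.at<float>(y, x) << std::endl;
}

void imageCallback(const sensor_msgs::ImageConstPtr& msg)
{
    // keep a copy of the raw float image so the mouse callback reports real values
    img_32FC1 = cv_bridge::toCvShare(msg, sensor_msgs::image_encodings::TYPE_32FC1)->image.clone();
    // normalize only for display; pixel values are always read from img_32FC1
    cv::Mat display;
    cv::normalize(img_32FC1, display, 0.0, 1.0, cv::NORM_MINMAX);
    cv::imshow("thermal", display);
    cv::waitKey(1);
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "thermal_pixel_viewer");
    ros::NodeHandle nh;
    cv::namedWindow("thermal");
    cv::setMouseCallback("thermal", onMouse);
    image_transport::ImageTransport it(nh);
    image_transport::Subscriber sub = it.subscribe("/thermal/image_raw", 1, imageCallback);
    ros::spin();
    return 0;
}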
How can I add a song to this code using Processing, and how do I synchronize it with a PIR sensor on an Arduino?
import processing.video.*;
import ddf.minim.*;
import ddf.minim.AudioPlayer;
// Size of each cell in the grid
int cellSize = 20;
// Number of columns and rows in our system
int cols, rows;
// Variable for capture device
Capture video;
Minim minim;
AudioPlayer song;
void setup() {
size(1280, 720);
frameRate(30);
cols = width / cellSize;
rows = height / cellSize;
colorMode(RGB, 255, 255, 255, 100);
// This is the default video input; see the GettingStartedCapture
// example if it creates an error
video = new Capture(this, width, height);
// Start capturing the images from the camera
video.start();
background(0);
}
{
// we pass this to Minim so that it can load files from the data directory
minim = new Minim(this);
// loadFile will look in all the same places as loadImage does.
// this means you can find files that are in the data folder and the
// sketch folder. you can also pass an absolute path, or a URL.
song = minim.loadFile("untitled.wav");
}
void draw() {
if (video.available()) {
video.read();
video.loadPixels();
// Begin loop for columns
for (int i = 0; i < cols; i++) {
// Begin loop for rows
for (int j = 0; j < rows; j++) {
// Where are we, pixel-wise?
int x = i*cellSize;
int y = j*cellSize;
int loc = (video.width - x - 1) + y*video.width; // Reversing x to mirror the image
float r = red(video.pixels[loc]);
float g = green(video.pixels[loc]);
float b = blue(video.pixels[loc]);
// Make a new color with an alpha component
color c = color(r, g, b, 75);
// Code for drawing a single rect
// Using translate in order for rotation to work properly
pushMatrix();
translate(x+cellSize/2, y+cellSize/2);
// Rotation formula based on brightness
rotate((2 * PI * brightness(c) / 255.0));
rectMode(CENTER);
fill(c);
noStroke();
// Rects are larger than the cell for some overlap
rect(0, 0, cellSize+6, cellSize+6);
popMatrix();
}
}
}
}
I want to detect movement in order to activate or deactivate this feature.
Please, can you help me?
This is the error that I got:
The sketch path is not set. ==== JavaSound Minim Error ==== ==== java.lang.reflect.InvocationTargetException
=== Minim Error === === Couldn't load the file untitled.wav
Stack Overflow isn't really designed for general "how do I do this" type questions. It's for specific "I tried X, expected Y, but got Z instead" type questions. But I'll try to help in a general sense:
You need to break your problem down into smaller pieces and then take those pieces on one at a time. Get a simple example working. If you're asking about audio, then forget about the webcam for a second. Create a simple sketch that just plays a sound. Separately from that, create a simple sketch that just gets a webcam working. When you have those working perfectly, then you can think about combining them. But work your way forward in small steps. Write down exactly what you want to happen, in English, and that will be an algorithm that you can think about implementing with code.
Then if you get stuck, you can post a more specific question along with a MCVE. Good luck.
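For the Arduino/PIR piece specifically, a minimal sketch along these lines might be a starting point. The pin number and the one-byte serial protocol are just assumptions, not something from your setup:
// Assumes the PIR output is wired to digital pin 2.
// Sends '1' over serial when motion starts and '0' when it stops.
const int pirPin = 2;
int lastState = LOW;

void setup() {
  pinMode(pirPin, INPUT);
  Serial.begin(9600);
}

void loop() {
  int state = digitalRead(pirPin);
  if (state != lastState) {
    Serial.write(state == HIGH ? '1' : '0');
    lastState = state;
  }
  delay(50); // crude debounce / rate limit
}
On the Processing side you would then read those bytes with the Serial library and call song.play() or song.pause() accordingly, once the sound playback works on its own.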
I am new to OpenCV and trying to find contours and draw rectangles around them. Here is my code, but it throws a cv::Exception when it reaches accumulateWeighted().
I tried to make both src (the original image) and dst (the background) CV_32FC3 and then compute the running average using accumulateWeighted.
#include "opencv2/video/tracking.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/imgproc/imgproc_c.h"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
#include <ctype.h>
using namespace cv;
using namespace std;
static void help()
{
cout << "\nThis is an example of using CAMSHIFT to detect multiple moving objects.\n";
}
Rect rect;
VideoCapture capture;
Mat currentFrame, currentFrame_grey, differenceImg, oldFrame_grey,background;
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
bool first = true;
int main(int argc, char* argv[])
{
//Create a new movie capture object.
capture.open(0);
if(!capture.isOpened())
{
//error in opening the video input
cerr << "Unable to open video file: " /*<< videoFilename*/ << endl;
exit(EXIT_FAILURE);
}
//capture current frame from webcam
capture >> currentFrame;
//Size of the image.
CvSize imgSize;
imgSize.width = currentFrame.size().width; //img.size().width
imgSize.height = currentFrame.size().height; ////img.size().height
//Images to use in the program.
currentFrame_grey.create( imgSize, IPL_DEPTH_8U);//image.create().
while(1)
{
capture >> currentFrame;//VideoCapture& VideoCapture::operator>>(Mat& image)
//Convert the image to grayscale.
cvtColor(currentFrame,currentFrame_grey,CV_RGB2GRAY);//cvtColor()
currentFrame.convertTo(currentFrame,CV_32FC3);
background = Mat::zeros(currentFrame.size(), CV_32FC3);
accumulateWeighted(currentFrame,background,1.0,NULL);
imshow("Background",background);
if(first) //Capturing Background for the first time
{
differenceImg = currentFrame_grey.clone();//img1 = img.clone()
oldFrame_grey = currentFrame_grey.clone();//img2 = img.clone()
convertScaleAbs(currentFrame_grey, oldFrame_grey, 1.0, 0.0);//convertscaleabs()
first = false;
continue;
}
//Minus the current frame from the moving average.
absdiff(oldFrame_grey,currentFrame_grey,differenceImg);//absDiff()
//bluring the differnece image
blur(differenceImg, differenceImg, imgSize);//blur()
//apply threshold to discard small unwanted movements
threshold(differenceImg, differenceImg, 25, 255, CV_THRESH_BINARY);//threshold()
//find contours
findContours(differenceImg,contours,hierarchy,CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0)); //findcontours()
//draw bounding box around each contour
//for(; contours! = 0; contours = contours->h_next)
for(int i = 0; i < contours.size(); i++)
{
rect = boundingRect(contours[i]); //extract bounding box for current contour
//drawing rectangle
rectangle(currentFrame, cvPoint(rect.x, rect.y), cvPoint(rect.x+rect.width, rect.y+rect.height), cvScalar(0, 0, 255, 0), 2, 8, 0);
}
//New Background
convertScaleAbs(currentFrame_grey, oldFrame_grey, 1.0, 0.0);
//display colour image with bounding box
imshow("Output Image", currentFrame);//imshow()
//display threshold image
imshow("Difference image", differenceImg);//imshow()
//clear memory and contours
//cvClearMemStorage( storage );
//contours = 0;
contours.clear();
//background = currentFrame;
//press Esc to exit
char c = cvWaitKey(33);
if( c == 27 ) break;
}
// Destroy All Windows.
destroyAllWindows();
return 0;
}
Please help me solve this.
You might want to read the documentation before asking here.
You missed the alpha parameter as well as the dst Mat in your call; it should be addWeighted:
Mat dst;
addWeighted(currentFrame, 0.5, background, 0.5, 0, dst);
Also, I have no idea what the whole thing is supposed to achieve; accumulating the current frame before diffing it does not make any sense to me.
If you are trying to do background subtraction, throw it all away and use one of the built-in background subtractors instead.
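For reference, a minimal sketch of that built-in route using cv::BackgroundSubtractorMOG2 (the threshold value and other parameters are just reasonable defaults, not tuned for any particular scene):
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::VideoCapture capture(0);
    if (!capture.isOpened()) return -1;

    // one of OpenCV's built-in background subtractors
    cv::Ptr<cv::BackgroundSubtractorMOG2> subtractor = cv::createBackgroundSubtractorMOG2();

    cv::Mat frame, fgMask;
    std::vector<std::vector<cv::Point> > contours;
    while (capture.read(frame))
    {
        subtractor->apply(frame, fgMask);                            // update model, get foreground mask
        cv::threshold(fgMask, fgMask, 200, 255, cv::THRESH_BINARY);  // drop shadow pixels (marked as 127)
        cv::Mat maskCopy = fgMask.clone();
        cv::findContours(maskCopy, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        for (size_t i = 0; i < contours.size(); i++)
            cv::rectangle(frame, cv::boundingRect(contours[i]), cv::Scalar(0, 0, 255), 2);
        cv::imshow("Output Image", frame);
        cv::imshow("Foreground mask", fgMask);
        if (cv::waitKey(30) == 27) break;                            // Esc to exit
    }
    return 0;
}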
The code below converts OpenGL output to a JPEG image using libjpeg, but the resulting image is flipped vertically.
The code otherwise works fine, but the final image is flipped and I don't know why.
unsigned char *pdata = new unsigned char[width*height*3];
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pdata);
FILE *outfile;
if ((outfile = fopen("sample.jpeg", "wb")) == NULL) {
printf("can't open %s");
exit(1);
}
struct jpeg_compress_struct cinfo;
struct jpeg_error_mgr jerr;
cinfo.err = jpeg_std_error(&jerr);
jpeg_create_compress(&cinfo);
jpeg_stdio_dest(&cinfo, outfile);
cinfo.image_width = width;
cinfo.image_height = height;
cinfo.input_components = 3;
cinfo.in_color_space = JCS_RGB;
jpeg_set_defaults(&cinfo);
/*set the quality [0..100] */
jpeg_set_quality (&cinfo, 100, true);
jpeg_start_compress(&cinfo, true);
JSAMPROW row_pointer;
int row_stride = width * 3;
while (cinfo.next_scanline < cinfo.image_height) {
row_pointer = (JSAMPROW) &pdata[cinfo.next_scanline*row_stride];
jpeg_write_scanlines(&cinfo, &row_pointer, 1);
}
jpeg_finish_compress(&cinfo);
fclose(outfile);
jpeg_destroy_compress(&cinfo);
OpenGL's coordinate system has its origin in the lower left corner of the image, while libjpeg assumes the origin is in the upper left corner. Make the following change to fix your code:
while (cinfo.next_scanline < cinfo.image_height)
{
    row_pointer = (JSAMPROW) &pdata[(cinfo.image_height - 1 - cinfo.next_scanline) * row_stride];
    jpeg_write_scanlines(&cinfo, &row_pointer, 1);
}
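An alternative, if you would rather keep the write loop untouched, is to flip the pixel buffer in memory right after glReadPixels. A small sketch, assuming tightly packed rows (glPixelStorei(GL_PACK_ALIGNMENT, 1)) and 3 bytes per pixel:
#include <algorithm>
#include <vector>

// Swap rows top<->bottom in place; pdata is the buffer filled by glReadPixels.
void flipVertically(unsigned char* pdata, int width, int height)
{
    const int rowStride = width * 3;              // GL_RGB, tightly packed
    std::vector<unsigned char> tmp(rowStride);
    for (int y = 0; y < height / 2; ++y)
    {
        unsigned char* top    = pdata + y * rowStride;
        unsigned char* bottom = pdata + (height - 1 - y) * rowStride;
        std::copy(bottom, bottom + rowStride, tmp.data());
        std::copy(top, top + rowStride, bottom);
        std::copy(tmp.data(), tmp.data() + rowStride, top);
    }
}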
Is there a good way to display Unicode text in OpenGL under Windows? For example, when you have to deal with different languages. The most common approach, like
#define FONTLISTRANGE 128
GLuint list;
list = glGenLists(FONTLISTRANGE);
wglUseFontBitmapsW(hDC, 0, FONTLISTRANGE, list);
just won't do, because you can't create enough display lists for all Unicode characters.
You should also check out the FTGL library.
FTGL is a free cross-platform Open Source C++ library that uses Freetype2 to simplify rendering fonts in OpenGL applications. FTGL supports bitmaps, pixmaps, texture maps, outlines, polygon mesh, and extruded polygon rendering modes.
This project was dormant for a while, but it is recently back under development. I haven't updated my project to use the latest version, but you should check it out.
It allows for using any True Type Font via the FreeType font library.
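For orientation, a minimal FTGL usage sketch; the font path is just an example, a current GL context and a suitable 2D projection are assumed to be set up elsewhere, and the font must of course contain the glyphs you render:
#include <FTGL/ftgl.h>
#include <GL/gl.h>

// Draws a label at the given raster position using a pixmap font.
void drawLabel(float x, float y)
{
    static FTGLPixmapFont font("C:\\Windows\\Fonts\\arial.ttf");  // example path
    if (font.Error())
        return;                          // font failed to load
    font.FaceSize(24);                   // size in points (72 dpi by default)
    glRasterPos2f(x, y);                 // pixmap fonts render at the current raster position
    font.Render(L"Hello, Привет");       // wide-string overload covers non-ASCII glyphs
}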
I recommend reading this OpenGL font tutorial. It's for the D programming language but it's a nice introduction to various issues involved in implementing a glyph caching system for rendering text with OpenGL. The tutorial covers Unicode compliance, antialiasing, and kerning techniques.
D is pretty comprehensible to anyone who knows C++ and most of the article is about the general techniques, not the implementation language.
I'd recommend FTGL as already recommended above; however, I have implemented a FreeType/OpenGL renderer myself and thought you might find the code handy if you want to reinvent this wheel yourself. I'd really recommend FTGL though, it's a lot less hassle to use. :)
/*
 * glTextRender class by Semi Essessi
 *
 * FreeType2 empowered text renderer
 *
 */
#include "glTextRender.h"
#include "jEngine.h"
#include "glSystem.h"
#include "jMath.h"
#include "jProfiler.h"
#include "log.h"
#include <windows.h>
FT_Library glTextRender::ftLib = 0;
//TODO::maybe fix this so it uses wchar_t for the filename
glTextRender::glTextRender(jEngine* j, const char* fontName, int size = 12)
{
#ifdef _DEBUG
jProfiler profiler = jProfiler(L"glTextRender::glTextRender");
#endif
char fontName2[1024];
memset(fontName2,0,sizeof(char)*1024);
sprintf(fontName2,"fonts\\%s",fontName);
if(!ftLib)
{
#ifdef _DEBUG
wchar_t fn[128];
mbstowcs(fn,fontName,strlen(fontName)+1);
LogWriteLine(L"\x25CB\x25CB\x25CF Font: %s was requested before FreeType was initialised", fn);
#endif
return;
}
// constructor code for glTextRender
e=j;
gl = j->gl;
red=green=blue=alpha=1.0f;
face = 0;
// remember that for some weird reason below font size 7 everything gets scrambled up
height = max(6,(int)floorf((float)size*((float)gl->getHeight())*0.001666667f));
aHeight = ((float)height)/((float)gl->getHeight());
setPosition(0.0f,0.0f);
// look in base fonts dir
if(FT_New_Face(ftLib, fontName2, 0, &face ))
{
// if we don't have it, look in the Windows fonts dir
char buf[1024];
GetWindowsDirectoryA(buf,1024);
strcat(buf, "\\fonts\\");
strcat(buf, fontName);
if(FT_New_Face(ftLib, buf, 0, &face ))
{
//TODO::check in mod fonts directory
#ifdef _DEBUG
wchar_t fn[128];
mbstowcs(fn,fontName,strlen(fontName)+1);
LogWriteLine(L"\x25CB\x25CB\x25CF Request for font: %s has failed", fn);
#endif
face = 0;
return;
}
}
// FreeType uses 64x size and 72dpi for default
// doubling size for ms
FT_Set_Char_Size(face, mulPow2(height,7), mulPow2(height,7), 96, 96);
// set up cache table and then generate the first 256 chars and the console prompt character
for(int i=0;i<65536;i++)
{
cached[i]=false;
width[i]=0.0f;
}
for(unsigned short i = 0; i < 256; i++) getChar((wchar_t)i);
getChar(CHAR_PROMPT);
#ifdef _DEBUG
wchar_t fn[128];
mbstowcs(fn,fontName,strlen(fontName)+1);
LogWriteLine(L"\x25CB\x25CB\x25CF Font: %s loaded OK", fn);
#endif
}
glTextRender::~glTextRender()
{
// destructor code for glTextRender
for(int i=0;i<65536;i++)
{
if(cached[i])
{
glDeleteLists(listID[i],1);
glDeleteTextures(1,&(texID[i]));
}
}
// TODO:: work out stupid freetype crashz0rs
try
{
static int foo = 0;
if(face && foo < 1)
{
foo++;
FT_Done_Face(face);
face = 0;
}
}
catch(...)
{
face = 0;
}
}
// return true if init works, or if already initialised
bool glTextRender::initFreeType()
{
if(!ftLib)
{
if(!FT_Init_FreeType(&ftLib)) return true;
else return false;
} else return true;
}
void glTextRender::shutdownFreeType()
{
if(ftLib)
{
FT_Done_FreeType(ftLib);
ftLib = 0;
}
}
void glTextRender::print(const wchar_t* str)
{
// store old stuff to set start position
glPushAttrib(GL_TRANSFORM_BIT);
// get viewport size
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
gluOrtho2D(viewport[0],viewport[2],viewport[1],viewport[3]);
glPopAttrib();
float color[4];
glGetFloatv(GL_CURRENT_COLOR, color);
glPushAttrib(GL_LIST_BIT | GL_CURRENT_BIT | GL_ENABLE_BIT | GL_TRANSFORM_BIT);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glEnable(GL_TEXTURE_2D);
//glDisable(GL_DEPTH_TEST);
// set blending for AA
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glTranslatef(xPos,yPos,0.0f);
glColor4f(red,green,blue,alpha);
// call display lists to render text
glListBase(0u);
for(unsigned int i=0;i<wcslen(str);i++) glCallList(getChar(str[i]));
// restore old states
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glPopAttrib();
glColor4fv(color);
glPushAttrib(GL_TRANSFORM_BIT);
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glPopAttrib();
}
void glTextRender::printf(const wchar_t* str, ...)
{
if(!str) return;
wchar_t* buf = 0;
va_list parg;
va_start(parg, str);
// allocate buffer
int len = (_vscwprintf(str, parg)+1);
buf = new wchar_t[len];
if(!buf) return;
vswprintf(buf, str, parg);
va_end(parg);
print(buf);
delete[] buf;
}
GLuint glTextRender::getChar(const wchar_t c)
{
int i = (int)c;
if(cached[i]) return listID[i];
// load glyph and get bitmap
if(FT_Load_Glyph(face, FT_Get_Char_Index(face, i), FT_LOAD_DEFAULT )) return 0;
FT_Glyph glyph;
if(FT_Get_Glyph(face->glyph, &glyph)) return 0;
FT_Glyph_To_Bitmap(&glyph, FT_RENDER_MODE_NORMAL, 0, 1);
FT_BitmapGlyph bitmapGlyph = (FT_BitmapGlyph)glyph;
FT_Bitmap& bitmap = bitmapGlyph->bitmap;
int w = roundPow2(bitmap.width);
int h = roundPow2(bitmap.rows);
// convert to texture in memory
GLubyte* texture = new GLubyte[2*w*h];
for(int j=0;j<h;j++)
{
bool cond = j>=bitmap.rows;
for(int k=0;k<w;k++)
{
texture[2*(k+j*w)] = 0xFFu;
texture[2*(k+j*w)+1] = ((k>=bitmap.width)||cond) ? 0x0u : bitmap.buffer[k+bitmap.width*j];
}
}
// store char width and adjust max height
// note .5f
float ih = 1.0f/((float)gl->getHeight());
width[i] = ((float)divPow2(face->glyph->advance.x, 7))*ih;
aHeight = max(aHeight,(.5f*(float)bitmap.rows)*ih);
glPushAttrib(GL_LIST_BIT | GL_CURRENT_BIT | GL_ENABLE_BIT | GL_TRANSFORM_BIT);
// create gl texture
glGenTextures(1, &(texID[i]));
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texID[i]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, texture);
glPopAttrib();
delete[] texture;
// create display list
listID[i] = glGenLists(1);
glNewList(listID[i], GL_COMPILE);
glBindTexture(GL_TEXTURE_2D, texID[i]);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
// adjust position to account for texture padding
glTranslatef(.5f*(float)bitmapGlyph->left, 0.0f, 0.0f);
glTranslatef(0.0f, .5f*(float)(bitmapGlyph->top-bitmap.rows), 0.0f);
// work out texcoords
float tx=((float)bitmap.width)/((float)w);
float ty=((float)bitmap.rows)/((float)h);
// render
// note .5f
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex2f(0.0f, .5f*(float)bitmap.rows);
glTexCoord2f(0.0f, ty);
glVertex2f(0.0f, 0.0f);
glTexCoord2f(tx, ty);
glVertex2f(.5f*(float)bitmap.width, 0.0f);
glTexCoord2f(tx, 0.0f);
glVertex2f(.5f*(float)bitmap.width, .5f*(float)bitmap.rows);
glEnd();
glPopMatrix();
// move position for the next character
// note extra div 2
glTranslatef((float)divPow2(face->glyph->advance.x, 7), 0.0f, 0.0f);
glEndList();
// char is successfully cached for next time
cached[i] = true;
return listID[i];
}
void glTextRender::setPosition(float x, float y)
{
float fac = ((float)gl->getHeight());
xPos = fac*x+FONT_BORDER_PIXELS; yPos = fac*(1-y)-(float)height-FONT_BORDER_PIXELS;
}
float glTextRender::getAdjustedWidth(const wchar_t* str)
{
float w = 0.0f;
for(unsigned int i=0;i<wcslen(str);i++)
{
if(cached[str[i]]) w+=width[str[i]];
else
{
getChar(str[i]);
w+=width[str[i]];
}
}
return w;
}
You may have to generate your own "glyph cache" in texture memory as you go, potentially with some sort of LRU policy to avoid exhausting texture memory. Not nearly as easy as your current method, but it may be the only way given the number of Unicode characters.
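A minimal sketch of what such an LRU glyph cache could look like; the Glyph record and rasterize() are placeholders for the actual FreeType rasterization and texture upload:
#include <list>
#include <unordered_map>

// Placeholder glyph record; the real fields depend on how the renderer
// uploads glyphs (texture id, advance, bearing, texcoords, ...).
struct Glyph { unsigned int texId = 0; float advance = 0.0f; };

class GlyphCache
{
public:
    explicit GlyphCache(std::size_t capacity) : capacity_(capacity) {}

    // Returns the cached glyph, rasterizing on a miss and evicting the
    // least recently used entry when the cache is full.
    const Glyph& get(char32_t codepoint)
    {
        auto it = index_.find(codepoint);
        if (it != index_.end())
        {
            lru_.splice(lru_.begin(), lru_, it->second);   // hit: move to front
            return it->second->second;
        }
        if (lru_.size() >= capacity_)
        {
            // evict the least recently used glyph (free its texture here,
            // e.g. with glDeleteTextures, before dropping the record)
            index_.erase(lru_.back().first);
            lru_.pop_back();
        }
        lru_.emplace_front(codepoint, rasterize(codepoint));
        index_[codepoint] = lru_.begin();
        return lru_.front().second;
    }

private:
    // Stub: a real renderer would rasterize the glyph with FreeType
    // and upload the bitmap to a texture here.
    Glyph rasterize(char32_t) { return Glyph(); }

    std::size_t capacity_;
    std::list<std::pair<char32_t, Glyph>> lru_;
    std::unordered_map<char32_t, std::list<std::pair<char32_t, Glyph>>::iterator> index_;
};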
You should consider using a Unicode rendering library (e.g. Pango) to render the text into a bitmap and put that bitmap on the screen or into a texture.
Rendering Unicode text is not simple, so you cannot simply load 64K rectangular glyphs and use them.
Characters may overlap, e.g. in this smiley:
( ͡° ͜ʖ ͡°)
Some code points stack accents on the previous character. Consider this excerpt from this notable post:
...he com̡e̶s, ̕h̵is un̨ho͞ly radiańcé destro҉ying all
enli̍̈́̂̈́ghtenment, HTML tags lea͠ki̧n͘g fr̶ǫm ̡yo͟ur eye͢s̸ ̛l̕ik͏e
liquid pain, the song of re̸gular expression parsing will
extinguish the voices of mortal man from the sphere I can see it
can you see ̲͚̖͔̙î̩́t̲͎̩̱͔́̋̀ it is beautiful the final snuffing of
the lies of Man ALL IS LOŚ͖̩͇̗̪̏̈́T ALL IS LOST the pon̷y he comes
he c̶̮omes he comes the ichor permeates all MY FACE MY FACE ᵒh god no
NO NOO̼OO NΘ stop the an*̶͑̾̾̅ͫ͏̙̤g͇̫͛͆̾ͫ̑͆l͖͉̗̩̳̟̍ͫͥͨe̠̅s
͎a̧͈͖r̽̾̈́͒͑e not rè̑ͧ̌aͨl̘̝̙̃ͤ͂̾̆ ZA̡͊͠͝LGΌ ISͮ̂҉̯͈͕̹̘̱ TO͇̹̺ͅƝ̴ȳ̳
TH̘Ë͖́̉ ͠P̯͍̭O̚N̐Y̡ H̸̡̪̯ͨ͊̽̅̾̎Ȩ̬̩̾͛ͪ̈́̀́͘
̶̧̨̱̹̭̯ͧ̾ͬC̷̙̲̝͖ͭ̏ͥͮ͟Oͮ͏̮̪̝͍M̲̖͊̒ͪͩͬ̚̚͜Ȇ̴̟̟͙̞ͩ͌͝S̨̥̫͎̭ͯ̿̔̀ͅ
If you truly want to render Unicode correctly you should be able to render this one correctly too.
UPDATE: I looked at the Pango engine, and it's a case of the banana, the gorilla, and the entire jungle. First, it depends on GLib because it uses GObject; second, it cannot render directly into a byte buffer. It has Cairo and FreeType backends, so you must use one of them to render the text and eventually export it into bitmaps. That doesn't look good so far.
In addition to that, if you want to store the result in a texture, use pango_layout_get_pixel_extents after setting the text to get the sizes of the rectangles to render the text into. The ink rectangle is the rectangle that contains the entire text; its top-left position is relative to the top-left of the logical rectangle. (The bottom line of the logical rectangle is the baseline.) Hope this helps.
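For completeness, a small sketch of the Cairo-backend route via PangoCairo; the surface size, font, and color are arbitrary and error handling is omitted:
#include <pango/pangocairo.h>

// Renders a UTF-8 string into a Cairo image surface; the resulting ARGB32
// pixel buffer could then be uploaded as an OpenGL texture.
void renderTextToPixels(const char* utf8, int surfaceWidth, int surfaceHeight)
{
    cairo_surface_t* surface =
        cairo_image_surface_create(CAIRO_FORMAT_ARGB32, surfaceWidth, surfaceHeight);
    cairo_t* cr = cairo_create(surface);

    PangoLayout* layout = pango_cairo_create_layout(cr);
    pango_layout_set_text(layout, utf8, -1);

    PangoRectangle ink, logical;
    pango_layout_get_pixel_extents(layout, &ink, &logical);  // sizes for the texture

    cairo_set_source_rgba(cr, 1.0, 1.0, 1.0, 1.0);
    pango_cairo_show_layout(cr, layout);                      // draw the text
    cairo_surface_flush(surface);

    unsigned char* pixels = cairo_image_surface_get_data(surface);
    // ... upload 'pixels' here (premultiplied ARGB; typically GL_BGRA on little-endian) ...

    g_object_unref(layout);
    cairo_destroy(cr);
    cairo_surface_destroy(surface);
}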
QuesoGLC is great for this; I've used it to render Chinese and Cyrillic characters in 3D.
http://quesoglc.sourceforge.net/
The Unicode text sample it comes with should get you started.
You could also group the characters by language. Load each language table as needed, and when you need to switch languages, unload the previous language table and load the new one.
Unicode is supported in the title bar. I have just tried this on a Mac, and it ought to work elsewhere too. If you have (say) some imported data including text labels, and some of the labels just might contain Unicode, you could add a tool that echoes the label in the title bar.
It's not a great solution, but it is very easy to do.