OpenGL and wglUseFontBitmaps only draws equally spaced letters - winapi

The fonts I draw in OpenGL using wglUseFontBitmaps take equal space (width) for every letter, so a "." needs as much space as an "M", for example. I have changed the pitch parameter in the font creation to VARIABLE_PITCH, but it doesn't change anything. Can this actually be done with wglUseFontBitmaps, or is it in its nature that every generated bitmap takes equal space?
Furthermore, I am querying the width/height of the rasterized bitmap text with GetTextExtentPoint32W(). The returned width is fine, but the height is always too big. I am drawing a bright rectangle behind the black text to keep it readable within the 3D scene. Why is the queried height so big? Is space reserved for taller characters, something like "É"?
My Setup:
HFONT font; // Windows font handle
font = CreateFont(-12, // Height Of Font
    0, // Width Of Font
    0, // Angle Of Escapement
    0, // Orientation Angle
    FW_EXTRALIGHT, // Font Weight
    FALSE, // Italic
    FALSE, // Underline
    FALSE, // Strikeout
    ANSI_CHARSET, // Character Set Identifier
    OUT_TT_PRECIS, // Output Precision
    CLIP_DEFAULT_PRECIS, // Clipping Precision
    ANTIALIASED_QUALITY, // Output Quality
    FF_DONTCARE | VARIABLE_PITCH, // Family And Pitch
    L"Arial"); // Font Name
SelectObject(this->hDeviceContext, font);
// init display lists for text drawing
bool createFontLists = wglUseFontBitmaps(this->hDeviceContext, 0, 255, 1000);

I solved the problem myself: it only looks as if the font draws every letter with equal, monospaced width; in reality OpenGL drew a space " " between every character. The problem was the use of std::wstring when calling the display lists generated with wglUseFontBitmaps.
The wrong drawing looked like this:
int length = wcslen(text.c_str());
glCallLists(sizeof(wchar_t) * length, GL_UNSIGNED_BYTE, text.c_str());
The first parameter (the number of lists to call) is wrong; it has to be just length. The second parameter, which tells OpenGL how to interpret each element of text.c_str() as an offset selecting the next display list to call, is also wrong: it has to be GL_UNSIGNED_SHORT, which is 2 bytes covering 0-65535. With GL_UNSIGNED_BYTE, the zero high byte of every 2-byte character in the wstring was interpreted as its own list index and rendered as a space. So the correct call is:
glCallLists(length, GL_UNSIGNED_SHORT, text.c_str());
This works in my case, but it is not a general way to use Unicode characters with wglUseFontBitmaps; here it only safely covers the ASCII characters coming from the wstring.
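To see why GL_UNSIGNED_BYTE went wrong, here is a small illustration of the byte layout (Python used purely as a demonstration; on Windows a wchar_t string is UTF-16LE):

```python
# On Windows, wchar_t is 2 bytes, so the wstring "AB" is the byte
# sequence 41 00 42 00. Read as GL_UNSIGNED_BYTE, every trailing
# 00 becomes its own list index (rendered as a space); read as
# GL_UNSIGNED_SHORT, each 16-bit unit is one character code.
raw = "AB".encode("utf-16-le")

as_bytes = list(raw)                     # what GL_UNSIGNED_BYTE sees
as_shorts = [raw[i] | (raw[i + 1] << 8)  # what GL_UNSIGNED_SHORT sees
             for i in range(0, len(raw), 2)]

print(as_bytes)   # [65, 0, 66, 0]
print(as_shorts)  # [65, 66]
```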

Related

Does any image file format support negative floats?

I'm using OpenGL to implement some screen-space filters. For debugging purposes, I would like to save a bunch of textures so I can compare individual pixel values. The problem is that these 16-bit float textures have negative values. Do you know of any image file formats that support negative values? How could I export them?
Yes, there are some such formats ... What you need is to use non-clamped floating-point formats. This is what I am using:
GL_LUMINANCE32F_ARB
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, size, size, 0, GL_LUMINANCE, GL_FLOAT, data);
Here is an example:
raytrace through 3D mesh
In there I am passing geometry in a float texture that is not clamped, so it has full range, not just <0,1>. As you can see, the negative range is included too ...
There are a few others, but once I found one that worked there was no point looking for more ...
You can verify clamping using the GLSL debug prints
Also, IIRC there was some function to set the clamping behavior of OpenGL, but I never used it (if my memory serves well).
[Edit1] exporting to file
This is a problem: unless you are exporting to ASCII or math matrix files, I do not know of any format supporting negative pixel values ...
So you need to work around it; simply convert the range to non-negative values like this:
float max = ???; // some value safely bigger than the biggest abs value in your images
for (each pixel x,y)
{
    pixel(x,y) = 0.5*(max + pixel(x,y));
}
and then convert back if needed:
for (each pixel x,y)
{
    pixel(x,y) = 2.0*pixel(x,y) - max;
}
Another common way is to normalize to the range <0,1>, which has the advantage that you can infer a 3D direction from the RGB colors ... (like on normal/bump maps):
pixel(x,y) = 0.5 + 0.5*pixel(x,y)/max;
and back
pixel(x,y) = 2.0*max*pixel(x,y)-max;
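The two mappings above are exact inverses of each other; a quick numeric check (Python used just as a calculator, with `bound` playing the role of `max`):

```python
# Normalize signed values in [-bound, bound] to [0, 1] and back,
# mirroring the pixel(x,y) formulas above.
def encode(p, bound):
    return 0.5 + 0.5 * p / bound

def decode(p, bound):
    return 2.0 * bound * p - bound

bound = 4.0
data = [-3.5, 0.0, 1.25]
encoded = [encode(p, bound) for p in data]      # all values land in [0, 1]
restored = [decode(p, bound) for p in encoded]  # round-trips back to data
```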

HTML canvas fillRect with low opacity doesn't affect dark pixels

Repeatedly drawing a semi-opaque black rectangle over the entire canvas before each animation frame is an easy way to get an afterimage effect for moving shapes, and it gives me exactly what I need, up to a point: with too slow a fade it doesn't fade all the way to black. Here's an example:
var canv = document.createElement('canvas');
document.body.appendChild(canv);
var ctx = canv.getContext('2d');
ctx.fillStyle = 'rgba(0, 0, 0, 1)';
ctx.fillRect(0, 0, 100, 100);
ctx.fillStyle = 'rgba(255, 255, 255, 1)';
ctx.fillRect(20, 20, 60, 60);
window.requestAnimationFrame(doFade);
function doFade() {
    // never fades away completely
    ctx.fillStyle = 'rgba(0, 0, 0, 0.02)';
    ctx.fillRect(20, 20, 60, 60);
    window.requestAnimationFrame(doFade);
}
jsfiddle
This looks to me like a numeric precision problem - you can't expect the canvas to keep floating point pixel values around - but I'm not sure how to get around this.
I tried reading the image into a pattern, blanking the canvas, and then filling with the pattern at lower opacity in the hope that I could make rounding error work in my favor, but it seems to have the same result.
Short of reading out the image data and setting to black any pixels below a certain threshold, which would be prohibitively slow, I'm running out of ideas and could use some suggestions.
Thanks!
I thought I'd share my solution for the benefit of anyone else who runs into this problem. I was hoping to avoid pixel-level manipulation, but beyond a certain threshold it's simply not possible with the built-in canvas operations: the underlying bitmap is only 8 bits per channel, so a small fade works out to less than one least-significant bit and has no effect on the image data.
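The quantization stall is easy to reproduce outside the browser. A sketch (Python; real canvas compositing may round slightly differently, but the fixed point below one LSB is the same effect):

```python
# Applying a 2% fade-to-black to an 8-bit channel value: once
# value * alpha drops below half an LSB, rounding undoes the fade
# and the value never reaches 0.
def fade(value, alpha=0.02):
    return round(value * (1 - alpha))

v = 200
for _ in range(500):
    v = fade(v)

print(v)  # stalls at a nonzero floor where fade(v) == v
```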
My solution was to create an array representing the age of each pixel. After each frame is drawn, I scan the imageData array, looking only at the alpha channel. If the alpha is 255 I know the pixel has just been written, so I set the age to 0 and set the alpha to 254. For any other non-zero alpha values, I increment the pixel age and then set the new alpha based on the pixel age.
The mapping of pixel age to alpha value is done with a lookup table that's populated when the fade rate is set. This lets me use whatever decay curve I want without extra math during the rendering loop.
The CPU utilization is a bit higher, but it's not too much of a performance hit and it can do smooth fades over several seconds and always fades entirely to black eventually.
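The bookkeeping described above can be sketched in a few lines (plain Python over a flat alpha array rather than real imageData; FADE_FRAMES and the linear decay curve are illustrative choices):

```python
FADE_FRAMES = 10

# Precomputed age -> alpha lookup table; any decay curve works,
# since it is only evaluated once when the fade rate is set.
TABLE = [max(0, round(254 * (1 - age / FADE_FRAMES)))
         for age in range(FADE_FRAMES + 1)]

def fade_pass(alphas, ages):
    """One post-frame scan over the alpha channel."""
    for i, a in enumerate(alphas):
        if a == 255:                # pixel written this frame
            ages[i] = 0
            alphas[i] = 254
        elif a > 0:                 # aging pixel
            ages[i] = min(ages[i] + 1, FADE_FRAMES)
            alphas[i] = TABLE[ages[i]]

alphas, ages = [255, 0], [0, 0]
for _ in range(FADE_FRAMES + 2):
    fade_pass(alphas, ages)
# the freshly drawn pixel decays all the way to 0; the blank pixel stays 0
```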

Leptonica cropping image on rotation

PIX* returnRotatedImage(PIX* image, float theta)
{
    PIX* rotated = pixRotate(image, -theta, L_ROTATE_AREA_MAP, L_BRING_IN_BLACK, image->w, image->h);
    return rotated;
}
When I execute the above code on an image, the resulting image has the edges cut off.
Example: the original scan, followed by the image after being run through the above function to rotate it by ~89 degrees.
I don't have 10 reputation yet, so I can't embed the images, but here's a link to the two pictures: http://imgur.com/a/y7wAn
I need it to work for arbitrary angles as well (not just angles close to 90 degrees), so unfortunately the solution presented here won't work.
The description for the pixRotate function says:
* (6) The dest can be expanded so that no image pixels
* are lost. To invoke expansion, input the original
* width and height. For repeated rotation, use of the
* original width and height allows the expansion to
* stop at the maximum required size, which is a square
* with side = sqrt(w*w + h*h).
however, it seems to expand the destination only after the rotation, so the pixels are lost even though the final image size is correct. If I use pixRotate(..., 0, 0) instead of pixRotate(..., w, h), I end up with the image rotated within the original dimensions: http://i.imgur.com/YZSETl5.jpg.
Am I interpreting the pixRotate function description incorrectly? Is what I want to do even possible? Or is this possibly a bug?
Thanks in advance.
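For reference, the size needed so that no pixels are lost for a w×h image rotated by θ is the rotated bounding box, which never exceeds the sqrt(w*w + h*h) square the doc comment mentions. A quick sketch of the geometry (not Leptonica's actual code):

```python
import math

def rotated_bbox(w, h, theta):
    """Bounding box of a w-by-h rectangle rotated by theta radians."""
    c, s = abs(math.cos(theta)), abs(math.sin(theta))
    return (w * c + h * s, w * s + h * c)

# 0 degrees leaves the size unchanged; 90 degrees swaps the sides;
# intermediate angles need more room than either, up to the diagonal.
print(rotated_bbox(100, 50, 0))  # (100.0, 50.0)
```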

Choosing font size on a text path

I'm working on a radial menu (for a game) and I'm having some trouble with getting text along paths to behave exactly how I'd like it to. Here's an example of the menu at the moment:
I want all the text to be aligned to the center of the node it's in, and the fontSize to be sufficiently small for the text to fit into the available space. This is achieved fairly trivially for the straight text by scaling the font and measuring the width until it fits:
var fontSize = 19;
var titleText = new Kinetic.Text({
    text: title,
    fontSize: fontSize,
    fontFamily: 'Calibri',
    fill: 'white',
    rotationDeg: rotation
});
while (titleText.getWidth() > availableSpace)
    titleText.setFontSize(--fontSize);
However, this approach can't be applied to the curved text because (as far as I can see) there's no way to measure how long a string is when placed along a path.
How should I achieve centering and scaling of text when it is placed along a path?
Here is a hack in terms of centering, but you get the idea:
http://jsfiddle.net/ysaLp/2/
Basically, for each item in your list, you want to prepend blank spaces, the number depending on the total space available in the arc.
You can do (again, inside a loop):
textPathObjectName.setText(' ' + textPathObjectName.getText()); // dependent on the width of the text ( .getTextWidth() )
In terms of resizing you get: http://jsfiddle.net/ysaLp/5/
for (var i = 1; i < 5 && titleText.getTextWidth() > circumference; i++)
    titleText.setFontSize(titleText.getFontSize() - i);
which lowers your font size until it "fits" into your wedge's circumference.
It also helps to start your font at a lower value; since you are calculating the path data right after, set the font size to 15 first.

How to shift pixels of a pixmap efficient in Qt4

I have implemented a marquee text widget using Qt4. I paint the text content onto a pixmap first, and then paint a portion of that pixmap onto a paint device by calling painter.drawTiledPixmap(offsetX, offsetY, myPixmap).
My expectation is that Qt will fill the whole marquee text rectangle with the content from myPixmap.
Is there an even faster way to shift all existing content to the left by 1 px and then fill the newly exposed 1-px-wide and N-px-high area with the content from myPixmap?
Well, this is a trick I used on slower hardware back in the old days. Basically, the image buffer is allocated twice as wide as needed, with one extra line at the beginning. Build the image in the left half of the buffer, then draw it repeatedly, advancing the start pointer one pixel at a time through the buffer.
int w = 200;
int h = 100;
int rowBytes = w * sizeof(QRgb) * 2; // line buffer is twice the width
QByteArray buffer(rowBytes * (h + 1), 0xFF); // 1 more line than the height
uchar * p = (uchar*)buffer.data() + rowBytes; // start drawing the image content at the 2nd line
QImage image(p, w, h, rowBytes, QImage::Format_RGB32); // 1st line is used as padding at the start of the scroll
image.fill(qRgb(255, 0, 0)); // well, do something to the image
p = image.bits() - rowBytes / 2; // start scrolling at the middle of the 1st (blank) line
for(int i = 0; i < w; ++i, p += sizeof(QRgb)) {
    QImage scroll(p, w, h, rowBytes, QImage::Format_RGB32); // scroll 1 pixel at a time
    scroll.save(QString("%1.png").arg(i));
}
I am not sure this will be any faster than just changing the offset of the image and drawing it straight. The hardware today is really powerful, which renders a lot of old tricks useless. But it's fun to play with obscure tricks. :)
Greetings,
One possibility to achieve this would be to:
Create a QGraphicsScene + View and put the pixmap on that twice (as QGraphicsPixmapItem), so they are right next to each other.
Size the view to fit the size of the (one) pixmap.
Then, instead of repainting the pixmap, you simply reposition the view's viewport, moving from one pixmap to the next.
Jump back at the end to create the loop.
This may or may not be faster (in terms of performance) - I have not tested it. But may be worth a try, if only for the sake of experiment.
Your approach is probably one of the fastest ones since you use low-level painting methods. You can implement an intermediate approach between low-level painting and the QGraphicsScene option: use a scroll area containing a label.
Here is a sample of code that creates a new scroll area containing a text label. You can scroll the label automatically using a QTimer to trigger the scrolling effect, which gives you a nice marquee widget.
QScrollArea *scrollArea = new QScrollArea();
// ensure that scroll bars never show
scrollArea->setVerticalScrollBarPolicy(Qt::ScrollBarAlwaysOff);
scrollArea->setHorizontalScrollBarPolicy(Qt::ScrollBarAlwaysOff);
QLabel *label = new QLabel("your scrolling text");
// resize the scroll area: 50 px wide, with a height equal to its content's height
scrollArea->resize(50, label->size().height());
scrollArea->setWidget(label);
label->show(); // optional if the scroll area is not yet visible
The text label inside the scroll area can then be moved from left to right by one pixel using QScrollArea::scrollContentsBy(int dx, int dy) with a dx parameter equal to -1.
Why not just do it on a pixel-by-pixel basis? Because of the way caches work, copy each pixel onto the one before it, walking forward until you get to the end; then fill the final column by reading from your other image.
It's then pretty easy to SIMD-optimise as well, though you start getting into per-platform optimisations at that point.
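A sketch of that per-pixel shift on a plain 2D array (Python for illustration; `source` and `src_col` are hypothetical names for the image supplying the newly exposed column):

```python
def shift_left(frame, source, src_col):
    """Shift every row one pixel left, filling the exposed last
    column from `source`. The forward pass writes each pixel from
    its right neighbour, which keeps memory access sequential and
    cache-friendly, and is easy to widen with SIMD."""
    w = len(frame[0])
    for y, row in enumerate(frame):
        for x in range(w - 1):
            row[x] = row[x + 1]
        row[w - 1] = source[y][src_col]

frame = [[1, 2, 3],
         [4, 5, 6]]
src = [[9], [8]]
shift_left(frame, src, 0)
print(frame)  # [[2, 3, 9], [5, 6, 8]]
```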
