I am doing frame-based animation for 300 frames in OpenGL ES 2.0.
I want a rectangle to translate by +200 pixels along the X axis and also scale up to double its size (2x) over the first 100 frames.
For example, the initial value (frame 0) of the rectangle's centre point is at 100 pixels on the screen (i.e. rectCenterX = 100);
At frame 100, rectCenterX = 300 (100 + 200) pixels, and the rectangle is double its original size.
Then the animated rectangle has to stay there for the next 100 frames (without any animation), i.e. rectCenterX = 300 pixels for frames 101 to 200, with the size still doubled:
At frame 101, rectCenterX = 300 pixels; the rectangle is double its original size.
At frame 200, rectCenterX = 300 pixels; the rectangle is double its original size.
Then I want the same rectangle to translate by another +200 pixels along the X axis and scale back down by half (0.5x) over the last 100 frames.
At frame 300, rectCenterX = 500 pixels; the rectangle is back to its original size.
I am using simple linear interpolation to calculate the delta-animation value for each frame (see the sketch after the table below).
In short:
Animation-Type   Animation-Value   Start-Frame   End-Frame
1. Translate     +200 px           0             100
2. Scale         2x                0             100
3. Translate     +200 px           201           300
4. Scale         0.5x              201           300
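To make the intended per-frame values concrete, here is a minimal sketch of the interpolation (illustrative helper names, not my actual code):

// Absolute center X and scale for a given frame, per the three phases above.
float lerp(float a, float b, float t) { return a + (b - a) * t; }

void valuesForFrame(int frame, float *rectCenterX, float *rectScale)
{
    if (frame <= 100) {                      // frames 0..100: move +200, scale 1 -> 2
        float t = frame / 100.0f;
        *rectCenterX = lerp(100.0f, 300.0f, t);
        *rectScale   = lerp(1.0f, 2.0f, t);
    } else if (frame <= 200) {               // frames 101..200: hold
        *rectCenterX = 300.0f;
        *rectScale   = 2.0f;
    } else {                                 // frames 201..300: move +200, scale 2 -> 1
        float t = (frame - 200) / 100.0f;
        *rectCenterX = lerp(300.0f, 500.0f, t);
        *rectScale   = lerp(2.0f, 1.0f, t);
    }
}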
Pseudo code:
The drawFrame() below is executed 300 times (once per frame) in a loop.
float RectMVMatrix[4][4] = { {1, 0, 0, 0},
                             {0, 1, 0, 0},
                             {0, 0, 1, 0},
                             {0, 0, 0, 1} }; // identity matrix
int totalFrames = 300;
float translateDelta; // interpolated translation value for each frame
float scaleDelta;     // interpolated scale value for each frame

// The usual code for drawing is:
void drawFrame(int iCurrentFrame)
{
    // mySetIdentity(RectMVMatrix); // commented out to retain the animated position
    mytranslate(RectMVMatrix, translateDelta, X_AXIS); // translate the MV matrix along the X axis by translateDelta
    myscale(RectMVMatrix, scaleDelta); // scale the MV matrix by scaleDelta
    ... // other OpenGL calls
    glDrawArrays(...);
    eglSwapBuffers(...);
}
The above code works fine for the first 100 frames. In order to retain the animated rectangle during frames 101 to 200, I removed the "mySetIdentity(RectMVMatrix);" call in the above drawFrame().
Now, on entering drawFrame() for the 2nd frame, RectMVMatrix still holds the animated values of the first frame,
e.g. RectMVMatrix[4][4] = { {1.01, 0, 0, 2},
                            {0,    1, 0, 0},
                            {0,    0, 1, 0},
                            {0,    0, 0, 1} }; // 2 pixels of translation and 1.01x scaling after the first frame
This RectMVMatrix is used by mytranslate() in the 2nd frame. The translate function also modifies the value of RectMVMatrix[0][0], so the translation disturbs the scaling values, and eventually the output is wrong.
How can I retain the animated position without corrupting the current ModelView matrix?
Although a direct answer would be to save the matrix you will need later, I want to propose a different system, as yours is extremely unclean: even if you get it working for this animation, you may have problems maintaining this code, adding animations, or changing it.
To make an object animatable in terms of position, scale, rotation, or even its whole matrix, I suggest you store each of those parameters in the object itself (on the CPU). Then, on each and every frame, set the matrix to identity and recreate it from the parameters the object owns.
So, for instance, an object would have an animatable vector called position. What you need is to hold 3 position parameters: positionCurrent, positionStart, positionEnd. On the object you then call something like animateWithFrameCount; as this method is called, you set:
positionStart = positionCurrent;
positionEnd = input;
currentFrame = 0;
frameCount = inputFrameCount;
Then implement a method such as onFrame, where:
currentFrame++;
positionCurrent = positionStart + (positionEnd - positionStart) * ((float)currentFrame / frameCount); // linear interpolation (cast to avoid integer division)
if (currentFrame >= frameCount) {
    currentFrame = 0;
    frameCount = 0;
    positionStart = positionCurrent = positionEnd;
}
If you implement this system for all the animatable parameters you want, all you now need is a way to get the model matrix, which again should be a method on the model (pseudocode):
Matrix16f getModelMatrix() {
    Matrix16f toReturn = Matrix16f.identity();
    toReturn.translate(positionCurrent);
    toReturn.scale(scaleCurrent);
    ...
    return toReturn;
}
So now your drawFrame becomes quite straightforward. On all your models you call onFrame, set the matrix to identity (or maybe some other base matrix), multiply the current matrix with the one received from the model, and then just draw the thing. No issues with retaining state, no conditional statements in your main code...
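To make that concrete, here is a minimal C sketch of such a system (all names are illustrative, and only an X position plus a uniform scale are animated; a real version would cover vectors, rotation, etc.):

typedef struct {
    float posStart, posEnd, posCurrent;       /* animatable X position (pixels) */
    float scaleStart, scaleEnd, scaleCurrent; /* animatable uniform scale */
    int currentFrame, frameCount;             /* frameCount == 0 means idle */
} AnimObject;

void animateTo(AnimObject *o, float pos, float scale, int frames)
{
    o->posStart = o->posCurrent;     o->posEnd = pos;
    o->scaleStart = o->scaleCurrent; o->scaleEnd = scale;
    o->currentFrame = 0;
    o->frameCount = frames;
}

void onFrame(AnimObject *o)
{
    if (o->frameCount == 0)
        return;                               /* idle: hold the current values */
    o->currentFrame++;
    float t = (float)o->currentFrame / (float)o->frameCount;
    o->posCurrent = o->posStart + (o->posEnd - o->posStart) * t;         /* lerp */
    o->scaleCurrent = o->scaleStart + (o->scaleEnd - o->scaleStart) * t;
    if (o->currentFrame >= o->frameCount) {   /* animation finished */
        o->currentFrame = 0;
        o->frameCount = 0;
        o->posStart = o->posCurrent = o->posEnd;
        o->scaleStart = o->scaleCurrent = o->scaleEnd;
    }
}

/* Rebuilt from scratch every frame, so no error accumulates in the matrix. */
void getModelMatrix(const AnimObject *o, float m[16])
{
    for (int i = 0; i < 16; i++)
        m[i] = 0.0f;
    m[0] = m[5] = m[10] = o->scaleCurrent;    /* uniform scale on the diagonal */
    m[12] = o->posCurrent;                    /* X translation (column-major) */
    m[15] = 1.0f;
}

For the question's 300-frame timeline: initialize posCurrent = 100 and scaleCurrent = 1, call animateTo(&rect, 300, 2, 100) at frame 0 and animateTo(&rect, 500, 1, 100) at frame 200; during frames 101-200 frameCount is 0, so the rectangle simply holds its pose.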
I have a total of two textures. The first is used as a framebuffer to work with inside a compute shader, and is later blitted using BlitFramebuffer(...). The second is supposed to be an OpenGL array texture, which is used to look up textures and copy them onto the framebuffer. It's created in the following way:
var texarray uint32
gl.GenTextures(1, &texarray)
gl.ActiveTexture(gl.TEXTURE0 + 1)
gl.BindTexture(gl.TEXTURE_2D_ARRAY, texarray)
gl.TexParameteri(gl.TEXTURE_2D_ARRAY, gl.TEXTURE_MIN_FILTER, gl.LINEAR)
gl.TexImage3D(
    gl.TEXTURE_2D_ARRAY,
    0,        // mipmap level
    gl.RGBA8, // internal format
    16,       // width
    16,       // height
    22*48,    // depth (number of array layers)
    0,        // border
    gl.RGBA, gl.UNSIGNED_BYTE,
    gl.Ptr(sheet.Pix))
gl.BindImageTexture(1, texarray, 0, false, 0, gl.READ_ONLY, gl.RGBA8)
sheet.Pix is just the pixel array of an image loaded as an *image.NRGBA.
The compute shader looks like this:
#version 430
layout(local_size_x = 1, local_size_y = 1) in;
layout(rgba32f, binding = 0) uniform image2D img;
layout(binding = 1) uniform sampler2DArray texAtlas;

void main() {
    ivec2 iCoords = ivec2(gl_GlobalInvocationID.xy);
    vec4 c = texture(texAtlas, vec3(iCoords.x%16, iCoords.y%16, 7));
    imageStore(img, iCoords, c);
}
When I run the program, however, the result is just a window filled with a single uniform color (screenshot omitted).
So my question is: what did I do wrong during the shader creation, and what needs to be corrected?
For any open code questions, here's the corresponding repo.
vec4 c = texture(texAtlas, vec3(iCoords.x%16, iCoords.y%16, 7))
That can't work. texture samples the texture at normalized coordinates, so the coordinates are in [0,1] (in the s/t domain; the third dimension is the layer index and is correct here). Coordinates outside of that range are handled via the GL_TEXTURE_WRAP_... modes you specify (repeat, clamp to edge, clamp to border). Since int % 16 is always an integer, and even with repetition only the fractional part of the coordinate matters, you are basically sampling the same texel over and over again.
If you need full texture sampling (texture filtering, sRGB conversions etc.), you have to use normalized coordinates instead. But if you only want to access individual texel data, you can use texelFetch and keep the integer coordinates, e.g. vec4 c = texelFetch(texAtlas, ivec3(iCoords.x % 16, iCoords.y % 16, 7), 0);
Note: since you set the texture filter to GL_LINEAR, you seem to want filtering; however, your coordinates appear as if you want to access the texel centers. So if you're going the texture route, then vec3(vec2(iCoords.xy) / vec2(16.0) + vec2(1.0/32.0), layer) would be the proper normalization to reach the texel centers (together with GL_REPEAT), but then the GL_LINEAR filtering would yield results identical to GL_NEAREST.
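(For reference, the 1.0/32.0 term above is just the half-texel offset: the center of texel i in an n-texel dimension lies at (i + 0.5)/n. A tiny C helper, purely for illustration:)

/* Normalized coordinate of the center of texel i in an n-texel dimension. */
float texelCenter(int i, int n)
{
    return (i + 0.5f) / (float)n;   /* for n == 16 this is i/16.0 + 1.0/32.0 */
}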
class Particle {
  PVector velocity, location; // PVector variables for each particle.

  Particle() { // Constructor - random location and speed for each particle.
    velocity = new PVector(random(-0.5, 0.5), random(-0.5, 0.5));
    location = new PVector(random(0, width), random(0, height));
  }

  void update() { location.add(velocity); } // Motion method.

  void edge() { // Wraparound case for particles.
    if (location.x > width) { location.x = 0; }
    else if (location.x < 0) { location.x = width; }
    if (location.y > height) { location.y = 0; }
    else if (location.y < 0) { location.y = height; }
  }

  void display(ArrayList<Particle> p) { // Display method to show lines and ellipses between particles.
    for (Particle other : p) { // For every particle in the ArrayList.
      float d = PVector.dist(location, other.location); // Get the distance between any two particles.
      float a = 255 - d * 2.5; // Map 'a' as alpha based on distance. E.g. if distance is high, d = 90, alpha is low: a = 255 - 225 = 30.
      println("Lowest distance of any two particles = " + d); // Debug output.
      if (d < 112) { // If the distance between two particles falls below 112.
        noStroke(); // No outline.
        fill(0, a); // Particles are coloured black, with 'a' varying the alpha.
        ellipse(location.x, location.y, 8, 8); // Draw an ellipse at this particle's location.
        stroke(0, a); // Lines are coloured black, with 'a' varying the alpha.
        strokeWeight(0.7);
        line(location.x, location.y, other.location.x, other.location.y); // Draw a line between the two particles.
      }
    }
  }
}
ArrayList<Particle> particles = new ArrayList<Particle>(); // Create a new ArrayList of type Particle.

void setup() {
  size(640, 640, P2D); // Set up the sketch frame.
  particles.add(new Particle()); // Add five Particle elements to the ArrayList.
  particles.add(new Particle());
  particles.add(new Particle());
  particles.add(new Particle());
  particles.add(new Particle());
}

void draw() {
  background(255); // Set a white background.
  for (Particle p : particles) { // For every 'p' of type Particle in the ArrayList.
    p.update(); // Update location based on velocity.
    p.display(particles); // Display each particle in relation to the other particles.
    p.edge(); // Wrap around if the particle reaches the edge of the screen.
  }
}
In the above code there are two shape objects, lines and ellipses, whose transparency is driven by the variable 'a'.
The variable 'a' (alpha) is derived from the distance 'd', so the farther apart two objects are, the lower their alpha value.
In this scenario, the lines' alpha varies with distance as intended (fading as the particles separate). However, the ellipses seem to be stuck at alpha 255 despite having very similar code.
If the value of 'a' is hardcoded, e.g.
if (d < 112) { // If the distance between two particles falls below 112.
  noStroke(); // No outline.
  fill(0, 100); // Particles are coloured black, with alpha hardcoded to 100: a grey tint.
  ellipse(location.x, location.y, 8, 8); // Draw an ellipse at this particle's location.
the ellipses change colour as expected to a grey tint.
Edit: I believe I have found the root of the issue. The variable 'a' does not discriminate between the particles being iterated over, so each particle's ellipse is redrawn once per neighbour and the alpha might be stacking up to 255.
You're going to have to post an MCVE. Note that this should not be your entire sketch, just a few hard-coded lines so we're all working from the same code. We should be able to copy and paste your code into our own machines to see the problem. Also, please try to properly format your code. Your lack of indentation makes your code hard to read.
That being said, I can try to help in a general sense. First of all, you're printing out the value of a, but you haven't told us what its value is. Is its value what you expect? If so, are you clearing out previous frames before drawing the ellipses, or are you drawing them on top of previously drawn ellipses? Are you drawing ellipses elsewhere in your code?
Start over with a blank sketch, and add just enough lines to show the problem. Here's an example MCVE that you can work from:
stroke(0);
fill(0);
ellipse(25, 25, 25, 25);
line(0, 25, width, 25);
stroke(0, 128);
fill(0, 128);
ellipse(75, 75, 25, 25);
line(0, 75, width, 75);
This code draws a black line and ellipse, then draws a transparent line and ellipse. Please hardcode the a value from your code, or add just enough code so we can see exactly what's going on.
Edit: Thanks for the MCVE. Your updated code still has problems. I don't understand this loop:
for (Particle other : p) { // For every particle in the ArrayList.
  float d = PVector.dist(location, other.location); // Get the distance between any two particles.
  float a = 255 - d * 2.5; // Map 'a' as alpha based on distance.
  println("Lowest distance of any two particles = " + d); // Debug output.
  if (d < 112) { // If the distance between two particles falls below 112.
    noStroke(); // No outline.
    fill(0, a); // Particles are coloured black, with 'a' varying the alpha.
    ellipse(location.x, location.y, 8, 8); // Draw an ellipse at this particle's location.
    stroke(0, a); // Lines are coloured black, with 'a' varying the alpha.
    strokeWeight(0.7);
    line(location.x, location.y, other.location.x, other.location.y); // Draw a line between the two particles.
  }
}
}
You're saying for each Particle, you loop through every Particle and then draw an ellipse at the current Particle's location? That doesn't make any sense. If you have 100 Particles, that means each Particle will be drawn 100 times!
If you want each Particle's color to be based off its distance to the closest other Particle, then you need to modify this loop to simply find the closest Particle, and then base your calculations off of that. It might look something like this:
Particle closestNeighbor = null;
float closestDistance = 100000;
for (Particle other : p) { // For every particle in the ArrayList.
  if (other == this) {
    continue;
  }
  float d = PVector.dist(location, other.location);
  if (d < closestDistance) {
    closestDistance = d;
    closestNeighbor = other;
  }
}
Notice the if (other == this) { section. This is important, because otherwise you'll be comparing each Particle to itself, and the distance will be zero!
Once you have the closestNeighbor and the closestDistance, you can do your calculations, e.g. float a = constrain(255 - closestDistance * 2.5, 0, 255); so the alpha can never leave the valid range.
Note that you're only drawing particles when they have a neighbor that's closer than 112 pixels away. Is that what you want to be doing?
If you have a follow-up question, please post an updated MCVE in a new question. Constantly editing the question and answer gets confusing, so just ask a new question if you get stuck again.
I'm using an orthographic projection.
I have 2 triangles forming one long quad.
On this quad I put a texture that repeats along its length.
The world zoom is constantly being changed by the user, which makes the quad shorter or longer accordingly. The height is calculated in the shader so it is always the same size (in pixels).
My problem is that I want the texture to repeat according to its real (pixel) size and the length of the quad. In other words, the texture should always stay the same size in pixels, filling the quad with more or fewer repetitions depending on the quad's length.
The rotation is important.
For example, my texture is a small pattern (image omitted). I've added texture coordinates to my vertices that duplicate it 20 times, as shown in the screenshots below (also omitted).
Because the view is zoomed too far out, the texture appears squeezed. When I zoom in, the texture is stretched. Either way it always repeats exactly 20 times.
I'm sure I have to play with the texture coordinates in the fragment shader, but I don't see the solution; or perhaps there is a better approach to my problem.
---- ADDITION ----
Solved it by calculating the repeat S value at the current zoom (when adding the vertices) and sending the map width (in world units) as an attribute. On every draw I send the current map width as a uniform for calculating the scale.
But I'm not happy with this solution.
OK, found a way to do it with a minimum of attributes and minimal code in the shader.
Do once:
Calculate the repeat count for each line. As my world and my screen are 1:1 (1 world unit is 1 pixel), that is lineDistance(inWorldUnits) / picWidth(inScreenPixels).
Save it as an attribute.
Every draw:
Calculate the world-to-screen scale, worldWidth / screenWidth, and set it as a uniform.
Draw the buffer.
In the fragment shader, simply multiply this scale with the repeat attribute.
Works perfectly and looks good. Resizing the window is supported as well.
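As a sketch of that arithmetic in C (illustrative names; it assumes, as above, that 1 world unit equals 1 screen pixel at 1:1 zoom):

/* Computed once per line and stored as a vertex attribute. */
float repeatCount(float lineLengthWorld, float patternWidthPx)
{
    return lineLengthWorld / patternWidthPx;
}

/* Recomputed on every draw and uploaded as a uniform. */
float worldToScreenScale(float worldWidth, float screenWidth)
{
    return worldWidth / screenWidth;
}

/* The fragment shader multiplies repeatCount by worldToScreenScale to get
   the final S coordinate scale, so the on-screen pattern size stays constant
   while the number of visible repeats follows the zoom. */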
The general solution is to include a texture matrix. So your vertex shader might look something like
attribute vec4 a_position;
attribute vec2 a_texcoord;
varying vec2 v_texcoord;
uniform mat4 u_matrix;
uniform mat4 u_texMatrix;

void main() {
    gl_Position = u_matrix * a_position;
    v_texcoord = (u_texMatrix * vec4(a_texcoord, 0, 1)).xy;
}
Now you can set up the texture matrix to scale your texture coordinates however you need. If your texture coordinates go from 0 to 1 across the texture and your pattern is 16 pixels wide, then for a line 100 pixels long you'd need 100/16 as your X scale.
var pixelsLong = 100;
var pixelsTall = 8;
var textureWidth = 16;
var textureHeight = 16;

var xScale = pixelsLong / textureWidth;
var yScale = pixelsTall / textureHeight;

var texMatrix = [
    xScale, 0, 0, 0,
    0, yScale, 0, 0,
    0, 0, 1, 0,
    0, 0, 0, 1,
];

gl.uniformMatrix4fv(texMatrixLocation, false, texMatrix);
That seems like it would work. And because you're using a matrix, you can also easily offset or rotate the texture. See matrix math.
I'm working on an app that creates its own texture atlas. The elements on the atlas can vary in size but are placed in a grid pattern.
It's all working fine except that when I write a new element over a section of the atlas (the data from an NSImage), the image is shifted a pixel to the right.
The code I'm using to write the pixels onto the atlas is:
-(void)writeToPlateWithImage:(NSImage*)anImage atCoord:(MyGridPoint)gridPos;
{
    static NSSize insetSize;   // ultimately this is the size of the image in the box
    static NSSize boundingBox; // this is the size of the box that holds the image in the grid
    static CGFloat multiplier;

    multiplier = 1.0;
    NSSize plateSize = NSMakeSize(atlas.width, atlas.height); // size of the entire atlas

    MyGridPoint _gridPos;
    // make sure the column and row position is legal
    _gridPos.column = gridPos.column >= m_numOfColumns ? m_numOfColumns - 1 : gridPos.column;
    _gridPos.row = gridPos.row >= m_numOfRows ? m_numOfRows - 1 : gridPos.row;
    _gridPos.column = gridPos.column < 0 ? 0 : gridPos.column;
    _gridPos.row = gridPos.row < 0 ? 0 : gridPos.row;

    insetSize = NSMakeSize(plateSize.width / m_numOfColumns, plateSize.height / m_numOfRows);
    boundingBox = insetSize;

    //...code here to calculate the size to make anImage so that it fits into the space allowed
    //on the atlas. The multiplier var will hold a value that sizes the image up or down...
    insetSize.width = anImage.size.width * multiplier;
    insetSize.height = anImage.size.height * multiplier;

    //provide padding around the image so that when mipmaps are created the image doesn't 'bleed'
    //if it's the same size as the grid's boxes
    insetSize.width -= ((insetSize.width * (insetPadding / 100)) * 2);
    insetSize.height -= ((insetSize.height * (insetPadding / 100)) * 2);

    //roundUp() is a handy function I found somewhere (I can't remember now)
    //that makes the first param a multiple of the second.
    //Here we make sure the image rows are aligned; as it's RGBA we make
    //the dimensions a multiple of 4.
    insetSize.width = (CGFloat)roundUp((int)insetSize.width, 4);
    insetSize.height = (CGFloat)roundUp((int)insetSize.height, 4);

    NSImage *insetImage = [self resizeImage:[anImage copy] toSize:insetSize];
    NSData *insetData = [insetImage TIFFRepresentation];
    GLubyte *data = malloc(insetData.length);
    memcpy(data, [insetData bytes], insetData.length);
    insetImage = NULL;
    insetData = NULL;

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, atlas.textureIndex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1); //have also tried 2, 4, and 8

    GLint Xplace = (GLint)(boundingBox.width * _gridPos.column) + (GLint)((boundingBox.width - insetSize.width) / 2);
    GLint Yplace = (GLint)(boundingBox.height * _gridPos.row) + (GLint)((boundingBox.height - insetSize.height) / 2);

    glTexSubImage2D(GL_TEXTURE_2D, 0, Xplace, Yplace, (GLsizei)insetSize.width, (GLsizei)insetSize.height, GL_RGBA, GL_UNSIGNED_BYTE, data);
    glGenerateMipmap(GL_TEXTURE_2D);

    free(data);
    glBindTexture(GL_TEXTURE_2D, 0);
    glGetError();
}
The images are RGBA, 8-bit (as reported by Photoshop). Here's a test image I've been using, and a screen grab of the result in my app (both omitted).
Am I unpacking the image incorrectly? I know the resizeImage: function works, as I've saved its result to disk and also bypassed it, so the problem is somewhere in the GL code...
EDIT: just to clarify, the section of the atlas being rendered is larger than the box diagram, so the shift is occurring within the area that's written to with glTexSubImage2D.
EDIT 2: Sorted, finally, by offsetting the copied data that goes into the section of the atlas.
I don't fully understand why that works; perhaps it's a hack rather than a proper solution, but here it is.
//resize the image to fit into its section of the atlas
NSImage *insetImage = [self resizeImage:[anImage copy] toSize:NSMakeSize(insetSize.width, insetSize.height)];
//the raw TIFF data and a pointer to its bytes
NSData *insetData = [insetImage TIFFRepresentation];
const GLubyte *insetDataPtr = [insetData bytes];
//found by debugging: it needed a 2-pixel (2 * 4 bytes for RGBA) offset
//(8 bytes also happens to be the size of a TIFF file header, which
//TIFFRepresentation includes, so that may be what is being skipped here)
int offset = 8;
//copy the data, minus the offset, into a temporary buffer
GLubyte *data = malloc(insetData.length - offset);
memcpy(data, insetDataPtr + offset, insetData.length - offset);
/*
.
. Calculate its position within the texture
.
*/
//And finally overwrite the texture
glTexSubImage2D(GL_TEXTURE_2D, 0, Xplace, Yplace, (GLsizei)insetSize.width, (GLsizei)insetSize.height, GL_RGBA, GL_UNSIGNED_BYTE, data);
You may be running into the issue I answered already here: stackoverflow.com/a/5879551/524368
It's not really about pixel coordinates, but about pixel-perfect addressing of texels. This is especially important for texture atlases. A common misconception is that texture coordinates 0 and 1 come to lie exactly on pixel centers. In OpenGL this is not the case: texture coordinates 0 and 1 lie exactly on the borders between pixels at the edges of the texture. If you build your texture atlas under the assumption that 0 and 1 are on pixel centers, then using the very same addressing scheme in OpenGL will lead to either a blurry picture or pixel shifts. You need to account for this.
I still don't understand how that makes a difference to a sub-section of the texture that's being rendered.
It helps a lot to understand that, to OpenGL, textures are not so much images as support samples for an interpolator (hence the "sampler" uniforms in shaders). So to get really crisp-looking images you have to choose the texture coordinates you sample at such that the interpolator evaluates exactly at the positions of the support samples. The positions of those samples are neither integer coordinates nor simple fractions (i/N); the center of texel i in an N-texel dimension lies at (i + 0.5)/N.
Note that newer versions of GLSL provide the texture sampling function texelFetch which completely bypasses the interpolator and addresses texture pixels directly. If you need pixel perfect texturing you might find this easier to use (if available).
I created two rulers - one vertical and one horizontal (screenshot omitted).
In the vertical ruler, the text is visually larger (approx. 5-6 pixels longer).
Why?
Relevant code:
WM_CREATE:
LOGFONT Lf = {0};
Lf.lfHeight = 12;
lstrcpyW(Lf.lfFaceName, L"Arial");
if (!g_pGRI->bHorizontal)
{
    Lf.lfEscapement = 900; // <---- for the vertical ruler!
}
g_pGRI->hfRuler = CreateFontIndirectW(&Lf);
SelectFont(g_pGRI->hdRuler, g_pGRI->hfRuler);
WM_PAINT:
SetTextColor(g_pGRI->hdRuler, g_pGRI->cBorder);
SetBkColor(g_pGRI->hdRuler, g_pGRI->cBackground);
SetTextAlign(g_pGRI->hdRuler, TA_CENTER);

#define INCREMENT 10

WCHAR wText[16] = {0};
if (g_pGRI->bHorizontal)
{
    INT ixTicks = RECTWIDTH(g_pGRI->rRuler) / INCREMENT;
    for (INT ix = 0; ix < ixTicks + 1; ix++)
    {
        MoveToEx(g_pGRI->hdRuler, INCREMENT * ix, 0, NULL);
        if (ix % INCREMENT == 0)
        {
            //This is a major tick.
            LineTo(g_pGRI->hdRuler, INCREMENT * ix, g_pGRI->lMajor);
            wsprintfW(wText, L"%d", INCREMENT * ix);
            TextOutW(g_pGRI->hdRuler, INCREMENT * ix + 1, g_pGRI->lMajor + 1, wText, CHARACTERCOUNT(wText));
        }
        else
        {
            //This is a minor tick.
            LineTo(g_pGRI->hdRuler, INCREMENT * ix, g_pGRI->lMinor);
        }
    }
}
else
{
    INT iyTicks = RECTHEIGHT(g_pGRI->rRuler) / INCREMENT;
    for (INT iy = 0; iy < iyTicks + 1; iy++)
    {
        MoveToEx(g_pGRI->hdRuler, 0, INCREMENT * iy, NULL);
        if (iy % INCREMENT == 0)
        {
            //This is a major tick.
            LineTo(g_pGRI->hdRuler, g_pGRI->lMajor, INCREMENT * iy);
            wsprintfW(wText, L"%d", INCREMENT * iy);
            TextOutW(g_pGRI->hdRuler, g_pGRI->lMajor + 1, INCREMENT * iy + 1, wText, CHARACTERCOUNT(wText));
        }
        else
        {
            //This is a minor tick.
            LineTo(g_pGRI->hdRuler, g_pGRI->lMinor, INCREMENT * iy);
        }
    }
}
Background
There are several different schemes for rasterizing text in a legible way when the text is small relative to the size of a pixel. For example, if the stroke width is supposed to be 1.25 pixels wide, you either have to round it off to a whole number of pixels, use antialiasing, or use subpixel rendering (like ClearType). Rounding is usually controlled by "hints" built into the font by the font designer.
Hinting is the main reason why text width doesn't always scale exactly with the text height. For example, if, because of rounding, the left hump of a lowercase m is a pixel wider than the right one, a hint might tell the renderer to round the width up to make the letter symmetric. The result is that the character is a tad wider relative to its height than the ideal character.
The Issue
What's likely happening here is that when GDI renders the string horizontally, each subsequent character may start at a fractional position, which is simulated by antialiasing or subpixel (ClearType) rendering. But, when rendering vertically, it appears that each subsequent character's starting position is rounded up to the next whole pixel, which tends to make the vertical text a couple pixels "longer" than its horizontal counterpart. Effectively, the kerning is always rounded up to the next whole pixel.
It's likely that more effort was put into the common case of horizontal text rendering, making it easier to read (and possibly faster to render). The general case of rendering at any other angle may have been implemented in a simpler manner, working glyph-by-glyph instead of with the entire string.
Things to Try
If you want them to look the same, you'll probably have to make a small compromise in the visual quality of the horizontal labels. Here are a few things I can think of to try:
Render the labels with regular antialiasing instead of ClearType subpixel rendering. (You can do this by setting the lfQuality field in the LOGFONT.) You would then draw the horizontal labels in the normal manner. For the vertical labels, draw them horizontally to an offscreen buffer, rotate the buffer, and then blit it to the screen. This gives you labels that look identical. The reason I suggest regular antialiasing is that it's invariant under rotation; ClearType rendering has an inherent orientation and thus cannot be rotated without creating fringes. I've used this approach for graph labels with good results. (A sketch of the font setup for this follows after this list.)
Render the horizontal labels character by character, rounding the starting point up to the next whole pixel. This should make the horizontal labels look like the vertical ones. Typographically, they won't look as good, but for small labels like this, it's probably less distracting than having the horizontal and vertical labels visually mismatched.
Another answer suggested rendering the horizontal labels with a very small, but non-zero, escapement and orientation, forcing those to go through the same rendering pipeline as the vertical labels. This may be the easiest solution for short labels like yours. If you had to handle longer strings of text, I'd suggest one of the first two methods.
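For the antialiasing part of the first option, the font-creation change is small. A sketch (the lfQuality value is standard LOGFONT usage; the variable names mirror the question's WM_CREATE code):

LOGFONTW Lf = {0};
Lf.lfHeight = 12;
Lf.lfQuality = ANTIALIASED_QUALITY; // regular antialiasing instead of ClearType
lstrcpyW(Lf.lfFaceName, L"Arial");
if (!g_pGRI->bHorizontal)
{
    Lf.lfEscapement = 900;  // rotate the vertical labels
    Lf.lfOrientation = 900; // keep orientation in sync with escapement (GM_ADVANCED needs both)
}
g_pGRI->hfRuler = CreateFontIndirectW(&Lf);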
When using lfEscapement, you will often get strange behaviour, as it renders text using a fairly different pipeline.
A trick would be to set lfEscapement for both rulers: one with 900, and one with a very low value (such as 1, or even 10). Once you have both rendering with escapement, you should be good.
If you're still having issues with smoothing, try doing something like this:
BOOL bSmooth;

//Get the previous smoothing value.
SystemParametersInfo(SPI_GETFONTSMOOTHING, 0, &bSmooth, 0);

//Set no smoothing.
SystemParametersInfo(SPI_SETFONTSMOOTHING, 0, NULL, 0);

//Draw text.

//Restore the previous smoothing value.
SystemParametersInfo(SPI_SETFONTSMOOTHING, bSmooth, NULL, 0);