How to display a raw YUV frame in a Cocoa OpenGL program - cocoa

I have been assigned the task of writing a program that takes a sample raw YUV file and displays it in a Cocoa OpenGL program.
I am an intern at my job and I have little or no clue how to start. I have been reading Wikipedia and articles on YUV, but I couldn't find any good source code on how to open a raw YUV file, extract the data, convert it to RGB, and display it in the view window.
Essentially, I need help with the following aspects of the task:
- how to extract the YUV data from the sample YUV file
- how to convert the YUV data into the RGB color space
- how to display the RGB data in OpenGL (this one I think I can figure out with time, but I really need help with the first two points)
Please either tell me the classes to use, or point me to places where I can learn about YUV graphics/video display.

I've done this with YUV frames captured from a CCD camera. Unfortunately, there are a number of different YUV formats. I believe the one that Apple uses for the GL_YCBCR_422_APPLE texture format is technically 2VUY422. To convert an image from a YUV422 frame generated by an IIDC Firewire camera to 2VUY422, I've used the following:
void yuv422_2vuy422(const unsigned char *theYUVFrame, unsigned char *the422Frame, const unsigned int width, const unsigned int height)
{
    int i = 0, j = 0;
    unsigned int numPixels = width * height;
    unsigned int totalNumberOfPasses = numPixels * 2;
    // Signed ints here: u and v are offset by -128 and can go negative.
    int y0, y1, y2, y3, u0, u2, v0, v2;

    while (i < totalNumberOfPasses)
    {
        u0 = theYUVFrame[i++] - 128;
        y0 = theYUVFrame[i++];
        v0 = theYUVFrame[i++] - 128;
        y1 = theYUVFrame[i++];
        u2 = theYUVFrame[i++] - 128;
        y2 = theYUVFrame[i++];
        v2 = theYUVFrame[i++] - 128;
        y3 = theYUVFrame[i++];

        // U0 Y0 V0 Y1 U2 Y2 V2 Y3
        // Remap the values to 2VUY (YUYS?) (Y422) colorspace for OpenGL
        // Y0 U Y1 V Y2 U Y3 V
        // IIDC cameras are full-range y=[0..255], u,v=[-127..+127], where display is "video range" (y=[16..240], u,v=[16..236])
        the422Frame[j++] = ((y0 * 240) / 255 + 16);
        the422Frame[j++] = ((u0 * 236) / 255 + 128);
        the422Frame[j++] = ((y1 * 240) / 255 + 16);
        the422Frame[j++] = ((v0 * 236) / 255 + 128);
        the422Frame[j++] = ((y2 * 240) / 255 + 16);
        the422Frame[j++] = ((u2 * 236) / 255 + 128);
        the422Frame[j++] = ((y3 * 240) / 255 + 16);
        the422Frame[j++] = ((v2 * 236) / 255 + 128);
    }
}
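For reference, a hypothetical call site might look like the following; the buffer names and the 640x480 resolution are assumptions, not part of the original code (videoTexture here is the buffer handed to OpenGL below).
// Hypothetical usage sketch: the resolution is assumed, not taken from the answer above.
// A YUV422 frame packs 2 bytes per pixel, so both buffers are width * height * 2 bytes.
unsigned int videoImageWidth = 640, videoImageHeight = 480;
unsigned char *cameraFrame  = malloc(videoImageWidth * videoImageHeight * 2); // raw frame from the camera or file
unsigned char *videoTexture = malloc(videoImageWidth * videoImageHeight * 2); // buffer later uploaded to OpenGL

// ... fill cameraFrame with one raw YUV422 frame ...
yuv422_2vuy422(cameraFrame, videoTexture, videoImageWidth, videoImageHeight);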
For efficient display of a YUV video source, you may wish to use Apple's client storage extension, which you can set up using something like the following:
glEnable(GL_TEXTURE_RECTANGLE_EXT);
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, 1);
glTextureRangeAPPLE(GL_TEXTURE_RECTANGLE_EXT, videoImageWidth * videoImageHeight * 2, videoTexture);
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_STORAGE_HINT_APPLE , GL_STORAGE_SHARED_APPLE);
glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, GL_TRUE);
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glTexImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, GL_RGBA, videoImageWidth, videoImageHeight, 0, GL_YCBCR_422_APPLE, GL_UNSIGNED_SHORT_8_8_REV_APPLE, videoTexture);
This lets you quickly swap out the data stored in your client-side video texture before each frame is displayed on the screen.
To draw, you could then use code like the following:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_TEXTURE_2D);
glViewport(0, 0, [self frame].size.width, [self frame].size.height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
NSRect bounds = NSRectFromCGRect([self bounds]);
glOrtho( (GLfloat)NSMinX(bounds), (GLfloat)NSMaxX(bounds), (GLfloat)NSMinY(bounds), (GLfloat)NSMaxY(bounds), -1.0, 1.0);
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, 1);
glTexSubImage2D (GL_TEXTURE_RECTANGLE_EXT, 0, 0, 0, videoImageWidth, videoImageHeight, GL_YCBCR_422_APPLE, GL_UNSIGNED_SHORT_8_8_REV_APPLE, videoTexture);
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex2f(0.0f, videoImageHeight);
glTexCoord2f(0.0f, videoImageHeight);
glVertex2f(0.0f, 0.0f);
glTexCoord2f(videoImageWidth, videoImageHeight);
glVertex2f(videoImageWidth, 0.0f);
glTexCoord2f(videoImageWidth, 0.0f);
glVertex2f(videoImageWidth, videoImageHeight);
glEnd();

Adam Rosenfield’s comment is incorrect. On Macs, you can display YCbCr (the digital equivalent to YUV) textures using the GL_YCBCR_422_APPLE texture format, as specified in the APPLE_ycbcr_422 extension.

This answer is not correct, see the other answers and comments. Original answer left below for posterity.
You can't display it directly. You'll need to convert it to an RGB texture. As you may have gathered from Wikipedia, there are a bunch of variations on the YUV color space. Make sure you're using the right one.
For each pixel, the conversion from YUV to RGB is a straightforward linear transformation. You just do the same thing to each pixel independently.
Once you've converted the image to RGB, you can display it by creating a texture. You need to call glGenTextures() to allocate a texture handle, glBindTexture() to bind the texture to the render context, and glTexImage2D() to upload the texture data to the GPU. To render it, you again call glBindTexture(), followed by the rendering of a quad with texture coordinates set up properly.
// parameters: image: pointer to raw YUV input data
//             width: image width (must be a power of 2)
//             height: image height (must be a power of 2)
// returns: a handle to the resulting RGB texture
GLuint makeTextureFromYUV(const float *image, int width, int height)
{
    float *rgbImage = (float *)malloc(width * height * 3 * sizeof(float)); // check for NULL
    float *rgbImagePtr = rgbImage;

    // convert from YUV to RGB (floats used here for simplicity; it's a little
    // trickier with 8-bit ints)
    int y, x;
    for (y = 0; y < height; y++)
    {
        for (x = 0; x < width; x++)
        {
            float Y = *image++;
            float U = *image++;
            float V = *image++;
            *rgbImagePtr++ = Y + 1.13983f * V;                // R
            *rgbImagePtr++ = Y - 0.39465f * U - 0.58060f * V; // G
            *rgbImagePtr++ = Y + 2.03211f * U;                // B
        }
    }

    // create texture
    GLuint texture;
    glGenTextures(1, &texture);

    // bind texture to render context
    glBindTexture(GL_TEXTURE_2D, texture);

    // upload texture data
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_FLOAT, rgbImage);

    // don't use mipmapping (since we're not creating any mipmaps); the default
    // minification filter uses mipmapping. Use linear filtering for minification
    // and magnification.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // free data (it's now been copied onto the GPU) and return texture handle
    free(rgbImage);
    return texture;
}
To render:
glBindTexture(GL_TEXTURE_2D, texture);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex3f( 0.0f, 0.0f, 0.0f);
glTexCoord2f(1.0f, 0.0f); glVertex3f(64.0f, 0.0f, 0.0f);
glTexCoord2f(1.0f, 1.0f); glVertex3f(64.0f, 64.0f, 0.0f);
glTexCoord2f(0.0f, 1.0f); glVertex3f( 0.0f, 64.0f, 0.0f);
glEnd();
And don't forget to call glEnable(GL_TEXTURE_2D) at some point during initialization, and call glDeleteTextures(1, &texture) during shutdown.
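As a reminder, a minimal sketch of those two housekeeping calls (where exactly they go in your own init and teardown code is up to you):
// Once, during OpenGL setup:
glEnable(GL_TEXTURE_2D);

// Once, during shutdown, when the texture is no longer needed:
glDeleteTextures(1, &texture);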

Related

GPUImage replace colors with colors from textures

Looking at GPUImagePosterizeFilter, it seems like an easy adaptation to replace colors with pixels from textures. Say I have an image that is made from 10 greyscale colors. I would like to replace each of the pixel ranges from the 10 colors with pixels from 10 different texture swatches.
What is the proper way to create the textures? I am using the code below (I am not sure on the alpha arguments sent to CGBitmapContextCreate).
CGImageRef spriteImage = [UIImage imageNamed:fileName].CGImage;
size_t width = CGImageGetWidth(spriteImage);
size_t height = CGImageGetHeight(spriteImage);
GLubyte * spriteData = (GLubyte *) calloc(width*height*4, sizeof(GLubyte));
CGContextRef spriteContext = CGBitmapContextCreate(spriteData, width, height, 8, width*4, CGImageGetColorSpace(spriteImage), kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(spriteContext, CGRectMake(0, 0, width, height), spriteImage);
CGContextRelease(spriteContext);
GLuint texName;
glGenTextures(1, &texName);
glBindTexture(GL_TEXTURE_2D, texName);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
free(spriteData);
return texName;
What is the proper way to pass the texture to the filter? In my main I have added:
uniform sampler2D fill0Texture;
In the code below, texture is what's passed from the function above.
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture);
glUniform1i(fill0Uniform, 1);
Whenever I try to get an image from the spriteContext it's nil, and when I try using pixels from fill0Texture they are always black. I have thought about doing this with 10 chroma key iterations, but I think replacing all the pixels in a modified GPUImagePosterizeFilter is the way to go.
In order to match colors against the output from the PosterizeFilter, I am using the following code.
float testValue = 1.0 - (float(idx) / float(colorLevels));
vec4 keyColor = vec4(testValue, testValue, testValue, 1.0);
vec4 replacementColor = texture2D( tx0, textureCoord(idx));
float select = step(distance(keyColor,srcColor),.1);
return select * replacementColor;
If the color (already posterized) passed in matches, then the replacement color is returned. The textureCoord(idx) call looks up the replacement color from a GL texture.

OpenGL image quality (blurred)

I use OpenGL to create a slideshow app. Unfortunately, the images rendered with OpenGL look blurred compared to the GNOME image viewer.
Here are the two screenshots:
(opengl) http://tinyurl.com/dxmnzpc
(image viewer) http://tinyurl.com/8hshv2a
and this is the base image:
http://tinyurl.com/97ho4rp
The image has the native size of my screen (2560x1440).
#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/freeglut.h>
#include <SDL/SDL.h>
#include <SDL/SDL_image.h>
#include <unistd.h>
GLuint text = 0;
GLuint load_texture(const char* file) {
    SDL_Surface* surface = IMG_Load(file);
    GLuint texture;

    glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);

    SDL_PixelFormat *format = surface->format;
    printf("%d %d \n", surface->w, surface->h);
    if (format->Amask) {
        gluBuild2DMipmaps(GL_TEXTURE_2D, 4, surface->w, surface->h,
                          GL_RGBA, GL_UNSIGNED_BYTE, surface->pixels);
    } else {
        gluBuild2DMipmaps(GL_TEXTURE_2D, 3, surface->w, surface->h,
                          GL_RGB, GL_UNSIGNED_BYTE, surface->pixels);
    }
    SDL_FreeSurface(surface);
    return texture;
}

void display(void) {
    GLdouble offset_x = -1;
    GLdouble offset_y = -1;
    int p_viewport[4];
    glGetIntegerv(GL_VIEWPORT, p_viewport);

    GLfloat gl_width  = p_viewport[2]; //width();  // GL context size
    GLfloat gl_height = p_viewport[3]; //height();

    glClearColor(0.0, 2.0, 0.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);
    glLoadIdentity();

    glEnable(GL_TEXTURE_2D);
    glTranslatef(0, 0, 0);
    glBindTexture(GL_TEXTURE_2D, text);

    gl_width = 2; gl_height = 2;

    glBegin(GL_QUADS);
    glTexCoord2f(0, 1); //4
    glVertex2f(offset_x, offset_y);
    glTexCoord2f(1, 1); //3
    glVertex2f(offset_x + gl_width, offset_y);
    glTexCoord2f(1, 0); // 2
    glVertex2f(offset_x + gl_width, offset_y + gl_height);
    glTexCoord2f(0, 0); // 1
    glVertex2f(offset_x, offset_y + gl_height);
    glEnd();

    glutSwapBuffers();
}

int main(int argc, char **argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE);
    glutGameModeString("2560x1440:24");
    glutEnterGameMode();

    text = load_texture("/tmp/raspberry/out.jpg");

    glutDisplayFunc(display);
    glutMainLoop();
}
UPDATED TRY
void display(void)
{
    GLdouble texture_x = 0;
    GLdouble texture_y = 0;
    GLdouble texture_width = 0;
    GLdouble texture_height = 0;

    glViewport(0, 0, width, height);
    glClearColor(0.0, 2.0, 0.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1.0, 1.0, 1.0);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, width, 0, height, -1, 1);

    // Do pixel calculations
    texture_x      = ((2.0 * 1 - 1) / (2 * width));
    texture_y      = ((2.0 * 1 - 1) / (2 * height));
    texture_width  = ((2.0 * width - 1) / (2 * width));
    texture_height = ((2.0 * height - 1) / (2 * height));

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0, 0, 0);

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, text);

    glBegin(GL_QUADS);
    glTexCoord2f(texture_x, texture_height); //4
    glVertex2f(0, 0);
    glTexCoord2f(texture_width, texture_height); //3
    glVertex2f(width, 0);
    glTexCoord2f(texture_width, texture_y); // 2
    glVertex2f(width, height);
    glTexCoord2f(texture_y, texture_y); // 1
    glVertex2f(0, height);
    glEnd();

    glutSwapBuffers();
}
What you've run into is a variation of the fencepost problem, which arises from how OpenGL deals with texture coordinates. OpenGL does not address a texture's pixels (texels) directly, but uses the image data as the support for an interpolation that in fact covers a wider range than the image's pixels. So the texture coordinates 0 and 1 don't hit the left-/bottom-most and right-/top-most pixels, but actually go a little further.
Let's say the texture is 8 pixels wide:
|  0  |  1  |  2  |  3  |  4  |  5  |  6  |  7  |
^     ^     ^     ^     ^     ^     ^     ^     ^
0.0   |     |     |     |     |     |     |    1.0
|     |     |     |     |     |     |     |     |
0/8   1/8   2/8   3/8   4/8   5/8   6/8   7/8   8/8
The digits denote the texture's pixels, and the bars mark the edges of the texture and, in the case of nearest filtering, the borders between pixels. You, however, want to hit the pixels' centers. So you're interested in the texture coordinates
(0/8 + 1/8)/2 = 1 / (2 * 8)
(1/8 + 2/8)/2 = 3 / (2 * 8)
...
(7/8 + 8/8)/2 = 15 / (2 * 8)
Or, more generally, for pixel i in a texture that is N pixels wide, the proper texture coordinate is
(2i + 1)/(2N)
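As a small illustration (a sketch, not code from this answer), that formula can be wrapped in a tiny helper:
// Texture coordinate of the center of texel i in a texture that is n texels wide,
// i.e. (2*i + 1) / (2*n) from the formula above.
double texelCenter(int i, int n)
{
    return (2.0 * i + 1.0) / (2.0 * n);
}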
However, if you want to perfectly align your texture with the screen pixels, remember that what you specify as coordinates are not a quad's pixels but its edges, which, depending on the projection, may align with screen pixel edges rather than centers, and thus may require other texture coordinates.
Note that if you follow this, then regardless of your filtering mode and mipmaps, your image will always look clear and crisp, because the interpolation hits exactly your sampling support, which is your input image. Switching to another filtering mode, like GL_NEAREST, may look right at first glance, but it's actually not correct, because it will alias your samples. So don't do it.
There are a few other issues with your code as well, but they're not as big a problem. First and foremost, you're choosing a rather arcane way to determine the viewport dimensions. You're (probably without further thought) exploiting the fact that the default OpenGL viewport is the size of the window the context has been created with. You're using SDL, which has the side effect that this approach won't bite you as long as you stick with SDL-1. But switch to any other framework that may create the context via a proxy drawable, and you're running into a problem.
The canonical way is usually to retrieve the window size from the windowing system (SDL in your case) and then set the viewport as one of the first actions in the display function.
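With SDL-1, for example, that could look roughly like this (a sketch that assumes SDL owns the window):
// Hedged sketch for SDL-1: ask SDL for the current video surface and size the viewport to it.
SDL_Surface *screen = SDL_GetVideoSurface();
glViewport(0, 0, screen->w, screen->h);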
Another issue is your use of gluBuild2DMipmaps, because a) you don't want to use mipmaps, and b) since OpenGL 2 you can upload texture images of arbitrary size (i.e. you're not limited to powers of 2 for the dimensions), which completely eliminates the need for gluBuild2DMipmaps. So don't use it. Just use glTexImage2D directly and switch to a non-mipmapping filtering mode.
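A sketch of that replacement inside load_texture (assuming the same RGBA/RGB branch as in the question):
// No mipmaps: upload only level 0 and pick a non-mipmapping minification filter.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
if (format->Amask) {
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, surface->w, surface->h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, surface->pixels);
} else {
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, surface->w, surface->h, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, surface->pixels);
}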
Update due to question update
The way you calculate the texture coordinates still doesn't look right. It seems like you're starting to count at 1. Texture pixels are 0-based indexed, so…
This is how I'd do it:
Assuming the projection maps the viewport
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, win_width, 0, win_height, -1, 1);
glViewport(0, 0, win_width, win_height);
we calculate the texture coordinates as
// Do pixel calculations
glBindTexture(GL_TEXTURE_2D, text);

GLint tex_width, tex_height;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH,  &tex_width);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &tex_height);

// Texel-center coordinates (2i + 1)/(2N) for the first and last pixel of the texture
GLdouble s1 = 1. / (2 * tex_width);   // (2*0 + 1)/(2*N)
GLdouble t1 = 1. / (2 * tex_height);
GLdouble s2 = (2. * (tex_width  - 1) + 1) / (2 * tex_width);
GLdouble t2 = (2. * (tex_height - 1) + 1) / (2 * tex_height);
Note that tex_width and tex_height give the number of pixels in each direction, but the coordinates are 0-based, so you have to subtract 1 from them for the texture coordinate mapping. Hence we also use a constant 1 in the numerator for the s1, t1 coordinates.
The rest looks okay, given the projection you chose:
glEnable( GL_TEXTURE_2D );
glBegin(GL_QUADS);
glTexCoord2f(s1, t1); //4
glVertex2f(0, 0);
glTexCoord2f(s2, t1); //3
glVertex2f(tex_width, 0);
glTexCoord2f(s2, t2); // 2
glVertex2f(tex_width,tex_height);
glTexCoord2f(s1, t2); // 1
glVertex2f(0,tex_height);
glEnd();
I'm not sure if this is really the problem, but I think you don't need/want mipmaps here. Have you tried using glTexImage2D instead of gluBuild2DMipmaps, in combination with nearest-neighbor filtering (glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN/MAG_FILTER, GL_NEAREST);)?

How to draw a circle in OpenGL ES

Here is the part of my code that should show a circle on screen, but unfortunately the circle is not appearing.
glClearColor(0, 0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT);
glPushMatrix();
glLoadIdentity();
glColor3f(0.0f,1.0f,0.0f);
glBegin(GL_LINE_LOOP);
const float DEG2RAD = 3.14159/180;
for (int i = 0; i < 360; i++)
{
    float degInRad = i * DEG2RAD;
    glVertex2f(cos(degInRad) * 8, sin(degInRad) * 8);
}
glEnd();
glFlush();
I don't understand; the code seems to look OK, but the circle is not appearing on screen.
Your circle is too big. With the default (identity) projection, the visible range is [(-1, -1), (1, 1)], and a radius of 8 puts every vertex outside it.
BTW, you don't need 360 segments. About 30 is usually adequate, depending on how smooth you want it.
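For example, a sketch that shrinks the radius to fit the default [-1, 1] range and uses 30 segments (both values are illustrative):
// Sketch: a circle of radius 0.5 with 30 segments fits the default [-1, 1] view volume.
const float DEG2RAD = 3.14159f / 180.0f;
const int   segments = 30;
const float radius   = 0.5f;
glBegin(GL_LINE_LOOP);
for (int i = 0; i < segments; i++)
{
    float angle = i * (360.0f / segments) * DEG2RAD;
    glVertex2f(cosf(angle) * radius, sinf(angle) * radius);
}
glEnd();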

OpenGL ES 1.1 - glColorPointer creates a rainbow of colours when I just want red

I'm rendering an object like this:
for (int i = 0; i < COLOR_ARRAY_SIZE; i += 4) {
    colors[i]     = 1.0f;
    colors[i + 1] = 0.0f;
    colors[i + 2] = 0.0f;
    colors[i + 3] = 1.0f;
}
// Clear color and depth buffer
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Set GL11 flags:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glEnable(GL_DEPTH_TEST);
// make sure nothing messes with the colour
glDisable(GL_BLEND);
glDisable(GL_DITHER);
glDisable(GL_FOG);
glDisable(GL_LIGHTING);
glDisable(GL_TEXTURE_2D);
glShadeModel(GL_FLAT);
// Load projection matrix:
glMatrixMode(GL_PROJECTION);
glLoadMatrixf(projectionMatrix);
// Load model view matrix and scale appropriately
int kObjectScale = 300f;
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(modelViewMatrix);
glTranslatef(0.5f, 0.5f, 0.5f);
glScalef(kObjectScale, kObjectScale, kObjectScale);
// Draw object
glVertexPointer(3, GL_FLOAT, 0, (const GLvoid*) &vertexPositions[0]);
glNormalPointer(GL_FLOAT, 0, (const GLvoid*) &vertexNormals[0]);
glColorPointer(4, GL_FLOAT, 0, (const GLvoid*) &colors[0]);
glDrawElements(GL_TRIANGLES, 11733, GL_UNSIGNED_SHORT,
(const GLvoid*) &indices[0]);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
I would expect that this would render my object all in red, but instead it's a rainbow of different colours. Does anyone know why? I would assume that there is something wrong with my "colors" array buffer but I can't for the life of me see what it is. The actual vertices seem to rendered just fine.
Your for loop is very confused. You're incrementing your value of i by 4 each time. What's more, you're indexing with offsets of 1, 2, and 3 in lines 3-5. I presume that your define of COLOR_ARRAY_SIZE is 4? Try initializing your color array as follows:
float colors[] = {1.0f, 0.0f, 0.0f, 1.0f};
And then calling glColorPointer as follows:
glColorPointer(4, GL_FLOAT, 0, colors);
Notice that I've set the stride to 0. If your color array only contains colors, then I don't see any reason why you should be using a stride (stride is used to jump over interleaved information in an array).
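As a hedged aside, not part of the answer above: if the whole object really should be a single flat red, you can also skip the color array entirely and set a constant color before drawing.
// Alternative sketch: one constant color for every vertex, no color array needed.
glDisableClientState(GL_COLOR_ARRAY);
glColor4f(1.0f, 0.0f, 0.0f, 1.0f);  // opaque red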

OpenGL ES glRotatef performing shear instead of rotate?

I am able to draw a sprite on the screen of an iPhone, but when I try to rotate it I am getting some weird results. It seems to be stretching the sprite in the y direction more the closer the sprite gets to pointing down the y-axis (90 and 270 degrees). It displays correctly when pointing down the x and -x axes (0 and 180 degrees). It is basically like it is shearing instead of rotating. Here are the essentials of the code (projection matrix is ortho):
glPushMatrix();
glLoadIdentity();
glTranslatef( position.x, position.y, -1.0f );
glRotatef( rotation, 0.0f, 0.0f, 1.0f );
glScalef( halfSize.x, halfSize.y, 1.0f );
vertices[0] = 1.0f;
vertices[1] = 1.0f;
vertices[2] = 0.0f;
vertices[3] = 1.0f;
vertices[4] = -1.0f;
vertices[5] = 0.0f;
vertices[6] = -1.0f;
vertices[7] = 1.0f;
vertices[8] = 0.0f;
vertices[9] = -1.0f;
vertices[10] = -1.0f;
vertices[11] = 0.0f;
glVertexPointer( 3, GL_FLOAT, 0, vertices );
glDrawArrays( GL_TRIANGLE_STRIP, 0, 4 );
glPopMatrix();
Can anybody explain to me how to fix this please?
halfSize is just half the x and y extent of the sprite; removing the glScalef call does not make any difference.
Here is my matrix setup:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0, 320, 480, 0, 0.01, 5);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
OK, hopefully this screenshot will demonstrate what's happening:
If you are scaling by the same amount in the x and y directions, then your projection is causing the distortion.
Just a hunch, but maybe try swapping the 320 and 480 in your ortho projection (in case the X and Y on the iPhone are swapped).
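A sketch of what that swap would look like, using the 320 and 480 values from the question:
// Sketch of the suggested swap: treat the surface as 480 wide and 320 tall,
// in case the iPhone's X and Y axes are exchanged relative to the ortho setup.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0, 480, 320, 0, 0.01f, 5);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();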
