OpenGL image quality (blurred)

I use OpenGL to create a slideshow app. Unfortunately, the images rendered with OpenGL look blurred compared to the GNOME image viewer.
Here are the two screenshots:
(OpenGL) http://tinyurl.com/dxmnzpc
(image viewer) http://tinyurl.com/8hshv2a
And this is the base image:
http://tinyurl.com/97ho4rp
The image has the native size of my screen (2560x1440).
#include <stdio.h>
#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/freeglut.h>
#include <SDL/SDL.h>
#include <SDL/SDL_image.h>
#include <unistd.h>

GLuint text = 0;

GLuint load_texture(const char* file) {
    SDL_Surface* surface = IMG_Load(file);
    GLuint texture;
    glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    SDL_PixelFormat *format = surface->format;
    printf("%d %d \n", surface->w, surface->h);
    if (format->Amask) {
        gluBuild2DMipmaps(GL_TEXTURE_2D, 4, surface->w, surface->h,
                          GL_RGBA, GL_UNSIGNED_BYTE, surface->pixels);
    } else {
        gluBuild2DMipmaps(GL_TEXTURE_2D, 3, surface->w, surface->h,
                          GL_RGB, GL_UNSIGNED_BYTE, surface->pixels);
    }
    SDL_FreeSurface(surface);
    return texture;
}
void display(void) {
    GLdouble offset_x = -1;
    GLdouble offset_y = -1;
    int p_viewport[4];
    glGetIntegerv(GL_VIEWPORT, p_viewport);
    GLfloat gl_width = p_viewport[2];   //width();  // GL context size
    GLfloat gl_height = p_viewport[3];  //height();
    glClearColor(0.0, 2.0, 0.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);
    glLoadIdentity();
    glEnable(GL_TEXTURE_2D);
    glTranslatef(0, 0, 0);
    glBindTexture(GL_TEXTURE_2D, text);
    gl_width = 2; gl_height = 2;
    glBegin(GL_QUADS);
    glTexCoord2f(0, 1); //4
    glVertex2f(offset_x, offset_y);
    glTexCoord2f(1, 1); //3
    glVertex2f(offset_x + gl_width, offset_y);
    glTexCoord2f(1, 0); //2
    glVertex2f(offset_x + gl_width, offset_y + gl_height);
    glTexCoord2f(0, 0); //1
    glVertex2f(offset_x, offset_y + gl_height);
    glEnd();
    glutSwapBuffers();
}
int main(int argc, char **argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE);
    glutGameModeString("2560x1440:24");
    glutEnterGameMode();
    text = load_texture("/tmp/raspberry/out.jpg");
    glutDisplayFunc(display);
    glutMainLoop();
}
UPDATED ATTEMPT
void display(void)
{
    GLdouble texture_x = 0;
    GLdouble texture_y = 0;
    GLdouble texture_width = 0;
    GLdouble texture_height = 0;
    glViewport(0, 0, width, height);
    glClearColor(0.0, 2.0, 0.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1.0, 1.0, 1.0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, width, 0, height, -1, 1);
    // Do pixel calculations
    texture_x = ((2.0*1-1) / (2*width));
    texture_y = ((2.0*1-1) / (2*height));
    texture_width = ((2.0*width-1) / (2*width));
    texture_height = ((2.0*height-1) / (2*height));
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0, 0, 0);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, text);
    glBegin(GL_QUADS);
    glTexCoord2f(texture_x, texture_height); //4
    glVertex2f(0, 0);
    glTexCoord2f(texture_width, texture_height); //3
    glVertex2f(width, 0);
    glTexCoord2f(texture_width, texture_y); //2
    glVertex2f(width, height);
    glTexCoord2f(texture_x, texture_y); //1
    glVertex2f(0, height);
    glEnd();
    glutSwapBuffers();
}

What you are running into is a variation of the fencepost problem, which arises from how OpenGL deals with texture coordinates. OpenGL does not address a texture's pixels (texels) directly, but uses the image data as support for an interpolation that in fact covers a wider range than the image's pixels. So the texture coordinates 0 and 1 don't hit the left-/bottom-most and right-/top-most pixels, but in fact go a little further.
Let's say the texture is 8 pixels wide:
 |  0  |  1  |  2  |  3  |  4  |  5  |  6  |  7  |
 ^     ^     ^     ^     ^     ^     ^     ^     ^
0.0    |     |     |     |     |     |     |    1.0
 |     |     |     |     |     |     |     |     |
0/8   1/8   2/8   3/8   4/8   5/8   6/8   7/8   8/8
The digits denote the texture's pixels, the bars the edges of the texture and, in the case of nearest filtering, the borders between pixels. You, however, want to hit the pixels' centers. So you're interested in the texture coordinates
(0/8 + 1/8)/2 = 1 / (2 * 8)
(1/8 + 2/8)/2 = 3 / (2 * 8)
...
(7/8 + 8/8)/2 = 15 / (2 * 8)
Or more generally, for pixel i in an N pixel wide texture, the proper texture coordinate is
(2i + 1)/(2N)
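As a minimal sketch, this formula could be wrapped in a small helper (the function name texel_center is mine, not something from the code above):

GLdouble texel_center(int i, int n)
{
    /* center of texel i in an n texel wide (or tall) dimension: (2i + 1) / (2n) */
    return (2.0 * i + 1.0) / (2.0 * n);
}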
However, if you want to perfectly align your texture with the screen pixels, remember that what you specify as coordinates are not a quad's pixels but its edges, which, depending on the projection, may align with screen pixel edges rather than centers, and thus may require other texture coordinates.
Note that if you follow this, then regardless of your filtering mode and mipmaps, your image will always look clear and crisp, because the interpolation hits exactly your sampling support, which is your input image. Switching to another filtering mode, like GL_NEAREST, may look right at first glance, but it's actually not correct, because it will alias your samples. So don't do it.
There are a few other issues with your code as well, though they're not as big a problem. First and foremost, you're choosing a rather arcane way to obtain the viewport dimensions. You're (probably without further thought) exploiting the fact that the default OpenGL viewport is the size of the window the context has been created with. You're using SDL, which has the side effect that this approach won't bite you, as long as you stick with SDL-1. But switch to any other framework that may create the context via a proxy drawable, and you're running into a problem.
The canonical way is usually to retrieve the window size from the windowing system (SDL in your case) and then set the viewport as one of the first actions in the display function.
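Assuming the window was created through SDL-1, that could look roughly like this sketch (SDL_GetVideoSurface returns the surface the video mode was set on):

SDL_Surface *screen = SDL_GetVideoSurface();  /* the window's framebuffer surface */
glViewport(0, 0, screen->w, screen->h);       /* set viewport first thing in display */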
Another issue is your use of gluBuild2DMipmaps, because a) you don't want to use mipmaps here, and b) since OpenGL-2 you can upload texture images of arbitrary size (i.e. you're not limited to power-of-2 dimensions), which completely eliminates the need for gluBuild2DMipmaps. So don't use it. Just use glTexImage2D directly and switch to a non-mipmapping filtering mode.
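A minimal sketch of that replacement upload path, reusing the surface and format variables from the question's load_texture (the format selection just mirrors the original Amask test):

/* non-mipmapping filter modes, so no mipmap levels are needed */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
GLenum fmt = format->Amask ? GL_RGBA : GL_RGB;
glTexImage2D(GL_TEXTURE_2D, 0, fmt, surface->w, surface->h, 0,
             fmt, GL_UNSIGNED_BYTE, surface->pixels);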
Update due to question update
The way you calculate the texture coordinates still doesn't look right. It seems like you're starting to count at 1. Texture pixels are zero-based indexed, so…
This is how I'd do it:
Assuming the projection maps the viewport
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, win_width, 0, win_height, -1, 1);
glViewport(0, 0, win_width, win_height);
we calculate the texture coordinates as
// Do pixel calculations
glBindTexture(GL_TEXTURE_2D, text);
GLint tex_width, tex_height;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &tex_width);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &tex_height);
texture_s1 = 1. / (2*tex_width);  // (2*0 + 1) / (2*N), i.e. i = 0
texture_t1 = 1. / (2*tex_height);
texture_s2 = (2.*(tex_width -1) + 1) / (2*tex_width);
texture_t2 = (2.*(tex_height-1) + 1) / (2*tex_height);
Note that tex_width and tex_height give the number of pixels in each direction, but the coordinates are 0-based, so you have to subtract 1 from them for the texture coordinate mapping. Hence we also use a constant 1 in the numerator for the s1, t1 coordinates.
The rest looks okay, given the projection you chose:
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
glTexCoord2f(texture_s1, texture_t1); //4
glVertex2f(0, 0);
glTexCoord2f(texture_s2, texture_t1); //3
glVertex2f(tex_width, 0);
glTexCoord2f(texture_s2, texture_t2); //2
glVertex2f(tex_width, tex_height);
glTexCoord2f(texture_s1, texture_t2); //1
glVertex2f(0, tex_height);
glEnd();

I'm not sure if this is really the problem, but I think you don't need/want mipmaps here. Have you tried using glTexImage2D instead of gluBuild2DMipmaps in combination with nearest neighbor filtering (glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN/MAG_FILTER, GL_NEAREST);)?

Related

GLFW simple triangle is lost?

I modified the "Simple example" in GLFW 3.0.4 on Mac OS X 10.8 as an Xcode 4.6 project (it runs fine when unchanged). I am having a (2D) rectangle drawn by an external library (which draws via shaders). I can see the rectangle, but if I draw the sample triangle (immediate mode) before it, the triangle is visible in the first frame and then it is lost. If I try to draw it after, the triangle is never seen. I can only see the rectangle, and I don't know what settings/states the library is changing!
I tried to inspect the application with OpenGL Profiler. I stopped before CGLFlushDrawable and could not find the triangle in any of the buffers (front, back, depth, stencil).
Am I doing something (prominently) wrong? The profiler allows only gl-function breakpoints. How can I debug this (more efficiently) and find the problem?
Here is (most of) the changed code:
void glfw2DViewport(GLFWwindow * window) {
    float ratio;
    int width, height;
    glfwGetFramebufferSize(window, &width, &height);
    ratio = width / (float) height;
    glViewport(0, 0, width, height);
    glClearColor(0.8, 0.8, 0.8, 1.0); // Let's see if something black is drawn!!
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // eye is at 0,0,0 looking to positive Z, -1 (behind) to 1 are clipping planes:
    // https://www.opengl.org/sdk/docs/man2/xhtml/glOrtho.xml
    glOrtho(ratio, -ratio, -1.f, 1.f, 1.0f, -1.f);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    // ----- 2D settings -----
    glfwSwapInterval(1);
    glEnable(GL_SMOOTH);
    glDisable(GL_DEPTH_TEST);
    //glDisable(GL_STENCIL_TEST); // Disabling changed nothing!!
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glLineWidth(5.0f);
    glEnable(GL_LINE_SMOOTH);
    glPointSize(5.0f);
    glEnable(GL_POINT_SMOOTH);
}
int main(void) {
    GLFWwindow* window;
    glfwSetErrorCallback(error_callback);
    if (!glfwInit())
        exit(EXIT_FAILURE);
    window = glfwCreateWindow(640, 480, "Simple example", NULL, NULL);
    if (!window) {
        glfwTerminate();
        exit(EXIT_FAILURE);
    }
    glfwMakeContextCurrent(window);
    glfw2DViewport(window);
    //...
    while (!glfwWindowShouldClose(window)) {
        glMatrixMode(GL_MODELVIEW_MATRIX);
        glLoadIdentity();
        glClear(GL_COLOR_BUFFER_BIT);
        //drawUnitTriangle(); // can be seen just in the first frame!
        glPushClientAttrib(GL_CLIENT_ALL_ATTRIB_BITS); // A vain attempt?
        glPushAttrib(GL_ALL_ATTRIB_BITS); // Another vain attempt??
        external_library_identity_matrix();
        external_library_rectangle(POS, RED);
        external_library_flush();
        glPopAttrib();
        glPopClientAttrib();
        // Other vain attempts:
        glfwMakeContextCurrent(window);
        glMatrixMode(GL_MODELVIEW_MATRIX);
        glLoadIdentity();
        drawUnitTriangle(); // Nothing is drawn!!
        glfwSwapBuffers(window);
        glfwPollEvents();
    }
    glfwDestroyWindow(window);
    glfwTerminate();
    exit(EXIT_SUCCESS);
}
Are you sure the posted code is exactly what you are building? If so, please check the argument of glMatrixMode(): it should be GL_MODELVIEW, not GL_MODELVIEW_MATRIX. There are two places where you set it like this.
Since you already have glfw2DViewport(), why don't you put it in the while loop and delete the other model-view setting code?
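For clarity, the top of the loop with the fixed matrix mode would look like this sketch (only the relevant lines shown):

while (!glfwWindowShouldClose(window)) {
    glMatrixMode(GL_MODELVIEW);  /* GL_MODELVIEW, not GL_MODELVIEW_MATRIX */
    glLoadIdentity();
    glClear(GL_COLOR_BUFFER_BIT);
    /* ... draw as before ... */
}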

How to use a fragment shader to draw a sphere illusion in OpenGL ES?

I am using this simple function to draw a quad in 3D space that is facing the camera. Now, I want to use a fragment shader to draw the illusion of a sphere inside it. But the problem is that I'm new to OpenGL ES, so I don't know how.
void draw_sphere(view_t view) {
    set_gl_options(COURSE);
    glPushMatrix();
    {
        glTranslatef(view.plyr_pos.x, view.plyr_pos.y, view.plyr_pos.z - 1.9);
#ifdef __APPLE__
#undef glEnableClientState
#undef glDisableClientState
#undef glVertexPointer
#undef glTexCoordPointer
#undef glDrawArrays
        static const GLfloat vertices[] =
        {
            0, 0, 0,
            1, 0, 0,
            1, 1, 0,
            0, 1, 0,
            0, 0, 0,
            1, 1, 0
        };
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, vertices);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 6);
        glDisableClientState(GL_VERTEX_ARRAY);
#else
#endif
    }
    glPopMatrix();
}
More exactly, I want to achieve this:
There might be quite a few things you need to do to achieve this... The sphere drawn in the last image you posted is the result of using lighting, shine, and color. In general you need a shader that can process all of that and that would normally work for any shape.
This specific case (and some others that can be presented mathematically) can be drawn with a single quad, without even needing to push normal coordinates to the program. What you need to do is create a normal in the fragment shader: if you receive the vectors sphereCenter and fragmentPosition and a float sphereRadius, then sphereNormal is a vector such that
sphereNormal = (fragmentPosition-sphereCenter)/sphereRadius; //taking into account all have .z = .0
sphereNormal.z = -sqrt(1.0 - dot(sphereNormal.xy, sphereNormal.xy)); //only if length(sphereNormal.xy) < 1.0, i.e. the fragment lies inside the sphere's silhouette
and real sphere position:
spherePosition = sphereCenter + sphereNormal*sphereRadius;
Now all you need to do is add your lighting. Static or not, it is most common to use some ambient factor, linear and square distance factors, and a shine factor:
color = ambient*materialColor; //apply ambient
vector fragmentToLight = lightPosition-spherePosition;
float lightDistance = length(fragmentToLight);
fragmentToLight = normalize(fragmentToLight); //can also just divide by light distance
float dotFactor = dot(sphereNormal, fragmentToLight); //the dot factor accounts for the angle between the light and the surface normal
if(dotFactor > .0) {
    color += (materialColor*dotFactor)/(1.0 + lightDistance*linearFactor + lightDistance*lightDistance*squareFactor); //apply dot factor and distance factors (in many cases the distance factors are 0)
}
vector shineVector = (sphereNormal*(2.0*dotFactor)) - fragmentToLight; //this is the light direction mirrored through the normal, i.e. a reflection vector
float shineFactor = dot(shineVector, normalize(cameraPosition-spherePosition)); //this factor represents how strongly the light reflects towards the viewer
if(shineFactor > .0) {
    color += materialColor*(shineFactor*shineFactor * shine); //or some other power than 2 (shineFactor*shineFactor)
}
This pattern for creating lights in a fragment shader is one of very many. If you don't like it or you can't make it work, I suggest you find another one on the web; otherwise I hope you will understand it and be able to play around with it.
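To feed such a shader, the drawing code also needs to supply the uniforms it reads. A hypothetical GL ES 2.0 style setup, where program and the uniform names just mirror the variables used above (none of this is from the question's code):

GLint locCenter = glGetUniformLocation(program, "sphereCenter");
GLint locRadius = glGetUniformLocation(program, "sphereRadius");
GLint locLight  = glGetUniformLocation(program, "lightPosition");
glUseProgram(program);
glUniform3f(locCenter, view.plyr_pos.x, view.plyr_pos.y, view.plyr_pos.z - 1.9f); /* quad center, as in draw_sphere */
glUniform1f(locRadius, 0.5f);               /* assumed radius */
glUniform3f(locLight, 0.0f, 10.0f, 10.0f);  /* assumed light position */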

Objects look weird with first-person camera in DirectX

I'm having problems creating a 3D first-person camera in DirectX 11.
I have a camera at (0, 0, -2) looking at (0, 0, 100). There is a box at (0, 0, 0) and the box is rendered correctly. See this image below:
When the position of the box (not the camera) changes, it is rendered correctly. For example, the next image shows the box at (1, 0, 0) and the camera still at (0, 0, -2):
However, as soon as the camera moves left or right, the box should move in the opposite direction, but it looks twisted instead. Here is an example with the camera at (1, 0, -2) and looking at (1, 0, 100). The box is still at (0, 0, 0):
Here is how I set my camera:
// Set the world transformation matrix.
D3DXMATRIX rotationMatrix; // A matrix to store the rotation information
D3DXMATRIX scalingMatrix; // A matrix to store the scaling information
D3DXMATRIX translationMatrix; // A matrix to store the translation information
D3DXMatrixIdentity(&translationMatrix);
// Make the scene being centered on the camera position.
D3DXMatrixTranslation(&translationMatrix, -camera.GetX(), -camera.GetY(), -camera.GetZ());
m_worldTransformationMatrix = translationMatrix;
// Set the view transformation matrix.
D3DXMatrixIdentity(&m_viewTransformationMatrix);
D3DXVECTOR3 cameraPosition(camera.GetX(), camera.GetY(), camera.GetZ());
// ------------------------
// Compute the lookAt position
// ------------------------
const FLOAT lookAtDistance = 100;
FLOAT lookAtXPosition = camera.GetX() + lookAtDistance * cos((FLOAT)D3DXToRadian(camera.GetXZAngle()));
FLOAT lookAtYPosition = camera.GetY() + lookAtDistance * sin((FLOAT)D3DXToRadian(camera.GetYZAngle()));
FLOAT lookAtZPosition = camera.GetZ() + lookAtDistance * (sin((FLOAT)D3DXToRadian(camera.GetXZAngle())) * cos((FLOAT)D3DXToRadian(camera.GetYZAngle())));
D3DXVECTOR3 lookAtPosition(lookAtXPosition, lookAtYPosition, lookAtZPosition);
D3DXVECTOR3 upDirection(0, 1, 0);
D3DXMatrixLookAtLH(&m_viewTransformationMatrix,
&cameraPosition,
&lookAtPosition,
&upDirection);
RECT windowDimensions = GetWindowDimensions();
FLOAT width = (FLOAT)(windowDimensions.right - windowDimensions.left);
FLOAT height = (FLOAT)(windowDimensions.bottom - windowDimensions.top);
// Set the projection matrix.
D3DXMatrixIdentity(&m_projectionMatrix);
D3DXMatrixPerspectiveFovLH(&m_projectionMatrix,
(FLOAT)(D3DXToRadian(45)), // Horizontal field of view
width / height, // Aspect ratio
1.0f, // Near view-plane
100.0f); // Far view-plane
Here is how the final matrix is set:
D3DXMATRIX finalMatrix = m_worldTransformationMatrix * m_viewTransformationMatrix * m_projectionMatrix;
// Set the new values for the constant buffer
mp_deviceContext->UpdateSubresource(mp_constantBuffer, 0, 0, &finalMatrix, 0, 0);
And finally, here is the vertex shader that uses the constant buffer:
VOut VShader(float4 position : POSITION, float4 color : COLOR, float2 texcoord : TEXCOORD)
{
    VOut output;
    output.color = color;
    output.texcoord = texcoord;
    output.position = mul(position, finalMatrix); // Transform the vertex from 3D to 2D
    return output;
}
Do you see what I'm doing wrong? If you need more information on my code, feel free to ask: I really want this to work.
Thanks!
The problem is that you are setting finalMatrix with a row-major matrix, but HLSL expects a column-major matrix by default. The solution is to use D3DXMatrixTranspose before updating the constants, or to declare the matrix as row_major in the HLSL file like this:
cbuffer ConstantBuffer
{
    row_major float4x4 finalMatrix;
}
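The transpose variant would look roughly like this, building on the question's own variables (D3DXMatrixTranspose writes the transposed matrix to its first argument):

D3DXMATRIX finalMatrix = m_worldTransformationMatrix * m_viewTransformationMatrix * m_projectionMatrix;
D3DXMATRIX transposedMatrix;
D3DXMatrixTranspose(&transposedMatrix, &finalMatrix); // row-major -> column-major for HLSL
mp_deviceContext->UpdateSubresource(mp_constantBuffer, 0, 0, &transposedMatrix, 0, 0);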

How to draw a circle in OpenGL ES

Here is the part of my code that should show a circle on screen, but unfortunately the circle does not appear.
glClearColor(0, 0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT);
glPushMatrix();
glLoadIdentity();
glColor3f(0.0f, 1.0f, 0.0f);
glBegin(GL_LINE_LOOP);
const float DEG2RAD = 3.14159/180;
for (int i = 0; i < 360; i++)
{
    float degInRad = i*DEG2RAD;
    glVertex2f(cos(degInRad)*8, sin(degInRad)*8);
}
glEnd();
glFlush();
I don't understand; the code seems OK, but the circle is not coming up on screen.
Your circle is too big. The default view volume covers the range [(-1, -1), (1, 1)], so a circle of radius 8 lies entirely outside the visible area.
BTW, you don't need 360 segments. About 30 is usually adequate, depending on how smooth you want it.
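For example, shrinking the radius so the loop fits inside the default [-1, 1] range (the 0.8 radius and the 12 degree step giving 30 segments are arbitrary choices):

glBegin(GL_LINE_LOOP);
for (int i = 0; i < 360; i += 12)  /* 30 segments */
{
    float degInRad = i*DEG2RAD;
    glVertex2f(cos(degInRad)*0.8f, sin(degInRad)*0.8f);  /* radius 0.8 fits on screen */
}
glEnd();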

How to display a raw YUV frame in a Cocoa OpenGL program

I have been assigned the task of writing a program that takes a sample raw YUV file and displays it in a Cocoa OpenGL program.
I am an intern at my job and I have little or no clue how to start. I have been reading Wikipedia and articles on YUV, but I couldn't find any good source code on how to open a raw YUV file, extract the data, convert it into RGB, and display it in the view window.
Essentially, I need help with the following aspects of the task:
- how to extract the YUV data from the sample YUV file
- how to convert the YUV data into the RGB color space
- how to display the RGB data in OpenGL (this one I think I can figure out with time, but I really need help with the first two points)
Please either tell me which classes to use, or point me to places where I can learn about YUV graphics/video display.
I've done this with YUV frames captured from a CCD camera. Unfortunately, there are a number of different YUV formats. I believe the one that Apple uses for the GL_YCBCR_422_APPLE texture format is technically 2VUY422. To convert an image from a YUV422 frame generated by an IIDC Firewire camera to 2VUY422, I've used the following:
void yuv422_2vuy422(const unsigned char *theYUVFrame, unsigned char *the422Frame, const unsigned int width, const unsigned int height)
{
    int i = 0, j = 0;
    unsigned int numPixels = width * height;
    unsigned int totalNumberOfPasses = numPixels * 2;
    register unsigned int y0, y1, y2, y3, u0, u2, v0, v2;
    while (i < (totalNumberOfPasses))
    {
        u0 = theYUVFrame[i++]-128;
        y0 = theYUVFrame[i++];
        v0 = theYUVFrame[i++]-128;
        y1 = theYUVFrame[i++];
        u2 = theYUVFrame[i++]-128;
        y2 = theYUVFrame[i++];
        v2 = theYUVFrame[i++]-128;
        y3 = theYUVFrame[i++];

        // U0 Y0 V0 Y1 U2 Y2 V2 Y3
        // Remap the values to 2VUY (YUYS?) (Y422) colorspace for OpenGL
        // Y0 U Y1 V Y2 U Y3 V

        // IIDC cameras are full-range y=[0..255], u,v=[-127..+127], where display is "video range" (y=[16..240], u,v=[16..236])
        the422Frame[j++] = ((y0 * 240) / 255 + 16);
        the422Frame[j++] = ((u0 * 236) / 255 + 128);
        the422Frame[j++] = ((y1 * 240) / 255 + 16);
        the422Frame[j++] = ((v0 * 236) / 255 + 128);
        the422Frame[j++] = ((y2 * 240) / 255 + 16);
        the422Frame[j++] = ((u2 * 236) / 255 + 128);
        the422Frame[j++] = ((y3 * 240) / 255 + 16);
        the422Frame[j++] = ((v2 * 236) / 255 + 128);
    }
}
For efficient display of a YUV video source, you may wish to use Apple's client storage extension, which you can set up using something like the following:
glEnable(GL_TEXTURE_RECTANGLE_EXT);
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, 1);
glTextureRangeAPPLE(GL_TEXTURE_RECTANGLE_EXT, videoImageWidth * videoImageHeight * 2, videoTexture);
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_STORAGE_HINT_APPLE , GL_STORAGE_SHARED_APPLE);
glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, GL_TRUE);
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glTexImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, GL_RGBA, videoImageWidth, videoImageHeight, 0, GL_YCBCR_422_APPLE, GL_UNSIGNED_SHORT_8_8_REV_APPLE, videoTexture);
This lets you quickly change out the data stored within your client-side video texture before each frame to be displayed on the screen.
To draw, you could then use code like the following:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_TEXTURE_2D);
glViewport(0, 0, [self frame].size.width, [self frame].size.height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
NSRect bounds = NSRectFromCGRect([self bounds]);
glOrtho( (GLfloat)NSMinX(bounds), (GLfloat)NSMaxX(bounds), (GLfloat)NSMinY(bounds), (GLfloat)NSMaxY(bounds), -1.0, 1.0);
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, 1);
glTexSubImage2D (GL_TEXTURE_RECTANGLE_EXT, 0, 0, 0, videoImageWidth, videoImageHeight, GL_YCBCR_422_APPLE, GL_UNSIGNED_SHORT_8_8_REV_APPLE, videoTexture);
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex2f(0.0f, videoImageHeight);
glTexCoord2f(0.0f, videoImageHeight);
glVertex2f(0.0f, 0.0f);
glTexCoord2f(videoImageWidth, videoImageHeight);
glVertex2f(videoImageWidth, 0.0f);
glTexCoord2f(videoImageWidth, 0.0f);
glVertex2f(videoImageWidth, videoImageHeight);
glEnd();
Adam Rosenfield’s comment is incorrect. On Macs, you can display YCbCr (the digital equivalent to YUV) textures using the GL_YCBCR_422_APPLE texture format, as specified in the APPLE_ycbcr_422 extension.
This answer is not correct, see the other answers and comments. Original answer left below for posterity.
You can't display it directly. You'll need to convert it to an RGB texture. As you may have gathered from Wikipedia, there are a bunch of variations on the YUV color space. Make sure you're using the right one.
For each pixel, the conversion from YUV to RGB is a straightforward linear transformation. You just do the same thing to each pixel independently.
Once you've converted the image to RGB, you can display it by creating a texture. You need to call glGenTextures() to allocate a texture handle, glBindTexture() to bind the texture to the render context, and glTexImage2D() to upload the texture data to the GPU. To render it, you again call glBindTexture(), followed by the rendering of a quad with texture coordinates set up properly.
// parameters: image: pointer to raw YUV input data
// width: image width (must be a power of 2)
// height: image height (must be a power of 2)
// returns: a handle to the resulting RGB texture
GLuint makeTextureFromYUV(const float *image, int width, int height)
{
    float *rgbImage = (float *)malloc(width * height * 3 * sizeof(float)); // check for NULL
    float *rgbImagePtr = rgbImage;

    // convert from YUV to RGB (floats used here for simplicity; it's a little
    // trickier with 8-bit ints)
    int y, x;
    for(y = 0; y < height; y++)
    {
        for(x = 0; x < width; x++)
        {
            float Y = *image++;
            float U = *image++;
            float V = *image++;
            *rgbImagePtr++ = Y                + 1.13983f * V;  // R
            *rgbImagePtr++ = Y - 0.39465f * U - 0.58060f * V;  // G
            *rgbImagePtr++ = Y + 2.03211f * U;                 // B
        }
    }

    // create texture
    GLuint texture;
    glGenTextures(1, &texture);

    // bind texture to render context
    glBindTexture(GL_TEXTURE_2D, texture);

    // upload texture data
    glTexImage2D(GL_TEXTURE_2D, 0, 3, width, height, 0, GL_RGB, GL_FLOAT, rgbImage);

    // don't use mipmapping (since we're not creating any mipmaps); the default
    // minification filter uses mipmapping. Use linear filtering for minification
    // and magnification.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // free data (it's now been copied onto the GPU) and return texture handle
    free(rgbImage);
    return texture;
}
To render:
glBindTexture(GL_TEXTURE_2D, texture);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex3f( 0.0f, 0.0f, 0.0f);
glTexCoord2f(1.0f, 0.0f); glVertex3f(64.0f, 0.0f, 0.0f);
glTexCoord2f(1.0f, 1.0f); glVertex3f(64.0f, 64.0f, 0.0f);
glTexCoord2f(0.0f, 1.0f); glVertex3f( 0.0f, 64.0f, 0.0f);
glEnd();
And don't forget to call glEnable(GL_TEXTURE_2D) at some point during initialization, and call glDeleteTextures(1, &texture) during shutdown.
