My triangle doesn't render when I use OpenGL Core Profile 3.2 - Cocoa

I have a Cocoa (OS X) project that is currently very simple; I'm just trying to grasp the general concepts behind using OpenGL. I was able to get a triangle to display in my view, but when I went to write my vertex and fragment shaders, I realized I was running the legacy OpenGL profile. So I switched to the OpenGL 3.2 Core Profile by setting the profile attribute in the pixel format of the view in question before generating the context, but now the triangle doesn't render, even without my vertex or fragment shaders.
I have a controller class for the view that's instantiated in the nib. On -awakeFromNib it sets up the pixel format and the context:
NSOpenGLPixelFormatAttribute attr[] =
{
NSOpenGLPFAOpenGLProfile, NSOpenGLProfileVersion3_2Core,
0
};
NSOpenGLPixelFormat *glPixForm = [[NSOpenGLPixelFormat alloc] initWithAttributes:attr];
[self.mainView setPixelFormat:glPixForm];
self.glContext = [self.mainView openGLContext];
Then I generate the VAO:
glGenVertexArrays(1, &vertexArrayID);
glBindVertexArray(vertexArrayID);
Then the VBO:
glGenBuffers(1, &buffer);
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);
The actual data for that buffer, g_vertex_buffer_data, is defined as follows:
static const GLfloat g_vertex_buffer_data[] = {
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
0.0f, 1.0f, 0.0f,
};
Here's the code for actually drawing:
[_glContext setView:self.mainView];
[_glContext makeCurrentContext];
glViewport(0, 0, [self.mainView frame].size.width, [self.mainView frame].size.height);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, self.vertexBuffer);
glVertexAttribPointer(
    0,          // attribute index 0
    3,          // 3 components per vertex (x, y, z)
    GL_FLOAT,   // type
    GL_FALSE,   // not normalized
    0,          // stride (tightly packed)
    (void*)0    // offset into the bound buffer
);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glDrawArrays(GL_TRIANGLES, 0, 3); // Starting from vertex 0; 3 vertices total -> 1 triangle
glDisableVertexAttribArray(0);
glFlush();
This code draws the triangle fine if I comment out the NSOpenGLPFAOpenGLProfile, NSOpenGLProfileVersion3_2Core pair in the NSOpenGLPixelFormatAttribute array, but as soon as I enable the OpenGL 3.2 Core Profile it just displays black. Can anyone tell me what I'm doing wrong here?
EDIT: This issue happens whether I turn my vertex and fragment shaders on or not, but here are my shaders in case they're helpful:
Vertex shader:
#version 150
in vec3 position;
void main() {
gl_Position.xyz = position;
}
Fragment shader:
#version 150
out vec3 color;
void main() {
color = vec3(1,0,0);
}
And right before linking the program, I make this call to bind the attribute location:
glBindAttribLocation(programID, 0, "position");
EDIT 2:
I don't know if this helps at all, but I just stepped through my program calling glGetError() after each GL call, and everything looks fine until I actually call glDrawArrays(), which returns GL_INVALID_OPERATION. I'm trying to figure out why this could be occurring, but I'm still having no luck.
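In case it's useful, a generic helper for this kind of glGetError() stepping looks something like the following (nothing here is specific to my project; it just needs <stdio.h> and the GL headers already included in the file):
static void checkGLError(const char *label)
{
    GLenum err = glGetError();                  // GL_NO_ERROR means nothing has failed since the last check
    while (err != GL_NO_ERROR) {
        fprintf(stderr, "GL error 0x%04X after %s\n", err, label);
        err = glGetError();                     // errors can queue up, so drain them all
    }
}
// e.g. checkGLError("glDrawArrays");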

I figured this out, and it's sadly a very stupid mistake on my part.
I think part of the issue is that you need a vertex shader and a fragment shader when using the 3.2 Core Profile; you can't just render without them. The reason it wasn't working even with my shaders was...wait for it...after linking my shader program, I forgot to store the programID in the ivar in my class, so later when I called glUseProgram() I was just calling it with a zero parameter.
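For anyone hitting the same wall, here is a minimal sketch of the pattern that was missing; vertexShaderID, fragmentShaderID and _programID are placeholder names, not the exact code from my project:
GLuint programID = glCreateProgram();
glAttachShader(programID, vertexShaderID);
glAttachShader(programID, fragmentShaderID);
glBindAttribLocation(programID, 0, "position");      // must happen before glLinkProgram
glLinkProgram(programID);

GLint linked = GL_FALSE;
glGetProgramiv(programID, GL_LINK_STATUS, &linked);  // verify the link actually succeeded

_programID = programID;                              // keep the handle in the ivar; losing it was my whole bug

// ...later, in the drawing code, before glDrawArrays():
glUseProgram(_programID);                            // with 0 here, the subsequent glDrawArrays() returned GL_INVALID_OPERATION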
I guess one of the main sources of confusion was the fact that I expected the 3.2 core profile to work without any vertex or fragment shaders.

Related

Alpha blending in OpenGL ES not working

I'm trying to vary the transparency of a texture drawn onto a quad. The code below works fine, except that the alpha set with glColor4f has no effect. What are the possible reasons for this? Is it likely to be a GL setting somewhere else in the program?
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_COLOR, GL_ONE_MINUS_SRC_ALPHA);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureId);
glColor4f(1.0f, 1.0f, 1.0f, 0.3f);
glVertexAttribPointer(vertexHandle, 3, GL_FLOAT, GL_FALSE, 0, quadVertices);
glVertexAttribPointer(normalHandle, 3, GL_FLOAT, GL_FALSE, 0, quadNormals);
glVertexAttribPointer(textureCoordHandle, 2, GL_FLOAT, GL_FALSE, 0, quadTexCoords);
glEnableVertexAttribArray(vertexHandle);
glEnableVertexAttribArray(normalHandle);
glEnableVertexAttribArray(textureCoordHandle);
glUniformMatrix4fv(mvpMatrixHandle, 1, GL_FALSE, (GLfloat*)&modelViewProjectionButton.data[0] );
glDrawElements(GL_TRIANGLES, NUM_QUAD_INDEX, GL_UNSIGNED_SHORT, quadIndices);
glDisableVertexAttribArray(vertexHandle);
glDisableVertexAttribArray(normalHandle);
glDisableVertexAttribArray(textureCoordHandle);
Edit:
I managed to do it as per the answer below. If anyone's interested, I put a uniform variable called alpha in my shader, like this:
uniform float alpha;
void main()
{
gl_FragColor = texture2D(texSampler2D, texCoord);
gl_FragColor = gl_FragColor * alpha;
}
and then when I'm drawing the scene I use it like this (for example, to set 0.5 alpha):
GLint alphaLocation = glGetUniformLocation(shaderProgramID, "alpha");
glUniform1f(alphaLocation, 0.5);
You are obviously using OpenGL ES 2.0 (because you are using glVertexAttribPointer and glUniformMatrix4fv), which actually makes this a bit puzzling. OpenGL ES 2.0 does not define glColor<N> (...), this was part of the fixed-function API and should not be defined in a compliant OpenGL ES 2.0 implementation.
Even if it is defined, there is no mechanism in GLSL ES to get the "current color" in a shader. Desktop GL has the pre-declared variable gl_Color in compatibility GLSL profiles, but GLSL ES does not.
You will need to use a GLSL uniform if you want to define the color this way instead of using a per-vertex attribute.
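For completeness, a minimal sketch of the uniform route on the application side; the uniform name uColor and the implied fragment-shader declaration ("uniform lowp vec4 uColor;") are assumptions, not part of the asker's code:
glUseProgram(shaderProgramID);                                     // uniforms are set on the currently bound program
GLint colorLoc = glGetUniformLocation(shaderProgramID, "uColor");
if (colorLoc != -1) {
    glUniform4f(colorLoc, 1.0f, 1.0f, 1.0f, 0.3f);                 // white with 0.3 alpha, replacing glColor4f
}
The fragment shader would then multiply the sampled texel by uColor, exactly as the asker's alpha-only version above does with a single float.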

Vertex buffers in OpenGL ES 1.x

I am teaching myself about OpenGL ES and vertex buffer objects (VBOs), and I have written code that is supposed to draw one red triangle, but instead it colours the screen black:
- (void)drawRect:(CGRect)rect {
// Draw a red triangle in the middle of the screen:
glColor4f(1.0f, 0.0f, 0.0f, 1.0f);
// Setup the vertex data:
typedef struct {
float x;
float y;
} Vertex;
const Vertex vertices[] = {{50,50}, {50,150}, {150,50}};
const short indices[3] = {0,1,2};
glGenBuffers(1, &vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
NSLog(@"drawrect");
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, 0);
// The following line does the actual drawing to the render buffer:
glDrawElements(GL_TRIANGLE_STRIP, 3, GL_UNSIGNED_SHORT, indices);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, framebuffer);
[eAGLcontext presentRenderbuffer:GL_RENDERBUFFER_OES];
}
Here vertexBuffer is of type GLuint. What is going wrong? Thanks for your help.
Your vertices don't have a Z component; try {{50,50,-100}, {50,150,-100}, {150,50,-100}} (your camera by default looks down the Z axis, so putting the triangle at negative Z should put it on screen). If you still can't see it, try smaller numbers; I'm not sure what your near and far clipping distances are, and if they're not set I don't know what the defaults are. This might not be the only issue, but it's the only one I can see from a quick look.
You need to add
glViewport(0, 0, 320, 480);
where you create the frame buffer and set up the context.
And replace your call to glDrawElements with
glDrawArrays(GL_TRIANGLE_STRIP, ...);
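Putting the two answers together, a rough sketch of the whole path might look like this; the -100 Z values and the 320x480 viewport come from the answers above, while the glOrthof projection is an extra assumption so that pixel-style coordinates like 50..150 actually land on screen:
// Once, where the framebuffer and context are set up:
glViewport(0, 0, 320, 480);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0.0f, 320.0f, 0.0f, 480.0f, -1000.0f, 1000.0f);  // map pixel-style coordinates to the view (assumption)
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

// Per frame:
typedef struct {
    float x;
    float y;
    float z;                                    // Z component added, per the first answer
} Vertex;
const Vertex vertices[] = {{50, 50, -100}, {50, 150, -100}, {150, 50, -100}};

glColor4f(1.0f, 0.0f, 0.0f, 1.0f);              // red
glGenBuffers(1, &vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, 0);             // 3 floats per vertex, matching the struct above
glDrawArrays(GL_TRIANGLE_STRIP, 0, 3);          // per the second answer, no index buffer needed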

Rendering 2D sprites in a 3D world?

How do I render 2D sprites in OpenGL, given that I have a PNG of the sprite? See the images as an example of the effect I'd like to achieve. Also, I would like to overlay weapons on the screen, like the rifle in the bottom image. Does anyone know how I would achieve these two effects? Any help is greatly appreciated.
In 3D terms, this is called a "billboard". A billboard is completely flat 2D plane with a texture on it and it always faces the camera.
See here for a pure OpenGL implementation: http://nehe.gamedev.net/data/articles/article.asp?article=19
Just about any 3D engine should be able to do them by default. Ogre3D can do it, for instance.
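If you want to roll the quad yourself rather than use an engine, here is a hedged sketch of the usual CPU-side trick (glm types, as in the tutorial code further down this page; view, center, w and h are assumed names for the view matrix, sprite centre and sprite size):
#include <glm/glm.hpp>

// Span a camera-facing quad around `center` using the camera's right/up vectors.
static void billboardCorners(const glm::mat4 &view, const glm::vec3 &center,
                             float w, float h, glm::vec3 out[4])
{
    // The first two rows of the view matrix are the camera's right and up
    // vectors in world space (glm is column-major, so m[col][row]).
    glm::vec3 right(view[0][0], view[1][0], view[2][0]);
    glm::vec3 up   (view[0][1], view[1][1], view[2][1]);

    out[0] = center - right * (0.5f * w) - up * (0.5f * h);  // bottom-left
    out[1] = center + right * (0.5f * w) - up * (0.5f * h);  // bottom-right
    out[2] = center - right * (0.5f * w) + up * (0.5f * h);  // top-left
    out[3] = center + right * (0.5f * w) + up * (0.5f * h);  // top-right
}
Upload the four corners as a GL_TRIANGLE_STRIP with the sprite texture applied; the opengl-tutorial answer further down does the same thing in the vertex shader instead.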
a) For the first case:
That's not really 2D sprites. Those men seem to be rendered as single quads with a texture with some kind of transparency (either alpha test or alpha blending).
Anyway, even a single quad can still be considered a 3D object, so for such situation you might want to treat it as one: track its translation and rotation and render it in the same way as any other 3D object.
b) For the second case:
If you want the gun (a 2D picture, I presume) to be rendered in the same place without any perspective transformation, then you can use the same technique one uses for drawing the GUI, etc. Have a look at my post here:
2D overlay on a 3D scene
For the overlaying of the 2D weapon, you can use glOrtho for the camera view.
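For example, a hedged sketch of such an overlay pass in fixed-function GL (screenWidth and screenHeight are assumed names; the 3D scene is drawn first, then the weapon quad in pixel coordinates):
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(0, screenWidth, screenHeight, 0, -1, 1);  // pixel coordinates, origin at the top left
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glDisable(GL_DEPTH_TEST);                         // the overlay should ignore scene depth

// ...draw the textured weapon quad here, in pixel coordinates...

glEnable(GL_DEPTH_TEST);
glMatrixMode(GL_PROJECTION);
glPopMatrix();                                    // restore the 3D projection
glMatrixMode(GL_MODELVIEW);
glPopMatrix();                                    // restore the 3D modelview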
You create a 3D quad and map the .png-based texture to it. You can make the quad face whatever direction you want, as in the first picture, or make it always face the camera (like a billboard, mentioned by Svenstaro), as in your second picture. Though, to be fair, I am sure that second picture just blitted the image (with some scaling) directly into the software-created framebuffer (that looks like Wolf3D tech, i.e. software rendering).
Take a look at OpenGL Point Sprites:
http://www.informit.com/articles/article.aspx?p=770639&seqNum=7
They're especially useful for particle systems, but they may do the trick for your purposes.
Check this tutorial about billboards. I think you'll find it useful.
http://www.lighthouse3d.com/opengl/billboarding/
opengl-tutorial has:
a tutorial focused on energy bars: http://www.opengl-tutorial.org/intermediate-tutorials/billboards-particles/billboards/
OpenGL 3.3+, WTF-licensed code that just works: https://github.com/opengl-tutorials/ogl/blob/71cad106cefef671907ba7791b28b19fa2cc034d/tutorial18_billboards_and_particles/tutorial18_billboards.cpp
Code:
#include <stdio.h>
#include <stdlib.h>
#include <vector>
#include <algorithm>
#include <GL/glew.h>
#include <glfw3.h>
GLFWwindow* window;
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtx/norm.hpp>
using namespace glm;
#include <common/shader.hpp>
#include <common/texture.hpp>
#include <common/controls.hpp>
#define DRAW_CUBE // Comment or uncomment this to simplify the code
int main( void )
{
// Initialise GLFW
if( !glfwInit() )
{
fprintf( stderr, "Failed to initialize GLFW\n" );
getchar();
return -1;
}
glfwWindowHint(GLFW_SAMPLES, 4);
glfwWindowHint(GLFW_RESIZABLE,GL_FALSE);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); // To make MacOS happy; should not be needed
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
// Open a window and create its OpenGL context
window = glfwCreateWindow( 1024, 768, "Tutorial 18 - Billboards", NULL, NULL);
if( window == NULL ){
fprintf( stderr, "Failed to open GLFW window. If you have an Intel GPU, they are not 3.3 compatible. Try the 2.1 version of the tutorials.\n" );
getchar();
glfwTerminate();
return -1;
}
glfwMakeContextCurrent(window);
// Initialize GLEW
glewExperimental = true; // Needed for core profile
if (glewInit() != GLEW_OK) {
fprintf(stderr, "Failed to initialize GLEW\n");
getchar();
glfwTerminate();
return -1;
}
// Ensure we can capture the escape key being pressed below
glfwSetInputMode(window, GLFW_STICKY_KEYS, GL_TRUE);
// Hide the mouse and enable unlimited movement
glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED);
// Set the mouse at the center of the screen
glfwPollEvents();
glfwSetCursorPos(window, 1024/2, 768/2);
// Dark blue background
glClearColor(0.0f, 0.0f, 0.4f, 0.0f);
// Enable depth test
glEnable(GL_DEPTH_TEST);
// Accept fragment if it closer to the camera than the former one
glDepthFunc(GL_LESS);
GLuint VertexArrayID;
glGenVertexArrays(1, &VertexArrayID);
glBindVertexArray(VertexArrayID);
// Create and compile our GLSL program from the shaders
GLuint programID = LoadShaders( "Billboard.vertexshader", "Billboard.fragmentshader" );
// Vertex shader
GLuint CameraRight_worldspace_ID = glGetUniformLocation(programID, "CameraRight_worldspace");
GLuint CameraUp_worldspace_ID = glGetUniformLocation(programID, "CameraUp_worldspace");
GLuint ViewProjMatrixID = glGetUniformLocation(programID, "VP");
GLuint BillboardPosID = glGetUniformLocation(programID, "BillboardPos");
GLuint BillboardSizeID = glGetUniformLocation(programID, "BillboardSize");
GLuint LifeLevelID = glGetUniformLocation(programID, "LifeLevel");
GLuint TextureID = glGetUniformLocation(programID, "myTextureSampler");
GLuint Texture = loadDDS("ExampleBillboard.DDS");
// The VBO containing the 4 vertices of the particles.
static const GLfloat g_vertex_buffer_data[] = {
-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, 0.5f, 0.0f,
0.5f, 0.5f, 0.0f,
};
GLuint billboard_vertex_buffer;
glGenBuffers(1, &billboard_vertex_buffer);
glBindBuffer(GL_ARRAY_BUFFER, billboard_vertex_buffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_DYNAMIC_DRAW);
#ifdef DRAW_CUBE
// Everything here comes from Tutorial 4
GLuint cubeProgramID = LoadShaders( "../tutorial04_colored_cube/TransformVertexShader.vertexshader", "../tutorial04_colored_cube/ColorFragmentShader.fragmentshader" );
GLuint cubeMatrixID = glGetUniformLocation(cubeProgramID, "MVP");
static const GLfloat g_cube_vertex_buffer_data[] = { -1.0f,-1.0f,-1.0f,-1.0f,-1.0f, 1.0f,-1.0f, 1.0f, 1.0f,1.0f, 1.0f,-1.0f,-1.0f,-1.0f,-1.0f,-1.0f, 1.0f,-1.0f,1.0f,-1.0f, 1.0f,-1.0f,-1.0f,-1.0f,1.0f,-1.0f,-1.0f,1.0f, 1.0f,-1.0f,1.0f,-1.0f,-1.0f,-1.0f,-1.0f,-1.0f,-1.0f,-1.0f,-1.0f,-1.0f, 1.0f, 1.0f,-1.0f, 1.0f,-1.0f,1.0f,-1.0f, 1.0f,-1.0f,-1.0f, 1.0f,-1.0f,-1.0f,-1.0f,-1.0f, 1.0f, 1.0f,-1.0f,-1.0f, 1.0f,1.0f,-1.0f, 1.0f,1.0f, 1.0f, 1.0f,1.0f,-1.0f,-1.0f,1.0f, 1.0f,-1.0f,1.0f,-1.0f,-1.0f,1.0f, 1.0f, 1.0f,1.0f,-1.0f, 1.0f,1.0f, 1.0f, 1.0f,1.0f, 1.0f,-1.0f,-1.0f, 1.0f,-1.0f,1.0f, 1.0f, 1.0f,-1.0f, 1.0f,-1.0f,-1.0f, 1.0f, 1.0f,1.0f, 1.0f, 1.0f,-1.0f, 1.0f, 1.0f,1.0f,-1.0f, 1.0f};
static const GLfloat g_cube_color_buffer_data[] = { 0.583f, 0.771f, 0.014f,0.609f, 0.115f, 0.436f,0.327f, 0.483f, 0.844f,0.822f, 0.569f, 0.201f,0.435f, 0.602f, 0.223f,0.310f, 0.747f, 0.185f,0.597f, 0.770f, 0.761f,0.559f, 0.436f, 0.730f,0.359f, 0.583f, 0.152f,0.483f, 0.596f, 0.789f,0.559f, 0.861f, 0.639f,0.195f, 0.548f, 0.859f,0.014f, 0.184f, 0.576f,0.771f, 0.328f, 0.970f,0.406f, 0.615f, 0.116f,0.676f, 0.977f, 0.133f,0.971f, 0.572f, 0.833f,0.140f, 0.616f, 0.489f,0.997f, 0.513f, 0.064f,0.945f, 0.719f, 0.592f,0.543f, 0.021f, 0.978f,0.279f, 0.317f, 0.505f,0.167f, 0.620f, 0.077f,0.347f, 0.857f, 0.137f,0.055f, 0.953f, 0.042f,0.714f, 0.505f, 0.345f,0.783f, 0.290f, 0.734f,0.722f, 0.645f, 0.174f,0.302f, 0.455f, 0.848f,0.225f, 0.587f, 0.040f,0.517f, 0.713f, 0.338f,0.053f, 0.959f, 0.120f,0.393f, 0.621f, 0.362f,0.673f, 0.211f, 0.457f,0.820f, 0.883f, 0.371f,0.982f, 0.099f, 0.879f};
GLuint cubevertexbuffer;
glGenBuffers(1, &cubevertexbuffer);
glBindBuffer(GL_ARRAY_BUFFER, cubevertexbuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(g_cube_vertex_buffer_data), g_cube_vertex_buffer_data, GL_DYNAMIC_DRAW);
GLuint cubecolorbuffer;
glGenBuffers(1, &cubecolorbuffer);
glBindBuffer(GL_ARRAY_BUFFER, cubecolorbuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(g_cube_color_buffer_data), g_cube_color_buffer_data, GL_DYNAMIC_DRAW);
#endif
double lastTime = glfwGetTime();
do
{
// Clear the screen
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
double currentTime = glfwGetTime();
double delta = currentTime - lastTime;
lastTime = currentTime;
computeMatricesFromInputs();
glm::mat4 ProjectionMatrix = getProjectionMatrix();
glm::mat4 ViewMatrix = getViewMatrix();
#ifdef DRAW_CUBE
// Again : this is just Tutorial 4 !
glDisable(GL_BLEND);
glUseProgram(cubeProgramID);
glm::mat4 cubeModelMatrix(1.0f);
cubeModelMatrix = glm::scale(cubeModelMatrix, glm::vec3(0.2f, 0.2f, 0.2f));
glm::mat4 cubeMVP = ProjectionMatrix * ViewMatrix * cubeModelMatrix;
glUniformMatrix4fv(cubeMatrixID, 1, GL_FALSE, &cubeMVP[0][0]);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, cubevertexbuffer);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0 );
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, cubecolorbuffer);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void*)0 );
glDrawArrays(GL_TRIANGLES, 0, 12*3);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
#endif
// We will need the camera's position in order to sort the particles
// w.r.t the camera's distance.
// There should be a getCameraPosition() function in common/controls.cpp,
// but this works too.
glm::vec3 CameraPosition(glm::inverse(ViewMatrix)[3]);
glm::mat4 ViewProjectionMatrix = ProjectionMatrix * ViewMatrix;
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// Use our shader
glUseProgram(programID);
// Bind our texture in Texture Unit 0
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, Texture);
// Set our "myTextureSampler" sampler to use Texture Unit 0
glUniform1i(TextureID, 0);
// This is the only interesting part of the tutorial.
// This is equivalent to multiplying (1,0,0) and (0,1,0) by inverse(ViewMatrix).
// ViewMatrix is orthogonal (it was made this way),
// so its inverse is also its transpose,
// and transposing a matrix is "free" (inversing is slooow)
glUniform3f(CameraRight_worldspace_ID, ViewMatrix[0][0], ViewMatrix[1][0], ViewMatrix[2][0]);
glUniform3f(CameraUp_worldspace_ID , ViewMatrix[0][1], ViewMatrix[1][1], ViewMatrix[2][1]);
glUniform3f(BillboardPosID, 0.0f, 0.5f, 0.0f); // The billboard will be just above the cube
glUniform2f(BillboardSizeID, 1.0f, 0.125f); // and 1m*12cm, because it matches its 256*32 resolution =)
// Generate some fake life level and send it to glsl
float LifeLevel = sin(currentTime)*0.1f + 0.7f;
glUniform1f(LifeLevelID, LifeLevel);
glUniformMatrix4fv(ViewProjMatrixID, 1, GL_FALSE, &ViewProjectionMatrix[0][0]);
// 1st attribute buffer : vertices
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, billboard_vertex_buffer);
glVertexAttribPointer(
0, // attribute. No particular reason for 0, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
// Draw the billboard !
// This draws a triangle_strip which looks like a quad.
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableVertexAttribArray(0);
// Swap buffers
glfwSwapBuffers(window);
glfwPollEvents();
} // Check if the ESC key was pressed or the window was closed
while( glfwGetKey(window, GLFW_KEY_ESCAPE ) != GLFW_PRESS &&
glfwWindowShouldClose(window) == 0 );
// Cleanup VBO and shader
glDeleteBuffers(1, &billboard_vertex_buffer);
glDeleteProgram(programID);
glDeleteTextures(1, &Texture); // delete the texture object itself, not the sampler uniform location
glDeleteVertexArrays(1, &VertexArrayID);
#ifdef DRAW_CUBE
glDeleteProgram(cubeProgramID);
glDeleteBuffers(1, &cubevertexbuffer); // these are VBOs, so glDeleteBuffers, not glDeleteVertexArrays
glDeleteBuffers(1, &cubecolorbuffer);
#endif
// Close OpenGL window and terminate GLFW
glfwTerminate();
return 0;
}
Tested on Ubuntu 15.10.
Axis oriented version of this question: https://gamedev.stackexchange.com/questions/35946/how-do-i-implement-camera-axis-aligned-billboards Here we have done a viewpoint oriented billboard.

Why am I not able to attach this texture uniform to my GLSL fragment shader?

In my Mac application, I define a rectangular texture based on YUV 4:2:2 data from an attached camera. Using standard vertex and texture coordinates, I can draw this to a rectangular area on the screen without any problems.
However, I would like to use a GLSL fragment shader to process these image frames on the GPU, and am having trouble passing in the rectangular video texture as a uniform to the fragment shader. When I attempt to do so, the texture simply reads as black.
The shader program compiles, links, and passes validation. I am receiving the proper address for the uniform from the shader program. Other uniforms, such as floating point values, pass in correctly and the fragment shader responds to changes in these values. The fragment shader receives the correct texture coordinates. I've also sprinkled my code liberally with glGetError() and seen no errors anywhere.
The vertex shader is as follows:
void main()
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
gl_FrontColor = gl_Color;
gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
}
and the fragment shader is as follows:
uniform sampler2D videoFrame;
void main()
{
gl_FragColor = texture2D(videoFrame, gl_TexCoord[0].st);
}
This should simply display the texture on my rectangular geometry.
The relevant drawing code is as follows:
static const GLfloat squareVertices[] = {
-1.0f, -1.0f,
1.0f, -1.0f,
-1.0f, 1.0f,
1.0f, 1.0f,
};
const GLfloat textureVertices[] = {
0.0, videoImageSize.height,
videoImageSize.width, videoImageSize.height,
0.0, 0.0,
videoImageSize.width, 0.0
};
CGLSetCurrentContext(glContext);
if(!readyToDraw)
{
[self initGL];
readyToDraw = YES;
}
glViewport(0, 0, (GLfloat)self.bounds.size.width, (GLfloat)self.bounds.size.height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glClearColor(0.5f, 0.5f, 0.5f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_TEXTURE_2D);
glGenTextures(1, &textureName);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, textureName);
glTexImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, GL_RGBA, videoImageSize.width, videoImageSize.height, 0, GL_YCBCR_422_APPLE, GL_UNSIGNED_SHORT_8_8_REV_APPLE, videoTexture);
glUseProgram(filterProgram);
glUniform1i(uniforms[UNIFORM_VIDEOFRAME], 0);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, squareVertices);
glTexCoordPointer(2, GL_FLOAT, 0, textureVertices);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
[super drawInCGLContext:glContext pixelFormat:pixelFormat forLayerTime:interval displayTime:timeStamp];
glDeleteTextures(1, &textureName);
This code resides within a CAOpenGLLayer, where the superclass's -drawInCGLContext:pixelFormat:forLayerTime:displayTime: simply runs glFlush().
The uniform address is read using code like the following:
uniforms[UNIFORM_VIDEOFRAME] = glGetUniformLocation(filterProgram, "videoFrame");
As I said, if I comment out the glUseProgram() and glUniform1i() lines, this textured rectangle draws properly. Leaving them in leads to a black rectangle being drawn.
What could be preventing my texture uniform from being passed into my fragment shader?
Not sure about the GLSL version you're using, but from 1.40 upwards there's the type sampler2DRect specifically for accessing non-power-of-two (rectangle) textures. That might be what you're looking for; however, I don't know how rectangular textures were handled before GLSL 1.40.
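If the context only gives you an older GLSL version, the same sampler type is available through the ARB_texture_rectangle extension; a sketch of what the fragment shader above would look like in that case (note that rectangle textures are sampled with non-normalized pixel coordinates, which matches the pixel-based textureVertices in the question):
#extension GL_ARB_texture_rectangle : enable
uniform sampler2DRect videoFrame;
void main()
{
    gl_FragColor = texture2DRect(videoFrame, gl_TexCoord[0].st);
}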

How to draw a texture as a 2D background in OpenGL ES 2.0?

I'm just getting started with OpenGL ES 2.0; what I'd like to do is create some simple 2D output. Given a resolution of 480x800, how can I draw a background texture?
[My development environment is Java / Android, so examples directly relating to that would be best, but other languages would be fine.]
Even though you're on Android, I created an iPhone sample application that does this for frames of video coming in. You can download the code for this sample from here. I have a writeup about this application, which does color-based object tracking using live video, that you can read here.
In this application, I draw two triangles to generate a rectangle, then texture that using the following coordinates:
static const GLfloat squareVertices[] = {
-1.0f, -1.0f,
1.0f, -1.0f,
-1.0f, 1.0f,
1.0f, 1.0f,
};
static const GLfloat textureVertices[] = {
1.0f, 1.0f,
1.0f, 0.0f,
0.0f, 1.0f,
0.0f, 0.0f,
};
To pass through the video frame as a texture, I use a simple program with the following vertex shader:
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
varying vec2 textureCoordinate;
void main()
{
gl_Position = position;
textureCoordinate = inputTextureCoordinate.xy;
}
and the following fragment shader:
varying highp vec2 textureCoordinate;
uniform sampler2D videoFrame;
void main()
{
gl_FragColor = texture2D(videoFrame, textureCoordinate);
}
Drawing is a simple matter of using the right program:
glUseProgram(directDisplayProgram);
setting the texture uniform:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
glUniform1i(uniforms[UNIFORM_VIDEOFRAME], 0);
setting the attributes:
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, textureVertices);
glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);
and then drawing the triangles:
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
You don't really draw a background; instead, you draw a rectangle (or, more precisely, two triangles forming a rectangle) and apply a texture to it. This isn't any different from drawing any other object on screen.
There are plenty of places showing how this is done, maybe there's even an android example project showing this.
The tricky part is getting something to display in front of or behind something else. For this to work, you need to set up a depth buffer and enable depth testing (glEnable(GL_DEPTH_TEST)). And your vertices need to have a Z coordinate (and you need to tell glVertexAttribPointer that your vertices are made up of three values, not two).
If you don't do that, objects will be rendered in the order their glDrawElements() functions are called (meaning whichever you draw last will end up obscuring the rest).
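A tiny sketch of that setup (shown as C-style GL calls; on Android they map one-to-one to the GLES20 static methods, and the EGL/surface configuration must actually have allocated a depth buffer):
glEnable(GL_DEPTH_TEST);                              // fragments closer to the camera win
glDepthFunc(GL_LEQUAL);                               // GL_LESS is the default; GL_LEQUAL also works
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);   // clear the depth buffer every frame as well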
My advice is to not have a background image or do anything fancy until you get the hang of it. OpenGL ES 2.0 has kind of a steep learning curve, and tutorials on ES 1.x don't really help with getting 3D to work because they can use helper functions like gluPerspective, which 2.0 just doesn't have. Start with creating a triangle on a background of nothing. Next, make it a square. Then, if you want to go fancy already, add a texture. Play with positions. See what happens when you change the Z value of your vertices. (Hint: Not a lot, if you don't have depth testing enabled. And even then, if you don't have perspective projection, objects won't get smaller the farther they are away, so it will still seem as if nothing happened)
After a few days, it stops being so damn frustrating, and you finally "get it", mostly.
