OpenGL Immediate Mode textures not working - macOS

I am attempting to build a simple project using immediate mode textures.
When I render, the GL color shows up rather than the texture. I've searched around for solutions but found no meaningful difference between online examples and my code.
I've reduced it to the minimal failing example below. If my understanding is correct, it should produce a textured quad with corners of black, red, green, and blue. Instead, it appears solid purple, as if the texture is being ignored completely. What am I doing wrong?
#include <glut.h>

GLuint tex;

void displayFunc() {
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);

    glBegin(GL_TRIANGLE_STRIP);
    glColor3f(0.5, 0, 1);
    glTexCoord2f(0.0, 0.0);
    glVertex2f(-1.0, -1.0);
    glTexCoord2f(1.0, 0.0);
    glVertex2f(1.0, -1.0);
    glTexCoord2f(0.0, 1.0);
    glVertex2f(-1.0, 1.0);
    glTexCoord2f(1.0, 1.0);
    glVertex2f(1.0, 1.0);
    glEnd();

    glutSwapBuffers();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowPosition(0, 0);
    glutInitWindowSize(640, 480);
    glutCreateWindow("Test");
    glutDisplayFunc(displayFunc);

    GLubyte textureData[] = { 0, 0, 0, 255, 0, 0, 0, 255, 0, 0, 0, 255, 255, 255, 0 };
    GLsizei width = 2;
    GLsizei height = 2;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0, GL_RGB8, GL_UNSIGNED_BYTE, (GLvoid*)textureData);

    glutMainLoop();
}
The output: a solid purple quad (screenshot omitted).
Also possibly worth mentioning:
I am building this project on a Mac (running El Capitan 10.11.1)
Graphics card: NVIDIA GeForce GT 650M 1024 MB

You're passing an invalid argument to glTexImage2D(). GL_RGB8 is not one of the supported values for the 7th (format) argument. The correct value is GL_RGB.
Sized formats, like GL_RGB8, are used for the internalFormat argument. In that case, the value defines both the number of components and the size of each component used for the internal storage of the texture.
The format and type parameters describe the data you pass in. Here, format defines only the number of components, while type defines the data type and size of each component.
Whenever you have problems with your OpenGL code, make sure that you call glGetError() to check for errors. In this case, you would see a GL_INVALID_ENUM error caused by your glTexImage2D() call.
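For reference, a minimal sketch of the corrected upload together with an error check (only the seventh argument changes from the question's code; the fprintf is illustrative and assumes <stdio.h> is included):

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, (GLvoid*)textureData);

// With the original GL_RGB8 format argument, glGetError() here returns GL_INVALID_ENUM (0x0500).
GLenum err = glGetError();
if (err != GL_NO_ERROR) {
    fprintf(stderr, "glTexImage2D failed: 0x%04x\n", err);
}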

Related

OpenGLES2: How to load and access a big float array

I have a large WxH float array:
float floatArray[W][H];
I want to access it in a fragment shader and I need to load/access it through a texture due to its size:
vec4 v4 = texture2D(tex, v_texCoord);
//Getting v4.x as floatArray[v_texCoord.x * W][v_texCoord.y * H]
I load the texture like this:
int texturenames[1];
glGenTextures(1, texturenames);
glActiveTexture(GL_TEXTURE0 + texturenames[0]);
glBindTexture(GL_TEXTURE_2D, texturenames[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, w, h, 0, GL_LUMINANCE, GL_FLOAT, floatArray);
glUniform1i(glGetUniformLocation(program_, "tex"), texturenames[0]);
I don't get the right values. Note that the third (internalformat) and seventh (format) parameters of glTexImage2D are GL_LUMINANCE.
void glTexImage2D(GLenum target,
                  GLint level,
                  GLint internalformat,
                  GLsizei width,
                  GLsizei height,
                  GLint border,
                  GLenum format,
                  GLenum type,
                  const GLvoid * data);
How can I load and access a big float array in OpenGLES2?
Short answer - you can't. OpenGL ES 2.0 doesn't support floating point texturing.
Given that you only want a single channel, you could perhaps encode it in an RGBA unorm texture and recover the value algorithmically in the shader, but that sounds horribly expensive on a mobile GPU.
OpenGL ES 3.0 does support float texturing, so that might provide more luck.
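To make the encoding idea concrete, here is a rough sketch of one way to do it on ES 2.0, assuming the floats are already normalized to [0, 1); the helper name and surrounding code are purely illustrative, not a drop-in implementation:

#include <math.h>
#include <stdlib.h>

// Split v (assumed to be in [0, 1)) into four base-256 digits so the shader
// can rebuild it from a single RGBA8 texel.
static void packFloatToRGBA8(float v, GLubyte out[4])
{
    double rest = (double)v;
    for (int i = 0; i < 4; ++i) {
        rest *= 256.0;
        double digit = floor(rest);
        if (digit > 255.0) digit = 255.0;  // guard for values right at the top of the range
        out[i] = (GLubyte)digit;
        rest -= digit;
    }
}

// Upload: one RGBA8 texel per float. GL_NEAREST filtering matters, because
// interpolating the encoded bytes would produce garbage values.
const float *src = &floatArray[0][0];
GLubyte *packed = (GLubyte *)malloc((size_t)W * H * 4);
for (int i = 0; i < W * H; ++i)
    packFloatToRGBA8(src[i], packed + i * 4);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, W, H, 0, GL_RGBA, GL_UNSIGNED_BYTE, packed);
free(packed);

In the fragment shader the value can then be reconstructed with something like dot(texture2D(tex, v_texCoord) * 255.0, vec4(1.0/256.0, 1.0/65536.0, 1.0/16777216.0, 1.0/4294967296.0)), which gives roughly 32 bits of fixed-point precision rather than a true float.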

GLFW simple triangle is lost?

I modified the "Simple example" from GLFW 3.0.4 on Mac OS X 10.8 as an Xcode 4.6 project (it runs fine when unchanged). I have a 2D rectangle drawn by an external library (which draws via shaders). I can see the rectangle, but if I draw the sample triangle (immediate mode) before it, the triangle appears in the first frame and is then lost. If I try to draw it after, the triangle is never seen. I can only see the rectangle, and I don't know what settings/states the library is changing!
I tried to inspect the application with OpenGL Profiler. I stopped before CGLFlushDrawable and could not find the triangle in any of the buffers (front, back, depth, stencil).
Am I doing something obviously wrong? The profiler only allows breakpoints on GL functions. How can I debug this more efficiently and find the problem?
Here are the relevant changed parts of the code:
void glfw2DViewport(GLFWwindow * window) {
    float ratio;
    int width, height;

    glfwGetFramebufferSize(window, &width, &height);
    ratio = width / (float) height;
    glViewport(0, 0, width, height);

    glClearColor(0.8, 0.8, 0.8, 1.0); // Lets see if something black is drawn!!
    glClear(GL_COLOR_BUFFER_BIT);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // eye is at 0,0,0 looking to positive Z, -1(behind) to 1 are clipping planes:
    // https://www.opengl.org/sdk/docs/man2/xhtml/glOrtho.xml
    glOrtho(ratio, -ratio, -1.f, 1.f, 1.0f, -1.f);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    // ----- 2D settings -----
    glfwSwapInterval(1);
    glEnable(GL_SMOOTH);
    glDisable(GL_DEPTH_TEST);
    //glDisable(GL_STENCIL_TEST); // Disabling changed nothing!!
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glLineWidth(5.0f);
    glEnable(GL_LINE_SMOOTH);
    glPointSize(5.0f);
    glEnable(GL_POINT_SMOOTH);
}
int main(void) {
    GLFWwindow* window;

    glfwSetErrorCallback(error_callback);
    if (!glfwInit())
        exit(EXIT_FAILURE);

    window = glfwCreateWindow(640, 480, "Simple example", NULL, NULL);
    if (!window) {
        glfwTerminate();
        exit(EXIT_FAILURE);
    }
    glfwMakeContextCurrent(window);
    glfw2DViewport(window);
    //...

    while (!glfwWindowShouldClose(window)) {
        glMatrixMode(GL_MODELVIEW_MATRIX);
        glLoadIdentity();
        glClear(GL_COLOR_BUFFER_BIT);

        //drawUnitTriangle(); // can be seen just in the first frame!

        glPushClientAttrib(GL_CLIENT_ALL_ATTRIB_BITS); // A vain attempt?
        glPushAttrib(GL_ALL_ATTRIB_BITS); // Another vain attempt??
        external_library_identity_matrix();
        external_library_rectangle(POS, RED);
        external_library_flush();
        glPopAttrib();
        glPopClientAttrib();

        // Other vain attempts:
        glfwMakeContextCurrent(window);
        glMatrixMode(GL_MODELVIEW_MATRIX);
        glLoadIdentity();
        drawUnitTriangle(); // Nothing is Drawn!!

        glfwSwapBuffers(window);
        glfwPollEvents();
    }

    glfwDestroyWindow(window);
    glfwTerminate();
    exit(EXIT_SUCCESS);
}
Are you sure the posted code is exactly what you are building? If so, check the argument of glMatrixMode(): it should be GL_MODELVIEW, not GL_MODELVIEW_MATRIX. You set it that way in two places.
Since you already have glfw2DViewport(), why not call it inside the while loop and delete the other modelview setup code?
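Putting both suggestions together, the loop might look roughly like this (a sketch only, reusing the names from the question; drawUnitTriangle() and the external_library_* calls are untested here):

while (!glfwWindowShouldClose(window)) {
    glfw2DViewport(window);          // re-applies viewport, projection and the 2D state every frame
    glMatrixMode(GL_MODELVIEW);      // GL_MODELVIEW, not GL_MODELVIEW_MATRIX
    glLoadIdentity();

    drawUnitTriangle();

    glPushAttrib(GL_ALL_ATTRIB_BITS);
    external_library_identity_matrix();
    external_library_rectangle(POS, RED);
    external_library_flush();
    glPopAttrib();

    glfwSwapBuffers(window);
    glfwPollEvents();
}

Note that glfw2DViewport() already calls glClear(), so the separate glClear() in the loop is no longer needed.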

GPUImage replace colors with colors from textures

Looking at GPUImagePosterizeFilter it seems like an easy adaptation to replace colors with pixels from textures. Say I have an image that is made from 10 greyscale colors. I would like to replace each of the pixel ranges from the 10 colors with pixels from 10 different texture swatches.
What is the proper way to create the textures? I am using the code below (I am not sure about the alpha arguments passed to CGBitmapContextCreate).
CGImageRef spriteImage = [UIImage imageNamed:fileName].CGImage;
size_t width = CGImageGetWidth(spriteImage);
size_t height = CGImageGetHeight(spriteImage);
GLubyte * spriteData = (GLubyte *) calloc(width*height*4, sizeof(GLubyte));
CGContextRef spriteContext = CGBitmapContextCreate(spriteData, width, height, 8, width*4, CGImageGetColorSpace(spriteImage), kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(spriteContext, CGRectMake(0, 0, width, height), spriteImage);
CGContextRelease(spriteContext);
GLuint texName;
glGenTextures(1, &texName);
glBindTexture(GL_TEXTURE_2D, texName);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
free(spriteData);
return texName;
What is the proper way to pass the texture to the filter? In my fragment shader I have added:
uniform sampler2D fill0Texture;
In the code below, texture is what is returned by the function above.
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture);
glUniform1i(fill0Uniform, 1);
Whenever I try to get an image from the spriteContext it's nil, and when I try using pixels from fill0Texture they are always black. I have thought about doing this with 10 chroma-key iterations, but I think replacing all the pixels in a modified GPUImagePosterizeFilter is the way to go.
In order to match colors against the output from the PosterizeFilter, I am using the following code.
float testValue = 1.0 - (float(idx) / float(colorLevels));
vec4 keyColor = vec4(testValue, testValue, testValue, 1.0);
vec4 replacementColor = texture2D( tx0, textureCoord(idx));
float select = step(distance(keyColor,srcColor),.1);
return select * replacementColor;
If the (already posterized) color passed in matches, then the replacement color is returned. The textureCoord(idx) call looks up the replacement color from a GL texture.

How to efficiently copy depth buffer to texture on OpenGL ES

I'm trying to get some shadowing effects to work in OpenGL ES 2.0 on iOS by porting some code from standard GL. Part of the sample involves copying the depth buffer to a texture:
glBindTexture(GL_TEXTURE_2D, g_uiDepthBuffer);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 0, 0, 800, 600, 0);
However, it appears that glCopyTexImage2D cannot copy the depth buffer on ES. Reading a related thread, it seems I can use the framebuffer and fragment shaders to extract the depth data. So I'm trying to write the depth component to the color buffer and then copy it:
// clear everything
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
// turn on depth rendering
glUseProgram(m_BaseShader.uiId);
// this is a switch to cause the fragment shader to just dump out the depth component
glUniform1i(uiBaseShaderRenderDepth, true);
// and for this, the color buffer needs to be on
glColorMask(GL_TRUE,GL_TRUE,GL_TRUE,GL_TRUE);
// and clear it to 1.0, like how the depth buffer starts
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
// draw the scene
DrawScene();
// bind our texture
glBindTexture(GL_TEXTURE_2D, g_uiDepthBuffer);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, width, height, 0);
Here is the fragment shader:
uniform sampler2D sTexture;
uniform bool bRenderDepth;
varying lowp float LightIntensity;
varying mediump vec2 TexCoord;

void main()
{
    if (bRenderDepth) {
        gl_FragColor = vec4(vec3(gl_FragCoord.z), 1.0);
    } else {
        gl_FragColor = vec4(texture2D(sTexture, TexCoord).rgb * LightIntensity, 1.0);
    }
}
I have experimented with not having the 'bRenderDepth' branch, and it doesn't speed it up significantly.
Right now, pretty much just doing this step, it's at 14 fps, which is obviously not acceptable. If I pull out the copy, it's well above 30 fps. I'm getting two messages from the Xcode OpenGL ES analyzer on the copy command:
file://localhost/Users/xxxx/Documents/Development/xxxx.mm: error:
Validation Error: glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0,
960, 640, 0) : Height<640> is not a power of two
file://localhost/Users/xxxx/Documents/Development/xxxx.mm: warning:
GPU Wait on Texture: Your app updated a texture that is currently
used for rendering. This caused the CPU to wait for the GPU to
finish rendering.
I'll work on resolving the two issues above (perhaps they are the crux of it). In the meantime, can anyone suggest a more efficient way to pull that depth data into a texture?
Thanks in advance!
iOS devices generally support OES_depth_texture, so on devices where the extension is present, you can set up a framebuffer object with a depth texture as its only attachment:
GLuint g_uiDepthBuffer;
glGenTextures(1, &g_uiDepthBuffer);
glBindTexture(GL_TEXTURE_2D, g_uiDepthBuffer);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
// glTexParameteri calls omitted for brevity
GLuint g_uiDepthFramebuffer;
glGenFramebuffers(1, &g_uiDepthFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, g_uiDepthFramebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, g_uiDepthBuffer, 0);
Your texture then receives all the values being written to the depth buffer when you draw your scene (you can use a trivial fragment shader for this), and you can texture from it directly without needing to call glCopyTexImage2D.
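A sketch of how the per-frame usage might look, using the names from above; defaultFramebuffer, screenWidth and screenHeight stand in for however your view's framebuffer is actually set up:

// depth-only pass into the FBO that has the depth texture attached
glBindFramebuffer(GL_FRAMEBUFFER, g_uiDepthFramebuffer);
glViewport(0, 0, width, height);
glClear(GL_DEPTH_BUFFER_BIT);
DrawScene();                         // any fragment shader will do; only depth gets stored

// normal pass into the view's framebuffer, sampling the depth texture
glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
glViewport(0, 0, screenWidth, screenHeight);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, g_uiDepthBuffer);
// ... draw the shadowed scene here, reading the depth values in the shader ...

Because nothing is copied, the GPU-wait and power-of-two messages from the analyzer should disappear along with the glCopyTexImage2D call.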

Getting a Blank Screen when Setting a variable in Vertex Shader [closed]

I've just finished creating a simple rectangle in OpenGL 3.2, and now I want to add lighting support. However, whenever I try to pass my normals to the fragment shader, nothing appears. If I comment out that line, it works perfectly again. What could be causing this? Nothing shows up in the error log.
Vertex Shader:
#version 150
in vec4 position;
in vec3 inNormal;
out vec3 varNormal;
uniform mat4 modelViewProjectionMatrix;
void main()
{
    //varNormal = inNormal; //If I uncomment this line, nothing shows up
    gl_Position = modelViewProjectionMatrix * position;
}
Fragment Shader:
#version 150
in vec3 varNormal;
out vec4 fragColor;
void main()
{
    fragColor = vec4(1, 1, 1, 1);
}
And passing the normals:
GLuint posAttrib = 0;
GLuint normalAttrib = 1;
glBindAttribLocation(program, posAttrib, "position");
glBindAttribLocation(program, normalAttrib, "normalAttrib");
//Building the VAO's/VBO's
GLfloat posCoords[] =
{
    -10, 0.0, -10,
    -10, 0.0, 10,
     10, 0.0, 10,
     10, 0.0, -10,
};

GLfloat normalCoords[] =
{
    0, 0, 1,
    0, 0, 1,
    0, 0, 1,
    0, 0, 1
};
glGenVertexArrays(1, &vaoName);
glBindVertexArray(vaoName);
GLuint posBuffer;
glGenBuffers(1, &posBuffer);
glBindBuffer(GL_ARRAY_BUFFER, posBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(posCoords), posCoords, GL_STATIC_DRAW);
glEnableVertexAttribArray(posAttrib);
glVertexAttribPointer(posAttrib, 3, GL_FLOAT, GL_FALSE, 0, 0);
GLuint normalBuffer;
glGenBuffers(1, &normalBuffer);
glBindBuffer(GL_ARRAY_BUFFER, normalBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(normalCoords), normalCoords, GL_STATIC_DRAW);
glEnableVertexAttribArray(normalAttrib);
glVertexAttribPointer(normalAttrib, 3, GL_FLOAT, GL_FALSE, 0, 0);
I haven't tried putting all of my position and normal coords in a single VBO, but I'd prefer to not resort to that method.
Not sure if that's your actual code or a cut and paste, but calling glBindAttribLocation only takes effect after the next call to glLinkProgram.
If you're not linking the program after calling glBindAttribLocation, those bindings won't take effect, and your attributes may be given the wrong indices. That could explain why you get different behavior after uncommenting the normal line.
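A minimal sketch of that order, using the names from the question; note that the attribute name passed must also match what the vertex shader actually declares (inNormal here, not "normalAttrib"):

glBindAttribLocation(program, posAttrib, "position");
glBindAttribLocation(program, normalAttrib, "inNormal");  // name as declared in the vertex shader
glLinkProgram(program);                                   // the bindings only take effect here

// optional sanity check: ask the linker what it actually assigned
GLint posLoc    = glGetAttribLocation(program, "position");
GLint normalLoc = glGetAttribLocation(program, "inNormal");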
Probably the most bizarre fix for this, but it works.
First of all, make sure you know how OpenGL Profiler works; there's a tutorial in the Apple docs.
Set a breakpoint before/after glDrawElements (or glDrawArrays, depending on what you're using).
Then look at your program's vertex attributes and make sure the locations are in order.
If they aren't, rearrange them. For example, change them from:
enum
{
    POSITION_ATTR = 0,
    TEXTURE_ATTR  = 1,
    NORMAL_ATTR   = 2,
};
To:
enum
{
    NORMAL_ATTR   = 0,
    TEXTURE_ATTR  = 2,
    POSITION_ATTR = 1,
};
No idea how or why this happens, but it solved the problem.
