Getting a Blank Screen when Setting a variable in Vertex Shader [closed] - macos

I've just finished creating a simple rectangle in OpenGL 3.2, and now I want to add lighting support. However, whenever I try to pass my normals to the fragment shader, nothing appears; if I comment out that line, it works perfectly again. What could be causing this? Nothing shows up in the error log.
Vertex Shader:
#version 150
in vec4 position;
in vec3 inNormal;
out vec3 varNormal;
uniform mat4 modelViewProjectionMatrix;
void main()
{
    //varNormal = inNormal; // If I uncomment this line, nothing shows up
    gl_Position = modelViewProjectionMatrix * position;
}
Fragment Shader:
#version 150
in vec3 varNormal;
out vec4 fragColor;
void main()
{
    fragColor = vec4(1, 1, 1, 1);
}
And passing the normals:
GLuint posAttrib = 0;
GLuint normalAttrib = 1;
glBindAttribLocation(program, posAttrib, "position");
glBindAttribLocation(program, normalAttrib, "normalAttrib");
//Building the VAO's/VBO's
GLfloat posCoords[] =
{
-10, 0.0, -10,
-10, 0.0, 10,
10, 0.0, 10,
10, 0.0, -10,
};
GLfloat normalCoords[] =
{
0, 0, 1,
0, 0, 1,
0, 0, 1,
0, 0, 1
};
glGenVertexArrays(1, &vaoName);
glBindVertexArray(vaoName);
GLuint posBuffer;
glGenBuffers(1, &posBuffer);
glBindBuffer(GL_ARRAY_BUFFER, posBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(posCoords), posCoords, GL_STATIC_DRAW);
glEnableVertexAttribArray(posAttrib);
glVertexAttribPointer(posAttrib, 3, GL_FLOAT, GL_FALSE, 0, 0);
GLuint normalBuffer;
glGenBuffers(1, &normalBuffer);
glBindBuffer(GL_ARRAY_BUFFER, normalBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(normalCoords), normalCoords, GL_STATIC_DRAW);
glEnableVertexAttribArray(normalAttrib);
glVertexAttribPointer(normalAttrib, 3, GL_FLOAT, GL_FALSE, 0, 0);
I haven't tried putting all of my position and normal coords in a single VBO, but I'd prefer to not resort to that method.

Not sure if that's your actual code or a cut and paste, but calling glBindAttribLocation only takes effect after the next call to glLinkProgram.
If you're not linking the program after calling glBindAttribLocation, those bindings won't take effect, and your attributes may be given the wrong indices. That could explain why you get different behavior after uncommenting the normal line.
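For reference, here is a minimal sketch of the intended ordering, using the attribute names declared in the vertex shader above (the link-status check is extra, not from the question):
glBindAttribLocation(program, posAttrib, "position");
glBindAttribLocation(program, normalAttrib, "inNormal");
// The bindings above only take effect at link time.
glLinkProgram(program);
// Optional sanity check: confirm the link succeeded and the locations stuck.
GLint linked = 0;
glGetProgramiv(program, GL_LINK_STATUS, &linked);
GLint normalLoc = glGetAttribLocation(program, "inNormal"); // should equal normalAttrib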

Probably the most bizarre way to fix this, but it works.
First of all, make sure you know how the OpenGL Profiler works; there's a tutorial in the Apple docs.
Set a breakpoint before/after glDrawElements (or glDrawArrays, depending on which you're using).
Then look at your program's vertex attributes and make sure the locations are in order.
If they aren't, rearrange them.
For example, from this (or any other ordering):
enum
{
    POSITION_ATTR = 0,
    TEXTURE_ATTR = 1,
    NORMAL_ATTR = 2,
};
To:
enum
{
    NORMAL_ATTR = 0,
    TEXTURE_ATTR = 2,
    POSITION_ATTR = 1,
};
No idea how or why this happens, but it solved the problem.

Related

OpenGL - Texture origin is top-left

I have a little problem:
I know that in UV coordinates, the origin (0, 0) is at the bottom left.
I've been making progress learning OpenGL (I use LWJGL 3), and I wanted to make an interface.
To display a button, I think I just need to draw a texture that contains the text.
So I created, as usual, a new shader (vertex shader + fragment shader).
But to my great surprise, my texture is upside down! Why?
It's very weird.
this is my vertexShader:
#version 330
in vec3 position;
in vec2 texCoords;
out vec2 pass_texCoords;
void main() {
    gl_Position = vec4(position, 1);
    pass_texCoords = texCoords;
}
This is my fragmentShader:
#version 330
in vec2 pass_texCoords;
out vec4 out_Color;
uniform sampler2D texSampler;
void main() {
    out_Color = texture(texSampler, pass_texCoords);
}
This is how I create my VAO
(for the moment I plan to display a triangle):
my vertices: -0.5, 0.5, 0, -0.5, -0.5, 0, 0.5, -0.5, 0
my texCoords: 0, 1, 0, 0, 1, 0
my indices: 0, 1, 2
And now how I display it:
//bind shader, bind VAO, enable VBOs
GL13.glActiveTexture(GL13.GL_TEXTURE0);
GL11.glBindTexture(GL11.GL_TEXTURE_2D, texture);
GL11.glDrawElements(GL11.GL_TRIANGLES, 3, GL11.GL_UNSIGNED_INT, 0);
//disableVBOs, unbind VAO, unbind shader
My triangle looks like (but the texture inside is inverted) :
| \
| \
| \
|_________
Need more code?
I know how to invert the texture manually, but I think the best approach is to find the underlying problem...
And if someone knows a better way to add a user interface, I'm more than interested!
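As an aside (this is not from the original post), the manual inversion mentioned above usually amounts to flipping the V coordinate of each UV pair, either in the image loader or when building the texCoords array; a hypothetical helper might look like this:
/* Flip the V coordinate of each (u, v) pair so an image stored with its
   first row at the top appears right side up when sampled. */
void flipV(float *texCoords, int pairCount)
{
    for (int i = 0; i < pairCount; ++i)
        texCoords[2 * i + 1] = 1.0f - texCoords[2 * i + 1];
}
With the triangle above, this would turn 0, 1, 0, 0, 1, 0 into 0, 0, 0, 1, 1, 1.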

OpenGL Immediate Mode textures not working

I am attempting to build a simple project using immediate mode textures.
Unfortunately, when I render, the GL color shows up rather than the texture. I've searched around for solutions, but found no meaningful difference between online examples and my code.
I've reduced it to a minimal failing example, which I have provided here. If my understanding is correct, this should produce a textured quad, with corners of black, red, green, and blue. Unfortunately, it appears purple, as if it's ignoring the texture completely. What am I doing wrong?
#include <glut.h>
GLuint tex;
void displayFunc() {
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
glBegin(GL_TRIANGLE_STRIP);
glColor3f(0.5, 0, 1);
glTexCoord2f(0.0, 0.0);
glVertex2f(-1.0, -1.0);
glTexCoord2f(1.0, 0.0);
glVertex2f(1.0, -1.0);
glTexCoord2f(0.0, 1.0);
glVertex2f(-1.0, 1.0);
glTexCoord2f(1.0, 1.0);
glVertex2f(1.0, 1.0);
glEnd();
glutSwapBuffers();
}
int main(int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
glutInitWindowPosition(0, 0);
glutInitWindowSize(640, 480);
glutCreateWindow("Test");
glutDisplayFunc(displayFunc);
GLubyte textureData[] = { 0, 0, 0, 255, 0, 0, 0, 255, 0, 0, 0, 255, 255, 255, 0 };
GLsizei width = 2;
GLsizei height = 2;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0, GL_RGB8, GL_UNSIGNED_BYTE, (GLvoid*)textureData);
glutMainLoop();
}
(Screenshot of the output omitted; the quad renders solid purple, with no sign of the texture.)
Also possibly worth mentioning:
I am building this project on a Mac (running El Capitan 10.11.1)
Graphics card: NVIDIA GeForce GT 650M 1024 MB
You're passing an invalid argument to glTexImage2D(). GL_RGB8 is not one of the supported values for the 7th (format) argument. The correct value is GL_RGB.
Sized formats, like GL_RGB8, are used for the internalFormat argument. In that case, the value defines both the number of components and the size of each component used for the internal storage of the texture.
The format and type parameters describe the data you pass in: format defines only the number of components, while type defines the data type and size of each component.
Whenever you have problems with your OpenGL code, make sure that you call glGetError() to check for errors. In this case, you would see a GL_INVALID_ENUM error caused by your glTexImage2D() call.
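A minimal sketch of the corrected upload with an error check (the glTexParameteri line is an extra assumption, not discussed above, so the single-level texture is complete without mipmaps):
glBindTexture(GL_TEXTURE_2D, tex);
// internalFormat may be sized (GL_RGB8); format must be unsized (GL_RGB).
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, (GLvoid*)textureData);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
GLenum err = glGetError(); // expect GL_NO_ERROR here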

Vertex buffers in open gl es 1.X

I am teaching myself about OpenGL ES and vertex buffer objects (VBOs). I have written code that is supposed to draw one red triangle, but instead it colours the screen black:
- (void)drawRect:(CGRect)rect {
// Draw a red triangle in the middle of the screen:
glColor4f(1.0f, 0.0f, 0.0f, 1.0f);
// Setup the vertex data:
typedef struct {
float x;
float y;
} Vertex;
const Vertex vertices[] = {{50,50}, {50,150}, {150,50}};
const short indices[3] = {0,1,2};
glGenBuffers(1, &vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
NSLog(@"drawrect");
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, 0);
// The following line does the actual drawing to the render buffer:
glDrawElements(GL_TRIANGLE_STRIP, 3, GL_UNSIGNED_SHORT, indices);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, framebuffer);
[eAGLcontext presentRenderbuffer:GL_RENDERBUFFER_OES];
}
Here vertexBuffer is of type GLuint. What is going wrong? Thanks for your help.
Your vertices don't have a Z component; try {{50,50,-100}, {50,150,-100}, {150,50,-100}} (by default your camera looks down the Z axis, so putting them at negative Z should put them on screen). If you still can't see it, try smaller numbers; I'm not sure what your near and far clipping distances are, and if they aren't set I don't know what the defaults are. This might not be the only issue, but it's the only one I can see from a quick look.
You need to add
glViewport(0, 0, 320, 480);
where you create the frame buffer and set up the context.
And replace your call to glDrawElements with
glDrawArrays(GL_TRIANGLE_STRIP, ...);
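Putting both suggestions together, a rough sketch (with the three vertices from the question, the elided glDrawArrays arguments would be a first index of 0 and a count of 3):
// Where the framebuffer and context are created:
glViewport(0, 0, 320, 480);
// In drawRect, replacing the glDrawElements call:
glDrawArrays(GL_TRIANGLE_STRIP, 0, 3); // draw the 3 buffered vertices directly, no indices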

How to efficiently copy depth buffer to texture on OpenGL ES

I'm trying to get some shadowing effects to work in OpenGL ES 2.0 on iOS by porting some code from standard GL. Part of the sample involves copying the depth buffer to a texture:
glBindTexture(GL_TEXTURE_2D, g_uiDepthBuffer);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 0, 0, 800, 600, 0);
However, it appears glCopyTexImage2D is not supported on ES. Reading a related thread, it seems I can use the framebuffer and fragment shaders to extract the depth data. So I'm trying to write the depth component to the color buffer, then copy it:
// clear everything
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
// turn on depth rendering
glUseProgram(m_BaseShader.uiId);
// this is a switch to cause the fragment shader to just dump out the depth component
glUniform1i(uiBaseShaderRenderDepth, true);
// and for this, the color buffer needs to be on
glColorMask(GL_TRUE,GL_TRUE,GL_TRUE,GL_TRUE);
// and clear it to 1.0, like how the depth buffer starts
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
// draw the scene
DrawScene();
// bind our texture
glBindTexture(GL_TEXTURE_2D, g_uiDepthBuffer);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, width, height, 0);
Here is the fragment shader:
uniform sampler2D sTexture;
uniform bool bRenderDepth;
varying lowp float LightIntensity;
varying mediump vec2 TexCoord;
void main()
{
    if (bRenderDepth) {
        gl_FragColor = vec4(vec3(gl_FragCoord.z), 1.0);
    } else {
        gl_FragColor = vec4(texture2D(sTexture, TexCoord).rgb * LightIntensity, 1.0);
    }
}
I have experimented with not having the 'bRenderDepth' branch, and it doesn't speed it up significantly.
Right now, pretty much just doing this step, it's at 14 fps, which obviously is not acceptable. If I pull out the copy, it's way above 30 fps. I'm getting two suggestions from the Xcode OpenGL ES analyzer on the copy command:
file://localhost/Users/xxxx/Documents/Development/xxxx.mm: error:
Validation Error: glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0,
960, 640, 0) : Height<640> is not a power of two
file://localhost/Users/xxxx/Documents/Development/xxxx.mm: warning:
GPU Wait on Texture: Your app updated a texture that is currently
used for rendering. This caused the CPU to wait for the GPU to
finish rendering.
I'll work to resolve the two issues above (perhaps they are the crux of it). In the meantime, can anyone suggest a more efficient way to pull that depth data into a texture?
Thanks in advance!
iOS devices generally support OES_depth_texture, so on devices where the extension is present, you can set up a framebuffer object with a depth texture as its only attachment:
GLuint g_uiDepthBuffer;
glGenTextures(1, &g_uiDepthBuffer);
glBindTexture(GL_TEXTURE_2D, g_uiDepthBuffer);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
// glTexParameteri calls omitted for brevity
GLuint g_uiDepthFramebuffer;
glGenFramebuffers(1, &g_uiDepthFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, g_uiDepthFramebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, g_uiDepthBuffer, 0);
Your texture then receives all the values being written to the depth buffer when you draw your scene (you can use a trivial fragment shader for this), and you can texture from it directly without needing to call glCopyTexImage2D.
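To make the flow concrete, here is a rough sketch of how the two passes might fit together; the defaultFramebuffer handle is an assumption standing in for whatever framebuffer the app normally renders to on iOS:
// Pass 1: render the scene so its depth lands in the texture-backed FBO.
glBindFramebuffer(GL_FRAMEBUFFER, g_uiDepthFramebuffer);
glClear(GL_DEPTH_BUFFER_BIT);
DrawScene(); // color output is irrelevant here; only the depth attachment is written
// Pass 2: switch back to the normal framebuffer and sample the depth texture
// like any other texture, with no glCopyTexImage2D needed.
glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer); // assumed app-provided handle
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, g_uiDepthBuffer);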

My triangle doesn't render when I use OpenGL Core Profile 3.2

I have a Cocoa (OS X) project that is currently very simple; I'm just trying to grasp the general concepts behind using OpenGL. I was able to get a triangle to display in my view, but when I went to write my vertex and fragment shaders, I realized I was running the legacy OpenGL profile. So I switched to the OpenGL 3.2 Core profile by setting the properties in the pixel format of the view in question before generating the context, but now the triangle doesn't render, even without my vertex or fragment shaders.
I have a controller class for the view that's instantiated in the nib. On -awakeFromNib it sets up the pixel format and the context:
NSOpenGLPixelFormatAttribute attr[] =
{
NSOpenGLPFAOpenGLProfile, NSOpenGLProfileVersion3_2Core,
0
};
NSOpenGLPixelFormat *glPixForm = [[NSOpenGLPixelFormat alloc] initWithAttributes:attr];
[self.mainView setPixelFormat:glPixForm];
self.glContext = [self.mainView openGLContext];
Then I generate the VAO:
glGenVertexArrays(1, &vertexArrayID);
glBindVertexArray(vertexArrayID);
Then the VBO:
glGenBuffers(1, &buffer);
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);
g_vertex_buffer_data, the actual data for that buffer is defined as follows:
static const GLfloat g_vertex_buffer_data[] = {
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
0.0f, 1.0f, 0.0f,
};
Here's the code for actually drawing:
[_glContext setView:self.mainView];
[_glContext makeCurrentContext];
glViewport(0, 0, [self.mainView frame].size.width, [self.mainView frame].size.height);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, self.vertexBuffer);
glVertexAttribPointer(
0,
3,
GL_FLOAT,
GL_FALSE,
0,
(void*)0
);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glDrawArrays(GL_TRIANGLES, 0, 3); // Starting from vertex 0; 3 vertices total -> 1 triangle
glDisableVertexAttribArray(0);
glFlush();
This code draws the triangle fine if I comment out the NSOpenGLPFAOpenGLProfile, NSOpenGLProfileVersion3_2Core, in the NSOpenGLPixelFormatAttribute array, but as soon as I enable OpenGL Core Profile 3.2, it just displays black. Can anyone tell me what I'm doing wrong here?
EDIT: This issue still happens whether I turn my vertex and fragment shaders on or not, but here are my shaders in case it is helpful:
Vertex shader:
#version 150
in vec3 position;
void main() {
    gl_Position.xyz = position;
}
Fragment shader:
#version 150
out vec3 color;
void main() {
    color = vec3(1, 0, 0);
}
And right before linking the program, I make this call to bind the attribute location:
glBindAttribLocation(programID, 0, "position");
EDIT 2:
I don't know if this helps at all, but I just stepped through my program running glGetError(), and everything looks fine until I actually call glDrawArrays(), which then returns GL_INVALID_OPERATION. I'm trying to figure out why this could be occurring, but still having no luck.
I figured this out, and it's sadly a very stupid mistake on my part.
I think the issue is that you need a vertex shader and a fragment shader when using the 3.2 Core profile; you can't just render without them. The reason it wasn't working with my shaders was... wait for it... after linking my shader program, I forgot to store the programID in the ivar in my class, so later when I called glUseProgram() I was just calling it with a zero parameter.
I guess one of the main sources of confusion was the fact that I expected the 3.2 core profile to work without any vertex or fragment shaders.
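A minimal sketch of the fix described above (the ivar name is hypothetical):
// After compiling and linking, keep the program handle; leaving the ivar at 0
// means glUseProgram(0) later, and core-profile draws then fail with GL_INVALID_OPERATION.
_programID = programID;
// Later, in the draw code, before glDrawArrays:
glUseProgram(_programID);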
