WebGL error when attempting to get color data from vec4 - opengl-es

I'm having an issue while rendering a square in WebGL. When I run the program in Chrome, I'm getting the error:
GL ERROR :GL_INVALID_OPERATION : glDrawArrays: attempt to access out of range vertices in attribute 0
I've assumed this is because, at some point, the buffers are looking at the wrong arrays when trying to get data. I've pinpointed the issue to the
gl.vertexAttribPointer(pColorIndex, 4, gl.FLOAT, false, 0, 0);
line in the code below; i.e., if I change the 4 to a 2, the code will run, but not properly (as I'm looking at a vec4 for color data here). Is there an issue with the way my arrays are bound?
bindColorDisplayShaders();
// clear the framebuffer
gl.clear(gl.COLOR_BUFFER_BIT);
// bind the shader
gl.useProgram(shader);
// set the value of the uniform variable in the shader
var shift_loc = gl.getUniformLocation(shader, "shift");
gl.uniform1f(shift_loc, .5);
// bind the buffer
gl.bindBuffer(gl.ARRAY_BUFFER, vertexbuffer);
// get the index for the a_Position attribute defined in the vertex shader
var positionIndex = gl.getAttribLocation(shader, 'a_Position');
if (positionIndex < 0) {
console.log('Failed to get the storage location of a_Position');
return;
}
// "enable" the a_position attribute
gl.enableVertexAttribArray(positionIndex);
// associate the data in the currently bound buffer with the a_position attribute
// (The '2' specifies there are 2 floats per vertex in the buffer. Don't worry about
// the last three args just yet.)
gl.vertexAttribPointer(positionIndex, 2, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, null);
// bind the buffer with the color data
gl.bindBuffer(gl.ARRAY_BUFFER, chosencolorbuffer);
var pColorIndex = gl.getUniformLocation(shader, 'a_ChosenColor');
if(pColorIndex < 0){
console.log('Failed to get the storage location of a_ChosenColor');
}
gl.enableVertexAttribArray(pColorIndex);
gl.vertexAttribPointer(pColorIndex, 4, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, null);
// draw, specifying the type of primitive to assemble from the vertices
gl.drawArrays(gl.TRIANGLES, 0, numPoints);

You can only use either a uniform or a vertex attribute; these are two different things.
When using a vertex attribute, the buffer has to supply one value per vertex, matching the number of vertices in your position buffer, and you get the location using gl.getAttribLocation (note that your code calls gl.getUniformLocation for a_ChosenColor).
When using a uniform, you're not supplying its data via array buffers; you use the gl.uniform* methods to set its value.
In your example: gl.uniform4fv(pColorIndex, yourColorVector).
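The out-of-range error itself is the driver noticing that the attribute would read past the end of its buffer: with 4 floats per vertex, drawing numPoints vertices needs numPoints * 4 * 4 bytes in the bound buffer. A quick sanity check as a hypothetical plain-JavaScript helper (the helper name and buffer size below are made up for illustration):

```javascript
// Hypothetical helper (name is mine): how many vertices a buffer can
// supply when an attribute reads `size` 32-bit floats per vertex
// (tightly packed, i.e. stride 0, offset 0).
function maxVertexCount(bufferByteLength, size) {
  const bytesPerVertex = size * 4; // 4 bytes per float
  return Math.floor(bufferByteLength / bytesPerVertex);
}

// Made-up example: a buffer holding 8 floats (32 bytes).
const colors = new Float32Array(8);
console.log(maxVertexCount(colors.byteLength, 4)); // 2 vertices as vec4
console.log(maxVertexCount(colors.byteLength, 2)); // 4 vertices as vec2
```

This also matches the symptom in the question: halving the size from 4 to 2 halves the bytes consumed per vertex, so the same buffer covers twice as many vertices and the out-of-range check no longer fires, even though the attribute data is then interpreted wrongly.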

Related

How can I properly create an array texture in OpenGL (Go)?

I have a total of two textures. The first is used as a framebuffer to work with inside a compute shader and is later blitted using BlitFramebuffer(...). The second is supposed to be an OpenGL array texture, which is used to look up textures and copy them onto the framebuffer. It's created in the following way:
var texarray uint32
gl.GenTextures(1, &texarray)
gl.ActiveTexture(gl.TEXTURE0 + 1)
gl.BindTexture(gl.TEXTURE_2D_ARRAY, texarray)
gl.TexParameteri(gl.TEXTURE_2D_ARRAY, gl.TEXTURE_MIN_FILTER, gl.LINEAR)
gl.TexImage3D(
gl.TEXTURE_2D_ARRAY,
0,
gl.RGBA8,
16,
16,
22*48,
0,
gl.RGBA, gl.UNSIGNED_BYTE,
gl.Ptr(sheet.Pix))
gl.BindImageTexture(1, texarray, 0, false, 0, gl.READ_ONLY, gl.RGBA8)
sheet.Pix is just the pixel array of an image loaded as a *image.NRGBA
The compute-shader looks like this:
#version 430
layout(local_size_x = 1, local_size_y = 1) in;
layout(rgba32f, binding = 0) uniform image2D img;
layout(binding = 1) uniform sampler2DArray texAtlas;
void main() {
ivec2 iCoords = ivec2(gl_GlobalInvocationID.xy);
vec4 c = texture(texAtlas, vec3(iCoords.x%16, iCoords.y%16, 7));
imageStore(img, iCoords, c);
}
When I run the program, however, the result is just a window filled with a single color.
So my question is: What did I do wrong during the shader creation and what needs to be corrected?
For any open code questions, here's the corresponding repo
vec4 c = texture(texAtlas, vec3(iCoords.x%16, iCoords.y%16, 7))
That can't work. texture samples the texture at normalized coordinates, so the valid range is [0,1] (in the st domain; the third dimension is the layer index and is correct here). Coordinates outside of that range are handled via the GL_TEXTURE_WRAP_... modes you specified (repeat, clamp to edge, clamp to border). Since int % 16 is always an integer, and even with repetition only the fractional part of the coordinate matters, you are basically sampling the same texel over and over again.
If you need full texture sampling (texture filtering, sRGB conversion etc.), you have to use normalized coordinates instead. But if you only want to access individual texel data, you can use texelFetch and keep the integer coordinates.
Note: since you set the texture filter to GL_LINEAR, you seem to want filtering; however, your coordinates look as if you want to hit the texel centers. So if you're going the texture route, then vec3(vec2(iCoords.xy)/vec2(16.0) + vec2(1.0/32.0), layer) would be the proper normalization to reach the texel centers (together with GL_REPEAT). But then the GL_LINEAR filtering would yield results identical to GL_NEAREST.
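The normalization from the last paragraph can be checked in plain JavaScript (helper name is mine; texSize is 16 for this atlas): i / texSize + 1 / (2 * texSize) is the same as (i + 0.5) / texSize, and the modulo mirrors what GL_REPEAT does with the integer part of the coordinate.

```javascript
// Plain-JS check of the texel-center normalization (helper name is mine).
// (i % texSize) / texSize + 1 / (2 * texSize) === (i % texSize + 0.5) / texSize
function texelCenter(i, texSize) {
  return (i % texSize) / texSize + 1 / (2 * texSize);
}

console.log(texelCenter(0, 16));  // 0.03125 (center of texel 0)
console.log(texelCenter(15, 16)); // 0.96875 (center of texel 15)
console.log(texelCenter(16, 16)); // wraps back to 0.03125, like GL_REPEAT
```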

OpenGL ES depth framebuffer GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT

I've been trying to add shadow mapping to my OpenGL ES project and I've just found out that my framebuffer status returns GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT.
Here's my code to create the framebuffer:
// create fbo
int[] fboPtr = new int[1];
GLES30.glGenFramebuffers(1, fboPtr, 0);
fbo = fboPtr[0];
// use fbo
GLES30.glBindFramebuffer(GLES30.GL_FRAMEBUFFER, fbo);
// create depthMap
int[] depthMapPtr = new int[1];
GLES30.glGenTextures(1, depthMapPtr, 0);
depthMap = depthMapPtr[0];
// use depthMap
GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, depthMap);
GLES30.glTexImage2D(GLES30.GL_TEXTURE_2D, 0, GLES30.GL_DEPTH_COMPONENT, size, size,
0, GLES30.GL_DEPTH_COMPONENT, GLES30.GL_FLOAT, null);
GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_MIN_FILTER, GLES30.GL_NEAREST);
GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_MAG_FILTER, GLES30.GL_NEAREST);
GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_WRAP_S, GLES30.GL_REPEAT);
GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_WRAP_T, GLES30.GL_REPEAT);
GLES30.glFramebufferTexture2D(GLES30.GL_FRAMEBUFFER, GLES30.GL_DEPTH_ATTACHMENT, GLES30.GL_TEXTURE_2D, depthMap, 0);
// draw buffer
int[] buffer = {GLES30.GL_NONE};
GLES30.glDrawBuffers(1, buffer, 0);
GLES30.glReadBuffer(GLES30.GL_NONE);
int status = GLES30.glCheckFramebufferStatus(GLES30.GL_FRAMEBUFFER);
I've bound the texture, so I don't know what can be causing the error.
Your internalFormat parameter for glTexImage2D isn't legal.
https://www.khronos.org/registry/OpenGL-Refpages/es3.0/html/glTexImage2D.xhtml
You need to use a sized internal format for depth. So I think this should work:
GLES30.glTexImage2D(GLES30.GL_TEXTURE_2D, 0, GLES30.GL_DEPTH_COMPONENT32F,
size, size, 0, GLES30.GL_DEPTH_COMPONENT, GLES30.GL_FLOAT, null);
Off topic: Learn to use the KHR_debug extension if you can - it gives you readable error messages, often with a precise reason why something failed.
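For reference, here is a sketch of the ES 3.0 depth-format table the fix relies on, as a plain JavaScript lookup (string names stand in for the GL constants; each entry lists the required format and one accepted type). The unsized GL_DEPTH_COMPONENT from the question is deliberately absent:

```javascript
// Sized depth internal formats accepted by glTexImage2D in OpenGL ES 3.0,
// with their required format and one matching type each.
const depthFormats = {
  DEPTH_COMPONENT16:  { format: "DEPTH_COMPONENT", type: "UNSIGNED_SHORT" },
  DEPTH_COMPONENT24:  { format: "DEPTH_COMPONENT", type: "UNSIGNED_INT" },
  DEPTH_COMPONENT32F: { format: "DEPTH_COMPONENT", type: "FLOAT" },
  DEPTH24_STENCIL8:   { format: "DEPTH_STENCIL",  type: "UNSIGNED_INT_24_8" },
  DEPTH32F_STENCIL8:  { format: "DEPTH_STENCIL",  type: "FLOAT_32_UNSIGNED_INT_24_8_REV" },
};

// The unsized constant used in the question is not a legal internalFormat:
const isLegalDepthInternalFormat = (name) => name in depthFormats;
console.log(isLegalDepthInternalFormat("DEPTH_COMPONENT")); // false
```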

seeing through triangles in GLKit

I am working on a simple iOS application to learn about OpenGL ES 2.0. In the project, I'm rendering 4 triangles in the shape of a pyramid, with some sliders to adjust the height of the apex of the pyramid and to rotate the modalViewMatrix about the y axis. I am trying to find out why, after rotating this object counter-clockwise to the point where triangles appear in front of other triangles, I can see through the near triangles. However, when rotating in the clockwise direction to the same point, the near triangles are opaque and occlude the furthest triangles.
I assumed that the reason was a lack of a depth render buffer, but even after setting the property view.drawableDepthFormat = GLKViewDrawableDepthFormat16; the behavior persists.
For reference, this is my drawRect function, where the drawing is done. The only other code is in viewDidLoad and in the global scope of the Xcode project here.
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
[self.baseEffect prepareToDraw];
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBindBuffer(GL_ARRAY_BUFFER,pos);
glEnableVertexAttribArray(GLKVertexAttribPosition);
const GLvoid * off1 = NULL + offsetof(SceneVertex, position) ;
glVertexAttribPointer(GLKVertexAttribPosition, // Identifies the attribute to use
3, // number of coordinates for attribute
GL_FLOAT, // data is floating point
GL_FALSE, // no fixed point scaling
sizeof(SceneVertex), // total num bytes stored per vertex
off1);
glEnableVertexAttribArray(GLKVertexAttribNormal);
const GLvoid * off2 = NULL + offsetof(SceneVertex, normal) ;
glVertexAttribPointer(GLKVertexAttribNormal, // Identifies the attribute to use
3, // number of coordinates for attribute
GL_FLOAT, // data is floating point
GL_FALSE, // no fixed point scaling
sizeof(SceneVertex), // total num bytes stored per vertex
off2);
GLenum error = glGetError();
if(GL_NO_ERROR != error)
{
NSLog(@"GL Error: 0x%x", error);
}
int sizeOfTries = sizeof(triangles);
int sizeOfSceneVertex = sizeof(SceneVertex);
int numArraysToDraw = sizeOfTries / sizeOfSceneVertex;
glDrawArrays(GL_TRIANGLES, 0, numArraysToDraw);
}
It's not enough just to have a depth buffer, you need to tell OpenGL how you want to use it. Try adding the following lines:
glEnable(GL_DEPTH_TEST); // Enable depth testing
glDepthMask(GL_TRUE); // Enable depth write
glDepthFunc(GL_LEQUAL); // Choose the depth comparison function
While we're here, I'd recommend GLKViewDrawableDepthFormat24 over GLKViewDrawableDepthFormat16 for most use cases (better precision).
I'd also recommend familiarizing yourself with Xcode's frame capture feature (doc); it really is an invaluable way to figure out what is going on when rendering is not working as intended.
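To see what those three calls change, here is a toy model of a single depth-buffer cell in plain JavaScript (not real GL, just the GL_LEQUAL compare-and-write logic):

```javascript
// Toy single-pixel depth buffer: fragments arrive in draw order and pass
// only if their depth is <= the stored depth (GL_LEQUAL), mimicking
// glEnable(GL_DEPTH_TEST) + glDepthMask(GL_TRUE) + glDepthFunc(GL_LEQUAL).
function renderPixel(fragments, depthTest) {
  let color = null;
  let depth = Infinity; // cleared depth buffer
  for (const f of fragments) {
    if (!depthTest || f.depth <= depth) {
      color = f.color; // fragment passes and is written
      depth = f.depth; // depth write enabled
    }
  }
  return color;
}
```

Without the depth test, whichever triangle is drawn last wins regardless of distance, which is exactly the see-through symptom; with it enabled, the nearest fragment wins regardless of draw order.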

WebGL Element Array Buffers not working

I'm learning WebGL with Haxe and I'm stuck on the part describing element arrays. What is supposed to be a square isn't showing up, and I don't know why.
var verticesArray = [
0.5, 0.5,
0.5,-0.5,
-0.5,-0.5,
-0.5, 0.5
];
var indicesArray = [0, 1, 3, 1, 2, 3];
var VBO = GL.createBuffer();
GL.bindBuffer(GL.ARRAY_BUFFER, VBO);
GL.bufferData(GL.ARRAY_BUFFER,new Float32Array(verticesArray), GL.STATIC_DRAW);
var EBO = GL.createBuffer();
GL.bindBuffer(GL.ELEMENT_ARRAY_BUFFER, EBO);
GL.bufferData(GL.ELEMENT_ARRAY_BUFFER, new UInt16Array(indicesArray), GL.STATIC_DRAW);
GL.vertexAttribPointer(0, 2, GL.FLOAT, false, 0, 0);
GL.enableVertexAttribArray(0);
GL.useProgram(shaderProgram);
GL.drawElements(GL.TRIANGLES, 6, GL.UNSIGNED_INT, 0);
GL.bindBuffer(GL.ARRAY_BUFFER, null);
GL.bindBuffer(GL.ELEMENT_ARRAY_BUFFER, null);
Here's all the code that's supposed to draw the square. I've got the shader program working for sure.
Make sure the type in your call to drawElements matches the provided index array type, which is
gl.UNSIGNED_BYTE for Uint8Array
gl.UNSIGNED_SHORT for Uint16Array
gl.UNSIGNED_INT for Uint32Array (needs OES_element_index_uint extension)
Also, vertexAttribPointer always operates on the buffer currently bound to the ARRAY_BUFFER target, which in this case isn't a problem, but may very well become one the way you wrote it, if you add additional VBOs. So either set the attribute pointer right after creating and binding the buffer, or otherwise make sure the correct buffer is bound when you call it.
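The type mapping above can be wrapped in a small helper (hypothetical function name, plain JavaScript); for the question's Uint16Array index buffer, the matching call is drawElements(GL.TRIANGLES, 6, GL.UNSIGNED_SHORT, 0):

```javascript
// Hypothetical helper: pick the drawElements type matching the typed array
// used for the index buffer. UNSIGNED_INT additionally requires the
// OES_element_index_uint extension in WebGL 1.
function indexTypeFor(indices) {
  if (indices instanceof Uint8Array) return "UNSIGNED_BYTE";
  if (indices instanceof Uint16Array) return "UNSIGNED_SHORT";
  if (indices instanceof Uint32Array) return "UNSIGNED_INT";
  throw new Error("unsupported index array type");
}

console.log(indexTypeFor(new Uint16Array([0, 1, 3, 1, 2, 3]))); // UNSIGNED_SHORT
```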

OES_vertex_array_object and client state

I want a vertex array object in OpenGL ES 2.0 to hold two attributes from different buffers, the second buffer being read from client memory (glBindBuffer(GL_ARRAY_BUFFER, 0)). But I get a runtime error:
GLuint my_vao;
GLuint my_buffer_attrib0;
GLfloat attrib0_data[] = { 0, 0, 0, 0 };
GLfloat attrib1_data[] = { 1, 1, 1, 1 };
void init()
{
// setup vao
glGenVertexArraysOES(1, &my_vao);
glBindVertexArrayOES(my_vao);
// setup attrib0 as a vbo
glGenBuffers( 1, &my_buffer_attrib0 );
glBindBuffer(GL_ARRAY_BUFFER, my_buffer_attrib0);
glBufferData( GL_ARRAY_BUFFER, sizeof(attrib0_data), attrib0_data, GL_STATIC_DRAW );
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray( 0 );
glEnableVertexAttribArray( 1 );
// "end" vao
glBindVertexArrayOES( 0 );
}
void draw()
{
glBindVertexArrayOES(my_vao);
// (now I assume attrib0 is bound to my_buffer_attrib0,
// and attrib1 is not bound. but is this assumption true?)
// setup attrib1
glBindBuffer( GL_ARRAY_BUFFER, 0 );
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, attrib1_data);
// draw using attrib0 and attrib1
glDrawArrays( GL_POINTS, 0, 1 ); // runtime error: Thread1: EXC_BAD_ACCESS (code=2, address=0x0)
}
What I want to achieve is to wrap the binding of two attributes as a vertex array buffer:
void draw_ok()
{
glBindVertexArrayOES( 0 );
// setup attrib0
glBindBuffer( GL_ARRAY_BUFFER, my_buffer_attrib0 );
glVertexAttribPointer( 0, 4, GL_FLOAT, GL_FALSE, 0, 0);
// setup attrib1
glBindBuffer( GL_ARRAY_BUFFER, 0 );
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, attrib1_data);
glEnableVertexAttribArray( 0 );
glEnableVertexAttribArray( 1 );
// draw using attrib0 and attrib1
glDrawArrays( GL_POINTS, 0, 1); // ok
}
Is it possible to bind two different buffers in a vertex array object? Are OES_vertex_array_object VAOs different from (plain) OpenGL vertex array objects? Also note that I get this error in Xcode running the iOS simulator. These are related links:
Use of VAO around VBO in Open ES iPhone app Causes EXC_BAD_ACCESS When Call to glDrawElements
OES_vertex_array_object
Well, a quote from the extension specifications explains it quite simply:
Should a vertex array object be allowed to encapsulate client vertex arrays?
RESOLVED: No. The OpenGL ES working group agreed that compatibility with OpenGL and the ability to guide developers to more performant drawing by enforcing VBO usage were more important than the possibility of hurting adoption of VAOs.
So you can indeed bind two different buffers in a VAO (well, the buffer binding isn't stored in the VAO anyway, only the source buffers of the individual attributes, set through glVertexAttribPointer), but you cannot use client-space memory in a VAO, only VBOs. This is the same in desktop GL.
So I would advise you to store all your vertex data in VBOs. If you want to use client memory because the data is updated dynamically and you think VBOs won't buy you anything there, that's still the wrong approach. Just use a VBO with a dynamic usage hint (GL_DYNAMIC_DRAW or even GL_STREAM_DRAW) and update it using glBuffer(Sub)Data or glMapBuffer (or the good old glBufferData(..., NULL) followed by glMapBuffer(GL_WRITE_ONLY) combination).
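A toy model of the state capture described above (plain JavaScript, not real GL): vertexAttribPointer snapshots whichever buffer is currently bound to ARRAY_BUFFER into per-attribute state, while the ARRAY_BUFFER binding itself is not part of the VAO, and with a non-zero VAO bound a zero binding plus a non-NULL pointer is an INVALID_OPERATION:

```javascript
// Toy model of VAO state capture (not real GL, names are mine).
function makeContext() {
  // Assume a non-zero VAO is bound for the whole lifetime of this context.
  return { arrayBuffer: 0, vao: { attribs: {} } };
}

function bindBuffer(ctx, buf) {
  ctx.arrayBuffer = buf; // NOT stored in the VAO itself
}

function vertexAttribPointer(ctx, index, pointer) {
  // ES rule: non-zero VAO bound + ARRAY_BUFFER == 0 + pointer != NULL
  // => INVALID_OPERATION (no client arrays inside a VAO).
  if (ctx.arrayBuffer === 0 && pointer !== null) {
    throw new Error("INVALID_OPERATION: client pointer in a VAO");
  }
  // Otherwise the *current* buffer binding is snapshotted per attribute;
  // `pointer` is an offset into that buffer (null stands for offset 0).
  ctx.vao.attribs[index] = { buffer: ctx.arrayBuffer, offset: pointer };
}
```

This is why draw() fails while draw_ok() works: with VAO 0 bound, client arrays are allowed, but inside the VAO attrib 1's client pointer is rejected (or, on drivers that skip the error, leads to the undefined behavior seen as EXC_BAD_ACCESS).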
Remove the following line:
glBindBuffer( GL_ARRAY_BUFFER, 0 );
from the draw() function. You didn't bind any buffer before and it may mess up buffer state.
After some digging (reading), the answer was found in OES_vertex_array_object. It seems that OES_vertex_array_object focuses on state on the server side, and client state is used if and only if the zero object is bound. It remains to answer whether OES_vertex_array_object VAOs are the same as plain OpenGL VAOs; please comment if you know the answer. Below are quotations from OES_vertex_array_object:
This extension introduces vertex array objects which encapsulate
vertex array states on the server side (vertex buffer objects).
* Should a vertex array object be allowed to encapsulate client
vertex arrays?
RESOLVED: No. The OpenGL ES working group agreed that compatibility
with OpenGL and the ability to guide developers to more
performant drawing by enforcing VBO usage were more important than
the possibility of hurting adoption of VAOs.
An INVALID_OPERATION error is generated if
VertexAttribPointer is called while a non-zero vertex array object
is bound, zero is bound to the <ARRAY_BUFFER> buffer object binding
point and the pointer argument is not NULL [fn1].
[fn1: This error makes it impossible to create a vertex array
object containing client array pointers, while still allowing
buffer objects to be unbound.]
And the presently attached vertex array object has the following
impacts on the draw commands:
While a non-zero vertex array object is bound, if any enabled
array's buffer binding is zero, when DrawArrays or
DrawElements is called, the result is undefined.
So EXC_BAD_ACCESS was the undefined result!
The functionality you desire has now been accepted by the community as an extension to WebGL:
http://www.khronos.org/registry/webgl/extensions/OES_vertex_array_object/
