As I'm learning more about WebGL2, I've come across new syntax where you set the location inside shaders via layout (location=0) in vec4 a_Position;. How does this compare to getting the attribute location the traditional way with gl.getAttribLocation(program, 'a_Position')? I assume it's faster? Any other reasons? Also, is it better to set locations to integers, or would you be able to use strings as well?
There are 2 ideas conflated here:
Manually assigning locations to attributes
Assigning attribute locations in GLSL vs JavaScript
Why would you want to assign locations?
You don't have to look up the location, since you already know it
You want to make sure 2 or more shader programs use the same locations so that they can use the same attributes. This also means a single vertex array can be used with both shaders. If you don't assign the attribute location then the shaders may use different attributes for the same data. In other words shaderprogram1 might use attribute 3 for position and shaderprogram2 might use attribute 1 for position.
Why would you want to assign locations in GLSL vs doing it in JavaScript?
You can assign a location like this in GLSL ES 3.0 (not GLSL ES 1.0)
layout (location=0) in vec4 a_Position;
You can also assign a location in JavaScript like this
// **BEFORE** calling gl.linkProgram
gl.bindAttribLocation(program, 0, "a_Position");
Off the top of my head it seems like doing it in JavaScript is more DRY (Don't repeat yourself). In fact if you use consistent naming then you can likely set all locations for all shaders by just binding locations for your common names before calling gl.linkProgram. One other minor advantage to doing it in JavaScript is it's compatible with GLSL ES 1.0 and WebGL1.
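For example, here's a minimal sketch of that idea (the helper and the name/location table are my own, assuming the consistent names a_Position, a_Texcoord, a_Normal):
// Hypothetical helper: bind the same locations in every program, BEFORE linking.
// Names a particular shader doesn't use are simply ignored at link time.
const commonAttribLocations = {
  a_Position: 0,
  a_Texcoord: 1,
  a_Normal: 2,
};

function linkWithCommonLocations(gl, program) {
  for (const [name, loc] of Object.entries(commonAttribLocations)) {
    gl.bindAttribLocation(program, loc, name);
  }
  gl.linkProgram(program);
}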
I have a feeling though it's more common to do it in GLSL. That seems bad to me because if you ever ran into a conflict you might have to edit 10s or 100s of shaders. For example you start with
layout (location=0) in vec4 a_Position;
layout (location=1) in vec2 a_Texcoord;
Later in another shader that doesn't have texcoord but has normals you do this
layout (location=0) in vec4 a_Position;
layout (location=1) in vec3 a_Normal;
Then sometime much later you add a shader that needs all 3
layout (location=0) in vec4 a_Position;
layout (location=1) in vec2 a_Texcoord;
layout (location=2) in vec3 a_Normal;
If you want to be able to use all 3 shaders with the same data you'd have to go edit the first 2 shaders. If you'd used the JavaScript way you wouldn't have to edit any shaders.
Of course another way would be to generate your shaders, which is common. You could then either inject the locations
const someShader = `
layout (location=$POSITION_LOC) in vec4 a_Position;
layout (location=$TEXCOORD_LOC) in vec2 a_Texcoord;
layout (location=$NORMAL_LOC) in vec3 a_Normal;
...
`;
const substitutions = {
  POSITION_LOC: 0,
  TEXCOORD_LOC: 1,
  NORMAL_LOC: 2,
};
const subRE = /\$([A-Z0-9_]+)/g;
function replaceStuff(subs, str) {
  return str.replace(subRE, (match, group0) => {
    return subs[group0];
  });
}
...
gl.shaderSource(shader, replaceStuff(substitutions, someShader));
or inject preprocessor macros to define them.
const commonHeader = `
#define A_POSITION_LOC 0
#define A_TEXCOORD_LOC 1
#define A_NORMAL_LOC 2
`;
const someShader = `
layout (location=A_POSITION_LOC) in vec4 a_Position;
layout (location=A_TEXCOORD_LOC) in vec2 a_Texcoord;
layout (location=A_NORMAL_LOC) in vec3 a_Normal;
...
`;
// note: if someShader begins with a #version directive, it must remain the
// very first line, so insert the header after that line instead
gl.shaderSource(shader, commonHeader + someShader);
Is it faster? Yes, but probably not by much. Not calling gl.getAttribLocation is faster than calling it, but you should generally only be calling gl.getAttribLocation at init time anyway, so it won't affect rendering speed, and you generally only use the locations at init time when setting up vertex arrays.
is it better to set locations to integers or would you be able to use strings as well?
Locations are integers. You're manually choosing which attribute index to use. As above, you can use substitutions, shader generation, preprocessor macros, etc. to convert some kind of string into an integer, but ultimately they need to be integers, and they need to be in range of the number of attributes your GPU supports. You can't pick an arbitrary integer like 9127; only 0 to N - 1, where N is the value returned by gl.getParameter(gl.MAX_VERTEX_ATTRIBS). Note that N will always be >= 16 in WebGL2.
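For example, a quick check of the limit at runtime:
const maxAttribs = gl.getParameter(gl.MAX_VERTEX_ATTRIBS);
// valid locations are 0 to maxAttribs - 1; WebGL2 guarantees maxAttribs >= 16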
Related
The following attribute seems fine:
attribute vec4 coord;
The following attribute complains that an attribute "cannot be bool or int":
attribute int ty;
The following attribute complains of a "syntax error":
attribute uint ty;
These results seem quite arbitrary. I can't find a list of the valid types for vertex shader attributes. What are the rules for whether an attribute type is valid in WebGL?
OpenGL ES Shading Language 1.00 specification, page 36, section 4.3.3: "Attribute":
The attribute qualifier is used to declare variables that are passed to a vertex shader from OpenGL ES on a per-vertex basis. It is an error to declare an attribute variable in any type of shader other than a vertex shader. Attribute variables are read-only as far as the vertex shader is concerned. Values for attribute variables are passed to a vertex shader through the OpenGL ES vertex API or as part of a vertex array. They convey vertex attributes to the vertex shader and are expected to change on every vertex shader run. The attribute qualifier can be used only with the data types float, vec2, vec3, vec4, mat2, mat3, and mat4. Attribute variables cannot be declared as arrays or structures.
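So in a WebGL1 (GLSL ES 1.00) vertex shader, only the types listed in that paragraph are legal; a sketch illustrating the rule:
attribute float a_scalar;    // ok
attribute vec2  a_texcoord;  // ok
attribute vec3  a_normal;    // ok
attribute vec4  a_position;  // ok
attribute mat3  a_basis;     // ok (mat2 and mat4 work too)
// attribute int   a_ty;     // error: attributes cannot be bool or int
// attribute uint  a_ty;     // syntax error: uint doesn't exist in GLSL ES 1.00
// attribute vec4  a_arr[2]; // error: attributes cannot be arrays or structures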
I'm a little frustrated. I'm about to use a Mac (OS X Mavericks) for coding stuff.
My shader works fine under Windows 7 and Android.
When I run my app under OS X I get the following compiler errors at runtime.
Exception in thread "LWJGL Application" java.lang.IllegalStateException: ERROR: 0:6: Initializer not allowed
ERROR: 0:6: Use of undeclared identifier 'light'
ERROR: 0:6: Use of undeclared identifier 'spec'
ERROR: 0:6: Use of undeclared identifier 'spec'
ERROR: 0:6: Use of undeclared identifier 'spec'
Fragment Shader Snippet:
String fragmentShader =
"#ifdef GL_ES\n" +
"precision mediump float;\n" +
"#endif\n" +
"uniform sampler2D cubeMap;\n"+
"uniform sampler2D normalMap;"+
"varying vec2 v_TexCoords;"+
"void main() {\n"+
" const vec4 ambientColor = vec4(1.0, 1.0, 1.0, 1.0);" +
" const vec4 diffuseColor = vec4(1.0, 1.0, 1.0, 1.0);"+
" const vec3 light = normalize(vec3(-0.5,-0.8,-1.0));"+
" vec3 camDir = normalize(vec3(v_TexCoords,0) - vec3(0.5,0.5,10.0));"+
" vec4 normalPxl = texture2D(normalMap,v_TexCoords);" +
" vec3 normal = normalize(normalPxl.xyz - vec3(0.5,0.5,0.5));"+
" vec3 reflectDir = reflect(camDir,normal);"+
" float spec = pow(max(0.0,dot(camDir,reflect(light,normal))),4.0);"+
" gl_FragColor = vec4(texture2D(cubeMap, reflectDir.xy * 0.5+0.5).xyz,normalPxl.w-0.1)"+
....
I've read through the OpenGL specs and found nothing problematic about using const or declaring variables in main.
I appreciate your help.
I'll repost my comment here so that this could be closed out, but it appears that the GLSL compiler on OS X didn't like this line:
const vec3 light = normalize(vec3(-0.5,-0.8,-1.0));
where a calculated value is being assigned to a constant. The OS X compiler may not fold that computation at compile time, where other compilers do. Replacing the normalization with the vector it produces should fix this.
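For example, doing the normalization by hand, the length is sqrt(0.5² + 0.8² + 1.0²) ≈ 1.3748, so (values rounded):
const vec3 light = vec3(-0.3637, -0.5819, -0.7274); // ≈ normalize(vec3(-0.5, -0.8, -1.0))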
If you think this is a bug, I recommend filing it at http://bugreport.apple.com, because I know Apple's OpenGL driver team does pay attention to cases like this and has fixed them in the past.
Sleepless around 4am, this was nagging at me, and I finally figured out why. Long story short: this behavior is correct, and you might like to use #version 120 for your desktop shaders. Explanation follows...
There are two GLSL language standards at play here, ES 100 and Desktop 110. Contrary to the version numbers, 100 was standardized after 110, and contains some (but not all) features of Desktop 120.
There are two semantic features at play here, one is initialization of a local variable marked const, and the other is const-ness of built-in functions applied to constant arguments. So we need to look in two different specs at two different features to understand the behavior.
Both 100 and 110 allow initialization of a const local ONLY with a constant expression (100 §4.3.2 "Initializers for const declarations must be a constant expression" and 110 §4.3.2 "Initializers for const declarations must be formed from literal values, other const variables (not including function call paramaters [sic]), or expressions of these."). This restriction is relaxed in GLSL 420.
100 and 110 differ in whether they allow built-in functions to participate in constant expressions. 110 does not (§4.3.3, "an expression whose operands are integral constant expressions, including constructors, but excluding function calls.") but 100 does (§5.10, "[...] a built-in function call whose arguments are all constant expressions, with the exception of the texture lookup functions."). This restriction is relaxed on the desktop in GLSL 120.
The relevant error from the Apple compiler was unfortunately clipped into the horizontal scroll of the OP, else I might have come to this conclusion sooner :(
"ERROR: 0:13: Initializer not allowed"
The only thing that const is buying you here is a compiler error if you try to reassign the variable. As a workaround, you can delete the const without consequence. You could also move those declarations outside of main.
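A minimal sketch of the first workaround:
vec3 light = normalize(vec3(-0.5, -0.8, -1.0)); // no const: legal in both ES 100 and desktop 110
Alternatively, start the desktop shader with #version 120, which (per the above) allows built-in function calls on constant arguments inside const initializers.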
In my fragment shader, I have the line
gl_FragColor = texture2D(texture, destinationTextureCoordinate) * destinationColor;
Where texture is a uniform of type sampler2D. In my code, I always set this to the value 0.
glUniform1i(_uniformTexture, 0);
Is it possible to skip the call to glUniform1i and just hardcode 0 in the fragment shader? I tried just replacing texture with 0 and it complained about it not being a valid type.
I'm not entirely sure what you're trying to achieve, but here are some thoughts:
sampler2D needs to sample a 2D texture, as the name indicates. It is a special GLSL type, so the compiler is right to complain about 0 not being valid when fed into the first parameter of texture2D.
Unless you are using multiple textures, the second parameter to glUniform1i should always be 0 (the default). You can skip this call if you are only using a single texture, but it's good practice to leave it in.
Why do you need a call to texture2D if you just want to pass the value 0? Surely you can just do gl_FragColor = destinationColor. This will color your fragment based on destinationColor, the varying passed from the vertex shader. I'm not sure why you are implementing a texture if you don't plan on using it (or so it seems).
EDIT: Code to send two textures to the fragment shader correctly.
//glClear();
// Attach Texture 0
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, _texture0);
glUniform1i(_uSampler0, 0);
// Attach Texture 1
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, _texture1);
glUniform1i(_uSampler1, 1);
//glDrawArrays();
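On the GLSL side, the matching fragment shader might look like this (a sketch; the sampler names and the multiply blend are my own assumptions):
uniform sampler2D uSampler0; // bound to texture unit 0 via glUniform1i(_uSampler0, 0)
uniform sampler2D uSampler1; // bound to texture unit 1 via glUniform1i(_uSampler1, 1)
varying vec2 v_TexCoords;

void main() {
    vec4 base  = texture2D(uSampler0, v_TexCoords);
    vec4 blend = texture2D(uSampler1, v_TexCoords);
    gl_FragColor = base * blend; // combine the two samples however you like
}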
You need a layout, like this:
#version 420
// For GLSL versions before 420, enable the extension instead:
// #extension GL_ARB_shading_language_420pack : enable
layout(binding=0) uniform sampler2D diffuseTex;
That way the sampler uniform is bound to texture unit 0, which I think is what you want. But keep in mind that texture units count up from zero, so be sure which unit you mean to bind. The point of binding sampler uniforms via the glUniform1i() function is to ensure the correctness of your code.
Source: http://www.opengl.org/wiki/GLSL_Sampler#Version_4.20_binding
From section 5.8 of The OpenGL® ES Shading Language (v1.00, r17) [PDF] (emphasis mine):
The assignment operator stores the value of the rvalue-expression into the l-value and returns an r-value with the type and precision of the lvalue-expression. The lvalue-expression and rvalue-expression must have the same type. All desired type-conversions must be specified explicitly via a constructor.
So it sounds like doing something like this would not be legal:
vec3 my_vec3 = vec3(1, 2, 3);
vec4 my_vec4 = my_vec3;
And to make it legal the second line would have to be something like:
vec4 my_vec4 = vec4(my_vec3, 1); // add 4th component
I assumed that glVertexAttribPointer had similar requirements. That is, if you were assigning to a vec4, the size parameter would have to be equal to 4.
Then I came across the GLES20TriangleRenderer sample for Android. Some relevant snippets:
attribute vec4 aPosition;
maPositionHandle = GLES20.glGetAttribLocation(mProgram, "aPosition");
GLES20.glVertexAttribPointer(maPositionHandle, 3, GLES20.GL_FLOAT, false,
TRIANGLE_VERTICES_DATA_STRIDE_BYTES, mTriangleVertices);
So aPosition is a vec4, but the call to glVertexAttribPointer that's used to set it has a size of 3. Is this code correct, is GLES20TriangleRenderer relying on unspecified behavior, or is there something else I'm missing?
The size of the attribute data passed to the shader does not have to match the size of the attribute in that shader. You can pass 2 values (via glVertexAttribPointer) to an attribute defined as a vec4; the missing components are filled in with defaults: 0 for Z and 1 for W. Similarly, you can pass 4 values to a vec2 attribute; the extra values are discarded.
So you can mix and match vertex attributes with uploaded values all you want.
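Applied to the snippet above, a sketch of what the shader effectively receives:
attribute vec4 aPosition;
void main() {
    // With size = 3 in glVertexAttribPointer, each vertex arrives here
    // as vec4(x, y, z, 1.0): the unsupplied W component defaults to 1.
    gl_Position = aPosition;
}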
I am working with OpenGL ES 2.0 on Ubuntu 10.10 with the PVR SDK.
In my code I am passing the vertices of a triangle to render, but I don't understand how the attribute parameter in the vertex shader works.
I see that the examples use myVertex. What is meant by that?
like this:
const char* pszVertShader = "\
attribute highp vec4 myVertex;\
uniform mediump mat4 projmatrix;\
invariant gl_Position;\
void main(void)\
{\
gl_Position = projmatrix * myVertex;\
}";
===============================render======================
glBindAttribLocation(uiProgramObject, VERTEX_ARRAY, "myVertex");
So I just want to know: I am reading the vertices from a text file; will that affect the myVertex attribute?
If you need any other information I can provide it; I have already posted my whole code in a previous question here.
OpenGL 3 / OpenGL ES 2 abandoned the concept of predefined vertex attributes ("position", "normal", "texcoord", …) that are supplied through glVertexPointer, glNormalPointer, glTexCoordPointer, …
Instead, in your shaders you introduce custom attribute identifiers. Those identifiers are referred to by a so-called attribute location, a numeric index. glBindAttribLocation allows one to assign attribute identifiers to specific locations. In the case of the above code fragment there seems to exist a global constant VERTEX_ARRAY – introduced by the program code(!), i.e. not predefined by OpenGL or anything like that – that is universally used for the myVertex attribute in the shader.
So that particular vertex attribute can be supplied with data by glVertexAttribPointer through a common token VERTEX_ARRAY in a statement similar to this:
glVertexAttribPointer(VERTEX_ARRAY, 3, GL_FLOAT, false, 0, isVBO ? (char*)vertex.offset : (char*)vertex.data + vertex.offset);
Of course the exact semantics of the glVertexAttribPointer calls depends on the particular program that does them.
Instead of a (global) constant VERTEX_ARRAY you could just as well use a variable, say shader.attrib.vertex, which you set per shader using
shader.attrib.vertex = glGetAttribLocation(shader.program_object, "myVertex");
and then use that variable in the calls to glVertexAttribPointer:
glVertexAttribPointer(shader.attrib.vertex, …)