Using sampler2DArray throws compilation errors - opengl-es

I rendered something onto layer zero of an array texture, and I am now trying to sample it with a sampler2DArray. But the new shader I created to sample the array texture throws compilation errors when I try to compile it.
const char *fragshader="\
#version 320 es\n\
precision mediump float;\n\
\n\
in vec2 texcoord;\n\
uniform sampler2DArray texArray;\n\
uniform int layer;\n\
//out vec4 color;\n\
layout(location =0) out vec4 color0;\n\
\n\
void main()\n\
{\n\
//gl_FragColor = texture(tex, texcoord,1.0);\n\
color0 = texture(texArray,vec3(texcoord.x,texcoord.y,layer));\n\
}\n\
";
The shader fails to compile, and I get garbage values when I try to query the uniform locations.
Errors:
ERROR: 0:5: 'declaration': either a default precision should be defined or a precision qualifier should be used
ERROR: 0:5: 'declaration': either a default precision should be defined or a precision qualifier should be used
ERROR: 0:13: 'texArray' :undeclared identifier.
ERROR: 0:13: 'texture' :no matching overload function found.
ERROR: 0:13: 'assign' :cannot convert from 'const float ' to 'fragout 4-component vector of float'
ERROR: 5 compilation errors. No code generated.
Can someone please help me with this?

The first error is caused by the lack of a precision qualifier on the declaration of texArray, as sampler2DArray types have no default precision.
The shader compiles OK for me with this fix applied:
uniform mediump sampler2DArray texArray;
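For reference, here is the complete shader source with that one-line fix applied (a sketch of the corrected GLSL, not the OP's exact C string literal; the int layer is implicitly converted to float inside the vec3 constructor):
#version 320 es
precision mediump float;

in vec2 texcoord;
uniform mediump sampler2DArray texArray;
uniform int layer;
layout(location = 0) out vec4 color0;

void main()
{
    color0 = texture(texArray, vec3(texcoord.x, texcoord.y, layer));
}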

Related

How are layout qualifiers better than getAttribLocation in WebGL2?

As I'm learning more about WebGL2, I've come across this new syntax within shaders where you set location inside of shaders via: layout (location=0) in vec4 a_Position;. How does this compare to getting the attribute location with the traditional gl.getAttribLocation('a_Position');. I assume it's faster? Any other reasons? Also, is it better to set locations to integers or would you be able to use strings as well?
There are two ideas conflated here:
1. Manually assigning locations to attributes.
2. Assigning attribute locations in GLSL vs JavaScript.
Why would you want to assign locations?
You don't have to look the location up afterwards, since you already know it.
You can make sure two or more shader programs use the same locations so that they can use the same attributes, which also means a single vertex array can be used with both shaders. If you don't assign the attribute locations, the shaders may use different attributes for the same data; in other words, shaderprogram1 might use attribute 3 for position while shaderprogram2 uses attribute 1 for position, as sketched below.
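For example (a minimal sketch; program1, program2, and the attribute name a_Position are assumed names, not from the question):
// Give both programs the same location for position, BEFORE linking,
// so a single vertex array works with either program.
gl.bindAttribLocation(program1, 0, "a_Position");
gl.bindAttribLocation(program2, 0, "a_Position");
gl.linkProgram(program1);
gl.linkProgram(program2);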
Why would you want to assign locations in GLSL vs doing it in JavaScript?
You can assign a location like this in GLSL ES 3.0 (not GLSL ES 1.0)
layout (location=0) in vec4 a_Position;
You can also assign a location in JavaScript like this
// **BEFORE** calling gl.linkProgram
gl.bindAttribLocation(program, 0, "a_Position");
Off the top of my head, doing it in JavaScript seems more DRY (Don't Repeat Yourself). In fact, if you use consistent naming, then you can likely set all locations for all shaders by just binding locations for your common names before calling gl.linkProgram, as in the sketch below. One other minor advantage to doing it in JavaScript is that it's compatible with GLSL ES 1.0 and WebGL1.
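Here is a minimal sketch of that idea (the helper name bindCommonLocations and the attribute list are assumptions, not a standard API):
// Conventional attribute names shared by all shaders in the app.
const commonAttributes = ["a_Position", "a_Texcoord", "a_Normal"];

function bindCommonLocations(gl, program) {
  // Must be called BEFORE gl.linkProgram for the bindings to take effect.
  commonAttributes.forEach((name, loc) => {
    gl.bindAttribLocation(program, loc, name);
  });
  gl.linkProgram(program);
}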
I have a feeling, though, that it's more common to do it in GLSL. That seems bad to me, because if you ever ran into a conflict you might have to edit tens or hundreds of shaders. For example, you start with
layout (location=0) in vec4 a_Position;
layout (location=1) in vec2 a_Texcoord;
Later in another shader that doesn't have texcoord but has normals you do this
layout (location=0) in vec4 a_Position;
layout (location=1) in vec3 a_Normal;
Then sometime much later you add a shader that needs all 3
layout (location=0) in vec4 a_Position;
layout (location=1) in vec2 a_Texcoord;
layout (location=2) in vec3 a_Normal;
If you want to be able to use all 3 shaders with the same data you'd have to go edit the first 2 shaders. If you'd used the JavaScript way you wouldn't have to edit any shaders.
Of course, another common approach is to generate your shaders. You could then either inject the locations directly
const someShader = `
layout (location=$POSITION_LOC) in vec4 a_Position;
layout (location=$TEXCOORD_LOC) in vec2 a_Texcoord;
layout (location=$NORMAL_LOC) in vec3 a_Normal;
...
`;
const substitutions = {
  POSITION_LOC: 0,
  TEXCOORD_LOC: 1,
  NORMAL_LOC: 2,
};
const subRE = /\$([A-Z0-9_]+)/g;
function replaceStuff(subs, str) {
  return str.replace(subRE, (match, group0) => {
    return subs[group0];
  });
}
...
gl.shaderSource(shader, replaceStuff(substitutions, someShader));
or inject preprocessor macros to define them.
const commonHeader = `
#define A_POSITION_LOC 0
#define A_TEXCOORD_LOC 1
#define A_NORMAL_LOC 2
`;
const someShader = `
layout (location=A_POSITION_LOC) in vec4 a_Position;
layout (location=A_TEXCOORD_LOC) in vec2 a_Texcoord;
layout (location=A_NORMAL_LOC) in vec3 a_Normal;
...
`;
gl.shaderSource(shader, commonHeader + someShader);
Is it faster? Yes, but probably not by much. Not calling gl.getAttribLocation is faster than calling it, but you should generally only be calling gl.getAttribLocation at init time, so it won't affect rendering speed, and you generally only use the locations at init time when setting up vertex arrays.
is it better to set locations to integers or would you be able to use strings as well?
Locations are integers; you're manually choosing which attribute index to use. As above, you can use substitutions, shader generation, preprocessor macros, etc. to convert some type of string into an integer, but ultimately they need to be integers, and they need to be in range of the number of attributes your GPU supports. You can't pick an arbitrary integer like 9127, only 0 to N - 1, where N is the value returned by gl.getParameter(gl.MAX_VERTEX_ATTRIBS). Note that N will always be >= 16 in WebGL2.
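For instance, you can check the limit at runtime (standard WebGL2 API):
// Highest usable location is maxAttribs - 1.
const maxAttribs = gl.getParameter(gl.MAX_VERTEX_ATTRIBS);
console.log(maxAttribs); // always >= 16 in WebGL2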

OpenGL ES 2.0 GLSL compilation fails on OS X when using const

I'm a little frustrated. I've started using a Mac (OS X Mavericks) for coding.
My shader works fine under Windows 7 and Android.
When I run my app under OS X, I get the following compiler errors at runtime.
Exception in thread "LWJGL Application" java.lang.IllegalStateException: ERROR: 0:6: Initializer not allowed
ERROR: 0:6: Use of undeclared identifier 'light'
ERROR: 0:6: Use of undeclared identifier 'spec'
ERROR: 0:6: Use of undeclared identifier 'spec'
ERROR: 0:6: Use of undeclared identifier 'spec'
Fragment Shader Snippet:
String fragmentShader =
"#ifdef GL_ES\n" +
"precision mediump float;\n" +
"#endif\n" +
"uniform sampler2D cubeMap;\n"+
"uniform sampler2D normalMap;"+
"varying vec2 v_TexCoords;"+
"void main() {\n"+
" const vec4 ambientColor = vec4(1.0, 1.0, 1.0, 1.0);" +
" const vec4 diffuseColor = vec4(1.0, 1.0, 1.0, 1.0);"+
" const vec3 light = normalize(vec3(-0.5,-0.8,-1.0));"+
" vec3 camDir = normalize(vec3(v_TexCoords,0) - vec3(0.5,0.5,10.0));"+
" vec4 normalPxl = texture2D(normalMap,v_TexCoords);" +
" vec3 normal = normalize(normalPxl.xyz - vec3(0.5,0.5,0.5));"+
" vec3 reflectDir = reflect(camDir,normal);"+
" float spec = pow(max(0.0,dot(camDir,reflect(light,normal))),4.0);"+
" gl_FragColor = vec4(texture2D(cubeMap, reflectDir.xy * 0.5+0.5).xyz,normalPxl.w-0.1)"+
....
I've read through the OpenGL specs and found nothing problematic about using const or declaring some variables in main.
I appreciate your help.
I'll repost my comment here so that this can be closed out: it appears that the GLSL compiler on OS X didn't like this line:
const vec3 light = normalize(vec3(-0.5,-0.8,-1.0));
where a calculated value was being assigned to a constant. This compiler may not fold that computation into a constant, where other compilers do. Replacing the normalization with the end vector it produces should fix this, as sketched below.
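For example (a sketch; the components are normalize(vec3(-0.5, -0.8, -1.0)) precomputed by hand and rounded to five decimal places):
// A constant expression: no function call left for the compiler to reject.
const vec3 light = vec3(-0.36369, -0.58190, -0.72738);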
If you think this is a bug, I recommend filing it at http://bugreport.apple.com, because I know Apple's OpenGL driver team does pay attention to cases like this and has fixed them in the past.
Sleepless around 4am, this was nagging at me, and I finally figured out why. Long story short: this behavior is correct, and you might like to use #version 120 for your desktop shaders. Explanation follows...
There are two GLSL language standards at play here, ES 100 and Desktop 110. Contrary to the version numbers, 100 was standardized after 110, and contains some (but not all) features of Desktop 120.
There are two semantic features at play here, one is initialization of a local variable marked const, and the other is const-ness of built-in functions applied to constant arguments. So we need to look in two different specs at two different features to understand the behavior.
Both 100 and 110 allow initialization of a const local ONLY with a constant expression (100 §4.3.2 "Initializers for const declarations must be a constant expression” and 110 §4.3.2 "Initializers for const declarations must be formed from literal values, other const variables (not including function call paramaters [sic]), or expressions of these.”). This restriction is relaxed in GLSL 420.
100 and 110 differ in whether they allow built-in functions to participate in constant expressions. 110 does not (§4.3.3, "an expression whose operands are integral constant expressions, including constructors, but excluding function calls.”) but 100 does (§5.10, “[...] a built-in function call whose arguments are all constant expressions, with the exception of the texture lookup functions.”). This restriction is relaxed on the desktop in GLSL 120.
The relevant error from the Apple compiler was unfortunately clipped into the horizontal scroll of the OP, else I might have come to this conclusion sooner :(
"ERROR: 0:13: Initializer not allowed”
The only thing that "const" is buying you here is a compiler error if you try to reassign the variable. As a workaround, you can delete the "const" without consequence, as sketched below. You could also move those declarations outside of main.
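A sketch of the simplest workaround, dropping the qualifier so the normalize() call becomes an ordinary run-time initializer:
// No longer const, so the initializer is accepted; the value is identical,
// you merely lose the compile-time protection against reassignment.
vec3 light = normalize(vec3(-0.5, -0.8, -1.0));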

WebGL context can't render simplest screen

I'm stuck trying to render some extremely basic stuff in WebGL. I've dumbed the rendering down to the most basic thing I can think of in order to find where the issue lies, but I can't even draw a simple square for some reason. The scene I really want to render is more complex, but as I said, I've dumbed it down to try to find the problem, and still no luck. I'm hoping someone can take a look and find whatever I'm missing, which I assume is a setup step at some point.
The gl commands I'm running (as reported by webgl inspector, without errors) are:
clearColor(0,0,0,1)
clearDepth(1)
clear(COLOR_BUFFER_BIT | DEPTH_BUFFER_BIT)
useProgram([Program 2])
bindBuffer(ARRAY_BUFFER, [Buffer 5])
vertexAttribPointer(0, 2, FLOAT, false, 0, 0)
drawArrays(TRIANGLES, 0, 6)
The buffer that is being used there (Buffer 5) is setup as follows:
bufferData(ARRAY_BUFFER, [-1,-1,1,-1,-1,1,-1,1,1,-1,1,1], STATIC_DRAW)
And the program (Program 2) data is:
LINK_STATUS true
VALIDATE_STATUS false
DELETE_STATUS false
ACTIVE_UNIFORMS 0
ACTIVE_ATTRIBUTES 1
Vertex shader:
#ifdef GL_ES
precision highp float;
#endif
attribute vec2 aPosition;
void main(void) {
gl_Position = vec4(aPosition, 0, 1);
}
Fragment shader:
#ifdef GL_ES
precision highp float;
#endif
void main(void) {
gl_FragColor = vec4(1.0,0.0,0.0,1.0);
}
Other state I think could be relevant:
CULL_FACE false
CULL_FACE_MODE BACK
FRONT_FACE CCW
BLEND false
DEPTH_TEST false
VIEWPORT 0, 0 640 x 480
SCISSOR_TEST false
SCISSOR_BOX 0, 0 640 x 480
COLOR_WRITEMASK true,true,true,true
DEPTH_WRITEMASK true
STENCIL_WRITEMASK 0xffffffff
FRAMEBUFFER_BINDING null
What I expected to see from that setup/commands is a red quad taking up the whole clip space, but what I see is simply the cleared screen, as the drawArrays doesn't seem to be doing anything. Can anybody spot what I'm missing? Any tips on how to debug this would be very welcome too!
Here:
bufferData(ARRAY_BUFFER, [-1,-1,1,-1,-1,1,-1,1,1,-1,1,1], STATIC_DRAW)
replace it with:
bufferData(ARRAY_BUFFER, new Float32Array([-1,-1,1,-1,-1,1,-1,1,1,-1,1,1]), STATIC_DRAW)
because a plain JavaScript array doesn't tell WebGL which type you are passing (integer, float, or byte). Example:
http://jsfiddle.net/9QxAz/
After reading #user1724911's fiddle, I found out that what I had missed was enabling the vertex attribute array - a stupidly simple mistake. I'm actually surprised I didn't get any warning from WebGL Inspector about this, but the solution was simply to add a call to enable that attribute:
gl.enableVertexAttribArray(program.attributes.aPosition);
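Putting both answers together, the relevant setup might look like this (a sketch; program and the bound buffer are assumed to be created as in the question):
// Upload typed data so WebGL knows the element type.
gl.bufferData(gl.ARRAY_BUFFER,
    new Float32Array([-1,-1, 1,-1, -1,1, -1,1, 1,-1, 1,1]),
    gl.STATIC_DRAW);

// Enable the attribute array before pointing it at the buffer.
const loc = gl.getAttribLocation(program, "aPosition");
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);

gl.drawArrays(gl.TRIANGLES, 0, 6);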

NULL checks in GLSL

I'm trying to check inside a shader (GLSL) whether my vec4 is NULL. I need this for several reasons, mostly to stay compatible with specific graphics cards, since some of them pass a previous color in gl_FragColor and some don't (providing a null vec4 that needs to be overwritten).
Well, on a fairly new Mac, someone got this error:
java.lang.RuntimeException: Error creating shader: ERROR: 0:107: '==' does not operate on 'vec4' and 'int'
ERROR: 0:208: '!=' does not operate on 'mat3' and 'int'
This is my code in the fragment shader:
void main()
{
if(gl_FragColor == 0) gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0); //Line 107
vec4 newColor = vec4(0.0, 0.0, 0.0, 0.0);
[...]
if(inverseViewMatrix != 0) //Line 208
{
[do stuff with it; though I can replace this NULL check with a boolean]
}
[...]
gl_FragColor.rgb = mix(gl_FragColor.rgb, newColor.rgb, newColor.a);
gl_FragColor.a += newColor.a;
}
As you can see, I do a 0/NULL check on gl_FragColor at the start, because some graphics cards pass valuable information there, but some don't. Now, on that particular Mac, it didn't work. I did some research but couldn't find any information on how to do a proper NULL check in GLSL. Is there even one, or do I really need to make separate shaders here?
All variables meant for reading, i.e. input variables, always deliver sensible values. Being an output variable, gl_FragColor is not one of these variables!
In this code
void main()
{
if(gl_FragColor == 0) gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0); //Line 107
vec4 newColor = vec4(0.0, 0.0, 0.0, 0.0);
The very first thing you do is read from gl_FragColor. The GLSL specification clearly states that the value of an output variable such as gl_FragColor is undefined when the fragment shader stage is entered (point 1):
The value of an output variable will be undefined in any of the three following cases:
1. At the beginning of execution.
2. At each synchronization point, unless the value was well-defined after the previous synchronization point and was not written by any invocation since, or the value was written by exactly one shader invocation since the previous synchronization point, or the value was written by multiple shader invocations since the previous synchronization point, and the last write performed by all such invocations wrote the same value.
3. When read by a shader invocation, if the value was undefined at the previous synchronization point and has not been written by the same shader invocation since, or the output variable is written to by any other shader invocation between the previous and next synchronization points, even if that assignment occurs in code following the read.
Only after an element of an output variable has been written to for the first time is its value defined. So the whole check you do there makes no sense. That it "didn't work" is completely permissible and an error on your end.
You're invoking undefined behaviour, and technically it would be permissible for your computer to become sentient, chase you down the street, and erase all of your data as an alternative reaction to this.
In GLSL a vec4 is a regular datatype just like int. It's not some sort of pointer to an array which could be a null pointer. At best it has some default value that's not being overwritten by a call to glUniform.
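The portable fix, then, is to write gl_FragColor before ever reading it. A minimal sketch reusing the structure of the question's shader:
void main()
{
    // Write first: from here on, gl_FragColor has a defined value.
    gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0);
    vec4 newColor = vec4(0.0, 0.0, 0.0, 0.0);
    // ... compute newColor ...
    gl_FragColor.rgb = mix(gl_FragColor.rgb, newColor.rgb, newColor.a);
    gl_FragColor.a += newColor.a;
}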
Variables in GLSL shaders are always defined (otherwise, you'll get a linker error). If you don't supply those values with data (by not loading the appropriate uniform, or binding attributes to in or attribute variables), the values in those variables will be undefined (i.e., garbage), but present.
Even if you can't have null values, you can test undefined variables. This is a trick that I use to debug my shaders:
...
/* First we test for the lower range */
if (suspect_variable.x < 0.5) {
    outColour = vec4(0, 1, 0, 0); /* Green if in lower range */
} else if (suspect_variable.x >= 0.5) { /* Then for the higher range */
    outColour = vec4(1, 0, 0, 0); /* Red if in higher range */
} else {
    /* Now we have tested for all real values.
       If we end up here we know that the value must be undefined */
    outColour = vec4(0, 0, 1, 0); /* Blue if it's undefined */
}
You might ask, what could make a variable undefined? Out-of-range access of an array would cause it to be undefined:
const int numberOfLights = 2;
uniform vec3 lightColour[numberOfLights];
...
for (int i = 0; i < 100; i++) {
    /* When i is bigger than 1, suspect_variable becomes undefined */
    suspect_variable = suspect_variable * lightColour[i];
}
It is a simple and easy trick to use when you do not have access to real debugging tools.

Hardcoding GLSL texture sampler2D

In my fragment shader, I have the line
gl_FragColor = texture2D(texture, destinationTextureCoordinate) * destinationColor;
Where texture is a uniform of type sampler2D. In my code, I always set this to the value 0:
glUniform1i(_uniformTexture, 0);
Is it possible to skip the call to glUniform1i and just hardcode 0 in the fragment shader? I tried just replacing texture with 0 and it complained about that not being a valid type.
I'm not entirely sure what you're trying to achieve, but here are some thoughts:
sampler2D needs to sample a 2D texture, as the name indicates. It is a special GLSL variable, so the compiler is right to complain about 0 not being a valid type when fed into the first parameter of texture2D.
Unless you are using multiple textures, the second parameter to glUniform1i should always be 0 (default). You can skip this call if you are only using a single texture, but it's good practice to leave it in.
Why do you need a call to texture2D if you just want to pass the value 0? Surely you can just do gl_FragColor = destinationColor. This will color your fragment with the interpolated destinationColor passed from the vertex shader. I'm not sure why you are implementing a texture if you don't plan on using it (or so it seems).
EDIT: Code to send two textures to the fragment shader correctly.
//glClear();
// Attach Texture 0
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, _texture0);
glUniform1i(_uSampler0, 0);
// Attach Texture 1
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, _texture1);
glUniform1i(_uSampler1, 1);
//glDrawArrays();
You need a layout, like this:
#version 420
//#extension GL_ARB_shading_language_420pack : enable  // use this extension for GLSL versions before 420
layout(binding=0) uniform sampler2D diffuseTex;
That way the sampler uniform is bound to texture unit 0, which I think is what you want. But keep in mind that bindings count up from zero, so be sure which unit you want to bind. The point of binding sampler uniforms via the glUniform1i() function is to make that choice explicit in your code.
Source: http://www.opengl.org/wiki/GLSL_Sampler#Version_4.20_binding
