I'm trying to check inside the shader (GLSL) if my vec4 is NULL. I need this for several reasons, mostly to keep specific graphics cards compatible: some of them pass a previous color in gl_FragColor, and some don't (providing a null vec4 that needs to be overwritten).
Well, on a fairly new Mac, someone got this error:
java.lang.RuntimeException: Error creating shader: ERROR: 0:107: '==' does not operate on 'vec4' and 'int'
ERROR: 0:208: '!=' does not operate on 'mat3' and 'int'
This is my code in the fragment shader:
void main()
{
    if (gl_FragColor == 0) gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0); // Line 107
    vec4 newColor = vec4(0.0, 0.0, 0.0, 0.0);
    [...]
    if (inverseViewMatrix != 0) // Line 208
    {
        [do stuff with it; though I can replace this NULL check with a boolean]
    }
    [...]
    gl_FragColor.rgb = mix(gl_FragColor.rgb, newColor.rgb, newColor.a);
    gl_FragColor.a += newColor.a;
}
As you can see, I do a 0/NULL check for gl_FragColor at the start, because some graphics cards pass valuable information there, but some don't. Now, on that particular Mac, it didn't work. I did some research but couldn't find any information on how to do a proper NULL check in GLSL. Is there even one, or do I really need to make separate shaders here?
All variables meant for reading, i.e. input variables, always deliver sensible values. Being an output variable, gl_FragColor is not one of them!
In this code
void main()
{
    if (gl_FragColor == 0) gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0); // Line 107
    vec4 newColor = vec4(0.0, 0.0, 0.0, 0.0);
the very first thing you do is read from gl_FragColor. The GLSL specification clearly states that the value of an output variable such as gl_FragColor is undefined when the fragment shader stage is entered (point 1):
The value of an output variable will be undefined in any of the three following cases:

1. At the beginning of execution.
2. At each synchronization point, unless
   - the value was well-defined after the previous synchronization point and was not written by any invocation since, or
   - the value was written by exactly one shader invocation since the previous synchronization point, or
   - the value was written by multiple shader invocations since the previous synchronization point, and the last write performed by all such invocations wrote the same value.
3. When read by a shader invocation, if
   - the value was undefined at the previous synchronization point and has not been written by the same shader invocation since, or
   - the output variable is written to by any other shader invocation between the previous and next synchronization points, even if that assignment occurs in code following the read.
Only after an element of an output variable has been written to for the first time is its value defined. So the whole check you do there makes no sense. That it "didn't work" is completely permissible, and the error is on your end.
You're invoking undefined behaviour, and technically it would be permissible for your computer to become sentient, chase you down the street and erase all of your data as an alternative reaction to this.
In GLSL a vec4 is a regular datatype just like int. It's not some sort of pointer to an array which could be a null pointer. At best it has some default value that's not being overwritten by a call to glUniform.
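A minimal sketch of the fix, assuming the shader structure from the question: write gl_FragColor before ever reading it, instead of testing it against anything:

void main()
{
    // Define the output up front; reading it before the first write is undefined.
    gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0);

    vec4 newColor = vec4(0.0, 0.0, 0.0, 0.0);
    // ... compute newColor as before ...

    gl_FragColor.rgb = mix(gl_FragColor.rgb, newColor.rgb, newColor.a);
    gl_FragColor.a += newColor.a;
}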
Variables in GLSL shaders are always defined (otherwise you'll get a linker error). If you don't supply those variables with data (by not loading the appropriate uniform, or not binding attribute data to in or attribute variables), the values in those variables will be undefined (i.e., garbage), but present.
Even if you can't have null values, you can test undefined variables. This is a trick that I use to debug my shaders:
...
/* First we test for the lower range */
if (suspect_variable.x < 0.5) {
    outColour = vec4(0, 1, 0, 0); /* Green if in lower range */
} else if (suspect_variable.x >= 0.5) { /* Then for the higher range */
    outColour = vec4(1, 0, 0, 0); /* Red if in higher range */
} else {
    /* Now we have tested for all real values.
       If we end up here we know that the value must be undefined */
    outColour = vec4(0, 0, 1, 0); /* Blue if it's undefined */
}
You might ask, what could make a variable undefined? Out-of-range access of an array, for example:
const int numberOfLights = 2;
uniform vec3 lightColour[numberOfLights];
...
for (int i = 0; i < 100; i++) {
    /* When i is bigger than 1, lightColour[i] is an out-of-range
       access and suspect_variable becomes undefined */
    suspect_variable = suspect_variable * lightColour[i];
}
It is a simple and easy trick to use when you do not have access to real debugging tools.
I am having a strange issue with a GLSL fragment shader on a 2020 Intel MacBook Pro with Intel Iris Plus graphics and macOS Ventura 13. It seems that when I use an if block without an else block, the if block always executes.
I have stripped out everything else from the shader to verify it isn't something that is NaN/INFINITY, a divide by zero, or similar. I've also seen some people mention not to use == with floats, but even with only > or < it still occurs. With everything stripped out, the code below is what I am running:
#version 400
layout(location = 0) out vec4 gc_color;

void main() {
    float stackptr = -1.0;
    if (stackptr > 0.0) {
        gc_color = vec4(1.0, 1.0, 0.0, 0.0);
    } // this always runs the if block
}
When I run this, the output is always yellow, no matter what I set stackptr to. The condition seems to always evaluate to true. But when I add an else block, as below, it works correctly.
if (stackptr > 0.0) {
    gc_color = vec4(1.0, 1.0, 0.0, 0.0);
} else {
    gc_color = vec4(1.0, 0.0, 1.0, 0.0);
} // this runs correctly
This will change the color based on what the value of stackptr is as you would expect.
I have also tried replacing "stackptr" in the condition with the value (-1.0) and that works as expected. So it seems like an issue with the variable being evaluated.
Has anyone seen this behavior before? Is it possibly a driver issue? I'll try it with an Nvidia GPU soon on Windows to check.
There is a similar issue here but no one has given an answer.
You're observing undefined behaviour here. In your first shader you don't assign gc_color when the condition is false, so you can see anything. Let's consider how the shader compiler and the GPU might process it:

1. Constant folding. The compiler eliminates the condition entirely, since stackptr is a constant (and can be deduced in some cases). The result would be arbitrary, as gc_color is never set.
2. Dynamic branching. The yellow color is assigned when the condition is true; otherwise the output is undefined (any color). Dynamic branching implies branch prediction and dropping part of the command pipeline on a miss, so it can be less efficient than static branching for simple conditions.
3. Static branching. Both branches (true and false) are executed and one result is kept once the condition is resolved. There are some preconditions (e.g. there must be no side effects in the branches), but your shader looks eligible. Some implementations execute the true branch unconditionally: when the condition is known and true, they jump past the false block; when it is false, they let the commands in the false branch overwrite what the true branch set. In your case there is no false branch, so nothing is overwritten and you see the yellow from the true branch.

A defensive fix is sketched below.
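A minimal sketch, assuming the stripped-down shader above: write the output on every path, either by assigning it up front or by keeping the else branch:

#version 400
layout(location = 0) out vec4 gc_color;

void main() {
    // Assign the output unconditionally first, so no path leaves it undefined.
    gc_color = vec4(1.0, 0.0, 1.0, 0.0);
    float stackptr = -1.0;
    if (stackptr > 0.0) {
        gc_color = vec4(1.0, 1.0, 0.0, 0.0);
    }
}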
I am trying to solve an error I get when I run this sample.
It concerns occlusion queries: essentially it renders a square four times, changing the viewport each time, but only the middle two passes actually render anything, since the first and last viewports are deliberately outside the monitor area.
viewports[0] = new Vec4(windowSize.x * -0.5f, windowSize.y * -0.5f, windowSize.x * 0.5f, windowSize.y * 0.5f);
viewports[1] = new Vec4(0, 0, windowSize.x * 0.5f, windowSize.y * 0.5f);
viewports[2] = new Vec4(windowSize.x * 0.5f, windowSize.y * 0.5f, windowSize.x * 0.5f, windowSize.y * 0.5f);
viewports[3] = new Vec4(windowSize.x * 1.0f, windowSize.y * 1.0f, windowSize.x * 0.5f, windowSize.y * 0.5f);
Each time, it calls glBeginQuery with a different query object, renders, and then ends the GL_ANY_SAMPLES_PASSED query:
// Samples count query
for (int i = 0; i < viewports.length; ++i) {
    gl4.glViewportArrayv(0, 1, viewports[i].toFA_(), 0);
    gl4.glBeginQuery(GL_ANY_SAMPLES_PASSED, queryName.get(i));
    {
        gl4.glDrawArraysInstanced(GL_TRIANGLES, 0, vertexCount, 1);
    }
    gl4.glEndQuery(GL_ANY_SAMPLES_PASSED);
}
Then I try to read the result:
gl4.glBindBuffer(GL_QUERY_BUFFER, bufferName.get(Buffer.QUERY));
IntBuffer params = GLBuffers.newDirectIntBuffer(1);
for (int i = 0; i < viewports.length; ++i) {
    params.put(0, i);
    gl4.glGetQueryObjectuiv(queryName.get(i), GL_QUERY_RESULT, params);
}
But I get:
GlDebugOutput.messageSent(): GLDebugEvent[ id 0x502
type Error
severity High: dangerous undefined behavior
source GL API
msg GL_INVALID_OPERATION error generated. Bound query buffer is not large enough to store result.
when 1455696348371
source 4.5 (Core profile, arb, debug, compat[ES2, ES3, ES31, ES32], FBO, hardware) - 4.5.0 NVIDIA 356.39 - hash 0x238337ea]
If I look at the API docs, they say:
params
If a buffer is bound to the GL_QUERY_RESULT_BUFFER target, then params is treated as an offset to a location within that buffer's data store to receive the result of the query. If no buffer is bound to GL_QUERY_RESULT_BUFFER, then params is treated as an address in client memory of a variable to receive the resulting data.
I guess there is an error in that phrase; I think they meant GL_QUERY_BUFFER instead of GL_QUERY_RESULT_BUFFER. Indeed, they also use GL_QUERY_BUFFER here, for example.
Anyway, if anything is bound there, then params is interpreted as an offset. OK,
but my buffer is big enough:
gl4.glBindBuffer(GL_QUERY_BUFFER, bufferName.get(Buffer.QUERY));
gl4.glBufferData(GL_QUERY_BUFFER, Integer.BYTES * queryName.capacity(), null, GL_DYNAMIC_COPY);
gl4.glBindBuffer(GL_QUERY_BUFFER, 0);
So what's the problem?
I tried passing a big number, such as 500, for the buffer size, but with no success.
I guess the error lies somewhere else. Can you see it?
If I had to guess, I'd expect that if I bind a buffer to the GL_QUERY_BUFFER target, then OpenGL should read the value inside params and interpret it as the offset (in bytes) at which to save the result of the query.
No, that's not how it works.
In C/C++, the value taken by glGetQueryObject is a pointer, which normally is a pointer to a client memory buffer. For this particular function, this would often be a stack variable:
GLuint val;
glGetQueryObjectuiv(obj, GL_QUERY_RESULT, &val);
val is declared by client code (i.e., the code calling into OpenGL). This code passes a pointer to that variable, and glGetQueryObjectuiv will write data through this pointer.
This is emulated in C# bindings by using *Buffer types. These represent contiguous arrays of values from which C# can extract a pointer that is compatible with C and C++ pointers-to-arrays.
However, when a buffer is bound to GL_QUERY_BUFFER, the meaning of the parameter changes. As you noted, it goes from being a client pointer to memory into an offset. But please note what that says. It does not say a "client pointer to an offset".
That is, the pointer value itself ceases being a pointer to actual memory. Instead, the numerical value of the pointer is treated as an offset.
In C++ terms, that's this:
glBindBuffer(GL_QUERY_BUFFER, buff);
glGetQueryObjectuiv(obj, GL_QUERY_RESULT, reinterpret_cast<void*>(16));
Note how it takes the offset of 16 bytes and pretends that this value is actually a void* whose numerical value is 16. That's what the reinterpret_cast does.
How do you do that in C#? I have no idea; it would depend on the binding you're using, and you never specified what that was. Tao's long-since dead, and OpenTK looks to be heading that way too. But I did find out how to do this in OpenTK.
What you need to do is this:
gl4.glBindBuffer(GL_QUERY_BUFFER, bufferName.get(Buffer.QUERY));
for (int i = 0; i < viewports.length; ++i)
{
    gl4.glGetQueryObjectuiv(queryName.get(i), GL_QUERY_RESULT,
        (IntPtr)(i * Integer.BYTES));
}
You multiply by Integer.BYTES because the value is a byte offset into the buffer, not an integer index into an array of ints.
I'm a little frustrated. I'm about to use a Mac (OS X Mavericks) for coding.
My shader works fine under Windows 7 and Android.
When I run my app under OS X, I get the following compiler errors at runtime.
Exception in thread "LWJGL Application" java.lang.IllegalStateException: ERROR: 0:6: Initializer not allowed
ERROR: 0:6: Use of undeclared identifier 'light'
ERROR: 0:6: Use of undeclared identifier 'spec'
ERROR: 0:6: Use of undeclared identifier 'spec'
ERROR: 0:6: Use of undeclared identifier 'spec'
Fragment Shader Snippet:
String fragmentShader =
"#ifdef GL_ES\n" +
"precision mediump float;\n" +
"#endif\n" +
"uniform sampler2D cubeMap;\n"+
"uniform sampler2D normalMap;"+
"varying vec2 v_TexCoords;"+
"void main() {\n"+
" const vec4 ambientColor = vec4(1.0, 1.0, 1.0, 1.0);" +
" const vec4 diffuseColor = vec4(1.0, 1.0, 1.0, 1.0);"+
" const vec3 light = normalize(vec3(-0.5,-0.8,-1.0));"+
" vec3 camDir = normalize(vec3(v_TexCoords,0) - vec3(0.5,0.5,10.0));"+
" vec4 normalPxl = texture2D(normalMap,v_TexCoords);" +
" vec3 normal = normalize(normalPxl.xyz - vec3(0.5,0.5,0.5));"+
" vec3 reflectDir = reflect(camDir,normal);"+
" float spec = pow(max(0.0,dot(camDir,reflect(light,normal))),4.0);"+
" gl_FragColor = vec4(texture2D(cubeMap, reflectDir.xy * 0.5+0.5).xyz,normalPxl.w-0.1)"+
....
I've read through the OpenGL specs and found nothing problematic about using const or declaring some variables in main.
I appreciate your help.
I'll repost my comment here so that this can be closed out: it appears that the GLSL compiler on OS X didn't like this line:
const vec3 light = normalize(vec3(-0.5,-0.8,-1.0));
where a calculated value was being assigned to a constant. Apple's compiler may not optimize that out, where other compilers do. Replacing the normalize() call with the vector it produces should fix this.
If you think this is a bug, I recommend filing it at http://bugreport.apple.com , because I know Apple's OpenGL driver team does pay attention to cases like this and has fixed them in the past.
Sleepless around 4am, this was nagging at me, and I finally figured out why. Long story short: this behavior is correct, and you might like to use #version 120 for your desktop shaders. Explanation follows...
There are two GLSL language standards at play here, ES 100 and Desktop 110. Contrary to the version numbers, 100 was standardized after 110, and contains some (but not all) features of Desktop 120.
There are two semantic features at play here, one is initialization of a local variable marked const, and the other is const-ness of built-in functions applied to constant arguments. So we need to look in two different specs at two different features to understand the behavior.
Both 100 and 110 allow initialization of a const local ONLY with a constant expression (100 §4.3.2: "Initializers for const declarations must be a constant expression" and 110 §4.3.2: "Initializers for const declarations must be formed from literal values, other const variables (not including function call paramaters [sic]), or expressions of these."). This restriction is relaxed in GLSL 420.
100 and 110 differ in whether they allow built-in functions to participate in constant expressions. 110 does not (§4.3.3: "an expression whose operands are integral constant expressions, including constructors, but excluding function calls.") but 100 does (§5.10: "[...] a built-in function call whose arguments are all constant expressions, with the exception of the texture lookup functions."). This restriction is relaxed on the desktop in GLSL 120.
The relevant error from the Apple compiler was unfortunately clipped by the horizontal scroll in the OP, or I might have come to this conclusion sooner :(
"ERROR: 0:13: Initializer not allowed"
The only thing that "const" is buying you here is a compiler error if you try to reassign the variable. As a workaround, you can delete the "const" without consequence. You could also move those declarations outside of main.
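As a concrete sketch of that workaround: precomputing normalize(vec3(-0.5, -0.8, -1.0)) by hand (the vector's length is sqrt(1.89) ≈ 1.3748) turns the initializer into a plain literal, which is a valid const initializer in GLSL 110 and ES 100 alike:

// Precomputed result of normalize(vec3(-0.5, -0.8, -1.0));
// a literal vector is a constant expression in every GLSL version.
const vec3 light = vec3(-0.3637, -0.5819, -0.7274);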
In my fragment shader, I have the line
gl_FragColor = texture2D(texture, destinationTextureCoordinate) * destinationColor;
Where texture is a uniform of type sampler2D. In my code, I always set this to the value 0.
glUniform1i(_uniformTexture, 0);
Is it possible to skip the call to glUniform1i and just hardcode 0 in the fragment shader? I tried just replacing texture with 0 and it complained about it not being a valid type.
I'm not entirely sure what you're trying to achieve, but here are some thoughts:
sampler2D needs to sample a 2D texture, as the name indicates. It is a special GLSL variable, so the compiler is right to complain about 0 not being a valid type when fed into the first parameter of texture2D.
Unless you are using multiple textures, the second parameter to glUniform1i should always be 0 (default). You can skip this call if you are only using a single texture, but it's good practice to leave it in.
Why do you need a call to texture2D if you just want to pass the value 0? Surely you can just do gl_FragColor = destinationColor. This will color your fragment based on the color passed along from the vertex shader. I'm not sure why you are using a texture if you don't plan on sampling it (or so it seems).
EDIT: Code to send two textures to the fragment shader correctly.
//glClear();
// Attach Texture 0
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, _texture0);
glUniform1i(_uSampler0, 0);
// Attach Texture 1
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, _texture1);
glUniform1i(_uSampler1, 1);
//glDrawArrays();
You need a layout, like this:
#version 420
// #extension GL_ARB_shading_language_420pack : enable  // use for GLSL versions before 420
layout(binding = 0) uniform sampler2D diffuseTex;
That way you get a binding to texture unit 0 for the sampler uniform, which I think is what you want. But keep in mind that when you bind uniforms they get incremental values from zero, so be sure which unit you want to bind. The point of binding uniform variables via glUniform1i() is to ensure the correctness of your code.
Source: http://www.opengl.org/wiki/GLSL_Sampler#Version_4.20_binding
From section 5.8 of The OpenGL® ES Shading Language (v1.00, r17) [PDF] (emphasis mine):
The assignment operator stores the value of the rvalue-expression into the l-value and returns an r-value with the type and precision of the lvalue-expression. The lvalue-expression and rvalue-expression must have the same type. All desired type-conversions must be specified explicitly via a constructor.
So it sounds like doing something like this would not be legal:
vec3 my_vec3 = vec3(1, 2, 3);
vec4 my_vec4 = my_vec3;
And to make it legal the second line would have to be something like:
vec4 my_vec4 = vec4(my_vec3, 1); // add 4th component
I assumed that glVertexAttribPointer had similar requirements. That is, if you were assigning to a vec4 that the size parameter would have to be equal to 4.
Then I came across the GLES20TriangleRenderer sample for Android. Some relevant snippets:
attribute vec4 aPosition;
maPositionHandle = GLES20.glGetAttribLocation(mProgram, "aPosition");
GLES20.glVertexAttribPointer(maPositionHandle, 3, GLES20.GL_FLOAT, false,
    TRIANGLE_VERTICES_DATA_STRIDE_BYTES, mTriangleVertices);
So aPosition is a vec4, but the call to glVertexAttribPointer that's used to set it has a size of 3. Is this code correct, is GLES20TriangleRenderer relying on unspecified behavior, or is there something else I'm missing?
The size of the attribute data passed to the shader does not have to match the size of the attribute in that shader. You can pass 2 values (via glVertexAttribPointer) to an attribute defined as a vec4; the missing components are filled in with zero, except for the W component, which is filled in with 1. Similarly, you can pass 4 values to a vec2 attribute; the extra values are discarded.
So you can mix and match vertex attributes with uploaded values all you want.
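As a short sketch of that rule in the GLES20TriangleRenderer case above (names taken from the sample): the buffer supplies three floats per vertex, and the shader sees the W component filled in as 1.0:

attribute vec4 aPosition; // fed by glVertexAttribPointer(maPositionHandle, 3, ...)

void main() {
    // Only x, y, z come from the buffer; .w defaults to 1.0,
    // which is exactly what a homogeneous position needs.
    gl_Position = aPosition;
}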