I'm stuck trying to render some extremely basic stuff in WebGL. I've dumbed the rendering down to the most basic thing I can think of in order to find where the issue lies, but I can't even draw a simple square. The scene I really want to render is more complex, but as I said, I've simplified it to isolate the problem and still have no luck. I'm hoping someone can take a look and find whatever I'm missing, which I assume is a setup step somewhere.
The GL commands I'm running (as reported by WebGL Inspector, without errors) are:
clearColor(0,0,0,1)
clearDepth(1)
clear(COLOR_BUFFER_BIT | DEPTH_BUFFER_BIT)
useProgram([Program 2])
bindBuffer(ARRAY_BUFFER, [Buffer 5])
vertexAttribPointer(0, 2, FLOAT, false, 0, 0)
drawArrays(TRIANGLES, 0, 6)
The buffer that is being used there (Buffer 5) is setup as follows:
bufferData(ARRAY_BUFFER, [-1,-1,1,-1,-1,1,-1,1,1,-1,1,1], STATIC_DRAW)
And the program (Program 2) data is:
LINK_STATUS true
VALIDATE_STATUS false
DELETE_STATUS false
ACTIVE_UNIFORMS 0
ACTIVE_ATTRIBUTES 1
Vertex shader:
#ifdef GL_ES
precision highp float;
#endif
attribute vec2 aPosition;
void main(void) {
gl_Position = vec4(aPosition, 0, 1);
}
Fragment shader:
#ifdef GL_ES
precision highp float;
#endif
void main(void) {
gl_FragColor = vec4(1.0,0.0,0.0,1.0);
}
Other state I think could be relevant:
CULL_FACE false
CULL_FACE_MODE BACK
FRONT_FACE CCW
BLEND false
DEPTH_TEST false
VIEWPORT 0, 0 640 x 480
SCISSOR_TEST false
SCISSOR_BOX 0, 0 640 x 480
COLOR_WRITEMASK true,true,true,true
DEPTH_WRITEMASK true
STENCIL_WRITEMASK 0xffffffff
FRAMEBUFFER_BINDING null
What I expected to see from that setup/commands is a red quad taking up the whole clip space, but what I see is simply the cleared screen; the drawArrays call doesn't seem to be doing anything. Can anybody spot what I'm missing? Any tips on how to debug this would be very welcome too!
Here:
bufferData(ARRAY_BUFFER, [-1,-1,1,-1,-1,1,-1,1,1,-1,1,1], STATIC_DRAW)
replace it with:
bufferData(ARRAY_BUFFER, new Float32Array([-1,-1,1,-1,-1,1,-1,1,1,-1,1,1]), STATIC_DRAW)
WebGL can't infer the element type (integer, float, or byte) from a plain JavaScript array, so you have to pass a typed array. Example:
http://jsfiddle.net/9QxAz/
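The difference is visible in plain JavaScript: a Float32Array carries its element type with it, while a plain array does not. A minimal sketch:

```javascript
// A plain JS array carries no element type, so WebGL can't tell
// whether you mean floats, ints, or bytes.
const plain = [-1, -1, 1, -1, -1, 1, -1, 1, 1, -1, 1, 1];

// A Float32Array is a typed view: 12 elements, 4 bytes each.
const typed = new Float32Array(plain);

console.log(typed.length);            // 12
console.log(typed.BYTES_PER_ELEMENT); // 4
console.log(typed.byteLength);        // 48
```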
After reading #user1724911's fiddle, I found out what I missed was enabling the vertex attribute array, a stupidly simple mistake. I'm actually surprised I didn't get any warning from WebGL Inspector about this, but the solution was simply to add a call to enable that attribute:
gl.enableVertexAttribArray(program.attributes.aPosition);
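For reference, a minimal sketch of the fixed draw path (the function and variable names here are placeholders, not the asker's actual code):

```javascript
// Sketch of the fixed draw sequence. The crucial addition is
// enableVertexAttribArray: without it, the attribute reads a constant
// value instead of streaming data from the bound buffer.
function drawFullScreenQuad(gl, program, buffer, positionLoc) {
  gl.useProgram(program);
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  gl.enableVertexAttribArray(positionLoc); // the missing call
  gl.vertexAttribPointer(positionLoc, 2, gl.FLOAT, false, 0, 0);
  gl.drawArrays(gl.TRIANGLES, 0, 6);
}
```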
I can't get an SSBO working using Qt3D, and I'm unable to find a single example showing how it is supposed to be done.
Here are the main parts of my code:
Buffer init:
QByteArray ssboData;
ssboData.resize(1000);
ssboData.fill(0);
mySSBOBuffer = new Qt3DRender::QBuffer(this);
mySSBOBuffer->setUsage(Qt3DRender::QBuffer::DynamicRead);
mySSBOBuffer->setAccessType(Qt3DRender::QBuffer::AccessType::ReadWrite);
mySSBOBuffer->setData(ssboData);
QByteArray atomicCounterData;
atomicCounterData.resize(4);
atomicCounterData.fill(0);
myAtomicCounterBuffer = new Qt3DRender::QBuffer(this);
myAtomicCounterBuffer->setUsage(Qt3DRender::QBuffer::DynamicRead);
myAtomicCounterBuffer->setAccessType(Qt3DRender::QBuffer::AccessType::ReadWrite);
myAtomicCounterBuffer->setData(atomicCounterData);
Passing the buffers to the shader as QParameters:
myMaterial->addParameter(new Qt3DRender::QParameter("acCountFrags", QVariant::fromValue(myAtomicCounterBuffer->id()), myMaterial));
myMaterial->addParameter(new Qt3DRender::QParameter("ssboBuffer", QVariant::fromValue(mySSBOBuffer->id()), myMaterial));
I also tried
myMaterial->addParameter(new Qt3DRender::QParameter("acCountFrags", QVariant::fromValue(myAtomicCounterBuffer), myMaterial));
myMaterial->addParameter(new Qt3DRender::QParameter("ssboBuffer", QVariant::fromValue(mySSBOBuffer), myMaterial));
Fragment shader (the color has no use; it's just to check the shader is working):
#version 430 core
layout(binding = 0, offset = 0) uniform atomic_uint acCountFrags;
layout (std430) buffer ssboBuffer
{
uint fragIds[];
};
out vec4 color;
void main()
{
uint index = atomicCounterIncrement(acCountFrags);
fragIds[index] = 5;
color = vec4(0.2, 0.2, 0.2, 1.0);
}
In all of my tries, nothing is written to the buffers after rendering; they are still full of zeros, just as after init.
Does anybody know if I'm doing something wrong, or where I could find a working example?
Thank you.
The answer was a missing BufferCapture component in my FrameGraph. I found it thanks to the example given by HappyFeet in the comments.
As I'm learning more about WebGL2, I've come across the new syntax for setting an attribute location inside the shader: layout (location=0) in vec4 a_Position;. How does this compare to getting the attribute location the traditional way with gl.getAttribLocation(program, 'a_Position')? I assume it's faster? Any other reasons? Also, is it better to set locations to integers, or could you use strings as well?
There are 2 ideas conflated here:
Manually assigning locations to attributes
Assigning attribute locations in GLSL vs JavaScript
Why would you want to assign locations?
You don't have to look up the location then since you already know it
You want to make sure 2 or more shader programs use the same locations so that they can use the same attributes. This also means a single vertex array can be used with both shaders. If you don't assign the attribute location then the shaders may use different attributes for the same data. In other words shaderprogram1 might use attribute 3 for position and shaderprogram2 might use attribute 1 for position.
Why would you want to assign locations in GLSL vs doing it in JavaScript?
You can assign a location like this in GLSL ES 3.0 (not GLSL ES 1.0)
layout (location=0) in vec4 a_Position;
You can also assign a location in JavaScript like this
// **BEFORE** calling gl.linkProgram
gl.bindAttribLocation(program, 0, "a_Position");
Off the top of my head it seems like doing it in JavaScript is more DRY (Don't repeat yourself). In fact if you use consistent naming then you can likely set all locations for all shaders by just binding locations for your common names before calling gl.linkProgram. One other minor advantage to doing it in JavaScript is it's compatible with GLSL ES 1.0 and WebGL1.
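That idea can be sketched as a hypothetical helper (the helper name and the name-to-location table below are mine, not from any library):

```javascript
// Hypothetical helper: bind the same conventional attribute locations in
// every program BEFORE linking, so all shaders agree on the indices.
const commonLocations = {
  a_Position: 0,
  a_Normal: 1,
  a_Texcoord: 2,
};

function linkWithCommonLocations(gl, program) {
  for (const [name, loc] of Object.entries(commonLocations)) {
    gl.bindAttribLocation(program, loc, name);
  }
  // Bindings only take effect at link time.
  gl.linkProgram(program);
}
```

Any shader that simply doesn't declare one of these attributes ignores the corresponding binding, so the same helper works for every program.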
I have a feeling though it's more common to do it in GLSL. That seems bad to me because if you ever ran into a conflict you might have to edit 10s or 100s of shaders. For example you start with
layout (location=0) in vec4 a_Position;
layout (location=1) in vec2 a_Texcoord;
Later in another shader that doesn't have texcoord but has normals you do this
layout (location=0) in vec4 a_Position;
layout (location=1) in vec3 a_Normal;
Then sometime much later you add a shader that needs all 3
layout (location=0) in vec4 a_Position;
layout (location=1) in vec2 a_Texcoord;
layout (location=2) in vec3 a_Normal;
If you want to be able to use all 3 shaders with the same data you'd have to go edit the first 2 shaders. If you'd used the JavaScript way you wouldn't have to edit any shaders.
Of course another way would be to generate your shaders which is common. You could then either inject the locations
const someShader = `
layout (location=$POSITION_LOC) in vec4 a_Position;
layout (location=$TEXCOORD_LOC) in vec2 a_Texcoord;
layout (location=$NORMAL_LOC) in vec3 a_Normal;
...
`;
const substitutions = {
POSITION_LOC: 0,
NORMAL_LOC: 1,
TEXCOORD_LOC: 2,
};
const subRE = /\$([A-Z0-9_]+)/ig;
function replaceStuff(subs, str) {
return str.replace(subRE, (match, group0) => {
return subs[group0];
});
}
...
gl.shaderSource(shader, replaceStuff(substitutions, someShader));
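For example, run on a single line (replaceStuff is repeated here so the snippet is self-contained):

```javascript
// Same substitution helper as above, repeated so this runs on its own.
const subRE = /\$([A-Z0-9_]+)/ig;
function replaceStuff(subs, str) {
  return str.replace(subRE, (match, group0) => subs[group0]);
}

const out = replaceStuff(
  { POSITION_LOC: 0 },
  'layout (location=$POSITION_LOC) in vec4 a_Position;'
);
console.log(out);  // layout (location=0) in vec4 a_Position;
```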
or inject preprocessor macros to define them.
const commonHeader = `
#define A_POSITION_LOC 0
#define A_NORMAL_LOC 1
#define A_TEXCOORD_LOC 2
`;
const someShader = `
layout (location=A_POSITION_LOC) in vec4 a_Position;
layout (location=A_TEXCOORD_LOC) in vec2 a_Texcoord;
layout (location=A_NORMAL_LOC) in vec3 a_Normal;
...
`;
gl.shaderSource(shader, commonHeader + someShader);
Is it faster? Yes, but probably not by much. Not calling gl.getAttribLocation is faster than calling it, but you should generally only call gl.getAttribLocation at init time, so it won't affect rendering speed, and you generally only use the locations at init time when setting up vertex arrays.
is it better to set locations to integers or would you be able to use strings as well?
Locations are integers: you're manually choosing which attribute index to use. As above, you can use substitutions, shader generation, preprocessor macros, etc. to turn some kind of string into an integer, but ultimately they need to be integers, and they need to be in range of the number of attributes your GPU supports. You can't pick an arbitrary integer like 9127; only 0 to N - 1, where N is the value returned by gl.getParameter(gl.MAX_VERTEX_ATTRIBS). Note that N will always be >= 16 in WebGL2.
I am trying to do progressive rendering, using the previous render as a texture for the next one.
EDIT 1: As suggested in the comments, I updated my version of THREE.js to the latest available and kept my old code; the result is the same (even if the vertical positions of objects flipped), and my problem still remains. Please do consider my update and my plea for help.
Original message:
My fragment shader should only increment the color on the green channel with 0.1, like this:
#ifdef GL_ES
precision highp float;
#endif
uniform sampler2D sampa;
varying vec2 tc;
void main(void)
{
vec4 c = texture2D(sampa, tc);
vec4 t = vec4(c.x, c.y + .1, c.z, 1.);
gl_FragColor = t;
}
My composer is like this:
composer.addPass(renderModel);
composer.addPass(screenPass);
composer.addPass(feedPass);
Where renderModel is a RenderPass, rendering my scene in which I have a plane and a cube.
and screenPass and feedPass are identical, with the only difference being that one renders to the screen and the other renders into the writeBuffer (composer.renderTarget1).
var renderModel = new THREE.RenderPass(scene, camera);
renderModel.clear = false;
screenPass = new THREE.ShaderPass(shader2, 'sampa');
screenPass.renderToScreen = true;
screenPass.clear = false;
screenPass.needsSwap = false;
feedPass = new THREE.ShaderPass(shader2, 'sampa');
feedPass.renderToScreen = false;
feedPass.clear = false;
feedPass.needsSwap = false;
And in the animation loop, I have something like this:
composer.render();
if(step % 250 == 0)
{
newmat = new THREE.MeshBasicMaterial(
{
map : composer.renderTarget1
});
plane.material = newmat;
}
step++;
requestAnimationFrame(animate);
The part with step % 250 is to delay the change of material.
Anyway, the problem is that the plane disappears when that happens, even though it renders correctly for the first 250 steps. I guess it is still there but with no texture data, so it is not actually rendered.
I know that EffectComposer is not part of the library, and it is found only in examples, and might not be supported, but I would really do with any advice on this situation, and any answer will be greatly appreciated.
As for any other info about the problem, or some other code that might help, I am very willing to share.
Could you point out what I am doing wrong?
I thank you for your kindness.
It seems the solution to this problem is to use two render targets and switch between them on every step. My limited knowledge stops me from understanding exactly why, but this is exactly how EffectComposer works.
For those who might have this problem and need a solution: try setting needsSwap to true on your shader passes.
And if you do not use EffectComposer, then remember to use two render targets.
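The ping-pong pattern itself is tiny; here it is sketched with plain objects standing in for the two render targets (all names are placeholders):

```javascript
// Two targets: each frame reads from one, writes into the other, then swaps,
// so this frame's output becomes next frame's input.
let readTarget  = { name: 'rt1' }; // stands in for a WebGLRenderTarget
let writeTarget = { name: 'rt2' };

const frames = [];
function renderStep() {
  // Use readTarget as the input texture, draw into writeTarget...
  frames.push({ read: readTarget.name, write: writeTarget.name });
  // ...then swap the roles for the next frame.
  [readTarget, writeTarget] = [writeTarget, readTarget];
}

renderStep();
renderStep();
console.log(frames);
// [ { read: 'rt1', write: 'rt2' }, { read: 'rt2', write: 'rt1' } ]
```

Reading and writing the same target in one pass is what breaks; the swap guarantees you never sample the texture you are currently rendering into.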
I'm trying to check inside a GLSL shader whether my vec4 is NULL. I need this for several reasons, mostly to stay compatible with specific graphics cards, since some of them pass a previous color in gl_FragColor and some don't (providing a null vec4 that needs to be overwritten).
Well, on a fairly new Mac, someone got this error:
java.lang.RuntimeException: Error creating shader: ERROR: 0:107: '==' does not operate on 'vec4' and 'int'
ERROR: 0:208: '!=' does not operate on 'mat3' and 'int'
This is my code in the fragment shader:
void main()
{
if(gl_FragColor == 0) gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0); //Line 107
vec4 newColor = vec4(0.0, 0.0, 0.0, 0.0);
[...]
if(inverseViewMatrix != 0) //Line 208
{
[do stuff with it; though I can replace this NULL check with a boolean]
}
[...]
gl_FragColor.rgb = mix(gl_FragColor.rgb, newColor.rgb, newColor.a);
gl_FragColor.a += newColor.a;
}
As you can see, I do a 0/NULL check on gl_FragColor at the start, because some graphics cards pass valuable information there, but some don't. Now, on that particular Mac, it didn't work. I did some research but couldn't find any information on how to do a proper NULL check in GLSL. Is there even one, or do I really need to write separate shaders here?
All variables meant for reading, i.e. input variables, always deliver sensible values. Being an output variable, gl_FragColor is not one of them!
In this code
void main()
{
if(gl_FragColor == 0) gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0); //Line 107
vec4 newColor = vec4(0.0, 0.0, 0.0, 0.0);
The very first thing you do is read from gl_FragColor. The GLSL specification clearly states that the value of an output variable such as gl_FragColor is undefined when the fragment shader stage is entered (point 1):
The value of an output variable will be undefined in any of the three following cases:
At the beginning of execution.
At each synchronization point, unless
the value was well-defined after the previous synchronization
point and was not written by any invocation since, or
the value was written by exactly one shader invocation since the
previous synchronization point, or
the value was written by multiple shader invocations since the
previous synchronization point, and the last write performed
by all such invocations wrote the same value.
When read by a shader invocation, if
the value was undefined at the previous synchronization
point and has not been written by the same shader invocation since, or
the output variable is written to by any other shader
invocation between the previous and next synchronization points,
even if that assignment occurs in code following the read.
Only after an element of an output variable has been written to for the first time is its value defined. So the whole thing you do there makes no sense. That it "didn't work" is completely permissible and an error on your end.
You're invoking undefined behaviour, and technically it would be permissible for your computer to become sentient, chase you down the street and erase all of your data as an alternative reaction to this.
In GLSL a vec4 is a regular datatype, just like int. It's not some sort of pointer to an array that could be a null pointer. At best it has some default value that doesn't get overwritten by a call to glUniform.
Variables in GLSL shaders are always defined (otherwise, you'll get a linker error). If you don't supply those values with data (by not loading the appropriate uniform, or binding attributes to in or attribute variables), the values in those variables will be undefined (i.e., garbage), but present.
Even if you can't have null values, you can test undefined variables. This is a trick that I use to debug my shaders:
...
/* First we test for lower range */
if(suspect_variable.x < 0.5) {
outColour = vec4(0,1,0,0); /* Green if in lower range*/
} else if(suspect_variable.x >= 0.5) { /*Then for the higher range */
outColour = vec4(1,0,0,0); /* Red if in higher range */
} else {
/* Now we have tested for all real values.
If we end up here we know that the value must be undefined */
outColour = vec4(0,0,1,0); /* Blue if it's undefined */
}
You might ask: what could make a variable undefined? Out-of-range access of an array, for example:
const int numberOfLights = 2;
uniform vec3 lightColour[numberOfLights];
...
for(int i = 0; i < 100; i++) {
/* When i is bigger than 1, lightColour[i] is undefined */
suspect_variable = suspect_variable * lightColour[i];
}
It is a simple and easy trick to use when you do not have access to real debugging tools.
In my fragment shader, I have the line
gl_FragColor = texture2D(texture, destinationTextureCoordinate) * destinationColor;
where texture is a uniform of type sampler2D. In my code, I always set this to the value 0:
glUniform1i(_uniformTexture, 0);
Is it possible to skip the call to glUniform1i and just hardcode 0 in the fragment shader? I tried just replacing texture with 0, and it complained about 0 not being a valid type.
I'm not entirely sure what you're trying to achieve, but here are some thoughts:
sampler2D needs to sample a 2D texture, as the name indicates. It is a special GLSL variable, so the compiler is right to complain about 0 not being a valid type for the first parameter of texture2D.
Unless you are using multiple textures, the second parameter to glUniform1i should always be 0 (texture unit 0, which is also the uniform's default value). You can skip this call if you are only using a single texture, but it's good practice to leave it in.
Why do you need a call to texture2D if you just want to pass the value 0? Surely you can just do gl_FragColor = destinationColor. This will color your fragment based on the vertex shader's color output. I'm not sure why you are implementing a texture if you don't plan on using it (or so it seems).
EDIT: Code to send two textures to the fragment shader correctly.
//glClear();
// Attach Texture 0
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, _texture0);
glUniform1i(_uSampler0, 0);
// Attach Texture 1
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, _texture1);
glUniform1i(_uSampler1, 1);
//glDrawArrays();
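For comparison, the same two-texture setup in WebGL looks like this (a sketch; the variable names are assumptions, not from the question):

```javascript
// A sampler uniform holds a texture *unit* index, not the texture itself:
// bind each texture to a unit, then point the sampler at that unit number.
function bindTwoTextures(gl, tex0, tex1, sampler0Loc, sampler1Loc) {
  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, tex0);
  gl.uniform1i(sampler0Loc, 0); // sampler 0 reads from unit 0

  gl.activeTexture(gl.TEXTURE1);
  gl.bindTexture(gl.TEXTURE_2D, tex1);
  gl.uniform1i(sampler1Loc, 1); // sampler 1 reads from unit 1
}
```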
You need a layout, like this:
#version 420
// #extension GL_ARB_shading_language_420pack : enable  (needed for GLSL versions before 4.20)
layout(binding=0) uniform sampler2D diffuseTex;
That way the sampler uniform is bound to texture unit 0, which I think is what you want. But keep in mind that such bindings count up from zero, so be sure which unit you want to bind to. Binding sampler uniforms explicitly via the glUniform1i() function is still useful to ensure the correctness of your code.
Source: http://www.opengl.org/wiki/GLSL_Sampler#Version_4.20_binding