Threejs array index error: Index expression must be constant - three.js

I get the following error with Three.js when I try to use an array with a non-constant index:
'[]' : Index expression must be constant
With the following fragment shader:
precision mediump float;
varying vec2 vUV;
uniform vec2 screenResolution;
vec4 colors[2];
void main(void) {
vec2 uv = gl_FragCoord.xy / screenResolution.xy;
colors[0] = vec4(0.0);
colors[1] = vec4(1.0);
int index = int(floor(uv.y * 1.9));
gl_FragColor = colors[index];
}
This error does not occur with Babylon.js.
I know it was not possible to use a non-constant index for arrays in earlier versions of GLSL ES, but it should be possible now, right?
How can I know the GLSL versions used by Three.js and Babylon.js?

Short answer
To use GLSL ES 3.0 in Three.js you have to create a WebGL 2.0 context.
Once you have checked that WebGL 2 is supported by the device, create a WebGLRenderer with a given webgl2 context:
var canvas = document.createElement( 'canvas' );
var context = canvas.getContext( 'webgl2' );
var renderer = new THREE.WebGLRenderer( { canvas: canvas, context: context } );
See the Three.js documentation: How to use WebGL2.
Long answer
The shader in the question is written in GLSL ES 1.0. There, the index of an array in a fragment shader has to be a constant-index-expression.
See OpenGL ES Shading Language 1.00 Specification - Appendix A; page 109:
5 Indexing of Arrays, Vectors and Matrices
Definition:
constant-index-expressions are a superset of constant-expressions. Constant-index-expressions can include loop indices as defined in Appendix A section 4.
The following are constant-index-expressions:
Constant expressions
Loop indices as defined in section 4
Expressions composed of both of the above
When used as an index, a constant-index-expression must have integral type.
Uniforms (excluding samplers)
In the vertex shader, support for all forms of array indexing is mandated. In the fragment shader, support for indexing is only mandated for constant-index-expressions.
This means that in a fragment shader the index of an array has to be either a constant expression or a loop index (or an expression composed of both).
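A common workaround in GLSL ES 1.00 is therefore to iterate over the array with a constant loop bound and pick the element whose loop index matches the computed index. A sketch, based on the shader from the question:
vec4 color = vec4(0.0);
for (int i = 0; i < 2; ++i) {
if (i == index)
color = colors[i]; // indexed by a loop index - allowed in GLSL ES 1.00
}
gl_FragColor = color;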
This changes in GLSL ES 3.0. See OpenGL ES Shading Language 3.00 - 12.30 Dynamic Indexing; page 142:
For GLSL ES 1.00, support of dynamic indexing of arrays, vectors and matrices was not mandated because it was not directly supported by some implementations. Software solutions (via program transforms) exist for a subset of cases but lead to poor performance. Should support for dynamic indexing be mandated for GLSL ES 3.00?
RESOLUTION: Mandate support for dynamic indexing of arrays except for sampler arrays, fragment output arrays and uniform block arrays.
A GLSL ES 3.0 shader has to be qualified by the version qualifier in the first line of the shader code:
#version 300 es
Furthermore, there are some syntactic differences, such as the qualifiers for the shader inputs and outputs, which are in and out respectively, instead of attribute and varying.
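Independent of Three.js, the fragment shader from the question would look roughly like this in GLSL ES 3.00 (a sketch; gl_FragColor no longer exists and is replaced by an explicitly declared output):
#version 300 es
precision mediump float;
in vec2 vUV; // was: varying
uniform vec2 screenResolution;
out vec4 fragColor; // replaces gl_FragColor
vec4 colors[2];
void main(void) {
vec2 uv = gl_FragCoord.xy / screenResolution.xy;
colors[0] = vec4(0.0);
colors[1] = vec4(1.0);
int index = int(floor(uv.y * 1.9));
fragColor = colors[index]; // dynamic indexing is allowed here
}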
To use GLSL ES 3.0 you have to create a WebGL 2.0 context.
See the Three.js documentation: How to use WebGL2.

Related

Set depth texture for Z-testing in OpenGL ES 2.0 or 3.0

Having a 16-bit uint texture in my C++ code, I would like to use it for z-testing in an OpenGL ES 3.0 app. How can I achieve this?
To give some context, I am making an AR app where virtual objects can be occluded by real objects. The depth texture of real environment is generated, but I can't figure out how to apply it.
In my app, I first use glTexImage2D to render the backdrop image from the camera feed, then I draw some virtual objects. I would like the objects to be transparent based on a depth texture. Ideally, the occlusion testing should not be binary but gradual, so that I can alpha blend the objects with the background near the occlusion edges.
I can pass and read the depth texture in the fragment shader, but not sure how to use it for z-testing instead of rendering.
Let's assume you have a depth texture uniform sampler2D u_depthmap and the internal format of the depth texture is a floating point format.
To read the texel from the texture where the current fragment is located, you have to know the size of the viewport (uniform vec2 u_viewport_size). gl_FragCoord contains the window-relative coordinate (x, y, z, 1/w) values for the fragment. So the texture coordinate for the depth map is calculated by:
vec2 map_uv = gl_FragCoord.xy / u_viewport_size;
The depth from the depth texture u_depthmap is given in range [0.0, 1.0], because of the internal floating point format. The depth of the fragment is contained in the gl_FragCoord.z, in range [0.0, 1.0], too.
That means that the depth of the map and the depth of the fragment can be calculated as follows:
uniform sampler2D u_depthmap;
uniform vec2 u_viewport_size;
void main()
{
vec2 map_uv = gl_FragCoord.xy / u_viewport_size;
float map_depth = texture(u_depthmap, map_uv).x;
float frag_depth = gl_FragCoord.z;
.....
}
Note, map_depth and frag_depth are both in the range [0.0, 1.0]. If they were both generated with the same projection (especially the same near and far planes), then they are comparable. This means you have to ensure that the shader generates the same depth values as the ones in the depth map, for the same point in the world. If this is not the case, then you have to linearize the depth values and calculate the view space Z-coordinate.
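A minimal sketch of a gradual (soft) occlusion test, building on the snippet above; u_depth_epsilon is a made-up tuning uniform that controls the width of the transition, and blending has to be enabled for the alpha to take effect:
#version 300 es
precision mediump float;
uniform sampler2D u_depthmap;
uniform vec2 u_viewport_size;
uniform float u_depth_epsilon; // hypothetical: width of the soft transition
out vec4 frag_color;
void main()
{
vec2 map_uv = gl_FragCoord.xy / u_viewport_size;
float map_depth = texture(u_depthmap, map_uv).x;
float frag_depth = gl_FragCoord.z;
// 1.0 where the virtual fragment is clearly in front of the real depth,
// 0.0 where it is clearly behind, with a smooth transition in between
float visibility = 1.0 - smoothstep(-u_depth_epsilon, u_depth_epsilon, frag_depth - map_depth);
vec4 object_color = vec4(1.0); // placeholder for the object's shading
frag_color = vec4(object_color.rgb, object_color.a * visibility);
}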

GLSL texture distortion

I want to create a simple heat distortion on my texture, but can't seem to figure out the steps required to accomplish this. So far, I've been able to change pixel colors the following way (using a pixel shader):
varying vec3 v_Position;
varying vec4 v_Color;
varying vec3 v_Normal;
varying vec2 v_TexCoordinate;
uniform sampler2D u_Texture;
void main()
{
vec4 col = texture2D(u_Texture, v_TexCoordinate);
col.r = 0.5;
gl_FragColor = col;
}
This is where I get lost. How can I modify pixel locations to distort the texture? Can I set any other properties besides gl_FragColor? Or do I have to create a plane with many vertices and distort the vertex locations? Is it possible to get 'neighbour' pixel color values? Thanks!
How can I modify pixel locations to distort the texture?
By modifying the location from which you sample the texture. That would be the second parameter of texture2D
vec4 col = texture2D(u_Texture, v_TexCoordinate);
^-------------^
Texture distortion goes here
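For example, a minimal heat-distortion sketch could offset the sample coordinate with a small animated sine wave; u_Time is an assumed uniform (time in seconds) fed by the application, not something from the question:
precision mediump float;
uniform sampler2D u_Texture;
uniform float u_Time; // hypothetical: time in seconds, set by the application
varying vec2 v_TexCoordinate;
void main()
{
// shift the lookup position by a small, animated offset
vec2 offset = 0.005 * vec2(sin(v_TexCoordinate.y * 40.0 + u_Time * 5.0), cos(v_TexCoordinate.x * 40.0 + u_Time * 5.0));
gl_FragColor = texture2D(u_Texture, v_TexCoordinate + offset);
}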
Is it possible to get 'neighbour' pixel color values?
Yes, and that's the proper way to do it. In a fragment shader the location you're writing to is immutable¹, so it all has to be done through the fetch location. Also note that you can sample from the same texture an arbitrary² number of times, which enables you to implement blurring³ effects.
¹: writes to freely determined image locations (scatter writes) are supported by OpenGL-4 class hardware, but scatter writes are extremely inefficient and should be avoided.
²: there's a practical limit of the total runtime of the shader, which may be limited by the OS, and also by the desired frame rate.
³: blurring effects should be implemented using so called separable filters for largely improved performance.

Shader position vec4 or vec3

I have read some tutorials about GLSL.
In some, the position attribute is a vec4, in others a vec3.
I know that the matrix operations need a vec4, but is it worth sending an additional element?
Isn't it better to send a vec3 and construct vec4(position, 1.0) later in the shader?
Less data in memory - will it be faster? Or should we pack the extra element to avoid the conversion?
Any tips what should be better?
layout(location = 0) in vec4 position;
MVP*position;
or
layout(location = 0) in vec3 position;
MVP*vec4(position,1.0);
For vertex attributes, this will not matter. The 4th component is automatically expanded to 1.0 when it is absent.
That is to say, if you pass a 3-dimensional vertex attribute pointer to a 4-dimensional vector, GL will fill in W with 1.0 for you. I always go this route; it avoids having to explicitly write vec4 (...) when doing matrix multiplication on the position, and it also avoids wasting memory storing the 4th component.
This works for 2D coordinates too, by the way. A 2D coordinate passed to a vec4 attribute becomes vec4 (x, y, 0.0, 1.0). The general rule is this: all missing components are replaced with 0.0 except for W, which is replaced with 1.0.
However, to people who are unaware of the behavior of GLSL in this circumstance, it can be confusing. I suppose this is why most tutorials never touch on this topic.
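For illustration, a sketch with the raw WebGL API (program and buffer names are made up): the buffer supplies three floats per vertex, the shader declares a vec4 position, and GL fills in the missing W with 1.0:
// positionBuffer holds tightly packed x, y, z floats for each vertex
var positionLoc = gl.getAttribLocation( program, 'position' );
gl.bindBuffer( gl.ARRAY_BUFFER, positionBuffer );
gl.enableVertexAttribArray( positionLoc );
gl.vertexAttribPointer( positionLoc, 3, gl.FLOAT, false, 0, 0 ); // size = 3, shader sees vec4
// W is automatically set to 1.0, so MVP * position works without writing vec4(position, 1.0)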

OpenGLES2.0 GL_POINT_SMOOTH

I'm using this code in a fragment shader to round the edges of a GL point
precision mediump float;
varying vec4 fragColor;
void main() {
gl_FragColor = fragColor;
if(length(gl_PointCoord-vec2(0.5)) > 0.5)
discard;
}
The problem is, the rounding is applied to every type of primitive drawn in the context, including triangle strips. Is there a way of adding an if statement to limit the rounding to only GL_POINTS?
I think you should just use a new shader for other primitives.
Two minor comments:
Did you consider using a small texture (containing a circle) instead of doing a calculation like this? It might be a bit faster but it obviously depends on the details.
Also try to avoid using the discard keyword. It might have a negative impact on performance. You could for example set the alpha value to 0 for those fragments that you currently discard.
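A sketch of that alpha-based variant of the point shader (it assumes blending is enabled, otherwise the transparent corners stay visible):
precision mediump float;
varying vec4 fragColor;
void main() {
float dist = length(gl_PointCoord - vec2(0.5));
float alpha = 1.0 - step(0.5, dist); // 1.0 inside the circle, 0.0 outside, no discard
gl_FragColor = vec4(fragColor.rgb, fragColor.a * alpha);
}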

Simple, IO concerning vertex shader in webGL

So, I'm trying to make sense of some code before I copy and paste it into my application. In OpenGL I'm seeing some variables typed as in and out. I see no such thing in the following code snippet. From what I understand, the vertex shader "magically" gets the input for the "in" typed variables via the program, which incidentally can have a fragment and a vertex shader attached to it (the program). Here's the code:
<script id="shader-vs" type="x-shader/x-vertex">
attribute vec2 aVertexPosition;
attribute vec2 aPlotPosition;
varying vec2 vPosition;
void main(void) {
gl_Position = vec4(aVertexPosition, 1.0, 1.0);
vPosition = aPlotPosition;
}
</script>
So, my question is: by attaching an appropriate program here, will aVertexPosition and aPlotPosition both be properly initialized, and furthermore, can vPosition be used somewhere else in my app, namely in the fragment shader?
Let me try and explain how the GPU pipeline I/O works:
Each Vertex has a set of attributes associated with it. Given your example code:
attribute vec2 aVertexPosition;
attribute vec2 aPlotPosition;
You are saying that each vertex has a 2D vertex position and plot position. If you added:
attribute vec3 vNormal;
Then every vertex would also have a normal. You could think of these as vertex "properties".
You must tell the GPU where to fetch the values for each of these attributes.
Each vertex attribute is assigned an attribute array index when the shader is compiled. You must enable each attribute array index that your shader requires
enableVertexAttribArray(int attributeIndex);
Once you've enabled it, you want to bind the attribute array to a vertex buffer.
bindBuffer(ARRAY_BUFFER, buffer);
You now describe how to fetch the attribute with this call:
vertexAttribPointer(int attributeIndex, int count, int type, bool normalized, int stride, int offset);
Given your example code:
vertexAttribPointer(0, 2, FLOAT, false, 16, 0); // vertex position
vertexAttribPointer(1, 2, FLOAT, false, 16, 8); // plot position
16, the stride, is the number of bytes between consecutive vertices. Each vertex consists of 4 floats and each float is 4 bytes wide. The offset is where the attribute starts within a vertex. The vertex position is at the 0th byte of the vertex and the plot position is at the 8th.
You can think of these as describing how to index into an array. The Nth vertex:
aVertexPosition.x = BUFFER[offset + N * stride + sizeof(FLOAT) * 0];
aVertexPosition.y = BUFFER[offset + N * stride + sizeof(FLOAT) * 1];
Vertex attributes are fetched automatically for you by the GPU and filled in before your vertex shader function is executed. Yes your vertex shader main is called once for every single vertex you draw.
The output of the vertex shader stage are the 'varying' variables. They are 'varying' because they are interpolated across the surface of the primitive (triangle) between vertices. You write the values out for each vertex but when the triangle is rasterized into fragments, each fragment gets the interpolated value of each varying variable. The fragment shader gets run for every fragment (pixel) that is "covered" by the draw call. If you draw a small triangle that covers a 4x4 patch of pixels then the fragment shader is executed 16 times.
Concisely:
Vertex Shader Inputs: Vertex Attributes & Uniform values (not covered)
Vertex Shader Outputs: Varying Values at each vertex
Fragment Shader Inputs: Varying Values for a given fragment (pixel)
Fragment Shader Outputs: Color & Depth values which are stored in the color and depth buffer.
Vertex Shader is run for every vertex in the draw call.
Fragment shader is run for every "covered" or "lit" fragment (pixel) in the draw call.
In other, newer versions of OpenGL, which have more shader stages than just vertex and fragment, in and out are used instead of attribute and varying.
attribute corresponds to in for a vertex shader.
varying corresponds to out for a vertex shader.
varying corresponds to in for a fragment shader.
(I haven't actually used in and out, so this description may be inaccurate. Please feel free to improve my answer by editing. I don't know how uniforms fit in.)
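For reference, a rough (untested) sketch of the same vertex shader written with in and out, as it would look in GLSL ES 3.00 / WebGL 2:
#version 300 es
in vec2 aVertexPosition; // was: attribute
in vec2 aPlotPosition; // was: attribute
out vec2 vPosition; // was: varying
void main(void) {
gl_Position = vec4(aVertexPosition, 1.0, 1.0);
vPosition = aPlotPosition;
}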
