three.js - using more than one samplerCube - opengl-es

I am writing a shader that needs to use one sampler cube for reflection and another for refraction. The water is intended to reflect from a skybox, but refract using a container for the water. I create a uniform in the fragment shader for each sampler cube:
uniform samplerCube reflectMap;
uniform samplerCube refractMap;
I create two texture cubes, tRefract and tReflect, using THREE.ImageUtils.loadTextureCube.
I associate the uniforms with the custom shader, setting their values:
refractMap: {type: 't', value: tRefract}
reflectMap: {type: 't', value: tReflect}
In the fragment shader I sample the textures and mix the colors:
vec4 reflectCubeColor = textureCube(reflectMap, vec3(-vReflect.x, vReflect.yz));
vec4 refractCubeColor = textureCube(refractMap, vec3(-vRefract.x, vRefract.yz));
vec3 c = mix(gl_FragColor.xyz, reflectCubeColor.xyz, reflectivity);
gl_FragColor = vec4(mix(refractCubeColor.xyz, c, opacity), 1.0);
(vReflect and vRefract have been computed in the vertex shader)
What happens, however, is that only the texture associated with the reflection is used: it appears on the skybox, it is reflected in the water, and it is refracted in the water.
Thinking this might be a matter of not switching texture units, as one would in "regular" OpenGL, I dug around in the three.js code to see if there was a way to specify the texture unit. It appears that the built-in normal shader might be doing just that with something like this:
refractMap: {type: 't', value: 6, texture: tRefract}
reflectMap: {type: 't', value: 7, texture: tReflect}
where value specifies the texture unit. I didn't see any API, however, for saying which unit should be the active unit. The order in which the textures are loaded or associated with uniforms doesn't seem to matter, and changing the values as above seems to have no effect.
So can I make use of more than one samplerCube in my fragment shader?
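For reference, here is a minimal sketch of the setup described above, assuming a three.js version (r51+) where texture uniforms take the texture object as their value; skyUrls, poolUrls, and the shader script IDs are illustrative placeholders:
// Minimal sketch of the setup described in the question.
var tReflect = THREE.ImageUtils.loadTextureCube(skyUrls);  // skybox faces
var tRefract = THREE.ImageUtils.loadTextureCube(poolUrls); // water-container faces
var waterMaterial = new THREE.ShaderMaterial({
    uniforms: {
        reflectMap:   { type: 't', value: tReflect },
        refractMap:   { type: 't', value: tRefract },
        reflectivity: { type: 'f', value: 0.8 },
        opacity:      { type: 'f', value: 0.6 }
    },
    vertexShader: document.getElementById('waterVS').textContent,
    fragmentShader: document.getElementById('waterFS').textContent
});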

Related

How are layout qualifiers better than getAttribLocation in WebGL2?

As I'm learning more about WebGL2, I've come across this new syntax within shaders where you set location inside of shaders via: layout (location=0) in vec4 a_Position;. How does this compare to getting the attribute location with the traditional gl.getAttribLocation('a_Position');. I assume it's faster? Any other reasons? Also, is it better to set locations to integers or would you be able to use strings as well?
There are two ideas conflated here:
Manually assigning locations to attributes
Assigning attribute locations in GLSL vs JavaScript
Why would you want to assign locations?
You don't have to look up the location then since you already know it
You want to make sure 2 or more shader programs use the same locations so that they can use the same attributes. This also means a single vertex array can be used with both shaders. If you don't assign the attribute location then the shaders may use different attributes for the same data. In other words shaderprogram1 might use attribute 3 for position and shaderprogram2 might use attribute 1 for position.
Why would you want to assign locations in GLSL vs doing it in JavaScript?
You can assign a location like this in GLSL ES 3.0 (not GLSL ES 1.0)
layout (location=0) in vec4 a_Position;
You can also assign a location in JavaScript like this
// **BEFORE** calling gl.linkProgram
gl.bindAttribLocation(program, 0, "a_Position");
Off the top of my head it seems like doing it in JavaScript is more DRY (Don't Repeat Yourself). In fact, if you use consistent naming then you can likely set all locations for all shaders just by binding locations for your common names before calling gl.linkProgram, as in the sketch below. One other minor advantage of doing it in JavaScript is that it's compatible with GLSL ES 1.0 and WebGL1.
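A sketch of that idea; the attribute names are assumptions about your own naming convention:
// Bind the same locations for every program, assuming all shaders use
// these attribute names. Do this for each program and a single vertex
// array layout will work with all of them.
const commonAttribs = ['a_Position', 'a_Texcoord', 'a_Normal'];
function bindCommonLocations(gl, program) {
  commonAttribs.forEach((name, loc) => {
    gl.bindAttribLocation(program, loc, name);
  });
}
// ...attach shaders, then, BEFORE linking:
bindCommonLocations(gl, program);
gl.linkProgram(program);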
I have a feeling, though, that it's more common to do it in GLSL. That seems bad to me because if you ever ran into a conflict you might have to edit tens or hundreds of shaders. For example, you start with
layout (location=0) in vec4 a_Position;
layout (location=1) in vec2 a_Texcoord;
Later in another shader that doesn't have texcoord but has normals you do this
layout (location=0) in vec4 a_Position;
layout (location=1) in vec3 a_Normal;
Then sometime much later you add a shader that needs all 3
layout (location=0) in vec4 a_Position;
layout (location=1) in vec2 a_Texcoord;
layout (location=2) in vec3 a_Normal;
If you want to be able to use all 3 shaders with the same data you'd have to go edit the first 2 shaders. If you'd used the JavaScript way you wouldn't have to edit any shaders.
Of course, another way would be to generate your shaders, which is common. You could then either inject the locations
const someShader = `
layout (location=$POSITION_LOC) in vec4 a_Position;
layout (location=$TEXCOORD_LOC) in vec2 a_Texcoord;
layout (location=$NORMAL_LOC) in vec3 a_Normal;
...
`;
const substitutions = {
  POSITION_LOC: 0,
  TEXCOORD_LOC: 1,
  NORMAL_LOC: 2,
};
const subRE = /\$([A-Z0-9_]+)/ig;
// Replace each $NAME placeholder with its numeric location.
function replaceStuff(subs, str) {
  return str.replace(subRE, (match, group0) => {
    return subs[group0];
  });
}
...
gl.shaderSource(shader, replaceStuff(substitutions, someShader));
or inject preprocessor macros to define them.
const commonHeader = `
#define A_POSITION_LOC 0
#define A_TEXCOORD_LOC 1
#define A_NORMAL_LOC 2
`;
const someShader = `
layout (location=A_POSITION_LOC) in vec4 a_Position;
layout (location=A_TEXCOORD_LOC) in vec2 a_Texcoord;
layout (location=A_NORMAL_LOC) in vec3 a_Normal;
...
`;
gl.shaderSource(shader, commonHeader + someShader);
Is it faster? Yes, but probably not by much. Not calling gl.getAttribLocation is faster than calling it, but you should generally only call gl.getAttribLocation at init time, so it won't affect rendering speed; you generally only use the locations at init time anyway, when setting up vertex arrays.
is it better to set locations to integers or would you be able to use strings as well?
Locations are integers. You're manually choosing which attribute index to use. As above, you can use substitutions, shader generation, preprocessor macros, etc., to convert some kind of string into an integer, but they need to be integers ultimately, and they need to be in range of the number of attributes your GPU supports. You can't pick an arbitrary integer like 9127; only 0 to N - 1, where N is the value returned by gl.getParameter(gl.MAX_VERTEX_ATTRIBS). Note that N will always be >= 16 in WebGL2.
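For example, you can query the limit at runtime:
// The number of attribute locations available on this implementation.
const gl = document.createElement('canvas').getContext('webgl2');
console.log(gl.getParameter(gl.MAX_VERTEX_ATTRIBS)); // always >= 16 on WebGL2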

Threejs - Applying simple texture on a shader material

Using Three.js (r67) with a WebGL renderer, I can't seem to get a plane with a shader material to display its texture. No matter what I do, the material just stays black.
My code at the moment looks quite basic:
var grassT = new Three.Texture(grass); // grass is an already loaded image.
grassT.wrapS = grassT.wrapT = Three.ClampToEdgeWrapping;
grassT.flipY = false;
grassT.minFilter = Three.NearestFilter;
grassT.magFilter = Three.NearestFilter;
grassT.needsUpdate = true;
var terrainUniforms = {
    grassTexture: { type: "t", value: grassT },
};
Then I just have this relevant part in the vertexShader:
vUv = uv;
And on the fragmentShader side :
gl_FragColor = texture2D(grassTexture, vUv);
This results in:
Black material.
No error in console.
gl_FragColor value is always (0.0, 0.0, 0.0, 1.0).
What I tried / checked:
Everything works fine if I just apply custom plain colors.
All is ok if I use vertexColors with plain colors too.
My texture width / height is indeed a power of 2.
The image is on the same server as the code.
Tested others images with same result.
The image is actually loading in the browser debugger.
UVs for the mesh are correct.
Played around with wrapT, wrapS, minFilter, magFilter
Adapted the mesh size so the texture has a 1:1 ratio.
Preloaded the image with the requirejs image plugin and created the texture with new THREE.Texture() instead of using THREE.ImageUtils().
Played around with needsUpdate = true;
Tried to add defines['USE_MAP'] during material instantiation.
Tried to add material.dynamic = true.
I have a correct rendering loop (interraction with terrain is working).
What I still wonder:
It's a multiplayer game using a custom port with express + socket.io. Am I hit by any WebGL security policy?
I have no lights logic at the moment, is that a problem ?
Maybe the shader material needs other "defines" at instantiation?
I guess I'm overlooking something simpler, this is why I'm asking...
Thanks.
I was applying various effects on the same shader. I have a custom API that merges the uniforms of all the different effects using Three.UniformsUtils.merge(). However, this function calls the clone() method on the texture, which resets needsUpdate to false before the texture reaches the renderer.
It appears that you should set the texture's needsUpdate property to true at the material level. If you set it at the texture level and the uniform later gets merged, and therefore cloned, the clone loses its needsUpdate flag.
The issue is also detailed here: https://github.com/mrdoob/three.js/issues/3393
In my case the following wasn't working (grassT is my texture):
grassT.needsUpdate = true
while the following works perfectly later on in the code:
material.uniforms.grassTexture.value.needsUpdate = true;
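In other words, the failure mode looks roughly like this (effectAUniforms and effectBUniforms are illustrative names):
// merge() clones the uniforms, and the cloned texture comes back with
// needsUpdate reset to false.
var uniforms = THREE.UniformsUtils.merge([effectAUniforms, effectBUniforms]);
// So set the flag on the uniform the renderer will actually see:
uniforms.grassTexture.value.needsUpdate = true;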
Image loading is asynchronous. Most likely, you are rendering your scene before the texture image loads.
You must set the texture.needsUpdate flag to true after the image loads. three.js has a utility that will do that for you:
var texture = THREE.ImageUtils.loadTexture( "texture.jpg" );
Once rendered, the renderer sets the texture.needsUpdate flag back to false.
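For example, a minimal sketch using the loader's onLoad callback (r68-era API), where render() stands for your own draw function:
// loadTexture(url, mapping, onLoad) -- re-render once the image is in.
var texture = THREE.ImageUtils.loadTexture('texture.jpg', undefined, function () {
    render(); // the loader has already set texture.needsUpdate
});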
three.js r.68

Lights and shade with a custom fragmentShader

I'm creating a sphere and adding distortion to it; that part is working fine.
When I look at the wireframe the distortion is clearly there, but with the wireframe turned off there is no shading and the distortion isn't visible.
What I'm looking for is what to place in my custom fragmentShader.
I used this
// calc the dot product and clamp
// 0 -> 1 rather than -1 -> 1
vec3 light = vec3(0.5,0.2,1.0);
// ensure it's normalized
light = normalize(light);
// calculate the dot product of
// the light to the vertex normal
float dProd = max(0.0, dot(vNormal, light));
// feed into our frag colour
gl_FragColor = vec4(dProd, dProd, dProd, 1.0);
But that just creates a very ugly false light.
Any ideas anybody?
Thanks in advance,
Wezy
If you want to use Three.js' lights (and I can't think of a reason why not), you need to include the corresponding shader chunks:
Take a look at how the WebGLRenderer composes its shaders in THREE.ShaderLib
Pick a material there that is close to what you want and copy its definition (uniforms and both shader codes) to your code, renaming it to something custom
Delete chunks you don't need, and add your custom code to appropriate places
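A rough sketch of that approach, starting from the built-in Phong shader (r6x-era API; myVertexShaderWithDistortion stands for your copy of phong.vertexShader plus the distortion code):
// Clone the built-in Phong definition and swap in custom shader code.
var phong = THREE.ShaderLib['phong'];
var material = new THREE.ShaderMaterial({
    uniforms: THREE.UniformsUtils.clone(phong.uniforms),
    vertexShader: myVertexShaderWithDistortion,
    fragmentShader: phong.fragmentShader,
    lights: true // tells the renderer to fill in the light uniforms
});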

Hardcoding GLSL texture sampler2D

In my fragment shader, I have the line
gl_FragColor = texture2D(texture, destinationTextureCoordinate) * destinationColor;
where texture is a uniform of type sampler2D. In my code, I always set this to the value 0.
glUniform1i(_uniformTexture, 0);
Is it possible to skip the call to glUniform1i and just hardcode 0 in the fragment shader? I tried just replacing texture with 0 and it complained about it not being a valid type.
I'm not entirely sure what you're trying to achieve, but here are some thoughts:
sampler2D needs to sample a 2D texture, as the name indicates. It is a special GLSL type, so the compiler is right to complain that 0 is not a valid type for the first parameter of texture2D.
Unless you are using multiple textures, the second parameter to glUniform1i should always be 0 (the default). You can skip this call if you are only using a single texture, but it's good practice to leave it in.
Why do you need a call to texture2D if you just want to pass the value 0? Surely you can just do gl_FragColor = destinationColor. This will color your fragment based on the color varying passed along by the vertex shader. I'm not sure why you are implementing a texture if you don't plan on using it (or so it seems).
EDIT: Code to send two textures to the fragment shader correctly.
//glClear();
// Attach Texture 0
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, _texture0);
glUniform1i(_uSampler0, 0);
// Attach Texture 1
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, _texture1);
glUniform1i(_uSampler1, 1);
//glDrawArrays();
You need a layout, like this:
#version 420
// For GLSL versions before 4.20, enable the extension instead:
// #extension GL_ARB_shading_language_420pack : enable
layout(binding=0) uniform sampler2D diffuseTex;
That way you get a binding to sampler binding point 0, which I think is what you want. Keep in mind that binding points count up from zero, so be sure which one you want for each sampler. Assigning units via the glUniform1i() function serves the same purpose: it makes the sampler-to-unit mapping in your code explicit and correct.
Source: http://www.opengl.org/wiki/GLSL_Sampler#Version_4.20_binding

OpenGL ES 2.x: Bind both `GL_TEXTURE_2D` and `GL_TEXTURE_CUBE_MAP` in the same texture image unit?

What happens if you bind (different textures) to both GL_TEXTURE_2D and GL_TEXTURE_CUBE_MAP in the same texture image unit?
For example, suppose I bind one texture to GL_TEXTURE0's GL_TEXTURE_2D target and another texture to the same texture unit's GL_TEXTURE_CUBE_MAP target. Can I then have two uniform variables, one a sampler2D and the other a samplerCube, and set both to 0 (to refer to GL_TEXTURE0)?
I suspect the answer is "no" (or that the result is undefined) but I haven't found anything in the spec that specifically prohibits using multiple texture targets in the same texture image unit.
I haven't found anything that says whether you can bind a 2D texture and a cube map texture in the same texture unit, but I guess this is perfectly possible. It makes sense to allow it, since all texture modification functions require you to specify the texture target to operate on anyway.
But the OpenGL ES 2 spec explicitly disallows using both at the same time in a shader, as section 2.10 says:
It is not allowed to have variables of different sampler types
pointing to the same texture image unit within a program object. This
situation can only be detected at the next rendering command issued,
and an INVALID_OPERATION error will then be generated.
So you cannot use both a sampler2D and a samplerCube referring to the same texture unit as a way around your implementation's texture unit limits.
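For contrast, the legal pattern is to give each sampler type its own unit; tex2d, texCube, u_diffuse, and u_envMap are illustrative names:
// Legal: each sampler type gets its own texture unit.
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, tex2d);
gl.uniform1i(u_diffuse, 0); // sampler2D -> unit 0
gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_CUBE_MAP, texCube);
gl.uniform1i(u_envMap, 1); // samplerCube -> unit 1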
In Chrome, I get an error when trying such an operation. Note that these snippets bind the same texture object to both targets, which is itself invalid in WebGL: a texture's target is fixed by its first bind, so the second bindTexture call fails.
// Case 1: bind the same texture to both targets -- second bind fails.
var gl = document.getElementById("canv00").getContext("webgl");
const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.bindTexture(gl.TEXTURE_CUBE_MAP, texture);
gl.getParameter(gl.TEXTURE_BINDING_2D);       // texture
gl.getParameter(gl.TEXTURE_BINDING_CUBE_MAP); // null
gl.getError();                                // 1282 (INVALID_OPERATION)
// Case 2: 2D target only -- no error.
var gl = document.getElementById("canv00").getContext("webgl");
const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
// gl.bindTexture(gl.TEXTURE_CUBE_MAP, texture)
gl.getParameter(gl.TEXTURE_BINDING_2D);       // texture
gl.getParameter(gl.TEXTURE_BINDING_CUBE_MAP); // null
gl.getError();                                // no error
// Case 3: cube map target only -- no error.
var gl = document.getElementById("canv00").getContext("webgl");
const texture = gl.createTexture();
// gl.bindTexture(gl.TEXTURE_2D, texture)
gl.bindTexture(gl.TEXTURE_CUBE_MAP, texture);
gl.getParameter(gl.TEXTURE_BINDING_2D);       // null
gl.getParameter(gl.TEXTURE_BINDING_CUBE_MAP); // texture
gl.getError();                                // no error
