Is there any way to manually specify view and model matrices?
I know Three.js is not supposed to be used this way, but I am currently developing educational materials to teach a typical computer graphics pipeline, and I would like to explicitly supply the model/view/projection matrices to a shader. While I understood from this issue which matrices in Three.js are the model/view/projection matrices, I haven't been able to find a good way to manually control them.
So far, I have been able to specify the projection matrix using camera.projectionMatrix.makePerspective() and the model matrix using applyMatrix(). However, applyMatrix() is not ideal from an educational point of view, because it internally decomposes the matrix into position, quaternion and scale, and the model matrix supplied to the shader is presumably reconstructed from those values.
One possible solution is to use ShaderMaterial() and specify all three matrices as uniforms. However, I would rather avoid that, because the built-in matrices are still passed to the shader implicitly, and the name "material" might confuse students.
Does anybody have suggestions for doing this kind of thing in Three.js?
However, I would rather avoid that, because the built-in matrices are still passed to the shader implicitly, and the name "material" might confuse students.
I'm not sure this is the best approach. A Material in three.js is indeed more than a shader. It consists of two shaders, but of other state as well: for example, setting myMaterial.transparent = true; triggers a completely different flow in WebGLRenderer, which in turn issues different WebGL calls. Setting the blending mode, for example, is not something a shader does.
It would probably be worth explaining this abstraction, rather than renaming it.
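To make that concrete for students, here is a small illustrative sketch (not exhaustive) of Material properties that change renderer-side WebGL state rather than shader code:

var myMaterial = new THREE.ShaderMaterial({ /* shaders and uniforms */ });

myMaterial.transparent = true;                // moves the object into the transparent render pass
myMaterial.blending = THREE.AdditiveBlending; // configures gl.blendFunc, not the shader source
myMaterial.depthWrite = false;                // toggles gl.depthMask
myMaterial.side = THREE.DoubleSide;           // disables back-face culling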
...matrices in Three.js from this issue, I haven't been able to find a good way to manually control them.
With RawShaderMaterial you should be able to write the whole shader from scratch.
uniform mat4 uMyProjectionMatrix;
uniform mat4 uMyModelMatrix;
uniform mat4 uMyViewMatrix;
uniform mat4 uMyModelViewMatrix; // optional: view * model premultiplied on the CPU

attribute vec3 aMyPosition;

void main() {
    gl_Position = uMyProjectionMatrix * uMyViewMatrix * uMyModelMatrix * vec4( aMyPosition, 1. );
}
It is entirely up to you to define what those are. Is the projection matrix orthographic or not for example.
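On the JavaScript side, a minimal sketch of wiring those uniforms up could look like this (the uMy* names match the shader above; myVertexShaderSource, myFragmentShaderSource and camera are assumed to exist):

var material = new THREE.RawShaderMaterial({
    uniforms: {
        uMyProjectionMatrix: { value: new THREE.Matrix4() },
        uMyViewMatrix: { value: new THREE.Matrix4() },
        uMyModelMatrix: { value: new THREE.Matrix4() }
    },
    vertexShader: myVertexShaderSource,    // the GLSL above
    fragmentShader: myFragmentShaderSource // RawShaderMaterial injects nothing, so the
                                           // fragment shader must declare its own
                                           // `precision mediump float;`
});

// The aMyPosition attribute must also be supplied to the geometry under that
// exact name, and the matrices are filled by whatever math you want to teach, e.g.:
material.uniforms.uMyModelMatrix.value.makeTranslation(1, 0, 0);
material.uniforms.uMyViewMatrix.value.getInverse(camera.matrixWorld);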
With ShaderMaterial you get these automagically:
void main() {
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4( position, 1. );
}
projectionMatrix and viewMatrix are derived from the camera's properties, as you can see in the linked issue (by the way, I have no idea why that's not in the documentation; I've found myself referring to that particular issue a bunch of times :) ).
Both of these can be modified. Automagically if you do
myCamera.far = newFar;
myCamera.fov = newFov;
myCamera.updateProjectionMatrix(); //this will be the new projectionMatrix in GLSL
but nothing should prevent you from doing
myCamera.projectionMatrix.elements[3] = mySkewLogic;
Same applies to modelMatrix:
myObject.position.x = newX;
myObject.updateMatrixWorld();
//or
myObject.matrixWorld.elements[3] = someXTranslationLogic;
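And if the goal is to hand a raw model matrix to the shader without the decompose/reconstruct round trip that applyMatrix() does (the concern raised in the question), one option worth considering is to disable the automatic matrix rebuild and set the local matrix directly; a minimal sketch:

myObject.matrixAutoUpdate = false; // stop three.js rebuilding the matrix from
                                   // position/quaternion/scale every frame

// Matrix4.set() takes its arguments in row-major order;
// this example is a plain translation by (5, 0, 0):
myObject.matrix.set(
    1, 0, 0, 5,
    0, 1, 0, 0,
    0, 0, 1, 0,
    0, 0, 0, 1
);
myObject.matrixWorldNeedsUpdate = true; // so matrixWorld is recomputed from it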
Related
I am implementing a feature extraction algorithm with OpenGL ES 3.0 (given an input texture with some 1's and mostly 0's, produce an output texture that has feature regions labeled). The problem I face in my fragment shader is how to perform a “lookup” on an intermediate vec or float rather than a sampler.
Conceptually every vec or float is just a texel-valued function, so there ought to be a way to get its value given texel coordinates, something like textureLikeLookup(texel_valued, v_color) - but I haven’t found anything that parallels the texture* functions.
The options I see are:
Render my vector to a framebuffer and pass that as a texture into another shader/rendering pass - undesirable because I have many such passes in a loop, and I want to avoid interleaving CPU calls;
Switch to ES 3.1 and take advantage of imageStore (https://www.khronos.org/registry/OpenGL-Refpages/es3.1/html/imageStore.xhtml) - it seems clear that if I can update an intermediate image within my shader then I can achieve this within the fragment shader (cf. https://www.reddit.com/r/opengl/comments/279fc7/glsl_frag_update_values_to_a_texturemap_within/), but I would rather stick to 3.0 if possible.
Is there a better/natural way to deal with this problem? In other words, do something like this
// works, because tex_sampler is a sampler2D
vec4 texel_valued = texture(tex_sampler, v_color);
when the data is not a sampler2D but a vec:
// doesn't work, because texel_valued is not a sampler but a vec4
vec4 oops = texture(texel_valued, v_color);
I see that Three.js has a PointsMaterial to draw a geometry as points rather than as triangles. However, I want to manipulate the vertices using my own vertex shader, via a ShaderMaterial. In WebGL, I think I could just call gl.drawArrays with gl.POINTS instead of gl.TRIANGLES. How can I tell the renderer to draw the geometry as points? Is there a better way to go about this?
A little addition: I had no joy until I added gl_PointSize to my vertex shader:
void main() {
    gl_PointSize = 100.;
    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1. );
}
I found the answer in the GPU particle system example.
Found my solution right after asking the question: just create a THREE.Points object instead of a THREE.Mesh, using whatever geometry and ShaderMaterial you want.
var points = new THREE.Points(geometry, new THREE.ShaderMaterial(parameters));
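Putting the two parts together, a minimal sketch (assuming geometry is an existing BufferGeometry with a position attribute, and scene is your scene):

var material = new THREE.ShaderMaterial({
    vertexShader: [
        'void main() {',
        '    gl_PointSize = 100.;', // without a point size the points may be invisible
        '    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1. );',
        '}'
    ].join('\n'),
    fragmentShader: 'void main() { gl_FragColor = vec4( 1. ); }'
});

scene.add(new THREE.Points(geometry, material));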
I am developing a sphere impostor shader in GLSL with Three.js. My algorithm is based on the publication by Sigg et al. titled "GPU-Based Ray-Casting of Quadratic Surfaces".
With a classic geometry approach, you need dozens or even hundreds of triangles to represent each sphere, which can overload memory if you need to show thousands of spheres. A sphere impostor stores only a position and a radius per sphere, giving much better performance than the classic technique.
So far, I have succeeded in developing the shader, even using Three.js shader chunks to ensure full Three.js compatibility. You can find a demo page here. However, one last thing is not working in this implementation.
When moving objects in the scene, the object using the sphere impostor seems to lag behind a normal mesh. You can also notice that sometimes the spheres are "cut", as in this picture.
This second bug makes me think that the sprite is placed correctly in the scene by the vertex shader, but that the fragment shader computes wrong coordinates. I suspect two pieces of code where the problem could lie:
Two varyings passed from the vertex shader to the fragment shader, which should hold the same value for every pixel of a given sprite. I don't know how to verify this.
varying float projMatrix11;
varying float projMatrix22;
I don't know if I'm updating my shader uniforms correctly:
group.traverse(function (o) {
    if (!o.material) { return; }

    var u = o.material.uniforms;
    if (!u) { return; }

    // invert the per-object model-view matrix computed by three.js
    modelViewMatrixInverse.getInverse(o.modelViewMatrix);

    // push the precomputed matrices into whichever custom uniforms the material declares
    if (u.projectionMatrixInverse) {
        u.projectionMatrixInverse.value = projectionMatrixInverse;
    }
    if (u.projectionMatrixTranspose) {
        u.projectionMatrixTranspose.value = projectionMatrixTranspose;
    }
    if (u.modelViewMatrixInverse) {
        u.modelViewMatrixInverse.value = modelViewMatrixInverse;
    }
    if (u.viewport) {
        u.viewport.value = viewport;
    }
});
I haven't been able to debug the problem, and I hope someone who knows Three.js better than I do can give me some clues.
I really hope we can solve this problem, so that we can propose this feature to the whole Three.js community ;)
Note: I slowed down the requestAnimationFrame calls to make debugging easier.
EDIT: After digging further, the problem may come from how I'm updating my custom uniforms. One of them uses the modelViewMatrix to compute its inverse, but the modelViewMatrix is only updated during the WebGLRenderer's render() call, so the frame delay may come from there. How can I update a uniform that depends on other uniforms and keep them synchronized in Three.js?
I found the answer myself; I'll explain it here in case someone runs into the same trouble.
The problem is that I was updating the modelViewMatrixInverse uniform from the modelViewMatrix provided by Three.js. That matrix is only refreshed during the WebGLRenderer's render() call, so my modelViewMatrixInverse was computed from the previous frame's value, and my custom shader was always one frame behind Three.js's native shaders.
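One way to fix this (a sketch, assuming the material declares a modelViewMatrixInverse uniform) is to rebuild the model-view matrix yourself in Object3D.onBeforeRender, which the renderer invokes during render(), when both camera.matrixWorldInverse and the object's matrixWorld are already up to date:

var tmpModelView = new THREE.Matrix4();

mesh.onBeforeRender = function (renderer, scene, camera) {
    // Recompute the model-view matrix instead of reading this.modelViewMatrix,
    // which may still hold last frame's value at this point.
    tmpModelView.multiplyMatrices(camera.matrixWorldInverse, this.matrixWorld);
    this.material.uniforms.modelViewMatrixInverse.value.getInverse(tmpModelView);
};

(In recent three.js releases, getInverse( m ) has been replaced by .copy( m ).invert().)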
In my attempt to write my own custom shader (I am using a THREE.ShaderMaterial), I need the equivalent of legacy GLSL's built-in gl_ModelViewMatrixInverseTranspose uniform (as seen on http://mew.cx/glsl_quickref.pdf). I noticed some uniforms are already passed to the shader automatically; for instance, gl_ModelViewMatrix and gl_ProjectionMatrix are covered by three.js's modelViewMatrix and projectionMatrix respectively. gl_ModelViewProjectionMatrix, on the other hand, seems to be missing, but I have seen examples where it is simply computed inside the shader as projectionMatrix * modelViewMatrix. So my question is: am I to compute gl_ModelViewMatrixInverseTranspose manually inside my shader starting from modelViewMatrix (and if so, how?), or is there a uniform (possibly merged into my definition of THREE.ShaderMaterial with THREE.UniformsUtils.merge) that already handles it? Cheers.
In a Three.js shader, the inverse transpose of the modelViewMatrix (of its upper-left 3×3 part, to be precise) is called normalMatrix.
It is automatically passed into the shaders so you don't need to do any work to get it.
// = inverse transpose of modelViewMatrix
uniform mat3 normalMatrix;
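Typical usage in a vertex shader, for example to move normals into view space (normal and position are attributes that ShaderMaterial provides automatically):

varying vec3 vNormal;

void main() {
    // using normalMatrix instead of mat3( modelViewMatrix ) keeps normals
    // correct under non-uniform scaling
    vNormal = normalize( normalMatrix * normal );
    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1. );
}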
For reference, here are the built-in uniforms and attributes Three.js provides.
I'm writing a physically based shader using GLSL ES in three.js. For the specular part of global illumination, I use a cubemap DDS texture with a mipmap chain inside (precalculated with CubeMapGen, as explained here). I need to access this texture in the fragment shader, and I would like to select the mipmap index manually. The correct function for doing this is
vec4 textureCubeLod(samplerCube sampler, vec3 coord, float lod)
but it's available only in the vertex shader. In my fragment shader I'm using the similar function
vec4 textureCube(samplerCube sampler, vec3 coord, float bias)
but it doesn't work well, because the bias parameter is just added to the automatically calculated level of detail. So when I zoom in or out on the scene, the mipmap LOD changes, whereas for my shader it must stay the same (it must depend only on the roughness parameter, as explained in the link above).
I would like to select the mipmap level manually in the fragment shader, depending only on the roughness of the material (for example, using the formula mipMapIndex = roughness * numMipMap), so it must be constant with distance and must not change automatically when zooming. How can I solve this?
It won't work in WebGL at the moment, because there is no support for this feature. You can experiment with the textureLOD extension, though, in recent builds of Chrome Canary, but it still needs some tweaking. Go to about:flags and look for this:
Enable WebGL Draft Extensions
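For reference, once the EXT_shader_texture_lod extension is exposed, the fragment-shader side could look roughly like this (a sketch: envMap, roughness, numMipMaps and vReflect are hypothetical names, not anything three.js provides):

#extension GL_EXT_shader_texture_lod : enable

precision highp float;

uniform samplerCube envMap;   // prefiltered cubemap with the mip chain
uniform float roughness;      // material roughness in [0, 1]
uniform float numMipMaps;     // number of mip levels in the chain
varying vec3 vReflect;        // reflection vector from the vertex shader

void main() {
    // mip level chosen from roughness alone, independent of camera distance
    float lod = roughness * numMipMaps;
    gl_FragColor = textureCubeLodEXT( envMap, vReflect, lod );
}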