In my attempt to write my own custom shader (I am using a THREE.ShaderMaterial), I need to set the WebGL built-in gl_ModelViewMatrixInverseTranspose uniform (as seen on http://mew.cx/glsl_quickref.pdf). I noticed some uniforms are already passed to the shader automatically: for instance, gl_ModelViewMatrix and gl_ProjectionMatrix are accounted for by three.js' modelViewMatrix and projectionMatrix respectively. gl_ModelViewProjectionMatrix, on the other hand, seems to be missing, but I have seen examples where it is easily computed inside the shader as projectionMatrix * modelViewMatrix. So my question is: am I to compute gl_ModelViewMatrixInverseTranspose manually inside my shader, starting from modelViewMatrix (and if so, how?), or is there a uniform (possibly merged into my definition of THREE.ShaderMaterial with THREE.UniformsUtils.merge) that already handles it? Cheers.
In a Three.js shader the inverse transpose of modelViewMatrix is called normalMatrix. (It is a mat3: the inverse transpose of the upper-left 3x3 of the model-view matrix.)
It is automatically passed into the shaders so you don't need to do any work to get it.
// = inverse transpose of modelViewMatrix
uniform mat3 normalMatrix;
For reference, here are the built-in uniforms and attributes Three.js provides.
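For example, a minimal ShaderMaterial vertex shader that uses it to carry a normal into eye space might look like this (a sketch; vNormal is just an illustrative varying name):

varying vec3 vNormal;

void main() {
    // normalMatrix (mat3) is injected by three.js, alongside
    // modelViewMatrix, projectionMatrix, position and normal.
    // It keeps normals perpendicular under non-uniform scaling.
    vNormal = normalize(normalMatrix * normal);
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}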
I am implementing a feature extraction algorithm with OpenGL ES 3.0 (given an input texture with some 1's and mostly 0's, produce an output texture that has feature regions labeled). The problem I face in my fragment shader is how to perform a “lookup” on an intermediate vec or float rather than a sampler.
Conceptually every vec or float is just a texel-valued function, so there ought to be a way to get its value given texel coordinates, something like textureLikeLookup(texel_valued, v_color) - but I haven’t found anything that parallels the texture* functions.
The options I see are:
Render my vector to a framebuffer and pass that as a texture into another shader/rendering pass (sketched after this question) - undesirable because I have many such passes in a loop, and I want to avoid interleaving CPU calls;
Switch to ES 3.1 and take advantage of imageStore (https://www.khronos.org/registry/OpenGL-Refpages/es3.1/html/imageStore.xhtml) - it seems clear that if I can update an intermediate image within my shader then I can achieve this within the fragment shader (cf. https://www.reddit.com/r/opengl/comments/279fc7/glsl_frag_update_values_to_a_texturemap_within/), but I would rather stick to 3.0 if possible.
Is there a better or more natural way to deal with this problem? In other words, can I do something like this
// works, because tex_sampler is a sampler2D
vec4 texel_valued = texture(tex_sampler, v_color);
when the data is not a sampler2D but a vec:
// doesn't work, because texel_valued is not a sampler but a vec4
vec4 oops = texture(texel_valued, v_color);
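For reference, option 1 above - the usual workaround at the ES 3.0 level - looks roughly like this in a three.js/WebGL 2 setting (a sketch; uPrev, passMaterial, passScene and numPasses are illustrative names):

// Ping-pong between two render targets so each pass can sample the
// previous pass's output as an ordinary texture.
const rtA = new THREE.WebGLRenderTarget(width, height, {
    minFilter: THREE.NearestFilter,
    magFilter: THREE.NearestFilter,
    type: THREE.FloatType // assumes float color buffers are renderable
});
const rtB = rtA.clone();

let read = rtA, write = rtB;
for (let i = 0; i < numPasses; i++) {
    passMaterial.uniforms.uPrev.value = read.texture; // last pass's output
    renderer.setRenderTarget(write);
    renderer.render(passScene, passCamera);           // full-screen quad pass
    [read, write] = [write, read];                    // swap roles
}
renderer.setRenderTarget(null);

All passes are still issued from the CPU side, but they are plain draw calls with no readbacks in between, which keeps the loop cheap.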
I see that three.js has a PointsMaterial to draw a geometry as points rather than as triangles. However, I want to manipulate the vertices using my own vertex shader, using a ShaderMaterial. In WebGL, I think I could just call gl.drawArrays with gl.POINTS instead of gl.TRIANGLES. How can I tell the renderer to draw the geometry as points? Is there a better way to go about this?
A little addition: I had no joy until I added gl_PointSize to my vertex shader:
void main() {
    gl_PointSize = 100.0;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
I found the answer in the GPU particle system example.
Found my solution right after asking the question. Just create a THREE.Points object instead of a THREE.Mesh, using whatever geometry and ShaderMaterial you want:
const points = new THREE.Points(geometry, new THREE.ShaderMaterial(parameters));
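Putting both answers together, a minimal sketch (the point size, colors and geometry are arbitrary):

// Any geometry rendered as points, with a custom vertex shader.
const material = new THREE.ShaderMaterial({
    vertexShader: `
        void main() {
            gl_PointSize = 10.0; // without this, point size is undefined in GLSL ES
            gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
        }
    `,
    fragmentShader: `
        void main() {
            gl_FragColor = vec4(1.0); // solid white points
        }
    `
});
const points = new THREE.Points(new THREE.SphereGeometry(1, 32, 32), material);
scene.add(points);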
Is there any way to manually specify view and model matrices?
I know Three.js is not supposed to be used in this way, but I am currently developing some educational materials to teach a typical computer graphics pipeline and would like to explicitly supply model/view/projection matrices to a shader. While I understood which matrices are model/view/projection matrices in Three.js from this issue, I haven't been able to find a good way to manually control them.
So far, I have been able to specify the projection matrix by using camera.projectionMatrix.makePerspective() and the model matrix by using applyMatrix(). Actually, applyMatrix() is not ideal from an educational point of view, because it internally decomposes the matrix into position, quaternion and scale, and then probably reconstructs the model matrix from those values before supplying it to the shader.
One possible solution is to use ShaderMaterial() and specify all three matrices as uniforms. However, I may want to avoid that, because the matrices are also passed to the shader implicitly, and the name "material" might confuse students.
Does anybody have suggestions to do this kind of stuff in Three.js?
However, I may want to avoid it because they are also passed to a shader implicitly and the name "material" might confuse students.
I'm not sure if this is the best approach. A Material in three.js should indeed be more than a shader. It consists of two shaders, but of other render state as well. For example, if you set myMaterial.transparent = true; you trigger a completely different flow in WebGLRenderer, which in turn sets up different WebGL calls. Setting the blending mode, for example, is not something a shader does.
It would probably be worth explaining this abstraction, rather than renaming it.
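As a small illustration of that extra state (the property names are real three.js, the values arbitrary):

// A material bundles shaders *and* fixed-function render state.
const mat = new THREE.ShaderMaterial({ vertexShader, fragmentShader });
mat.transparent = true;                // routes the object through the transparent pass
mat.blending = THREE.AdditiveBlending; // blend-equation state, set via WebGL calls, not GLSL
mat.depthWrite = false;                // more render state the shader itself can't express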
...matrices in Three.js from this issue, I haven't been able to find a good way to manually control them.
With RawShaderMaterial you should be able to write the whole shader from scratch.
uniform mat4 uMyProjectionMatrix;
uniform mat4 uMyModelMatrix;
uniform mat4 uMyViewMatrix;
uniform mat4 uMyModelViewMatrix;

attribute vec3 aMyPosition;

void main() {
    gl_Position = uMyProjectionMatrix * uMyViewMatrix * uMyModelMatrix * vec4( aMyPosition, 1.0 );
}
It is entirely up to you to define what those are - whether the projection matrix is orthographic or perspective, for example.
With ShaderMaterial you get these automagically:
void main() {
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4( position, 1.0 );
}
projectionMatrix and viewMatrix are derived from the camera's properties, as you can see in the linked issue (by the way, I have no idea why that's not in the documentation; I found myself referring to that particular issue a bunch of times :) ).
Both of these can be modified. Automagically if you do
myCamera.far = newFar;
myCamera.fov = newFov;
myCamera.updateProjectionMatrix(); //this will be the new projectionMatrix in GLSL
but nothing should prevent you from doing
myCamera.projectionMatrix.elements[3] = mySkewLogic; // Matrix4 stores its values in .elements, not .array
Same applies to modelMatrix:
myObject.position.x = newX;
myObject.updateMatrixWorld();
//or
myObject.matrixWorld.elements[3] = someXTranslationLogic;
(If you write matrixWorld directly, you will likely also want to set myObject.matrixAutoUpdate = false, so three.js doesn't rebuild the matrix from position, quaternion and scale on the next render.)
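To make the fully explicit RawShaderMaterial route concrete, the JavaScript side might look something like this (a sketch; the uniform names match the shader above, and myVertexShaderSource / myFragmentShaderSource are placeholders):

// Hand the three matrices to the shader as plain uniforms.
const material = new THREE.RawShaderMaterial({
    uniforms: {
        uMyProjectionMatrix: { value: new THREE.Matrix4() },
        uMyViewMatrix: { value: new THREE.Matrix4() },
        uMyModelMatrix: { value: new THREE.Matrix4() }
    },
    vertexShader: myVertexShaderSource,   // the GLSL shown earlier
    fragmentShader: myFragmentShaderSource
});
// (The aMyPosition attribute would also need to exist on the geometry,
// e.g. via geometry.setAttribute('aMyPosition', ...), since RawShaderMaterial
// injects no built-ins.)

// Each frame, fill the matrices in however you like - three.js won't touch them:
material.uniforms.uMyProjectionMatrix.value.makePerspective(left, right, top, bottom, near, far);
material.uniforms.uMyViewMatrix.value.copy(camera.matrixWorldInverse);
material.uniforms.uMyModelMatrix.value.copy(mesh.matrixWorld);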
I'm writing a physically based shader using GLSL ES in three.js. For the addition of specular global illumination I use a cubemap DDS texture with a mipmap chain inside (precalculated with CubeMapGen, as explained here). I need to access this texture in the fragment shader, and I would like to select the index of the mipmap manually. The correct function for doing this is
vec4 textureCubeLod(samplerCube sampler, vec3 coord, float lod)
but it's available only in the vertex shader. In my fragment shader I'm using the similar function
vec4 textureCube(samplerCube sampler, vec3 coord, float bias)
but it doesn't work well, because the bias parameter is just added to the automatically calculated level of detail. So when I zoom in or out on the scene, the mipmap LOD changes, but for my shader it must stay the same (it must depend only on the roughness parameter, as explained in the link above).
I would like to select the mipmap level in the fragment shader manually, depending only on the roughness of the material (for example using the formula mipMapIndex = roughness * numMipMap), so it stays constant with distance and doesn't change automatically when zooming. How can I solve this?
It won't work with WebGL at the moment, because there is no support for this feature. You can experiment with the texture LOD extension in recent builds of Chrome Canary, though it still needs some tweaking. Go to about:flags and look for this:
Enable WebGL Draft Extensions
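Once the extension is exposed, the fragment-shader side might look like this (a sketch; envMap, roughness, numMipMaps and vReflect are illustrative names, and the extension is GL_EXT_shader_texture_lod per the WebGL registry):

#extension GL_EXT_shader_texture_lod : enable

uniform samplerCube envMap;
uniform float roughness;
uniform float numMipMaps; // e.g. log2 of the cubemap edge size

varying vec3 vReflect;

void main() {
    // Mip level chosen from roughness alone, so it stays constant
    // with viewing distance, unlike the bias overload of textureCube.
    float lod = roughness * numMipMaps;
    gl_FragColor = textureCubeLodEXT(envMap, vReflect, lod);
}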
The OpenGL Superbible 5th Edition was recently released, and it documents OpenGL 3.3. Unfortunately, OS X only supports OpenGL 2.1 and GLSL version 1.20. The very first non-trivial vertex shader they give you fails to compile with the error message:
ERROR: 0:5: '' : Version number not supported by GL2
ERROR: 0:8: 'in' : syntax error syntax error
The shader is, as written:
// Simple Diffuse lighting Shader
// Vertex Shader
// Richard S. Wright Jr.
// OpenGL SuperBible
#version 130
// Incoming per vertex... position and normal
in vec4 vVertex;
in vec3 vNormal;
// Set per batch
uniform vec4 diffuseColor;
uniform vec3 vLightPosition;
uniform mat4 mvpMatrix;
uniform mat4 mvMatrix;
uniform mat3 normalMatrix;
// Color to fragment program
smooth out vec4 vVaryingColor;
void main(void)
{
    // Get surface normal in eye coordinates
    vec3 vEyeNormal = normalMatrix * vNormal;
    // Get vertex position in eye coordinates
    vec4 vPosition4 = mvMatrix * vVertex;
    vec3 vPosition3 = vPosition4.xyz / vPosition4.w;
    // Get vector to light source
    vec3 vLightDir = normalize(vLightPosition - vPosition3);
    // Dot product gives us diffuse intensity
    float diff = max(0.0, dot(vEyeNormal, vLightDir));
    // Multiply intensity by diffuse color
    vVaryingColor.rgb = diff * diffuseColor.rgb;
    vVaryingColor.a = diffuseColor.a;
    // Let's not forget to transform the geometry
    gl_Position = mvpMatrix * vVertex;
}
Replace the GLSL version with:
#version 120
but in GLSL 1.20 the keywords in and out were not defined yet; use attribute and varying instead. For the output:
varying vec4 vVaryingColor; // interpolation is smooth by default; the smooth qualifier only arrived in GLSL 1.30
You'll probably need to make similar changes in the fragment shader.
vVertex and vNormal are custom names, which means they have been bound in the C++ code. The easiest way to work around this is to rename them to the built-in attributes gl_Vertex and gl_Normal.
Besides changing the #version to 120, you also need to change in to attribute and out to varying. I may be missing something else, but that's all that shows for me right now.
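Applying those substitutions, the book's vertex shader would read something like this under GLSL 1.20 (a mechanical port of the code above; untested on GL2):

// Simple Diffuse lighting Shader, ported to GLSL 1.20
#version 120

// Incoming per vertex... position and normal
attribute vec4 vVertex;
attribute vec3 vNormal;

// Set per batch
uniform vec4 diffuseColor;
uniform vec3 vLightPosition;
uniform mat4 mvpMatrix;
uniform mat4 mvMatrix;
uniform mat3 normalMatrix;

// Color to fragment program (smooth interpolation is the default)
varying vec4 vVaryingColor;

void main(void)
{
    // Get surface normal in eye coordinates
    vec3 vEyeNormal = normalMatrix * vNormal;
    // Get vertex position in eye coordinates
    vec4 vPosition4 = mvMatrix * vVertex;
    vec3 vPosition3 = vPosition4.xyz / vPosition4.w;
    // Get vector to light source
    vec3 vLightDir = normalize(vLightPosition - vPosition3);
    // Dot product gives us diffuse intensity
    float diff = max(0.0, dot(vEyeNormal, vLightDir));
    // Multiply intensity by diffuse color
    vVaryingColor.rgb = diff * diffuseColor.rgb;
    vVaryingColor.a = diffuseColor.a;
    // Transform the geometry
    gl_Position = mvpMatrix * vVertex;
}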
Update 2011: As of OS X Lion, this is no longer the case. Lion has added support for OpenGL 3.2.
Unfortunately, I've come to believe this is a fool's errand. The book uses a GLTools library (distributed on the website) which enforces the passing in of various parameters in a way that is fundamentally incompatible with OpenGL 2.1.
If it were one example, it could be rewritten, but it's a number of examples and the effort would be overwhelming for the return if you were trying to teach yourself OpenGL.
You have two options:
Buy a Windows machine which supports OpenGL 3 and put your Mac in the corner until Apple steps up to support the newer standard.
Buy the 4th Edition of the book which is still in print.
From the website:
If you are still interested in the deprecated functionality of pre-OpenGL 3.x, we recommend the fourth edition, which is still in print, and which covers OpenGL 2.1 and the fixed function pipeline quite thoroughly.