THREE.js/GLSL: WebGL shader to color fragments based on world space position - three.js

I have seen solutions that color fragments based on their position in screen space or in local object space, like Three.js/GLSL - Convert Pixel Coordinate to World Coordinate.
Those work with screen coordinates and change when the camera moves or rotates, or they only apply to local object space.
What I'd like to accomplish instead is to color fragments based on their position in world space (the world space of the three.js scene graph).
Even when the camera moves, the color should stay constant.
Example of the desired behaviour: a 1x1x1 cube positioned in world space at (x:0, y:0, z:2) would have its third component (blue == z) always between 1.5 and 2.5. This should hold even when the camera moves.
What I have got so far:
vertex shader
varying vec4 worldPosition;
void main() {
  // The following changes on camera movement:
  worldPosition = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  // This is closer to what I want (colors don't change when the camera moves),
  // but it is in local, not world, space:
  worldPosition = vec4(position, 1.0);
  // This works fine as long as the camera doesn't move:
  worldPosition = modelViewMatrix * vec4(position, 1.0);
  // Instead I'd like the behaviour above, but without color changes on camera movement:
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
fragment shader
uniform vec2 u_resolution;
varying vec4 worldPosition;
void main(void) {
  // I'd like to use worldPosition something like this:
  gl_FragColor = vec4(worldPosition.xyz * someScaling, 1.0);
  // Here, fragments with a world-space component > 1.0 would be white and
  // those <= 0.0 black; that's fine, because I can still scale to compensate.
}
Here is what I have got:
https://glitch.com/edit/#!/worldpositiontest?path=index.html:163:15
If you move with WASD you can see that the colors don't stay in place; I'd like them to, however.

Your worldPosition is wrong. You shouldn't involve the camera in that calculation: no projectionMatrix and no viewMatrix.
// The world position of your vertex: NO CAMERA
vec4 worldPosition = modelMatrix * vec4(position, 1.0);
// Position in normalized screen coords: ADD CAMERA
gl_Position = projectionMatrix * viewMatrix * worldPosition;
// Here you can compute the color based on worldPosition, for example:
vColor = normalize(abs(worldPosition.xyz));
Check this fiddle.
Note
Notice that the abs() used here for the color can give the same color for different positions, which might not be what you're looking for.
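The key point of the answer (model matrix only, no view or projection) can be sanity-checked outside GLSL. The following is a minimal plain-JavaScript sketch with an illustrative `transform` helper, using the same column-major `mat4 * vec4` convention as GLSL; the model matrix translates the unit cube to (0, 0, 2) as in the question:

```javascript
// Multiply a column-major 4x4 matrix by a vec4 (matches GLSL's mat4 * vec4).
function transform(m, v) {
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      out[row] += m[col * 4 + row] * v[col];
    }
  }
  return out;
}

// Model matrix: translate the cube to (0, 0, 2), as in the question.
const modelMatrix = [
  1, 0, 0, 0,
  0, 1, 0, 0,
  0, 0, 1, 0,
  0, 0, 2, 1, // translation lives in the last column (column-major)
];

// worldPosition = modelMatrix * position -- no view or projection involved,
// so it stays the same no matter where the camera is.
const worldPosition = transform(modelMatrix, [0, 0, 0.5, 1]);
console.log(worldPosition); // [0, 0, 2.5, 1]
```

The top face vertex at local z = 0.5 lands at world z = 2.5, inside the 1.5-2.5 range the question asks for, and nothing camera-related enters the computation.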

Related

How can I color points in Three JS using OpenGL and Fragment Shaders to depend on the points' distance to the scene origin

To clarify, I am using React, React Three Fiber, and Three.js.
I have 1000 points mapped into the shape of a disc, and I would like to give them texture via ShaderMaterial, which takes a vertexShader and a fragmentShader. For the color of the points I want them to transition in a gradient from blue to red: the furthest points blue, and the points closest to the origin red.
This is the vertexShader:
const vertexShader = `
uniform float uTime;
uniform float uRadius;
varying float vDistance;
void main() {
vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
vDistance = length(mvPosition.xyz);
gl_Position = projectionMatrix * mvPosition;
gl_PointSize = 5.0;
}
`
export default vertexShader
And here is the fragmentShader:
const fragmentShader = `
uniform float uDistance[1000];
varying float vDistance;
void main() {
// Calculate the distance of the fragment from the center of the point
float d = 1.0 - length(gl_PointCoord - vec2(0.5, 0.5));
// Interpolate the alpha value of the fragment based on its distance from the center of the point
float alpha = smoothstep(0.45, 0.55, d);
// Interpolate the color of the point between red and blue based on the distance of the point from the origin
vec3 color = mix(vec3(1.0, 0.0, 0.0), vec3(0.0, 0.0, 1.0), vDistance);
// Set the output color of the fragment
gl_FragColor = vec4(color, alpha);
}
`
export default fragmentShader
I tried solving the problem at first by passing an array of normalized distances for every point, but I now realize the points have no way of knowing which array index holds the distance correlating to themselves.
The main thing I am confused about is how gl_FragColor works. In the example linked, the idea is that every point from the vertexShader file will have a vDistance and use that value to assign a unique color to itself in the fragmentShader.
So far I have only succeeded in getting all of the points being the same color; they do not seem to differ based on distance at all.
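The color math here is plain arithmetic, so it can be checked outside GLSL. A sketch (not the asker's code): GLSL's mix() interpolates per channel and does not clamp, so it only produces a gradient when its third argument is normalized to [0, 1], e.g. vDistance = length(worldPosition.xyz) / uRadius. If vDistance is an unnormalized view-space length, as in the posted vertex shader, then t is far above 1.0 for every point and the extrapolated color clamps to the same value, which matches the "all points the same color" symptom.

```javascript
// GLSL's mix(a, b, t) = a * (1 - t) + b * t, applied per channel.
function mix(a, b, t) {
  return a.map((x, i) => x * (1 - t) + b[i] * t);
}

// Red at the origin, blue at the rim: t must be the distance normalized
// to [0, 1] before it is used as the interpolation factor.
const red = [1, 0, 0];
const blue = [0, 0, 1];

console.log(mix(red, blue, 0.0)); // [1, 0, 0]    -> point at the origin is red
console.log(mix(red, blue, 0.5)); // [0.5, 0, 0.5] -> halfway is purple
console.log(mix(red, blue, 1.0)); // [0, 0, 1]    -> point on the rim is blue
```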

Custom Point Shader in Autodesk Forge Viewer behaves weirdly with orthographic Camera

I am using a PointCloud for fast rendering of sprites, following this blog post. Everything works fine with the perspective camera. However, if I switch to the orthographic camera via viewer.navigation.toOrthographic(), the points' sizes are not calculated correctly. Does anyone know what the issue is, or where I might find a clue?
My vertex shader
#if NUM_CUTPLANES > 0
varying highp vec3 vWorldPosition;
#endif
attribute float mVisible;
attribute float mSize;
varying float vMVisible;
uniform float scale;
void main() {
vMVisible = mVisible;
#if NUM_CUTPLANES > 0
vec4 _worldPosition = modelMatrix * vec4( position, 1.0 );
vWorldPosition = _worldPosition.xyz;
#endif
vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
gl_Position = projectionMatrix * mvPosition;
gl_PointSize = mSize * (scale / (-mvPosition.z) );
}
Zoomed out
Zoomed in
Zoomed in a little bit more
The problem is in the last few lines of the vertex shader:
vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
gl_Position = projectionMatrix * mvPosition;
gl_PointSize = mSize * (scale / (-mvPosition.z) );
When using an orthographic projection instead of a perspective projection, mvPosition (the position transformed into view space, i.e. camera space) may have very different values, and so dividing the point size by mvPosition.z may yield unexpected results.
You may need to clamp the final point size by some constants (or by uniforms you provide from your JavaScript code), e.g.:
gl_PointSize = clamp(mSize * scale / -mvPosition.z, 10.0, 100.0);
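GLSL's clamp is the usual min/max composition, so the effect of the suggested fix on the attenuated point size can be sketched in plain JavaScript (the 10.0 and 100.0 bounds are the example values from above):

```javascript
// GLSL's clamp(x, lo, hi) = min(max(x, lo), hi).
function clamp(x, lo, hi) {
  return Math.min(Math.max(x, lo), hi);
}

// Point size attenuation: mSize * scale / -mvPosition.z, then clamped so the
// sprite neither explodes when z is near 0 nor vanishes when z is large
// (or when an orthographic camera produces an unexpected divisor).
function pointSize(mSize, scale, mvZ) {
  return clamp((mSize * scale) / -mvZ, 10.0, 100.0);
}

console.log(pointSize(2.0, 50.0, -1.0));    // 100 -- very close: capped
console.log(pointSize(2.0, 50.0, -5.0));    // 20  -- in range: unchanged
console.log(pointSize(2.0, 50.0, -1000.0)); // 10  -- very far: floored
```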

How to color only the faces where the normals are perpendicular to the camera

I am trying to do the math for a shader that needs to darken on the faces that have normals perpendicular to the camera (dot product is 0). So basically how do I get this dot product?
How do I fix the following?
uniform float time;
uniform vec3 eye_dir;
varying float darkening;
void main(){
float product=dot(normalize(eye_dir),normalize(normal.xyz));
darkening=product;
gl_Position=
projectionMatrix*
modelViewMatrix*
vec4(position,1.);
}
// in THREE.js
this.camera.getWorldDirection(this.eyeDir);
...
cell.material.uniforms.eye_dir = new Uniform(this.eyeDir);
To do what you want, you have to calculate the vector from the fragment to the camera. The easiest way is to do it in view space (camera space), because in view space the position of the camera is (0, 0, 0).
Transform the position by the modelViewMatrix from model space to view space and the normal by the normalMatrix from model space to view space. See WebGLProgram.
Since the result of the dot product is 1.0 when the vectors point in the same direction, the darkening is 1.0 - abs(NdotV).
varying float darkening;
void main(){
vec4 view_pos = modelViewMatrix * vec4(position, 1.0);
vec3 view_dir = normalize(-view_pos.xyz); // vec3(0.0) - view_pos;
vec3 view_nv = normalize(normalMatrix * normal.xyz);
float NdotV = dot(view_dir, view_nv);
darkening = 1.0 - abs(NdotV);
gl_Position = projectionMatrix * view_pos;
}
Note, the dot product of eye_dir and normal doesn't make any sense, because eye_dir is a vector in world space and normal is a vector in model (object) space.
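The view-space math in the answer boils down to three steps: normalize both vectors, take the dot product, and compute 1 - abs(...). A plain JavaScript sketch of just that arithmetic (vectors as arrays; helper names are illustrative):

```javascript
function dot(a, b) {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

function normalize(v) {
  const len = Math.sqrt(dot(v, v));
  return v.map((x) => x / len);
}

// darkening = 1 - |N . V|: 0 when the normal faces the camera head-on,
// 1 when the normal is perpendicular to the view direction.
function darkening(viewDir, normal) {
  return 1.0 - Math.abs(dot(normalize(viewDir), normalize(normal)));
}

console.log(darkening([0, 0, 1], [0, 0, 1])); // 0 -- facing the camera
console.log(darkening([0, 0, 1], [1, 0, 0])); // 1 -- perpendicular: darkest
```

Both inputs must live in the same space for this to mean anything, which is exactly why the shader above transforms both the position and the normal into view space first.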

Partially transparent shader occluding objects in THREE.js

I am making a game with a fog-of-war layer covering the board. I want a cursor that shows up when the player mouses over a tile, and I'm implementing it as a glow effect around the tile, also done with a shader.
I'm running into a strange issue: the glow effect works fine for positive x values (with the camera at x = -250, y = 250), but I can't see it for negative x values unless the camera is rotated to almost completely vertical (or I move the camera underneath the fog-of-war layer).
It's hard to explain, so I've made a CodePen demonstrating the problem: https://codepen.io/jakedluhy/pen/QqzajN?editors=0010
I'm pretty new to custom shaders, so any insight or help would be appreciated. Here's the shaders for the fog of war:
// Vertex
varying vec4 vColor;
void main() {
vec3 cRel = cameraPosition - position;
float dx = (20.0 * cRel.x) / cRel.y;
float dz = (20.0 * cRel.z) / cRel.y;
gl_Position = projectionMatrix *
modelViewMatrix *
vec4(
position.x + dx,
position.y,
position.z + dz,
1.0
);
vColor = vec4(0.0, 0.0, 0.0, 0.7);
}
// Fragment
varying vec4 vColor;
void main() {
gl_FragColor = vColor;
}
And the shaders for the "glow":
// Vertex
varying vec4 vColor;
attribute float alpha;
void main() {
vColor = vec4(color, alpha);
gl_Position = projectionMatrix *
modelViewMatrix *
vec4(position, 1.0);
}
// Fragment
varying vec4 vColor;
void main() {
gl_FragColor = vColor;
}
The math in the vertex shader for the fog of war is to keep the fog in a relative position to the game board.
Tagging THREE.js and glsl because I'm not sure whether this is a THREE.js exclusive problem or not...
Edit: version 0.87.1
Your example looks pretty weird. Setting depthWrite: false on your fog material makes the two boxes render.
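For context, this is a sketch of where that flag goes (material and shader variable names are illustrative, not from the CodePen): a transparent overlay like the fog layer should not write to the depth buffer, otherwise it can occlude geometry drawn after it even though it is see-through.

```javascript
// Sketch: the fog material stays depth-TESTED against the scene,
// but no longer writes depth values that would hide the glow quads.
const fogMaterial = new THREE.ShaderMaterial({
  vertexShader: fogVertexShader,     // the fog vertex shader from the question
  fragmentShader: fogFragmentShader, // the fog fragment shader from the question
  transparent: true,                 // render in the sorted transparent pass
  depthWrite: false,                 // the fix: leave the depth buffer alone
});
```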

Three.js shader pointLight position

I'm trying to write my own shader and add light sources, and I have sort of figured it out.
But there is a problem: the position of the light source is determined incorrectly. When the camera rotates or moves, something unimaginable happens.
So I get the position in the vertex shader:
vec3 vGlobalPosition = (modelMatrix * vec4(position, 1.0)).xyz;
Now I'm trying to make an illuminated area
float lightDistance = pointLights[ i ].distance;
vec3 lightPosition = pointLights[ i ].position;
float diffuseCoefficient = max(
1.0 - (distance(lightPosition,vGlobalPosition) / lightDistance ), 0.0);
gl_FragColor.rgb += color.rgb * diffuseCoefficient;
But as I wrote earlier, if you rotate the camera, the lit area moves to different positions.
If I set the light position manually, everything becomes normal:
vec3 lightPosition = vec3(2000,0,2000);
...
The question is how to get the right position of the light source. I need a global position; I don't know what space the position stored in the light source is in.
Added an example: http://codepen.io/korner/pen/XMzEaG
Your problem lies with vPos. Currently you do:
vPos = (modelMatrix * vec4(position, 1.0)).xyz
Instead you need to multiply the position with modelViewMatrix:
vPos = (modelViewMatrix * vec4(position, 1.0)).xyz;
You need to use modelViewMatrix because three.js passes PointLight.position to the shader in view space, i.e. relative to the camera.
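The falloff formula itself is fine; the bug is purely a space mismatch, since the linear attenuation only works when lightPosition and vPos are expressed in the same space. A plain JavaScript sketch of that formula (helper name is illustrative; both positions assumed to be in the same space, e.g. view space after the fix):

```javascript
// Linear falloff used in the question's fragment shader:
// the coefficient is 1 at the light, 0 at lightDistance and beyond.
function diffuseCoefficient(lightPos, fragPos, lightDistance) {
  const dx = lightPos[0] - fragPos[0];
  const dy = lightPos[1] - fragPos[1];
  const dz = lightPos[2] - fragPos[2];
  const d = Math.sqrt(dx * dx + dy * dy + dz * dz);
  return Math.max(1.0 - d / lightDistance, 0.0);
}

console.log(diffuseCoefficient([0, 0, 0], [0, 0, 0], 10));  // 1   -- at the light
console.log(diffuseCoefficient([0, 0, 0], [0, 5, 0], 10));  // 0.5 -- halfway out
console.log(diffuseCoefficient([0, 0, 0], [0, 20, 0], 10)); // 0   -- out of range
```

If one position is in world space and the other in view space, the distance `d` changes whenever the camera moves, which is exactly the wandering-light symptom in the question.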
