How to color only the faces where the normals are perpendicular to the camera - three.js

I am trying to do the math for a shader that needs to darken the faces whose normals are perpendicular to the camera (dot product is 0). So basically, how do I get this dot product?
How do I fix the following?
uniform float time;
uniform vec3 eye_dir;
varying float darkening;
void main(){
float product=dot(normalize(eye_dir),normalize(normal.xyz));
darkening=product;
gl_Position=projectionMatrix*modelViewMatrix*vec4(position,1.);
}
// in THREE.js
this.camera.getWorldDirection(this.eyeDir);
...
cell.material.uniforms.eye_dir = new Uniform(this.eyeDir);

To do what you want, you have to calculate the vector from the fragment to the camera. The easiest way to do this is in view space (camera space), because in view space the position of the camera is (0, 0, 0).
Transform the position by the modelViewMatrix from model space to view space and the normal by the normalMatrix from model space to view space. See WebGLProgram.
Since the dot product is 1.0 when the vectors point in the same direction, the darkening is 1.0 - abs(dot product).
varying float darkening;
void main(){
vec4 view_pos = modelViewMatrix * vec4(position, 1.0);
vec3 view_dir = normalize(-view_pos.xyz); // i.e. vec3(0.0) - view_pos.xyz
vec3 view_nv = normalize(normalMatrix * normal.xyz);
float NdotV = dot(view_dir, view_nv);
darkening = 1.0 - abs(NdotV);
gl_Position = projectionMatrix * view_pos;
}
Note that the dot product of eye_dir and normal doesn't make any sense, because eye_dir is a vector in world space and normal is a vector in model (object) space.
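For completeness, a minimal fragment shader consuming the darkening varying could look like the sketch below; the baseColor uniform is an assumption, not part of the original question:
uniform vec3 baseColor; // assumed surface color
varying float darkening;
void main() {
    // darkening is 1.0 where the normal is perpendicular to the view direction
    gl_FragColor = vec4(baseColor * (1.0 - darkening), 1.0);
}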

Related

How can I color points in Three JS using OpenGL and Fragment Shaders to depend on the points' distance to the scene origin

To clarify, I am using React, React Three Fiber, and Three JS.
I have 1000 points mapped into the shape of a disc, and I would like to give them texture via ShaderMaterial, which takes a vertexShader and a fragmentShader. For the color of the points I want them to transition in a gradient from blue to red: the points furthest from the origin should be blue and the ones closest to the origin red.
This is the vertexShader:
const vertexShader = `
uniform float uTime;
uniform float uRadius;
varying float vDistance;
void main() {
vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
vDistance = length(mvPosition.xyz);
gl_Position = projectionMatrix * mvPosition;
gl_PointSize = 5.0;
}
`
export default vertexShader
And here is the fragmentShader:
const fragmentShader = `
uniform float uDistance[1000];
varying float vDistance;
void main() {
// Calculate the distance of the fragment from the center of the point
float d = 1.0 - length(gl_PointCoord - vec2(0.5, 0.5));
// Interpolate the alpha value of the fragment based on its distance from the center of the point
float alpha = smoothstep(0.45, 0.55, d);
// Interpolate the color of the point between red and blue based on the distance of the point from the origin
vec3 color = mix(vec3(1.0, 0.0, 0.0), vec3(0.0, 0.0, 1.0), vDistance);
// Set the output color of the fragment
gl_FragColor = vec4(color, alpha);
}
`
export default fragmentShader
At first I tried solving the problem by passing an array of normalized distances for every point, but I now realize each point has no way of knowing which array index holds its own distance.
The main thing I am confused about is how gl_FragColor works. In the linked example the idea is that every point from the vertexShader will have a vDistance and use that value to assign a unique color to itself in the fragmentShader.
So far I have only succeeded in getting all of the points to be the same color; they do not seem to differ based on distance at all.
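For reference, a minimal sketch of the approach described above (assuming the points' geometry sits at the scene origin and that uRadius holds the disc radius, neither of which is stated in the question) would compute a normalized distance per vertex in the vertex shader:
uniform float uRadius;   // assumed: radius of the disc, used to normalize the distance
varying float vDistance;
void main() {
    // distance of this point from the origin, normalized to the 0..1 range
    vDistance = clamp(length(position) / uRadius, 0.0, 1.0);
    vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
    gl_Position = projectionMatrix * mvPosition;
    gl_PointSize = 5.0;
}
With vDistance in the 0..1 range, the existing mix(vec3(1.0, 0.0, 0.0), vec3(0.0, 0.0, 1.0), vDistance) in the fragment shader then gives red near the origin and blue at the rim, and the uDistance[1000] array is no longer needed.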

THREE.js/GLSL: WebGL shader to color fragments based on world space position

I have seen solutions to color fragments based on their position in screen space or in their local object space, like Three.js/GLSL - Convert Pixel Coordinate to World Coordinate.
Those work with screen coordinates, which change when the camera moves or rotates, or they only apply to local object space.
What I like to accomplish instead is to color fragments based on their position in world space (as in world space of the three.js scene graph).
Even when the camera moves the color should stay constant.
Example of the desired behaviour: a 1x1x1 cube positioned in world space at (x:0, y:0, z:2) would always have its third color component (blue == z) between 1.5 and 2.5. This should hold even if the camera moves.
What I have got so far:
vertex shader
varying vec4 worldPosition;
void main() {
// The following changes on camera movement:
worldPosition = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
// This is closer to what I want (colors don't change when the camera moves)
// but it is in local, not world, space:
worldPosition = vec4(position, 1.0);
// This works fine as long as the camera doesn't move:
worldPosition = modelViewMatrix * vec4(position, 1.0);
// Instead I'd like the above behaviour but without color changes on camera movement
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
fragment shader
uniform vec2 u_resolution;
varying vec4 worldPosition;
void main(void)
{
// I'd like to use worldPosition something like this:
gl_FragColor = vec4(worldPosition.xyz * someScaling, 1.0);
// Here, fragments with world-position xyz > 1.0 would be white and <= 0.0 black; that would be fine because I can still divide to compensate
}
Here is what I have got:
https://glitch.com/edit/#!/worldpositiontest?path=index.html:163:15
If you move with wasd you can see that the colors don't stay in place. I'd like them to, however.
Your worldPosition is wrong. You shouldn't involve the camera in the calculation. That means no projectionMatrix and no viewMatrix.
// The world position of your vertex: NO CAMERA
vec4 worldPosition = modelMatrix * vec4(position, 1.0);
// Position in normalized screen coords: ADD CAMERA
gl_Position = projectionMatrix * viewMatrix * worldPosition;
// Here you can compute the color based on worldPosition, for example:
vColor = normalize(abs(worldPosition.xyz));
Check this fiddle.
Note
Notice that the abs() used here for the color can give the same color for different positions, which might not be what you're looking for.
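Put together, a minimal vertex/fragment pair along the lines of that answer might look like the following sketch (vColor is just the varying used above, declared explicitly in both stages):
// vertex shader
varying vec3 vColor;
void main() {
    vec4 worldPosition = modelMatrix * vec4(position, 1.0);
    vColor = normalize(abs(worldPosition.xyz));
    gl_Position = projectionMatrix * viewMatrix * worldPosition;
}
// fragment shader
varying vec3 vColor;
void main() {
    gl_FragColor = vec4(vColor, 1.0);
}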

THREE.JS GLSL sprite always front to camera

I'm creating a glow effect for car stop lights and found a shader that makes it possible to always face the camera:
uniform vec3 viewVector;
uniform float c;
uniform float p;
varying float intensity;
void main() {
vec3 vNormal = normalize( normalMatrix * normal );
vec3 vNormel = normalize( normalMatrix * -viewVector );
intensity = pow( c - dot(vNormal, vNormel), p );
gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
This solution is quite simple and almost works: it reacts to camera movement, which is great. But this element is a child of a car. The car itself moves around, and when it rotates, the material stops pointing directly at the camera.
I don't want to use SpritePlugin or LensFlarePlugin because they slow down my game by 20fps so I'll stick to this lightweight solution.
I found a solution for Direct3D where you have to remove the rotation data from the transformation matrix, but I don't know how to do this in THREE.js.
I guess that instead of adding calculations involving the car's transformation, there must be a way to simplify this shader.
How to simplify this shader so the material always faces the camera?
From the link below: "To do spherical billboarding, just remove all rotations by setting the identity matrix". How do I do that with ShaderMaterial in THREE.js?
http://www.geeks3d.com/20140807/billboarding-vertex-shader-glsl/
The problem here, I think, is intercepting the transformation matrix from ShaderMaterial before it's passed to the shader, but I'm not sure.
Probably irrelevant, but here's the fragment shader as well:
uniform vec3 glowColor;
varying float intensity;
void main() {
vec3 glow = glowColor * intensity;
gl_FragColor = vec4( glow, 1.0 );
}
edit: for now I found a workaround, which is eliminating the parent's rotation influence by setting the opposite quaternion. It's not perfect and it happens on the CPU, not the GPU.
this.quaternion._x = -this.parent.quaternion._x;
this.quaternion._y = -this.parent.quaternion._y;
this.quaternion._z = -this.parent.quaternion._z;
this.quaternion._w = -this.parent.quaternion._w;
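Note that negating all four components produces a quaternion that represents the same rotation as the parent, not its inverse; if the goal is just to cancel the parent's rotation on the CPU, a sketch using three.js's built-in quaternion helpers (assuming a unit parent quaternion) would be:
// the conjugate of a unit quaternion is its inverse: negate x, y and z but keep w
this.quaternion.copy(this.parent.quaternion).conjugate();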
Are you looking for an implementation of billboarding (making a 2D sprite always face the camera)? If so, all you need to do is this:
"vec3 billboard(vec2 v, mat4 view){",
" vec3 up = vec3(view[0][1], view[1][1], view[2][1]);",
" vec3 right = vec3(view[0][0], view[1][0], view[2][0]);",
" vec3 p = right * v.x + up * v.y;",
" return p;",
"}"
v is the offset from the center, basically the 4 vertices of a plane that faces the z-axis, e.g. (1.0, 1.0), (1.0, -1.0), (-1.0, 1.0), and (-1.0, -1.0).
Use it like so:
"vec3 worldPos = billboard(a_offset, u_view);"
// then do whatever else.
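A complete vertex shader built around that helper might look like the sketch below; the a_offset attribute (the corner offset of the quad) is an assumption about how the geometry is set up, while modelMatrix, viewMatrix and projectionMatrix are the uniforms three.js injects automatically:
attribute vec2 a_offset; // assumed: corner offset, e.g. (-1.0, -1.0) .. (1.0, 1.0)
varying vec2 vUv;
vec3 billboard(vec2 v, mat4 view) {
    vec3 up = vec3(view[0][1], view[1][1], view[2][1]);
    vec3 right = vec3(view[0][0], view[1][0], view[2][0]);
    return right * v.x + up * v.y;
}
void main() {
    vUv = a_offset * 0.5 + 0.5;
    // transform the sprite's center to world space first, then offset along the
    // camera's right/up axes, so the car's rotation no longer tilts the quad
    vec4 worldCenter = modelMatrix * vec4(position, 1.0);
    vec3 worldPos = worldCenter.xyz + billboard(a_offset, viewMatrix);
    gl_Position = projectionMatrix * viewMatrix * vec4(worldPos, 1.0);
}
Because the corner offset is added in world space along the camera's axes, the car's rotation only moves the sprite's center; the glow intensity computation from the original shader can still be layered on top.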

Three js 2d matrix visualization

I am trying to visualize 2d matrices using Three js. These matrices are the states of the neurons in a neural network. The matrices are not huge (64 x 32). The values in these matrices will change, and I want those new values to be displayed in the visualization.
For the 2d matrix I want a plane of neurons.
I have tried creating a particle system using a plane geometry with as many vertices as neurons in the data matrix.
var width = 32;
var height = 64;
var planeGeometry = new THREE.PlaneGeometry( width, height, width - 1 , height - 1 );
var particlePlane = new THREE.ParticleSystem( planeGeometry, shaderMaterial );
In the fragment shader each particle is given a base texture (a white circle)
gl_FragColor = texture2D(baseTexture, gl_PointCoord);
And then I use a second texture containing the data matrix values (greyscale pixel values) to modify each base texture.
// Sets particle texture to desired color
// vertexPosition is a vec2 in coordinates local to the plane
gl_FragColor = gl_FragColor * texture2D( dataTexture, vertexPosition );
To calculate vertexPosition in the vertex shader I do the following (irrelevant lines omitted):
uniform float width;
uniform float height;
varying vec2 vertexPosition;
void main()
{
vertexPosition = vec2( position.x / width, position.y / height );
}
This is where I'm getting caught up. The vertexPosition does not seem to map properly to the dataTexture pixels. I want a one-to-one correspondence between particles and pixels.
How do I properly map from the location of particles/vertexes on a plane to equivalent pixel locations in a texture?
I am new to three js, so please feel free to tell me my approach is totally off.
To get texture coordinates, there is a ready-to-use uv attribute (along with the built-in matrices) in the GLSL that three.js provides; here is what I would use as a vertex shader:
varying vec2 vertexPosition;
void main() {
vertexPosition = uv;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
Then you have the texture coordinate available in the fragment shader via the varying vertexPosition.
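On the JavaScript side, the changing matrix values can be uploaded as a DataTexture that the fragment shader samples at vertexPosition. A rough sketch follows; the variable names are placeholders, and the exact format constants and uniform syntax depend on the three.js version in use:
var width = 32;
var height = 64;
// one greyscale byte per neuron, laid out row by row to match the plane's uv grid
var values = new Uint8Array(width * height);
var dataTexture = new THREE.DataTexture(values, width, height, THREE.LuminanceFormat, THREE.UnsignedByteType);
dataTexture.magFilter = THREE.NearestFilter; // one texel per particle, no blending between neighbours
dataTexture.needsUpdate = true;
// older three.js versions may also expect a type: "t" field on the uniform
shaderMaterial.uniforms.dataTexture = { value: dataTexture };
// whenever the network state changes:
// values.set(newNeuronStates);
// dataTexture.needsUpdate = true;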

Get position from depth texture

I'm trying to reduce the number of post-process textures I have to draw in my scene. The end goal is to support an SSAO shader. The shader requires depth, position and normal data. Currently I am storing the depth and normals in one float texture and the position in another.
I've been doing some reading, and it seems possible to get the position by simply using the depth stored in the normal texture. You have to unproject the x and y and multiply them by the depth value. I can't seem to get this right, however, and it's probably due to my lack of understanding...
So currently my positions are drawn to a position texture. This is what it looks like (this is currently working correctly)
Here is my new method: I pass the normal texture, which stores the normal x, y and z in the RGB channels and the depth in the w channel. In the SSAO shader I need to get the position, so this is how I'm doing it:
//viewport is a vec2 of the viewport width and height
//invProj is a mat4 using camera.projectionMatrixInverse (camera.projectionMatrixInverse.getInverse( camera.projectionMatrix );)
vec3 get_eye_normal()
{
vec2 frag_coord = gl_FragCoord.xy/viewport;
frag_coord = (frag_coord-0.5)*2.0;
vec4 device_normal = vec4(frag_coord, 0.0, 1.0);
return normalize((invProj * device_normal).xyz);
}
...
float srcDepth = texture2D(tNormalsTex, vUv).w;
vec3 eye_ray = get_eye_normal();
vec3 srcPosition = vec3( eye_ray.x * srcDepth , eye_ray.y * srcDepth , eye_ray.z * srcDepth );
//Previously was doing this:
//vec3 srcPosition = texture2D(tPositionTex, vUv).xyz;
However when I render out the positions it looks like this:
The SSAO looks very messed up using the new method. Any help would be greatly appreciated.
I was able to find a solution to this. You need to multiply the ray normal by the camera's far - near range (I was using the normalized depth value, but you need the world depth value).
I created a function to extract the position from the normal/depth texture like so:
First in the depth capture pass (fragment shader)
float ld = length(vPosition) / linearDepth; //linearDepth is cam.far - cam.near
gl_FragColor = vec4( normalize( vNormal ).xyz, ld );
And now in the shader trying to extract the position...
/// <summary>
/// This function will get the 3d world position from the Normal texture containing depth in its w component
/// </summary>
vec3 get_world_pos( vec2 uv )
{
vec2 frag_coord = uv;
float depth = texture2D(tNormals, frag_coord).w;
float unprojDepth = depth * linearDepth - 1.0;
frag_coord = (frag_coord-0.5)*2.0;
vec4 device_normal = vec4(frag_coord, 0.0, 1.0);
vec3 eye_ray = normalize((invProj * device_normal).xyz);
vec3 pos = vec3( eye_ray.x * unprojDepth, eye_ray.y * unprojDepth, eye_ray.z * unprojDepth );
return pos;
}
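With that helper in place, the separate position texture is no longer needed; inside the SSAO shader the position lookup simply becomes the following (vUv being the same varying as before):
// replaces the old texture2D(tPositionTex, vUv).xyz lookup
vec3 srcPosition = get_world_pos( vUv );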
