Threejs: PointCloudMaterial size compared to ShaderMaterial gl_PointSize with size attenuation - three.js

I am curious to know what the relationship is between gl_PointSize and the size property within PointCloudMaterial.
When I create a PointCloud with PointCloudMaterial and set the size property to 1, the size of the particles is far larger than when creating a PointCloud with a ShaderMaterial and setting the size parameter for the vertex shader to 1. I also account for size attenuation, as in the PointCloudMaterial shader:
<script type="x-shader/x-vertex" id="particle_vs">
uniform float size;
uniform float scale;
void main() {
vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
gl_PointSize = size * ( scale / length( mvPosition.xyz ) );
gl_Position = projectionMatrix * mvPosition;
}
</script>
I have extracted a simple example of my problem here:
http://dev.cartelle.nl/particle-example/
The red particles are assigned a PointCloudMaterial set to size 1.
The green particles are assigned a ShaderMaterial that has a Vertex Shader that accounts for size attenuation like in PointCloudMaterial. I've set the size to 300 in this case, and as you can see the green particles are still smaller.
My expected result is for the ShaderMaterial to use the same unit of measure as the size property on PointCloudMaterial. I need both of these materials working together, so I'm trying to figure out the relationship between these sizes. There must be something I'm missing in the vertex shader?
Thanks!
Johnny

In your ShaderMaterial, you need to set
scale: { type: 'f', value: window.innerHeight / 2 },
This is because of the following line in the WebGLRenderer method refreshUniformsParticle():
uniforms.scale.value = _canvas.height / 2.0; // TODO: Cache this.
three.js r.71
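For example, a minimal sketch of the ShaderMaterial setup matching the particle_vs shader above (the uniform names size and scale come from that shader; the fragment shader id particle_fs and the size value are illustrative):
var material = new THREE.ShaderMaterial( {
    uniforms: {
        size:  { type: 'f', value: 1.0 },
        // match what refreshUniformsParticle() does for PointCloudMaterial
        scale: { type: 'f', value: window.innerHeight / 2 }
    },
    vertexShader: document.getElementById( 'particle_vs' ).textContent,
    fragmentShader: document.getElementById( 'particle_fs' ).textContent
} );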

Related

How can I color points in Three JS using OpenGL and Fragment Shaders to depend on the points' distance to the scene origin

To clarify, I am using React, React Three Fiber, and Three JS.
I have 1000 points mapped into the shape of a disc, and I would like to give them texture via ShaderMaterial, which takes a vertexShader and a fragmentShader. For the color of the points I want them to transition in a gradient from blue to red: the further-away points should be blue and the points closest to the origin red.
This is the vertexShader:
const vertexShader = `
uniform float uTime;
uniform float uRadius;
varying float vDistance;
void main() {
vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
vDistance = length(mvPosition.xyz);
gl_Position = projectionMatrix * mvPosition;
gl_PointSize = 5.0;
}
`
export default vertexShader
And here is the fragmentShader:
const fragmentShader = `
uniform float uDistance[1000];
varying float vDistance;
void main() {
// Calculate the distance of the fragment from the center of the point
float d = 1.0 - length(gl_PointCoord - vec2(0.5, 0.5));
// Interpolate the alpha value of the fragment based on its distance from the center of the point
float alpha = smoothstep(0.45, 0.55, d);
// Interpolate the color of the point between red and blue based on the distance of the point from the origin
vec3 color = mix(vec3(1.0, 0.0, 0.0), vec3(0.0, 0.0, 1.0), vDistance);
// Set the output color of the fragment
gl_FragColor = vec4(color, alpha);
}
`
export default fragmentShader
At first I tried solving the problem by passing an array of normalized distances, one for every point, but I now realize a point would have no way of knowing which array index holds the distance correlating to itself.
The main thing I am confused about is how gl_FragColor works. In the example linked, the idea is that every point from the vertexShader file will have a vDistance and use that value to assign a unique color to itself in the fragmentShader.
So far I have only succeeded in getting all of the points to be the same color; they do not seem to differ based on distance at all.
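One likely reason all the points come out the same color: vDistance here is a raw view-space length, usually well above 1.0, so mix() extrapolates and gl_FragColor clamps every point to the same blue. A minimal sketch of a vertex shader that normalizes the distance before interpolation, measuring from the model-space origin to match the stated goal; that the uRadius uniform holds the disc radius is an assumption:
const vertexShader = `
uniform float uRadius;
varying float vDistance;
void main() {
  vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
  // distance from the scene origin, normalized to [0, 1] by the assumed disc radius
  vDistance = clamp(length(position) / uRadius, 0.0, 1.0);
  gl_Position = projectionMatrix * mvPosition;
  gl_PointSize = 5.0;
}
`
export default vertexShader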

glClipPlane - Is there an equivalent in webGL?

I have a 3D mesh. Is there any possibility to render the sectional view (clipping) like glClipPlane in OpenGL?
I am using Three.js r65.
The latest shader that I have added is:
Fragment Shader:
uniform float time;
uniform vec2 resolution;
varying vec2 vUv;
void main( void )
{
vec2 position = -1.0 + 2.0 * vUv;
float red = abs( sin( position.x * position.y + time / 2.0 ) );
float green = abs( cos( position.x * position.y + time / 3.0 ) );
float blue = abs( cos( position.x * position.y + time / 4.0 ) );
if(position.x > 0.2 && position.y > 0.2 )
{
discard;
}
gl_FragColor = vec4( red, green, blue, 1.0 );
}
Vertex Shader:
varying vec2 vUv;
void main()
{
vUv = uv;
vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
gl_Position = projectionMatrix * mvPosition;
}
Unfortunately, the OpenGL ES specification against which WebGL is specified has no clip planes, and the vertex shader stage lacks the gl_ClipDistance output by which plane clipping is implemented in modern OpenGL.
However, you can use the fragment shader to implement per-fragment clipping: test the position of the incoming fragment against your set of clip planes and, if the fragment does not pass the test, discard it.
Update
Let's have a look at how clip planes are defined in fixed function pipeline OpenGL:
void ClipPlane( enum p, double eqn[4] );
The value of the first argument, p, is a symbolic constant, CLIP_PLANEi, where i is an integer between 0 and n − 1, indicating one of n client-defined clip planes. eqn is an array of four double-precision floating-point values. These are the coefficients of a plane equation in object coordinates: p1, p2, p3, and p4 (in that order). The inverse of the current model-view matrix is applied to these coefficients, at the time they are specified, yielding

p' = (p'1, p'2, p'3, p'4) = (p1, p2, p3, p4) inv(M)

(where M is the current model-view matrix; the resulting plane equation is undefined if M is singular and may be inaccurate if M is poorly conditioned) to obtain the plane equation coefficients in eye coordinates. All points with eye coordinates (x_e, y_e, z_e, w_e)^T that satisfy

(p'1, p'2, p'3, p'4) · (x_e, y_e, z_e, w_e)^T ≥ 0

lie in the half-space defined by the plane; points that do not satisfy this condition do not lie in the half-space.
So what you do is: you add uniforms by which you pass the clip plane parameters p', and add another out/in pair of variables between the vertex and fragment shader to pass the vertex's eye-space position. Then, in the fragment shader, the first thing you do is perform the clip plane equation test; if it doesn't pass, you discard the fragment.
In the vertex shader
in vec3 vertex_position;
out vec4 eyespace_pos;
uniform mat4 modelview;
void main()
{
/* ... */
eyespace_pos = modelview * vec4(vertex_position, 1);
/* ... */
}
In the fragment shader
in vec4 eyespace_pos;
uniform vec4 clipplane;
void main()
{
if( dot( eyespace_pos, clipplane) < 0 ) {
discard;
}
/* ... */
}
In the newer versions (> r.76) of three.js clipping is supported in the THREE.WebGLRenderer. There is an array property called clippingPlanes where you can add your custom clipping planes (THREE.Plane instances).
For three.js you can check these two examples:
1) WebGL clipping (code base here on GitHub)
2) WebGL clipping advanced (code base here on GitHub)
A simple example
To add a clipping plane to the renderer you can do:
var normal = new THREE.Vector3( -1, 0, 0 );
var constant = 0;
var plane = new THREE.Plane( normal, constant );
renderer.clippingPlanes = [plane];
Here is a fiddle to demonstrate this.
You can also clip at the object level by adding a clipping plane to the object's material. For this to work you have to set the renderer's localClippingEnabled property to true.
// set renderer
renderer.localClippingEnabled = true;
// add clipping plane to material
var normal = new THREE.Vector3( -1, 0, 0 );
var constant = 0;
var color = 0xff0000;
var plane = new THREE.Plane( normal, constant );
var material = new THREE.MeshBasicMaterial({ color: color });
material.clippingPlanes = [plane];
var mesh = new THREE.Mesh( geometry, material );
Note: In r.77 some of the clipping functionality in the THREE.WebGLRenderer was moved to a separate THREE.WebGLClipping class; check here for reference in the three.js master branch.

Three js 2d matrix visualization

I am trying to visualize 2d matrices using Three js. These matrices are the states of the neurons in a neural network. The matrices are not huge (64 x 32). The values in these matrices will change, and I want those new values to be displayed in the visualization.
For the 2d matrix I want a plane of neurons.
I have tried creating a particle system using a plane geometry with as many vertices as neurons in the data matrix.
var width = 32;
var height = 64;
var planeGeometry = new THREE.PlaneGeometry( width, height, width - 1 , height - 1 );
var particlePlane = new THREE.ParticleSystem( planeGeometry, shaderMaterial );
In the fragment shader each particle is given a base texture (a white circle)
gl_FragColor = texture2D(baseTexture, gl_PointCoord);
And then I use a second texture containing the data matrix values (greyscale pixel values) to modify each base texture.
// Sets particle texture to desired color
// vertexPosition is a vec2 in coordinates local to the plane
gl_FragColor = gl_FragColor * texture2D( dataTexture, vertexPosition );
To calculate vertexPosition in the vertex shader I do the following (irrelevant lines omitted):
uniform float width;
uniform float height;
varying vec2 vertexPosition;
void main()
{
vertexPosition = vec2( position.x / width, position.y / height );
}
This is where I'm getting caught up. The vertexPosition does not seem to map properly to the dataTexture pixels. I want a one-to-one correspondence between particles and pixels.
How do I properly map from the locations of particles/vertices on a plane to the equivalent pixel locations in a texture?
I am new to three js, so please feel free to tell me my approach is totally off.
To get texture coordinates, there are ready-to-use attributes and matrices provided by three.js in GLSL (including the uv attribute used below); here is what I would use as a vertex shader:
varying vec2 vertexPosition;
void main() {
vertexPosition = uv;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
Then you have the xy position to use in the fragment shader via the varying vertexPosition.
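A minimal fragment-shader sketch using that varying (baseTexture and dataTexture are the uniform names from the question; setting dataTexture's filtering to THREE.NearestFilter is an assumption, to keep the particle-to-pixel lookup exact):
uniform sampler2D baseTexture;
uniform sampler2D dataTexture;
varying vec2 vertexPosition;
void main() {
    // white circle sprite for the particle
    vec4 base = texture2D( baseTexture, gl_PointCoord );
    // greyscale neuron value looked up at this particle's uv
    vec4 data = texture2D( dataTexture, vertexPosition );
    gl_FragColor = base * data;
}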

How to get fullscreen texture coordinates for a fullscreen texture from a previous rendering pass?

I do two rendering passes in webgl application using three.js (contrived example here):
renderer.render(depthScene, camera, depthTarget);
renderer.render(scene, camera);
The first rendering pass is to the render target depthTarget which I want to access in the second rendering pass as a texture uniform:
uniform sampler2D tDepth;
float unpack_depth( const in vec4 rgba_depth ) { ... }
void main() {
vec2 screenTexCoord = vec2( 1.0, 1.0 );
float depth = 1.0 - unpack_depth( texture2D( tDepth, screenTexCoord ) );
gl_FragColor = vec4( vec3( depth ), 1.0 );
}
My question is how do I get the value for screenTexCoord? It is not gl_FragCoord.xy.
To avoid a possible misunderstanding: I don't want to render the texture from the first pass to a quad. I want to use the texture from the first pass while rendering the geometry in the second pass.
EDIT:
According to the WebGL specification gl_FragCoord contains window coordinates which are normalized device coordinates (ndc) scaled by the viewport. The ndc are within [-1, 1] so the following should yield coordinates within [0, 1] for texture lookup:
vec2 ndcXY = gl_FragCoord.xy / vec2( viewWidth, viewHeight );
vec2 screenTexCoord = (ndcXY+1.0)/2.0;
But somewhere I must be wrong, because the updated example still does not show the (packed) depth?!
Finally figured it out myself. The correct way to calculate the texture coordinates is just:
vec2 screenTexCoord = gl_FragCoord.xy / vec2( viewWidth, viewHeight );
See a working example here.
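For completeness, a sketch of how the viewWidth and viewHeight uniforms used above could be fed from the JavaScript side (the uniform names match the shader; tying them to window.innerWidth/innerHeight assumes the renderer was sized that way, and passing the render target directly as tDepth follows the old API used in the question):
// assuming the second-pass material declares these uniforms
uniforms.tDepth.value     = depthTarget;       // newer three.js versions want depthTarget.texture
uniforms.viewWidth.value  = window.innerWidth; // must match the size passed to renderer.setSize()
uniforms.viewHeight.value = window.innerHeight;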

Get position from depth texture

I'm trying to reduce the number of post-process textures I have to draw in my scene. The end goal is to support an SSAO shader. The shader requires depth, position and normal data. Currently I am storing the depth and normals in one float texture and the position in another.
I've been doing some reading, and it seems possible to get the position by simply using the depth stored in the normal texture: you unproject the x and y and multiply them by the depth value. I can't seem to get this right, however, and it's probably due to my lack of understanding...
So currently my positions are drawn to a position texture. This is what it looks like (this is currently working correctly)
Here is my new method. I pass the normal texture that stores the normal x, y and z in the RGB channels and the depth in the w. In the SSAO shader I need to get the position, and this is how I'm doing it:
//viewport is a vec2 of the viewport width and height
//invProj is a mat4 using camera.projectionMatrixInverse (camera.projectionMatrixInverse.getInverse( camera.projectionMatrix );)
vec3 get_eye_normal()
{
vec2 frag_coord = gl_FragCoord.xy/viewport;
frag_coord = (frag_coord-0.5)*2.0;
vec4 device_normal = vec4(frag_coord, 0.0, 1.0);
return normalize((invProj * device_normal).xyz);
}
...
float srcDepth = texture2D(tNormalsTex, vUv).w;
vec3 eye_ray = get_eye_normal();
vec3 srcPosition = vec3( eye_ray.x * srcDepth , eye_ray.y * srcDepth , eye_ray.z * srcDepth );
//Previously was doing this:
//vec3 srcPosition = texture2D(tPositionTex, vUv).xyz;
However when I render out the positions it looks like this:
The SSAO looks very messed up using the new method. Any help would be greatly appreciated.
I was able to find a solution to this. You need to multiply the ray normal by the camera far - near (I was using the normalized depth value, but you need the world depth value).
I created a function to extract the position from the normal/depth texture like so:
First in the depth capture pass (fragment shader)
float ld = length(vPosition) / linearDepth; //linearDepth is cam.far - cam.near
gl_FragColor = vec4( normalize( vNormal ).xyz, ld );
And now in the shader trying to extract the position...
/// <summary>
/// This function will get the 3d world position from the Normal texture containing depth in its w component
/// </summary>
vec3 get_world_pos( vec2 uv )
{
vec2 frag_coord = uv;
float depth = texture2D(tNormals, frag_coord).w;
float unprojDepth = depth * linearDepth - 1.0;
frag_coord = (frag_coord-0.5)*2.0;
vec4 device_normal = vec4(frag_coord, 0.0, 1.0);
vec3 eye_ray = normalize((invProj * device_normal).xyz);
vec3 pos = vec3( eye_ray.x * unprojDepth, eye_ray.y * unprojDepth, eye_ray.z * unprojDepth );
return pos;
}
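For reference, a sketch of how the uniforms used by get_world_pos might be supplied from the JavaScript side (tNormals, linearDepth and invProj come from the shader above; ssaoMaterial and normalDepthTarget are illustrative names):
// inverse projection matrix, as described in the question's comments
var invProj = new THREE.Matrix4();
invProj.getInverse( camera.projectionMatrix );

ssaoMaterial.uniforms.invProj.value     = invProj;
ssaoMaterial.uniforms.linearDepth.value = camera.far - camera.near; // world-space depth range
ssaoMaterial.uniforms.tNormals.value    = normalDepthTarget;        // normals + depth from the first pass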
