Non-uniform scale results in wrong imported vertex normals - three.js

Scenario:
a. I have a default sphere in 3ds Max.
b. I scale it down 10× on the X axis, then export it to OBJ.
c. I import that scaled-down sphere into three.js, then scale it back up 10× on X.
d. I want the sphere to look like it did in step (a), before it was scaled down, and I need to keep the imported vertex normals rather than recomputing them in three.js.
____
Problem:
Even after being scaled up, the sphere still has the same vertex normals it had when imported; the imported vertex normals are not updated by scaling in three.js, so the lighting on the sphere no longer looks correct. Please see these JSFiddles to understand what I mean:
JSFiddles to demonstrate:
Scaled down in Max and scaled up in three.js, resulting in wrong normals: http://jsfiddle.net/uury1jtt/5/
Default sphere from Max, not scaled at all, no problem; just to demonstrate what correct vertex normals should look like: http://jsfiddle.net/uury1jtt/4/
(The sphere is just a simplified sample; normally it can be any mesh.)
Reason:
I want to scale imported meshes non-uniformly so I can reuse them in different shapes.
Please suggest some ideas on how I can get rid of this problem. Every idea is much appreciated!

That's because your normals are all squished in the x-direction. You'll have to dig deep into the BufferGeometry's attributes and manually "stretch" those normals:
Try replacing your onclick callback in the JSFiddle with the following code:
document.getElementById("scaleButton").onclick = function () {
  object.scale.set(10, 1, 1);
  // Get the 'normal' attribute of the geometry
  var normals = object.children[0].geometry.getAttribute("normal");
  // Manually stretch the x-component of each normal by 10
  for (var i = 0; i < normals.count; i++) {
    normals.setX(i, normals.getX(i) * 10);
  }
  // Inform the renderer that the attribute was altered
  normals.needsUpdate = true;
};
Working JSFiddle
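If the scale factor isn't always a simple ×10 on X, a more general sketch (assuming, as in the fiddle, that the imported mesh sits at object.children[0] and that your three.js version has BufferGeometry.applyMatrix4) is to bake the scale into the geometry itself. applyMatrix4 transforms the position attribute with the matrix and the normal attribute with its inverse-transpose, so the imported normals stay consistent with the new shape:
// Bake a non-uniform scale into the imported geometry instead of scaling the Object3D
var scaleMatrix = new THREE.Matrix4().makeScale( 10, 1, 1 );
object.children[ 0 ].geometry.applyMatrix4( scaleMatrix );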


ThreeJS Points (Point Cloud) with Lighting using custom Shader Material

Coded using:
three.js v0.130.1
Framework: Angular 12, but that's not relevant to the issue.
Testing on Chrome browser.
I am building an application that receives more than 100K points, which I use to render a THREE.Points object on the screen.
I found that the default THREE.PointsMaterial does not support lighting (the points look the same with or without lights in the scene).
So I tried to implement a custom ShaderMaterial, but I could not find a way to add lighting to the rendered object.
Here is a sample of what my code is doing:
Sample App on StackBlitz showing my current attempt
In this code I am using sample values for the point cloud data, normals, and color, but everything else is similar to my actual application. I can see the 3D object, but it needs proper lighting based on the normals.
I need help or guidance to implement the following:
Add lighting to the custom shader material. I have Googled and tried many things with no success so far.
Use the normals to show lighting effects. (In this sample the normals are fixed to the Y-axis direction, but in my actual application I calculate them with some vector logic, so computing the normals is already done.) I want to use them to produce light shine/shading in the custom shader material.
Also, in this sample the color attribute is set to a fixed red, but in my actual application I apply colors to the color attribute from a texture using a UV range.
Please advise how (or whether) I can get lighting based on normals for a point cloud. Thanks.
Note: I looked at this Stack Overflow question, but it only deals with changing the alpha/transparency of points, not lighting.
Adding lighting to a custom material is a very complex process, especially since you could use Phong, Lambert, or physical lighting methods, and there are a lot of calculations that need to pass from the vertex shader to the fragment shader. For instance, this segment of shader code is just a small part of what you'd need.
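For a sense of what even the simplest version looks like, here is a minimal sketch of per-point Lambert shading with a single hard-coded directional light; the light direction, base color, and point size are assumptions for illustration, and it bypasses three.js's light uniforms entirely:
const material = new THREE.ShaderMaterial({
  vertexShader: `
    varying vec3 vNormal;
    void main() {
      // normalMatrix, modelViewMatrix and projectionMatrix are ShaderMaterial built-ins
      vNormal = normalize( normalMatrix * normal );
      gl_PointSize = 4.0;
      gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
    }
  `,
  fragmentShader: `
    varying vec3 vNormal;
    void main() {
      vec3 lightDir = normalize( vec3( 0.5, 1.0, 0.75 ) ); // assumed light direction (view space)
      vec3 baseColor = vec3( 1.0, 0.0, 0.0 );              // assumed base color
      float diffuse = max( dot( normalize( vNormal ), lightDir ), 0.0 );
      gl_FragColor = vec4( baseColor * ( 0.2 + 0.8 * diffuse ), 1.0 );
    }
  `
});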
Instead of trying to re-create lighting from scratch, I recommend you create a PlaneGeometry with the material you'd like (Phong, Lambert, Physical, etc...) and use an InstancedMesh to create thousands of instances, just like in this example.
Based on that example, pseudo-code for achieving a similar effect looks something like this:
const count = 100000;
const geometry = new THREE.PlaneGeometry();
const material = new THREE.MeshPhongMaterial();

mesh = new THREE.InstancedMesh( geometry, material, count );
mesh.instanceMatrix.setUsage( THREE.DynamicDrawUsage ); // will be updated every frame
scene.add( mesh );

const dummy = new THREE.Object3D();

function update() {
  // Set the rotation so each plane is always perpendicular to the camera
  dummy.lookAt( camera.position );

  // Update the position of each plane
  for ( let i = 0; i < count; i ++ ) {
    dummy.position.set( x, y, z );
    dummy.updateMatrix();
    mesh.setMatrixAt( i, dummy.matrix );
  }

  // Tell the renderer the instance matrices have changed
  mesh.instanceMatrix.needsUpdate = true;
}
The for() loop would be the most expensive part of each frame, so if you need to update it every frame you might want to move this work into the vertex shader, but that's another question altogether.

Raycasting to intersect objects that have been displaced by vertex shader

Let's say I have a vertical list of meshes created from PlaneBufferGeometry with ShaderMaterial. The meshes are distributed vertically and evenly spaced.
The list will have two states:
Displaying the meshes as they are
Displaying the meshes with each object's vertices transformed by the vertex shader to the same arbitrary value, say z = -50. This gives a zoomed-out effect, and the user can scroll through the list (in the code we do this by moving the camera's y position)
In my app I'm trying to make the mouseover events work for the second state, but it's tricky, since the GPU transforms the vertices and the updated vertices are not reflected in the attributes on the JS side.
*Note: I've looked into GPU picking and do not want to use it, because I believe there should be a simpler way to do this without render targets.
Attempted Solution
My current approach is to manually change the boundingBox of each plane when we are in the second state like so:
var box = new THREE.Box3().setFromObject(plane);
box.min.z = -50;
box.max.z = -50;
plane.geometry.boundingBox = box;
And then I change the boundingSphere's center to have the same z position of -50 after computing it.
I took this approach because I looked into the Raycaster and Mesh code in three.js, and it seems like they check both the boundingSphere and the boundingBox for object intersections. So I thought that if I modified both to reflect the transforms done by the GPU, the raycaster would work, but it doesn't seem to be working for me.
The relevant raycaster code is here:
// mouse being vec2 of normalized coordinates and camera being a perspective camera
raycaster.setFromCamera( mouse, camera );
const intersects = raycaster.intersectObjects( planes );
Possible Theories
The only thing I can think of that might be wrong with this approach is that I'm not projecting the mouse coords correctly. Since all the objects now lie on the plane z = -50, would I need to project the mouse coordinates onto that plane?
Inspired by the link posted by @prisoner849, I found a working solution: create additional transparent planes, one for each plane in the scene, set their z position to -50, and intersect with these when in state #2 (sketched below).
A bit hacky, but it works for now.
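For reference, a rough sketch of that workaround, with illustrative names (planes, hitProxies) and assuming the same z = -50 offset used in the vertex shader:
// Invisible hit proxies at the displaced depth; raycast against these in state #2
const hitProxies = planes.map( function ( plane ) {
  const proxy = new THREE.Mesh(
    plane.geometry.clone(),
    new THREE.MeshBasicMaterial( { transparent: true, opacity: 0, depthWrite: false } )
  );
  proxy.position.copy( plane.position );
  proxy.position.z = -50;         // match the offset applied in the vertex shader
  proxy.userData.source = plane;  // remember which visible plane this proxy stands for
  scene.add( proxy );
  return proxy;
} );

// In state #2, intersect the proxies instead of the displaced planes
raycaster.setFromCamera( mouse, camera );
const hits = raycaster.intersectObjects( hitProxies );
if ( hits.length > 0 ) {
  const hovered = hits[ 0 ].object.userData.source;
  // ...apply the hover effect to `hovered`
}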

Modifying a mesh's vertices with vertex shader doesn't change its BufferGeometry attributes and causes raycaster to be inaccurate

[Updated with a JSFiddle here]
If you hover slightly outside the plane, the raycaster still thinks it's hovering over the object, because we modified the z position in the vertex shader.
For my project I have a carousel of planes (PlaneBufferGeometry and ShaderMaterial) that I need hover effects on.
However, I have one state where the planes are shrunk by animating each vertex's z coordinate in the vertex shader. In this state my hover effects using THREE.Raycaster are broken, because the positions in the BufferGeometry attributes aren't updated, so the Raycaster still uses the same uvs as the original-sized planes.
I already tried calling the following functions for every plane p after the vertex shader runs:
p.frustrumCulled = false;
p.geometry.verticesNeedUpdate = true;
p.geometry.normalsNeedUpdate = true;
p.geometry.computeBoundingBox();
p.geometry.computeBoundingSphere();
p.geometry.computeFaceNormals();
p.geometry.computeVertexNormals();
p.geometry.attributes.position.needsUpdate = true;
I also know that if I just scaled each plane using THREE.Mesh's built-in scale, the uvs would be raycast correctly, but I can't do that because there's a specific animation I can only achieve with the vertex shader.
Raycasting happens on the CPU. If you displace vertices on the GPU (in the vertex shader), raycasting can't work correctly, since the intersection test has no way to respect the transformed vertices.
You have two options. You can apply the transformation on the CPU instead of the GPU before performing the raycast. Another option is to use a different approach, such as GPU picking, to detect interaction with a 3D object.
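A sketch of the first option, assuming the vertex shader does nothing more than offset z by a uniform (zOffset and originalZ are illustrative names, not from the question): mirror the displacement on the CPU copy of the geometry before raycasting.
// Mirror the GPU displacement on the CPU so the raycaster tests the same
// geometry that is drawn. originalZ is an assumed cache of the undisplaced z values.
const position = plane.geometry.getAttribute( 'position' );
for ( let i = 0; i < position.count; i ++ ) {
  position.setZ( i, originalZ[ i ] + zOffset );
}
position.needsUpdate = true;
plane.geometry.computeBoundingBox();
plane.geometry.computeBoundingSphere();

// The CPU-side data now matches the rendered result, so the usual raycast works
raycaster.setFromCamera( mouse, camera );
const intersects = raycaster.intersectObject( plane );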

Obtaining normal of the mesh face using raycaster intersectObjects - Three.js

I tried to obtain the normal of the mesh face using these:
ray = new THREE.Raycaster(x, y);
var intersection = ray.intersectObjects(objectsOptical, true);
var vector = intersection[0].face.normal;
I added intersection[0].point plus intersection[0].face.normal (multiplied by a constant) as one vertex and intersection[0].point as the second vertex of a (gray) line, and I got this (red lines are rays and gray lines should be normals, but they are not):
Illustrative image
Please help me to obtain NORMALS of the mesh FACE.
Thank you.
The normals that you have plotted with red lines look like they might be correct, taking into account the perspective projection.
The raycast test hits a single triangular face of your mesh. The normal you are referring to is the normal of the face the ray hit, i.e. from the original mesh.
In the source code for THREE.Raycaster, the intersection calculations can be seen returning the face directly.
Elsewhere it is suggested that Ray.intersectObjects() requires face centroids; however, I'm not sure about this, since the source code doesn't refer to centroids.
Perhaps the normals in the original geometry weren't correct. Try this function first:
geometry.computeFaceNormals();
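If the normals themselves turn out to be fine, another common cause of skewed-looking lines is that face.normal is expressed in the object's local space. A sketch of converting it to world space before building the line (variable names are illustrative):
const hit = intersection[ 0 ];

// face.normal is in the object's local space; bring it into world space
const normalMatrix = new THREE.Matrix3().getNormalMatrix( hit.object.matrixWorld );
const worldNormal = hit.face.normal.clone().applyMatrix3( normalMatrix ).normalize();

// End points of the gray "normal" line: the hit point, and the hit point
// pushed along the world-space normal by a constant
const start = hit.point.clone();
const end = hit.point.clone().add( worldNormal.multiplyScalar( 10 ) );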

three.js - Adjusting opacity of individual particles

I am trying to vary the opacity of particles as a function of their distance from a plane.
This issue describes my problem, and the answer a year ago was essentially "you can't": opacity is a parameter of the material, not of the individual element, so per-particle opacity is not possible.
Has anything changed? Is there any way I could achieve this? If individual particle colouring is possible, I imagine this isn't out of reach.
Cheers
EDIT - This answer shows how to set per-point opacity using a custom ShaderMaterial. See https://stackoverflow.com/a/67892506/1461008 for an approach using PointsMaterial.
ParticleSystem has been renamed to PointCloud and then to Points.
Yes, you can create a Point Cloud and vary the alpha value of each particle's color dynamically.
In three.js, you can do this by setting the Point Cloud's material to be a ShaderMaterial having an attribute equal to the desired alpha value for each particle.
If ShaderMaterials, vertex shaders and fragment shaders are new to you, here is a really simple Fiddle that implements a Point Cloud with dynamic alphas: https://jsfiddle.net/9Lvrnpwc/.
EDIT: Updated fiddle
three.js r.148
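A minimal sketch of that setup, assuming positions and alphas are flat arrays you already have (the linked fiddle's exact code may differ):
// Per-point alpha via a custom 'alpha' attribute read in the shaders
const geometry = new THREE.BufferGeometry();
geometry.setAttribute( 'position', new THREE.Float32BufferAttribute( positions, 3 ) );
geometry.setAttribute( 'alpha', new THREE.Float32BufferAttribute( alphas, 1 ) );

const material = new THREE.ShaderMaterial( {
  transparent: true,
  vertexShader: `
    attribute float alpha;
    varying float vAlpha;
    void main() {
      vAlpha = alpha;
      gl_PointSize = 8.0;
      gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
    }
  `,
  fragmentShader: `
    varying float vAlpha;
    void main() {
      gl_FragColor = vec4( 1.0, 0.0, 0.0, vAlpha );
    }
  `
} );

const points = new THREE.Points( geometry, material );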
Not sure why, but the proposed solution didn't work for me. I used somewhat tricky shading to make the points round and blurry at the edges, so the corners of the points were supposed to be transparent, but they appeared black: http://jsfiddle.net/5kz64ero/1/
Relevant part of my fragment shader:
// Distance from 0.0 to 0.5 from the center of the point
float d = distance(gl_PointCoord, vec2(0.5, 0.5));
// Applying sigmoid to smoothen the edge
float opacity = 1.0 / (1.0 + exp(16.0 * (d - 0.25)));
gl_FragColor = vec4(opacity * vColor, opacity);
I figured that traditionally this is solved by depth sorting (with the farthest points drawn first), and I found some evidence that older implementations of ParticleSystem in three.js had a sortParticles attribute, but it's not there anymore. In my case, sorting would also mean redoing it every time the camera position changes. Instead I set depthWrite: false, and it seems to solve the issue.
The result: http://jsfiddle.net/5kz64ero/6/
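For reference, a sketch of the material flags that approach relies on (the shader strings stand in for your own):
const material = new THREE.ShaderMaterial( {
  vertexShader: vertexShader,     // your existing shaders
  fragmentShader: fragmentShader,
  transparent: true,  // enable alpha blending for the soft edges
  depthWrite: false   // skip depth writes so unsorted points don't black each other out
} );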
