Three.js - How to update UV mapping when using morph targets?

I've been struggling with this one for hours, and found nothing either in the docs or here on SO that would point me in the right direction to achieve what I'm aiming at.
I'm loading a scene containing several meshes. The first one is used as the actual mesh rendered in the scene; the others are only used as morph targets (their geometries, strictly speaking).
loader.load("scene.json", function (loadedScene) {
    camera.lookAt(scene.position);

    var basis = loadedScene.getObjectByName("Main").geometry;
    var firstTarget = loadedScene.getObjectByName("Target1").geometry;
    // and so on for the rest of the "target" meshes

    basis.morphTargets[0] = { name: 'fstTarget', vertices: firstTarget.vertices };

    // (the real material is textured, with morphTargets: true so influences apply)
    var MAIN = new THREE.Mesh(basis);
    scene.add(MAIN);
});
This works very well, and I can morph the MAIN mesh with no hassle by playing with the influence values. The differences between the basis mesh and the target are not huge, basically they're just XY adjustments (2D shape variations).
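For reference, "playing with the influence values" just means setting entries on the mesh's morphTargetInfluences array, e.g.:
MAIN.morphTargetInfluences[0] = 0.5; // halfway towards 'fstTarget'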
Now, I'm using a textured material: UVs are properly projected (Blender export) and the result is nice with the MAIN mesh as is.
The problem comes when the basis shape is morphed towards a target geometry: as expected, the texture follows the geometry automatically, but this is not what I need. I'd need the UVs to "morph" towards the morph target's UVs so that the texture looks the same on the morphed shape.
Here is an example of what I have now (left: basis mesh; right: morphTargetInfluences = 1 for the first morph target):
[image: morph target and texture]
What I'd like to have is the exact same texture projection on the final, morphed mesh...
I can't figure out how to do that the right way. Should I reassign the target UVs to the MAIN mesh (and how would I do that)?
The result would be like having a cloth below which a shape is morphed, with the cloth "shrink-wrapped" against that underlying shape the whole time: you can actually see the shape change, but the cloth itself is not deformed, just wrapping itself properly and consistently around the shape...
Any help would be much appreciated! Thanks in advance :)
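A minimal sketch of the "reassign the target UVs" idea from the question, assuming the legacy THREE.Geometry API used above (faceVertexUvs[0] holds one array of three UVs per face):
// Sketch: copy the target's per-face UVs onto the basis geometry
basis.faceVertexUvs[0] = firstTarget.faceVertexUvs[0].map(function (faceUvs) {
    return faceUvs.map(function (uv) { return uv.clone(); });
});
basis.uvsNeedUpdate = true; // flag the UV buffer for re-upload
Note this swaps the UV set in one step rather than morphing it gradually; interpolating between the two UV sets in line with the influence value would follow the same pattern.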

ThreeJS Points (Point Cloud) with Lighting using custom Shader Material

Coded using:
ThreeJS v0.130.1
Framework: Angular 12, but that's not relevant to the issue.
Testing on Chrome browser.
I am building an application that gets more than 100K points. I use these points to render a THREE.Points object on the screen.
I found that default THREE.PointsMaterial does not support lighting (the points are visible with or without adding lights to the scene).
So I tried to implement a custom ShaderMaterial. But I could not find a way to add lighting to the rendered object.
Here is a sample of what my code is doing:
Sample App on StackBlitz showing my current attempt
In this code, I am using sample values for the point cloud data, normals, and color, but everything else is similar to my actual application. I can see the 3D object, but I need proper lighting based on the normals.
I need help or guidance to implement the following:
Add lighting to the custom shader material. I have Googled and tried many things, with no success so far.
Use the normals to show lighting effects (in this sample the normals are fixed to the Y-axis direction, but in my actual application I calculate them with some vector logic). So computing the normals is already done; I want to use them to produce shine/shading in the custom shader material.
And in this sample the color attribute is set to a fixed red, but in my actual application I apply colors to the color attribute from a texture using a UV range.
Please advise how/if I can get lighting based on normals for Point Cloud. Thanks.
Note: I looked at this Stack Overflow question, but it only deals with changing the alpha/transparency of points, not lighting.
Adding lighting to a custom material is a very complex process, especially since you could use Phong, Lambert, or Physical lighting methods, and there are a lot of calculations that need to pass from the vertex to the fragment shader. For instance, this segment of shader code is just a small part of what you'd need.
Instead of trying to re-create lighting from scratch, I recommend you create a PlaneGeometry with the material you'd like (Phong, Lambert, Physical, etc...) and use an InstancedMesh to create thousands of instances, just like in this example.
Based on that example, the pseudo-code of how you could achieve a similar effect is something like this:
const count = 100000;
const geometry = new THREE.PlaneGeometry();
const material = new THREE.MeshPhongMaterial();
const mesh = new THREE.InstancedMesh( geometry, material, count );
mesh.instanceMatrix.setUsage( THREE.DynamicDrawUsage ); // will be updated every frame
scene.add( mesh );
const dummy = new THREE.Object3D();
function update() {
    // Copy the camera's rotation so every plane stays perpendicular to it
    dummy.quaternion.copy( camera.quaternion );
    // Update the position of each plane (x, y, z come from your point data)
    for ( let i = 0; i < count; i ++ ) {
        dummy.position.set( x, y, z );
        dummy.updateMatrix();
        mesh.setMatrixAt( i, dummy.matrix );
    }
    mesh.instanceMatrix.needsUpdate = true; // tell the GPU the matrices changed
}
The for() loop would be the most expensive part of each frame, so if you need to update it on each frame, you might want to calculate this in the vertex shader, but that's another question altogether.

Raycasting to intersect objects that have been displaced by vertex shader

Let's say I have a vertical list of meshes created from PlaneBufferGeometry with ShaderMaterial. The meshes are distributed vertically and evenly spaced.
The list will have two states:
Displaying the meshes as they are
Displaying meshes with each object's vertices transformed by the vertex shader to the same arbitrary value, let's say z = -50. This gives a zoomed-out effect, and the user can scroll through the list (in the code we do this by moving the camera's y position)
In my app I'm trying to make the mouseover events work for the second state, but it's tricky: the GPU transforms the vertices, so the updated positions are not reflected in the attributes on the JS side.
*Note: I've looked into GPU picking and do not want to use it, because I believe there should be a simpler way to do this without render targets
Attempted Solution
My current approach is to manually change the boundingBox of each plane when we are in the second state like so:
// Force the bounds onto the plane the GPU displaces the vertices to (z = -50)
var box = new THREE.Box3().setFromObject(plane);
box.min.z = -50;
box.max.z = -50;
plane.geometry.boundingBox = box;
And then to change the boundingSphere's center to have the same z position of -50 after computing it.
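In code, that second step looks roughly like this (the -50 matches the displacement applied in the vertex shader):
plane.geometry.computeBoundingSphere();
plane.geometry.boundingSphere.center.z = -50; // mirror the GPU displacement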
I took this approach because I looked into the Raycaster and Mesh code in THREE.js, and it seems they check both the boundingSphere and boundingBox for object intersections. So I thought that if I modified both to reflect the transforms done by the GPU, the raycaster would work, but it doesn't seem to be working for me.
The relevant raycaster code is here:
// mouse being vec2 of normalized coordinates and camera being a perspective camera
raycaster.setFromCamera( mouse, camera );
const intersects = raycaster.intersectObjects( planes );
Possible Theories
The only thing I can think of that might be wrong with this approach is that I'm not projecting the mouse coords correctly. Since all the objects now lie on the plane z = -50, would I need to project the mouse coordinates onto that plane?
Inspired by the link posted by @prisoner849, I found a working solution: create additional transparent planes, one for each plane in the scene. I set these proxy planes' z position to -50 and intersect only with them when in state #2.
A bit hacky, but works for now.
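A sketch of that workaround (names are hypothetical; planes is the list from the raycast snippet above):
// Transparent proxy planes used only for raycasting in state #2
var hitPlanes = planes.map(function (plane) {
    var proxy = new THREE.Mesh(
        plane.geometry.clone(),
        new THREE.MeshBasicMaterial({ transparent: true, opacity: 0 })
    );
    proxy.position.copy(plane.position);
    proxy.position.z = -50; // where the vertex shader puts the real planes
    scene.add(proxy);
    return proxy;
});
// ...then, while in state #2:
var intersects = raycaster.intersectObjects(hitPlanes);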

Applying translation, rotation and scale to THREE.BoxGeometry(..)

I am creating a scene with several thousand cubes, each of which can be translated, scaled and rotated.
Initially, I tried creating a cube geometry, applying the transformations, creating a mesh with my material and adding it to the scene. That worked but was very slow because of all the draw calls.
So now, for all the cubes, I create a THREE.BoxGeometry(1,1,1), and merge it with a single THREE.Geometry() each time before finally creating a material and a single mesh and adding it to my scene.
The performance is drastically improved but I don't know how to apply a translation (XYZ), scale (XYZ) and rotation (quaternion) to each box before merging it. Can someone point me at the right syntax?
Secondly, if I merge my geometry like this, is there still a way to pick out a single box with a raycaster? Typically I've used the object name on the mesh but that's not relevant anymore.
Thank you.
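For the first part, here's a minimal sketch using the legacy THREE.Geometry merge API implied by the question (cubes and material are hypothetical stand-ins for your own data):
var merged = new THREE.Geometry();
var box = new THREE.BoxGeometry(1, 1, 1);
var matrix = new THREE.Matrix4();
cubes.forEach(function (cube) {
    // Bake translation (Vector3), rotation (Quaternion) and scale (Vector3)
    // into a single matrix, then merge a transformed copy of the box
    matrix.compose(cube.position, cube.quaternion, cube.scale);
    merged.merge(box, matrix);
});
scene.add(new THREE.Mesh(merged, material));
For the second part: a raycast intersection against the merged mesh reports a faceIndex, so one common approach is to keep a lookup table from face-index ranges to the original box they came from.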

three.js create texture from cubecamera

When using a cube camera one normally sets the envMap of the material to the cubeCamera.renderTarget, e.g.:
var myMaterial = new THREE.MeshBasicMaterial({
    color: 0xffffff,
    envMap: myCubeCamera.renderTarget,
    side: THREE.DoubleSide
});
This works great for meshes that are meant to reflect or refract what the cube camera sees. However, I'd like to simply create a texture and apply that to my mesh. In other words, I don't want my object to reflect or refract. I want the face normals to be ignored.
I tried using a THREE.WebGLRenderTarget, but it won't handle a cube camera. And using a single perspective camera with a WebGLRenderTarget does not give me a 360 texture, obviously.
Finally, simply assigning the cubeCamera.renderTarget to the 'map' property of the material doesn't work either.
Is it possible to do what I want?
r73.
Edit: this is not what the author of the question is looking for; I'll keep my answer below for other people.
Your envmap is already a texture, so there's no need to apply it as a map. Also, cubemaps and textures are structurally different, so it won't be possible to swap them, and even if you succeeded, the result is probably not what you'd expect.
From what you're asking, I understand you want a static envmap instead of one updated every frame. If that's the case, simply don't call myCubeCamera.updateCubeMap() in your render function; place it instead at the end of your scene initialization, with the cube camera at your desired position, and your envmap will show only that frame.
See examples below:
Dynamic Cubemap Example
Static Cubemap Example
The answer is: Set the refractionRatio on the material to 1.0. Then face normals are ignored since no refraction is occurring.
In a normal situation where the Cube Camera is in the same scene as the mesh, this would be pointless because the mesh would be invisible. But in cases where the Cube Camera is looking at a different scene, then this is a useful feature.
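A sketch of that fix applied to the material from the question (r73-era API; note refractionRatio only takes effect when the envMap uses refraction mapping, which is assumed here):
var envMap = myCubeCamera.renderTarget;
envMap.mapping = THREE.CubeRefractionMapping; // assumption: refraction mapping so refractionRatio applies
var myMaterial = new THREE.MeshBasicMaterial({
    color: 0xffffff,
    envMap: envMap,
    refractionRatio: 1.0, // 1.0 => rays pass straight through; normals effectively ignored
    side: THREE.DoubleSide
});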

Three.js / BufferGeometry how to use mesh & line material

I'm using BufferGeometry to draw triangles.
I can use a mesh, specifying 3 indices per triangle in the index attribute. I'm using a basic material without wireframe, and I suppose I'll be able to use raycasting.
Also, I have seen the LineSegments approach for wireframe. Interesting.
OK, my problem... I'd like to see my triangles as a wireframe, and I also need raycasting. So... the solution is to create my own shader, isn't it?
Thanks
You don't have to create a custom shader: you can have a mesh with a wireframe material, and the ray should still "hit" the object.
var mesh = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ wireframe: true }));
If for some reason it does not hit, or you want a LineSegments object, you can keep track of all the transformations that affected the object and apply them to a mesh you won't add to the scene:
var segmentObject = new THREE.LineSegments(geometry, lineMaterial);
scene.add(segmentObject);
var meshNotInScene = new THREE.Mesh(geometry, dummyMaterial); // never added to the scene
and you will use the mesh object to determine whether the raycast hit the object:
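For instance (a sketch, assuming the usual transforms are applied to segmentObject):
// Mirror the visible object's transform onto the hidden mesh, then raycast it
meshNotInScene.position.copy(segmentObject.position);
meshNotInScene.quaternion.copy(segmentObject.quaternion);
meshNotInScene.scale.copy(segmentObject.scale);
meshNotInScene.updateMatrixWorld(true);
var intersects = raycaster.intersectObject(meshNotInScene);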
This way you can have a different hitbox for an object: for example, if you have a flying donut, pairing it with a circle mesh lets you select it even if you click in its hole, etc.
Keep in mind that materials have sides; if you don't care which side is which, set "side" to THREE.DoubleSide.
