How to Set Plane Mesh to always lookAt camera without tilting - three.js

I'm trying to make a plane always face the camera or another moving object, but I want the plane to rotate on only one axis. How can I use the lookAt function so the plane only rotates sideways, without tilting up or down to look at the moving object?

Thanks, I managed to solve it easily by keeping the y component of the lookAt target equal to the plane's own y position:
if (planex) {
    var yaw_control = controls.getYawObject();
    // Use the target's x/z but keep the plane's own y, so the plane
    // only rotates around its vertical axis and never tilts up or down.
    var pos = new THREE.Vector3( yaw_control.position.x, planex.position.y, yaw_control.position.z );
    planex.lookAt( pos );
}

http://www.lighthouse3d.com/opengl/billboarding/index.php?billCyl
Maybe this article is of some help to you. You are looking for cylindrical billboards, I think, but read it from the first page. You can modify the specific mesh's matrix yourself, although I am not sure this is the most efficient way; I did it myself once.
Get the camera look vector:
three.js set and read camera look vector
Then get the camera's up vector, and take the cross product of the two, which gives the right vector, as described in the article above.
Using those vectors, you can fill in a new THREE.Matrix4() as explained in the article, and then replace the mesh's matrix with the newly created one. As I said, I am not deeply into the matrix internals of three.js, but this works, even if it is probably not the most efficient approach.
For this to work you will have to deactivate the mesh's automatic matrix update with
mesh.matrixAutoUpdate = false;
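A rough sketch of that matrix approach (my own reading of the article, so treat the details as an assumption rather than tested code):
// Cylindrical billboard: a fixed world up axis, so the mesh only rotates around y.
const look = new THREE.Vector3();
camera.getWorldDirection( look );   // direction the camera is facing
look.negate();                      // the billboard should face back toward the camera
look.y = 0;                         // drop the vertical component for a cylindrical billboard
look.normalize();

const up = new THREE.Vector3( 0, 1, 0 );                    // fixed up vector
const right = new THREE.Vector3().crossVectors( up, look ); // rightVec = upVec x lookVec

const m = new THREE.Matrix4();
m.makeBasis( right, up, look );     // columns: right, up, look
m.setPosition( mesh.position );

mesh.matrixAutoUpdate = false;      // as noted above
mesh.matrix.copy( m );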

ThreeJS Points (Point Cloud) with Lighting using custom Shader Material

Coded using:
ThreeJS v0.130.1
Framework: Angular 12, but that's not relevant to the issue.
Testing on the Chrome browser.
I am building an application that gets more than 100K points. I use these points to render a THREE.Points object on the screen.
I found that default THREE.PointsMaterial does not support lighting (the points are visible with or without adding lights to the scene).
So I tried to implement a custom ShaderMaterial. But I could not find a way to add lighting to the rendered object.
Here is a sample of what my code is doing:
Sample App on StackBlitz showing my current attempt
In this code, I am using sample values for the point cloud data, normals, and color, but everything else is similar to my actual application. I can see the 3D object, but I need proper lighting using the normals.
I need help or guidance to implement the following:
Add lighting to the custom shader material. I have Googled and tried many things, with no success so far.
Using normals, show the effect of lighting. (In this sample code the normals are fixed to the Y-axis direction, but in the actual application I calculate them with some vector logic.) So calculating the normals is already done; I want to use them to produce shine/shading effects in the custom shader material.
In this sample the color attribute is set to a fixed red color, but in the actual application I apply colors to the color attribute using a UV range from a texture.
Please advise how/if I can get lighting based on normals for a point cloud. Thanks.
Note: I looked at this Stack Overflow question, but it only deals with changing the alpha/transparency of points, not lighting.
Adding lighting to a custom material is a very complex process, especially since you could use Phong, Lambert, or Physical lighting methods, and there are many calculations that need to pass from the vertex to the fragment shader. For instance, this segment of shader code is just a small part of what you'd need.
Instead of trying to re-create lighting from scratch, I recommend you create a PlaneGeometry with the material you'd like (Phong, Lambert, Physical, etc...) and use an InstancedMesh to create thousands of instances, just like in this example.
Based on that example, the pseudo-code of how you could achieve a similar effect is something like this:
const count = 100000;
const geometry = new THREE.PlaneGeometry();
const material = new THREE.MeshPhongMaterial();
mesh = new THREE.InstancedMesh( geometry, material, count );
mesh.instanceMatrix.setUsage( THREE.DynamicDrawUsage ); // will be updated every frame
scene.add( mesh );
const dummy = new THREE.Object3D();

function update() {
    // Sets the rotation so each plane is always perpendicular to the camera
    dummy.lookAt( camera.position );
    // Updates the position of each plane
    for ( let i = 0; i < count; i ++ ) {
        dummy.position.set( x, y, z ); // per-instance position goes here
        dummy.updateMatrix();
        mesh.setMatrixAt( i, dummy.matrix );
    }
    mesh.instanceMatrix.needsUpdate = true; // tell three.js to re-upload the matrices
}
The for() loop would be the most expensive part of each frame, so if you need to update it on every frame, you might want to do this work in the vertex shader instead, but that's another question altogether.
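If you do go the GPU route, a hypothetical sketch of the idea (not part of the original answer) is to hook into the material's shader with onBeforeCompile and build the billboard in view space, using the instanceMatrix attribute that InstancedMesh provides:
// Billboard each instance in the vertex shader so the per-frame for() loop
// is no longer needed: keep only the instance translation, then offset the
// quad's vertices in view space so it always faces the camera.
material.onBeforeCompile = ( shader ) => {
    shader.vertexShader = shader.vertexShader.replace(
        '#include <project_vertex>',
        `
        // instance position in view space, ignoring the instance rotation
        vec4 mvPosition = modelViewMatrix * instanceMatrix * vec4( 0.0, 0.0, 0.0, 1.0 );
        // offset the plane's vertices in view space -> always camera-facing
        mvPosition.xyz += transformed;
        gl_Position = projectionMatrix * mvPosition;
        `
    );
};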

Can't get Three.js camera up direction

I need to get the camera's up direction, and I've tried many ways with no luck. I'm not an expert on quaternions, so I doubt I did it right.
I've tried:
camera.up
camera.up.applyMatrix4(camera.matrixWorld);
new THREE.Vector3(0,1,0).applyMatrix4(camera.matrixWorld);
camera.up.normalize().applyMatrix4(camera.matrixWorld);
After this I create two planes passing through two points of interest, add plane helpers to the scene, and I can see they are very far from where I expected them (I'm expecting two planes that look like the top and bottom of the camera frustum).
P.S. The camera is the shadow camera of a directional light, so an orthographic camera, and I manipulate the directional light's position and target before doing this operation. I've called updateMatrixWorld on the light, on its target and on the camera, and on the camera I've also called updateProjectionMatrix... still no results.
I've made a sandbox to show what I've tried so far and to better visualize what I want to achieve:
https://codesandbox.io/embed/throbbing-cache-j5yse
Once I manage to get the green arrow to point to the top of the blue triangle of the camera helper, I'm good to go.
In the normal render flow, shadow camera matrices are updated as part of rendering the shadow map (WebGLShadowMap.render).
However, if you want the updated matrix values before the render, then you'll need to update them manually (you already understand this part).
The shadow camera is a property of (not a child of) the DirectionalLight. As such, it doesn't follow the same rules as other scene objects when it comes to updating its matrices (because it's not really a child of the scene). Instead, you need to call the shadow property's updateMatrices method (inherited from LightShadow.updateMatrices).
const dl = new THREE.DirectionalLight(0xffffff, 1)
dl.shadow.updateMatrices(dl) // <<------------------------ Updates the shadow camera
This updates the shadow camera with information from the DirectionalLight's own matrix, and its target's matrix, to properly orient the shadow camera.
Finally, it looks like you're trying to get the "world up" of the camera. Personally, I'd use the convenience function localToWorld:
let up = new THREE.Vector3(0, 1, 0)
dl.shadow.camera.localToWorld(up) // destructively converts "up" from local-to-camera into world coordinates
Via trial and error I've figured out that what gave me the correct result was calling
directionalLight.shadow.updateMatrices(...)
and then
new THREE.Vector3(0, 1, 0).applyQuaternion(directionalLight.shadow.camera.quaternion)
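Putting those two steps together, a minimal sketch (assuming the directionalLight from the answer above):
// Make sure the shadow camera's matrices reflect the light's current
// position and target before reading its orientation.
directionalLight.shadow.updateMatrices( directionalLight );
// Rotate the local "up" axis by the shadow camera's orientation.
const up = new THREE.Vector3( 0, 1, 0 )
    .applyQuaternion( directionalLight.shadow.camera.quaternion );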

Raycasting to intersect objects that have been displaced by vertex shader

Let's say I have a vertical list of meshes created from PlaneBufferGeometry with ShaderMaterial. The meshes are distributed vertically and evenly spaced.
The list will have two states:
Displaying the meshes as they are
Displaying meshes with each object's vertices transformed by the vertex shader to the same arbitrary value, let's say z = -50. This gives a zoomed-out effect, and the user can scroll through the list (in the code we do this by moving the camera's y position)
In my app I'm trying to make my mouseover events work for the second state but it's tricky since the GPU transforms the vertices so the updated vertices are not reflected in the attributes on the JS side.
Note: I've looked into GPU picking and do not want to use it, because I believe there should be a simpler way to do this without render targets.
Attempted Solution
My current approach is to manually change the boundingBox of each plane when we are in the second state like so:
var box = new THREE.Box3().setFromObject(plane);
box.min.z = -50;
box.max.z = -50;
plane.geometry.boundingBox = box;
And then I change the boundingSphere's center to have the same z position of -50 after computing it, as sketched below.
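In code, that second step looks something like this (a sketch of what's described above):
// Recompute the sphere from the un-displaced geometry, then shift its
// center to where the vertex shader actually moves the vertices.
plane.geometry.computeBoundingSphere();
plane.geometry.boundingSphere.center.z = -50;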
I took this approach because I looked at the Raycaster and Mesh code in THREE.js, and it seems they check both the boundingSphere and the boundingBox for object intersections. So I thought that if I modified both of them to reflect the transforms done by the GPU, the raycaster would work, but it doesn't seem to be working for me.
The relevant raycaster code is here:
// mouse being vec2 of normalized coordinates and camera being a perspective camera
raycaster.setFromCamera( mouse, camera );
const intersects = raycaster.intersectObjects( planes );
Possible Theories
The only thing I can think of that might be wrong with this approach is the projection of the mouse coordinates. Since all the objects now lie on the plane z = -50, would I need to project the mouse coordinates onto that plane?
Inspired by the link posted by @prisoner849, I found a working solution: create additional transparent planes, one for each plane in the scene. I set their z position to -50 and intersect against these when in state #2.
A bit hacky, but it works for now.
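For reference, a minimal sketch of that workaround (names such as hitPlanes are mine, not from the original post):
// One invisible "hit proxy" per visible plane, placed where the vertex
// shader displaces the real geometry, so the CPU-side raycaster can hit it.
const hitPlanes = planes.map( ( plane ) => {
    const proxy = new THREE.Mesh(
        plane.geometry.clone(),
        new THREE.MeshBasicMaterial( { transparent: true, opacity: 0, depthWrite: false } )
    );
    proxy.position.copy( plane.position );
    proxy.position.z = -50;        // matches the shader displacement
    proxy.userData.target = plane; // remember which real plane it stands in for
    scene.add( proxy );
    return proxy;
} );

// In state #2, raycast against the proxies instead:
const intersects = raycaster.intersectObjects( hitPlanes );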

ThreeJS calculation for object in relation to camera position and orientation

It seems like this would be a pretty common problem, but I can't find an example online and I'm too much of a math noob...
Using ThreeJS, I have a library to do spatial audio positioning (https://github.com/tmwoz/hrtf-panner-js) based on user position, but my code assumes the camera looks straight ahead and doesn't move. Since my camera is moving, I need to get the xyz position of a 3D object in relation to the camera's position and orientation.
//finds the object in world coordinates
var p = new THREE.Vector3();
p.setFromMatrixPosition(visual.object.matrixWorld);
this.audioTrack.updateHrtf(p.x, p.y, p.z);
How do I translate the object into camera-space coordinates? Thanks for your help!
Note: I know that the WebAudio API has a mechanism to do this simply, but it doesn't have the power of the HRTF (Head Related Transfer Function) library, which sounds much better.
To transform a Vector3 vec from world space to camera space, do this:
camera.matrixWorldInverse.getInverse( camera.matrixWorld );
vec.applyMatrix4( camera.matrixWorldInverse );
three.js r.73
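In newer three.js revisions Matrix4.getInverse() has been removed; an equivalent sketch with the current API (reusing the question's variables) is:
// worldToLocal converts a world-space point into the camera's local space.
camera.updateMatrixWorld();
var p = new THREE.Vector3();
p.setFromMatrixPosition( visual.object.matrixWorld ); // object position in world space
camera.worldToLocal( p );                             // now relative to the camera
this.audioTrack.updateHrtf( p.x, p.y, p.z );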

Three.js Get Camera lookAt Vector

I'm looking to translate the camera along its lookAt vector. Once I have this vector, I can do scalar translation along it, use that point to move the camera position in global coordinates, and re-render. The tricky part is getting the camera's lookAt vector in the first place. I've looked at several other questions and solutions, but they don't seem to work for me.
You can't get the lookAt vector from the camera itself; you can, however, create a new vector and apply the camera's rotation to it:
var lookAtVector = new THREE.Vector3(0,0, -1);
lookAtVector.applyQuaternion(camera.quaternion);
The first choice should be cam.translateZ(), which moves the camera along its local z-axis directly.
There is a second option as well. You can extract the lookAt vector from the camera object's matrix property; you just need to take the elements corresponding to the local z-axis, negated, since the camera looks down its negative z-axis:
var lookAtVector = new THREE.Vector3( -cam.matrix.elements[8], -cam.matrix.elements[9], -cam.matrix.elements[10] );
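A usage sketch combining the two answers, dollying the camera an example distance along its look direction:
var lookAtVector = new THREE.Vector3( 0, 0, -1 ).applyQuaternion( camera.quaternion );
var distance = 5; // arbitrary example distance
camera.position.addScaledVector( lookAtVector, distance );
// or equivalently, in the camera's local space (instead of the line above):
// camera.translateZ( -distance ); // negative z is "forward" for a camera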
