ThreeJS: PlaneBufferGeometry, raycasting and faces - three.js

I made a buffered plane and set its vertices with:
var vertices = tg.attributes.position.array;
geometry.addAttribute('position', new THREE.BufferAttribute(vertices, 3));
Now I want to raycast to a face to get its Z value:
var z = intersects[i].object.geometry.vertices[intersects[i].face.a].z;
This worked on a standard geometry, since it had faces and a lot of the other things I'm trying to avoid in order to save memory.
My question comes from the index intersects[i].face.a. What do I have to add? There doesn't seem to be a method for adding "faces" to a buffered geometry. Right now there is just one face for the whole geometry at:
object.face.(a,b,c)
Perhaps there is another way of clicking on a face and getting its vertex value when using buffered geometries.
Tips? Thanks!

Buffer geometries keep their vertex positions in a flat attribute array.
If you want to obtain the z-value of a specific vertex of a buffer geometry, you can do it like this:
intersects[i].object.geometry.attributes.position.array[intersects[i].face.a * 3 + 2]
You can also use the z-coordinate of the point of intersection (which is in world coordinates):
intersects[i].point.z;
jsfiddle example (see function showDetails(intersect), the green plane is THREE.PlaneGeometry, the blue plane is THREE.PlaneBufferGeometry)
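For completeness, here is a minimal raycasting sketch using that index math; the names planeMesh, camera and the normalized pointer coordinates ndcX/ndcY are assumptions about your setup, not part of the original question:
var raycaster = new THREE.Raycaster();
var mouse = new THREE.Vector2( ndcX, ndcY ); // pointer position in normalized device coordinates (-1..1)

raycaster.setFromCamera( mouse, camera );
var intersects = raycaster.intersectObject( planeMesh );

for ( var i = 0; i < intersects.length; i++ ) {
    var positions = intersects[i].object.geometry.attributes.position.array;
    // face.a is a vertex index; each vertex takes 3 floats (x, y, z),
    // so its z component sits at index * 3 + 2
    var zFromVertex = positions[ intersects[i].face.a * 3 + 2 ];
    // or simply take the z of the intersection point (world coordinates)
    var zFromPoint = intersects[i].point.z;
}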

Related

three.js bind Plane to proxy Object transformations

I'm trying to transform a Plane according to a Object3D (position and rotation). That Plane is used as a clippingPlane.
If I call Plane.applyMatrix4( Object.matrixWorld ) it just applies the matrix once, and doesn't bind the Plane to that matrix for future transformations.
However, if I call the same function in a loop, the transformations applied to the Plane keep accumulating.
E.g. if I set Object.rotation.z = 1 once and then call Plane.applyMatrix4( Object.matrixWorld ) in a loop, the Plane rotates by 1 radian about the Z axis on every iteration.
Any ideas?
Since this Plane is used as a clipping plane, I also tried transforming it in the shader material of the mesh being clipped, which would probably be best performance-wise, but I'm not skilled enough to accomplish that.
I would just do this:
object.add( plane );
In this way, plane is a child of object. All transformations applied to object are also applied to plane. Besides, it's now very easy to transform plane relative to object.
The quickest solution I found is to reset and then apply the Object's .matrixWorld to the Plane. As I said before, it would be great to add useful transformation and "binding" methods to THREE.Plane, since it's used as a clipping plane too.
Right now I did it this way:
// will store the object's inverse transformations matrix in world coords
var inversePrevMatrix = new THREE.Matrix4();

function loop() {
    // reset plane previous transformations
    plane.applyMatrix4( inversePrevMatrix );

    // apply actual object matrix in world coordinates
    plane.applyMatrix4( object.matrixWorld );

    // set prevMatrix
    inversePrevMatrix.getInverse( object.matrixWorld );
}
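As a rough sketch of how the bound plane could then be used for clipping, assuming a renderer and a clippedMesh whose material should be cut (these names are illustrative, not from the question):
renderer.localClippingEnabled = true; // required for per-material clipping planes

clippedMesh.material.clippingPlanes = [ plane ];
clippedMesh.material.clipShadows = true; // optional

function animate() {
    requestAnimationFrame( animate );
    loop(); // keep the plane in sync with the object before rendering
    renderer.render( scene, camera );
}
animate();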

Get screen coordinates of a vertex in a THREE.js Points object using bufferGeometry

I want to have a DOM node track a particle in my THREE.js simulation. My simulation is built with the Points object, using a bufferGeometry. I'm setting the positions of each vertex in the render loop. Over the course of the simulation I'm moving / rotating both the camera and the Points object (through its parent Object3d).
I can't figure out how to get reliable screen coordinates for any of my particles. I've followed the instructions on other questions, like Three.JS: Get position of rotated object, and Converting World coordinates to Screen coordinates in Three.js using Projection, but none of them seem to work for me. At this point I can see that the calculated projections of the vertices are changing with my camera movements and object rotations, but not in a way that I can actually map to the screen. Also, sometimes two particles that neighbor each other on the screen will yield wildly different projected positions.
Here's my latest attempt:
const { x, y, z } = layout.getNodePosition(nodes[nodeHoverTarget].id)
var m = camera.matrixWorldInverse.clone()
var mw = points.matrixWorld.clone()
var p = camera.projectionMatrix.clone()
var modelViewMatrix = m.multiply(mw)
var position = new THREE.Vector3(x, y, z)
var projectedPosition = position.applyMatrix4(p.multiply(modelViewMatrix))
console.log(projectedPosition)
Essentially I've replicated the operations in my shader to derive gl_Position.
projectedPosition is where I'd like to store the screen coordinates.
I'm sorry if I've missed something obvious... I've tried a lot of things but so far nothing has worked :/
Thanks in advance for any help.
I figured it out...
var position = new THREE.Vector3(x, y, z)
var projectedPosition = position.applyMatrix4(points.matrixWorld).project(camera)
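Note that .project() leaves the vector in normalized device coordinates (-1 to 1). To actually place a DOM node, those still have to be mapped to pixels; a sketch, assuming the canvas fills its client area and domNode is the element to move:
var widthHalf = renderer.domElement.clientWidth / 2;
var heightHalf = renderer.domElement.clientHeight / 2;

var screenX = projectedPosition.x * widthHalf + widthHalf;
var screenY = -projectedPosition.y * heightHalf + heightHalf; // NDC y points up, CSS y points down

domNode.style.transform = 'translate(' + screenX + 'px, ' + screenY + 'px)';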

How to plot country names on the globe so the text meshes are aligned with the surface

I'm trying to plot country names on the globe so that the text meshes are aligned with the surface, but I'm failing to calculate the proper rotations. For the text I'm using THREE.TextGeometry. The name appears on click of a country's mesh, at the point of intersection found by raycasting. I'm lacking the knowledge of how to turn these coordinates into proper rotation angles. I'm not posting my code, as it's a complete mess, and I believe it will be easier for a knowledgeable person to explain how to achieve this in general.
Here is the desired result:
The other solution I tried (which, of course, is not the ultimate one) is based on this SO answer. The idea is to use the normal of the face you intersect with the raycaster.
1. Obtain the point of intersection.
2. Obtain the face of intersection.
3. Obtain the normal of the face (2).
4. Get the normal (3) in world coordinates.
5. Set the position of the text object as the sum of the point of intersection (1) and the normal in world coordinates (4).
6. Set the lookAt() vector of the text object as the sum of its position (5) and the normal in world coordinates (4).
Seems long, but it actually doesn't take much code:
var PGHelper = new THREE.PolarGridHelper(...); // let's imagine it's your text object ;)
var PGlookAt = new THREE.Vector3(); // point of lookAt for the "text" object
var normalMatrix = new THREE.Matrix3();
var worldNormal = new THREE.Vector3();
and in the animation loop:
for ( var i = 0; i < intersects.length; i++ ) {
    normalMatrix.getNormalMatrix( intersects[i].object.matrixWorld );
    worldNormal.copy( intersects[i].face.normal ).applyMatrix3( normalMatrix ).normalize();
    PGHelper.position.addVectors( intersects[i].point, worldNormal );
    PGlookAt.addVectors( PGHelper.position, worldNormal );
    PGHelper.lookAt( PGlookAt );
}
jsfiddle example
The method works with meshes of any geometry (though I only checked it with spheres and boxes ;) ). And I'm sure there are other, better methods.
Very interesting question. I have tried this approach: we can regard the text as a plane. Let's define a normal vector n from your sphere's center (or position) to the point on the sphere surface where you want to display the text. I have a simple way to orient the text along that normal:
1. Put the text mesh at the sphere's center: text.position.copy(sphere.position)
2. Make the text look at the point on the sphere surface: text.lookAt(point)
3. Relocate the text to the point: text.position.copy(point)
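A minimal sketch of those three steps, with the point taken from a raycast hit; textMesh, sphereMesh and raycaster are assumed to exist in your scene:
var hits = raycaster.intersectObject( sphereMesh );
if ( hits.length > 0 ) {
    var point = hits[0].point;                     // intersection point on the sphere surface
    textMesh.position.copy( sphereMesh.position ); // 1. start at the sphere's center
    textMesh.lookAt( point );                      // 2. orient the text toward the surface point
    textMesh.position.copy( point );               // 3. move the text onto the surface
}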

How to modify UV coordinates with three.js

I want to modify a JSON model's unwrap (UV coordinates) during runtime in order to move the texture over the surface of the geometry. I found faceVertexUvs in the Geometry class documentation. It contains one array, which is correct. That array contains a lot of elements, which I assume to be the UV coordinates for each vertex. Example code:
var uvs = mesh.geometry.faceVertexUvs[0];
console.log(uvs.length);
Gives me 4232 as output. So far so good. Now I would like to alter the u and v values, but the 4000+ elements of the array are strings ("1" through "4234"). I only found examples showing how to create an unwrap from scratch, in which case people push Vector2 data up into faceVertexUvs. So why am I not seeing Vector2 data in there?
Okay, I solved this one myself, hooray. I was searching for the wrong terms, because three.js makes this pretty simple. Instead of actually altering the unwrap (UV coordinates of the geometry), it is possible to set an offset on the texture itself:
texture.offset.y += 0.1;
For this to work with a seamless/tiling texture, we also have to tell three.js that the texture should wrap/repeat:
texture.wrapS = THREE.RepeatWrapping;
texture.wrapT = THREE.RepeatWrapping;
Source: Tunnel Effect Tutorial
Well, that was easy!
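For reference, a small usage sketch that scrolls a tiling texture in the render loop; texture, renderer, scene and camera are assumed to be set up already:
texture.wrapS = THREE.RepeatWrapping;
texture.wrapT = THREE.RepeatWrapping;

function animate() {
    requestAnimationFrame( animate );
    texture.offset.y += 0.001; // slide the texture along v every frame
    renderer.render( scene, camera );
}
animate();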

Three.js—rotation around arbitrary line

I am starting out with Three.js, so I might have misunderstood some basics. I have a usual 3D scene with a hierarchy like this:
.
+-container #(0,0,0) (Object3d, no own geometry)
  +-child 1 #(1,1,1)
  +-child 2 #(1, -2, 5)
  +-child 3 #(-4, -2, -3)
  .
  .
  . more should come
All »children« of the »container« are models imported from Blender. What I would like to do is rotate the whole container around a pivot axis based on the current selection, which should be one of the children.
Imagine three cubes in Blender, all selected, with the 3D cursor at the center of the first one and set as the pivot of the transformation. A rotation then transforms all cubes, but relative to the first one in the selection.
In terms of three.js, what I would like to do is rotate the container so that the rotation is applied to all children.
To do that I think that the following steps should do the trick:
create a matrix,
translate that matrix by the negative of the selected object's position,
rotate that matrix,
translate the matrix back to the selected object's position,
apply the transform to the container
I have tried the following code but the result is just wrong:
var sp = selection.position.clone(),
    m = new THREE.Matrix4();
selection.localToWorld(sp);
m.setPosition(sp.clone().negate());
// I've used makeRotationX for testing purposes, should be replaced with quaternion rotation later on…
m = m.multiply(new THREE.Matrix4().makeRotationX(2*180/Math.PI));
m = m.multiply(new THREE.Matrix4().makeTranslation(sp.x, sp.y, sp.z));
this._container.applyMatrix(m);
Thanks for help!
UPDATE
sign error—this works:
var sp = selection.position.clone(),
    m = new THREE.Matrix4();
m.makeTranslation(sp.x, sp.y, sp.z);
m.multiply(new THREE.Matrix4().makeRotationX(0.1));
m.multiply(new THREE.Matrix4().makeTranslation(-sp.x, -sp.y, -sp.z));
this._container.applyMatrix(m);
BUT that code does not really look that good; creating three matrices for that single operation seems like a bit of overhead. What is the usual »three.js way«?
UPDATE #2
Following the comment, here is an image describing what I would like to do:
The »arrows« at the origin stand for the parent container; the cube, the sphere and the cone are its »children«. The red line shows the line I would like to rotate the parent around, so that the rotation is applied to all children.
rotateOnAxis() takes a Vector as the axis, so the line the object rotates around passes through its origin.
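One way to tidy up the working update above is to wrap the translate-rotate-translate composition in a small helper; rotateAroundWorldPoint is just an illustrative name, not a built-in three.js method:
function rotateAroundWorldPoint( object, point, axis, angle ) {
    var m = new THREE.Matrix4()
        .makeTranslation( point.x, point.y, point.z )                                     // move the pivot back
        .multiply( new THREE.Matrix4().makeRotationAxis( axis, angle ) )                  // rotate about the (normalized) axis
        .multiply( new THREE.Matrix4().makeTranslation( -point.x, -point.y, -point.z ) ); // move the pivot to the origin
    object.applyMatrix( m );
}

// e.g. rotate the container by 0.1 rad about the X axis, around the selected child's position:
rotateAroundWorldPoint( this._container, sp, new THREE.Vector3( 1, 0, 0 ), 0.1 );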
