How to get the vertices of an OBJ model in Three.js?

After loading a .obj model in Three.js I am unable to find the vertex data. Vertex data is needed to apply collision detection, as suggested by this answer:
var loader = new THREE.OBJLoader();
loader.load( 'models/wall.obj', function ( object ) {
    object.traverse( function ( node ) {
        if ( node.isMesh ) {
            console.log( node );
        }
    } );
    scene.add( object );
} );
The mesh has geometry.attributes.position.array, but I am unable to find "vertices" anywhere in the object.
Right now I am trying to convert the position.array data to vertices, but the code below is not working. This answer points out the problem correctly, but I am unable to use it to solve the issue:
var tempVertex = new THREE.Vector3();
// set tempVertex based on information from mesh.geometry.attributes.position
mesh.localToWorld(tempVertex);
// tempVertex is converted from local coordinates into world coordinates,
// which is its "after mesh transformation" position

geometry.attributes.position.array IS the vertices. Every three values makes up one vertex. You will also want to look at the index property (geometry.index), because that is a list of indices into the position array, defining the vertices that make up a shape. In the case of a Mesh defined as individual triangles, every three indices makes up one triangle. (Tri-strip data is slightly different, but the concept of referencing vertex values by the index is the same.)
You could alternately use the attribute convenience functions:
BufferAttribute.getX
BufferAttribute.getY
BufferAttribute.getZ
These functions take the index into account. So if you want the first vertex of the first triangle (index = 0):
let pos = geometry.attributes.position;
let vertex = new THREE.Vector3( pos.getX(0), pos.getY(0), pos.getZ(0) );
This is equivalent to:
let pos = geometry.attributes.position.array;
let idx = geometry.index.array;
let size = geometry.attributes.position.itemSize;
let vertex = new THREE.Vector3( pos[(idx[0] * size) + 0], pos[(idx[0] * size) + 1], pos[(idx[0] * size) + 2] );
Once you have your vertex, then you can use mesh.localToWorld to convert the point to world-space.

Related

three js LoadObject pivot [duplicate]

What I'm trying to achieve is a rotation of the geometry around a pivot point, and to make that the new definition of the geometry. I do not want to keep editing rotationZ; I want the current rotationZ to become the new rotationZ 0.
This way when I create a new rotation task, it will start from the new given pivot point and the newly given rad.
What I've tried, but then the rotation point moves:
// Add cube to do calculations
var box = new THREE.Box3().setFromObject( o );
var size = box.getSize();
var offsetZ = size.z / 2;
o.geometry.translate( 0, -offsetZ, 0 );
// Do rotation
o.rotateZ( CalcUtils.degreeToRad( degree ) );
o.geometry.translate( 0, offsetZ, 0 );
I also tried adding a Group, rotating that group, and then removing the group. But I need to keep the rotation without all the extra objects. The code I created:
var box = new THREE.Box3().setFromObject( o );
var size = box.getSize();
var geometry = new THREE.BoxGeometry( 20, 20, 20 );
var material = new THREE.MeshBasicMaterial( { color: 0xcc0000 } );
var cube = new THREE.Mesh( geometry, material );
cube.position.x = o.position.x;
cube.position.y = 0; // Height / 2
cube.position.z = -size.z / 2;

o.position.x = 0;
o.position.y = 0;
o.position.z = size.z / 2;

cube.add( o );
scene.add( cube );

// Do rotation
cube.rotateY( CalcUtils.degreeToRad( degree ) );

// Remove cube, and go back to single object
var position = o.getWorldPosition();
scene.add( o );
scene.remove( cube );
console.log( o );
o.position.x = position.x;
o.position.y = position.y;
o.position.z = position.z;
So my question: how do I save the current rotation as the new zero rotation point, i.e. make the rotation final?
EDIT
I added an image of what I want to do. The object is green. I have a 0 point of the world (black). I have a 0 point of the object (red). And I have rotation point (blue).
How can I rotate the object around the blue point?
I wouldn't recommend updating the vertices, because you'll run into trouble with the normals (unless you keep them up-to-date, too). Basically, it's a lot of hassle to perform an action for which the transformation matrices were intended.
You came pretty close by translating, rotating, and un-translating, so you were on the right track. There are some built-in methods which can help make this super easy.
// obj - your object (THREE.Object3D or derived)
// point - the point of rotation (THREE.Vector3)
// axis - the axis of rotation (normalized THREE.Vector3)
// theta - radian value of rotation
// pointIsWorld - boolean indicating the point is in world coordinates (default = false)
function rotateAboutPoint( obj, point, axis, theta, pointIsWorld ) {
    pointIsWorld = ( pointIsWorld === undefined ) ? false : pointIsWorld;

    if ( pointIsWorld ) {
        obj.parent.localToWorld( obj.position ); // compensate for world coordinate
    }

    obj.position.sub( point );                // remove the offset
    obj.position.applyAxisAngle( axis, theta ); // rotate the POSITION
    obj.position.add( point );                // re-add the offset

    if ( pointIsWorld ) {
        obj.parent.worldToLocal( obj.position ); // undo world coordinates compensation
    }

    obj.rotateOnAxis( axis, theta ); // rotate the OBJECT
}
After this method completes, the rotation/position IS persisted. The next time you call the method, it will transform the object from its current state to wherever your inputs define next.
Also note the compensation for using world coordinates. This allows you to use a point in either world coordinates or local space by converting the object's position vector into the correct coordinate system. It's probably best to use it this way any time your point and object are in different coordinate systems, though your observations may differ.
As a simple solution for anyone trying to quickly change the pivot point of an object, I would recommend creating a group and adding the mesh to the group, and rotating around that.
Full example
const geometry = new THREE.BoxGeometry();
const material = new THREE.MeshBasicMaterial({ color: 0xff0000 });
const cube = new THREE.Mesh(geometry, material);
scene.add( cube );
Right now, this will just rotate around its center:
cube.rotation.z = Math.PI / 4;
Create a new group and add the cube:
const group = new THREE.Group();
group.add( cube );
scene.add( group );
At this point we are back where we started. Now move the mesh:
cube.position.set( 0.5, 0.5, 0 );
Then move the group:
group.position.set( -0.5, -0.5, 0 );
Now use your group to rotate the object:
group.rotation.z = Math.PI / 4

Create points from shape

I'm trying to draw around 3D elements of an object to make them selectable.
In 2D it's pretty easy using Shape:
const shape = new Shape();
if (!points.length) return shape;
for (const point of points) shape.lineTo(point[0], point[1]);
if (mousePos) shape.lineTo(mousePos[0], mousePos[1]);
return shape;
I was thinking that in 3D I could draw around the entities with the mouse, fill in the gaps with a point cloud, and iterate over each point with the raycaster so it is adjusted to the point nearest the camera that intersects the mesh; then I should have a points shape that fits the underlying mesh.
My question is - if I have a Shape what is the easiest way to create points to cover the area of the shape so I can find the point closest to the camera?
Use BufferGeometry and PointsMaterial to render the particle points.
Points require vertices with a z position, so give them one:
import * as T from 'three';

const points = [ [ x1, y1 ], [ x2, y2 ], [ x3, y3 ] ]; /* ... your points in 2D ... */
const shape = new T.Shape( /* ... your shape from points ... */ );
const z = 3; // any z position you want

// because `THREE.Shape` is in 2D, give each point a Z position
const vertices = new Float32Array( points.reduce( ( ac, cv ) => [ ...ac, ...cv, z ], [] ) );
const geometry = new T.BufferGeometry();
geometry.setAttribute( 'position', new T.BufferAttribute( vertices, 3 ) );
const material = new T.PointsMaterial( { color: new T.Color( 'red' ) } );
const pointCloud = new T.Points( geometry, material ); // use Points, not Mesh, with PointsMaterial
scene.add( pointCloud );
https://threejs.org/docs/index.html?q=shape#api/en/core/BufferGeometry
Edit: points is an array of [x, y] pairs; vertices is a flattened array of [x, y, z] values.

Three.js raycast between two objects in a scene

I know how to raycast an object in the scene when clicking the mouse, but now I need to know whether two objects in the scene can raycast each other.
That is, I load a 3D object into the scene, for example two rooms as an OBJ object, then I add three mesh boxes at some points: for example, two points in the first room and one point in the second room.
The two points in the first room can raycast each other (they have direct vision), but the points in the first room can't raycast the point in the second room (they don't have vision through the room wall).
I attached the code used to load the scene and points. Any suggestion how to do this?
// LOAD MAIN 3D OBJECT
var objLoader = new THREE.OBJLoader();
objLoader.setMaterials( materials );
objLoader.setPath( './asset/3d/' );
objLoader.load( "model01.obj", function ( object ) {
    var mesh = object.children[0];
    mesh.castShadow = true;
    mesh.receiveShadow = true;
    mesh.rotation.x = Math.PI / 2;
    var box = new THREE.Box3().setFromObject( object );
    var ox = -( box.max.x + box.min.x ) / 2;
    var oy = -( box.max.y + box.min.y ) / 2;
    var oz = -( box.max.z + box.min.z ) / 2;
    mesh.position.set( ox, oy, oz );
    _scene.add( mesh );
    render();
    setTimeout( render, 1000 );
} );
// LOAD cta_points inside scene
for ( var i = 0; i < cta_points; i++ ) {
    var c_r = 2;
    var c_geometry = new THREE.BoxBufferGeometry( c_r, c_r, c_r );
    var c_material = new THREE.MeshLambertMaterial( { color: new THREE.Color( "rgb(" + ( 40 + 30 ) + ", 0, 0)" ), opacity: 0.0, transparent: true } );
    var c_mesh = new THREE.Mesh( c_geometry, c_material );
    var position = get_positions( i );
    c_mesh.position.copy( position );
    c_mesh.name = "BOX";
    scene.add( c_mesh );
}
Possibly take a look at:
How to detect collision in three.js?
Usually, to solve this problem, you would use a collision mask together with a collision group.
A collision group is assigned per object and is represented by a "bit" in a bitmask.
The wall could be in a separate collision group, like 4 (binary 100),
and the objects could be in another group, say 2 (binary 10).
Then you just check each object's collision group against the mask: a collision test applies only when the group's bit is set in the mask ( mask & object.collision_group ).
That way, you can call THREE.Raycaster().intersectObjects( objects ), where the arguments are only the objects that pass the bitmask test.
So you won't need to include the wall in the collision-detection test, since it uses a separate bitmask.

How to update Geometry vertex positions with OBJLoader

I am using OBJLoader to load multiple objects. I am trying to move one of the objects and need the updated vertex positions. While loading the objects I converted the BufferGeometry to Geometry and ran some functions. The samples I checked all update the vertices of the BufferGeometry. Do I need to convert back to BufferGeometry or not?
I need the real-time positions while moving, to calculate some other functions, so I prefer not to keep converting from BufferGeometry to Geometry and vice versa.
Here is a piece of code:
var tool = new THREE.OBJLoader();
tool.load( '../obj/tool.obj', function ( object ) {
    var material = new THREE.MeshLambertMaterial( { color: 0xA0A0A0 } );
    object.traverse( function ( child ) {
        if ( child instanceof THREE.Mesh ) {
            child.material = material;
            Geometry = new THREE.Geometry().fromBufferGeometry( child.geometry );
        }
        console.log( Geometry.vertices[220] );
        Geometry.position.x += 0.01;
        Geometry.verticesNeedUpdate = true;
        console.log( Geometry.vertices[220] );
    } );
} );
I have also read through the migration documents for the latest versions.
OBJLoader returns BufferGeometry. You can update a vertex position like so:
geometry.attributes.position.setX( index, x );
geometry.attributes.position.setXYZ( index, x, y, z ); // alternate
geometry.attributes.position.needsUpdate = true; // only required if geometry previously-rendered
Study http://threejs.org/docs/#Reference/Core/BufferAttribute
Alternatively, you can convert to Geometry. In your case, do the following in the loader callback:
child.geometry = new THREE.Geometry().fromBufferGeometry( child.geometry );
You then update a vertex position using this pattern:
geometry.vertices[ 0 ].set( x, y, z );
geometry.verticesNeedUpdate = true;
Only set the needsUpdate flag if the geometry has been previously rendered.
three.js r.71

Three.js Extruding standard complex geometries

I created a ring geometry that I use to represent a plane in a sphere. The problem with this geometry is that if I put the camera perpendicular to it, it disappears because it has no width.
To solve that, I want to extrude this geometry instead of directly creating the mesh.
There are a lot of posts on how to create the shapes needed for extrusion by pushing points and holes and whatever, but not on how to obtain the vertices correctly from an existing geometry.
First I tried to create the shape by passing the vertices of the ring geometry directly. It fails with an undefined at "vertices":
var orb_plane_shape = new THREE.Shape(ring_geom.vertices.clone());
Then, I tried to copy the vertices vector, place by place, and give it to the Shape constructor. It works but with the following problems:
- A warning: unable to triangulate polygon!
- There is no clear hole in the ring, and it looks like the vertex connection order has changed.
var vertices = [];
for ( var i = 0; i < ring_geom.vertices.length; i++ ) {
    vertices.push( ring_geom.vertices[i].clone() );
}
var orb_plane_shape = new THREE.Shape( vertices );

// extrude options
var options = {
    amount: 1, // default 100, only used when path is null
    bevelEnabled: false,
    bevelSegments: 2,
    steps: 1, // default 1, try 3 if path defined
    extrudePath: null // or path
};

// geometry
var geometry = new THREE.ExtrudeGeometry( orb_plane_shape, options );
plane_orb = new THREE.Mesh( geometry, material_plane_orb );
I would like to establish a method to convert any of the standard 2D geometries (circle, ring, ...) to a shape, so that I can extrude it.
Thanks
