three.js: Get updated vertices with skeletal animations?

Similar to this Stack Overflow question, Three.js: Get updated vertices with morph targets, I am interested in how to get the "actual" positions of the vertices of a mesh with a skeletal animation.
I have tried printing out the position values, but they are never actually updated (as I understand it, this is because they are calculated on the GPU, not the CPU). The answer to the question above mentioned doing the same computations on the CPU as on the GPU to get up-to-date vertex positions for morph target animations, but is there a way to apply the same approach to skeletal animations? If so, how?
Also, for the morph targets, someone pointed out that this code is already present in the Mesh.raycast function (https://github.com/mrdoob/three.js/blob/master/src/objects/Mesh.js#L115). However, I don't see how the raycast works with skeletally animated meshes: how does it know the updated positions of the faces?
Thank you!

A similar topic was discussed in the three.js forum some time ago. I presented a fiddle there that computes the AABB of a skinned mesh per frame. The code performs the same vertex displacement in JavaScript that normally happens in the vertex shader. The routine looks like this:
// scratch objects, created once and reused for every vertex
var vertex = new THREE.Vector3();
var temp = new THREE.Vector3();
var skinned = new THREE.Vector3();
var skinIndices = new THREE.Vector4();
var skinWeights = new THREE.Vector4();
var boneMatrix = new THREE.Matrix4();

function updateAABB( skinnedMesh, aabb ) {
    var skeleton = skinnedMesh.skeleton;
    var boneMatrices = skeleton.boneMatrices;
    var geometry = skinnedMesh.geometry;
    var index = geometry.index;
    var position = geometry.attributes.position;
    var skinIndex = geometry.attributes.skinIndex;
    var skinWeight = geometry.attributes.skinWeight;
    var bindMatrix = skinnedMesh.bindMatrix;
    var bindMatrixInverse = skinnedMesh.bindMatrixInverse;
    var i, j, si, sw, vertexIndex;

    aabb.makeEmpty();

    if ( index !== null ) {
        // indexed geometry
        for ( i = 0; i < index.count; i ++ ) {
            vertexIndex = index.getX( i ); // read the vertex index from the index buffer
            vertex.fromBufferAttribute( position, vertexIndex );
            skinIndices.fromBufferAttribute( skinIndex, vertexIndex );
            skinWeights.fromBufferAttribute( skinWeight, vertexIndex );

            // the following section is normally implemented in the vertex shader
            vertex.applyMatrix4( bindMatrix ); // transform to bind space
            skinned.set( 0, 0, 0 );
            for ( j = 0; j < 4; j ++ ) {
                si = skinIndices.getComponent( j );
                sw = skinWeights.getComponent( j );
                boneMatrix.fromArray( boneMatrices, si * 16 );
                // weighted vertex transformation
                temp.copy( vertex ).applyMatrix4( boneMatrix ).multiplyScalar( sw );
                skinned.add( temp );
            }
            skinned.applyMatrix4( bindMatrixInverse ); // back to local space

            // expand aabb
            aabb.expandByPoint( skinned );
        }
    } else {
        // non-indexed geometry
        for ( i = 0; i < position.count; i ++ ) {
            vertex.fromBufferAttribute( position, i );
            skinIndices.fromBufferAttribute( skinIndex, i );
            skinWeights.fromBufferAttribute( skinWeight, i );

            // the following section is normally implemented in the vertex shader
            vertex.applyMatrix4( bindMatrix ); // transform to bind space
            skinned.set( 0, 0, 0 );
            for ( j = 0; j < 4; j ++ ) {
                si = skinIndices.getComponent( j );
                sw = skinWeights.getComponent( j );
                boneMatrix.fromArray( boneMatrices, si * 16 );
                // weighted vertex transformation
                temp.copy( vertex ).applyMatrix4( boneMatrix ).multiplyScalar( sw );
                skinned.add( temp );
            }
            skinned.applyMatrix4( bindMatrixInverse ); // back to local space

            // expand aabb
            aabb.expandByPoint( skinned );
        }
    }

    aabb.applyMatrix4( skinnedMesh.matrixWorld ); // move the box to world space
}
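For illustration, here is a minimal usage sketch of the routine above (skinnedMesh, mixer, clock, renderer, scene and camera are assumed to already exist in your setup; they are not defined in the snippet):
var aabb = new THREE.Box3();

function animate() {
    requestAnimationFrame( animate );
    mixer.update( clock.getDelta() ); // advance the skeletal animation
    updateAABB( skinnedMesh, aabb );  // recompute the box from the skinned vertices
    renderer.render( scene, camera );
}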
Also, for the morph targets, someone pointed out that this code is already present in the Mesh.raycast function
Yes, you can raycast against morphed meshes. Raycasting against skinned meshes is not supported yet. The code in Mesh.raycast() is already very complex, and I think it needs some serious refactoring before it is enhanced further. In the meantime, you can use the presented code snippet to build a solution yourself. The vertex displacement logic is actually the most complicated part.
Live demo: https://jsfiddle.net/fnjkeg9x/1/
three.js R107

Related

Three.js raycasting collision not working

I am working on an arcade-style Everest Flight Simulator.
In the debugger where I am building this, I have Terrain and Helicopter classes that generate the BufferGeometry terrain mesh, the Groups for the helipad geometries, and the Group for the helicopter camera and geometry.
My issue is that currently I can't seem to get any collision to be detected. I imagine raycasting may not support BufferGeometry, which is a problem for me because the terrain has to be a BufferGeometry; it's far too expansive, and as a standard Geometry it causes a memory crash in the browser.
However, even testing the helipad geometries alone does not trigger a collision. They are in a Group, so I add the Groups to a global window array and make the collision check recursive, but to no avail.
Ultimately, I am open to other forms of collision detection, and I may need two kinds since I have to use BufferGeometry. Any ideas on how to fix this, or a better solution?
The Helicopter Object Itself
// Rect to Simulate Helicopter
const geometry = new THREE.BoxGeometry( 2, 1, 4 ),
material = new THREE.MeshBasicMaterial(),
rect = new THREE.Mesh( geometry, material );
rect.position.x = 0;
rect.position.y = terrain.returnCameraStartPosY();
rect.position.z = 0;
rect.rotation.order = "YXZ";
rect.name = "heli";
// Link Camera and Helicopter
const heliCam = new THREE.Group(),
player = new Helicopter(heliCam, "OH-58 Kiowa", 14000);
heliCam.add(camera);
heliCam.add(rect);
heliCam.position.set( 0, 2040, -2000 );
heliCam.name = "heliCam";
scene.add(heliCam);
Adding Objects to Global Collision Array
// Add Terrain
const terrain = new Terrain.ProceduralTerrain(),
terrainObj = terrain.returnTerrainObj(),
helipadEnd = new Terrain.Helipad( 0, 1200, -3600, "Finish", true ),
helipadStart = new Terrain.Helipad( 0, 2000, -2000, "Start", false ),
helipadObjStart = helipadStart.returnHelipadObj(),
helipadObjEnd = helipadEnd.returnHelipadObj();
window.collidableMeshList.push(terrainObj);
window.collidableMeshList.push(helipadObjStart);
window.collidableMeshList.push(helipadObjEnd);
Collision Detection Function Run Every Frame
collisionDetection() {
    const playerOrigin = this.heli.children[ 1 ].clone(); // Get Box Mesh from Player Group

    for ( let i = playerOrigin.geometry.vertices.length - 1; i >= 0; i-- ) {
        const localVertex = playerOrigin.geometry.vertices[ i ].clone(),
            globalVertex = localVertex.applyMatrix4( playerOrigin.matrix ),
            directionVector = globalVertex.sub( playerOrigin.position ),
            ray = new THREE.Raycaster( playerOrigin, directionVector.clone().normalize() ),
            collisionResults = ray.intersectObjects( window.collidableMeshList, true ); // Recursive Boolean for children

        if ( collisionResults.length > 0 ) {
            this.landed = true;
            console.log( "Collision" );
        }

        // if ( collisionResults.length > 0 && collisionResults[0].distance < directionVector.length() ){
        //     this.landed = true;
        //     console.log("Collision with vectorLength")
        // }
    }
}
It's hard to tell what's going on inside your custom classes, but it looks like you're passing an Object3D as the first argument of the Raycaster instead of a Vector3 when you use this.heli.children[1].clone(). Why don't you try something like:
var raycaster = new THREE.Raycaster();
var origin = this.heli.children[1].position;
raycaster.set(origin, direction);
Also, are you sure you're using a BufferGeometry? Accessing a vertex value like playerOrigin.geometry.vertices[i] should give you an error, because there is no vertices attribute in a BufferGeometry, so I don't know how you're determining the direction vector.
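Putting both points together, here is a rough sketch (untested against your classes, and assuming the helicopter box is a BufferGeometry) of how the per-vertex check could look with raycaster.set():
collisionDetection() {
    const box = this.heli.children[ 1 ]; // the helicopter box mesh
    const origin = new THREE.Vector3();
    box.getWorldPosition( origin );

    const positions = box.geometry.attributes.position; // a BufferGeometry has no .vertices
    const localVertex = new THREE.Vector3();
    const globalVertex = new THREE.Vector3();
    const direction = new THREE.Vector3();
    const raycaster = new THREE.Raycaster();

    for ( let i = 0; i < positions.count; i++ ) {
        localVertex.fromBufferAttribute( positions, i );
        globalVertex.copy( localVertex ).applyMatrix4( box.matrixWorld );
        direction.copy( globalVertex ).sub( origin );

        raycaster.set( origin, direction.clone().normalize() ); // Vector3 origin, not an Object3D
        raycaster.far = direction.length(); // ignore hits beyond the vertex itself

        const hits = raycaster.intersectObjects( window.collidableMeshList, true );
        if ( hits.length > 0 ) {
            this.landed = true;
            console.log( 'Collision' );
        }
    }
}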

How to fill a loaded STL mesh (not simple shapes like a cube, etc.) with random particles and animate them within the geometry's bounds in three.js

How can I fill a loaded STL mesh (like Suzanne, not simple shapes like a cube, etc.) with random particles and animate them within the geometry's bounds in three.js?
I have seen many examples, but all of them are for simple shapes with geometric bounds, like a cube or a sphere limited by coordinates around the center:
https://threejs.org/examples/?q=points#webgl_custom_attributes_points3
Thanks!
Here is a concept using a ray: count the intersections of the ray with the faces of the mesh, and if the number is odd, the point is inside the mesh:
Codepen
function fillWithPoints( geometry, count ) {
    var ray = new THREE.Ray();
    var dir = new THREE.Vector3( 1, 1, 1 ).normalize();
    var intersection = new THREE.Vector3(); // target vector for intersectTriangle()

    geometry.computeBoundingBox();
    var bbox = geometry.boundingBox;
    var points = [];

    for ( var i = 0; i < count; i++ ) {
        var p = setRandomVector( bbox.min, bbox.max );
        points.push( p );
    }

    function setRandomVector( min, max ) {
        var v = new THREE.Vector3(
            THREE.Math.randFloat( min.x, max.x ),
            THREE.Math.randFloat( min.y, max.y ),
            THREE.Math.randFloat( min.z, max.z )
        );
        if ( ! isInside( v ) ) { return setRandomVector( min, max ); }
        return v;
    }

    function isInside( v ) {
        ray.set( v, dir );
        var counter = 0;

        var pos = geometry.attributes.position;
        var faces = pos.count / 3; // assumes non-indexed geometry
        var vA = new THREE.Vector3(), vB = new THREE.Vector3(), vC = new THREE.Vector3();

        for ( var i = 0; i < faces; i++ ) {
            vA.fromBufferAttribute( pos, i * 3 + 0 );
            vB.fromBufferAttribute( pos, i * 3 + 1 );
            vC.fromBufferAttribute( pos, i * 3 + 2 );
            // count the hit regardless of which side of the triangle the ray enters
            if ( ray.intersectTriangle( vA, vB, vC, false, intersection ) !== null ) counter++;
        }

        return counter % 2 === 1;
    }

    return new THREE.BufferGeometry().setFromPoints( points );
}
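A possible usage sketch (mesh is assumed to be the object loaded from the STL file, with a non-indexed BufferGeometry):
var pointsGeometry = fillWithPoints( mesh.geometry, 1000 );
var points = new THREE.Points( pointsGeometry, new THREE.PointsMaterial( { color: 0xffffff, size: 0.05 } ) );
scene.add( points );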
The concept from the previous answer is very good, but it has some performance limitations:
the whole geometry is tested against every ray
the recursion on points that fall outside can lead to a stack overflow
Moreover, it's incompatible with indexed geometry (a sketch that also handles that case follows the demonstration link below).
It can be improved by creating a spatial hash map that stores the geometry's triangles and limits the intersection test to only part of the mesh.
Demonstration
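Regarding the indexed-geometry limitation mentioned above, here is a sketch of an isInside() variant (intended as a drop-in replacement for the inner function in fillWithPoints()) that reads faces through geometry.index when it is present:
function isInside( v ) {
    ray.set( v, dir );
    var counter = 0;

    var index = geometry.index; // null for non-indexed geometry
    var pos = geometry.attributes.position;
    var faces = ( index !== null ? index.count : pos.count ) / 3;
    var vA = new THREE.Vector3(), vB = new THREE.Vector3(), vC = new THREE.Vector3();
    var intersection = new THREE.Vector3();

    for ( var f = 0; f < faces; f++ ) {
        var a = index !== null ? index.getX( f * 3 + 0 ) : f * 3 + 0;
        var b = index !== null ? index.getX( f * 3 + 1 ) : f * 3 + 1;
        var c = index !== null ? index.getX( f * 3 + 2 ) : f * 3 + 2;
        vA.fromBufferAttribute( pos, a );
        vB.fromBufferAttribute( pos, b );
        vC.fromBufferAttribute( pos, c );
        if ( ray.intersectTriangle( vA, vB, vC, false, intersection ) !== null ) counter++;
    }

    return counter % 2 === 1;
}
It does not address the performance limitations; limiting the per-ray test with a spatial hash map would still help there.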

How to morphTarget of an .obj file (BufferGeometry)

I'm trying to morph the vertices of a loaded .obj file like in this example: https://threejs.org/docs/#api/materials/MeshDepthMaterial - when 'wireframe' and 'morphTargets' are activated in THREE.MeshDepthMaterial.
But I can't achieve the desired effect. In the above example the geometry can be morphed via geometry.morphTargets.push( { name: 'target1', vertices: vertices } ); however, morphTargets does not seem to be available for my loaded 3D object, as it is a BufferGeometry.
Instead I tried to change each vertex independently through myMesh.child.child.geometry.attributes.position.array[i]. It kind of works (the vertices of my mesh are moving), but not as well as in the above example.
Here is a Codepen of what I could do.
How can I reach the desired effect on my loaded .obj file?
Adding morph targets to a THREE.BufferGeometry is a bit different from THREE.Geometry. Example:
// after loading the mesh:
var morphAttributes = mesh.geometry.morphAttributes;
morphAttributes.position = [];

mesh.material.morphTargets = true;

var position = mesh.geometry.attributes.position.clone();

for ( var j = 0, jl = position.count; j < jl; j ++ ) {

    position.setXYZ(
        j,
        position.getX( j ) * 2 * Math.random(),
        position.getY( j ) * 2 * Math.random(),
        position.getZ( j ) * 2 * Math.random()
    );

}

morphAttributes.position.push( position ); // add the modified attribute as the first morph target

mesh.updateMorphTargets();
mesh.morphTargetInfluences[ 0 ] = 0;

// later, in your render() loop:
mesh.morphTargetInfluences[ 0 ] += 0.001;
three.js r90
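As a small variation (not part of the original answer), you could drive the influence from a clock in the render loop so the morph animates back and forth instead of growing without bound:
var clock = new THREE.Clock();

function render() {
    requestAnimationFrame( render );
    // map sin(t) from [-1, 1] to [0, 1] so the influence stays in a valid range
    mesh.morphTargetInfluences[ 0 ] = 0.5 + 0.5 * Math.sin( clock.getElapsedTime() );
    renderer.render( scene, camera );
}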

three.js / physi.js heightfield wont accept geometry

I'm attempting to create a large terrain in three.js, and I'm using physi.js as the physics engine.
Generating the geometry from the heightmap is no problem so far. However, when I add it as a THREE.Mesh it works beautifully; when I try adding it as a Physijs.HeightfieldMesh I get the following error:
TypeError: geometry.vertices[(a + (((this._physijs.ypts - b) - 1) * this._physijs.ypts))] is undefined
The geometry is generated as a plane, then the Z position of each vertex gets modified according to the heightmap.
var geometry = new THREE.PlaneGeometry( img.naturalWidth, img.naturalHeight, img.naturalWidth - 1, img.naturalHeight - 1 );
var material = new THREE.MeshLambertMaterial( { color: 0x0F0F0F } );

// set height of vertices
for ( var i = 0; i < plane.geometry.vertices.length; i++ ) {
    plane.geometry.vertices[ i ].z = data[ i ]; // let's just assume the data is correct since it works as a THREE.Mesh
}

var terrain = new THREE.Mesh( geometry, material ); // works

// does not work
var terrain = new Physijs.heightfieldMesh(
    geometry,
    material,
    0
);
I think your problem is that you are using "plane.geometry" instead of just "geometry" in the loop that sets the vertex heights.
Maybe it should be:
// set height of vertices
for ( var i = 0; i < geometry.vertices.length; i++ ) {
    geometry.vertices[ i ].z = data[ i ]; // let's just assume the data is correct since it works as a THREE.Mesh
}
This fiddle that I created seems to work ok.
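For completeness, here is the question's snippet with that change applied (a sketch; img and data are assumed to exist as in the question, and Physijs.HeightfieldMesh is, to the best of my knowledge, the capitalization Physijs expects for the constructor):
var geometry = new THREE.PlaneGeometry( img.naturalWidth, img.naturalHeight, img.naturalWidth - 1, img.naturalHeight - 1 );
var material = new THREE.MeshLambertMaterial( { color: 0x0F0F0F } );

// modify the vertices of the geometry that is actually passed to the mesh
for ( var i = 0; i < geometry.vertices.length; i++ ) {
    geometry.vertices[ i ].z = data[ i ];
}

var terrain = new Physijs.HeightfieldMesh( geometry, material, 0 );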

How to get other 3D objects within a radius of a position in three.js

I have a 3D scene in three.js in which I need to get an array of the objects that are within range X of a source object. At the moment, the example I'm using does raycasting inside a for loop that iterates over an array of "collidable objects" in the scene. I feel like there must be a better way to handle this, because this approach becomes quadratically more expensive if every object in the array has to raycast against every other object in the array. The performance impact is massive as the array of collidable objects grows.
//hold collidable objects
var collidableObjects = [];
var scene = new THREE.Scene();
var cubeGeo = new THREE.CubeGeometry( 10 , 10 , 10 );
var materialA = new THREE.MeshBasicMaterial( { color: 0xff0000 } );
var materialB = new THREE.MeshBasicMaterial( { color: 0x00ff00 } );
var cubeA = new THREE.Mesh( cubeGeo , materialA );
collidableObjects.push( cubeA );
scene.add( cubeA );
//Change this variable to a larger number to see the processing time explode
var range = 100;
for ( var x = 0; x < range; x += 20 ) {
    for ( var z = 0; z < range; z += 20 ) {
        if ( x === 0 && z === 0 ) continue;

        var cube = new THREE.Mesh( cubeGeo, materialB );
        scene.add( cube );
        cube.position.x = x;
        cube.position.z = z;
        collidableObjects.push( cube );

        var cube = cube.clone();
        scene.add( cube );
        cube.position.x = x * -1;
        cube.position.z = z;
        collidableObjects.push( cube );

        var cube = cube.clone();
        scene.add( cube );
        cube.position.x = x;
        cube.position.z = z * -1;
        collidableObjects.push( cube );

        var cube = cube.clone();
        scene.add( cube );
        cube.position.x = x * -1;
        cube.position.z = z * -1;
        collidableObjects.push( cube );
    }
}
var camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 0.1, 1000 );
var renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );
camera.position.y = 200;
camera.lookAt( scene.position );
function render() {
    //requestAnimationFrame(render);
    renderer.render( scene, camera );
    console.log( getObjectsWithinRange( cubeA, 30 ) );
}

function getObjectsWithinRange( source, range ) {
    var startTime = new Date().getTime();

    var inRange = [];
    for ( var i = 0; i < collidableObjects.length; ++i ) {
        var ray = new THREE.Raycaster( source.position, collidableObjects[ i ].position, 0, range );
        if ( ( obj = ray.intersectObject( collidableObjects[ i ] ) ) && obj.length ) {
            inRange.push( obj[ 0 ] );
        }
    }

    var endTime = new Date().getTime();
    console.log( 'Processing Time: ', endTime - startTime );
    return inRange;
}
render();
You can see the JSfiddle of this here.
If you change the indicated variable to a larger number, say 200, you'll see the processing time start to get out of control. I feel like there has to be a simpler way to narrow the array down, so I looked at the documentation for the Raycaster in three.js and noticed that both the near and far attributes say "This value indicates which objects can be discarded based on the distance," so I presume there is some internal function that refines the results by distance before all the rays are cast.
I did a little digging on this and came up with a single function inside of Ray.js.
distanceToPoint: function () {

    var v1 = new THREE.Vector3();

    return function ( point ) {

        var directionDistance = v1.subVectors( point, this.origin ).dot( this.direction );

        // point behind the ray
        if ( directionDistance < 0 ) {
            return this.origin.distanceTo( point );
        }

        v1.copy( this.direction ).multiplyScalar( directionDistance ).add( this.origin );

        return v1.distanceTo( point );

    };

}(),
I guess what I'm looking for is a better way to get all of the objects in the scene that are within radius X of a source object. I don't even need raycasting, because I'm not interested in mesh collision, just a list of the objects within radius X of the source object. I also don't need to recurse into the children of those objects because of the way the scene is set up. So I feel like there must be some internal function that simply uses THREE.Vector3 math to refine the list by distance; that has to be much cheaper than raycasting in this case. If a function that handles this already exists somewhere in three.js, I don't want to recreate one from scratch. I also realize this may be a very long-winded question for what could well be a one-line answer, but I wanted to include all the details here in case someone else searches for this later.
Collision checking is a more general problem and I think you'll have more success if you think about it in a context outside of Three.js. There are a number of methods for managing large numbers of objects that need to check for collision with each other. Here are a few optimizations that might be relevant to you here:
The first optimization is for each object to have a boolean property indicating whether it moved since the last physics update. If both objects you're comparing haven't moved, you don't need to recalculate collision. This is mostly relevant if you have a large number of objects in a steady state (like crates you can push around). There are a number of other optimizations you can build on top of this; for example, often if two objects haven't moved, they won't be colliding, because if they were colliding they would be recoiling (moving apart).
The second optimization is that you usually only need to check collision within a certain distance. For example, if you know that all of your objects are smaller than 100 units, then you can just check whether (x1-x2)^2 + (y1-y2)^2 + (z1-z2)^2 > 100^2. If the check is true (indicating the distance between the two objects is large) then you don't need to calculate detailed collisions. In fact this is more or less the near/far optimization that Raycaster provides for you, but you are not making use of it in your code, since you are always calling the intersectObject method.
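To make that concrete for this question, here is a minimal sketch (not from the original answer) of a range query that skips raycasting entirely and just compares squared distances with Vector3.distanceToSquared():
function getObjectsWithinRange( source, range ) {
    var rangeSq = range * range; // compare squared distances to avoid the square root
    return collidableObjects.filter( function ( obj ) {
        return obj !== source && source.position.distanceToSquared( obj.position ) <= rangeSq;
    } );
}
This is still a linear scan, but it is far cheaper per object than building a Raycaster and intersecting meshes.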
The third optimization is that you are allocating a bunch of new Raycaster and related objects in every physics update. Instead, you can keep a pool of Raycasters (or even a single Raycaster) and just update their properties. This will avoid a lot of garbage collecting.
Finally, the most common generalized approach to dealing with a large number of collideable objects is called spatial partitioning. The idea is basically that you divide your world into a given number of spaces and keep track of which space objects are in. Then, when you need to calculate collision, you only need to check other objects that are in the same space. The most common approach for doing this is to use an Octree (an 8-ary tree). As WestLangley mentioned, Three.js has an Octree implementation starting in r59, along with an example (source). Here is a reasonable introduction to the concept of spatial partitioning using 2D examples.
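For reference, here is a minimal sketch of the spatial-partitioning idea using a uniform grid, i.e. a plain hash map keyed by cell coordinates (all names are illustrative, not a three.js API):
var CELL_SIZE = 50; // assumption: at least as large as the typical query radius

function cellKey( x, y, z ) {
    return Math.floor( x / CELL_SIZE ) + ',' + Math.floor( y / CELL_SIZE ) + ',' + Math.floor( z / CELL_SIZE );
}

// rebuild (or incrementally update) the grid whenever objects move
function buildGrid( objects ) {
    var grid = new Map();
    objects.forEach( function ( obj ) {
        var key = cellKey( obj.position.x, obj.position.y, obj.position.z );
        if ( ! grid.has( key ) ) grid.set( key, [] );
        grid.get( key ).push( obj );
    } );
    return grid;
}

// only the cells overlapped by the query sphere are inspected
function queryRadius( grid, source, radius ) {
    var results = [];
    var p = source.position;
    var radiusSq = radius * radius;

    for ( var cx = Math.floor( ( p.x - radius ) / CELL_SIZE ); cx <= Math.floor( ( p.x + radius ) / CELL_SIZE ); cx++ ) {
        for ( var cy = Math.floor( ( p.y - radius ) / CELL_SIZE ); cy <= Math.floor( ( p.y + radius ) / CELL_SIZE ); cy++ ) {
            for ( var cz = Math.floor( ( p.z - radius ) / CELL_SIZE ); cz <= Math.floor( ( p.z + radius ) / CELL_SIZE ); cz++ ) {
                var bucket = grid.get( cx + ',' + cy + ',' + cz );
                if ( ! bucket ) continue;
                bucket.forEach( function ( obj ) {
                    if ( obj !== source && p.distanceToSquared( obj.position ) <= radiusSq ) results.push( obj );
                } );
            }
        }
    }

    return results;
}
Usage would look something like: var grid = buildGrid( collidableObjects ); followed by queryRadius( grid, cubeA, 30 ), which only inspects objects in nearby cells instead of the whole array.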
Outside of these optimizations, if you need to do anything particularly complicated, you may want to consider using an external physics library, which will manage optimizations like these for you. The most popular ones for use with Three.js at the moment are Physijs, Cannon.js, and Ammo.js.
