How to morphTarget of an .obj file (BufferGeometry) - three.js

I'm trying to morph the vertices of a loaded .obj file like in this example: https://threejs.org/docs/#api/materials/MeshDepthMaterial - when 'wireframe' and 'morphTargets' are activated in THREE.MeshDepthMaterial.
But I can't achieve the desired effect. In the example above the geometry can be morphed via geometry.morphTargets.push( { name: 'target1', vertices: vertices } ); however, morphTargets does not seem to be available for my loaded 3D object, since it is a BufferGeometry.
Instead I tried to change each vertex position independently via myMesh.child.child.geometry.attributes.position.array[i]. It kind of works (the vertices of my mesh are moving), but not as well as in the example above.
Here is a Codepen of what I could do.
How can I achieve the desired effect on my loaded .obj file?

Adding morph targets to THREE.BufferGeometry is a bit different than THREE.Geometry. Example:
// after loading the mesh:
var morphAttributes = mesh.geometry.morphAttributes;
morphAttributes.position = [];
mesh.material.morphTargets = true;

var position = mesh.geometry.attributes.position.clone();

for ( var j = 0, jl = position.count; j < jl; j ++ ) {

    position.setXYZ(
        j,
        position.getX( j ) * 2 * Math.random(),
        position.getY( j ) * 2 * Math.random(),
        position.getZ( j ) * 2 * Math.random()
    );

}

morphAttributes.position.push( position ); // register the modified attribute as morph target 0

mesh.updateMorphTargets();
mesh.morphTargetInfluences[ 0 ] = 0;

// later, in your render() loop:
mesh.morphTargetInfluences[ 0 ] += 0.001;
three.js r90
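As a side note, morph target influences are typically kept in [0, 1]; values beyond 1 extrapolate past the target. A small variation on the render loop above (my sketch, not part of the original answer) that oscillates the influence instead of letting it grow without bound:

var t = 0;
function render() {
    requestAnimationFrame( render );
    t += 0.01;
    // oscillate between the base geometry (0) and the morph target (1)
    mesh.morphTargetInfluences[ 0 ] = ( Math.sin( t ) + 1 ) / 2;
    renderer.render( scene, camera );
}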

Related

Three.js: how to combine several indices & vector arrays to one

I am trying to visualize a grand strategy (EU4, CK3, HOI) style map in Three.js. I started by creating meshes for every cell. The results are fine (screenshots 1 & 2).
Separate mesh approach - simple land / water differentiation:
Separate mesh approach - random cell color:
However, with a lot of cells, performance becomes an issue (I am getting 15 fps with 10k cells).
In order to improve performance I would like to combine all these separate indices & vertex arrays into 2 big arrays, which will then be used to create a single mesh.
I am looping through all my cells to push their indices, vertices & colors into the big arrays like so:
addCellGeometryToMapGeometry( cell ) {
    let startIndex = this.mapVertices.length;
    let cellIndices = cell.indices.length;
    let cellVertices = cell.vertices.length;
    let color = new THREE.Color( Math.random(), Math.random(), Math.random() );
    for ( let i = 0; i < cellIndices; i++ ) {
        this.mapIndices.push( startIndex + cell.indices[ i ] );
    }
    for ( let i = 0; i < cellVertices; i++ ) {
        this.mapVertices.push( cell.vertices[ i ] );
        this.mapColors.push( color );
    }
}
I then generate the combined mesh:
generateMapMesh() {
    let geometry = new THREE.BufferGeometry();
    const material = new THREE.MeshPhongMaterial( {
        side: THREE.DoubleSide,
        flatShading: true,
        vertexColors: true,
        shininess: 0
    } );
    geometry.setIndex( this.mapIndices );
    geometry.setAttribute( 'position', new THREE.Float32BufferAttribute( this.mapVertices, 3 ) );
    geometry.setAttribute( 'color', new THREE.Float32BufferAttribute( new Float32Array( this.mapColors.length ), 3 ) );
    for ( let i = 0; i < this.mapColors.length; i++ ) {
        geometry.attributes.color.setXYZ( i, this.mapColors[ i ].r, this.mapColors[ i ].g, this.mapColors[ i ].b );
    }
    return new THREE.Mesh( geometry, material );
}
Unfortunately the results are underwhelming:
While the data in the combined arrays looks okay, only every third cell is rendered. In some cases the indices seem to get mixed up too.
Combined approach - random cell colors:
In other similar topics it is recommended to merge existing meshes. However, I figured that my approach should let me better understand what is actually happening, and potentially save on performance as well.
Does my code have obvious flaws that I cannot see?
Or am I on the wrong path entirely? If so, how should it be done instead?
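For reference, the merging approach mentioned above might look roughly like this. This is only a sketch: buildCellGeometry is a hypothetical helper that returns one BufferGeometry per cell, and BufferGeometryUtils ships separately in the three.js examples:

// assumes examples/js/utils/BufferGeometryUtils.js is loaded
let geometries = cells.map( cell => buildCellGeometry( cell ) );
let merged = THREE.BufferGeometryUtils.mergeBufferGeometries( geometries );
let mesh = new THREE.Mesh( merged, material );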
I actually found the issue in my code. Wrong:
let startIndex = this.mapVertices.length;
The issue here is that the values in the indices array always reference a vertex, which consists of 3 consecutive entries in the vertices array. Correct:
let startIndex = this.mapVertices.length / 3;
Additionally, I should only push one color per vertex instead of one per vertex array entry (i.e. one per coordinate), while making sure that the array length of the geometry's color attribute stays as it is.
With these 2 changes, the result for the combined mesh looks exactly the same as when creating a separate mesh for every cell. The performance improvement is impressive.
separate meshes:
60 - 65 ms needed to render a frame
144 MB allocated memory
combined mesh:
0 - 1 ms needed to render a frame
58 MB allocated memory
Here are the fixed snippets:
addCellGeometryToMapGeometry( cell ) {
    let startIndex = this.mapVertices.length / 3;
    let cellIndices = cell.indices.length;
    let cellVertices = cell.vertices.length;
    console.log( 'Vertex -- maplength: ' + startIndex + ' celllength: ' + cellVertices );
    console.log( 'Indices -- maplength: ' + this.mapIndices.length + ' celllength: ' + cellIndices );
    console.log( { cell } );
    let color = new THREE.Color( Math.random(), Math.random(), Math.random() );
    for ( let i = 0; i < cellIndices; i++ ) {
        this.mapIndices.push( startIndex + cell.indices[ i ] );
    }
    for ( let i = 0; i < cellVertices; i++ ) {
        this.mapVertices.push( cell.vertices[ i ] );
        if ( i % 3 === 0 ) { this.mapColors.push( color ); }
    }
}
generateMapMesh() {
    let geometry = new THREE.BufferGeometry();
    const material = new THREE.MeshPhongMaterial( {
        side: THREE.DoubleSide,
        flatShading: true,
        vertexColors: true,
        shininess: 0
    } );
    geometry.setIndex( this.mapIndices );
    geometry.setAttribute( 'position', new THREE.Float32BufferAttribute( this.mapVertices, 3 ) );
    geometry.setAttribute( 'color', new THREE.Float32BufferAttribute( new Float32Array( this.mapVertices.length ), 3 ) );
    for ( let i = 0; i < this.mapColors.length; i++ ) {
        geometry.attributes.color.setXYZ( i, this.mapColors[ i ].r, this.mapColors[ i ].g, this.mapColors[ i ].b );
    }
    return new THREE.Mesh( geometry, material );
}

ThreeJS - THREE.BufferGeometry.computeBoundingSphere() Gives Error: NaN Position Values

I am creating a simple THREE.PlaneBufferGeometry using Three.js. The surface is a geologic surface in the earth.
This surface has local gaps or 'holes' in it, represented by NaNs. I have read another similar, but older, post where the suggestion was to fill the position Z component with 'undefined' rather than NaN. I tried that, but I get this error:
THREE.BufferGeometry.computeBoundingSphere(): Computed radius is NaN. The "position" attribute is likely to have NaN values.
PlaneBufferGeometry {uuid: "8D8EFFBF-7F10-4ED5-956D-5AE1EAD4DD41", name: "", type: "PlaneBufferGeometry", index: Uint16BufferAttribute, attributes: Object, …}
Here is the TypeScript function that builds the surface:
AddSurfaces( result ) {
    let surfaces: Surface[] = result;
    if ( this.surfaceGroup == null ) {
        this.surfaceGroup = new THREE.Group();
        this.globalGroup.add( this.surfaceGroup );
    }
    surfaces.forEach( surface => {
        var material = new THREE.MeshPhongMaterial( { color: 'blue', side: THREE.DoubleSide } );
        let mesh: Mesh2D = surface.arealMesh;
        let values: number[][] = surface.values;
        let geometry: PlaneBufferGeometry = new THREE.PlaneBufferGeometry( mesh.width, mesh.height, mesh.nx - 1, mesh.ny - 1 );
        var positions = geometry.getAttribute( 'position' );
        let node: number = 0;
        // Surfaces in Three JS are ordered from top left corner x going fastest left to right
        // and then Y ('j') going from top to bottom. This is backwards in Y from how we do the
        // modelling in the backend.
        for ( let j = mesh.ny - 1; j >= 0; j-- ) {
            for ( let i = 0; i < mesh.nx; i++ ) {
                let value: number = values[ i ][ j ];
                if ( !isNaN( values[ i ][ j ] ) ) {
                    positions.setZ( node, -values[ i ][ j ] );
                }
                else {
                    positions.setZ( node, undefined ); // This does not work? Any ideas?
                }
                node++;
            }
        }
        geometry.computeVertexNormals();
        var plane = new THREE.Mesh( geometry, material );
        plane.receiveShadow = true;
        plane.castShadow = true;
        let xOrigin: number = mesh.xOrigin;
        let yOrigin: number = mesh.yOrigin;
        let cx: number = xOrigin + ( mesh.width / 2.0 );
        let cy: number = yOrigin + ( mesh.height / 2.0 );
        // translate point to origin
        let tempX: number = xOrigin - cx;
        let tempY: number = yOrigin - cy;
        let azi: number = mesh.azimuth;
        let aziRad = azi * Math.PI / 180.0;
        // now apply rotation
        let rotatedX: number = tempX * Math.cos( aziRad ) - tempY * Math.sin( aziRad );
        let rotatedY: number = tempX * Math.sin( aziRad ) + tempY * Math.cos( aziRad );
        cx += ( tempX - rotatedX );
        cy += ( tempY - rotatedY );
        plane.position.set( cx, cy, 0.0 );
        plane.rotateZ( aziRad );
        this.surfaceGroup.add( plane );
    } );
    this.UpdateCamera();
    this.animate();
}
Thanks!
I have read another similar, but older, post where the suggestion was to fill the position Z component with 'undefined' rather than NaN.
Using undefined will fail in the same way as using NaN. BufferGeometry.computeBoundingSphere() computes the radius based on Vector3.distanceToSquared(). If you call this method with a vector that contains no valid numerical data, NaN is returned.
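A quick way to see the propagation (a minimal sketch, not part of the original answer):

// any NaN component poisons the squared-distance computation
var v = new THREE.Vector3( 0, 0, NaN );
console.log( v.distanceToSquared( new THREE.Vector3( 0, 0, 0 ) ) ); // NaN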
Hence, you can't represent the gaps in a geometry with NaN or undefined position data. The better way is to generate a geometry which actually represents the shape of your geologic surface. ShapeBufferGeometry might be a better candidate, since shapes support the concept of holes.
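To illustrate the suggestion, a minimal sketch of a shape with a hole; the outline and hole coordinates here are invented for the example:

// outer boundary of the surface
var shape = new THREE.Shape();
shape.moveTo( 0, 0 );
shape.lineTo( 0, 10 );
shape.lineTo( 10, 10 );
shape.lineTo( 10, 0 );
shape.lineTo( 0, 0 );

// a gap ('hole') inside the surface
var hole = new THREE.Path();
hole.absarc( 5, 5, 1.5, 0, Math.PI * 2, false );
shape.holes.push( hole );

var geometry = new THREE.ShapeBufferGeometry( shape );
var mesh = new THREE.Mesh( geometry, new THREE.MeshPhongMaterial( { color: 'blue', side: THREE.DoubleSide } ) );

The triangulator only creates faces inside the outline and around the holes, so the gaps are real geometry rather than NaN placeholders.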
three.js r117
THREE.PlaneBufferGeometry parameters:
width: number;
height: number;
widthSegments: number;
heightSegments: number;
widthSegments and heightSegments should be 1 or greater; if widthSegments < 1, the segment count may end up as 0 or NaN, which also produces NaN position values.
In my case, it was happening when I tried to create a beveled shape based on a single vector or a bunch of identical vectors - so there was only a single point. Filtering out such shapes solved the issue.

three.js: Get updated vertices with skeletal animations?

Similar to the Stack Overflow question Three.js: Get updated vertices with morph targets, I am interested in how to get the "actual" positions of the vertices of a mesh with a skeletal animation.
I have tried printing out the position values, but they are never actually updated (as I understand it, this is because they are calculated on the GPU, not the CPU). The answer to the question above suggests doing the same computations on the CPU as on the GPU to get the up-to-date vertex positions for morph target animations, but is there a way to apply the same approach to skeletal animations? If so, how?
Also, for the morph targets, someone pointed out that this code is already present in the Mesh.raycast function (https://github.com/mrdoob/three.js/blob/master/src/objects/Mesh.js#L115). However, I don't see HOW the raycast works with skeletal animation meshes - how does it know the updated positions of the faces?
Thank you!
A similar topic was discussed in the three.js forum some time ago. I presented a fiddle there which computes the AABB for a skinned mesh per frame. The code performs the same vertex displacement in JavaScript that is normally done in the vertex shader. The routine looks like this:
// scratch objects, reused across calls to avoid per-vertex allocations
var vertex = new THREE.Vector3();
var temp = new THREE.Vector3();
var skinned = new THREE.Vector3();
var skinIndices = new THREE.Vector4();
var skinWeights = new THREE.Vector4();
var boneMatrix = new THREE.Matrix4();

function updateAABB( skinnedMesh, aabb ) {
    var skeleton = skinnedMesh.skeleton;
    var boneMatrices = skeleton.boneMatrices;
    var geometry = skinnedMesh.geometry;
    var index = geometry.index;
    var position = geometry.attributes.position;
    var skinIndex = geometry.attributes.skinIndex;
    var skinWeight = geometry.attributes.skinWeight;
    var bindMatrix = skinnedMesh.bindMatrix;
    var bindMatrixInverse = skinnedMesh.bindMatrixInverse;
    var i, j, si, sw, vertexIndex;
    aabb.makeEmpty();
    //
    if ( index !== null ) {
        // indexed geometry
        for ( i = 0; i < index.count; i ++ ) {
            vertexIndex = index.getX( i );
            vertex.fromBufferAttribute( position, vertexIndex );
            skinIndices.fromBufferAttribute( skinIndex, vertexIndex );
            skinWeights.fromBufferAttribute( skinWeight, vertexIndex );
            // the following code section is normally implemented in the vertex shader
            vertex.applyMatrix4( bindMatrix ); // transform to bind space
            skinned.set( 0, 0, 0 );
            for ( j = 0; j < 4; j ++ ) {
                si = skinIndices.getComponent( j );
                sw = skinWeights.getComponent( j );
                boneMatrix.fromArray( boneMatrices, si * 16 );
                // weighted vertex transformation
                temp.copy( vertex ).applyMatrix4( boneMatrix ).multiplyScalar( sw );
                skinned.add( temp );
            }
            skinned.applyMatrix4( bindMatrixInverse ); // back to local space
            // expand aabb
            aabb.expandByPoint( skinned );
        }
    } else {
        // non-indexed geometry
        for ( i = 0; i < position.count; i ++ ) {
            vertex.fromBufferAttribute( position, i );
            skinIndices.fromBufferAttribute( skinIndex, i );
            skinWeights.fromBufferAttribute( skinWeight, i );
            // the following code section is normally implemented in the vertex shader
            vertex.applyMatrix4( bindMatrix ); // transform to bind space
            skinned.set( 0, 0, 0 );
            for ( j = 0; j < 4; j ++ ) {
                si = skinIndices.getComponent( j );
                sw = skinWeights.getComponent( j );
                boneMatrix.fromArray( boneMatrices, si * 16 );
                // weighted vertex transformation
                temp.copy( vertex ).applyMatrix4( boneMatrix ).multiplyScalar( sw );
                skinned.add( temp );
            }
            skinned.applyMatrix4( bindMatrixInverse ); // back to local space
            // expand aabb
            aabb.expandByPoint( skinned );
        }
    }
    aabb.applyMatrix4( skinnedMesh.matrixWorld );
}
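For context, a minimal sketch of how this routine might be driven from a render loop; the mixer, skinnedMesh, renderer, scene and camera names are assumptions, not part of the original fiddle:

var aabb = new THREE.Box3();
var clock = new THREE.Clock();

function animate() {
    requestAnimationFrame( animate );
    mixer.update( clock.getDelta() ); // advance the skeletal animation
    updateAABB( skinnedMesh, aabb );  // recompute the box from the skinned vertices
    renderer.render( scene, camera );
}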
Also, for the morph targets, someone pointed out that this code is already present in the Mesh.raycast function
Yes, you can raycast against morphed meshes. Raycasting against skinned meshes is not supported yet. The code in Mesh.raycast() is already very complex; I think it needs some serious refactoring before it is enhanced further. In the meantime, you can use the presented code snippet to build a solution yourself. The vertex displacement logic is actually the most complicated part.
Live demo: https://jsfiddle.net/fnjkeg9x/1/
three.js r107

THREEJS - Indexed BufferGeometry with 2 materials

I want to create ONE single buffer geometry that can hold many materials.
I have read that in order to achieve this in BufferGeometry, I need to use groups. So I created the following "floor" mesh:
var gg = new THREE.BufferGeometry(),
    vtx = [],
    fc = [ [], [] ],
    mm = [
        new THREE.MeshLambertMaterial( { color: 0xff0000 } ),
        new THREE.MeshLambertMaterial( { color: 0x0000ff } )
    ];
for ( var y = 0; y < 11; y++ )
    for ( var x = 0; x < 11; x++ ) {
        vtx.push( x - 5, 0, y - 5 );
        if ( x && y ) {
            var p = ( vtx.length / 3 ) - 1;
            fc[ ( x % 2 ) ^ ( y % 2 ) ].push(
                p, p - 11, p - 1,
                p - 1, p - 11, p - 12
            );
        }
    }
gg.addAttribute( 'position', new THREE.Float32BufferAttribute( vtx, 3 ) );
Array.prototype.push.apply( fc[ 0 ], fc[ 1 ] );
gg.setIndex( fc[ 0 ] );
gg.computeVertexNormals();
gg.addGroup( 0, 100, 0 );
gg.addGroup( 100, 100, 1 );
scene.add( new THREE.Mesh( gg, mm ) );
THE ISSUE:
Looking at the example at https://www.crazygao.com/vc/tst2.htm you can see that the BLUE material looks weird.
A single material shows up OK.
With 2 materials and groups as above, the BLUE always shows up really strange.
Changing the 1st group to start=0, count=200 (for all triangles) and removing the 2nd group shows MORE RED squares (obviously), but still not in the way I would like.
Changing the 1st group count to any value greater than 200 causes a crash (obviously) from attempting to access vertices out of range...
Does anyone know what I should do?
I am using THREE.js v101, and I would prefer not to create a special custom shader for this, not to add another vertex buffer duplicating the ones I already have, and not to create 2 meshes, as that may get much more complicated with advanced models.
Check out this: https://jsfiddle.net/mmalex/zebos3va/
Fix #1 - don't define group 0.
Fix #2 - the 2nd parameter of .addGroup() is the number of indices to draw; it must be a multiple of 3 (100 was wrong).
var gg = new THREE.BufferGeometry(),
    vtx = [],
    fc = [ [], [] ],
    mm = [
        new THREE.MeshLambertMaterial( { color: 0xff0000 } ),
        new THREE.MeshLambertMaterial( { color: 0x0000ff } )
    ];
for ( var y = 0; y < 11; y++ )
    for ( var x = 0; x < 11; x++ ) {
        vtx.push( x - 5, 0, y - 5 );
        if ( x && y ) {
            var p = ( vtx.length / 3 ) - 1;
            fc[ ( x % 2 ) ^ ( y % 2 ) ].push(
                p, p - 11, p - 1,
                p - 1, p - 11, p - 12
            );
        }
    }
gg.addAttribute( 'position', new THREE.Float32BufferAttribute( vtx, 3 ) );
Array.prototype.push.apply( fc[ 0 ], fc[ 1 ] );
gg.setIndex( fc[ 0 ] );
gg.computeVertexNormals();
// group 0 is everything, unless you define group 1
// fix #1 - don't define group 0
// fix #2 - 2nd parameter is the index count, it must be a multiple of 3 (100 was wrong)
gg.addGroup( 0, 102, 1 );
scene.add( new THREE.Mesh( gg, mm ) );
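If the goal is to draw every square with its checkerboard material, one variation (a sketch, not from the original fiddle) is to record each index list's length before merging, then add one group per material so the two ranges cover the full index buffer:

var redCount = fc[ 0 ].length;  // indices drawn with material 0
var blueCount = fc[ 1 ].length; // indices drawn with material 1
Array.prototype.push.apply( fc[ 0 ], fc[ 1 ] );
gg.setIndex( fc[ 0 ] );
gg.addGroup( 0, redCount, 0 );          // red triangles
gg.addGroup( redCount, blueCount, 1 );  // blue triangles

Both counts are multiples of 3 here, since every square contributes two triangles (6 indices) to one of the two lists.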

How to dynamically construct vertex geometry on the GPU

As a way of teaching myself WebGL and Three.js, I'm building a simple demo/game. The idea is a 3D version of this: https://www.youtube.com/watch?v=cYpE8-4_YBk&t=75s - navigate the opening, and don't crash.
What I want to do is create a dynamic "tube" using vertices calculated on the fly, like this:
for ( var p = 0; p < LIMIT; p++ ) {
    r = RADIUS + Math.random();
    let x = CX + ( r * Math.cos( ANGLE ) );
    let y = CY + ( r * Math.sin( ANGLE ) );
    let z = - ( p / DENSITY );
    let v = new THREE.Vector3( x, y, z );
    tube.vertices.push( v );
    tube.faces.push( new THREE.Face3( p, p - offset[ 0 ], p - offset[ 2 ] ) );
    tube.faces.push( new THREE.Face3( p, p - offset[ 2 ], p - offset[ 1 ] ) );
    // update calculation parameters
    CX += ( Math.random() - 0.5 ) * 0.1;
    CY += ( Math.random() - 0.5 ) * 0.1;
    RADIUS += ( Math.random() - 0.5 ) * 0.1;
    if ( RADIUS < MIN_RADIUS ) RADIUS = MIN_RADIUS;
    if ( RADIUS > MAX_RADIUS ) RADIUS = MAX_RADIUS;
    ANGLE += INCREMENT;
}
I have it working where the vertices are calculated once as a static mesh which is then uploaded to the GPU and animated towards the camera. No interactivity, yet. https://codepen.io/jarrowwx/pen/gKraVm
Next step is to make it infinite. Which means I'm going to have to do it differently.
One way I could do it is to keep a sufficient number of vertices in an array, calculating new vertices as I go, enough to make up for the distance traveled since the last time step. Then, once a second or so, rebuild the face array, and rebuild the scene. But that means once a second, there will probably be a noticeable lag.
But because all the vertices are calculated, it makes sense to do all the work on the GPU and avoid the task of transferring data in the first place.
How would one go about building this data structure on the GPU so that the data never has to be transferred from CPU to GPU? And then, once you have pulled that off, how do you continuously extend it as the camera flies through it?
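As a point of comparison for the CPU-side rolling-array idea described above, a rough sketch (all names and sizes invented) of a preallocated attribute that is overwritten in place, so only the changed data is re-uploaded instead of rebuilding the scene:

// hypothetical rolling buffer: preallocate a fixed-size position attribute
// and overwrite the oldest vertices as the tube extends
var MAX_VERTS = 10000;
var positions = new Float32Array( MAX_VERTS * 3 );
var geometry = new THREE.BufferGeometry();
geometry.addAttribute( 'position', new THREE.BufferAttribute( positions, 3 ) );
geometry.setDrawRange( 0, 0 );

var head = 0; // next slot to overwrite

function appendVertex( x, y, z ) {
    positions[ head * 3 + 0 ] = x;
    positions[ head * 3 + 1 ] = y;
    positions[ head * 3 + 2 ] = z;
    head = ( head + 1 ) % MAX_VERTS;
    geometry.setDrawRange( 0, Math.min( geometry.drawRange.count + 1, MAX_VERTS ) );
    geometry.attributes.position.needsUpdate = true; // re-upload this attribute only
}

Face indices would have to wrap the same way, and fully GPU-side generation would still require a custom vertex shader that derives positions procedurally, for example from the vertex index.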
