I am creating a simple THREE.PlaneBufferGeometry using Three.js. The surface is a geologic surface within the earth.
This surface has local gaps or 'holes' in it, represented by NaNs. I have read another similar, but older, post where the suggestion was to fill the position Z component with 'undefined' rather than NaN. I tried that but get this error:
THREE.BufferGeometry.computeBoundingSphere(): Computed radius is NaN. The "position" attribute is likely to have NaN values.
PlaneBufferGeometry {uuid: "8D8EFFBF-7F10-4ED5-956D-5AE1EAD4DD41", name: "", type: "PlaneBufferGeometry", index: Uint16BufferAttribute, attributes: Object, …}
Here is the TypeScript function that builds the surface:
AddSurfaces(result) {
    let surfaces: Surface[] = result;
    if (this.surfaceGroup == null) {
        this.surfaceGroup = new THREE.Group();
        this.globalGroup.add(this.surfaceGroup);
    }
    surfaces.forEach(surface => {
        const material = new THREE.MeshPhongMaterial({ color: 'blue', side: THREE.DoubleSide });
        let mesh: Mesh2D = surface.arealMesh;
        let values: number[][] = surface.values;
        let geometry: PlaneBufferGeometry = new THREE.PlaneBufferGeometry(mesh.width, mesh.height, mesh.nx - 1, mesh.ny - 1);
        const positions = geometry.getAttribute('position');
        let node: number = 0;
        // Surfaces in Three JS are ordered from the top-left corner, with x ('i') going fastest
        // left to right and then y ('j') going from top to bottom. This is backwards in y from
        // how we do the modelling in the backend.
        for (let j = mesh.ny - 1; j >= 0; j--) {
            for (let i = 0; i < mesh.nx; i++) {
                let value: number = values[i][j];
                if (!isNaN(value)) {
                    positions.setZ(node, -value);
                } else {
                    positions.setZ(node, undefined); /// This does not work? Any ideas?
                }
                node++;
            }
        }
        geometry.computeVertexNormals();
        const plane = new THREE.Mesh(geometry, material);
        plane.receiveShadow = true;
        plane.castShadow = true;
        let xOrigin: number = mesh.xOrigin;
        let yOrigin: number = mesh.yOrigin;
        let cx: number = xOrigin + (mesh.width / 2.0);
        let cy: number = yOrigin + (mesh.height / 2.0);
        // translate point to origin
        let tempX: number = xOrigin - cx;
        let tempY: number = yOrigin - cy;
        let azi: number = mesh.azimuth;
        let aziRad = azi * Math.PI / 180.0;
        // now apply rotation
        let rotatedX: number = tempX * Math.cos(aziRad) - tempY * Math.sin(aziRad);
        let rotatedY: number = tempX * Math.sin(aziRad) + tempY * Math.cos(aziRad);
        cx += (tempX - rotatedX);
        cy += (tempY - rotatedY);
        plane.position.set(cx, cy, 0.0);
        plane.rotateZ(aziRad);
        this.surfaceGroup.add(plane);
    });
    this.UpdateCamera();
    this.animate();
}
Thanks!
I have read another similar, but older, post where the suggestion was to fill the position Z component with 'undefined' rather than NaN.
Using undefined will fail in the same way as using NaN. BufferGeometry.computeBoundingSphere() computes the radius based on Vector3.distanceToSquared(). If you call this method with a vector that contains no valid numerical data, NaN is returned.
Hence, you can't represent the gaps in a geometry with NaN or undefined position data. A better way is to generate a geometry that actually represents the shape of your geologic surface. ShapeBufferGeometry might be a better candidate, since shapes support the concept of holes.
three.js r117
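As an illustration, here is a minimal sketch of the idea (the outline and hole coordinates are made up; in practice you would derive the outer boundary and the gap contours from your surface data):

// a rectangular outline with one circular hole, triangulated by ShapeBufferGeometry
const outline = new THREE.Shape();
outline.moveTo(0, 0);
outline.lineTo(100, 0);
outline.lineTo(100, 100);
outline.lineTo(0, 100);

const hole = new THREE.Path();
hole.absarc(50, 50, 10, 0, Math.PI * 2, false); // a gap in the surface

outline.holes.push(hole);
const geometry = new THREE.ShapeBufferGeometry(outline);
// z values can then be written into the position attribute,
// exactly as in the question's code.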
THREE.PlaneBufferGeometry :: parameters: {
    width: number;
    height: number;
    widthSegments: number;
    heightSegments: number;
};
widthSegments and heightSegments should be at least 1; if your computed widthSegments is < 1, it may actually be 0 or NaN, which produces this error.
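For example, a hedged guard along those lines (cols and rows are stand-ins for however your segment counts are computed):

// Segment counts must be integers >= 1.
// Math.floor(NaN) stays NaN, and (NaN || 1) as well as (0 || 1) fall back to 1.
const widthSegments = Math.max(1, Math.floor(cols - 1) || 1);
const heightSegments = Math.max(1, Math.floor(rows - 1) || 1);
const geom = new THREE.PlaneBufferGeometry(10, 10, widthSegments, heightSegments);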
In my case, it was happening when I tried to create a beveled shape based on a single vector, or on a bunch of identical vectors, so there was only a single point. Filtering out such shapes solved the issue.
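A sketch of that kind of filtering, assuming an array of THREE.Shape objects named shapes (the epsilon is arbitrary):

// Drop shapes whose sampled points are all (nearly) identical, since they
// collapse to a single point and break bevel generation.
const EPSILON = 1e-6;
const validShapes = shapes.filter(shape => {
    const pts = shape.getPoints();
    return pts.some(p => p.distanceTo(pts[0]) > EPSILON);
});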
I need to compute 3D coordinates from a screen-space position using a rendered depth-map. Unfortunately, regular raycasting is not an option for me, because I am dealing with a single geometry containing something on the order of 5M faces.
So I figured I would do the following:
render a depth-map with RGBADepthPacking into a renderTarget
use a regular unproject-call to compute a ray from the mouse-position (exactly as I would do when using raycasting)
lookup the depth from the depth-map at the mouse-coordinates and compute a point along the ray using that distance.
This kind of works, but somehow the located point is always slightly behind the object, so there is probably something wrong with my depth-calculations.
Now some details about the steps above.
Rendering the depth-map is pretty much straightforward:
const depthTarget = new THREE.WebGLRenderTarget(w, h);
const depthMaterial = new THREE.MeshDepthMaterial({
    depthPacking: THREE.RGBADepthPacking
});
// in renderloop
renderer.setClearColor(0xffffff, 1);
renderer.clear();
scene.overrideMaterial = depthMaterial;
renderer.render(scene, camera, depthTarget);
Look up the stored color value at the mouse position with:
renderer.readRenderTargetPixels(
    depthTarget, x, h - y, 1, 1, rgbaBuffer
);
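For completeness, rgbaBuffer here is assumed to be a typed array with room for a single RGBA pixel:

const rgbaBuffer = new Uint8Array(4); // one pixel: R, G, B, A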
And convert back to float using (adapted from the GLSL version in packing.glsl):
const v4 = new THREE.Vector4();
const unpackDownscale = 255 / 256;
const unpackFactors = new THREE.Vector4(
    unpackDownscale / (256 * 256 * 256),
    unpackDownscale / (256 * 256),
    unpackDownscale / 256,
    unpackDownscale
);

function unpackRGBAToDepth(rgbaBuffer) {
    return v4.fromArray(rgbaBuffer)
        .multiplyScalar(1 / 255)
        .dot(unpackFactors);
}
and finally computing the depth-value (I found corresponding code in readDepth() in examples/js/shaders/SSAOShader.js which I ported to JS):
function computeDepth() {
    const cameraFarPlusNear = cameraFar + cameraNear;
    const cameraFarMinusNear = cameraFar - cameraNear;
    const cameraCoef = 2.0 * cameraNear;

    let z = unpackRGBAToDepth(rgbaBuffer);
    return cameraCoef / (cameraFarPlusNear - z * cameraFarMinusNear);
}
Now, as this function returns values in the range 0..1, I think it is the depth in clip-space coordinates, so I convert them into "real" units using:
const depth = camera.near + depth * (camera.far - camera.near);
There is obviously something slightly off with these calculations, and I haven't yet figured out the math and the details of how depth is stored.
Can someone please point me to the mistake I made?
Addition: other things I tried
First I thought it should be possible to just use the unpacked depth-value as value for z in my unproject-call like this:
const x = mouseX/w * 2 - 1;
const y = -mouseY/h * 2 + 1;
const v = new THREE.Vector3(x, y, depth).unproject(camera);
However, this also doesn't get the coordinates right.
[EDIT 1 2017-05-23 11:00CEST]
As per @WestLangley's comment, I found the perspectiveDepthToViewZ() function, which sounds like it should help. Written in JS that function is
function perspectiveDepthToViewZ(invClipZ, near, far) {
    return (near * far) / ((far - near) * invClipZ - far);
}
However, when called with unpacked values from the depth-map, results are several orders of magnitude off. See here.
OK, finally solved it. For everyone having trouble with similar issues, here's the solution:
The last line of the computeDepth function was just wrong. There is a function perspectiveDepthToViewZ in packing.glsl that is pretty easy to convert to JS:
function perspectiveDepthToViewZ(invClipZ, near, far) {
    return (near * far) / ((far - near) * invClipZ - far);
}
(I believe this is somehow part of the inverse projection matrix.)
function computeDepth() {
    let z = unpackRGBAToDepth(rgbaBuffer);
    return perspectiveDepthToViewZ(z, camera.near, camera.far);
}
Now this will return the z-axis value in view-space for the point. Left to do is converting this back to world-space coordinates:
const setPositionFromViewZ = (function() {
    const viewSpaceCoord = new THREE.Vector3();
    const projInv = new THREE.Matrix4();

    return function(position, viewZ) {
        projInv.getInverse(camera.projectionMatrix);
        position
            .set(
                mousePosition.x / windowWidth * 2 - 1,
                -(mousePosition.y / windowHeight) * 2 + 1,
                0.5
            )
            .applyMatrix4(projInv);

        position.multiplyScalar(viewZ / position.z);
        position.applyMatrix4(camera.matrixWorld);
    };
})();
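Putting the pieces together, usage would look something like this (marker is a hypothetical object to place at the picked point):

// unpack the depth at the mouse position, convert it to view-space z,
// then reconstruct the world-space position
const picked = new THREE.Vector3();
const viewZ = computeDepth();
setPositionFromViewZ(picked, viewZ);
marker.position.copy(picked);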
I'm getting this error while adding my own attribute to a geometry.
I've already read this: WebGL GL ERROR: GL_INVALID_OPERATION: glDrawElements: attempt to access out of range vertices in attribute 1, and I understand what the problem is, but I can't figure out why it happens.
I'm building a BufferGeometry, a tree, starting from 1000 objects: 300 objects use a LeafGeometry, 700 objects use a BoxGeometry.
I want to fill a buffer containing a value that tells whether a vertex belongs to the trunk or to the foliage. What I'm doing is the following:
1) First I calculate the dimension of the buffer (and here I think I'm doing it wrong), calling getTotNumVertices(LeafGeometry.new(options), BoxGeometry.new(options), 1000, 300):
function getTotNumVertices(foliage_geometry, trunk_geometry, tot_objects, foliage_start_at) {
    let n_vertices_in_leaf = foliage_geometry.vertices.length * 3;
    let n_vertices_in_trunk = trunk_geometry.vertices.length * 3;
    let n_vertices_in_leafs = foliage_start_at * n_vertices_in_leaf;
    let n_vertices_in_stam = (tot_objects - foliage_start_at) * n_vertices_in_trunk;
    return {
        tot_vertices: (n_vertices_in_stam + n_vertices_in_leafs),
        n_vertices_leaf: n_vertices_in_leaf,
        n_vertices_trunk: n_vertices_in_trunk
    };
}
2) Once I've got the total number of vertices, I create the buffer:
function createBuffers(n_vert) {
    // I'm returning an object because in my real code I'm returning
    // more than one buffer
    return {
        isLeafBuffer: new Float32Array(n_vert)
    };
}
3) Then I build my BufferGeometry, merging together the 1000 objects:
let hash_vertex_info = getTotNumVertices(leafGeom, geometries["box"], 1000, 300);
let buffers = createBuffers(hash_vertex_info.tot_vertices);
let geometry = new THREE.Geometry();
let objs = buildTheTree(1000, 300);
for (let i = 0; i < objs.length; i++) {
    // here code that fills the buffers
    let mesh = objs[i];
    mesh.updateMatrix();
    geometry.merge(mesh.geometry, mesh.matrix);
}
let bufGeometry = new THREE.BufferGeometry().fromGeometry(geometry);
console.log(bufGeometry.attributes.position.count);
console.log(hash_vertex_info.tot_vertices);
And here is the problem: the value of bufGeometry.attributes.position.count is 623616, while the value of hash_vertex_info.tot_vertices is 308940.
When drawing, WebGL tries to access indices bigger than 308940, hence the error.
What am I doing wrong?
///////////EDIT AFTER A WHILE
Basically, I'm facing the same problem explained in this question:
Does converting a Geometry to a BufferGeometry in Three.js increase the number of vertices?
I need to calculate the total number of vertices in order to create a buffer that will contain values for my shader. In the code below, the number of vertices is still different between the merged geometry and the buffer geometry obtained from it.
let tot_objects = 100;
let material = new THREE.MeshStandardMaterial({ color: 0x00ff00 });
let geometry = new THREE.BoxGeometry(5, 5, 5, 4, 4, 4);
let objs = populateGroup(geometry, material, tot_objects);

// let's merge all the objects in one geometry
let mergedGeometry = new THREE.Geometry();
for (let i = 0; i < objs.length; i++) {
    let mesh = objs[i];
    mesh.updateMatrix();
    mergedGeometry.merge(mesh.geometry, mesh.matrix);
}
let bufGeometry = new THREE.BufferGeometry().fromGeometry(mergedGeometry);

let totVerticesMergedGeometry = (mergedGeometry.vertices.length) + (mergedGeometry.faces.length * 3);
console.log(bufGeometry.attributes.position.count); // 57600
console.log(totVerticesMergedGeometry); // 67400 !!!

scene.add(new THREE.Mesh(bufGeometry, material));

function populateGroup(selected_geometry, selected_material, tot_objects) {
    let objects = [];
    for (var i = 0; i < tot_objects; i++) {
        let coord = { x: i, y: i, z: i };
        let object = new THREE.Mesh(selected_geometry, selected_material);
        object.position.set(coord.x, coord.y, coord.z);
        object.rotateY((90 + 40 + i * 100 / tot_objects) * -Math.PI / 180.0);
        objects.push(object);
    }
    return objects;
}
The numbers totVerticesMergedGeometry and bufGeometry.attributes.position.count should be the same, but they are still different.
Is my way of counting vertices wrong? It is actually the same one used here: https://github.com/mrdoob/three.js/blob/master/src/core/DirectGeometry.js#L166, meaning (geometry.vertices.length) + (geometry.faces.length * 3).
What I was doing wrong was the way I calculated the number of vertices.
The number of vertices needed for the buffer is calculated as MyObjectGeometry.faces.length * 3 * NumberOfObjectsThatWillBeMerged.
A more detailed answer is here: Why the number of vertices in a merged Geometry differs from the number of vertices in the BufferedGeometry obtained from it?
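In other words, a minimal sketch of the corrected size calculation (identifiers are illustrative):

// BufferGeometry.fromGeometry() produces non-indexed data:
// every face contributes 3 independent vertices.
function attributeSizeFor(geometry, numObjects, itemSize) {
    let verticesPerObject = geometry.faces.length * 3;
    return verticesPerObject * numObjects * itemSize;
}

// e.g. one float per vertex for a leaf/trunk flag
let isLeafBuffer = new Float32Array(attributeSizeFor(geometry, tot_objects, 1));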
I had this error because I was calling the constructor with the values themselves instead of an array of values (a bare number as the first argument allocates an array of that length, and any further arguments are ignored):
- var colors = new Float32Array(
+ var colors = new Float32Array( [
1.0, 0.0, 0.0,
0.0, 1.0, 0.0,
0.0, 0.0, 1.0,
1.0, 0.0, 1.0,
0.0, 1.0, 1.0,
1.0, 1.0, 1.0,
- );
+ ] );
I'm reading the "create terrain from heightmap" example from the Three.js Cookbook.
This example load GrandCanyon: http://lh5.ggpht.com/_-B0hFoGrn-w/SvHiYk39yAI/AAAAAAAABOQ/6IGZwifUYGA/GrandCanyon.png
And create a 3D terrain: http://www.smartjava.org/tjscb/02-geometries-meshes/02.06-create-terrain-from-heightmap.html
There are some pieces of code I cannot understand:
// draw on canvas
ctx.drawImage(img, 0, 0);
var pixel = ctx.getImageData(0, 0, width, depth);

var geom = new THREE.Geometry();
var output = [];
for (var x = 0; x < depth; x++) {
    for (var z = 0; z < width; z++) {
        // get pixel
        // since we're grayscale, we only need one element
        var yValue = pixel.data[z * 4 + (depth * x * 4)] / heightOffset;
        var vertex = new THREE.Vector3(x * spacingX, yValue, z * spacingZ);
        geom.vertices.push(vertex);
    }
}
Why is yValue calculated with that index? Why don't we use var yValue = pixel.data[z * 4 + (depth * x)] or something like that?
And do we really need spacingX and spacingZ?
Source code is here: https://github.com/josdirksen/threejs-cookbook/blob/master/02-geometries-meshes/02.06-create-terrain-from-heightmap.html
Could you please help me ?
Thank you very much!
You don't NEED spacingX and spacingZ, no. You could adjust scale in other ways, like applying a scale matrix to the entire THREE.Geometry after you've populated the vertices. Up to you, really.
As for the yValue, the indexing adjusts for the way the texture data is laid out. There are four channels per pixel, usually RGBA, but in this case we only need one of them as the height.
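A small sketch of the indexing, assuming row-major RGBA ImageData with 4 bytes per pixel:

// Index of the red channel for the pixel in column `col` of row `row`,
// in an image `width` pixels wide; the * 4 skips over the RGBA channels.
function redChannelIndex(col, row, width) {
    return (row * width + col) * 4;
}
// In the cookbook example the image is square (width === depth), which is
// why z * 4 + depth * x * 4 yields the same result.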
I can't find an explanation anywhere of how to use the frames option of ExtrudeGeometry in Three.js. Its documentation says:
extrudePath — THREE.CurvePath. 3d spline path to extrude shape along. (creates Frames if frames aren't defined)
frames — THREE.TubeGeometry.FrenetFrames. containing arrays of tangents, normals, binormals
but I don't understand how frames must be defined. I think I should use the "frames" option, passing three arrays for tangents, normals and binormals (calculated in some way), but how do I pass them in frames?... Probably (like here for morphNormals):
frames = { tangents: [ new THREE.Vector3(), ... ], normals: [ new THREE.Vector3(), ... ], binormals: [ new THREE.Vector3(), ... ] };
with the three arrays of the same length (perhaps corresponding to the steps or curveSegments option of ExtrudeGeometry)?
Many thanks for an explanation.
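Not an authoritative answer, but based on the documentation quoted above, the option could presumably be passed like this (spline and shape are assumed to already exist):

var steps = 100;
var frames = new THREE.TubeGeometry.FrenetFrames(spline, steps, false); // path, segments, closed
var geometry = new THREE.ExtrudeGeometry(shape, {
    steps: steps,
    extrudePath: spline,
    frames: frames // { tangents: [...], normals: [...], binormals: [...] }, one entry per step
});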
Edit 1:
String.prototype.format = function () {
    var str = this;
    for (var i = 0; i < arguments.length; i++) {
        str = str.replace('{' + i + '}', arguments[i]);
    }
    return str;
};

var numSegments = 6;
var frames = new THREE.TubeGeometry.FrenetFrames(new THREE.SplineCurve3(spline), numSegments);
var tangents = frames.tangents,
    normals = frames.normals,
    binormals = frames.binormals;
var tangents_list = [],
    normals_list = [],
    binormals_list = [];
for (i = 0; i < numSegments; i++) {
    var tangent = tangents[i];
    var normal = normals[i];
    var binormal = binormals[i];
    tangents_list.push("({0}, {1}, {2})".format(tangent.x, tangent.y, tangent.z));
    normals_list.push("({0}, {1}, {2})".format(normal.x, normal.y, normal.z));
    binormals_list.push("({0}, {1}, {2})".format(binormal.x, binormal.y, binormal.z));
}
alert(tangents_list);
alert(normals_list);
alert(binormals_list);
Edit 2
Some time ago I opened this topic, for which I used this solution:
var spline = new THREE.SplineCurve3([
    new THREE.Vector3(20.343, 19.827, 90.612), // t=0
    new THREE.Vector3(22.768, 22.735, 90.716), // t=1/12
    new THREE.Vector3(26.472, 23.183, 91.087), // t=2/12
    new THREE.Vector3(27.770, 26.724, 91.458), // t=3/12
    new THREE.Vector3(31.224, 26.976, 89.861), // t=4/12
    new THREE.Vector3(32.317, 30.565, 89.396), // t=5/12
    new THREE.Vector3(31.066, 33.784, 90.949), // t=6/12
    new THREE.Vector3(30.787, 36.310, 88.136), // t=7/12
    new THREE.Vector3(29.354, 39.154, 90.152), // t=8/12
    new THREE.Vector3(28.414, 40.213, 93.636), // t=9/12
    new THREE.Vector3(26.569, 43.190, 95.082), // t=10/12
    new THREE.Vector3(24.237, 44.399, 97.808), // t=11/12
    new THREE.Vector3(21.332, 42.137, 96.826)  // t=12/12=1
]);

var spline_1 = [], spline_2 = [], t;
for (t = 0; t <= (7/12); t += 0.0001) {
    spline_1.push(spline.getPoint(t));
}
for (t = (7/12); t <= 1; t += 0.0001) {
    spline_2.push(spline.getPoint(t));
}
But I was thinking about the possibility of setting the tangent, normal and binormal of the first point (t=0) of spline_2 to be the same as those of the last point (t=1) of spline_1; so I thought the frames option could be useful for this purpose. Would it be possible to overwrite the values of a tangent, normal and binormal in the respective lists, so as to obtain the same values for the last point (t=1) of spline_1 and the first point (t=0) of spline_2, and thus guide the extrusion? For example, for the tangent at t=0 of spline_2:
tangents[0].x = 0.301;
tangents[0].y = 0.543;
tangents[0].z = 0.138;
doing the same also for normals[0] and binormals[0], to ensure the same orientation for the last point (t=1) of spline_1 and the first one (t=0) of spline_2.
Edit 3
I'm trying to visualize the tangent, normal and binormal for each control point of "mypath" (spline) using ArrowHelper, but, as you can see in the demo (on scene loading you need to zoom out slowly until the ArrowHelpers appear; the relevant code is at lines 122 to 152 of the fiddle), the ArrowHelpers do not start at the origin but away from it. How can I obtain the same result as in this reference demo (when you check the "Debug normals" checkbox)?
Edit 4
I plotted two splines that respectively end (blue spline) and start (red spline) at point A (= origin), displaying tangent, normal and binormal vectors at point A for each spline (using cyan color for the blue spline's labels, and yellow color for the red spline's labels).
As mentioned above, to align the two splines and make them continuous, I thought of exploiting the three vectors (tangent, normal and binormal). Which mathematical operation, in theory, should I use to turn the end face of the blue spline so that it faces the initial face (yellow face) of the red spline, with the respective tangents (D, D', hidden in the picture), normals (B, B') and binormals (C, C') aligned? Should I use the .setFromUnitVectors(vFrom, vTo) method of Quaternion? Its documentation reads: << Sets this quaternion to the rotation required to rotate direction vector vFrom to direction vector vTo ... vFrom and vTo are assumed to be normalized. >> So, probably, I need to define three quaternions:
a quaternion for the rotation of the normalized tangent vector D onto the normalized tangent vector D'
a quaternion for the rotation of the normalized normal vector B onto the normalized normal vector B'
a quaternion for the rotation of the normalized binormal vector C onto the normalized binormal vector C'
with:
vFrom = the normalized D, B and C vectors
vTo = the normalized D', B' and C' vectors
and then apply each of the three quaternions respectively to D, B and C (not normalized)?
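As a side note, a single rotation can map a whole frame onto another one. A minimal sketch, assuming each (tangent, normal, binormal) triple is orthonormal, so that the inverse of its basis matrix is its transpose (D2, B2, C2 stand in for D', B', C'):

// basis matrices whose columns are the frame vectors
var m1 = new THREE.Matrix4().makeBasis(D, B, C);    // frame of spline_1 at t=1
var m2 = new THREE.Matrix4().makeBasis(D2, B2, C2); // frame of spline_2 at t=0
// R = m2 * m1^T maps frame 1 onto frame 2
// (multiply() and transpose() modify m2 and m1 in place, which is fine for this sketch)
var rotation = m2.multiply(m1.transpose());
var q = new THREE.Quaternion().setFromRotationMatrix(rotation);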
Thanks a lot again
Edit 5
I tried this code (see the image for how the vectors should be aligned), but nothing changed:
var numSegments_1 = points_1.length; // points_1 = list of points
var frames_1 = new THREE.TubeGeometry.FrenetFrames(points_1_spline, numSegments_1, false); // path, segments, closed
var tangents_1 = frames_1.tangents,
    normals_1 = frames_1.normals,
    binormals_1 = frames_1.binormals;

var numSegments_2 = points_2.length;
var frames_2 = new THREE.TubeGeometry.FrenetFrames(points_2_spline, numSegments_2, false);
var tangents_2 = frames_2.tangents,
    normals_2 = frames_2.normals,
    binormals_2 = frames_2.binormals;

// angle between binormal_1 (at point A of spline 1) and binormal_2 (at point A of spline 2)
var b1_b2_angle = binormals_1[binormals_1.length - 1].angleTo(binormals_2[0]);
var quaternion_n1_axis = new THREE.Quaternion();
quaternion_n1_axis.setFromAxisAngle(normals_1[normals_1.length - 1], b1_b2_angle); // rotation around normal_1 as axis
var vector_b1 = binormals_1[binormals_1.length - 1];
vector_b1.applyQuaternion(quaternion_n1_axis); // apply quaternion to binormal_1

// angle between normal_1 (at point A of spline 1) and normal_2 (at point A of spline 2)
var n1_n2_angle = normals_1[normals_1.length - 1].angleTo(normals_2[0]);
var quaternion_b1_axis = new THREE.Quaternion();
quaternion_b1_axis.setFromAxisAngle(binormals_1[binormals_1.length - 1], -n1_n2_angle); // rotation around binormal_1 as axis
var vector_n1 = normals_1[normals_1.length - 1];
vector_n1.applyQuaternion(quaternion_b1_axis); // apply quaternion to normal_1
Nothing changed with this other approach either:
var numSegments_1 = points_1.length; // points_1 = list of points
var frames_1 = new THREE.TubeGeometry.FrenetFrames(points_1_spline, numSegments_1, false); // path, segments, closed
var tangents_1 = frames_1.tangents,
    normals_1 = frames_1.normals,
    binormals_1 = frames_1.binormals;

var numSegments_2 = points_2.length;
var frames_2 = new THREE.TubeGeometry.FrenetFrames(points_2_spline, numSegments_2, false);
var tangents_2 = frames_2.tangents,
    normals_2 = frames_2.normals,
    binormals_2 = frames_2.binormals;

var quaternion_n1_axis = new THREE.Quaternion();
quaternion_n1_axis.setFromUnitVectors(binormals_1[binormals_1.length - 1].normalize(), binormals_2[0].normalize());
var vector_b1 = binormals_1[binormals_1.length - 1];
vector_b1.applyQuaternion(quaternion_n1_axis);

var quaternion_b1_axis = new THREE.Quaternion();
quaternion_b1_axis.setFromUnitVectors(normals_1[normals_1.length - 1].normalize(), normals_2[0].normalize());
var vector_n1 = normals_1[normals_1.length - 1];
vector_n1.applyQuaternion(quaternion_b1_axis);
I understand that I can use body.position.set(x, y, z) to instantaneously move a body, but how can I move it smoothly, in an animated manner, so that its movement adheres to the physics and it collides with any other bodies on its journey? Using body.velocity.set(x, y, z) will set its velocity, and body.linearDamping = v will provide some friction/resistance... but it's still not good enough to let me specify exactly where I want the body to stop.
It sounds like you're looking for a kinematic body. With kinematic bodies, you have full control over the movement, and it will push away other bodies in its path. However, the body has infinite mass and is not affected by other bodies colliding with it.
Start off by defining the start and end positions of your body.
var startPosition = new CANNON.Vec3(5, 0, 2);
var endPosition = new CANNON.Vec3(-5, 0, 2);
var tweenTime = 3; // seconds
Then create your kinematic body. In this example we'll add a Box shape to it.
var body = new CANNON.Body({
    mass: 0,
    type: CANNON.Body.KINEMATIC,
    position: startPosition
});
body.addShape(new CANNON.Box(new CANNON.Vec3(1, 1, 1)));
world.add(body);
Compute the direction vector and get total length of the tween path.
var direction = new CANNON.Vec3();
endPosition.vsub(startPosition, direction);
var totalLength = direction.length();
direction.normalize();
The speed and velocity can be calculated using the formula v = s / t.
var speed = totalLength / tweenTime;
direction.scale(speed, body.velocity);
For each update, compute the tween progress: a number between 0 and 1, where 0 is the start position and 1 is the end position. Using this number you can calculate the current body position.
var progress = (world.time - startTime) / tweenTime;
if (progress < 1) {
    // Calculate current position
    direction.scale(progress * totalLength, offset);
    startPosition.vadd(offset, body.position);
} else {
    // We passed the end position! Stop.
    body.velocity.set(0, 0, 0);
    body.position.copy(endPosition);
}
See full code below. You can duplicate one of the cannon.js demos and just paste this code.
var demo = new CANNON.Demo();
var postStepHandler;

demo.addScene("Tween box", function() {
    var world = demo.getWorld();

    // Inputs
    var startPosition = new CANNON.Vec3(5, 0, 2);
    var endPosition = new CANNON.Vec3(-5, 0, 2);
    var tweenTime = 3; // seconds

    var body = new CANNON.Body({
        mass: 0,
        type: CANNON.Body.KINEMATIC,
        position: startPosition
    });
    body.addShape(new CANNON.Box(new CANNON.Vec3(1, 1, 1)));
    world.add(body);
    demo.addVisual(body);

    if (postStepHandler) {
        world.removeEventListener('postStep', postStepHandler);
    }

    // Compute direction vector and get total length of the path
    var direction = new CANNON.Vec3();
    endPosition.vsub(startPosition, direction);
    var totalLength = direction.length();
    direction.normalize();

    var speed = totalLength / tweenTime;
    direction.scale(speed, body.velocity);

    // Save the start time
    var startTime = world.time;

    var offset = new CANNON.Vec3();

    postStepHandler = function() {
        // Progress is a number where 0 is at the start position and 1 is at the end position
        var progress = (world.time - startTime) / tweenTime;
        if (progress < 1) {
            direction.scale(progress * totalLength, offset);
            startPosition.vadd(offset, body.position);
        } else {
            body.velocity.set(0, 0, 0);
            body.position.copy(endPosition);
            world.removeEventListener('postStep', postStepHandler);
            postStepHandler = null;
        }
    };

    world.addEventListener('postStep', postStepHandler);
});

demo.start();
You need to use a physics library for this, such as Physijs. It works easily with Three.js. Googling for "Physijs Three.js" will provide examples.