How to create a simple three dimensional coordinate display? - three.js

I have a series of data points that simply represent X/Y/Z coordinates, and I would like to display them on screen. I am aware of the existence of three.js, but the examples I have waded through so far seem to be for far more complex animations, and I have not managed to find an ELI5 tutorial or set of documentation to get me going.
Note that I am not married to three.js, but it does seem like the best tool for the job.
I'm not looking for someone to do it for me, just some links to some basic documentation would be so very appreciated!

Straightforward: conceptually this consists of making spheres and setting their positions to your coordinates.
1) Have coordinate data
const coordData = [
  { x: 9, y: 2, z: 6 },
  { x: 3, y: 7, z: 4 },
  ...
];
2) Create Geometry, Material and Group
const pointGeom = new THREE.SphereBufferGeometry( 5, 32, 32 ),
      pointMat = new THREE.MeshBasicMaterial( { color: 0xffff00 } ),
      pointGroup = new THREE.Group();
3) Loop over the coordinate data and create a point (sphere) for each entry, reusing the constants above
for (let i = coordData.length - 1; i >= 0; i--) {
  const coord = coordData[i],
        point = new THREE.Mesh( pointGeom, pointMat );
  point.position.set( coord.x, coord.y, coord.z );
  pointGroup.add(point);
}
4) Add Group to your scene
scene.add(pointGroup);
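If the dataset grows large, one sphere mesh per point gets heavy. A common alternative (a sketch, assuming a reasonably recent three.js) is a single THREE.Points object fed by a flat position buffer; building that buffer is plain JavaScript:

```javascript
// Flatten {x, y, z} records into the interleaved layout that
// THREE.BufferAttribute expects: [x0, y0, z0, x1, y1, z1, ...]
function toPositionBuffer(coords) {
  const positions = new Float32Array(coords.length * 3);
  coords.forEach((c, i) => {
    positions[3 * i]     = c.x;
    positions[3 * i + 1] = c.y;
    positions[3 * i + 2] = c.z;
  });
  return positions;
}

// Hypothetical usage with three.js (coordData/scene as in the steps above):
// const geom = new THREE.BufferGeometry();
// geom.setAttribute('position', new THREE.BufferAttribute(toPositionBuffer(coordData), 3));
// scene.add(new THREE.Points(geom, new THREE.PointsMaterial({ size: 2 })));
```

Every point is then drawn in one call, at the cost of losing per-point mesh features like individual materials.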

I ended up with a great solution through the plot.ly library. This page was pretty much perfect for what I wanted to do:
https://plot.ly/javascript/3d-point-clustering/

Related

Rotating Box3 together with the model in ThreeJS

I have the following question.
There is a model; through the setFromObject method I get a Box3 (screenshot - http://prntscr.com/12787py). Next, I rotate the model and get a new Box3 (screenshot - http://prntscr.com/12789fy).
Is it possible after rotating the model to get Box3 with the same rotation, as if Box3 was rotating with the model?
Box3 is a mathematical representation of a box. As such, it is represented with only two Vector3 properties (max and min) that represent two opposing corners of the box. The values of these do not represent a box in full 3D space, but rather an axis-aligned box.
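To see why, here is a minimal plain-JavaScript sketch of what Box3.setFromPoints computes: the per-axis min and max over all points. Rotate the points and the resulting box changes shape.

```javascript
// Axis-aligned bounding box: per-axis min/max over all points,
// which is all the information a Box3 stores.
function aabb(points) {
  const min = { x: Infinity, y: Infinity, z: Infinity };
  const max = { x: -Infinity, y: -Infinity, z: -Infinity };
  for (const p of points) {
    for (const axis of ['x', 'y', 'z']) {
      min[axis] = Math.min(min[axis], p[axis]);
      max[axis] = Math.max(max[axis], p[axis]);
    }
  }
  return { min, max };
}

// Rotate a point about the z axis, to show the effect of rotation.
function rotateZ(p, angle) {
  const c = Math.cos(angle), s = Math.sin(angle);
  return { x: c * p.x - s * p.y, y: s * p.x + c * p.y, z: p.z };
}
```

Rotating a 2x2 square by 45° widens its axis-aligned box from 2 to 2√2 - exactly the "changing shape" effect the helper exhibits.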
It looks like you're using BoundingBoxHelper. This creates a wireframe box that is world-aligned. This means it will compute its shape based on the transformed positions of the geometry vertices, and so it may change shape as your mesh is rotated.
To create a shape-tight wireframe box that rotates with your object, you will need to create one directly from your geometry, and ensure the same transformation is applied to both shapes.
// your shape
const shapeGeo = new BoxGeometry( 10, 10, 10 )
shapeGeo.computeBoundingBox() // <----------- DO THIS BEFORE ADDING IT TO THE SCENE!
const shapeMat = new MeshPhongMaterial( { color: 'red' } )
const shapeMsh = new Mesh( shapeGeo, shapeMat )
// your wireframe
const bboxMin = shapeGeo.boundingBox.min
const bboxMax = shapeGeo.boundingBox.max
const wireGeo = new BufferGeometry()
wireGeo.setAttribute( 'position', new BufferAttribute( new Float32Array( [
  bboxMin.x, bboxMin.y, bboxMin.z,
  bboxMin.x, bboxMin.y, bboxMax.z,
  bboxMin.x, bboxMax.y, bboxMax.z,
  bboxMin.x, bboxMax.y, bboxMin.z,
  bboxMax.x, bboxMin.y, bboxMin.z,
  bboxMax.x, bboxMin.y, bboxMax.z,
  bboxMax.x, bboxMax.y, bboxMax.z,
  bboxMax.x, bboxMax.y, bboxMin.z,
] ), 3, false ) )
wireGeo.setIndex( new BufferAttribute( new Uint8Array( [
  0, 1, 1, 2, 2, 3, 3, 0,
  4, 5, 5, 6, 6, 7, 7, 4,
  0, 4, 1, 5, 2, 6, 3, 7
] ), 1, false ) )
const wireMat = new LineBasicMaterial( { color: 'yellow' } )
const wireBox = new LineSegments( wireGeo, wireMat )
Now, here's where things take what might seem like an odd twist. Once you have your wire box, you can simply add it to your shape, and future changes to your shape will be passed on to your wire box:
scene.add( shapeMsh )
shapeMsh.add( wireBox )
This works because transformations are passed on to children*, and a Mesh is really just an extension of Object3D, so a Mesh can have children just like any other Object3D derivative.
* as long as you don't disable automatic matrix updates
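The composition rule behind that footnote can be sketched without three.js at all: an object's world matrix is its parent's world matrix times its own local matrix. A toy version with translation-only 4x4 matrices (hypothetical helper names):

```javascript
// 4x4 matrices as row-major arrays of 16 numbers.
function translation(x, y, z) {
  return [1, 0, 0, x,  0, 1, 0, y,  0, 0, 1, z,  0, 0, 0, 1];
}
function multiply(a, b) { // returns a * b
  const out = new Array(16).fill(0);
  for (let r = 0; r < 4; r++)
    for (let c = 0; c < 4; c++)
      for (let k = 0; k < 4; k++)
        out[4 * r + c] += a[4 * r + k] * b[4 * k + c];
  return out;
}

// Move the parent; the child's local matrix is untouched, but its
// world matrix (parentWorld * childLocal) moves with it.
const parentWorld = translation(10, 0, 0);
const childLocal  = translation(0, 5, 0);
const childWorld  = multiply(parentWorld, childLocal);
```

This is why adding the wire box as a child is enough: three.js performs this same multiplication for every child during its matrix update pass.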

Three js: How to normalize a mesh generated by vertices

I'm somewhat new to three.js, and my linear algebra days were back in the 90s, so I don't recall much about quaternions. My issue is that I have 8 vertices for a cube from which I can create a custom geometry mesh, but it doesn't set the position / rotation / scale info for its world matrix. Therefore it cannot be used cleanly by other three.js modules like controls. I can look up the math and calculate what position / scale / rotation should be (rotation gets a bit hairy with some fun acos stuff) and create a standard BoxGeometry from that. But it seems like there should be some way to do it via three.js objects if I can generate the proper matrix to apply. The quaternion setFromUnitVectors looked interesting, but I'd still have to do some work to generate the vectors. Any ideas would be appreciated, thanks
Edit: :) So let me try to simplify. I have 8 vertices and I want to create a box geometry. But BoxGeometry doesn't take vertices; it takes width, height, depth (relatively easy to calculate), and then you set the position/scale/rotation. Here's my code thus far:
  5____4
1/___0/|
| 6__|_7
2/___3/
const box = new Box3();
box.setFromPoints(points);
const width = points[1].distanceTo(points[0]);
const height = points[3].distanceTo(points[0]);
const depth = points[4].distanceTo(points[0]);
const geometry = new BoxGeometry(width, height, depth);
mesh = new Mesh(geometry, material);
const center = box.getCenter(new Vector3());
const normalizedCorner = points[0].clone().sub(center);
const quaternion = new Quaternion();
quaternion.setFromUnitVectors(geometry.vertices[0], normalizedCorner);
mesh.setRotationFromQuaternion(quaternion);
mesh.position.copy(center);
The problem is that my rotation element is wrong (besides my vectors not being unit vectors). I'm apparently not computing the correct quaternion to rotate my mesh.
Edit: From WestLangley's suggestion, I'm creating a rotation matrix. However, while it rotates in the correct plane, the angle is off. Here's what I have added:
const matrix = new Matrix4();
const widthVector = new Vector3().subVectors(points[6], points[7]).normalize();
const heightVector = new Vector3().subVectors(points[6], points[5]).normalize();
const depthVector = new Vector3().subVectors(points[6], points[2]).normalize();
matrix.set(
  widthVector.x, heightVector.x, depthVector.x, 0,
  widthVector.y, heightVector.y, depthVector.y, 0,
  widthVector.z, heightVector.z, depthVector.z, 0,
  0, 0, 0, 1,
);
mesh.quaternion.setFromRotationMatrix(matrix);
Per WestLangley's comments I wasn't creating my matrix correctly. The correct matrix looks like:
const matrix = new Matrix4();
const widthVector = new Vector3().subVectors(points[7], points[6]).normalize();
const heightVector = new Vector3().subVectors(points[5], points[6]).normalize();
const depthVector = new Vector3().subVectors(points[2], points[6]).normalize();
matrix.set(
  widthVector.x, heightVector.x, depthVector.x, 0,
  widthVector.y, heightVector.y, depthVector.y, 0,
  widthVector.z, heightVector.z, depthVector.z, 0,
  0, 0, 0, 1,
);
mesh.quaternion.setFromRotationMatrix(matrix);
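What the corrected code builds is a rotation matrix whose columns are the box's unit edge directions. A plain-JavaScript sanity check (hypothetical helper names, using the vertex numbering from the diagram above) confirms those columns form an orthonormal basis for a valid box:

```javascript
function sub(a, b) { return { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z }; }
function normalize(v) {
  const len = Math.hypot(v.x, v.y, v.z);
  return { x: v.x / len, y: v.y / len, z: v.z / len };
}
function dot(a, b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Edge directions from corner 6: toward 7 (width), 5 (height), 2 (depth).
// These are the three columns fed to Matrix4.set above.
function boxBasis(points) {
  return [
    normalize(sub(points[7], points[6])),
    normalize(sub(points[5], points[6])),
    normalize(sub(points[2], points[6])),
  ];
}
```

For a non-sheared box the three directions are mutually perpendicular unit vectors, which is exactly what a pure rotation matrix requires; if the dot products are not (near) zero, the eight points do not describe a rectangular box and no rotation alone can reproduce them.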

Orbiting a cube in WebGL with glMatrix

https://jsfiddle.net/sepoto/Ln7qvv7w/2/
I have a base set up to display a cube with different colored faces. What I am trying to do is set up a camera and apply a combined X-axis and Y-axis rotation so that the cube spins around both axes concurrently. There seem to be some problems with the matrices I set up, as I can see the blue face doesn't look quite right. There are some examples of how this is done using older versions of glMatrix; however, the code in those examples no longer works because of some changes in vec4 of the glMatrix library. Does anyone know how this can be done using the latest version of glMatrix? I have attached a CDN to the fiddle.
Thank you!
function drawScene() {
  gl.viewport(0, 0, gl.viewportWidth, gl.viewportHeight);
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  mat4.ortho(mOrtho, -5, 5, 5, -5, 2, -200);
  mat4.identity(mMove);
  var rotMatrix = mat4.create();
  mat4.identity(rotMatrix);
  rotMatrix = mat4.fromYRotation(rotMatrix, yRot, rotMatrix);
  rotMatrix = mat4.fromXRotation(rotMatrix, xRot, rotMatrix);
  mat4.multiply(mMove, rotMatrix, mMove);
  setMatrixUniforms();
  gl.bindBuffer(gl.ARRAY_BUFFER, triangleVertexPositionBuffer);
  gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute, triangleVertexPositionBuffer.itemSize, gl.FLOAT, false, 0, 0);
  gl.bindBuffer(gl.ARRAY_BUFFER, triangleColorBuffer);
  gl.vertexAttribPointer(shaderProgram.vertexColorAttribute, triangleColorBuffer.itemSize, gl.FLOAT, false, 0, 0);
  gl.drawArrays(gl.TRIANGLES, 0, triangleVertexPositionBuffer.numItems);
  yRot += 0.01;
  xRot += 0.01;
}
As the name says, fromYRotation() initializes a matrix to the given rotation, overwriting whatever it held before. Hence, you need two temporary matrices for the partial rotations, which you can then combine:
var rotMatrix = mat4.create();
var rotMatrixX = mat4.create();
var rotMatrixY = mat4.create();
mat4.fromYRotation(rotMatrixY, yRot);
mat4.fromXRotation(rotMatrixX, xRot);
mat4.multiply(rotMatrix, rotMatrixY, rotMatrixX);
And the reason your blue face was behaving strangely was the missing depth test. Enable it in your initialization method:
gl.enable(gl.DEPTH_TEST);
You don't need to use three matrices:
// you should do allocations outside of the render loop
var rotMat = mat4.create();
// no need to set the matrix to identity, as
// fromYRotation resets rotMat's contents anyway
mat4.fromYRotation(rotMat, yRot);
mat4.rotateX(rotMat, rotMat, xRot);
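The difference between the two answers is only bookkeeping: the rotation applied to each vertex is the same composed matrix either way. A minimal 3x3 sketch of that composition (plain JavaScript, not glMatrix, column-vector convention):

```javascript
// Rotation about the X axis, as an array of rows.
function rotX(a) {
  const c = Math.cos(a), s = Math.sin(a);
  return [[1, 0, 0], [0, c, -s], [0, s, c]];
}
// Rotation about the Y axis.
function rotY(a) {
  const c = Math.cos(a), s = Math.sin(a);
  return [[c, 0, s], [0, 1, 0], [-s, 0, c]];
}
// Matrix product: out[i][j] = sum_k A[i][k] * B[k][j]
function mulMat(A, B) {
  return A.map((row, i) => row.map((_, j) =>
    row.reduce((sum, _, k) => sum + A[i][k] * B[k][j], 0)));
}
// Matrix times column vector.
function mulVec(A, v) {
  return A.map(row => row[0] * v[0] + row[1] * v[1] + row[2] * v[2]);
}
```

Applying the product rotY·rotX to a vector gives the same result as rotating about X first and then about Y in sequence; calling a fromXxxRotation-style initializer on an already-rotated matrix would instead discard the earlier rotation entirely, which is the bug in the question's drawScene().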

Projecting a point from world to screen. SO solutions give bad coordinates

I'm trying to place an HTML div element over a three.js object. Most stackoverflow solutions offer a pattern similar to this:
// var camera = ...
function toScreenXY(pos, canvas) {
var width = canvas.width, height = canvas.height;
var p = new THREE.Vector3(pos.x, pos.y, pos.z);
var vector = p.project(camera);
vector.x = (vector.x + 1) / 2 * width;
vector.y = -(vector.y - 1) / 2 * height;
return vector;
}
I've tried many variations on this idea, and all of them agree on giving me this result:
console.log(routeStart.position); // target mesh
console.log(toScreenXY(routeStart.position));
// output:
//
// mesh pos: T…E.Vector3 {x: -200, y: 200, z: -100}
// screen pos: T…E.Vector3 {x: -985.2267639636993, y: -1444.7267503738403, z: 0.9801980328559876}
The actual screen coordinates for this camera position and this mesh position are somewhere around x: 470, y: 80 - I determined them by hardcoding my div position.
-985, -1444 are not even close to the actual screen coords :)
Please don't offer links to existing solutions if they follow the same logic as the snippet I provided. I would be especially thankful if someone could explain why I get these negative values, even though this approach seems to work for everyone else.
Here's a couple of examples using the same principle:
Three.js: converting 3d position to 2d screen position
Converting World coordinates to Screen coordinates in Three.js using Projection
Now, I've figured out the problem myself! It turns out you can't project things before renderer.render() has been called at least once - until then the camera's matrices haven't been updated. It's very confusing that it gives you back weird negative coords instead of failing loudly.
Hope other people will find this answer useful.
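The NDC-to-pixel arithmetic in the snippet is sound on its own, which is easy to verify in isolation (a plain-JavaScript sketch; Vector3.project() is the part that must wait for the first render):

```javascript
// Map normalized device coordinates ([-1, 1] on both axes, y pointing up)
// to canvas pixels (origin at the top-left, y pointing down).
function ndcToScreen(ndc, width, height) {
  return {
    x: (ndc.x + 1) / 2 * width,
    y: -(ndc.y - 1) / 2 * height,
  };
}
```

The centre of NDC space lands in the centre of the canvas and (-1, 1) lands at the top-left pixel, so any wildly out-of-range screen position comes from project(), not from this mapping.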

Find the coordinates of the topleft of the canvas on the xy plane given a camera

I think this is an ok way to get the xyplane (if this is wrong or there is a better way, let me know):
xyplane = new THREE.Plane().setFromCoplanarPoints(
  new THREE.Vector3(0, 0, 0),
  new THREE.Vector3(10, 15, 0),
  new THREE.Vector3(100, -90, 0),
);
I have a camera which happens to be at (0, 0, 1000) looking at the origin. I can find a coordinate that is the exact top left of my viewing port:
projector = new THREE.Projector();
topleft = new THREE.Vector3(-1, 1, 0);
through = projector.unprojectVector(topleft, camera);
through is then THREE.Vector3 {x: -31.425217496422327, y: 22.169468302342334, z: 900.0000201165681} which is perfectly at the topleft of the canvas. However, I want to find the equivalent point where z = 0. I try to do this with a ray; but, I fail.
ray = new THREE.Ray(camera.position, through);
point = ray.intersectPlane(xyplane);
point is then {x: 34.91690754890442, y: -24.632742007573448, z: 0} which is not even close. I'm misunderstanding something basic. I will study Coordinates of intersection between Ray and Plane and others while I try to figure this out. Maybe someone can explain in plain language.
Right, thanks to @WestLangley for the comment. The through variable is not the correct second argument to the Ray constructor; we need a unit vector for the direction. With the camera position and the through point, we can get a direction vector with:
direction = through.sub(camera.position).normalize();
Then the rest works as expected:
ray = new THREE.Ray(camera.position, direction);
p = ray.intersectPlane(xyplane);
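The fix can be checked with plain arithmetic: the intersection with the z = 0 plane is the camera position plus t times the unit direction, where t is chosen so the z component vanishes. A sketch using the question's numbers:

```javascript
function sub(a, b) { return { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z }; }
function normalize(v) {
  const len = Math.hypot(v.x, v.y, v.z);
  return { x: v.x / len, y: v.y / len, z: v.z / len };
}

// Intersect the ray from origin through `through` with the z = 0 plane.
function intersectZ0(origin, through) {
  const dir = normalize(sub(through, origin));
  const t = -origin.z / dir.z; // ray parameter at which z becomes 0
  return { x: origin.x + t * dir.x, y: origin.y + t * dir.y, z: 0 };
}
```

With the camera at (0, 0, 1000) and the unprojected point about 100 units in front of it (z ≈ 900), the z = 0 plane is ten times as far from the camera, so the top-left corner lands roughly ten times further from the axis than the through point.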
