How to interpret the "Drawing lines" tutorial from the three.js documentation?

I was reading the "Drawing lines" part of the three.js documentation, as shown in the picture below...
This is the code used to demonstrate drawing lines. The code itself is fine.
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>My first three.js app</title>
<style>
body { margin: 0; }
</style>
</head>
<body>
<script src="///C:/Users/pc/Desktop/threejs_tutorial/build_threejs.html"></script>
<script>
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera( 45, window.innerWidth / window.innerHeight, 1, 500 );
camera.position.set( 0, 0, 100 );
camera.lookAt( 0, 0, 0 );
const renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );
//create a blue LineBasicMaterial
const material = new THREE.LineBasicMaterial( { color: 0x0000ff } );
const points = [];
points.push( new THREE.Vector3( - 10, 0, 0 ) );
points.push( new THREE.Vector3( 0, 10, 0 ) );
points.push( new THREE.Vector3( 10, 0, 0 ) );
const geometry = new THREE.BufferGeometry().setFromPoints( points );
const line = new THREE.Line( geometry, material );
scene.add( line );
renderer.render( scene, camera );
</script>
</body>
</html>
Let's go over the commands used in the creation of lines, as suggested by the three.js documentation, one by one.
First line: the command
const scene = new THREE.Scene()
It says "create scene" but what it really does is to create a 3D space like as shown in the Picture 1.
Second line: the command
const camera = new THREE.PerspectiveCamera( 45, window.innerWidth / window.innerHeight, 1, 500 );
It says create a camera, given the field of view, the aspect ratio, the distance from the camera to the near viewing plane, and the distance from the camera to the far viewing plane, as shown in Picture 2.
Third line: the command
camera.position.set( 0, 0, 100 );
It says position the camera on the z-axis, as shown in Picture 4. I assume that the orientation of the camera is always parallel to the x-axis.
Fourth line: the command
camera.lookAt( 0, 0, 0 );
It says orient the camera in the direction of the point (0, 0, 0) as shown in Picture 5.
Fifth line: the command
const renderer = new THREE.WebGLRenderer();
It says the WebGL renderer shall be ready for when it is later summoned by calling its name, as shown in Picture 6.
Sixth line: the command
renderer.setSize( window.innerWidth, window.innerHeight );
It says the window the user will see through will be adjusted by the renderer. This window is your computer screen: whatever its size, the renderer will adjust accordingly, as shown in Picture 7.
Seventh line: the command
document.body.appendChild( renderer.domElement );
It says the renderer now exists in the 3D space, as shown in Picture 8.
Eighth line: the command
const material = new THREE.LineBasicMaterial( { color: 0x0000ff } );
It says we have to set the properties of the future line before actually drawing it. In this case, the future line will have the color blue, as shown in Picture 9.
Ninth line: the command
const points = [];
I don't know the purpose of the array in this context. I guess whatever is inside the array will become "real", something that can be put inside the viewing frustum of the camera, where it will be rendered in the near future, as shown in Picture 10.
Tenth line: the command
points.push( new THREE.Vector3( - 10, 0, 0 ) );
points.push( new THREE.Vector3( 0, 10, 0 ) );
points.push( new THREE.Vector3( 10, 0, 0 ) );
It says position the points specified by the Vector3 command, and these points will be pushed into the array as three points positioned in the 3D space. The points don't appear yet because the renderer hasn't been summoned yet, as shown in Picture 11.
Eleventh line: the command
const geometry = new THREE.BufferGeometry().setFromPoints( points );
It says the points positioned by Vector3 will be converted into renderable form by the BufferGeometry, because a BufferGeometry is a representation of mesh, line, or point geometry, as shown in Picture 12.
Twelfth line: the command
const line = new THREE.Line( geometry, material );
It says the line will be created based on the geometry and material set beforehand, as shown in Picture 13.
Thirteenth line: the command
scene.add( line );
It says the line has been added to the 3D space inside the viewing frustum of the camera, as shown in Picture 14.
Fourteenth line: the command
renderer.render( scene, camera );
It says the renderer has been ordered to render the scene and the camera. But I wonder why the command isn't
renderer.render( scene, camera, line );
....
The final output looked like this:
My question is:
Is any of what I've said correct?
Thank you! I'm open to learning and dispelling myths surrounding the commands used in this example.

You've made a few wrong assumptions:
camera.position.set( 0, 0, 100 ); puts the camera at 100 units along the z-axis, looking parallel to the z-axis because it hasn't been rotated.
document.body.appendChild( renderer.domElement ); adds the <canvas> element to your HTML document. It's one of the few commands that have nothing to do with 3D space.
renderer.render(scene, camera) renders everything that's been added to the scene. You already added the line with scene.add(line), so there's no reason to specifically target line again.
Some of your screenshots use different axis systems. To get acquainted with the three.js/WebGL coordinate system, I recommend you visit the three.js editor and add a camera with Add > PerspectiveCamera (near the bottom). You can then modify its position attributes to see what the axes do. Also keep an eye on the axes widget in the corner:
x-axis: +right / -left
y-axis: +up / -down
z-axis: +toward user / - away
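If you prefer to see those axes inside the tutorial code itself rather than in the editor, a minimal sketch (reusing the scene, camera and renderer from the question) is to add an AxesHelper before rendering:
const axes = new THREE.AxesHelper( 20 ); // red = x, green = y, blue = z
scene.add( axes );
renderer.render( scene, camera );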

Bravo! I wish I had this when I was first learning!
To understand the 9th and 10th lines, I think it's best to understand a bit of history...
At one time, you were able to use a Geometry object:
let geometry = new THREE.Geometry()
geometry.vertices.push(new THREE.Vector3(-10, 0, 0))
geometry.vertices.push(new THREE.Vector3(0, 10, 0))
geometry.vertices.push(new THREE.Vector3(10, 0, 0))
So you're adding these points to the vertices attribute of the geometry, which looks like this:
[{x:-10, y:0, z:0}, {x:0, y:10, z:0}, {x:10, y:0, z:0}]
Eventually they did away with Geometry, so now you're supposed to use a BufferGeometry. But with a BufferGeometry, there is no longer a vertices attribute.
So what do we do...?
Well, now we create a points array, and use the setFromPoints function to basically apply these coordinates to your geometry object:
const points = [];
points.push(new THREE.Vector3(-10, 0, 0));
points.push(new THREE.Vector3(0, 10, 0));
points.push(new THREE.Vector3(10, 0, 0));
let geometry = new THREE.BufferGeometry().setFromPoints( points );
What this does is set a position attribute backed by a Float32Array...
geometry.attributes.position.array
Which looks like this:
[ -10, 0, 0, 0, 10, 0, 10, 0, 0 ]
And if you do:
geometry.attributes.position.count
You get: 3. Three points.
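For reference, setFromPoints is just a convenience. A rough sketch of the equivalent explicit setup for the same three points (using the current setAttribute API, which older releases called addAttribute) would be:
const positions = new Float32Array( [ -10, 0, 0, 0, 10, 0, 10, 0, 0 ] );
const geometry = new THREE.BufferGeometry();
geometry.setAttribute( 'position', new THREE.BufferAttribute( positions, 3 ) ); // 3 values per point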
Hope it helps :)

Related

three.js make a cutting plane visible

In this demo:
https://threejs.org/examples/?q=clipping#webgl_clipping_advanced
if you enable the "visualize" option, you can see the 3d pyramid "cutting" the inside object.
Here:
https://threejs.org/examples/?q=clipping#webgl_clipping
there is a simple 2d plane cutting the object, but there is no such option to "see" the plane. I just started learning threejs and I am not too familiar with any 3d engine (other than fully understanding the math behind it), so I tried some basic stuff, e.g.:
localPlane.visible = true
But of course it didn't work. Any 'simple' way to make the second demo display the cutting plane?
Thank you
Here's some code to add the plane in the position you want:
const planeGeometry = new THREE.PlaneGeometry( 1.5, 1.5 );
const planeMaterial = new THREE.MeshBasicMaterial( {color: 0xffff00, side: THREE.DoubleSide} );
const plane = new THREE.Mesh( planeGeometry, planeMaterial );
plane.position.copy(localPlane.normal);
plane.position.multiplyScalar(-localPlane.constant);
plane.quaternion.setFromUnitVectors(new THREE.Vector3(0, 0, 1), localPlane.normal);
plane.material.opacity = 0.5;
plane.material.transparent = true;
scene.add( plane );
And here's what that looks like...
However, depending on what you are implementing, you may find it easier to create a Plane Mesh in the position you want, and then derive the clipping plane THREE.Plane from that.
If you want to be able to move the clipping plane around, the reasoning involved in moving a Plane Mesh Object3D is probably more straightforward than reasoning about moving a THREE.Plane.
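A rough sketch of that reverse approach, assuming planeMesh is a THREE.Mesh built from a PlaneGeometry (which faces +Z before any rotation) and that you clip globally via renderer.clippingPlanes (the demo itself uses per-material clippingPlanes):
const normal = new THREE.Vector3( 0, 0, 1 ).applyQuaternion( planeMesh.quaternion );
const clipPlane = new THREE.Plane().setFromNormalAndCoplanarPoint( normal, planeMesh.position );
renderer.clippingPlanes = [ clipPlane ];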
(update)
Another approach I've come across: you could simply use the built-in THREE.PlaneHelper, which can visualize any THREE.Plane.
https://threejs.org/docs/#api/en/helpers/PlaneHelper
The sample code offered on that page is this:
const plane = new THREE.Plane( new THREE.Vector3( 1, 1, 0.2 ), 3 );
const helper = new THREE.PlaneHelper( plane, 1, 0xffff00 );
scene.add( helper );
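Applied to the second demo, that could be as simple as the following sketch, assuming localPlane is the clipping THREE.Plane from that example:
scene.add( new THREE.PlaneHelper( localPlane, 2, 0x00ff00 ) );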

Set camera left/right

I have a three.js animation of a person running. I have embedded this in an iFrame on my site, but the character runs off the screen.
I am very happy with the positioning and the camera angle, I just need to move it right so that the character is centred in the iFrame.
Below is the code I am using.
scene = new THREE.Scene();
camera = new THREE.PerspectiveCamera(30, window.innerWidth / window.innerHeight, 1, 4000);
camera.position.set(0, 150, 50);
camera.position.z = cz;
camera.zoom = 3.5;
camera.updateProjectionMatrix();
scene.add(camera);
You could use the camera.lookAt() method, which will point the camera towards the desired position.
// You could set a constant vector
var targetPos = new THREE.Vector3(50, 25, 0);
camera.lookAt(targetPos);
// You could also do it in the animation loop
// if the position will change on each frame
function update() {
person.position.x += 0.5;
camera.lookAt(person.position);
renderer.render(scene, camera);
}
I feel like the lookAt() method wouldn't work. It will just rotate the camera, and you specified you like the camera placement/angle.
If you want to move the camera to the right along with your model, set the camera's position.x equal to the model's x position every frame (assuming left/right is still the X-axis).
person.position.x += 0.5;
camera.position.x = person.position.x;
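Put together in an animation loop, a minimal sketch (assuming person, scene, camera and renderer already exist, as in the other answer's snippet) could look like this:
function animate() {
  requestAnimationFrame( animate );
  person.position.x += 0.5;
  camera.position.x = person.position.x; // keep the original angle, just track sideways
  renderer.render( scene, camera );
}
animate();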
Alternatively, you could keep the object and camera static and move the ground plane. Or even have a rotating cylinder with a big enough radius flipped on its side.

Three js raycaster WITHOUT camera

I only seem to find examples that use the raycaster with the camera, but none that just cast a ray from point A to point B.
I have a working raycaster; it retrieves my helpers, lines, etc., but it seems it does not recognize my sphere.
My first thought was that my points were off, so I decided to create a line from my pointA to my pointB with a direction, like so:
var pointA = new Vector3( 50, 0, 0 );
var direction = new Vector3( 0, 1, 0 );
direction.normalize();
var distance = 100;
var pointB = new Vector3();
pointB.addVectors ( pointA, direction.multiplyScalar( distance ) );
var geometry = new Geometry();
geometry.vertices.push( pointA );
geometry.vertices.push( pointB );
var material = new LineBasicMaterial( { color : 0xff0000 } );
var line = new Line( geometry, material );
This will show a line from my point (50, 0, 0) to (50, 100, 0), right through my sphere, which is at point (50, 50, 0), so my pointA and direction values are correct.
Next I add a raycaster. To avoid conflicts with any side effects, I recreated my points here:
var raycaster = new Raycaster(new Vector3( 50, 0, 0 ), new Vector3( 0, 1, 0 ).normalize());
var intersects = raycaster.intersectObject(target);
console.log(intersects);
Seems pretty straightforward to me. I also tried using raycaster.intersectObjects(scene.children), but it gives the lines, helpers, etc., and not my sphere.
What am I doing wrong? I am surely missing something here.
IMG of the line and the sphere:
What you see is explained in the following github issue:
https://github.com/mrdoob/three.js/issues/11449
The problem is that the ray emitted from THREE.Raycaster does not directly hit a face but one of its vertices, which results in no intersection.
There are several workarounds for this issue, e.g. slightly shifting the geometry or the ray. For your case:
var raycaster = new THREE.Raycaster( new THREE.Vector3( 50, 0, 0 ), new THREE.Vector3( 0, 1, 0.01 ).normalize() );
However, a better solution is to fix the engine and make the test more robust.
Demo: https://jsfiddle.net/kzwmoug2/3/
three.js R106
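As an aside, if the ray should only run from point A to point B rather than to infinity, you can limit the raycaster's far value to the distance between the two points. A minimal sketch, reusing the points from the question (target being the sphere):
const pointA = new THREE.Vector3( 50, 0, 0 );
const pointB = new THREE.Vector3( 50, 100, 0 );
const direction = new THREE.Vector3().subVectors( pointB, pointA ).normalize();
const raycaster = new THREE.Raycaster( pointA, direction, 0, pointA.distanceTo( pointB ) );
const intersects = raycaster.intersectObject( target );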

Positioning Objects at the same Elevation

I have an issue with the position of cubes in my application. When I give them all the same size, they are rendered properly at the same Y position, as I defined:
Example:
geometry = new THREE.BoxGeometry(50, 50, 50);
material = new THREE.MeshBasicMaterial({ color: 0xff0000 })
mesh = new THREE.Mesh(geometry, material);
mesh.position.set(100, 0, 400); // I always set y as 0 because I want the cubes to be on the same level like buildings in a city
And I do the same for the next cubes, only changing the X and Z positions.
However, when I create cubes with different sizes, which is my objective, as follows,
geometry = new THREE.BoxGeometry(50, 100, 50);
they appear at a different level in the final visualization in the browser, as the image shows:
https://cloud.githubusercontent.com/assets/3678443/8651664/35574c18-2972-11e5-8c75-2612733ea595.png
Any ideas on how to solve this problem? What am I doing wrong?
BoxGeometry is centered on the origin. There are two ways to translate the box so it sits on the XZ-plane.
Option 1. Translate the geometry so the bottom face of the box passes through the origin. You do that by translating the geometry up by half its height.
geometry = new THREE.BoxGeometry( 50, 50, 50 );
geometry.translate( 0, 50 / 2, 0 );
mesh = new THREE.Mesh( geometry, material );
mesh.position.set( 100, 0, 400 );
Option 2. Translate the mesh by setting its position.
geometry = new THREE.BoxGeometry( 50, 50, 50 );
mesh = new THREE.Mesh( geometry, material );
mesh.position.set( 100, 50 / 2, 400 );
The first option is likely preferable for your use case.
three.js r.92
The position of the objects is correct: they are placed where their centers are. So your cube with a height of 100 extends 50 to the top and 50 to the bottom; its centroid is right in its "middle" at 0.
You could set the y position of each cube to y + cube.geometry.parameters.height / 2 so every cube is aligned at one level (variable y).
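A minimal sketch of that, assuming y is the common ground level the cubes should sit on:
const y = 0; // common ground level
const geometry = new THREE.BoxGeometry( 50, 100, 50 );
const material = new THREE.MeshBasicMaterial( { color: 0xff0000 } );
const mesh = new THREE.Mesh( geometry, material );
mesh.position.set( 100, y + mesh.geometry.parameters.height / 2, 400 );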

three.js: rotational matrix to place THREE.group along new axis

(Please also refer to my illustration of the problem: http://i.stack.imgur.com/SfwwP.png)
problem description and ideas
I am creating several objects in the standard XYZ coordinate system.
Those are added to a THREE.group.
Please think of the group as a wall with several frames and images hung on it.
I want to create my frame objects with, e.g., dimensions of (40, 20, 0.5), so I get a rather flat, landscape-format frame/artwork. I create and place several of those. Then I add them to the group, which I want to freely rotate in the world along two vectors, start and end.
The problem I am struggling with is how to rotate and position the group from a given vector start to a given vector end.
So far I have tried to solve it with a THREE.Matrix4().lookAt:
var group = new THREE.Group();
startVec = new THREE.Vector3( 100, 0, -100 );
endVec = new THREE.Vector3( -200, 0, 200 );
matrix = new THREE.Matrix4().lookAt(startVec, endVec, new THREE.Vector3( 0, 1, 0 ));
group.matrixAutoUpdate = false;
var object1 = new THREE.Mesh(new THREE.BoxGeometry(0.5, 20, 40), mat);
// etc. -> notice the swapping of X and Z coordinates I have to do.
group.add(object1);
group.applyMatrix(matrix);
You can see the example on jsfiddle:
http://jsfiddle.net/y6b9Lumw/1/
If you open the jsfiddle, you can see that the objects are not placed along the line from start to end, although they are placed along the group's internal X-axis like: addBox(new THREE.Vector3( i * 30, 0 , 0 ));
Full code:
<html>
<head>
<title>testing a rotation matrix</title>
<style>body { margin: 0; } canvas { width: 100%; height: 100% } </style>
</head> <body>
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/r70/three.min.js"></script>
<script>
var scene, camera, renderer, light, matrix;
var startVec, endVec;
var boxes;
function addBox(v) {
var boxmesh;
var boxgeom = new THREE.BoxGeometry( 15, 5, 1 );
var boxmaterial = new THREE.MeshLambertMaterial( {color: 0xdd2222} );
boxmesh = new THREE.Mesh( boxgeom, boxmaterial );
//boxmesh.matrix.makeRotationY(Math.PI / 2);
boxmesh.matrix.setPosition(v);
boxmesh.matrixAutoUpdate = false;
boxes.add(boxmesh);
}
function init() {
renderer = new THREE.WebGLRenderer();
renderer.setClearColor( 0x222222 );
renderer.setPixelRatio( window.devicePixelRatio );
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );
scene = new THREE.Scene();
camera = new THREE.PerspectiveCamera( 45, window.innerWidth / window.innerHeight, 1, 1000 );
camera.position.set( -20, 30, 300 );
var light = new THREE.PointLight (0xCCCCCC, 0.5 );
scene.add(light);
startVec = new THREE.Vector3( 100, 0, -100 );
endVec = new THREE.Vector3( -200, 0, 200 );
matrix = new THREE.Matrix4().lookAt(startVec, endVec, new THREE.Vector3( 0, 1, 0 ));
boxes = new THREE.Group();
for (var i = -100; i < 100; i++) {
addBox(new THREE.Vector3( i * 30, 0 , 0 ));
}
boxes.matrixAutoUpdate = false;
boxes.applyMatrix(matrix);
scene.add(boxes);
var linegeometry = new THREE.Geometry();
linegeometry.vertices.push( startVec, endVec);
var line = new THREE.Line(linegeometry, new THREE.LineBasicMaterial({color: 0x33eeef}));
scene.add(line);
render();
}
function render(){
requestAnimationFrame(render);
renderer.render(scene, camera);
}
init();
</script>
</body> </html>
This only works nicely to some extent, as the look vector is oriented along the Z-axis (I think it is (0, 0, 1)), so unfortunately the objects inside the group get rotated like that as well.
This is actually what you would expect from a lookAt() rotational transformation. It's not what I would like to have, though, as this places all the children in the group along their Z-axis instead of their X-axis.
In order to have things look right, I had to initialize my group's children with X and Z swapped.
Instead of:
var object1 = new THREE.BoxGeometry( 40, 20, 0.5 );
I have to do:
var object1 = new THREE.BoxGeometry( 0.5, 20, 40 );
The same goes for translation: if I want to translate objects in the group along the X-axis, I have to use the Z-axis, as that is the look vector along which the whole wall is oriented by the matrix transformation.
my question is:
How does my matrix have to be constructed to accomplish what I want: create objects normally, and then have their X-axis placed along the line from vector start to vector end, like placing artworks on a wall that can be moved around?
I thought about creating a matrix whose X-axis is end.sub(start), i.e. the vector from start to end. Might this be what I need to do? If so, how would I construct it?
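For reference, one way such a matrix could be built is with THREE.Matrix4.makeBasis, using the normalized start-to-end vector as the X-axis. This is a sketch only, reusing startVec, endVec and boxes from the code above, assuming the wall direction is not vertical (the answer below ends up taking a different route):
var xAxis = new THREE.Vector3().subVectors( endVec, startVec ).normalize();
var yAxis = new THREE.Vector3( 0, 1, 0 );
var zAxis = new THREE.Vector3().crossVectors( xAxis, yAxis ).normalize();
yAxis = new THREE.Vector3().crossVectors( zAxis, xAxis ); // re-orthogonalize the up vector
var matrix = new THREE.Matrix4().makeBasis( xAxis, yAxis, zAxis );
matrix.setPosition( startVec );
boxes.applyMatrix( matrix ); // applyMatrix4 in current three.js releases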
problem illustration with an image
I tried to illustrate my situation in two images: one being the wall, the other being the wall inside the world, with the same objects attached to the wall (see top of the post).
In the first figure you see the local coordinate system of the group, with two added children, one translated along X.
In the second figure, you can see the same local system inside the world, as I would like it to be. The green axes are the world axes. The start and end vectors are shown as well. You can see that both boxes are properly placed along that line.
I would like to answer my own question by disregarding the idea of manipulating the matrix myself. Thanks to @WestLangley, I adapted my idea to the following, setting the group's quaternion via .setFromUnitVectors.
So the rotation is derived from the rotation from the x-axis to the direction vector from start to end, as explained in the three.js documentation:
"Sets this quaternion to the rotation required to rotate direction vector vFrom to direction vector vTo."
(http://threejs.org/docs/#Reference/Math/Quaternion.setFromUnitVectors)
Below is the relevant part of my solution:
// define the starting and ending vector of the wall
start = new THREE.Vector3( -130, -40, 300 );
end = new THREE.Vector3( 60, 20, -100 );
// dir is the direction from start to end, normalized
var dir = new THREE.Vector3().copy(end).sub(start).normalize();
// position wall in the middle of start and end
var middle = new THREE.Vector3().copy(start).lerp(end, 0.5);
wall.position.copy(middle);
// rotate wall by applying rotation from X-Axis to dir
wall.quaternion.setFromUnitVectors( new THREE.Vector3(1, 0, 0), dir );
The result can be seen here: http://jsfiddle.net/L9dmqqvy/1/
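With the group oriented this way, children can be authored in the wall's own frame, with their width along the local X-axis and no axis swapping. A small sketch, assuming mat is any material and wall is the group from above:
var frame = new THREE.Mesh( new THREE.BoxGeometry( 40, 20, 0.5 ), mat );
frame.position.set( 30, 0, 0 ); // 30 units along the wall's local X-axis
wall.add( frame );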

Resources