Three.js - Arranging cubes in a grid

I would like to position cubes in a rectangular/square grid. I'm having trouble working out a method where, depending on what I pick through an HTML form input (checkboxes), a series of cubes is arranged left to right and top to bottom in a prearranged grid, all on the same plane.
What measurement units does three.js use? Right now I'm setting up my shapes using the built-in geometries, for instance:
var planeGeometry = new THREE.PlaneGeometry(4, 1, 1, 1);
The 4 and the 1: I'm not sure what those measure up to in pixels, although I do see the plane rendered. I'm resorting to eyeballing it (guessing and checking) every time so that it looks acceptable.

Three.js is not measured in pixels; its coordinates are arbitrary world units, and mapping them to pixels takes a fair bit of extra math.
To make a simple grid (I leave optimizations, colors, etc for future refinements) try something like:
var hCount = from_my_web_form('horiz'),
    vCount = from_my_web_form('vert'),
    size = 1,
    spacing = 1.3;

var grid = new THREE.Object3D(); // just to hold them all together
for (var h = 0; h < hCount; h += 1) {
    for (var v = 0; v < vCount; v += 1) {
        var box = new THREE.Mesh(new THREE.BoxGeometry(size, size, size),
                                 new THREE.MeshBasicMaterial());
        box.position.x = (h - hCount / 2) * spacing; // center the grid around the origin
        box.position.y = (v - vCount / 2) * spacing;
        grid.add(box);
    }
}
scene.add(grid);
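The from_my_web_form calls above are placeholders. As a usage sketch, wiring real form inputs to the grid could look like the following; the #horiz, #vert and #rebuild ids and the buildGrid wrapper are made up for illustration, with buildGrid assumed to wrap the loop above and return the group:
// Hypothetical wiring: #horiz and #vert are number inputs, #rebuild is a button
document.querySelector('#rebuild').addEventListener('click', function () {
    var hCount = parseInt(document.querySelector('#horiz').value, 10) || 1;
    var vCount = parseInt(document.querySelector('#vert').value, 10) || 1;
    scene.remove(grid);                 // drop the previous grid
    grid = buildGrid(hCount, vCount);   // rebuild the group with the new counts
    scene.add(grid);
});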

Related

3D three.js Create the ground surface of a 3D building

Following my post last week, three.js How to programatically produce a plane from dataset, I come back to the community to solve a problem: defining the ground surface occupied by a 3D building.
The solution proposed in the comments on that post works for this building but is not universal.
To make it universal I chose the following method: when the walls are built, I create a clone of each in another group (see the previous post for wall creation):
// prepare the clones
var clones = new THREE.Group();
scene.add(clones);
var num = 0;

// drawing the real walls
var wallGeometry = new THREE.PlaneGeometry(size, (hstair * batims[i][1]));
val = 0xFFFFFF;
opa = 0.5;
if (deltaX > deltaY) { val = 0x000000; opa = 0.05; } // shaded wall
var wallMaterial = new THREE.MeshBasicMaterial({ color: val, transparent: true, opacity: opa, side: THREE.DoubleSide });
var walls = new THREE.Mesh(wallGeometry, wallMaterial);
walls.position.set((startleft + endleft) / 2, (hstair * batims[i][1]) / 2, (startop + endtop) / 2);
walls.rotation.y = -rads;
scene.add(walls);

// add the pseudo-walls to the clones group
var cloneGeometry = new THREE.PlaneGeometry(long, 3);
var cloneMaterial = new THREE.MeshBasicMaterial({ color: 0xff0000, transparent: true, opacity: 0.5, side: THREE.DoubleSide });
var clone = new THREE.Mesh(cloneGeometry, cloneMaterial);
clone.position.set((startleft + endleft) / 2, 3, (startop + endtop) / 2);
clone.rotation.y = -rads;
clones.add(clone);
num++;
The idea is now to rotate this pseudo-building so that the longest wall is vertical, which allows me to determine the exact floor area occupied with its boundingBox:
var angle = turn = 0;
for (i = 0; i < dists.length; i++) {   // dists is the array of wall lengths
    if (dists[i] == longs[0]) {        // longs is the reordered lengths array
        angle = angles[i][1];          // angle of the longest wall
    }
}
// we can now rotate the whole group to put the longest wall vertical
if (angle > 0) {
    turn = angle * -1 + (Math.PI / 2);
} else {
    turn = angle + (Math.PI / 2);
}
clones.rotation.y = turn;
It works perfectly as long as the building has a right angle, whatever its shape: triangle, rectangle, bevel, right-angled polygons...
var boundingBox = new THREE.Box3().setFromObject(clones);
var thisarea = boundingBox.getSize(); // note: recent three.js versions expect a target, e.g. getSize(new THREE.Vector3())
// the area size gives the expected result
console.log('AREA SIZE = ' + thisarea.x + ' ' + thisarea.y + ' ' + thisarea.z);
...but not when there are no right angles at all, for example a trapezoid.
The reason is that we rotate the group, not the individual cloned walls. I can access and rotate each wall with:
for (n = 0; n < num; n++) {
    thisangle = clones.children[n].rotation.y;
    clones.children[n].rotation.y = turn - thisangle;
}
But the result is wrong for the other pseudo-walls:
So the question is: how to turn each red pseudo-wall so that the longest one is vertical and the others remain correctly positioned in relation to it? In this way, any building with any shape can be reproduced in 3D with its internal equipment. Any idea on how to achieve this result?
A weird and ugly but working solution:
// 1. determine which is the longest side
for (i = 0; i < dists.length; i++) {
    if (dists[i] == longs[0]) {
        longest = i;
        break; // avoid 2 matches if the building is a rectangle
    }
}
// 2. the group is rotated until the longest side has an angle in degrees
//    close to 0 or 180
var letsturn = setInterval(function() {
    clones.rotation.y += 0.01;
    var group_rotation = THREE.Math.radToDeg(clones.rotation.y); // degrees
    var stop = Math.round(angles[longest][0] - group_rotation);
    // 3. stop when the longest wall is vertical
    if ((stop >= 179 && stop <= 181) || (stop >= -1 && stop <= 1)) {
        clearInterval(letsturn);
        createPlane(); // we can now use the boundingBox reliably
    }
}, 1);
et voilà.
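For reference, the same stopping condition could also be solved directly instead of being animated. This is only a sketch derived from the interval code above (untested against the real data), assuming angles[longest][0] holds the longest wall's angle in degrees as it does there:
// set the group rotation so that the longest wall is vertical in one step
clones.rotation.y = THREE.Math.degToRad(angles[longest][0]);
createPlane(); // the boundingBox is reliable immediately, no interval needed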

Three.js memory allocation & workflow question

Let’s say I want to make 100 objects - for example cars, like the one you see here:
This car is currently composed of 5 meshes: one yellow Cube and four blue Spheres.
What I’d like to know is what would be the most efficient/correct way to make 100 of these cars - or maybe 500 - in terms of memory management/ CPU performance, etc.
The way I’m currently going about doing this is as follows:
Make an empty THREE.Group called "newCarGroup"
Create the yellow rectangular Mesh for the body of the car, called "carBodyMesh"
Create four blue Sphere Meshes for the tires, called "tire1Mesh", "tire2Mesh", "tire3Mesh", and "tire4Mesh"
Add the body and the four tires to "newCarGroup"
And finally, in a FOR loop, create/instantiate 100 "newCarGroup" objects, adding each one to the SCENE at a random position
The code is below.
It's working perfectly well right now, but I’d like to know if this is the “proper”/best way to do this?
Consider it’s possible I might end up needing 1,000 cars - or 5,000 cars. So will this scale properly?
Also, I need to add more objects to the car: like 4 windows - actually make that 6 windows, to also include the front and back windshields, then four headlights, etc.
So the final Car Object alone may end up being comprised of 20 meshes - or more.
Being that I’m kinda new to THREE.JS I wanna make sure I develop good habits and go about this sort of thing the right way.
Here’s my code:
function makeOneCar() {
    var newCarGroup = new THREE.Group();

    // 1. CAR BODY:
    const bodyGeometry = new THREE.BoxGeometry(30, 10, 10);
    const bodyMaterial = new THREE.MeshPhongMaterial({ color: "yellow" });
    const carBodyMesh = new THREE.Mesh(bodyGeometry, bodyMaterial);

    // 2. TIRES:
    const tireGeometry = new THREE.SphereGeometry(2, 16, 16);
    const tireMaterial = new THREE.MeshPhongMaterial({ color: "blue" });
    const tire1Mesh = new THREE.Mesh(tireGeometry, tireMaterial);
    const tire2Mesh = new THREE.Mesh(tireGeometry, tireMaterial);
    const tire3Mesh = new THREE.Mesh(tireGeometry, tireMaterial);
    const tire4Mesh = new THREE.Mesh(tireGeometry, tireMaterial);

    // TIRE 1 Position:
    tire1Mesh.position.x = carBodyMesh.position.x - 11;
    tire1Mesh.position.y = carBodyMesh.position.y - 4.15;
    tire1Mesh.position.z = carBodyMesh.position.z + 4.5;

    // TIRE 2 Position:
    tire2Mesh.position.x = carBodyMesh.position.x + 11;
    tire2Mesh.position.y = carBodyMesh.position.y - 4.15;
    tire2Mesh.position.z = carBodyMesh.position.z + 4.5;

    // TIRE 3 Position:
    tire3Mesh.position.x = carBodyMesh.position.x - 11;
    tire3Mesh.position.y = carBodyMesh.position.y - 4.15;
    tire3Mesh.position.z = carBodyMesh.position.z - 4.5;

    // TIRE 4 Position:
    tire4Mesh.position.x = carBodyMesh.position.x + 11;
    tire4Mesh.position.y = carBodyMesh.position.y - 4.15;
    tire4Mesh.position.z = carBodyMesh.position.z - 4.5;

    // Putting it all together:
    newCarGroup.add(carBodyMesh);
    newCarGroup.add(tire1Mesh);
    newCarGroup.add(tire2Mesh);
    newCarGroup.add(tire3Mesh);
    newCarGroup.add(tire4Mesh);

    // Setting (x, y, z) coordinates randomly
    let randy = Math.floor(Math.random() * 10);
    let newCarGroupX = randy % 2 == 0 ? Math.random() * 250 : Math.random() * -250;
    let newCarGroupY = 0.0;
    let newCarGroupZ = randy % 2 == 0 ? Math.random() * 250 : Math.random() * -250;
    newCarGroup.position.set(newCarGroupX, newCarGroupY, newCarGroupZ);

    scene.add(newCarGroup);
}

function makeCars() {
    for (var carCount = 0; carCount < 100; carCount++) {
        makeOneCar();
    }
}
I’d like to know if this is the “proper”/best way to do this?
This is subjective. You say the method works great for your current use-case, so for that use-case, it is fine.
So will this scale properly?
The simple answer is: No. The more complex answer is: ...not really.
You're re-using the geometry and materials, which is good. But every Mesh you create has meta information surrounding it, which adds to your overall memory footprint.
Also, every standard Mesh you add incurs what is known as a "draw call": a separate command instructing the GPU to draw that particular shape. Instead, take a look at InstancedMesh. This allows the GPU to be given the instructions for drawing the shape throughout the scene just once. Rather than drawing each cube individually, the GPU can draw all the cubes at the same time, and they can even have different colors and transformations. There are limitations to this class, but it's a good starting point for understanding how instancing works.
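As a rough sketch of what that could look like for the car example (this is not the original code; the counts, tire offsets, and random placement below are simply carried over from the question as assumptions):
// Two InstancedMeshes replace thousands of individual Meshes:
// one for all car bodies, one for all tires.
const CAR_COUNT = 1000;

const bodyGeometry = new THREE.BoxGeometry(30, 10, 10);
const bodyMaterial = new THREE.MeshPhongMaterial({ color: "yellow" });
const bodies = new THREE.InstancedMesh(bodyGeometry, bodyMaterial, CAR_COUNT);

const tireGeometry = new THREE.SphereGeometry(2, 16, 16);
const tireMaterial = new THREE.MeshPhongMaterial({ color: "blue" });
const tires = new THREE.InstancedMesh(tireGeometry, tireMaterial, CAR_COUNT * 4);

const dummy = new THREE.Object3D(); // scratch object used to build each instance matrix
const tireOffsets = [
    new THREE.Vector3(-11, -4.15, 4.5),
    new THREE.Vector3(11, -4.15, 4.5),
    new THREE.Vector3(-11, -4.15, -4.5),
    new THREE.Vector3(11, -4.15, -4.5),
];

for (let i = 0; i < CAR_COUNT; i++) {
    const x = (Math.random() - 0.5) * 500;
    const z = (Math.random() - 0.5) * 500;

    dummy.position.set(x, 0, z);
    dummy.updateMatrix();
    bodies.setMatrixAt(i, dummy.matrix);

    for (let t = 0; t < 4; t++) {
        dummy.position.set(x + tireOffsets[t].x, tireOffsets[t].y, z + tireOffsets[t].z);
        dummy.updateMatrix();
        tires.setMatrixAt(i * 4 + t, dummy.matrix);
    }
}

bodies.instanceMatrix.needsUpdate = true;
tires.instanceMatrix.needsUpdate = true;
scene.add(bodies, tires);
This way the whole fleet costs two draw calls (one per geometry/material pair) regardless of how many cars there are.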

D3 Donut chart projected to sphere/globe

I want to use d3 for the following task:
display a rotating globe with a donut chart at the center of every country. It should be possible to interact with the globe (select a country, zoom, rotate).
It seems d3 provides an easy way to implement every part of this, but I cannot get the donut part working as I need.
There is an easy way to draw a donut chart with the help of d3.arc:
var arc = d3.arc();
var data = [3, 23, 17, 35, 4];
var radius = 15 / scale;
var _arc = arc.innerRadius(radius - 7 / scale)
    .outerRadius(radius)
    .context(donutsContext);
var pieData = pie(data); // pie = d3.pie(), donutsContext = canvas 2d context
for (var i = 0; i < pieData.length; i++) {
    donutsContext.beginPath();
    donutsContext.fillStyle = color(i);
    _arc(pieData[i]);
    donutsContext.fill();
}
But with the code as it is, the donuts are displayed on a plane on top of the globe, like:
globe with donut
while I want them to be 'wrapped' around the globe.
There is the d3.geoCircle method, which can be projected onto the globe correctly. I got a 'ring' projected correctly onto the globe with the help of two circles:
var circle = d3.geoCircle()
    .center(centroid)
    .radius(2);
var outerCircle = circle();

var circle = d3.geoCircle()
    .center(centroid)
    .radius(1);
var innerCircle = circle();

var interCircleCoordinates = [];
for (var i = innerCircle.coordinates[0].length - 1; i >= 0; i--) {
    interCircleCoordinates.push(innerCircle.coordinates[0][i]);
}
outerCircle.coordinates.push(interCircleCoordinates);
globe with rings
but I really need to get a donut.
The other way I tried is generating an image from the donuts and wrapping this image around the globe with the help of pixel manipulation:
var image = new Image;
image.onload = onload;
image.src = img;

function onload() {
    window.dx = image.width;
    window.dy = image.height;
    context.drawImage(image, 0, 0, dx, dy);
    sourceData = context.getImageData(0, 0, dx, dy).data;
    target = context.createImageData(width, height);
    targetData = target.data;
    for (var y = 0, i = -1; y < height; ++y) {
        for (var x = 0; x < width; ++x) {
            var p = projection.invert([x, y]), λ = p[0], φ = p[1];
            if (λ > 180 || λ < -180 || φ > 90 || φ < -90) { i += 4; continue; }
            var q = ((90 - φ) / 180 * dy | 0) * dx + ((180 + λ) / 360 * dx | 0) << 2;
            var r = sourceData[q];
            var g = sourceData[++q];
            var b = sourceData[++q];
            targetData[++i] = r;
            targetData[++i] = g;
            targetData[++i] = b;
            targetData[++i] = 125; // semi-transparent alpha
        }
    }
    context.clearRect(0, 0, width, height);
    context.putImageData(target, 0, 0);
};
But this way, rotating and interacting with the globe become extremely slow at the globe size I need (1000px).
So my questions are:
Is there some way to project donuts that are generated with the help of d3.arc onto a sphere (globe, orthographic projection)?
Is there some way to get a donut from geoCircle?
Maybe there is some other way to achieve my goal that I do not see?
There is one way that comes to mind to display donuts on a globe. The key challenge is that d3 doesn't project three dimensional objects very well - with one exception, geographic features. Consequently, an "easy" solution is to convert your pie charts into geographic features and project them with the rest of your features.
To do this you need to:
Use a pie/donut generator as you normally would
Go along the paths generated to get points approximating the pie shape.
Convert the points to long/lat points
Assemble those points into geojson
Project them onto the map.
The first point is easy enough, just make a pie chart with an inner radius.
Now you have to select each path and find points along its perimeter using path.getPointAtLength(). This will depend on path length, so path.getTotalLength() will be handy (and corners are important, so you might want to incorporate a little extra logic for those corner cases to ensure you capture them).
Once you have the points, you need the use of a second projection, azimuthal equidistant would be best. If the pie chart is centered on [0,0] in svg coordinate space, rotate the azimuthal (don't center), so that the centroid coordinate is located at [0,0] in svg space (you can use translates on the pies to position them, but it will just add extra steps). Take each point and run it through projection.invert() using the second projection. You will need to update the projection for each donut chart as each one will have a different geographic centroid.
Once you have lat long points, it's easy - you've already done it with the geo circle function - convert to geojson and project with the orthographic projection.
This approach gave me something like:
Notes: Depending on your data, it might be easiest to preprocess your data into geojson and store that as opposed to calculating the geojson each page load.
You are using canvas. While you don't need to actually use an SVG, you still need to be able to access SVG functions like getPointAtLength. You do not need to have an SVG in the document or display SVG elements; you can use a detached element replicating a path:
document.createElementNS(d3.namespaces.svg, 'path');
Oh, and make sure the second projection's translate is set: the default is [480,250] for all (most?) d3 projections, and that will throw things off if unaccounted for.
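Putting those pieces together, here is a minimal sketch of the conversion step (not the answer's exact code; the radii, sampling step, and projection scale are assumptions to tune, centroid is the country's [longitude, latitude], and data is the donut's value array):
var pie = d3.pie();
var arcGen = d3.arc().innerRadius(8).outerRadius(15);

// Detached path element, only used to sample points along each slice outline.
var samplePath = document.createElementNS(d3.namespaces.svg, 'path');

// Second projection used "backwards": pixel coords around [0,0] -> lon/lat
// around the donut's geographic centroid.
var toGeo = d3.geoAzimuthalEquidistant()
    .translate([0, 0])                        // important: kill the default [480,250]
    .scale(100)                               // assumption: controls the donut's size on the globe
    .rotate([-centroid[0], -centroid[1]]);

function sliceToFeature(sliceDatum) {
    samplePath.setAttribute('d', arcGen(sliceDatum));
    var total = samplePath.getTotalLength();
    var coords = [];
    for (var l = 0; l <= total; l += 2) {     // sample every 2 px of path length
        var pt = samplePath.getPointAtLength(l);
        coords.push(toGeo.invert([pt.x, pt.y]));
    }
    coords.push(coords[0]);                   // close the ring
    // note: d3 uses spherical winding order; reverse coords if a slice fills the globe
    return { type: 'Feature', geometry: { type: 'Polygon', coordinates: [coords] } };
}

var donutFeatures = pie(data).map(sliceToFeature);
// donutFeatures can now be drawn with the same orthographic geoPath as the countries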

Fit to screen from different Orthographic camera positions

I made a simple jsFiddle example to illustrate the problem.
I'm trying to fit an object's bounding box to the screen from different camera positions. In the example, you can change the camera position in the dat.GUI panel and then click the "fit to screen" button.
When changing the y and z (positive) camera positions, the code below is used to find the camera's top and bottom properties:
var top = boundingBox.max.y * Math.cos(angleToZAxis) + boundingBox.max.z * Math.sin(angleToZAxis); // line 68
var bottom = boundingBox.min.y * Math.cos(angleToZAxis) + boundingBox.min.z * Math.sin(angleToZAxis);
I would like to know how I can include the camera's x position and negative positions in this calculation, and what the math behind it is. Should I use a rotation matrix, and if so, how?
Or maybe it can be achieved in some simple way with three.js methods; I can't figure it out. I tried the code below but something is wrong:
var matrix = new THREE.Matrix4();
matrix.lookAt(this.camera.position, new THREE.Vector3(0, 0, 0), new THREE.Vector3(0, 1, 0));
var bbMax = boundingBox.max.clone().applyMatrix4(matrix);
var bbMin = boundingBox.min.clone().applyMatrix4(matrix);
To fit an orthographic camera you simply have to change its zoom and position.
You can calculate the zoom from the bounding box of your object.
(I used the boxes from the geometries, but you will have to take into account the matrices of the objects in the group; I used them because .setFromObject was not returning a consistent value.)
Canvas3D.prototype.fitToScreen = function() {
    this.group.children[0].geometry.computeBoundingBox();
    var boundingBox = this.group.children[0].geometry.boundingBox.clone();
    this.group.children[1].geometry.computeBoundingBox();
    boundingBox.union(this.group.children[1].geometry.boundingBox);

    var rotation = new THREE.Matrix4().extractRotation(this.camera.matrix);
    boundingBox.applyMatrix4(rotation);

    this.camera.zoom = Math.min(this.winWidth / (boundingBox.max.x - boundingBox.min.x),
                                this.winHeight / (boundingBox.max.y - boundingBox.min.y)) * 0.95;
    this.camera.position.copy(boundingBox.center()); // getCenter(new THREE.Vector3()) in recent three.js
    this.camera.updateProjectionMatrix();
    this.camera.updateMatrix();
};
Using this will not work in your fiddle because you are using OrbitControls, which rotate the camera on update based on their own state; so either update that state or create your own controls.
Also, either move the camera back after
this.camera.position.copy(boundingBox.center());
or set the near plane far enough back (as in the -10000 below) to avoid the object being clipped:
this.camera = new THREE.OrthographicCamera(this.winWidth / -2,
this.winWidth / 2 , this.winHeight / 2, this.winHeight / -2, -10000, 10000);
EDIT
Now I see that you don't want to just fit the object but the whole box...
To do so, an easy way is to project the corner points of the box, take the extremes of the projected coordinates, and then set the orthographic camera directly:
boundingBox = new THREE.Box3().setFromObject(this.group);
//take all 8 vertices of the box and project them
var p1 = new THREE.Vector3(boundingBox.min.x,boundingBox.min.y,boundingBox.min.z).project(this.camera);
var p2 = new THREE.Vector3(boundingBox.min.x,boundingBox.min.y,boundingBox.max.z).project(this.camera);
var p3 = new THREE.Vector3(boundingBox.min.x,boundingBox.max.y,boundingBox.min.z).project(this.camera);
var p4 = new THREE.Vector3(boundingBox.min.x,boundingBox.max.y,boundingBox.max.z).project(this.camera);
var p5 = new THREE.Vector3(boundingBox.max.x,boundingBox.min.y,boundingBox.min.z).project(this.camera);
var p6 = new THREE.Vector3(boundingBox.max.x,boundingBox.min.y,boundingBox.max.z).project(this.camera);
var p7 = new THREE.Vector3(boundingBox.max.x,boundingBox.max.y,boundingBox.min.z).project(this.camera);
var p8 = new THREE.Vector3(boundingBox.max.x,boundingBox.max.y,boundingBox.max.z).project(this.camera);
//fill a box to get the extremes of the 8 points
var box = new THREE.Box3();
box.expandByPoint(p1);
box.expandByPoint(p2);
box.expandByPoint(p3);
box.expandByPoint(p4);
box.expandByPoint(p5);
box.expandByPoint(p6);
box.expandByPoint(p7);
box.expandByPoint(p8);
//take absolute value because the points already have the correct sign
var top = box.max.y * Math.abs(this.camera.top);
var bottom = box.min.y * Math.abs(this.camera.bottom);
var right = box.max.x * Math.abs(this.camera.right);
var left = box.min.x * Math.abs(this.camera.left);
this.updateCamera(left, right, top, bottom);
This code also stretches the view to fit exactly into the window, so you will have to check the aspect ratio and adjust one dimension accordingly, but that should be trivial.
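For that last step, one possible correction could look like the following sketch; it reuses the left/right/top/bottom values computed above, and updateCamera is the answer's own helper, which is not shown here:
// widen either the vertical or horizontal extent so that
// (right - left) / (top - bottom) matches the window aspect
var viewAspect = this.winWidth / this.winHeight;
var boxAspect = (right - left) / (top - bottom);
var extra;
if (boxAspect > viewAspect) {
    // box is relatively wider than the window: grow top/bottom
    extra = ((right - left) / viewAspect - (top - bottom)) / 2;
    top += extra;
    bottom -= extra;
} else {
    // box is relatively taller than the window: grow left/right
    extra = ((top - bottom) * viewAspect - (right - left)) / 2;
    right += extra;
    left -= extra;
}
this.updateCamera(left, right, top, bottom);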

threejs selecting different parts of a mesh

I'm using THREE.js. I have a model of a human that I want to be able to select different portions of. For example, if you click on one of the legs, a particular action will be executed. My original idea was to split the model up into separate meshes and then use raycasting to determine which object was selected. But now when I render the scene, the shading along the edges of each mesh doesn't blend with adjoining meshes. This leaves ragged-looking lines across the model between selectable portions. Is there a way to blend the shading between the mesh pieces I've created? Or is there a better way to select part of a mesh other than creating separate meshes? I have some programming experience, but this is the first time I've tried to use three.js. Any insight would be greatly appreciated.
You may create an additional attribute for each triangle: the color of the body part that it belongs to. So all triangles of the left leg would be red, all triangles of the right leg would be blue, etc.
Render your model normally, and add a second pass where you render the triangles colored as described above, with no shading at all. Then you can take the mouse position where the user clicked, look it up in that body-part-colored framebuffer, and simply check the pixel color at the clicked location.
This technique of picking 3D objects by assigning them different colors, rendering those colors to another texture, and then checking the color of the clicked pixel is quite common, although it has some flaws. On the other hand, ray testing isn't absolutely accurate either.
I believe that this demo actually runs based on that concept - demo.
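For completeness, a minimal sketch of what the lookup could look like in three.js (this is not the demo's code; it assumes you have already built a pickingScene whose materials encode a body-part ID as a flat color, and that renderer, camera, and the mouse coordinates in device pixels are available):
var pickingTarget = new THREE.WebGLRenderTarget(1, 1);
var pixelBuffer = new Uint8Array(4);

function pickBodyPart(mouseX, mouseY) {
    // restrict the camera to the single pixel under the cursor
    camera.setViewOffset(renderer.domElement.width, renderer.domElement.height,
                         mouseX, mouseY, 1, 1);
    renderer.setRenderTarget(pickingTarget);
    renderer.render(pickingScene, camera);   // same geometry, flat ID colors, no shading
    renderer.setRenderTarget(null);
    camera.clearViewOffset();

    renderer.readRenderTargetPixels(pickingTarget, 0, 0, 1, 1, pixelBuffer);
    var id = (pixelBuffer[0] << 16) | (pixelBuffer[1] << 8) | pixelBuffer[2];
    return id; // map this ID back to the body part it encodes
}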
var aiGeojj = new t.CubeGeometry(30, 30, 30);
var uprighters = Math.floor((Math.random() * 11));
var aiMaterialjj = new t.MeshBasicMaterial({ map: t.ImageUtils.loadTexture('images/images_bots/greenbot/upright/' + uprighters + '.gif'), opacity: 0, transparent: true });
var ojj = new t.Mesh(aiGeojj, aiMaterialjj);
// keep custom lists of selectable parts on the parent mesh
ojj.limbs = [];
ojj.trunk = [];

var aiGeojjkey2c = new t.CubeGeometry(50, 50, 50);
var uprightersc = Math.floor((Math.random() * 11));
var aiMaterialjjc = new t.MeshBasicMaterial({ map: t.ImageUtils.loadTexture('images/images_bots/greenbot/upright/' + uprightersc + '.gif'), opacity: 1, transparent: true });
var ojjkey2c = new t.Mesh(aiGeojjkey2c, aiMaterialjjc);
ojjkey2c.name = "hiworld"; // use .name here; overwriting the built-in numeric .id breaks getObjectById
ojj.add(ojjkey2c);
ojj.trunk.push(ojjkey2c);

// assuming the bots are collected in an array, e.g. var bots = [ojj, ...];
for (var you = 0; you < bots.length; you++) {
    for (var youb = 0; youb < bots[you].trunk.length; youb++) {
        window.alert(bots[you].trunk[youb].name);
    }
}
