Smart Centering and Scaling after Model Import in three.js

Is there a way to determine the size and position of a model and then auto-center and scale the model so that it is positioned at the origin and within the view of the camera? I find that when I import a Collada model from Sketchup, if the model was not centered at the origin in Sketchup, then it is not centered in three.js. While that makes sense, it would be nice to auto-center to origin after importing.
I've seen some discussion in the different file loaders about getting the bounds of the imported model, but I have been unable to find any references to how to do that.
The scaling issue is less important, but I feel like it relates to a bounds function, which is why I asked about it too.
EDIT:
More info after playing around a bit and a few more google searches...
The code for my callback function on loading the collada file now looks like this:
loader.load(mURL, function colladaReady( collada ) {
    dae = collada.scene;
    skin = collada.skins[ 0 ];
    dae.scale.x = dae.scale.y = dae.scale.z = 1;
    dae.updateMatrix();

    // set arbitrary min and max for comparison
    var minX = 100000;
    var minY = 100000;
    var minZ = 100000;
    var maxX = 0;
    var maxY = 0;
    var maxZ = 0;

    var geometries = collada.dae.geometries;
    for (var propName in geometries) {
        if (geometries.hasOwnProperty(propName) && geometries[propName].mesh) {
            dae.geometry = geometries[propName].mesh.geometry3js;
            dae.geometry.computeBoundingBox();
            var bBox = dae.geometry.boundingBox;
            if (bBox.min.x < minX) minX = bBox.min.x;
            if (bBox.min.y < minY) minY = bBox.min.y;
            if (bBox.min.z < minZ) minZ = bBox.min.z;
            if (bBox.max.x > maxX) maxX = bBox.max.x;
            if (bBox.max.y > maxY) maxY = bBox.max.y;
            if (bBox.max.z > maxZ) maxZ = bBox.max.z;
        }
    }
    // rest of function....
This is generating some interesting data about the model. I can get an overall extreme coordinate for the model, which I'm assuming (probably incorrectly) would be close to an overall bounding box for the model. But trying to do anything with those coordinates (like averaging and moving the model to the averages) generates inconsistent results.
Also, it seems inefficient to have to loop through every geometry for a model; is there a better way? If not, can this logic be applied to other loaders?

You can use THREE.Box3#setFromObject to get the bounding box of any Object3D, including an imported model, without having to loop through the geometries yourself. So you could do something like
var bBox = new THREE.Box3().setFromObject(collada.scene);
to get the extreme bounding box of the model; then you could use any of the techniques in the answers that gaitat linked in order to set the camera position correctly. For instance, you could follow this technique (How to Fit Camera to Object) and do something like:
var height = bBox.size().y;
var dist = height / (2 * Math.tan(camera.fov * Math.PI / 360)); // camera.fov is in degrees; Math.PI / 360 converts half of it to radians
var pos = collada.scene.position;
camera.position.set(pos.x, pos.y, dist * 1.1); // fudge factor so you can see the boundaries
camera.lookAt(pos);
Quick fiddle: http://jsfiddle.net/p19r9re2/ .
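To also auto-center the model at the origin, as asked above, the same kind of bounding box can be used to scale and then shift the scene. A minimal sketch, assuming a three.js revision where Box3 exposes getSize/getCenter (older revisions use .size()/.center() instead) and a model added at the top level of the scene graph; desiredSize is an arbitrary target dimension:
var size = bBox.getSize(new THREE.Vector3());
var maxDim = Math.max(size.x, size.y, size.z);
var desiredSize = 10;                              // hypothetical target size in world units
collada.scene.scale.multiplyScalar(desiredSize / maxDim);

// recompute the box after scaling, then shift the model so its center lands on the origin
var center = new THREE.Box3().setFromObject(collada.scene).getCenter(new THREE.Vector3());
collada.scene.position.sub(center);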

try geometry.center()
center: function () {
    var offset = new Vector3();
    return function center() {
        this.computeBoundingBox();
        this.boundingBox.getCenter( offset ).negate();
        this.translate( offset.x, offset.y, offset.z );
        return this;
    };
}(),
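In practice, for a model loaded as a single mesh, calling it once before rendering is enough (someMesh below is illustrative):
someMesh.geometry.center(); // translates the vertices so the geometry's bounding-box center sits at (0, 0, 0); the mesh's position is unaffected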

Related

Is it possible to use a texture material on objects with various sizes?

Working with Three.js r113, I'm creating walls from coordinates of a blueprint dynamically as custom geometries. I've set up the vertices, faces and faceVertexUvs already successfully. Now I'd like to wrap these geometries with a textured material, that repeats the texture and keeps the original aspect ratio.
Since the walls have different lengths, I was wondering which is the best approach to do this?
What I've tried so far is loading the texture once and then using different texture.repeat values, depending on the wall length:
let textures = function() {
    let wall_brick = new THREE.TextureLoader().load('../textures/light_brick.jpg');
    return {wall_brick};
}();

function makeTextureMaterial(texture, length, height) {
    const scale = 2;
    texture.wrapS = THREE.RepeatWrapping;
    texture.wrapT = THREE.RepeatWrapping;
    texture.repeat.set( length * scale, height * scale );
    return new THREE.MeshStandardMaterial({map: texture});
}
I then call the above function after creating the geometry and assign the returned materials to the material array, to apply them to the front and back faces of each wall. Note: material.wall is an untextured MeshStandardMaterial for the other faces.
let scaledMaterial = [
    makeTextureMaterial(textures.wall_brick, this.length.back, this.height),
    makeTextureMaterial(textures.wall_brick, this.length.front, this.height),
    material.wall
];
this.geometry.faces[0].materialIndex = 0; // back
this.geometry.faces[1].materialIndex = 0; // back
this.geometry.faces[2].materialIndex = 1; // front
this.geometry.faces[3].materialIndex = 1; // front
this.geometry.faces[4].materialIndex = 2;
this.geometry.faces[5].materialIndex = 2;
this.geometry.faces[6].materialIndex = 2;
this.geometry.faces[7].materialIndex = 2;
this.geometry.faces[8].materialIndex = 2;
this.geometry.faces[9].materialIndex = 2;
this.geometry.faces[10].materialIndex = 2;
this.geometry.faces[11].materialIndex = 2; // will do those with a loop later on :)
this.mesh = new THREE.Mesh(this.geometry, scaledMaterial);
What happens is that the texture is displayed on the desired faces, but it is not scaled individually by this.length.back and this.length.front.
Any ideas how to do this? Thank you.
I have just found the proper approach to this. The individual scaling is done via faceVertexUvs, as West Langley answered here: https://stackoverflow.com/a/27098476/4355114
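For reference, a minimal sketch of that approach under the assumptions of this question's setup (a THREE.Geometry whose faceVertexUvs already hold 0..1 values, with faces 0-1 being the back and 2-3 the front as above); the texture then keeps repeat at (1, 1) and only needs RepeatWrapping enabled:
const repeatPerUnit = 2; // how many texture repeats per world unit, an assumed constant
function scaleFaceUvs(geometry, faceIndex, length, height) {
    geometry.faceVertexUvs[0][faceIndex].forEach(function (uv) {
        uv.x *= length * repeatPerUnit; // stretch U by the wall length
        uv.y *= height * repeatPerUnit; // stretch V by the wall height
    });
}
scaleFaceUvs(this.geometry, 0, this.length.back, this.height);  // back
scaleFaceUvs(this.geometry, 1, this.length.back, this.height);  // back
scaleFaceUvs(this.geometry, 2, this.length.front, this.height); // front
scaleFaceUvs(this.geometry, 3, this.length.front, this.height); // front
this.geometry.uvsNeedUpdate = true;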

How to fill a loaded STL mesh (NOT SIMPLE SHAPES LIKE CUBE ETC) with random particles and animate them within the geometry bounds in three.js

How can I fill a loaded STL mesh (like Suzanne, NOT SIMPLE SHAPES LIKE CUBE etc.) with random particles and animate them within the geometry's bounds in three.js?
I have seen many examples, but all of them are for simple shapes with geometric bounds, like a cube or a sphere limited by coordinates around the center:
https://threejs.org/examples/?q=points#webgl_custom_attributes_points3
Thanks!
A concept: cast a ray from a candidate point and count its intersections with the faces of the mesh; if the count is odd, the point is inside the mesh:
Codepen
function fillWithPoints(geometry, count) {
    // geometry is assumed to be a non-indexed THREE.BufferGeometry
    var ray = new THREE.Ray();
    var size = new THREE.Vector3();
    geometry.computeBoundingBox();
    let bbox = geometry.boundingBox;

    let points = [];

    // fixed direction used by every test ray
    var dir = new THREE.Vector3(1, 1, 1).normalize();

    for (let i = 0; i < count; i++) {
        let p = setRandomVector(bbox.min, bbox.max);
        points.push(p);
    }

    function setRandomVector(min, max) {
        // pick a random point in the bounding box and retry until it falls inside the mesh
        let v = new THREE.Vector3(
            THREE.Math.randFloat(min.x, max.x),
            THREE.Math.randFloat(min.y, max.y),
            THREE.Math.randFloat(min.z, max.z)
        );
        if (!isInside(v)) { return setRandomVector(min, max); }
        return v;
    }

    function isInside(v) {
        // count intersections of a ray from v with every triangle; an odd count means v is inside
        ray.set(v, dir);
        let counter = 0;

        let pos = geometry.attributes.position;
        let faces = pos.count / 3;
        let vA = new THREE.Vector3(), vB = new THREE.Vector3(), vC = new THREE.Vector3();
        let intersection = new THREE.Vector3();

        for (let i = 0; i < faces; i++) {
            vA.fromBufferAttribute(pos, i * 3 + 0);
            vB.fromBufferAttribute(pos, i * 3 + 1);
            vC.fromBufferAttribute(pos, i * 3 + 2);
            if (ray.intersectTriangle(vA, vB, vC, false, intersection)) counter++;
        }

        return counter % 2 == 1;
    }

    return new THREE.BufferGeometry().setFromPoints(points);
}
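A possible way to use it (the mesh variable and the material settings below are illustrative):
var pointsGeometry = fillWithPoints(mesh.geometry, 1000); // mesh.geometry is the non-indexed geometry from STLLoader
var points = new THREE.Points(pointsGeometry, new THREE.PointsMaterial({ color: 0xffffff, size: 0.05 }));
scene.add(points);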
The concept from the previous answer is very good, but it has some performance limitations:
the whole geometry is tested with every ray;
the recursion on points that fall outside can lead to a stack overflow.
Moreover, it's incompatible with indexed geometry.
It can be improved by creating a spatial hashmap storing the geometry triangles and limiting the intersection test to only a part of the mesh, as sketched after the demonstration link below.
Demonstration
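As a rough illustration of that idea (not the linked demonstration's code, and the names are assumptions), a simpler variant is to precompute one bounding box per triangle, cast the test ray along +x instead of (1, 1, 1), and skip every triangle whose box that ray cannot reach; each triangle is still tested at most once per point, so the odd/even count stays correct:
var pos = geometry.attributes.position;
var triBoxes = [];
var a = new THREE.Vector3(), b = new THREE.Vector3(), c = new THREE.Vector3();
for (var t = 0; t < pos.count / 3; t++) {
    a.fromBufferAttribute(pos, t * 3);
    b.fromBufferAttribute(pos, t * 3 + 1);
    c.fromBufferAttribute(pos, t * 3 + 2);
    triBoxes.push(new THREE.Box3().setFromPoints([a, b, c])); // per-triangle bounds, computed once
}
function canHit(box, p) {
    // a +x ray starting at p can only hit boxes that are not behind it
    // and whose y/z ranges contain p.y and p.z
    return box.max.x >= p.x &&
           p.y >= box.min.y && p.y <= box.max.y &&
           p.z >= box.min.z && p.z <= box.max.z;
}
// inside isInside(v), test only the triangles t for which canHit(triBoxes[t], v) is true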

D3 Donut chart projected to sphere/globe

I want to use d3 for the following task:
display a rotating globe with a donut chart at the center of every country. It should be possible to interact with the globe (select a country, zoom, rotate).
It seems d3 provides an easy way to implement every part of this, but I cannot get the donut part working the way I need.
There is an easy way to draw a donut chart with the help of d3.arc:
var arc = d3.arc();
var data = [3, 23, 17, 35, 4];
var radius = 15 / scale;                // scale comes from the current zoom level
var _arc = arc.innerRadius(radius - 7 / scale)
    .outerRadius(radius)
    .context(donutsContext);            // render straight to the canvas context
var pieData = pie(data);                // pie = d3.pie(), defined elsewhere
for (var i = 0; i < pieData.length; i++) {
    donutsContext.beginPath();
    donutsContext.fillStyle = color(i); // color scale defined elsewhere
    _arc(pieData[i]);
    donutsContext.fill();
}
but with the code as it is, the donuts are displayed on a plane on top of the globe, like:
(image: globe with donut)
while I want them to be 'wrapped' around the globe.
There is the d3.geoCircle method, which can be projected onto the globe correctly. I got a 'ring' projected correctly onto the globe with the help of two circles:
var circle = d3.geoCircle()
    .center(centroid)
    .radius(2);
var outerCircle = circle();

var circle = d3.geoCircle()
    .center(centroid)
    .radius(1);
var innerCircle = circle();

// reverse the inner ring and add it to the outer circle as a hole
var interCircleCoordinates = [];
for (var i = innerCircle.coordinates[0].length - 1; i >= 0; i--) {
    interCircleCoordinates.push(innerCircle.coordinates[0][i]);
}
outerCircle.coordinates.push(interCircleCoordinates);
(image: globe with rings)
but I really need to get a donut.
The other way I tried is getting an image of the donuts and wrapping this image around the globe with the help of pixel manipulation:
var image = new Image;
image.onload = onload;
image.src = img;

function onload() {
    window.dx = image.width;
    window.dy = image.height;

    context.drawImage(image, 0, 0, dx, dy);
    sourceData = context.getImageData(0, 0, dx, dy).data;

    target = context.createImageData(width, height);
    targetData = target.data;

    for (var y = 0, i = -1; y < height; ++y) {
        for (var x = 0; x < width; ++x) {
            var p = projection.invert([x, y]), λ = p[0], φ = p[1];
            if (λ > 180 || λ < -180 || φ > 90 || φ < -90) { i += 4; continue; }
            var q = ((90 - φ) / 180 * dy | 0) * dx + ((180 + λ) / 360 * dx | 0) << 2;
            var r = sourceData[q];
            var g = sourceData[++q];
            var b = sourceData[++q];
            targetData[++i] = r;
            targetData[++i] = g;
            targetData[++i] = b;
            targetData[++i] = 125;
        }
    }

    context.clearRect(0, 0, width, height);
    context.putImageData(target, 0, 0);
};
but this way, rotating and interacting with the globe is extremely slow at the globe size I need (1000px).
So my questions are:
Is there some way to project donuts generated with the help of d3.arc onto a sphere (globe, orthographic projection)?
Is there some way to get a donut from geoCircle?
Maybe there is some other way to achieve my goal that I do not see?
There is one way that comes to mind to display donuts on a globe. The key challenge is that d3 doesn't project three dimensional objects very well - with one exception, geographic features. Consequently, an "easy" solution is to convert your pie charts into geographic features and project them with the rest of your features.
To do this you need to:
Use a pie/donut generator as you normally would
Go along the paths generated to get points approximating the pie shape.
Convert the points to long/lat points
Assemble those points into geojson
Project them onto the map.
The first point is easy enough, just make a pie chart with an inner radius.
Now you have to select each path and find points along its perimeter using path.getPointAtLength(); this will be dependent on path length, so path.getTotalLength() will be handy (and corners are important, so you might want to incorporate a little bit of complexity for these corner cases to ensure you get them).
Once you have the points, you need the use of a second projection, azimuthal equidistant would be best. If the pie chart is centered on [0,0] in svg coordinate space, rotate the azimuthal (don't center), so that the centroid coordinate is located at [0,0] in svg space (you can use translates on the pies to position them, but it will just add extra steps). Take each point and run it through projection.invert() using the second projection. You will need to update the projection for each donut chart as each one will have a different geographic centroid.
Once you have lat long points, it's easy - you've already done it with the geo circle function - convert to geojson and project with the orthographic projection.
This approach gave me something like:
Notes: Depending on your data, it might be easiest to preprocess your data into geojson and store that as opposed to calculating the geojson each page load.
You are using canvas. While you don't need to actually use an svg, you still need to be able to access svg functions like getPointAtLength; you can do that without having or displaying any svg elements by creating a detached element replicating a path:
document.createElementNS(d3.namespaces.svg, 'path');
Oh, and make sure the second projection's translate is set - the default is [480, 250] for all (most?) d3 projections, which will throw things off if unaccounted for.
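Putting those steps together, a condensed sketch of the inversion step (names like arcs, arc and centroid are assumptions; the arc generator here must not have a canvas context so that it returns a path string, and the azimuthal projection's scale controls how large the donut ends up on the globe):
var ghostPath = document.createElementNS(d3.namespaces.svg, 'path');
var azimuthal = d3.geoAzimuthalEquidistant()
    .translate([0, 0])                        // keep the svg origin at [0, 0]
    .rotate([-centroid[0], -centroid[1]]);    // so the donut's geographic centroid maps to [0, 0]

var donutFeatures = arcs.map(function(a) {    // arcs = d3.pie()(data)
    ghostPath.setAttribute('d', arc(a));
    var total = ghostPath.getTotalLength();
    var ring = [];
    for (var l = 0; l <= total; l += total / 100) {   // ~100 sample points per slice
        var pt = ghostPath.getPointAtLength(l);
        ring.push(azimuthal.invert([pt.x, pt.y]));    // svg coordinates -> [lon, lat]
    }
    ring.push(ring[0]);                               // close the ring
    return { type: 'Feature', properties: {}, geometry: { type: 'Polygon', coordinates: [ring] } };
});
// donutFeatures can now be drawn with the orthographic path generator like any other geojson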

Calculate the vertex while creating terrain from heightmap using ThreeJs

I'm reading the "create terrain from heightmap" example from the Three.js Cookbook.
This example loads a Grand Canyon heightmap: http://lh5.ggpht.com/_-B0hFoGrn-w/SvHiYk39yAI/AAAAAAAABOQ/6IGZwifUYGA/GrandCanyon.png
and creates a 3D terrain: http://www.smartjava.org/tjscb/02-geometries-meshes/02.06-create-terrain-from-heightmap.html
There are some pieces of code I cannot understand:
// draw on canvas
ctx.drawImage(img, 0, 0);
var pixel = ctx.getImageData(0, 0, width, depth);

var geom = new THREE.Geometry();
var output = [];
for (var x = 0; x < depth; x++) {
    for (var z = 0; z < width; z++) {
        // get pixel
        // since we're grayscale, we only need one element
        var yValue = pixel.data[z * 4 + (depth * x * 4)] / heightOffset;
        var vertex = new THREE.Vector3(x * spacingX, yValue, z * spacingZ);
        geom.vertices.push(vertex);
    }
}
Why is yValue calculated with that value? Why don't we use var yValue = pixel.data[z * 4 + (depth * x)] or something like that?
And do we really need spacingX and spacingZ?
Source code is here: https://github.com/josdirksen/threejs-cookbook/blob/master/02-geometries-meshes/02.06-create-terrain-from-heightmap.html
Could you please help me?
Thank you very much!
You don't NEED spacingX and spacingZ, no. You could adjust scale in other ways, like applying a scale matrix to the entire THREE.Geometry after you've populated the vertices. Up to you, really.
As for the yValue, the indexing is to adjust for the way the data for the texture is laid out. There are four channels, usually RGBA, but in this case we only need one of them as a height.
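As a small illustration of that layout (the names here are assumptions, not the cookbook's code): getImageData returns a flat, row-major RGBA array with 4 bytes per pixel, so the red channel of the pixel at (row, col) lives at (row * imageWidth + col) * 4:
function heightAt(pixel, imageWidth, row, col, heightOffset) {
    var i = (row * imageWidth + col) * 4;   // start of this pixel's RGBA block
    return pixel.data[i] / heightOffset;    // the red channel is enough for a grayscale heightmap
}
(In the cookbook snippet the stride used is depth, which lines up as long as the heightmap is square, i.e. width equals depth.)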

Box2d.XNA gravity issue

I'm trying to integrate Box2D into my game for WP7. However, the bodies that I add do not respond to gravity as expected. Basically, whatever property I modify, the object I add still seems to be very "light" and does not actually respond to gravity changes.
Here is the code:
void Init()
{
    world = new World(new Vector2(0, 100), false);
    world.ContinuousPhysics = true;

    // add ground
    BodyDef bd = new BodyDef();
    Body ground = world.CreateBody(bd);
    PolygonShape shape = new PolygonShape();
    shape.SetAsEdge(new Vector2(0.0f, bbheight), new Vector2(bbwidth, bbheight));
    ground.CreateFixture(shape, 0.0f);

    AddObject(new Vector2(450, 0));
}

private void AddObject(Vector2 position)
{
    float PTM = 32;
    Vector2 pos = new Vector2(position.X / PTM, position.Y / PTM);

    var circle = new CircleShape();
    circle._radius = 1.0f;

    var fd = new FixtureDef();
    fd.shape = circle;
    fd.restitution = 0.5f;
    fd.friction = 1.0f;
    fd.density = 1000.0f;

    BodyDef bd = new BodyDef();
    bd.type = BodyType.Dynamic;
    bd.fixedRotation = true;
    bd.allowSleep = false;
    bd.position = pos;

    var body = world.CreateBody(bd);
    body.CreateFixture(fd);
    body.SetUserData(Red);
}
I would be grateful if you could give some help.
Thanks!
Box2D is not designed to work in pixels but in units (meters), and it likes small units.
For example, if you use a scale of 1 pixel = 1 unit and make an object that is 100 pixels wide, it is as large as a planet to Box2D, and if the distance between two objects is 300 units, it will take forever for them to collide.
What you need to do is change the scale, as Box2D was designed for. With the PTM of 32 you already use in AddObject, for instance, a 64-pixel sprite becomes a 2-meter body, which is the size range Box2D is tuned for; the gravity vector should be expressed in the same units (roughly (0, 10) for earth-like gravity rather than (0, 100)).
I recommend you read or watch some Box2D tutorials like this one: http://www.kerp.net/box2d/ . That tutorial is for the Flash version of Box2D, but the main difference is the class names.
