I'm trying to draw a bounding box around a skinned model:
const box = new THREE.BoxHelper(trooper, 0xff0000);
scene.add(box);
https://jsfiddle.net/66sor15y/4/
As you can see, the model doesn't fit the bounding box.
I filed an issue to understand the problem, but I still have trouble drawing the actual box:
https://github.com/mrdoob/three.js/issues/13989
Any help would be highly appreciated!
I'm currently trying to calculate/draw the bounding box around a force graph built with 3d-force-graph and three.js, but I'm not having any luck creating the box. force-graph has a getGraphBbox() function, but not much documentation. Any help would be appreciated! Here is some non-working code I'm working with.
Edit: my data is dynamic. I can add a mostly transparent cube to the scene around the points of a static graph, but I want the cube to always render at the appropriate size around the nodes, since some of my graphs have 1000 nodes and others 50 in my data sets. Hence the bounding box.
const Graph = ForceGraph3D()
  (document.getElementById('3d-graph'))
  .graphData(data)
  .nodeLabel('id')
  .nodeAutoColorBy('group');
//.getGraphBbox(); // Not sure how to use this

let helper = new THREE.Box3Helper(Graph, 0xff0000);
helper.update();
Graph.scene().add(helper);
I am creating a viewer using three.js and found that setting the camera's near and far planes to fixed values causes flickering (z-fighting) for some 3D models.
I see that this is because the GPU runs out of depth-buffer precision for models whose bounding box is around 4000-5000 units long.
The near plane is currently set to 0.1 and the far plane to 20000.
You can move your near plane up to get more depth resolution. Maybe 1.0...
Another option to be aware of is the logarithmic depth buffer:
https://threejs.org/examples/webgl_camera_logarithmicdepthbuffer.html
You can get the bounding box of the mesh via its geometry (geometry.boundingBox and geometry.boundingSphere); sometimes you need to recalculate them first using mesh.geometry.computeBoundingBox() and computeBoundingSphere().
Getting the bounding box in camera space is a bit trickier. I don't know of a super optimal one-liner to do it, but someone else may weigh in.
A brute-force way would be to transform the mesh vertices to screen space.
Maybe something like:
var gclone = mesh.geometry.clone();
// Note: iterate over the clone's vertices, not the original geometry's
for (var i = 0; i < gclone.vertices.length; i++) {
  gclone.vertices[i].applyMatrix4(mesh.matrixWorld).project(camera);
}
gclone.computeBoundingBox();
var zExtent = gclone.boundingBox.max.z - gclone.boundingBox.min.z;
I want to place a circle on an object when you click on it.
I updated this example to work with the WebGL renderer (I basically changed THREE.SpriteCanvasMaterial to THREE.SpriteMaterial), but not only is the circle now a square, it also glitches with the surface of the object. Here's a JSFiddle that demonstrates my problem (click an object to test).
I found some similar questions on stackoverflow, but I can't seem to make it work in my example. Any tips?
You are adding sprites into a moving 3D scene; an easy way to do what you want is to add spheres:
var particle = new THREE.Mesh(
  new THREE.SphereGeometry(10, 10, 10),
  new THREE.MeshBasicMaterial({ color: 0xff0000 })
);
particle.position.copy(intersects[0].point);
scene.add(particle);
If you want real flat circles, you will have to create a circular geometry and align it correctly with the object so it sits atop the rectangle wall. If you want circles that do not deform when the object is turned, you will need some custom shader work.
When using a cube camera one normally sets the envMap of the material to the cubeCamera.renderTarget, e.g.:
var myMaterial = new THREE.MeshBasicMaterial({
  color: 0xffffff,
  envMap: myCubeCamera.renderTarget,
  side: THREE.DoubleSide
});
This works great for meshes that are meant to reflect or refract what the cube camera sees. However, I'd like to simply create a texture and apply that to my mesh. In other words, I don't want my object to reflect or refract. I want the face normals to be ignored.
I tried using a THREE.WebGLRenderTarget, but it won't handle a cube camera. And using a single perspective camera with a WebGLRenderTarget obviously does not give me a 360° texture.
Finally, simply assigning the cubeCamera.renderTarget to the 'map' property of the material doesn't work either.
Is it possible to do what I want?
r73.
Edit: this is not what the author of the question is looking for; I'll keep my answer below for other people.
Your envMap is already a texture, so there's no need to apply it as a map. Also, cube maps and regular textures are structurally different, so it won't be possible to swap them; even if you succeeded in doing that, the result would not be what you expect.
I understand from your question that you want a static envMap instead of one updated every frame. If that's the case, simply don't call myCubeCamera.updateCubeMap() in your render function. Instead, call it once at the end of your scene initialization with the cube camera at your desired position; your envMap will then show only that frame.
See examples below:
Dynamic Cubemap Example
Static Cubemap Example
The answer is: set the refractionRatio on the material to 1.0. Face normals are then ignored, since no refraction occurs.
In a normal situation where the Cube Camera is in the same scene as the mesh, this would be pointless because the mesh would be invisible. But in cases where the Cube Camera is looking at a different scene, then this is a useful feature.
I have a problem with textures in three.js. I textured a cube like so:
var boxtexv = THREE.ImageUtils.loadTexture("boxtex.png");
var boxtex = new THREE.MeshBasicMaterial({ map: boxtexv });
This seems to work, but if I look at the cube from certain angles, the texture stretches from the corner to the center. Could anybody explain this and help me?
PS: I am using CanvasRenderer.