THREE.js - Drawing a large number of objects dynamically?

I have a large number of 3D objects (i.e. asteroids) that I would like to render. I know I can merge all the geometry into a single one OR I can use a buffer geometry (are these the same thing?).
However, my asteroid objects always come and go. At any given second more asteroids could be created and placed into the scene.
Question:
How do I handle rendering this dynamic set of objects without having to re-create buffers or merge geometries every time (both of which take a big hit on performance)?
I suspect that either the dynamic nature of the scene or the buffering must go.

One way to handle this is to merge all the asteroids into one geometry, but make a note of where each individual asteroid lies in the merged geometry (index + count). Then, when an asteroid needs to be deleted, just set the position data of that asteroid's segment to 0. For example, suppose the merged geometry is built from 3 asteroids: asteroid A occupies indices 0 to 10, B occupies 10 to 20, and C occupies 20 to 30. Deleting asteroid B then just means setting the position data from 10 to 20 in the merged geometry to 0.
If you need to dynamically "create" new asteroids, you also need to keep track of the segments where you punched holes in the merged geometry. Continuing the example above, if you need to add asteroid D, you just fill the position data from 10 to 20 (freed by "deleting" B) with whatever D is. Obviously, if there are no segment holes left when you need to create a new asteroid, you need to enlarge the entire merged geometry and handle that accordingly.
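A minimal sketch of that bookkeeping with a THREE.BufferGeometry (the segment size, function names and the freeSegments list are illustrative, not part of the original answer):

// each asteroid owns a fixed-size segment of the merged position attribute
var VERTS_PER_ASTEROID = 10;               // hypothetical segment size
var freeSegments = [];                     // indices of holes left by deleted asteroids

function deleteAsteroid(geometry, segmentIndex) {
    var pos = geometry.attributes.position.array;
    var start = segmentIndex * VERTS_PER_ASTEROID * 3;
    for (var i = 0; i < VERTS_PER_ASTEROID * 3; i++) pos[start + i] = 0; // collapse segment to the origin
    geometry.attributes.position.needsUpdate = true;                     // re-upload the changed data
    freeSegments.push(segmentIndex);                                     // remember the hole for reuse
}

function addAsteroid(geometry, newPositions) {        // newPositions: VERTS_PER_ASTEROID * 3 floats
    if (freeSegments.length === 0) return false;      // no hole left: the buffer has to be enlarged instead
    var segmentIndex = freeSegments.pop();
    geometry.attributes.position.array.set(newPositions, segmentIndex * VERTS_PER_ASTEROID * 3);
    geometry.attributes.position.needsUpdate = true;
    return true;
}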

Related

Possible to prioritise drawing of objects in Threejs?

I am working on a CAD type system using threejs. I have thin objects next to other objects (think thin 2mm metal sheeting fixed to posts on a building measured in metres). When I am zoomed in it all looks fine. The objects do not intersect at all. As I zoom out the objects get smaller and I end up with cases where the post object 'glimmers' (sort of shows through) the metal sheet object as I rotate it around.
I understand it's the small numbers I am working with that are causing this effect. However, is there a way to set a priority such that one object (the metal sheeting) is more important than another object (the post), so that it doesn't get that sort of effect?
To answer the question from the title: it is possible to prioritise draw order with renderOrder.
myMesh.renderOrder = 5;       // higher values are drawn later
myOtherMesh.renderOrder = 7;
It is then possible to apply different depth effects, turn off the depth test, etc.
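For example, a sketch of the "always draw this on top" combination (assuming myMesh from above is the object you want to win):

myMesh.renderOrder = 999;            // draw after everything else
myMesh.material.depthTest = false;   // ignore the depth buffer so nothing shows through it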
Another way is to group objects with layers. Set the appropriate layer mask on the camera and then render (multiple times, once per layer).
myMesh.layers.set(5);            // put this mesh on layer 5 only
camera.layers.set(1);            // camera now sees only objects on layer 1
renderer.render(scene, camera);
camera.layers.set(5);            // switch to layer 5 and render it on top
renderer.render(scene, camera);
This is called z-fighting: two fragments are so close together in the given depth range that their z-values fall within the depth buffer's margin of error, so their true depth order can come out inverted.
The easiest way to resolve this is to reduce the range of your depth buffer. This is controlled by the near and far properties on your camera. You'll need to play with the values to determine what works best for your scenario. If you can minimise the distance between the two planes, you'll have better luck avoiding z-fighting.
For example, if (as a loose estimate) the bounding sphere of your entire model has a diameter of 100, then the distance between near and far need only be 100. However, their values are specified as distances from the camera, so as you zoom out and your camera moves further away, you should adjust the values to maintain that minimum range. If your camera is at z = 100, set near = 50 and far = 150. When you pull your camera back to z = 250, update near = 200 and far = 300.
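A rough sketch of that bookkeeping (modelRadius, modelCenter and the 0.1 clamp are illustrative values, not from the answer):

var modelRadius = 50;                                      // half of the ~100 bounding diameter
function updateDepthRange(camera, modelCenter) {
    var dist = camera.position.distanceTo(modelCenter);    // distance from camera to model centre
    camera.near = Math.max(0.1, dist - modelRadius);
    camera.far = dist + modelRadius;
    camera.updateProjectionMatrix();                       // required after changing near/far
}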
Another option is to use the WebGLRenderer.logarithmicDepthBuffer option. (example)
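That option is passed when constructing the renderer, e.g.:

var renderer = new THREE.WebGLRenderer({ antialias: true, logarithmicDepthBuffer: true });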
Edit: There is one other cause: the faces of the shapes are actually co-planar. If two triangles occupy the same space, you're all but guaranteed to get z-fighting.
The simple solution is to move one of the components such that the faces are no longer co-planar. You could also potentially apply a polygonOffset to the sheet metal material, but your use-case doesn't sound like that is appropriate.
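For completeness, the polygonOffset route would look roughly like this on the sheet-metal material (sheetMaterial is a hypothetical name; negative values pull the surface slightly closer in depth):

sheetMaterial.polygonOffset = true;
sheetMaterial.polygonOffsetFactor = -1;
sheetMaterial.polygonOffsetUnits = -1;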

Find which Object3D's the camera can see in Three.js - Raycast from camera to each object

I have a grid of points (Object3D's using THREE.Points) in my Three.js scene, with a model sitting on top of the grid, as seen below. In code the model is called defaultMesh and uses a merged geometry for performance reasons:
I'm trying to work out which of the points in the grid my perspective camera can see at any given time, i.e. every time the camera position is updated by my orbital controls.
My first idea was to use raycasting to create a ray between the camera and each point in the grid. Then I can find which rays are being intersected with the model and remove the points corresponding to those rays from a list of all the points, thus leaving me with a list of points the camera can see.
So far so good, but the ray creation and intersection code has to run in the render loop (it must be updated whenever the camera moves), and therefore it's horrendously slow (obviously).
var gridPointsVisible = gridPoints.geometry.vertices.slice(0);
var startPoint = camera.position.clone();
// cast a ray from the camera position towards each point in the grid
for (var i = 0; i < gridPoints.geometry.vertices.length; i++) {
    var point = gridPoints.geometry.vertices[i];
    var direction = new THREE.Vector3().subVectors(point, startPoint);
    var ray = new THREE.Raycaster(startPoint, direction.normalize());
    // if the ray hits the model, treat the point as hidden and remove it
    if (ray.intersectObject(defaultMesh).length > 0) {
        gridPointsVisible.splice(gridPointsVisible.indexOf(point), 1);
    }
}
In the example model shown there are around 2300 rays being created, and the mesh has 1500 faces, so the rendering takes forever.
So I have 2 questions:
Is there a better way of finding which objects the camera can see?
If not, can I speed up my raycasting/intersection checks?
Thanks in advance!
Take a look at this example of GPU picking.
You can do something similar, especially easy since you have a finite and ordered set of spheres. The idea is that you'd use a shader to calculate (probably based on position) a flat color for each sphere, and render to an off-screen render target. You'd then parse the render target data for colors, and be able to map back to your spheres. Any colors that are visible are also visible spheres. Any leftover spheres are hidden. This method should produce results faster than raycasting.
WebGLRenderTarget lets you draw to a buffer without drawing to the canvas. You can then access the render target's image buffer pixel-by-pixel (really color-by-color in RGBA).
For the mapping, you'll parse that buffer and create a list of all the unique colors you see (all non-sphere objects should be some other flat color). Then you can loop through your points--and you should know what color each sphere should be by the same color calculation as the shader used. If a point's color is in your list of found colors, then that point is visible.
To optimize this idea, you can reduce the resolution of your render target. You may lose points only visible by slivers, but you can tweak your resolution to fit your needs. Also, if you have fewer than 256 points, you can use only red values, which reduces the number of checked values to 1 in every 4 (only check R of the RGBA pixel). If you go beyond 256, include checking green values, and so on.
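A minimal sketch of the render-target readback half of that idea (pickingScene, the 256x256 target size, and the colour-to-point mapping are assumptions, not code from the answer):

// render a flat-colour version of the points/model to an off-screen target
var pickingTarget = new THREE.WebGLRenderTarget(256, 256);    // low resolution is fine here
renderer.setRenderTarget(pickingTarget);
renderer.render(pickingScene, camera);                        // pickingScene uses one flat colour per point
renderer.setRenderTarget(null);

// read the pixels back and collect the unique colours that survived the depth test
var buffer = new Uint8Array(256 * 256 * 4);
renderer.readRenderTargetPixels(pickingTarget, 0, 0, 256, 256, buffer);
var seen = {};
for (var i = 0; i < buffer.length; i += 4) {
    seen[(buffer[i] << 16) | (buffer[i + 1] << 8) | buffer[i + 2]] = true;  // pack RGB into one id
}
// any point whose assigned colour id appears in `seen` is visible to the camera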

How to create invisible mesh? [duplicate]

This question already has answers here:
Show children of invisible parents
(2 answers)
Closed 7 years ago.
I'd like to have a collection of objects that appear to be floating free in space but are actually connected to each other so that they move and rotate as one. Can I put them inside a larger mesh that is itself completely invisible, that I can apply transformations to? I tried setting transparency: true on the MeshNormalMaterial constructor, but that didn't seem to have any effect.
As a simple representative example: say I want to render just one pair of opposite corner cubies in a Rubik's Cube, but leave the rest of the Cube invisible. I can rotate the entire cube and watch the effect on the smaller cubes as they move together, or I can rotate them in place and break the illusion that they're part of a larger object.
In this case, I imagine I would create three meshes using BoxGeometry or CubeGeometry, one with a side length triple that of the other two. I would add the two smaller meshes to the larger one, and add the larger one to the scene. But when I try that, I get one big cube and can't see the smaller ones inside it. If I set visible to false on the larger mesh, the smaller meshes disappear along with it, even if I explicitly set visible to true on them.
Group them inside an Object3D.
var parent = new THREE.Object3D();
parent.add( child1 ); // etc
parent.rotation.x = 2; // rotates group

Data structure / approach for efficient raytracing

I'm writing a 3D raytracer as a personal learning project (Enlight) and have run into an interesting problem related to doing intersection tests between a ray and a scene of objects.
The situation is:
I have a number of primitives that rays can intersect with (spheres, boxes, planes, etc.) and groups thereof. Collectively I'm calling these scene objects.
I want to be able to transform primitives with arbitrary affine transformations by wrapping them in a Transform object (importantly, this will enable multiple instances of the same primitive(s) to be used at different positions in the scene, since primitives are immutable)
Scene objects may be stored in a bounding volume hierarchy (i.e. I'm doing spatial partitioning)
My intersection tests work with Ray objects that represent a partial ray segment (start vector, normalised direction vector, start distance, end distance)
The problem is that when a ray hits the bounding box of a Transform object, it looks like the only way to do an intersection test with the transformed primitives contained within is to transform the Ray into the transformed co-ordinate space. This is easy enough, but then if the ray doesn't hit any transformed objects I need to fall back to the original Ray to continue the trace. Since Transforms may be nested, this means I have to maintain a whole stack of Rays for each intersection trace that is done.
This is of course within the inner loop of the whole application and the primary performance bottleneck. It will be called millions of times a second so I'm keen to minimise complexity / avoid unnecessary memory allocation.
Is there a clever way to avoid having to allocate new Rays / keep a Ray stack?
Or is there a cleverer way of doing this altogether?
Most of the time in ray-tracing you have a few (hundred, thousand) objects and quite a few more rays. Probably millions of rays. That being the case, it makes sense to see what kind of computation you can spend on the objects in order to make it faster/easier to have the rays interact with them.
A cache will be very helpful as boyfarrell suggested. It might make sense to not only create the forward and reverse transforms on the objects that would move them to or from the global frame, but also to keep a copy of the object in the global frame. It makes it more expensive to create objects or move them (because the transform changes and so do the cached global frame copies) but that's probably okay.
If you cast N rays and have M objects and N >> M, then it stands to reason that every object will have multiple rays hit it. If we assume every ray hits an object, then every object has N/M rays that strike it. That means transforming N/M rays into each object's space, hit testing, and possibly reversing the transform back out, or N/M transforms per object at a minimum. But if we cache the transformed object, we can perform a single transform per object to get it into the global frame and then need no additional transforms, at least for the hit-testing.
Define your primitives in their base form (unity scale, centered on 0,0,0, not rotated) and then move them in the scene using transformations only. Cache the result of the complete forward and reverse transformations in each object. (Do not forget the normal vectors, you will need them for reflections)
This will give you the ability to test the hit using simplified math (you reverse transform the ray to the object space and compute hit with base form object) and then transform the hit point and possible reflection vector back to the real world space using the other transform.
You will need to compute intersections with all objects in the scene and select the hit which is closest to the ray origin (but not at a negative distance). To speed this up even more, enclose multiple objects in "bounding boxes" that are very cheap to hit-test and that pass the real-world ray on to the enclosed objects only when hit (all objects still use their precomputed matrices).
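A rough sketch of that caching, using three.js math classes purely for illustration (the raytracer's own types aren't shown here; TransformedPrimitive and its intersect contract are hypothetical):

// each scene object caches its forward and inverse transforms once, when it is built
function TransformedPrimitive(primitive, matrix) {
    this.primitive = primitive;
    this.matrix = matrix.clone();                                      // object space -> world space
    this.inverse = new THREE.Matrix4().copy(matrix).invert();          // world space -> object space
    this.normalMatrix = new THREE.Matrix3().getNormalMatrix(matrix);   // for transforming normals
}

TransformedPrimitive.prototype.intersect = function (worldRay) {
    // transform a copy of the ray into object space instead of keeping a stack of rays
    var localRay = worldRay.clone().applyMatrix4(this.inverse);
    var hit = this.primitive.intersect(localRay);     // hit test against the base-form primitive
    if (hit === null) return null;
    hit.point.applyMatrix4(this.matrix);              // bring the hit point back to world space
    hit.normal.applyMatrix3(this.normalMatrix).normalize();
    return hit;                                       // caller picks the closest world-space hit
};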

Vertex buffer objects and glutsolidsphere

I have to draw a large collection of spheres in a 3D physical simulation of a "spring-mass"-like system.
I would like to know an efficient method to draw spheres without having to compile a display list at every step of my simulation (each step may vary from milliseconds to seconds, depending on the number of bodies involved in the computation).
I've read that vertex-buffer objects are an efficient method to draw objects which need also to be sometimes updated.
Is there any method to draw OpenGL spheres in a way faster than glutSolidSphere?
Spheres are self-similar; every sphere is just a scaled version of any other sphere. I see no need to regenerate any geometry. Indeed, I see no need to have more than one sphere at all.
It's simply a matter of providing the proper scaling matrix. I would suggest a sphere of radius one centered at the origin for your display list or buffer object mesh. Then you can just transform it to different locations, using a scale to set the new radius.
I would like to know an efficient method to draw spheres without having to compile a display list at every step of my simulation (each step may vary from milliseconds to seconds, depending on the number of bodies involved in the computation).
Why are you generating a display list at all, if the geometry you put into it is dynamic? Display lists are meant for static geometry that never, or only seldom, changes.
I've read that vertex-buffer objects are an efficient method to draw objects which need also to be sometimes updated.
Actually VBOs are most efficient with static geometry as well. In general you want to keep the number of actual geometry updates as low as possible. In your case the only things updating are the positions (and maybe the sizes) of the spheres. This is a prime example for instanced drawing. However, this also works well by updating only a uniform or the transformation matrix and then issuing the draw call for a single sphere.
The idea of Vertex Arrays and VBOs is, that you draw a whole batch of geometry with a single call. A sphere would be such a batch.
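For readers coming from the three.js side of this page, the same "one sphere batch, many instances" idea is what InstancedMesh provides; a rough sketch (bodyCount, bodies and material are hypothetical names):

var sphereGeometry = new THREE.SphereGeometry(1, 16, 12);      // one unit sphere shared by every body
var spheres = new THREE.InstancedMesh(sphereGeometry, material, bodyCount);
var matrix = new THREE.Matrix4();
var noRotation = new THREE.Quaternion();
for (var i = 0; i < bodyCount; i++) {
    var r = bodies[i].radius;
    matrix.compose(bodies[i].position, noRotation, new THREE.Vector3(r, r, r));
    spheres.setMatrixAt(i, matrix);                            // per-instance position and scale
}
spheres.instanceMatrix.needsUpdate = true;                     // re-upload only the matrices each step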

Resources