Three.js algorithm for marquee selecting in 3D scene - three.js

I'm creating a 3D model editor application using THREE.js where you can load a CAD model and have it display on the screen. You can pan, zoom, and rotate the camera anywhere in the scene to view the CAD model from any angle.
I want to add support to be able to draw an arbitrary rectangle on the screen (marquee select box) and anything inside this box I'd like to become selected.
What is a good algorithm to use for this operation?
My first thought was to take every loaded CAD part (that can be selected) and project its bounding box onto the screen. Then test each of these projected bounding boxes against the selection box drawn on the screen for matches. This should work; however, I'm worried it would be very slow for large CAD models with thousands of selectable parts.
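A rough sketch of that brute-force approach might look like this (the `selectableParts` array and the marquee rectangle in normalized device coordinates are placeholders, and edge cases such as parts behind the camera are ignored for brevity):
```javascript
import * as THREE from 'three';

// Brute force: project each part's world-space bounding box to NDC and test
// the resulting 2D rectangle against the marquee rectangle (also in NDC).
function marqueeSelectBruteForce(selectableParts, camera, marquee /* {minX, minY, maxX, maxY} */) {
  const selected = [];
  const box = new THREE.Box3();
  const corner = new THREE.Vector3();

  for (const part of selectableParts) {
    box.setFromObject(part); // world-space AABB of this part

    // Project all 8 box corners and accumulate their 2D screen-space extent.
    let minX = Infinity, minY = Infinity, maxX = -Infinity, maxY = -Infinity;
    for (let i = 0; i < 8; i++) {
      corner.set(
        i & 1 ? box.max.x : box.min.x,
        i & 2 ? box.max.y : box.min.y,
        i & 4 ? box.max.z : box.min.z
      ).project(camera); // now in NDC, x/y in [-1, 1]
      minX = Math.min(minX, corner.x); maxX = Math.max(maxX, corner.x);
      minY = Math.min(minY, corner.y); maxY = Math.max(maxY, corner.y);
    }

    // Keep the part if its projected rectangle overlaps the marquee.
    if (maxX >= marquee.minX && minX <= marquee.maxX &&
        maxY >= marquee.minY && minY <= marquee.maxY) {
      selected.push(part);
    }
  }
  return selected;
}
```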
Is there a better way to do marquee selections in 3D? Can raycasting somehow be used to speed up the selections?

Without knowing more details about your CAD models it's hard to give exactly relevant suggestions, but here are a few things I might try.
Use Hierarchical Bounding Boxes
If you have a multi-level tree of meshes you can generate bounding boxes for the non-leaf nodes of the tree. This isn't supported directly in THREE, but you can manually create these boxes and check against them before checking whether the child objects are within the marquee.
If your tree isn't spatially organized very well, or is very flat, you can build an octree and traverse its nodes before checking the meshes.
Of course these data structures have to be updated whenever meshes move in your CAD model.
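As a very rough sketch of that traversal (names like `userData.worldBounds` and the `intersectsMarquee` callback are placeholders, not three.js API), the group-level bounds let you skip whole subtrees:
```javascript
// Walk the hierarchy, pruning whole subtrees whose cached bounds miss the marquee.
// `intersectsMarquee(box3)` is whatever test you use (projected rect, frustum, ...);
// `node.userData.worldBounds` is a THREE.Box3 kept up to date when the subtree moves.
function collectMarqueeHits(node, intersectsMarquee, hits = []) {
  const bounds = node.userData.worldBounds;

  // Early out: if this subtree's bounds miss the marquee, no descendant can be hit.
  if (bounds && !intersectsMarquee(bounds)) return hits;

  if (node.isMesh && node.userData.selectable) hits.push(node);

  for (const child of node.children) {
    collectMarqueeHits(child, intersectsMarquee, hits);
  }
  return hits;
}
```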
Cache World Bounds
If you cache world-space versions of the bounding boxes on all the meshes, then instead of projecting the bounds into screen space you can create a frustum from the marquee in world space and check all the mesh bounds against it without having to do any transformation of those boxes.
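One way to build such a frustum (a sketch, assuming a reasonably recent three.js with Frustum.setFromProjectionMatrix, and a made-up `worldBounds` field caching a world-space Box3 per part) is to crop the camera's projection matrix to the marquee rectangle and extract the planes from the combined matrix:
```javascript
import * as THREE from 'three';

// Build a world-space frustum for the marquee (rectangle given in NDC, x/y in [-1, 1])
// by remapping that sub-rectangle of the camera's view onto the full clip volume.
function frustumFromMarquee(camera, rect /* {minX, minY, maxX, maxY} */) {
  const { minX, minY, maxX, maxY } = rect;

  // Affine map in clip space that sends [minX, maxX] x [minY, maxY] to [-1, 1]^2.
  const crop = new THREE.Matrix4().set(
    2 / (maxX - minX), 0, 0, -(maxX + minX) / (maxX - minX),
    0, 2 / (maxY - minY), 0, -(maxY + minY) / (maxY - minY),
    0, 0, 1, 0,
    0, 0, 0, 1
  );

  // crop * projection * view: plane extraction then yields the marquee sub-frustum.
  const m = new THREE.Matrix4()
    .multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse)
    .premultiply(crop);

  return new THREE.Frustum().setFromProjectionMatrix(m);
}

// Usage: test the cached world-space bounds directly, no per-part projection needed.
function selectWithMarquee(parts, camera, rect) {
  camera.updateMatrixWorld(); // make sure matrixWorldInverse is current
  const frustum = frustumFromMarquee(camera, rect);
  return parts.filter(part => frustum.intersectsBox(part.worldBounds));
}
```
Note that intersectsBox also matches parts that only partially overlap the marquee; for strict "fully inside" selection you would instead check all eight corners of each box with Frustum.containsPoint.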
Asynchronous Checking
Instead of gathering all the intersected bounds in a single frame, you could spread the work over multiple frames if it is taking a long time.
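For instance, a small sketch that budgets a fixed number of bounds tests per frame (the names here follow the earlier sketches and are placeholders):
```javascript
// Spread the bounds checks over multiple frames: test at most `budget` boxes per
// frame, then yield until the next animation frame.
function selectIncrementally(candidates, frustum, onDone, budget = 500) {
  const selected = [];
  let index = 0;

  function step() {
    const end = Math.min(index + budget, candidates.length);
    for (; index < end; index++) {
      if (frustum.intersectsBox(candidates[index].worldBounds)) {
        selected.push(candidates[index]);
      }
    }
    if (index < candidates.length) {
      requestAnimationFrame(step); // keep going next frame
    } else {
      onDone(selected);            // all candidates processed
    }
  }

  requestAnimationFrame(step);
}
```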
Unfortunately I don't think raycasting can do a whole lot for you here.
Hopefully that helps!

Related

Efficient way to create click targets larger than the actual scene object

What’s a good way to have click targets that are larger than the actual scene object?
So far we have been using a larger invisible (yet raycastable) object to do this but it comes at the cost of requiring two draw calls instead of one.
Are there any better solutions?
There is no additional draw call if you set Object3D.visible to false. However, you can still perform raycasting against invisible 3D objects. Use Raycaster.layers to selectively ignore 3D objects when performing intersection tests.
So what you are doing is already fine. You might want to consider raycasting only against bounding volumes if raycasting performance becomes a bottleneck in your app. The idea is to create an instance of Box3 (AABB) or Sphere (bounding sphere) for your actual scene object and only use it for raycasting.
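A minimal sketch of that idea, assuming you keep an enlarged world-space Box3 per object purely for picking (the padding value and the `entries` structure are made up for illustration):
```javascript
import * as THREE from 'three';

const raycaster = new THREE.Raycaster();
const pointer = new THREE.Vector2();

// Precompute an enlarged world-space AABB that acts as the click target.
function makeClickBox(object, padding = 0.5) {
  return new THREE.Box3().setFromObject(object).expandByScalar(padding);
}

// Ray vs AABB only; no triangle-level intersection work and no extra draw call.
function pick(event, camera, entries /* [{ object, clickBox }] */) {
  pointer.set(
    (event.clientX / window.innerWidth) * 2 - 1,
    -(event.clientY / window.innerHeight) * 2 + 1
  );
  raycaster.setFromCamera(pointer, camera);

  for (const { object, clickBox } of entries) {
    if (raycaster.ray.intersectsBox(clickBox)) return object; // first hit, for brevity
  }
  return null;
}
```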

Is it possible to select multiple objects with box selection?

I know that with QObjectPicker I can mouse pick a single entity. But how can I select multiple objects by drawing a rectangle on the screen?
I think this is actually pretty complicated. But here are my two cents:
If you only need to be able to select unoccluded objects
(i.e. you don't need to select occluded ones), you could add a second frame graph branch to your existing one and draw each object with a unique color to an offscreen texture. Then retrieve this texture, check which colors lie within the drawn rectangle, look up the corresponding objects, and select them (compare to this question/answer).
I'm not sure how well this works in Qt3D because I've always had some issues with QRenderCapture. It didn't seem to matter where I added it in the frame graph, i.e. it always captured the last state, so even if you have multiple render targets it might capture the wrong one, etc. Qt3D is still in a pretty rough state, I'd say.
If you need an example of how to render to an offscreen texture check out my example on GitHub.
If you need to be able to select occluded objects too
then it gets pretty complicated. I'm just providing some ideas here. I don't know if they will work.
If you don't have that many objects maybe you could implement the idea from above for each single object. I.e. for each object you have an offscreen frame graph branch that filters out all other objects. Then you could check each rendered texture for the rectangle drawn with the mouse. But again I'm not sure how well this works with Qt3D and if you have many objects (like in a game) it will probably crash because of the many offscreen textures.
You could also implement something like "inverse" frustum culling. In frustum culling, you omit rendering objects that lie outside the view frustum of the camera. You could compute a frustum from the rectangle coordinates drawn with the mouse. Check out the QFrustumCulling code. You would need to compute the planes differently of course, using a modified view matrix. When the user draws the rectangle, compute the frustum and check all objects. Unfortunately, this also selects objects whose bounding sphere intersects the frustum even though, visually, you might not touch any part of the object. If that bothers you, you could directly select all objects whose sphere is completely within the frustum, and for all objects that only partly intersect, do the intersection computation on a per-triangle basis, exiting the computation for the current object as soon as a triangle intersects the frustum. Depending on the number of triangles this could be computationally very costly.
I'd definitely stick to being able to select only unoccluded objects, especially because picking in OpenGL seems to be realized by drawing the objects with colors these days.

Does adding many GameObjects to the scene (not rendered) hurt performance? [Unity]

I'm using Unity 4.6 to develop a 2D game. I want to know if having a lot of GameObjects in the scene (out of the camera's sight) has a considerable influence on performance.
For example, is it efficient to make a scrollable list of names (like 1000 of them)? (Each one is a GameObject and has a text, a button, etc.)
I mask them in a specified area (for example 10 of them are visible at the same time).
Thanks in advance!
Depends on whether or not the objects have visible components. If they do, the engine will draw them even if they are 'off-camera'. A game object by itself has a pretty light load - a tile-based game could have thousands in memory. You'll want to toggle the visibility of sprites if you plan on drawing a large number to the scene off-camera. This is where a SpriteManager comes in. It'll check to see if the sprite is in the camera's rectangle and disable sprites that aren't. There is a semi-official example here that is good, if a little complicated:
http://wiki.unity3d.com/index.php?title=SpriteManager

Do elements drawn outside the clip plane affect OpenGL performance?

OpenGL question: I have something to ask about the clip space transformation. I am reading an online tutorial and it says that everything you draw outside the clip space will be clipped. Given that, do elements outside the clip space affect performance or not? Since they will not be drawn, I would assume they don't.
Assuming that they do affect performance, in the case of a 2D game like Super Mario I am thinking about not drawing the elements outside the clip space to achieve better performance. Please clarify. Thanks.
OpenGL has only a certain amount of knowledge about your scene and will clip very late in the pipeline. It can't apply a broad phase test. Assuming you can, you should.
Supposing you had a model with 30,000 triangles, OpenGL would transform each and every one of those 30,000 triangles before considering clipping. If you know something as simple as the bounding sphere for the model it's possible you could see that the whole thing is completely outside of the frustum in a single test and save almost 30,000 extra bits of effort.
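To illustrate that broad-phase idea (this is three.js-flavoured JavaScript rather than raw OpenGL, purely as a sketch): a single bounding-sphere test against the view frustum decides whether the 30,000 triangles are worth submitting at all.
```javascript
import * as THREE from 'three';

const frustum = new THREE.Frustum();
const projScreenMatrix = new THREE.Matrix4();

// One sphere-vs-frustum test instead of transforming every triangle of the model.
function isModelWorthDrawing(camera, worldBoundingSphere /* THREE.Sphere */) {
  projScreenMatrix.multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse);
  frustum.setFromProjectionMatrix(projScreenMatrix);
  return frustum.intersectsSphere(worldBoundingSphere);
}
```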
In a 2D game like Mario, what this usually means is using the scroll position to index into the map and generating geometry only for the tiles and sprites that are within the visible area.
For the map, that generally just means figuring out the (x, y) of one corner and then generating geometry for the known width and height of the screen, which means discarding the vast majority of the geometry with zero processing.
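A small illustrative sketch of that tile-window calculation (tile size, screen size and the map layout are assumptions):
```javascript
// Derive the range of potentially visible tiles from the scroll position.
function visibleTileRange(scrollX, scrollY, screenWidth, screenHeight, tileSize) {
  const firstCol = Math.floor(scrollX / tileSize);
  const firstRow = Math.floor(scrollY / tileSize);
  // +1 covers tiles that are only partially visible at the right/bottom edge.
  const lastCol = firstCol + Math.ceil(screenWidth / tileSize) + 1;
  const lastRow = firstRow + Math.ceil(screenHeight / tileSize) + 1;
  return { firstCol, firstRow, lastCol, lastRow };
}

// Only tiles in this window get geometry; the rest of the map is skipped outright.
function buildVisibleTiles(map, range, emitTile) {
  for (let row = range.firstRow; row <= range.lastRow; row++) {
    for (let col = range.firstCol; col <= range.lastCol; col++) {
      const tile = map[row] && map[row][col];
      if (tile !== undefined) emitTile(tile, col, row);
    }
  }
}
```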
For the sprites, this is generally why in those sort of games you often see enemies reset to their starting position if you walk a little way from them and then walk back: they're added to the active list based on a map location trigger and removed when you walk far enough away. While not active, no mutable storage is afforded to them.

Adding text in three.js

I am visualizing a graph using Three.js and for each node of the graph I add a label using TextGeometry. It is a pretty small graph but when I add text my application gets really slow. What should I do about it?
TextGeometry is more suitable for cases where you are really interested in rendering the text in 3D. It creates complex geometry that will surely slow your app down, especially when there is a lot of text or you use CanvasRenderer.
For labels, it is generally better to use 2D labels, which are much faster to render. There are many different approaches to this: they can go on top of the Three.js rendering canvas, on a separate canvas, or even be normal HTML nodes positioned using CSS properties. Alternatively, you can dynamically create small canvases of your label texts and use them as sprite textures that always face the camera - this might be the easiest way, as the labels become part of the 3D scene like your other objects. For a separate-layer approach, you need to use unprojectVector or similar to figure out the screen XY coordinates that match your 3D scene positions.
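As an example of the canvas-sprite approach (sizes and styling here are arbitrary, and this assumes a reasonably recent three.js with CanvasTexture):
```javascript
import * as THREE from 'three';

// Draw the label text into a small canvas and use it as a sprite texture,
// so the label lives in the 3D scene and always faces the camera.
function makeLabelSprite(text) {
  const canvas = document.createElement('canvas');
  canvas.width = 256;
  canvas.height = 64;

  const ctx = canvas.getContext('2d');
  ctx.fillStyle = '#ffffff';
  ctx.font = '32px sans-serif';
  ctx.textBaseline = 'middle';
  ctx.fillText(text, 8, canvas.height / 2);

  const texture = new THREE.CanvasTexture(canvas);
  const material = new THREE.SpriteMaterial({ map: texture, transparent: true });
  const sprite = new THREE.Sprite(material);

  sprite.scale.set(2, 0.5, 1); // keep the canvas aspect ratio in world units
  return sprite;
}

// Usage: attach the label to a graph node so it follows it around.
// node.add(makeLabelSprite('Node 42'));
```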
See these SO posts for example:
- Dynamically create 2D text in three.js
- Canvas and SpriteMaterial
- How do I add a tag/label to appear on top of several objects so that the tag always faces the camera when the user clicks the object?
