Grab objects near camera - three.js

I was looking at http://threejs.org/examples/webgl_nearestneighbour.html and had a few questions come up. I see that they use a kd-tree to store all the particle positions and then have a function to determine the nearest particle and color it. Let's say you have a canvas with around 100 buffered geometries at around 72,000 vertices per geometry. The only way I know to do this is to take the positions of the buffered geometries, put them into the kd-tree to determine the nearest vertex, and go from there. This sounds very expensive.
What other way is there to return the objects that are near the camera? Something like how THREE.LOD does it? http://threejs.org/examples/#webgl_lod It can tell how far away an object is and render different levels of detail depending on the settings you provided.

Define "expensive". The point of the kd-tree is to find nearest-neighbour elements quickly; its primary focus is not on saving memory (although it does everything in place on a typed array, so it's already quite cheap in terms of memory). If you need to save memory you may have to find another way.
Yet a typed array of length 21'600'000 is indeed a bit long. I highly doubt you need to have every single vertex in there. Why not have one reference position per geometry part and, if you need the vertices associated with that point, a dictionary? Then you can call myGeometryVertices[ geometryReferencePoint ].
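A minimal sketch of that idea, assuming the THREE.TypedArrayUtils.Kdtree helper from the linked example is loaded and that geometries is your array of 100 buffered geometries (the dictionary key here is simply the geometry's index):

var positions = new Float32Array(geometries.length * 3); // one point per geometry
var myGeometryVertices = {};                             // index -> full vertex buffer

geometries.forEach(function (geometry, i) {
  geometry.computeBoundingSphere();
  var c = geometry.boundingSphere.center;                // the reference point
  positions.set([c.x, c.y, c.z], i * 3);
  myGeometryVertices[i] = geometry.attributes.position.array;
});

var squaredDistance = function (a, b) {
  return Math.pow(a[0] - b[0], 2) + Math.pow(a[1] - b[1], 2) + Math.pow(a[2] - b[2], 2);
};
var kdtree = new THREE.TypedArrayUtils.Kdtree(positions, squaredDistance, 3);

// nearest geometry to the camera; only then touch its 72k vertices
var hits = kdtree.nearest([camera.position.x, camera.position.y, camera.position.z], 1, Infinity);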
Three.LOD works with raycasts. If you have a few hundred objects that might work well; if you have hundreds of thousands or even millions of positions you'll run into trouble. The same applies if you're not using meshes: you can't raycast e.g. a particle.
Really, just build your own logic with them. Neither of the two provides a prebuilt perfect-for-all-cases solution.

Related

Very fast boolean difference between two meshes

Let's say I have a static object and a movable object which can be moved and rotated. What is the best way to very quickly calculate the boolean difference of those two meshes?
Precision is not so important here, but speed is, since I have to use it in the update phase of the main loop.
Maybe, given the strict time limit, modifying the static object's vertices and triangles directly is to be preferred. Or should voxels be used here instead?
EDIT: The use case is an interactive viewer of a wood panel (a parallelepiped) and a milling tool (a revolved contour, something like these).
The milling tool can be rotated and can work at varying orientations (5 axes).
EDIT 2: The milling tool may not pierce the wood.
EDIT 3: The panel can be as large as 6000x2000mm and the milling tool can be as little as 3x3mm.
If you need the best possible performance then the generic CSG approach may be too slow for you (though it still depends on the meshes and the target hardware).
You may try to find a specialized algorithm coded for your specific meshes. Say you have two cubes - one is a 'wall' and the other a 'window' - then it's much easier/faster to compute the resulting mesh with your custom code than with full CSG. Unfortunately you don't say anything about your meshes.
You may also try to reduce it to a 2D problem, using simplified meshes to compute a result that 'looks as expected'.
If the movement of your meshes is somehow limited you may be able to precompute full or partial results for different mesh combinations to use at runtime.
You may use some space partitioning like BSP trees or octrees to divide your meshes during a precomputation stage. This way you could split one big problem into many smaller ones that may be faster to compute, or at least make the solution multi-threaded.
You mentioned voxels - if you're fine with their look and limits, you may voxelize both meshes and just read and mix the two voxel values instead of one, then triangulate the result with an algorithm like Marching Cubes (a sketch of the mixing step follows below).
Those are all just some general ideas, but we'll need better info to help you more.
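For the voxel route, the boolean difference itself is trivial once both meshes are voxelized; a minimal sketch, where voxelize() and marchingCubes() are hypothetical stand-ins for whatever voxelizer and triangulator you use:

// both inputs are flat Uint8Arrays of 0/1 occupancy values over the same grid
function voxelDifference(panelVoxels, toolVoxels) {
  var result = new Uint8Array(panelVoxels.length);
  for (var i = 0; i < panelVoxels.length; i++) {
    result[i] = (panelVoxels[i] && !toolVoxels[i]) ? 1 : 0; // panel minus tool
  }
  return result;
}

// var panel = voxelize(panelMesh, gridSize); // hypothetical
// var tool  = voxelize(toolMesh, gridSize);  // hypothetical
// var mesh  = marchingCubes(voxelDifference(panel, tool), gridSize);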
EDIT:
With your description it looks like you're modeling a kind of bas-relief, so you may use Relief Mapping to fake the effect. It's based on a height map stored as a texture, so you'd only need to update a few pixels of the texture and render a plane. It should be quite fast compared to the other approaches; the downside is that, being based on a height map, you can't get the shapes a T-slot or dovetail cutter would create.
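A minimal sketch of that height-map update in three.js (sizes and names are mine; the panel's shader is assumed to read its depth from this texture):

var SIZE = 1024;
var depths = new Uint8Array(SIZE * SIZE); // 0 = untouched surface
var heightMap = new THREE.DataTexture(depths, SIZE, SIZE, THREE.LuminanceFormat);

// carve a square of texels around the tool's contact point (u, v in 0..1)
function mill(u, v, radiusTexels, depth) {
  var cx = Math.floor(u * SIZE), cy = Math.floor(v * SIZE);
  for (var y = Math.max(0, cy - radiusTexels); y <= Math.min(SIZE - 1, cy + radiusTexels); y++) {
    for (var x = Math.max(0, cx - radiusTexels); x <= Math.min(SIZE - 1, cx + radiusTexels); x++) {
      var i = y * SIZE + x;
      depths[i] = Math.max(depths[i], depth); // carve deeper, never un-carve
    }
  }
  heightMap.needsUpdate = true; // re-upload only this texture
}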
If you want the real geometry then I'd start from a simple plane as your panel (you don't need full 3D yet, just the front surface) and divide it with a 2D grid. Each grid element should be slightly bigger than the drill size, and every element is a separate mesh. In the frame update you'd cut one, or at most 4, elements that the drill touches (a sketch of that cell lookup follows below). Thanks to this grid, every cutting operation runs against a very simple mesh, so it may reach your intended speed. You can also cut all the current elements in separate threads. After the cutting is done you upload to the GPU only the elements modified this frame, so you may end up with a quite complex mesh but small modifications per frame.
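Finding the touched elements is just integer math on the drill's position; a minimal sketch, with a made-up cell size matching the 3x3mm tool:

var CELL = 5; // mm, slightly bigger than the 3x3mm drill

// returns the (at most 4) grid cells the drill overlaps
function touchedCells(drillX, drillY, drillRadius) {
  var cells = [];
  var minCol = Math.floor((drillX - drillRadius) / CELL);
  var maxCol = Math.floor((drillX + drillRadius) / CELL);
  var minRow = Math.floor((drillY - drillRadius) / CELL);
  var maxRow = Math.floor((drillY + drillRadius) / CELL);
  for (var row = minRow; row <= maxRow; row++) {
    for (var col = minCol; col <= maxCol; col++) {
      cells.push({ row: row, col: col }); // cut only these meshes this frame
    }
  }
  return cells;
}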

Three js performance

Hi!
I am working with objects that have huge vertex counts. I am able to show lots of models because I have split them into smaller parts (under 65K vertices each). I am also using three.js cameras. I want to increase performance by using a priority queue: while the user is moving the camera, show only the top 10 models, and when the movement stops, show the rest. This part is not that hard, but I don't want to render models when they are hidden behind another object. Maybe I could send out some rays from the camera's point of view (checking for bounding-box hits) and build the priority queue from the resulting hit list.
What do you think?
Also, how can I detect on the fly whether I can load the next model or not?
Option A: Occlusion culling; you will need to find a library for this.
Option B: Use an AABB-plane test between the camera frustum planes and the object's bounding box. This tells you whether an object is in the camera's field of view (not whether it is visible behind another object, as such a test is impossible this way; WebGL most likely already does this to a degree).
Implementation:
Google it; three.js probably supports this.
Option C: Use a maximum object render limit, prioritized by distance from the camera and size of the object. E.g. calculate which objects are visible (Option B), then prioritize the closest and biggest ones and disable the rest.
pseudo-code, made concrete with three.js types:
if (objectIsInFrustum(object)) { // the Option B test, sketched below
  var box = object.geometry.boundingBox; // call computeBoundingBox() once beforehand
  var priority = box.max.clone().sub(box.min).length() / camera.position.distanceTo(object.position);
}
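For completeness, objectIsInFrustum above could be a thin wrapper around THREE.Frustum (a sketch; setFromMatrix was renamed setFromProjectionMatrix in newer three.js releases):

var frustum = new THREE.Frustum();
var projScreenMatrix = new THREE.Matrix4();

function objectIsInFrustum(object) {
  projScreenMatrix.multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse);
  frustum.setFromMatrix(projScreenMatrix);
  return frustum.intersectsObject(object); // tests the object's bounding sphere
}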
Make sure your shaders are only doing one pass, as a second pass will roughly double the calculation time (depending on the situation).
Option D: Raycast to the eight corners of the bounding box; if all of them fail, don't render the object. This is pretty accurate but by no means perfect (a sketch follows after the summary below).
Option A will be the best for sure. Option C is great if you don't care that small objects far away don't get rendered. Option D works well with objects that have a lot of verts; you may want to raycast more points of the object depending on the situation. Option B probably won't be useful for your scenario on its own, but it's a part of C and of other optimization methods. Overall there has never been an extremely reliable and optimal way to tell if something is behind something else.
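A minimal sketch of Option D (names are mine; occluders must not contain the tested object itself):

var raycaster = new THREE.Raycaster();

function probablyVisible(object, occluders, camera) {
  var box = object.geometry.boundingBox; // computeBoundingBox() beforehand
  var corners = [];
  [box.min.x, box.max.x].forEach(function (x) {
    [box.min.y, box.max.y].forEach(function (y) {
      [box.min.z, box.max.z].forEach(function (z) {
        corners.push(new THREE.Vector3(x, y, z).applyMatrix4(object.matrixWorld));
      });
    });
  });
  // visible if at least one corner is reached before anything else
  return corners.some(function (corner) {
    var dir = corner.clone().sub(camera.position);
    var distance = dir.length();
    raycaster.set(camera.position, dir.normalize());
    var hits = raycaster.intersectObjects(occluders, false);
    return hits.length === 0 || hits[0].distance >= distance;
  });
}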

Three.js particles vs particle system and picking

I have an app that uses a Three.js ParticleSystem to render on the order of 50,000 points. I have spent a lot of time searching for efficient ways to do picking (ray-casting) so as to be able to interact with individual points but have not found a good solution. I am considering changing to just using an array of Particles instead of ParticleSystems.
My questions are:
Am I missing something; is there a good way to do picking with the ParticleSystem?
Will I suffer a performance hit using an array of Particles instead of the ParticleSystem, especially since I am taking advantage of the ability to pass several arrays of attributes into the shader?
Thanks for any insight anyone can provide!
You're checking 50,000 points every time. That's a bit too much.
You may want to split those points into different ParticleSystems, like 10 objects with 5,000 particles each.
Ideally each object would cover a different "quadrant", so the Raycaster can check the bounding sphere first and ignore all of that object's points when there is no intersection.
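A minimal sketch of the split (names are mine, and it assumes allPositions is already ordered so that consecutive points are spatially close; ParticleSystem was renamed THREE.Points in later releases, where addAttribute also became setAttribute):

var CHUNK = 5000;
var systems = [];

for (var start = 0; start < allPositions.length / 3; start += CHUNK) {
  var geometry = new THREE.BufferGeometry();
  var slice = allPositions.subarray(start * 3, (start + CHUNK) * 3);
  geometry.addAttribute('position', new THREE.BufferAttribute(slice, 3));
  geometry.computeBoundingSphere(); // what the Raycaster rejects against first
  var system = new THREE.ParticleSystem(geometry, material);
  systems.push(system);
  scene.add(system);
}

// picking: only systems whose bounding sphere the ray hits get their
// 5,000 points examined individually
var hits = raycaster.intersectObjects(systems);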

Efficient collision detection between balls (ellipses)

I'm running a simple sketch in an HTML5 canvas using Processing.js that creates "ball" objects, which are just ellipses that have position, speed, and acceleration vectors as well as a diameter. In the draw function I call a function named applyPhysics(), which loops over each ball in a hashmap and checks the balls against each other to see whether their positions make them collide. If they do, their speed vectors are reversed.
Long story short, the number of calculations as it is now is (number of balls)^2, which ends up being a lot when I get into the hundreds of balls. This sort of check slows down the sketch too much, so I'm looking for a way to do smarter collision detection.
Any suggestions? Using PGraphics somehow, maybe?
I assume you're already simplifying the physics by treating the ellipses as if they were circles.
Other than that, check out quadtree collision detection:
http://gamedev.tutsplus.com/tutorials/implementation/quick-tip-use-quadtrees-to-detect-likely-collisions-in-2d-space/
I don't know your project, but if the balls have non-random forces applied to them (e.g. gravity) you might use predictive analytics also.
If you grid the space and create a data structure that reflects this (e.g. an array of row objects, each containing an array of column objects, each containing an ArrayList of ball objects), you can consider only the interactions within each cell (or also with neighbouring cells). You can reassign balls to new cells when they cross boundaries. You then have far fewer interactions; a sketch follows below.
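A minimal sketch of that grid broad phase (ballsCollide() and reverseSpeeds() stand in for your existing circle test and response; balls near a cell border would also need checks against neighbouring cells, omitted here for brevity):

var CELL_SIZE = 40; // at least the largest ball diameter
var cols = Math.ceil(width / CELL_SIZE);
var rows = Math.ceil(height / CELL_SIZE);

function applyPhysics(balls) {
  // rebuild the grid every frame; this also handles boundary crossings
  var grid = [];
  for (var i = 0; i < cols * rows; i++) grid.push([]);
  balls.forEach(function (ball) {
    var col = Math.min(cols - 1, Math.max(0, Math.floor(ball.x / CELL_SIZE)));
    var row = Math.min(rows - 1, Math.max(0, Math.floor(ball.y / CELL_SIZE)));
    grid[row * cols + col].push(ball);
  });
  // only pairs sharing a cell are tested: roughly n^2 / cellCount checks
  grid.forEach(function (cell) {
    for (var a = 0; a < cell.length; a++) {
      for (var b = a + 1; b < cell.length; b++) {
        if (ballsCollide(cell[a], cell[b])) reverseSpeeds(cell[a], cell[b]);
      }
    }
  });
}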

OpenGL drawing the same polygon many times in multiple places

In my OpenGL app I am drawing the same polygon approximately 50k times, but at different points on the screen. My current approach is the following:
Draw the polygon once into a display list.
For each instance of the polygon, push the matrix, translate to that point, and scale and rotate appropriately (the scaling of each instance will be the same; the translation and rotation will not).
However, with 50k polygons this means 50k pushes and pops, plus computing the correct matrix translation to move each one to the right point.
A coworker of mine also suggested drawing the entire scene into a buffer and then just drawing the whole buffer with a single translation. The tradeoff here is that we need to keep all of the polygon vertices in memory rather than just the display list, but we wouldn't need to do a push/translate/scale/rotate/pop for each vertex.
The first approach is the one we currently have implemented, and I would prefer to see if we can improve that since it would require major changes to do it the second way (however, if the second way is much faster, we can always do the rewrite).
Are all of these push/pops necessary? Is there a faster way to do this? And should I be concerned that this many push/pops will degrade performance?
It depends on your ultimate goal. More recent OpenGL specs provide "geometry instancing": you can load all the matrices into a buffer and then draw all 50k with a single instanced draw call (OpenGL 3+). If you are looking for a temporary fix, at the very least load the polygon into a Vertex Buffer Object; display lists are very old and deprecated.
Are these 50k polygons going to move independently? If so, you'll have to put up with some form of pushing/popping (even though modern scene graphs do not necessarily use an explicit matrix stack). If the 50k polygons are static, you could pre-compile the entire scene into one VBO. That would make it render very fast.
If you can assume a recent version of OpenGL (>= 3.1, IIRC) you might want to look at glDrawArraysInstanced and/or glDrawElementsInstanced. For older versions you can probably use glDrawArraysInstancedEXT / glDrawElementsInstancedEXT, but they're extensions, so you'll have to access them as such.
Either way, the general idea is fairly simple: you have one mesh, and multiple transforms specifying where to draw the mesh, then you step through and draw the mesh with the different transforms. Note, however, that this doesn't necessarily give a major improvement -- it depends on the implementation (even more than most things do).
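The question is about desktop OpenGL, but the same idea can be sketched in WebGL2 JavaScript (matching the rest of this page), where gl.drawArraysInstanced and gl.vertexAttribDivisor map directly onto glDrawArraysInstanced and glVertexAttribDivisor. One triangle stands in for the polygon, drawn 50,000 times by a single call; 'canvas' is assumed to be an existing canvas element:

const gl = canvas.getContext('webgl2');
const N = 50000;

const vsSrc = `#version 300 es
layout(location = 0) in vec2 position; // shared polygon vertex
layout(location = 1) in vec2 offset;   // per-instance translation
void main() { gl_Position = vec4(position + offset, 0.0, 1.0); }`;
const fsSrc = `#version 300 es
precision mediump float;
out vec4 color;
void main() { color = vec4(1.0); }`;

function compile(type, src) {
  const s = gl.createShader(type);
  gl.shaderSource(s, src);
  gl.compileShader(s);
  return s;
}
const prog = gl.createProgram();
gl.attachShader(prog, compile(gl.VERTEX_SHADER, vsSrc));
gl.attachShader(prog, compile(gl.FRAGMENT_SHADER, fsSrc));
gl.linkProgram(prog);
gl.useProgram(prog);

// the polygon's vertices, uploaded once (the VBO equivalent of the display list)
gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
gl.bufferData(gl.ARRAY_BUFFER,
  new Float32Array([0, 0.02, -0.02, -0.02, 0.02, -0.02]), gl.STATIC_DRAW);
gl.enableVertexAttribArray(0);
gl.vertexAttribPointer(0, 2, gl.FLOAT, false, 0, 0);

// one offset per instance; full matrices work the same way, spread over
// four vec4 attributes
const offsets = new Float32Array(N * 2);
for (let i = 0; i < offsets.length; i++) offsets[i] = Math.random() * 2 - 1;
gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
gl.bufferData(gl.ARRAY_BUFFER, offsets, gl.STATIC_DRAW);
gl.enableVertexAttribArray(1);
gl.vertexAttribPointer(1, 2, gl.FLOAT, false, 0, 0);
gl.vertexAttribDivisor(1, 1); // advance once per instance, not per vertex

gl.drawArraysInstanced(gl.TRIANGLES, 0, 3, N); // all 50k in a single call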
