Fast 3D mesh generation from pointcloud - performance

I would like to build a simple mesh from a set of points in the fastest way possible. Hypothetically, my point cloud would contain a fairly low number of points (something like 1,000 to 50,000).
I've read about 3D Delaunay triangulation and some other methods, but most of the time papers don't report speed, and when they do I often see huge computation times on the order of minutes.
An interesting algorithm I've found is this: https://doc.cgal.org/latest/Poisson_surface_reconstruction_3/index.html
My main concern is that it is meant for reconstructing 2D surfaces embedded in 3D space, while my point cloud contains points that would lie in the interior of the final volume.
Could you suggest some algorithms that could be useful in my scenario, and give a rough estimate of their computation times? Is it possible to perform such a task in less than 5 seconds?
Note that I'm not trying to reconstruct human faces, sculptures, or anything like that. The meshes I'm trying to reconstruct are always fairly polyhedral.
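For a sense of scale, here is a minimal sketch of building a 3D Delaunay triangulation of such a cloud with CGAL, one of the approaches mentioned above; the kernel choice and placeholder points are just assumptions, and a surface would still have to be extracted from the resulting tetrahedra:

    // Sketch only: 3D Delaunay triangulation of a small point cloud with CGAL.
    // The boundary surface still has to be extracted from the tetrahedra afterwards.
    #include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
    #include <CGAL/Delaunay_triangulation_3.h>
    #include <iostream>
    #include <vector>

    typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
    typedef CGAL::Delaunay_triangulation_3<K>                    Delaunay;
    typedef K::Point_3                                           Point;

    int main() {
        std::vector<Point> cloud;                // fill with the 1,000-50,000 input points
        cloud.push_back(Point(0, 0, 0));         // placeholder data
        cloud.push_back(Point(1, 0, 0));
        cloud.push_back(Point(0, 1, 0));
        cloud.push_back(Point(0, 0, 1));

        Delaunay dt(cloud.begin(), cloud.end()); // triangulates the whole volume

        std::cout << dt.number_of_vertices() << " vertices, "
                  << dt.number_of_finite_cells() << " tetrahedra\n";
    }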
Thanks for your attention

Related

Very fast boolean difference between two meshes

Let's say I have a static object and a movable object that can be moved and rotated. What is the best way to very quickly calculate the boolean difference of those two meshes?
Precision is not so important here, but speed is, since I have to use it in the update phase of the main loop.
Maybe, given the strict time limit, modifying the static object's vertices and triangles directly is preferable. Or should voxels be preferred here instead?
EDIT: The use case is an interactive viewer of a wood panel (a parallelepiped) and a milling tool (a revolved contour, something like these).
The milling tool can be rotated and can work oriented at varying degrees (5 axes).
EDIT 2: The milling tool may not pierce the wood.
EDIT 3: The panel can be as large as 6000x2000mm and the milling tool can be as little as 3x3mm.
If you need the best possible performance then the generic CSG approach may be too slow for you (though that still depends on the meshes and target hardware).
You may try to find some specialized algorithm, coded for your specific meshes. Let's say you have two cubes - one is a 'wall' and the second is a 'window' - then it's much easier/faster to compute the resulting mesh with custom code than with full CSG. Unfortunately you don't say anything about your meshes.
You may also try to turn it into a 2D problem, or use simplified meshes to compute a result that 'looks as expected'.
If the movement of your meshes is somehow limited you may be able to precompute full or partial results for different mesh combinations to use at runtime.
You may use some space partitioning like BSP or octrees to divide your meshes during a precomputation stage. This way you could split one big problem into many smaller ones that may be faster to compute, or at least allow the solution to be multi-threaded.
You've mentioned voxels - if you're fine with their look and limits, you may voxelize both meshes and simply read and combine two voxel values instead of one. Then you would triangulate the result using an algorithm like Marching Cubes.
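As a rough sketch of that voxel route (the grid layout is an assumption, and both grids are assumed to share the same resolution and placement), the boolean difference itself reduces to one operation per cell before triangulation:

    // Sketch: boolean difference of two voxelized solids, A minus B.
    // Both grids are assumed to share the same resolution and placement.
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct VoxelGrid {
        int n = 0;                        // cells per axis
        std::vector<std::uint8_t> solid;  // n*n*n occupancy flags, 1 = solid
    };

    // Carve B out of A in place; the result would then be triangulated
    // with something like Marching Cubes.
    void voxelDifference(VoxelGrid& a, const VoxelGrid& b) {
        for (std::size_t i = 0; i < a.solid.size(); ++i)
            a.solid[i] = a.solid[i] && !b.solid[i];
    }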
Those are all just some general ideas but we'll need better info to help you more.
EDIT:
With your description it looks like you're modeling some bas-relief, so you may use Relief Mapping to fake this effect. It's based on a height map stored as a texture, so you'd just need to update a few pixels of the texture and render a plane. It should be quite fast compared to other approaches; the downside is that it's based on a height map, so you can't get the shapes that a T-slot or dovetail cutter would create.
If you want the real geometry then I'd start from a simple plane as your panel (you don't need full 3D yet, just a front surface) and divide it with a 2D grid. Each grid element should be slightly bigger than the drill size, and every element is a separate mesh. In the frame update you'd cut one, or at most 4, elements that are touched by the drill. Thanks to this grid, all your cutting operations will run on very simple meshes, so they may reach your intended speed. You can also cut all the affected elements in separate threads. After the cutting is done you'll upload to the GPU only the currently modified elements, so you may end up with a quite complex overall mesh but only small modifications per frame.
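A rough sketch of the grid bookkeeping described above (the cell size, panel layout, and circular tool footprint are assumptions); only the handful of cells the tool currently overlaps need to be re-cut and re-uploaded:

    // Sketch: find which grid elements a milling tool of a given radius touches,
    // so only those small meshes are cut and re-uploaded this frame.
    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct GridLayout {
        float cellSize;    // slightly larger than the tool diameter
        int   cols, rows;  // the panel divided into cols x rows elements
    };

    // Returns the indices (row * cols + col) of the cells overlapped by the tool,
    // at most 4 when the cell size exceeds the tool diameter.
    std::vector<int> touchedCells(const GridLayout& g, float toolX, float toolY, float toolRadius) {
        int minCol = std::max(0,          (int)std::floor((toolX - toolRadius) / g.cellSize));
        int maxCol = std::min(g.cols - 1, (int)std::floor((toolX + toolRadius) / g.cellSize));
        int minRow = std::max(0,          (int)std::floor((toolY - toolRadius) / g.cellSize));
        int maxRow = std::min(g.rows - 1, (int)std::floor((toolY + toolRadius) / g.cellSize));

        std::vector<int> cells;
        for (int r = minRow; r <= maxRow; ++r)
            for (int c = minCol; c <= maxCol; ++c)
                cells.push_back(r * g.cols + c);
        return cells;
    }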

Rasterize polygons on GPU for hit-testing

Task
Perform many hit-tests on polygons. I have about 10,000 polygons with 1,000 points each. About 100,000,000 hit-tests will be performed against these polygons. The polygons may touch each other but never overlap.
Solution
Simple point-in-polygon test for every hit-test.
Problem
Far too slow.
Improvement
Rasterize the polygons and check which polygon the pixel of each hit-test belongs to.
New problem
Rasterization of the polygons is slow. I'm using this algorithm: http://alienryderflex.com/polygon_fill/
Idea
Rasterize the polygons on the GPU since it's optimized for this task and can perform it in hardware.
Question
What do you think about my idea and can you give me advice where to start? Will it lead to high performance?
Sidenote
The area is sparsely covered with polygons. I want to maintain a list of pixels instead of a bitmap.
This answer suggests using the GPU if you can:
How can I determine whether a 2D Point is within a Polygon?
Your case seems to be dominated by the hit tests, which are O(1) with rasterization. GPUs are able to rasterize to memory, but I am not aware of any GPU / rendering API that is able to work with a list of points. Also, with a list of points, you may lose the O(1) runtime for the hit check. You may be able to save some memory by using a 16 bit raster buffer (not sure if that's feasible with your GPU setup -- but you only seem to need 10000 colors).
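For illustration, once the polygons have been rasterized into an ID buffer (on the CPU or the GPU), each hit-test collapses to a single array lookup; the 16-bit buffer and the 'no polygon' sentinel below are assumptions:

    // Sketch: O(1) hit-test against a prerasterized polygon-ID buffer.
    // Each pixel stores the index of the polygon covering it; 0xFFFF marks
    // empty pixels, which leaves room for the ~10,000 polygons in 16 bits.
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct IdBuffer {
        static constexpr std::uint16_t kEmpty = 0xFFFF;
        int width = 0, height = 0;
        std::vector<std::uint16_t> ids;  // width * height entries
    };

    // Returns the polygon index at (x, y), or kEmpty if no polygon covers that pixel.
    std::uint16_t hitTest(const IdBuffer& buf, int x, int y) {
        if (x < 0 || y < 0 || x >= buf.width || y >= buf.height)
            return IdBuffer::kEmpty;
        return buf.ids[(std::size_t)y * buf.width + x];
    }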

Photon Mapping - Several issues

So I'm trying to implement a photon mapping algorithm to simulate global illumination in my ray tracing program. However, I'm running into a few issues that are making it hard to complete the implementation.
My program already successfully traces photons throughout the scene, stores them in a balanced KD-Tree and can gather the k nearest photons near any given point p, so most of the work is complete. The issue is mostly when it comes time for the radiance estimate.
First, I can't seem to get the indirect illumination bright enough to make any noticeable difference in my scene. If my light source emits 100,000 photons, then the power of each stored photon (roughly 500,000 stored photons for 100,000 emitted in my program) must be scaled down by a factor of 100,000, which makes them very dim. I thought I could mitigate this, when dividing by the area of the encapsulating circle (pi*rad^2), by using a search radius much smaller than 1, but decreasing the search radius that far leaves me with very few photons for the estimate, while a large radius gathers enough photons but loses that extra "power boost" and might include incorrect photons. So I don't know what to do.
Additionally, my other problem is that if I artificially scale up the photon power to increase the indirect lighting contribution, the resulting illumination is splotchy, uneven, and ugly, and doesn't look realistic at all. I know this description is vague, but I don't know why it looks this way, since I'm pretty sure I'm doing the radiance estimate and BRDF calculations correctly.
Without knowing the exact calculations you are doing throughout your rendering system, no one will be able to tell you why your indirect illumination is so weak. It could be that your materials are dark enough that there just isn't much indirect light. It could be that you are missing a factor of pi in your indirect illumination calculation somewhere, or you could be missing a divide-by-pi in your direct illumination calculation, so the indirect is dim in comparison.
As for splotchiness, that's what photon mapping looks like without a ton of photons. Try 100 million photons (or at least 10 million) instead and see if the issue persists.
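For reference, the usual density estimate scales each photon's power by the number of emitted photons (done at emission time, not by the number of stored photons) and divides the gathered sum by the area of the search disc; a rough sketch, with placeholder vector/photon structs and a plain diffuse BRDF:

    // Sketch: standard photon-map radiance estimate at a surface point.
    // Each photon's stored power is assumed to already be lightPower / photonsEmitted;
    // the gathered sum is divided by the area of the search disc, pi * r^2.
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Photon { Vec3 power; Vec3 incomingDir; };  // a full BRDF would also use incomingDir

    // Placeholder diffuse BRDF: albedo / pi.
    Vec3 diffuseBrdf(const Vec3& albedo) {
        const float invPi = 1.0f / 3.14159265f;
        return { albedo.x * invPi, albedo.y * invPi, albedo.z * invPi };
    }

    Vec3 radianceEstimate(const std::vector<Photon>& nearest, float gatherRadius, const Vec3& albedo) {
        Vec3 f = diffuseBrdf(albedo);
        Vec3 sum = { 0.0f, 0.0f, 0.0f };
        for (const Photon& p : nearest) {
            sum.x += f.x * p.power.x;
            sum.y += f.y * p.power.y;
            sum.z += f.z * p.power.z;
        }
        float invArea = 1.0f / (3.14159265f * gatherRadius * gatherRadius);
        return { sum.x * invArea, sum.y * invArea, sum.z * invArea };
    }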

Raytracing via diffusion algorithm

Many resources about raytracing say things like:
"shoot rays, find the first obstacle to cut it"
"shoot secondary rays..."
"or, do it reverse and approximate/interpolate"
I haven't seen any approach that uses a diffusion algorithm. Let's assume a point light is a cell with more density than the other cells (all of space is divided into cells); every step/iteration of lighting/tracing makes that source point diffuse into its neighbours using a velocity field, then into their neighbours, and so on. After a satisfactory number of iterations (such as 30-40), the density of each cell is used to light the objects in that cell.
(Image: point light and velocity field)
But the grid would have to be something like 1000x1000x1000 in size, and that would take too much time and memory to compute. Maybe computing just a 10x10x10 grid and, when an obstacle is found, partitioning that area into 100x100x100 (in a dynamic kd-tree fashion) could help generate lighting/shadows at an acceptable resolution? Especially for vertex-based illumination rather than per-triangle.
Has anyone tried this approach?
Note: the velocity field is there to make light diffuse mostly outwards (not 100% but 99%, to keep some global illumination). A finite element method could make this embarrassingly parallel.
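For concreteness, a rough sketch of the kind of per-cell diffusion step being proposed (the grid resolution, diffusion rate, and omission of obstacles and the velocity-field bias are all simplifications):

    // Sketch: one Jacobi-style diffusion iteration of a light-density field
    // on an N x N x N grid, as described above. Obstacles and the velocity
    // field that biases diffusion outwards are omitted for brevity.
    #include <vector>

    const int N = 64;  // placeholder grid resolution
    inline int idx(int x, int y, int z) { return (z * N + y) * N + x; }

    void diffuseStep(const std::vector<float>& src, std::vector<float>& dst, float rate) {
        for (int z = 1; z < N - 1; ++z)
            for (int y = 1; y < N - 1; ++y)
                for (int x = 1; x < N - 1; ++x) {
                    float neighbours = src[idx(x - 1, y, z)] + src[idx(x + 1, y, z)]
                                     + src[idx(x, y - 1, z)] + src[idx(x, y + 1, z)]
                                     + src[idx(x, y, z - 1)] + src[idx(x, y, z + 1)];
                    // Each cell relaxes toward the average of its 6 neighbours.
                    dst[idx(x, y, z)] = src[idx(x, y, z)]
                                      + rate * (neighbours / 6.0f - src[idx(x, y, z)]);
                }
    }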
Edit: any object hit by a positive density becomes an obstacle and generates a new velocity field around its surface, so light cannot pass through that object but can be mirrored into another direction (if it is a lens object, light diffuses through it with more difficulty). This way the reflected light can affect other objects, given a higher iteration limit.
The same kd-tree can be used in object-collision algorithms :)
To be taken with a grain of salt: a neural network could be trained for advection & diffusion on a 30x30x30 grid and used in a "GPU (OpenCL/CUDA) --> neural network --> finite element method --> shadows" pipeline.
There's a couple problems with this as it stands.
The first problem is that, fundamentally, a photon in the Newtonian sense doesn't react or change based on the density of other photons around it. So using a density field and trying to make light follow the classic Navier-Stokes style solutions (which is what you're trying to do, based on the density-field explanation you gave) would produce incorrect results. It would also, given enough iterations, result in complete entropy over the scene, which is also not what happens to light.
Even if you were to get rid of the density problem, you're still left with the problem of multiple photons going in different directions within the same cell, which is required for global illumination and diffuse lighting.
So, stripping away the problem portions of your idea, what you're left with is a particle system for photons :P
Now, to be fair, pseudo-particle systems are currently used for global illumination solutions. This type of thing is called Photon Mapping, but only a direct lighting solution is simple to implement with it :P

How does 3D collision / object detection work?

I've always wondered this. In a game like GTA, where there are tens of thousands of objects, how does the game know as soon as you step on a health pack?
There can't possibly be an event listener for each object? Iterating over all of them isn't good either? I'm just wondering how it's actually done.
There's no one answer to this, but large worlds are often space-partitioned using something along the lines of a quadtree or kd-tree, which brings search times for finding nearest neighbors below linear time (a fractional power, or at worst O(N^(2/3)) for a 3D game). These methods are often referred to as BSP, for binary space partitioning.
With regards to collision detection, each object also generally has a bounding volume mesh (set of polygons forming a convex hull) associated with it. These highly simplified meshes (sometimes just a cube) aren't drawn but are used in the detection of collisions. The most rudimentary method is to create a plane that is perpendicular to the line connecting the midpoints of each object with the plane intersecting the line at the line's midpoint. If an object's bounding volume has points on both sides of this plane, it is a collision (you only need to test one of the two bounding volumes against the plane). Another method is the enhanced GJK distance algorithm. If you want a tutorial to dive through, check out NeHe Productions' OpenGL lesson #30.
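A rough sketch of that rudimentary mid-plane test (the hull representation and vector helpers are placeholders):

    // Sketch: the mid-plane test described above. Build a plane halfway between
    // the two object centers, perpendicular to the line joining them, and check
    // whether one object's bounding volume has vertices on both sides of it.
    #include <vector>

    struct Vec3 { float x, y, z; };

    static Vec3  sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    bool hullCrossesMidPlane(const std::vector<Vec3>& hull, const Vec3& centerA, const Vec3& centerB) {
        Vec3 normal = sub(centerB, centerA);            // plane normal along the center line
        Vec3 mid = { (centerA.x + centerB.x) * 0.5f,
                     (centerA.y + centerB.y) * 0.5f,
                     (centerA.z + centerB.z) * 0.5f };  // point on the plane
        bool positive = false, negative = false;
        for (const Vec3& v : hull) {
            float side = dot(sub(v, mid), normal);
            if (side > 0.0f) positive = true;
            if (side < 0.0f) negative = true;
            if (positive && negative) return true;      // vertices on both sides -> collision
        }
        return false;
    }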
Incidentally, bounding volumes can also be used for other optimizations such as what are called occlusion queries. This is a process of determining which objects are behind other objects (occluders) and therefore do not need to be processed / rendered. Bounding volumes can also be used for frustum culling, which is the process of determining which objects are outside of the perspective viewing volume (too near, too far, or beyond your field-of-view angle) and therefore do not need to be rendered.
As Kylotan noted, using a bounding volume can generate false positives when detecting occlusion and simply does not work at all for some types of objects such as toroids (e.g. looking through the hole in a donut). Having objects like these occlude correctly is a whole other thread on portal-culling.
Quadtrees and Octrees (another quadtree example) are popular ways, using space partitioning, to accomplish this. The latter example shows a 97% reduction in processing over a pair-by-pair brute-force search for collisions.
A common technique in game physics engines is the sweep-and-prune method. This is explained in David Baraff's SIGGRAPH notes (see the Motion with Constraints chapter). Havok definitely uses this, and I think it's an option in Bullet, but I'm not sure about PhysX.
The idea is that you can look at the overlaps of AABBs (axis-aligned bounding boxes) on each axis; if the projections of two objects' AABBs overlap on all three axes, then the AABBs must overlap. You can check each axis relatively quickly by sorting the start and end points of the AABBs; there's a lot of temporal coherence between frames since most objects usually aren't moving very fast, so the sorting doesn't change much.
Once sweep-and-prune detects an overlap between AABBs, you can do the more detailed check for the objects, e.g. sphere vs. box. If the detailed check reveals a collision, you can then resolve the collision by applying forces, and/or trigger a game event or play a sound effect.
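A rough sketch of the per-axis overlap test behind this (the AABB layout is an assumption); sweep-and-prune additionally keeps the endpoint lists sorted per axis and updates them incrementally between frames:

    // Sketch: two AABBs overlap only if their projections overlap on all three axes.
    struct AABB {
        float min[3];
        float max[3];
    };

    bool aabbOverlap(const AABB& a, const AABB& b) {
        for (int axis = 0; axis < 3; ++axis) {
            // Separated on any single axis -> no overlap at all.
            if (a.max[axis] < b.min[axis] || b.max[axis] < a.min[axis])
                return false;
        }
        return true;  // overlapping on x, y and z
    }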
Correct. Normally there is not an event listener for each object. Often there is a non-binary tree structure in memory that mimics your game's map. Imagine a metro/underground map.
This memory structure is a collection of the things in the game: you the player, monsters, and items that you can pick up or that might blow up and harm you. As the player moves around the game, the player object pointer is moved within the game/map memory structure.
see How should I have my game entities knowledgeable of the things around them?
I would like to recommend Christer Ericson's solid book on real-time collision detection. It presents the basics of collision detection while providing references to contemporary research efforts.
Real-Time Collision Detection (The Morgan Kaufmann Series in Interactive 3-D Technology)
There are a lot of optimizations that can be used.
Firstly - any object (say with index i) is bounded by a cube, with center coordinates CXi, CYi, CZi and size Si.
Secondly - collision detection works with estimations:
a) Find all pairs of cubes i, j satisfying: Abs(CXi-CXj) < (Si+Sj) AND Abs(CYi-CYj) < (Si+Sj) AND Abs(CZi-CZj) < (Si+Sj)
b) Now we work only with the pairs found in a). We calculate the distances between them more accurately, something like Sqrt(Sqr(CXi-CXj)+Sqr(CYi-CYj)+Sqr(CZi-CZj)); objects are now represented as small sets of simple figures - cubes, spheres, cones - and we use geometric formulas to check the intersections of those figures.
c) Objects from b) with detected intersections are processed as collisions, with physics calculations etc.
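A rough sketch of the broad-phase check from step a), with the bounding-cube representation assumed as above:

    // Sketch: broad-phase test from step a). Two bounding cubes with centers
    // (cx, cy, cz) and size s can only intersect if their centers are closer
    // than the sum of their sizes on every axis.
    #include <cmath>
    #include <cstddef>
    #include <utility>
    #include <vector>

    struct BoundingCube { float cx, cy, cz, s; };

    std::vector<std::pair<std::size_t, std::size_t>> broadPhasePairs(const std::vector<BoundingCube>& cubes) {
        std::vector<std::pair<std::size_t, std::size_t>> pairs;
        for (std::size_t i = 0; i < cubes.size(); ++i)
            for (std::size_t j = i + 1; j < cubes.size(); ++j) {
                float limit = cubes[i].s + cubes[j].s;
                if (std::fabs(cubes[i].cx - cubes[j].cx) < limit &&
                    std::fabs(cubes[i].cy - cubes[j].cy) < limit &&
                    std::fabs(cubes[i].cz - cubes[j].cz) < limit)
                    pairs.push_back({ i, j });  // candidates for the more accurate check in b)
            }
        return pairs;
    }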

Resources