I have a surface mesh of triangles. Assume the surface mesh is closed.
I intend to run a spatial query to figure out whether a region of space is inside my surface mesh or not. The region can be represented by a bounding box, a voxel, or any other spatial primitive.
What data structures are available to do the above query?
What algorithms are available to implement the query from scratch?
Are any ready-to-use libraries available?
Thanks =)
I don't think an R-tree will help directly to find what's inside a closed mesh.
If the data has separate "bubbles" (chunks of space enclosed by meshes), those could be represented by bounding boxes and put in an R-tree index. That would help find which bubbles may intersect the query object so that further checking can be done (more precisely, it eliminates the bubbles that cannot intersect, so they never need to be checked).
If you can somehow break up the space inside your mesh into smaller chunks, those could be indexed. It's OK if they overlap or extend outside the mesh.
If the mesh is fully closed, then for a single point you can use ray casting: shoot a ray from your point in any direction and count how many times it hits the mesh. If it hits an odd number of times, the point is inside; if it hits an even number, it's outside. For other shapes, however, you might need collision detection.
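As a rough illustration, here is a minimal JavaScript sketch of that parity test, assuming the mesh is just an array of triangles (each a trio of `[x, y, z]` corners). It uses the Möller–Trumbore ray/triangle intersection test:

```js
function sub(a, b) { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }
function cross(a, b) {
  return [a[1] * b[2] - a[2] * b[1],
          a[2] * b[0] - a[0] * b[2],
          a[0] * b[1] - a[1] * b[0]];
}
function dot(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }

// Möller–Trumbore: true if the ray (origin, dir) hits the triangle at t > 0.
function rayHitsTriangle(origin, dir, [v0, v1, v2]) {
  const e1 = sub(v1, v0), e2 = sub(v2, v0);
  const p = cross(dir, e2);
  const det = dot(e1, p);
  if (Math.abs(det) < 1e-12) return false;   // ray parallel to triangle plane
  const inv = 1 / det;
  const s = sub(origin, v0);
  const u = dot(s, p) * inv;
  if (u < 0 || u > 1) return false;
  const q = cross(s, e1);
  const v = dot(dir, q) * inv;
  if (v < 0 || u + v > 1) return false;
  return dot(e2, q) * inv > 1e-12;           // hit must be in front of the origin
}

function isInside(point, triangles) {
  // Arbitrary direction; pick one unlikely to graze edges/vertices exactly.
  const dir = [0.5773, 0.5774, 0.5775];
  let hits = 0;
  for (const tri of triangles) {
    if (rayHitsTriangle(point, dir, tri)) hits++;
  }
  return hits % 2 === 1;                     // odd number of hits = inside
}
```

In practice you would put the triangles in a spatial index (such as the R-tree discussed above) so each query only has to test triangles near the ray, rather than the whole mesh.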
Existing libraries will depend on which platform/programming language you're doing this for, but if you have freedom to choose, maybe start with Unity?
As Antonin mentioned in their answer, an R-tree index will help you with that but won't directly check whether a point or other shape is inside your mesh. If you can break up the space inside your mesh into boxes, R-trees will let you do "quick checks" for the positive case where your point or shape is inside the mesh.
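For illustration, a minimal sketch of that two-phase idea. It assumes a precomputed `interiorBoxes` list of axis-aligned boxes known to lie entirely inside the mesh (in practice these would be stored in an R-tree rather than scanned linearly), and an exact `isInside(point, triangles)` parity test like the one sketched in the ray-casting answer:

```js
function pointInBox(p, box) {
  return p[0] >= box.min[0] && p[0] <= box.max[0] &&
         p[1] >= box.min[1] && p[1] <= box.max[1] &&
         p[2] >= box.min[2] && p[2] <= box.max[2];
}

function queryPoint(p, interiorBoxes, triangles) {
  // Positive quick check: inside any interior box means inside the mesh.
  if (interiorBoxes.some(box => pointInBox(p, box))) return true;
  // Otherwise fall back to the exact (slower) ray-casting parity test.
  return isInside(p, triangles);
}
```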
VDB: hollow vs filled
The following video demonstrates how to create two types of VDB in Houdini:
Distance (hollow volume): creates a shell of voxels around the outside of the geometry
Fog (solid volume): fills the geometry with voxels
https://youtu.be/jqNRu8BYYXo?t=259
Implication
This implies that VDB can tag voxels as hollow (shell) or filled (interior). But I don't know how to do it programmatically with voxel code.
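To illustrate the distinction (this is not actual VDB/Houdini code): if you have a signed distance to the mesh for each voxel center, which is what OpenVDB computes when it voxelizes a mesh, the tagging reduces to a sign/band test. The `signedDistance` function and `halfWidth` band here are assumptions:

```js
// `signedDistance(center)` is assumed to return a negative value inside the
// mesh and positive outside; `halfWidth` is the shell thickness in voxels.
function tagVoxel(center, voxelSize, halfWidth, signedDistance) {
  const d = signedDistance(center);
  if (Math.abs(d) <= halfWidth * voxelSize) return 'shell';    // "Distance"/hollow
  if (d < 0) return 'interior';                                // "Fog"/filled
  return 'exterior';
}
```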
Related
I am creating a 3D reservoir model which looks like this.
It's made of hundreds of thousands of cells, each with an outline. The outline is needed for all cells, including the ones underneath, because an IJK filter is used to hide cells on any level and thus show the rest. Once the model is rendered, it shouldn't need to be updated in terms of position or scale.
That's enough about the background. The approach I'm using is to create one large geometry, which stores all vertices across the reservoir in one triangle strip. It also stores the IJK index for each cell, so the IJK filter works at the shader level. This creates the mesh part. Then I create another object to draw all outlines using one THREE.LineSegments.
The approach works pretty well for a small number of cells, but for large data sets the frame rate drops.
I'm proposing another way of doing this with barycentric outlines and instanced drawing. Barycentric outline drawing removes the extra LineSegments object, since it draws the outline in the fragment shader. However, it comes with drawbacks. Because WebGL lacks geometry shaders, I have to use full triangles rather than a triangle strip to store barycentric coordinates for each vertex. I'm OK with this extra memory usage if instanced drawing can boost the performance. That is to say, I draw one cube with an outline, then create as many instances as I need and put them in the right positions.
I am wondering whether this approach will actually increase performance in theory. Any thoughts are welcome!
OK, I think I am going to answer this question myself.
I implemented the change based on the ideas above, and it works pretty well compared to the original version.
Let's put the result first: this approach has no problem rendering hundreds of thousands of cells at a reasonable frame rate. My demo contains 400,000 cells, with the frame rate at 50 fps in the worst case, running on my NVIDIA 1050 Ti card and a 4K monitor. For comparison, if I draw 400,000 cells with the previous version, the frame rate drops to 10 fps.
This means using instanced drawing for a large object is faster than composing a single large geometry. It also helps rendering performance that the instanced cube is drawn single-sided, while the triangle-stripped cube was two-sided. Once I can draw a single unit cube with the ideal outline, I can transform it to any position, into "any" shape, in the vertex shader. But of course instanced drawing comes with restrictions: the cells don't all have to be the same shape, but they must have the same number of vertices, faces, etc.; I also lose the ability to change vertex color per cell...
As for memory usage, the new approach actually uses less. I provide positions for 8 vertices per cell, instead of 14. Even though the unit cube has 36 vertices, I can use their unit positions as indices for every instance. That is, for the 36 unit vertices with components (0/1, 0/1, 0/1), I only need to provide 8 real positions.
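Here is a compressed sketch of the idea (illustrative names and layout, not my exact project code). The unit cube's (0/1, 0/1, 0/1) positions act as selectors that pick one of eight per-instance corner attributes in the vertex shader:

```js
// `unitCubePositions` is a BufferAttribute with 36 vertices whose components
// are all 0 or 1; `cornerData[i]` is a Float32Array holding corner i of every
// cell (3 floats per instance). Corner ordering assumed:
// 0:(0,0,0) 1:(1,0,0) 2:(0,1,0) 3:(1,1,0) 4:(0,0,1) 5:(1,0,1) 6:(0,1,1) 7:(1,1,1)
const geometry = new THREE.InstancedBufferGeometry();
geometry.instanceCount = numCells; // maxInstancedCount in older three.js
geometry.setAttribute('position', unitCubePositions);
for (let i = 0; i < 8; i++) {
  geometry.setAttribute('corner' + i,
    new THREE.InstancedBufferAttribute(cornerData[i], 3));
}

const material = new THREE.ShaderMaterial({
  vertexShader: `
    attribute vec3 corner0, corner1, corner2, corner3;
    attribute vec3 corner4, corner5, corner6, corner7;
    void main() {
      // position is (0/1, 0/1, 0/1): use it to select the real corner by
      // trilinear mixing (every weight is exactly 0 or 1, so no blending).
      vec3 x0 = mix(corner0, corner1, position.x);
      vec3 x1 = mix(corner2, corner3, position.x);
      vec3 x2 = mix(corner4, corner5, position.x);
      vec3 x3 = mix(corner6, corner7, position.x);
      vec3 real = mix(mix(x0, x1, position.y), mix(x2, x3, position.y), position.z);
      gl_Position = projectionMatrix * modelViewMatrix * vec4(real, 1.0);
    }
  `,
  fragmentShader: `
    void main() { gl_FragColor = vec4(1.0); } // barycentric outline shading omitted
  `,
});
const mesh = new THREE.Mesh(geometry, material);
```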
Hope this helps people who want to implement the same optimization.
I am setting up a particle system in three.js by adapting the buffer geometry drawcalls example. I want to create a series of points, but I want them to be round.
The documentation for three.js Points says it accepts a geometry or buffer geometry, but I also noticed there is a CircleBufferGeometry. Can I use this?
Or is there another way to make the points round besides using sprites? I'm not sure, but it seems like loading an image for each particle would cause a lot of unnecessary overhead.
So, in short, is there a more performant or simpler way to make a particle system of round particles (spheres or discs) in three.js without sprites?
If you want to draw each "point"/"particle" as a geometric circle, you can use THREE.InstancedBufferGeometry or take a look at this
The geometry of a Points object defines where the points exist in 3D space. It does not define the shape of the points. Points are also drawn as quads, so they're always going to be a square, though they don't have to appear that way.
Your first option is to (as you pointed out) load a texture for each point. I don't really see how this would introduce "a lot" of overhead, because the texture would only be loaded once, and would be applied to all points. But, I'm sure you have your reasons.
Your other option is to create your own shader to draw the point as a circle. This method takes the point as a square, and discards any fragments (multiple fragments make up a pixel) outside the circle.
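For reference, a minimal sketch of that discard approach with a three.js ShaderMaterial (the size and color values here are arbitrary, and `geometry` is your existing points geometry):

```js
const material = new THREE.ShaderMaterial({
  vertexShader: `
    void main() {
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
      gl_PointSize = 16.0; // size of the point's quad in pixels
    }
  `,
  fragmentShader: `
    void main() {
      // gl_PointCoord runs 0..1 across the point's quad; drop the corners.
      vec2 c = gl_PointCoord - vec2(0.5);
      if (dot(c, c) > 0.25) discard; // farther than radius 0.5 => outside circle
      gl_FragColor = vec4(1.0, 0.6, 0.1, 1.0);
    }
  `
});
const points = new THREE.Points(geometry, material);
```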
I am experimenting with a primitive rendering style (solid colors with highlighted edges/creases) for an open-source game I contribute to. The world geometry is fairly simplistic, mostly consisting of blocks and pyramids, and there may ultimately be other simple volumes like cylinders, cones, and other kinds of prisms. The rendering will be done with OpenGL ES 2.
I have been experimenting with edge detection methods for the edges/creases. It seemed like doing shader-based edge detection on the depth values and face normals would be easiest (I tried the Sobel filter and several other algorithms), but I was unable to get a good result, mostly due to the precision limits of the depth buffer and the complexity of far-away geometry, as well as the inability to do any good antialiasing on the edges.
I ultimately decided that I needed to render the lines geometrically so I could make them thick, smooth out the edges, etc. I would like to generate the lines programmatically from the geometry definition prior to rendering, to improve runtime performance. I can get most of the effect I want by drawing the main geometry, setting a depth offset, then drawing lines over the geometry. However, this technique has some shortcomings, as seen below:
Overlapping Geometry
There may be several pieces of geometry overlapping or adjoining to form more complex structures. Where several pieces have overlapping/coplanar faces, I want to draw an outline around the combined region, but not around each individual piece (which would reveal the separate parts).
Current result on top; desired result on bottom:
Creases
This issue was also visible in the image above, but the image below shows my goal more clearly. I want to draw lines where there are creases in overlapping geometry, to make them stand out much more.
Current result on top; desired result on bottom:
From what I can tell so far, for the overlapping-faces problem I think I need to do intersection tests between my lines and any nearby intersecting faces, then somehow break the lines up and discard the segments that cross other faces. To create lines in the creases between pieces of geometry, I think I need some kind of intersection test between the two faces that form the crease. However, I'm having a hard time wrapping my head around the step-by-step procedure for doing that. Again, I would like to set up these lines programmatically in a pre-rendering step if possible. Any guidance would be much appreciated.
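One building block that might help, as a hedged sketch in JavaScript: creases *within* a single mesh can be found by comparing the normals of the two faces sharing each edge. Creases between separate, interpenetrating pieces would still need the face/face intersection tests described above. The `faces` layout here is an assumption:

```js
// `faces` is assumed to be an array of { indices: [a, b, c], normal: [x, y, z] }.
const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

function findCreaseEdges(faces, angleThresholdDeg = 30) {
  const cosThreshold = Math.cos(angleThresholdDeg * Math.PI / 180);
  const firstNormal = new Map(); // edge key -> normal of the first face seen
  const creases = [];
  for (const face of faces) {
    const [a, b, c] = face.indices;
    for (const [i, j] of [[a, b], [b, c], [c, a]]) {
      const key = Math.min(i, j) + '_' + Math.max(i, j);
      if (firstNormal.has(key)) {
        // Second face on this edge: a small dot product means a sharp crease.
        if (dot(firstNormal.get(key), face.normal) < cosThreshold) {
          creases.push([i, j]);
        }
      } else {
        firstNormal.set(key, face.normal);
      }
    }
  }
  return creases; // vertex index pairs to turn into line segments
}
```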
OpenGL question: I have something to ask about the clip space transformation. I am reading an online tutorial and it says that everything you draw outside the clip space will be clipped. Given that, do elements outside the clip space affect performance or not? Since they will not be drawn, it seems they shouldn't.
Assuming they do affect performance, in the case of a 2D game like Super Mario, I am thinking about not drawing the elements outside the clip space to achieve better performance. Please clarify. Thanks.
OpenGL has only limited knowledge of your scene and clips very late in the pipeline. It can't apply a broad-phase test. If you can, you should.
Suppose you had a model with 30,000 triangles: OpenGL would transform each and every one of those 30,000 triangles before considering clipping. If you know something as simple as the bounding sphere of the model, you could see that the whole thing is completely outside the frustum in a single test and save almost 30,000 triangles' worth of effort.
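A sketch of that broad-phase test using three.js math helpers, just for illustration (the same idea works with any vector library; `boundingSphere` and `drawModel` are placeholders):

```js
// Build the view frustum from the camera's projection and view matrices.
const frustum = new THREE.Frustum().setFromProjectionMatrix(
  new THREE.Matrix4().multiplyMatrices(
    camera.projectionMatrix, camera.matrixWorldInverse));

// `boundingSphere` is a THREE.Sphere enclosing the whole model in world space.
if (frustum.intersectsSphere(boundingSphere)) {
  drawModel(); // only now submit the 30,000 triangles to the GPU
}
```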
In a 2D game like Mario, what this usually means is using the scroll position to index into the map and generating geometry only for potentially visible tiles and sprites within the visible area.
For the map, that will generally just mean figuring out the (x, y) of one corner and then generating geometry for the known width and height of the screen, which discards the vast majority of the geometry with zero processing, as in the sketch below.
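As a sketch of that, assuming a fixed tile size, a row-major `map` array, and an `emitTileQuad` helper (all names illustrative):

```js
const firstCol = Math.floor(scrollX / TILE_SIZE);
const firstRow = Math.floor(scrollY / TILE_SIZE);
const cols = Math.ceil(screenWidth / TILE_SIZE) + 1;  // +1 covers partially
const rows = Math.ceil(screenHeight / TILE_SIZE) + 1; // visible edge tiles

// Only the visible window of tiles ever produces geometry.
for (let r = firstRow; r < firstRow + rows; r++) {
  for (let c = firstCol; c < firstCol + cols; c++) {
    emitTileQuad(map[r][c], c * TILE_SIZE - scrollX, r * TILE_SIZE - scrollY);
  }
}
```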
For the sprites, this is generally why in those sorts of games you often see enemies reset to their starting position if you walk a little way from them and then walk back: they're added to the active list based on a map-location trigger and removed when you walk far enough away. While not active, no mutable storage is afforded to them.
Suppose I have a 3D model:
The model is given in the form of vertices, faces (all triangles) and normal vectors. The model may have holes and/or transparent parts.
For an arbitrarily placed light source at infinity, I have to determine:
[required] which triangles are (partially) shadowed by other triangles
Then, for the partially shadowed triangles:
[bonus] what fraction of the area of the triangle is shadowed
[superbonus] come up with a new mesh that describe the shape of the shadows exactly
My final application has to run on headless machines, that is, they have no GPU. Therefore, all the standard things from OpenGL, OpenCL, etc. might not be the best choice.
What is the most efficient algorithm to determine these things, considering this limitation?
Do you have a single mesh or multiple meshes?
That is, is the shadow projected onto a single 'ground' surface, or onto several surfaces like room walls, or even onto nearby objects? Depending on this, the solutions are very different.
for flat ground/wall surfaces
the best way is usually a projected render onto that surface
The camera direction is opposite to the light direction, and the render target is the surface itself. The surface is usually not perpendicular to the light, so you need a projection to compensate... You need one render pass for each target surface, so this is not suitable if the shadow is projected onto a nearby mesh (it works just for ground/walls).
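For the record, that compensating projection can be baked into a single 4x4 matrix. A sketch in JavaScript, using the classic planar shadow matrix M = (n·l)I − l pᵀ for a plane ax + by + cz + d = 0 and a light direction l pointing toward the light at infinity (CPU-side math, so it fits the no-GPU constraint):

```js
// Flattens geometry onto the plane [a, b, c, d] along light direction
// [lx, ly, lz]. Returned in column-major order, as OpenGL expects.
function planarShadowMatrix([a, b, c, d], [lx, ly, lz]) {
  const ndotl = a * lx + b * ly + c * lz;
  return [
    ndotl - lx * a, -ly * a,        -lz * a,        0,
    -lx * b,        ndotl - ly * b, -lz * b,        0,
    -lx * c,        -ly * c,        ndotl - lz * c, 0,
    -lx * d,        -ly * d,        -lz * d,        ndotl,
  ];
}
```

Rendering the mesh with this matrix applied (in a dark, unlit color) produces the shadow shape directly on that surface.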
for more complicated scenes
You need to use a more advanced approach. There are quite a number of them, and each has its advantages and disadvantages. I would use a voxel map, but if you are limited by space then some stencil/vector approach will be better. Of course, all of these techniques are quite expensive, and without a GPU I would not even try to implement them.
This is what a voxel map looks like:
If you want just self-shadowing, then the voxel map can cover only some bounding box around your mesh. In that case you do not incorporate the whole mesh volume; instead, you project each pixel along the light direction (ignoring the first voxel...) to avoid shadowing the lit surface itself, as in the sketch below.
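A minimal sketch of that self-shadowing test on a boolean voxel grid (function names are made up; a real implementation would use a proper DDA grid traversal instead of rounding):

```js
// `grid(x, y, z)` is assumed to return true for occupied voxels.
// Starting one step past the surface voxel, march toward the light;
// hitting any occupied voxel means this surface point is shadowed.
function isShadowed(x, y, z, lightDir, grid, maxSteps) {
  // Scale so the largest component advances one voxel per step.
  const m = Math.max(Math.abs(lightDir[0]),
                     Math.abs(lightDir[1]),
                     Math.abs(lightDir[2]));
  const sx = lightDir[0] / m, sy = lightDir[1] / m, sz = lightDir[2] / m;
  let px = x + sx, py = y + sy, pz = z + sz; // skip the surface voxel itself
  for (let i = 0; i < maxSteps; i++) {
    if (grid(Math.round(px), Math.round(py), Math.round(pz))) return true;
    px += sx; py += sy; pz += sz;
  }
  return false;
}
```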