Break solid into constituent parts

The following 3D model (in STL format) is composed of cuboids and cylinders.
How can I extract the dimensions and coordinates of these constituent solids from the composite, i.e. the dimensions and locations of the cuboids/cylinders?
I have tried an approach based on constructive solid geometry, but it is a bit too slow and unwieldy for my purposes. Due to the lack of a dataset, machine learning or deep learning models are not an option.

If you refer to the STL format described on Wikipedia, each STL solid in this format is composed solely of triangles. The construction information you are looking for is not in the file anymore.
If, however, the solids you are looking for are not all merged into one STL solid, and each solid is a cuboid or a cylinder, you can easily
determine the bounding box of each solid,
determine whether the solid is a cuboid or a cylinder by checking each point against the bounding box (a cuboid's vertices all coincide with corners of its bounding box; if some vertex does not, the solid is a cylinder),
and do the same to determine the orientation of the cylinder.
With the bounding box and the type/orientation, you have the basic attributes of the solids you are looking for.
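A minimal C++ sketch of that classification, assuming you can already collect each solid's triangle vertices into a list and that the solids are axis-aligned (Vec3, boundingBox, and isCuboid are names made up for this sketch):

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };
struct Box { Vec3 lo, hi; };

// Axis-aligned bounding box of one solid's vertices (pts must be non-empty).
Box boundingBox(const std::vector<Vec3>& pts) {
    Box b{pts[0], pts[0]};
    for (const Vec3& p : pts) {
        b.lo.x = std::min(b.lo.x, p.x); b.hi.x = std::max(b.hi.x, p.x);
        b.lo.y = std::min(b.lo.y, p.y); b.hi.y = std::max(b.hi.y, p.y);
        b.lo.z = std::min(b.lo.z, p.z); b.hi.z = std::max(b.hi.z, p.z);
    }
    return b;
}

// A cuboid's vertices all coincide with corners of its bounding box;
// a cylinder has rim vertices that do not.
bool isCuboid(const std::vector<Vec3>& pts, const Box& b, double eps = 1e-9) {
    auto atExtreme = [eps](double v, double lo, double hi) {
        return std::fabs(v - lo) < eps || std::fabs(v - hi) < eps;
    };
    for (const Vec3& p : pts)
        if (!atExtreme(p.x, b.lo.x, b.hi.x) ||
            !atExtreme(p.y, b.lo.y, b.hi.y) ||
            !atExtreme(p.z, b.lo.z, b.hi.z))
            return false;  // an off-corner vertex: not a cuboid
    return true;
}

For a cylinder, the rotation axis is the one coordinate in which every vertex still lies at a box extreme (the two cap planes), which gives you the orientation mentioned above.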

Related

Surface mesh triangles: query space within

I have a surface mesh of triangles. Assume, the surface mesh is closed.
I intend to do a spatial query to figure out whether a region of space is within my surface mesh or not. The space unit can be represented by a bounding box, a voxel, or any other spatial primitive.
What data structures are available to do the above query?
What algorithms are available to implement the query from scratch?
Are any ready-to-use libraries available?
Thanks =)
I don't think an R-tree will help directly to find what's inside a closed mesh.
If the data has separate "bubbles", chunks of space enclosed by meshes, those could be represented by bounding boxes and put in an R-tree index. That would help find which bubbles may intersect the query object, so that further checking can be done (really it would eliminate the bubbles that could not intersect, so they don't need to be checked).
If you can somehow break up the space inside your mesh into smaller chunks, those could be indexed. OK if they overlap or extend outside the mesh.
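To illustrate the indexing step, here is a minimal sketch using the R-tree from Boost.Geometry (just one possible library; the bubble boxes and ids are made up). The query returns only the bubbles whose boxes intersect the query box, i.e. the candidates that still need an exact check:

#include <iterator>
#include <utility>
#include <vector>
#include <boost/geometry.hpp>
#include <boost/geometry/index/rtree.hpp>

namespace bg  = boost::geometry;
namespace bgi = boost::geometry::index;

using Point = bg::model::point<double, 3, bg::cs::cartesian>;
using BBox  = bg::model::box<Point>;
using Entry = std::pair<BBox, int>;  // bounding box + bubble id

int main() {
    bgi::rtree<Entry, bgi::quadratic<16>> index;

    // index the bounding box of each enclosed "bubble"
    index.insert({BBox{Point{0, 0, 0}, Point{1, 1, 1}}, 0});
    index.insert({BBox{Point{5, 5, 5}, Point{7, 7, 7}}, 1});

    // coarse filter: which bubbles *might* intersect the query object?
    BBox query{Point{0.2, 0.2, 0.2}, Point{0.4, 0.4, 0.4}};
    std::vector<Entry> candidates;
    index.query(bgi::intersects(query), std::back_inserter(candidates));
    // only the candidates need the exact inside/outside test
    return 0;
}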
If the mesh is totally closed then, for a single point, you can use ray casting: shoot a ray from your point in any direction and count how many times it hits the mesh. If it hits an odd number of times, the point is inside; if an even number, outside. For other shapes, however, you might need collision detection.
Existing libraries will depend on which platform/programming language you're doing this for, but if you have freedom to choose, maybe start with Unity?
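A minimal sketch of the odd/even ray test from the answer above, with the Möller–Trumbore ray/triangle intersection doing the hit counting (all names are made up for this sketch; a robust version must also handle rays that graze an edge or vertex, e.g. by re-shooting in a different direction):

#include <cmath>
#include <vector>

struct V3 { double x, y, z; };
static V3 sub(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3 cross(V3 a, V3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static double dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Tri { V3 a, b, c; };

// Moller-Trumbore: does the ray orig + t*dir (t > 0) hit the triangle?
bool rayHitsTri(V3 orig, V3 dir, const Tri& t) {
    const double eps = 1e-12;
    V3 e1 = sub(t.b, t.a), e2 = sub(t.c, t.a);
    V3 p = cross(dir, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < eps) return false;   // ray parallel to triangle
    double inv = 1.0 / det;
    V3 s = sub(orig, t.a);
    double u = dot(s, p) * inv;
    if (u < 0.0 || u > 1.0) return false;
    V3 q = cross(s, e1);
    double v = dot(dir, q) * inv;
    if (v < 0.0 || u + v > 1.0) return false;
    return dot(e2, q) * inv > eps;            // hit strictly in front of origin
}

// Odd number of crossings means the point is inside the closed mesh.
bool pointInMesh(V3 p, const std::vector<Tri>& mesh) {
    V3 dir{0.577, 0.577, 0.577};              // any fixed direction works
    int hits = 0;
    for (const Tri& t : mesh)
        if (rayHitsTri(p, dir, t)) ++hits;
    return (hits % 2) == 1;
}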
As Antonin mentioned in their answer, an R-Tree index will help you with that but won't directly check if a point or other shape is inside your mesh. If you can break up the space inside your mesh into boxes, R-Trees will help you do "quick checks" for the positive case where your point or shape is inside the mesh.
VDB: hollow vs filled
In the following video, it is demonstrated how to create two types of VDB volumes in Houdini:
Distance: hollow volume: creates a shell of voxels on geometry outside
Fog: solid volume: fills the geometry with voxels
https://youtu.be/jqNRu8BYYXo?t=259
Implication
This implies that it is possible to tag hollow and filled voxels with a VDB, but I don't know how to do it programmatically with voxel code.
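Houdini's VDB primitives are built on the OpenVDB library, so one way to do this programmatically (in C++ against OpenVDB rather than in VEX) is to read the sign of the stored distance: in a narrow-band level set ("Distance" VDB), negative values are inside (filled), positive values are outside, and the active voxels form the hollow shell. A minimal sketch, using OpenVDB's built-in test sphere:

#include <openvdb/openvdb.h>
#include <openvdb/tools/LevelSetSphere.h>

int main() {
    openvdb::initialize();

    // a narrow-band signed-distance ("Distance") VDB of a sphere
    openvdb::FloatGrid::Ptr grid =
        openvdb::tools::createLevelSetSphere<openvdb::FloatGrid>(
            /*radius=*/10.0f, /*center=*/openvdb::Vec3f(0, 0, 0),
            /*voxelSize=*/0.5f);

    // the sign of the sampled distance tags the voxel
    openvdb::Coord ijk(0, 0, 0);                 // voxel at the sphere's center
    float d = grid->tree().getValue(ijk);
    bool filled  = (d < 0.0f);                   // inside the surface
    bool onShell = grid->tree().isValueOn(ijk);  // active: in the narrow band
    (void)filled; (void)onShell;
    return 0;
}

If I remember correctly, the Distance-to-Fog conversion shown in the video corresponds to openvdb::tools::sdfToFogVolume.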

Need a geometric edge/crease detection method

I am experimenting with a primitive rendering style (solid colors with highlighted edges/creases) for an open-source game I contribute to. The world geometry is fairly simplistic and mostly comprises blocks and pyramids; there may ultimately be other simple volumes like cylinders, cones, and other kinds of prisms. The rendering will be done with OpenGL ES 2.
I have been experimenting with edge detection methods for the edges/creases. Shader-based edge detection (I tried the Sobel filter and several other algorithms) on the depth values and face normals seemed easiest, but I was unable to get a good result, mostly due to the precision limits of the depth buffer and the complexity of far-away geometry, as well as the inability to do any good antialiasing on the edges.
I ultimately decided that I needed to render the lines geometrically so I could make them thick, smooth out the edges, and so on. I would like to generate the lines programmatically from the geometry definition prior to rendering, to improve runtime performance. I can get most of the effect I want by drawing the main geometry, setting a depth offset, and then drawing lines over the geometry. However, this technique has some shortcomings, as seen below:
Overlapping Geometry
There may be several pieces of geometry overlapping or adjoining to form more complex structures. Where several pieces have overlapping/coplanar faces, I want to draw an outline around the combined shape, but not around each individual piece, which would let you see each separate part.
Current result on top; desired result on bottom:
Creases
This issue was also visible in the image above, but the image below shows my goal more clearly. I want to draw lines where there are creases in overlapping geometry, to make them stand out much more.
Current result on top; desired result on bottom:
From what I can tell so far, for the overlapping-faces problem I need to do intersection tests between my lines and any nearby intersecting faces, then break the lines up and discard the segments that cross other faces. To create lines in the creases between pieces of geometry, I need some kind of intersection test between the two faces that form the crease. However, I'm having a hard time wrapping my head around the step-by-step procedure for doing that. Again, I would like to set up these lines programmatically in a pre-rendering step if possible. Any guidance would be much appreciated.
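Not an answer, but one building block for the "break the lines up" step might be a segment/plane split; a hedged sketch with made-up names, assuming each resulting piece is then tested against the occluding solid (e.g. by its midpoint) and discarded if hidden:

#include <vector>

struct V3 { double x, y, z; };
static V3 lerp(V3 a, V3 b, double t) {
    return {a.x + t * (b.x - a.x), a.y + t * (b.y - a.y), a.z + t * (b.z - a.z)};
}
static double dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Seg { V3 a, b; };
struct Plane { V3 n; double d; };  // points p with dot(n, p) + d == 0

// Split a segment where it crosses a face's plane; the caller culls
// the piece that lies behind / inside the other geometry.
std::vector<Seg> splitByPlane(const Seg& s, const Plane& pl) {
    double da = dot(pl.n, s.a) + pl.d;
    double db = dot(pl.n, s.b) + pl.d;
    if (da * db >= 0.0) return {s};       // both ends on the same side
    double t = da / (da - db);            // parameter of the crossing point
    V3 m = lerp(s.a, s.b, t);
    return {Seg{s.a, m}, Seg{m, s.b}};
}

The crease lines would come from intersecting the planes of the two faces that meet and clipping the resulting line to both faces.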

What is the fastest shadowing algorithm (CPU only)?

Suppose I have a 3D model:
The model is given in the form of vertices, faces (all triangles) and normal vectors. The model may have holes and/or transparent parts.
For an arbitrarily placed light source at infinity, I have to determine:
[required] which triangles are (partially) shadowed by other triangles
Then, for the partially shadowed triangles:
[bonus] what fraction of the area of the triangle is shadowed
[superbonus] come up with a new mesh that describe the shape of the shadows exactly
My final application has to run on headless machines, that is, they have no GPU. Therefore, all the standard things from OpenGL, OpenCL, etc. might not be the best choice.
What is the most efficient algorithm to determine these things, considering this limitation?
Do you have a single mesh or multiple meshes? That is, is the shadow projected onto a single 'ground' surface, or onto more surfaces such as room walls or even nearby objects? The solutions are very different depending on this.
For flat ground/wall surfaces
the best way is usually a projected render onto the surface: the camera direction is opposite to the light direction, and the render target is the surface. The surface is usually not perpendicular to the light, so you need a projection to compensate. You need one render pass for each target surface, so this is not suitable if the shadow falls onto a nearby mesh (it is just for grounds/walls).
For more complicated scenes
you need a more advanced approach. There are quite a number of them, and each has its advantages and disadvantages. I would use a voxel map, but if you are limited by memory then some stencil/vector approach will be better. Of course, all of these techniques are quite expensive, and without a GPU I would not even try to implement them.
This is how a voxel map looks:
If you want just self-shadowing, then the voxel map can cover only a bounding box around your mesh. In that case you do not voxelize the whole mesh volume; instead you project each pixel along the light direction (ignoring the first voxel) to avoid shadowing the lit surface itself.
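A rough C++ sketch of that self-shadowing voxel map (all names made up; filling the occupancy grid, e.g. by splatting sample points of every triangle into it, is omitted). A surface point is shadowed if marching from it toward the light hits an occupied voxel, skipping the first voxel as described above:

#include <vector>

struct V3 { double x, y, z; };

// occupancy grid over the mesh's bounding box (assumed roughly cubic)
struct VoxelMap {
    int n;                    // resolution per axis
    V3 lo, hi;                // bounding box corners
    std::vector<char> occ;    // n*n*n occupancy flags
    VoxelMap(int n_, V3 lo_, V3 hi_) : n(n_), lo(lo_), hi(hi_), occ(n_ * n_ * n_, 0) {}
    bool index(V3 p, int& i, int& j, int& k) const {
        i = int((p.x - lo.x) / (hi.x - lo.x) * n);
        j = int((p.y - lo.y) / (hi.y - lo.y) * n);
        k = int((p.z - lo.z) / (hi.z - lo.z) * n);
        return i >= 0 && i < n && j >= 0 && j < n && k >= 0 && k < n;
    }
    char& at(int i, int j, int k) { return occ[(k * n + j) * n + i]; }
};

// march from surface point p toward the light (toLight must be unit length)
bool shadowed(VoxelMap& vm, V3 p, V3 toLight) {
    double step = (vm.hi.x - vm.lo.x) / vm.n;  // roughly one voxel per step
    int i, j, k;
    for (int s = 2; s < 2 * vm.n; ++s) {       // s starts at 2: skip own voxel
        V3 q{p.x + toLight.x * step * s,
             p.y + toLight.y * step * s,
             p.z + toLight.z * step * s};
        if (!vm.index(q, i, j, k)) return false;  // left the box: lit
        if (vm.at(i, j, k)) return true;          // occluder found: shadowed
    }
    return false;
}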

I have an OpenGL Tessellated Sphere and I want to cut a cylindrical hole in it

I am working on a piece of software which generates a polygon mesh to represent a sphere, and I want to cut a hole through the sphere. This polygon mesh is only an overlay across the surface of the sphere. I have a good idea of how to determine which polygons will intersect my hole, and I can remove them from my collection, but after that point I am getting a little confused. I was wondering if anyone could help me with the high-level concepts?
Basically, I envision three situations:
1.) The cylindrical hole does not intersect my sphere.
2.) The cylindrical hole partially goes through my sphere.
3.) The cylindrical hole goes all the way through my sphere.
For #1, I can test for this (no polygons removed) and act accordingly (do nothing). For #2 and #3, I am not sure how to re-tessellate my sphere to account for the hole. For #3, I have somewhat of an idea that is basically along the following lines:
a.) Find your entry point (a circle)
b.) Find your exit point (a circle)
c.) Remove the necessary polygons
d.) Make new polygons along the 4 'sides' of the hole to keep my sphere a manifold.
This extremely simplified algorithm has some 'holes' I would like to fill in. For example, I don't actually want the hole to have 4 sides; it should be a cylinder, or at least a tessellated representation of one. I'm also not sure how to make these new polygons so that my sphere with a hole in it remains a tessellated surface.
I have no idea how to approach scenario #2.
Sounds like you want constructive solid geometry.
Carve might do what you want. If you just want run-time rendering, OpenCSG will work.
Well, if you just want to render (visualize) this, then you may not need to change the generated meshes at all. Instead, use the stencil buffer to render your sphere with the holes. For example, I render a disc (a thin cylinder) with circular holes near its outer edge (as a base plate for machinery), combined with solid and transparent objects around it, so the holes need to be real holes. As I was too lazy to triangulate the shape as it is generated at runtime, I chose the stencil approach.
Create OpenGL context with Stencil buffer
I am using 8 bits for the stencil, but this technique needs just a single bit.
Clear stencil with 0 and turn off Depth&Color masks
This has to be done before rendering your mesh with the stencil, so if you have more objects rendered in this way, you need to do it before each one of them.
Set stencil with 1 for solid mesh
Clear stencil with 0 for hole meshes
Turn on Depth&Color masks and render solid mesh where stencil is 1
In code it looks like this:
// [stencil]
glEnable(GL_STENCIL_TEST);
// clear the whole stencil to 0
glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);
// enable stencil writes, turn off color and depth writes
glStencilMask(0xFF);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);
// stencil=1 for the solid mesh
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glCylinderxz(0.0, y, 0.0, r, qh);
// stencil=0 for the hole meshes
glStencilFunc(GL_ALWAYS, 0, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
for (b = 0.0, j = 0; j < 12; j++, b += db)
{
    x = dev_R * cos(b);
    z = dev_R * sin(b);
    // hole cylinders are slightly taller than the plate so they cut through
    glCylinderxz(x, y - 0.1, z, dev_r, qh + 0.2);
}
// turn color and depth writes back on
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);
// render the solid mesh; the holes are created by the stencil test
glStencilFunc(GL_NOTEQUAL, 0, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glColor3f(0.1, 0.3, 0.4);
glCylinderxz(0.0, y, 0.0, r, qh);
glDisable(GL_STENCIL_TEST);
where glCylinderxz(x,y,z,r,h) is just a function that renders a cylinder at (x,y,z) with radius r and the y-axis as its rotation axis. db is the angle step (2*Pi/12). The radii are: r is the plate radius, dev_r is the hole radius, and dev_R is the radius at which the hole centers lie. qh is the thickness of the plate.
The result looks like this (each of the 2 plates is rendered with this):
This approach is better suited to thin objects. If your cuts lead to thick sides, then you also need to render the cut sides; otherwise the lighting can look wrong on those parts.
I implemented CSG operations using scalar fields earlier this year. It works well if performance isn't important, that is, if the calculations don't need to be real time. The problem is that the derivative isn't defined everywhere, so you can forget about computing cheap vertex normals that way; it has to be done as a post-step.
See here for the paper I used (in the first answer), and some screenshots I made:
CSG operations on implicit surfaces with marching cubes
Also, CSG done this way requires the initial mesh to be represented using implicit surfaces. While any geometric mesh can be split into planes, that wouldn't give good results. So spheres would have to be represented by a radius and an origin, and cylinders by a radius, origin, and base height.
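For the sphere-with-a-cylindrical-hole case from the question, the scalar-field setup is tiny. A sketch (made-up names; the marching-cubes step that turns the field back into a mesh is omitted): each primitive is a signed distance function, and the CSG difference A minus B is max(dA, -dB):

#include <algorithm>
#include <cmath>

struct V3 { double x, y, z; };

// signed distance to a sphere (center c, radius r); negative inside
double sphereSDF(V3 p, V3 c, double r) {
    double dx = p.x - c.x, dy = p.y - c.y, dz = p.z - c.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz) - r;
}

// signed distance to an infinite cylinder along the z axis through c
double cylinderSDF(V3 p, V3 c, double r) {
    double dx = p.x - c.x, dy = p.y - c.y;
    return std::sqrt(dx * dx + dy * dy) - r;
}

// CSG difference: sphere minus drilled cylinder; sample this on a grid
// and run marching cubes over it to get the cut mesh
double sphereWithHole(V3 p) {
    double dSphere = sphereSDF(p, V3{0, 0, 0}, 1.0);
    double dHole   = cylinderSDF(p, V3{0, 0, 0}, 0.3);
    return std::max(dSphere, -dHole);
}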

Recommend some Bresenham's-like algorithm of sphere mapping in 2D?

I need the fastest sphere mapping algorithm, something like Bresenham's line-drawing algorithm. Something like the implementation I saw in Star Control 2 (rotating planets).
Are there any already invented and/or implemented techniques for this? I really don't want to reinvent the wheel. Please help...
Description of the problem.
I have a place on a 2D surface where the sphere has to appear. The sphere (let it be the Earth) has to be textured with a detailed map and has to be able to scale and rotate freely. I want to implement it with a map or some simple transformation function of coordinates: each pixel on the 2D image of the sphere is defined as a set of pixels from the cylindrical map of the sphere. This gives me the ability to implement antialiasing of the resulting image. I am also thinking about using mipmaps where one pixel of the resulting picture corresponds to more than one pixel of the original map (for example, close to the poles of the sphere). Deep down I feel that this can be implemented with some trivial math, but all these thoughts are just my thoughts.
This question is a little bit related to this one: Textured spheres without strong distortion, but there were no answers available on my question.
UPD: I suppose that I have no hardware support. I want a cross-platform solution.
The standard way to do this kind of mapping is a cube map: the sphere is projected onto the 6 sides of a cube. Modern graphics cards support this kind of texture at the hardware level, including full texture filtering; I believe mipmapping is also supported.
An alternative method (which is not explicitly supported by hardware, but which can be implemented with reasonable performance by procedural shaders) is parabolic mapping, which projects the sphere onto two opposing parabolas (each of which is mapped to a circle in the middle of a square texture). The parabolic projection is not a projective transformation, so you'll need to handle the math "by hand".
In both cases, the distortion is strictly limited. Due to the hardware support, I recommend the cube map.
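For reference, the direction-to-texel math of a cube map is short. A sketch following the standard OpenGL cube-map face conventions (+X, -X, +Y, -Y, +Z, -Z), mapping a unit direction on the sphere to a face index and (u,v) in [0,1]^2 on that face:

#include <cmath>

// (x,y,z) must be a unit direction from the sphere's center
void dirToCubeMap(double x, double y, double z,
                  int& face, double& u, double& v) {
    double ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    double ma, uc, vc;
    if (ax >= ay && ax >= az) {        // +X or -X face
        face = x > 0 ? 0 : 1; ma = ax; uc = x > 0 ? -z : z; vc = -y;
    } else if (ay >= az) {             // +Y or -Y face
        face = y > 0 ? 2 : 3; ma = ay; uc = x; vc = y > 0 ? z : -z;
    } else {                           // +Z or -Z face
        face = z > 0 ? 4 : 5; ma = az; uc = z > 0 ? x : -x; vc = -y;
    }
    u = 0.5 * (uc / ma + 1.0);         // from [-1,1] to [0,1]
    v = 0.5 * (vc / ma + 1.0);
}

Antialiasing and mipmapping then work exactly as for any other 2D texture lookup on the six faces.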
There is a nice new way to do this: HEALPix.
Advantages over any other mapping:
The bitmap can be divided into equal parts (very little distortion)
Very simple, recursive geometry of the sphere with arbitrary precision.
Example image.
Did you take a look at Jim Blinn's articles "How to draw a sphere"? I do not have access to the full articles, but it looks like what you need.
I'm a big fan of StarconII, but unfortunately I don't remember the details of what the planet drawing looked like...
The first option is triangulating the sphere and drawing it with standard 3D polygons. This has definite weaknesses as far as verisimilitude is concerned, but it uses the available hardware acceleration and can be made to look reasonably good.
If you want to roll your own, you can rasterize it yourself. Foley, van Dam et al.'s Computer Graphics: Principles and Practice has a chapter on Bresenham-style algorithms; you want the section on "Scan Converting Ellipses".
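The circle case (which is what the sphere's silhouette needs) gives the flavor of that chapter. A sketch of the classic integer-only midpoint algorithm, the circular cousin of Bresenham's line, walking one octant and mirroring to the other seven:

// plot is any per-pixel callback, e.g. writing into a framebuffer
void rasterCircle(int cx, int cy, int r, void (*plot)(int, int)) {
    int x = r, y = 0, err = 1 - r;     // integer decision variable
    while (x >= y) {
        plot(cx + x, cy + y); plot(cx - x, cy + y);
        plot(cx + x, cy - y); plot(cx - x, cy - y);
        plot(cx + y, cy + x); plot(cx - y, cy + x);
        plot(cx + y, cy - x); plot(cx - y, cy - x);
        ++y;
        if (err < 0) err += 2 * y + 1;           // midpoint inside: go straight
        else { --x; err += 2 * (y - x) + 1; }    // midpoint outside: step in
    }
}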
For the point-cloud idea I suggested in earlier comments: you could avoid runtime parameterization questions by preselecting and storing the (x,y,z) coordinates of surface points instead of a 2D map. I was thinking of partially randomizing the point locations on the sphere, so that they wouldn't cause structured aliasing when transformed (forwards, backwards, whatever 8^) onto the screen. On the downside, you'd have to deal with the "fill" factor: summing up the colors as you draw them and dividing by the number of points. You'd also have the problem of what to do if there are no points; e.g., if you zoom in with extreme magnification, you'd need to look for the nearest point in that case.
