Is a closed polygonal mesh flipped? - algorithm

I have a 3D modeling application. Right now I'm drawing the meshes double-sided, but I'd like to switch to single-sided rendering when the object is closed.
If the polygonal mesh is closed (no boundary edges/completely periodic), it seems like I should always be able to determine if the object is currently flipped, and automatically correct.
Being flipped means that my normals point into the object instead of out of the object. Being flipped is a result of a mismatch between my winding rules and the current frontface setting, but I compute the normals directly from the geometry, so looking at the normals is a simple way to detect it.
One thing I was thinking was to take the bounding box, find the highest point, and see if its normal points up or down - if it's down, then the object is flipped.
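Roughly this (a sketch only; assume triangles are given as ((v0, v1, v2), normal) tuples, with the normals I compute from the geometry):

def looks_flipped(faces):
    """Highest-point heuristic: find the face whose topmost vertex is highest
    and check whether its normal points up. faces = [((v0, v1, v2), normal), ...]."""
    best_normal, best_y = None, float("-inf")
    for verts, normal in faces:
        top = max(v[1] for v in verts)      # highest y of this triangle
        if top > best_y:
            best_y, best_normal = top, normal
    return best_normal[1] < 0.0             # a downward normal at the very top means flipped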
But it seems like this solution might be prone to errors with degenerate geometry, or floating point error, as I'd only be looking at a single point. I guess I could get all 6 axis-aligned extents, but that seems like a slightly better kludge, and not a proper solution.
Is there a robust, simple way to do this? Robust and hard would also work.. :)

This is a robust, but slow way to get there:
Take a corner of the bounding box and offset it away from the centroid (so that it is guaranteed to be outside your closed polygonal mesh), then create a line segment from that point to the center of any triangle on your mesh.
Measure the angle between that line segment and the normal of the triangle.
Intersect that line segment with each triangle face of your mesh (including the tri you used to generate the segment).
If there are an odd number of intersections (counting the target triangle itself), the segment reaches that triangle from the outside, so a correctly oriented normal should make an angle of less than 90 degrees with the direction from the triangle's center back toward the outside point. If there are an even number of intersections, that angle should be greater than 90 degrees.
If you observe the opposite relationship, the mesh is flipped.
This should work for very complex surfaces, but they must be closed, or it breaks down.
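A rough sketch of this test (my own illustration, not the poster's code; it assumes the mesh is given as a list of ((v0, v1, v2), normal) tuples with normals computed from the winding):

def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

def segment_hits_triangle(p, q, tri, eps=1e-9):
    """Moller-Trumbore: does segment p->q cross triangle tri = (v0, v1, v2)?"""
    v0, v1, v2 = tri
    d = sub(q, p)
    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(d, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:
        return False                        # segment parallel to the triangle plane
    inv = 1.0 / det
    tvec = sub(p, v0)
    u = dot(tvec, pvec) * inv
    if u < 0.0 or u > 1.0:
        return False
    qvec = cross(tvec, e1)
    v = dot(d, qvec) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    t = dot(e2, qvec) * inv
    return 0.0 <= t <= 1.0                  # hit lies within the segment

def mesh_looks_flipped(triangles):
    """triangles: list of ((v0, v1, v2), normal). True if normals appear to point inward."""
    pts = [v for tri, _ in triangles for v in tri]
    lo = [min(p[i] for p in pts) for i in range(3)]
    hi = [max(p[i] for p in pts) for i in range(3)]
    pad = 1.0 + max(hi[i] - lo[i] for i in range(3))
    outside = (lo[0] - pad, lo[1] - pad, lo[2] - pad)   # guaranteed outside the mesh
    tri0, n0 = triangles[0]
    center = tuple(sum(v[i] for v in tri0) / 3.0 for i in range(3))
    hits = sum(segment_hits_triangle(outside, center, tri) for tri, _ in triangles)
    toward_mesh = sub(center, outside)
    # Odd count (the target triangle is included) => we approach it from outside,
    # so a correct outward normal should face back toward `outside` (negative dot).
    normal_faces_outside = dot(toward_mesh, n0) < 0.0
    return (hits % 2 == 1) != normal_faces_outside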

"I'm drawing the meshes double-sided"
Why are you doing that? If you're using OpenGL, there is a much better way to go that saves you all the work. Use:
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, 1);
With this, all the polygons are always two-sided.
The only reason you would want to use one-sided lighting is if you have an open or partially inverted mesh and you want to somehow indicate which parts belong to the inside by leaving them unlit.
Generally, the problem you're posing is an open problem in geometry processing and AFAIK there is no sure-fire general way that can always determine the orientation. As you suggest, there are heuristics that work almost always.
Another approach is reminiscent of a famous point-in-polygon algorithm: choose a vertex on the mesh and shoot a ray from it in the direction of the normal. If the ray hits an even number of faces, the normal points to the outside; if it hits an odd number, the normal points to the inside. Be careful not to count the point of origin as an intersection. This approach only works if the mesh is a closed manifold, and it can hit edge cases if the ray happens to pass exactly between two polygons, so you might want to repeat it several times and take a majority vote.
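A sketch of that voting version, reusing the segment_hits_triangle helper from the earlier sketch (the "ray" is approximated by a long segment nudged off the surface, and the per-face normals are assumed to be unit length):

import random

def normals_point_outward(triangles, samples=7):
    """Majority vote over a few faces: cast a long segment from just off the
    face center along its (unit) normal and count crossings with the mesh.
    Even crossings => that normal points outward, odd => inward."""
    pts = [v for tri, _ in triangles for v in tri]
    lo = [min(p[i] for p in pts) for i in range(3)]
    hi = [max(p[i] for p in pts) for i in range(3)]
    far = 3.0 * sum((hi[i] - lo[i]) ** 2 for i in range(3)) ** 0.5   # well beyond the mesh
    votes = 0
    for tri, n in random.sample(triangles, min(samples, len(triangles))):
        c = tuple(sum(v[i] for v in tri) / 3.0 for i in range(3))
        start = tuple(c[i] + 1e-6 * n[i] for i in range(3))   # nudge off the surface so the
        end = tuple(c[i] + far * n[i] for i in range(3))      # originating face is not counted
        hits = sum(segment_hits_triangle(start, end, t) for t, _ in triangles)
        votes += 1 if hits % 2 == 0 else -1
    return votes > 0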

Related

2D geometry push algorithm

I have a rectangle board and in it, there are some disjoint 2D shapes such as rectangles, polygons, and more complex geometries, such as simple shapes with arc/line edges.
Usually they are compact, but for some shapes, we may be able to rotate or translate them.
If we move one geometry in a given direction, the adjacent geometry should also be moved or rotated; it looks like the first geometry pushes the second geometry, and the second geometry might push other geometries in turn. Finally, we may reach another stable state, or there is no room left to push.
Is there any existing investigation on this?
Let's first focus on simple polygons, convex and non-convex.
Push might be any direction.
example image
I'm doing some investigation but could not find existing papers about this topic.
Can we simulate it through mechanics or dynamics? Or pure geometry algorithm?
Even just some keywords for a paper search would be very useful.
It's similar to the auto-push concept in EDA tools: the user can move one element (pin/wire) of a circuit, and the software automatically pushes adjacent elements so that the topology is preserved and the design rules are met.
I think I can use some concepts from mechanics, at least to compute the moving direction:
If the contact between polygon A and polygon B is a single point, then pushing A in one direction generates a force on B along the contact normal. But the force may not produce a move; we need to loop over all the parts, or reach the boundary, to check how far each one can move.
Let's ignore rotation first.
I post this as an answer because I don't have enough reputation for a comment. If I don't misunderstand the problem, geometrically this sounds like a collision detection problem. You have to apply a transformation (translation, rotation) to your geometry and check whether its new position overlaps another geometry. If it does, you have to apply another transformation to that second geometry as well. Collision detection is a big topic in games and simulation.
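As a hedged sketch of that collision-detection loop, for convex polygons only (my own illustration; rotation and the board boundary are ignored, matching the simplification above): translate the driven polygon, then repeatedly push every overlapped polygon out along the minimum-translation vector from a separating-axis test, until nothing overlaps or an iteration cap is hit.

def _axes(poly):
    """Edge normals of a simple polygon (the candidate separating axes for SAT)."""
    n = len(poly)
    axes = []
    for i in range(n):
        ex = poly[(i + 1) % n][0] - poly[i][0]
        ey = poly[(i + 1) % n][1] - poly[i][1]
        length = (ex * ex + ey * ey) ** 0.5       # assumes no zero-length edges
        axes.append((ey / length, -ex / length))
    return axes

def _project(poly, axis):
    dots = [p[0] * axis[0] + p[1] * axis[1] for p in poly]
    return min(dots), max(dots)

def mtv(a, b):
    """Minimum translation vector that pushes convex polygon b out of a, or None."""
    best = None
    for axis in _axes(a) + _axes(b):
        a_lo, a_hi = _project(a, axis)
        b_lo, b_hi = _project(b, axis)
        overlap = min(a_hi, b_hi) - max(a_lo, b_lo)
        if overlap <= 0:
            return None                            # separating axis found: no collision
        if best is None or overlap < best[0]:
            # orient the axis so it pushes b away from a
            sign = 1.0 if (b_lo + b_hi) - (a_lo + a_hi) >= 0 else -1.0
            best = (overlap, (axis[0] * sign, axis[1] * sign))
    depth, axis = best
    return (axis[0] * depth, axis[1] * depth)

def push(polys, index, dx, dy, max_iters=100):
    """Translate polys[index] by (dx, dy) and propagate pushes to the others."""
    polys = [list(map(tuple, p)) for p in polys]
    polys[index] = [(x + dx, y + dy) for (x, y) in polys[index]]
    for _ in range(max_iters):
        moved = False
        for i in range(len(polys)):
            for j in range(len(polys)):
                if i == j or j == index:           # the driven polygon stays where the user put it
                    continue
                v = mtv(polys[i], polys[j])
                if v:
                    polys[j] = [(x + v[0], y + v[1]) for (x, y) in polys[j]]
                    moved = True
        if not moved:
            break
    return polys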

Fitting a mesh and a drawing together

Suppose you're trying to render a user's freehand drawings using a 2D triangular mesh. Start with a plain regular mesh of triangles and color their edges to match the drawing as closely as possible. To improve the results, you can move the vertices of the mesh slightly, but keep them within a certain distance of where they would be in a regular mesh so the mesh doesn't become a mess. Let's say that 1/4 of the length of an edge is a fair distance, giving the vertices room to move while keeping them out of each other's personal space.
Here is a hand-made representation of roughly what we're trying to do. Since the drawing is coming freehand from the user, it's a series of line segments taken from mouse movements.
The regular mesh is slightly distorted to allow the user's drawing to be better represented by the edges of the mesh. Unfortunately the end result looks quite bad, but perhaps we could have somehow distorted the drawing to better fit the mesh, and the combination of the two distortions would have created something far more recognizable as the original drawing.
The important thing is to preserve angles, so if the user draws a 90-degree corner it ends up looking close to a 90-degree corner, and if the user draws a straight line it doesn't end up looking like a zigzag. Aside from that, there's no reason why we shouldn't change the drawing in other ways, like translating it, scaling it and so on, because we don't need to exactly preserve distances.
One tricky test case is a perfectly vertical line. The triangular mesh in the image above can easily handle horizontal lines, but a naive approach would turn a vertical line into a jagged mess. The best technique seems to be to horizontally translate the line until it passes through each horizontal edge alternating between 1/4 and 3/4 of the way along the edge. That way we can nudge the vertices to the left or right by 1/4 and get a perfect vertical line. That's obvious to a person, but how can an algorithm be made to see that? It involves moving the line further away from vertices, which is the opposite of what we usually want.
Is there some trick to doing this? Does anyone know of a simple algorithm that gives excellent results?
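For reference, here is a rough sketch of the basic "nudge each vertex toward the drawing, clamped to a quarter of an edge length" step (the data layout is invented for illustration); this is the kind of naive fitting that produces the jagged vertical lines described above:

def fit_vertices(rest_positions, drawing_segments, edge_len):
    """Nudge every mesh vertex toward the nearest point on the drawing, but
    never more than edge_len / 4 away from its regular rest position.
    rest_positions: [(x, y), ...]; drawing_segments: [((ax, ay), (bx, by)), ...] (non-empty)."""
    max_d = edge_len / 4.0
    fitted = []
    for vx, vy in rest_positions:
        best = None
        for (ax, ay), (bx, by) in drawing_segments:
            dx, dy = bx - ax, by - ay
            denom = dx * dx + dy * dy
            t = 0.0 if denom == 0.0 else max(0.0, min(1.0, ((vx - ax) * dx + (vy - ay) * dy) / denom))
            px, py = ax + t * dx, ay + t * dy          # closest point on this segment
            d2 = (px - vx) ** 2 + (py - vy) ** 2
            if best is None or d2 < best[0]:
                best = (d2, px, py)
        _, px, py = best
        ox, oy = px - vx, py - vy
        dist = (ox * ox + oy * oy) ** 0.5
        if dist > max_d:                               # clamp the displacement to edge_len / 4
            ox, oy = ox / dist * max_d, oy / dist * max_d
        fitted.append((vx + ox, vy + oy))
    return fitted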

How can I create an internal spiral for a polygon?

For any shape how can I create a spiral inside it of a similar shape. This would be a similar idea to bounding (using Minkowski sum). Rather than creating the same shape inside the shape though it would be a spiral of same shape.
I found this - http://www.cis.upenn.edu/~cis110/13su/lectures/Spiral.java
It creates a spiral based on the parameters passed so it can be for any regular shape.
I want the above for all shapes i.e. irregular polygons too.
I am not hugely familiar with geometric terminology, but I have looked up involutes and an internal spiral search algorithm too; they haven't been useful to me.
Does anyone have any idea where I'd find an algorithm such as this or at least ideas of how I'd come up with one?
This task is extremely hard to do.
1. You need the boundary polygon you want to fill with the spiral. I think you have it already.
2. Create a new, smaller polygon by shifting all lines inwards by the step. It is similar to creating a stroke line around a polygon. The step is the screw width, so at the start of the loop it is 0 and at the end it is d.
3. Remove invalid lines from the newly generated screw. Some lines on corners and curvatures will intersect; this is very hard to detect/repair reliably (see this for the basics).
4. Repeat (do the next screw) until no space for a screw is found.
But now, after the first screw, the step is always d, and this will not necessarily fill the whole shape. For example, if the shape has a thinner spot, it will be filled much faster than the rest, so there can still be some holes left. You should detect them and handle them as you see fit (see Finding holes in 2d point sets). Be aware that detecting whether the area is filled is also not trivial.
This is what this approach looks like:
[Notes]
If you forget about the spiral and just want to fill the interior with a zig-zag or similar pattern, then this is not that hard.
Spiral filling raises a lot of hard geometric problems, and if you are not skilled in geometry and vector math this task could be too big a challenge for a beginner, or even a medium-skilled programmer in this field, to get working properly. That is at least my opinion (I have done this before), so take it as such.
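As a rough illustration of the "shift all lines inwards and repeat" step (not this answer's code): with a constant step you get concentric rings rather than a true spiral, and a spiral additionally needs the 0-to-d ramp and the joins between consecutive loops. This sketch leans on the shapely library, whose negative buffer also takes care of the self-intersection cleanup mentioned above:

from shapely.geometry import Polygon

def inward_rings(boundary_points, step):
    """Concentric inward offsets of a polygon; each ring is a list of (x, y).
    step must be > 0. A true spiral would additionally vary the offset from
    0 to `step` along each loop and join consecutive rings."""
    rings = []
    current = Polygon(boundary_points)
    while not current.is_empty:
        parts = [current] if current.geom_type == "Polygon" else list(current.geoms)
        for p in parts:
            rings.append(list(p.exterior.coords))
        current = current.buffer(-step, join_style=2)   # shift all edges inwards; mitred corners
    return rings

# Example: inward_rings([(0, 0), (10, 0), (10, 6), (0, 6)], step=1.0)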
I worked on something like this using offsets from the polygon based on the medial axis of the Voronoi diagram. It's not simple. I can't share the code, as it belongs to the company I worked for, and it may not exactly fit your needs.
But here are some similar things I found by other people:
http://www.me.berkeley.edu/~mcmains/pubs/DAC05OffsetPolygon.pdf
http://www.cosy.sbg.ac.at/~held/teaching/seminar/seminar_2010-11/hsm.pdf

3D algorithm for clamping a model to view frustum

What would be an efficient approach for constraining an object so that it's always at least partially intersecting the view frustum?
The use case is that when viewing a model I want to clamp camera panning, as well as model translation, so that the view frustum is never looking at empty space.
One approach I tried was to wrap the model objects in bounding volumes, then enforce the constraint when those fall outside the frustum. I've tried bounding boxes so far, but am considering using a minimal convex hull.
The problem is that when you zoom in close enough, it's still possible to be looking at empty space within the boundary, as shown in the attached diagram.
This is for a WebGL application, so it needs to be fairly efficient in JavaScript, and it also needs to handle thousands of vertices.
Ideally you would have an AABB tree of your mesh, and then you could recursively project onto the camera/screen until you get an intersection?
http://www.codersnotes.com/algorithms/projected-area-of-an-aabb
Edit: this is just what a frustum-culling algorithm against an AABB tree does anyway, so looking for an optimized solution means looking for optimized frustum-culling techniques:
https://fgiesen.wordpress.com/2010/10/17/view-frustum-culling/
http://www2.in.tu-clausthal.de/~zach/teaching/cg_literatur/vfc_bbox.pdf
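For reference, the per-plane rejection test that such culling boils down to might look like this (a sketch under the assumption that the frustum is given as six planes whose normals point inward); an AABB tree is then walked top-down, recursing only into nodes whose box is not rejected:

def aabb_outside_frustum(box_min, box_max, planes):
    """Conservative AABB/frustum test. `planes` are (a, b, c, d) with normals
    pointing into the frustum (a*x + b*y + c*z + d >= 0 means inside).
    Returns True only if the box lies completely outside some plane."""
    for a, b, c, d in planes:
        # p-vertex: the box corner furthest along the plane normal
        px = box_max[0] if a >= 0 else box_min[0]
        py = box_max[1] if b >= 0 else box_min[1]
        pz = box_max[2] if c >= 0 else box_min[2]
        if a * px + b * py + c * pz + d < 0:
            return True
    return False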
If a general approximation is acceptable, I would try a point cloud. First create a list of points, either every Nth mesh vertex or every Nth face centre, with N being, for example, 10. Once you have this point array, all you do is check whether any of the points is in the frustum whenever its orientation updates. If none is, the user has moved or rotated the camera too far and you need to restore the last acceptable orientation.
I know that this may seem quite ridiculous, but I think it is fairly easy to implement, and frustum-checking a vertex is just a couple of multiplications per plane. It will not be perfect, though.
You can also make the frustum a little smaller to ensure that there is some border around the object.
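A minimal sketch of that check (my own, with the frustum assumed to be given as six inward-facing planes); if it returns False after a camera update, restore the previous orientation:

def any_sample_in_frustum(points, planes, stride=10):
    """Check every `stride`-th point against the frustum planes.
    `planes` are (a, b, c, d) with normals pointing into the frustum, so
    a*x + b*y + c*z + d >= 0 for every plane means the point is inside."""
    for x, y, z in points[::stride]:
        if all(a * x + b * y + c * z + d >= 0.0 for a, b, c, d in planes):
            return True
    return False

The "slightly smaller frustum" idea corresponds to requiring a*x + b*y + c*z + d >= margin for some positive margin instead of >= 0.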

Ray-Polygon Intersection Point on the surface of a sphere

I have a point (Lat/Lon) and a heading in degrees (relative to true north) along which this point is traveling. I have numerous stationary polygons (points defined in Lat/Lon) which may or may not be convex.
My question is: how do I calculate the closest intersection point, if any, with a polygon? I have seen several confusing posts about ray tracing, but they all seem to relate to 3D, where the ray and polygon are not on the same plane, and they also require the polygons to be convex.
Sounds like you should be able to do a simple 2D line intersection...
However, I have worked with Lat/Long before and know that it isn't exactly true to any 2D coordinate system.
I would start with a general "IsPointInPolygon" function; you can find a million of them by googling. Then test it on your polygons to see how well it works. If it is accurate enough, just use that. But it is possible that, due to the non-square nature of lat/long coordinates, you may have to do some modifications using spherical geometry.
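For reference, one of those "million" point-in-polygon functions looks roughly like this (even-odd / ray-crossing rule; the lat/long caveat above still applies):

def point_in_polygon(x, y, poly):
    """Standard even-odd (ray crossing) test; `poly` is a list of (x, y) vertices.
    Works for convex and concave polygons; treating lat/long as plain 2D is only
    reasonable over small areas away from the poles and the antimeridian."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                      # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside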
In 2D, the calculations are fairly simple...
You could always start by checking whether the ray's start point is inside the polygon (since in that case the start point itself is effectively the closest intersection).
If the start point is outside the polygon, you could do a ray/line-segment intersection with each of the boundary segments of the polygon and use the closest intersection found. That handles convex/concave features, etc.
Compute whether the ray intersects each line segment in the polygon using this technique.
The resulting scaling factor in (my accepted) answer there, which I called h, is how far along the ray the intersection lies. You're looking for a value between 0 and 1.
If there are multiple intersection points, that's fine! If you want the "first," use the one with the smallest value of h.
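Not the linked answer's code, but a sketch of the same idea: intersect the ray with every polygon edge and keep the smallest ray parameter (the h above). Since the heading gives a ray rather than a second segment, h here only needs to be >= 0; the edge parameter is the one constrained to [0, 1].

def ray_segment_intersection(ox, oy, dx, dy, ax, ay, bx, by, eps=1e-12):
    """Intersect the ray origin + h*(dx, dy), h >= 0, with segment a-b.
    Returns h (distance along the ray in units of the direction vector) or None."""
    ex, ey = bx - ax, by - ay
    denom = dx * ey - dy * ex
    if abs(denom) < eps:
        return None                          # parallel or degenerate; ignored in this sketch
    h = ((ax - ox) * ey - (ay - oy) * ex) / denom
    u = ((ax - ox) * dy - (ay - oy) * dx) / denom
    if h >= 0.0 and 0.0 <= u <= 1.0:
        return h
    return None

def first_hit(ox, oy, dx, dy, polygon):
    """Closest intersection of the ray with the polygon boundary, or None."""
    hits = []
    n = len(polygon)
    for i in range(n):
        ax, ay = polygon[i]
        bx, by = polygon[(i + 1) % n]
        h = ray_segment_intersection(ox, oy, dx, dy, ax, ay, bx, by)
        if h is not None:
            hits.append(h)
    if not hits:
        return None
    h = min(hits)                            # smallest h = first intersection along the ray
    return (ox + h * dx, oy + h * dy)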
The answer on this page seems to be the most accurate.
Question 1.E, CodeGuru
