2D geometry push algorithm

I have a rectangular board containing some disjoint 2D shapes: rectangles, polygons, and more complex geometries such as simple shapes with arc/line edges.
Usually they are packed compactly, but some shapes can still be rotated or translated.
If we move one geometry in a given direction, adjacent geometries should also move or rotate; it looks like the first geometry pushes the second. The second geometry might in turn push other geometries. Eventually we either reach another stable state, or there is no room left to push.
Is there any existing investigation on this?
Let's first focus on simple polygons, convex and non-convex.
The push may be in any direction.
(example image)
I've been investigating but could not find existing papers on this topic.
Can we simulate it through mechanics or dynamics? Or with a pure geometry algorithm?
Even some keywords for a paper search would be very useful.
It's similar to the auto-push concept in EDA tools: the user moves one element (pin/wire) of a circuit, and the software automatically pushes adjacent elements so that the topology is kept and the design rules are met.
I think I can use some concepts from mechanics, at least to compute the moving direction:
If the contact between polygon A and polygon B is a single point, then pushing A in some direction generates a force on B along the contact normal. But the force may not produce a move. We need to loop over all the contacts, or until we reach the boundary, to check how far things can move.
Let's ignore rotation first.
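A tiny sketch of that normal-force idea (plain Python; the function names are mine): only the component of the push along the contact normal is transmitted to B through a point contact.

```python
import math

def transmitted_push(push_dir, contact_normal):
    """Project A's push direction onto the contact normal to get
    the direction B is forced to move (rotation ignored).

    push_dir:        unit vector, the direction A is pushed
    contact_normal:  unit vector at the contact point, pointing from A into B
    Returns the motion imparted to B, or None if A moves away from B.
    """
    # Only the normal component of the push is transmitted
    # through a point contact.
    s = push_dir[0] * contact_normal[0] + push_dir[1] * contact_normal[1]
    if s <= 0:
        return None  # A separates from B; no push is transmitted
    return (s * contact_normal[0], s * contact_normal[1])

# Example: pushing A straight right against a contact whose normal
# points up-right at 45 degrees moves B along that normal.
n = (math.sqrt(0.5), math.sqrt(0.5))
print(transmitted_push((1.0, 0.0), n))  # ~ (0.5, 0.5)
```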

I post this as an answer because I don't have enough reputation to comment. If I haven't misunderstood the problem, geometrically this sounds like a collision detection problem. You apply a transformation (translation, rotation) to one geometry and check whether its new position overlaps another geometry; if it does, you apply a transformation to that second geometry, and so on. Collision detection is a big topic in games and simulation.
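As a rough illustration of that move-then-resolve loop, here is a minimal sketch using the shapely Python library. The propagation strategy (nudging each overlapped neighbor along the same push direction until it clears) is a simplification of mine, and it ignores rotation and the board boundary.

```python
from shapely.affinity import translate
from shapely.geometry import Polygon

def push(shapes, index, dx, dy, step=0.05, max_iter=1000):
    """Translate shapes[index] by (dx, dy), a nonzero direction, and
    propagate the push: any shape it now overlaps is nudged along the
    same direction until the overlap clears, possibly pushing others."""
    shapes = list(shapes)
    shapes[index] = translate(shapes[index], xoff=dx, yoff=dy)
    queue = [index]
    iterations = 0
    while queue and iterations < max_iter:
        iterations += 1
        i = queue.pop()
        for j, other in enumerate(shapes):
            if j == i:
                continue
            # Nudge the neighbor along the push direction until it
            # no longer overlaps the shape that moved into it.
            while shapes[i].intersects(other) and not shapes[i].touches(other):
                other = translate(other, xoff=step * dx, yoff=step * dy)
            if other is not shapes[j]:
                shapes[j] = other
                queue.append(j)  # the neighbor may push others in turn
    return shapes

# Two unit squares side by side: pushing the left one right by 0.5
# shoves the right one over as well.
a = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
b = Polygon([(1, 0), (2, 0), (2, 1), (1, 1)])
print(push([a, b], 0, 0.5, 0.0)[1].bounds)
```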

Related

3D mesh edge detection / feature line computation algorithm

I have a program that visualizes triangular meshes and allows the users to draw on the meshes using a pen. I want to have a "snapping" mode in my system. The snapping mode performs drawing corrections for the user in the sense that the user-drawn lines are snapped to the nearest edge (or the silhouette) of that part of the mesh.
I'm looking for an algorithm that computes the edges visible on the mesh from a given point of view. By edges, I'm referring to the outlines of the shape: corner points and the lines between them (similar to the definition of an edge in computer vision/image processing, such as Canny edges).
So far I've thought of two approaches for this:
1. Edge detection: so far I've only found this paper. Their method is understandable, yet the implementation is not trivial (due to tensor computations and some ambiguity in their explanations). The problem with this approach is that it produces "edge strength values", a value in the range [0, 1] for every vertex, where a value of 1 indicates an edge vertex with high confidence. This introduces extra thresholding parameters into the system, which I'd rather not have. (Their output is an image with the [0, 1] range scaled to [0, 65535].)
2. Rendering, or non-photorealistic methods such as the one asked about in this question or this paper. They seem to be able to create the silhouette that I'm after.
I'm not a graphics expert, and as yet I don't know whether their methods can be used for computation of the feature lines rather than rendering.
I was wondering if anybody has ideas about a good algorithm for what I want to do. Since the system is very interactive, performance is important. The snapping feature does not have to be enabled all the time (so if the method is computationally expensive, some delay when the "snapping enabled" mode is toggled can be tolerated while the algorithm computes the edges). Also, if you know of any implementation (preferably open source), I'd be grateful if you could share it with me.
There are two types of edges that you want to detect:
Silhouette edges are viewpoint dependent: they correspond to the places where the line of sight is tangent to the surface. With a triangulated model, they are easy to determine, as they are shared by a front-facing triangle and a back-facing one.
"Angular" edges are viewpoint independent and formed by a discontinuity in the tangent plane direction. As a triangulated model has this kind of discontinuity everywhere, there is no exact criterion to find them; just set a threshold on the angle formed by two triangles. This threshold must be chosen so that smooth patches do not trigger it.
By this approach, you will find the wanted edges in 3D.
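A minimal sketch of both tests, assuming an indexed triangle mesh (NumPy; all names are mine):

```python
import numpy as np

def face_normals(verts, tris):
    """Unit normal per triangle (verts: (n,3) array, tris: (m,3) int array)."""
    a, b, c = verts[tris[:, 0]], verts[tris[:, 1]], verts[tris[:, 2]]
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def find_edges(verts, tris, eye, crease_deg=30.0):
    """Return (silhouette_edges, crease_edges) as lists of vertex-index pairs.

    A silhouette edge is shared by a front-facing and a back-facing
    triangle (viewpoint dependent); a crease ("angular") edge is one
    whose two triangles meet at a dihedral angle above the threshold."""
    normals = face_normals(verts, tris)
    centers = verts[tris].mean(axis=1)
    front = np.einsum('ij,ij->i', normals, eye - centers) > 0

    # Map each undirected edge to the triangles that share it.
    edge_faces = {}
    for t, (i, j, k) in enumerate(tris):
        for e in ((i, j), (j, k), (k, i)):
            edge_faces.setdefault(tuple(sorted(e)), []).append(t)

    cos_thresh = np.cos(np.radians(crease_deg))
    silhouettes, creases = [], []
    for edge, faces in edge_faces.items():
        if len(faces) != 2:
            continue  # boundary or non-manifold edge
        t0, t1 = faces
        if front[t0] != front[t1]:
            silhouettes.append(edge)
        if np.dot(normals[t0], normals[t1]) < cos_thresh:
            creases.append(edge)
    return silhouettes, creases
```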
This is not enough, as some of them are hidden by other surfaces. You have the option of integrating them as edges in the 3D model and letting the rendering engine do its job, or, if you have the courage, implementing a hidden-line removal algorithm. (The Wikipedia link is a little terse.)
Since posting the question, something else came to mind. Since 2D edge detection is a very well-studied problem, one way of tackling the problem is to perform 2D edge detection on a projected image of the mesh.
In other words, given a specific view of the mesh, one could generate a 2D image, run a 2D edge detection algorithm (such as the Canny edge detector) on it, and back-project the results to 3D to determine the silhouettes of the mesh in question. One possible advantage of this is simplicity!
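A sketch of that 2D route using OpenCV's Canny. Here render_mesh_view is a hypothetical placeholder for whatever offscreen rendering you already have, and the back-projection step (depth lookup per edge pixel) is only indicated in the comments.

```python
import cv2
import numpy as np

def screen_space_edges(gray_image, low=100, high=200):
    """Run Canny on a rendered (grayscale, uint8) view of the mesh.
    Returns the (row, col) pixel coordinates of detected edge pixels;
    back-projecting them to 3D requires the depth buffer and the
    camera matrices from the same render."""
    edges = cv2.Canny(gray_image, low, high)
    return np.column_stack(np.nonzero(edges))

# Hypothetical usage: 'render_mesh_view' stands in for your own
# offscreen render of the mesh from the current viewpoint.
# img = render_mesh_view(mesh, camera)          # placeholder
# pixels = screen_space_edges(img)
```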
Edit (2017):
Even though I moved away from this, I returned to this problem again for a different purpose. To anybody else looking into this problem: there is a paper that talks about various contours from meshes that's worth reading (the paper is "Suggestive Contours for Conveying Shape" by DeCarlo et al.).
A working implementation of the methods discussed in the paper is available here.

WebGL: draw pixels inside vertex positions

I am new to the WebGL and shaders world, and I was wondering what the best way is to paint only the pixels within a path. I have the 2D positions of each point, and I would like to fill the inside of the path with a color.
(images: the 2D point positions, and the desired filled result)
Could someone give me a direction? Thanks!
Unlike the Canvas 2D API, doing this in WebGL requires you to triangulate the path. WebGL only draws points (squares), lines, and triangles; everything else (circles, paths, 3D models) is up to you to build creatively from those 3 primitives.
In your case you need to turn your path into a set of triangles. There are tons of algorithms to do that. Each one has tradeoffs: some only handle convex paths, some don't handle holes, some add more points in the middle and some don't, and some are faster than others. There are also libraries that do it, like this one for example.
It's kind of a big topic, arguably too big to go into detail here. Other SO questions about it already have answers.
Once you do have the path turned into triangles then it's pretty straightforward to pass those triangles into WebGL and have them drawn.
Plenty of answers on SO already cover that as well. Examples:
Drawing parametric shapes in webGL (without three.js)
Or you might prefer some tutorials
There is a simple triangulation (mesh generation) for your case. First sort all your vertices into CCW order. Then calculate the middle point of all the vertices. Then iterate over your sorted vertices, and push a triangle made of the middle point, the point at vertices[index], and the point at vertices[index+1] (wrapping around to the first vertex at the end) to the mesh.
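A minimal sketch of that fan construction (plain Python). Note that fanning from the centroid only works when the path is convex, or at least star-shaped as seen from the centroid; otherwise some fan triangles will fall outside the path.

```python
import math

def fan_triangulate(points):
    """Triangulate a convex (or centroid-star-shaped) 2D path by
    fanning from the centroid, as described above.
    Returns a list of triangles, each a tuple of three points."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    # Sort vertices counter-clockwise around the centroid.
    ordered = sorted(points, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))
    center = (cx, cy)
    return [(center, ordered[i], ordered[(i + 1) % len(ordered)])
            for i in range(len(ordered))]

# Example: a unit square becomes four triangles around its center.
tris = fan_triangulate([(0, 0), (1, 0), (1, 1), (0, 1)])
print(len(tris))  # 4
```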

How to write a program to draw a tiled picture like this?

I want to write a program to draw a picture that covers a plane with tiled irregular quadrangles, just like the example image.
However, I don't know the relevant algorithms; for example, in which order should I draw the edges?
Could someone point me in a direction?
Apologies, in my previous answer, I misunderstood the question.
Here is one stab at an algorithm (not necessarily the optimal way, but a way). All you need is the ability to render a polygon and a basic rotation.
If you don't want the labels to be flipped, draw them separately (the labels can be stored in the vertices, for example, and rotated with the polygon points, but drawn upright as text).
Edit
I received a question about the "start with an arbitrary polygon" step. I didn't communicate that step very clearly, as I actually intended to merely suggest an arbitrary polygon from the provided diagram, and not any arbitrary polygon in the world.
However, this should work at least for arbitrary quads, including concave ones.
I'm afraid I lack the proper background to provide a proof as to why this works, however. Perhaps more mathematically-savvy people can help there with the proof.
I think one way to tackle the proof is to start with the notion that all tiled edges are manifold; this is a given, considering that we're generating a neighboring polygon at every edge in order to generate the tiled result. Then we might be able to prove that every 2-valence boundary vertex is going to become a 4-valence vertex as a result of this operation (since each of its two edges is going to become manifold, and that introduces two new vertex edges into the mix; this seems like the hardest part to prove to me). The last step might be to prove that the sum of the angles at each 4-valence vertex always adds up to 360 degrees.
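The answer's figures aren't reproduced here, but the construction it describes (generate the neighbor across every edge by rotating the tile 180 degrees about that edge's midpoint) can be sketched like this; all names are mine:

```python
def rotate_180(poly, midpoint):
    """Rotate a polygon 180 degrees about a point: p -> 2*m - p."""
    mx, my = midpoint
    return tuple((2 * mx - x, 2 * my - y) for x, y in poly)

def tile(quad, generations=3):
    """Grow a tiling from one quad by reflecting every tile through
    each of its edge midpoints, breadth-first, deduplicating tiles
    by their (canonically sorted) point sets.
    Works for arbitrary quads, including concave ones."""
    seen = {tuple(sorted(quad))}
    frontier, tiles = [tuple(quad)], [tuple(quad)]
    for _ in range(generations):
        next_frontier = []
        for poly in frontier:
            for i in range(len(poly)):
                a, b = poly[i], poly[(i + 1) % len(poly)]
                mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
                neighbor = rotate_180(poly, mid)
                key = tuple(sorted(neighbor))
                if key not in seen:
                    seen.add(key)
                    tiles.append(neighbor)
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return tiles

# Three generations of tiles grown from one concave quad.
print(len(tile(((0, 0), (4, 0), (3, 1), (4, 3)), generations=3)))
```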

Detect when 2 moving objects in a 2D plane are close

Imagine we have a 2D sky (10000x10000 coordinates). Anywhere in this sky we can have an aircraft, identified by its position (x, y). Any aircraft can start moving to other coordinates (in a straight line).
There is a single component that manages all this positioning and movement. When an aircraft wants to move, it sends the component a message in the form (start_pos, speed, end_pos). How can the component tell when one aircraft will move into the line of sight of another (each aircraft has a radius of sight as a property), so that it can notify it? Note that many aircraft can be moving at the same time. Also, the algorithm should be efficient enough to handle ~1000 planes.
If some constraint is limiting your solution, it can probably be removed; the problem is not fixed.
Use a line to represent the flight path.
Convert each line to a rectangle embracing it. The width of the rectangle is determined by your definition of "close" (The bigger the safety distance is, the wider the rectangle should be).
For each new flight plan:
Check if the new rectangle intersects with another rectangle.
If so, calculate when will each plane reach the collision point. If the time difference is too small (and you should define too small according to the scenario), refuse the new flight plan.
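A minimal sketch of that corridor test using the shapely Python library. The parameter names and default values are mine, and the "time to reach the collision point" is approximated by the distance along each path to the centroid of the corridor overlap.

```python
from shapely.geometry import LineString

def paths_conflict(start_a, end_a, speed_a, start_b, end_b, speed_b,
                   safety_dist=50.0, min_time_gap=30.0):
    """Widen each flight path into a flat-capped corridor; if the
    corridors overlap, compare the times at which each plane reaches
    the overlap region, and flag a conflict if they are too close."""
    path_a = LineString([start_a, end_a])
    path_b = LineString([start_b, end_b])
    overlap = path_a.buffer(safety_dist, cap_style=2).intersection(
        path_b.buffer(safety_dist, cap_style=2))
    if overlap.is_empty:
        return False
    # Time for each plane to reach the (center of the) overlap region.
    p = overlap.centroid
    t_a = path_a.project(p) / speed_a
    t_b = path_b.project(p) / speed_b
    return abs(t_a - t_b) < min_time_gap

print(paths_conflict((0, 0), (1000, 1000), 5.0,
                     (1000, 0), (0, 1000), 5.0))  # crossing paths -> True
```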
If you want to deal with the temporal aspect (i.e. with the fact that the aircraft move), then I think a potential simplification is lifting the problem by the time dimension (adding one more dimension; hence the original 2D problem becomes a 3D one).
The problem then becomes a matter of finding the point where a line intersects a (tilted) cylinder. Finding all possible intersections would then be O(n^2); I'm not sure whether that is efficient enough.
See Wikipedia:Quadtree for a data structure that will make it easy to find which airplanes are close to a given airplane. It will save you from doing O(N^2) tests for closeness.
You have good answers already; I'll comment on only one aspect, and possibly not correctly.
You say that your aircraft move according to (start_pos, speed, end_pos) messages.
If all aircraft have such, let's call them, flight plans, then you should be able to calculate directly when and where they will be within a certain distance of each other, when they will be at their closest point, and whether they will collide or get too near.
So, if they indeed move according to their flight plans and do not deviate from them, your problem is deterministic: it boils down to solving a set of equations, which for ~1000 planes is not such a big task.
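Concretely, for one pair of planes with positions $p_a, p_b$ and constant velocities $v_a, v_b$ (my notation), write $\Delta p = p_b - p_a$ and $\Delta v = v_b - v_a$. The squared distance over time is

$$d^2(t) = \lVert \Delta p + t\,\Delta v \rVert^2,$$

which is minimized at

$$t^* = -\frac{\Delta p \cdot \Delta v}{\lVert \Delta v \rVert^2},$$

so the pair gets within distance $r$ iff $d^2(t^*) \le r^2$, after clamping $t^*$ to the interval where both flight plans are active.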
If you do need to solve these equations faster, you can employ the techniques described in the other answers:
using efficient structures that speed up distance calculations (quadtrees, octrees, k-d trees),
splitting the problem and solving the equations only for some relevant future time slice,
prioritizing the pairs for which the distance changes most rapidly.
Of course, converting time to a third dimension turns the aircraft from points into lines, and you end up searching for the closest points between two 3D lines (here's some math).
I actually found an answer to this question.
It is in the book Real-Time Collision Detection, p. 223, where it is also better named: Intersecting Moving Sphere Against Sphere (a 2D sphere being a circle). It's not so simple (and I may also violate some rights) to explain it here, but the basic idea is to fix one of the circles as a point, adding its radius to the radius of the moving one. The moving one's new velocity is the relative velocity, i.e. its original vector minus the fixed one's.
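A sketch of that reduction in plain Python (all names are mine): with circle B held fixed and its radius added to A's, A's center traces a ray with the relative velocity, and the first contact time solves a quadratic.

```python
import math

def moving_circles_meet(pa, va, ra, pb, vb, rb, t_max=1.0):
    """Fix circle B, give A the relative velocity va - vb, and grow
    B's radius to ra + rb; then solve |dp + t*dv|^2 = r^2 for the
    first t in [0, t_max], or return None if they never meet."""
    dp = (pa[0] - pb[0], pa[1] - pb[1])
    dv = (va[0] - vb[0], va[1] - vb[1])
    r = ra + rb
    a = dv[0] ** 2 + dv[1] ** 2
    b = dp[0] * dv[0] + dp[1] * dv[1]
    c = dp[0] ** 2 + dp[1] ** 2 - r * r
    if c < 0:
        return 0.0  # already overlapping
    if a == 0 or b >= 0:
        return None  # not moving toward each other
    disc = b * b - a * c
    if disc < 0:
        return None  # closest approach is still farther than r
    t = (-b - math.sqrt(disc)) / a
    return t if 0 <= t <= t_max else None

# Head-on: the gap of 8 (centers 10 apart, radii 1+1) closes at
# relative speed 2, so they meet at t = 4.
print(moving_circles_meet((0, 0), (1, 0), 1.0, (10, 0), (-1, 0), 1.0, t_max=10))
```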

Is a closed polygonal mesh flipped?

I have a 3D modeling application. Right now I'm drawing the meshes double-sided, but I'd like to switch to single-sided when the object is closed.
If the polygonal mesh is closed (no boundary edges/completely periodic), it seems like I should always be able to determine if the object is currently flipped, and automatically correct.
Being flipped means that my normals point into the object instead of out of the object. Being flipped is a result of a mismatch between my winding rules and the current frontface setting, but I compute the normals directly from the geometry, so looking at the normals is a simple way to detect it.
One thing I was thinking was to take the bounding box, find the highest point, and see if its normal points up or down - if it's down, then the object is flipped.
But it seems like this solution might be prone to errors with degenerate geometry, or floating point error, as I'd only be looking at a single point. I guess I could get all 6 axis-aligned extents, but that seems like a slightly better kludge, and not a proper solution.
Is there a robust, simple way to do this? Robust and hard would also work. :)
This is a robust, but slow way to get there:
Take a corner of the bounding box, offset away from the centroid (so it is guaranteed to be outside your closed polygonal mesh), then create a line segment from that point to the center of any triangle on your mesh.
Measure the angle between that line segment and the normal of the triangle.
Intersect that line segment with each triangle face of your mesh (including the tri you used to generate the segment).
If there are an odd number of intersections, the segment reaches that triangle from the outside, so a correct outward-pointing normal should make an angle of more than 90 degrees with the segment. If there are an even number, the segment arrives from the inside, and the expected angle is less than 90 degrees. If the measured angles come out the other way around, the mesh is flipped.
This should work for very complex surfaces, but they must be closed, or it breaks down.
"I'm drawing the meshes double-sided"
Why are you doing that? If you're using OpenGL, then there is a much better way to go that saves you all the work. Use:
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, 1);
With this, all the polygons are always two-sided.
The only reason you would want one-sided lighting is if you have an open or partially inverted mesh and you want to indicate which parts belong to the inside by leaving them unlit.
Generally, the problem you're posing is an open problem in geometry processing and AFAIK there is no sure-fire general way that can always determine the orientation. As you suggest, there are heuristics that work almost always.
Another approach is reminiscent of a famous point-in-polygon algorithm: choose a vertex on the mesh and shoot a ray from it in the direction of its normal. If the ray hits an even number of faces, the normal points to the outside; if odd, it points to the inside. Take care not to count the point of origin as an intersection. This approach works only if the mesh is a closed manifold, and it can hit edge cases if the ray passes exactly between two polygons, so you might want to repeat it a number of times and take the majority vote.
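A sketch of that ray-parity test (NumPy; the names are mine, and the Moller-Trumbore intersection routine is a standard one, not from the answer). As noted above, it assumes a closed manifold mesh and a ray in general position; repeat with jittered rays and vote if a ray grazes an edge.

```python
import numpy as np

def ray_triangle(orig, d, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection; returns t or None."""
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(d, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:
        return None  # ray parallel to triangle plane
    f = 1.0 / a
    s = orig - v0
    u = f * np.dot(s, h)
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = f * np.dot(d, q)
    if v < 0 or u + v > 1:
        return None
    t = f * np.dot(e2, q)
    return t if t > eps else None

def normals_point_outward(verts, tris, vertex_normal, vertex_index):
    """Parity test described above: shoot a ray from a mesh vertex
    along its normal and count the faces it crosses.
    Even -> the normal points outside; odd -> it points inside."""
    # Step slightly off the surface so the originating faces and the
    # ray origin itself are not counted as intersections.
    orig = verts[vertex_index] + 1e-6 * vertex_normal
    hits = sum(
        1 for tri in tris
        if ray_triangle(orig, vertex_normal, *verts[tri]) is not None)
    return hits % 2 == 0
```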
