I'm not sure if this best belongs here or in math but I figure I can get some pointers here about the code as well.
For an assignment I need to solve convex Tangram puzzles using Prolog.
All puzzles and available pieces are defined as lists of vertices. For example:
puzzle(1,[(0,0),(4,0),(4,4),(0,4)]) represents a square puzzle and piece(1,[(0,0),(4,0),(2,2)]) could be one of the large triangles.
I already have defined all 7 pieces with an id and a list of points and I think I should be able to write the proper code to iterate through these pieces and perform some operations on them. However, I'm not that insightful when it comes to geometry so I have no clue how I could determine which piece fits where on a puzzle simply based on its vertices.
Most of the assignments in this course are based on classic combinatorial problems such as Travelling Salesman. Are there any such problems involving convex shapes (or any kind of shape) that might inspire me to come up with a solution? I'm having a hard time finding online examples of declarative code that deals with shapes in this way. It would be very helpful if I knew what to look for.
I figure I can verify a solution is correct by checking if the outer borders of the puzzle are covered once and the inner ones (resulting from placing pieces) are covered twice. I could probably use this fact as a base case for some part of my solution. Other than that the best I can think of at the moment is brute forcing every piece into some unoccupied space between the borders of the puzzle till they fit.
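To make that verification idea concrete, here is a sketch in Python, assuming pieces have already been rasterized into unit grid cells (a real tangram also needs half-square triangles; all names here are mine, and the Prolog version would be a transliteration of the same idea):

```python
from collections import Counter

def unit_edges(cells):
    """All unit edges of a collection of unit cells, with multiplicity.
    A cell is its lower-left corner (x, y); each edge is a pair of
    endpoints, so an edge shared by two cells is counted twice."""
    edges = []
    for x, y in cells:
        edges += [((x, y), (x + 1, y)), ((x, y + 1), (x + 1, y + 1)),
                  ((x, y), (x, y + 1)), ((x + 1, y), (x + 1, y + 1))]
    return Counter(edges)

def is_tiling(puzzle_cells, placed_pieces):
    """True iff the pieces cover the puzzle exactly: border edges are
    covered once and interior edges twice, as proposed above."""
    placed = Counter()
    for piece in placed_pieces:
        placed += unit_edges(piece)
    return placed == unit_edges(puzzle_cells)
```

For a 2x2 puzzle, two 2x1 dominoes pass the check, while two overlapping pieces fail it.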
Do you have to solve the problem with pure Prolog, or can you use Constraint Programming as well?
If you can use CP, then take a look at this paper: Perspectives on Logic-based Approaches for Reasoning About Actions and Change. Section 6 describes how the authors solved tangram with CLP(FD).
Maybe the paper gives you an idea how to solve it even if you have to use pure Prolog, since constraints can be replaced by passive tests. The search will then take longer, though, since the search tree won't be pruned by the constraints.
I also remember that someone in a CLP course I took long ago used Gröbner bases to reason about geometry ("how to move a piano around a tight corner?"), although I'm not sure whether that would be applicable for solving tangrams.
I'm sorry if that's all a bit theoretical and advanced.
I think the key to solving this problem is detecting overlap between pieces. By definition, if no overlap occurs, every admissible placement is a solution. So, while iterating over piece placements, we need to detect whether any overlap occurs.
Each shape can be represented as the union of the smallest triangles that result from subdividing the unit grid. That gives a total of 100 (4*5*5) small triangles.
Overlap can then easily be detected by intersection, once we have a proper translation from a list of coordinates to a list of small triangles.
For instance, numbering in ascending coords and clockwise, the piece(1, [(0,0), (1,1), (2,0)]) becomes [2, 3, 4, 7].
Rotating a shape 90° clockwise around the origin is easy, once we note that for each rotation X'=Y and Y'=-X. The piece above, rotated 90° clockwise, becomes piece(1, [(0,0), (1,-1), (0,-2)]), or, when normalized on Y, piece(1, [(0,2), (1,1), (0,0)]).
Determining which small triangles cover a shape can be done naively by repeating a 'point in polygon' test for each small triangle.
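The rotation and normalization steps above are short enough to show directly. A sketch in Python (names are mine; porting this to Prolog is straightforward):

```python
def rotate_cw(points):
    """Rotate vertices 90 degrees clockwise about the origin: X' = Y, Y' = -X."""
    return [(y, -x) for x, y in points]

def normalize(points):
    """Translate so the smallest x and y coordinates become 0."""
    min_x = min(x for x, _ in points)
    min_y = min(y for _, y in points)
    return [(x - min_x, y - min_y) for x, y in points]

piece = [(0, 0), (1, 1), (2, 0)]
# rotate_cw(piece) gives [(0, 0), (1, -1), (0, -2)];
# normalizing yields [(0, 2), (1, 1), (0, 0)], matching the example above.
rotated = normalize(rotate_cw(piece))
```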
Note: I'm very much from a programming background, not a mathematics background. This will become obvious very quickly.
Assume I have a bounded n-dimensional space - for example here I'll use n=2. In that space, I have a set of pre-defined points. (As it happens, I'm doing something questionable with genetic algorithms for finding minima in non-mathematically-solvable equations, but that's not directly relevant).
Next, I have defined a new point within that 2D space. I want to know which three (n+1) points form the smallest (or possibly nearest?) triangle that contains that point. Illustration here:
Now, as that illustration shows, I'm not entirely sure what I'm doing, in that I've failed to adequately describe the criteria by which the candidate triangles are judged - points 10, 5, and 8 would enclose the point within a triangle of smaller area, for example. This is because those specifics are somewhat flexible. What I really care about is:
Computational efficiency: I could potentially end up scaling this algorithm up to thousands of points in a hundred dimensions. As such, I need a solution that's better than exhaustively testing every potential convex hull against a specific equation, and ideally one with decent big-O complexity. Presumably I'm going to need a more intelligent structure than a big unordered list to do this.
If I had a preferred criterion, it would be proximity of the hull vertices to the test point. However, if it's easier to build an algorithm that judges by area/volume or something else, I can live with that.
I need to be able to handle edge-cases, where the test-point is, for example, on the edge between two vertices.
Where do I even start with this? Some cursory googling suggests Voronoi diagrams might be a step in the right direction, is that correct? What are the right tools for this job? Any help would be greatly appreciated.
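For what it's worth, the exhaustive baseline the question wants to beat fits in a few lines and makes the edge case explicit. A Python sketch (brute force over all vertex triples, smallest area wins; in 2D the standard improvement would be a Delaunay triangulation with point location, e.g. scipy.spatial.Delaunay.find_simplex; all names here are mine):

```python
from itertools import combinations

def cross(p, a, b):
    """Signed-area test telling which side of the directed line a->b the
    point p lies on (zero means collinear)."""
    return (p[0] - a[0]) * (b[1] - a[1]) - (b[0] - a[0]) * (p[1] - a[1])

def in_triangle(p, a, b, c):
    """Point-in-triangle via half-plane signs; points exactly on an
    edge count as inside, which covers the question's edge case."""
    d1, d2, d3 = cross(p, a, b), cross(p, b, c), cross(p, c, a)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def area2(a, b, c):
    """Twice the (unsigned) triangle area."""
    return abs(cross(c, a, b))

def smallest_enclosing_triangle(points, p):
    """Exhaustive baseline: try every vertex triple, keep the containing
    triangle of least area. O(n^3) candidates; this is the thing to beat."""
    best = None
    for a, b, c in combinations(points, 3):
        if area2(a, b, c) == 0:
            continue  # skip collinear (degenerate) triples
        if in_triangle(p, a, b, c) and (best is None
                                        or area2(a, b, c) < area2(*best)):
            best = (a, b, c)
    return best
```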
I am certain there is already some algorithm that does what I need, but I am not sure what phrase to Google, or what is the algorithm category.
Here is my problem: I have a polyhedron made up of several contacting blocks (hyperslabs), i.e. the edges are axis-aligned and the angles between edges are 90°. There may be holes inside the polyhedron.
I want to break up this concave polyhedron into as few convex, rectangular, axis-aligned whole blocks as possible (if the original polyhedron is convex and has no holes, it is already such a block, and therefore the solution). To illustrate, here are some 2-D images I made (but I need the solution in 3-D and, preferably, N-D):
I have this geometry:
One possible breakup into blocks is this:
But the one I want is this (with as few blocks as possible):
I have the impression that an exact algorithm may be too expensive (is this problem NP-hard?), so an approximate algorithm is suitable.
One detail that may make the problem easier, in that there could be a more appropriate or specialized algorithm for it, is that all edge lengths are multiples of some fixed value (you may think of all edge lengths as integers, or of the geometry as being made up of uniform tiny squares, or voxels).
Background: this is the structured grid discretization of a PDE domain.
What algorithm can solve this problem? What class of algorithms should I search for?
Update: Before you upvote this answer, I want to point out that it is slightly off-topic. The original poster has a question about the decomposition of a polyhedron whose faces are axis-aligned: given such a polyhedron, decompose it into convex parts, in 3D and possibly nD. My answer is about the decomposition of a general polyhedron. So when I point to a given implementation, it applies to the special case of an axis-aligned polyhedron, but a better implementation might exist for that case. And when my answer says that a problem for generic polyhedra is NP-complete, a polynomial solution might still exist for the axis-aligned special case. I do not know.
Now here is my (slightly off-topic) answer, below the horizontal rule...
The CGAL C++ library has an algorithm that, given a 2D polygon, computes the optimal convex decomposition of that polygon. The method is described in the 2D Polygon Partitioning part of the manual and is named CGAL::optimal_convex_partition_2. I quote the manual:
This function provides an implementation of Greene's dynamic programming algorithm for optimal partitioning [2]. This algorithm requires O(n^4) time and O(n^3) space in the worst case.
In the bibliography of that CGAL chapter, the article [2] is:
[2] Daniel H. Greene. The decomposition of polygons into convex parts. In Franco P. Preparata, editor, Computational Geometry, volume 1 of Adv. Comput. Res., pages 235–259. JAI Press, Greenwich, Conn., 1983.
It seems to be exactly what you are looking for.
Note that the same chapter of the CGAL manual also mentions an approximation algorithm, hence not optimal, that runs in O(n): CGAL::approx_convex_partition_2.
Edit, about the 3D case:
In 3D, CGAL has another chapter about Convex Decomposition of Polyhedra. The second paragraph of the chapter says "this problem is known to be NP-hard [1]". The reference [1] is:
[1] Bernard Chazelle. Convex partitions of polyhedra: a lower bound and worst-case optimal algorithm. SIAM J. Comput., 13:488–507, 1984.
CGAL has a method CGAL::convex_decomposition_3 that computes a non-optimal decomposition.
I have the feeling your problem is NP-hard. I suggest a first step might be to break the figure into sub-rectangles along all hyperplanes. So in your example there would be three hyperplanes (lines) and four resulting rectangles. Then the problem becomes one of recombining rectangles into larger rectangles to minimize the final number of rectangles. Maybe 0-1 integer programming?
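The proposed first step (cutting along every hyperplane) is mechanical in 2-D. A sketch, with rectangles given as (x0, y0, x1, y1) tuples (the representation and names are mine):

```python
def split_into_cells(rects):
    """Step one from the answer: cut the union of axis-aligned rectangles
    along every bounding line (hyperplane), returning the resulting
    sub-rectangles. Each rectangle is an (x0, y0, x1, y1) tuple."""
    xs = sorted({v for x0, _, x1, _ in rects for v in (x0, x1)})
    ys = sorted({v for _, y0, _, y1 in rects for v in (y0, y1)})
    cells = []
    for x0, x1 in zip(xs, xs[1:]):
        for y0, y1 in zip(ys, ys[1:]):
            # keep a grid cell only if some input rectangle covers it
            if any(rx0 <= x0 and x1 <= rx1 and ry0 <= y0 and y1 <= ry1
                   for rx0, ry0, rx1, ry1 in rects):
                cells.append((x0, y0, x1, y1))
    return cells
```

Recombining these cells into as few rectangles as possible is then the actual optimization, e.g. as a 0-1 integer program.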
I think dynamic programming might be your friend.
The first step I see is to divide the polyhedron into a trivial collection of blocks such that every possible face is available (i.e. slice and dice it into the smallest pieces possible). This should be trivial because everything is an axis aligned box, so k-tree like solutions should be sufficient.
This seems reasonable because I can look at its cost. The cost of doing this is that I "forget" the original configuration of hyperslabs, choosing to replace it with a new set of hyperslabs. The only way this could lead me astray is if the original configuration had something to offer for the solution. Given that you want an "optimal" solution for all configurations, we have to assume that the original structure isn't very helpful. I don't know if it can be proven that this original information is useless, but I'm going to make that assumption in this answer.
The problem has now been reduced to a graph problem similar to a constrained spanning forest problem. I think the most natural way to view the problem is to think of it as a graph coloring problem (as long as you can avoid confusing it with the more famous graph coloring problem of trying to color a map without two states of the same color sharing a border). I have a graph of nodes (small blocks), each of which I wish to assign a color (which will eventually be the "hyperslab" which covers that block). I have the constraint that I must assign colors in hyperslab shapes.
Now a key observation is that not all possibilities must be considered. Take the final colored graph we want to see. We can partition this graph in any way we please by breaking any hyperslab which crosses the partition into two pieces. However, not every partition is meaningful. The only partitions that make sense are axis aligned cuts, which always break a hyperslab into two hyperslabs (as opposed to any more complicated shape which could occur if the cut was not axis aligned).
Now this cutting is the reverse of the problem we're really trying to solve; it is actually the thing we did in the first step, whereas what we want is the optimal merging, undoing those cuts. However, it shows a key feature we will use in dynamic programming: the only features that matter for merging are on the exposed surface of a cut. Once we find the optimal way of forming a central region, its interior plays no further part in the algorithm.
So let's start by building a collection of hyperslab-spaces, which can define not just a plain hyperslab, but any configuration of hyperslabs such as those with holes. Each hyperslab-space records:
The number of leaf hyperslabs contained within it (this is the number we are eventually going to try to minimize)
The internal configuration of hyperslabs.
A map of the surface of the hyperslab-space, which can be used for merging.
We then define a "merge" rule to turn two or more adjacent hyperslab-spaces into one:
Hyperslab-spaces may only be combined into new hyperslab-spaces (so you need to combine enough pieces to create a new hyperslab, not some more exotic shape)
Merges are done simply by comparing the surfaces. If there are features with matching dimensionalities, they are merged (because it is trivial to show that, if the features match, it is always better to merge hyperslabs than not to)
Now this is enough to solve the problem with brute force, though the cost of doing so will certainly be exponential. However, we can add an additional rule which will drop this cost dramatically: "One hyperslab-space is deemed 'better' than another if they cover the same space, and have exactly the same features on their surface. In this case, the one with fewer hyperslabs inside it is the better choice."
Now the idea here is that, early on in the algorithm, you will have to keep track of all sorts of combinations, just in case they are the most useful. However, as the merging algorithm makes things bigger and bigger, it will become less likely that internal details will be exposed on the surface of the hyperslab-space. Consider
+===+===+===+---+---+---+---+
| : : A | X : : : :
+---+---+---+---+---+---+---+
| : : B | Y : : : :
+---+---+---+---+---+---+---+
| : : | : : : :
+===+===+===+ +---+---+---+
Take a look at the left side box, which I have taken the liberty of marking in stronger lines. When it comes to merging boxes with the rest of the world, the AB:XY surface is all that matters. As such, there are only a handful of merge patterns which can occur at this surface
No merges possible
A:X allows merging, but B:Y does not
B:Y allows merging, but A:X does not
Both A:X and B:Y allow merging (two independent merges)
We can merge a larger square, AB:XY
There are many ways to cover the 3x3 square (at least a few dozen). However, we only need to remember the best way to achieve each of those merge processes. Thus once we reach this point in the dynamic programming, we can forget about all of the other combinations that can occur, and only focus on the best way to achieve each set of surface features.
In fact, this sets up the problem for an easy greedy algorithm which explores whichever merges provide the best promise for decreasing the number of hyperslabs, always remembering the best way to achieve a given set of surface features. When the algorithm is done merging, whatever that final hyperslab-space contains is the optimal layout.
I don't know if it is provable, but my gut instinct thinks that this will be an O(n^d) algorithm where d is the number of dimensions. I think the worst case solution for this would be a collection of hyperslabs which, when put together, forms one big hyperslab. In this case, I believe the algorithm will eventually work its way into the reverse of a k-tree algorithm. Again, no proof is given... it's just my gut instinct.
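For contrast with the dynamic programming described above, here is the merge idea in its crudest greedy form on a 2-D boolean grid (no optimality claim; all names are mine):

```python
def greedy_blocks(grid):
    """Greedily merge True cells of a 2-D grid into axis-aligned blocks:
    grow each block rightward, then downward, then mark it used.
    Returns blocks as (row, col, height, width) tuples."""
    rows, cols = len(grid), len(grid[0])
    used = [[False] * cols for _ in range(rows)]
    blocks = []
    for r in range(rows):
        for c in range(cols):
            if not grid[r][c] or used[r][c]:
                continue
            w = 1  # extend the block to the right
            while c + w < cols and grid[r][c + w] and not used[r][c + w]:
                w += 1
            h = 1  # extend downward while a full row of width w is free
            while r + h < rows and all(
                    grid[r + h][c + i] and not used[r + h][c + i]
                    for i in range(w)):
                h += 1
            for dr in range(h):
                for dc in range(w):
                    used[r + dr][c + dc] = True
            blocks.append((r, c, h, w))
    return blocks
```

Greedy choices like this can be suboptimal, which is exactly what the surface-feature bookkeeping described above is meant to improve on.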
You can try a constrained Delaunay triangulation. It produces very few triangles.
Are you able to determine the equations for each line?
If so, maybe you can compute the intersection points between those lines. Then, if you take one axis and sweep along it looking for a value shared by more than two points, you should "draw" a line there. (At the beginning of the sweep there will be zero points, then two (your first pair), and when you find more than two points you will be able to determine which points belong to the first polygon and which to the second.)
Eg, if you have those lines:
verticals (red):
x = 0, x = 2, x = 5
horizontals (yellow):
y = 0, y = 2, y = 3, y = 5
and you start to sweep along the X axis, you will first get p1 and p2 (and we know which line equation each belongs to), then p3, p4, p5 and p6. Here you can check which of those points share a line with p1 and p2; in this case, p4 and p5. So your first new polygon is p1, p2, p4, p5.
Now we save the 'new' pair of points (p3, p6) and continue the sweep until the next points. Here we get p7, p8, p9 and p10; looking for the points that share a line with the previous points (p3 and p6), we get p7 and p10. Those are the points of your second polygon.
When we repeat the exercise for the Y axis, we first get two points (p3, p7) and then just three (p1, p2, p8). In this case we should use the farthest point (p8) on the same line as the newly discovered point.
Since we are only using line equations and points, the procedure should be very similar in 2 or more dimensions.
ps, sorry for my English :S
I hope this helps :)
I have a connected shape that consists of squares put together, e.g. take a squared paper and draw a line along the existing lines that ends at its beginning and does not cross itself.
The goal is now to find an algorithm (not brute-force) that fills this shape with as few, non-overlapping rectangles as possible.
I'm looking for the optimal solution. As can be seen in the images, the naive greedy approach (take the largest rectangle) does not work.
(Optimal)
(Greedy)
My scenario is vertices reduction, but I'm sure there are other use-cases as well.
Note: This problem seems basic, but I was not able to find a solution elsewhere. Also, is this problem NP-hard?
Edit: I just realized that, in my scenario, filling the shape with as few non-overlapping triangles as possible, would give an even better result.
I've spent a lot of time researching this since I asked the initial question. For the first problem (optimally filling the shape with rectangles), I've written up the solution here under the header "Optimal Greedy Meshing":
http://blackflux.wordpress.com/2014/03/01/meshing-in-voxel-engines-part-2/
The complexity is actually better (faster) than for optimally triangulating a polygon without holes. The slowest part is the Hopcroft-Karp algorithm.
Treating the shape as a polygon is also discussed in the linked blog post. Note that I'm also considering holes.
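The matching subroutine referred to above is small enough to sketch. This is Kuhn's simpler augmenting-path algorithm rather than Hopcroft-Karp itself (same matching, worse asymptotics); in the optimal rectangle partition, the two vertex sets are the horizontal and vertical chords between concave corners, with an edge wherever two chords cross, and König's theorem turns the matching into the maximum independent set of non-crossing cuts:

```python
def max_bipartite_matching(adj, n_right):
    """Maximum bipartite matching by repeated augmenting paths (Kuhn's
    algorithm, O(V*E)); Hopcroft-Karp computes the same matching in
    O(E*sqrt(V)). adj[u] lists the right-side neighbours of left
    vertex u."""
    match_right = [-1] * n_right  # right vertex -> matched left vertex

    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                # v is free, or its current partner can be re-routed
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj)))
```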
The first problem is harder than the one with triangles; for triangles, see the algorithms in
http://en.wikipedia.org/wiki/Polygon_triangulation
which can do it without any extra vertices.
I have successfully implemented the marching cubes algorithm. I used the standard materials as a reference, but I rewrote it entirely from scratch. It works, but I am observing the ambiguities that lead to holes in the mesh.
I was considering the marching tetrahedrons algorithm, which supposedly does not suffer from ambiguities. I fail to see how this is possible.
The marching tetrahedrons algorithm uses six tetrahedrons in place of a cube, with a triangulation for each tetrahedron. But suppose I were to implement the marching cubes algorithm, and for each of the 256 configurations simply choose the triangulation that is the "sum" (union) of the cube's tetrahedrons' triangulations? As far as I know, this is what marching tetrahedrons does, so why does that magically fix the ambiguities?
There are 16 unique cases, I think, and the other 240 are just reflections/rotations of those 16. I remember reading in some paper that resolving the ambiguities requires 33 cases. Could this be related to why marching tetrahedrons somehow doesn't suffer from these problems?
So, questions:
Why does marching tetrahedrons not suffer from ambiguities?
If it doesn't, why don't people just use the marching cubes algorithm, but with the tetrahedrons' triangulations instead?
I feel like I'm missing something here. Thanks.
Okay, well, I've just finished implementing my version of marching tetrahedrons, and while I easily saw ambiguities lead to problems in the marching cubes mesh, the marching tetrahedrons mesh seems to be consistently topologically correct. There are some annoying features along very thin regions where some vertices can't quite decide which side of the divide they want to be on, but the mesh is always watertight.
In answer to my questions:
To resolve ambiguities in the marching cubes algorithm, as far as I can tell, one evaluates the function more carefully in the cell. In the tetrahedrons algorithm, one explicitly samples the center of the cell and polygonizes to that. I suspect that because the tetrahedral mesh includes this vertex in particular, ambiguities are implicitly handled. The other extra vertices on the side probably also have something to do with it. As a key point, the function is actually being sampled in more places when you go to refine it.
I'm pretty sure they do. My marching tetrahedrons algorithm does just that, and I think that, internally, it's doing the same thing as the classic marching tetrahedrons algorithm. In my implementation, the tetrahedrons' triangles are all listed for each possible cube, which I suspect makes it faster than figuring out the one or two triangles for each individual tetrahedron separately.
If I had the time and attention span (neither of which I do), it might be beneficial to remesh the inside of each cube to use fewer triangles, which I think couldn't hurt.
To answer the question of why the Marching Tetrahedrons algorithm does not suffer from ambiguities, it is first necessary to understand why the ambiguities arise in Marching Cubes.
Ambiguities may occur when a cube has two diagonally opposite "positive" vertices and two diagonally opposite "negative" vertices. It took me some time to wrap my mind around it, but the problem with ambiguities is that they theoretically allow the creation of isosurface patches for adjacent cubes that are incompatible with one another. That's the obvious part. The interesting part is that two adjacent isosurface patches from two ambiguous configurations are incompatible if (and only if) one of them separates the "negative" vertices and the other one separates the "positive" vertices.
Here is a relevant quote from Rephael Wenger's great book "Isosurfaces: Geometry, Topology, and Algorithms" (I can't post more than 2 links, so I've merged all relevant images from the book into a single one):
The border of a cube’s three-dimensional isosurface patch defines an
isocontour on each of the cube’s square facets. If some configuration’s
isosurface patch separates the negative vertices on the facet while an
adjacent configuration’s isosurface patch separates the positive ones,
then the isosurface edges on the common facet will not align. The
isosurface patches in Figure 2.16 do not separate the positive
vertices on any facet. Moreover, the derived isosurface
patches in any rotation or reflection of the configurations also do not
separate positive vertices on any facet. Thus the isosurface patches
in any two adjacent cubes are properly aligned on their boundaries. An
equally valid, but combinatorially distinct, isosurface table could be
generated by using isosurface patches that do not separate the
negative vertices on any square facet.
What this means is that if all used ambiguous configurations follow the same pattern (i.e. always separate "negative" vertices), then it is impossible to produce a topologically incorrect surface. And problems will arise if you use configurations "from both worlds" for a single isosurface.
The surface that was constructed using the same ambiguity-resolution pattern may still contain unwanted errors like this (taken from the article "Efficient Implementation of Marching Cubes' Cases with Topological Guarantees" by Thomas Lewiner, Helio Lopes, Antonio Wilson Vieira and Geovan Tavares), but it will be, as you said, watertight.
To achieve this, you will need to use a look-up table based on the 22 unique configurations (not the standard 14 or 15) shown in Figure 2.16.
Now, back to the original question at last: why does Marching Tetrahedrons not suffer from ambiguities? For the same reason there would be no ambiguities in Marching Cubes if done as described above: because you arbitrarily choose to use one of the two available variants of resolving the ambiguous configurations. In Marching Cubes it is not at all obvious (at least for me; it took a lot of digging) that this is even an option, but in Marching Tetrahedrons it is done for you by the algorithm itself. Here is another quote from Rephael Wenger's book:
The regular grid cubes have ambiguous configurations while the
tetrahedral decomposition does not. What happened to the ambiguous
configurations? These configurations are resolved by the choice of
triangulation. For instance, in Figure 2.31, the first triangulation
gives an isosurface patch with two components corresponding to 2B-II
in Figure 2.22 while the second gives an isosurface patch with one
component corresponding to 2B-I.
Note how cubes are sliced into tetrahedrons in two different ways in Figure 2.31. The choice of this slicing or the other is the secret sauce that resolves the ambiguities.
One may ask: if the ambiguity problem can be resolved just by using the same pattern for all cubes, then why are there so many books and papers about more complicated solutions? Why do I need the Asymptotic Decider and all that stuff? As far as I can tell, it all comes down to what you need to achieve. If topological correctness (as in, no holes) is enough for you, then you do not need all the advanced stuff. If you want to resolve problems like those shown in the "Efficient Implementation of Marching Cubes" article mentioned above, then you need to dive deeper.
I highly recommend reading the relevant chapters of Rephael Wenger's book "Isosurfaces: Geometry, Topology, and Algorithms" to better understand the nature of these algorithms, what the problems are, where they come from, and how they can be solved.
As was pointed out by Li Xiaosheng, the basics can be better understood by carefully examining the Marching Squares algorithm first. Actually, the whole answer was laid out by Li Xiaosheng; I've just expanded the explanations a bit.
Take the following 2d example (which introduces ambiguities):
01
10
If we divide this square into two triangles, we get different results depending on the diagonal we choose to divide along.
Along the 0-0 diagonal we get the triangles (010, 010), while for the 1-1 diagonal we get the triangles (101, 101). Obviously, different decompositions of the square lead to different results. Either is correct, and the same holds for 3D cubes.
MT does not actually resolve the ambiguities, but it produces topologically consistent results by choosing the same decomposition strategy for all cubes. That is how it avoids suffering from them.
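The same point can be made concrete with a tiny marching-squares lookup. A Python sketch (edge midpoints as crossing points; the encoding is mine): the two ambiguous checkerboard cases each admit two resolutions, and passing the same flag for every cell is exactly the "same decomposition strategy" that keeps the result consistent.

```python
def square_segments(v, separate_positive=True):
    """Contour segments for one marching-squares cell. v lists the signs
    at (bottom-left, bottom-right, top-right, top-left); crossings are
    placed at edge midpoints for simplicity. The two ambiguous
    checkerboard cases are resolved by the separate_positive flag; using
    the same flag for every cell yields a consistent contour."""
    B, R, T, L = (0.5, 0), (1, 0.5), (0.5, 1), (0, 0.5)  # edge midpoints
    bl, br, tr, tl = v
    case = bl + 2 * br + 4 * tr + 8 * tl
    table = {0: [], 15: [],
             1: [(L, B)], 14: [(L, B)],
             2: [(B, R)], 13: [(B, R)],
             4: [(R, T)], 11: [(R, T)],
             8: [(T, L)], 7: [(T, L)],
             3: [(L, R)], 12: [(L, R)],
             6: [(B, T)], 9: [(B, T)]}
    if case in table:
        return table[case]
    if case == 5:   # ambiguous: bl and tr positive
        return [(L, B), (R, T)] if separate_positive else [(B, R), (T, L)]
    # case 10, ambiguous: br and tl positive
    return [(B, R), (T, L)] if separate_positive else [(L, B), (R, T)]
```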
I've been searching far and wide on the seven internets, to no avail. The closest to what I need seems to be the cutting stock problem, only in 2D (which is disappointing, since Wikipedia doesn't provide any directions on how to solve that one). Another look-alike problem would be UV unwrapping. There are solutions there, but only in the form of add-ons for various 3D software.
Cutting the long talk short - what I want is this: given a rectangle of known width and height, I have to find out how many shapes (polygons) of known sizes (which may be rotated at will) may I fit inside that rectangle.
For example, I could choose a T-shaped piece and in the same rectangle I could pack it both in an efficient way, resulting in 4 shapes per rectangle
as well as tiling them based on their bounding boxes, case in which I could only fit 3
But of course, this is only an example... and I don't think it would be much use to solving on this particular case. The only approaches I can think of right now are either like backtracking in their complexity or solve only particular cases of this problem. So... any ideas?
Anybody up for a game of Tetris (a subset of your problem)?
This is known as the packing problem. Without knowing ahead of time what kinds of shapes you are likely to face, it can be very difficult, if not impossible, to come up with an algorithm that gives the best answer. More than likely, unless your polygons are "nice" (circles, squares, equilateral triangles, etc.), you will have to settle for a heuristic that gives an approximately best solution most of the time.
One general heuristic (though far from optimal depending on the shape of the input polygon) would be to simplify the problem by drawing a rectangle around the polygon so that the rectangle would be just big enough to cover the polygon. (As an example in the diagram below we draw a red rectangle around a blue polygon.)
Once we have done this, we can take that rectangle and try to fit as many copies of it into the large rectangle as possible. This simplifies the problem into a rectangle packing problem, which is easier to solve and to wrap your head around. An example of an algorithm for this is at the following link:
An Effective Recursive Partitioning Approach for the Packing of Identical Rectangles in a Rectangle.
Now obviously this heuristic is not optimal when the polygon in question is not close to being the same shape as a rectangle, but it does give you a minimum baseline to work with especially if you don't have much knowledge of what your polygon will look like (or there is high variance in what the polygon will look like). Using this algorithm, it would fill up a large rectangle like so:
Here is the same image without the intermediate rectangles:
For the case of these T-shaped polygons, the heuristic is not the best it could be (in fact it may be almost a worst case scenario for this proposed approximation), but it would work very well for other types of polygons.
Consider what the other answer said about placing the T's into a rectangle, but instead of just leaving each one as a rectangle, set the shapes up as nested lists, using True and False to describe the shape, i.e. [[True, True, True], [False, True, False]] for your T. Then use a function to place the shapes on the grid. To optimize the results, keep a tracker that counts how many False cells of a new shape overlap with True cells already on the grid from previous shapes; the function then places each shape where the most such overlaps occur. Further modifications would be needed for higher and higher levels of optimization, but that is the general premise you are looking for.
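A minimal sketch of the nested-list representation and the placement check (the overlap-scoring tracker described above is left out; all names are mine):

```python
T_SHAPE = [[True, True, True],
           [False, True, False]]  # the nested-list encoding of the T piece

def can_place(grid, shape, top, left):
    """True if every True cell of the shape lands on a free grid cell."""
    for r, row in enumerate(shape):
        for c, filled in enumerate(row):
            if filled:
                if top + r >= len(grid) or left + c >= len(grid[0]):
                    return False  # off the board
                if grid[top + r][left + c]:
                    return False  # already occupied
    return True

def place(grid, shape, top, left):
    """Mark the shape's True cells as occupied on the grid."""
    for r, row in enumerate(shape):
        for c, filled in enumerate(row):
            if filled:
                grid[top + r][left + c] = True
```

A placement loop would try each (top, left) position, score it with the overlap tracker, and commit the best one with place().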