Nesting maximum amount of shapes on a surface - algorithm

In industry, there is often a problem where you need to calculate the most efficient use of material, be it fabric, wood, metal, etc. The starting point is a given number of shapes of known dimensions, bounded by polygons and/or curved lines, and the target is another polygon of given dimensions.
I assume many of the current CAM suites implement this, but having no experience using them or knowledge of their internals: what kind of computational algorithm is used to find the most efficient use of space? Can someone point me to a book or other reference that discusses this topic?

After Andrew pointed me in the right direction in his answer and named the problem for me, I decided to dump my research results here in a separate answer.
This is indeed a packing problem, and to be more precise, a nesting problem. The problem is mathematically NP-hard, so the algorithms currently in use are heuristic approaches. There do not seem to be any solutions that solve the problem in linear time, except for trivial problem sets. Solving complex problems takes from minutes to hours on current hardware if you want to achieve a solution with good material utilization. There are dozens of commercial software packages that offer nesting of shapes, but I was not able to locate any open-source solutions, so there are no readily available examples showing how the algorithms are actually implemented.
An excellent description of the nesting and strip-nesting problems, along with historical solutions, can be found in a paper written by Benny Kjær Nielsen of the University of Copenhagen (Nielsen).
The general approach seems to be to mix and combine multiple known algorithms in order to find the best nesting solution. These algorithms include (guided/iterated) local search, fast neighborhood search based on the no-fit polygon, and jostling heuristics. I found a great paper on this subject with illustrations of how the algorithms work, as well as benchmarks of the software implementations so far. It was presented at the International Symposium on Scheduling 2006 by S. Umetani et al. (Umetani).
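To give a feel for what the constructive part of such a heuristic looks like, here is a heavily simplified sketch of a bottom-left placement rule for axis-aligned rectangles only; real nesting engines handle arbitrary polygons (via no-fit polygons), part rotation, and a local-search layer on top. The data and function names below are illustrative, not taken from the papers.

```python
# Simplified bottom-left nesting heuristic, axis-aligned rectangles only.
# Real nesting software handles arbitrary polygons (no-fit polygons), part
# rotation, and a local-search layer on top of a constructive step like this.

def overlaps(a, b):
    """Overlap test for axis-aligned rectangles given as (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def bottom_left_pack(sizes, sheet_width):
    """Place each (w, h) rectangle at the lowest, then left-most, free spot."""
    placed = []
    for w, h in sizes:
        # Candidate coordinates: the sheet origin plus edges of placed parts.
        xs = sorted({0.0} | {px + pw for px, _, pw, _ in placed})
        ys = sorted({0.0} | {py + ph for _, py, _, ph in placed})
        spot = None
        for y in ys:
            for x in xs:
                cand = (x, y, w, h)
                if x + w <= sheet_width and not any(overlaps(cand, p) for p in placed):
                    spot = cand
                    break              # left-most feasible x at this height
            if spot is not None:
                break                  # lowest feasible height
        placed.append(spot)            # None would mean the part is too wide
    return placed                      # list of (x, y, w, h) placements

# Example: pack three parts on a strip 10 units wide.
print(bottom_left_pack([(4, 3), (6, 2), (5, 5)], sheet_width=10))
```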
A relatively new and possibly the best approach to date is based on a Hybrid Genetic Algorithm (HGA), a combination of simulated annealing and a genetic algorithm that has been described by Wu Qingming et al. of Wuhan University (Qingming). They implemented it using Visual Studio, an SQL database, and the genetic algorithm optimization toolbox (GAOT) in MATLAB.
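The paper hybridizes simulated annealing with a genetic algorithm; only the annealing half is sketched below, as a generic routine that reorders parts under a user-supplied cost function (in nesting, the cost would be the material consumed by a placement routine such as the one above). The names and the toy cost are mine, not from the paper.

```python
import math
import random

def anneal(order, cost, t0=1.0, t_end=1e-3, alpha=0.995):
    """Simulated annealing over a permutation; `cost` maps an order to a float."""
    current = list(order)
    best, best_cost = list(current), cost(current)
    t = t0
    while t > t_end:
        i, j = random.sample(range(len(current)), 2)
        neighbor = list(current)
        neighbor[i], neighbor[j] = neighbor[j], neighbor[i]   # swap two parts
        delta = cost(neighbor) - cost(current)
        # Always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature cools (this escapes local optima).
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = neighbor
            if cost(current) < best_cost:
                best, best_cost = list(current), cost(current)
        t *= alpha
    return best

# Toy usage: this cost just rewards sorted order; in nesting it would be the
# material consumed when the parts are placed in the given order.
print(anneal([3, 1, 4, 1, 5], cost=lambda o: sum(abs(a - b) for a, b in zip(o, sorted(o)))))
```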

You are referring to the well-known computer science domain of packing problems, for which a variety of problems have been defined and researched, in both two-dimensional and three-dimensional space.
There is considerable material available on the net for the defined problems, but to find it you kind of have to know the name of the problem to search for.
Some packages may well adopt a heuristic approach (which I suspect they do), and some might go to the lengths of evaluating all the possibilities to get the exact optimal answer.
http://en.wikipedia.org/wiki/Packing_problem

Related

What's the theory behind this puzzle?

I recently came across the above puzzle game. The objective is to form a large triangle in such a way that the shapes and colors of the parts of the figures on neighboring triangles match.
One way to solve this problem is to apply an exhaustive search and test every possible combination (roughly 7.1e9). I wrote a simple script to solve it (github).
Since this puzzle is quite old, brute-forcing this problem may not have been feasible back then. So, what's a more efficient way (algorithm/mathematical theory) to solve this?
This is equivalent to the edge-matching problem (with some regular polygons), which is of course NP-complete (and I assume there are further negative results about approximations). This means that there exist puzzles which are very hard to solve (at least if P != NP).
One interesting side note: there is a very popular (commercial) edge-matching puzzle called Eternity II which carried a prize of two million dollars. It is still unsolved to my knowledge.
This problem has resulted in many attempts and blog posts, which should teach you a lot about solving these kinds of problems.
Approaches that failed (in the sense that they did not solve the full-size Eternity II puzzle, though they did solve other hard instances), and that should work much better than exhaustive search without heuristics, include:
SAT solving (in my opinion the most powerful complete approach)
Constraint programming
Common metaheuristics (a lot of potential when tuned to problem statistics)
Some interesting resources:
Complexity-theory: Demaine, Erik D., and Martin L. Demaine. "Jigsaw puzzles, edge matching, and polyomino packing: Connections and complexity." Graphs and Combinatorics 23.1 (2007): 195-208.
General hardness analysis (practical): Ansótegui, Carlos, et al. "How Hard is a Commercial Puzzle: the Eternity II Challenge." CCIA. 2008.
SAT-solving approach: Heule, Marijn JH. "Solving edge-matching problems with satisfiability solvers." SAT (2009): 69-82.
Edge-matching as benchmarks (because of hardness): Ansótegui, Carlos, et al. "Edge matching puzzles as hard sat/csp benchmarks." International Conference on Principles and Practice of Constraint Programming. Springer Berlin Heidelberg, 2008.
One common approach to solving this sort of problem is with backtracking.
You choose a starting place, put down one of the tiles and then try to find matches for it in the neighboring places. When you get stuck, you back up one, and try an alternative there.
Eventually you will have tried every possibility, without wasting time on a huge number of dead ends: once you get stuck at one spot, there is no point in filling in the rest of the board in any way, because you would still be stuck at that spot.
More recently, Knuth has applied his Dancing Links algorithm to problems of this nature, with even greater efficiencies gained thereby.
For a problem the size of your example, with just 9 pieces and two "colors", all solutions would be found in a matter of seconds at the most.
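A minimal sketch of that backtracking idea, written for a toy edge-matching puzzle of square tiles on a grid (the triangular geometry of the puzzle above only changes the neighbor bookkeeping); the tiles and grid size here are made up for illustration:

```python
# Backtracking for a toy edge-matching puzzle: square tiles on an n x n grid,
# each tile given as (top, right, bottom, left) edge labels; touching edges
# must carry the same label. The triangular puzzle differs only in geometry.

def rotations(tile):
    """Yield all four rotations of a square tile."""
    t = list(tile)
    for _ in range(4):
        yield tuple(t)
        t = [t[3], t[0], t[1], t[2]]                     # rotate 90 degrees clockwise

def solve(tiles, n, board=None, used=None, pos=0):
    board = board if board is not None else [None] * (n * n)
    used = used if used is not None else [False] * len(tiles)
    if pos == n * n:
        return board
    row, col = divmod(pos, n)
    for i, tile in enumerate(tiles):
        if used[i]:
            continue
        for rot in rotations(tile):
            top, right, bottom, left = rot
            if col > 0 and board[pos - 1][1] != left:    # match left neighbor
                continue
            if row > 0 and board[pos - n][2] != top:     # match upper neighbor
                continue
            board[pos], used[i] = rot, True
            result = solve(tiles, n, board, used, pos + 1)
            if result is not None:
                return result
            board[pos], used[i] = None, False            # dead end: backtrack
    return None

# Tiny 2x2 instance with edge labels 0 and 1, made up for illustration.
tiles = [(0, 1, 0, 1), (1, 0, 1, 0), (0, 0, 1, 1), (1, 1, 0, 0)]
print(solve(tiles, 2))
```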

Course Scheduling Algorithms: why is the use of DFS or graph coloring not suggested?

I need to develop course timetabling software that can allot timeslots and rooms efficiently. This is a curriculum-based routine, not post-enrollment based. Efficiently means that classes are assigned timeslots according to staff time preferences, and that overlap between 1st-year and 2nd-year classes is minimized so that 2nd-year students can retake the courses they failed to pass (and likewise for the 3rd/4th-year pair).
At first I thought this would be an easy problem, but now it seems otherwise. Most of the papers I've looked at use genetic algorithms, PSO, simulated annealing or algorithms of that type, and I'm still unable to translate the problem into a GA formulation.
What I'm confused about is why almost none of them suggests DFS or a graph-coloring algorithm.
Can someone explain what the scenario looks like if DFS/graph coloring is used, or why they aren't suggested or tried?
My experience with solving this problem for a complex department is that the hard constraints (such as no overlap between courses taken by the same population, and the teachers' hard constraints) are rather easily solvable by exact methods. I modeled the problem as a 0-1 integer linear program and solved it with a SAT-based tool called minisat+. Competitive commercial tools like CPLEX can also solve it.
So with today's tools there is no need to approximate as suggested above, even when the input is rather large.
Now, optimizing the solution is a different story. There can be many (weighted) objectives, and finding the solution that minimizes the objective is indeed computationally very hard (no tool I tried could solve it within 24 hours), but the tools reach a near-optimum within a few hours (I know it is near-optimal because I can compute a theoretical bound on the solution).
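To make the 0-1 formulation concrete, here is a minimal sketch using the PuLP modelling library with made-up courses, slots and conflicts (minisat+ and CPLEX take different input formats, but the model is the same idea):

```python
# Toy 0-1 model: give each course exactly one timeslot, and forbid two courses
# taken by the same population from sharing a slot. Data is made up; requires
# `pip install pulp` (which bundles the CBC solver).
import pulp

courses = ["algebra", "physics", "programming"]
slots = ["mon9", "mon11", "tue9"]
conflicts = [("algebra", "physics")]       # same population: must not overlap

x = pulp.LpVariable.dicts("x", [(c, s) for c in courses for s in slots], cat="Binary")

model = pulp.LpProblem("timetable", pulp.LpMinimize)
model += pulp.lpSum(x.values())            # dummy objective; we only need feasibility

for c in courses:
    model += pulp.lpSum(x[c, s] for s in slots) == 1     # exactly one slot per course
for a, b in conflicts:
    for s in slots:
        model += x[a, s] + x[b, s] <= 1                   # conflicting pair never shares a slot

# Teacher availability would be encoded by fixing some x[c, s] to 0, and soft
# preferences by adding penalty terms to the objective.
model.solve(pulp.PULP_CBC_CMD(msg=False))
print({c: s for c in courses for s in slots if x[c, s].value() > 0.5})
```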
This document describes applying a GA approach to university time-tabling, so it should be directly applicable to your requirement: Using a GA to solve university time-tabling

How do people prove the correctness of Computer Vision methods?

I'd like to pose a few abstract questions about computer vision research. I haven't quite been able to answer these questions by searching the web and reading papers.
How does someone know whether a computer vision algorithm is correct?
How do we define "correct" in the context of computer vision?
Do formal proofs play a role in understanding the correctness of computer vision algorithms?
A bit of background: I'm about to start my PhD in Computer Science. I enjoy designing fast parallel algorithms and proving the correctness of these algorithms. I've also used OpenCV in some class projects, though I don't have much formal training in computer vision.
I've been approached by a potential thesis advisor who works on designing faster and more scalable algorithms for computer vision (e.g. fast image segmentation). I'm trying to understand the common practices in solving computer vision problems.
You just don't prove them.
Instead of a formal proof, which is often impossible to do, you can test your algorithm on a set of test cases and compare the output with previously known algorithms or with known correct answers (for example, for text recognition you can generate a set of images where you know what the text says).
In practice, computer vision is more like an empirical science: You gather data, think of simple hypotheses that could explain some aspect of your data, then test those hypotheses. You usually don't have a clear definition of "correct" for high-level CV tasks like face recognition, so you can't prove correctness.
Low-level algorithms are a different matter, though: there you usually have a clear, mathematical definition of "correct". For example, if you invented an algorithm that can compute a median filter or a morphological operation more efficiently than known algorithms, or that can be parallelized better, you would of course have to prove its correctness, just like any other algorithm.
It's also common to have certain requirements for a computer vision algorithm that can be formalized: for example, you might want your algorithm to be invariant to rotation and translation - these are properties that can be proven formally. It's also sometimes possible to create mathematical models of signal and noise, and design a filter that has the best possible signal-to-noise ratio (IIRC the Wiener filter and the Canny edge detector were designed that way).
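Such invariances can also be sanity-checked empirically, which is of course no substitute for a proof. A toy sketch, assuming OpenCV/NumPy, a square grayscale test image at a made-up path, and a plain histogram standing in for the descriptor:

```python
# Empirical sanity check, not a proof: a grayscale histogram should be (nearly)
# invariant to rotation. "example.png" is a placeholder for a square test image;
# interpolation effects are why a small tolerance is used.
import cv2
import numpy as np

def descriptor(image):
    hist = cv2.calcHist([image], [0], None, [64], [0, 256]).flatten()
    return hist / hist.sum()          # normalise so the two histograms compare directly

img = cv2.imread("example.png", cv2.IMREAD_GRAYSCALE)
h, w = img.shape
M = cv2.getRotationMatrix2D((w / 2, h / 2), 90, 1.0)      # rotate 90 degrees about the center
rotated = cv2.warpAffine(img, M, (w, h))

print(np.allclose(descriptor(img), descriptor(rotated), atol=0.01))
```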
Many image processing/computer vision algorithms have some kind of "repeat until convergence" loop (e.g. snakes or Navier-Stokes inpainting and other PDE-based methods). You would at least try to prove that the algorithm converges for any input.
This is my personal opinion, so take it for what it's worth.
You can't prove the correctness of most Computer Vision methods right now. I consider most of the current methods some kind of "recipe" where ingredients are thrown in until the "result" is good enough. Can you prove that a brownie cake is correct?
It is a bit similar to how machine learning evolved. At first, people did neural networks, but they were just a big "soup" that happened to work more or less. They worked sometimes, didn't in other cases, and no one really knew why. Then statistical learning (through Vapnik among others) kicked in, with some real mathematical backing. You could prove that you had the unique hyperplane that minimized a particular loss function, PCA gives you the closest matrix of fixed rank to a given matrix (under the Frobenius norm, I believe), etc.
Now, there are still a few things that are "correct" in computer vision, but they are pretty limited. What comes to mind are wavelets: they are the sparsest representation among orthogonal bases of functions, i.e. the most compressed way to represent an approximation of an image with minimal error.
Computer Vision algorithms are not like theorems that you can prove; they usually try to interpret image data in terms that are more understandable to us humans, such as face recognition, motion detection, video surveillance, etc. Exact correctness is not computable, unlike, say, image compression algorithms, where you can easily judge the result by the size of the images.
The most common way to present the results of Computer Vision methods (especially classification problems) is with graphs of precision vs. recall, or accuracy vs. false positives, measured on standard databases available on various sites. Usually, the harder you push the parameters toward detecting everything, the more false positives you generate. The typical practice is to choose the operating point on the graph according to how many false positives are tolerable for the application.
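For example, with scikit-learn such a curve can be computed directly from ground-truth labels and detector scores; the tiny arrays below are stand-ins for a real benchmark database:

```python
# Toy precision/recall computation with scikit-learn; y_true are ground-truth
# labels (1 = object present) and scores are detector confidences. A real
# evaluation would use one of the standard benchmark databases instead.
from sklearn.metrics import precision_recall_curve

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.1]

precision, recall, thresholds = precision_recall_curve(y_true, scores)
for p, r, t in zip(precision, recall, thresholds):
    print(f"threshold={t:.2f}  precision={p:.2f}  recall={r:.2f}")
```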

3D symmetry search algorithm

This may be more appropriate for math overflow, but nevertheless:
Given a 3D structure (for example, a molecule), what is a good approach/algorithm to find symmetry (rotational/reflection/inversion/etc.)?
I came up with a brute-force naïve algorithm, but it seems there should be a better approach.
I am not particularly interested in genetic algorithms, as I would like the best symmetry rather than an almost-best symmetry.
There is this from my field: http://pubs.acs.org/doi/abs/10.1021/ci990322q. It would be good to know what mathematicians/computer scientists have come up with as well.
A link to a website/paper would be great. Thanks.
This paper should get you started:
http://graphics.stanford.edu/~niloy/research/approx_symmetry/paper_docs/approx_symmetry_sig_06.pdf
See this website for Symmetry Detection and Structure Discovery research. The papers at the bottom include the one that @Xavier Ho mentions.
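For reference, the brute-force check alluded to in the question can be sketched as follows: apply a candidate transformation to the point set and test whether every transformed point lands on an original point within a tolerance. The point data, candidate transforms and tolerance below are illustrative; a real molecular version would also require matched atoms to have the same element type.

```python
# Brute-force symmetry test: a candidate transform is a symmetry if it maps
# the point set onto itself within a tolerance.
import numpy as np
from scipy.spatial import cKDTree

def is_symmetry(points, transform, tol=1e-3):
    """True if `transform` (a 3x3 matrix) maps the point set onto itself."""
    tree = cKDTree(points)
    dist, _ = tree.query(points @ transform.T)
    return bool(np.all(dist < tol))

def rotation_z(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Toy example: a square in the xy-plane, centred at the origin.
square = np.array([[1, 1, 0], [-1, 1, 0], [-1, -1, 0], [1, -1, 0]], dtype=float)
print(is_symmetry(square, rotation_z(np.pi / 2)))        # 4-fold rotation axis: True
print(is_symmetry(square, np.diag([1.0, -1.0, 1.0])))    # mirror plane (xz):   True
print(is_symmetry(square, rotation_z(np.pi / 3)))        # 60 degree rotation:  False
```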

Best Fit Scheduling Algorithm

I'm writing a scheduling program with a difficult programming problem. There are several events, each with multiple meeting times. I need to find an arrangement of meeting times such that each schedule contains any given event exactly once, using one of each event's multiple meeting times.
Obviously I could use brute force, but that's rarely the best solution. I'm guessing this is a relatively basic computer science problem, which I'll learn about once I am able to start taking computer science classes. In the meantime, I'd prefer any links where I could read up on this, or even just a name I could Google.
I think you should use a genetic algorithm because:
It is well suited to large problem instances.
It yields reduced time complexity at the price of an inexact answer (not the ultimate best).
You can specify constraints and preferences easily by adjusting fitness penalties for the ones that are not met.
You can specify a time limit for program execution.
The quality of the solution depends on how much time you intend to spend solving the problem.
Genetic Algorithms Definition
Genetic Algorithms Tutorial
Class scheduling project with GA
There are several ways to do this.
One approach is to use constraint programming. It is a special case of the dynamic programming suggested by feanor. It is helpful to use a specialized library that can do the bounding and branching for you. (Google for "gecode" or "comet-online" to find libraries.)
If you are mathematically inclined, you can also use integer programming to solve the problem. The basic idea here is to translate your problem into a set of linear inequalities. (Google for "integer programming scheduling" to find many real-life examples, and for "Abacus COIN-OR" for a useful library.)
My guess is that constraint programming is the easiest approach, but integer programming is useful if you want to include real-valued variables in your problem at some point.
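As a taste of the constraint-programming route, here is a tiny sketch using the python-constraint library: each variable is an event, its domain is that event's possible meeting times, and the only constraint is that no two events share a slot. The events and slot numbers are made up.

```python
# Tiny constraint-programming model with the python-constraint package
# (`pip install python-constraint`). Events and slot numbers are made up.
from constraint import Problem, AllDifferentConstraint

options = {
    "math":    [1, 2, 3],   # each event lists the timeslots it may use
    "physics": [2, 4],
    "biology": [1, 4],
}

problem = Problem()
for event, slots in options.items():
    problem.addVariable(event, slots)

# A valid schedule never books two events into the same slot.
problem.addConstraint(AllDifferentConstraint(), list(options))

print(problem.getSolutions())
```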
Your problem description isn't entirely clear, but if all you're trying to do is find a schedule with no overlapping events, then this is a straightforward bipartite matching problem.
You have two sets of nodes: events and times. Draw an edge from each event to each of its possible meeting times. You can then efficiently construct the maximum matching (the largest possible set of edges in which no node is used twice) using augmenting paths. This works because you can always convert a bipartite graph into an equivalent flow network.
An example of code that does this is BIM. Standard graph libraries such as GOBLIN and NetworkX also have bipartite matching implementations.
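A small sketch of that construction with NetworkX, where the Hopcroft-Karp routine does the augmenting-path work; the event and time names are made up, and the toy data happens to admit a perfect matching, so every event appears in the result.

```python
# Bipartite matching between events and meeting times with NetworkX.
import networkx as nx
from networkx.algorithms import bipartite

options = {
    "math":    ["mon9", "tue9"],
    "physics": ["mon9", "wed9"],
    "biology": ["tue9"],
}

G = nx.Graph()
for event, times in options.items():
    for t in times:
        G.add_edge(event, t)

matching = bipartite.hopcroft_karp_matching(G, top_nodes=options.keys())
print({event: matching[event] for event in options})   # one non-clashing time per event
```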
This sounds like it could be a good candidate for a dynamic programming solution, specifically something similar to the interval scheduling problem.
There are some visuals here for the interval scheduling problem specifically, which may make the concept clearer. Here is a good tutorial on dynamic programming overall.
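For reference, the classic weighted-interval-scheduling recurrence looks like this in code; the meetings and weights below are toy data, and in the scheduling question above the weight might be a preference score.

```python
# Classic weighted interval scheduling by dynamic programming. Each meeting is
# (start, end, weight); the data is a toy example.
import bisect

meetings = [(1, 4, 2), (3, 5, 4), (0, 6, 4), (4, 7, 7), (5, 9, 2), (7, 10, 1)]
meetings.sort(key=lambda m: m[1])                  # sort by finish time
ends = [m[1] for m in meetings]

def latest_compatible(i):
    """Index of the last meeting finishing no later than meeting i starts, or -1."""
    return bisect.bisect_right(ends, meetings[i][0]) - 1

# dp[k] = best total weight using only the first k meetings (in finish order).
dp = [0] * (len(meetings) + 1)
for i in range(len(meetings)):
    take = meetings[i][2] + dp[latest_compatible(i) + 1]   # schedule meeting i
    dp[i + 1] = max(dp[i], take)                           # ... or skip it

print(dp[-1])   # maximum total weight of mutually non-overlapping meetings
```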
