Packing differently sized chunks of data into multiple bins - algorithm

EDIT: It seems like this problem is called "Cutting stock problem"
I need an algorithm that gives me the (space-)optimal arrangement of chunks in bins. One way would be to put the bigger chunks in first. But see how that algorithm fails in this example:
Chunks                  Bins
------------------------------------------
AAA BBB CC DD           (       ) (   )

Algorithm               Result
------------------------------------------
biggest first           (AAABBB ) (CC )
optimal                 (AAACCDD) (BBB)
"Biggest first" can't fit DD in anywhere. Maybe it helps to build a table like this:
Size 1: ---
Size 2: CC, DD
Size 3: AAA, BBB
Size 4: CCDD
Size 5: AAACC, AAADD, BBBCC, BBBDD
Size 6: AAABBB
Size 7: AAACCDD, BBBCCDD
Size 8: AAABBBCC, AAABBBDD
Size 10: AAABBBCCDD

This is basically a variant of the bin-packing problem. This problem is known to be NP-hard, so don't expect to find an efficient optimal algorithm for complex cases (i.e. with many objects and bins).
However, if your number of objects/bins is relatively small, you will probably be fine just exhaustively searching all the possible combinations with a depth-first search.
This is pretty easy to implement: just take the first object, then recursively re-run the algorithm with the first object placed in each of the bins in turn (i.e. subtracting the size of the object from the available bin space). Finally, you just need to keep track of the best "solution" found so far and return this as your final answer once you have tried all combinations.
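A minimal sketch of that recursive search (my own illustration in Python, not code from the answer), using the chunk sizes and bin capacities from the question above:

```python
# Exhaustive depth-first search: try every object in every bin (or leave it out)
# and keep the assignment that leaves the least bin space unused.
def pack(objects, bins):
    """objects: list of sizes; bins: list of capacities.
    Returns (waste, assignment) where assignment[i] is the bin index of
    objects[i], or None if that object is not placed."""
    best = [sum(bins), [None] * len(objects)]

    def search(index, remaining, assignment, placed):
        if index == len(objects):
            waste = sum(bins) - placed
            if waste < best[0]:
                best[0], best[1] = waste, assignment[:]
            return
        size = objects[index]
        for b in range(len(bins)):
            if remaining[b] >= size:              # object fits into bin b
                remaining[b] -= size
                assignment[index] = b
                search(index + 1, remaining, assignment, placed + size)
                remaining[b] += size              # backtrack
        assignment[index] = None
        search(index + 1, remaining, assignment, placed)  # also try leaving it out

    search(0, list(bins), [None] * len(objects), 0)
    return best[0], best[1]

# The example from the question: AAA, BBB, CC, DD into bins of capacity 7 and 3.
print(pack([3, 3, 2, 2], [7, 3]))   # -> (0, [0, 1, 0, 0]): (AAACCDD) (BBB)
```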
You may also be able to make this algorithm run faster by:
Considering all objects of equal size as equivalent
Pruning the search tree (i.e. returning early from a branch) if you can't possibly beat the current best solution e.g. when you have already found a perfect fit
Updated based on comments on problem size
Given that it looks like you have a very large number of chunks to deal with, you might want to try the following:
Fit the largest 10-20 chunks using an exhaustive search as above
Allocate the remainder using a largest fit approach

Mikera is right: this multiple knapsack problem (a variant of the bin packing problem) is NP-hard.
Here are a couple of your options (copied from my answer on a similar question):
Brute force, or better yet, branch and bound. Doesn't scale (at all!), but will find you the optimal solution (probably not in our lifetimes though).
Deterministic algorithm: sort the chunks by size, largest first, then go through that list one by one and assign each chunk the best remaining spot. That will finish very fast, but the solution can be far from optimal (or even infeasible). Here's a nice picture showing an example of what can go wrong. But if you want to keep it simple, that's the way to go (a small sketch of this heuristic follows these options).
Meta-heuristics, starting from the result of a deterministic algorithm. This will give you a very good result in reasonable time, better than what humans come up with. Depending on how much time you give it and the difficulty of the problem, it might or might not be the optimal solution. There are a couple of libraries out there, such as Drools Planner (open source, Java).
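For illustration, here is a minimal sketch (my own, not Drools Planner code) of the deterministic option above: sort by size, largest first, and give each chunk the tightest remaining spot. It assumes all bins share the same capacity:

```python
# Best-fit decreasing: largest chunk first, placed into the fullest bin it still fits in.
def best_fit_decreasing(chunks, bin_capacity, num_bins):
    remaining = [bin_capacity] * num_bins
    bins = [[] for _ in range(num_bins)]
    for chunk in sorted(chunks, reverse=True):
        candidates = [i for i in range(num_bins) if remaining[i] >= chunk]
        if not candidates:
            raise ValueError(f"chunk of size {chunk} does not fit anywhere")
        tightest = min(candidates, key=lambda i: remaining[i])
        bins[tightest].append(chunk)
        remaining[tightest] -= chunk
    return bins

print(best_fit_decreasing([3, 3, 2, 2], bin_capacity=5, num_bins=2))  # -> [[3, 2], [3, 2]]
```

As the linked picture suggests, this can still produce clearly suboptimal (or infeasible) packings on unlucky inputs, but it is very fast.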

A general best algorithm for this problem doesn't exist yet (see the bin packing problem). You can find a few different approaches on Wikipedia and/or by googling for the "bin packing problem"; searching for the "knapsack problem" may also provide some help.

Donald Knuth's Dancing Links algorithm is quick at finding solutions to "exact cover" problems.

Related

Traveling Salesman Problem for a large number of vertices

I need to solve TSP for a large number of vertices (30-100) with good accuracy and in adequate time (like 1-2 days). My graph can contain asymmetric edges (g[i][j] not equal to g[j][i]).
I tried greedy, Little's algorithm (maybe my fault, but it shows worse results than greedy), a simple genetic algorithm (barely better than greedy), and dynamic programming in O(2^n * n) (which quickly runs out of memory).
Well, 30-100 is not really a large number of vertices. Did you miss some zeroes? Or are you facing some special hard-to-solve cases like p43 from TSPLIB?
In any case, if you are looking for a good heuristic, I used to use Ant Colony Optimization for the asymmetric TSP. It is easy to implement and provides quite good performance.
You might take a look at my old implementation: https://github.com/aligusnet/optimer/tree/master/src/heuristics/aco
If you can accept a "close to optimal" solution rather than the true optimum, I can suggest a "random traveling" algorithm. The idea of this algorithm is not to BFS/DFS through the entire combination tree, but to search just random DFS subtrees.
For example, you have vertices [A-Z] and your start point is [A]. Try 10,000 attempts for each prefix [A-B-...], [A-C-...] and so on (one batch of attempts per possible first step), where [...] is a randomly selected full-depth path through your graph, according to your rules. Keep the cost of each prefix in an array, where a prefix's cost is the sum of the costs of its sampled paths. Because every starting prefix gets the same number of attempts, the prefix with the minimal sum shows you the best step from [A]. Of course, this does not guarantee the optimum, but with high probability it is a good step.
For example, suppose the sum over 10,000 attempts for the prefix [A-K] is the lowest. Then accept [A-K] as the first step and repeat the algorithm from there, until you have built the complete solution.
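A minimal sketch of this "random traveling" idea (my own reading of it, not the answerer's code); the cost-matrix representation and the number of attempts are assumptions:

```python
import random

def random_travel(cost, start=0, attempts=10000):
    """cost[i][j] is the (possibly asymmetric) cost of edge i -> j.
    Greedily builds a closed tour, committing one vertex at a time."""
    n = len(cost)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        best_next, best_sum = None, float("inf")
        for candidate in unvisited:                 # each possible next step
            total = 0.0
            for _ in range(attempts):               # sample random full-depth completions
                rest = list(unvisited - {candidate})
                random.shuffle(rest)
                path = [tour[-1], candidate] + rest
                total += sum(cost[a][b] for a, b in zip(path, path[1:]))
                total += cost[path[-1]][start]      # close the cycle
            if total < best_sum:                    # lowest summed cost wins this step
                best_sum, best_next = total, candidate
        tour.append(best_next)
        unvisited.remove(best_next)
    return tour
```

With 30-100 vertices this does on the order of n^3 * attempts edge lookups, so you would tune the number of attempts to fit your time budget.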
Here's the TSP source code of the OptaPlanner implementation, fwiw. It deals with datasets of up to 10k visits pretty well when NearbySelection is activated (or up to 500 visits or so if it's not); to go above 10k you'll need to activate Partitioned Search, which comes with a trade-off.
It has asymmetric datasets (using OpenStreetMap data) in the import/belgium/road-time directory. It can't prove whether it reaches the optimal solution or not. Usually, termination is set to either a few minutes or a few unimproved minutes.
Benchmarks showed that Late Acceptance has slightly better results than Simulated Annealing and Tabu Search, given a specific set of MoveSelectors configured, but your mileage may vary...

How to find neighboring solutions in simulated annealing?

I'm working on an optimization problem and attempting to use simulated annealing as a heuristic. My goal is to optimize placement of k objects given some cost function. Solutions take the form of a set of k ordered pairs representing points in an M*N grid. I'm not sure how to best find a neighboring solution given a current solution. I've considered shifting each point by 1 or 0 units in a random direction. What might be a good approach to finding a neighboring solution given a current set of points?
Since I'm also trying to learn more about SA, what makes a good neighbor-finding algorithm and how close to the current solution should the neighbor be? Also, if randomness is involved, why is choosing a "neighbor" better than generating a random solution?
I would split your question into several smaller ones:
Also, if randomness is involved, why is choosing a "neighbor" better than generating a random solution?
Usually, you pick multiple points from a neighborhood, and you can explore all of them. For example, you generate 10 points randomly and choose the best one. By doing so you can efficiently explore more possible solutions.
Why is it better than a random guess? Good solutions tend to have a lot in common (e.g. they are close to each other in the search space). So by introducing small incremental changes you are able to find a good solution, while a random guess could send you to a completely different part of the search space where you'll never find an appropriate solution. And because of the curse of dimensionality, random jumps are no better than brute force: there will be too many places to jump to.
What might be a good approach to finding a neighboring solution given a current set of points?
I regret to tell you that this question seems to be unsolvable in general. :( It's a mix between art and science. Choosing the right way to explore a search space is too problem-specific. Even for solving a placement problem under varying constraints, different heuristics may lead to completely different results.
You can try the following:
Random shifts by a fixed number of steps (1, 2, ...). That's your approach (a small sketch of this move and the next appears at the end of this answer).
Swapping two points
You can memorize bad moves for some time (something similar to tabu search), so that you only use 'good' ones for the next 100 steps
Use a greedy approach to generate a suboptimal placement, then improve it with methods above.
Try random restarts. At some stage, drop all of your progress so far (except for the best solution found so far), raise the temperature and start again from a random initial point. You can do this every 10,000 steps or so.
Fix some points. Put an object at point (x,y) and do not move it at all, try searching for the best possible solution under this constraint.
Prohibit some combinations of objects, e.g. "distance between p1 and p2 must be larger than D".
Mix all steps above in different ways
Try to understand your problem in all its tiniest details. You can derive some useful information/constraints/insights from your problem description. Assume that you can't solve the placement problem in general, so try to reduce it to a more specific (== simpler, == with a smaller search space) problem.
I would say that the last bullet is the most important. Look closely at your problem and consider only its practical aspects. For example, the size of your problems might allow you to enumerate something, or maybe some placements are not possible for you, and so on and so forth. There is no way for SA to derive such domain-specific knowledge by itself, so help it!
How to understand that your heuristic is a good one? Only by practical evaluation. Prepare a decent set of tests with obvious/well-known answers and try different approaches. Use well-known benchmarks if there are any of them.
I hope that this is helpful. :)
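For concreteness, here is a minimal sketch (my own illustration, not the answerer's code) of the first two moves in the list above, assuming a solution is a list of k points with grid coordinates in [0, M) x [0, N):

```python
import random

def random_shift(points, m, n, max_step=1):
    """Move one randomly chosen point by up to max_step in each direction,
    clamped to the grid."""
    neighbor = list(points)
    i = random.randrange(len(neighbor))
    x, y = neighbor[i]
    dx = random.randint(-max_step, max_step)
    dy = random.randint(-max_step, max_step)
    neighbor[i] = (min(max(x + dx, 0), m - 1), min(max(y + dy, 0), n - 1))
    return neighbor

def swap_points(points):
    """Swap two randomly chosen points (useful when the k slots are distinguishable)."""
    neighbor = list(points)
    i, j = random.sample(range(len(neighbor)), 2)
    neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
    return neighbor
```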

variant of knapsack problem

I have 'n' number of amounts (non-negative integers). My requirement is to determine an optimal set of amounts so that the sum of the combination is less than or equal to a given fixed limit and the total is as large as possible. There is no limit to the number of amounts that can be included in the optimal set.
for sake of example: amounts are 143,2054,546,3564,1402 and the given limit is 5000.
As per my understanding the knapsack problem has 2 attributes for each item (weight and value). But the problem stated above has only one attribute (amount). I hope that would make things simpler? :)
Can someone please help me with the algorithm or source code for solving this?
This is still an NP-hard problem, but if you want to (or have to) do something like that, maybe this topic helps you out a bit:
find two or more numbers from a list of numbers that add up towards a given amount
where I solved it like this and NikiC modified it to be faster. The only difference: that one was about hitting the exact amount, not getting "as close as possible", but that would only require some small changes in the code (and you'll have to translate it into the language you're using).
Take a look at the comments in my code to understand what I'm trying to do, which is, in short form:
calculate all possible combinations of the given parts and sum them up
if the result is the amount I'm looking for, save the solution to an array
finally, sort all possible solutions to get the one using the fewest parts
so you'll have to change:
save a solution if it's lower than the amount you're looking for
sort solutions by total amount instead of number of used parts
The book "Knapsack Problems" By Hans Kellerer, Ulrich Pferschy and David Pisinger calls this The Subset Sum Problem and dedicates an entire chapter (Ch 4) to it. The chapter is very comprehensive and covers algorithms as well as computational results.
Even though this problem is a special case of the knapsack problem, it is still NP-hard.
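If the limit is a reasonably small integer, the standard pseudo-polynomial dynamic program for subset sum solves this variant exactly in O(n * limit) time and space. A minimal sketch (my own, not taken from the book), using the amounts from the question:

```python
def best_subset(amounts, limit):
    """Return (best_total, chosen) with best_total <= limit and as large as possible."""
    # reachable[t] = (index of the amount used to first reach total t, previous total)
    reachable = {0: None}
    for idx, a in enumerate(amounts):
        # iterate over a snapshot so each amount is used at most once
        for t in list(reachable):
            if t + a <= limit and t + a not in reachable:
                reachable[t + a] = (idx, t)
    best_total = max(reachable)
    chosen, t = [], best_total            # walk predecessors back to reconstruct the set
    while reachable[t] is not None:
        idx, prev = reachable[t]
        chosen.append(amounts[idx])
        t = prev
    return best_total, chosen

print(best_subset([143, 2054, 546, 3564, 1402], 5000))  # -> (4966, [1402, 3564])
```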

Find the priority function / alphabet order for extreme higher order elements relation

This question is an extension to the following one. The difference is that now our function to optimize will have higher order relations between elements:
We have an array of elements a1, a2, ..., aN from an alphabet E. Assume N >> |E|.
For each symbol of the alphabet we define a unique integer priority = V(sym). Let's define V{i} := V(symbol(ai)) for simplicity.
The task is to find a priority function V for which:
Count(i)->MIN | V{i} > V{i+1} <= V{i+2}
In other words, I need to find the priorities / permutation of the alphabet for which the number of positions i, satisfying the condition V{i}>V{i+1}<=V{i+2}, is minimum.
Maximum required abstraction (low priority for me). I guess once the solution model for the initial question is extended to cover the first part of this one, extending it further (see below) will be easier.
Given a matrix of signs B of size MxK (basically B[i,j] is from the set {<,>,<=,>=}), find the priority function V for which:
Sum(for all j in range [1,M]) {Count(i)}->EXTREMUM | V{i} B[j,1] V{i+1} B[j,2] ... B[j,K] V{i+K}
As an example, find the priority function V, for which the number of i, satisfying V{i}<V{i+1}<V{i+2} or V{i}>V{i+1}>V{i+2}, is minimum.
My intuition is that all variations on this problem will prove to be NP-hard. So I'd begin looking for heuristics that produce reasonable answers. This may involve some trial and error.
A simplistic approach is to write down a possible permutation, then try possible swaps until you've arrived at a local minimum. Try several times, and pick the best answer.
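A minimal sketch of that swap-based local search (my own illustration), where the objective counts the positions i with V{i} > V{i+1} <= V{i+2}:

```python
import itertools, random

def violations(sequence, priority):
    v = [priority[s] for s in sequence]
    return sum(1 for i in range(len(v) - 2) if v[i] > v[i + 1] <= v[i + 2])

def local_search(sequence, alphabet, restarts=20):
    best_priority, best_score = None, float("inf")
    for _ in range(restarts):
        order = list(alphabet)
        random.shuffle(order)                       # random starting permutation
        priority = {s: rank for rank, s in enumerate(order)}
        score = violations(sequence, priority)
        improved = True
        while improved:                             # hill-climb by swapping two priorities
            improved = False
            for a, b in itertools.combinations(alphabet, 2):
                priority[a], priority[b] = priority[b], priority[a]
                new_score = violations(sequence, priority)
                if new_score < score:
                    score, improved = new_score, True
                else:                               # not better: undo the swap
                    priority[a], priority[b] = priority[b], priority[a]
        if score < best_score:
            best_score, best_priority = score, dict(priority)
    return best_priority, best_score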
Simulated annealing provides a more sophisticated version of this approach, see http://en.wikipedia.org/wiki/Simulated_annealing for a description. It may take some experimentation to find a set of parameters that seems to converge relatively well.
Another idea is to look for a genetic algorithm. Based on a quick Google search it looks like the standard way to do this is to try to turn an NP-complete problem into a SAT problem, and then use a genetic algorithm on that problem. This approach would require turning this into a SAT problem in some reasonable way. Unfortunately it is not obvious to me how one would go about doing this reduction. Indeed in the first version that you had, your problem was closely connected to a classic NP-hard problem. The fact that it is labeled NP-hard rather than NP-complete is evidence that people haven't found a good way to transform it into a SAT problem. So if it isn't obvious how to turn the simple version into a SAT problem, then you are unlikely to convert the hard problem either.
But you could still try some variation on genetic algorithms. Mutation is pretty simple, just swap some elements around. One way to combine elements would be to take 3 permutations and use quicksort to find the combination as follows: take a random pivot, and then use "majority wins" to bucket elements into bigger and smaller. Sort each half in the same way.
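A minimal sketch of that "majority wins" quicksort-style combination, as I read it (my interpretation, not the answerer's code): each recursion picks a random pivot and lets the three parent permutations vote on which side of the pivot every other element falls:

```python
import random

def combine(parents, elements=None):
    """parents: three permutations (lists) of the same elements; returns a child permutation."""
    if elements is None:
        elements = list(parents[0])
    if len(elements) <= 1:
        return list(elements)
    positions = [{e: i for i, e in enumerate(p)} for p in parents]
    pivot = random.choice(elements)
    smaller, bigger = [], []
    for e in elements:
        if e == pivot:
            continue
        # majority vote: does e come before the pivot in most parents?
        votes = sum(1 for pos in positions if pos[e] < pos[pivot])
        (smaller if votes >= 2 else bigger).append(e)
    # sort each half the same way, then stitch around the pivot
    return combine(parents, smaller) + [pivot] + combine(parents, bigger)
```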
I'm sorry that I can't just give you an approach and say, "This should work." You've got what looks like an open-ended research project, and the best I can do is give you some ideas about things you can try that might work reasonably well.

Calculating a cutting list with the least amount of off cut waste

I am working on a project where I produce an aluminium extrusion cutting list.
The aluminium extrusions come in lengths of 5m.
I have a list of smaller lengths that need to be cut from the 5m lengths of aluminium extrusions.
The smaller lengths need to be cut in the order that produces the least amount of off cut waste from the 5m lengths of aluminium extrusions.
Currently I order the cutting list in such a way that generally the longest of the smaller lengths gets cut first and the shortest of smaller lengths gets cut last. The exception to this rule is whenever a shorter length will not fit in what is left of the 5m length of aluminium extrusion, I use the longest shorter length that will fit.
This seems to produce a very efficient (very little off cut waste) cutting list and doesn't take long to calculate. I imagine, however, that even though the cutting list is very efficient, it is not necessarily the most efficient.
Does anyone know of a way to calculate the most efficient cutting list which can be calculated in a reasonable amount of time?
EDIT: Thanks for the answers. I'll continue to use the "greedy" approach, as it seems to be doing a very good job (it outperforms any human attempts to create an efficient cutting list) and is very fast.
This is a classic, difficult problem to solve efficiently. The algorithm you describe sounds like a Greedy Algorithm. Take a look at this Wikipedia article for more information: The Cutting Stock Problem
No specific ideas on this problem, I'm afraid - but you could look into a 'genetic algorithm' (which would go something like this)...
Place the lengths to cut in a random order and give that order a score based on how far it is from your ideal solution (0% waste, presumably).
Then, iteratively make random alterations to the order and re-score them. If the score is worse, ditch the alteration. If the score is better, keep it and use it as the basis for your next calculation. Keep going until you get your score within acceptable limits.
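A minimal sketch of that random-alteration search (my own illustration, not the answerer's code): an order is scored by the waste it produces when the pieces are cut in that order, and a random swap is kept only if it lowers the waste. Pieces are assumed to be no longer than the stock length:

```python
import random

def waste(order, stock_length):
    """Cut pieces in the given order, opening a new stock length whenever the
    next piece doesn't fit; return the total off-cut waste."""
    used, left = 0, 0
    for piece in order:
        if piece > left:                      # start a new stock length
            used += 1
            left = stock_length
        left -= piece
    return used * stock_length - sum(order)

def improve(pieces, stock_length, iterations=100000):
    order = list(pieces)
    random.shuffle(order)
    best = waste(order, stock_length)
    for _ in range(iterations):
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]      # random alteration: swap two cuts
        score = waste(order, stock_length)
        if score < best:
            best = score                             # keep the improvement
        else:
            order[i], order[j] = order[j], order[i]  # ditch it: undo the swap
    return order, best
```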
What you described is indeed classified as a Cutting Stock problem, as Wheelie mentioned, and not a Bin Packing problem because you try to minimize the waste (sum of leftovers) rather than the number of extrusions used.
Both of those problems can be very hard to solve, but the 'best fit' algorithm you mentioned (using the longest 'small length' that fits the current extrusion) is likely to give you very good answers with a very low complexity.
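For illustration, here is a minimal sketch (my own, not from either answer) of that greedy: fill each extrusion with the longest remaining piece that still fits, and open a new extrusion when nothing fits. Lengths are assumed to be integer millimetres:

```python
def cutting_list(pieces, stock_length=5000):
    """pieces: required lengths in the same units as stock_length (e.g. mm).
    Returns a list of extrusions, each a list of the pieces cut from it."""
    remaining = sorted(pieces, reverse=True)
    if remaining and remaining[0] > stock_length:
        raise ValueError("a piece is longer than the stock length")
    extrusions = []
    while remaining:
        left, cuts = stock_length, []
        while True:
            # longest remaining piece that still fits in this extrusion
            fit = next((p for p in remaining if p <= left), None)
            if fit is None:
                break
            remaining.remove(fit)
            cuts.append(fit)
            left -= fit
        extrusions.append(cuts)
    return extrusions

print(cutting_list([2400, 2400, 1800, 1200, 900]))  # -> [[2400, 2400], [1800, 1200, 900]]
```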
Actually, since the size of material is fixed, but the requests are not, it's a bin packing problem.
Again, wikipedia to the rescue!
(Something I might have to look into for work too, so yay!)
That's an interesting problem because I suppose it depends on the quantity of each length you're producing. If the quantities are all the same and you can get each different length onto one 5m extrusion, then you have the optimum solution.
However, if they don't all fit onto one extrusion, then you have a greater problem. To keep the number of cuts the same for each length, you need to calculate how many lengths (not necessarily in order) can fit on one extrusion and then go through each extrusion in that order.
I've been struggling with this exact problem here too (the length in my case is 6 m).
The solution I'm working on is a bit ugly, but I can't settle for your solution. Let me explain:
Stock size: 5 m
Needs to be cut into sizes (1 of each): 3.5, 1, 1.5
Your solution:
3.5 | 1, with a waste of 0.5
1.5, with a leftover of 3.5
See the problem?
The solution I'm working on -> brute force:
1 - Test every possible solution
2 - Order the solutions by their waste
3 - Choose the best solution
4 - Remove the items in that solution from the "universe"
5 - Go to 1
I know it's time consuming (but I take 1h30m for lunch... so... :) )
I really need the optimum solution (I currently do an almost-optimal solution by hand (+-) in Excel), not just because I'm obsessive but also because the product isn't cheap.
If anyone has an easier or better solution, I'd love to hear it.
The Column generation algorithm will quickly find a solution with the minimum possible waste.
To summarize, it works well because it doesn't generate all possible combinations of cuts that can fit on a raw material length. Instead, it iteratively solves for combinations that would improve the overall solution, until it reaches an optimum solution.
If anyone needs a working version of this, I've implemented it with python and posted it on GitHub: LengthNestPro

Resources