Calculating a cutting list with the least amount of off-cut waste - algorithm

I am working on a project where I produce an aluminium extrusion cutting list.
The aluminium extrusions come in lengths of 5m.
I have a list of smaller lengths that need to be cut from the 5m lengths of aluminium extrusions.
The smaller lengths need to be cut in the order that produces the least amount of off-cut waste from the 5m lengths of aluminium extrusions.
Currently I order the cutting list so that, in general, the longest of the smaller lengths is cut first and the shortest is cut last. The exception to this rule is that whenever a shorter length will not fit in what is left of the 5m length of aluminium extrusion, I use the longest remaining length that will fit.
This seems to produce a very efficient cutting list (very little off-cut waste) and doesn't take long to calculate. I imagine, however, that even though the cutting list is very efficient, it is not necessarily the most efficient.
Does anyone know of a way to calculate the most efficient cutting list which can be calculated in a reasonable amount of time?
EDIT: Thanks for the answers. I'll continue to use the "greedy" approach, as it seems to be doing a very good job (it outperforms any human attempt to create an efficient cutting list) and is very fast.
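For reference, a minimal Python sketch of the greedy approach described above (always cut the longest remaining piece that still fits the current bar); the function name and the millimetre units are illustrative assumptions, not part of the original question:

def greedy_cutting_list(pieces, stock_len=5000):
    # Sort descending so the first piece that fits is also the longest one that fits.
    remaining = sorted(pieces, reverse=True)
    bars = []
    while remaining:
        left = stock_len
        bar = []
        while True:
            fit = next((p for p in remaining if p <= left), None)
            if fit is None:
                break
            bar.append(fit)
            remaining.remove(fit)
            left -= fit
        bars.append((bar, left))          # (cuts on this bar, off-cut waste)
    return bars

for cuts, waste in greedy_cutting_list([3500, 1000, 1500, 2200, 800]):
    print(cuts, "waste:", waste)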

This is a classic, difficult problem to solve efficiently. The algorithm you describe sounds like a Greedy Algorithm. Take a look at this Wikipedia article for more information: The Cutting Stock Problem

No specific ideas on this problem, I'm afraid - but you could look into a 'genetic algorithm' (which would go something like this)...
Place the lengths to cut in a random order and give that order a score based on how far it is from your ideal solution (0% waste, presumably), so a lower score is better.
Then, iteratively make random alterations to the order and re-score it. If the new score is higher, ditch the result; if it is lower, keep it and use it as the basis for your next alteration. Keep going until the score is within acceptable limits.
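For illustration, here is a rough Python sketch of that score-and-mutate loop (closer to a simple local search than a full genetic algorithm); the scoring function, which cuts the pieces in the given order and counts every off-cut including the last bar's, is an assumption:

import random

def waste_of_order(order, stock_len=5000):
    waste, left = 0, stock_len
    for piece in order:
        if piece > left:            # the piece doesn't fit, start a new bar
            waste += left
            left = stock_len
        left -= piece
    return waste + left             # count the last bar's off-cut too

def local_search(pieces, iterations=10000, stock_len=5000):
    order = pieces[:]
    random.shuffle(order)
    best = waste_of_order(order, stock_len)
    for _ in range(iterations):
        i, j = random.randrange(len(order)), random.randrange(len(order))
        order[i], order[j] = order[j], order[i]          # random alteration
        score = waste_of_order(order, stock_len)
        if score <= best:
            best = score                                 # keep the better order
        else:
            order[i], order[j] = order[j], order[i]      # undo the swap
    return order, best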

What you described is indeed classified as a Cutting Stock problem, as Wheelie mentioned, and not a Bin Packing problem because you try to minimize the waste (sum of leftovers) rather than the number of extrusions used.
Both of those problems can be very hard to solve, but the 'best fit' algorithm you mentioned (using the longest 'small length' that fits the current extrusion) is likely to give you very good answers with a very low complexity.

Actually, since the size of the material is fixed but the requests are not, it's a bin packing problem.
Again, Wikipedia to the rescue!
(Something I might have to look into for work too, so yay!)

That's an interesting problem because I suppose it depends on the quantity of each length you're producing. If they are all the same quantity and you can get each different length onto one 5m extrusion, then you have the optimum solution.
However, if they don't all fit onto one extrusion then you have a greater problem. To keep the same number of cuts for each length you need to calculate how many lengths (not necessarily in order) can fit on one extrusion and then work in order through each extrusion.

I've been struggling with this exact problem here too (the length in my case is 6 m).
The solution I'm working on is a bit ugly, but I can't settle for your solution. Let me explain:
Stock size: 5 m
Needs to be cut into sizes (1 of each):
3.5
1
1.5
Your solution:
3.5 | 1, with a waste of 0.5
1.5, with a leftover of 3.5
See the problem?
The solution I'm working on -> brute force (a rough sketch in code is below):
1 - Test every possible solution
2 - Order the solutions by their waste
3 - Choose the best solution
4 - Remove the items in that solution from the "universe"
5 - Go to 1
I know it's time-consuming (but I take 1h30m for lunch... so... :) )
I really need the optimum solution (I already do a more or less optimum solution by hand in Excel), not just because I'm obsessive but also because the product isn't cheap.
If anyone has an easier or better solution, I'd love to hear it.
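For what it's worth, here is a small Python sketch of that loop (enumerate every subset that fits on one bar, keep the least wasteful one, remove those pieces, repeat). It assumes every piece fits on a bar, it is exponential in the number of pieces, and choosing bars one at a time is itself a greedy choice, so even this is not guaranteed to be globally optimal:

from itertools import combinations

def best_bar(pieces, stock_len=5000):
    # Find the subset of pieces with the least waste on a single bar.
    best_subset, best_waste = (), stock_len
    for r in range(1, len(pieces) + 1):
        for combo in combinations(range(len(pieces)), r):
            used = sum(pieces[i] for i in combo)
            if used <= stock_len and stock_len - used < best_waste:
                best_subset, best_waste = combo, stock_len - used
    return best_subset, best_waste

def brute_force_plan(pieces, stock_len=5000):
    pieces = list(pieces)
    plan = []
    while pieces:
        idx, waste = best_bar(pieces, stock_len)
        plan.append(([pieces[i] for i in idx], waste))
        pieces = [p for i, p in enumerate(pieces) if i not in idx]
    return plan

print(brute_force_plan([3500, 1000, 1500]))   # [([3500, 1500], 0), ([1000], 4000)]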

The Column generation algorithm will quickly find a solution with the minimum possible waste.
To summarize, it works well because it doesn't generate all possible combinations of cuts that can fit on a raw material length. Instead, it iteratively solves for combinations that would improve the overall solution, until it reaches an optimum solution.
If anyone needs a working version of this, I've implemented it in Python and posted it on GitHub: LengthNestPro
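For readers who want a feel for how this works, below is a compact Python sketch of column generation for the one-dimensional case. This is not LengthNestPro's code; it assumes integer lengths (e.g. millimetres), SciPy's HiGHS-based linprog, and that the duals of the demand constraints are exposed via res.ineqlin.marginals (SciPy 1.7+). The LP gives fractional pattern counts, which you would round up or branch on to get a final cutting plan:

import numpy as np
from scipy.optimize import linprog

def price_pattern(duals, lengths, stock_len):
    # Unbounded knapsack: find the cutting pattern with the highest total dual value.
    best = [0.0] * (stock_len + 1)
    choice = [None] * (stock_len + 1)
    for cap in range(1, stock_len + 1):
        for i, l in enumerate(lengths):
            if l <= cap and best[cap - l] + duals[i] > best[cap]:
                best[cap] = best[cap - l] + duals[i]
                choice[cap] = i
    pattern, cap = [0] * len(lengths), stock_len
    while choice[cap] is not None:
        pattern[choice[cap]] += 1
        cap -= lengths[choice[cap]]
    return best[stock_len], pattern

def cutting_stock(lengths, demands, stock_len, max_iters=100):
    # Start with one trivial pattern per length (as many copies as fit on a bar).
    patterns = [[stock_len // l if i == j else 0 for i in range(len(lengths))]
                for j, l in enumerate(lengths)]
    for _ in range(max_iters):
        A = np.array(patterns, dtype=float).T        # rows = lengths, cols = patterns
        c = np.ones(len(patterns))
        # Master LP: min sum(x) s.t. A x >= demands, written as -A x <= -demands.
        res = linprog(c, A_ub=-A, b_ub=-np.array(demands, dtype=float),
                      bounds=[(0, None)] * len(patterns), method="highs")
        duals = -res.ineqlin.marginals               # duals of the >= constraints
        value, new_pattern = price_pattern(list(duals), lengths, stock_len)
        if value <= 1.0 + 1e-9:                      # no pattern can improve the LP
            break
        patterns.append(new_pattern)
    return patterns, res

patterns, res = cutting_stock([3500, 1000, 1500], demands=[1, 1, 1], stock_len=5000)
print(res.fun, patterns)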

Related

Algorithm for highest value inside budget

I wasn't entirely sure the best way to ask this question (or do the research to see if it has been previously answered).
Given a data set where each entry has a Point value and a Dollar value, I'm looking to generate a list of length N entries that yields the highest aggregate Point value whilst staying within budget B.
Example data set:
Item Points Dollars
Apple 3.0 $1.00
Pear 2.5 $0.75
Peach 2.8 $0.88
And with this (small) data set, say my budget (B) is $2.25, and list length (N) must be 2. You MUST use the fixed list length, but are not required to use ALL of the budget.
Obviously the example provided is easy to do in one's head, but given a much larger data set, and both higher N and B values, I'm looking for an algorithm that can generate the list. Having a hard time wrapping my head around this one.
Just looking for a pseudo-algorithm, but if you prefer any given language feel free to respond with that!
I am quite positive that this can be reduced to an NP-complete problem, so it's not really worth trying to develop a process that will always give you the 'correct' answer; many people have tried and failed to do this efficiently over large data sets. However, you can use a much more efficient approximation technique: while it will not guarantee the correct answer, many popular approximation algorithms are capable of achieving a high degree of accuracy.
Hope this helps you out :)
This problem is NP-complete (both in NP and NP-hard), meaning that no algorithm has been found so far that solves it in a polynomial amount of time (polynomial in the input size). If you find an algorithm that does, you will have solved one of the greatest problems in computer science (P = NP), which would bring you at least a million-dollar reward.
If you are satisfied with an approximation, I would recommend a greedy algorithm:
https://en.wikipedia.org/wiki/Greedy_algorithm
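A hedged Python sketch of that greedy idea (ranking items by points per dollar; the function and data names are illustrative). Note that it can return fewer than N items when the budget is tight, and on the tiny example above it picks Pear and Peach (5.3 points) even though Apple and Peach (5.8 points, $1.88) would be better, which is exactly why it is only an approximation:

def greedy_pick(items, budget, n):
    # items: list of (name, points, dollars) tuples.
    ranked = sorted(items, key=lambda it: it[1] / it[2], reverse=True)
    chosen, spent = [], 0.0
    for name, points, dollars in ranked:
        if len(chosen) == n:
            break
        if spent + dollars <= budget:
            chosen.append(name)
            spent += dollars
    return chosen, spent

data = [("Apple", 3.0, 1.00), ("Pear", 2.5, 0.75), ("Peach", 2.8, 0.88)]
print(greedy_pick(data, budget=2.25, n=2))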

Finding the average of a large list of numbers

Came across this interview question.
Write an algorithm to find the mean (average) of a large list. This list could contain trillions or quadrillions of numbers. Each number is of a manageable size, in the hundreds, thousands or millions.
Googling it only gave me median-of-medians solutions. How should I approach this problem?
Is divide and conquer enough to deal with trillions of numbers?
How do I deal with a list of such a large size?
If the size of the list is computable, it's really just a matter of how much memory you have available, how long it's supposed to take and how simple the algorithm is supposed to be.
Basically, you can just add everything up and divide by the size.
If you don't have enough memory, dividing first might work (Note that you will probably lose some precision that way).
Another approach would be to recursively split the list into two halves and calculate the mean of the sublists' means. Your recursion termination condition is a list size of 1, in which case the mean is simply the only element of the list. If you encounter a list of odd size, make either the first or the second sublist longer; this is pretty much arbitrary and doesn't even have to be consistent.
If, however, your list is so giant that its size can't be computed, there's no way to split it into two sublists. In that case, the recursive approach works pretty much the other way around. Instead of splitting into two lists with n/2 elements, you split into n/2 lists with 2 elements (or rather, calculate their mean immediately). So basically, you calculate the mean of elements 1 and 2, and that becomes your new element 1; the mean of 3 and 4 becomes your new second element, and so on. Then apply the same algorithm to the new list until only one element remains. If you encounter a list of odd size, either add an element at the end or ignore the last one. If you add one, you should try to get as close as possible to your expected mean.
While this won't calculate the mean mathematically exactly, for lists of that size, it will be sufficiently close. This is pretty much a mean of means approach. You could also go the median of medians route, in which case you select the median of sublists recursively. The same principles apply, but you will generally want to get an odd number.
You could even combine the approaches and calculate the mean if your list is of even size and the median if it's of odd size. Doing this over many recursion steps will generate a pretty accurate result.
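As an illustration, here is a tiny Python sketch of that recursive mean-of-sublist-means idea. As noted above, it is only an approximation once the two halves have different sizes; an exact version would weight each half by its length:

def approx_mean(values):
    if len(values) == 1:
        return values[0]
    mid = len(values) // 2
    # Average the means of the two halves, ignoring their (possibly unequal) sizes.
    return (approx_mean(values[:mid]) + approx_mean(values[mid:])) / 2

print(approx_mean([1, 2, 3, 4, 5]))   # 2.625, versus the exact mean 3.0
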
First of all, this is an interview question. The problem as stated would not arise in practice. Also, the question as stated here is imprecise. That is probably deliberate. (They want to see how you deal with solving an imprecisely specified problem.)
Write an algorithm to find the mean(average) of a large list.
The word "find" is rubbery. It could mean calculate (to some precision) or it could mean estimate.
The phrase "large list" is rubbery. If could mean a list or array data structure in memory, or the "list" could be the result of a database query, the contents of a file or files.
There is no mention of the hardware constraints on the system where this will be implemented.
So the first thing >>I<< would do would be to try to narrow the scope by asking some questions of the interviewer.
But assuming that you can't, then a complete answer would need to cover the following points:
The dataset probably won't fit in memory at the same time. (But if it does, then that is good.)
Calculating the average of N numbers is O(N) if you do it serially. For N this size, it could be an intractable problem.
An alternative is to split into sublists of equal size and calculate the averages, and then the average of the averages. In theory, this gives you O(N/P) where P is the number of partitions. The parallelism could be implemented with multiple threads, with multiple processes on the same machine, or distributed.
In practice, the limiting factors are going to be computational, memory and/or I/O bandwidth. A parallel solution will be effective if you can address these limits. For example, you need to balance the problem of each "worker" having uncontended access to its "sublist" versus the problem of making copies of the data so that that can happen.
If the list is represented in a way that allows sampling, then you can estimate the average without looking at the entire dataset. In fact, this could be O(C) depending on how you sample. But there is a risk that your sample will be unrepresentative, and the average will be too inaccurate.
In all cases, when doing the calculations you need to guard against (integer) overflow and (floating-point) rounding errors, especially while calculating the sums.
It would be worthwhile discussing how you would solve this with a "big data" platform (e.g. Hadoop) and the limitations of that approach (e.g. time taken to load up the data ...)
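To make the partitioned approach concrete, here is a minimal Python sketch in which each "worker" returns the sum and count of its chunk and the exact mean is recombined from those. The sequential loop stands in for whatever parallel machinery you would really use, and Python's arbitrary-precision integers sidestep the overflow issue that a fixed-width language would have to handle:

def chunk_stats(chunk):
    return sum(chunk), len(chunk)

def combined_mean(chunks):
    total, count = 0, 0
    for chunk in chunks:              # in practice these would run in parallel
        s, n = chunk_stats(chunk)
        total += s
        count += n
    return total / count

print(combined_mean([[1, 2, 3], [4, 5], [6]]))   # 3.5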

variant of knapsack problem

I have 'n' number of amounts (non-negative integers). My requirement is to determine an optimal set of amounts so that the sum of the combination is less than or equal to a given fixed limit and the total is as large as possible. There is no limit to the number of amounts that can be included in the optimal set.
For the sake of example: the amounts are 143, 2054, 546, 3564, 1402 and the given limit is 5000.
As per my understanding the knapsack problem has 2 attributes for each item (weight and value). But the problem stated above has only one attribute (amount). I hope that would make things simpler? :)
Can someone please help me with the algorithm or source code for solving this?
This is still an NP-hard problem, but if you want to (or have to) do something like that, maybe this topic helps you out a bit:
find two or more numbers from a list of numbers that add up towards a given amount
where I solved it like this and NikiC modified it to be faster. The only difference: that one was about getting the exact amount, not "as close as possible", but that would only need some small changes in the code (and you'll have to translate it into the language you're using).
Take a look at the comments in my code to understand what I'm trying to do, which is, in short form:
calculate all possible combinations of the given parts and sum them up
if the result is the amount I'm looking for, save the solution to an array
at last, sort all possible solutions to get the one using the fewest parts
So you'll have to change:
save a solution if its total is lower than (or equal to) the amount you're looking for
sort solutions by total amount instead of by number of used parts
The book "Knapsack Problems" By Hans Kellerer, Ulrich Pferschy and David Pisinger calls this The Subset Sum Problem and dedicates an entire chapter (Ch 4) to it. The chapter is very comprehensive and covers algorithms as well as computational results.
Even though this problem is a special case of the knapsack problem, it is still NP-hard.
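Since the limit here is small (5000), an exact pseudo-polynomial dynamic program is practical. Here is a hedged Python sketch (the dictionary-based formulation and the names are illustrative); it runs in roughly O(n * limit) states:

def best_subset_sum(amounts, limit):
    # reachable maps each achievable sum (<= limit) to one set of amounts producing it.
    reachable = {0: []}
    for a in amounts:
        for s, chosen in list(reachable.items()):
            t = s + a
            if t <= limit and t not in reachable:
                reachable[t] = chosen + [a]
    best = max(reachable)
    return best, reachable[best]

print(best_subset_sum([143, 2054, 546, 3564, 1402], 5000))   # (4966, [3564, 1402])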

Packing differently sized chunks of data into multiple bins

EDIT: It seems like this problem is called "Cutting stock problem"
I need an algorithm that gives me the (space-)optimal arrangement of chunks in bins. One way would be to put the bigger chunks in first. But see how that algorithm fails in this example:
Chunks            Bins
-----------------------------
AAA BBB CC DD     (       ) (   )
Algorithm         Result
-----------------------------
biggest first     (AAABBB ) (CC )
optimal           (AAACCDD) (BBB)
"Biggest first" can't fit in DD. Maybe it helps to build a table like this:
Size 1: ---
Size 2: CC, DD
Size 3: AAA, BBB
Size 4: CCDD
Size 5: AAACC, AAADD, BBBCC, BBBDD
Size 6: AAABBB
Size 7: AAACCDD, BBBCCDD
Size 8: AAABBBCC, AAABBBDD
Size 10: AAABBBCCDD
This is basically a variant of the bin-packing problem. It is known to be NP-hard, so don't expect to find an efficient optimal algorithm for complex cases (i.e. with many objects and bins).
However, if your number of objects/bins is relatively small, you will probably be fine just exhaustively searching all the possible combinations with a depth-first search.
This is pretty easy to implement: just take the first object, then recursively re-run the algorithm with the first object placed in each of the bins in turn (i.e. subtracting the size of the object from the available bin space). Finally, you just need to keep track of the best "solution" found so far and return this as your final answer once you have tried all combinations.
You may also be able to make this algorithm run faster (see the sketch after this list) by:
Considering all objects of equal size as equivalent
Pruning the search tree (i.e. returning early from a branch) if you can't possibly beat the current best solution e.g. when you have already found a perfect fit
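Here is a rough Python sketch of that depth-first search, using the toy chunk and bin sizes from the question. It assumes every chunk must be placed and, for brevity, only prunes once a perfect fit has been found:

def pack(chunks, bins):
    best = {"assign": None, "waste": sum(bins)}

    def dfs(i, space, assign):
        if best["waste"] == 0:                 # a perfect fit was already found
            return
        if i == len(chunks):
            waste = sum(space)
            if waste < best["waste"]:
                best["assign"], best["waste"] = assign[:], waste
            return
        for b in range(len(bins)):
            if chunks[i] <= space[b]:
                space[b] -= chunks[i]          # place chunk i in bin b
                assign.append(b)
                dfs(i + 1, space, assign)
                assign.pop()                   # backtrack
                space[b] += chunks[i]

    dfs(0, list(bins), [])
    return best

# AAA=3, BBB=3, CC=2, DD=2 into bins of size 7 and 3, as in the example above.
print(pack([3, 3, 2, 2], [7, 3]))   # {'assign': [0, 1, 0, 0], 'waste': 0}
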
Updated based on comments on problem size
Given that it looks like you have a very large number of chunks to deal with, you might want to try the following:
Fit the largest 10-20 chunks using an exhaustive search as above
Allocate the remainder using a largest fit approach
Mikera is right: this multiple knapsack problem (a variant of the bin packing problem) is NP-hard.
Here are a couple of your options (copied from my answer on a similar question):
Brute force, or better yet, branch and bound. Doesn't scale (at all!), but will find you the optimal solution (probably not in our lifetimes though).
Deterministic algorithm: sort the chunks on largest size and go through that list one by one and assign it the best remaining spot. That will finish very fast, but the solution can be far from optimal (or feasible). Here's a nice picture showing an example what can go wrong. But if you want to keep it simple, that's the way to go.
Meta-heuristics, starting from the result of a deterministic algorithm. This will give you a very good result in reasonable time, better than what humans come up with. Depending on how much time you give it and the difficulty of the problem it might or might not be the optimal solution. There are a couple of libraries out there, such as Drools Planner (open source java).
A general best algorithm for this problem doesn't exist yet (see the bin packing problem). You can find a few different approaches on Wikipedia and/or by googling "bin packing problem"; maybe "knapsack problem" would also provide some help.
Donald Knuth's Dancing Links algorithm is quick at finding solutions to "exact covering" problems.

Writing Simulated Annealing algorithm for 0-1 knapsack in C#

I'm in the process of learning about simulated annealing algorithms and have a few questions on how I would modify an example algorithm to solve a 0-1 knapsack problem.
I found this great code on CP:
http://www.codeproject.com/KB/recipes/simulatedAnnealingTSP.aspx
I'm pretty sure I understand how it all works now (except the whole Boltzmann condition, which as far as I'm concerned is black magic, though I understand it's about escaping local optima and apparently this does exactly that). I'd like to re-design this to solve a 0-1 knapsack-"ish" problem. Basically I'm putting one of 5,000 objects in 10 sacks and need to optimize for the least unused space. The actual "score" I assign to a solution is a bit more complex, but not related to the algorithm.
This seems easy enough. This means the Anneal() function would be basically the same. I'd have to implement the GetNextArrangement() function to fit my needs. In the TSP example, he just swaps two random nodes along the path (i.e. he makes a very small change each iteration).
For my problem, on the first iteration, I'd pick 10 random objects and look at the leftover space. For the next iteration, would I just pick 10 new random objects? Or am I best only swapping out a few of the objects, like half of them or only even one of them? Or maybe the number of objects I swap out should be relative to the temperature? Any of these seem doable to me, I'm just wondering if someone has some advice on the best approach (though I can mess around with improvements once I have the code working).
Thanks!
Mike
With simulated annealing, you want to make neighbour states as close in energy as possible. If the neighbours have significantly greater energy, then it will just never jump to them without a very high temperature -- high enough that it will never make progress. On the other hand, if you can come up with heuristics that exploit lower-energy states, then exploit them.
For the TSP, this means swapping adjacent cities. For your problem, I'd suggest a conditional neighbour selection algorithm as follows:
If there are objects that fit in the empty space, then it always puts the biggest one in.
If no objects fit in the empty space, then pick an object to swap out -- but prefer to swap objects of similar sizes.
That is, objects have a probability inverse to the difference in their sizes. You might want to use something like roulette selection here, with the slice size being something like (1 / (size1 - size2)^2).
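A hedged Python sketch of that neighbour move, shown for a single sack for simplicity (with 10 sacks you would also pick which sack to modify); representing a sack as a flat list of object sizes is an assumption, not something from the CodeProject sample:

import random

def neighbour(used, unused, capacity):
    # Return a new (used, unused) pair that differs only slightly from the input.
    used, unused = used[:], unused[:]
    free = capacity - sum(used)
    fits = [o for o in unused if o <= free]
    if fits:
        # If anything fits in the empty space, put the biggest such object in.
        pick = max(fits)
        unused.remove(pick)
        used.append(pick)
        return used, unused
    # Otherwise swap one object out for one object in, preferring swaps between
    # similarly sized objects (roulette weights proportional to 1/(size diff)^2).
    out = random.choice(used)
    candidates = [o for o in unused if o <= free + out]   # keep the sack feasible
    if not candidates:
        return used, unused                               # no feasible swap; no-op
    weights = [1.0 / ((out - o) ** 2 + 1e-9) for o in candidates]
    swap_in = random.choices(candidates, weights=weights, k=1)[0]
    used.remove(out)
    used.append(swap_in)
    unused.remove(swap_in)
    unused.append(out)
    return used, unused
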
Ah, I think I found my answer on Wikipedia. It suggests moving to a "neighbor" state, which usually implies changing as little as possible (like swapping two cities in a TSP solution).
From: http://en.wikipedia.org/wiki/Simulated_annealing
"The neighbours of a state are new states of the problem that are produced after altering the given state in some particular way. For example, in the traveling salesman problem, each state is typically defined as a particular permutation of the cities to be visited. The neighbours of some particular permutation are the permutations that are produced for example by interchanging a pair of adjacent cities. The action taken to alter the solution in order to find neighbouring solutions is called "move" and different "moves" give different neighbours. These moves usually result in minimal alterations of the solution, as the previous example depicts, in order to help an algorithm to optimize the solution to the maximum extent and also to retain the already optimum parts of the solution and affect only the suboptimum parts. In the previous example, the parts of the solution are the parts of the tour."
So I believe my GetNextArrangement function would want to swap out a random item with an item unused in the set.
