Brute force for the bin packing problem - C++11

I am looking to implement the (intractable, but) optimal solution to the bin packing problem.
I know that the idea is to examine all possible assignments of items to bins and pick the one that uses the minimum number of bins, but I am stuck on how to actually write the code for that.
Description of the bin packing problem:
The bin packing problem is the problem of packing various objects of certain sizes into a minimum number of bins of a fixed size (where the bin size is usually smaller than the sum of all object sizes).
You are given an array containing the weight of each object (n objects) and an int representing the maximum capacity of each bin (all bins are the same size; assume you have n bins available).
I have the pseudocode, but I am stuck on how to actually implement it in C++:
recurse(int itemID)
    if pastLastItem(itemID)
        if betterThanBestSolution
            bestSolution = currentAssignment
        return
    for each bin i:
        putIntoBin(itemID, i)
        recurse(itemID + 1)
        removeFromBin(itemID, i)
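For what it's worth, here is a minimal C++11 sketch of that recursion. The Packer struct and all names are my own, and I've added two things the pseudocode leaves implicit: a capacity check before placing an item, and a break after trying one empty bin (all empty bins are interchangeable, so trying more than one only revisits symmetric assignments).

    #include <algorithm>
    #include <cstddef>
    #include <iostream>
    #include <vector>

    struct Packer {
        std::vector<int> weights;     // weight of each item
        int capacity;                 // capacity shared by all bins
        std::vector<int> binLoad;     // current load of each bin (n bins available)
        std::vector<int> assignment;  // assignment[i] = bin holding item i, or -1
        std::vector<int> best;        // best assignment found so far
        int bestBins;                 // number of bins used by `best`

        Packer(std::vector<int> w, int cap)
            : weights(std::move(w)), capacity(cap),
              binLoad(weights.size(), 0), assignment(weights.size(), -1),
              bestBins(static_cast<int>(weights.size()) + 1) {}

        void recurse(std::size_t item) {
            if (item == weights.size()) {                       // pastLastItem
                int used = static_cast<int>(std::count_if(
                    binLoad.begin(), binLoad.end(),
                    [](int load) { return load > 0; }));
                if (used < bestBins) {                          // betterThanBestSolution
                    bestBins = used;
                    best = assignment;                          // bestSolution = currentAssignment
                }
                return;
            }
            for (std::size_t i = 0; i < binLoad.size(); ++i) {
                if (binLoad[i] + weights[item] > capacity)      // skip bins it won't fit in
                    continue;
                binLoad[i] += weights[item];                    // putIntoBin(itemID, i)
                assignment[item] = static_cast<int>(i);
                recurse(item + 1);                              // recurse(itemID + 1)
                binLoad[i] -= weights[item];                    // removeFromBin(itemID, i)
                assignment[item] = -1;
                if (binLoad[i] == 0) break;                     // empty bins are interchangeable
            }
        }
    };

    int main() {
        Packer p({4, 8, 1, 4, 2, 1}, 10);
        p.recurse(0);
        std::cout << "minimum bins: " << p.bestBins << "\n";    // prints 2
    }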

A new Bin-packing?

I'm looking into a kind of bin-packing problem, but not quite the same.
The problem asks to put n items into the minimum number of bins without the total weight exceeding the capacity of the bins (the classical definition).
The difference is:
Each item has a weight and a bound, and the capacity of a bin is dynamically determined by the minimum bound among the items in that bin.
E.g.,
I have four items A[11,12], B[1,10], C[3,4], D[20,22] ([weight,bound]).
Now, if I put item A into a bin, call it b1, then the capacity of b1 becomes 12. If I then try to put item B into b1, it fails: the total weight would be 11+1 = 12, but the capacity of b1 would become 10 (B's bound), which is smaller than the total weight. So B is put into bin b2, whose capacity becomes 10. Now put item C into b2: this works because the total weight is 1+3 = 4 and the capacity of b2 becomes 4.
I don't know whether this problem has already been solved in some area under some name, or whether it is a variant of bin packing that has been discussed somewhere.
I don't know whether this is the right place to post the question; any help is appreciated!
Usually with algorithm design for NP-hard problems, it's necessary to reuse techniques rather than whole algorithms. Here, the algorithms for standard bin packing that use branch-and-bound with column generation carry over well.
The idea is that we formulate an enormous set cover instance where the sets are the sets of items that fit into a single bin. Integer programming is a good technique for normal set cover, but there are so many sets that we need to do something else, i.e., column generation. There is a one-to-one correspondence between sets and columns, so we rip out the part of the linear programming solver that uses brute force to find a good column to enter and replace it with a solver for what turns out to be the knapsack analog of this problem.
This modified knapsack problem is: given items with weights, profits, and bounds, find the most profitable set of items whose total weight is at most the minimum bound of the chosen items. The dynamic program for solving knapsack with small integer weights happily transfers over with no loss of efficiency. Just sort the items by descending bound; then, when forming sets whose most recent (smallest-bound) item is item i, the weight limit is just item i's bound.
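If it helps, here is a sketch of that dynamic program, with illustrative names of my own (Item, bestColumn). Items are processed in descending order of bound; whenever an item is taken as the set's smallest-bound member, its own bound is the weight limit:

    #include <algorithm>
    #include <vector>

    struct Item { int weight; double profit; int bound; };

    // Most profitable set of items whose total weight is at most the minimum
    // bound of its members (the column-generation subproblem sketched above).
    double bestColumn(std::vector<Item> items) {
        std::sort(items.begin(), items.end(),
                  [](const Item& a, const Item& b) { return a.bound > b.bound; });
        const int maxBound = items.empty() ? 0 : items.front().bound;
        const double NEG = -1e18;
        // dp[w] = best profit over already-processed (larger-bound) items of weight w
        std::vector<double> dp(maxBound + 1, NEG);
        dp[0] = 0.0;
        double best = 0.0;                  // the empty set is always allowed
        for (const Item& it : items) {
            // Case 1: `it` is the smallest-bound member, so the limit is it.bound.
            for (int w = it.bound; w >= it.weight; --w)
                if (dp[w - it.weight] > NEG / 2)
                    best = std::max(best, dp[w - it.weight] + it.profit);
            // Case 2: make `it` available to later (smaller-bound) items.
            for (int w = maxBound; w >= it.weight; --w)
                if (dp[w - it.weight] > NEG / 2)
                    dp[w] = std::max(dp[w], dp[w - it.weight] + it.profit);
        }
        return best;
    }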
The following is based on Anony-mouse's answer. I am not an algorithm expert, so consider this "just my two cents", for whatever they are worth.
I think Anony-mouse is correct in starting with the items of smallest bound. This is because a bin tends to shrink in capacity the more items you add to it; a bin's maximum capacity is fixed by the first item placed in it and can only shrink after that point.
So instead of starting with a large bin and having its capacity slowly reduced, and having to worry about taking out too-large items that previously fit, let's just try to keep the bins' capacities as constant as possible. If we can keep the bins' capacities stable, we can use "standard" algorithms that know nothing about "bound".
So I'd suggest this (a rough code sketch follows the list):

1. Group all items by bound.
   This will allow you to use a standard bin packing algorithm per group, because if all items have the same bound (i.e. the bound is constant), it can essentially be disregarded. All that the bound means now is that you know the resulting bins' capacity in advance.
2. Start with the group with the smallest bound and perform standard bin packing on its items.
   This will result in one or more bins whose capacity equals the bound of the items in them.
3. Proceed with the item group having the next-larger bound. See if there are any items that could still be put into an already existing bin (i.e. a bin produced by the previous steps).
   Note that the bound can again be ignored: since all pre-existing bins already have a smaller capacity than these additional items' bound, the bins' capacities cannot be affected; only weight is relevant, so you can use "standard" algorithms.
   I suspect this step is an instance of the (multiple) knapsack problem, so look towards knapsack algorithms to determine how to distribute these items over and into the pre-existing, partially filled bins.
4. It's possible that the item group from the previous step has only been partially processed and there are items left over. These will go into one or more new bins: basically, repeat step 2 for them.
5. Repeat the above steps (from 3 onwards) until no more items are left.
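Here's that outline as a rough C++ sketch, for illustration only: I've assumed plain first-fit decreasing as the "standard" inner algorithm (a knapsack-style redistribution in step 3 would do better), and all names are my own.

    #include <algorithm>
    #include <map>
    #include <vector>

    struct Item { int weight; int bound; };
    struct Bin  { int capacity; int load; };

    std::vector<Bin> packGrouped(const std::vector<Item>& items) {
        // 1. Group items by bound; std::map iterates in ascending bound order.
        std::map<int, std::vector<int>> groups;           // bound -> weights
        for (const Item& it : items) groups[it.bound].push_back(it.weight);

        std::vector<Bin> bins;
        for (auto& g : groups) {
            const int bound = g.first;
            std::vector<int>& ws = g.second;
            std::sort(ws.rbegin(), ws.rend());            // first-fit decreasing
            for (int w : ws) {
                if (w > bound) continue;                  // can't fit in any bin it may define
                // 3. Pre-existing bins already have capacity <= this bound,
                //    so only weight matters here.
                bool placed = false;
                for (Bin& b : bins)
                    if (b.load + w <= b.capacity) { b.load += w; placed = true; break; }
                // 2./4. Otherwise open a new bin; its capacity is this group's bound.
                if (!placed) bins.push_back(Bin{bound, w});
            }
        }
        return bins;
    }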
It can still be written as an ILP instance, like so:
Make binary variables x_{i,j} signifying whether item j goes into bin i, helper variables y_i signifying whether bin i is used, and helper variables c_i for the capacity of bin i; there are constants s_j (the size of item j), b_j (the bound of item j), and M (a large enough constant). Now:
minimize ∑_i y_i
subject to:
1: for all j: ∑_i x_{i,j} = 1
2: for all i, j: y_i ≥ x_{i,j}
3: for all i: ∑_j s_j · x_{i,j} ≤ c_i
4: for all i, j: c_i ≤ b_j + M · (1 − x_{i,j})
5: x_{i,j} ∈ {0, 1}
6: y_i ∈ {0, 1}
The constraints mean:
1. any item is in exactly one bin;
2. if an item is in a bin, then that bin is used;
3. the items in a bin do not exceed the capacity of that bin;
4. the capacity of a bin is no more than the lowest bound of the items that are in it (the big-M term prevents items that are not in the bin from constraining the capacity, provided you choose M no smaller than the highest bound);
5. and 6.: the variables are binary.
But the integrality gap can be atrocious.
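To make constraints 3 and 4 concrete: outside the ILP, a bin's capacity is simply the minimum bound among its items, as in this small illustrative checker (names are mine):

    #include <algorithm>
    #include <vector>

    struct Item { int weight; int bound; };

    bool feasibleBin(const std::vector<Item>& binItems) {
        if (binItems.empty()) return true;
        int load = 0, cap = binItems.front().bound;
        for (const Item& it : binItems) {
            load += it.weight;
            cap = std::min(cap, it.bound);   // constraint 4: c_i <= b_j for items in the bin
        }
        return load <= cap;                  // constraint 3: sum of sizes <= c_i
    }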
First of all, I might be totally wrong, and there might exist an algorithm that is even better than mine.
Bin packing is NP-hard and is solved efficiently in practice using classic heuristics like First Fit; there are also improvements on these, such as Korf's algorithm.
I aim to reduce this to normal bin packing by sorting the items by their bound. The steps are:
1. Sort items by bound. Sorting items by bound will help us in arranging the bins, as the limiting condition is the minimum of the bounds.
2. Insert the smallest item (by bound) into a bin.
3. Check whether the next item (sorted by bound) can coexist in this bin. If it can, keep it in the bin too. If not, try putting it in another bin, or create another bin for it.
4. Repeat the procedure until all items are placed. The procedure is repeated in ascending order of bounds.
I think this pretty much solves the problem. Please inform me if it doesn't; I am trying to implement the same. And if there are any suggestions or improvements, let me know too. :) Thank you

Reduce Subset Sum to Polyomino Packing

This is a homework assignment, so any help is appreciated.
I should prove that the following problem is NP-complete. The hint says that you should reduce the subset sum problem to this problem.
Given a set of shapes, like those below, and an m-by-n board, decide whether it is possible to cover the board fully with all the shapes. Note that the shapes may not be rotated.
For example, for a 3-by-5 board and the following pieces, the board can be covered like this:
Now, the important thing to note is that the subset sum instance we produce must have input length polynomial in m and n.
Any ideas for using another NP-complete problem are appreciated.
The easiest reduction is from the partition problem.
Suppose that we have a set of positive numbers that sum to 2n, and we want to know whether a subset of them sums to n.
We create a set of blocks whose lengths are the numbers and whose width is 1, then try to fit them into a rectangle of width 2 and length n. We succeed if and only if the partition problem is solvable for those numbers, with the two rows forming the partition. (For example, the numbers {2, 3, 3, 4}, which sum to 12, fit into a 2-by-6 rectangle exactly when they split into two rows of length 6, e.g. {2, 4} and {3, 3}.) So any partition problem can be reduced to a polyomino packing problem in linear time. Since the partition problem is NP-complete, we are done.
But they said subset sum. If they mean subset sum on positive numbers, then we can just use another trick. Suppose that our numbers sum to n and we want to know whether a subset sums to k. Then we just add two numbers, of size k+1 and size n-k+1, and try to solve the resulting partition problem. The two added numbers sum to n+2, more than half of the new total of 2n+2, so they cannot end up in the same half; if we succeed, the numbers sharing a half with n-k+1 form a solution to the subset sum problem (they sum to k). Since we've already reduced the partition problem to the polyomino packing problem, we are done.
If they intended subset sum with numbers that can be both positive and negative, then I don't see the reduction they suggested. But since I've managed to reduce two well-known NP-complete problems to this one, I think we're good.
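For illustration, here is a tiny sketch (names of my own choosing) of how these reductions materialize as a packing instance: each number becomes a width-1 block, and the board is 2 rows by half the augmented total.

    #include <numeric>
    #include <utility>
    #include <vector>

    // A block is (length, width); the board is rows x cols.
    struct Instance { int rows, cols; std::vector<std::pair<int,int>> blocks; };

    // Subset sum (positive numbers summing to n, target k) -> polyomino packing.
    Instance subsetSumToPacking(std::vector<int> numbers, int k) {
        const int n = std::accumulate(numbers.begin(), numbers.end(), 0);
        numbers.push_back(k + 1);            // the two helper numbers can never share
        numbers.push_back(n - k + 1);        // a row, forcing the split around k
        const int total = 2 * n + 2;         // new total; each row holds n + 1
        Instance inst{2, total / 2, {}};
        for (int x : numbers)
            inst.blocks.push_back(std::make_pair(x, 1));  // length x, width 1
        return inst;
    }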
As btilly said, we create a set of blocks whose lengths are the numbers and whose width is 1, with one additional helper piece (red). This alternative is simpler (subset sum is reduced directly), more visual, and avoids the issue that 2-by-1 pieces can be placed both vertically and horizontally and mess things up. All polyominoes can be placed in the rectangle (the subset summing to k placed in the top row, the full set of total sum n placed in the bottom row) only if there is a solution to the subset sum problem.

Sort object pairs into separate bins

I have N object pairs (master copy/slave copy), all of the same size. I wish to distribute the copies among M bins, each with a different capacity, so that no bin contains both the master and the slave copy of the same pair.
What's the most efficient algorithm? And more importantly, what's the most efficient algorithm to find out whether a solution exists for a given input (without actually generating the solution)?
Hard to imagine anything better than brute force: track the M bins in a priority queue by descending remaining capacity, and add each object pair to the first two bins in the queue; rebalance the queue and repeat. A solution exists if the total capacity of the M bins is >= 2*N (though for small M the largest bins can be a bottleneck; see the notes below).
That would seem to be complexity O(N * log M).
Note: for exactly three bins, no solution exists for N > M1 + M2, where Mn is the capacity of the n-th bin with bins sorted by descending capacity (n in range 0..M-1), regardless of the capacity of M0.
Likewise, for exactly two bins, solutions exist only for N <= M1.
A simple solution is:
1. Sort the M buckets in descending order of capacity: x1, x2, ..., xm.
2. Pick the two topmost buckets, assign an object pair to them, decrement the two buckets' available capacities, and rearrange the buckets. With a heap to keep track of the buckets, each step costs O(log M), so the whole run is O(N log M).
3. Keep repeating until all the objects are allocated (see the sketch below).
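A minimal sketch of the greedy described in both answers, assuming a max-heap keyed on remaining capacity (all names are mine):

    #include <cstddef>
    #include <queue>
    #include <utility>
    #include <vector>

    // Returns one (binOfMaster, binOfSlave) pair per object pair,
    // or an empty vector if the greedy runs out of room.
    std::vector<std::pair<int,int>>
    assignPairs(int nPairs, const std::vector<int>& capacities) {
        std::priority_queue<std::pair<int,int>> pq;   // (remaining capacity, bin index)
        for (std::size_t i = 0; i < capacities.size(); ++i)
            pq.push(std::make_pair(capacities[i], static_cast<int>(i)));

        std::vector<std::pair<int,int>> out;
        for (int p = 0; p < nPairs; ++p) {
            if (pq.size() < 2) return {};             // fewer than two usable bins left
            std::pair<int,int> a = pq.top(); pq.pop();
            std::pair<int,int> b = pq.top(); pq.pop();
            if (a.first == 0 || b.first == 0) return {};
            out.push_back(std::make_pair(a.second, b.second));
            if (--a.first > 0) pq.push(a);            // rebalance the queue
            if (--b.first > 0) pq.push(b);
        }
        return out;                                   // O(N log M) overall
    }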

greedy algorithms

I am new to algorithms and am currently studying using YouTube video tutorials/lectures and a book. I first watch the video, then read the book, and finally try a question from the book to make sure I have learned the topic correctly. I am currently up to greedy algorithms, and it is very confusing.
The book contains various problems, but I am having trouble understanding and answering one in particular.
Firstly, here is the problem (I've just copied the text):
There is a set of n objects of sizes {x1, x2, ..., xn} and a bin with capacity B. All of these are positive integers. Try to find a subset of these objects so that their total size is smaller than or equal to B, but as close to B as possible. All objects are 1-dimensional. For example, if the objects have sizes 4, 7, 10, 12, 15, and B = 20, then we should choose 4 and 15 with total size 19 (or, equivalently, 7 and 12).
For each of the following greedy algorithms, show that they are not optimal by creating a counter-example. Try to make your examples as bad as you can, where "badness" is measured by the ratio between the optimal and greedy solutions. Thus, if the best solution has value 10 and the greedy solution has value 5, then the ratio is 2.
How do I do this for the following?
1) Always choose the object with the largest size so that the total size of this and all
other objects already chosen does not exceed B. Repeat this for the remaining objects.
Assume the following instance of the problem:
You have a bin of size 2n, one element of size n+1, and the rest of the elements of size n.
It is easy to see that the optimum is two elements of size n (total 2n), while the greedy picks the element of size n+1 first and then cannot fit anything else.
Since this holds for every n, the ratio achieved against this greedy approach, 2n/(n+1), gets arbitrarily close to 2.
This sounds similar to the 0-1 knapsack problem where each item has a different size but the same value, meaning no item is preferred for the bin except by its size. In your code, you need to examine each item and compute the maximum total size achievable either with or without putting it into the bin, never exceeding the bin's capacity.
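For what it's worth, here is that include/exclude idea written out as a subset-sum dynamic program (a sketch; the function name is mine):

    #include <iostream>
    #include <vector>

    // reachable[s] == true iff some subset of the items seen so far sums to s.
    int closestToCapacity(const std::vector<int>& sizes, int B) {
        std::vector<bool> reachable(B + 1, false);
        reachable[0] = true;
        for (int x : sizes)
            for (int s = B; s >= x; --s)     // downward: each item used at most once
                if (reachable[s - x]) reachable[s] = true;
        for (int s = B; s >= 0; --s)
            if (reachable[s]) return s;      // largest achievable sum <= B
        return 0;
    }

    int main() {
        std::cout << closestToCapacity({4, 7, 10, 12, 15}, 20) << "\n";  // prints 19
    }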

Algorithm to optimally group list of values

I have several numbers. I need to group them into several groups, so that the sum of the numbers in each group is between a predefined min and max. The point is to leave as few numbers ungrouped as possible.
Input:
min, max: range for the sum of the numbers in a group
N1, N2, N3, ..., Ni: numbers to group
Output:
[N1, N3, N5], [Ni, Nj, Nk, Nm, ...], ...: groups where the sum of the numbers is between min and max
Na, Nb, Nc, ...: numbers left ungrouped
This problem could be viewed as bin packing into bins of size max, with a funny objective: minimize the number of items not packed into bins holding at least min. One idea from the bin-packing literature is that the "small" items (in this case, items that are small relative to max - min) are easy to pack but account for most of the combinatorial explosion of possibilities. Thus some approximation algorithms for bin packing do something clever with the big items and then fill in with the small ones. Another way to reduce the number of possibilities is to round the numbers so that they belong to a smaller set. It's somewhat obvious how to do that for bin packing (round up), but it's not clear what to do for this problem.
Okay, I'll give an example of how these ideas could be instantiated. Suppose that max = 2 and min = 1/2, and let's try to find a solution that's competitive with the optimum for max = 1 and min = 1/2. (That may sound terrible, but this sort of approximation guarantee, where OPT is held to a higher standard, is sometimes used in the literature.)
First round every item's size up to a power of 2. Very large items, of size 4 or greater, can't be packed. Large items, of size 2 or 1 or 1/2, are given their own bins. Small items, of size 1/4 or less, are dealt with as follows. Whenever two items of size 1/4 or less have the same size, combine them into one super-item. Pack all of the new items of size 1/2 into their own bins. The remainder has total size less than 1/2. If there is space in another bin, put them there. Otherwise, give them their own bin.
The quality of the resulting solution for max = 2 is at least as good as the quality of OPT for max = 1. Take the optimal solution for max = 1 and round the item sizes. The set of bad bins remains the same, because no item is smaller, and each bin stores less than 2 because each item is less than twice as large as it used to be. Now it suffices to show that the packing algorithm I gave for powers of 2 is optimal. I'll leave that as an exercise.
I don't expect this instantly to generalize into a full algorithm. I have to get back to work, but the approach I would take would be to force OPT to deal with max = 1 while ALG gets to use max = 1 + epsilon, substitute powers of (1 + epsilon) for powers of two in the rounding step, and then figure out how to pack the small items, probably using a dynamic program since greed likely won't work.
If you're not worried about efficiency, simply generate each possible grouping and choose the one that is correct and optimal in the sense you describe. Clearly, this works for any finite list of numbers (and is, by definition, optimal).
If efficiency is desired, the problem seems to become somewhat more difficult. :D I'll keep thinking.
EDIT: Come to think of it, this problem seems at least as hard as subset sum and, as such, I don't think there is a solution significantly better than the one I give (i.e., if it is NP-hard, no known polynomial-time algorithm can solve it).
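For completeness, here is a brute-force sketch of the exhaustive search described above (names are illustrative; exponential time, no pruning):

    #include <cstddef>
    #include <numeric>
    #include <utility>
    #include <vector>

    struct Grouper {
        std::vector<int> nums;
        int lo, hi;                                  // the predefined min and max
        std::vector<std::vector<int>> groups, best;
        int bestUngrouped;

        Grouper(std::vector<int> n, int lo_, int hi_)
            : nums(std::move(n)), lo(lo_), hi(hi_),
              bestUngrouped(static_cast<int>(nums.size()) + 1) {}

        // Assign each number to an existing group, a new group, or leave it out;
        // keep the assignment that leaves the fewest numbers ungrouped.
        void solve(std::size_t i, int ungrouped) {
            if (i == nums.size()) {
                for (const std::vector<int>& g : groups) {     // every sum in [lo, hi]?
                    int s = std::accumulate(g.begin(), g.end(), 0);
                    if (s < lo || s > hi) return;
                }
                if (ungrouped < bestUngrouped) { bestUngrouped = ungrouped; best = groups; }
                return;
            }
            for (std::vector<int>& g : groups) {               // join an existing group
                g.push_back(nums[i]);
                solve(i + 1, ungrouped);
                g.pop_back();
            }
            groups.push_back(std::vector<int>(1, nums[i]));    // open a new group
            solve(i + 1, ungrouped);
            groups.pop_back();
            solve(i + 1, ungrouped + 1);                       // leave it ungrouped
        }
    };

Calling solve(0, 0) leaves the optimal grouping in best and the number of leftover numbers in bestUngrouped.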
