Optimal way of filling 2 knapsacks? - algorithm

The dynamic programming algorithm to optimally fill a knapsack works well in the case of one knapsack. But is there an efficient known algorithm that will optimally fill 2 knapsacks (capacities can be unequal)?
I have tried the following two approaches, and neither of them is correct:
1. First fill the first knapsack using the original DP algorithm for one knapsack, then fill the other knapsack with the remaining items.
2. First fill a knapsack of size W1 + W2 and then split the solution into two solutions (where W1 and W2 are the capacities of the two knapsacks).
Problem statement (see also Knapsack Problem at Wikipedia):
We have to fill the knapsack with a set of items (each item has a weight and a value) so as to maximize the value that we can get from the items while having a total weight less than or equal to the knapsack size.
We cannot use an item multiple times.
We cannot take a fraction of an item: every item must be either fully included or left out.

I will assume each of the n items can only be used once, and you must maximize your profit.
The original single-knapsack DP uses dp[w] = best profit you can obtain with capacity w:
for i = 1 to n do
    for w = maxW down to a[i].weight do
        if dp[w] < dp[w - a[i].weight] + a[i].gain
            dp[w] = dp[w - a[i].weight] + a[i].gain
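For concreteness, here is a minimal Python sketch of that single-knapsack DP (the function name and the example items are made up for illustration):

def knapsack(items, max_w):
    # items is a list of (weight, gain) pairs; returns the best achievable gain
    dp = [0] * (max_w + 1)          # dp[w] = best profit achievable with capacity w
    for weight, gain in items:
        # iterate capacities downwards so that each item is used at most once
        for w in range(max_w, weight - 1, -1):
            dp[w] = max(dp[w], dp[w - weight] + gain)
    return dp[max_w]

print(knapsack([(3, 4), (4, 5), (2, 3)], 6))   # -> 8 (take the items of weight 4 and 2)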
Now, since we have two knapsacks, we can use dp[w1, w2] = best profit you can obtain with capacity w1 in knapsack 1 and capacity w2 in knapsack 2:
for i = 1 to n do
    for w1 = maxW1 down to 0 do
        for w2 = maxW2 down to 0 do
            dp[w1, w2] = max
            {
                dp[w1, w2],                                                  <- we already have the best choice for this pair
                dp[w1 - a[i].weight, w2] + a[i].gain  if w1 >= a[i].weight,  <- put item i in knapsack 1
                dp[w1, w2 - a[i].weight] + a[i].gain  if w2 >= a[i].weight   <- put item i in knapsack 2
            }
Time complexity is O(n * maxW1 * maxW2), where maxW1 and maxW2 are the capacities of the two knapsacks. Note that this isn't very efficient if the capacities are large.
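A direct Python translation of this 2-D table might look like the sketch below (the item tuples are hypothetical; the guards make explicit that an item can still go into knapsack 2 even when it no longer fits into knapsack 1). Iterating both capacities downwards ensures each item is placed at most once, just as in the 1-D version.

def two_knapsacks(items, max_w1, max_w2):
    # items is a list of (weight, gain) pairs; returns the best total gain over both knapsacks
    # dp[w1][w2] = best profit using capacity w1 in knapsack 1 and w2 in knapsack 2
    dp = [[0] * (max_w2 + 1) for _ in range(max_w1 + 1)]
    for weight, gain in items:
        for w1 in range(max_w1, -1, -1):
            for w2 in range(max_w2, -1, -1):
                if w1 >= weight:                 # put the item in knapsack 1
                    dp[w1][w2] = max(dp[w1][w2], dp[w1 - weight][w2] + gain)
                if w2 >= weight:                 # put the item in knapsack 2
                    dp[w1][w2] = max(dp[w1][w2], dp[w1][w2 - weight] + gain)
    return dp[max_w1][max_w2]

print(two_knapsacks([(2, 3), (3, 4), (4, 5)], 4, 3))   # -> 9: (4, 5) in knapsack 1, (3, 4) in knapsack 2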

The original DP marks in the dp array which weights you can reach in the knapsack, and updates the array by considering the elements one after another.
In the case of 2 knapsacks you can use a 2-dimensional array, so dp[i][j] = 1 when you can put weight i into the first knapsack and weight j into the second. The update is similar to the original DP case.

The recursive formula, in case anybody is looking for it:
Given n items, such that item i has weight w_i and value p_i, and two knapsacks with capacities W1 and W2.
For every 0 <= i <= n, 0 <= a <= W1, 0 <= b <= W2, denote by M[i,a,b] the maximal value achievable using the first i items with capacities a and b.
For a < 0 or b < 0: M[i,a,b] = −∞
For i = 0, or a = b = 0: M[i,a,b] = 0
The formula:
M[i,a,b] = max{ M[i-1,a,b], M[i-1,a-w_i,b] + p_i, M[i-1,a,b-w_i] + p_i }
Every solution to the problem with i items either has item i in knapsack 1, in knapsack 2, or in none of them.
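The same recurrence can also be read top-down with memoization; below is a sketch only (function and variable names are illustrative), with the negative-capacity base case folded into the guards:

from functools import lru_cache

def two_knapsacks_recursive(weights, values, W1, W2):
    # M(i, a, b): best value using the first i items with remaining capacities a and b
    @lru_cache(maxsize=None)
    def M(i, a, b):
        if i == 0:
            return 0
        best = M(i - 1, a, b)                          # leave item i out
        if weights[i - 1] <= a:                        # put item i in knapsack 1
            best = max(best, M(i - 1, a - weights[i - 1], b) + values[i - 1])
        if weights[i - 1] <= b:                        # put item i in knapsack 2
            best = max(best, M(i - 1, a, b - weights[i - 1]) + values[i - 1])
        return best
    return M(len(weights), W1, W2)

print(two_knapsacks_recursive([2, 3, 4], [3, 4, 5], 4, 3))   # -> 9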

Related

Variant of Knapsack

I'm working on a program to solve a variant of the 0/1 Knapsack problem.
The original problem is described here: https://en.wikipedia.org/wiki/Knapsack_problem.
In case the link goes missing in the future, I will give you a summary of the 0/1 Knapsack problem (if you are familiar with it, skip this paragraph):
Let's say we have n items, each with weight wi and value vi. We want to put items in a bag that supports a maximum weight W, so that the total value inside the bag is the maximum possible without exceeding the weight limit. We only have one of each item, so items cannot be used multiple times. The objective is to maximize SUM(vi·xi) subject to SUM(wi·xi) <= W and xi = 0, 1 (xi indicates whether item i is in the bag).
For my case, there are small differences in both conditions and objective:
The weight of all items is 1, wi = 1, i = 1...n
I always want to put exactly half the items in the bag. So, the maximum weight capacity of the bag is half (rounded up) of the number of items: W = ceil[n/2], equivalently W = floor[(n+1)/2].
Also, the weight inside the bag must be equal to its maximum capacity: SUM(wi·xi) = W.
Finally, instead of maximizing the value of the items inside the bag, the objective is that the value of the items inside is as close as possible to the value of the items outside. Hence, my objective is to minimize |SUM[vi·xi] - SUM[vi(1-xi)]|, which simplifies into something like: minimize |SUM[vi(2xi - 1)]|.
Now, there is a pseudo-code for the original 0/1 Knapsack problem in the Wikipedia page above (you can find it on the bottom of this text), but I am having trouble adapting it to my scenario. Can someone help? (I am not asking for code, just for an idea, so language is irrelevant)
Thanks!
Wikipedia's pseudo-code for 0/1 Knapsack problem:
Assume w1, w2, ..., wn, W are strictly positive integers. Define
m[i,w] to be the maximum value that can be attained with weight less
than or equal to w using items up to i (first i items).
We can define m[i,w] recursively as follows:
m[0, w]=0
m[i, w] = m[i-1, w] if wi > w (the new item is more than the current weight limit)
m[i, w]= max(m[i-1, w], m[i-1, w-wi] + vi) if wi <= w.
The solution can then be found by calculating m[n,W].
// Input:
// Values (stored in array v)
// Weights (stored in array w)
// Number of distinct items (n)
// Knapsack capacity (W)
for j from 0 to W do:
    m[0, j] := 0
for i from 1 to n do:
    for j from 0 to W do:
        if w[i-1] <= j then:
            m[i, j] := max(m[i-1, j], m[i-1, j-w[i-1]] + v[i-1])
        else:
            m[i, j] := m[i-1, j]
Thanks to #harold, it seems like this problem is not a Knapsack problem, but a Partition problem. Part of the pseudo-code I was seeking is in the corresponding Wikipedia page: https://en.wikipedia.org/wiki/Partition_problem
EDIT: well, actually, Partition problem algorithms tell you whether a set of items can be partitioned into 2 sets of equal value or not. Supposing it can't, there are approximation algorithms which tell you whether the set can be partitioned into 2 sets with the difference of their values lower than d.
BUT, they don't tell you the resulting sub-sets, and that's what I was seeking.
I ended up finding a question here asking for that (here: Balanced partition), with a code example which I have tested and works fine.
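For completeness, here is a minimal sketch of a balanced-partition DP of that kind (this is not the tested code from the linked answer, just an illustration assuming non-negative integer values): it marks which (count, sum) pairs are reachable and then picks the reachable sum over ceil(n/2) items that is closest to half of the total.

def balanced_half_partition(values):
    n = len(values)
    half = (n + 1) // 2                 # the bag must hold exactly ceil(n/2) items
    total = sum(values)
    # reachable[c][s] = True if some subset of exactly c items sums to exactly s
    reachable = [[False] * (total + 1) for _ in range(half + 1)]
    reachable[0][0] = True
    for v in values:
        for c in range(half, 0, -1):    # iterate downwards so each item is used at most once
            for s in range(total, v - 1, -1):
                if reachable[c - 1][s - v]:
                    reachable[c][s] = True
    # pick the reachable bag total closest to half of the grand total
    return min((s for s in range(total + 1) if reachable[half][s]),
               key=lambda s: abs(2 * s - total))

print(balanced_half_partition([3, 1, 4, 2, 2]))   # -> 6, e.g. bag {3, 1, 2} vs. outside {4, 2}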

Knapsack with one additional constraint

This is the old and famous knapsack problem: Knapsack Problem
Here I have the knapsack problem with one additional constraint.
I have a knapsack of size W = 100000000 and N = 100 items. I wrote the dynamic programming solution for it; the complexity of my algorithm is O(100000000 * 100), which is too big in both time and space. But there is one condition here: either W ≤ 50000, or max{Vi : 1 ≤ i ≤ n} ≤ 500. So if my knapsack size is more than 50000, the maximum value of the items is limited.
So now I wonder: how can I reduce the time complexity of my algorithm using this condition? I thought the knapsack problem depends only on the size of the knapsack and the number of items, so how can the values of the items change my algorithm?
Instead of creating a table of size W*n, where each entry D[x][i] indicates the best (highest) value you can get with at most x weight using the first i items, use a table where D[x][i] is the minimal weight needed to reach a value of x using the first i elements:
D(0, i) = 0           for i >= 0
D(x, 0) = infinity    for x > 0
D(x, i) = infinity    for x < 0 or i < 0
D(x, i) = min{ D(x, i-1), D(x - value[i], i-1) + weight[i] }
When you are done, find max{ x | D(x, n) <= W } - this is the highest value you can get using at most W weight, and it is found by a linear scan of the last line of the DP matrix.
Checking which variation you need is done by a single scan of the data.
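A Python sketch of that value-indexed DP (names are illustrative): with N = 100 items of value at most 500, the total value is at most 50000, so the table size is bounded independently of W.

def knapsack_by_value(weights, values, W):
    # max total value with total weight <= W, DP indexed by value instead of weight
    V = sum(values)
    INF = float("inf")
    d = [0] + [INF] * V                 # d[x] = minimal weight needed to reach value exactly x
    for wt, val in zip(weights, values):
        # iterate values downwards so each item is used at most once
        for x in range(V, val - 1, -1):
            if d[x - val] + wt < d[x]:
                d[x] = d[x - val] + wt
    # the best value whose minimal weight still fits into the knapsack
    return max(x for x in range(V + 1) if d[x] <= W)

print(knapsack_by_value([5, 4, 6], [10, 40, 30], 10))   # -> 70 (take the items of weight 4 and 6)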

algorithm for the 0-1 Knapsack with 2 sacks?

Formally, say we have 2 sacks with capacities c1 and c2. There are N items with profits pi and weights wi. As in the 0-1 Knapsack problem, we need to fill c1 and c2 with these items in such a way that the overall profit is maximized. Assume pi and wi are positive integers!
For the 2-knapsack problem, does the recurrence relation below hold?
DP[i][j][k] is the maximum profit we can achieve from the first i items such that a weight of exactly j was used in knapsack #1 and a weight of exactly k was used in knapsack #2:
DP[i][j][k] = max(DP[i-1][j][k], DP[i][j-1][k], DP[i][j][k-1], DP[i][j-W[j]][k] + C[i], DP[i][j][k-W[k]] + C[i])
Suppose C[i] and W[i] are the value and weight of item i, respectively.
Assuming j - W[i] >= 0 and k - W[i] >= 0 (for ease of writing the formula; we could still write it without this assumption by adding two more cases), the equation should be:
DP[i][j][k] = max(DP[i-1][j][k], DP[i-1][j-W[i]][k]+C[i],DP[i-1][j][k-W[i]]+C[i])

Computing Combinations

I am facing difficulty in coming up with a solution for the problem given below:
We are given n boxes, each having a weight (meaning every ball in box B_i has weight C_i).
Each box contains some balls, specifically
{b_1, b_2, b_3, ..., b_n} (b_i is the count of balls in box B_i).
We have to choose m balls such that the sum of the weights of the m chosen balls is less than a given number T.
How many ways are there to do this?
First, let's have a look at a similar problem:
If you are looking to maximize the sum (such that it is still smaller than T), you are facing a variation of the subset-sum problem, which is NP-hard. The variation with a constant number of items is discussed in this thread: Sum-subset with a fixed subset size.
An alternative way to look at the problem is with a 2-dimensional knapsack problem, where weight = cost, and an extra dimension for number of elements. This concept is discussed in this thread: What's the fastest way to solve knapsack prob with two properties
Now, look at your problem: finding the number of possible ways to achieve a sum which is smaller than or equal to T is still NP-hard.
Assume you had a polynomial algorithm to do it; call it A.
Running A(T) and A(T-1) will give you two numbers. If A(T) > A(T-1), the answer to the subset-sum problem would have been true - otherwise it is false. So, given a polynomial solution to this problem, we could prove P=NP.
You can solve it by using dynamic programming techniques.
Let f[i][j][k] denote the number of ways to choose j balls from B_1 to B_i with sum of weights exactly k. The answer you want is then the sum of f[n][m][k] over all admissible k (all k below T).
Initially, let f[0][0][0] = 1 and every other entry be 0.
for i = 1 to n
    for j = 0 to m
        for k = 0 to T
            for x = 0 to min(b_i, j)   # choose x balls from B_i
                y = x * C_i
                if y <= k
                    f[i][j][k] += f[i-1][j-x][k-y] * Comb(b_i, x)
Comb(n,k) is the number of ways to choose k elements from n elements.
The time complexity is O(n m T b) where b is the maximum number of balls in a box.
Note that because T appears in the running time, this is only a pseudo-polynomial algorithm, so it does not contradict the NP-hardness argument above. In practice, when T is relatively small, this algorithm is still feasible.
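A runnable Python sketch of this counting DP, assuming integer weights and counting selections of exactly m balls with total weight at most T (the example input is made up):

from math import comb

def count_ways(ball_counts, weights, m, T):
    # ball_counts[i] = number of identical balls in box i, weights[i] = weight of each such ball
    # f[j][k] = number of ways to pick j balls with total weight exactly k, over the boxes seen so far
    f = [[0] * (T + 1) for _ in range(m + 1)]
    f[0][0] = 1
    for b, c in zip(ball_counts, weights):
        g = [[0] * (T + 1) for _ in range(m + 1)]
        for j in range(m + 1):
            for k in range(T + 1):
                if f[j][k] == 0:
                    continue
                for x in range(0, min(b, m - j) + 1):   # take x balls from this box
                    y = x * c
                    if k + y <= T:
                        g[j + x][k + y] += f[j][k] * comb(b, x)
        f = g
    return sum(f[m])                                    # total weight at most T

print(count_ways([2, 1], [1, 3], 2, 4))   # -> 3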

Distance measure between two sets of possibly different size

I have 2 sets of integers, A and B, not necessarily of the same size. For my needs, I take the distance between each 2 elements a and b (integers) to be just abs(a-b).
I am defining the distance between the two sets as follows:
If the sets are of the same size, minimize the sum of distances of all pairs [a,b] (a from A and b from B), minimization over all possible 'pairs partitions' (there are n! possible partitions).
If the sets are not of the same size, let's say A of size m and B of size n, with m < n, then minimize the distance from (1) over all subsets of B which are of size m.
My question is whether the following algorithm (just an intuitive guess) gives the right answer, according to the definition written above.
Construct a matrix D of size m X n, with D(i,j) = abs(A(i)-B(j))
Find the smallest element of D, accumulate it, and delete the row and the column of that element. Accumulate the next smallest entry, and keep accumulating until all rows and columns are deleted.
for example, if A={0,1,4} and B={3,4}, then D is (with the elements above and to the left):
3 4
0 3 4
1 2 3
4 1 0
And the distance is 0 + 2 = 2, coming from pairing 4 with 4 and 3 with 1.
Note that this problem is referred to sometimes as the skis and skiers problem, where you have n skis and m skiers of varying lengths and heights. The goal is to match skis with skiers so that the sum of the differences between heights and ski lengths is minimized.
To solve the problem you could use minimum weight bipartite matching, which requires O(n^3) time.
Even better, you can achieve O(n^2) time with O(n) extra memory using the simple dynamic programming algorithm below.
Optimally, you can solve the problem in linear time if the points are already sorted using the algorithm described in this paper.
O(n^2) dynamic programming algorithm:
if (size(A) > size(B))
    swap(A, B);                       // make A the smaller set
sort(A);
sort(B);
opt = array(size(B));
nopt = array(size(B));
// base row: best cost of matching A[0] within {B[0], ..., B[i]}
opt[0] = abs(A[0] - B[0]);
for (i = 1; i < size(B); i++)
    opt[i] = min(opt[i - 1], abs(A[0] - B[i]));
for (i = 1; i < size(A); i++) {
    fill(nopt, infinity);
    for (j = 1; j < size(B); j++)
        nopt[j] = min(nopt[j - 1], opt[j - 1] + abs(A[i] - B[j]));
    swap(opt, nopt);                  // advance to the next row of the DP
}
return opt[size(B) - 1];
After each iteration i of the outer for loop above, opt[j] contains the optimal solution matching {A[0],..., A[i]} using the elements {B[0],..., B[j]}.
The correctness of this algorithm relies on the fact that in any optimal matching if a1 is matched with b1, a2 is matched with b2, and a1 < a2, then b1 <= b2.
In order to get the optimum, solve the assignment problem on D.
The assignment problem finds a perfect matching in a bipartite graph such that the total edge weight is minimized, which maps perfectly to your problem. It is also in P.
EDIT to explain how OP's problem maps onto assignment.
For simplicity of explanation, extend the smaller set with special elements e_k.
Let A be the set of workers, and B be the set of tasks (the contents are just labels).
Let the cost be the distance between an element in A and B (i.e. an entry of D). The distance between e_k and anything is 0.
Then, we want to find a perfect matching of A and B (i.e. every worker is matched with a task), such that the cost is minimized. This is the assignment problem.
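For example, recent versions of SciPy ship a solver for exactly this, scipy.optimize.linear_sum_assignment; it accepts a rectangular cost matrix directly, so in code the padding elements e_k are not even needed:

import numpy as np
from scipy.optimize import linear_sum_assignment

A = np.array([0, 1, 4])
B = np.array([3, 4])

# cost matrix D(i, j) = |A[i] - B[j]|, exactly as defined in the question
D = np.abs(A[:, None] - B[None, :])

row, col = linear_sum_assignment(D)                      # optimal assignment of min(|A|, |B|) pairs
print(D[row, col].sum())                                 # -> 2
print([(int(a), int(b)) for a, b in zip(A[row], B[col])])  # -> [(1, 3), (4, 4)]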
No, it's not the best answer; for example:
With A = {3, 7} and B = {0, 4} you will choose {(3,4), (0,7)} and the distance is 8, but you should choose {(3,0), (4,7)}, in which case the distance is 6.
Your answer gives a good approximation to the minimum, but not necessarily the true minimum. You are following a "greedy" approach, which is generally much easier and gives good results, but cannot guarantee the best answer.
