Bin Packing using Dynamic Programming - algorithm

Problem Statement: You have n1 items of size s1, n2 items of size s2, and n3 items of size s3. You'd like to pack all of these items into bins each of capacity C, such that the total number of bins used is minimized.
My Solution:
Bin(C, N1, N2, N3) = max{ Bin(C - s1, N1 - 1, N2, N3) + s1   if s1 <= C and N1 > 0,
                          Bin(C - s2, N1, N2 - 1, N3) + s2   if s2 <= C and N2 > 0,
                          Bin(C - s3, N1, N2, N3 - 1) + s3   if s3 <= C and N3 > 0,
                          0                                  otherwise }
The above solution only fills a single bin efficiently. Can anybody suggest how to modify the above relation so that I get the total bins used for packing items efficiently?

Problem
You have n1 items of size s1 and n2 items of size s2. You must pack all of these items into bins, each of capacity C, such that the total number of bins used is minimised. Design a polynomial-time algorithm for such a packing.
Here is my solution to this problem, and it's very similar to what you're asking.
DP method
Suppose Bin(i, j) gives the minimum number of bins needed to pack i items of size s1 and j items of size s2. The base case is Bin(i, j) = 1 whenever i*s1 + j*s2 <= C (everything left fits into a single bin). Otherwise Bin(i, j) = min{Bin(i′, j′) + Bin(i − i′, j − j′)} over all splits with i + j > i′ + j′ > 0.
Complexity
There are O(n1*n2) states (i <= n1, j <= n2) and each state tries O(n1*n2) splits, so the total work is O(n1^2 * n2^2), which is polynomial in the number of items.
Example:
Let s1 = 3, n1 = 2, s2 = 2, n2 = 2, C = 4. Find the min bins needed, i.e., b.
i j b
- - -
0 1 1
0 2 1
1 0 1
1 1 2
1 2 2
2 0 2
2 1 3
2 2 3   -> (n1, n2) pair
So as you can see, 3 bins are needed.
Note that Bin(2,2) = min{
    Bin(2,1) + Bin(0,1),
    Bin(2,0) + Bin(0,2),
    Bin(1,2) + Bin(1,0),
    Bin(1,1) + Bin(1,1)}
  = min{4, 3, 3, 4}
  = 3
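To make the recurrence concrete, here is a small memoised sketch in Python (the function names and the one-bin base-case check are my own choices; it assumes the two-size version of the problem above):

from functools import lru_cache

def min_bins(n1, s1, n2, s2, C):
    @lru_cache(maxsize=None)
    def bins(i, j):
        if i == 0 and j == 0:
            return 0
        # base case: everything that is left fits into a single bin
        if i * s1 + j * s2 <= C:
            return 1
        best = float('inf')
        # otherwise try every proper split into two non-empty groups
        for a in range(i + 1):
            for b in range(j + 1):
                if 0 < a + b < i + j:
                    best = min(best, bins(a, b) + bins(i - a, j - b))
        return best
    return bins(n1, n2)

print(min_bins(2, 3, 2, 2, 4))   # the worked example above: prints 3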

Related

Finding the amount of combination of three numbers in a sequence which fulfills a specific requirement

The question is: given a number D and a sequence of N numbers, count the combinations of three numbers whose largest pairwise difference does not exceed D. For example:
D = 3, N = 4
Sequence of numbers: 1 2 3 4
Possible combinations: 1 2 3 (3-1 = 2 <= D), 1 2 4 (4 - 1 = 3 <= D), 1 3 4, 2 3 4.
Output: 4
What I've done: link
My approach: iterate through the sequence and, for the number currently being compared, find the smallest number whose difference from it exceeds D. Then count the combinations between those two positions with the current number fixed, i.e. a combination of n (the numbers lying between the two) taken 2. If even the biggest number in the sequence, minus the current number, does not exceed D, use a combination of all the elements taken 3.
N can be as big as 10^5 (and as small as 1), and D can be as big as 10^9 (and as small as 1).
Problem with my algorithm: overflow occurs when I compute a combination involving the 1st element and the 10^5-th element. How can I fix this? Is there a way to calculate such a large number of combinations without actually computing the factorials?
EDIT:
Overflow occurs in the worst case: the currently compared number is still at index 0 while every other number, minus the currently compared number, is still smaller than D. For example, the value at index 0 is 1, the value at index 10^5 is 10^5 + 1, and D is 10^9. My algorithm then attempts to calculate the factorial of 10^5 - 0, which overflows. The factorial would be used to calculate the combination of 10^5 taken 3.
When you look for the items within value range D in the sorted list and get an index difference of M, you need to calculate C(M,3).
But for such a binomial coefficient you don't need huge factorials:
C(M,3) = M! / (6 * (M-3)!) = M * (M-1) * (M-2) / 6
To diminish intermediate results even more:
A = (M - 1) * (M - 2) / 2
A = (A * M) / 3
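As a quick illustration (the helper name comb3 is mine, not from the answer), the whole computation stays within small integers:

def comb3(m):
    # C(m, 3) = m*(m-1)*(m-2)/6 without computing any factorial
    if m < 3:
        return 0
    a = (m - 1) * (m - 2) // 2   # exact: one of the two factors is even
    return a * m // 3            # exact: one of m, m-1, m-2 is divisible by 3

print(comb3(10**5))              # 166661666700000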
You didn't add the C++ tag to your question, so let me write the answer in Python 3 (it should be easy to translate it to C++):
N = int(input("N = "))
D = int(input("D = "))
# the sequence is assumed to be sorted in non-decreasing order, as in the example
v = [int(input("v[{}] = ".format(i))) for i in range(0, N)]
count = 0
i, j = 0, 1
while j + 1 < N:
    j += 1
    while v[j] - v[i] > D:
        i += 1
    d = j - i
    if d >= 2:
        count += (d - 1) * d // 2  # // is the integer division
print(count)
The idea is to move up the upper index j of the triples while dragging along the lower index i at the greatest distance d = j - i such that v[j] - v[i] <= D. For each such (i, j) pair there are 1 + 2 + ... + (d - 1) possible triples with j fixed, i.e., (d - 1)*d/2.

Get all the possible permutations and combinations given min and max number

Suppose I have two places: -, -. Each of these places has a max limit; say the first place has a max limit of 3 and the second place has a max limit of 7.
I also have two other numbers: totalmaxlimit and totalminlimit.
Ex: totalmaxlimit = 6
totalminlimit = 3
I want to write code that fills the two places with all possible permutations and combinations such that the sum of the two places is greater than or equal to 3 and less than or equal to 6.
Example:
3 0
3 1
2 2
2 1
2 4
Also,
2 6 would be a wrong result because the sum is greater than totalmaxlimit.
4 2 is also wrong because the first place has a max limit of 3.
Code in any language is fine. Thanks in advance.
Let's assume that:
1) The place is given by A, B coordinates
2) You have a totalMin (m) and a totalMax (M)
3) The rules are that A, B, and A+B should be >= m and <=M
4) The amount of values is given by M - m (e.g., if M = 10 and m = 0, we will have 10 valid values).
You can get a permutation by using the formula P = n! / (n-k)!, where n is your number of values and k is the valid numbers you can have.
So, for example, if m = 0, M = 6:
Permutations = (0,6), (1,5), (2,4), etc...
Basically you have the sum of (X_n, X_M) where, when n grows, M decreases: M = M - n.
I hope this helps for now, but I can provide a more professional formula if you like. The language could be Python because I think it would be easier. But first you need to get the algorithm; translating it to code is trivial.
Here's the code:
def permutations(min, max):
    results = []
    # try every pair (i, j) with min <= i, j <= max (assumption 3 above)
    for i in range(min, max + 1):
        for j in range(min, max + 1):
            if (i + j) >= min and (i + j) <= max:
                results.append([i, j])
    print(results)
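The snippet above only constrains the totals. A sketch that also respects the per-place limits from the question (the parameter names are mine, and 3 and 7 are just the example limits) could look like this:

def bounded_pairs(limit1, limit2, total_min, total_max):
    # every pair (i, j) with 0 <= i <= limit1, 0 <= j <= limit2
    # whose sum lies within [total_min, total_max]
    results = []
    for i in range(limit1 + 1):
        for j in range(limit2 + 1):
            if total_min <= i + j <= total_max:
                results.append((i, j))
    return results

print(bounded_pairs(3, 7, 3, 6))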

Computing all infix products for a monoid / semigroup

Introduction: Infix products for a group
Suppose I have a group
G = (G, *)
and a list of elements
A = {0, 1, ..., n} ⊂ ℕ
x : A -> G
If our goal is to implement a function
f : A × A -> G
such that
f(i, j) = x(i) * x(i+1) * ... * x(j)
(and we don't care about what happens if i > j)
then we can do that by pre-computing a table of prefixes
m(-1) = 1
m(i) = m(i-1) * x(i)
(with 1 on the right-hand side denoting the unit of G) and then implementing f as
f(i, j) = m(i-1)⁻¹ * m(j)
This works because
m(i-1) = x(0) * x(1) * ... * x(i-1)
m(j) = x(0) * x(1) * ... * x(i-1) * x(i) * x(i+1) * ... * x(j)
and so
m(i-1)⁻¹ * m(j) = x(i) * x(i+1) * ... * x(j)
after sufficient reassociation.
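For concreteness, here is a minimal sketch of the group case in Python (names are mine), using the non-zero reals under multiplication as G:

def build_prefixes(x):
    # m[i] holds x(0) * x(1) * ... * x(i-1); m[0] is the unit of G
    m = [1.0]
    for v in x:
        m.append(m[-1] * v)
    return m

def infix_product(m, i, j):
    # f(i, j) = m(i-1)^(-1) * m(j); with this 0-based prefix list that is m[j+1] / m[i]
    return m[j + 1] / m[i]

print(infix_product(build_prefixes([2.0, 3.0, 5.0]), 1, 2))   # 3.0 * 5.0 = 15.0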
My question
Can we rescue this idea, or do something not much worse, if G is only a monoid, not a group?
For my particular problem, can we do something similar if G = ([0, 1] ⊂ ℝ, *), i.e. we have real numbers from the unit line, and we can't divide by 0?
Yes, if G is ([0, 1] ⊂ ℝ, *), then the idea can be rescued, making it possible to compute ranged products in O(log n) time (or more accurately, O(log z) where z is the number of a in A with x(a) = 0).
For each i, compute the product m(i) = x(0)*x(1)*...*x(i), ignoring any zeros (so these products will always be non-zero). Also, build a sorted array Z of indices for all the zero elements.
Then the product of elements from i to j is 0 if there's a zero in the range [i, j], and m(j) / m(i-1) otherwise.
To find if there's a zero in the range [i, j], one can binary search in Z for the smallest value >= i in Z, and compare it to j. This is where the extra O(log n) time cost appears.
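A sketch of this idea in Python (names are mine): prefix products that skip the zeros, plus a binary search over the zero positions with bisect.

import bisect

def preprocess(x):
    # prefix[i] = product of the non-zero values among x[0..i-1]; zeros = sorted zero indices
    prefix, zeros = [1.0], []
    for idx, val in enumerate(x):
        if val == 0:
            zeros.append(idx)
            prefix.append(prefix[-1])
        else:
            prefix.append(prefix[-1] * val)
    return prefix, zeros

def range_product(prefix, zeros, i, j):
    z = bisect.bisect_left(zeros, i)      # smallest zero index >= i
    if z < len(zeros) and zeros[z] <= j:
        return 0.0                        # a zero inside [i, j] makes the product 0
    return prefix[j + 1] / prefix[i]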
General monoid solution
In the case where G is any monoid, it's possible to do precomputation of n products to make an arbitrary range product computable in O(log(j-i)) time, although it's a bit fiddlier than the more specific case above.
Rather than precomputing prefix products, compute m(i, j) for all i, j where j - i + 1 = 2^k for some k >= 1 and i is a multiple of 2^k, i.e. for aligned blocks of power-of-two length. (For k = 0 we don't need to compute anything, since m(i, i) is simply x(i).)
So we need to compute n/2 + n/4 + n/8 + ... total products, which is at most n-1 things.
One can construct an arbitrary interval [i, j] from O(log_2(j-i+1)) of these building blocks (and elements of the original array): pick the largest building block contained in the interval and append decreasing sized blocks on either side of it until you get to [i, j]. Then multiply the precomputed products m(x, y) for each of the building blocks.
For example, suppose your array is of size 10. For example's sake, I'll assume the monoid is addition of natural numbers.
i:  0  1  2  3  4  5  6  7  8  9
x:  1  3  2  4  2  3  0  8  2  1
2: ----- ----- ----- ----- -----
      4     6     5     8     3
4: ----------- -----------
        10          13
8: -----------------------
              23
Here, the 2, 4, and 8 rows show sums of aligned intervals of length 2, 4, 8 (ignoring bits left over if the array isn't a power of 2 in length).
Now, suppose we want to calculate x(1) + x(2) + x(3) + ... + x(8).
That's x(1) + m(2, 3) + m(4, 7) + x(8) = 3 + 6 + 13 + 2 = 24.
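Here is a Python sketch of the precomputation and the range query (names are mine). For the decomposition it greedily takes, from the left end, the largest aligned block that still fits inside [i, j]; that is not literally the middle-out scheme described above, but it also uses O(log(j-i+1)) blocks.

def build_blocks(x, op):
    # m[k][b] = fold of the aligned block x[b*2^k .. (b+1)*2^k - 1]
    m = [list(x)]
    size = 2
    while size <= len(x):
        prev = m[-1]
        m.append([op(prev[2 * b], prev[2 * b + 1]) for b in range(len(x) // size)])
        size *= 2
    return m

def range_fold(m, op, identity, i, j):
    res, p = identity, i
    while p <= j:
        # largest k such that the aligned block of length 2^k starting at p lies in [i, j]
        k = 0
        while (k + 1 < len(m)
               and p % (1 << (k + 1)) == 0
               and p + (1 << (k + 1)) - 1 <= j):
            k += 1
        res = op(res, m[k][p >> k])
        p += 1 << k
    return res

blocks = build_blocks([1, 3, 2, 4, 2, 3, 0, 8, 2, 1], lambda a, b: a + b)
print(range_fold(blocks, lambda a, b: a + b, 0, 1, 8))   # 24, as in the example above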

Count number of subsequences with given k modulo sum

Given an array a of n integers, count how many subsequences (non-consecutive as well) have sum % k = 0:
1 <= k < 100
1 <= n <= 10^6
1 <= a[i] <= 1000
An O(n^2) solution is easily possible, but a faster way, O(n log n) or O(n), is needed.
This is the subset sum problem.
A simple solution is this:
s = 0
dp[x] = how many subsequences we can build with sum x
dp[0] = 1, 0 elsewhere
for i = 1 to n:
    s += a[i]
    for j = s down to a[i]:
        dp[j] = dp[j] + dp[j - a[i]]
Then you can simply return the sum of all dp[x] such that x % k == 0. This has a high complexity though: about O(n*S), where S is the sum of all of your elements. The dp array must also have size S, which you probably can't even afford to declare for your constraints.
A better solution is to not iterate over sums larger than or equal to k in the first place. To do this, we will use 2 dp arrays:
dp1, dp2 = arrays of size k
dp1[0] = dp2[0] = 1, 0 elsewhere
for i = 1 to n:
    mod_elem = a[i] % k
    for j = 0 to k - 1:
        dp2[j] = dp2[j] + dp1[(j - mod_elem + k) % k]
    copy dp2 into dp1
return dp1[0]
Its complexity is O(n*k), which is fine for the given constraints (k < 100, n up to 10^6).
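A direct Python translation of that pseudocode (the function name is mine; note that dp[0] starts at 1 for the empty subsequence, so subtract 1 if the empty subsequence should not be counted):

def count_k_divisible(a, k):
    dp = [0] * k
    dp[0] = 1                              # the empty subsequence
    for v in a:
        r = v % k
        dp = [dp[j] + dp[(j - r) % k] for j in range(k)]
    return dp[0] - 1                       # non-empty subsequences with sum % k == 0

print(count_k_divisible([1, 4, 2, 3, 5, 6], 5))   # 13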
There's an O(n + k^2 lg n)-time algorithm. Compute a histogram c(0), c(1), ..., c(k-1) of the input array mod k (i.e., there are c(r) elements that are r mod k). Then compute
    prod_{r=0}^{k-1} (1 + x^r)^c(r)  mod (1 - x^k)
as follows, where the constant term of the reduced polynomial is the answer.
Rather than evaluate each factor with a fast exponentiation method and then multiply, we turn things inside out. If all c(r) are zero, then the answer is 1. Otherwise, recursively evaluate
    P = prod_{r=0}^{k-1} (1 + x^r)^floor(c(r)/2)  mod (1 - x^k)
and then compute
    Q = prod_{r=0}^{k-1} (1 + x^r)^(c(r) - 2*floor(c(r)/2))  mod (1 - x^k),
in time O(k^2) for the latter computation by exploiting the sparsity of the factors. The result is P^2 Q mod (1 - x^k), computed in time O(k^2) via naive convolution.
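A compact Python sketch of this scheme (all names are mine; cyc_mul is the naive O(k^2) cyclic convolution, and the odd leftover factors (1 + x^r) are multiplied in sparsely):

def count_zero_sum_subsets(a, k):
    c = [0] * k
    for v in a:
        c[v % k] += 1                      # histogram of residues

    def cyc_mul(p, q):
        # polynomial product mod (1 - x^k): naive cyclic convolution, O(k^2)
        out = [0] * k
        for i, pi in enumerate(p):
            if pi:
                for j, qj in enumerate(q):
                    out[(i + j) % k] += pi * qj
        return out

    def prod(c):
        if not any(c):
            return [1] + [0] * (k - 1)     # empty product = 1
        half = prod([e // 2 for e in c])   # P
        res = cyc_mul(half, half)          # P^2
        for r, e in enumerate(c):
            if e % 2:                      # sparse factor (1 + x^r) from Q
                res = [res[j] + res[(j - r) % k] for j in range(k)]
        return res

    return prod(c)[0]                      # constant term; includes the empty subset

print(count_zero_sum_subsets([1, 4, 2, 3, 5, 6], 5))   # 14 (13 non-empty subsets plus the empty one)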
Traverse a and count a[i] mod k; there ought to be k such counts.
Recurse and memoize over the partitions of k, 2*k, 3*k, ... etc. with parts less than or equal to k, treating residue 0 as the part k. A part r that occurs t times in a partition contributes C(count(r), t), where count(r) is how many array elements are r mod k; multiply these contributions within each partition and add all the products up.
For example, if k were 10, some of the partitions would be 1+2+7 and 1+2+3+4; but while memoizing, we would only need to calculate once how many pairs mod k in the array produce (1 + 2).
For example, k = 5, a = {1,4,2,3,5,6}:
counts of a[i] mod k (residues 0..4): {1,2,1,1,1}
products over the partitions of k (parts <= k, residue 0 written as the part 5):
5 => 1
4,1 => 2
3,2 => 1
3,1,1 => 1
products over the partitions of 2 * k with parts <= k:
5,4,1 => 2
5,3,2 => 1
4,3,2,1 => 2
5,3,1,1 => 1
products over the partitions of 3 * k with parts <= k:
5,4,3,2,1 => 2
answer = 13
{1,4} {4,6} {2,3} {5} {1,3,6}
{1,4,2,3} {1,4,5} {4,6,2,3} {4,6,5} {2,3,5} {1,3,5,6}
{1,4,2,3,5} {4,6,2,3,5}

Adjacent Bit Counts

Here is a problem from SPOJ, which states:
For a string of n bits x1, x2, x3, ..., xn, the adjacent bit count of the string, AdjBC(x), is given by
x1*x2 + x2*x3 + x3*x4 + ... + x(n-1)*xn
which counts the number of times a 1 bit is adjacent to another 1 bit. For example:
AdjBC(011101101) = 3
AdjBC(111101101) = 4
AdjBC(010101010) = 0
And the question is: write a program which takes as input integers n and k and returns the number of bit strings x of n bits (out of 2ⁿ) that satisfy AdjBC(x) = k.
I have no idea how to solve this problem. Can you help me solve it?
Thanks
Often in combinatorial problems, it helps to look at the set of values it produces. Using brute force I calculated the following table:
     k    0    1    2    3    4    5    6
 n   +-----------------------------------
 1   |    2    0    0    0    0    0    0
 2   |    3    1    0    0    0    0    0
 3   |    5    2    1    0    0    0    0
 4   |    8    5    2    1    0    0    0
 5   |   13   10    6    2    1    0    0
 6   |   21   20   13    7    2    1    0
 7   |   34   38   29   16    8    2    1
The first column is the familiar Fibonacci sequence, and satisfies the recurrence relation f(n, 0) = f(n-1, 0) + f(n-2, 0)
The other columns satisfy the recurrence relation f(n, k) = f(n - 1, k) + f(n - 1, k - 1) + f(n - 2, k) - f(n - 2, k - 1)
With this, you can do some dynamic programming:
INPUT: n, k
row1 <- [2,0,0,0,...] (k+1 elements)
row2 <- [3,1,0,0,...] (k+1 elements)
repeat (n-2) times
for j = k downto 1 do
row1[j] <- row2[j] + row2[j-1] + row1[j] - row1[j-1]
row1[0] <- row1[0] + row2[0]
swap row1 and row2
return row2[k]
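A Python version of this two-row scheme might look like the following (the handling of n = 1 is my own addition):

def adjbc_count(n, k):
    row1 = [2] + [0] * k          # f(1, .)
    row2 = [3] + [0] * k          # f(2, .)
    if k >= 1:
        row2[1] = 1
    if n == 1:
        return row1[k]
    for _ in range(n - 2):
        for j in range(k, 0, -1):  # go downwards so row1[j-1] is still the old value
            row1[j] = row2[j] + row2[j - 1] + row1[j] - row1[j - 1]
        row1[0] += row2[0]
        row1, row2 = row2, row1    # row2 now holds the most recent row
    return row2[k]

print(adjbc_count(7, 2))           # 29, matching the table above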
As a hint you can split it up into two cases: numbers ending in 0 and numbers ending in 1.
def f(n, k):
    return f_ending_in_0(n, k) + f_ending_in_1(n, k)

def f_ending_in_0(n, k):
    if n == 1: return k == 0
    # appending a 0 never creates a new adjacent pair
    return f(n - 1, k)

def f_ending_in_1(n, k):
    if n == 1: return k == 0
    # appending a 1 creates a new pair only if the previous bit is 1
    return f_ending_in_0(n - 1, k) + f_ending_in_1(n - 1, k - 1)
This gives the correct output but takes a long time to execute. You can apply standard dynamic programming or memoization techniques to get this to perform fast enough.
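For example, memoising the three functions above with functools.lru_cache already makes them fast enough for moderate n (the decorators are my addition; for very large n an iterative DP avoids Python's recursion limit):

from functools import lru_cache

@lru_cache(maxsize=None)
def f(n, k):
    return f_ending_in_0(n, k) + f_ending_in_1(n, k)

@lru_cache(maxsize=None)
def f_ending_in_0(n, k):
    if n == 1: return k == 0
    return f(n - 1, k)

@lru_cache(maxsize=None)
def f_ending_in_1(n, k):
    if n == 1: return k == 0
    return f_ending_in_0(n - 1, k) + f_ending_in_1(n - 1, k - 1)

print(f(7, 2))   # 29, matching the brute-force table in the other answer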
I am late to the party, but I have a linear time complexity solution.
For me this is more of a mathematical problem. You can read the detailed solution in this blog post written by me. What follows is a brief outline. I wish I could put some LaTeX, but SO doesn't allow that.
Suppose for given n and k, our answer is given by the function f(n,k). Using Beggar's Method, we can arrive at the following formula
f(n,k) = SUM C(k+a-1, a-1) * C(n-k+1-a, a), where a runs from 1 to floor((n-k+1)/2)
Here C(p,q) denotes binomial coefficients.
So to get our answer, we have to calculate both binomial coefficients for each value of a. We can build the binomial table beforehand; that approach gives our answer in O(n^2), since we have to fill the table.
We can improve the time complexity by using the recursion formula C(p,q) = (p * C(p-1,q-1))/q to calculate the current values of the binomial coefficients from their values in the previous loop iteration.
Our final code looks like this:
long long x = n - k, y = 1, p = n - k + 1, ans = 0;
ans += x * y;                                  // the a = 1 term: C(n-k,1) * C(k,0)
for (int a = 2; a <= p / 2; a++)
{
    x = (x * (p - 2*a + 1) * (p - 2*a + 2)) / (a * (p - a + 1));   // C(n-k+1-a, a)
    y = (y * (k + a - 1)) / (a - 1);                               // C(k+a-1, a-1)
    ans += x * y;
}
You can find the complete accepted solution in my GitHub repository.
