Find price for Rod cutting - algorithm

Given the length of a rod and the price P for the first 3 lengths, we are to fill in the best possible price we can get for each of the remaining lengths, assuming we can cut the longer pieces as needed.
L = 1 2 3 4 5 6 7 8
p = 3 8 12
We basically want to get the maximum price we can get for each missing length price.
My approach
I believe that since we are given the best possible price for a rod of length 1,2, and 3 we can generate all possible combinations for the next rods.
For example to get price of rod where L = 4
price of rod where L = 1 + rod where L = 3 = 15
price of rod where L = 2 + rod where L = 2 = 16
Therefore price of rod where L = 4 is 16, since 16 > 15.
For example to get price of rod where L = 5
price of rod where L = 1 + rod where L = 2 and rod where L = 2 = 19
price of rod where L = 3 + rod where L = 2 = 20
price of rod where L = 4 + rod where L = 1 = 19
Therefore price of rod where L = 5 is 20, since 20 > 19.
So this is the approach I am following. However, I am not sure if I am correct. I would appreciate it if someone could verify this approach and also help me derive a formula from it. I am not looking for code, as understanding the problem is enough for me to write the code.

You can check the explanation of a variation of this problem in CLRS (section 15.1, page 360). The problem is called the Rod Cutting problem.
Your approach is correct, and you can formalize it as a recurrence relation.
f(n) = max(f(i) + f(n - i)), 1 <= i <= n - 1
where f(n) is the maximum price obtainable for a rod of length n, and f(1), f(2), f(3) are the given prices.
Using memoization, this can be calculated in O(n^2).
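That bottom-up calculation can be sketched as follows, using the question's sample prices 3, 8, 12 for lengths 1-3 (the helper name `fill_prices` is my own):

```python
def fill_prices(p, max_len):
    # p maps length -> best known price; longer lengths are derived from it
    best = dict(p)
    for length in range(2, max_len + 1):
        # try every split into two parts (i, length - i)
        for i in range(1, length // 2 + 1):
            cand = best.get(i, 0) + best.get(length - i, 0)
            if cand > best.get(length, 0):
                best[length] = cand
    return best

best = fill_prices({1: 3, 2: 8, 3: 12}, 8)
# reproduces the worked examples above: best[4] == 16, best[5] == 20
```

Each length is computed after all shorter lengths, so both halves of every split are already optimal when they are combined.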

Your approach is correct.
It can also be done in another way as answered by MrGreen (https://stackoverflow.com/a/29352580/13106102)
Let, B(i) = optimal price for cutting a rod of length i units and p(i) = price of a rod of length i units.
Formula 1: B(i) = max(1<=k<=floor(i/2)) {B(k) + B(i-k)} and P(i)
Formula 2: B(i) = max(1<=k<=i) {p(k) + B(i-k)}
Consider a rod of length 4, it can be cut in the following ways :
1) uncut of length 4
2) 3, 1
3) 2, 2
4) 2, 1, 1
5) 1, 3
6) 1, 2, 1
7) 1, 1, 2
8) 1, 1, 1, 1
According to Formula 1:
option 1 corresponds to P(4)
option 2,5,6,7,8 corresponds to B(1) + B(3)
option 3,4,6,7,8 corresponds to B(2) + B(2)
According to Formula 2:
option 1 corresponds to P(4)
option 2 corresponds to P(3) + B(1)
option 3,4 corresponds to P(2) + B(2)
option 5,6,7,8 corresponds to P(1) + B(3)
So to conclude, Formulas 1 and 2 both count the optimal solution, but in different ways; Formula 2 is more compact and makes fewer recursive calls than Formula 1.

Is there any algorithm to improve my code?

SEAWEED
You're given k days and n seaweed.
(1 ≤ n ≤ 1000, 1 ≤ k ≤ 10^17)
On day 0, you have n seaweed, all at level 1.
Each day, every seaweed at level i produces i new seaweed at level 1; these new level-1 seaweed only start reproducing after the day ends.
Every existing seaweed at level i then becomes level i+1.
After k days, return the total number of seaweed.
(I'm very sorry if you don't understand the problem; I'm very bad at translating.)
EXAMPLE:
INPUT : 3 3
OUTPUT : 39
EXPLANATION:
DAY 0 : 3 SEAWEED
DAY 1 : 3 Level 1 , 3 Level 2 ...
Total seaweed at day 1 = 6
DAY 2 : 3 + 3 * 2 Level 1 (there are 3 level 1 and 3 level 2, so 3 * 1 + 3 * 2 = 9), 3 Level 2 , 3 Level 3
Total seaweed at day 2 = 15
DAY 3: 9 + 3 * 2 + 3 * 3 = 24 (at day 2 there is 9 level 1, 3 level 2 and 3 level 3) Level 1 , 3 + 3*2 = 9 Level 2 , 3 Level 3 , 3 Level 4
Total seaweed at day 3 = 39
TOTAL OF SEAWEED : 39
Can you help me find an algorithm for this problem, and shorten my problem into one sentence?
My code doesn't seem fast enough.
Here's my code for the problem:
def solver(n, k):
    storage = [n]  # storage[j] = number of seaweed at level j+1
    for i in range(k):
        reproduction = 0
        for j in range(len(storage)):
            reproduction += storage[j] * (j + 1)
        storage = [reproduction] + storage  # new level-1 seaweed; the rest shift up a level
    return sum(storage) % (10**9 + 7)
Some more test cases:
INPUT : n = 4, k = 3
OUTPUT : 52
INPUT : n = 5, k = 5
OUTPUT : 445
The solution can be expressed through Fibonacci numbers:
solver(n, k) = n * Fib(2*k + 1)
and Fibonacci numbers for extremely large k (modulo 10**9+7) can be calculated with the matrix exponentiation method.
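This closed form can be sketched with fast-doubling Fibonacci, which is equivalent to 2x2 matrix exponentiation; the modulus 10**9+7 matches the question's code, and the names here are my own:

```python
MOD = 10**9 + 7

def fib_pair(k):
    # fast doubling: returns (F(k), F(k+1)) mod MOD, with F(0) = 0, F(1) = 1
    if k == 0:
        return (0, 1)
    a, b = fib_pair(k // 2)
    c = a * ((2 * b - a) % MOD) % MOD   # F(2m)   = F(m) * (2*F(m+1) - F(m))
    d = (a * a + b * b) % MOD           # F(2m+1) = F(m)^2 + F(m+1)^2
    return (c, d) if k % 2 == 0 else (d, (c + d) % MOD)

def solver_fast(n, k):
    return n * fib_pair(2 * k + 1)[0] % MOD
```

This needs only O(log k) multiplications, so k up to 10^17 is no problem, and it matches the examples above (3 3 -> 39, 5 5 -> 445).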
The first insight is that the function is linear in n. You can imagine each of the n initial seaweed plants as a separate lineage; their descendants do not interfere with each other, so if you double n, you double the answer. So if you solve f(1, k) then you can get f(n, k) simply by multiplying by n. You could -- by some slow calculation -- make a table of values of f(1, k) for many values of k, then compute f(n, k) for any (n, k) that is requested.
The second insight is to work out e.g. f(1, 5) on paper and see the patterns in the numbers. If you are a mathematician at heart, you will recognise some terms from the Fibonacci sequence. (If you are really a mathematician at heart, you will prove the pattern.) Then you can write the formula for f(n, k), and some fast code to calculate it.

Computing all infix products for a monoid / semigroup

Introduction: Infix products for a group
Suppose I have a group
G = (G, *)
and a list of elements
A = {0, 1, ..., n} ⊂ ℕ
x : A -> G
If our goal is to implement a function
f : A × A -> G
such that
f(i, j) = x(i) * x(i+1) * ... * x(j)
(and we don't care about what happens if i > j)
then we can do that by pre-computing a table of prefixes
m(-1) = 1
m(i) = m(i-1) * x(i)
(with 1 on the right-hand side denoting the unit of G) and then implementing f as
f(i, j) = m(i-1)⁻¹ * m(j)
This works because
m(i-1) = x(0) * x(1) * ... * x(i-1)
m(j) = x(0) * x(1) * ... * x(i-1) * x(i) * x(i+1) * ... * x(j)
and so
m(i-1)⁻¹ * m(j) = x(i) * x(i+1) * ... * x(j)
after sufficient reassociation.
My question
Can we rescue this idea, or do something not much worse, if G is only a monoid, not a group?
For my particular problem, can we do something similar if G = ([0, 1] ⊂ ℝ, *), i.e. we have real numbers from the unit line, and we can't divide by 0?
Yes, if G is ([0, 1] ⊂ ℝ, *), then the idea can be rescued, making it possible to compute ranged products in O(log n) time (or more accurately, O(log z) where z is the number of a in A with x(a) = 0).
For each i, compute the product m(i) = x(0)*x(1)*...*x(i), ignoring any zeros (so these products will always be non-zero). Also, build a sorted array Z of indices for all the zero elements.
Then the product of elements from i to j is 0 if there's a zero in the range [i, j], and m(j) / m(i-1) otherwise.
To find if there's a zero in the range [i, j], one can binary search in Z for the smallest value >= i in Z, and compare it to j. This is where the extra O(log n) time cost appears.
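A sketch of that zero-aware scheme (hypothetical helper names; `bisect` performs the binary search into Z):

```python
import bisect

def preprocess(xs):
    # prefix[i] = product of the nonzero entries among xs[0..i-1]
    prefix = [1.0]
    zeros = []          # sorted indices of the zero entries (the array Z)
    for i, v in enumerate(xs):
        if v == 0:
            zeros.append(i)
            prefix.append(prefix[-1])   # skip zeros so prefixes stay invertible
        else:
            prefix.append(prefix[-1] * v)
    return prefix, zeros

def range_product(prefix, zeros, i, j):
    # smallest zero index >= i; if it is <= j, the range product is 0
    k = bisect.bisect_left(zeros, i)
    if k < len(zeros) and zeros[k] <= j:
        return 0.0
    return prefix[j + 1] / prefix[i]
```

Note that dividing prefixes only works because the zeros were excluded, which is exactly the "rescue" described above.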
General monoid solution
In the case where G is any monoid, it's possible to precompute n products so that an arbitrary range product is computable in O(log(j-i)) time, although it's a bit fiddlier than the more specific case above.
Rather than precomputing prefix products, compute m(i, j) for all aligned power-of-two intervals, i.e. all i, j with j-i+1 = 2^k for some k >= 0 and 2^k dividing i. In fact, for k = 0 we don't need to compute anything, since m(i, i) is simply x(i).
So we need to compute n/2 + n/4 + n/8 + ... total products, which is at most n-1 things.
One can construct an arbitrary interval [i, j] from O(log2(j-i+1)) of these building blocks (and elements of the original array): pick the largest building block contained in the interval and append decreasing-sized blocks on either side of it until you reach [i, j]. Then multiply the precomputed products m(x, y) for each of the building blocks.
For example, suppose your array is of size 10. For example's sake, I'll assume the monoid is addition of natural numbers.
i: 0 1 2 3 4 5 6 7 8 9
x: 1 3 2 4 2 3 0 8 2 1
2: ---- ---- ---- ---- ----
4 6 5 8 3
4: ----------- ----------
10 13
8: ----------------------
23
Here, the 2, 4, and 8 rows show sums of aligned intervals of length 2, 4, 8 (ignoring bits left over if the array isn't a power of 2 in length).
Now, suppose we want to calculate x(1) + x(2) + x(3) + ... + x(8).
That's x(1) + m(2, 3) + m(4, 7) + x(8) = 3 + 6 + 13 + 2 = 24.
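The block scheme above can be sketched over this example's monoid (natural numbers with addition). Instead of placing the largest block in the middle and growing outward, this variant greedily consumes the largest aligned block from the left, which touches the same O(log) number of blocks; all names here are my own:

```python
def build_blocks(xs, op):
    # blocks[k][b] = fold of the aligned interval xs[b*2^k : (b+1)*2^k]
    blocks = [list(xs)]
    size = 1
    while size * 2 <= len(xs):
        prev = blocks[-1]
        size *= 2
        blocks.append([op(prev[2 * b], prev[2 * b + 1])
                       for b in range(len(xs) // size)])
    return blocks

def range_fold(blocks, op, identity, i, j):
    # fold xs[i..j] (inclusive) out of precomputed aligned blocks
    result = identity
    while i <= j:
        k = 0
        # grow k while the block [i, i + 2^(k+1) - 1] stays aligned and inside [i, j]
        while (k + 1 < len(blocks)
               and i % (2 ** (k + 1)) == 0
               and i + 2 ** (k + 1) - 1 <= j):
            k += 1
        result = op(result, blocks[k][i // (2 ** k)])
        i += 2 ** k
    return result

xs = [1, 3, 2, 4, 2, 3, 0, 8, 2, 1]
add = lambda a, b: a + b
blocks = build_blocks(xs, add)
# range_fold(blocks, add, 0, 1, 8) combines x(1), m(2,3), m(4,7), x(8), as in the text
```

Crucially, blocks are only ever combined left to right, so no inverses and no commutativity are needed; associativity alone justifies the regrouping.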

confusion about rod cutting algorithm - dynamic programming

I recently saw a rod cutting problem, where B(i) = optimal price for cutting a rod of length i units and p(i) = price of a rod of length i units.
The algorithm given is something like this:
B(i) = max(1<=k<=i) {p(k) + B(i-k)}
Shouldn't it be something like this:
B(i) = max(1<=k<=floor(i/2)) {B(k) + B(i-k)}
where B(1) = p(1);
so that both parts have the optimal cost, instead of the cost of a single uncut rod for one part and the optimal cost for the second part.
for example: B(4) = max{ (B(1) + B(3)); (B(2) + B(2)) }
instead of max{ (p(1) + B(3)); (p(2) + B(2)); (p(3) + B(1)) }
Can someone please explain this?
Actually the formula is correct. You have B(i) = max(1<=k<=i) {p(k) + B(i-k)}. Assume you have a rod of length i. If you cut it, you cut off a piece of length k, where k is between 1 and i, and go on cutting the remaining part. So overall you get p(k) (the price of the first piece, which you have decided not to cut any further) plus the optimal price of the remaining part, B(i-k). This is precisely what the formula does.
Your solution will also do the job, but it has a slight drawback: the solution for each subproblem depends on the solutions of two (instead of one) simpler subproblems. I believe that because of this it will perform worse on average. Of course, having a subproblem depend on several simpler problems is not forbidden or wrong.
Let us assume that the optimal price of the rod of length i is obtained by cutting the rod into p parts of lengths l1, l2, ..., lp such that i = l1 + l2 + ... + lp and l1 <= l2 <= ... <= lp (for simplicity).
A piece of length l1 appearing in the optimal solution means that breaking that piece into smaller pieces would not increase its price. Hence for a rod piece of length l1 we can say that b[l1] = p[l1]. Similarly, b[l2] = p[l2], b[l3] = p[l3], ..., b[lp] = p[lp]. => b(i) = b(l1) + b(l2) + ... + b(lp) is optimal ................. Condition 1
Now consider the case of a rod of length l1+l2. The claim is that b(l1+l2) = b(l1) + b(l2) is optimal. Assume it is not the case. Then there exists an L such that b(l1+l2) = b(L) + b(l1+l2-L) is optimal, i.e. there exist rod pieces of lengths L and (l1+l2-L) such that:
b(L) + b(l1+l2-L) > b(l1) + b(l2).
=> b(l1) + b(l2) + b(l3) +..+ b(lp) < b(L) + b(l1+l2-L) +b(l3) +…+ b(lp).
=> Which is a contradiction { See Condition 1}
=> b(l1+l2) = b(l1) + b(l2) is optimal
=> Similarly b(l2+l3+l4) = b(l2) + b(l3) + b(l4) is optimal and so on.
Now we have a recurrence b(i) = b(k) + b(i-k) for 1<=k<i.
For k=l1, b(i) = b(l1) + b(i-l1) = p[l1] + b(i-l1).
For k=l1+l2, b(i) = b(l1+l2) + b(i-l1-l2)
= b(l1+l2) + b(l3 + l4 +…+lp)
= [b(l1) + b(l2)] + b(l3 + l4 +…+lp)
= b(l1) + [b(l2) + b(l3 + l4 +…+lp)]
= b(l1) + b(l2+l3+l4+…+lp)
= b(l1) + b(i-l1)
= p[l1] + b(i-l1)
Or for k= l1+l2, b(i) = p[k’] + b(i-k’) where k’=l1.
So to conclude: if we want to find the optimal solution for a rod of length i, and we break the rod into two parts of lengths (l1+l2) and (i-l1-l2) and then recursively find optimal solutions for the two pieces, we end up finding an optimal piece of length l1 and an optimal solution for a rod of length i-l1. Thus we can say:
b(i) = b(k) + b(i-k ) = p[k’] + b(i-k’) for 1<=k,k’<i.
The formula is correct. I think the confusion arises when we think of both formulas to be replacement of the other.
Though they count the same phenomena, it is done in two different ways:
Let, B(i) = optimal price for cutting a rod of length i units and
p(i) = price of a rod of length i units.
Formula 1: B(i) = max(1<=k<=floor(i/2)) {B(k) + B(i-k)} and P(i)
Formula 2: B(i) = max(1<=k<=i) {p(k) + B(i-k)}
Consider a rod of length 4,
it can be cut in the following ways :
1) uncut of length 4
2) 3, 1
3) 2, 2
4) 2, 1, 1
5) 1, 3
6) 1, 2, 1
7) 1, 1, 2
8) 1, 1, 1, 1
According to Formula 1:
option 1 corresponds to P(4)
option 2,5,6,7,8 corresponds to B(1) + B(3)
option 3,4,6,7,8 corresponds to B(2) + B(2)
According to Formula 2:
option 1 corresponds to P(4)
option 2 corresponds to P(3) + B(1)
option 3,4 corresponds to P(2) + B(2)
option 5,6,7,8 corresponds to P(1) + B(3)
So to conclude, Formulas 1 and 2 both count the optimal solution, but in different ways; Formula 2 is more compact and makes fewer recursive calls than Formula 1.
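The two formulas can be checked against each other directly. Here is a sketch using the sample prices from CLRS §15.1 (1, 5, 8, 9, 10, 17, 17, 20 for lengths 1-8); the function name is my own:

```python
def rod_cut_both(prices):
    # prices[k] = price of an uncut rod of length k, for k = 1..n
    n = max(prices)
    B1 = {0: 0}  # Formula 1: split into two optimally-cut parts, or leave uncut
    B2 = {0: 0}  # Formula 2: cut off one uncut piece, cut the rest optimally
    for i in range(1, n + 1):
        B1[i] = max([prices[i]] +
                    [B1[k] + B1[i - k] for k in range(1, i // 2 + 1)])
        B2[i] = max(prices[k] + B2[i - k] for k in range(1, i + 1))
    return B1, B2

p = {1: 1, 2: 5, 3: 8, 4: 9, 5: 10, 6: 17, 7: 17, 8: 20}
B1, B2 = rod_cut_both(p)
# the two formulas agree on every length; e.g. both give 22 for length 8
```

Note how Formula 2's inner loop runs over all k up to i, yet references only one previously computed B value per candidate, while Formula 1 references two.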

solving a recurrence relation

Ok, I'm struggling with Knuth's Concrete Mathematics and there are some examples which I do not understand yet.
J(n) = 2*J(n/2) - 1
it's from the first chapter. Specifically, it solves the Josephus problem, for those who might be familiar with Concrete Mathematics. There's a solution given but absolutely no explanation.
I tried to solve it with the iteration method. Here's what I've come up with so far:
J(n) = (2^k)*J(n/(2^k)) - (2^k - 1)
And I'm stuck here. Any help or hints will be appreciated.
I will recall the Josephus problem first.
We have n people gathered in a circle. An executioner processes the circle in the following fashion:
The executioner starts from the person at position i = 1
When at position i, he spares i but kills the person following i
He performs this until only one person is left alive
By quickly looking at this procedure, we can see that every person in an even position will be killed in the first run. When all the "even" are dead, who are the remaining people ? Well it depends on the parity of n.
If n is even (say n = 2i), then the remaining people are 1,3,5,...,2i-1. The remaining problem is a circle of i people instead of n. Let's introduce a mapping mapeven between the position in the "new" circle and the initial position in the circle of n people.
mapeven(x) = 2.x - 1
This means that the person at position x in the new circle was in position 2.x - 1 in the initial one. If the survivor's position in the new circle is J(i), then the position that someone must occupy to survive in a circle of n = 2.i people is
mapeven(J(i)) = 2.J(i) - 1
We have the first recursion rule :
For any integer n :
J(2.n) = 2.J(n) - 1
But if n is odd (n = 2.j + 1), then the first run ends up killing all the "evens" and the executioner is at position n. n's follower is 1, thus the next to be killed is 1. The survivors are 3, 5, ..., 2j+1 and the executioner proceeds as if we had a circle of j people. The mapping is a bit different from the even case:
mapodd(x) = 2.x + 1
3 is the new 1, 5 the new 2, and so on ...
If the survivor's position in the circle of j people is J(j), then the person who wants to survive in a circle of n = 2j+1 must occupy the position J(2j+1) :
J(2j+1) = mapodd(J(j)) = 2.J(j) + 1
The second recursion relationship is drawn :
For any integer n, we have :
J(2.n + 1) = 2.J(n) + 1
From now on, we are able to compute J(n) for ANY integer n using the 2 recursion relationships. But if we look a bit further, we can make it better ...
As a consequence, for every n = 2^k we have J(n) = 1. OK, that's great, but what about other numbers? If you write down the first results (say up to n = 11), you will see that the sequence seems pseudo-periodic:
n    : 1 2 3 4 5 6 7 8 9 10 11
J(n) : 1 1 3 1 3 5 7 1 3 5  7
Starting from a power of two, the position seems to increase by 2 at each step until the next power of two, where we start again from 1. Given an integer n, there is a unique integer m(n) such that
2^m(n) ≤ n < 2^(m(n)+1)
Let s(n) be the integer such that n = 2^m(n) + s(n) (I call it "s" for "shift").
The mathematical translation of our observation is that J(n) = 1 + 2.s(n)
Let's prove it using strong induction.
For n = 1, we have J(1) = 1 = 1 + 2.0 = 1 + 2.s(1)
For n = 2, we have J(2) = 1 = 1 + 2.0 = 1 + 2.s(2)
Assuming J(k) = 1 + 2.s(k) for any k such that k ∈ [1,n], let's prove that J(n+1) = 1 + 2.s(n+1).
We have n+1 = 2^m(n+1) + s(n+1). Obviously, 2^m(n+1) is even (except in the trivial case where n+1 = 1), thus the parity of n+1 is carried by s(n+1).
If s(n+1) is even, then we denote s(n+1) = 2j. We have
J(n+1) = 2.J((n+1)/2) - 1 = 2.J(2^(m(n+1)-1) + j) - 1
Since the statement is true for any k ∈ [1,n], it is true for 1 ≤ k = (n+1)/2 < n and thus :
J(n+1) = 2.(2j + 1) - 1 = 4j + 1 = 2.s(n+1) + 1
We can similarly resolve the odd case.
The formula is established for any integer n :
J(n) = 2.s(n) + 1, with m(n), s(n) ∈ ℕ the unique integers such that
2^m(n) ≤ n < 2^(m(n)+1) and s(n) = n - 2^m(n)
In other terms: m(n) = ⌊log2(n)⌋ and s(n) = n - 2^⌊log2(n)⌋
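This closed form can be sketched and cross-checked against the true two-case recurrence (J(2n) = 2J(n) − 1, J(2n+1) = 2J(n) + 1), here iterated in its standard 0-indexed form; the function names are my own:

```python
def josephus_closed(n):
    # J(n) = 2*s(n) + 1, where s(n) = n - 2^floor(log2(n))
    m = n.bit_length() - 1      # m(n) = floor(log2(n))
    return 2 * (n - (1 << m)) + 1

def josephus_rec(n):
    # 0-indexed recurrence for the kill-every-second-person game,
    # shifted back to 1-indexed positions at the end
    j = 0
    for i in range(2, n + 1):
        j = (j + 2) % i
    return j + 1

# both give 19 for the classic n = 41 circle
```

`bit_length` gives ⌊log2(n)⌋ + 1 exactly, avoiding any floating-point log.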
Start with a few easy examples, make a guess, then use induction to (dis)prove your guess.
Consider n = some power of 2.
J(2^0) = 1 (given)
J(2^1) = 2J(2^0) - 1 = 1
J(2^2) = 2J(2^1) - 1 = 1
Okay, let's guess J(n) = 1 for all n >= 1.
Base case: J(1) = 1, which is true by definition.
Inductive step: assume J(k) = 1 for some arbitrary k. Then J(2k) = 2J(k) - 1 = 1.
Therefore, by induction, J(n) = 1 for all n (assuming division rounds down to integers).
J(n)=2*J(n/2)-1
J(n)-1=2*J(n/2)-2
J(n)-1=2*(J(n/2)-1)
T(n)=2*T(n/2), where T(n)=J(n)-1
T(n)=2^log2(n)*T(1)
J(n)=2^log2(n)*(J(1)-1)+1

Bin Packing using Dynamic Programming

Problem Statement: You have n1 items of size s1, n2 items of size s2, and n3 items of size s3. You'd like to pack all of these items into bins each of capacity C, such that the total number of bins used is minimized.
My Solution:
Bin(C,N1,N2,N3) = max{Bin(C-N1,N1-1,N2,N3)+N1 if N1<=C and N1>0,
Bin(C-N2,N1,N2-1,N3)+N2 if N2<=C and N2>0,
Bin(C-N3,N1,N2,N3-1)+N3 if N3<=C and N3>0,
0 otherwise}
The above solution only fills a single bin efficiently. Can anybody suggest how to modify the above relation so that I get the total number of bins used to pack all the items?
Problem
You have n1 items of size s1 and n2 items of size s2. You must pack all of these items into bins, each of capacity C, such that the total number of bins used is minimised. Design a polynomial time algorithm for such packaging.
Here is my solution to this problem, and it's very similar to what you're asking.
DP method
Suppose Bin(i, j) gives the minimum number of bins needed to pack i items of size s1 and j items of size s2, with the base case Bin(i, j) = 1 whenever i·s1 + j·s2 ≤ C (everything fits in one bin). Then Bin(i, j) = min{Bin(i′, j′) + Bin(i − i′, j − j′)} over all 0 ≤ i′ ≤ i, 0 ≤ j′ ≤ j with 0 < i′ + j′ < i + j. There are O(n^2) states (i, j), and each one examines O(n^2) different (i′, j′) splits, so the complexity is about O(n^4).
Complexity
O(n^4)
Example:
Let s1 = 3, n1 = 2, s2 = 2, n2 = 2, C = 4. Find the min bins needed, i.e., b.
<pre>
i j b
- - -
0 1 1
0 2 1
1 0 1
1 1 2
1 2 2
2 0 2
2 1 3
2 2 3 -> (n1,n2) pair
</pre>
So as you can see, 3 bins are needed.
<pre>
Note that Bin(2,2) = min{
Bin(2,1) + Bin(0,1),
Bin(2,0) + Bin(0,2),
Bin(1,2) + Bin(1,0),
Bin(1,1) + Bin(1,1)}
= min{4, 3, 3, 4}
= 3
</pre>
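The split DP above can be sketched as memoized top-down code (the one-bin base case is made explicit; the function name is my own):

```python
from functools import lru_cache

def min_bins(n1, s1, n2, s2, C):
    # assumes each single item fits in a bin (s1 <= C and s2 <= C)
    @lru_cache(maxsize=None)
    def bins(i, j):
        if i == 0 and j == 0:
            return 0
        if i * s1 + j * s2 <= C:
            return 1                      # everything fits in a single bin
        best = i + j                      # worst case: one bin per item
        for a in range(i + 1):
            for b in range(j + 1):
                if 0 < a + b < i + j:     # proper split into two sub-packings
                    best = min(best, bins(a, b) + bins(i - a, j - b))
        return best
    return bins(n1, n2)

# min_bins(2, 3, 2, 2, 4) returns 3, matching the example above
```

Any packing into two or more bins induces a proper split of the item counts, so minimizing over all splits plus the one-bin base case covers every case.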
