Solving a recurrence relation - algorithm

Ok, I'm struggling with Knuth's Concrete Mathematics and there are some examples which I do not understand yet.
J(n) = 2*J(n/2) - 1
It's from the first chapter. Specifically, it comes from the Josephus problem, for those who might be familiar with Concrete Mathematics. There's a solution given but absolutely no explanation.
I tried to solve it with the iteration method. Here's what I've come up with so far:
J(n) = (2^k)*J(n/(2^k)) - (2^k - 1)
And I'm stuck here. Any help or hints will be appreciated.

I will recall the Josephus problem first.
We have n people gathered in a circle. An executioner will process the circle in the following fashion:
The executioner starts from the person at position i = 1
When at position i, he spares i but kills the person following i
He performs this until only one person is alive
A quick look at this procedure shows that every person in an even position will be killed in the first pass. When all the "evens" are dead, who are the remaining people? Well, it depends on the parity of n.
If n is even (say n = 2i), then the remaining people are 1,3,5,...,2i-1. The remaining problem is a circle of i people instead of n. Let's introduce a mapping mapeven between the position in the "new" circle and the initial position in the circle of n people.
mapeven(x) = 2.x - 1
This means that the person at position x in the new circle was in position 2.x - 1 in the initial one. If the survivor's position in the new circle is J(i), then the position that someone must occupy to survive in a circle of n = 2.i people is
mapeven(J(i)) = 2.J(i) - 1
We have the first recursion rule :
For any integer n :
J(2.n) = 2.J(n) - 1
But if n is odd (n = 2.j + 1), then the first run ends up killing all the "evens" and the executioner is at position n. n's follower is 1, so the next to be killed is 1. The survivors are 3, 5, ..., 2j+1 and the executioner proceeds as if we had a circle of j people. The mapping is a bit different from the even case:
mapodd(x) = 2.x + 1
3 is the new 1, 5 the new 2, and so on ...
If the survivor's position in the circle of j people is J(j), then the person who wants to survive in a circle of n = 2j+1 must occupy the position J(2j+1) :
J(2j+1) = mapodd(J(j)) = 2.J(j) + 1
The second recursion relationship is drawn :
For any integer n, we have :
J(2.n + 1) = 2.J(n) + 1
From now on, we are able to compute J(n) for ANY integer n using the 2 recursion relationships. But if we look a bit further, we can make it better ...
As a consequence, for every n = 2^k, we have J(n) = 1. OK, that's great, but what about other numbers? If you write down the first results (say up to n = 11), you will see that the sequence seems pseudo-periodic:
n    :  1  2  3  4  5  6  7  8  9 10 11
J(n) :  1  1  3  1  3  5  7  1  3  5  7
Starting from a power of two, the position seems to increase by 2 at each step until the next power of two, where we start again from 1. Now, given an integer n, there is a unique integer m(n) such that
2^m(n) ≤ n < 2^(m(n)+1)
Let s(n) be the integer such that n = 2^m(n) + s(n) (I call it "s" for "shift").
The mathematical translation of our observation is that J(n) = 1 + 2.s(n)
Let's prove it using strong induction.
For n = 1, we have J(1) = 1 = 1 + 2.0 = 1 + 2.s(1)
For n = 2, we have J(2) = 1 = 1 + 2.0 = 1 + 2.s(2)
Assuming J(k) = 1 + 2.s(k) for any k such that k ∈ [1,n], let's prove that J(n+1) = 1 + 2.s(n+1).
We have n + 1 = 2^m(n+1) + s(n+1). Obviously, 2^m(n+1) is even (except in the trivial case n + 1 = 1), so the parity of n + 1 is carried by s(n+1).
If s(n+1) is even, we write s(n+1) = 2j. We have
J(n+1) = 2.J((n+1)/2) - 1 = 2.J(2^(m(n+1)-1) + j) - 1
Since the statement is true for any k ∈ [1,n], it is true for 1 ≤ k = (n+1)/2 ≤ n and thus:
J(n+1) = 2.(1 + 2j) - 1 = 4j + 1 = 2.s(n+1) + 1
We can similarly resolve the odd case.
The formula is established for any integer n :
J(n) = 2.s(n) + 1, with m(n), s(n) ∈ ℕ the unique integers such that
2^m(n) ≤ n < 2^(m(n)+1) and s(n) = n - 2^m(n)
In other terms: m(n) = ⌊log2(n)⌋ and s(n) = n - 2^⌊log2(n)⌋
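To make this concrete, here is a minimal Python sketch (my own, not from the book) that computes J(n) both from this closed form and from the two recursion rules above, so the two can be checked against each other:

def josephus_closed(n):
    # m(n) = floor(log2(n)); s(n) = n - 2^m(n); J(n) = 2*s(n) + 1
    m = n.bit_length() - 1
    s = n - (1 << m)
    return 2 * s + 1

def josephus_rec(n):
    # J(1) = 1, J(2n) = 2*J(n) - 1, J(2n + 1) = 2*J(n) + 1
    if n == 1:
        return 1
    if n % 2 == 0:
        return 2 * josephus_rec(n // 2) - 1
    return 2 * josephus_rec(n // 2) + 1

assert all(josephus_closed(n) == josephus_rec(n) for n in range(1, 1000))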

Start with a few easy examples, make a guess, then use induction to (dis)prove your guess.
Consider n = some power of 2.
J(2^0) = 1 (given)
J(2^1) = 2J(2^0) - 1 = 1
J(2^2) = 2J(2^1) - 1 = 1
Okay, let's guess J(n) = 1 for all n >= 1.
Base case: J(1) = 1, which is true by definition.
Inductive step: assume J(j) = 1 for every 1 <= j <= k. Then J(k+1) = 2*J(floor((k+1)/2)) - 1 = 2*1 - 1 = 1, since 1 <= floor((k+1)/2) <= k.
Therefore, by strong induction, J(n) = 1 for all n >= 1 (assuming the division n/2 rounds down to an integer).
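As a quick sanity check of that guess, here is a throwaway Python sketch (mine, not the original poster's), assuming J(1) = 1 and that n/2 means floor division:

def J(n):
    # J(1) = 1; J(n) = 2*J(n // 2) - 1 with floor division
    return 1 if n == 1 else 2 * J(n // 2) - 1

assert all(J(n) == 1 for n in range(1, 100))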

J(n)=2*J(n/2)-1
J(n)-1=2*J(n/2)-2
J(n)-1=2*(J(n/2)-1)
T(n)=2*T(n/2), where T(n)=J(n)-1
T(n)=2^log2(n)*T(1), iterating the halving down to T(1) (this assumes n is a power of two)
J(n)=2^log2(n)*(J(1)-1)+1 = n*(J(1)-1)+1, which is 1 when J(1)=1


Proving by induction that a function gets called n-1 times

This is the pseudo-code from the problem:
Procedure Foo(A,f,L), precondition:
A[f..L] is an array of integers
f, L are two naturals >= 1 with f <= L.
Code
procedure Foo(A, f, L)
  if (f = L) then
    return A[f]
  else
    m <-- floor((f + L) / 2)
    return min(Foo(A, f, m), Foo(A, m+1, L))
  end if
end procedure
The Question:
Using induction, argue that Foo invokes min at most n-1 times.
I am a little lost on how to continue my proof for part (iii). I have the claim written out as well as the base case, which I believe to be n >= 2. But how do I do the inductive step for k + 1, since this is a proof by induction?
Thanks
We will proceed by induction on n = L - f + 1.
Base case: when n = 1, f=L and we immediately return A[f] calling min zero times; we have n - 1 = 1 - 1 = 0.
Induction hypothesis: assume that the claim is true for all n up to and including k - 1.
Induction step: we must show the claim is true for k. Since L > f we execute the second branch which calls min once, and invokes Foo on subarrays of sizes floor(k/2) and ceiling(k/2).
If k is even, k/2 is an integer and floor(k/2) = ceiling(k/2) = k/2. Both of these are less than k, so by the hypothesis each recursive call invokes min at most k/2 - 1 times. But 2(k/2 - 1) + 1 = k - 2 + 1 = k - 1, so the total number of invocations is at most k - 1 for n = k.
If k is odd, k/2 is not an integer and floor(k/2) = ceiling(k/2) - 1. For k > 1, both of these are less than k, so the recursive calls invoke min at most floor(k/2) - 1 and ceiling(k/2) - 1 times, respectively. Adding the one call made at this level gives floor(k/2) - 1 + ceiling(k/2) - 1 + 1 = floor(k/2) + ceiling(k/2) - 1 = k - 1.
Since k is either even or odd, and we have shown that in both cases min is invoked at most k - 1 times, this completes the proof.
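If you also want a quick numerical check before writing the formal argument, here is a small Python sketch (my own translation of the pseudo-code, with a counter I added) that tallies the min calls:

calls = 0

def foo(A, f, L):
    # A[f..L] holds integers, with 1 <= f <= L (1-based indices).
    global calls
    if f == L:
        return A[f]
    m = (f + L) // 2
    calls += 1  # one min invocation in this branch
    return min(foo(A, f, m), foo(A, m + 1, L))

n = 10
A = [None] + [i * i for i in range(1, n + 1)]  # pad index 0 so indices are 1-based
foo(A, 1, n)
print(calls)  # prints 9, i.e. n - 1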

Analyze the run time of a nested for loops algorithm

Say I have the following code:
def func(A, n):
    for i in range(n):
        for k in range(i + 1, n):
            for l in range(k + 1, n):
                if A[i] + A[k] + A[l] == 0:
                    return True
    return False
A is an array, and n denotes the length of A.
As I read it, the code checks if any 3 consecutive integers in A sum up to 0. I see the time complexity as
T(n) = (n-2)(n-1)(n-2)+O(1) => O(n^3)
Is this correct, or am I missing something? I have a hard time finding reading material about this (and I own CLRS)
You have the functionality wrong: it checks to see whether any three elements add up to 0. To improve execution time, it considers them only in index order: i < k < l.
You are correct about the complexity. Although each loop takes a short-cut, that short-cut is merely a scalar divisor on the number of iterations. Each loop is still O(n).
As for the coding, you already have most of it done -- and Stack Overflow is not a coding service. Give it your best shot; if that doesn't work and you're stuck, post another question.
If you really want to teach yourself a new technique, look up Python's itertools package. You can use this to generate all the combinations in triples. You can then merely check sum(triple) in each case. In fact, you can use the any method to check whether any one triple sums to 0, which could reduce your function body to a single line of Python code.
I'll leave that research to you. You'll learn other neat stuff on the way.
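For what it's worth, a hedged sketch of that itertools approach could look like this (the names and the sample array are mine):

from itertools import combinations

def func(A, n):
    # Check whether any three elements (taken in index order) sum to zero.
    return any(sum(triple) == 0 for triple in combinations(A[:n], 3))

print(func([-5, 1, 4, 7, 2], 5))  # True, since -5 + 1 + 4 = 0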
Addition for OP's comment.
Let's set N to 4, and look at what happens:
i = 0
  for k = 1 to 3
    ... three k-loop iterations
i = 1
  for k = 2 to 3
    ... two k-loop iterations
i = 2
  for k = 3 to 3
    ... one k-loop iteration
The number of k-loop iterations is the "triangle" number of n-1: 3 + 2 + 1. Let m = n-1; the formula is T(m) = m(m+1)/2.
Now propagate the same logic to the l loops. Summing those triangular counts gives the third-order "pyramid" (tetrahedral) number P(m) = m(m+1)(m+2)/6.
In terms of n, the innermost body runs P(n-2) = n(n-1)(n-2)/6 times, i.e. C(n,3). When you multiply this out, you get a straightforward cubic formula in n.
Here is the sequence for n=5:
0 1 2
0 1 3
0 1 4
change k
0 2 3
0 2 4
change k
0 3 4
change k
change k
change l
1 2 3
1 2 4
change k
1 3 4
change k
change k
change l
2 3 4
BTW, l is a bad variable name, easily confused with 1.
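If you want to verify the count empirically, here is a throwaway Python sketch (mine) that counts the innermost iterations and compares them with n(n-1)(n-2)/6:

def inner_iterations(n):
    # Count how many times the innermost if-test would run.
    count = 0
    for i in range(n):
        for k in range(i + 1, n):
            for l in range(k + 1, n):
                count += 1
    return count

for n in range(3, 10):
    assert inner_iterations(n) == n * (n - 1) * (n - 2) // 6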

Pyramids dynamic programming

I encountered this question in an interview and could not figure it out. I believe it has a dynamic programming solution but it eludes me.
Given a number of bricks, output the total number of 2d pyramids possible, where a pyramid is defined as any structure where a row of bricks has strictly fewer bricks than the row below it. You do not have to use all the bricks.
A brick is simply a square, the number of bricks in a row is the only important bit of information.
Really stuck with this one, I thought it would be easy to solve each problem 1...n iteratively and sum. But coming up with the number of pyramids possible with exactly i bricks is evading me.
example, n = 6
Using 1 brick:  X
Using 2 bricks: XX
Using 3 bricks: XXX, and X on top of XX
Using 4 bricks: XXXX, and X on top of XXX
Using 5 bricks: XXXXX, X on top of XXXX, and XX on top of XXX
Using 6 bricks: XXXXXX, X on top of XXXXX, XX on top of XXXX, and X on top of XX on top of XXX
So the answer is 13 possible pyramids from 6 bricks.
edit
I am positive this is a dynamic programming problem, because it makes sense to (once you've determined the first row) simply look up, in your memoized array, the entry for your remaining bricks to see how many pyramids fit on top.
It also makes sense to consider only bottom rows of width at least n/2, because we can't have more bricks atop than on the bottom row. EXCEPT, and this is where I lose it and my mind falls apart, in certain (few) cases you can, e.g. n = 10:
X
XX
XXX
XXXX
Now the bottom row has 4 but there are 6 left to place on top
But with n = 11 we cannot have a bottom row with less than n/2 bricks. There is another weird inconsistency like that with n = 4, where we cannot have a bottom row of n/2 = 2 bricks.
Let's choose a suitable definition:
f(n, m) = # pyramids out of n bricks with base of size < m
The answer you are looking for now is (given that N is your input number of bricks):
f(N, N+1) - 1
Let's break that down:
The first N is obvious: that's your number of bricks.
Your bottom row will contain at most N bricks (because that's all you have), so any m > N works; N + 1 is simply the smallest such bound.
Finally, the - 1 is there because technically the empty pyramid is also a pyramid (and will thus be counted) but you exclude that from your solutions.
The base cases are simple:
f(n, 0) = 1 for any n >= 0
f(0, m) = 1 for any m >= 0
In both cases, it's the empty pyramid that we are counting here.
Now all we still need is a recursive formula for the general case.
Let's assume we are given n and m and choose to have i bricks on the bottom layer. What can we place on top of this layer? A smaller pyramid, for which we have n - i bricks left and whose base has size < i. This is exactly f(n - i, i).
What is the range for i? We can choose an empty row so i >= 0. Obviously, i <= n because we have only n bricks. But also, i <= m - 1, by definition of m.
This leads to the recursive expression:
f(n, m) = sum f(n - i, i) for 0 <= i <= min(n, m - 1)
You can compute f recursively, but using dynamic programming it will be faster of course. Storing the results matrix is straightforward though, so I leave that up to you.
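Here is one possible memoized sketch in Python (my code, not part of the original answer; it just transcribes the base cases and the recursive formula above):

from functools import lru_cache

@lru_cache(maxsize=None)
def f(n, m):
    # Number of pyramids (the empty one included) using at most n bricks,
    # whose bottom row has size strictly less than m.
    if n == 0 or m == 0:
        return 1
    return sum(f(n - i, i) for i in range(0, min(n, m - 1) + 1))

def count_pyramids(N):
    return f(N, N + 1) - 1  # subtract the empty pyramid

print(count_pyramids(6))  # 13, matching the example above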
Coming back to the original claim that f(N, N+1)-1 is the answer you are looking for, it doesn't really matter which value to choose for m as long as it is > N. Based on the recursive formula it's easy to show that f(N, N + 1) = f(N, N + k) for every k >= 1:
f(N, N + k) = sum f(N - i, i) for 0 <= i <= min(N, N + k - 1)
= sum f(N - i, i) for 0 <= i <= N
= sum f(N - i, i) for 0 <= i <= min(N, N + 1 - 1)
In how many ways can you build a pyramid of width n? By putting any pyramid of width n-1 or less anywhere atop the layer of n bricks, or nothing at all. So if p(n) is the number of pyramids of width n, then p(n) = 1 + sum [m=1 to n-1] (p(m) * c(n, m)), where the leading 1 counts the bare layer of n bricks and c(n, m) is the number of ways you can place a layer of width m atop a layer of width n (I trust that you can work that one out yourself).
This, however, doesn't place a limitation on the number of bricks. Generally, in DP, any resource limitation must be modeled as a separate dimension. So your problem is now p(n, b): "How many pyramids can you build of width n with a total of b bricks"? In the recursive formula, for each possible way of building a smaller pyramid atop your current one, you need to refer to the correct amount of remaining bricks. I leave it as a challenge for you to work out the recursive formula; let me know if you need any hints.
You can think of your recursion as: given x bricks left, where you used n bricks on the last row, how many pyramids can you build? Now you can fill up the rows either from top to bottom or from bottom to top. I will explain the former case.
Here the recursion might look something like this (left is number of bricks left and last is number of bricks used on last row)
f(left,last)=sum (1+f(left-i,i)) for i in range [last+1,left] inclusive.
Since when you use i bricks on the current row, you will have left-i bricks left, and i will be the number of bricks used on this row.
Code:
int calc(int left, int last) {
    int total = 0;
    if (left <= 0) return 0; // terminal case, no pyramid with no bricks
    for (int i = last + 1; i <= left; i++) {
        total += 1 + calc(left - i, i);
    }
    return total;
}
I will leave it to you to implement the memoized or bottom-up dp version. Also, you may want to start from the bottom row and fill up the upper rows of the pyramid.
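As an illustration, a minimal memoized version in Python might look like this (my sketch, mirroring the C code above; the initial call calc(n, 0) is my assumption based on the description):

from functools import lru_cache

@lru_cache(maxsize=None)
def calc(left, last):
    # left = bricks still available
    # last = bricks used on the previous (upper) row; the next row must be larger
    total = 0
    for i in range(last + 1, left + 1):
        total += 1 + calc(left - i, i)
    return total

print(calc(6, 0))  # 13, as in the n = 6 example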
Since we are asked to count pyramids of any cardinality less than or equal to n, we may consider each cardinality in turn (pyramids of 1 element, 2 elements, 3...etc.) and sum them up. But in how many different ways can we compose a pyramid from k elements? The same number as the count of distinct partitions of k (for example, for k = 6, we can have (6), (1,5), (2,4), and (1,2,3)). A generating function/recurrence for the count of distinct partitions is described in Wikipedia and a sequence at OEIS.
Recurrence, based on the Pentagonal number Theorem:
q(k) = ak + q(k − 1) + q(k − 2) − q(k − 5) − q(k − 7) + q(k − 12) + q(k − 15) − q(k − 22)...
where ak is (−1)^(abs(m)) if k = 3*m^2 − m for some integer m and is 0 otherwise.
(The offsets 1, 2, 5, 7, 12, 15, 22, ... in the recurrence are the generalized pentagonal numbers.)
Since the recurrence described in Wikipedia obliges the calculation of all preceding q(n)'s to arrive at a larger q(n), we can simply sum the results along the way to obtain our result.
JavaScript code:
function numPyramids(n){
  var distinctPartitions = [1,1],
      pentagonals = {},
      m = _m = 1,
      pentagonal_m = 2,
      result = 1;
  while (pentagonal_m / 2 <= n){
    pentagonals[pentagonal_m] = Math.abs(_m);
    m++;
    _m = m % 2 == 0 ? -m / 2 : Math.ceil(m / 2);
    pentagonal_m = _m * (3 * _m - 1);
  }
  for (var k=2; k<=n; k++){
    distinctPartitions[k] = pentagonals[k] ? Math.pow(-1,pentagonals[k]) : 0;
    var cs = [1,1,-1,-1],
        c = 0;
    for (var i in pentagonals){
      if (i / 2 > k)
        break;
      distinctPartitions[k] += cs[c]*distinctPartitions[k - i / 2];
      c = c == 3 ? 0 : c + 1;
    }
    result += distinctPartitions[k];
  }
  return result;
}
console.log(numPyramids(6)); // 13

Formal proof of what an algorithm returns

I need a formal proof that the algorithm below returns 1 for n = 1 and 0 in all other cases.
function K(n: word): word;
begin
  if (n < 2) then K := n
  else K := K(n - 1) * K(n - 2);
end;
Could anyone help? Thank you.
This can be proven by induction, but as previous posters have shown, it's tricky to get formally correct when referring to K directly in the proof.
Here's my suggestion: Let P(n) be the property we want to show:
P(n) holds iff K(n) yields 1 for n = 1, and 0 for n ≠ 1.
Now we can clearly express what we want to show: Ɐn.P(n)
Base case: n < 2
Trivial check by case analysis:
P(0) is ok, since K(0) = 0
P(1) is ok, since K(1) = 1
Induction hypothesis:
P(n) holds for all n < c, for some c ≥ 2.
Inductive step: Show that P(c) holds
By definition of K we have K(c) = K(c-1) × K(c-2)
By the induction hypothesis, we know that P(c-1) and P(c-2) hold.
Since at most one of K(c-1) and K(c-2) can be 1 (and in that case the other is 0), the product is 0.
This means that P(c) holds, since c ≥ 2 and K(c) = 0.
Qed.
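Not a substitute for the proof, but here is a quick mechanical check (my own Python translation of the Pascal function):

def K(n):
    # Mirrors the Pascal code: n < 2 returns n, otherwise the product
    # of the two previous values.
    return n if n < 2 else K(n - 1) * K(n - 2)

assert all(K(n) == (1 if n == 1 else 0) for n in range(0, 25))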
For n = 1, invoking the algorithm gives K = n = 1, so we're done with that case.
For n=0, by definition, K(0) = 0.
For the case where n>1, we can solve it by induction:
Base: for n=2, we get: K(2) = K(1)*K(0) = 1*0 = 0
For n=3, we get: K(3) = K(2)*K(1) = 0*1 = 0 Note that K(2)=0 because we just showed it one line up.
Claim: For any 1<k<n, we get K(k) = 0
Proof for any n>3: K(n) = K(n-1)*K(n-2) =(1) 0*0 = 0
(1): By the induction hypothesis, which applies to both K(n-1) and K(n-2), since n-1, n-2 > 1.
P.S. Note that the claim only holds for non-negative numbers; for example, if you allowed n = -5, you would get K(-5) = -5, which is a counterexample to the claim.
Say n = 0. Since 0 < 2 we get 0 as result.
Say n = 1. Since 1 < 2 we get 1 as result.
Say n = 2. K(2) = K(1)*K(0). Since K(0) = 0 we get 0 as result.
For n ≥ 2, suppose now that the statement about the algorithm is true, i.e. K(n) = 0.
Now let show that it is also true for n + 1:
K(n + 1) = K(n)*K(n - 1). Since K(n) = 0 it is obvious that K(n)*K(n - 1) = 0.

How to do recurrence relations?

So we were taught about recurrence relations a day ago and we were given some code to practice with:
int pow(int base, int n){
    if (n == 0)
        return 1;
    else if (n == 1)
        return base;
    else if (n % 2 == 0)
        return pow(base*base, n/2);
    else
        return base * pow(base*base, n/2);
}
The farthest I've got to getting its closed form is T(n) = T(n/2^k) + 7k.
I'm not sure how to go any further, as the examples given to us were simple and do not help that much.
How do you actually solve for the recurrence relation of this code?
Let us count only the multiplications in a call to pow, denoted M(N), assuming they dominate the cost (an assumption that is largely invalid nowadays).
By inspection of the code we see that:
M(0) = 0 (no multiply for N=0)
M(1) = 0 (no multiply for N=1)
M(N), N>1, N even = M(N/2) + 1 (for even N, recursive call after one multiply)
M(N), N>1, N odd = M(N/2) + 2 (for odd N, recursive call after one multiply, followed by a second multiply).
This recurrence is a bit complicated by the fact that it handles even and odd integers differently. We will work around this by considering sequences of only even or only odd numbers.
Let us first handle the case of N being a power of 2. If we iterate the formula, we get M(N) = M(N/2) + 1 = M(N/4) + 2 = M(N/8) + 3 = M(N/16) + 4. We easily spot the pattern M(N) = M(N/2^k) + k, so that the solution M(2^n) = n follows. We can write this as M(N) = Lg(N) (base 2 logarithm).
Similarly, N = 2^n-1 will always yield odd numbers after divisions by 2. We have M(2^n-1) = M(2^(n-1)-1) + 2 = M(2^(n-2)-1) + 4... = 2(n-1). Or M(N) = 2 Lg(N+1) - 2.
The exact solution for general N can be fairly involved but we can see that Lg(N) <= M(N) <= 2 Lg(N+1) - 2. Thus M(N) is O(Log(N)).
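To see the bound empirically, here is a small Python sketch (mine) that mirrors the C code and counts the multiplications:

def pow_count(base, n):
    # Returns (base**n, number of multiplications), following the C code above.
    if n == 0:
        return 1, 0
    if n == 1:
        return base, 0
    if n % 2 == 0:
        r, m = pow_count(base * base, n // 2)
        return r, m + 1                 # one multiply (base*base) for even n
    r, m = pow_count(base * base, n // 2)
    return base * r, m + 2              # base*base plus the final multiply for odd n

for n in [1, 2, 3, 7, 8, 15, 16, 1000]:
    _, m = pow_count(2, n)
    print(n, m)  # m stays between log2(n) and roughly 2*log2(n)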
