I was studying recurrence relations from a slide deck I found (slides 7 and 8):
http://www.cs.ucf.edu/courses/cop3502h/spring2012/Lectures/Lec8_RecurrenceRelations.pdf
I just can't accept (probably I'm not seeing it right) that the recurrence relation for factorial is:
T(n) = T(n-1)+2
T(1) = 1
when counting the number of operations ("*" and "-") in the function:
int factorial(int n) {
    if (n == 1)
        return 1;
    return n * factorial(n - 1);
}
If we use n = 5, the formula above gives 6, while the real number of subtractions and multiplications is 8.
My teacher also told us that if we analyze only the number of "*", it would be:
T(n) = T(n-1) + 1.
Again, if I use n = 5 I get 5, but if you work it out on paper you get 4 multiplications.
I also checked on this forum, but this question is a complete mess:
Recurrence Relation
Could anyone help me understand this? Thanks.
"If we use n = 5 we will get 6 by the formula above while the real number of subs and mults is 8."
It seems that the slides are counting the number of operations, not just subtractions and multiplications. In particular, the return statement is counted as one operation. (The slides say, "if it’s the base case just one operation to return.")
Thus, the real number of subtractions and multiplications is 8, but the number of operations is 9. If n is 5, then, unrolling the recursion, we get 1 + 2 + 2 + 2 + 2 = 9 operations, which looks right to me.
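For what it's worth, here is a small instrumented version of the function (my own sketch, not from the slides; the ops counter is made up) that counts one operation for the base-case return and two operations (one "-", one "*") per recursive step:

#include <stdio.h>

static int ops; /* operation counter */

int factorial(int n) {
    if (n == 1) {
        ops += 1;   /* base case: one operation to return */
        return 1;
    }
    ops += 2;       /* one "-" and one "*" per recursive step */
    return n * factorial(n - 1);
}

int main(void) {
    factorial(5);
    printf("operations for n = 5: %d\n", ops); /* prints 9, matching T(5) */
    return 0;
}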
Related
Why is the runtime closer to O(1.6^N) and not O(2^N)? I think it has something to do with the call stack, but it's still a little unclear to me.
int Fibonacci(int n)
{
    if (n <= 1)
        return n;
    else
        return Fibonacci(n - 1) + Fibonacci(n - 2);
}
For starters, remember that big-O notation always provides an upper bound. That is, if a function is O(n), it's also O(n^2), O(n^3), O(2^n), etc. So in that sense, you are not incorrect if you say that the runtime is O(2^n). You just don't have a tight bound.
To see where the tight bound of Θ(φ^n) comes from, it might help to look at how many recursive calls end up getting made when evaluating Fibonacci(n). Notice that
Fibonacci(0) requires one call, the call to Fibonacci(0).
Fibonacci(1) requires one call, the call to Fibonacci(1).
Fibonacci(2) requires three calls: one to Fibonacci(2), and one each to Fibonacci(0) and Fibonacci(1).
Fibonacci(3) requires five calls: one to Fibonacci(3), then the three calls generated by invoking Fibonacci(2) and the one call generated by invoking Fibonacci(1).
Fibonacci(4) requires nine calls: one to Fibonacci(4), then five from the call to Fibonacci(3) and three from the call to Fibonacci(2).
More generally, the pattern seems to be that if you're calling Fibonacci(n) for some n ≥ 2, the number of calls made is one (for the call itself), plus the number of calls needed to evaluate Fibonacci(n-1) and Fibonacci(n-2). If we let L(n) denote the number of calls made, this means that we have
L(0) = L(1) = 1
L(n+2) = 1 + L(n+1) + L(n).
So now the question is how fast this sequence grows. Evaluating the first few terms gives us
1, 1, 3, 5, 9, 15, 25, 41, ...
which definitely gets bigger and bigger, but it's not clear how much bigger that is.
Something you might notice here is that L(n) kinda sorta looks like the Fibonacci numbers. That is, it's defined in terms of the sum of the two previous terms, but it has an extra +1 term. So we might want to look at the difference between L(n) and F(n), since that might show us how much "faster" the L series grows. You might notice that the first two values of the L series are 1, 1 and the first two values of the Fibonacci series are 0, 1, so we'll shift things over by one term to make things line up a bit more nicely:
L(n):    1  1  3  5  9  15  25  41
F(n+1):  1  1  2  3  5   8  13  21
Diff:    0  0  1  2  4   7  12  20
And wait, hold on a second. What happens if we add one to each term of the difference? That gives us
L(n):    1  1  3  5  9  15  25  41
F(n+1):  1  1  2  3  5   8  13  21
Diff+1:  1  1  2  3  5   8  13  21
Whoa! Looks like L(n) - F(n+1) + 1 = F(n+1). Rearranging, we see that
L(n) = 2F(n+1) - 1.
Wow! So the actual number of calls made by the recursive Fibonacci function is very closely related to the actual value returned. So we could say that the runtime of the Fibonacci function is Θ(F(n+1)) and we'd be correct.
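If you want to see the identity L(n) = 2F(n+1) - 1 empirically, here is a quick sketch of mine (not part of the original answer; fib_iter is just a helper I added to compute F(n+1) without disturbing the counter):

#include <stdio.h>

static long calls; /* incremented on every invocation, so it ends up equal to L(n) */

long fib(int n) {
    calls++;
    if (n <= 1)
        return n;
    return fib(n - 1) + fib(n - 2);
}

/* iterative Fibonacci, used only to compute F(n+1) for the comparison */
long fib_iter(int n) {
    long a = 0, b = 1;
    for (int i = 0; i < n; i++) {
        long t = a + b;
        a = b;
        b = t;
    }
    return a;
}

int main(void) {
    for (int n = 0; n <= 15; n++) {
        calls = 0;
        fib(n);
        printf("n=%2d  L(n)=%5ld  2F(n+1)-1=%5ld\n", n, calls, 2 * fib_iter(n + 1) - 1);
    }
    return 0;
}

Both columns agree for every n, matching the derivation above.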
But now the question is where φ comes in. There's a lovely mathematical result called Binet's formula that says that F(n) = Θ(φ^n). There are many ways to prove this, but they all essentially boil down to the observation that
the Fibonacci numbers seem to grow exponentially quickly;
if they grow exponentially with a base of x, then F(n+2) = F(n+1) + F(n) can be rewritten as x^2 = x + 1; and
φ is a solution to x^2 = x + 1.
From this, we can see that since the runtime of Fibonacci is Θ(F(n+1)), the runtime is also Θ(φ^(n+1)) = Θ(φ^n).
The number φ = (1+sqrt(5))/2 is characterized by the two following properties:
(1) φ >= 1
(2) φ^2 = φ + 1.
Multiplying equation (2) by φ^(n-1) we get
(3) φ^(n+1) = φ^n + φ^(n-1).
Since f(0) = 0, f(1) = 1 and f(n+1) = f(n) + f(n-1), using (1) and (3) it is easy to see by induction on n that f(n) <= φ^n (the inductive step is f(n+1) = f(n) + f(n-1) <= φ^n + φ^(n-1) = φ^(n+1), by (3)).
Thus f(n) is O(φ^n).
A similar inductive argument shows that
f(n) >= φ^(n-3) = φ^(-3) φ^n   (n >= 1),
thus f(n) = Θ(φ^n).
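As a quick numerical sanity check of that sandwich (my own addition, not part of the answer), you can tabulate φ^(n-3) <= f(n) <= φ^n:

#include <stdio.h>
#include <math.h>

int main(void) {
    const double phi = (1.0 + sqrt(5.0)) / 2.0;
    long a = 0, b = 1; /* a = f(0), b = f(1) */
    for (int n = 1; n <= 20; n++) {
        long t = a + b; a = b; b = t; /* after this, a = f(n) */
        printf("n=%2d  phi^(n-3)=%10.3f  f(n)=%6ld  phi^n=%12.3f\n",
               n, pow(phi, n - 3), a, pow(phi, n));
    }
    return 0;
}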
Can someone please tell me how memoization works in this DP example?
dp example problem, codechef
The part where I'm stuck: when the input is 4, why does the code calculate n-1 (i.e. 4-1) when the optimal step would be 4/2? And for input = 10, why do we calculate n-1 all the way down to 1? Any help would be appreciated.
New to dynamic programming, so please bear with me.
Memoization in dynamic programming is just storing solutions to subproblems. For input n = 4 you calculate its solution. So you try step 1: subtract 1, plus the solution to the subproblem n = 3. To evaluate this you need to solve the problem n = 3, because you have not solved it previously. So you again try step 1 until you get to the base problem of n = 1, where you output 0.
After you have tried step 1 for the current problem, you try step 2, which means dividing n, and afterwards you try step 3. You try every step for every subproblem, but because you store the best value at every subproblem, you can reuse it when the subproblem occurs again.
For example, when you get back to n = 4 after trying step 1 on it, you try step 2 and see that you can use n/2. Because you already calculated the optimal value for n = 2, you can output 1 + the optimal value for n = 2, which is 1, so 2 in total.
The link explains it fairly clearly. If F(n) is the minimal number of steps to convert n to 1, then for any n > 1 we have the following recurrence relation:
F(n) = 1 + min(F(n-1), F(n/2), F(n/3)) // if n divisible by 2 and 3
F(n) = 1 + min(F(n-1), F(n/2)) // if n divisible by 2 and not 3
F(n) = 1 + min(F(n-1), F(n/3)) // if n divisible by 3 and not 2
F(n) = 1 + F(n-1) // all other cases
For your case, n = 4, we have to compute F(n-1) and F(n/2) to decide which one is the minimum.
As for the second question: when n = 10 we first evaluate F(9). During this evaluation all the values F(8), F(7), ..., F(2) are computed and memoized. Then, when we evaluate F(10/2) = F(5), it is simply a matter of looking up the value in the array of memoized values. This saves a lot of computing.
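Here is a minimal memoized sketch of that recurrence in C (my own illustration, not the codechef code; the function name, array name, and array size are all arbitrary):

#include <stdio.h>

#define MAXN 100000
static int memo[MAXN + 1]; /* memo[k] = minimal steps from k to 1; 0 means "not yet computed" for k > 1 */

int F(int n) {
    if (n == 1)
        return 0;          /* base case: already at 1 */
    if (memo[n])
        return memo[n];    /* subproblem already solved: just look it up */
    int best = 1 + F(n - 1);          /* always possible: subtract 1 */
    if (n % 2 == 0) {
        int s = 1 + F(n / 2);         /* divide by 2 when allowed */
        if (s < best) best = s;
    }
    if (n % 3 == 0) {
        int s = 1 + F(n / 3);         /* divide by 3 when allowed */
        if (s < best) best = s;
    }
    return memo[n] = best;
}

int main(void) {
    printf("F(4)  = %d\n", F(4));  /* 2:  4 -> 2 -> 1 */
    printf("F(10) = %d\n", F(10)); /* 3: 10 -> 9 -> 3 -> 1 */
    return 0;
}

Note that F(10) first triggers the whole F(9), F(8), ..., F(2) chain via the n-1 branch, and the later F(5) evaluation is then a free lookup, exactly as described above.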
Maybe you can do it as follows in JS (note this is a greedy approach: at each step it prefers /3, then /2, then -1):
function getSteps(n) {
    // try, in order: divide by 3, divide by 2, subtract 1
    var fs = [i => i % 3 ? false : i / 3,
              i => i % 2 ? false : i / 2,
              i => i - 1],
        res = [n],   // res records the whole chain, starting at n
        chk;
    while (res[res.length - 1] > 1) {
        chk = false;
        // apply the first rule that yields a value and push the result
        fs.forEach(f => !chk && (chk = f(res[res.length - 1])) && res.push(chk));
    }
    return res;
}
var result = getSteps(1453);
console.log(result);
// res includes the starting number itself, so the step count is length - 1
console.log("The number of steps:", result.length - 1);
I assume that a particular example in my book is wrong. But am I correct?
Example: 3log n + 2 is O(log n)
Justification: 3log n + 2 <= 5 log n, for n>=2.
I understand how they get c = 5 (they take the coefficients and add them up). But I don't see how, for n = 2 for instance, the left function is smaller than the right one.
If I fill in 2 for n:
3 log 2 + 2 = 2.903 and 5 log 2 = 1.5051.
Only from n = 10 onward is the left function actually smaller than or equal to the right one.
Is my assumption right?
The log in this case is base 2, not base 10:
3 log(2) + 2 = 3 + 2 = 5
5 log(2) = 5
and it is true that 5 <= 5.
To expand a bit on Peter's answer, the base of the logarithm is typically assumed to be 2 when analyzing run times. It's not necessary to specify the base in O() notation, since logarithms of different bases differ from each other only by a constant factor. (In this example, log_10(x) / log_2(x) = ln(2)/ln(10) ≈ 0.30103.) This constant factor is not relevant to the asymptotic run time.
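To make that constant factor visible, here is a throwaway sketch of mine (not from the answers above):

#include <stdio.h>
#include <math.h>

int main(void) {
    /* log10(n) / log2(n) is the same constant for every n,
       which is why the base can be dropped inside O(log n) */
    for (long n = 2; n <= 2000000; n *= 10)
        printf("n=%8ld  log2=%9.4f  log10=%7.4f  ratio=%.5f\n",
               n, log2((double)n), log10((double)n),
               log10((double)n) / log2((double)n));
    return 0;
}

Every row prints the same ratio, ~0.30103.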
So we were taught about recurrence relations a day ago and we were given some code to practice with:
int pow(int base, int n) {
    if (n == 0)
        return 1;
    else if (n == 1)
        return base;
    else if (n % 2 == 0)
        return pow(base * base, n / 2);
    else
        return base * pow(base * base, n / 2);
}
The farthest I've got to getting its closed form is T(n) = T(n/2^k) + 7k.
I'm not sure how to go any further, as the examples given to us were simple and don't help much.
How do you actually solve for the recurrence relation of this code?
Let us count only the multiplies in a call to pow, denoted as M(N), assuming they dominate the cost (a nowadays strongly invalid assumption).
By inspection of the code we see that:
M(0) = 0 (no multiply for N = 0)
M(1) = 0 (no multiply for N = 1)
M(N) = M(N/2) + 1 for N > 1, N even (recursive call after one multiply)
M(N) = M(N/2) + 2 for N > 1, N odd (recursive call after one multiply, followed by a second multiply).
This recurrence is a bit complicated by the fact that it handles even and odd integers differently. We will work around this by considering sequences of even or odd numbers only.
Let us first handle the case of N being a power of 2. If we iterate the formula, we get M(N) = M(N/2) + 1 = M(N/4) + 2 = M(N/8) + 3 = M(N/16) + 4. We easily spot the pattern M(N) = M(N/2^k) + k, so that the solution M(2^n) = n follows. We can write this as M(N) = Lg(N) (base 2 logarithm).
Similarly, N = 2^n-1 will always yield odd numbers after divisions by 2. We have M(2^n-1) = M(2^(n-1)-1) + 2 = M(2^(n-2)-1) + 4... = 2(n-1). Or M(N) = 2 Lg(N+1) - 2.
The exact solution for general N can be fairly involved, but we can see that Lg(N) <= M(N) <= 2 Lg(N+1) - 2. Thus M(N) is Θ(Lg(N)).
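To check those bounds empirically, here is an instrumented copy of the function (my own sketch; the mults counter and pow_rec name are made up):

#include <stdio.h>
#include <math.h>

static int mults; /* multiplication counter, i.e. M(N) */

int pow_rec(int base, int n) {
    if (n == 0)
        return 1;
    if (n == 1)
        return base;
    if (n % 2 == 0) {
        mults += 1;  /* one multiply: base*base */
        return pow_rec(base * base, n / 2);
    }
    mults += 2;      /* base*base, plus the outer multiply by base */
    return base * pow_rec(base * base, n / 2);
}

int main(void) {
    for (int n = 2; n <= 33; n++) {
        mults = 0;
        pow_rec(1, n); /* base 1 keeps values small; only the count matters */
        printf("N=%2d  M(N)=%d  Lg(N)=%5.2f  2Lg(N+1)-2=%5.2f\n",
               n, mults, log2((double)n), 2 * log2((double)n + 1) - 2);
    }
    return 0;
}

Powers of two hit the lower bound exactly, and values of the form 2^n - 1 hit the upper bound, as derived above.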
I have the homework question:
Find a theta notation for the number of times the statement x = x + 1 is executed. (10 points).
i = n
while (i >= 1)
{
    for j = 1 to n
    {
        x = x + 1
    }
    i = i/2
}
This is what I have done:
OK, first let's make it easier. We will first find the order of growth of:
while (i >= 1)
{
    x = x + 1
    i = i/2
}
which has order of growth O(log(n)) (log base 2, in fact).
The inner for loop that we removed executes n times on every pass of the while loop, therefore the algorithm should be of order:
O(log(n)*n)
The part where I get confused is that I am supposed to find theta notation, NOT big-O. I know that theta notation is supposed to bound the function on the upper and lower limit. Would the correct answer be Theta(log(n)*n)?
I found the answer at this link, but I don't know how you get to that answer. Why do they claim that the answer is Theta(n)?
You should now prove it is also Omega(nlogn).
I won't show exactly how, since it is homework - but it uses the same principles by which you showed O(nlogn). You need to show [informal explanation:] that the asymptotic behavior of the function grows at least as fast as nlogn. [For big O you showed it grows at most at the rate of nlogn.]
Remember that if a function is both O(nlogn) and Omega(nlogn), it is Theta(nlogn) [and vice versa].
p.s. Your hunch is true; it is easy to show the count is not O(n), and thus it is not Theta(n).
p.s. 2: I think the author of the other answer confused it with a different program:
i = n
while (i >= 1)
{
    for j = 1 to i // NOTE: i instead of n here!
    {
        x = x + 1
    }
    i = i/2
}
The above program is indeed Theta(n), but it is different from the one you have provided.
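If it helps to see the difference concretely, here is a quick sketch of mine (not a proof, and not from the answer; it just counts the x = x + 1 executions in both variants):

#include <stdio.h>

/* inner loop bound is n: the count grows like n*log(n) */
long count_fixed(long n) {
    long x = 0;
    for (long i = n; i >= 1; i /= 2)
        for (long j = 1; j <= n; j++)
            x++;
    return x;
}

/* inner loop bound is i: the count grows like 2n */
long count_shrinking(long n) {
    long x = 0;
    for (long i = n; i >= 1; i /= 2)
        for (long j = 1; j <= i; j++)
            x++;
    return x;
}

int main(void) {
    for (long n = 1; n <= 1000000; n *= 10)
        printf("n=%8ld  fixed=%10ld  shrinking=%9ld\n",
               n, count_fixed(n), count_shrinking(n));
    return 0;
}

The first column grows faster than linearly, while the second stays below 2n.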
Rephrasing your code fragment in a more formal way, so that it could be represented easily using Sigma Notation:
for (i = n; i >= 1; i = i/2) {
    for (j = 1; j <= n; j++) {
        x = x + 1; // instruction of cost 'c'
    }
}
We obtain:
T(n) = Σ_{k=0}^{floor(log2 n)} Σ_{j=1}^{n} c = c · n · (floor(log2 n) + 1), which is Theta(n log n).
As @amit mentions, I already have the upper limit of the function, and that is big-O, which is in fact O(n*lg n). If I plot a table of that function I get something like:
n    n*lg n
1     0
2     2
3     4.754887502
4     8
5    11.60964047
6    15.509775
7    19.65148445
8    24
9    28.52932501
10   33.21928095
Because that is big-O, the real function will be bounded above by those values (up to a constant factor). In other words, the real values should be less than or equal to the values in the table. For example, taking a point, say n = 9, we know from the table that the answer should be less than or equal to 28.52932501.
So now what is missing is to find Omega, the other bound. I think that the lower-bound function should be Omega(n), and then we get the table:
n    Omega(n)
1     1
2     2
3     3
4     4
5     5
6     6
7     7
8     8
9     9
...
So that will be the other bound. If we again take the point n = 9, that gives us 9, which means that our real function should give a value greater than or equal to 9. Based on our big-O function we also know that it should be less than or equal to 28.52932501.