Does 2^(2n) = O(2^n)?

I am learning algorithm analysis. In a book I read that 2^(2n) = O(2^n) is not true. I know that means we can't find a c such that 2^(2n) <= c·2^n. But if we put n = 5 and c = 10^6, the inequality holds. Can you please give me a hint? What am I doing wrong?

For 2^(2n) = O(2^n) to hold, you would have to find one c such that 2^(2n) <= c·2^n for all n > n0. Your example works only for small n: once n reaches the point where 2^n > c, the inequality no longer holds.

Let's just work out the math: 2^(2n) = (2^2)^n = 4^n, since a^(bc) = (a^b)^c.
The question is not what happens for specific values, but what happens when n gets one larger, or even twice as large.
If we replace n with n + 1, we get 2^(2(n+1)) = 2^(2n+2) = 2^2 · 2^(2n) = 4 · 2^(2n), so the left side becomes 4 times larger, while c·2^n only doubles.
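A quick numeric check makes the same point. Here is a minimal sketch (my own Python, not from the original posts): whatever constant c you fix, 2^(2n) <= c·2^n is equivalent to 2^n <= c, which must eventually fail.

```python
# Sketch: for any fixed c, find the first n where 2^(2n) <= c * 2^n breaks.
def smallest_failing_n(c):
    n = 1
    while 2 ** (2 * n) <= c * 2 ** n:  # equivalent to 2^n <= c
        n += 1
    return n

# With the poster's c = 10^6, the inequality already fails at n = 20,
# because 2^20 = 1,048,576 > 10^6.
print(smallest_failing_n(10 ** 6))  # 20
```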

T(n) = T(n/10) + T(an) + n, how to solve this?

Update: I'm still looking for a solution that doesn't rely on outside sources.
Given T(n) = T(n/10) + T(an) + n for some a, with T(n) = 1 if n < 10, I want to check whether the following is possible (and for which values of a, looking for the smallest possible a):
For every c > 0 there is n0 > 0 such that for every n > n0, T(n) >= c * n
I tried to expand the recurrence step by step, but it got really complicated and I got stuck, seeing no progress at all.
Here is what I did: (Sorry for adding an image; that's because I wrote a lot in Word and I can't paste it as text.)
Any help please?
Invoking Akra–Bazzi: g(n) = n¹, so the critical exponent is p = 1, and we want (1/10)¹ + a¹ = 1, hence a = 9/10.
Intuitively, this is like the mergesort recurrence: with linear overhead at each call, if we don't reduce the total size of the subproblems, we'll end up with an extra log in the running time (which will overtake any constant c).
Requirement:
For every c > 0 there is n0 > 0 such that for every n > n0, T(n) >= c*n
By substituting the recurrence in the inequality and solving for a, you will get:
T(n) >= c*n
(c*n/10) + (c*a*n) + n >= c*n
a >= 0.9 - (1/c)
Since our desired outcome is for every c (let c tend to infinity), we need a >= 0.9. Therefore the smallest value of a is 0.9, which satisfies T(n) >= c·n for every c.
Another perspective: draw a recursion tree and count the work done per level. The work done in each layer of the tree will be (1/10 + a) times the work of the level above it. (Do you see why?)
If (1/10 + a) < 1, the work per level decays geometrically, so the total work sums to some constant multiple of n (whose leading coefficient depends on a). If (1/10 + a) ≥ 1, the work per level stays the same or grows from one level to the next, so the total work depends (at least) on the number of layers in the tree, which is Θ(log n), because the subproblem sizes drop by a constant factor from one layer to the next and that can't happen more than Θ(log n) times. So once (1/10 + a) = 1, your runtime is suddenly Ω(n log n) = ω(n).
(This is basically the reasoning behind the Master Theorem, just applied to non-uniform subproblem sizes, which is where the Akra-Bazzi theorem comes from.)
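To see this numerically, here is a small sketch (my own code; I truncate n/10 and a·n to integers so the recursion reaches the base case, a detail the post leaves unspecified). It tabulates T(n)/n for a = 0.5, where the ratio levels off, and a = 0.9, where it keeps growing roughly like log n:

```python
from functools import lru_cache

def make_T(a):
    @lru_cache(maxsize=None)
    def T(n):
        if n < 10:
            return 1
        # Truncating to int is an assumption to make the recurrence well-defined.
        return T(n // 10) + T(int(a * n)) + n
    return T

for a in (0.5, 0.9):
    T = make_T(a)
    print(a, [round(T(10 ** k) / 10 ** k, 2) for k in range(2, 7)])
# a = 0.5: T(n)/n approaches a constant (about 1/(1 - 0.6) = 2.5).
# a = 0.9: T(n)/n keeps growing, so T(n) >= c*n eventually holds for every c.
```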

Specific Big O Questions

We're getting into Big O in my CS degree and I'm having a difficult time understanding it. There are two problems I'd like to post: one I tried to complete on my own, and another I'm not sure how to start. Would it be possible for a member to tell me whether my first one is correct, and maybe point me in a direction for understanding the second one? Any help is greatly appreciated.
a)
E(n) ≤ 5n^2 + 9n^3, then E(n) = O(?)
Guess: O(n^3)
Proof:
9n^3 + 5n^2 <= c*n^3, where c = 10 and n > 1,
Therefore, E(n) = O(n^3)
b)
E(n) ≤ 8n*sqrt(n) + 100n log2(n), then E(n) = O(?).
a)
For n = 2,
9*8 + 5*4 = 92 > 10 * 8 = 80, so the claim that n > 1 suffices is incorrect.
You should solve for an explicit n0.
b)
Should be O(n^(3/2)). Check with a large number such as 2^50: log(n) grows much more slowly than n^(1/2).
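Both parts are easy to sanity-check; here is a sketch (my own code). For (a), note that 9n^3 + 5n^2 <= 10n^3 reduces to 5n^2 <= n^3, i.e. n >= 5, so c = 10 works with n0 = 5. For (b), the ratio of the bound to n^(3/2) settles toward 8, since the log term vanishes relative to sqrt(n):

```python
import math

# (a) c = 10 works once n >= 5, since 5n^2 <= n^3 <=> n >= 5.
assert 9 * 2**3 + 5 * 2**2 > 10 * 2**3          # fails at n = 2, as shown above
assert all(9 * n**3 + 5 * n**2 <= 10 * n**3 for n in range(5, 10_000))

# (b) (8n*sqrt(n) + 100n*log2(n)) / n^(3/2) -> 8, so E(n) = O(n^(3/2)).
for n in (2 ** 10, 2 ** 30, 2 ** 50):
    f = 8 * n * math.sqrt(n) + 100 * n * math.log2(n)
    print(n, f / n ** 1.5)
```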

Solving the recurrence T(n) = T(floor(n/2)) + T(ceil(n/2)) + n - 1

I have the following recurrence:
T(n) = c for n = 1.
T(n) = T(floor(n/2)) + T(ceil(n/2)) + n - 1 for n > 1.
It looks like merge sort to me, so I guess that the solution to the recurrence is Θ(n log n). According to the master method I have:
a) Θ(1) for n = 1 (constant time).
b) If we drop the floor and ceiling we have (step 1):
T(n) = 2T(n/2) + n - 1 => a = 2, b = 2.
log_b(a) = log_2(2) = 1, so n^(log_b a) = n^1 = n.
Having a closer look, we see we are in case 2 of the master method:
if f(n) = Θ(n^(log_b a)), the solution to the recurrence is T(n) = Θ(n^(log_b a) · log n).
The solution is indeed T(n) = Θ(n log n), but our f(n) = n - 1 is off from n by the constant 1.
My first question is:
At step 1 we dropped the floor and ceiling. Is this correct? The second question is: how do I get rid of the constant 1? Do I drop it? Or should I name it d and prove that n - 1 is still Θ(n) (and if so, how do I prove it)? Lastly, is it better to prove it with the substitution method?
Edit: if we use the substitution method we get:
We guess that the solution is O(n). We need to show that T(n) <= c·n.
Substituting into the recurrence we obtain
T(n) <= c·floor(n/2) + c·ceil(n/2) + n - 1 = c·n + n - 1
So is it not merge sort? What am I missing?
It was a long time ago, but here goes:
At step 1 we dropped the floor and ceiling. Is this correct?
I would rather say
T(floor(n/2)) + T(floor(n/2)) <= T(floor(n/2)) + T(ceil(n/2))
T(floor(n/2)) + T(ceil(n/2)) <= T(ceil(n/2)) + T(ceil(n/2))
In case floor(n/2) and ceil(n/2) are not equal, the arguments differ by 1 (and you can ignore any constant difference).
The second question is: how do I get rid of the constant 1?
You ignore it. The reasoning is: even if the constant were huge, say 10^100, it would still be small compared to n once n grows large enough. In real life you can't really ignore very big constants, but that is where real life and theory differ. In any case, a constant of 1 makes the smallest possible difference.
Lastly, is it better to prove it with the substitution method?
You can prove it however you like; some ways are just simpler. Simpler is usually better, but beyond that, 'better' has no meaning. So my answer is no.
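One way to settle the "is it not merge sort?" edit is to compute the recurrence exactly and compare it with n·log2(n); the O(n) guess fails simply because the true solution is Θ(n log n), so the substitution guess should be T(n) <= c·n·log n instead. A sketch (my own code, taking the base-case constant c = 1):

```python
from functools import lru_cache
import math

@lru_cache(maxsize=None)
def T(n):
    if n == 1:
        return 1  # the base-case constant; its value doesn't change the growth rate
    return T(n // 2) + T((n + 1) // 2) + n - 1  # floor and ceiling of n/2

for n in (2 ** 4, 2 ** 10, 2 ** 16, 2 ** 20):
    print(n, T(n) / (n * math.log2(n)))
# The ratio tends to 1, so T(n) = Theta(n log n), exactly like merge sort.
```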

Big O Proof by Induction With Summation

I've been ripping my hair out trying to solve this:
Σ(k=0,n) 3^k = O(3^n)
I've been looking through various things online but I still can't seem to solve it. I know it involves the formal definition of Big O, where
|f(x)| <= C*|g(x)|, x>=k
Since they are the same, I am assuming C is some value I have to find through induction to prove the original statement, and that k=0.
Thanks for your help with this.
Σ(k=0,n) 3^k
= 3^0 + 3^1 + ... + 3^n
= (1 - 3^(n+1)) / (1 - 3)    ; sum of a geometric series
= (3/2)·3^n - 1/2
<= c·3^n    ; for c >= 3/2
= O(3^n)
Induction is not needed here; the sum is a geometric series and has a closed-form solution:
= 1·(1 - 3^(n+1))/(1 - 3) = (3^(n+1) - 1)/2
= (3*3^n - 1)/2
Pick C = 3/2, with F = (3/2)·3^n - 1/2 and G = 3^n; this satisfies the requirement for O(3^n). In practice, though it might be considered informal and sloppy, you don't worry much about the exact constant, since any constant will do for satisfying Big-O.
You can rewrite it as 3^n · (1 + 1/3 + 1/9 + ... + 1/3^n).
There is an upper bound for that sum: calculate the limit of the corresponding infinite series.
From there, it's easy to get a good C, e.g., 2.
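A quick check of both answers (a sketch, my own code): the ratio of the sum to 3^n converges to 3/2 from below, so any C >= 3/2 works.

```python
# Ratio of sum_{k=0}^{n} 3^k to 3^n; it approaches 3/2 = 1.5 from below.
for n in (1, 5, 10, 30):
    s = sum(3 ** k for k in range(n + 1))
    print(n, s / 3 ** n, s <= (3 / 2) * 3 ** n)  # always True
```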

Big oh notation running time

How do you work this out? Do you get c first, as the ratio of the two functions, and then use that ratio to find the range of n? How can you tell? Please explain, I'm really lost. Thanks.
Example 1: Prove that running time T(n) = n^3 + 20n + 1 is O(n^3)
Proof: by the Big-Oh definition,
T(n) is O(n^3) if T(n) ≤ c·n^3 for all n ≥ n0 (for some constants c > 0 and n0).
Let us check this condition:
if n^3 + 20n + 1 ≤ c·n^3, then 1 + 20/n^2 + 1/n^3 ≤ c.
Therefore, the Big-Oh condition holds for n ≥ n0 = 1 and c ≥ 22 (= 1 + 20 + 1). Larger values of n0 result in smaller factors c (e.g., for n0 = 10, c ≥ 1.201, and so on), but in any case the above statement is valid.
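The pattern is easy to verify directly (a sketch, my own code): the smallest admissible c for a given n0 is the value of 1 + 20/n^2 + 1/n^3 at n = n0, because that expression decreases as n grows.

```python
# Smallest c that works for all n >= n0, using the rearranged condition above.
def c_min(n0):
    return 1 + 20 / n0 ** 2 + 1 / n0 ** 3

print(c_min(1))   # 22.0   -> c >= 22 when n0 = 1
print(c_min(10))  # ~1.201 -> c >= 1.201 when n0 = 10
```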
I think the trick you're missing is that you aren't thinking of LARGE numbers. So let's take a counterexample:
T(n) = n^4 + n
and let's assume that we think it's O(N^3) instead of O(N^4). What you would see is
c = n + 1/n^2
which means that c, supposedly a constant, is actually c(n), a function of n. Taking N to a really big number shows that no matter what, c remains a function of n, so it can't be O(N^3).
What you want is that, in the limit as N goes to infinity, only a constant remains:
c = 1 + 1/n^3
You might object that this is still c(n)! But as N gets really, really big, 1/n^3 goes to zero. Hence, for very large N, when declaring T(n) to be O(N^4), c == 1, a constant!
Does that help?
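The same computation in code (a sketch, my own): the candidate "constant" for O(N^3) grows without bound, while the one for O(N^4) settles to 1.

```python
# Candidate constants c(n) = T(n) / n^p for T(n) = n^4 + n.
for n in (10, 1_000, 100_000):
    T = n ** 4 + n
    print(n, T / n ** 3, T / n ** 4)  # first column diverges, second tends to 1
```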
