How can I prove that 2^(n+a) is O(2^n)? The only thing I can think of is that n in 2^n is an arbitrary value, therefore n+a is just as arbitrary, so n+a = n. Alternatively, 2^(n+a) = 2^n * 2^a. 2^n is obviously O(2^n), and a exists as an arbitrary value in 2^a, so 2^a = 2^n = O(2^n). Is there a clearer/more formal way to prove this?
For the formal definition of big-O, there must exist an M and n0 such that 2^(n+a) <= M*2^n for all n > n0.
If we choose M = 2^a and n0 = 0, then we can see that 2^(n+a) = 2^a * 2^n = M*2^n, which is <= M*2^n for all n > n0. Therefore, 2^(n+a) is O(2^n).
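As a quick sanity check of that choice of constants, here is a small Python sketch (the value of a is an arbitrary pick for the demo, not part of the proof):

    # Check that 2^(n+a) <= M * 2^n with M = 2^a, for a sample value of a.
    a = 5          # arbitrary constant shift (illustrative assumption)
    M = 2 ** a     # the constant from the proof
    for n in range(1, 50):
        assert 2 ** (n + a) <= M * 2 ** n
    print("2^(n+a) <= 2^a * 2^n held for every tested n")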
See the definition of the big-O notation here. Think about whether you can find a constant M as in the definition.
In general, to prove that f(n) is O(g(n)), you must find a constant M and a positive integer N such that for all n >= N, f(n) <= M*g(n).
Related
Suppose that f(n) = 4^n and g(n) = n^n. Is it correct to conclude that f(n) = Θ(g(n))?
In my opinion it's a correct claim but I'm not 100% sure.
It is incorrect. f(n) = Θ(g(n)) if and only if both f(n) = O(g(n)) and g(n) = O(f(n)). It is true that f(n) = O(g(n)). We will show that it is not the case that g(n) = O(f(n)).
Assume g(n) = O(f(n)). Then there exists a positive real constant c and a positive natural number n0 such that for all n > n0, g(n) <= c * f(n). For our functions, this implies n^n <= c * 4^n. Taking the nth root of both sides gives n <= 4 * c^(1/n). We are free to assume c >= 1 and n0 >= 1, since if a smaller value worked, a larger value would work too. For all c >= 1 and n > 1, 4 * c^(1/n) is at most 4c. But then if we choose n > 4c, the inequality n <= 4 * c^(1/n) is false. So there cannot be an n0 such that the condition holds for all n > n0. This is a contradiction; our initial assumption is disproven.
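To see the contradiction numerically, here is a small Python sketch (the constant c = 10 is just an illustrative pick, not part of the proof):

    # The ratio n^n / (c * 4^n) grows without bound, so n^n <= c * 4^n
    # cannot hold for all large n, whatever constant c we pick.
    c = 10  # illustrative constant (assumption for the demo)
    for n in (5, 10, 20, 40):
        print(n, n ** n / (c * 4 ** n))
    # The printed ratios increase rapidly, already exceeding 1 by n = 6.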
Let's say you were given two logarithmic functions like
and you were asked to determine whether f(n) is O(g(n)), Ω(g(n)), or Θ(g(n)), how would you go about it? I found questions like these easier when comparing two polynomial functions, because, for example, with x(n) = n^2 and p(n) = n^2 you could find a c > 0 (e.g. 3) such that x(n) <= c*p(n) for all n greater than some n0 > 0, and that would prove that x(n) = O(p(n)). However, comparing two logarithmic functions seems much more difficult for some reason. Any help is appreciated, thanks!
f(n) is O(g(n)) iff there is a constant c and n_0 such that f(n) <= c * g(n) for each n >= n_0.
f(n) is Ω(g(n)) iff there is a constant c and n_0 such that f(n) >= c * g(n) for each n >= n_0.
Now, f(n) is Θ(g(n)) iff f(n) is O(g(n)) and f(n) is Ω(g(n)).
So, in your cases, we have:
f(n) = log(n^2) = 2 log n
which means g(n) is log n and c = 2: we have f(n) <= 2 log n and f(n) >= 2 log n, which makes it Ω(log n) (and, combined with the matching O(log n) bound, Θ(log n)).
By the way, f(n) <= n and f(n) >= 1 also hold, so f(n) is O(n) as well, but we don't use that bound because a tighter O(g(n)) is available. In that case the same function does not appear in both bounds (f(n) is O(n) but not Ω(n)), so that choice of g(n) does not give Ω. However, we only need one choice of g(n) for which the Ω bound holds in order to declare Ω; when we cannot find one, we say it is not Ω. Note the word "we say".
In the second case, only the fastest-growing term, the log n part, matters. Now c = 1 and g(n) = log n, so in this case it is also Ω(log n).
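A quick numeric illustration of the first case (Python, purely a sanity check that the ratio sits at the constant 2):

    import math

    # f(n) = log(n^2) is exactly 2 * log(n), so the ratio f(n) / log(n)
    # stays at the constant 2 for every n > 1, witnessing both the
    # O(log n) and the Omega(log n) bound with c = 2.
    for n in (10, 100, 10_000, 1_000_000):
        print(n, math.log(n ** 2) / math.log(n))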
I am a bit confused. Suppose I have to prove,
Now, if I calculate the limit,
From this, can I say that it belongs to O(4n)?
Which is not true for any value of n.
Is this the correct way to prove it?
A constant factor doesn't influence the O time complexity.
I mean O(2*n) = 2*O(n) = O(n).
If 2n+1 is in O(4n) => 2n+1 is in O(n).
Because lim(n->infinity) (2n+1)/n = 2 is a finite number => 2n+1 is in O(n).
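The same observation, checked numerically (a small Python sketch; the constant 3 is just one choice above the limit 2):

    # (2n + 1) / n approaches 2, so any constant above 2 (e.g. c = 3)
    # eventually bounds 2n + 1 by c * n.
    for n in (1, 10, 100, 10_000):
        print(n, (2 * n + 1) / n)
    assert all(2 * n + 1 <= 3 * n for n in range(1, 1000))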
According to the definition, if there is a constant c such that f(n) <= c*g(n) for all sufficiently large n, then f(n) belongs to O(g(n)).
So indeed, (2n+1) belongs to O(4n), because the constants 1 and 4 work: for every n >= 1,
1*(2n+1) <= 4n <= 4*(2n+1)
(This also shows that 4n belongs to O(2n+1); both are O(n).)
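The sandwich inequality above can be spot-checked directly (a small Python sketch; it holds from n = 1 onward):

    # 1 * (2n + 1) <= 4n <= 4 * (2n + 1) for every n >= 1,
    # giving both 2n + 1 = O(4n) and 4n = O(2n + 1).
    for n in range(1, 1000):
        assert 1 * (2 * n + 1) <= 4 * n <= 4 * (2 * n + 1)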
There is no best constant. As you've observed, 2 is not going to work. For every epsilon > 0, however, the constant 2 + epsilon will work. This can be proved in general by expanding the definition of limit. The condition for the limit of f(n)/g(n) to approach c as n goes to infinity is that, for every epsilon > 0, there exists n0 such that, for all n > n0, we have |f(n)/g(n) - c| < epsilon, which implies f(n)/g(n) < c + epsilon, which implies that f(n) < (c + epsilon) g(n). It follows that f(n) is O(g(n)), with big-O constant c + epsilon.
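Assuming, as the surrounding discussion suggests, that the functions are f(n) = 2n + 1 and g(n) = n, this can be illustrated numerically (eps = 0.01 is an arbitrary pick):

    # The constant 2 itself never works: 2n + 1 > 2n for every n.
    # But 2 + eps works once n is large enough (roughly n > 1/eps).
    eps = 0.01                 # any fixed positive value (illustrative choice)
    n0 = int(1 / eps) + 1      # past this point the bound with 2 + eps holds
    assert all(2 * n + 1 > 2 * n for n in range(1, 1000))
    assert all(2 * n + 1 <= (2 + eps) * n for n in range(n0, n0 + 1000))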
I came across two asymptotic function proofs.
f(n) = O(g(n)) implies 2^f(n) = O(2^g(n))
Given: f(n) ≤ C1 g(n)
So, 2^f(n) ≤ 2^C1 g(n) --(i)
Now, 2^f(n) = O(2^g(n)) → 2^f(n) ≤ C2 2^g(n) --(ii)
From (i) we find that (ii) will be true.
Hence 2^f(n) = O(2^g(n)) is TRUE.
Can you tell me if this proof is right? Is there any other way to solve this?
2. f(n) = O((f(n))^2)
How do I prove the second example? Here I consider two cases: one where f(n) < 1 and the other where f(n) > 1.
Note: None of them are homework questions.
The attempted proof for example 1 is well-intentioned but flawed. First, “2^f(n) ≤ 2^C1 g(n)” reads as 2^f(n) ≤ (2^C1)*g(n), which is false in general; it should be written 2^f(n) ≤ 2^(C1*g(n)). Second, the line beginning with “Now” merely restates what has to be proved, and the final claim that (ii) follows from (i) is not justified: (i) gives 2^f(n) ≤ 2^(C1*g(n)) = (2^g(n))^C1, and when C1 > 1 and g(n) grows without bound there is no constant C2 with (2^g(n))^C1 ≤ C2 * 2^g(n), so (ii) does not follow without further assumptions.
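To make that gap concrete, suppose f(n) = 2n and g(n) = n, so C1 = 2 (an illustrative choice of mine, not taken from the question); a quick Python check:

    # From (i): 2^f(n) <= 2^(C1 * g(n)) = (2^g(n))^C1.  With C1 = 2 and g(n) = n,
    # the ratio 2^f(n) / 2^g(n) = 2^n grows without bound, so no constant C2
    # can give 2^f(n) <= C2 * 2^g(n) in this case.
    for n in (5, 10, 20, 30):
        f, g = 2 * n, n
        print(n, 2 ** f / 2 ** g)   # 32.0, 1024.0, ~1.0e6, ~1.1e9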
A function like f(n) = 1/n disproves the claim in example 2 because there are no constants N and C such that for all n > N, f(n) < C*(f(n))². Proof: Let some N and C be given. Choose n>N, n>C. f(n) = 1/n = n*(1/n²) > C*(1/n²) = C*(f(n))². Because N and C were arbitrarily chosen, this shows that there are no fixed values of N and C such that for all n > N, f(n) < C*(f(n))², QED.
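A quick numeric run of that argument (Python sketch; the values of N and C are arbitrary picks for illustration):

    # For f(n) = 1/n, f(n) > C * f(n)^2 whenever n > C,
    # so no pair (N, C) can make f(n) <= C * f(n)^2 hold for all n > N.
    N, C = 100, 50               # arbitrary illustrative choices
    n = max(N, C) + 1            # a witness n beyond both N and C
    f = 1 / n
    print(f, C * f ** 2, f > C * f ** 2)   # prints True: the bound fails here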
Saying “f(n) ≥ 1” (for some n) is not enough to allow proving the second claim; but if you write “f(n) ≥ 1 for all n”, or “f() ≥ 1”, it is provable. For example, if f(n) = 1/n for odd n and 1+n for even n, we have f(n) > 1 for even n > 0, and less than 1 for odd n. To prove that f(n) = O((f(n))²) is false for this f, use the same proof as in the previous paragraph but with the additional provision that n is odd.
Actually, “f(n) ≥ 1 for all n” is stronger than necessary to ensure f(n) = O((f(n))²). Let ε be any fixed positive value. No matter how small ε is, “f(n) ≥ ε for all n > N'” ensures f(n) = O((f(n))²). To prove this, take C = max(1, 1/ε) and N=N'.
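A small check of that constant choice (Python sketch; the function f below is just one example that stays at or above ε):

    # If f(n) >= eps for all n past some point, then with C = max(1, 1/eps)
    # we get f(n) <= C * f(n)^2, because f(n)^2 >= eps * f(n).
    eps = 0.25
    C = max(1, 1 / eps)

    def f(n):
        return eps + 1.0 / n   # an example function that never drops below eps

    assert all(f(n) <= C * f(n) ** 2 for n in range(1, 10_000))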
Hi
I have a question that:
Consider T(n) = m * n^2, where n < m.
Is it correct to write T(n) = O(m)? My reasoning: T(n) = m*n*n, so because n < m we have T(n) = O(m).
thanks
No, the best thing you can do is to write T(n,m) = O(m^3). n < m is a very weak condition and basically just gives you n in O(m). For example, n could always be m-1.
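For instance, taking n = m - 1 (which n < m allows) already makes T grow like m^3, as a quick Python check suggests:

    # With n = m - 1, T = m * n^2 = m * (m - 1)^2, and T / m^3 tends to 1,
    # so no bound better than O(m^3) follows from n < m alone.
    for m in (10, 100, 10_000):
        n = m - 1
        T = m * n ** 2
        print(m, T / m ** 3)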
Edit: My first answer was imprecise, since T was written as a function of n only. If m is a constant, the answer still holds, but then O(m^3) is the same as O(1).
I believe that in this case T(n) = O(n^2), treating m as a constant.
The formal definition of big-O:
f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x0 such that |f(x)| ≤ M|g(x)| for all x > x0.
In your case, T(n) is always at most m * n^2, so taking M = m (a constant with respect to n) satisfies the definition with g(n) = n^2.