I'm having trouble understanding the following question. It says:
Prove that exponential functions have different orders of growth for different
values of base.
It looks obvious to me: consider a^n, for example. If a = 3, its growth rate will be larger than when a = 2. Is that really all the question wants? How can I write a formal proof of that?
Thanks in advance for your help.
f(n) ∈ O(g(n)) means there are positive constants c and k, such that 0 ≤ f(n) ≤ cg(n) for all n ≥ k. The values of c and k must be fixed for the function f and must not depend on n.
Let b > a > 1 without loss of generality, and suppose b^n ∈ O(a^n). This implies that there are positive constants c and k such that 0 ≤ b^n ≤ c·a^n for all n ≥ k, which is impossible:
b^n ≤ c·a^n for all n ≥ k implies (b/a)^n ≤ c for all n ≥ k,
which contradicts the fact that (b/a)^n → +∞ as n → ∞, since b/a > 1.
So if b > a > 1, then b^n ∉ O(a^n), but a^n ∈ O(b^n), and therefore O(a^n) ⊊ O(b^n): different bases give genuinely different orders of growth.
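This is not a substitute for the proof, but here is a quick numerical sanity check in Python (with a = 2 and b = 3 picked arbitrarily) showing that no constant c can bound (b/a)^n:

```python
# Sanity check: the ratio b**n / a**n = (b/a)**n keeps growing,
# so no constant c can satisfy b**n <= c * a**n for all large n.
a, b = 2, 3  # any bases with b > a > 1 behave the same way

for n in (1, 5, 10, 20, 50, 100):
    ratio = (b / a) ** n
    print(f"n = {n:>3}: (b/a)^n = {ratio:.3e}")
# The ratio eventually exceeds any fixed c, so b^n is not in O(a^n).
```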
My question is: is this true?
g(n) ∈ O(f(n)) ⇒ (g(n))^2 ∈ O((f(n))^2)
In the long run it should be true, but I have one example, (log n)^2, which is still in (or below) O(sqrt n).
Is there a way to prove this without a graph?
Thanks.
It is quite intuitive that, if a function g grows at most as fast as another function f, then the square of g grows at most as fast as the square of f.
Formally:
Statement #1. g(n) ∈ O(f(n)) means that, for at least one choice of a constant k > 0, you can find a constant a such that the inequality 0 ≤ g(n) ≤ k f(n) holds for all n > a.
Statement #2. g(n)^2 ∈ O(f(n)^2) means that, for at least one choice of a constant k > 0, you can find a constant a such that the inequality 0 ≤ g(n)^2 ≤ k f(n)^2 holds for all n > a.
Since we want to prove that g(n) ∈ O(f(n)) implies g(n)^2 ∈ O(f(n)^2), we want to reach the statement #2 starting from the statement #1.
Let us take two constants k and a such that the statement #1 is satisfied.
First, notice that:
0 ≤ k f(n) holds for all n > a (from the hypothesis);
⇒ 0 ≤ f(n) holds for all n > a (since k > 0). [Result #1]
Also notice that:
g(n) ≤ k f(n) holds for all n > a (from the hypothesis);
⇒ g(n)^2 ≤ (k f(n))^2 holds for all n > a (since, from the hypothesis and result #1, both g(n) and f(n) are non-negative for all n > a, so squaring preserves the ≤ sign);
⇒ g(n)^2 ≤ k^2 f(n)^2 holds for all n > a. [Result #2]
From results #1 and #2, the statement #2 is satisfied.
Q.E.D.
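As a side note (this is only a numerical illustration, not part of the proof), your example fits the result: log n ∈ O(√n), so the implication gives (log n)^2 ∈ O((√n)^2) = O(n), and the ratio (log n)^2 / n indeed stays bounded (it even tends to 0):

```python
import math

# Illustration with g(n) = log n, f(n) = sqrt(n):
# g ∈ O(f), so g^2 should be in O(f^2) = O(n).  The ratio g(n)^2 / f(n)^2
# should therefore stay bounded (here it even goes to 0).
for n in (10, 100, 10_000, 10**6, 10**9):
    g2 = math.log(n) ** 2
    f2 = n  # (sqrt(n))^2
    print(f"n = {n:>10}: (log n)^2 / n = {g2 / f2:.6f}")
```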
I'm wondering whether an algorithm with exponential worst-case time complexity should always have that stated as O(2^n). For example, if I had an algorithm whose operations triple for each addition to the input size, would I write its time complexity as O(3^n), or would it still be classified as O(2^n)?
Any formal explanation would be greatly appreciated.
3^n ∉ O(2^n).
Suppose, for the sake of contradiction, that it were.
Then there would exist constants c and n0 such that 3^n ≤ c · 2^n for all n ≥ n0.
The last requirement is equivalent to (3/2)^n ≤ c for all n ≥ n0.
However, (3/2)^n → ∞ as n → ∞, so (3/2)^n ≤ c cannot be true for all n ≥ n0 for any constant c.
No, O(2^n) and O(3^n) are different. If 3^n were O(2^n), there'd be a constant k such that 3^n <= k * 2^n for all large n. There's no such k because 3^n / 2^n is (3/2)^n which grows arbitrarily large.
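To make the question's scenario concrete, here is a small sketch (the helper names double_calls and triple_calls are made up for this illustration): a recursion that makes two recursive calls per unit of input does Θ(2^n) work, one that makes three does Θ(3^n) work, and the ratio between the two counts grows without bound, which is exactly why O(3^n) cannot be collapsed into O(2^n).

```python
# Toy illustration: an algorithm making 2 recursive calls per step does
# Theta(2^n) work, one making 3 calls per step does Theta(3^n) work,
# and the gap 3^n / 2^n = (3/2)^n grows without bound.

def double_calls(n):
    """Counts the calls made by a recursion with 2 branches per level."""
    if n == 0:
        return 1
    return 1 + double_calls(n - 1) + double_calls(n - 1)

def triple_calls(n):
    """Counts the calls made by a recursion with 3 branches per level."""
    if n == 0:
        return 1
    return 1 + sum(triple_calls(n - 1) for _ in range(3))

for n in range(0, 13, 4):
    d, t = double_calls(n), triple_calls(n)
    print(f"n = {n:>2}: 2-branch = {d:>6}, 3-branch = {t:>8}, ratio = {t / d:.1f}")
```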
Let's suppose that a recursive formula is big-O(n^2) and, at the same time, big-Omega(n^2). Does this imply that the recurrence is big-Theta(n^2)?
To make a long story short: yes, it does. See the proof below.
Though everybody has heard of big-O notation, let's recall what exactly these notations mean, with the help of Introduction to Algorithms. The general case is written Ο(g(n)), Ω(g(n)), Θ(g(n)); here we will consider your particular g(n) = n².
Ο(n²)
The Ο(n²) notation defines a set of functions, each of which satisfies the following statement: there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ cn² holds for all n ≥ n₀.
So f(n) is just a function from Ο(n²). Examples: 13n, −5, 4n² + 5. All of these belong to Ο(n²).
Ω(n²)
The Ω(n²) notation defines a set of functions, each of which satisfies the following statement: there exist positive constants c and n₀ such that 0 ≤ cn² ≤ f(n) holds for all n ≥ n₀.
So f(n) is just a function from Ω(n²). Examples: n⁴ + n − 1, 3ⁿ, n² − 12. All of these belong to Ω(n²).
Θ(n²)
The Θ(n²) notation defines a set of functions, each of which satisfies the following statement: there exist positive constants c₁, c₂, and n₀ such that 0 ≤ c₁n² ≤ f(n) ≤ c₂n² holds for all n ≥ n₀.
Again, f(n) is just a function from Θ(n²). Representatives include n²/2 + 3 and 5n².
Proof
I take it that by saying the recursive formula is big-O(n²) and at the same time big-Omega(n²), you mean there is a function (let's call it f(n)) that belongs to both Ω(n²) and Ο(n²).
From Ω(n²) we get a constant c₁ (and a threshold n₁) such that c₁n² ≤ f(n) holds for all n ≥ n₁. From Ο(n²) we get a constant c₂ (and a threshold n₂) such that f(n) ≤ c₂n² holds for all n ≥ n₂. Consequently, taking n₀ = max(n₁, n₂), we have c₁n² ≤ f(n) ≤ c₂n² for all n ≥ n₀, which is exactly what Θ(n²) requires.
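As a concrete (made-up) example of the sandwich, take f(n) = 2n² + 3n: it is in Ω(n²) with c₁ = 2 and in Ο(n²) with c₂ = 5, both from n₀ = 1 on, so it is in Θ(n²). A quick check:

```python
# Concrete example of the sandwich c1*n^2 <= f(n) <= c2*n^2 for
# f(n) = 2n^2 + 3n, with c1 = 2, c2 = 5 and n0 = 1.
def f(n):
    return 2 * n * n + 3 * n

c1, c2, n0 = 2, 5, 1
assert all(c1 * n * n <= f(n) <= c2 * n * n for n in range(n0, 10_000))
print("c1*n^2 <= f(n) <= c2*n^2 holds for every tested n >= n0")
```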
I came across two proofs about asymptotic functions.
1. f(n) = O(g(n)) implies 2^f(n) = O(2^g(n))
Given: f(n) ≤ C1 g(n)
So, 2^f(n) ≤ 2^C1 g(n) --(i)
Now, 2^f(n) = O(2^g(n)) → 2^f(n) ≤ C2 2^g(n) --(ii)
From (i), we find that (ii) will be true.
Hence 2^f(n) = O(2^g(n)) is TRUE.
Can you tell me if this proof is right? Is there any other way to solve this?
2. f(n) = O((f(n))^2)
How do I prove the second example? Here I considered two cases: one where f(n) < 1 and the other where f(n) > 1.
Note: None of them are homework questions.
The attempted proof for example 1 looks well-intentioned but is flawed. First, “2^f(n) ≤ 2^C1 g(n)” reads as 2^f(n) ≤ (2^C1)*g(n), which is false in general; it should have been written 2^f(n) ≤ 2^(C1*g(n)). In the line beginning with “Now”, you should say explicitly what C2 is and where it comes from. And the claim “(ii) will be true” is unjustified: (ii) is exactly what has to be proven, so asserting that it follows from (i) without showing why is not a proof.
A function like f(n) = 1/n disproves the claim in example 2 because there are no constants N and C such that for all n > N, f(n) < C*(f(n))². Proof: Let some N and C be given. Choose n>N, n>C. f(n) = 1/n = n*(1/n²) > C*(1/n²) = C*(f(n))². Because N and C were arbitrarily chosen, this shows that there are no fixed values of N and C such that for all n > N, f(n) < C*(f(n))², QED.
Saying that “f(n) ≥ 1” is not enough to allow proving the second claim; but if you write “f(n) ≥ 1 for all n” or “f() ≥ 1” it is provable. For example, if f(n) = 1/n for odd n and 1+n for even n, we have f(n) > 1 for even n > 0, and less than 1 for odd n. To prove that f(n) = O((f(n))²) is false, use the same proof as in the previous paragraph but with the additional provision that n is odd.
Actually, “f(n) ≥ 1 for all n” is stronger than necessary to ensure f(n) = O((f(n))²). Let ε be any fixed positive value. No matter how small ε is, “f(n) ≥ ε for all n > N'” ensures f(n) = O((f(n))²). To prove this, take C = max(1, 1/ε) and N=N'.
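For what it's worth, here is a small numerical illustration of both points above (not a proof, just a sanity check): for f(n) = 1/n the ratio f(n)/f(n)² equals n, so no fixed C works, while any f bounded below by a positive ε is covered by C = max(1, 1/ε):

```python
# f(n) = 1/n: the ratio f(n) / f(n)^2 = n grows without bound,
# so f(n) <= C * f(n)^2 fails for every fixed C once n > C.
for n in (10, 1000, 10**6):
    f = 1 / n
    print(f"n = {n:>8}: f / f^2 = {f / f**2:.1f}")

# By contrast, if f(n) >= eps > 0 for all large n, then
# f(n) <= C * f(n)^2 with C = max(1, 1/eps), since f / f^2 = 1/f <= 1/eps.
```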
In this Big-O / computational complexity problem, it is given that a and b are positive constants greater than 1 and n is a variable parameter.
I assumed that a^(n+1) = O(a^n), a^(bn) = O(a^n), and a^(n+b) = O(a^n).
First I need to know if I am correct in assuming this.
If so, how would I prove that f(n) = O(f(n))?
Recall the definition of big-O:
f(n) ∈ O(g(n)) means there are positive constants c and k, such that 0 ≤ f(n) ≤ cg(n) for all n ≥ k. The values of c and k must be fixed for the function f and must not depend on n.
Let g = f, c = 1 and k = 1; then you have a trivial demonstration that f(n) ∈ O(f(n)).
Similarly, since a^(n+1) = a·a^n, let f(n) = a^(n+1), g(n) = a^n, c = a, k = 1; again the proof that O(a^(n+1)) = O(a^n) is trivial. The proof for O(a^(n+b)) = O(a^n) is identical (take c = a^b).
O(a^(bn)) is not equal to O(a^n) when a, b > 1; see Exponential growth in big-o notation.
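A quick numerical way to see the difference between the three cases (with a = 2 and b = 3 chosen arbitrarily): the ratios a^(n+1)/a^n and a^(n+b)/a^n are the constants a and a^b, while a^(bn)/a^n = a^((b−1)n) is unbounded, so no constant c can work for that last one:

```python
# a^(n+1)/a^n and a^(n+b)/a^n are constants (a and a^b), so both numerators
# are in O(a^n).  a^(b*n)/a^n = a^((b-1)*n) grows without bound, so a^(b*n)
# is not in O(a^n).
a, b = 2, 3  # arbitrary constants greater than 1

for n in (1, 5, 10, 20):
    r_shift1 = a ** (n + 1) / a ** n
    r_shiftb = a ** (n + b) / a ** n
    r_scale = a ** (b * n) / a ** n
    print(f"n = {n:>2}: a^(n+1)/a^n = {r_shift1}, a^(n+b)/a^n = {r_shiftb}, "
          f"a^(bn)/a^n = {r_scale:.3e}")
```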