Prove that n³ is not in O(n²)
Prove that n³ is not in Ω(n⁴)
Suppose that n³ is in O(n²). Then there exists some pair of positive constants c and n₀ such that n³ ≤ cn² for all n ≥ n₀. But n³ ≤ cn² is equivalent to n ≤ c, which is false for any constant c whenever n > max(n₀, c), so we have a contradiction.
Suppose that n³ is in Ω(n⁴). Then there exists some pair of positive constants c and n₀ such that n³ ≥ cn⁴ for all n ≥ n₀. But n³ ≥ cn⁴ is equivalent to n ≤ 1/c, which is false for any constant c whenever n > max(n₀, 1/c), so we have a contradiction.
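A quick numerical sketch (not a substitute for the proof; the sample values of c are arbitrary) makes the failure of any fixed constant visible, since the ratio n³/n² = n outgrows every candidate c:

    # The ratio n^3 / n^2 = n outgrows every candidate constant c.
    for c in [1, 10, 100, 1000]:
        n = c + 1  # the first point past c
        print(f"c = {c}: at n = {n}, n^3 = {n**3} > c * n^2 = {c * n**2}")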
Related
My question is: is this true?
g(n) ∈ O(f(n)) ⇒ (g(n))^2 ∈ O((f(n))^2)
In the long run it should be true, but I have one example: (log n)^2, which is still in (or smaller than) O(√n).
Is there a way to prove this without a graph?
Thanks.
It is quite intuitive that, if a function g grows at most as fast as another function f, then the square of g grows at most as fast as the square of f.
Formally:
Statement #1. g(n) ∈ O(f(n)) means that, for at least one choice of a constant k > 0, you can find a constant a such that the inequality 0 ≤ g(n) ≤ k f(n) holds for all n > a.
Statement #2. g(n)^2 ∈ O(f(n)^2) means that, for at least one choice of a constant k > 0, you can find a constant a such that the inequality 0 ≤ g(n)^2 ≤ k f(n)^2 holds for all n > a.
Since we want to prove that g(n) ∈ O(f(n)) implies g(n)^2 ∈ O(f(n)^2), we want to reach the statement #2 starting from the statement #1.
Let us take two constants k and a such that the statement #1 is satisfied.
First, notice that:
0 ≤ k f(n) holds for all n > a (from the hypothesis);
⇒ 0 ≤ f(n) holds for all n > a (since k > 0). [Result #1]
Also notice that:
g(n) ≤ k f(n) holds for all n > a (from the hypothesis);
⇒ g(n)^2 ≤ (k f(n))^2 holds for all n > a (since, from the hypothesis and result #1, both g(n) and f(n) are non-negative for all n > a, so we can keep the ≤ sign);
⇒ g(n)^2 ≤ k^2 f(n)^2 holds for all n > a. [Result #2]
From results #1 and #2, statement #2 is satisfied with k^2 as its constant (and the same a); the left inequality 0 ≤ g(n)^2 holds trivially because squares are non-negative.
Q.E.D.
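As a concrete follow-up to the (log n)^2 example from the question: from log n ∈ O(√n), the theorem gives (log n)^2 ∈ O((√n)^2) = O(n). A small numeric sketch of the two statements (assuming Python and base-2 logs; the witnesses k = 1 and a = 16 are our own choice, checked only on a sampled range):

    import math

    # Spot-check statement #1 and statement #2 for g(n) = log2(n), f(n) = sqrt(n),
    # using the assumed witnesses k = 1 and a = 16, over a sampled range.
    k, a = 1, 16
    for n in range(a + 1, 100000):
        g, f = math.log2(n), math.sqrt(n)
        assert 0 <= g <= k * f               # statement #1
        assert 0 <= g**2 <= (k**2) * f**2    # statement #2, with constant k^2
    print("statements #1 and #2 hold on the sampled range")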
Show that if f(n) is Ω(n∗g(n)), then f(n) is not O(g(n))
Assume f(n) is Ω(n ∗ g(n)) and f(n) is O(g(n)). Need to show a contradiction. The approach is to find a value of n that violates the definitions.
Proof: f(n) is Ω(n ∗ g(n)) implies there exists positive values C and k such that n > k implies f(n) ≥ C ∗ n ∗ g(n). f(n) is O(g(n)) implies there exists positive values C′ and k′ such that n > k′ implies f(n) ≤ C ∗ g(n).
So what value of n violates the definition and how can I show a contradiction?
Your approach of proving the statement by contradiction works. But first of all, you need to be a bit more precise:
f and g are positive non-decreasing functions on integers
C and C' are > 0
Your last implication should read C' * g(n) (as opposed to C * g(n)).
So we start with:
(a) There exist positive constants C, C', k, k' such that for all n > k and n' > k':
C * n * g(n) <= f(n) and f(n') <= C' g(n')
By chaining your two inequalities together and merging the two universal quantifiers into one (anything that holds for all n > k and for all n' > k' in particular holds for all n > max(k,k')), you immediately get:
(b) There exist positive constants C, C', k, k' such that for all n > max(k,k'):
C * n * g(n) <= C' g(n)
Dividing by g(n) on both sides, which is valid by the first assumption above (g is positive), yields the equivalent:
(c) There exist positive constants C, C', k, k' such that for all n > max(k,k'):
C * n <= C'
This is equivalent to:
(d) There exist positive constants C, C', k, k' such that for all n > max(k,k'):
n <= C'/C
The last statement is false: no fixed bound C'/C can dominate every n. This is a contradiction, and hence the original statement is true.
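A concrete instance (the functions here are our own illustration, not part of the exercise) shows the collapse numerically: take f(n) = n * log(n) and g(n) = log(n), so f is Ω(n * g(n)) with C = 1 and k = 1, while f(n)/g(n) = n eventually exceeds any candidate C':

    import math

    # Illustrative choice: f(n) = n * log(n), g(n) = log(n), so f(n)/g(n) = n.
    def f(n): return n * math.log(n)
    def g(n): return math.log(n)

    for C_prime in [10, 1000, 100000]:
        n = 2
        while f(n) <= C_prime * g(n):   # scan for the first counterexample
            n += 1
        print(f"f(n) <= {C_prime} * g(n) first fails at n = {n}")  # n = C' + 1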
I'm wondering whether an algorithm with an exponential worst-case time complexity should always have that stated as O(2^n). For example, if I had an algorithm whose number of operations triples for each addition to the input size, would I write its time complexity as O(3^n), or would it still be classified as O(2^n)?
Any formal explanation would be greatly appreciated.
3^n ∉ O(2^n).
Let us assume that it were true.
Then there would exist constants c and n0 such that 3^n ≤ c * 2^n for all n ≥ n0.
The last requirement is equivalent to (3/2)^n ≤ c for all n ≥ n0.
However, (3/2)^n → ∞ as n → ∞, so (3/2)^n ≤ c cannot be true for all n ≥ n0 for any constant c.
No, O(2^n) and O(3^n) are different. If 3^n were O(2^n), there'd be a constant k such that 3^n <= k * 2^n for all large n. There's no such k because 3^n / 2^n is (3/2)^n which grows arbitrarily large.
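A brute-force check (a sketch; the candidate values of c are arbitrary) shows how quickly the ratio escapes any fixed bound:

    # For each candidate c, find the first n with 3^n > c * 2^n,
    # i.e. the first n with (3/2)^n > c.
    for c in [10, 1000, 10**6]:
        n = 1
        while 3**n <= c * 2**n:
            n += 1
        print(f"3^n <= {c} * 2^n first fails at n = {n}")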
c f(n) is in Θ(f(n)) for c > 0
I know that c is a constant, and that if I can prove c f(n) is in big O(f(n)) and in big Omega(f(n)) simultaneously, it is also in Θ(f(n)). But how can I prove it? I am confused.
c f(n) is O(f(n)) because there is a constant k such that:
|c f(n)| ≤ k |f(n)| for all sufficiently large n.
Hence, |c| |f(n)| ≤ k |f(n)|.
Dividing both sides by |f(n)| (assuming f(n) ≠ 0 for large n), we get |c| ≤ k.
So any value of k at least as large as |c| satisfies this condition. Therefore, c f(n) is O(f(n)).
You can use the same method to show that c f(n) is also Ω(f(n)), and therefore it is Θ(f(n)).
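Spelled out with explicit witnesses (one valid choice among many, assuming f(n) ≥ 0 for large n as in the usual definition): since c > 0, take k₁ = k₂ = c and any n₀ > 0. Then
k₁ f(n) ≤ c f(n) ≤ k₂ f(n) for all n ≥ n₀,
with equality on both sides, which is exactly the definition of c f(n) ∈ Θ(f(n)).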
I have a problem understanding the following question. It says:
Prove that exponential functions have different orders of growth for different
values of base.
It looks to me like, for example, considering aⁿ: if a = 3, its growth rate will be larger than when a = 2. It looks obvious. Is that really what the question wants? How can I do a formal proof of that?
Thanks in advance for your help.
f(n) ∈ O(g(n)) means there are positive constants c and k, such that 0 ≤ f(n) ≤ cg(n) for all n ≥ k. The values of c and k must be fixed for the function f and must not depend on n.
Let b > a > 1 without loss of generality, and suppose b^n ∈ O(a^n). This implies that there are positive constants c and k such that 0 ≤ b^n ≤ c·a^n for all n ≥ k, which is impossible:
b^n ≤ c·a^n for all n ≥ k implies (b/a)^n ≤ c for all n ≥ k,
which contradicts lim (b/a)^n = +∞ (as n → ∞), because b/a > 1.
So if b > a > 1 then b^n ∉ O(a^n), but a^n ∈ O(b^n), and therefore O(a^n) ⊊ O(b^n): the two bases give different orders of growth.
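As a final sanity check (a sketch; the bases and the values of c are arbitrary choices), the contradiction even comes with an explicit witness: for any c, every n > log(c) / log(b/a) violates b^n ≤ c·a^n.

    import math

    # Explicit witness: for b > a > 1 and any c, b^n > c * a^n
    # whenever n > log(c) / log(b/a).
    def witness(b, a, c):
        return math.floor(math.log(c) / math.log(b / a)) + 1

    for (b, a, c) in [(3, 2, 100), (1.2, 1.1, 10**9)]:
        n = witness(b, a, c)
        print(f"b={b}, a={a}, c={c}: at n={n}, (b/a)^n = {(b/a)**n:.3e} > {c}")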