Proving the order of complexity proofs - big-o

Show that if f(n) is Ω(n∗g(n)), then f(n) is not O(g(n)).
Assume f(n) is Ω(n ∗ g(n)) and f(n) is O(g(n)). We want to derive a contradiction. The approach is to find a value of n that violates the definitions.
Proof: f(n) is Ω(n ∗ g(n)) implies there exist positive values C and k such that n > k implies f(n) ≥ C ∗ n ∗ g(n). f(n) is O(g(n)) implies there exist positive values C′ and k′ such that n > k′ implies f(n) ≤ C′ ∗ g(n).

Assuming the following: Omega signifies the lower-bound complexity and Big-Oh signifies the upper-bound complexity, this problem is solved using the definitions of the two.
If f(n) is Omega(n*g(n)) then this means (from definition) that there exists an n0 and an M0, such that for all n > n0, f(n) > M0 * n * g(n).
If f(n) is O(g(n)) then this means (from definition), that there exists an n1 and an M1, such that for all n > n1, f(n) < M1 * g(n).
Let n2 = max(n0, n1); then for all n > n2, M1 * g(n) > f(n) > M0 * n * g(n).
We will now focus on the two bounds, dropping f(n) itself. We have M1 * g(n) > M0 * n * g(n); dividing by g(n), which is positive, gives M1 > M0 * n, thus n < M1/M0.
Now, no matter what values M1 and M0 take, n < M1/M0 is not true for all n > n2, which is necessary if we assume both complexities to be correct.
Thus, we arrive at a contradiction, so both complexities cannot be true at once.
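If it helps to see the contradiction numerically, here is a small sketch (not part of the proof; the functions and the value of M1 are just examples I chose), taking g(n) = n and f(n) = n * g(n), so that f is Omega(n * g(n)) with M0 = 1. The ratio f(n)/g(n) then outgrows any candidate constant M1:

def g(n): return n            # example function
def f(n): return n * g(n)     # f(n) >= M0 * n * g(n) with M0 = 1

M1 = 1000                     # any candidate constant for the claimed O(g(n)) bound
for n in (10, 100, 1000, 10**4, 10**5):
    ratio = f(n) / g(n)       # equals n here, so it grows without bound
    print(n, ratio, ratio <= M1)

No matter how large M1 is chosen, the last column turns False once n exceeds M1/M0, which is exactly the contradiction above.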
Good luck in your studies.

Related

Proving order of complexity: if f(n) is Ω(n∗g(n)), then f(n) is not O(g(n))

Show that if f(n) is Ω(n∗g(n)), then f(n) is not O(g(n))
Assume f(n) is Ω(n ∗ g(n)) and f(n) is O(g(n)). We need to derive a contradiction. The approach is to find a value of n that violates the definitions.
Proof: f(n) is Ω(n ∗ g(n)) implies there exist positive values C and k such that n > k implies f(n) ≥ C ∗ n ∗ g(n). f(n) is O(g(n)) implies there exist positive values C′ and k′ such that n > k′ implies f(n) ≤ C ∗ g(n).
So what value of n violates the definition and how can I show a contradiction?
Your approach of proving the statement by contradiction will work. But first of all, you need to be a bit more precise:
1. f and g are positive non-decreasing functions on integers
2. C and C' are > 0
3. Your last implication should read C' * g(n) (as opposed to C * g(n)).
So we start with:
(a) There exist positive integers C, C', k, k' such that for all n > k and n' > k':
C * n * g(n) <= f(n) and f(n') <= C' g(n')
By chaining together your two implications and merging the two universal quantifiers into one (by noting that for all n > k and n' > k' implies for all n > max(k,k')), you immediately get:
(b) There exist positive integers C, C', k, k' such that for all n > max(k,k'):
C * n * g(n) <= C' g(n)
Dividing by g(n) on both sides, which is valid by assumption 1. above, yields the equivalent:
(c) There exist positive integers C, C', k, k' such that for all n > max(k,k'):
C * n <= C'
This is equivalent to:
(d) There exist positive integers C, C', k, k' such that for all n > max(k,k'):
n <= C'/C
The last statement is equivalent to false. This is a contradiction and hence the original statement is true.
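If you want an explicit witness for the contradiction, here is a small sketch (the function name and example values are mine, not part of the answer above): given any claimed constants C, C', k, k', it returns an n > max(k, k') with n > C'/C, i.e. a point at which (d), and hence (b), must fail.

import math

def violating_n(C, C_prime, k, k_prime):
    # any n greater than both max(k, k') and C'/C violates n <= C'/C
    return max(k, k_prime, math.ceil(C_prime / C)) + 1

print(violating_n(C=1, C_prime=50, k=3, k_prime=7))   # 51, for example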

Asymptotic bounds and Big Θ notation

Suppose that f(n) = 4^n and g(n) = n^n; is it right to conclude that f(n) = Θ(g(n))?
In my opinion it's a correct claim but I'm not 100% sure.
It is incorrect. f(n) = Theta(g(n)) if and only if both f(n) = O(g(n)) and g(n) = O(f(n)). It is true that f(n) = O(g(n)). We will show that it is not the case that g(n) = O(f(n)).
Assume g(n) = O(f(n)). Then there exists a positive real constant c and a positive natural number n0 such that for all n > n0, g(n) <= c * f(n). For our functions, this implies n^n <= c * 4^n. If we take the nth root of both sides of this inequality we get n <= 4 * c^(1/n). We are free to assume c >= 1 and n0 >= 1, since if a smaller value worked then a larger value would work too. For all c > 1 and n > 1, 4 * c^(1/n) is strictly less than 4c. But then if we choose n > 4c, the inequality is false. So there cannot be an n0 such that the condition holds for all n at least n0. This is a contradiction; our initial assumption is disproven.
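A quick numeric illustration of why no constant works (not a proof; c = 10**6 is just an example value): n^n eventually dwarfs c * 4^n for any fixed c.

c = 10**6    # example candidate constant
for n in (4, 8, 16, 32):
    print(n, n**n <= c * 4**n)   # flips from True to False as n grows

Increasing c only delays the flip; as argued above, any n > 4c breaks the inequality.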

Prove that f(n) = Θ(g(n)) iff g(n) = Θ(f(n))

I have been given the problem:
f(n) and g(n) are asymptotically positive functions. Prove f(n) = Θ(g(n)) iff g(n) = Θ(f(n)).
Everything I have found points to this statement being invalid. For example an answer I've come across states:
f(n) = O(g(n)) implies g(n) = O(f(n))
f(n) = O(g(n)) means g(n) grows faster than f(n). It cannot imply that f(n) grows
faster than g(n). Hence not true.
Another states:
If f(n) = O(g(n)) then g(n) = O(f(n)). This is false. If f(n) = 1 and g(n) = n
for all natural numbers n, then f(n) <= g(n) for all natural numbers n, so
f(n) = O(g(n)). However, suppose g(n) = O(f(n)). Then there are a natural
number n0 and a constant c > 0 such that n = g(n) <= c * f(n) = c for all n >=
n0, which is impossible.
I understand that there are slight differences between my exact question and the examples I have found, but I've only been able to come up with arguments that do not prove it. Am I correct in thinking that it cannot be proved, or am I overlooking some detail?
You can start from here:
Formal Definition: f(n) = Θ(g(n)) means there are positive constants c1, c2, and k, such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ k.
Because it is an iff, you need to prove both directions: assume the left side and prove the right side, then assume the right side and prove the left side.
Left -> right
We consider that:
f(n) = Θ(g(n))
and we want to prove that
g(n) = Θ(f(n))
So, we have some positive constants c1, c2 and k such that:
0 ≤ c1*g(n) ≤ f(n) ≤ c2*g(n), for all n ≥ k
The first relation between f and g is:
c1*g(n) ≤ f(n) => g(n) ≤ 1/c1*f(n) (1)
The second relation between f and g is:
f(n) ≤ c2*g(n) => 1/c2*f(n) ≤ g(n) (2)
If we combine (1) and (2), we obtain:
1/c2*f(n) ≤ g(n) ≤ 1/c1*f(n)
If you consider c3 = 1/c2 and c4 = 1/c1, they exist and are positive (because the denominators are positive). And this is true for all n ≥ k (where k can be the same).
So, we have some positive constants c3, c4, k such that:
c3*f(n) ≤ g(n) ≤ c4*f(n), for all n ≥ k
which means that g(n) = Θ(f(n)).
Analogous for right -> left.
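To see the algebra in action, here is a small numeric sketch (the functions and constants are examples I picked, not part of the proof): with f(n) = 3n^2 + n, g(n) = n^2, c1 = 3, c2 = 4 and k = 1, the derived constants c3 = 1/c2 and c4 = 1/c1 indeed sandwich g(n) between multiples of f(n).

def f(n): return 3*n*n + n    # example function
def g(n): return n*n          # example function

c1, c2, k = 3, 4, 1           # constants witnessing f(n) = Θ(g(n))
c3, c4 = 1/c2, 1/c1           # derived constants for g(n) = Θ(f(n))

for n in range(k, 100):
    assert c1*g(n) <= f(n) <= c2*g(n)    # f(n) = Θ(g(n))
    assert c3*f(n) <= g(n) <= c4*f(n)    # g(n) = Θ(f(n)), as derived above
print("both sandwiches hold for all sampled n >= k")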

Proving if g(n) is o(f(n)), then f(n) + g(n) is Theta(f(n))

So I'm struggling with proving (or disproving) the above question. I feel like it is true, but I'm not sure how to show it.
Again, the question is if g(n) is o(f(n)), then f(n) + g(n) is Theta(f(n))
Note, that is a little-o, not a big-o!!!
So far, I've managed to (easily) show that:
g(n) = o(f(n)) -> g(n) < c*f(n)
Then g(n) + f(n) < (c+1)*f(n) -> (g(n) + f(n)) = O(f(n))
However, for showing Big Omega, I'm not sure what to do there.
Am I going about this right?
EDIT: Everyone provided great help, but I could only mark one. THANK YOU.
One option would be to take the limit of (f(n) + g(n)) / f(n) as n tends toward infinity. If this converges to a finite, nonzero value, then f(n) + g(n) = Θ(f(n)).
Assuming that f(n) is nonzero for sufficiently large n, the above ratio, in the limit, is
(f(n) + g(n)) / f(n)
= f(n) / f(n) + g(n) / f(n)
= 1 + g(n) / f(n).
Therefore, taking the limit as n goes to infinity, the above expression converges to 1 because the ratio g(n) / f(n) goes to zero (this is what it means for g(n) to be o(f(n))).
So far so good.
For the next step, recall that in the best case, 0 <= g(n); this should get you a lower bound on g(n) + f(n).
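As a quick sanity check on the limit argument (the functions are examples of my own, chosen so that g(n) is o(f(n))): with f(n) = n^2 and g(n) = n, the ratio (f(n) + g(n)) / f(n) visibly approaches 1.

def f(n): return n*n    # example function
def g(n): return n      # example function, g(n) in o(f(n))

for n in (10, 100, 1000, 10**6):
    print(n, (f(n) + g(n)) / f(n))   # 1.1, 1.01, 1.001, ... tends to 1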
Before we begin, let's first state what the little-o and Big-Θ notations mean:
Little-o notation
Formally, g(n) = o(f(n)) (or g(n) ∈ o(f(n))) means that for
every positive constant ε there exists a constant N such that
|g(n)| ≤ ε*|f(n)|, for all n > N (+)
From https://en.wikipedia.org/wiki/Big_O_notation#Little-o_notation.
Big-Θ notation
h(n) = Θ(f(n)) means there exist positive constants k_1, k_2
and N, such that k_1 · |f(n)| and k_2 · |f(n)| are a lower bound
and an upper bound on |h(n)|, respectively, for n > N, i.e.
k_1 · |f(n)| ≤ |h(n)| ≤ k_2 · |f(n)|, for all n > N (++)
From https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/big-big-theta-notation.
Given: g(n) ∈ o(f(n))
Hence, in our case, for every ε > 0 we can find some constant N such that (+) holds for our functions g(n) and f(n). That is, for all n > N we have
|g(n)| ≤ ε*|f(n)|
Choose a constant 0 < ε < 1 (recall that the above holds for every ε > 0),
with its accompanying constant N.
Then the following holds for all n > N (the outer inequalities use |g(n)| ≤ ε*|f(n)| with ε < 1, and the middle one uses |g(n)| ≥ 0):
ε(|g(n)| + |f(n)|) ≤ 2|f(n)| ≤ 2(|g(n)| + |f(n)|) ≤ 4*|f(n)| (*)
Stripping away the left-most inequality in (*) and dividing by 2, we have:
|f(n)| ≤ |g(n)| + |f(n)| ≤ 2*|f(n)|, n>N (**)
We see that this is the very definition of Big-Θ notation, as presented in (++), with constants k_1 = 1, k_2 = 2 and h(n) = g(n) + f(n). Hence
(**) => g(n) + f(n) is in Θ(f(n))
And we have shown that g(n) ∈ o(f(n)) implies (g(n) + f(n)) ∈ Θ(f(n)).
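Purely as an illustration of (**) (again with example functions of my own, f(n) = n^2 and g(n) = n, so that g(n) is o(f(n))): choosing ε = 1/2 lets us take N = 2, and the sandwich with k_1 = 1 and k_2 = 2 holds from there on.

def f(n): return n*n    # example function
def g(n): return n      # example function, g(n) in o(f(n))

N = 2    # for eps = 1/2 we have g(n) <= 0.5*f(n) whenever n > N
for n in range(N + 1, 1000):
    assert f(n) <= f(n) + g(n) <= 2 * f(n)   # (**) with k_1 = 1, k_2 = 2
print("f(n) <= f(n) + g(n) <= 2*f(n) for all sampled n > N")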

Understanding Big-O in loops

I am trying to get the correct Big-O of the following code snippet:
s = 0
for x in seq:
    for y in seq:
        s += x*y
    for z in seq:
        for w in seq:
            s += x-w
According to the book I got this example from (Python Algorithms), they explain it like this:
The z-loop is run for a linear number of iterations, and
it contains a linear loop, so the total complexity there is quadratic, or Θ(n^2). The y-loop is clearly Θ(n).
This means that the code block inside the x-loop is Θ(n + n^2). This entire block is executed for each
round of the x-loop, which is run n times. We use our multiplication rule and get Θ(n(n + n^2)) = Θ(n^2 + n^3)
= Θ(n^3), that is, cubic.
What I don't understand is: how can O(n(n + n^2)) become O(n^3)? Is the math correct?
The math being done here is as follows. When you say O(n(n + n^2)), that's equivalent to saying O(n^2 + n^3) by simply distributing the n throughout the product.
The reason that O(n^2 + n^3) = O(n^3) follows from the formal definition of big-O notation, which is as follows:
A function f(n) = O(g(n)) iff there exist constants n0 and c such that for any n ≥ n0, |f(n)| ≤ c|g(n)|.
Informally, this says that as n gets arbitrarily large, f(n) is bounded from above by a constant multiple of g(n).
To formally prove that n^2 + n^3 is O(n^3), consider any n ≥ 1. Then we have that
n^2 + n^3 ≤ n^3 + n^3 = 2n^3
So we have that n^2 + n^3 = O(n^3), with n0 = 1 and c = 2. Consequently, we have that
O(n(n + n^2)) = O(n^2 + n^3) = O(n^3).
To be truly formal about this, we would need to show that if f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n)). Let's walk through a proof of this. If f(n) = O(g(n)), there are constants n0 and c such that for n ≥ n0, |f(n)| ≤ c|g(n)|. Similarly, since g(n) = O(h(n)), there are constants n'0, c' such that for n ≥ n'0, |g(n)| ≤ c'|h(n)|. So this means that for any n ≥ max(n0, n'0), we have that
|f(n)| ≤ c|g(n)| ≤ c * c'|h(n)|
And so f(n) = O(h(n)).
To be a bit more precise - in the case of the algorithm described here, the authors are saying that the runtime is Θ(n^3), which is a stronger result than saying that the runtime is O(n^3). Θ notation indicates a tight asymptotic bound, meaning that the runtime grows at the same rate as n^3, not just that it is bounded from above by some multiple of n^3. To prove this, you would also need to show that n^3 is O(n^2 + n^3). I'll leave this as an exercise to the reader. :-)
More generally, if you have any polynomial of order k (with nonnegative coefficients), that polynomial is O(n^k) by a similar argument. To see this, let P(n) = Σ_{i=0}^{k} a_i n^i. Then, for any n ≥ 1, we have that
Σ_{i=0}^{k} a_i n^i ≤ Σ_{i=0}^{k} a_i n^k = (Σ_{i=0}^{k} a_i) n^k
so P(n) = O(n^k).
Hope this helps!
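One way to convince yourself empirically (a sketch of my own, not from the book or the answer above): count how many times the two += statements run and compare with n^2 + n^3.

def count_ops(n):
    seq = range(n)
    ops = 0
    for x in seq:
        for y in seq:
            ops += 1          # stands in for s += x*y
        for z in seq:
            for w in seq:
                ops += 1      # stands in for s += x-w
    return ops

for n in (5, 10, 20):
    print(n, count_ops(n), n**2 + n**3)   # the two counts agree exactly

The counts match n^2 + n^3 exactly, and for large n the n^3 term dominates, which is why the book reports Θ(n^3).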
n(n + n^2) == n^2 + n^3
Big-O notation only cares about the dominant term as n goes to infinity, so the whole algorithm is thought of as Θ(n^3).
O(n(n+n^2)) = O(n^2 + n^3)
Since the n^3 term dominates the n^2 term, the n^2 term is negligible and thus it is O(n^3).
The y loop can be discounted because of the z loop (O(n) + O(n^2) -> O(n^2))
Forget the arithmetic.
Then you're left with three nested loops that all iterate over the full length of 'seq', so it's O(n^3)
