Big O Notation Example: Show that n^2 is not O(n)

Show that n^2 is not O(n)
f(n)=n^2
g(n) = n
c = 1
n_0=2
n^2 <= 1*n for all n >= n_0 = 2
At n = 2: 4 <= 2
4 is not less than or equal to 2. Therefore, n^2 is not O(n).
I need to show that NO c works here; however, c = 2 with n = 2 does work. So how is n^2 not O(n)?

Let's assume that n² is in O(n).
Then there must be a c and an n₀ such that for all n ≥ n₀, n² ≤ c*n (by the definition of O notation).
Let k = max(c, n₀) + 1. By the above property we have k² ≤ c*k (since k > n₀), from which it follows that k ≤ c.
However, k > c by construction. That's a contradiction.
Therefore our assumption is false and n² cannot be in O(n).
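To see the contradiction concretely, here is a minimal Python sketch (my own illustration of the argument above, not part of the original proof): for any candidate pair (c, n0), the witness n = max(c, n0) + 1 already breaks n² ≤ c·n.

def counterexample(c, n0):
    # The proof's witness: any n larger than both c and n0 breaks n^2 <= c*n.
    n = max(c, n0) + 1
    return n, n**2 <= c * n

# Try the values from the question: c = 2, n0 = 2.
print(counterexample(2, 2))    # (3, False): 9 <= 6 fails
print(counterexample(100, 1))  # (101, False): 10201 <= 10100 fails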

Related

Asymptotic Notation: Proving Big Omega, O, and Theta

I have a few asymptotic notation problems I do not entirely grasp.
When proving asymptotic complexity, I understand the procedure of finding a constant c and the n0 term for which the notation holds. So, for example:
Prove 7n+4 = Ω(n)
In such a case we would pick a constant c that is lower than 7, since this is Big Omega. Picking 6 would result in
7n+4 >= 6n
n+4 >= 0
n >= -4
But since n0 cannot be negative, we pick a positive integer, so n0 = 1.
But what about a problem like this:
Prove that n^3 − 91n^2 − 7n − 14 = Ω(n^3).
I picked 1/2 as the constant, reaching
(1/2)n^3 - 91n^2 - 7n - 14 >= 0.
But I am unsure how to continue. Also, there is a problem like this, which I think concerns theta:
Let g(n) = 27n^2 + 18n and let f(n) = 0.5n^2 − 100. Find positive constants n0, c1 and c2 such
that c1f(n) ≤ g(n) ≤ c2f(n) for all n ≥ n0.
In such a case am I performing two separate operations here, one big O comparison and one Big Omega comparison, so that there is a theta relationship, or tight bound? If so, how would I go about that?
To show n^3 − 91n^2 − 7n − 14 is in Ω(n^3), we need to exhibit some numbers n0 and c such that, for all n ≥ n0:
n^3 − 91n^2 − 7n − 14 ≥ c·n^3
You've chosen c = 0.5, so let's go with that. Rearranging gives:
n^3 − 0.5n^3 ≥ 91n^2 + 7n + 14
Multiplying both sides by 2 and simplifying:
182n^2 + 14n + 28 ≤ n^3
For all n ≥ 1, we have:
182n^2 + 14n + 28 ≤ 182n^2 + 14n^2 + 28n^2 = 224n^2
And when n ≥ 224, we have 224n^2 ≤ n^3. Therefore, the choice of n0 = 224 and c = 0.5 demonstrates that the original function is in Ω(n^3).
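As a quick sanity check (purely illustrative; the proof above stands on its own), a few lines of Python confirm that c = 0.5 and n0 = 224 work over a finite range:

# Spot-check n^3 - 91n^2 - 7n - 14 >= 0.5 * n^3 for n >= n0 = 224.
def f(n):
    return n**3 - 91*n**2 - 7*n - 14

c, n0 = 0.5, 224
print(all(f(n) >= c * n**3 for n in range(n0, 10_000)))  # True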

Finding the values in Big Oh

I am going through asymptotic notations from here. I am reading the definition f(n) ≤ c g(n).
For example, if f(n) = 2n + 2, can we satisfy f(n) = O(g(n)) by adjusting the values of c and n in any way we like, or is there a specific rule or formula for selecting c and n? Will n0 always be 1?
There is no formula per se. You can find the formal definition here:
f(n) = O(g(n)) means there are positive constants c and k, such that 0 ≤ f(n) ≤ cg(n) for all n ≥ k. The values of c and k must be fixed for the function f and must not depend on n. (big-O notation).
What I understood from your question is that you are not getting the essence of big-O notation. If your complexity is, for example, O(n^2), then you can guarantee that there is some value k after which f(n) will never exceed c g(n).
Let's try to prove f(n) = 2n + 2 is O(n):
As the function itself suggests, you cannot set c = 2: if you plug in c = 2, you have to find a k such that f(n) ≤ c g(n) for all n ≥ k, but clearly there is no n for which 2n ≥ 2n + 2. So we move on to c = 3.
Now, let's find the value of k. So, we solve the equation 3n ≥ 2n + 2. Solving it:
3n ≥ 2n + 2
=> 3n - 2n ≥ 2
=> n ≥ 2
Therefore, for c = 3, we have found the value k = 2 (the inequality holds for all n ≥ k).
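If it helps, here is a tiny illustrative Python check (not part of the definition itself) that c = 3, k = 2 works for this example, while c = 2 works for no n at all:

f = lambda n: 2*n + 2
print(all(f(n) <= 3*n for n in range(2, 1_000)))  # True:  c = 3 holds from k = 2 onwards
print(any(f(n) <= 2*n for n in range(1, 1_000)))  # False: c = 2 holds for no n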
You must also understand, your function isn't just O(n). It is also O(n^2), O(n^3), O(n^4) and so on. All because corresponding values of c and k exist for g(n) = n^2, g(n) = n^3 and g(n) = n^4.
Hope it helps.

How is f(x) = 4x^2 - 5x + 3 shown to be O(x^2)?

Here are the steps that are used to prove the above
|f(x)| = |4x^2 – 5x + 3|
<= |4x^2|+ |- 5x| + |3|
<= 4x^2 + 5x + 3, for all x > 0
<= 4x^2 + 5x^2 + 3x^2, for all x > 1
<= 12x^2, for all x > 1
Hence we conclude that f(x) is O(x^2)
I referred to this, but it does not help.
Can someone explain the above proof step by step?
Why is the absolute value of f(x) taken?
Why and how was every term replaced by an x^2 term?
Preparations
We start by loosely stating the definition of a function or algorithm f being in O(g(n)):
If a function f is in O(g(n)), then c · g(n) is an upper bound on f(n), for some non-negative constant c such that f(n) ≤ c · g(n) holds, for sufficiently large n (i.e., n ≥ n0 for some constant n0).
Hence, to show that f ∈ O(g(n)), we need to find a set of (non-negative) constants (c, n0) that fulfils
f(n) ≤ c · g(n), for all n ≥ n0, (+)
We note, however, that this set is not unique; the problem of finding the constants (c, n0) such that (+) holds is degenerate. In fact, if any such pair of constants exists, there will exist an infinite amount of different such pairs.
Analysis
Following common convention, we'll analyse your example using the variable name n rather than x.
f(n) = 4n^2 - 5n + 3 (++)
Now, for your example, we may assume, without loss of generality (since we're studying asymptotic complexity: function/algorithm behavior for "large" n) that n > n0 where n0 > 0. This would correspond to the analysis you've shown in your question analyzing absolute values of x. Anyway, given this assumption, the following holds:
f(n) = 4n^2 - 5n + 3 < 4n^2 + 3, for all n > n0
Now let, again without loss of generality, n0 equal 2 (we could choose any value, but let's choose 2 here). For n0 = 2, naturally n^2 > 3 holds for n > n0, which means the following holds:
f(n) = 4n^2 - 5n + 3 < 4n^2 + 3 < 4n^2 + n^2, for all n > n0 = 2
f(n) < 5n^2, for all n > n0 = 2
Now choose c = 5 and let g(n) = n^2:
f(n) < c · g(n), for all n > n0,
with c = 5, n0 = 2, g(n) = n^2
Hence, from (+), we've shown that f as defined in (++) is in O(g(n)) = O(n^2).
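If you want to convince yourself numerically (a small illustrative Python snippet of my own, not part of the proof itself), you can spot-check the final inequality f(n) < 5 · n^2 for n > n0 = 2:

def f(n):
    return 4*n**2 - 5*n + 3

c, n0 = 5, 2
# Check f(n) < c * n^2 for a range of n strictly greater than n0.
print(all(f(n) < c * n**2 for n in range(n0 + 1, 10_000)))  # True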

Which dominates: f(n) or g(n)?

I am doing an introductory course on algorithms. I've come across this problem which I'm unsure about.
I would like to know which of the two is dominant:
f(n): 100n + log n or g(n): n + (log n)^2
Given the definitions of each of:
Ω, Θ, O
I assumed f(n), so f(n) = Ω(g(n)).
Reason being that n dominates (log n)^2, is that true?
In this case,
lim_{n → ∞} f(n) / g(n) = 100.
If you go over calculus definitions, this means that, for any ε > 0, there exists some m for which
100 (1 - ε) g(n) ≤ f(n) ≤ 100 (1 + ε) g(n)
for any n > m.
From the definition of Θ, you can infer that these two functions are Θ of each other.
In general, if
lim_{n → ∞} f(n) / g(n) = c exists, and
0 < c < ∞,
then the two functions have the same order of growth (they are Θ of each other).
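A quick numerical illustration (a Python sketch of my own, assuming natural logarithms) shows the ratio f(n)/g(n) settling toward 100 as n grows, which is exactly what the limit argument uses:

import math

f = lambda n: 100*n + math.log(n)
g = lambda n: n + math.log(n)**2

for n in (10, 10**3, 10**6, 10**9):
    print(n, f(n) / g(n))
# The printed ratios approach 100, so f and g are Theta of each other.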
n dominates both log(n) and (log n)^2
A little explanation
f(n) = 100n + log n
Here n dominates log n for large values of n.
So f(n) = Θ(n) .......... [1]
g(n) = n + (log n)^2
Now, (log n)^2 dominates log n.
But n still dominates (log n)^2.
So g(n) = Θ(n) .......... [2]
Now, taking results [1] and [2] into consideration.
f(n) = Θ(g(n)) and g(n) = Θ(f(n))
since they will grow at the same rate for large values of n.
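To see that n really does dominate (log n)^2, here is a small illustrative Python check (natural log assumed): the ratio (log n)^2 / n shrinks toward 0 as n grows.

import math

for n in (10, 10**3, 10**6, 10**9):
    print(n, math.log(n)**2 / n)
# The ratio tends to 0, so the n term dominates (log n)^2 in g(n).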
We can say that f(n) = O(g(n)) if there are constants c > 0 and n0 > 0 such that
f(n) <= c*g(n), n > n0
This is the case for both directions:
# c == 100
100n + log n <= 100(n + (log n)^2)
= 100n + 100(log(n)^2) (n > 1)
and
# c == 1
n + (log n)^2 <= 100n + log n (n > 1)
Taken together, we've proved that n + (log n)^2 <= 100n + log n <= 100(n + (log n)^2), which proves that f(n) = Θ(g(n)), which is to say that neither dominates the other. Both functions are Θ(n).
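A quick Python spot-check of that two-sided inequality (illustrative only, natural log assumed):

import math

f = lambda n: 100*n + math.log(n)
g = lambda n: n + math.log(n)**2

# g(n) <= f(n) <= 100 * g(n) should hold for all n >= 2.
print(all(g(n) <= f(n) <= 100 * g(n) for n in range(2, 100_000)))  # True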
g(n) dominates f(n), or equivalently, g(n) is Ω(f(n)), and the same holds vice versa.
Considering the definition, you see that you can drop the factor 100 in the definition of f(n) (since you can multiply by any fixed constant) and you can drop both lower-order addends, since they are dominated by the linear term n.
The above follows from n being Ω(n + log n) and n being Ω(n + (log n)^2).
hope that helps,
fricke

Asymptotic runtime for an algorithm

I've decided to try a problem about analyzing the worst-case runtime of an algorithm to gain some practice.
Since I'm a beginner, I only need help expressing my answer in the right way.
I came across this problem in a book that uses the following algorithm:
Input: A set of n points (x1, y1), . . . , (xn, yn) with n ≥ 2.
Output: The squared distance of a closest pair of points.
ClosePoints
1. if n = 2 then return (x1 − x2)^2 + (y1 − y2)^2
2. else
3. d ← 0
4. for i ← 1 to n − 1 do
5. for j ← i + 1 to n do
6. t ← (xi − xj)^2 + (yi − yj)^2
7. if t < d then
8. d ← t
9. return d
My question is: how can I offer a good proof that T(n) = O(n^2), T(n) = Ω(n^2), and T(n) = Θ(n^2), where T(n) represents the worst-case runtime?
I know that we say that f is O(g),
if and only if there is an n0 ∈ N and c > 0 in R such that for all
n ≥ n0 we have
f(n) ≤ cg(n).
And also we say that f is Ω(g) if there is an
n0 ∈ N and c > 0 in R such that for all n ≥ n0 we have
f(n) ≥ cg(n).
Now I know that the algorithm is doing c * n(n - 1) iterations, yielding T(n) = c*n^2 - c*n.
The first 3 lines are executed O(1) times. Line 4 loops for n - 1 iterations, which is O(n). Line 5 loops for n - i iterations, which is also O(n). Does the inner loop's body
(lines 6-7) take (n - 1)(n - i) time in total or just O(1) per execution, and why? The only variation is how many times line 8 (d ← t) is performed, but that must be at most O(n^2).
So, how should I write a good and complete proof that T(n) = O(n^2), T(n) = Ω(n^2), and T(n) = Θ(n^2)?
Thanks in advance
Count the number of times t changes its value. Since changing t is the innermost operation performed, finding how many times that happens will allow you to find the complexity of the entire algorithm.
i = 1 => j runs n - 1 times (t changes value n - 1 times)
i = 2 => j runs n - 2 times
...
i = n - 1 => j runs 1 time
So the number of times t changes is 1 + 2 + ... + (n - 1). This sum is equal to n(n - 1) / 2, which is dominated by 0.5 * n^2.
Now just find appropriate constants and you can prove that this is Ω(n^2), O(n^2), Θ(n^2).
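Here is an illustrative Python version of that count (my own sketch mirroring the pseudocode's loop bounds), confirming that the innermost line runs exactly n(n − 1)/2 times:

def inner_iterations(n):
    # Mirror the two loops: i from 1 to n-1, j from i+1 to n.
    count = 0
    for i in range(1, n):
        for j in range(i + 1, n + 1):
            count += 1
    return count

for n in (2, 5, 10, 100):
    print(n, inner_iterations(n), n * (n - 1) // 2)  # the two counts always match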
T(n) = c*n^2 - c*n behaves like c*n^2 for large n, which is why T(n) = O(n^2).
If you observe the two for loops, each gives an O(n) factor, because each loop counter is incremented/decremented in a linear fashion. Hence, the two loops combined roughly give O(n^2) complexity. The whole point of big-oh is to find the dominating term; coefficients do not matter. I would strongly recommend formatting your pseudocode properly so that it is not ambiguous. In any case, the if and else branches do not affect the complexity of the algorithm.
Let's observe the various definitions:
Big-Oh: f(n) is O(g(n)) if f(n) is asymptotically "less than or equal to" g(n).
Big-Omega: f(n) is Ω(g(n)) if f(n) is asymptotically "greater than or equal to" g(n).
Big-Theta: f(n) is Θ(g(n)) if f(n) is asymptotically "equal to" g(n).
So all you need to do is find constants that satisfy these definitions.
