How is it derived that f(x) = 4x^2 - 5x + 3 is O(x^2)? - big-o

Here are the steps used to prove the above:
|f(x)| = |4x^2 - 5x + 3|
<= |4x^2| + |-5x| + |3|
<= 4x^2 + 5x + 3, for all x > 0
<= 4x^2 + 5x^2 + 3x^2, for all x > 1
<= 12x^2, for all x > 1
Hence we conclude that f(x) is O(x^2)
I referred to this but it does not help.
Can someone explain the above proof step by step?
Why is the absolute value of f(x) taken?
Why and how was each term replaced by an x^2 term?

Preparations
We start by loosely stating the definition of a function or algorithm f being in O(g(n)):
If a function f is in O(g(n)), then c · g(n) is an upper bound on f(n), for some non-negative constant c such that f(n) ≤ c · g(n) holds for sufficiently large n (i.e., n ≥ n0 for some constant n0).
Hence, to show that f ∈ O(g(n)), we need to find a pair of (non-negative) constants (c, n0) that fulfils
f(n) ≤ c · g(n), for all n ≥ n0. (+)
We note, however, that this pair is not unique; the problem of finding constants (c, n0) such that (+) holds is degenerate. In fact, if any such pair exists, there exist infinitely many different such pairs.
Analysis
Following common convention, we'll analyse your example using the variable name n rather than x.
f(n) = 4n^2 - 5n + 3 (++)
Now, for your example, we may assume, without loss of generality (since we're studying asymptotic complexity, i.e. function/algorithm behaviour for "large" n), that n > n0 where n0 > 0. This corresponds to the analysis of absolute values of x in your question: taking |f(x)| covers negative arguments, and for positive arguments every term is positive, so the absolute values can simply be dropped. Given this assumption, the following holds:
f(n) = 4n^2 - 5n + 3 < 4n^2 + 3, for all n > n0
Now let, again without loss of generality, n0 equal 2 (we could choose any value, but let's choose 2 here). For n0 = 2, naturally n^2 > 3 holds for n > n0, which means the following holds:
f(n) = 4n^2 - 5n + 3 < 4n^2 + 3 < 4n^2 + n^2, for all n > n0 = 2
f(n) < 5n^2, for all n > n0 = 2
Now choose c = 5 and let g(n) = n^2:
f(n) < c · g(n), for all n > n0,
with c = 5, n0 = 2, g(n) = n^2
Hence, from (+), we've shown that f as defined in (++) is in O(g(n)) = O(n^2).
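To make the constants tangible, here is a quick Python spot-check (my own illustration, not part of the original proof) that both the question's pair (c = 12, n0 = 1) and the pair derived above (c = 5, n0 = 2) satisfy (+) on sample points. Finite sampling cannot prove (+); it merely corroborates the algebra.

    # Spot-check f(n) <= c * g(n) on sampled n > n0 for two valid (c, n0) pairs.
    def f(n):
        return 4 * n**2 - 5 * n + 3

    def g(n):
        return n**2

    for c, n0 in [(12, 1), (5, 2)]:  # two of the infinitely many valid pairs
        assert all(f(n) <= c * g(n) for n in range(n0 + 1, 10_000))
        print(f"f(n) <= {c} * n^2 for all sampled n > {n0}")

This also illustrates the earlier remark that the pair (c, n0) is not unique: both pairs pass.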

Related

Determining whether an expression has omega complexity

6n^4 − 3n^2 + 3 is Ω(n^4)
Hello, I need to determine whether this statement is true or false.
Any help is appreciated.
Thank you
I am leaning towards true due to the n^4 term; however, the Omega is making me doubt this.
I believe that if it were big O, it would be a true statement.
f is Omega(g) if there exist constants c and n0 such that for all n > n0, f(n) >= c * g(n). For us, we need to evaluate whether there are constants n0 and c such that 6n^4 - 3n^2 + 3 > cn^4 for all n > n0. If we choose c = 5 we get...
6n^4 - 3n^2 + 3 > 5n^4
n^4 - 3n^2 + 3 > 0
Using the quadratic formula we can find values for n^2 where the LHS equals zero:
n^2 = [-b +- sqrt(b^2 - 4ac)] / 2a
= [3 +- sqrt(9 - 12)] / 2
But the discriminant is negative, which means there are no real values of n^2 where the LHS equals 0. This means that the LHS has no roots and never crosses the X-axis; it is either always positive or always negative. We can see which is the case easily by plugging in 0:
0^4 - 3·0^2 + 3 = 3 > 0
So, with the choice of c=5, our inequality is true for all n; we are free to choose any n0, e.g., n0 = 1 works.
Because there exists a pair c = 5 and n0 = 1 which gives us f(n) = 6n^4 - 3n^2 + 3 > 5n^4 = c·g(n) for all n > n0, we can say that f is Omega(g).
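If you want to sanity-check those constants numerically, here is a small Python sketch (my addition, not from the original answer) sampling the inequality with c = 5 and n0 = 1:

    # Spot-check 6n^4 - 3n^2 + 3 >= 5 * n^4 on sampled n > 1.
    # The quadratic-formula argument above is the actual proof.
    def f(n):
        return 6 * n**4 - 3 * n**2 + 3

    c, n0 = 5, 1
    assert all(f(n) >= c * n**4 for n in range(n0 + 1, 10_000))
    print("f(n) >= 5 * n^4 for all sampled n > 1")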

Big O in Algorithms

I was reading an article and came across the following :
Informally, O(g(n)) can be defined as the set of mathematical functions that contains all functions that don’t “grow faster” than g(n). Thus, all functions below are in the set O(n²):
f(n) = n² + 3n + 2, f(n) = n log(n), f(n) = 3n + 1.
Can anyone please tell me how f(n) = n² + 3n + 2 grows faster than g(n)?
Here is one way to understand it (a bit informal, but I find it more intuitive).
Let L be the limit of f(n)/g(n) as n goes to infinity.
If L is infinity, then f(n) grows faster than g(n) (the numerator overwhelms the denominator).
If L is 0, then f(n) grows slower than g(n) (the denominator overwhelms the numerator).
If L is a finite number, then they have the same (comparable) growth rate.
We can define O(g(n)) as the following set:
O(g(n)) = { f(n) ∶ ∃ c > 0 and n0 > 0 | 0 ≤ f(n) ≤ c ⋅ g(n), ∀n ≥ n0 }
This means O(g(n)) is the set of all functions f(n) that grow no faster than c ⋅ g(n), for some constant c and for n ≥ n0. In order to find n0 and c we use a justification like the following:
n² + 3n + 2 ≤ n² + 3n² + 2n², for n ≥ 1
n² + 3n + 2 ≤ 6n², for c = 6 and n ≥ 1
Now if you just use g(n) = n², then f(n) = n² + 3n + 2 is obviously larger than g(n); but by choosing the value of c correctly, c ⋅ g(n) will dominate f(n) for n ≥ n0.
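Both views are easy to check numerically. The Python sketch below (my addition) computes the ratio f(n)/g(n) for growing n, which approaches the finite constant 1 (comparable growth rates), and spot-checks f(n) ≤ 6n² for sampled n ≥ 1:

    # Ratio test plus a spot-check of the bound with c = 6, n0 = 1.
    def f(n):
        return n**2 + 3 * n + 2

    def g(n):
        return n**2

    for n in (10, 1_000, 100_000):
        print(f"n = {n:>6}: f(n)/g(n) = {f(n) / g(n):.6f}")  # tends to 1

    assert all(f(n) <= 6 * g(n) for n in range(1, 10_000))
    print("f(n) <= 6 * n^2 for all sampled n >= 1")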

Asymptotic Notation: Proving Big Omega, O, and Theta

I have a few asymptotic notation problems I do not entirely grasp.
So when proving asymptotic complexity, I understand the process of finding a constant and the n0 beyond which the notation holds. So, for example:
Prove 7n+4 = Ω(n)
In such a case we would pick a constant c such that it is lower than 7, since this is regarding Big Omega. Picking 6 would result in
7n+4 >= 6n
n+4 >= 0
n >= -4
But since n0 cannot be negative, we pick a positive integer instead, so n0 = 1.
But what about a problem like this:
Prove that n^3 − 91n^2 − 7n − 14 = Ω(n^3).
I picked 1/2 as the constant, reaching
(1/2)n^3 - 91n^2 - 7n - 14 >= 0.
But I am unsure how to continue. Also, consider a problem like this, which I think concerns theta:
Let g(n) = 27n^2 + 18n and let f(n) = 0.5n^2 − 100. Find positive constants n0, c1 and c2 such
that c1f(n) ≤ g(n) ≤ c2f(n) for all n ≥ n0.
In such a case am I performing two separate operations here, one big O comparison and one Big Omega comparison, so that there is a theta relationship, or tight bound? If so, how would I go about that?
To show n^3 − 91n^2 − 7n − 14 is in Ω(n^3), we need to exhibit some numbers n0 and c such that, for all n ≥ n0:
n^3 − 91n^2 − 7n − 14 ≥ c·n^3
You've chosen c = 0.5, so let's go with that. Rearranging gives:
n^3 − 0.5n^3 ≥ 91n^2 + 7n + 14
Multiplying both sides by 2 and simplifying:
182n^2 + 14n + 28 ≤ n^3
For all n ≥ 1, we have:
182n^2 + 14n + 28 ≤ 182n^2 + 14n^2 + 28n^2 = 224n^2
And when n ≥ 224, we have 224n^2 ≤ n^3. Therefore, the choice of n0 = 224 and c = 0.5 demonstrates that the original function is in Ω(n^3).
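As a quick numeric illustration (my addition) of why n0 must be this large: the bound fails for small n, where the -91n^2 term dominates, and holds from n = 224 onwards.

    # Spot-check n^3 - 91n^2 - 7n - 14 >= 0.5 * n^3 with n0 = 224.
    def f(n):
        return n**3 - 91 * n**2 - 7 * n - 14

    c, n0 = 0.5, 224
    print(f(100) >= c * 100**3)  # False: n = 100 is below n0
    assert all(f(n) >= c * n**3 for n in range(n0, 10_000))
    print("f(n) >= 0.5 * n^3 for all sampled n >= 224")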

Is my explanation about big o correct in this case?

I'm trying to explain to my friend why 7n - 2 = O(n). I want to do so based on the definition of big O.
Based on the definition of big O, f(n) = O(g(n)) if:
We can find a real value C and integer value n0 >= 1 such that:
f(n) <= C · g(n) for all values of n >= n0.
In this case, is the following explanation correct?
7n - 2 <= C . n
-2 <= C . n - 7n
-2 <= n (C - 7)
-2 / (C - 7) <= n
if we consider C = 7, mathematically, -2 / (C - 7) is equal to negative infinity, so
n >= (negative infinity)
It means that for all values of n >= (negative infinity) the following holds:
7n - 2 <= 7n
Now we have to pick n0 such that for all n >= n0 and n0 >= 1 the following holds:
7n - 2 <= 7n
Since for all values of n >= (negative infinity) the inequality holds, we can simply take n0 = 1.
You're on the right track here. Fundamentally, though, the logic you're using doesn't work. If you are trying to prove that there exist an n0 and c such that f(n) ≤ cg(n) for all n ≥ n0, then you can't start off by assuming that f(n) ≤ cg(n) because that's ultimately what you're trying to prove!
Instead, see if you can start with the initial expression (7n - 2) and massage it into something upper-bounded by cn. Here's one way to do this: since 7n - 2 ≤ 7n, we can (by inspection) just pick n0 = 0 and c = 7 to see that 7n - 2 ≤ cn for all n ≥ n0.
For a more interesting case, let's try this with 7n + 2:
7n + 2
≤ 7n + 2n (for all n ≥ 1)
= 9n
So by inspection we can pick c = 9 and n0 = 1 and we have that 7n + 2 ≤ cn for all n ≥ n0, so 7n + 2 = O(n).
Notice that at no point in this math did we assume the ultimate inequality, which means we never had to risk a divide-by-zero error.
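The "massage the expression" approach also lends itself to a quick numeric sanity check; this Python sketch (my addition) verifies both worked examples on sample points:

    # 7n - 2 <= 7n for all n >= 0 (c = 7, n0 = 0)
    # 7n + 2 <= 9n for all n >= 1 (c = 9, n0 = 1)
    assert all(7 * n - 2 <= 7 * n for n in range(0, 10_000))
    assert all(7 * n + 2 <= 9 * n for n in range(1, 10_000))
    print("Both (c, n0) choices hold on all sampled n")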

f(n) = log(n)^m is O(n) for all natural numbers m?

A TA told me today that this is true, but I was unable to verify it by googling. This says functions like log(n)^2, log(n)^3, ..., log(n)^m are all O(n).
Is this true?
Claim
The function f(n) = log(n)^m, for any natural number m > 2 (m ∈ ℕ+), is in O(n).
I.e., there exists a pair of positive constants c and n0 such that the following holds:
log(n)^m < c · n, for all n > n0, { m > 2 | m ∈ ℕ+ } (+)
Proof
Assume that (+) does not hold, and denote this assumption as (*).
I.e., given (*), there exists no pair of positive constants c and n0 such that (+) holds for any value of m > 2. Under this assumption, the following holds: for all positive constants c and n0, there exists an n > n0 such that (thanks @Oriol)
log(n)^m ≥ c · n, { m > 2 | m ∈ ℕ+ } (++)
Now, if (++) holds, then the inequality in (++) will also hold after applying any monotonically increasing function to both sides of it. One such function is, conveniently, the log function itself.
Hence, under the assumption that (++) holds, for all positive constants c and n0, there exists an n > n0 such that the following holds:
log(log(n)^m) ≥ log(c · n), { m > 2 | m ∈ ℕ+ }
m · log(log(n)) ≥ log(c · n), { m > 2 | m ∈ ℕ+ } (+++)
However, (+++) is obviously a contradiction: since log(n) dominates log(log(n)) (w.r.t. growth),
we can, for any given value of m > 2, always find a pair of constants c and n0 such that (+++) (and hence (++)) is violated for all n > n0.
Hence, assumption (*) is proven false by contradiction, and hence, (+) holds.
=> for f(n) = log(n)^m, for any finite integer m > 2, it holds that f ∈ O(n).
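As a numeric complement to the contradiction argument (my addition, with illustrative constants): for a fixed m you can hunt for a concrete (c, n0) pair directly. Taking m = 3 and c = 1, a scan of sample points suggests n0 = 94, the last sampled violation of log(n)^3 < n being at n = 93:

    import math

    # Search sampled n for violations of log(n)^m < c * n (m = 3, c = 1).
    # Illustration only; the contradiction proof covers every m.
    m, c = 3, 1
    violations = [n for n in range(2, 100_000) if math.log(n) ** m >= c * n]
    n0 = max(violations) + 1  # last sampled violation is n = 93
    assert all(math.log(n) ** m < c * n for n in range(n0, 100_000))
    print(f"log(n)^{m} < {c}*n for all sampled n >= {n0}")  # n0 = 94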
Yes. If the function is f(n), then m is a parameter and f does not depend on it. In fact, we have a different function f_m for each m.
f_m(n) = log(n)^m
Then it's easy. Given m ∈ ℕ, use L'Hôpital's rule repeatedly:
limsup_{n→∞} f_m(n)/n = limsup_{n→∞} log(n)^m / n
                      = limsup_{n→∞} m · log(n)^(m-1) / n
                      = limsup_{n→∞} m · (m-1) · log(n)^(m-2) / n
                      = … = limsup_{n→∞} m! / n
                      = 0
Therefore, f_m ∈ O(n).
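The limsup can also be eyeballed numerically; this short sketch (my addition) shows log(n)^m / n shrinking toward 0 for m = 3:

    import math

    # log(n)^m / n tends to 0 as n grows (here m = 3), matching the
    # repeated-L'Hôpital computation above.
    m = 3
    for k in (2, 4, 8, 16):
        n = 10**k
        print(f"n = 1e{k:>2}: log(n)^{m}/n = {math.log(n) ** m / n:.3e}")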
Of course, it would be different if we had f(m,n) = log(n)^m. For example, taking m=n,
limsup_{n→∞} f(n,n)/n = limsup_{n→∞} log(n)^n / n = ∞
Then f ∉ O(n).
In many ways it is more intuitive that for any positive integer m we have:
x^m = O(e^x)
This says that exponential growth dominates polynomial growth (which is why exponential time algorithms are bad news in computer programming).
Assuming that this is true, simply take x = log(n) and use the fact that then x tends to infinity if and only if n tends to infinity and that e^x and log(x) are inverses:
log(n)^m = O(e^log(n)) = O(n)
Finally, since for any natural number m, the root function n => n^(1/m) is increasing, we can rewrite the result as
log(n) = O(n^(1/m))
This way of writing it says that log(n) grows slower than any root (square, cube, etc.) of n, which more obviously corresponds to e^n growing faster than any power of n.
On Edit: the above showed that log(n)^m = O(n) follows from the more familiar x^m = O(e^x). To make this a more self-contained proof, we can show the latter somewhat directly.
Start with the Taylor series for e^x:
e^x = 1 + x/1! + x^2/2! + x^3/3! + ... + x^n/n! + ...
This is known to converge for all real numbers x. If a positive integer m is given, let K = (m+1)!. Then, if x > K we have 1/x < 1/(m+1)!, hence
x^m = x^(m+1)/x < x^(m+1)/(m+1)! < e^x
which implies x^m = O(e^x). (The last inequality in the above is true since all terms in the expansion for e^x are strictly positive if x>0 and x^(m+1)/(m+1)! is just one of those terms.)
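The Taylor-series bound is also easy to spot-check; the snippet below (my addition) takes m = 2, so K = (m+1)! = 6, and verifies the chain x^m < x^(m+1)/(m+1)! < e^x on sample points beyond K:

    import math

    # Check x^2 < x^3/3! < e^x for sampled x > K = 6, mirroring the
    # chain of inequalities in the argument above.
    m = 2
    K = math.factorial(m + 1)  # 6
    for x in (K + 1, 10, 50, 100):
        assert x ** m < x ** (m + 1) / math.factorial(m + 1) < math.exp(x)
    print(f"x^{m} < x^{m + 1}/{m + 1}! < e^x holds for all sampled x > {K}")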
