Asymptotic Notation: Proving Big Omega, O, and Theta - complexity-theory

I have a few asymptotic notation problems I do not entirely grasp.
When proving asymptotic complexity, I understand the process of finding a constant c and the n0 value for which the notation holds. So, for example:
Prove 7n+4 = Ω(n)
In such a case we would pick a constant c lower than 7, since this is a Big Omega proof. Picking 6 would result in
7n+4 >= 6n
n+4 >= 0
n >= -4
But since n0 cannot be a negative term, we pick a positive integer, so n0 = 1.
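A quick numeric spot-check of these constants (an illustration, not a proof; the scanned range is arbitrary):

```python
# Spot-check that 7n + 4 >= 6n holds from n0 = 1 onward (illustration only).
c, n0 = 6, 1
assert all(7 * n + 4 >= c * n for n in range(n0, 100_000))
print("7n + 4 >= 6n holds for all tested n >= 1")
```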
But what about a problem like this:
Prove that n^3 − 91n^2 − 7n − 14 = Ω(n^3).
I picked 1/2 as the constant, reaching
(1/2)n^3 - 91n^2 - 7n - 14 >= 0.
But I am unsure how to continue. There is also this kind of problem, which I think concerns Theta:
Let g(n) = 27n^2 + 18n and let f(n) = 0.5n^2 − 100. Find positive constants n0, c1 and c2 such
that c1f(n) ≤ g(n) ≤ c2f(n) for all n ≥ n0.
In such a case am I performing two separate operations here, one big O comparison and one Big Omega comparison, so that there is a theta relationship, or tight bound? If so, how would I go about that?

To show n^3 − 91n^2 − 7n − 14 is in Ω(n^3), we need to exhibit some numbers n0 and c such that, for all n ≥ n0:
n^3 − 91n^2 − 7n − 14 ≥ cn^3
You've chosen c = 0.5, so let's go with that. Rearranging gives:
n^3 − 0.5n^3 ≥ 91n^2 + 7n + 14
Multiplying both sides by 2 and simplifying:
182n^2 + 14n + 28 ≤ n^3
For all n ≥ 1, we have:
182n^2 + 14n + 28 ≤ 182n^2 + 14n^2 + 28n^2 = 224n^2
And when n ≥ 224, we have 224n^2 ≤ n^3. Therefore, the choice of n0 = 224 and c = 0.5 demonstrates that the original function is in Ω(n^3).
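A quick numeric spot-check of these constants in Python (an illustration of the claim, not a substitute for the proof; the upper end of the scanned range is arbitrary):

```python
# Check f(n) = n^3 - 91n^2 - 7n - 14 >= c * n^3 for c = 0.5 and all tested n >= n0 = 224.
def f(n):
    return n**3 - 91 * n**2 - 7 * n - 14

c, n0 = 0.5, 224
assert all(f(n) >= c * n**3 for n in range(n0, 10_000))

# For small n the inequality fails, which is why an n0 is needed at all.
print(f(10) >= c * 10**3)  # False
```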

Related

Asymptotic Notation: Finding two constants such that n >= n0

Here is an asymptotic notation problem:
Let g(n) = 27n^2 + 18n and let f(n) = 0.5n^2 − 100. Find positive constants n0, c1 and c2 such that c1f(n) ≤ g(n) ≤ c2f(n) for all n ≥ n0.
Is this solving for theta? Do I prove 27n^2 + 18n = Ω(0.5n^2 − 100) and then prove (27n^2 + 18n) = O(0.5n^2 − 100)?
In that case wouldn't c1 and c2 be 1 and 56 respectively, and n0 would be the higher of the two n0 that I find?
There are infinitely many solutions. We just need to fiddle with algebra to find one.
The first thing to note is that both g and f are positive for all n ≥ 15. In particular, g(15) = 6345 and f(15) = 12.5. (All smaller values of n make f < 0.) This implies n0 = 15 might work, as would any larger value.
Next note g'(n) = 54n + 18 and f'(n) = n.
Since f(15) < g(15) and f'(n) < g'(n) for all n >= 15, choose c1 = 1.
Proof that this is a good choice:
0.5n^2 − 100 ≤ 27n^2 + 18n <=> 26.5n^2 + 18n + 100 ≥ 0
...obviously true for all n≥15.
What about c2? First, we want c2*f(n) to grow at least as fast as g: c2f'(n)≥g'(n), or c2*n ≥ 54n + 18 for n ≥ 15. So choose c2 ≥ 56, which obviously makes this true.
Unfortunately, c2=56 doesn't quite work with n0 = 15. There's the other criterion to meet: c2*f(15)≥g(15). For that, 56 isn't big enough: 56*f(15) is only 700; g(15) is much bigger.
It turns out by substitution in the relation above and a bit more algebra that c2 = 508 does the trick.
Proof:
27n^2 + 18n ≤ 508 * (0.5n^2 − 100)
<=> 27n^2 + 18n ≤ 254n^2 − 50800
<=> 227n^2 - 18n - 50800 ≥ 0
At n=15, this is true by simple substitution. For all bigger values of n, note the lhs derivative 454n - 18 is positive for all n≥15, so the function is also non-decreasing over that domain. That makes the relation true as well.
To summarize, we've shown that n0=15, c1=1, and c2=508 is one solution.
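A quick numeric spot-check of this solution (an illustration, not a proof; the scanned range is arbitrary):

```python
# Check c1 * f(n) <= g(n) <= c2 * f(n) for n0 = 15, c1 = 1, c2 = 508,
# with g(n) = 27n^2 + 18n and f(n) = 0.5n^2 - 100.
def g(n):
    return 27 * n**2 + 18 * n

def f(n):
    return 0.5 * n**2 - 100

c1, c2, n0 = 1, 508, 15
assert all(c1 * f(n) <= g(n) <= c2 * f(n) for n in range(n0, 10_000))

# The upper bound is tight at n0: 508 * f(15) = 6350, just above g(15) = 6345.
print(c2 * f(n0), g(n0))
```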

proving Big-O runtime of an algorithm

I am trying to learn how to prove Big O correctly.
What I am trying to do is find some c and n0 for a given function.
the definition given for Big-O is
Let f(n) and g(n) be functions mapping nonnegative integers to real numbers.
We say that f(n) is O(g(n)) if there is a real constant c > 0 and an integer
constant n0 ≥ 1 such that for all n ≥ n0, f(n) ≤ c g(n).
Given the polynomial (n+1)^5, I need to show that it is O(n^5).
My question is, how do I find such a c and n0 from the definition above, and how do I continue my algebra to show that the bound n^5 holds?
So far, by trying induction, I have
(n+1)^5 = n^5 + 5n^4 + 10n^3 + 10n^2 + 5n + 1
Bounding each term by the n^5 term gives
n^5 + 5n^4 + 10n^3 + 10n^2 + 5n + 1 <= n^5 + 5n^5 + 10n^5 + 10n^5 + 5n^5 + n^5
(n+1)^5 <= 32n^5
You want a constant c such that (n + 1)^5 ≤ c n^5. For that, you do not need induction, only a bit of algebra, and it turns out you actually already found such a c, but missed the n0 in the process. So let's start from the beginning.
Note that c does not need to be tight, it can be way bigger than necessary and will still prove time-complexity. We will use that to our advantage.
We can first develop the left side as you did.
(n + 1)^5 = n^5 + 5n^4 + 10n^3 + 10n^2 + 5n + 1
For n ≥ 1, we have that n, n^2, n^3, n^4 ≤ n^5, and thus:
(n + 1)^5 ≤ (1 + 5 + 10 + 10 + 5 + 1) n^5 = 32n^5
And there you got a c such that (n + 1)^5 ≤ c n^5. That c is 32.
And since we stated above that this holds if n ≥ 1, then we have that n0 = 1.
Generalization
This generalizes to any degree. In general, given the polynomial f(n) = (n + a)^b, you know that there exists a number c, found by summing all the coefficients of the polynomial after expansion. It turns out the exact value of c does not matter, so you do not need to compute it; all that matters is that we proved its existence, and thus (n + a)^b is O(n^b).
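A quick numeric spot-check of the constants found above (an illustration, not a proof; the scanned range is arbitrary):

```python
# Check (n + 1)^5 <= c * n^5 for c = 32 and all tested n >= n0 = 1.
c, n0 = 32, 1
assert all((n + 1)**5 <= c * n**5 for n in range(n0, 100_000))

# The bound is exact at n = 1: (1 + 1)^5 = 32 = 32 * 1^5.
print((1 + 1)**5, c * 1**5)
```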

Proving Big Omega Function

I'm trying to find the n0 (n naught) for a function that is Big Omega of n^3, where c = 2.25.
𝑓(𝑛) = 3𝑛^3 − 39𝑛^2 + 360𝑛 + 20. In order to prove that 𝑓(𝑛) is Ω(𝑛^3), we need constants 𝑐, 𝑛0 > 0 such that 𝑓(𝑛) ≥ 𝑐𝑛^3 for every 𝑛 ≥ 𝑛0
If c=2.25, how do I find the smallest integer that satisfies n0?
My first thought was to plug in n = 1, because n > 0, and if the inequality worked, n = 1 would be the smallest n (and therefore n0). But the inequality has to be satisfied for every n >= n0, and if I plug in, for example, n = 15, the inequality doesn't work.
You can solve this mathematically.
To make sure that I understand what you want, I will summarize what you are asking. You want to find the smallest integer n so that:
3𝑛^3 − 39𝑛^2 + 360𝑛 + 20 ≥ 2.25𝑛^3 (1)
And every integer bigger than n must also satisfy inequality (1).
So here is my solution:
(1) <=> 0.75𝑛^3 − 39𝑛^2 + 360𝑛 + 20 ≥ 0
Let f(n) = 0.75𝑛^3 − 39𝑛^2 + 360𝑛 + 20
f(n) = 0 <=> n1 = -0.05522 or n2 = 12.079 or n3 = 39.976
If n < n1, f(n) < 0 (try this yourself)
If n1 < n < n2, f(n) > 0 (the sign will alternate)
If n2 < n < n3, f(n) < 0 (the sign will alternate, again)
If n > n3, f(n) > 0
So to satisfy your requirements, the minimum value of n0 is 40.
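A brute-force way to locate (not prove) that smallest n0, sketched in Python; the scan limit is an arbitrary choice that is clearly past the largest root:

```python
# Find the last n in a scanned range where f(n) = 3n^3 - 39n^2 + 360n + 20 < 2.25 * n^3;
# the smallest n0 is the next integer. The algebra above says the answer is 40.
def f(n):
    return 3 * n**3 - 39 * n**2 + 360 * n + 20

c = 2.25
last_fail = max(n for n in range(1, 10_000) if f(n) < c * n**3)
print(last_fail + 1)  # 40
```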
Think about it like this. After a certain point, 3n^3 − 39n^2 + 360n + 20 will always be greater than or equal to 2.25n^3, for the simple reason that the 3n^3 term eventually beats out the −39n^2 term. So f(n) will never dip below 2.25n^3 again once n is sufficiently large. You don't have to find the minimum n0; you can just choose an extremely large number for n0, since the question only asks that the statement hold for every n beyond some point. Choose n0 to be, for example, an extremely large number X, and then use an inductive proof where X is the base case.

How is it derived that f(x) = 4x^2 - 5x + 3 is O(x^2)?

Here are the steps that are used to prove the above
|f(x)| = |4x^2 – 5x + 3|
<= |4x^2|+ |- 5x| + |3|
<= 4x^2 + 5x + 3, for all x > 0
<= 4x^2 + 5x^2 + 3x^2, for all x > 1
<= 12x^2, for all x > 1
Hence we conclude that f(x) is O(x^2)
I referred to this, but it does not help.
Can someone explain the above proof step by step?
Why is the absolute value of f(x) taken?
Why and how were all the terms replaced by x^2 terms?
Preparations
We start by loosely stating the definition of a function or algorithm f being in O(g(n)):
If a function f is in O(g(n)), then c · g(n) is an upper bound on f(n), for some non-negative constant c such that f(n) ≤ c · g(n) holds for sufficiently large n (i.e., n ≥ n0 for some constant n0).
Hence, to show that f ∈ O(g(n)), we need to find a set of (non-negative) constants (c, n0) that fulfils
f(n) ≤ c · g(n), for all n ≥ n0, (+)
We note, however, that this set is not unique; the problem of finding the constants (c, n0) such that (+) holds is degenerate. In fact, if any such pair of constants exists, there will exist an infinite amount of different such pairs.
Analysis
Following common convention, we'll analyse your example using the variable name n rather than x.
f(n) = 4n^2 - 5n + 3 (++)
Now, for your example, we may assume, without loss of generality (since we're studying asymptotic complexity: function/algorithm behavior for "large" n) that n > n0 where n0 > 0. This would correspond to the analysis you've shown in your question analyzing absolute values of x. Anyway, given this assumption, the following holds:
f(n) = 4n^2 - 5n + 3 < 4n^2 + 3, for all n > n0
Now let, again without loss of generality, n0 equal 2 (we could choose any value, but let's choose 2 here). For n0 = 2, naturally n^2 > 3 holds for n > n0, which means the following holds:
f(n) = 4n^2 - 5n + 3 < 4n^2 + 3 < 4n^2 + n^2, for all n > n0 = 2
f(n) < 5n^2, for all n > n0 = 2
Now choose c = 5 and let g(n) = n^2:
f(n) < c · g(n), for all n > n0,
with c = 5, n0 = 2, g(n) = n^2
Hence, from (+), we've shown that f as defined in (++) is in O(g(n)) = O(n^2).
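A quick numeric spot-check of these constants (an illustration, not a proof; the scanned range is arbitrary):

```python
# Check f(n) = 4n^2 - 5n + 3 < c * n^2 for c = 5 and all tested n > n0 = 2.
def f(n):
    return 4 * n**2 - 5 * n + 3

c, n0 = 5, 2
assert all(f(n) < c * n**2 for n in range(n0 + 1, 100_000))
print("f(n) < 5n^2 holds for all tested n > 2")
```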

Asymptotic notation big(O) and Big(Omega)

f(n) = 6*2^n + n^2
big(O) = 2^n
big(Omega) = 2^n
In the above, both Big O and Big Omega have the same value. If Big O is an upper bound and Big Omega is a lower bound, shouldn't Big Omega be n^2? Why do both have the same value?
It's true that O and Ω are upper and lower bounds, respectively, but they are more similar to ≤ and ≥ than to < and >. Just like it's possible that, simultaneously a ≥ b and a ≤ b (without contradiction), so can a function be both O and Ω of a different function (in fact, that's one of the ways to define Θ).
Here, for large enough n,
6*2^n + n^2 ≤ 12*2^n, so 6*2^n + n^2 grows at most (up to a multiplicative constant) like 2^n does (it is O of it).
Conversely, 6*2^n + n^2 ≥ 0.1*2^n, so 6*2^n + n^2 grows at least (up to a multiplicative constant) like 2^n does (it is Ω of it).
Note that you don't have to use the same multiplicative constants. The conclusion is that 6*2^n + n^2 = Θ(2^n).
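A small numeric illustration of the two bounds (not a proof; the constants 12 and 0.1 are the ones used above, and the scanned range is arbitrary):

```python
# Check 0.1 * 2^n <= 6 * 2^n + n^2 <= 12 * 2^n over a scanned range.
def f(n):
    return 6 * 2**n + n**2

assert all(0.1 * 2**n <= f(n) <= 12 * 2**n for n in range(0, 200))
print("both bounds hold on the scanned range")
```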
