Determining whether an expression has omega complexity - algorithm

6n^4 − 3n^2 + 3 is Ω(n^4)
Hello, I need to determine whether this statement is true or false.
Any help is appreciated.
Thank you
I am leaning towards true because of the n^4 term; however, the Omega is making me doubt this.
I believe that if it were big O, it would be a true statement.

f is Omega(g) if there exist constants c and n0 such that for all n > n0, f(n) >= c * g(n). For us, we need to evaluate whether there are constants n0 and c such that 6n^4 - 3n^2 + 3 > cn^4 for all n > n0. If we choose c = 5 we get...
6n^4 - 3n^2 + 3 > 5n^4
n^4 - 3n^2 + 3 > 0
Using the quadratic formula we can find values for n^2 where the LHS equals zero:
n^2 = [-b +- sqrt(b^2 - 4ac)] / 2a
= [3 +- sqrt(9 - 12)] / 2
But the discriminant is negative, which means there are no real values of n^2 where the LHS equals 0. This means that the LHS has no roots and never crosses the X-axis; it is either always positive or always negative. We can see which is the case easily by plugging in 0:
0^4 - 3*0^2 + 3 = 3 > 0
So, with the choice of c=5, our inequality is true for all n; we are free to choose any n0, e.g., n0 = 1 works.
Because there exists a pair c=5 and n0=1 which gives us f(n) = 6n^4 - 3n^2 + 3 > 5n^4 = cg(n) for all n > n0, we can say that f is Omega(g).
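If you want to sanity-check the algebra numerically, here is a minimal Python sketch (a finite spot check only, not a proof; the names f and lower_bound are mine):

    # Spot-check f(n) = 6n^4 - 3n^2 + 3 against c*g(n) = 5n^4 for the first few n
    def f(n):
        return 6*n**4 - 3*n**2 + 3

    def lower_bound(n, c=5):
        return c * n**4

    # With c = 5 and n0 = 1, every sampled n should satisfy f(n) >= 5n^4
    assert all(f(n) >= lower_bound(n) for n in range(1, 101))

A finite check like this is only evidence; the discriminant argument above is what establishes the bound for all n.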

Related

Big Oh Notation Finding n0 and c

I was looking at this question:
Prove that 100n + 5 ∈ O(n²) (which is to say, 100n + 5 is upper bounded by n²)
f(n) ≤ c·g(n) for all n ≥ n0
so it becomes 100n + 5 ≤ cn²
The answer was:
n0 ≈ 25.05 (the point where the n² curve intercepts the n curve) and c = 4, so that when n increases above 25.05, no matter what, it will still hold that 100n + 5 ∈ O(n²).
My question is: how do you derive that n0 = 25.05 and c = 4? Is it a guess-and-trial method, or is there a proper way to get that particular answer? Or do you just start from 1 and work your way up to see if it works?
A good approach to this kind of problem is to first fix c (let's take c = 4 in this example), and then all you have to do is figure out n0 using a simple equality:
100n + 5 = 4n^2 <=> 4n^2 - 100n - 5 = 0 <=> n ≈ 25.05 or n ≈ -0.05
The two curves intersect twice, at roughly -0.05 and 25.05. Since you want an n0 after which 100n + 5 is always below 4n^2, the negative root is not the one you want; the last intersection is at 25.05, so n0 = 25.05.
Before fixing c and trying to figure out n0, you could try plugging in big values of n to get an idea of whether it's an upper bound or not.
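Here is a minimal Python sketch of that calculation (illustrative only), fixing c = 4 and solving 4n^2 - 100n - 5 = 0 with the quadratic formula:

    import math

    # Roots of c*n^2 - 100n - 5 = 0 with c fixed at 4
    c = 4
    a, b, k = c, -100.0, -5.0
    disc = math.sqrt(b*b - 4*a*k)
    roots = sorted([(-b - disc) / (2*a), (-b + disc) / (2*a)])
    print(roots)  # roughly [-0.05, 25.05]; the larger root is the n0 we want

Only the larger root matters, since we need the inequality to hold for all n beyond n0.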
There are infinitely many choices for n0 and c that can be used to prove this bound holds. We need to find n0 and c such that for n >= n0, f(n) <= c * g(n). In your case, we need 100n + 5 <= cn^2. We can rearrange this as follows using basic algebra:
cn^2 - 100n - 5 >= 0
We can use the quadratic formula to find the roots:
n1, n2 = [100 +- sqrt(10000 + 20c)]/2c
Because c is positive, we know the square-root term will be greater than 100 once evaluated, and since we are only interested in n > 0, we can discard the smaller of these solutions and focus on this:
n0 = [100 + sqrt(10000 + 20c)]/2c
We can simplify this a bit:
n0 = [100 + sqrt(10000 + 20c)]/2c
= [100 + 2*sqrt(2500 + 5c)]/2c
= [50 + sqrt(2500 + 5c)]/c
At this point, we can choose either a value for c or a value for n0, and solve for the other one. Your example chooses c = 4 and gets the approximate answer n0 ~ 25.05. If we'd prefer to choose n0 directly (say we want n0 = 10) then we calculate as follows:
10 = [50 + sqrt(2500 + 5c)]/c
10c = 50 + sqrt(2500 + 5c)
(10c - 50) = sqrt(2500 + 5c)
(100c^2 - 1000c + 2500) = (2500 + 5c)
100c^2 - 1005c = 0
c(100c - 1005) = 0
c = 0 or c = 1005/100 = 10.05
Because the solution c = 0 is obviously no good, the solution c = 10.05 works for our choice of n0 = 10. You can choose other values of n0 or c and find the corresponding constant in this way.
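As a small sketch (my generalization of the n0 = 10 calculation above, not part of the original answer): squaring n0·c - 50 = sqrt(2500 + 5c) gives n0^2·c^2 - (100·n0 + 5)·c = 0, so the nonzero solution is c = (100·n0 + 5)/n0^2.

    # Given a desired n0, compute the matching c for 100n + 5 <= c*n^2
    def c_for(n0):
        return (100 * n0 + 5) / n0**2

    c = c_for(10)
    print(c)  # 10.05, matching the derivation above

    # At n0 itself the two sides are equal by construction; beyond it the bound holds
    assert all(100*n + 5 <= c * n**2 for n in range(11, 1001))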

Asymptotic Notation: Proving Big Omega, O, and Theta

I have a few asymptotic notation problems I do not entirely grasp.
So when proving asymptotic complexity, I understand the operations of finding a constant and the n0 term of which the notation will be true for. So, for example:
Prove 7n+4 = Ω(n)
In such a case we would pick a constant c such that it is lower than 7, since this is regarding Big Omega. Picking 6 would result in
7n+4 >= 6n
n+4 >= 0
n >= -4
But since n0 cannot be a negative term, we pick a positive integer, so n0 = 1.
But what about a problem like this:
Prove that n^3 − 91n^2 − 7n − 14 = Ω(n^3).
I picked 1/2 as the constant, reaching
(1/2)n^3 - 91n^2 - 7n - 14 >= 0.
But I am unsure how to continue. Also, a problem like this, I think regarding theta:
Let g(n) = 27n^2 + 18n and let f(n) = 0.5n^2 − 100. Find positive constants n0, c1 and c2 such that c1·f(n) ≤ g(n) ≤ c2·f(n) for all n ≥ n0.
In such a case am I performing two separate operations here, one big O comparison and one Big Omega comparison, so that there is a theta relationship, or tight bound? If so, how would I go about that?
To show n^3 − 91n^2 − 7n − 14 is in Ω(n^3), we need to exhibit some numbers n0 and c such that, for all n ≥ n0:
n^3 − 91n^2 − 7n − 14 ≥ cn^3
You've chosen c = 0.5, so let's go with that. Rearranging gives:
n^3 − 0.5n^3 ≥ 91n^2 + 7n + 14
Multiplying both sides by 2 and simplifying:
182n^2 + 14n + 28 ≤ n^3
For all n ≥ 1, we have:
182n^2 + 14n + 28 ≤ 182n^2 + 14n^2 + 28n^2 = 224n^2
And when n ≥ 224, we have 224n^2 ≤ n^3. Therefore, the choice of n0 = 224 and c = 0.5 demonstrates that the original function is in Ω(n^3).
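If it helps, here is a quick numerical spot check in Python (illustrative only; it samples a finite range and does not replace the proof):

    # Check n^3 - 91n^2 - 7n - 14 >= 0.5 * n^3 for a sample of n >= 224
    def f(n):
        return n**3 - 91*n**2 - 7*n - 14

    c, n0 = 0.5, 224
    assert all(f(n) >= c * n**3 for n in range(n0, 5001))

Note that n0 = 224 is deliberately loose; the inequality actually starts holding somewhat earlier, and Ω only requires that some threshold exists.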

How is it derived that f(x) = 4x^2 - 5x + 3 is O(x^2)?

Here are the steps that are used to prove the above
|f(x)| = |4x^2 โ€“ 5x + 3|
<= |4x^2|+ |- 5x| + |3|
<= 4x^2 + 5x + 3, for all x > 0
<= 4x^2 + 5x^2 + 3x^2, for all x > 1
<= 12x^2, for all x > 1
Hence we conclude that f(x) is O(x^2)
I referred to this but it does not help.
Can someone explain the above proof step by step?
Why is the absolute value of f(x) taken?
Why and how were all the terms replaced by x^2 terms?
Preparations
We start by loosely stating the definition of a function or algorithm f being in O(g(n)):
If a function f is in O(g(n)), then c · g(n) is an upper bound on f(n), for some non-negative constant c such that f(n) ≤ c · g(n) holds for sufficiently large n (i.e., n ≥ n0 for some constant n0).
Hence, to show that f ∈ O(g(n)), we need to find a set of (non-negative) constants (c, n0) that fulfils
f(n) ≤ c · g(n), for all n ≥ n0, (+)
We note, however, that this set is not unique; the problem of finding the constants (c, n0) such that (+) holds is degenerate. In fact, if any such pair of constants exists, there will exist infinitely many different such pairs.
Analysis
By common convention, we'll analyse your example using the variable name n rather than x.
f(n) = 4n^2 - 5n + 3 (++)
Now, for your example, we may assume, without loss of generality (since we're studying asymptotic complexity: function/algorithm behavior for "large" n) that n > n0 where n0 > 0. This would correspond to the analysis you've shown in your question analyzing absolute values of x. Anyway, given this assumption, the following holds:
f(n) = 4n^2 - 5n + 3 < 4n^2 + 3, for all n > n0
Now let, again without loss of generality, n0 equal 2 (we could choose any value, but let's choose 2 here). For n0 = 2, naturally n^2 > 3 holds for n > n0, which means the following holds:
f(n) = 4n^2 - 5n + 3 < 4n^2 + 3 < 4n^2 + n^2, for all n > n0 = 2
f(n) < 5n^2, for all n > n0 = 2
Now choose c = 5 and let g(n) = n^2:
f(n) < c ยท g(n), for all n > n0,
with c = 5, n0 = 2, g(n) = n^2
Hence, from (+), we've shown that f as defined in (++) is in O(g(n)) = O(n^2).
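A minimal Python spot check of this conclusion (illustration only, using the constants chosen above):

    # Verify f(n) = 4n^2 - 5n + 3 < 5 * n^2 for a sample of n > n0 = 2
    def f(n):
        return 4*n**2 - 5*n + 3

    c, n0 = 5, 2
    assert all(f(n) < c * n**2 for n in range(n0 + 1, 1001))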

Is my explanation about big o correct in this case?

I'm trying to explain to my friend why 7n - 2 = O(n). I want to do so based on the definition of big O.
Based on the definition of big O, f(n) = O(g(n)) if:
We can find a real value C and integer value n0 >= 1 such that:
f(n) <= C · g(n) for all values of n >= n0.
In this case, is the following explanation correct?
7n - 2 <= C · n
-2 <= C · n - 7n
-2 <= n (C - 7)
-2 / (C - 7) <= n
if we consider C = 7, mathematically, -2 / (C - 7) is equal to negative infinity, so
n >= (negative infinity)
It means that for all values of n >= (negative infinity) the following holds:
7n - 2 <= 7n
Now we have to pick n0 such that for all n >= n0 and n0 >= 1 the following holds:
7n - 2 <= 7n
Since for all values of n >= (negative infinity) the inequality holds, we can simply take n0 = 1.
You're on the right track here. Fundamentally, though, the logic you're using doesn't work. If you are trying to prove that there exist an n0 and c such that f(n) โ‰ค cg(n) for all n โ‰ฅ n0, then you can't start off by assuming that f(n) โ‰ค cg(n) because that's ultimately what you're trying to prove!
Instead, see if you can start with the initial expression (7n - 2) and massage it into something upper-bounded by cn. Here's one way to do this: since 7n - 2 โ‰ค 7n, we can (by inspection) just pick n0 = 0 and c = 7 to see that 7n - 2 โ‰ค cn for all n โ‰ฅ n0.
For a more interesting case, let's try this with 7n + 2:
7n + 2
โ‰ค 7n + 2n (for all n โ‰ฅ 1)
= 9n
So by inspection we can pick c = 9 and n0 = 1 and we have that 7n + 2 โ‰ค cn for all n โ‰ฅ n0, so 7n + 2 = O(n).
Notice that at no point in this math did we assume the ultimate inequality, which means we never had to risk a divide-by-zero error.
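And a tiny Python spot check of the 7n + 2 example (illustrative; the two lines of algebra above are the actual argument):

    # 7n + 2 <= 7n + 2n = 9n for all n >= 1
    c, n0 = 9, 1
    assert all(7*n + 2 <= c * n for n in range(n0, 10001))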

Big Oh - why is this inequality true?

I am reading through Skiena's "Algorithm Design Manual".
The first chapter states a formal definition for Big O notation:
f(n) = O(g(n)) means that c * g(n) is an upper bound on f(n).
i.e. there exists some constant c such that f(n) is always less than or equal to c * g(n) for large enough n. (i.e. n >= n0 for some constant n0)
So that's fine and makes sense.
But then the author goes on to describe the Big O of a particular function: 3n^2 - 100n + 6
He says that O(3n^2 - 100n + 6) is NOT equal to O(n). And his reason is that for any c I choose, c * n is always < 3n^2 when n > c. Which is true, but what about the (-100n + 6) part?
Let's say I choose c = 1 and n = 2.
3n^2 - 100n + 6 = 12 - 200 + 6 = -182
and c * n is 1*2 which is 2. -182 is definitely less than 2, so why does Skiena ignore those terms?
Note the n >= n0 in the definition.
If you pick some c and n0, it has to be true for each n >= n0, not just n0.
So if you have c = 1 and n0 = 2, it also has to be true for n = 1000 for example, which it isn't.
3n^2 - 100n + 6
=> 3(1000)^2 - 100(1000) + 6 = 3 000 000 - 100 000 + 6 = 2 900 006
c · n
=> 1(1000) = 1 000
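To see the failure concretely, here is a short Python sketch (my own illustration) comparing both sides with c = 1 as n grows:

    # With c = 1 and n0 = 2, the bound 3n^2 - 100n + 6 <= c*n must hold for EVERY n >= 2
    def f(n):
        return 3*n**2 - 100*n + 6

    c = 1
    for n in (2, 10, 100, 1000):
        print(n, f(n), c * n, f(n) <= c * n)
    # Holds at n = 2 and n = 10, but already fails by n = 100 and badly at n = 1000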
It's a simplification. 3n^2 is greater than 100n - 6 for every n >= (sqrt(2482) + 50)/3 ≈ 33.273 (please check; it's a simple equation). Thus 3n^2 asymptotically dominates 100n - 6. That's why that part is not worth considering: it does not add any value.
Please note that, according to the definition, you have to find (at least one) c for which c*n is always < 3n^2 - 100n + 6 for every n greater than or equal to some n0 (at least one). Just pick c = 1000 and n0 = 1000 and you will see that it is always true for those c and n0. Because I have found such a c and n0, the statement O(n) < O(3n^2 - 100n + 6) holds true.
But I agree that this simplification might be misleading.
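As a sketch of the crossover calculation mentioned above (assuming Python's standard math module; illustrative only):

    import math

    # Larger root of 3n^2 - 100n + 6 = 0: beyond it, 3n^2 exceeds 100n - 6 for good
    crossover = (100 + math.sqrt(100**2 - 4*3*6)) / (2*3)
    print(crossover)  # roughly 33.27, i.e. (sqrt(2482) + 50) / 3

    # The pair suggested above: c = 1000, n0 = 1000
    c, n0 = 1000, 1000
    assert all(c * n < 3*n**2 - 100*n + 6 for n in range(n0, 5001))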
