Which of these is true and which is false? I can't really decide, except maybe for the first three cases.
3n^5 − 16n + 2 ∈ O(n^5)
3n^5 − 16n + 2 ∈ O(n)
3n^5 − 16n + 2 ∈ O(n^17)
3n^5 − 16n + 2 ∈ Ω(n^5)
3n^5 − 16n + 2 ∈ Θ(n^5)
3n^5 − 16n + 2 ∈ Θ(n)
3n^5 − 16n + 2 ∈ Θ(n^17)
And how would I prove this one:
2^(n+1) ∈ O(3^n / n)
Back to the definitions, with f and g two positive functions:
f ∈ O(g) ⇔ ∃k>0, n₀∈ℕ such that ∀n>n₀: f(n) ≤ k·g(n)
f ∈ Ω(g) ⇔ ∃k>0, n₀∈ℕ such that ∀n>n₀: k·g(n) ≤ f(n)
f ∈ Θ(g) ⇔ ∃k₁,k₂>0, n₀∈ℕ such that ∀n>n₀: k₁·g(n) ≤ f(n) ≤ k₂·g(n)
It's easy to see that f ∈ O(g) and f ∈ Ω(g) together imply f ∈ Θ(g).
Using these definitions it is easy to prove that 1, 3 and 4 are true and that 2, 6 and 7 are false; then 1 and 4 true together imply 5 true.
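For instance, one concrete choice of witnesses: for 1, for all n ≥ 1 we have 3n^5 − 16n + 2 ≤ 3n^5 + 2n^5 = 5n^5, so k = 5 and n₀ = 1 work; for 4, for all n ≥ 2 we have 16n ≤ 2n^5, hence 3n^5 − 16n + 2 ≥ n^5, so k = 1 and n₀ = 2 work.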
For 2^(n+1) ∈ O(3^n / n):
Can you prove that lim 2^(n+1) / (3^n / n) = 0 as n → +∞?
If so, you have proved that for all ε > 0 there exists n₀ such that for all n > n₀ we have 2^(n+1)/(3^n/n) < ε.
For ε = 2, there exists n₀ such that for all n > n₀: 2^(n+1) < 2·3^n/n.
What can you conclude?
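If it helps to make the limit concrete before proving it, here is a minimal Python sketch (numeric evidence only, not a proof) that prints the ratio 2^(n+1)/(3^n/n) for growing n; it visibly heads towards 0:

    # Print the ratio 2^(n+1) / (3^n / n) for a few growing n.
    # Watching it shrink towards 0 is evidence for the limit, not a proof.
    for n in [1, 5, 10, 20, 40]:
        ratio = 2 ** (n + 1) / (3 ** n / n)
        print(n, ratio)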
I was reading an article and came across the following:
Informally, O(g(n)) can be defined as the set of mathematical functions that contains all functions that don’t “grow faster” than g(n). Thus, all functions below are in the set O(n²):
f(n) = n² + 3n + 2, f(n) = n log(n), f(n) = 3n + 1.
Can anyone please tell me how f(n) = n² + 3n + 2 doesn't "grow faster" than g(n) = n²?
Here is one way to understand it (a bit informal, but I find it more intuitive).
Let L be the limit of f(n)/g(n) as n goes to infinity.
If L is infinite, then f(n) grows faster than g(n) (the numerator overwhelms the denominator).
If L is 0, then f(n) grows slower than g(n) (the denominator overwhelms the numerator).
If L is a finite positive number, then they have the same (comparable) growth rate.
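For the example above, L = lim (n² + 3n + 2)/n² = 1 as n goes to infinity, a finite number, so f(n) = n² + 3n + 2 has the same growth rate as n²; in this informal sense it does not "grow faster" than n², which is exactly why it belongs to O(n²).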
We can define O(g(n)) as the following set:
O(g(n)) = { f(n) : ∃ c > 0 and n0 > 0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }
This means O(g(n)) is the set of all functions f(n) that are bounded above by c·g(n) for some constant c and all n ≥ n0. To find n0 and c we use a justification like the following:
n² + 3n + 2 ≤ n² + 3n² + 2n² = 6n², for c = 6 and n ≥ 1
Now if you compare f(n) = n² + 3n + 2 against g(n) = n² directly, f(n) is obviously the larger of the two; but by choosing the value of c correctly, c·g(n) stays above f(n) for all n ≥ n0.
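If you want to sanity-check the pair c = 6, n0 = 1 numerically (evidence only; the inequality above is the actual proof), here is a quick Python sketch:

    # Check n^2 + 3n + 2 <= 6*n^2 for n = 1..1000 (evidence, not a proof).
    assert all(n**2 + 3*n + 2 <= 6 * n**2 for n in range(1, 1001))
    print("holds for n = 1..1000")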
I have a few asymptotic notation problems I do not entirely grasp.
When proving asymptotic complexity, I understand the mechanics of finding a constant c and the term n0 for which the bound holds. So, for example:
Prove 7n+4 = Ω(n)
In such a case we would pick a constant c lower than 7, since this is Big Omega. Picking 6 results in
7n + 4 >= 6n
n + 4 >= 0
n >= -4
But since n0 cannot be negative, we pick a positive integer instead, so n0 = 1.
But what about a problem like this:
Prove that n^3 − 91n^2 − 7n − 14 = Ω(n^3).
I picked 1/2 as the constant, reaching
(1/2)n^3 - 91n^2 - 7n - 14 >= 0
but I am unsure how to continue. There is also a problem like this, which I think concerns theta:
Let g(n) = 27n^2 + 18n and let f(n) = 0.5n^2 − 100. Find positive constants n0, c1 and c2 such
that c1f(n) ≤ g(n) ≤ c2f(n) for all n ≥ n0.
Am I performing two separate steps here, one Big O comparison and one Big Omega comparison, so that together they give a theta relationship, i.e. a tight bound? If so, how would I go about that?
To show n^3 − 91n^2 − 7n − 14 is in Ω(n^3), we need to exhibit some numbers n0 and c such that, for all n ≥ n0:
n^3 − 91n^2 − 7n − 14 ≥ c·n^3
You've chosen c = 0.5, so let's go with that. Rearranging gives:
n^3 − 0.5n^3 ≥ 91n^2 + 7n + 14
Multiplying both sides by 2 and simplifying:
182n^2 + 14n + 28 ≤ n^3
For all n ≥ 1, we have:
182n^2 + 14n + 28 ≤ 182n^2 + 14n^2 + 28n^2 = 224n^2
And when n ≥ 224, we have 224n^2 ≤ n^3. Therefore, the choice of n0 = 224 and c = 0.5 demonstrates that the original function is in Ω(n^3).
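(And regarding the Θ part of your question: yes, that is exactly two arguments of this kind, an upper bound with some c2 and a lower bound with some c1, with n0 taken large enough that both inequalities hold.)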
I am trying to learn how to prove Big O correctly.
What I am trying to do is find some c and n0 for a given function.
The definition given for Big-O is:
Let f(n) and g(n) be functions mapping nonnegative integers to real numbers.
We say that f(n) is O(g(n)) if there is a real constant c > 0 and an integer
constant n0 ≥ 1 such that for all n ≥ n0, f(n) ≤ c g(n).
Given the polynomial (n+1)^5, I need to show that it is O(n^5).
My question is: how do I find such a c and n0 from the definition above, and how do I continue the algebra to arrive at the n^5 bound?
So far, by trying induction, I have
(n+1)^5 = n^5 + 5n^4 + 10n^3 + 10n^2 + 5n + 1
then, bounding every term by the n^5 term:
n^5 + 5n^4 + 10n^3 + 10n^2 + 5n + 1 <= n^5 + 5n^5 + 10n^5 + 10n^5 + 5n^5 + n^5
(n+1)^5 <= 32n^5
You want a constant c such that (n+1)^5 ≤ c·n^5. For that, you do not need induction, only a bit of algebra, and it turns out you actually already found such a c but missed the n0 in the process. So let's start from the beginning.
Note that c does not need to be tight; it can be far bigger than necessary and will still prove the complexity bound. We will use that to our advantage.
We can first expand the left side as you did.
(n+1)^5 = n^5 + 5n^4 + 10n^3 + 10n^2 + 5n + 1
For n ≥ 1, we have n, n^2, n^3, n^4 ≤ n^5, and thus
(n+1)^5 ≤ (1 + 5 + 10 + 10 + 5 + 1)·n^5 = 32n^5
And there you have a c such that (n+1)^5 ≤ c·n^5. That c is 32.
And since we stated above that this holds for n ≥ 1, we have n0 = 1.
Generalization
This generalizes to any degree. In general, given the polynomial f(n) = (n + a)^b, you know that there exists a number c, found by summing all the coefficients of the polynomial after expansion. It turns out the exact value of c does not matter, so you do not need to compute it; all that matters is that we proved its existence, and thus (n + a)^b is O(n^b).
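One way to make the existence of c concrete without expanding anything: for n ≥ 1 and a ≥ 0 we have n + a ≤ n + a·n = (1 + a)·n, so (n + a)^b ≤ (1 + a)^b · n^b. That gives c = (1 + a)^b and n0 = 1 directly; for (n + 1)^5 this is c = 2^5 = 32, which is exactly the sum of the binomial coefficients above.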
I'm familiar with solving recurrences with iteration:
t(1) = c1
t(2) = t(1) + c2 = c1 + c2
t(3) = t(2) + c2 = c1 + 2c2
...
t(n) = c1 + (n-1)c2 = O(n)
But what if I had a recurrence with no base case? How would I solve it using the three methods mentioned in the title?
t(n) = 2t(n/2) + 1
For Master Theorem I know the first step, find a, b, and f(n):
a = 2
b = 2
f(n) = 1
But not where to go from here. I'm at a standstill because I'm not sure how to approach the question.
I know of 2 ways to solve this:
(1) T(n) = 2T(n/2) + 1
(2) T(n/2) = 2T(n/4) + 1
now replace T(n/2) from (2) into (1)
T(n) = 2[2T(n/4) + 1] + 1
= 2^2T(n/4) + 2 + 1
T(n/4) = 2T(n/8) + 1
T(n) = 2^2[2T(n/8) + 1] + 2 + 1
= 2^3T(n/8) + 4 + 2 + 1
You would just keep doing this until you can generalize. Eventually you will spot that:
T(n) = 2^k·T(n/2^k) + (2^(k-1) + ... + 2 + 1)
You want T(1), so set n/2^k = 1 and solve for k; you will find that k = lg n.
Substituting lg n for k, you end up with
T(n) = 2^(lg n)·T(n/2^(lg n)) + (1 - 2^(lg n)) / (1 - 2)
Since 2^(lg n) = n,
T(n) = n·T(1) + n - 1
and with T(1) = 1 this is T(n) = n + n - 1 = 2n - 1, where n is the dominant term.
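To convince yourself of the closed form, here is a minimal Python sketch (assuming the base case T(1) = 1, as above) comparing the recurrence against 2n - 1 for powers of two:

    # Compare T(n) = 2*T(n/2) + 1 with the closed form 2n - 1
    # for n a power of two, assuming the base case T(1) = 1.
    def T(n):
        return 1 if n == 1 else 2 * T(n // 2) + 1

    for k in range(6):
        n = 2 ** k
        print(n, T(n), 2 * n - 1)  # the last two columns match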
For the Master Theorem it's really fast.
Consider T(n) = a·T(n/b) + n^c for n > 1.
There are three cases (note that b is the log base)
(1) if log_b(a) < c, T(n) = Θ(n^c),
(2) if log_b(a) = c, T(n) = Θ(n^c · log n),
(3) if log_b(a) > c, T(n) = Θ(n^(log_b(a))).
In this case a = 2, b = 2, and c = 0 (since n^0 = 1).
A quick check shows case 3, giving
T(n) = Θ(n^(log_2(2)))
and since log_2(2) = 1,
by the Master Theorem this is Θ(n).
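This agrees with the iteration above, which gave T(n) = 2n − 1, indeed Θ(n).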
Apart from the Master Theorem, the Recursion Tree Method and the Iterative Method, there is also the so-called "Substitution Method". Often you will find people talking about the substitution method when in fact they mean the iterative method (especially on YouTube). I guess this stems from the fact that in the iterative method you are also substituting something, namely the (n+1)-th recursive call into the n-th one... The standard reference work on algorithms (CLRS) defines it as follows:
Substitution Method
Guess the form of the solution.
Use mathematical induction to find the constants and show that the solution works.
As an example, let's take your recurrence: T(n) = 2T(ⁿ/₂)+1
We guess that the solution is T(n) ∈ O(n²), so we have to prove that
T(n) ≤ cn² for some constant c.
Also, let's assume that for n=1 you are doing some constant work c.
Given:
T(1) ≤ c
T(n) = 2T(ⁿ/₂)+1
To prove:
∃c > 0, n₀ ∈ ℕ such that ∀n ≥ n₀: T(n) ≤ cn².
Base Case:
n=1: T(1) ≤ c = c·1²
n=2: T(2) = T(1) + T(1) + 1 ≤ c + c + 1 ≤ 4c = c·2² (using T(1) ≤ c; the last step needs c ≥ 1)
Induction Step:
As the inductive hypothesis we assume T(n) ≤ cn² for all positive numbers smaller than n, in particular for (ⁿ/₂).
Therefore T(ⁿ/₂) ≤ c(ⁿ/₂)², and hence
T(n) ≤ 2c(ⁿ/₂)² + 1 ⟵ Here we're substituting c(ⁿ/₂)² for T(ⁿ/₂)
= (¹/₂)cn² + 1
≤ cn² (for c ≥ 2, and all n ∈ ℕ)
So we have shown that there is a constant c such that T(n) ≤ cn² is true for all n ∈ ℕ.
This means exactly T(n) ∈ O(n²). ∎
(for Ω, and hence Θ, the proof is similar).
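Note that O(n²) is a correct but loose bound for this recurrence. The same method also yields the tight bound: with the stronger guess T(n) ≤ cn − 1, the induction step gives T(n) = 2T(ⁿ/₂) + 1 ≤ 2(c(ⁿ/₂) − 1) + 1 = cn − 1, so T(n) ∈ O(n), matching the Θ(n) obtained from the Master Theorem above.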
Here are the steps used to prove that f(x) = 4x^2 − 5x + 3 is O(x^2):
|f(x)| = |4x^2 − 5x + 3|
<= |4x^2| + |−5x| + |3|
<= 4x^2 + 5x + 3, for all x > 0
<= 4x^2 + 5x^2 + 3x^2, for all x > 1
<= 12x^2, for all x > 1
Hence we conclude that f(x) is O(x^2).
I referred to this but it does not help.
Can someone explain the above proof step by step?
Why is the absolute value of f(x) taken?
Why and how was every term replaced by an x^2 term?
Preparations
We start by loosely stating the definition of a function or algorithm f being in O(g(n)):
If a function f is in O(g(n)), then c · g(n) is an upper bound on f(n), for some non-negative constant c such that f(n) ≤ c · g(n) holds, for sufficiently large n (i.e., n ≥ n0 for some constant n0).
Hence, to show that f ∈ O(g(n)), we need to find a pair of (non-negative) constants (c, n0) that fulfils
f(n) ≤ c · g(n), for all n ≥ n0. (+)
We note, however, that this pair is not unique; the problem of finding constants (c, n0) such that (+) holds is underdetermined. In fact, if any such pair of constants exists, there will exist an infinite number of different such pairs.
Analysis
Following common convention, we'll analyse your example using the variable name n rather than x.
f(n) = 4n^2 - 5n + 3 (++)
Now, since we're studying asymptotic complexity (function/algorithm behaviour for "large" n), we may assume, without loss of generality, that n > n0 where n0 > 0. This corresponds to the analysis shown in your question using absolute values of x. Given this assumption, the following holds:
f(n) = 4n^2 - 5n + 3 < 4n^2 + 3, for all n > n0
Now let, again without loss of generality, n0 equal 2 (we could choose any value, but let's choose 2 here). For n0 = 2, naturally n^2 > 3 holds for n > n0, which means the following holds:
f(n) = 4n^2 - 5n + 3 < 4n^2 + 3 < 4n^2 + n^2, for all n > n0 = 2
f(n) < 5n^2, for all n > n0 = 2
Now choose c = 5 and let g(n) = n^2:
f(n) < c · g(n), for all n > n0,
with c = 5, n0 = 2, g(n) = n^2
Hence, from (+), we've shown that f as defined in (++) is in O(g(n)) = O(n^2).
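Note, finally, that (c, n0) = (5, 2) is just one of the infinitely many valid pairs mentioned in the preparations: the chain of inequalities quoted in the question arrives at another one, (c, n0) = (12, 1), and both prove the same membership f ∈ O(n^2).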