Unable to understand O(f(n)) - algorithm

What is the solution of the following recurrence:
T(n) = T(n/4) + T(n/2) + cn², T(1) = c, T(0) = 0
Where c is a positive constant:
T(n) = O(n³)
T(n) = O(n²)
T(n) = O(n² log n)
T(n) = O(n log n)
The correct answer is 2, but I have a doubt. According to the definition, O(f(n)) gives us an upper bound, and O(n²) is just the least such upper bound here. So in my opinion O(n³) and O(n² log n) should also be true.
Let
T(n) = (1/2)n² + 3n
Which of the following statements are true (Check all that apply.)
T(n) = O(n)
T(n) = Ω(n)
T(n) = Θ(n²)
T(n) = O(n³)
Here, the correct answers are 2, 3, and 4.
So, am I understanding the definition incorrectly or am I making some mistake?

Let's try to prove it for the first recurrence using induction.
I'll use this definition of Big O (from CLRS): f(n) = O(g(n)) if there exist positive constants d and n₀ such that 0 ≤ f(n) ≤ d·g(n) for all n ≥ n₀.
Basis Step:
T(1) = c ≤ d·1² for any d ≥ c. Same for T(0), but since T(0) = 0, it doesn't contribute anything to the recurrence and we can choose n = 1 as our base case.
Inductive Step:
Assume T(k) ≤ d·k² for all k < n, now deduce T(n) ≤ d·n². Since n/4 < n and n/2 < n, our assumption implies T(n/4) ≤ d·(n/4)² and T(n/2) ≤ d·(n/2)².
We have
T(n) = T(n/4) + T(n/2) + cn²
for some constant c.
Since
T(n/4) ≤ d·(n/4)² and T(n/2) ≤ d·(n/2)²
holds, substituting the inductively assumed bounds for T(n/4) and T(n/2) yields:
T(n) ≤ d·n²/16 + d·n²/4 + cn² = (5d/16 + c)·n² ≤ d·n² for any constant d ≥ 16c/11.
(If I screwed up somewhere, let me know!)
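As an informal sanity check of the bound (not a replacement for the proof), here is a small Python sketch that evaluates the recurrence directly, using integer division n//4 and n//2 as an approximation of n/4 and n/2, and compares it against d·n² with c = 1 and d = 2c (any d ≥ 16c/11 works per the induction above; the concrete values are arbitrary choices for illustration):

from functools import lru_cache

c = 1.0  # an arbitrary positive constant, chosen only for illustration

@lru_cache(maxsize=None)
def T(n):
    # T(n) = T(n/4) + T(n/2) + c*n^2, with T(1) = c and T(0) = 0;
    # integer division approximates the real-valued recurrence.
    if n == 0:
        return 0.0
    if n == 1:
        return c
    return T(n // 4) + T(n // 2) + c * n * n

d = 2 * c  # d >= 16c/11 suffices; d = 2c is a convenient choice
for n in (1, 10, 100, 1000, 10000):
    print(n, T(n), d * n * n, T(n) <= d * n * n)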
So in my opinion O(n³) and O(n² log n) should also be true.
Anything that is O(n²) is also O(n² log n) and O(n³), so yes, you are entirely correct. You can prove it correct as done above. Informally, however, people often use O interchangeably with a tight upper bound. It's imprecise, but customary. If you got those questions from an institution of higher learning, that's problematic, of course.
So, am I understanding the definition incorrectly or am I making some mistake?
You are basically correct. The real world isn't about formal correctness, though, so beware of informalisms and know better. As an aside, many people also treat the Landau symbols O, Θ, and Ω as if they were the same, even though they absolutely aren't.
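To make this concrete for the second example, here is one valid choice of witness constants (the particular constants are just an illustration): for T(n) = (1/2)n² + 3n we have T(n) ≤ n² for all n ≥ 6, so T(n) = O(n²) and hence also O(n² log n) and O(n³); and T(n) ≥ (1/2)n² ≥ (1/2)n for all n ≥ 1, so T(n) = Ω(n²) and hence also Ω(n). Being both O(n²) and Ω(n²) gives T(n) = Θ(n²). That is exactly why options 2, 3, and 4 are marked correct, while T(n) = O(n) fails: (1/2)n² + 3n ≤ c·n cannot hold for all large n for any fixed constant c.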


Dropping the less significant terms in the middle of calculating time complexity?

We know that for an algorithm with a time complexity of, let's say, T(n) = n^2 + n + 1, we can drop the less significant terms and say that it has a worst case of O(n^2).
What about when we're in the middle of calculating the time complexity of an algorithm, such as T(n) = 2T(n/2) + n + log(n)? Can we just drop the less significant terms and say T(n) = 2T(n/2) + n = O(n log(n))?
In this case, yes, you can safely discard the dominated (log n) term. In general, you can do this any time you only need the asymptotic behaviour rather than the exact formula.
When you apply the Master theorem to solve a recurrence relation like
T(n) = a T(n/b) + f(n)
asymptotically, then you don't need an exact formula for f(n), just the asymptotic behaviour, because that's how the Master theorem works.
In your example, a = 2, b = 2, so the critical exponent is c = 1. Then the Master theorem tells us that T(n) is in Θ(n log n) because f(n) = n + log n, which is in Θ(n^c) = Θ(n).
We would have reached the same conclusion using f(n) = n, because that's also in Θ(n). Applying the theorem only requires knowing the asymptotic behaviour of f(n), so in this context it's safe to discard dominated terms which don't affect f(n)'s asymptotic behaviour.
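If you want to convince yourself numerically, here is a small Python sketch (the base case T(1) = 1 and the sample sizes are arbitrary assumptions) that evaluates the recurrence with and without the log(n) term and compares both to n·log n; both ratios settle toward a constant, and the gap between them shrinks as n grows:

from functools import lru_cache
from math import log2

@lru_cache(maxsize=None)
def T_full(n):
    # T(n) = 2*T(n/2) + n + log(n), using integer halving and T(1) = 1
    if n <= 1:
        return 1.0
    return 2 * T_full(n // 2) + n + log2(n)

@lru_cache(maxsize=None)
def T_simple(n):
    # T(n) = 2*T(n/2) + n, same base case
    if n <= 1:
        return 1.0
    return 2 * T_simple(n // 2) + n

for n in (2**10, 2**15, 2**20):
    ref = n * log2(n)
    print(n, round(T_full(n) / ref, 3), round(T_simple(n) / ref, 3))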
First of all, you need to understand that T(n) = n^2 + n + 1 is a closed-form expression; in simple terms, that means you can plug in a value for n and get the value of the whole expression.
On the other hand, T(n) = 2T(n/2) + n + log(n) is a recurrence relation; the expression is defined recursively, and to get a closed-form expression you have to solve the recurrence.
Now to answer your question: in general we drop lower-order terms and coefficients when we can clearly see the highest-order term, which in T(n) = n^2 + n + 1 is n^2. But in a recurrence relation there is no such highest-order term, because it is not a closed-form expression.
One thing to observe, though, is that the highest-order term in the closed-form solution of a recurrence is roughly the depth of the recursion tree multiplied by the highest-order term of the recurrence itself. In your case that is depthOf(2T(n/2)) * n, which works out to roughly log(n) * n, so in terms of big-O notation you can say it is O(n log n).
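Here is a short Python sketch of that recursion-tree view for T(n) = 2T(n/2) + n specifically (the n-work-per-level pattern below is a property of this particular recurrence, not a general rule):

from math import log2

def recursion_tree_estimate(n):
    # Each level of the tree for T(n) = 2T(n/2) + n does about n total work,
    # and there are about log2(n) levels, so the sum is about n*log2(n).
    total, level_size, count = 0, n, 1
    while level_size > 1:
        total += count * level_size   # 'count' subproblems on this level, each doing ~level_size work
        level_size //= 2
        count *= 2
    return total

n = 2**16
print(recursion_tree_estimate(n), n * log2(n))  # the two values agree for powers of two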

How to solve equations using asymptotic notations?

I'm stuck on whether or not the asymptotic statements (options 1-5) are correct.
The big-O rule I got (from a YouTube video) was that O(f(n)) is the set of all functions with a smaller or the same order of growth as f(n), which means that option 2 would be correct because the leading term of t(n) has the same order of growth as n^3.
The little-o rule I got was that o(f(n)) is the set of all functions with a strictly smaller rate of growth than f(n), which means that option 1 is correct because the leading term n^3 grows more slowly than n^4.
How would I solve this for the rest (Omega, Theta, and little-omega)? I have trouble finding the explanation or rule for those.
Given t(n) = 53n^3+ 32n^2+ 28, which of the following is(are) correct
1) t(n) = o(n^4) (Correct?)
2) t(n) = O(n^3) (Correct?)
3) t(n) = Θ(n^4)
4) t(n) = Ω(n^3) (Correct?)
5) t(n) = ω(n^2)
Your understanding of O and o is correct.
Roughly speaking, Omega and omega are sort of the opposite: they are bounds from below. So the growth of t(n) must be strictly larger [respectively, larger or equal] than that of f(n) for t(n) to be in omega(f(n)) [respectively, Omega(f(n))].
Theta means O and Omega at the same time.
So 4 and 5 are correct and 3 is false.
The mathematically exact definitions are more involved; see for example https://en.wikipedia.org/wiki/Big_O_notation
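For reference, stated with witness constants (see the Wikipedia article above for the fully formal versions): t(n) = Ω(f(n)) means there are constants c > 0 and n₀ such that t(n) ≥ c·f(n) for all n ≥ n₀; t(n) = ω(f(n)) means that for every constant c > 0 we have t(n) ≥ c·f(n) for all sufficiently large n; and t(n) = Θ(f(n)) means t(n) is both O(f(n)) and Ω(f(n)).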
Given t(n) = 53n^3+ 32n^2+ 28, which of the following is(are) correct
1) t(n) = o(n^4)
==> Correct, since the leading term n^3 grows strictly more slowly than n^4.
2) t(n) = O(n^3) (Correct?)
==> Correct: take a large enough constant C.
3) t(n) = Θ(n^4)
==> False, because the Ω(n^4) part does not hold here.
4) t(n) = Ω(n^3)
==> Correct.
5) t(n) = ω(n^2)
==> Correct, since n^2 grows strictly more slowly than the leading term n^3.
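A quick way to double-check each option is to look at the ratio t(n)/f(n) as n grows: if it tends to 0, t(n) is o(f(n)); if it tends to a finite nonzero constant, t(n) is Θ(f(n)); if it tends to infinity, t(n) is ω(f(n)). A small Python sketch (the sample values of n are arbitrary):

def t(n):
    return 53 * n**3 + 32 * n**2 + 28

for name, f in [("n^4", lambda n: n**4),
                ("n^3", lambda n: n**3),
                ("n^2", lambda n: n**2)]:
    ratios = [t(n) / f(n) for n in (10, 1000, 100000)]
    print(name, [round(r, 5) for r in ratios])
    # n^4: ratios shrink toward 0   -> o(n^4)
    # n^3: ratios approach 53       -> Theta(n^3), hence O(n^3) and Omega(n^3)
    # n^2: ratios blow up           -> omega(n^2)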

Master Theorem confusion with the three cases

I know that we can apply the Master Theorem to find the running time of a divide and conquer algorithm, when the recurrence relation has the form of:
T(n) = a*T(n/b) + f(n)
We know the following :
a is the number of subproblems the algorithm divides the original problem into
b is the factor by which the problem shrinks, i.e. each sub-problem has size n/b
and finally, f(n) encompasses the cost of dividing the problem and combining the results of the subproblems.
Then we find something (I will come back to the term "something")
and we have 3 cases to check.
The case that f(n) = O(n^(log_b(a) − ε)) for some ε > 0; then T(n) is Θ(n^(log_b a)).
The case that f(n) = Θ(n^(log_b a)); then T(n) is Θ(n^(log_b a) · log n).
If n^(log_b(a) + ε) = O(f(n)) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for some constant
c < 1 and almost all n, then T(n) = Θ(f(n)).
All fine. Now, recalling the term "something": how can we use general examples (i.e. ones with variables and not actual numbers) to decide which case the algorithm is in?
In instance. Consider the following:
T(n) = 8T(n/2) + n
So a = 8, b = 2 and f(n) = n
How do I proceed then? How can I decide which case it is? And since f(n) is compared against some big-O expression, how are these two things comparable?
The above is just an example to show where I get stuck, so the question is about the general approach.
Thanks
As CLRS suggests, the basic idea is comparing f(n) with n^(log_b a), i.e. n raised to the power of log of a to the base b. In your hypothetical example, we have:
f(n) = n
n^(log_b a) = n^(log_2 8) = n^3, i.e. n cubed, as your recurrence yields 8 problems of half the size at every step.
Thus, in this case, n^(log_b a) is the larger one, because f(n) = n is O(n^(3 − ε)) (take ε = 1, say), and the solution is T(n) = Θ(n^3).
Clearly, the number of subproblems vastly outpaces the work (linear, f(n) = n) you are doing for each subproblem. Thus, intuition tells, and the master theorem verifies, that it is the n^(log_b a) term that dominates the recurrence.
There is a subtle technicality: the master theorem requires that f(n) be not only smaller than n^(log_b a) O-wise, but smaller polynomially, i.e. by a factor of n^ε for some ε > 0.
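Here is a small Python sketch of that comparison (purely illustrative; the base case T(1) = 1 and the sample sizes are assumptions): it first compares the two exponents, then evaluates the recurrence to see the Θ(n^3) behaviour emerge:

from functools import lru_cache
from math import log

a, b = 8, 2
critical = log(a, b)   # log_b(a) = 3, so n^(log_b a) = n^3, versus f(n) = n^1: case 1 applies
print("critical exponent:", critical)

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 8*T(n/2) + n, with an assumed base case T(1) = 1
    if n <= 1:
        return 1.0
    return 8 * T(n // 2) + n

for n in (2**5, 2**10, 2**15):
    print(n, T(n) / n**3)   # the ratio settles near a constant, consistent with Theta(n^3)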

If f(n) contains some term of log(n), is it possible to solve this by the Master Method?

The Master Method is a direct way to get the solution. It works only for the following type of recurrence, or for recurrences that can be transformed into this type:
T(n) = a·T(n/b) + f(n) where a ≥ 1, b > 1, and f(n) = Θ(n^c).
There are three cases:
If c < log_b(a) then T(n) = Θ(n^(log_b a)).
If c = log_b(a) then T(n) = Θ(n^c · log(n)).
If c > log_b(a) then T(n) = Θ(f(n)).
If f(n) contains some log(n) term, is it still possible to solve the recurrence with the Master Method? For example, in
T(n) = 4T(n/2) + n^2·log(n)
is the Master Theorem applicable or not?
It is not really possible to tell directly whether or not the Master Method works for some logarithmic function; it depends on the specific recurrence you're trying to solve. It all comes down to how f grows in comparison to n^(log_b a).
In the example given by JPC (where T(n) = 4T(n/2) + log(n)), it is indeed possible. However, also consider the example T(n) = 2T(n/5) + log(n). In this recurrence it is harder to determine whether n^(log_5 2) grows faster than log(n). If the logarithmic function f(n) gets more complex (e.g. log^3(n/2)), it becomes even harder.
In short, it may be hard to tell how a logarithmic function compares to a power of n when the exponent is less than 1 (for exponents ≥ 1, the power of n is clearly the faster one). If it doesn't seem to work out for you, you'll have to use other techniques to solve the recurrence.
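For the specific example T(n) = 4T(n/2) + n^2·log(n): here a = 4, b = 2, so n^(log_b a) = n^2, and f(n) = n^2·log(n) is neither polynomially smaller nor polynomially larger than n^2, so none of the three basic cases applies. The extended version of case 2 (if f(n) = Θ(n^(log_b a) · log^k(n)) for some k ≥ 0, then T(n) = Θ(n^(log_b a) · log^(k+1)(n))) does apply and gives T(n) = Θ(n^2 · log^2(n)). A small Python check of that claim (the base case T(1) = 1 is an assumption):

from functools import lru_cache
from math import log2

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 4*T(n/2) + n^2 * log(n), with an assumed base case T(1) = 1
    if n <= 1:
        return 1.0
    return 4 * T(n // 2) + n * n * log2(n)

for n in (2**8, 2**12, 2**16):
    print(n, T(n) / (n * n * log2(n) ** 2))   # the ratio approaches a constant (about 1/2)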

Asymptotic Complexity of Logarithms and Powers

So, clearly, log(n) is O(n). But what about (log(n))^2? What about sqrt(n) versus log(n): which bounds which?
There's a family of comparisons like this:
nᵃ (vs.) (log(n))ᵇ
I run into these comparisons a lot, and I've never come up with a good way to solve them. Hints for tactics for solving the general case?
[EDIT: I'm not talking about the computational complexity of calculating the values of these functions. I'm talking about the functions themselves. E.g., f(n) = n is an upper bound on g(n) = log(n) because g(n) ≤ c·f(n) for c = 1 and all n ≥ n₀ = 1.]
log(n)^a is always O(n^b), for any positive constants a, b.
Are you looking for a proof? All such problems can be reduced to seeing that log(n) is O(n), by the following trick:
log(n)^a = O(n^b) is equivalent to:
log(n) = O(n^{b/a}), since raising to the 1/a power is an increasing function.
This is equivalent to
log(m^{a/b}) = O(m), by setting m = n^{b/a}.
This is equivalent to log(m) = O(m), since log(m^{a/b}) = (a/b)*log(m).
You can prove that log(n) = O(n) by induction, focusing on the case where n is a power of 2.
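A quick numeric illustration of the claim (the exponents a = 3 and b = 0.5 are arbitrary; note that the crossover only shows up once n gets large enough):

from math import log

a, b = 3.0, 0.5   # compare (log n)^a against n^b
for n in (10, 10**3, 10**6, 10**9, 10**12):
    print(n, log(n) ** a, n ** b)   # (log n)^3 is larger at first, but n^0.5 overtakes it between 10^6 and 10^9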
log n -- O(log n)
sqrt n -- O(sqrt n)
n^2 -- O(n^2)
(log n)^2 -- O((log n)^2)
n^a versus (log(n))^b
You need to bring them to either the same base or the same power. So use your math to rewrite n^a in terms of log(n), or to rewrite (log(n))^b as a power of n, and then compare. There is no general case.
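One concrete way to do that rewriting: set m = log(n) (natural log, say), so n = e^m. Then n^a = e^(a·m) and (log(n))^b = m^b, so comparing n^a with (log(n))^b is the same as comparing the exponential e^(a·m) with the polynomial m^b, and for any constants a, b > 0 the exponential side eventually dominates. This matches the log(n)^a = O(n^b) claim in the earlier answer.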
I run into these comparisons a lot (...)
Hints for tactics for solving the general case?
Since you ask about the general case and run into such questions a lot, here is what I recommend:
Use the limit characterization of big-O notation. Once you know that
f(n) = O(g(n)) if lim (n → +∞) f(n)/g(n) exists and is not +∞ (more precisely, iff lim sup of f(n)/g(n) is finite),
you can use a computer algebra system, for example the open-source Maxima; see the Maxima documentation about limits.
For more detailed info and an example, check out THIS answer.
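The answer mentions Maxima; the same limit check can be done in Python with the sympy library (shown here purely as an illustration of the technique, reusing the t(n) from the earlier question as the example function):

import sympy as sp

n = sp.symbols('n', positive=True)
f = 53*n**3 + 32*n**2 + 28
for g in (n**4, n**3, n**2):
    ratio_limit = sp.limit(f / g, n, sp.oo)
    print(g, ratio_limit)   # 0 -> little-o, finite nonzero -> Theta, oo -> little-omega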

Resources