Generic form: T(n) = aT(n/b) + f(n)
So I must compare n^(log_b a) with f(n).
If n^(log_b a) > f(n), it is case 1 and T(n) = Θ(n^(log_b a)).
If n^(log_b a) < f(n), it is case 2 and T(n) = Θ((n^(log_b a)) * (log_b a)).
Is that correct? Or have I misunderstood something?
And what about case 3? When does it apply?
Master Theorem for Solving Recurrences
Recurrences arise in divide-and-conquer strategies for solving complex problems.
What does it solve?
It solves recurrences of the form T(n) = aT(n/b) + f(n).
a must be greater than or equal to 1. This means the problem is reduced to at least one smaller subproblem; at least one recursive call is made.
b must be greater than 1, which means that at every recursion the problem is reduced to a smaller size. If b were not greater than 1, the subproblems would not be smaller than the original.
f(n) must be positive for sufficiently large values of n.
Picture the recursion tree for this recurrence.
Let's say we have a problem of size n to be solved. At each step, the problem can be divided into a subproblems, each of smaller size, where the size is reduced by a factor of b.
In simple words, this means that a problem of size n can be divided into a subproblems of relatively smaller size n/b.
The recursion tree also shows that, after we have divided the problem enough times, each subproblem becomes so small that it can be solved in constant time.
For the derivation below, take all logarithms to base b.
Let H be the height of the tree; then H = log_b n. The number of leaves = a^(log_b n) = n^(log_b a).
Total work done at Level 1 : f(n)
Total work done at Level 2 : a * f(n/b)
Total work done at Level 3 : a^2 * f(n/b^2)
Total work done at the last level : number of leaves * Θ(1), which is Θ(n^(log_b a))
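Putting the levels together (this sum is implicit in the list above), the total work is

T(n) = f(n) + a*f(n/b) + a^2*f(n/b^2) + ... + a^(H-1)*f(n/b^(H-1)) + Θ(n^(log_b a))

and the three cases below are simply the three possible answers to the question of which part of this sum dominates.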
The three cases of the Master Theorem
Case 1:
Now let us assume that the cost of the operation increases by a significant factor at each level, so that by the time we reach the leaf level the root cost f(n) is polynomially smaller than the leaf cost n^(log_b a). Then the overall running time will be heavily dominated by the cost of the last level. Hence T(n) = Θ(n^(log_b a)).
Case 2:
Let us assume that the cost of the operation on each level is roughly equal. In that case f(n) is roughly n^(log_b a); more generally, f(n) = Θ(n^(log_b a) * (log n)^k) for some k >= 0. The total running time is then the per-level cost times the number of levels.
T(n) = Θ(n^(log_b a) * (log n)^(k+1)). For the common case k = 0 this is Θ(n^(log_b a) * log n), i.e. f(n) times the height log n of the tree.
Note: here k+1 is the exponent of the log factor in T(n).
Case 3:
Let us assume that the cost of the operation decreases by a significant factor at each level, so that the root cost f(n) is polynomially larger than the leaf cost n^(log_b a) (together with the regularity condition a*f(n/b) <= c*f(n) for some constant c < 1). Then the overall running time will be heavily dominated by the cost of the first level. Hence T(n) = Θ(f(n)).
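Not part of the original answer, but as a quick illustration, here is a small Python sketch (the helper name master_theorem is made up, and f(n) is restricted to the shape Θ(n^c * (log n)^k)) that classifies a recurrence into the three cases above:

import math

def master_theorem(a, b, c, k=0):
    """Classify T(n) = a*T(n/b) + Theta(n^c * (log n)^k), with k >= 0.

    A rough sketch only: it handles just this restricted shape of f(n),
    compares floats exactly for the boundary case, and relies on the fact
    that the case-3 regularity condition holds automatically for such f.
    """
    crit = math.log(a, b)                      # the critical exponent log_b(a)
    if c < crit:                               # leaves dominate: Case 1 above
        return f"Theta(n^{crit:g})"
    if c == crit:                              # every level costs about the same: Case 2 above
        return f"Theta(n^{crit:g} * (log n)^{k + 1})"
    # root dominates: Case 3 above, so T(n) = Theta(f(n))
    return f"Theta(n^{c:g} * (log n)^{k})" if k else f"Theta(n^{c:g})"

print(master_theorem(a=2, b=2, c=1))   # merge sort, 2T(n/2) + Theta(n): Theta(n^1 * (log n)^1)
print(master_theorem(a=1, b=2, c=0))   # binary search, T(n/2) + Theta(1): Theta(n^0 * (log n)^1) = Theta(log n)
print(master_theorem(a=3, b=2, c=1))   # Karatsuba, 3T(n/2) + Theta(n): Theta(n^1.58496)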
If you are interested in more detailed reading and examples to practice, visit my blog entry Master Method to Solve Recurrences
I think you have misunderstood it.
If n^(log_b a) > f(n), it is case 1 and T(n) = Θ(n^(log_b a))
Here you should not be worried about f(n); as a result, what you get is T(n) = Θ(n^(log_b a)).
f(n) is part of T(n), and if you get the result T(n), then that value will already include f(n).
So there is no need to consider that part separately.
Let me know if this is not clear.
Related
Assume that the size of the input problem increases with an integer n. Let T(n) be the time complexity of a divide-and-conquer algorithm to solve this problem. Then T(n) satisfies an equation of the form:
T(n) = aT(n/b) + f(n).
Now my question is: how can a and b be unequal?
It seems that they should be equal because the number of recursive calls must be equal to b (size of a sub-problem).
In software, time is often wasted on control operations, like function calls. So usually a > b.
Also, there are situations where the problem calls for just one "recursive call" (which would then be just iteration), for example, binary search. In these cases, a < b.
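For instance, binary search makes a single half-size recursive call, so its recurrence is T(n) = T(n/2) + O(1), i.e. a = 1 and b = 2. A minimal sketch (my own illustration, not from the answer above):

def binary_search(arr, target, lo=0, hi=None):
    """Return an index of target in the sorted list arr, or -1 if absent.

    Each call does O(1) work and makes at most one recursive call on half
    of the range, giving T(n) = T(n/2) + O(1): a = 1, b = 2.
    """
    if hi is None:
        hi = len(arr)
    if lo >= hi:                                  # empty range: not found
        return -1
    mid = (lo + hi) // 2
    if arr[mid] == target:
        return mid
    if arr[mid] < target:                         # search the right half
        return binary_search(arr, target, mid + 1, hi)
    return binary_search(arr, target, lo, mid)    # search the left half

print(binary_search([1, 3, 5, 7, 9, 11], 7))      # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))      # -1

Merge sort, with its two recursive calls on halves, has a = b = 2; Karatsuba multiplication, with three recursive multiplications on half-size numbers, has a = 3 > b = 2.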
I am currently learning about big O notation, but there is a concept that's confusing me. For 8N^2 + 4N + 3 the complexity class is N^2, because that is the fastest-growing term, and for 5N the complexity class is N.
Is it then correct to say that the complexity class of NLogN is N, since N grows faster than LogN?
The problem I'm trying to solve is this: configuration A consists of a fast algorithm that takes 5NLogN operations to sort a list, running on a computer that performs 10^6 operations per second, and configuration B consists of a slow algorithm that takes N^2 operations to sort a list, running on a computer that performs 10^9 operations per second. For smaller arrays configuration 1 is faster, but for larger arrays configuration 2 is better. For what size of array does this transition occur?
What I thought was that if I equated the expressions for the time taken, I could find the N at the transition point; however, that yielded the equation N^2/10^9 = 5NLogN/10^6, which simplifies to N/5000 = LogN, which I cannot solve algebraically.
Thank you
In mathematics, the definition of f = O(g), for two real-valued functions defined on the reals, is that f(n)/g(n) is bounded as n approaches infinity. In other words, there exists a constant A such that, for all sufficiently large n, f(n)/g(n) < A.
In your first example, (8n^2 + 4n + 3)/n^2 = 8 + 4/n + 3/n^2, which is bounded as n approaches infinity (by 15, for example), so 8n^2 + 4n + 3 is O(n^2). On the other hand, n log(n)/n = log(n), which approaches infinity as n approaches infinity, so n log(n) is not O(n). It is, however, O(n^2), because n log(n)/n^2 = log(n)/n, which is bounded (it approaches zero near infinity).
As to your actual problem, remember that if you can't solve an equation symbolically, you can always solve it numerically. The existence of a solution is clear here.
Let's suppose that the base of your logarithm is b, so we are to compare
5N * log(b, N)
with
N^2
5N * log(b, N) = log(b, N^(5N))
N^2 = N^2 * log(b, b) = log(b, b^(N^2))
So we compare
N ^ (5N) with b^(N^2)
Let's compare them and analyze the relative value of N^(5N) / b^(N^2) compared to 1. You will observe that after a certain point it becomes smaller than 1.
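To actually find the crossover numerically, a straightforward scan is enough. A minimal sketch (mine; the helper names are made up, and math.log2 assumes base-2 logarithms; a different base shifts the crossover, e.g. natural logs give a figure near 55,000):

import math

def time_A(n):                     # 5*N*LogN operations on a 10^6 ops/sec machine
    return 5 * n * math.log2(n) / 1e6

def time_B(n):                     # N^2 operations on a 10^9 ops/sec machine
    return n * n / 1e9

# Walk upward until the N^2 configuration stops being the faster one.
n = 2
while time_B(n) <= time_A(n):
    n += 1
print(n)                           # with base-2 logs, roughly 81,600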
Q: Is it correct to say that the complexity class of NLogN is N?
A: No. Here is why we can ignore smaller added terms:
Consider N^2 + 1000000 N
For small values of N, the second term is the one that matters, but as N grows, it stops mattering. Consider the ratio 1000000N / N^2, which shows the relative size of the two terms. It reduces to 1000000/N, which approaches zero as N approaches infinity. Therefore the second term has less and less importance as N grows, literally approaching zero.
It is not just "smaller," it is irrelevant for sufficiently large N.
That is not true for multiplicands. n log n is always significantly bigger than n, by a margin that continues to increase.
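A quick numeric check of both points above (my own sketch; the log base is an arbitrary choice and only changes the ratios by a constant factor):

import math

# Added lower-order terms fade away: (n^2 + 1000000*n) / n^2 approaches 1.
# Multiplied factors do not: (n * log n) / n = log n keeps growing.
for n in [10, 1_000, 100_000, 10_000_000]:
    additive = (n * n + 1_000_000 * n) / (n * n)
    multiplied = (n * math.log2(n)) / n
    print(f"n = {n:>10,}   additive ratio = {additive:10.2f}   multiplied ratio = {multiplied:5.1f}")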
Is it then correct to say that the complexity class of NLogN is N,
since N grows faster than LogN?
No, because N and log(N) are multiplied together and log(N) isn't constant.
N/5000 = LogN
Roughly 55,000 (taking the logarithm as the natural log).
Is it then correct to say that the complexity class of NLogN is N,
since N grows faster than LogN?
No. When you omit something, you should omit a TERM, and NLgN is, as a whole, a single term. By your reasoning we would take N*N*N = (N^2)*N and, since N^2 has the bigger growth rate, omit N. That is completely WRONG: the order is N^3, not N^2. NLgN works in the same manner. You only omit a term when it is added or subtracted.
For example, NLgN + N is O(NLgN), because NLgN grows faster than N.
The problem I'm trying to solve is this: configuration A consists of a
fast algorithm that takes 5NLogN operations to sort a list, running on a
computer that performs 10^6 operations per second, and configuration B
consists of a slow algorithm that takes N^2 operations to sort a list,
running on a computer that performs 10^9 operations per second. For
smaller arrays configuration 1 is faster, but for larger arrays
configuration 2 is better. For what size of array does this transition
occur?
This CANNOT be true. It is the absolute OPPOSITE. For small N values the faster computer with N^2 is better. For very large N the slower computer with NLgN is better.
Where is the crossover point? Well, the second computer is 1000 times faster than the first one, so they will be equal in speed when N^2 = 1000NLgN, which solves to N ≈ 13,750. So for N < 13,750 the N^2 configuration goes faster (since its computer is 1000 times faster), but for N > 13,750 the slower computer will be much faster. Now imagine N = 1,000,000: the faster computer will need about 50 times longer than the slower one, because there N^2 ≈ 50,000 NLgN while its computer is only 1000 times faster.
Note: the calculations were made Big-O style, with constant factors omitted, and the logarithm used is base 2. In algorithm complexity analysis we usually use LgN rather than LogN, where LgN is log of N to base 2 and LogN is log of N to base 10.
However, referring to CLRS (a good book, I recommend reading it), Big O is defined as follows: O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 <= f(n) <= c*g(n) for all n >= n0 }.
It is all about n >= n0. All the rules of Big O notation are valid FOR LARGE VALUES OF N; for small N they are NOT necessarily meaningful. I mean, for N = 5 it is not guaranteed that the Big O bound gives a close approximation of the running time.
I hope this gives a good answer for the question.
Reference: Chapter 3, Section 1, [CLRS] Introduction to Algorithms, 3rd Edition.
I am taking a course on Algorithms and Data Structures.
Today, my professor said the complexity of the following algorithm is 2^n.
I waited till the lesson was over, approached him, and told him I actually believed it was an O(n) algorithm; I had done the computation to prove it and wanted to show it to him, but he continued to say it was not, without giving me any convincing explanation.
The algorithm is recursive, and it has this complexity:
        { 1          if n = 1
T(n) =  {
        { 2T(n/2)    otherwise
I worked it out to be O(n), as follows:
Let's expand T(n)
T(n) = 2T(n/2) = 2 * [2 * T(n/(2^2))]
= 2^2 * T(n/(2^2))
= 2^2 * [2 * T(n/(2^3))]
= 2^3 * T(n/(2^3))
= ...
= 2^i * T(n/(2^i)).
We stop when the term inside the T is 1, that is:
n/(2^i) = 1 ==> n = 2^i ==> i = log n (base 2)
After the substitution, we obtain
T(n) = 2^log n * T(1)
= n * 1
= O(n).
Since this algorithm came up in a lesson on Merge Sort, I noted that Merge Sort, which notoriously is O(n log n), has the recurrence 2T(n/2) + Θ(n) (obviously larger than 2T(n/2)), and I asked him why an algorithm with a lower-complexity recurrence would get a higher big-O, because, at this point, that was counter-intuitive to me. He replied, word for word, "If you think that is counter-intuitive, you have serious problem in your math."
My questions are:
Is there any fallacy in my demonstration?
Wouldn't the last situation be counter-intuitive?
Yes, this is also a vent.
Proof - 1
This recurrence falls into the case of the Master Theorem in which f(n) = Θ(n^c) is polynomially smaller than n^(log_b a), with
a = 2;
b = 2; and,
c = -∞ (there is no non-recursive work at all),
and thus log_b(a) = 1, which is bigger than -∞. Therefore the running time is Θ(n^1) = Θ(n).
Proof - 2
Intuitively, you are breaking the problem of size n into 2 problems of size n/2, and the cost to join the results of the two sub-problems is 0 (i.e. there is no additive term in the recurrence).
Hence at the bottom-most level you have n problems of cost 1 each, resulting in the running time of n * O(1) which is equal to O(n).
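As a sanity check (my own sketch, not part of the proofs above), the recurrence can simply be evaluated for powers of two, where it is exactly defined:

def T(n):
    # T(n) = 1 if n == 1, else 2*T(n/2), exactly as in the question
    return 1 if n == 1 else 2 * T(n // 2)

for n in [1, 2, 4, 64, 1024, 2**20]:
    print(n, T(n))   # prints T(n) == n every time: linear growth, nowhere near 2^n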
Edit: Just to complete this answer I will also add the answers to specific questions asked by you.
Is there any fallacy in my demonstration?
No. It is correct.
Wouldn't the last situation be counter-intuitive?
Definitely it is counter-intuitive. See the Proof-2 above.
You are correct in computing the time complexity of the given relation. If we are measuring the input size by n (which we should), then your professor is wrong in claiming that the time complexity is 2^n.
You should probably discuss it with him and clear any misunderstanding that you might have.
You are clearly correct that a function T(n) which satisfies that recurrence relation is O(n). It is essentially obvious since it says that the complexity of a given problem is twice that of a problem which is half the size. You can't get much more linear than that. For example -- the complexity of searching through a list of 1000 elements with a linear search is twice that of searching through a list with 500 elements.
If your professor is also correct, then perhaps you are incorrect about the complexity satisfying that recurrence. Alternatively, sometimes there is confusion about how the input size is being measured. For example, an integer n is exponential in the number of bits needed to specify it. Brute-force trial division of an integer n is O(sqrt(n)), which is much better than O(n). The reason this doesn't contradict the fact that brute-force factoring is essentially worthless for, e.g., cracking RSA is that for, say, a 256-bit key the relevant n is around 2^256.
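To make the input-size point concrete, here is a minimal trial-division sketch (my own illustration; the function name is hypothetical). It runs in O(sqrt(n)) arithmetic operations, which is nevertheless exponential in the bit-length of n:

def smallest_factor(n):
    """Return the smallest factor of n greater than 1 (n itself if n is prime).

    The loop body runs at most sqrt(n) times, i.e. roughly 2^(bits/2) times
    for an n with `bits` binary digits, which is why trial division is
    hopeless for 256-bit RSA-sized moduli.
    """
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

print(smallest_factor(91))        # 7, since 91 = 7 * 13
print(smallest_factor(104729))    # 104729, which is prime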
I'm not particularly sure about the runtime of the following algorithm:
T(n) = 2T(n/2) + n/logn
I think this would be O(n) by the Master Theorem but I don't know whether n/logn is asymptotically equal to n. Can someone explain it?
n and log n are not asymptotically equal. If you calculate the limit of n / log n as n approaches infinity, you can use L'Hôpital's rule and quickly end up with lim n, which is infinity, thus proving that n is asymptotically bigger than log n.
However, the bigger problem is that, according to a few sources, your recurrence does not satisfy the preconditions of the basic Master Theorem. Look at Example 7 - it is exactly your case.
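To spell out why none of the three cases covers this f(n) (this is my own elaboration of the point above): here n^(log_b a) = n^(log_2 2) = n and f(n) = n/log n. Case 1 would require f(n) = O(n^(1-ε)) for some ε > 0, but (n/log n) / n^(1-ε) = n^ε / log n tends to infinity, so f(n) is not polynomially smaller than n. Case 2 would require f(n) = Θ(n), but (n/log n) / n = 1/log n tends to 0. Case 3 would require f(n) to be polynomially larger than n, which it clearly is not. So the basic Master Theorem simply does not apply to f(n) = n/log n.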
Consider the limit of (n/log(n))/n as n tends to infinity. For all n > 1, (n/log(n))/n is equal to 1/log(n). 1/log(n) tends to zero as n tends to infinity, not one, so n/log(n) and n are not asymptotically equal.
for i = 0 to size(arr)
    for o = i + 1 to size(arr)
        do stuff here
What's the worst-case time complexity of this? It's not N^2, because the inner loop gets shorter by one on every iteration of i. It's not N either; it should be bigger: (N-1) + (N-2) + (N-3) + ... + 1.
It is N ^ 2, since it's the product of two linear complexities.
(There's a reason asymptotic complexity is called asymptotic and not identical...)
See Wikipedia's explanation on the simplifications made.
Think of it as if you are working with an n x n matrix. You are working on approximately half of the elements in the matrix, but O(n^2/2) is the same as O(n^2).
When you want to determine the complexity class of an algorithm, all you need to do is find the fastest-growing term in the complexity function of the algorithm. For example, if you have the complexity function f(n)=n^2-10000*n+400, to find O(f(n)) you just have to find the "strongest" term in the function. Why? Because for n big enough, only that term dictates the behavior of the entire function. Having said that, it is easy to see that both f1(n)=n^2-n-4 and f2(n)=n^2 are in O(n^2). However, for the same input size n, they don't run for the same amount of time.
In your algorithm, if n=size(arr), the do stuff here code will run f(n)=n+(n-1)+(n-2)+...+2+1 times. It is easy to see that f(n) represents a sum of an arithmetic series, which means f(n)=n*(n+1)/2, i.e. f(n)=0.5*n^2+0.5*n. If we assume that do stuff here is O(1), then your algorithm has O(n^2) complexity.
for i = 0 to size(arr)
I assumed that the loop ends when i becomes greater than size(arr), not equal to it. However, if the latter is the case, then f(n)=0.5*n^2-0.5*n, and it is still in O(n^2). Remember that O(1), O(n), O(n^2), ... are complexity classes, and that complexity functions of algorithms describe, for the input size n, how many steps there are in the algorithm.
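A quick empirical check of those counts (my own sketch; it uses the convention above that both loops run while the index is <= size(arr)):

def count_inner_steps(size):
    """Count how many times 'do stuff here' runs in the two nested loops."""
    steps = 0
    for i in range(0, size + 1):              # i = 0 .. size(arr), inclusive
        for o in range(i + 1, size + 1):      # o = i+1 .. size(arr), inclusive
            steps += 1
    return steps

for size in [4, 10, 100]:
    # the measured count matches n*(n+1)/2 with n = size(arr)
    print(size, count_inner_steps(size), size * (size + 1) // 2)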
It's n*(n-1)/2, which is O(n^2).