T (1) = c
T (n) = T (n/2) + dn
How would I determine the Big-O of this quickly?
Use repeated back-substitution and find the pattern; an example follows below.
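For instance, here is a quick numerical sanity check (a minimal Python sketch, assuming c and d are constants; the values c = 1 and d = 1 and the use of floor division are my own choices so the recursion terminates):

    def T(n, c=1.0, d=1.0):
        # T(1) = c, T(n) = T(n // 2) + d*n; floor division makes the recursion terminate
        if n <= 1:
            return c
        return T(n // 2, c, d) + d * n

    for k in (10, 15, 20, 25):
        n = 2 ** k
        print(n, T(n) / n)   # the ratio settles near 2*d, consistent with T(n) = O(n)

Back-substituting symbolically gives T(n) = dn + dn/2 + dn/4 + ... + c, a geometric series that sums to roughly 2dn + c, i.e. O(n).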
I'm not entirely sure what dn is, but assuming you mean a constant multiplied by n:
According to Wolfram Alpha, the recurrence equation solution for:
f(n) = f(n / 2) + cn
is:
f(n) = 2c(n - 1) + c1
which would make this O(n).
Well, the recurrence part of the relationship is the T(n/2) part, which is in effect halving the value of n each time.
Thus you will need approximately log2(n) steps to get to the termination condition, hence the overall cost of the algorithm is O(log2 n). You can ignore the dn part as it is a constant-time operation for each step.
Note that as stated, the problem won't necessarily terminate since halving an arbitrary value of n repeatedly is unlikely to exactly hit 1. I suspect that the T(n/2) part should actually read T(floor (n / 2)) or something like that in order to ensure that this terminates.
Use the Master Theorem:
see http://en.wikipedia.org/wiki/Master_theorem
By the way, the asymptotic behaviour of your recurrence is O(n), assuming d is a positive constant (in particular, independent of the problem size n).
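To spell that out: here a = 1, b = 2 and f(n) = dn, so n^(log_b a) = n^0 = 1. Since f(n) = Θ(n) is polynomially larger than that (assuming d is a positive constant), the theorem gives T(n) = Θ(f(n)) = Θ(n), matching the O(n) answers above.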
The following question was on a recent assignment at university. I would have thought the answer would be n^2 + T(n-1), as I thought the n^2 term would make its asymptotic time complexity O(n^2), whereas with T(n/2) + 1 the asymptotic time complexity would be O(log2(n)).
The answers were returned, and it turns out the correct answer is T(n/2) + 1; however, I can't get my head around why this is the case.
Could someone possibly explain to me why that's the worst case time complexity of this algorithm? It's possible my understanding of time complexity is just wrong.
Asymptotic time complexity is about taking n large. In the case of your example, since the question specifies that k is fixed, the only complexity that is relevant is the last one. See the formal definition on Wikipedia.
As n grows to infinity, the recursion that dominates is T(n) = T(n / 2) + 1. You can prove this as well using the formal definition, basically picking x_0 = 10 * k and showing that a finite M can be found using the first two cases. It should be clear that both log(n) and n^2 satisfy the definition, so the tighter bound is the asymptotic complexity.
What does O (f (n)) mean? It means the time is at most c * f (n) for all large enough n, for some unknown and possibly large constant c.
kevmo claimed a complexity of O (log2 n). Well, you can check all the values n ≤ 10k, and let the largest value of T (n) be X. X might be quite large (about 167 k^3 in this case, I think, but it doesn't actually matter). For larger n, the time needed is at most X + log2 (n). Choose c = X, and this is always less than c * log2 (n).
Of course people usually assume that an O (log n) algorithm would be quick, and this one most certainly isn't if, say, k = 10,000. So you learned as well that O notation must be handled with care.
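To make that concrete, here is a small Python sketch (the figures 167 * k^3 and k = 10,000 are taken from the estimate above; the linear-time competitor is purely hypothetical) showing how large n has to be before the "O(log n)" algorithm actually wins:

    import math

    k = 10_000
    X = 167 * k**3            # rough cost of handling every n <= 10k, as estimated above

    def cost_log(n):
        # upper bound for the "O(log n)" algorithm: a huge fixed part plus log2(n)
        return X + math.log2(n)

    def cost_linear(n):
        # a hypothetical competitor that simply takes n steps
        return n

    for n in (10**6, 10**12, 10**15):
        print(n, cost_log(n) < cost_linear(n))   # only True once n exceeds about 1.7e14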
I am following a course on Algorithms and Data Structures.
Today, my professor said the complexity of the following algorithm is 2^n.
I waited until the lesson was over, approached him, and told him I actually believed it was an O(n) algorithm; I had done the computation to prove it and wanted to show it to him, but he continued to say it was not, without giving me any convincing explanation.
The algorithm is recursive, and it has this complexity:
       { 1         if n = 1
T(n) = {
       { 2T(n/2)   otherwise
I computed it to be O(n), this way:
Let's expand T(n):
T(n) = 2 * T(n/2) = 2 * [2 * T(n/(2^2))]
= 2^2 * T(n/(2^2))
= 2^2 * [2 * T(n/(2^3))]
= 2^3 * T(n/(2^3))
= ...
= 2^i * T(n/(2^i)).
We stop when the term inside the T is 1, that is:
n/(2^i) = 1 ==> n = 2^i ==> i = log n
After the substitution, we obtain
T(n) = 2^log n * T(1)
= n * 1
= O(n).
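A quick numerical check of that expansion (a Python sketch; I use floor division and powers of two so the recursion hits the base case exactly):

    def T(n):
        # T(1) = 1, T(n) = 2*T(n/2); floor division so it terminates for any n
        if n == 1:
            return 1
        return 2 * T(n // 2)

    for k in range(1, 21):
        n = 2 ** k
        print(n, T(n), T(n) == n)   # T(n) equals n exactly when n is a power of two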
Since this algorithm came up in a lesson on Merge Sort, I noted that Merge Sort, which is notoriously O(n log n), has the recurrence 2T(n/2) + Θ(n) (obviously bigger than 2T(n/2)), and I asked him why it is that an algorithm with a smaller recurrence would get a higher big-O, because at this point it's counter-intuitive to me. He replied, word for word, "If you think that is counter-intuitive, you have serious problem in your math."
My questions are:
Is there any fallacy in my demonstration?
Wouldn't the last situation be counter-intuitive?
Yes, this is also a vent.
Proof - 1
This recurrence falls into the case of the Master Theorem where log_b(a) is larger than c, with
a = 2;
b = 2; and
c = -∞ (there is no non-recursive cost, so its exponent can be taken as -∞),
and thus log_b(a) = 1, which is bigger than -∞. Therefore the running time is Θ(n^1) = Θ(n).
Proof - 2
Intuitively, you are breaking the problem of size n into 2 problems of size n/2 and the cost to join the result of two sub-problems is 0 (i.e. there is no constant component in the recurrence).
Hence at the bottom-most level you have n problems of cost 1 each, resulting in the running time of n * O(1) which is equal to O(n).
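A small sketch of that intuition (here I split n into floor and ceiling halves, an assumption on my part, so the count works for any n and not only powers of two):

    def leaves(n):
        # number of size-1 subproblems produced by repeatedly splitting n in half
        if n == 1:
            return 1
        return leaves(n // 2) + leaves(n - n // 2)

    for n in (1, 7, 16, 100, 1024):
        print(n, leaves(n))   # always exactly n leaves, each of cost 1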
Edit: Just to complete this answer, I will also address the specific questions you asked.
Is there any fallacy in my demonstration?
No. It is correct.
Wouldn't the last situation be counter-intuitive?
It definitely is counter-intuitive; see Proof-2 above.
You are correct in computing the time complexity of the given relation. If we are measuring the input size in n (which we should), then your professor is wrong in claiming that the time complexity is 2^n.
You should probably discuss it with him and clear any misunderstanding that you might have.
You are clearly correct that a function T(n) which satisfies that recurrence relation is O(n). It is essentially obvious since it says that the complexity of a given problem is twice that of a problem which is half the size. You can't get much more linear than that. For example -- the complexity of searching through a list of 1000 elements with a linear search is twice that of searching through a list with 500 elements.
If your professor is also correct, then perhaps you are incorrect about the complexity satisfying that recurrence. Alternatively, sometimes there is confusion about how the input size is being measured. For example, an integer n is exponential in the number of bits needed to specify it. Brute-force trial division of an integer n is O(sqrt(n)), which is much better than O(n). The reason this doesn't contradict the fact that brute-force factoring is essentially worthless for, say, cracking RSA is that for a 256-bit key the relevant n is around 2^256.
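To illustrate that last point, a minimal trial-division sketch: the loop runs only up to sqrt(n), yet for a 256-bit n that bound is still about 2^128 iterations, so "O(sqrt(n)) in the value of n" is exponential in the number of bits:

    import math

    def smallest_factor(n):
        # brute-force trial division: about sqrt(n) iterations in the *value* of n
        for d in range(2, math.isqrt(n) + 1):
            if n % d == 0:
                return d
        return n   # n is prime

    print(smallest_factor(91))   # 7
    print(2 ** 128)              # rough loop bound for a 256-bit modulus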
I'm not particularly sure about the runtime of the following algorithm:
T(n) = 2T(n/2) + n/logn
I think this would be O(n) by the Master Theorem but I don't know whether n/logn is asymptotically equal to n. Can someone explain it?
n and log n are not asymptotically equal. If you calculate the limit of n / log n as n goes to infinity, you can use L'Hôpital's rule and quickly end up with lim n, which is infinity, thus proving that n is asymptotically bigger than log n.
However, the bigger problem is that, according to a few sources, your recurrence does not satisfy the preconditions for the Master Theorem, so the theorem does not apply. Look at Example 7 there; it is exactly your case.
Consider the limit of (n/log(n))/n as n tends to infinity. For all n > 1, (n/log(n))/n is equal to 1/log(n). 1/log(n) tends to zero as n tends to infinity, not one, so n/log(n) and n are not asymptotically equal.
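A quick numerical illustration of that limit (the sample values of n are arbitrary; I use base-2 logarithms, but the base does not matter):

    import math

    for n in (10, 10**3, 10**6, 10**9):
        print(n, (n / math.log2(n)) / n)   # equals 1/log2(n) and tends to 0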
I have used the Master Theorem to solve recurrence relations. I have gotten one down to Θ(3n^2 - 9n). Does this equal Θ(n^2)? I have another recurrence for which the solution is Θ(2n^3 - 100n^2). In Big-Theta notation do you always use only the largest term? So my second one would be Θ(n^3)? It just seems like 100n^2 would be more important in the second case. So will it matter if I discard it?
Any suggestions?
Yes. Your assumptions are correct. The first one is Θ(n^2) and the second one is Θ(n^3). When you are using Θ notation you keep only the largest term.
In the case of your second recurrence, consider n = 1000: then n^3 = 1,000,000,000, whereas 100n^2 is just 100,000,000. As the value of n increases, n^3 becomes more and more dominant over 100n^2.
For theoretical purposes you don't need to consider the constant, however large it might be. But practical applications might prefer an algorithm with a small constant even if its complexity is higher. For example, it might be better to use an algorithm with complexity 0.01n^3 over one with complexity 10000n^2 if the value of n is not very large.
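As a sketch of that trade-off (the constants 0.01 and 10000 are just the ones used above), the cubic algorithm is cheaper until n reaches about 10^6:

    def cubic_cost(n):
        return 0.01 * n**3

    def quadratic_cost(n):
        return 10000 * n**2

    for n in (10**3, 10**5, 10**6, 10**7):
        print(n, cubic_cost(n) < quadratic_cost(n))   # True until the crossover at n = 10**6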
If we have the function f(n) = 3n^2 - 9n, lower-order terms and constants can be ignored; we consider the higher-order term, because it plays the major role in the growth of the function.
By considering only the higher-order term we can easily find an upper bound. Here is the example:
f(n)= 3n^2-9n
For all n >= 1,
3n^2<=3n^2
and -9n <= n^2
thus, f(n)=3n^2 -9n <= 3n^2 + n^2
<= 4n^2
The upper bound of f(n) is 4n^2; that means that for all n >= 1, the value of f(n) is never greater than 4n^2.
Therefore f(n) = O(n^2), with c = 4 and n0 = 1. Since 3n^2 - 9n >= n^2 once n >= 5, f(n) is also bounded below by n^2, so f(n) = Θ(n^2).
We can find this directly by ignoring the lower-order terms and constants in f(n) = 3n^2 - 9n; the result is the same, Θ(n^2).
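A quick numerical sanity check of those constants (the test range up to 10^5 is arbitrary; the lower-bound constant 1 and the threshold n >= 5 are the extra details added above):

    def f(n):
        return 3 * n**2 - 9 * n

    print(all(f(n) <= 4 * n**2 for n in range(1, 10**5)))   # upper bound with c = 4, n0 = 1
    print(all(f(n) >= n**2 for n in range(5, 10**5)))       # lower bound once n >= 5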
Let's say I have a routine that scans an entire list of n items 3 times, does a sort based on the size, and then bsearches that sorted list n times. The scans are O(n) time, the sort I will call O(n log(n)), and the n bsearches total O(n log(n)). If we add all 3 together, does it just give us the worst case of the 3, the n log(n) value, or do the semantics allow the times to be added?
Pretty sure, now that I type this out that the answer is n log(n), but I might as well confirm now that I have it typed out :)
The sum is indeed the worst of the three for Big-O.
The reason is that your function's time complexity is
(n) => 3n + nlogn + nlogn
which is
(n) => 3n + 2nlogn
This function is bounded above by 3nlogn for sufficiently large n, so it is in O(n log n).
Any constant greater than 2 would work; I just happened to choose 3 because it gives a convenient asymptotic upper bound.
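A quick check of that bound (I use base-2 logarithms here; the inequality 3n + 2n*log2(n) <= 3n*log2(n) holds exactly when log2(n) >= 3, i.e. n >= 8):

    import math

    def total(n):
        return 3 * n + 2 * n * math.log2(n)

    print(all(total(n) <= 3 * n * math.log2(n) for n in range(8, 10**5)))   # True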
You are correct. When n gets really big, the n log(n) dominates 3n.
Yes it will just be the worst case since O-notation is just about asymptotic performance.
This should of course not be taken to mean that adding these extra steps will have no effect on your program's performance. One of the O(n) steps could easily consume a huge portion of your execution time for the given range of n where your program operates.
As Ray said, the answer is indeed O(n log(n)). The interesting part of this question is that it doesn't matter which way you look at it: does adding mean "actual addition" or does it mean "the worst case". Let's prove that these two ways of looking at it produce the same result.
Let f(n) and g(n) be functions, and without loss of generality suppose f is O(g). (Informally, that g is "worse" than f.) Then by definition, there exist constants M and k such that f(n) < M*g(n) whenever n > k. If we look at it in the "worst case" way, we expect that f(n) + g(n) is O(g(n)). Now looking at it in the "actual addition" way, and specializing to the case where n > k, we have f(n) + g(n) < M*g(n) + g(n) = (M+1)*g(n), and so by definition f(n) + g(n) is O(g(n)), as desired.
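For a concrete instance of that argument (the choices f(n) = 3n, g(n) = 2n*log2(n), M = 2 and k = 2 are purely illustrative):

    import math

    def f(n): return 3 * n
    def g(n): return 2 * n * math.log2(n)

    M, k = 2, 2
    # f is O(g): f(n) < M*g(n) for all n > k ...
    print(all(f(n) < M * g(n) for n in range(k + 1, 10**4)))
    # ... and therefore f(n) + g(n) < (M + 1)*g(n) over the same range.
    print(all(f(n) + g(n) < (M + 1) * g(n) for n in range(k + 1, 10**4)))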