We learned about big O notation, but I often see T(n) as well. For example,
public static Comparable[] mergeSort(Comparable[] A, int low, int high) {
    if (low < high) {                                  // at least 2 elements?   cost = c
        int mid = (low + high) / 2;                    // cost = d
        Comparable[] A1 = mergeSort(A, low, mid);      // cost = T(n/2) + e
        Comparable[] A2 = mergeSort(A, mid + 1, high); // cost = T(n/2) + f
        return merge(A1, A2);                          // cost = g*n + h
    }
    ....                                               // cost = i
}
I believe c, d, e, ... are meant to be arbitrarily named constants.
What does T(n/2) mean? And how is the T notation related to big O?
This notation refers to the maximum amount of time (or, more precisely, the number of steps) that a function takes to run.
T(n) may be much more specific than big O; for example, say you have a program that, for any input of size n, requires exactly n^2 + n + 1 steps to run. Then:
T(n) = n^2 + n + 1
T(n) = O(n^2)
I've personally never seen this notation before, but I suspect it refers to "big-Theta" (Θ), which is both an asymptotic upper-bound (big-O) and an asymptotic lower bound.
Also related: Big-Omega (Ω) is used to denote just an asymptotic lower-bound (mirroring big-O).
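To tie this back to the first answer's example: if T(n) = n^2 + n + 1, then T(n) = O(n^2) (asymptotic upper bound), T(n) = Omega(n^2) (asymptotic lower bound), and therefore T(n) = Theta(n^2). T(n) is the exact step count; O, Omega and Theta only describe how that count grows as n gets large.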
I'm learning time efficiency of algorithms and have become stuck at trying to analyse recursive algorithms. I currently have an algorithm that just basically traverses a binary search tree and puts each node into an array.
static int placeIntoArray(Node root, Node[] a, int i) {   // Node is the BST node type
    if (root.left != null) {
        i = placeIntoArray(root.left, a, i);
    }
    a[i] = root;
    i++;
    if (root.right != null) {
        i = placeIntoArray(root.right, a, i);
    }
    return i;
}
If I had to guess I'd think it was in the class of O(n), since it's just touching each node and placing it into an array, but I'm not sure how to analyse it properly. Any help would be appreciated.
The time complexity for a tree with n nodes is T(n) = T(number of elements in root.left) + T(number of elements in root.right) + c, where c is a constant. In the two extreme scenarios this becomes T(n) = 2T(n/2) + c (completely balanced), which means T(n) = Theta(n), or T(n) = T(n-1) + T(1) + c (completely unbalanced), which also means T(n) = Theta(n). If you consider the other cases, you will find that T(n) = Theta(n) throughout.
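If you want to convince yourself empirically, here is a minimal sketch (the Node class, the call counter, and the hand-built tree are illustrative choices, not part of the original code) that counts the recursive calls; the count comes out equal to the number of nodes, which is exactly the linear behaviour the recurrence predicts:
class Node {
    Node left, right;
}

public class PlaceIntoArrayDemo {
    static int calls = 0;                                  // counts recursive invocations

    static int placeIntoArray(Node root, Node[] a, int i) {
        calls++;                                           // constant work per node
        if (root.left != null)  i = placeIntoArray(root.left, a, i);
        a[i] = root;
        i++;
        if (root.right != null) i = placeIntoArray(root.right, a, i);
        return i;
    }

    public static void main(String[] args) {
        // a small complete tree with 7 nodes
        Node root = new Node();
        root.left = new Node();       root.right = new Node();
        root.left.left = new Node();  root.left.right = new Node();
        root.right.left = new Node(); root.right.right = new Node();

        int placed = placeIntoArray(root, new Node[7], 0);
        System.out.println("nodes placed = " + placed + ", recursive calls = " + calls);
        // prints: nodes placed = 7, recursive calls = 7
    }
}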
I have a recursive function and I'm trying to work out its time complexity.
Here is the function:
public static int f7(int N) {
    if (N == 1) return 0;
    return 1 + f7(N / 2);
}
First, we come up with a recurrence for this function:
T(1) = 1
T(n) = T(n/2) + 1
This is a recurrence that we can plug into the master theorem, which will give us Θ(log n) as an answer.
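If you prefer not to invoke the master theorem, you can also unroll the recurrence directly (assuming for simplicity that n is a power of 2):
T(n) = T(n/2) + 1 = T(n/4) + 2 = ... = T(n/2^k) + k
so when 2^k = n this is T(1) + log2(n), which is Θ(log n): each call halves N, so only about log2(n) calls are made before the base case is reached.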
Assume that when N=1, the call takes a time units, and when N is a power of 2, it takes b time units, not counting the recursive call.
Then
T(1) = a
T(2^n) = T(2^(n-1)) + b.
Writing S(n) = T(2^n), this can be seen as an ordinary linear recurrence
S(0) = a
S(n) = S(n-1) + b = S(n-2) + 2b = … = S(0) + nb = a + nb,
or
T(N) = a + Lg(N) b
where Lg denotes the base-2 logarithm.
When N is not a power of 2, the time is the same as for the largest power of 2 not exceeding N.
The exact formula for all N is
T(N) = a + [Lg(N)] b.
Brackets denote the floor function.
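As a quick sanity check of the floor(Lg(N)) formula, here is a throwaway sketch (the class and counter names are just for illustration) that counts the calls to f7 and compares them with 1 + floor(Lg(N)):
public class F7Count {
    static int calls;                                     // number of invocations of f7

    static int f7(int N) {
        calls++;
        if (N == 1) return 0;
        return 1 + f7(N / 2);
    }

    public static void main(String[] args) {
        for (int N = 1; N <= 20; N++) {
            calls = 0;
            f7(N);
            int floorLg = 31 - Integer.numberOfLeadingZeros(N);   // exact floor(Lg(N))
            System.out.println("N=" + N + "  calls=" + calls + "  1+floor(Lg N)=" + (1 + floorLg));
        }
        // the two counts agree, matching T(N) = a + floor(Lg(N)) * b
    }
}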
Hi, I am having a tough time showing the running time T(n) for these three recurrences. Assume that T(0) = 0.
1) This one I know is close to Fibonacci, so I think it's close to O(n) time, but I'm having trouble showing that:
T(n) = T(n-1) + T(n-2) +1
2) This one I am stumped on, but I think it's roughly O(log log n):
T(n) = T([sqrt(n)]) + n, for n >= 1, where [sqrt(n)] denotes the floor of sqrt(n).
3) I believe this one is roughly O(n log log n):
T(n) = 2T(n/2) + (n/(log n)) + n.
Thanks for the help in advance.
T(n) = T(n-1) + T(n-2) + 1
Assuming T(0) = 0 and T(1) = a, for some constant a, we notice that T(n) - T(n-1) = T(n-2) + 1. That is, the growth rate of the function is given by the function itself, which suggests this function has exponential growth.
Let T'(n) = T(n) + 1. Then T'(n) = T'(n-1) + T'(n-2), by the above recurrence relation, and we have eliminated the troublesome constant term. T(n) and T'(n) differ only by an additive constant of 1, so assuming they are both non-decreasing (they are), they have the same asymptotic complexity, albeit for different constants n0.
To show T'(n) has asymptotic growth of O(b^n), we need some base cases, then the hypothesis that T'(n) <= c*b^n holds for all n up to, say, k - 1, and then we need to show it for k; that is, we need c*b^(k-2) + c*b^(k-1) <= c*b^k. We can divide through by c*b^(k-2) to simplify this to 1 + b <= b^2. Rearranging, we get b^2 - b - 1 >= 0; the roots are (1 +- sqrt(5))/2, and we must discard the negative one since we cannot use a negative number as the base of our exponential. So for b >= (1+sqrt(5))/2, T'(n) may be O(b^n). A similar argument shows that for b <= (1+sqrt(5))/2, T'(n) may be Omega(b^n). Thus, for b = (1+sqrt(5))/2 only, T'(n) may be Theta(b^n).
Completing the proof by induction that T(n) = O(b^n) is left as an exercise.
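As an informal check rather than a proof, you can compute T(n) directly from the recurrence and watch the ratio of consecutive values approach (1 + sqrt(5))/2, roughly 1.618 (the base values below are arbitrary illustrative choices):
public class FibLikeGrowth {
    public static void main(String[] args) {
        int n = 40;
        double[] T = new double[n + 1];
        T[0] = 0;                               // T(0) = 0 as assumed in the question
        T[1] = 1;                               // T(1) = a, pick a = 1
        for (int i = 2; i <= n; i++) {
            T[i] = T[i - 1] + T[i - 2] + 1;     // the recurrence being analysed
        }
        for (int i = 35; i <= n; i++) {
            System.out.println("T(" + i + ")/T(" + (i - 1) + ") = " + T[i] / T[i - 1]);
        }
        // the printed ratios converge to (1 + sqrt(5))/2, i.e. exponential growth with that base
    }
}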
T(n) = T([sqrt(n)]) + n
Obviously, T(n) is at least linear, assuming the boundary conditions require T(n) to be nonnegative. We might guess that T(n) is Theta(n) and try to prove it. Base case: let T(0) = a and T(1) = b. Then T(2) = b + 2 and T(4) = b + 6. In both cases, a choice of c >= 1.5 will work to make T(n) <= c*n. Suppose that whatever our fixed value of c is works for all n up to and including k. We must show that T([sqrt(k+1)]) + (k+1) <= c*(k+1). We know that T([sqrt(k+1)]) <= c*sqrt(k+1) from the induction hypothesis. So T([sqrt(k+1)]) + (k+1) <= c*sqrt(k+1) + (k+1), and c*sqrt(k+1) + (k+1) <= c*(k+1) can be rewritten as c*x + x^2 <= c*x^2 (with x = sqrt(k+1)); dividing through by x (OK since k > 1) we get c + x <= c*x, and solving this for c we get c >= x/(x-1) = sqrt(k+1)/(sqrt(k+1)-1). This approaches 1 as k grows, so for large enough n, any constant c > 1 will work.
Making this proof totally rigorous by fixing the following points is left as an exercise:
making sure enough base cases are proven so that all assumptions hold
distinguishing the cases where (a) k + 1 is a perfect square (hence [sqrt(k+1)] = sqrt(k+1)) and (b) k + 1 is not a perfect square (hence sqrt(k+1) - 1 < [sqrt(k+1)] < sqrt(k+1)).
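Similarly, an informal numerical check (the base values a = 0 and b = 1 are arbitrary illustrative choices, and the memo table is just a convenience) shows T(n)/n approaching 1, consistent with T(n) = Theta(n):
public class SqrtRecurrence {
    public static void main(String[] args) {
        int n = 1_000_000;
        long[] T = new long[n + 1];
        T[0] = 0;                                    // T(0) = a, pick a = 0
        T[1] = 1;                                    // T(1) = b, pick b = 1
        for (int i = 2; i <= n; i++) {
            int r = (int) Math.floor(Math.sqrt(i));  // [sqrt(i)]
            T[i] = T[r] + i;
        }
        for (int i = 10; i <= n; i *= 10) {
            System.out.println("T(" + i + ") / " + i + " = " + (double) T[i] / i);
        }
        // the ratio tends to 1, consistent with T(n) = Theta(n)
    }
}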
T(n) = 2T(n/2) + (n/(log n)) + n
Here T(n) > 2T(n/2) + n, which we know is the recurrence relation for the running time of Mergesort; by the Master theorem that is Theta(n log n), so we know our complexity is no less than that.
Indeed, by the master theorem: T(n) = 2T(n/2) + (n/(log n)) + n = 2T(n/2) + n(1 + 1/(log n)), so
a = 2
b = 2
f(n) = n(1 + 1/(log n)) is Theta(n) (for n > 2 it is always between n and 2n)
f(n) = Theta(n) = Theta(n^(log_2 2) * (log n)^0)
We're in case 2 of the Master Theorem still, so the asymptotic bound is the same as for Mergesort, Theta(n log n).
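For reference, the version of case 2 being used here is the extended one: if f(n) = Theta(n^(log_b a) * (log n)^k) for some k >= 0, then T(n) = Theta(n^(log_b a) * (log n)^(k+1)); with a = b = 2 and k = 0 this gives Theta(n log n).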
I have this algorithm (golden ratio):
public static float golden(int n) {
    float res;
    if (n == 0) {
        res = 1;
    } else {
        res = (float) (1.0 + (1.0 / golden(n - 1)));
    }
    return res;
}
I suppose the T(n) recurrence involves T(n-1). I think I can get the complexity by following this formula:
T(n) = aT(n - b) + c^n p(n)
and its accompanying case analysis. What are p(n) and c here?
For each n > 0 the call to golden(n) results in a constant number of arithmetic operations and one recursive call, therefore you can write the recurrence for the time complexity function T(n) as
T(n) = T(n-1) + k, n>0
where k is the number of operations in each call. This is the recurrence relation for sequential search.
Now you can apply the formula that you mention in the question. Recall that d in the formula is the degree of the polynomial p(n). Here
a = 1, c = 1, b = 1, and p(n) = k,
therefore d = 0. Applying your formula, you get a = c^b (since 1 = 1^1), so the second case applies, and
T(n) = Theta(n^{d+1}c^n) = Theta(n).
This gives the final answer T(n) = Theta(n).
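You can also see this without the general formula, by unrolling the recurrence: T(n) = T(n-1) + k = T(n-2) + 2k = ... = T(0) + nk, and since T(0) and k are constants this is Theta(n).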
Consider the following function:
int testFunc(int n) {
    if (n < 3) return 0;
    int num = 7;
    for (int j = 1; j <= n; j *= 2) num++;
    for (int k = n; k > 1; k--) num++;
    return testFunc(n / 3) + num;
}
I get that the first loop is O(log n) while the second loop is O(n), which gives a time complexity of O(n) in total. But due to the recursive calls I thought the time complexity would be O(n log n), yet apparently it is only O(n). Can anyone explain why?
The recursive call pretty much gives the following for the complexity (denoting the complexity for input n by T(n)):
T(n) = log(n) + n + T(n/3)
The first observation, as you correctly noted, is that you can ignore the logarithm, as it is dominated by n. Now we are left with T(n) = n + T(n/3). Try expanding this all the way down. We have:
T(n) = n + n/3 + n/9 + ...
You can easily prove that the above sum is always less than 2n. In fact tighter bounds can be proven, but this one is enough to conclude that the overall complexity is O(n).
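Concretely, the expansion is a geometric series with ratio 1/3, so T(n) <= n * (1 + 1/3 + 1/9 + ...) = n / (1 - 1/3) = 1.5n (ignoring the constant work at the bottom of the recursion), which is indeed below 2n and therefore O(n).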
For procedures using a recursive algorithm such as the following:
procedure T( n : size of problem ) defined as:
if n < base_case then exit
Do work of amount f(n) // In this case, the O(n) for loop
T(n/b)
T(n/b)
... a times... // In this case, b = 3, and a = 1
T(n/b)
end procedure
Applying the Master theorem to find the time complexity, the f(n) in this case is O(n) (due to the second for loop, like you said). This makes c = 1, i.e. f(n) = Theta(n^c) with c = 1.
Now, log_b(a) = log_3(1) = 0 < c, making this the 3rd case of the theorem, according to which the time complexity is T(n) = Θ(f(n)) = Θ(n).
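If you want to see the linear bound concretely, here is a rough instrumented version of testFunc (the counter and class name are just for illustration) that tallies the iterations of both loops across all recursive calls; the total stays within a small constant multiple of n:
public class TestFuncCount {
    static long ops = 0;                                  // counts iterations of both loops

    static int testFunc(int n) {
        if (n < 3) return 0;
        int num = 7;
        for (int j = 1; j <= n; j *= 2) { num++; ops++; }
        for (int k = n; k > 1; k--)    { num++; ops++; }
        return testFunc(n / 3) + num;
    }

    public static void main(String[] args) {
        for (int n = 1_000; n <= 1_000_000; n *= 10) {
            ops = 0;
            testFunc(n);
            System.out.println("n=" + n + "  ops=" + ops + "  ops/n=" + (double) ops / n);
        }
        // ops/n settles around 1.5, i.e. the total work is Theta(n)
    }
}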