int opt(int i, int j)
{
    if (i == m)
        return 2 * (n - j);
    else if (j == n)
        return 2 * (m - i);
    else {
        int penalty;
        if (x[i] == x[j])
            penalty = 0;
        else
            penalty = 1;
        return min(opt(i + 1, j + 1) + penalty,
                   min(opt(i + 1, j) + 2, opt(i, j + 1) + 2));
    }
}
Why is the complexity of this algorithm 3^n?
Analyze the time complexity of Algorithm opt.
You should specify how you are calling this function first.
Regarding the Big O analysis, you can get it by drawing the recursion tree. Do it for a few small values of n and you will notice that the height of the tree is n. Now notice that each instance of the function calls the same function 3 more times, so the tree expands by a factor of 3 at every level. Hence your complexity is O(3^n).
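One quick sanity check is to instrument the recursion with a call counter and watch how fast it grows. Here is a minimal sketch in Java; the globals, the sample string and the counter are made up purely for illustration, assuming the plain exponential version of opt above:
public class OptCallCount {
    static int m, n;           // input lengths
    static char[] x;           // both indices read from the same array, as in the question
    static long calls = 0;     // how many times opt() is invoked

    static int opt(int i, int j) {
        calls++;
        if (i == m) return 2 * (n - j);
        if (j == n) return 2 * (m - i);
        int penalty = (x[i] == x[j]) ? 0 : 1;
        return Math.min(opt(i + 1, j + 1) + penalty,
               Math.min(opt(i + 1, j) + 2, opt(i, j + 1) + 2));
    }

    public static void main(String[] args) {
        for (int size = 2; size <= 8; size += 2) {
            x = "abacabad".substring(0, size).toCharArray();
            m = n = size;
            calls = 0;
            opt(0, 0);
            // every non-base call spawns three further calls, so the count explodes exponentially
            System.out.println("size " + size + " -> " + calls + " calls");
        }
    }
}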
Bonus: Analogy with Fibonacci
Check the basic (without memoization) recursive version of the Fibonacci algorithm and you will see a similar structure, except that 2 calls are made each time instead of 3, and hence the complexity is O(2^n).
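For comparison, a minimal sketch of that naive Fibonacci in Java (no memoization):
// Naive recursive Fibonacci: every call makes two further calls,
// giving a binary recursion tree of height n and O(2^n) running time.
static long fib(int n) {
    if (n <= 1) return n;
    return fib(n - 1) + fib(n - 2);
}
Adding memoization collapses the tree to one call per distinct argument, which is what brings the cost down from exponential to linear.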
Related
I have the following code. What time complexity does it have?
I have tried to write a recurrence relation for it, but I can't understand when the algorithm will add 1 to n and when it will divide n by 4.
void T(int n) {
    for (int i = 0; i < n; i++);   // empty body, but still n iterations
    if (n == 1 || n == 0)
        return;
    else if (n % 2 == 1)
        T(n + 1);
    else if (n % 2 == 0)
        T(n / 4);
}
You can view it like this: you always divide by four, and only when n is odd do you add 1 to n before dividing. So you should count how many times 1 gets added. If there are no increments, you have log_4(n) recursive calls. Let's assume that you always have to add 1 before dividing. Then you can rewrite it like this:
void T(int n) {
    for (int i = 0; i < n; i++);   // empty body, but still n iterations
    if (n == 1 || n == 0)
        return;
    else if (n % 2 == 0)
        T(n / 4 + 1);
}
But n/4 + 1 < n/2, so even in this pessimistic rewrite the argument at least halves on every call, which means the number of recursive calls is somewhere between log_4(n) and log_2(n). The base of the logarithm doesn't impact the running time in big-O notation, because it is just a constant factor. So the running time is O(log(n)).
EDIT:
As ALB pointed out in a comment, there is a loop of length n inside each call. So, in accordance with the master theorem, the running time is Theta(n). You can also see it another way, as the sum n * (1 + 1/2 + 1/4 + 1/8 + ...) = 2 * n.
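If you want to see that bound concretely, you can sum the loop iterations over all recursive calls. A minimal sketch in Java; the counter and the test sizes are added purely for illustration:
public class TCount {
    static long iterations = 0;   // total iterations of the inner for loop, summed over all calls

    static void T(int n) {
        for (int i = 0; i < n; i++) iterations++;   // the O(n) loop from the question
        if (n == 1 || n == 0) return;
        else if (n % 2 == 1) T(n + 1);
        else T(n / 4);
    }

    public static void main(String[] args) {
        for (int n = 1_000; n <= 1_000_000; n *= 10) {
            iterations = 0;
            T(n);
            // the ratio stays a small constant (about 1.3-1.4 here), consistent with Theta(n)
            System.out.println(n + " -> " + iterations + " (ratio " + (double) iterations / n + ")");
        }
    }
}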
Interesting question. Be aware that even though your for loop does nothing, since it is not an optimized solution (see Dukeling's comment), it will still be counted in your time complexity as if it took time to iterate through it.
First part
The first section is definitely O(n).
Second part
Let's suppose, for the sake of simplicity, that half the time n will be odd and the other half of the time it will be even. Hence, the recursion recurses on (n+1) half the time and on (n/4) the other half.
Conclusion
Each time T(n) is called, the recursion implicitly loops n times. Hence, half the time we get a cost of n * (n+1) = n^2 + n, and the other half of the time we deal with n * (n/4) = (1/4)n^2.
For Big O notation, we care more about the upper bound than the precise behavior. Hence, your algorithm is bounded by O(n^2).
Consider the following function:
int testFunc(int n){
if(n < 3) return 0;
int num = 7;
for(int j = 1; j <= n; j *= 2) num++;
for(int k = n; k > 1; k--) num++;
return testFunc(n/3) + num;
}
I get that the first loop is O(log n) while the second loop is O(n), which gives a time complexity of O(n) in total. But because of the recursive calls I thought the time complexity would be O(n log n), yet apparently it is only O(n). Can anyone explain why?
The recursive call pretty much gives the following for the complexity (denoting the complexity for input n by T(n)):
T(n) = log(n) + n + T(n/3)
The first observation, as you correctly noted, is that you can ignore the logarithm, since it is dominated by n. Now we are only left with T(n) = n + T(n/3). Try expanding this all the way down to the base case. We have:
T(n) = n + n/3 + n/9+....
You can easily prove that the above sum is always less than 2*n: it is a geometric series with ratio 1/3, so it sums to at most n / (1 - 1/3) = (3/2)*n. That is enough to state that the overall complexity is O(n).
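To check this concretely, you can sum the iterations of both loops over all recursive calls. A minimal sketch; the work counter and the test sizes are added for illustration:
public class TestFuncCount {
    static long work = 0;   // iterations of both loops, summed over all recursive calls

    static int testFunc(int n) {
        if (n < 3) return 0;
        int num = 7;
        for (int j = 1; j <= n; j *= 2) { num++; work++; }   // the O(log n) loop
        for (int k = n; k > 1; k--)     { num++; work++; }   // the O(n) loop
        return testFunc(n / 3) + num;
    }

    public static void main(String[] args) {
        for (int n = 1_000; n <= 1_000_000; n *= 10) {
            work = 0;
            testFunc(n);
            // work/n settles around 1.5, matching n + n/3 + n/9 + ... <= (3/2)*n < 2*n
            System.out.println(n + " -> " + work + " (work/n = " + (double) work / n + ")");
        }
    }
}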
For procedures using a recursive algorithm such as the following:
procedure T( n : size of problem ) defined as:
if n < base_case then exit
Do work of amount f(n) // In this case, the O(n) for loop
T(n/b)
T(n/b)
... a times... // In this case, b = 3, and a = 1
T(n/b)
end procedure
Applying the Master theorem to find the time complexity, the f(n) in this case is O(n) (due to the second for loop, like you said). This makes c = 1.
Now, log_b(a) = log_3(1) = 0, making this the 3rd case of the theorem, according to which the time complexity is T(n) = Θ(f(n)) = Θ(n).
How can I calculate the time complexity of the following piece of code? Suppose m is close to n. What I got is f(n) = 2*f(n-1), so the time complexity is f(n) = O(2^n). Am I right?
int uniquePaths(int m, int n) {
if (m < 1 || n < 1) return 0;
if (m == 1 && n == 1) return 1;
return uniquePaths(m - 1, n) + uniquePaths(m, n - 1);
}
There is some hand-waving involved in what follows, but I think it's essentially correct.
Every leaf in the call tree will contribute 1 to the total result, so the number of leaves is uniquePaths(m,n). Since uniquePaths(m,n) == "m+n-2 choose n-1", when m and n are similar the execution time of your algorithm will be roughly the central binomial coefficient "2n choose n", which is in O(4^n).
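If you want to check the "m+n-2 choose n-1" identity for small inputs, here is a minimal sketch; the binom helper and the test ranges are added for illustration:
public class UniquePathsCheck {
    static int uniquePaths(int m, int n) {
        if (m < 1 || n < 1) return 0;
        if (m == 1 && n == 1) return 1;
        return uniquePaths(m - 1, n) + uniquePaths(m, n - 1);
    }

    // binomial coefficient "a choose b", for small values only
    static long binom(int a, int b) {
        long result = 1;
        for (int i = 1; i <= b; i++) result = result * (a - b + i) / i;
        return result;
    }

    public static void main(String[] args) {
        for (int m = 1; m <= 10; m++)
            for (int n = 1; n <= 10; n++)
                if (uniquePaths(m, n) != binom(m + n - 2, n - 1))
                    System.out.println("mismatch at m=" + m + ", n=" + n);
        System.out.println("checked all m, n <= 10");
    }
}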
First, here's my Shell sort code (using Java):
public char[] shellSort(char[] chars) {
int n = chars.length;
int increment = n / 2;
while(increment > 0) {
int last = increment;
while(last < n) {
int current = last - increment;
while(current >= 0) {
if(chars[current] > chars[current + increment]) {
//swap
char tmp = chars[current];
chars[current] = chars[current + increment];
chars[current + increment] = tmp;
current -= increment;
}
else { break; }
}
last++;
}
increment /= 2;
}
return chars;
}
Is this a correct implementation of Shell sort (forgetting for now about the most efficient gap sequence - e.g., 1,3,7,21...)? I ask because I've heard that the best-case time complexity for Shell Sort is O(n). (See http://en.wikipedia.org/wiki/Sorting_algorithm). I can't see this level of efficiency being realized by my code. If I added heuristics to it, then yeah, but as it stands, no.
That being said, my main question now: I'm having difficulty calculating the Big O time complexity for my Shell sort implementation. I identified the outer-most loop as O(log n), the middle loop as O(n), and the inner-most loop also as O(n), but I realize the inner two loops would not actually be O(n) - they would be much less than that - so what should they be? Because obviously this algorithm runs much more efficiently than O((log n) n^2).
Any guidance is much appreciated as I'm very lost! :P
The worst-case of your implementation is Θ(n^2) and the best-case is O(nlogn) which is reasonable for shell-sort.
The best case ∊ O(nlogn):
The best-case is when the array is already sorted. That would mean that the inner if statement is never true, making the inner while loop a constant-time operation. Using the bounds you've used for the other loops gives O(nlogn). The best case of O(n) is reached by using a constant number of increments.
The worst case ∊ O(n^2):
Given your upper bound for each loop you get O((log n)n^2) for the worst-case. But add another variable for the gap size g. The number of compare/exchanges needed in the inner while is now <= n/g. The number of compare/exchanges of the middle while is <= n^2/g. Add the upper-bound of the number of compare/exchanges for each gap together: n^2 + n^2/2 + n^2/4 + ... <= 2n^2 ∊ O(n^2). This matches the known worst-case complexity for the gaps you've used.
The worst case ∊ Ω(n^2):
Consider the array where all the even positioned elements are greater than the median. The odd and even elements are not compared until we reach the last increment of 1. The number of compare/exchanges needed for the last iteration is Ω(n^2).
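For a concrete example of such an input, here is a minimal sketch that builds one for the n/2, n/4, ..., 2, 1 gap sequence (the exact value layout below is just one possible choice, added for illustration):
// Builds a char array (n a small power of two, at most 26 here so the values
// stay within 'a'..'z') whose even positions hold the n/2 largest values and
// whose odd positions hold the n/2 smallest, each half in ascending order.
// Every gap except the final gap of 1 is even, so even and odd positions are
// never compared against each other until the last pass, which then needs on
// the order of n^2/8 exchanges.
static char[] worstCaseInput(int n) {
    char[] a = new char[n];
    for (int i = 0; i < n / 2; i++) {
        a[2 * i] = (char) ('a' + n / 2 + i);     // large values at even indices
        a[2 * i + 1] = (char) ('a' + i);         // small values at odd indices
    }
    return a;
}
For example, shellSort(worstCaseInput(16)) does no exchanges at the gaps 8, 4 and 2, and then does essentially all of its work at gap 1.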
Insertion Sort
If we analyse
static void sort(int[] ary) {
int i, j, insertVal;
int aryLen = ary.length;
for (i = 1; i < aryLen; i++) {
insertVal = ary[i];
j = i;
/*
* while loop exits as soon as it finds left hand side element less than insertVal
*/
while (j >= 1 && ary[j - 1] > insertVal) {
ary[j] = ary[j - 1];
j--;
}
ary[j] = insertVal;
}
}
Hence in the average case the while loop will exit in the middle,
i.e. 1/2 + 2/2 + 3/2 + 4/2 + .... + (n-1)/2 = Theta((n^2)/2) = Theta(n^2)
You can see here that we get (n^2)/2, even though the division by two makes no difference to the asymptotic order.
Shell Sort is nothing but insertion sort using gaps like n/2, n/4, n/8, ...., 2, 1,
meaning it takes advantage of the best-case behaviour of insertion sort: the while loop starts exiting very quickly, as soon as a small element is found to the left of the element being inserted, and these cheap passes add up over the total execution time.
Across the gaps n/2, n/4, n/8, n/16, ...., n/n there are about log n passes, and in this nearly-sorted situation each pass costs roughly n comparisons, so the passes together contribute about n*log n.
Hence its time complexity is something close to n(logn)^2.
What is the time and space complexity of:
int superFactorial4(int n, int m)
{
if(n <= 1)
{
if(m <= 1)
return 1;
else
n = m -= 1;
}
return n*superFactorial4(n-1, m);
}
It runs recursively, decreasing the value of n by 1 until it equals 1, and then it either decreases the value of m by 1 or returns 1 when m equals 1.
I think the complexity depends on both n and m so maybe it's O(n*m).
Actually, it looks closer to O(n + m^2) to me. n is only used for the first "cycle".
Also, in any language that doesn't do tail call optimization the space complexity is likely to be "it fails" (a stack overflow for large inputs). In languages that support the optimization, the space complexity is more like O(1).
The time complexity is O(n+m^2), space complexity the same.
Reasoning: with a fixed value of m, the function makes n recursive calls to itself, each doing constant work, so the cost of the calls with fixed m is n. Now, when n reaches 1, it becomes m-1 and m becomes m-1 too. So the next fixed-m phase takes m-1 calls, the next m-2, and so on. So you get the sum (m-1)+(m-2)+...+1, which is O(m^2).
The space complexity is the same, because each recursive call takes constant space, you never return from the recursion until the very end, and there is no tail recursion.
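A quick way to check the O(n + m^2) claim is to count the recursive calls. A minimal sketch; the counter and the sample values are for illustration, and the returned product overflows int almost immediately, but the control flow, and hence the call count, depends only on n and m:
public class SuperFactorialCount {
    static long calls = 0;   // number of invocations of superFactorial4

    static int superFactorial4(int n, int m) {
        calls++;
        if (n <= 1) {
            if (m <= 1) return 1;
            else n = m -= 1;
        }
        return n * superFactorial4(n - 1, m);   // value overflows, but control flow only depends on n and m
    }

    public static void main(String[] args) {
        int n = 300, m = 60;
        superFactorial4(n, m);
        // roughly n + m^2/2 calls in total, i.e. O(n + m^2)
        System.out.println("calls = " + calls + " (rough estimate n + m*m/2 = " + (n + m * m / 2) + ")");
    }
}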
The time complexity of a Factorial function using recursion
pseudo code:
int fact(int n)
{
    if (n == 0)
    {
        return 1;
    }
    else if (n == 1)
    {
        return 1;
    }
    else
    {
        return n * fact(n - 1);
    }
}
time complexity:
let T(n) be the number of steps taken to compute fact(n).
each call does a constant amount of work c (the comparisons and one multiplication) and then makes a single recursive call on n-1, so
T(n) = T(n-1) + c
T(n-1) = T(n-2) + c
substitute this in T(n), we get
T(n) = T(n-2) + 2c
.
.
.
T(n) = T(n-k) + k*c
n-k = 1;
k = n-1;
T(n) = T(1) + (n-1)*c
since T(1) = 1
T(n) = 1 + (n-1)*c
now it is in the form of
T(n) <= c'*g(n) with g(n) = n
so we can say that the time complexity of factorial using recursion is
T(n) = O(n)
note that the value F(n) = n! itself does grow like n!, but that is the size of the result, not the number of steps: the recursion makes only n calls, each doing a constant amount of work (assuming multiplication is a constant-time operation).
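A minimal sketch that counts the recursive calls (the counter is added for illustration) confirms the linear number of steps:
public class FactCount {
    static long calls = 0;   // number of invocations of fact

    static long fact(int n) {
        calls++;
        if (n == 0 || n == 1) return 1;
        return n * fact(n - 1);
    }

    public static void main(String[] args) {
        for (int n = 5; n <= 20; n += 5) {
            calls = 0;
            fact(n);
            // exactly n calls for n >= 1: one per decrement of n down to 1
            System.out.println(n + " -> " + calls + " calls");
        }
    }
}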