How can I calculate the time complexity of this recursive algorithm?

How can I calculate the time complexity of the following piece of code? Suppose m is close to n. What I got is f(n) = 2*f(n-1), so the time complexity is f(n) = O(2^n). Am I right?
int uniquePaths(int m, int n) {
    if (m < 1 || n < 1) return 0;
    if (m == 1 && n == 1) return 1;
    return uniquePaths(m - 1, n) + uniquePaths(m, n - 1);
}

There is some hand-waving involved in what follows, but I think it's essentially correct.
Every leaf in the call tree will contribute 1 to the total result, so the number of leaves is uniquePaths(m,n). Since uniquePaths(m,n) == "m+n-2 choose n-1", when m and n are similar the execution time of your algorithm will be roughly the central binomial coefficient "2n choose n", which is in O(4^n).
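(Not part of the original answer, just a quick sanity check.) Below is a small Go sketch that instruments the naive recursion with a call counter, so you can watch the number of calls grow at the same exponential rate as the result itself; the counter plumbing and names are my own, added only for illustration.

package main

import "fmt"

// uniquePaths is the recursion from the question ported to Go,
// with a counter so we can see how large the call tree gets.
func uniquePaths(m, n int, calls *int) int {
    *calls++
    if m < 1 || n < 1 {
        return 0
    }
    if m == 1 && n == 1 {
        return 1
    }
    return uniquePaths(m-1, n, calls) + uniquePaths(m, n-1, calls)
}

func main() {
    for k := 2; k <= 12; k += 2 {
        calls := 0
        paths := uniquePaths(k, k, &calls)
        // paths equals C(2k-2, k-1); the call count grows at a comparable exponential rate.
        fmt.Printf("m = n = %d: paths = %d, calls = %d\n", k, paths, calls)
    }
}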


Time complexity of this power function

I'm having this code to calculate the power of certain number
func power(x, n int) int {
    if n == 0 {
        return 1
    }
    if n == 1 {
        return x
    }
    if n%2 == 0 {
        return power(x, n/2) * power(x, n/2)
    } else {
        return power(x, n/2) * power(x, n/2) * x
    }
}
So, the total number of executions is 1 + 2 + 4 + ... + 2^k,
and according to the formula for the sum of a geometric progression,
a(1 - r^n) / (1 - r),
the total is about 2^(k+1), i.e. O(2^k), where k is the height of the recursion tree.
Hence the time complexity is 2^(log n).
Am I correct? Thanks :)
Yes.
Another way of thinking about the complexity of recursive functions is (branching factor)^(height of the recursion tree).
In each call you make two further calls and divide n by two, so the height of the tree is log n and the time complexity is 2^(log n), which is O(n).
See a much more formal proof here:
https://cs.stackexchange.com/questions/69539/time-complexity-of-recursive-power-code
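For a quick empirical check (this is my own sketch, not the linked proof), you can instrument the original two-call version and count invocations; the count comes out roughly linear in n, i.e. 2^(log2 n) = n:

package main

import "fmt"

// power is the two-recursive-call version from the question,
// instrumented with a call counter; the numeric result is ignored here.
func power(x, n int, calls *int) int {
    *calls++
    if n == 0 {
        return 1
    }
    if n == 1 {
        return x
    }
    if n%2 == 0 {
        return power(x, n/2, calls) * power(x, n/2, calls)
    }
    return power(x, n/2, calls) * power(x, n/2, calls) * x
}

func main() {
    for _, n := range []int{8, 64, 512, 4096} {
        calls := 0
        power(1, n, &calls) // x = 1 keeps the result small; only the call count matters
        fmt.Printf("n = %d: calls = %d\n", n, calls)
    }
}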
You divide n by 2 every time until n <= 1. So think: how many times can you reduce n to 1 just by dividing by 2? Let's see,
n = 26
n1 = 13
n2 = 6 (take floor of 13/2)
n3 = 3
n4 = 1 (take floor of 3/2)
Let's say the x-th power of 2 is greater than or equal to n. Then,
2^x >= n
or, x >= log2(n)
or, x = ceil(log2(n))
That is how you find the depth of the recursion, which is log2(n). (Keep in mind that, as the answer above notes, the code as written makes two recursive calls at each step, so the total number of calls is about 2^(log2(n)) = n; with a single recursive call per step the running time really would be O(log n).)
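For contrast, here is a minimal sketch (mine, not from the answer) of the usual fix: compute the half power once and square it, so there is only one recursive call per level and the log2(n) depth really is the running time.

// fastPower reuses the half result instead of recomputing it,
// so the whole recursion makes only O(log n) calls.
func fastPower(x, n int) int {
    if n == 0 {
        return 1
    }
    half := fastPower(x, n/2)
    if n%2 == 0 {
        return half * half
    }
    return half * half * x
}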

How do you get the complexity of a sequence alignment algorithm?

int opt(int i, int j)
{
    if (i == m)
        return 2 * (n - j);
    else if (j == n)
        return 2 * (m - i);
    else {
        int penalty;
        if (x[i] == x[j])
            penalty = 0;
        else
            penalty = 1;
        return min(opt(i + 1, j + 1) + penalty, opt(i + 1, j) + 2, opt(i, j + 1) + 2);
    }
}
Why is the complexity of this algorithm 3^n ?
Analyze the time complexity of Algorithm opt.
You should first specify how you are calling this function.
Regarding the big-O analysis, you can get it by drawing the recursion tree. Do it for a few small inputs and you will notice that the height of the tree is roughly n. Now notice that each call to the function makes the same call 3 more times, so the tree expands by a factor of 3 at every level. Hence your complexity is O(3^n).
Bonus: Analogy with Fibonacci
Check the basic (without memoization) recursive version of the Fibonacci algorithm and you will see a similar structure, except that 2 calls are made each time, and hence the complexity is O(2^n).
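For reference, this is the memoization-free Fibonacci the analogy refers to, written in Go to match the earlier snippets; two calls per invocation and depth n give the O(2^n)-shaped tree, exactly as three calls per invocation give O(3^n) for opt.

// fib makes two recursive calls per invocation, so its call tree
// has branching factor 2 and depth n, i.e. O(2^n) nodes.
func fib(n int) int {
    if n < 2 {
        return n
    }
    return fib(n-1) + fib(n-2)
}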

Complexity Algorithm Analysis with if

I have the following code. What time complexity does it have?
I have tried to write a recurrence relation for it, but I can't figure out when the algorithm adds 1 to n and when it divides n by 4.
void T(int n) {
    for (int i = 0; i < n; i++);  // empty loop, runs n times
    if (n == 1 || n == 0)
        return;
    else if (n % 2 == 1)
        T(n + 1);
    else if (n % 2 == 0)
        T(n / 4);
}
You can view it like this: you always divide by four; only when n is odd do you first add 1 to it. So you should count how many times 1 gets added. If there are no increments, you have log4(n) recursive calls. Now assume the worst case, that you always have to add 1 before dividing. Then you can rewrite it like this:
void T(int n) {
    for (int i = 0; i < n; i++);
    if (n == 1 || n == 0)
        return;
    else if (n % 2 == 0)
        T(n / 4 + 1);
}
But n/4 + 1 <= n/2 for n >= 4, so the recursion shrinks at least as fast as T(n/2) would, and the number of recursive calls is O(log n); the base of the logarithm (2 or 4) doesn't matter in big-O notation, since it is only a constant factor. So the recursive part contributes O(log n).
EDIT:
As ALB pointed out in a comment, there is a loop of length n in every call. So, in accordance with the Master theorem, the running time is Theta(n). You can also see it directly as the sum of the loop lengths, roughly n * (1 + 1/4 + 1/16 + ...), which is less than 2 * n.
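To see the geometric series concretely, here is a small Go sketch (my own instrumentation, not part of the answer) that adds up the loop lengths over all recursive calls of T(n):

package main

import "fmt"

// work mirrors T(n) from the question and returns the total number of
// iterations executed by the empty for loop across all recursive calls.
func work(n int) int {
    if n <= 1 {
        return n
    }
    if n%2 == 1 {
        return n + work(n+1)
    }
    return n + work(n/4)
}

func main() {
    for _, n := range []int{100, 10000, 1000000} {
        total := work(n)
        // For these inputs the total stays well below 2*n, as the series predicts.
        fmt.Printf("n = %d: total iterations = %d (%.2f * n)\n", n, total, float64(total)/float64(n))
    }
}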
Interesting question. Be aware that even though your for loop does nothing, we are not assuming it gets optimized away (see Dukeling's comment), so its iterations still count toward the running time.
First part
The first section is definitely O(n).
Second part
Let's suppose, for the sake of simplicity, that n is odd half the time and even the other half. Hence, half the time the recursion goes to (n + 1) and the other half to (n / 4).
Conclusion
Each time T(n) is called, the loop implicitly runs n times. Hence, half the time we get a cost of roughly n * (n + 1) = n^2 + n, and the other half roughly n * (n / 4) = (1/4)n^2.
For Big O notation, we care more about the upper bound than the precise behavior. Hence, your algorithm is bounded by O(n^2). (This bound is not tight; the geometric-series argument above gives Theta(n).)

Write an algorithm to efficiently find all i and j for any given N such that N=i^j

I am looking for an efficient algorithm for the following problem: for any N, find all i and j such that N = i^j.
I can solve it in O(N^2) as follows,
for i = 1 to N
{
    for j = 1 to N
    {
        if (Power(i, j) == N)
            print(i, j)
    }
}
I am looking for a better algorithm (or a program in any language) if possible.
Given that i^j=N, you can solve the equation for j by taking the log of both sides:
j log(i) = log(N) or j = log(N) / log(i). So the algorithm becomes
for i = 2 to N
{
    j = round(log(N) / log(i))
    if (Power(i, j) == N)
        print(i, j)
}
Note that due to rounding errors with floating point calculations, you might want to check j-1 and j+1 as well, but even so, this is an O(n) solution.
Also, you need to skip i=1 since log(1) = 0 and that would result in a divide-by-zero error. In other words, N=1 needs to be treated as a special case. Or not allowed, since the solution for N=1 is i=1 and j=any value.
As M Oehm pointed out in the comments, an even better solution is to iterate over j, and compute i with pow(n,1.0/j). That reduces the time complexity to O(logN), since the maximum value for j is log2(N).
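A minimal Go sketch of that j-first idea follows; the round-and-verify step and the helper names (findPowers, intPow) are my own additions to cope with floating-point error, not something prescribed by the answer.

package main

import (
    "fmt"
    "math"
)

// intPow computes base^exp with plain integer multiplication (no overflow checks).
func intPow(base, exp int) int {
    result := 1
    for k := 0; k < exp; k++ {
        result *= base
    }
    return result
}

// findPowers prints every pair (i, j) with i, j >= 2 such that i^j == n.
// For each exponent j up to log2(n) it estimates i = n^(1/j) and then
// verifies the neighbouring integers exactly.
func findPowers(n int) {
    for j := 2; j <= int(math.Log2(float64(n))); j++ {
        i := int(math.Round(math.Pow(float64(n), 1.0/float64(j))))
        for _, cand := range []int{i - 1, i, i + 1} {
            if cand >= 2 && intPow(cand, j) == n {
                fmt.Printf("%d^%d = %d\n", cand, j, n)
            }
        }
    }
}

func main() {
    findPowers(1024) // prints 32^2, 4^5 and 2^10
}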
Here is a method you can use.
Let's say you have to solve an equation:
a^b = n //b and n are known
You can find this using binary search on a. If you reach a point where
x^b < n and (x+1)^b > n
then no integer a exists such that a^b = n.
If you apply this method for every b in the range 1..log2(n), you get all possible pairs.
So the complexity of this method is O(log n * log n).
Follow these steps:
function ifPower(n, b)
    min = 1, max = n
    while (min <= max)
        mid = min + (max - min) / 2
        k = mid^b, l = (mid + 1)^b
        if (k == n)
            return mid
        if (l == n)
            return mid + 1
        if (k < n && l > n)
            return -1
        if (k < n)
            min = mid + 2   // both mid and mid+1 are too small; mid+1 was already checked
        else
            max = mid - 1   // mid is too large
    return -1

function findAll(n)
    s = log2(n)
    for i in range 2 to s   // starting from 2 to ignore the trivial powers 0 and 1; handle them separately if required
        p = ifPower(n, i)
        if (p != -1)
            print(p, i)
Here, in the algorithm, a^b means a raised to the power b, not a xor b (it's obvious, but just saying).
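A rough Go rendering of that binary-search idea follows; it is a sketch, not the poster's code: the overflow guard powCapped and the a >= 2 / b >= 2 bounds are my own choices, and the usual fmt and math imports are assumed as in the sketch above.

// powCapped computes base^exp but bails out with limit+1 as soon as the
// value exceeds limit, which keeps the binary search free of integer overflow.
func powCapped(base, exp, limit int) int {
    result := 1
    for k := 0; k < exp; k++ {
        result *= base
        if result > limit {
            return limit + 1
        }
    }
    return result
}

// powerBase binary-searches for a base a >= 2 with a^b == n; it returns -1 if none exists.
func powerBase(n, b int) int {
    lo, hi := 2, n
    for lo <= hi {
        mid := lo + (hi-lo)/2
        switch k := powCapped(mid, b, n); {
        case k == n:
            return mid
        case k < n:
            lo = mid + 1
        default:
            hi = mid - 1
        }
    }
    return -1
}

// allPowers tries every exponent b from 2 up to log2(n): O(log n) binary
// searches, each taking O(log n) steps, matching the O(log n * log n) estimate.
func allPowers(n int) {
    for b := 2; b <= int(math.Log2(float64(n))); b++ {
        if a := powerBase(n, b); a != -1 {
            fmt.Printf("%d^%d = %d\n", a, b, n)
        }
    }
}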

Recursion, inner loop and time complexity

Consider the following function:
int testFunc(int n){
    if(n < 3) return 0;
    int num = 7;
    for(int j = 1; j <= n; j *= 2) num++;
    for(int k = n; k > 1; k--) num++;
    return testFunc(n/3) + num;
}
I get that the first loop is O(log n) while the second loop gives O(n), which gives a time complexity of O(n) in total. But because of the recursive call I thought the time complexity would be O(n log n); apparently it is only O(n). Can anyone explain why?
The recursive call pretty much gives the following for the complexity (denoting the complexity for input n by T(n)):
T(n) = log(n) + n + T(n/3)
The first observation, as you correctly noted, is that you can ignore the logarithm, since it is dominated by n. Now we are left with T(n) = n + T(n/3). Try expanding this down to the base case. We have:
T(n) = n + n/3 + n/9+....
You can easily prove that the above sum is always less than 2*n. In fact, tighter bounds can be proven, but this one is enough to conclude that the overall complexity is O(n).
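If you want to see the bound numerically, here is a tiny Go sketch (my own, in the same language as the earlier power example) that evaluates the dominant part of the recurrence, n + n/3 + n/9 + ..., exactly as the second loop of testFunc accumulates it over the recursive calls:

// loopWork sums the length of the O(n) loop over all recursive calls,
// i.e. it computes n + n/3 + n/9 + ... with integer division,
// stopping where testFunc stops (n < 3).
func loopWork(n int) int {
    if n < 3 {
        return 0
    }
    return n + loopWork(n/3)
}

For n = 1,000,000, loopWork returns just under 1.5 million, i.e. about 1.5*n, comfortably inside the 2*n bound quoted above.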
For procedures using a recursive algorithm such as the following:
procedure T( n : size of problem ) defined as:
    if n < base_case then exit
    Do work of amount f(n)   // in this case, the O(n) for loop
    T(n/b)
    T(n/b)
    ... a times ...          // in this case, b = 3 and a = 1
    T(n/b)
end procedure
Applying the Master theorem to find the time complexity, the f(n) in this case is O(n) (due to the second for loop, like you said). This makes c = 1.
Now, log_b(a) = log_3(1) = 0 < c, making this the 3rd case of the theorem, according to which the time complexity is T(n) = Θ(f(n)) = Θ(n).

Resources