Recursive Function Big-Oh Complexity - data-structures

Algorithm multiply(n, m)
PRE:  n :: Integer, greater than or equal to 0
      m :: Integer
POST: ????
RETURNS: the product, n * m
    if (n = 0)
        return 0
    else if (n is even)
        return multiply(n/2, m+m)
    else
        return m + multiply((n-1)/2, m+m)
    endif
Is this function O(log n) because it divides n in every recursive case? I am studying for my midterm and I want to make sure I am doing this correctly. Thank you in advance.

Yes, you are correct. Each call halves n and does a constant amount of work, so the recursion depth is about log2(n) and the total is O(log n).
Be careful, though: a division in a recursive call is not always an indication of logarithmic complexity. The argument has to shrink geometrically and the non-recursive work per call has to stay small.
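For intuition, here is a minimal C sketch of the same halving scheme; the call counter is my addition for illustration and is not part of the original pseudocode:

#include <stdio.h>

static int calls = 0;  /* counts recursive calls, for illustration only */

/* Multiply n and m by repeatedly halving n and doubling m,
   following the pseudocode above. Assumes n >= 0. */
long multiply(long n, long m) {
    calls++;
    if (n == 0)
        return 0;
    else if (n % 2 == 0)
        return multiply(n / 2, m + m);
    else
        return m + multiply((n - 1) / 2, m + m);
}

int main(void) {
    printf("%ld\n", multiply(1000, 7)); /* 7000 */
    printf("calls: %d\n", calls);       /* floor(log2(1000)) + 2 = 11 */
    return 0;
}

Running it for a few values of n shows the call count growing by one each time n doubles, which is exactly the logarithmic behavior claimed above.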

Related

Big-Theta Complexity of Recursive Algorithm

I am currently trying to determine the Big-Theta complexity of the recursive algorithm below. It makes sense that the complexity is at least n^2 (due to the nested for-loop), but the recursive aspect makes it hard for me to pin down the precise Big-Theta complexity.
I guess it must be n^3, since the function runs the nested loops and then calls itself recursively, but I struggle to find a proof of that. Can anyone tell me the complexity and how to determine it for recursive algorithms?
function F(n)
    if n < 1:
        return 1
    t = 0
    for i <- 0 to n:
        for j <- i to n:
            t = t + j
    return t + F(n-1)
The recursive aspect of this algorithm is fairly simple: each call just adds its own t to the result of the next call, so it can be rewritten as a loop:
function F(n)
    t = 0
    for k <- n to 1:
        for i <- 0 to k:
            for j <- i to k:
                t = t + j
    return t + 1
...where k takes the value that each recursive call would have received as n (the trailing + 1 is the return value of the base case).
The number of times t = t + j executes determines the time complexity (assuming addition is a Θ(1) step). That count is a tetrahedral number, which grows as Θ(n³).
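To make the count explicit, sum the iterations of the innermost statement over the three loops (a worked derivation using the loop bounds as written above):

\sum_{k=1}^{n}\sum_{i=0}^{k}\sum_{j=i}^{k} 1 = \sum_{k=1}^{n}\frac{(k+1)(k+2)}{2} = \binom{n+3}{3} - 1 \approx \frac{n^3}{6}

which is Θ(n³), matching the tetrahedral-number claim.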

How to perform the big O analysis

Below is my code.
How can you perform the big-O analysis for intDiv(m-n, n), where m and n are two arbitrary inputs?
// This code finds the number of times n goes into m.
int intDiv(int m, int n) {
    if (n > m)
        return 0;
    else
        return 1 + intDiv(m - n, n);
}
It depends on the values of m and n. In general, the number of recursive steps for any pair (m, n) with m >= 0 and n > 0 is roughly ⌈m/n⌉.
Therefore, the time complexity of the above algorithm is O(⌈m/n⌉), where ⌈ ⌉ denotes the ceiling. (For n <= 0 the recursion never terminates, so a positive n must be assumed.)
I think it totally depends on n and m. For example, if n = 1 then it is O(m); and when n is on the order of m, it finishes in O(1) calls.
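A quick instrumented sketch bears this out; the call counter and the test values are my additions, for illustration:

#include <stdio.h>

static int calls = 0;  /* counts invocations, for illustration only */

/* Number of times n goes into m, by repeated subtraction. Assumes n > 0. */
int intDiv(int m, int n) {
    calls++;
    if (n > m)
        return 0;
    return 1 + intDiv(m - n, n);
}

int main(void) {
    calls = 0; printf("intDiv(100, 1)  = %d, calls = %d\n", intDiv(100, 1), calls);  /* 100, 101 calls */
    calls = 0; printf("intDiv(100, 7)  = %d, calls = %d\n", intDiv(100, 7), calls);  /* 14, 15 calls   */
    calls = 0; printf("intDiv(100, 50) = %d, calls = %d\n", intDiv(100, 50), calls); /* 2, 3 calls     */
    return 0;
}

The call count is floor(m/n) + 1 in each case, i.e. Θ(m/n), consistent with the ⌈m/n⌉ estimate above.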

How can I calculate time complexity of this recursive algorithm

How can I calculate the time complexity of the following piece of code? Suppose m is close to n. What I got is f(n) = 2*f(n-1), so the time complexity is f(n) = O(2^n). Am I right?
int uniquePaths(int m, int n) {
    if (m < 1 || n < 1) return 0;
    if (m == 1 && n == 1) return 1;
    return uniquePaths(m - 1, n) + uniquePaths(m, n - 1);
}
There is some hand-waving involved in what follows, but I think it's essentially correct.
Every leaf in the call tree that returns 1 contributes 1 to the total result, so the number of such leaves is uniquePaths(m, n). Since uniquePaths(m, n) = C(m+n-2, n-1), when m and n are similar the execution time of your algorithm is roughly the central binomial coefficient C(2n, n), which is in O(4^n); so the growth is even faster than your O(2^n) estimate.
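A small check of that identity (the binom helper and the test loop are mine, for illustration): the recursion's return value, which equals the number of contributing leaves, matches C(m+n-2, n-1).

#include <stdio.h>

int uniquePaths(int m, int n) {
    if (m < 1 || n < 1) return 0;
    if (m == 1 && n == 1) return 1;
    return uniquePaths(m - 1, n) + uniquePaths(m, n - 1);
}

/* C(a, b) via the multiplicative formula; exact for small inputs. */
long long binom(int a, int b) {
    long long r = 1;
    for (int i = 1; i <= b; i++)
        r = r * (a - b + i) / i;
    return r;
}

int main(void) {
    for (int k = 2; k <= 10; k++)
        printf("uniquePaths(%d,%d) = %d, C(%d,%d) = %lld\n",
               k, k, uniquePaths(k, k), 2 * k - 2, k - 1, binom(2 * k - 2, k - 1));
    return 0;
}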

How to solve the recursion of the given algorithm?

int gcd(int n, int m)
{
    if (n % m == 0) return m;
    n = n % m;
    return gcd(m, n);
}
I solved this and I got
T(n, m) = 1 + T(m, n%m)  if n > m
        = 1 + T(m, n)    if n < m
        = m              if n%m == 0
I am confused about how to proceed further to get the final result. Please help me solve this.
The problem here is that the sizes of the next values of m and n depend on exactly what the previous values were, not just on their sizes. Knuth goes into this in detail in "The Art of Computer Programming", Vol. 2: Seminumerical Algorithms, section 4.5.3. After about five pages he proves what you might have guessed: the worst case is when m and n are consecutive Fibonacci numbers. From this (or otherwise!) it turns out that in the worst case the number of divisions required is linear in the logarithm of the larger of the two arguments.
After a great deal more heavy-duty math, Knuth proves that the average case is also linear in the logarithm of the arguments.
mcdowella has given a perfect answer to this.
For an intuitive explanation, you can think of it this way: if n >= m, then n mod m < n/2.
This can be shown by cases:
if m <= n/2, then n mod m < m <= n/2;
if m > n/2, then n mod m = n - m < n/2.
So each call effectively halves the larger argument, and after two calls both arguments have been halved, which gives the logarithmic bound.
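Here is a small instrumented sketch of both observations; the call counter and the Fibonacci test pairs are my additions, for illustration. Consecutive Fibonacci numbers cost one extra call per index, so the worst-case call count grows with the logarithm of the inputs:

#include <stdio.h>

static int calls = 0;  /* counts recursive calls, for illustration only */

int gcd(int n, int m)
{
    calls++;
    if (n % m == 0) return m;
    return gcd(m, n % m);
}

int main(void) {
    /* Consecutive Fibonacci pairs: the classic worst case. */
    int fib[] = {1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987};
    int len = sizeof fib / sizeof fib[0];
    for (int i = 1; i < len; i++) {
        calls = 0;
        gcd(fib[i], fib[i - 1]);
        printf("gcd(%d, %d): %d calls\n", fib[i], fib[i - 1], calls);
    }
    return 0;
}

The output shows the call count increasing by exactly one per Fibonacci pair while the inputs grow by a constant factor, i.e. logarithmic growth in the argument values.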

Complexity of a function

What is the time and space complexity of:
int superFactorial4(int n, int m)
{
    if (n <= 1)
    {
        if (m <= 1)
            return 1;
        else
            n = m -= 1;
    }
    return n * superFactorial4(n - 1, m);
}
It runs recursively by decreasing the value of n by 1 until it equals 1 and then it will either decrease the value of m by 1 or return 1 in case m equals 1.
I think the complexity depends on both n and m so maybe it's O(n*m).
Actually, it looks closer to O(n + m^2) to me. n is only used for the first "cycle".
Also, in any language that doesn't do tail-call optimization, the space complexity is likely to be "the stack overflows". In languages that support the optimization, the space complexity would be more like O(1).
The time complexity is O(n + m^2), and the space complexity is the same.
Reasoning: with a fixed value of m, the function makes n recursive calls to itself, each doing constant work, so the cost of the phase with fixed m is n. When n reaches 1, both n and m become m-1, so the next fixed-m phase takes m-1 calls, the one after that m-2, and so on. You get the sum (m-1) + (m-2) + ... + 1, which is O(m^2).
The space complexity is the same because each recursive call takes constant stack space, you never return from the recursion until the very end, and there is no tail recursion (the multiplication happens after the recursive call returns).
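Written out as a worked sum (phases of length n, then m-1, m-2, ..., 1):

T(n, m) \approx n + \sum_{k=1}^{m-1} k = n + \frac{m(m-1)}{2} = \Theta(n + m^2)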
The time complexity of a Factorial function using recursion
Pseudocode:
int fact(int n)
{
    if (n == 0)
    {
        return 1;
    }
    else if (n == 1)
    {
        return 1;
    }
    else
    {
        return n * fact(n - 1);
    }
}
Time complexity:
Let T(n) be the number of steps taken to compute fact(n).
Each call does a constant amount of work c (the comparisons and one multiplication) and makes exactly one recursive call, so
T(n) = T(n-1) + c
Unrolling the recurrence:
T(n) = T(n-2) + 2c
T(n) = T(n-3) + 3c
.
.
.
T(n) = T(n-k) + kc
Setting n-k = 1, i.e. k = n-1:
T(n) = T(1) + (n-1)c
Since T(1) is a constant, this gives
T(n) = Θ(n)
Note that it is the value of fact(n) that grows like n!, not the number of steps: the recursion makes one call per level and there are only n levels. (This assumes each multiplication is a Θ(1) step; with arbitrary-precision integers the multiplications themselves become more expensive as the numbers grow.)
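A quick check of the linear call count (the counter is my addition, for illustration):

#include <stdio.h>

static int calls = 0;  /* counts recursive calls, for illustration only */

long long fact(int n) {
    calls++;
    if (n <= 1) return 1;
    return n * fact(n - 1);
}

int main(void) {
    calls = 0; fact(5);  printf("fact(5):  %d calls\n", calls); /* 5 calls  */
    calls = 0; fact(20); printf("fact(20): %d calls\n", calls); /* 20 calls */
    return 0;
}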
