Computing expected time complexity of recursive program - algorithm

I wish to determine the average processing time T(n) of the recursive algorithm:
int myTest( int n ) {
    if ( n <= 0 ) {
        return 0;
    }
    else {
        int i = random( n - 1 );
        return myTest( i ) + myTest( n - 1 - i );
    }
}
provided that the algorithm random( int n ) spends one time unit to return
a random integer value uniformly distributed in the range [0, n] whereas
all other instructions spend a negligibly small time (e.g., T(0) = 0).
This is certainly not of the simpler form T(n) = a * T(n/b) + c, so I am lost as to how to write it. What trips me up is that a random number is drawn from the range 0 to n-1 on each call and then used in both recursive calls, whose results are summed.

The recurrence relations are:
T(0) = 0
T(n) = 1 + sum(T(i) + T(n-1-i) for i = 0..n-1) / n
The second can be simplified to:
T(n) = 1 + 2*sum(T(i) for i = 0..n-1) / n
Multiplying by n:
n T(n) = n + 2*sum(T(i) for i = 0..n-1)
Noting that (n-1) T(n-1) = n-1 + 2*sum(T(i) for i = 0..n-2), we get:
n T(n) = (n-1) T(n-1) + 1 + 2T(n-1)
= (n+1) T(n-1) + 1
Or:
T(n) = ((n+1)T(n-1) + 1) / n
This has the solution T(n) = n, which you can derive by telescoping the series, or by guessing the solution and then substituting it in to prove it works.
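If you want to sanity-check that closed form, here is a small sketch (my own, not part of the original answer) that evaluates the recurrence T(n) = ((n+1)T(n-1) + 1)/n with exact rational arithmetic and compares it against T(n) = n:
from fractions import Fraction

def T(n):
    # T(0) = 0; T(n) = ((n + 1) * T(n - 1) + 1) / n for n >= 1
    t = Fraction(0)
    for k in range(1, n + 1):
        t = ((k + 1) * t + 1) / k
    return t

for n in range(20):
    assert T(n) == n   # the closed form T(n) = n holds exactly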

Related

What is the time complexity of recursive function?

I have a recursive function and I'm trying to work out its time complexity.
Here is the function:
public static int f7(int N) {
    if (N == 1) return 0;
    return 1 + f7(N/2);
}
First, we come up with a recurrence for this function:
T(1) = 1
T(n) = T(n/2) + 1
This is a recurrence that we can plug into the master theorem, which will give us Θ(log n) as an answer.
Assume that when N=1, the call takes a time units, and when N is a power of 2, it takes b time units, not counting the recursive call.
Then
T(1) = a
T(2^n) = T(2^(n-1)) + b.
This can be seen as an ordinary linear recurrence
S(0) = a
S(n) = S(n-1) + b = S(n-2) + 2b = … = S(0) + nb = a + nb,
or
T(N) = a + Lg(N) b
where Lg denotes the base-2 logarithm.
When N is not a power of 2, the time is the same as for the nearest inferior power of 2.
The exact formula for all N is
T(N) = a + [Lg(N)] b.
Brackets denote the floor function.
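As a quick check (my own sketch, not part of the answer), note that the value returned by f7 is itself the number of halving steps performed, so it should match floor(Lg(N)):
from math import floor, log2

def f7(N):
    if N == 1:
        return 0
    return 1 + f7(N // 2)

for N in [1, 2, 3, 7, 8, 1000, 1024]:
    assert f7(N) == floor(log2(N))   # one time unit per halving step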

What is the complexity of these functions with explanation?

I would like to know how to find the complexity of these functions using T(n) and similar techniques, because right now I can only guess.
First Function :
int f(int n)
{
    if (n == 1)
        return 1;
    return 1 + f(f(n-1));
}
What are its time and space complexity?
Second Function:
What are the time and space complexity of f() here?
void f(int n)
{
    int i;
    if (n < 2) return;
    for (i = 0; i < n/2; i += 5)
        printf("*");
    g(n/3);
    g(n/3);
}

void g(int n)
{
    int i;
    for (i = 0; i < n; i++)
        printf("?");
    f(3*n/2);
}
Many Thanks :)
It may surprise you, but the second function is easier to start with.
Second function:
g(n) = n + f(3n/2) and f(n) = n/10 + 2g(n/3) = n/10 + 2n/3 + 2f(n/2). Therefore f(n) = 23n/30 + 2f(n/2).
Substitute n = 2^m, so f(2^m) = 23(2^m)/30 + 2f(2^(m-1)) = 2*23(2^m)/30 + 4f(2^(m-2)) etc...
The first term sums to m*23(2^m)/30.
The second term (with the f()) grows geometrically; now f(1) = 1 (just the n < 2 check), so if you expand down to the last term you will find it contributes 2^m * f(1) = 2^m. Therefore the total is f(2^m) = m*23(2^m)/30 + 2^m, or f(n) = n((23/30)*log(n) + 1), where log is the base-2 logarithm.
Thus f(n) is O(n log(n)).
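To check the O(n log(n)) claim empirically, here is a small sketch (mine, not part of the answer) that counts the characters printed by the two mutually recursive functions, treating the integer divisions the way C would:
from math import log2

def count_f(n):
    if n < 2:
        return 0
    stars = len(range(0, n // 2, 5))        # the loop in f prints about n/10 stars
    return stars + 2 * count_g(n // 3)

def count_g(n):
    return n + count_f(3 * n // 2)          # g prints n '?' then calls f(3n/2)

for n in [10**3, 10**4, 10**5]:
    print(n, count_f(n), count_f(n) / (n * log2(n)))   # ratio stays roughly constant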
First function:
OK, I'll be honest: I didn't know how to start, but I tested the code in C++ and the result is exactly f(n) = n.
Proof by induction:
Suppose f(n) = n. Then f(n + 1) = 1 + f(f(n)) = 1 + f(n) = n + 1. Thus, if it is true for n, it is also true for n + 1.
Now f(1) = 1 obviously. Therefore it's true for 2, and for 3, 4, 5 ... and so on.
Therefore by mathematical induction, f(n) = n.
Now for the time complexity bit. Since f(n) returns n, the outer call in the nested f(f(n-1)) will effectively be a second call, as the argument is the same: f(n-1); f(n-1);. Thus T(n) = 2*T(n-1) and therefore T(n) = 2^n. O(2^n).
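A quick empirical check (my own sketch, not part of the answer): verify both that f(n) returns n and that the number of calls satisfies C(n) = 1 + 2C(n-1), i.e. grows like 2^n:
calls = 0

def f(n):
    global calls
    calls += 1
    if n == 1:
        return 1
    return 1 + f(f(n - 1))

for n in range(1, 16):
    calls = 0
    assert f(n) == n                 # the function just returns its argument
    assert calls == 2 ** n - 1       # call count satisfies C(n) = 1 + 2*C(n-1)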

Determination of computational complexity of sample code

I give you three short codes:
First code:
procedure Proc (n:integer)
begin
  if n > 0 then
  begin
    writeln('x');
    Proc(n-2);
    writeln('*');
    Proc(n-2)
  end
end
Second code:
procedure Proc (n:integer)
begin
  if n > 0 then
  begin
    writeln('*');
    Proc(n-1)
  end
end
Third code:
procedure Proc (n:integer)
begin
  if n > 0 then
  begin
    writeln('x');
    Proc(n/2);
    writeln('*');
    Proc(n/2)
  end
end
I would like to know how to determine the computational complexity of each piece of code I gave, because it will help me understand better. Can someone show, step by step, how to determine the computational complexity of sample code, in a way that can also be applied to other examples?
First Question: Let T(n) be the number of calls to Proc for input n, so T(n) = 1 for n ≤ 0 and T(n) = 1 + 2T(n-2) for n > 0, since one call to Proc(n) in turn calls Proc(n-2) twice. This is a variant of the Tower of Hanoi recurrence T(n) = 1 + 2T(n-1), whose solution is 2^n - 1; there are proofs of that at http://en.wikipedia.org/wiki/Tower_of_Hanoi. Here, however, the argument drops by 2 instead of 1, so the recursion is only about n/2 levels deep: substituting m = ceil(n/2) turns it into the Hanoi recurrence in m, which gives T(n) = Θ(2^(n/2)). In other words, skipping every other value does not just halve the number of calls, it roughly takes their square root; the running time is still exponential, and in particular O(2^n).
Second Question: This is easier. T(0) = 1 and T(n) = 1 + T(n-1). You can solve this many different ways, but one is via telescoping:
T(n) = 1 + T(n-1)
T(n-1) = 1 + T(n-2)
...
T(1) = 1 + T(0)
Adding up both sides...
T(n) + T(n-1)+...+T(1) = 1 + T(n-1) + ... + 1 + T(0) = n + T(n-1)+...+T(0)
Subtract out like terms.
T(n) = n + T(0) = n + 1. So this is O(n).
Third Question: Similar to the first. T(0) = 1 and, for n > 0, T(n) = 1 + 2T(n/2). Note here that T(n) = 1 + 2T(n/2) < n + 2T(n/2), so we can bound T by the larger recurrence.
So solve for 2T(n/2) + n with rolling out the recurrence:
T(n) = 2 T(n/2) + n
T(n/2) = 2 T(n/4) + n/2
So T(n) = 4T(n/4) + n + n
T(n/4) = 2T(n/8) + n/4
So T(n) = 8T(n/8) + n + n + n
... It looks like T(n) = 2^kT(n/2^k)+kn for positive k.
Prove it by induction.
k = 1: T(n) = 2 T(n/2)+n which was given. This is our base case.
If true for k-1, show true for k:
T(n) = (2^(k-1)) T(n/2^(k-1)) + (k-1)n   // inductive hypothesis
T(n/2^(k-1)) = 2 T((n/2^(k-1))/2) + n/2^(k-1)   // given recurrence
= 2 T(n/2^k) + n/2^(k-1)
=> T(n) = (2^(k-1)) [2 T(n/2^k) + n/2^(k-1)] + (k-1)n = (2^k) T(n/2^k) + n + (k-1)n = (2^k) T(n/2^k) + kn. So it is true for k.
T(n) = 2^k T(n/2^k) + kn; choose an appropriate positive k, such as k = lg(n) (the base-2 logarithm).
T(n) = 2^(lg n) T(n/2^(lg n)) + n lg(n) = n T(1) + n lg(n).
T(1) = 1 since Proc would just end. So n T(1) + n lg(n) = n lg(n) + n = O(n lg(n)). (This bound came from replacing the 1 in 1 + 2T(n/2) by n; solving the original recurrence exactly gives Θ(n), so O(n lg(n)) is a correct but not tight upper bound.)
Unfortunately, there is no one-size-fits-all procedure for determining complexity. You have to take it on a case-by-case basis and work out each problem.
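Still, it is easy to check the three results empirically. Here is a small sketch (mine, not part of the answer) that counts the writeln calls each procedure performs; it treats Proc(n/2) in the third procedure as integer division:
def proc1(n):                    # first code: T(n) = 2 + 2*T(n-2), exponential
    if n <= 0:
        return 0
    return 2 + 2 * proc1(n - 2)

def proc2(n):                    # second code: T(n) = 1 + T(n-1), linear
    if n <= 0:
        return 0
    return 1 + proc2(n - 1)

def proc3(n):                    # third code: T(n) = 2 + 2*T(n div 2)
    if n <= 0:
        return 0
    return 2 + 2 * proc3(n // 2)

for n in [8, 16, 32]:
    print(n, proc1(n), proc2(n), proc3(n))
You should see proc1 roughly double every time n grows by 2, proc2 equal to n, and proc3 stay close to 4n, consistent with the analysis above.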

Efficient Algorithm to Solve a Recursive Formula

I am given a formula f(n) where f(n) is defined, for all non-negative integers, as:
f(0) = 1
f(1) = 1
f(2) = 2
f(2n) = f(n) + f(n + 1) + n (for n > 1)
f(2n + 1) = f(n - 1) + f(n) + 1 (for n >= 1)
My goal is to find, for any given number s, the largest n where f(n) = s. If there is no such n return None. s can be up to 10^25.
I have a brute force solution using both recursion and dynamic programming, but neither is efficient enough. What concepts might help me find an efficient solution to this problem?
I want to add a little complexity analysis and estimate the size of f(n).
If you look at one recursive call of f(n), you notice, that the input n is basically divided by 2 before calling f(n) two times more, where always one call has an even and one has an odd input.
So the call tree is basically a binary tree in which, at any given depth k, about half of the nodes contribute a summand of roughly n/2^(k+1). The depth of the tree is log₂(n).
So the value of f(n) is in total about Θ(n/2 ⋅ log₂(n)).
Just to note: this holds for even and odd inputs, but for even inputs the value is bigger by an additional summand of about n/2. (I use Θ-notation so I don't have to think too much about the constants.)
Now to the complexity:
Naive brute force
To calculate f(n) you have to call f Θ(2^(log₂(n))) = Θ(n) times.
So if you want to calculate the values of f(n) until you reach s (or notice that there is no n with f(n)=s) you have to calculate f(n) s⋅log₂(s) times, which is in total Θ(s²⋅log(s)).
Dynamic programming
If you store every result of f(n), the time to calculate a f(n) reduces to Θ(1) (but it requires much more memory). So the total time complexity would reduce to Θ(s⋅log(s)).
Notice: Since we know f(n) ≤ f(n+2) for all n, you don't have to sort the values of f(n) in order to do a binary search.
Using binary search
Algorithm (input is s):
1. Set l = 1 and r = s.
2. Set n = (l+r)/2 and round it to the next even number.
3. Calculate val = f(n).
4. If val == s then return n.
5. If val < s then set l = n, else set r = n.
6. Go to step 2.
If you found a solution, fine. If not: try it again, but round in step 2 to odd numbers. If this also does not return a solution, no solution exists at all.
This will take you Θ(log(s)) for the binary search and Θ(s) for the calculation of f(n) each time, so in total you get Θ(s⋅log(s)).
As you can see, this has the same complexity as the dynamic programming solution, but you don't have to save anything.
Notice: r = s does not hold for all s as an initial upper limit. However, if s is big enough, it holds. To be safe, you can change the algorithm:
check first whether f(s) ≥ s; if it is not, set l = s and r = 2s (or 2s+1 if it has to be odd).
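Here is a minimal sketch of that binary search (my own code, not part of the answer); it relies on the fact noted above that f(n) ≤ f(n+2), and it searches generously beyond the r = s bound:
from functools import lru_cache

@lru_cache(maxsize=None)
def f(n):
    if n <= 1:
        return 1
    if n == 2:
        return 2
    m = n // 2
    if n % 2 == 0:
        return f(m) + f(m + 1) + m       # f(2m) for m > 1
    return f(m - 1) + f(m) + 1           # f(2m + 1) for m >= 1

def search(s, parity):
    # binary search over n = 2*k + parity; k in [0, s] covers n up to about 2s
    lo, hi = 0, s
    while lo <= hi:
        k = (lo + hi) // 2
        v = f(2 * k + parity)
        if v == s:
            return 2 * k + parity
        if v < s:
            lo = k + 1
        else:
            hi = k - 1
    return None

def find(s):
    solutions = [n for n in (search(s, 0), search(s, 1)) if n is not None]
    return max(solutions) if solutions else None
Note that this finds a solution of each parity if one exists; if f happens to be flat (equal consecutive values within a parity class), a small local scan around the hit would still be needed to guarantee the largest n.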
Can you precompute the value of f(x) for every x from 0 to MAX_SIZE just once?
What I mean is: calculate the values by DP.
f(0) = 1
f(1) = 1
f(2) = 2
f(3) = 3
f(4) = 7
f(5) = 4
... ...
f(MAX_SIZE) = ???
If this first step is infeasible, stop. Otherwise, sort the values from smallest to largest.
Such as 1,1,2,3,4,7,...
Now you can find whether there exists an n with f(n) = s in O(log(MAX_SIZE)) time using binary search.
Unfortunately, you don't mention how fast your algorithm should be. Perhaps you need to find some really clever rewrite of your formula to make it fast enough, in this case you might want to post this question on a mathematics forum.
The running time of your formula is O(n) for f(2n + 1) and O(n log n) for f(2n), according to the Master theorem, since:
T_even(n) = 2 * T(n / 2) + n / 2
T_odd(n) = 2 * T(n / 2) + 1
So the running time for the overall formula is O(n log n).
So if n is the answer to the problem, this algorithm would run in approx. O(n^2 log n), because you have to perform the formula roughly n times.
You can make this a little bit quicker by storing previous results, but of course, this is a tradeoff with memory.
Below is such a solution in Python.
D = {}

def f(n):
    if n in D:
        return D[n]
    if n == 0 or n == 1:
        return 1
    if n == 2:
        return 2
    m = n // 2
    if n % 2 == 0:
        # f(2n) = f(n) + f(n + 1) + n (for n > 1)
        y = f(m) + f(m + 1) + m
    else:
        # f(2n + 1) = f(n - 1) + f(n) + 1 (for n >= 1)
        y = f(m - 1) + f(m) + 1
    D[n] = y
    return y

def find(s):
    n = 0
    y = 0
    even_sol = None
    while y < s:
        y = f(n)
        if y == s:
            even_sol = n
            break
        n += 2
    n = 1
    y = 0
    odd_sol = None
    while y < s:
        y = f(n)
        if y == s:
            odd_sol = n
            break
        n += 2
    print(s, even_sol, odd_sol)

find(9992)
This recurrence produces larger and larger values for 2n and 2n+1 as n grows, so the moment you compute a value bigger than s you can stop the algorithm.
To make an effective algorithm you either have to find a nice closed formula that calculates the value directly, or compute the values in a small loop, which is much, much more effective than the plain recursion: the loop fills in each value exactly once in O(n), while the naive recursion recomputes the same values over and over.
This is how loop can be looking:
int[] values = new int[1000];
values[0] = 1;
values[1] = 1;
values[2] = 2;
values[3] = 3;   // f(3) = f(0) + f(1) + 1; needed before the loop starts at i = 2
for (int i = 2; i < values.length / 2 - 1; i++) {
    values[2 * i] = values[i] + values[i + 1] + i;         // f(2n) rule
    values[2 * i + 1] = values[i - 1] + values[i] + 1;     // f(2n + 1) rule
}
Inside this loop you can add a condition to break out of it early with success or failure.

Order of growth for Permutation

This is a simple program which finds all the permutations of a given string :
void perm( char str[], int len )
{
    if ( len == 1 )
        cout << str << endl;
    else
        for ( int i = 0; i < len; i++ ) {
            swap( str[len-1], str[i] );
            perm( str, len-1 );
            swap( str[len-1], str[i] );
        }
}
What is T(n) for this function? How do you calculate the Big O (or Theta) for it?
Let the length of the initial input string be N.
Let the time taken for a call of perm(str (size = N), len=i) be T(i)
T(1) = N
and
T(i > 1) = iT(i-1) + i
then the total time taken is T(N),
To calculate the closed form of this recurrence see here:
https://math.stackexchange.com/questions/188119/closed-form-for-t1-k-tx-xtx-1-x
The answer is:
T(N) is approximately (N + e - 1)N!
So as N approaches infinity the performance of the function is:
O((N + e - 1)N!) = O(N(N!))
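As a sanity check (my own sketch, not from the answer above), you can iterate the recurrence T(1) = N, T(i) = i*T(i-1) + i numerically and compare it with the closed form (N + e - 1)N!:
from math import e, factorial

def T(N):
    t = N                        # T(1) = N: printing the finished string
    for i in range(2, N + 1):
        t = i * t + i            # T(i) = i*T(i-1) + i
    return t

for N in range(2, 11):
    approx = (N + e - 1) * factorial(N)
    print(N, T(N), round(T(N) / approx, 4))   # ratio approaches 1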
The for loop performs n recursive calls, giving n*T(n-1), plus O(n) since you also need to swap 2n times, so
T(n) = n*T(n-1)+O(n)
Take n = 5, for the sake of my keyboard:
T(n) = n*T(n-1) + n
T(n) = n*[(n-1)*T(n-2) + (n-1)] + n
T(n) = n*[(n-1)*[(n-2)*T(n-3) + (n-2)] + (n-1)] + n
T(n) = n*[(n-1)*[(n-2)*[(n-3)*T(n-4) + (n-3)] + (n-2)] + (n-1)] + n
T(n-4) = 1 (the base case, substituted into the innermost bracket above)
Simplify
T(n) = n*[(n-1)*[(n-2)*[(n-3) + (n-3)] + (n-2)] + (n-1)] + n
T(n) = n*[(n-1)*[(n-2)*[2(n-3)] + (n-2)] + (n-1)] + n
T(n) = 2n(n-1)(n-2)(n-3) + n(n-1)(n-2) + n(n-1) + n
T(n) = n! + O(n*n!) <-- wrong, see comment for right answer
T(n) = O(n*n!) <-- wrong, see comment for right answer
you see the pattern
The number of possible permutations of N items is N! (factorial), and this code does O(1) swap work per permutation it outputs, so the cost of constructing the permutations is O(N!), which in turn is bounded by O(N^N) since N! ≤ N^N.
Or rather O(N!*N) once the output is counted, since for every permutation N characters are printed to the console.
