Recursive function with a specific runtime of Theta(...) - algorithm

I'm stuck with this homework I got from my Algorithms course:
Write a recursive function with a runtime of Theta(n^4 log n).
I thought something like this, but I'm very unsure about my approach.
function(int n)
{
    for-loop (1..n^4)
        // do something
    return function(n/2);
}

You are right to be unsure; your function has some problems:
It doesn't have a base case, so it runs forever.
If you add a base case, the recurrence for your function will be:
T(n) = T(n/2) + O(n^4)
and by the Master Theorem this is Θ(n^4), not Θ(n^4 log n).
Hint: You need to increase the coefficient of T(n/2), but by how much? Find it yourself. To increase it, make the recursive call x times instead of once.
By the Master Theorem, the log n factor appears when we have a recurrence of this form:
T(n) = a*T(n/b) + n^(log_b(a))
In your case you need log_b(a) = 4, so you can fix b = 2 and a = 16 to achieve this:
T(n) = 16*T(n/2) + n^4
and to arrive at this, you can call T(n/2) 16 times.
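As a concrete illustration of that recurrence (a minimal sketch, not necessarily what the course expects), a function that makes 16 half-size recursive calls and does Θ(n^4) work per call satisfies T(n) = 16*T(n/2) + Θ(n^4), which case 2 of the Master Theorem solves to Θ(n^4 log n):
void function(long long n)
{
    if (n <= 1)                      /* base case stops the recursion */
        return;
    for (long long i = 1; i <= n*n*n*n; i++)
        ;                            /* Theta(n^4) work per call */
    for (int k = 0; k < 16; k++)     /* a = 16 recursive calls ...        */
        function(n / 2);             /* ... each on half the input, b = 2 */
}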

Hint: n^4 could be four nested loops from 1 to n. A logarithmic run-time factor is usually obtained by halving the problem size recursively until reaching 1. Your proposal kind of works, but is quite blunt; you could even do
func(int n)
    for i = 1 to n^4 log n
        nop()
but I don't believe this is something that's being looked for.

Your approach is sane, and your insecurity is normal. You should now prove that your algorithm is Theta(n^4 log n). Apply your usual algorithmic analysis techniques to show that function executes "do something" n^4 * log_2(n) times.
Hint: Count how many times function is called recursively and how often the loop runs in each call. You'll see that there is still a small bug in your function: the n used for the n^4 loop is itself halved in each recursive call, so as written the total loop work is n^4 + (n/2)^4 + (n/4)^4 + ... = Theta(n^4) rather than Theta(n^4 log n).
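One way to repair that bug (a sketch only; the helper below and its name are not part of the original question) is to carry the original size along and halve only the recursion variable, so every level still does n^4 work:
void helper(long long m, long long n)    /* m shrinks, n stays at the original size */
{
    if (m <= 1)
        return;
    for (long long i = 1; i <= n*n*n*n; i++)
        ;                                /* Theta(n^4) work at every level */
    helper(m / 2, n);                    /* log_2(n) levels in total */
}

void function(long long n)
{
    helper(n, n);                        /* log_2(n) levels * n^4 work = Theta(n^4 log n) */
}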

Related

Recursive equation from algorithm

I started my master's degree in bioinformatics this October; for a former biologist, finding a recurrence relation for a piece of code is pretty hard. If somebody could explain this to me, I would be very grateful.
How do I find a recurrence relation for this piece of code?
procedure DC(n)
    if n < 1 then return
    for i <- 1 to 8 do DC(n/2)
    for i <- 1 to n³ do dummy <- 0
My guess is T(n) = c + 8T(n/2), because the if condition needs constant time c and the first for loop is the recursive case, which runs from 1 to 8, therefore 8*T(n/2), but I don't know how to add the last line of code to my equation.
You’re close, but that’s not quite it.
Usually, a recurrence relation only describes the work done by the recursive step of a recursive procedure, since it’s assumed that the base case does a constant amount of work. You’d therefore want to look at
what recursive calls are made and on what size inputs they’re made on, and
how much work is done outside of that.
You’ve correctly identified that there are eight recursive calls on inputs of size n / 2, so the 8T(n / 2) term is correct. However, notice that this is followed up by a loop that does O(n^3) work. As a result, your recursive function is more accurately modeled as
T(n) = 8T(n / 2) + O(n^3).
It’s then worth seeing if you can argue why this recurrence solves to O(n^3 log n).
This turns out to be T(n)= 8*T(n/2)+O(n^3).
I will give you a start on solving this with the iteration / recursion-tree method.
T(n) = 8*T(n/2) + O(n^3)
     ~ 8*T(n/2) + n^3
     = 8*(8*T(n/4) + (n/2)^3) + n^3
     = 8^2*T(n/4) + 8*(n/2)^3 + n^3
     = 8^2*T(n/2^2) + n^3 + n^3
     = 8^2*(8*T(n/2^3) + (n/2^2)^3) + n^3 + n^3
     = 8^3*T(n/2^3) + n^3 + n^3 + n^3
     ...
     = 8^k*T(n/2^k) + n^3 + n^3 + ... (k times) ... + n^3
This will stop when n/2^k = 1, i.e. k = log_2(n).
So the complexity is O(n^3 log(n)).
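If you want to sanity-check this empirically, here is a small sketch (assuming it is acceptable to replace the dummy loop with a counter increment of n^3); the counted work divided by n^3 grows like log_2(n) + 1:
#include <stdio.h>

long long ops = 0;                      /* total "dummy" work performed */

void DC(long long n)
{
    if (n < 1) return;
    for (int i = 1; i <= 8; i++)
        DC(n / 2);                      /* eight recursive calls on half the size */
    ops += n * n * n;                   /* stands in for the Theta(n^3) dummy loop */
}

int main(void)
{
    for (long long n = 8; n <= 128; n *= 2) {
        ops = 0;
        DC(n);
        printf("n = %4lld   ops / n^3 = %.2f\n", n, (double)ops / (double)(n * n * n));
    }
    return 0;
}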

Time complexity of some recursive and non-recursive algorithms

I have two pseudo-code algorithms:
RandomAlgorithm(modVec[0 to n − 1])
    b = 0;
    for i = 1 to n do
        b = 2*b + modVec[n − i];
    for i = 1 to b do
        modVec[i mod n] = modVec[(i + 1) mod n];
    return modVec;
Second:
AnotherRecursiveAlgo(multiplyVec[1 to n])
    if n ≤ 2 do
        return multiplyVec[1] × multiplyVec[1];
    return multiplyVec[1] × multiplyVec[n] +
           AnotherRecursiveAlgo(multiplyVec[1 to n/3]) +
           AnotherRecursiveAlgo(multiplyVec[2n/3 to n]);
I need to analyse the time complexity for these algorithms:
For the first algorithm I got that the first loop is in O(n). The second loop has a best case and a worst case: in the best case the loop runs once, so O(1); in the worst case b is huge because of the first loop, but I don't know how to write this idea as a time complexity, because I usually get b = sum(i from 0 to n−1) of 2^i * modVec[i] and I get stuck there.
For the second algorithm I just don't get how to work out the time complexity; it usually depends on n, so I think we need a recurrence.
Thanks for the help.
The first problem is a little strange, all right.
If it helps, envision modVec as an array of 1's and 0's.
In this case, the first loop converts this array to a value.
This is O(n).
For instance, (1, 1, 0, 1, 1) will yield b = 27.
Your second loop runs b times. The dominating term for the value of b is 2^(n-1), i.e. O(2^n), so in the worst case the second loop runs O(2^n) times. The assignment you do inside the loop is O(1).
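To make that concrete, here is a small sketch of just the first loop (assuming, as above, that modVec holds 0/1 values; the example vector is the one from this answer):
#include <stdio.h>

int main(void)
{
    int modVec[] = {1, 1, 0, 1, 1};
    int n = 5;
    long long b = 0;
    for (int i = 1; i <= n; i++)        /* first loop: O(n) */
        b = 2 * b + modVec[n - i];      /* builds the value of modVec read as binary */
    printf("b = %lld\n", b);            /* prints b = 27 */
    return 0;
}
With an all-ones vector of length n this yields b = 2^n − 1, which is why the second loop is O(2^n) in the worst case.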
The second algorithm does depend on n. Your base case is a simple multiplication, O(1). The recursive step has three terms:
simple multiplication
recur on n/3 elements
recur on n/3 elements (from 2n/3 to the end is n/3 elements)
So the recurrence is T(n) = 2T(n/3) + O(1). Be careful: this does not give a logarithm. The number of calls doubles at each level of the recursion tree, and the tree has log_3(n) levels, so the total number of calls is 2^(log_3 n) = n^(log_3 2), roughly n^0.63. By the Master Theorem (a = 2, b = 3, constant work per call), T(n) = Θ(n^(log_3 2)).
Does that push you to a solution?
First algorithm
To me this boils down to the O(First-For-Loop) + O(Second-For-Loop).
O(First-For-Loop) is simple = O(n).
O(Second-For-Loop) interestingly depends on n. Therefore, to me it can be depicted as O(f(n)), where f(n) is some function of n. (I'm not completely sure what f(n) is, based on the code presented.)
The answer consequently becomes O(n) + O(f(n)). This could boil down to O(n) or O(f(n)) depending upon which one is larger and more dominant (since the lower-order terms don't matter in big-O notation).
Second algorithm
In this case, I see that each call to the function produces three terms...
The first term is an O(1) multiplication, so it won't matter.
The second and third terms are recursive calls.
Therefore each function call results in 2 additional recursive calls, each on an input of one third the size.
Consequently, the number of calls doubles at each level of a recursion tree of depth log_3(n), so the total number of calls, and hence the time complexity, is 2^(log_3 n) = O(n^(log_3 2)), not O(2^n).
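A quick way to convince yourself of that growth rate is to count the calls directly (a hypothetical helper, not part of the original pseudocode):
/* Counts how many times AnotherRecursiveAlgo would be invoked for input size n. */
long long count_calls(long long n)
{
    if (n <= 2)
        return 1;                                        /* base case: one call */
    return 1 + count_calls(n / 3) + count_calls(n / 3);  /* two recursive calls on n/3 */
}
For n = 3^k the count is 2^(k+1) − 1, i.e. on the order of n^(log_3 2), far below 2^n.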

Forming recurrence relations

I have a question on forming recurrence relations and calculating the time complexity.
If we have a recurrence relation T(n) = 2T(n/2) + c, then it means that the constant amount of work c is divided into 2 parts, T(n/2) + T(n/2), when drawing the recursion tree.
Now consider the recurrence relation of factorial, which is T(n) = n*T(n-1) + c. If we follow the above method, then we should divide the work into n instances of T(n-1) and then evaluate the time complexity. However, if we calculate it this way, the answer will be O(n^n), because we will have O(n^n) recursive calls, which is wrong.
So my question is: why can't we use the same approach of dividing the problem into subparts as in the first case?
Let a recurrence relation be T(n) = a * T(n/b) + O(n).
This recurrence implies that there is a recursive function which:
divides the original problem into a subproblems
the size of each subproblem will be n/b if the current problem size is n
apart from the recursive calls, each call does O(n) additional work (for example, dividing the problem and combining the results); when the subproblems are trivial (too easy to solve), no recursion is needed and they are solved directly.
When we say that the original problem is divided into a subproblems, we mean that there are a recursive calls in the function body.
So, for instance, if the function is:
int f(int n)
{
    if (n <= 1)
        return n;
    return f(n-1) + f(n-2);
}
we say that the problem (of size n) is divided into 2 subproblems, of sizes n-1 and n-2. The recurrence relation would be T(n) = T(n-1) + T(n-2) + c. This is because there are 2 recursive calls, with different arguments.
But, if the function is like:
int f(int n)
{
    if (n <= 2)
        return n;
    return n * f(n-1);
}
we say that the problem (of size n) is divided into only 1 subproblem, which is of size n-1. This is because there is only 1 recursive call.
So, the recurrence relation would be T(n) = T(n-1) + c.
If we multiplied T(n-1) by n, as might seem natural, we would be claiming that n recursive calls are made, which is not what happens.
Remember, our main motive in forming recurrence relations is to perform (asymptotic) complexity analysis of recursive functions. Even though it may seem that the n cannot be discarded from the relation because it depends on the input size, it does not play the same role in the recurrence as it does in the function itself.
But, if you are talking about the value returned by the function, it would be f(n) = n * f(n-1). Here, we multiply by n because it is an actual value that will be used in the computation.
Now, coming to the c in T(n) = T(n-1) + c: it merely says that to solve a problem of size n, we need to solve a smaller problem of size n-1, and some other constant-time work (comparison, multiplication and returning the value) is also performed.
We can never divide the "constant amount of work c" into two parts T(n/2) and T(n/2), even using the recursion tree method. What we are, in fact, dividing is the problem into two halves. The same c amount of work will be needed in each recursive call at each level of the recursion tree.
If there were a recurrence relation like T(n) = 2T(n/2) + O(n), where the amount of work to be done depends on the input size, then the amount of work to be done at each level will be halved at the next level, just like you described.
But, if the recurrence relation were like T(n) = T(n-1) + O(n), we would not be dividing the amount of work into two halves at the next recursion level. We would just be reducing the problem size by one at each successive level (an n-sized problem becomes n-1 at the next level), and with it the amount of work.
To check how the amount of work will change with recursion, apply substitution method to your recurrence relation.
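For instance, expanding the factorial recurrence T(n) = T(n-1) + c by substitution gives
T(n) = T(n-1) + c
     = T(n-2) + 2c
     = T(n-3) + 3c
     ...
     = T(1) + (n-1)c
     = O(n),
so the factorial function makes O(n) recursive calls, even though the value it returns grows much faster.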
I hope I have answered your question.

What is the big-O complexity of the following pseudocode?

What would be the computational complexity of the following pseudocode?
integer recursive (integer n) {
    if (n == 1)
        return (1);
    else
        return (recursive (n-1) + recursive (n-1));
}
In the real world, the calls would get optimized and yield linear complexity, but with the RAM model on which big-Oh is calculated, what would be the complexity? 2^n?
The complexity of this algorithm in its current form is indeed O(2^n), because on each level of the recursion there are twice as many calls as on the level above.
The first call (recursive(n)) constitutes 1 call.
The next level (recursive(n-1)) constitutes 2 calls.
At the base case (recursive(1)) there are 2^(n-1) calls.
So the total number of function calls is 1 + 2 + ... + 2^(n-1) = 2^n − 1.
So the complexity is O(2^n).
Additional points:
As you said, this can be easily made O(n) (or perhaps O(log n) for this special case using fast exponentiation) by memoization, or dynamic programming.
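As a sketch of the O(n) idea (one possible rewrite; the name below is made up for illustration): the two recursive calls compute exactly the same value, so it can be computed once and reused:
long long recursive_linear(int n)
{
    if (n == 1)
        return 1;
    long long half = recursive_linear(n - 1);   /* computed once instead of twice */
    return half + half;                         /* same result as the original function */
}
This version makes only n − 1 recursive calls, i.e. O(n), and still returns 2^(n-1) like the original.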
Your complexity will be 2^(N-1) steps.
Why is it so? A simple proof by mathematical induction:
N = 1: special case, count of steps = 1 = 2^0.
N = 2: obviously 2 = 2^1, so it's correct.
Let it be correct for N = K, i.e. for N = K the count is 2^(K-1).
Now assume N = K + 1. The function recursive will call itself recursively for N = K two times: recursive(K+1) = recursive(K) + recursive(K), as follows from the code. That is, 2 * 2^(K-1) = 2^K, so for N = K + 1 we get 2^K steps.
So we have proved that the complexity for N is 2^(N-1) = O(2^N) in the general case (by the definition of mathematical induction).

How many calls to generator are made?

Suppose I have the following algorithm:
procedure(n)
    if n == 1 then break
    R = generaterandom()
    procedure(n/2)
Now I understand that the complexity of this algorithm is log(n), but does it make log(n) calls to the random generator, or log(n) − 1, since it is not called when n == 1?
Sorry if this is obvious, but I've been looking around and it's not really stated anywhere what the exact answer is.
There are ceil(log(n)) calls to the generator (assuming n is a power of two; with integer division n/2 the exact count for general n is floor(log(n))).
Proof Using induction:
Hypothesis:
There are ceil(log(k)) calls to generator for each k<n
Base:
log_2(1) = 0 => 0 calls
Step:
For arbitrary n > 1 there is one call, and then, by the hypothesis, ceil(log(n/2)) more calls are made in the recursive call.
This gives a total of ceil(log(n/2)) + 1 = ceil(log(n/2) + log(2)) = ceil(log((n/2) * 2)) = ceil(log(n)) calls.
QED
Note: here, all logs are base 2.
Your method can be written as T(n) = T(n/2) + O(1), since you are dividing n in half on every function call, and by the Master Theorem this is exactly O(log n). I realize you are not asking for complexity analysis, but as mentioned, the idea is the same (i.e. finding the number of calls is equivalent to finding the complexity).
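If you want to check the exact count for small inputs, here is a tiny sketch (a hypothetical helper mirroring the pseudocode, using integer division):
/* Counts how many times generaterandom() is called by procedure(n). */
int generator_calls(int n)
{
    if (n <= 1)
        return 0;                       /* n == 1: procedure stops, no call */
    return 1 + generator_calls(n / 2);  /* one call, then recurse on n/2 */
}
For example, generator_calls(8) is 3 and generator_calls(6) is 2, i.e. floor(log_2(n)); for powers of two this coincides with ceil(log_2(n)).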
