I'm trying to work out the following code's time and space complexity, but I'm struggling. I know the recursion can reach a depth of at most n, so the space should be O(n). However, I have no idea how to calculate the time complexity... I don't know how to write the recurrence when the recursion is nested like this, e.g. f(f(n-1)).
If it were something like return f3(n-1) + f3(n-1), then I know it would be O(2^n), since T(n) = 2T(n-1), correct?
Here's the code:
int f3(int n)
{
    if (n <= 2)
        return 1;
    f3(1 + f3(n - 2));  /* result is discarded */
    return n - 1;
}
Thank you for your help!
Notice that f3(n) = n - 1 for every n >= 2 (the base case returns 1). So in the line f3(1 + f3(n-2)), first f3(n-2) is computed, which returns n - 3, and then f3(1 + (n - 3)) = f3(n-2) is computed again!
So f3(n) computes f3(n-2) twice, plus O(1) overhead.
This gives the recurrence T(n) = 2T(n-2) + c for some constant c, where T(n) is the running time of f3(n).
Solving the recurrence, we get T(n) = O(2^(n/2)).
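If you want to sanity-check that recurrence, here is a small Python sketch (my own, not part of the question) that adds a global call counter to f3; the printed call count roughly doubles every time n grows by 2, matching T(n) = 2T(n-2) + c:

calls = 0

def f3(n):
    # Count every invocation so we can compare against T(n) = 2*T(n-2) + c.
    global calls
    calls += 1
    if n <= 2:
        return 1
    f3(1 + f3(n - 2))   # for n >= 4 the inner call makes this f3(n - 2) again
    return n - 1

for n in range(4, 30, 2):
    calls = 0
    f3(n)
    print(n, calls)     # the count roughly doubles each time n increases by 2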
I am trying to figure out time complexities for the following:
First:
j = 1
while j < n:
    j += log(j + 5)
Would this be log n?
Secondly, a recurrence relation:
T(n) = T(n/2) + T(n/4) + n
I know you can't apply the Master Theorem here, but I am not sure how to find the complexity otherwise. A solution would be nice, but references that help me understand this would also be good, I guess.
Next, another recurrence relation:
T(n) = T(n/2) + log(n)
I am fairly certain that Master Theorem can be applied here. Leaving us with:
a = 1, b = 2, f(n) = log(n)
This means we would compare
n^(log_2(1)) to log(n) ==> n^0 to log(n)
Making it Theta(log(n))
Finally
j = 1
while j < n:
    k = j
    while k < n:
        k += sqrt(k)
    j += 0.25*j
I can tell that the outer loop will run 4 times. I am unclear as to the inner loop, however. Would it be log^2 n · log log n, or am I completely off in my thinking?
I am just studying for a test and am finding the materials at my disposal to be woefully inadequate.
The first is O(n): each iteration increases j by at least the constant log(6) > 0, so the loop needs at most O(n) iterations to reach n.
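If it helps to see that growth, here is a quick Python sketch (mine, and it assumes log means the natural log) that simply runs the loop and counts iterations; the count grows roughly like n / log(n), which is indeed O(n):

from math import log

def first_loop_iterations(n):
    # j += log(j + 5): every step adds at least log(6), a positive constant.
    j, steps = 1, 0
    while j < n:
        j += log(j + 5)
        steps += 1
    return steps

for n in (10**3, 10**4, 10**5, 10**6):
    print(n, first_loop_iterations(n))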
If you expand the recurrence, the second one gives:
T(n) = 2T(n/4) + T(n/8) + n + n/2 < 3T(n/4) + 3n/2
By the Master theorem this gives T(n) = O(n), and since T(n) >= n we conclude T(n) = Θ(n).
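If you want a numerical check of that bound, here is a short memoized Python sketch of the recurrence (the base case T(1) = 1 and the use of integer division are my own assumptions); the ratio T(n)/n levels off at a constant instead of growing, which is consistent with Θ(n):

from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = T(n/2) + T(n/4) + n, with T(1) = 1 assumed as the base case.
    if n <= 1:
        return 1
    return T(n // 2) + T(n // 4) + n

for n in (10**2, 10**4, 10**6, 10**8):
    print(n, T(n) / n)   # the ratio settles near a constant, i.e. T(n) = Theta(n)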
The third is not quite Θ(log(n)). Here n^(log_2(1)) = 1 and f(n) = log(n) = Θ(n^0 · log^1(n)), so the logarithmic variant of case 2 of the Master theorem gives T(n) = Θ(log^2(n)). You can also see this by expanding: T(n) = log(n) + log(n/2) + log(n/4) + ... , a sum of Θ(log n) terms whose total is Θ(log^2(n)).
In the fourth snippet, j grows by a factor of 5/4 each pass (j += 0.25*j), so the outer loop runs about log_{1.25}(n) times, not 4 times. As a crude bound the inner loop runs in O(n), which gives O(n·log(n)) overall. If you want a tighter bound, you need to scrutinize the inner loop more carefully.
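To see why the O(n·log(n)) bound is loose, here is a direct Python simulation of the fourth snippet (my own translation of the pseudocode) that counts the total number of inner-loop iterations; the totals grow much more slowly than n·log(n), closer to sqrt(n)·log(n):

from math import sqrt

def fourth_loop_inner_steps(n):
    # Total iterations of the inner loop over all passes of the outer loop.
    j, total = 1.0, 0
    while j < n:
        k = j
        while k < n:
            k += sqrt(k)
            total += 1
        j += 0.25 * j    # j grows by a factor of 1.25, so ~log_1.25(n) outer passes
    return total

for n in (10**3, 10**4, 10**5, 10**6):
    print(n, fourth_loop_inner_steps(n))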
This is pseudocode. I tried to calculate the time complexity of this function the way this answer suggested. It should be something like:
n + n/3 + n/9 + ...
Maybe the time complexity is something like O(n·log(n)), I guess? Or should the log(n) be log base 3? Someone said the time complexity is O(n), which I find totally unacceptable.
j = n
while j >= 1 {
    for i = 1 to j {
        x += 1
    }
    j /= 3
}
The algorithm will run in
n + n/3 + n/9 + ... ≈ (3/2)·n = O(n)
steps, since 3/2 is a constant. Here the k-th pass of the outer loop does n/3^k inner iterations, and the geometric series sums to at most (3/2)·n.
Please notice the crucial difference from the linked question, where the outer loop runs n times and that is fixed.
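Counting the increments directly makes the (3/2)·n bound visible. This is just a Python sketch of the pseudocode above with a counter added:

def count_increments(n):
    # j takes the values n, n/3, n/9, ...; each pass adds j to the count.
    j, x = n, 0
    while j >= 1:
        for _ in range(j):
            x += 1
        j //= 3
    return x

for n in (10, 100, 1000, 10**6):
    print(n, count_increments(n), 1.5 * n)   # the count stays below about 1.5*n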
Up front: this is a homework question, but I am having a difficult time understanding recurrence relations. I've scoured the internet for examples and they are very vague to me. I understand that recurrence relations for recursive algorithms don't have one set way of being handled, but I am lost as to how to approach these. Here's the algorithm I have to work with:
void selectionSort(int[] array) {
    sort(array, 0);
}

void sort(int[] array, int i) {
    if (i < array.length - 1)
    {
        int j = smallest(array, i);   // T(n)
        int temp = array[i];
        array[i] = array[j];
        array[j] = temp;
        sort(array, i + 1);           // T(n)
    }
}

int smallest(int[] array, int j)      // T(n - k)
{
    if (j == array.length - 1)
        return array.length - 1;
    int k = smallest(array, j + 1);
    return array[j] < array[k] ? j : k;
}
So from what I understand, this is what I'm coming up with: T(n) = T(n-1) + cn + c.

The T(n-1) represents the recursive call to sort, and the cn term represents the recursive function smallest, which should decrease as n decreases since it is only called on the portion of the array that remains each time. The constant multiplied by n is the time to run the additional code in smallest, and the extra constant is the time to run the additional code in sort. Is this right? Am I completely off? Am I not explaining it correctly?

Also, the next step is to create a recursion tree out of this, but I don't see how my equation fits the form T(n) = aT(n/b) + c, which is the form needed for the tree if I understand this right. Also, I don't see how my recurrence relation would get to n^2 if it is correct. This is my first post too, so I apologize if I did something incorrect here. Thanks for the help!
The easiest way to compute the time complexity is to model the time complexity of each function with a separate recurrence relation.
We can model the time complexity of the function smallest with the recurrence relation S(n) = S(n-1)+O(1), S(1)=O(1). This obviously solves to S(n)=O(n).
We can model the time complexity of the sort function with T(n) = T(n-1) + S(n) + O(1), T(1)=O(1). The S(n) term comes in because we call smallest within the function sort. Because we know that S(n)=O(n) we can write T(n) = T(n-1) + O(n), and writing out the recurrence we get T(n)=O(n)+O(n-1)+...+O(1)=O(n^2).
So the total running time is O(n^2), as expected.
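To see the quadratic growth concretely, here is a Python sketch of the same recursive selection sort with a comparison counter added (both the Python translation and the counter are mine, not part of the original homework code); the counted comparisons come out to exactly n(n-1)/2:

import random

comparisons = 0

def smallest(array, j):
    # Returns the index of the smallest element of array[j:], one comparison per level.
    global comparisons
    if j == len(array) - 1:
        return len(array) - 1
    k = smallest(array, j + 1)
    comparisons += 1
    return j if array[j] < array[k] else k

def sort(array, i):
    # Recursive selection sort: place the smallest remaining element at position i.
    if i < len(array) - 1:
        j = smallest(array, i)
        array[i], array[j] = array[j], array[i]
        sort(array, i + 1)

for n in (50, 200, 400):
    comparisons = 0
    data = [random.randrange(n) for _ in range(n)]
    sort(data, 0)
    print(n, comparisons, n * (n - 1) // 2)   # the two counts match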
In the selection sort algorithm, the outer loop makes n-1 passes (where n is the length of the array), and in each pass the current element is compared against the remaining elements, giving up to n-1 comparisons per pass:

T(n) = T(n-1) + (n-1)

which can be shown to be O(n^2) by solving this particular relation.
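Expanding that relation directly (assuming T(1) = 0, since a single element needs no work) gives the arithmetic series

T(n) = (n-1) + (n-2) + ... + 2 + 1 = n(n-1)/2 = Θ(n^2)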
I'm learning Big-O notation right now and stumbled across this small algorithm in another thread:
i = n
while (i >= 1)
{
    for j = 1 to i // NOTE: i instead of n here!
    {
        x = x + 1
    }
    i = i/2
}
According to the author of the post, the complexity is Θ(n), but I can't figure out how. I think the while loop's complexity is Θ(log(n)). The for loop's complexity from what I was thinking would also be Θ(log(n)) because the number of iterations would be halved each time.
So, wouldn't the complexity of the whole thing be Θ(log(n) * log(n)), or am I doing something wrong?
Edit: the segment is in the best answer of this question: https://stackoverflow.com/questions/9556782/find-theta-notation-of-the-following-while-loop#=
Imagine for simplicity that n = 2^k. How many times does x get incremented? The total is the geometric series
2^k + 2^(k - 1) + 2^(k - 2) + ... + 1 = 2^(k + 1) - 1 = 2n - 1
So this part is Θ(n). Also, i gets halved k = log(n) times, which has no effect on the Θ(n) asymptotic.
The value of i on each iteration of the while loop, which is also the number of iterations the for loop performs, is n, n/2, n/4, ..., and the overall complexity is the sum of those. That puts it at roughly 2n, which gives you your Theta(n).
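A direct count (a small Python sketch of the pseudocode, where I replace the inner for-loop by x += i since it contributes exactly i increments) reproduces the 2^(k+1) - 1 = 2n - 1 total:

def count_increments(n):
    # i takes the values n, n/2, n/4, ..., 1; each pass adds i to x.
    i, x = n, 0
    while i >= 1:
        x += i          # the inner for-loop adds exactly i to x
        i //= 2
    return x

for k in (4, 10, 20):
    n = 2 ** k
    print(n, count_increments(n), 2 * n - 1)   # matches 2^(k+1) - 1 = 2n - 1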
I have the homework question:
Let T(n) denote the number of times the statement x = x + 1 is
executed in the algorithm
example(n)
{
    if (n == 1)
    {
        return
    }
    for i = 1 to n
    {
        x = x + 1
    }
    example(n/2)
}
Find the theta notation for the number of times x = x + 1 is executed.
(10 points).
Here is what I have done. We have the following amount of work:
- A constant amount of work for the base case check
- O(n) work for the loop that increments x
- The work required for a recursive call on an input of half the size
We can express this as a recurrence relation:
T(1) = 1
T(n) = n + T(n/2)
Let’s see what this looks like. We can start expanding this out by noting that
T(n)=n+T(n/2)
=n+(n/2+T(n/4))
=n+n/2+T(n/4)
=n+n/2+(n/4+T(n/8))
=n+n/2+n/4+T(n/8)
We can start to see a pattern here. If we expand out the T(n/2) bit k times, we get:
T(n) = n + n/2 + n/4 + ⋯ + n/2^k + T(n/2^k)
Eventually, this stops when n/2^k = 1. When that happens, we have:
T(n) = n + n/2 + n/4 + n/8 + ⋯ + 1
What does this evaluate to? Interestingly, this sum is at most 2n, because n + n/2 + n/4 + n/8 + ⋯ ≤ 2n. Consequently this function is O(n).
I got that answer thanks to this question, answered by @templatetypedef.
Now I am confused about the theta notation. Will the answer be the same? I know that theta notation is supposed to bound the function with both a lower and an upper limit; does that mean I need two functions?
Yes, the answer will be the same!
You can easily use the same tools to prove T(n) = Omega(n).
After proving T(n) is both O(n) [upper asymptotic bound] and Omega(n) [lower asymptotic bound], you know it is also Theta(n) [tight asymptotic bound].
In your example, it is easy to show that T(n) >= n - since it is homework, it is up to you to understand why.
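If you just want a numerical sanity check before writing the proof, here is a small Python sketch of example(n) with a counter added (the counter, the integer division, and the test values are mine); the count stays between n and 2n, which is consistent with Theta(n):

count = 0

def example(n):
    # Mirrors the pseudocode: n increments of x, then recurse on n/2.
    global count
    if n == 1:
        return
    for _ in range(n):
        count += 1
    example(n // 2)

for n in (8, 100, 10**5):
    count = 0
    example(n)
    print(n, count, 2 * n)   # n <= count <= 2n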