Building a recurrence relation for this code? - big-o

I need to build a recurrence relation for the following algorithm (T(n) stands for the number of elementary operations) and find its time complexity:
Alg (n)
{
if (n < 3) return;
for i=1 to n
{
for j=i to 2i
{
for k=j-i to j-i+100
write (i, j, k);
}
}
for i=1 to 7
Alg(n-2);
}
I came to this Recurrence relation (don't know if it's right):
T(n) = 1 if n < 3
T(n) = 7T(n-2) + 100n^2 otherwise.
I don't know how to get the time complexity, though.
Is my recurrence correct? What's the time complexity of this code?

Let's take a look at the code to see what the recurrence should be.
First, let's look at the loop:
for i=1 to n
{
for j=i to 2i
{
for k=j-i to j-i+100
write (i, j, k);
}
}
How much work does this do? Well, let's begin by simplifying it. Rather than having j count up from i to 2i, let's define a new variable j' that counts up from 0 to i. This means that j' = j - i, and so we get this:
for i=1 to n
{
for j' = 0 to i
{
for k=j' to j'+100
write (i, j' + i, k);
}
}
Ah, that's much better! Now, let's also rewrite k as k' = k - j', so that k' ranges from 0 to 100:
for i=1 to n
{
for j' = 0 to i
{
for k' = 0 to 100
write (i, j' + i, k' + j');
}
}
From this, it's easier to see that this loop has time complexity Θ(n^2): the innermost loop does O(1) work, and the middle loop's body executes 2 + 3 + ... + (n + 1) = Θ(n^2) times. Notice that the count isn't exactly 100n^2, because the summation isn't exactly n^2, but it is Θ(n^2).
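As a sanity check, here's a small Python sketch (mine, not from the question) that counts the write operations directly, reading the pseudocode's loop bounds as inclusive:

```python
def loop_work(n):
    """Count write(i, j, k) executions in the triple loop of Alg(n)."""
    count = 0
    for i in range(1, n + 1):                    # i = 1 .. n
        for j in range(i, 2 * i + 1):            # j = i .. 2i      (i + 1 values)
            for k in range(j - i, j - i + 101):  # k = j-i .. j-i+100 (101 values)
                count += 1
    return count

# Closed form: sum over i of 101 * (i + 1) = 101 * (n*(n+1)/2 + n)
```

Doubling n roughly quadruples the count, which is the signature of Θ(n^2).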
Now, let's look at the recursive part:
for i=1 to 7
Alg(n-2);
For starters, this is just plain silly! There's no reason you'd ever want to do something like this. But, that said, we can say that this is 7 calls to the algorithm on an input of size n - 2.
Accordingly, we get this recurrence relation:
T(n) = 7T(n - 2) + Θ(n^2) [if n ≥ 3]
T(n) = Θ(1) [otherwise]
Now that we have the recurrence, we can start to work out the time complexity. That ends up being a little bit tricky. If you think about how much work we'll end up doing, we'll get that
There is 1 call of size n.
There are 7 calls of size n - 2.
There are 49 calls of size n - 4.
There are 343 calls of size n - 6.
...
There are 7^k calls of size n - 2k.
From this, we immediately get a lower bound of Ω(7^(n/2)), since that's the number of calls that will get made. Each call does O(n^2) work, so we get an upper bound of O(n^2 · 7^(n/2)). The true value lies somewhere in between; in fact, the total work is a sum of terms 7^k · Θ((n - 2k)^2), and because the 7^k factor grows geometrically, the sum is dominated by its deepest levels, so the polynomial factor washes out and T(n) = Θ(7^(n/2)).
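To see where the growth rate lands, you can evaluate the recurrence numerically (a quick sketch of mine, using c = 100 for the Θ(n^2) term) and watch T(n)/T(n-2) settle at 7, i.e. Θ(7^(n/2)):

```python
def T(n, c=100):
    """Evaluate T(n) = 7*T(n - 2) + c*n^2, with T(n) = 1 for n < 3."""
    if n < 3:
        return 1
    return 7 * T(n - 2, c) + c * n * n
```

For example, T(3) = 7·1 + 100·9 = 907, and the ratio of consecutive same-parity values converges to 7 quickly.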
Hope this helps!

A rule of thumb can be read straight off the source code from the number of recursive calls. When each invocation makes a recursive calls and only shrinks n by a constant step, the call tree grows exponentially: with 2 such recursive calls the complexity is Θ(2^n), with 3 calls Θ(3^n), and so on (here, 7 calls with step 2 gives the 7^(n/2) factor). Note this shortcut only applies when the input shrinks by a constant; if it shrinks by a constant factor (say n/2), the call count is polynomial instead.

Related

Time complexity of an algorithm that runs 1+2+...+n times;

To start off, I found this Stack Overflow question that states the time complexity is O(n^2), but it doesn't answer why O(n^2) is the time complexity; instead it asks for an example of such an algorithm. From my understanding, an algorithm that runs 1+2+3+...+n times would be less than O(n^2). For example, take this function:
function(n: number) {
let sum = 0;
for(let i = 0; i < n; i++) {
for(let j = 0; j < i+1; j++) {
sum += 1;
}
}
return sum;
}
Here are some input and return values:
num  sum
1    1
2    3
3    6
4    10
5    15
6    21
7    28
From this table you can see that this algorithm grows more slowly than n^2 but faster than n. I also realize that an algorithm that runs 1+(1+2)+(1+2+3)+...+(1+2+3+...+n) times is true O(n^2) time complexity. For the algorithm stated in the problem, do we just say it runs in O(n^2) because it runs more than O(log n) times?
It's known that 1 + 2 + ... + n has the closed form n * (n + 1) / 2. Even if you didn't know that, consider that when i gets to n, the inner loop runs at most n times. So you have exactly n iterations of the outer loop (over i), each running the inner loop (over j) at most n times, which makes the O(n^2) bound more apparent.
I agree that the count would be exactly n^2 if the inner loop also ran from 0 to n, so you have reason to think that a loop i from 0 to n with an inner loop j from 0 to i has to perform better, and that's true; but with big-O notation you're measuring the algorithm's order of growth, not the exact number of operations.
p.s. O(log n) is usually achieved when you split the main problem into sub-problems.
I think you should interpret the table differently. The O(N^2) complexity says that if you double the input N, the runtime should quadruple (take 4 times as long). In this case, function(n: number) returns a number mirroring its runtime; I'll use f(N) as shorthand for it.
So say N goes from 1 to 2, which means the input has doubled (2/1 = 2). The runtime then has gone from f(1) to f(2), which means it has increased f(2)/f(1) = 3/1 = 3 times. That is not 4 times, but the Big-O complexity measure is asymptotic, dealing with the situation where N approaches infinity. If we test another input doubling from the table, we have f(6)/f(3) = 21/6 = 3.5. It is already closer to 4.
Let us now stray outside the table and try more doublings with bigger N. For example we have f(200)/f(100) = 20100/5050 = 3.980 and f(5000)/f(2500) = 12502500/3126250 = 3.999. The trend is clear. As N approaches infinity, a doubled input tends toward a quadrupled runtime. And that is the hallmark of O(N^2).
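The doubling experiment is easy to reproduce; here's a quick Python port (mine) of the TypeScript function above, so you can try arbitrary doublings yourself:

```python
def f(n):
    """Run the same 1 + 2 + ... + n loop as the TypeScript function."""
    total = 0
    for i in range(n):
        for j in range(i + 1):  # inner loop runs i + 1 times
            total += 1
    return total

# f(2N) / f(N) = (2N)(2N + 1) / (N(N + 1)), which tends to 4 as N grows.
```

This matches the values quoted in the answer, e.g. f(200)/f(100) = 20100/5050 ≈ 3.980.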

How to calculate time complexity of this recursion?

I'm interested in calculating the following code's time and space complexity, but I'm struggling a lot. I know that the deepest the recursion can reach is n, so the space should be O(n). However, I have no idea how to calculate the time complexity; I don't know how to write the formula when it comes to recursions of the form f(f(n-1)).
If it were something like return f3(n-1) + f3(n-1), then I know it should be O(2^n), since T(n) = 2T(n-1), correct?
Here's the code:
int f3(int n)
{
if(n <= 2)
return 1;
f3(1 + f3(n-2));
return n - 1;
}
Thank you for your help!
Notice that f3(n) = n - 1 for every n ≥ 2. So in the line f3(1 + f3(n-2)), first f3(n-2) is computed, which returns n - 3, and then f3(1 + (n - 3)) = f3(n-2) is computed again!
So f3(n) computes f3(n-2) twice, along with some O(1) overhead.
This gives the recurrence T(n) = 2T(n-2) + c for some constant c, where T(n) is the running time of f3(n).
Solving the recursion, we get T(n) = O(2^(n/2)).
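You can confirm both facts empirically with an instrumented Python version (my sketch); for even n the call count comes out to exactly 2^(n/2) - 1:

```python
def f3(n, counter):
    """Instrumented f3: counter[0] tallies the total number of calls."""
    counter[0] += 1
    if n <= 2:
        return 1
    f3(1 + f3(n - 2, counter), counter)  # result discarded, as in the original
    return n - 1
```

For instance, f3(10) makes 31 = 2^5 - 1 calls, matching the T(n) = 2T(n-2) + c recurrence.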

Complexity Algorithm Analysis with if

I have the following code. What time complexity does it have?
I have tried to write a recurrence relation for it, but I can't figure out when the algorithm adds 1 to n and when it divides n by 4.
void T(int n) {
for (i = 0; i < n; i++);
if (n == 1 || n == 0)
return;
else if (n%2 == 1)
T(n + 1);
else if (n%2 == 0)
T(n / 4);
}
You can view it like this: you always divide by four, and only when n is odd do you first add 1 before dividing. So you should count how many times 1 is added. If there are no increments, you have log4(n) recursive calls. Let's assume the worst case, where you always have to add 1 before dividing. Then you can rewrite it like this:
void T(int n) {
for (i = 0; i < n; i++);
if (n == 1 || n == 0)
return;
else if (n%2 == 0)
T(n / 4 + 1);
}
But n/4 + 1 < n/2 for n > 4, and a recursive call T(n/2) has running time O(log4(n)); the base of the logarithm doesn't affect the running time in big-O notation, since it's only a constant factor. So the running time is O(log(n)).
EDIT:
As ALB pointed out in a comment, there is a loop of length n in every call. So, in accordance with the master theorem, the running time is Θ(n). You can also see it as the sum n * (1 + 1/2 + 1/4 + 1/8 + ...) = 2 * n, since each recursive argument is at most half the previous one.
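Since the recursion is tail-recursive, a short simulation (my own sketch) can just sum the loop lengths across all calls and make the Θ(n) total concrete:

```python
def total_iterations(n):
    """Sum the empty for-loop's iterations over all recursive calls of T(n)."""
    total = 0
    while True:
        total += n                # the for loop runs n times in this call
        if n <= 1:
            return total          # base case: n == 1 or n == 0
        n = n + 1 if n % 2 == 1 else n // 4
```

For example, total_iterations(3) = 3 + 4 + 1 = 8, and the totals stay below a small constant times n.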
Interesting question. Be aware that even though your for loop does nothing, the code is not optimized away (see Dukeling's comment), so the loop still counts toward your time complexity as if each iteration took real computing time.
First part
The first section is definitely O(n).
Second part
Let's suppose, for the sake of simplicity, that half the time n will be odd and the other half of the time it will be even. Hence, half the time the recursion continues on (n + 1) and the other half on (n / 4).
Conclusion
Each time T(n) is called, the loop implicitly runs n times. Crudely charging every call a full n gives n * (n + 1) = n^2 + n in the odd case and n * (n / 4) = (1/4)n^2 in the even case.
For Big-O notation, we care more about an upper bound than about precise behavior, so the algorithm is certainly bounded by O(n^2). Note, though, that this bound is not tight: the loop length shrinks geometrically from call to call, which is how the other answer arrives at Θ(n).

Complexity of recursive code

I came across this question in a review for my class and I am having a difficult time understanding the professors justification and explanation of the solution. The question:
The following code computes 2^n for given n. Determine total number of lines executed. Justify your answer.
Power2(int n)
1) if(n=0)
2) return 1
3) else
4) k=Power2(n/2)
5) k=k*k;
6) if(k is even)
7) return k
8) else
9) return 2*k
The justification
I don't understand from the "hence" part on. If someone could break these steps down for me a little more and describe how these steps are equivalent, it would be a great help.
t(2^k) = t(2^(k-1)) + 1              // 2^k is even, so you apply the first t(n) rule
t(2^(k-1)) + 1 = [t(2^(k-2)) + 1] + 1 // 2^(k-1) is also even, so the same rule applies
Each 2^i for i = k down to 1 is even (only 2^0 is odd), so you can apply the first t(n) rule at every step, reaching the end of the recursion after k steps.
Then, note that to evaluate t(2^k) you must do about k operations, and k = lg(2^k), so Power2 is a logarithmic-time function.
Line 6) is wrong; it should read 6) if (n is even).
Regarding the "hence" part: they take
n = 2^k
and show there are k iterations. From the above equation we get
k = log2(n)
thus having k iterations means having log2(n) iterations.
The algorithm is O(log(n))
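Here is a runnable Python version (my sketch, with the even-test applied to n as the correction above suggests) that also counts the recursive calls, so you can see the log2(n) behavior directly:

```python
def power2(n, counter):
    """Compute 2**n by repeated squaring; counter[0] counts the calls."""
    counter[0] += 1
    if n == 0:
        return 1
    k = power2(n // 2, counter)  # recurse on n/2, as in the pseudocode
    k = k * k
    if n % 2 == 0:               # corrected test: n even, not k even
        return k
    return 2 * k
```

For n = 10 the call chain is 10 → 5 → 2 → 1 → 0, i.e. about log2(n) calls.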

Time complexity of the following algorithm?

I'm learning Big-O notation right now and stumbled across this small algorithm in another thread:
i = n
while (i >= 1)
{
for j = 1 to i // NOTE: i instead of n here!
{
x = x + 1
}
i = i/2
}
According to the author of the post, the complexity is Θ(n), but I can't figure out how. I think the while loop's complexity is Θ(log(n)). I was thinking the for loop's complexity would also be Θ(log(n)), because the number of iterations is halved each time.
So, wouldn't the complexity of the whole thing be Θ(log(n) * log(n)), or am I doing something wrong?
Edit: the segment is in the best answer of this question: https://stackoverflow.com/questions/9556782/find-theta-notation-of-the-following-while-loop#=
Imagine for simplicity that n = 2^k. How many times does x get incremented? It easily follows that this is the geometric series
2^k + 2^(k - 1) + 2^(k - 2) + ... + 1 = 2^(k + 1) - 1 = 2 * n - 1
So this part is Θ(n). Also, i gets halved k = log n times, which has no asymptotic effect on the Θ(n) bound.
The value of i for each iteration of the while loop, which is also how many iterations the for loop has, are n, n/2, n/4, ..., and the overall complexity is the sum of those. That puts it at roughly 2n, which gets you your Theta(n).
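The same count is easy to verify in Python (a little sketch of mine); for n a power of two it comes out to exactly 2n - 1:

```python
def increments(n):
    """Count executions of x = x + 1 in the halving while loop."""
    count = 0
    i = n
    while i >= 1:
        count += i   # the inner for loop runs i times this pass
        i //= 2      # i = i/2
    return count
```

For arbitrary n the total stays below 2n, confirming Θ(n).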
