Complexity of recursive code - algorithm

I came across this question in a review for my class, and I am having a difficult time understanding the professor's justification and explanation of the solution. The question:
The following code computes 2^n for a given n. Determine the total number of lines executed. Justify your answer.
Power2(int n)
1) if(n=0)
2) return 1
3) else
4) k=Power2(n/2)
5) k=k*k;
6) if(k is even)
7) return k
8) else
9) return 2*k
The justification
I don't understand it from the "hence" part onward. If someone could break these steps down for me a little more and describe how they are equivalent, it would be a great help.

t(2^k) = [t(2^(k-1))] + 1                // 2^k is even, so you apply the first rule for t(n)
t(2^(k-1)) + 1 = [t(2^(k-2)) + 1] + 1    // 2^(k-1) is also even, so you apply the same t(n) rule
Each 2^i for i = 0...k will be even (except 2^0), so you can apply the first t(n) rule at every step, reaching the end of the recursion after k-1 steps.
Then you note that to evaluate t(2^k) you must do about k operations, and k = lg(2^k), so Power2 is a logarithmic-time function.

Line 6) is wrong; it should read 6) if (n is even).
Regarding the "hence" part: they take
n = 2^k
and show there are k iterations. From that equation we get
k = log2(n)
so having k iterations means having log2(n) iterations.
The algorithm is O(log(n))
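For completeness, here is a minimal runnable sketch of my own (assuming the fix above is applied, i.e. the test is on n, not k) with a call counter added; the number of recursive calls comes out to floor(log2(n)) + 2 (counting the final call with n = 0), i.e. Θ(log n):

#include <stdio.h>

static int calls = 0;                    /* counts how many times Power2 runs */

long long Power2(int n) {
    calls++;
    if (n == 0)
        return 1;
    long long k = Power2(n / 2);         /* 2^(n/2), with integer division */
    k = k * k;                           /* (2^(n/2))^2 */
    if (n % 2 == 0)                      /* corrected test: n even, not k even */
        return k;
    else
        return 2 * k;                    /* odd n needs one extra factor of 2 */
}

int main(void) {
    int tests[] = { 10, 20, 40, 60 };    /* 2^60 still fits in a long long */
    for (int t = 0; t < 4; t++) {
        calls = 0;
        long long p = Power2(tests[t]);
        printf("n = %2d  2^n = %lld  calls = %d\n", tests[t], p, calls);
    }
    return 0;
}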

Related

How to solve T(n) = 5T(n/2) + O(nlogn) using recursion

So this might be silly, but I'm stuck with this recurrence: T(n) = 5T(n/2) + O(n log n). I know from the Master Theorem that it's supposed to be O(n^(lg 5)), but I can't really get there.
So far I've gotten to the point of
T(n) = sum from k = 0 to lg n of 5^k · (n / 2^k) · log n
I just wanted to know if I'm going in the right direction with that.
You're definitely on the right track here! Let's see if we can simplify that summation.
First, notice that you can pull out the log n term from the summation, since it's independent of the sum. That gives us
(n log n) · (sum from k = 0 to lg n of (5/2)^k)
That sum is the sum of a geometric series, so it solves to
((5/2)^(lg n + 1) - 1) / (5/2 - 1)
= O((5/2)^(lg n))
Here, we can use the (lovely) identity that a^(log_b c) = c^(log_b a) to rewrite
O((5/2)^(lg n)) = O(n^(lg (5/2)))
= O(n^(lg 5 - 1))
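In case that identity looks like magic, here is a one-line derivation of my own, just writing a = b^(log_b a) and regrouping the exponents (in LaTeX):

a^{\log_b c} = \left(b^{\log_b a}\right)^{\log_b c} = b^{(\log_b a)(\log_b c)} = \left(b^{\log_b c}\right)^{\log_b a} = c^{\log_b a}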
And plugging that back into our original formula gives us
n log n · O(n^(lg 5 - 1)) = O(n^(lg 5) log n).
Hmmm, that didn't quite work. We're really, really close to having something that works here, though! A good question to ask is why this didn't work, and for that, we have to go back to how you got that original summation in the first place.
Let's try expanding out a few terms of the recurrence T(n) using the recursion method. The first expansion gives us
T(n) = 5T(n / 2) + n log n.
The next one is where things get interesting:
T(n) = 5T(n / 2) + n log n
= 5(5T(n / 4) + (n / 2) log (n / 2)) + n log n
= 25T(n / 4) + (5/2)n log(n / 2) + n log n
Then we get
T(n) = 25T(n / 4) + (5/2)n log(n / 2) + n log n
= 25(5T(n / 8) + (n / 4) log(n / 4)) + (5/2)n log(n / 2) + n log n
= 125T(n / 8) + (25/4)n log(n / 4) + (5/2)n log(n / 2) + n log n
The general pattern here seems to be the following sum:
T(n) = sum from k = 0 to lg n of (5/2)^k · n · lg(n / 2^k)
= n · sum from k = 0 to lg n of (5/2)^k lg(n / 2^k)
And notice that this is not your original sum! In particular, the log term isn't log n, but rather a function that grows much more slowly than that. As k gets bigger, that logarithmic term gets much, much smaller; in fact, if you think about it, the only time we're really paying the full lg n cost here is when k = 0.
Here's a cute little trick we can use to make this sum easier to work with. The log function grows very, very slowly - so slowly, in fact, that we can say that log n = o(n^ε) for any ε > 0. So what happens if we try upper-bounding this summation by replacing lg(n / 2^k) with (n / 2^k)^ε for some very small but positive ε? Well, then we'd get
T(n) = n · sum from k = 0 to lg n of (5/2)^k lg(n / 2^k)
= O(n · sum from k = 0 to lg n of (5/2)^k (n / 2^k)^ε)
= O(n · sum from k = 0 to lg n of (5/2)^k n^ε (1 / 2^ε)^k)
= O(n^(1+ε) · sum from k = 0 to lg n of (5 / 2^(1+ε))^k)
This might have seemed like some sort of sorcery, but this technique - replacing logs with tiny, tiny polynomials - is a nice one to keep in your back pocket. It tends to come up in lots of contexts!
The expression we have here might look a heck of a lot worse than the one we started with, but it's about to get a lot better. Let's imagine that we pick ε to be sufficiently small - say, so that 5 / 2^(1+ε) is greater than one. Then that inner summation is, once again, the sum of a geometric series, and so we can simplify it to
((5/2^(1+ε))^(lg n + 1) - 1) / (5/2^(1+ε) - 1)
= O((5/2^(1+ε))^(lg n))
= O(n^(lg (5/2^(1+ε)))) (using our trick from before)
= O(n^(lg 5 - 1 - ε))
And that's great, because our overall runtime is then
T(n) = O(n^(1+ε) · n^(lg 5 - 1 - ε))
= O(n^(lg 5)),
and you've got your upper bound!
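By the way, if you'd like a numerical sanity check of that bound (my own addition, not needed for the argument), you can evaluate the recurrence with the concrete driving term n log2 n and compare it against n^(lg 5); the ratio levels off at a constant:

#include <stdio.h>
#include <math.h>

/* T(1) = 1, T(n) = 5*T(n/2) + n*log2(n), evaluated for n a power of two. */
double T(long n) {
    if (n <= 1)
        return 1.0;
    return 5.0 * T(n / 2) + (double)n * log2((double)n);
}

int main(void) {
    for (long n = 1L << 10; n <= (1L << 24); n <<= 2) {
        double ratio = T(n) / pow((double)n, log2(5.0));   /* T(n) / n^(lg 5) */
        printf("n = 2^%2d  T(n)/n^(lg 5) = %.4f\n",
               (int)round(log2((double)n)), ratio);
    }
    return 0;
}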
To summarize:
Your original summation can be simplified using the formula for the sum of a geometric series, along with the weird identity that a^(log_b c) = c^(log_b a).
However, that won't give you a tight upper bound, because your original summation was slightly off from what you'd get from the recursion method.
By repeating the analysis using the recursion method, you get a tighter sum, but one that's harder to evaluate.
We can simplify that summation by using the fact that log n = o(nε) for any ε > 0, and use that to rejigger the sum to make it easier to manipulate.
With that simplification in place, we basically redo the analysis using the same techniques as before - sums of geometric series, swapping terms in exponents and logs - to arrive at the result.
Hope this helps!

Complexity Algorithm Analysis with if

I have the following code. What time complexity does it have?
I have tried to write a recurrence relation for it, but I can't figure out when the algorithm adds 1 to n and when it divides n by 4.
void T(int n) {
for (i = 0; i < n; i++);
if (n == 1 || n == 0)
return;
else if (n%2 == 1)
T(n + 1);
else if (n%2 == 0)
T(n / 4);
}
You can view it like this: you always divide by four; it's just that when n is odd you add 1 to it before dividing. So you should count how many times 1 gets added. If there were no increments, you would have log4(n) recursive calls. Let's assume that you always have to add 1 before dividing. Then you can rewrite it like this:
void T(int n) {
for (i = 0; i < n; i++);
if (n == 1 || n == 0)
return;
else if (n%2 == 0)
T(n / 4 + 1);
}
But n/4 + 1 < n/2 (for n > 4), so the argument shrinks at least as fast as it would for a recursive call T(n/2); the number of recursive calls is therefore between log4(n) and log2(n), and the base of the logarithm doesn't affect the running time in big-O notation because it's just a constant factor. So the running time is O(log(n)).
EDIT:
As ALB pointed out in a comment, there is a loop of length n in each call. So, in accordance with the master theorem, the running time is Theta(n). You can also see it as the sum n * (1 + 1/2 + 1/4 + 1/8 + ...) = 2 * n.
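To see the Θ(n) bound empirically, here is the same function with a counter bolted on (a sketch of my own, not part of the original question); the total number of loop iterations across all recursive calls stays within a small constant factor of n:

#include <stdio.h>

static long iterations = 0;       /* total for-loop iterations over all calls */

void T(int n) {
    for (int i = 0; i < n; i++)
        iterations++;             /* the original loop body is empty; just count */
    if (n == 1 || n == 0)
        return;
    else if (n % 2 == 1)
        T(n + 1);
    else
        T(n / 4);
}

int main(void) {
    for (int n = 1000; n <= 1000000; n *= 10) {
        iterations = 0;
        T(n);
        printf("n = %7d  iterations = %8ld  ratio = %.2f\n",
               n, iterations, (double)iterations / n);
    }
    return 0;
}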
Interesting question. Be aware that even though your for loop does nothing, since the code is not optimized (see Dukeling's comment), the loop still counts toward your time complexity, as if real computing time were spent iterating through it.
First part
The first section is definitely O(n).
Second part
Let's suppose, for the sake of simplicity, that half the time n will be odd and the other half of the time it will be even. Hence, the recursion recurses on (n+1) half the time and on (n/4) the other half.
Conclusion
Each time T(n) is called, the function implicitly loops n times. Hence, half of the time we will have a complexity of n * (n+1) = n^2 + n, and the other half of the time we will deal with n * (n/4) = (1/4)n^2.
For Big O notation, we care more about the upper bound than the precise behavior. Hence, your algorithm would be bounded by O(n^2).

Runtime Analysis of Insertion Sort

I am trying to compute the run-time analysis of this Insertion Sort algorithm:
1) n = length[A]
2) count = 0
3) for (i=1; i<=n; i++)
4) for (j=1; j<=i; j++)
5) if A[j] <= 100
6) for (k=j; k<=j+2*i; k++)
7) A[j] = A[j]-1
8) count = count+1
9) return (count)
I have watched some videos on YouTube like: https://www.youtube.com/watch?v=tmKUHLs21PU
I have also read my book, and I cannot find anything online that is similar to this (because of the 3 nested for loops and an if statement).
Now I am pretty good up until about line 5.
I understand that the runtime for line 3 is n, and for line 4 it is Σ (j = 1 to n) t_j.
After that I am completely lost. I know that there are two 'Σ's involved with the if statement and the 3rd for loop. Can somebody please explain in detail what to do next and why? Thank you.
This sounds a lot like a homework problem, and it wouldn't be doing you any favors to just give you all the answers, but here are some principles that can hopefully help you figure out the rest on your own.
Line 4 will happen once the first time through the outer loop, twice the second time, and so forth up to n times on the nth time through the loop.
1 + 2 + ... + n
If we rearrange these to put the first and last addend together, then the second and the second-to-last, we see a pattern:
1 + 2 + ... + (n-1) + n
= (n + 1) + ((n - 1) + 2) + ... + (n/2 + (n/2 + 1))
= (n + 1) + (n + 1) + ... + (n + 1)
= (n + 1) * n/2
= n²/2 + n/2
In terms of asymptotic complexity, the constant 1/2 and the n are outweighed by the n², so the big-O of line 4 is n².
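If it helps to see that count concretely, here is a tiny check of my own (using just the two outer loops from the pseudocode) that the body of line 4 runs exactly n(n+1)/2 times:

#include <stdio.h>

int main(void) {
    for (int n = 10; n <= 10000; n *= 10) {
        long count = 0;
        for (int i = 1; i <= n; i++)       /* line 3 */
            for (int j = 1; j <= i; j++)   /* line 4: its body runs i times */
                count++;                   /* stands in for line 5 being reached */
        long formula = (long)n * (n + 1) / 2;
        printf("n = %5d  count = %10ld  n(n+1)/2 = %10ld\n", n, count, formula);
    }
    return 0;
}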
Line 5 will have to be evaluated as many times as line 4 runs, regardless of what it evaluates to, so that'll be n². But how many times the lines inside it are run will depend on the values in the array. This is where you start running into best-case and worst-case complexity.
In the best case, the value in the array will always be greater than 100, so the complexity of the entire algorithm is equal to the complexity of line 5.
In the worst case, the value in A[j] will always be less than or equal to 100, so the for loop on line 6 will be evaluated, increasing the complexity of the overall algorithm.
I'll leave you to figure out how the remaining lines will affect the overall complexity.
And by the way, this doesn't look like an insertion sort to me. It's not comparing array values to each other and swapping their positions in an array. It's comparing array values to a constant (100) and reducing their values based on their position in the array.

Time complexity of the following algorithm?

I'm learning Big-O notation right now and stumbled across this small algorithm in another thread:
i = n
while (i >= 1)
{
for j = 1 to i // NOTE: i instead of n here!
{
x = x + 1
}
i = i/2
}
According to the author of the post, the complexity is Θ(n), but I can't figure out how. I think the while loop's complexity is Θ(log(n)). The for loop's complexity from what I was thinking would also be Θ(log(n)) because the number of iterations would be halved each time.
So, wouldn't the complexity of the whole thing be Θ(log(n) * log(n)), or am I doing something wrong?
Edit: the segment is in the best answer of this question: https://stackoverflow.com/questions/9556782/find-theta-notation-of-the-following-while-loop#=
Imagine for simplicity that n = 2^k. How many times does x get incremented? It easily follows that this is a geometric series:
2^k + 2^(k - 1) + 2^(k - 2) + ... + 1 = 2^(k + 1) - 1 = 2 * n - 1
So this part is Θ(n). Also, i gets halved k = log n times, which has no asymptotic effect on the Θ(n) bound.
The values of i across the iterations of the while loop, which are also the iteration counts of the inner for loop, are n, n/2, n/4, ..., and the overall complexity is the sum of those. That puts it at roughly 2n, which gets you your Theta(n).
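Here's a quick way to check that count (a snippet of my own, not from the linked thread): for n a power of two, x ends up exactly 2n - 1, matching the geometric-series sum above.

#include <stdio.h>

int main(void) {
    for (long n = 1L << 10; n <= (1L << 22); n <<= 4) {
        long x = 0;
        long i = n;
        while (i >= 1) {
            for (long j = 1; j <= i; j++)   /* i iterations, not n */
                x = x + 1;
            i = i / 2;
        }
        printf("n = %8ld  x = %8ld  2n - 1 = %8ld\n", n, x, 2 * n - 1);
    }
    return 0;
}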

Building a recurrence relation for this code?

I need to build a recurrence relation for the following algorithm (T(n) stands for the number of elementary actions) and find its time complexity:
Alg (n)
{
if (n < 3) return;
for i=1 to n
{
for j=i to 2i
{
for k=j-i to j-i+100
write (i, j, k);
}
}
for i=1 to 7
Alg(n-2);
}
I came up with this recurrence relation (I don't know if it's right):
T(n) = 1 if n < 3
T(n) = 7T(n-2) + 100n^2 otherwise.
I don't know how to get the time complexity, though.
Is my recurrence correct? What's the time complexity of this code?
Let's take a look at the code to see what the recurrence should be.
First, let's look at the loop:
for i=1 to n
{
for j=i to 2i
{
for k=j-i to j-i+100
write (i, j, k);
}
}
How much work does this do? Well, let's begin by simplifying it. Rather than having j count up from i to 2i, let's define a new variable j' that counts up from 0 to i. This means that j' = j - i, and so we get this:
for i=1 to n
{
for j' = 0 to i
{
for k=j' to j'+100
write (i, j' + i, k);
}
}
Ah, that's much better! Now, let's also rewrite k as k', where k' = k - j' ranges from 0 to 100:
for i=1 to n
{
for j' = 0 to i
{
for k' = 0 to 100
write (i, j' + i, k' + j');
}
}
From this, it's easier to see that this loop has time complexity Θ(n^2), since the innermost loop does O(1) work, and the middle loop will run 1 + 2 + 3 + 4 + ... + n = Θ(n^2) times. Notice that it's not exactly 100n^2, because the summation isn't exactly n^2, but it is close.
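To double-check that, here is a small program of my own that runs just the triple loop and counts the write operations; the count divided by n^2 settles toward 101/2 ≈ 50.5, confirming the Θ(n^2) bound:

#include <stdio.h>

int main(void) {
    for (int n = 100; n <= 1600; n *= 2) {
        long long writes = 0;
        for (int i = 1; i <= n; i++)
            for (int j = i; j <= 2 * i; j++)                /* i + 1 values of j */
                for (int k = j - i; k <= j - i + 100; k++)  /* 101 values of k */
                    writes++;                               /* stands in for write(i, j, k) */
        printf("n = %5d  writes = %10lld  writes / n^2 = %.2f\n",
               n, writes, (double)writes / ((double)n * n));
    }
    return 0;
}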
Now, let's look at the recursive part:
for i=1 to 7
Alg(n-2);
For starters, this is just plain silly! There's no reason you'd ever want to do something like this. But, that said, we can say that this is 7 calls to the algorithm on an input of size n - 2.
Accordingly, we get this recurrence relation:
T(n) = 7T(n - 2) + Θ(n^2) [if n ≥ 3]
T(n) = Θ(1) [otherwise]
Now that we have the recurrence, we can start to work out the time complexity. That ends up being a little bit tricky. If you think about how much work we'll end up doing, we'll get that
There's 1 call of size n.
There are 7 calls of size n - 2.
There are 49 calls of size n - 4.
There are 343 calls of size n - 6.
...
There are 7^k calls of size n - 2k.
From this, we immediately get a lower bound of Ω(7^(n/2)), since that's the number of calls that will get made. Each call does O(n^2) work, so we get an upper bound of O(n^2 · 7^(n/2)). The true value lies somewhere in between, though I honestly don't know how to figure out exactly what it is. Sorry about that!
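One thing you can do (my own addition) is count the recursive calls empirically: stripping out the write loop, the number of calls tracks 7^(n/2) (about a sixth of it for even n), which is exactly where the Ω(7^(n/2)) lower bound comes from.

#include <stdio.h>
#include <math.h>

static long long calls = 0;          /* number of times Alg is entered */

void Alg(int n) {
    calls++;
    if (n < 3)
        return;
    for (int i = 1; i <= 7; i++)     /* the seven recursive calls; writes omitted */
        Alg(n - 2);
}

int main(void) {
    for (int n = 8; n <= 20; n += 4) {
        calls = 0;
        Alg(n);
        printf("n = %2d  calls = %10lld  7^(n/2) = %12.0f  ratio = %.4f\n",
               n, calls, pow(7.0, n / 2.0), calls / pow(7.0, n / 2.0));
    }
    return 0;
}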
Hope this helps!
A formal method is to do the following:
The prevailing order of growth can be intuitively inferred from the source code by looking at the number of recursive calls.
An algorithm that makes 2 recursive calls, each on an input smaller by a constant, has a complexity on the order of 2^n; with 3 such recursive calls the complexity is on the order of 3^n, and so on (here, the 7 calls on inputs of size n - 2 give on the order of 7^(n/2)).

Resources