Is this worst-case analysis correct?

Here is the code:
int Outcome = 0;
for (int i = 0; i < N; i++)
    for (int j = i+2; j = 0; j--)
        Outcome += i*j;
Here's my analysis. Since the first line is an assignment statement, it takes exactly one time unit, O(1). The breakdown for line 2 is: 1 + (N+1) + N = 2N + 2 (one initialization, N+1 tests, N increments). For line 3, since the loop's body is a single operation, the loop and its block perform i+1 operations; this is also a nested for loop. Finally, line 4 takes exactly one time unit to execute. Therefore, the big-Oh notation for this code in terms of N is O(N²).

To be exact: As you say, line 4 is 1 operation. For a specific i, you execute the inner loop i+3 times. Therefore, your total number of operations is
sum(0 <= i <= N-1 : i+3)
= 3N + sum(0 <= i <= N-1 : i)
= 3N + N(N-1) / 2
= N^2/2 + 5N/2
= O(N^2)

Your intuition about the final efficiency class is correct, but it is possible to be more rigorous. The first thing is that you usually just pick the most expensive basic operation to count for your analysis. In this case it would likely be the multiplication in the innermost loop, which is executed once per iteration. So how many times is it called? On the first iteration of the outermost loop (i = 0), the inner loop iterates three times (j = 2, 1, 0). On the second outer iteration it iterates four times, and so on, up to N+2 times when i = N-1 (I'm assuming the inner loop condition is meant to be j >= 0). So that leaves us with the following summation:
sum(3, 4, 5, 6, ..., N+2)
= sum(1, 2, 3, 4, ..., N+2) - 1 - 2
= (N+2)(N+3)/2 - 3
which is in O(N²) (and in fact, since this exact count is the same for every input of size N, you can say it's in ϴ(N²)).
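As a sanity check, here is a minimal sketch (not from the original post) that counts the inner-loop operations directly, assuming as above that the intended condition is j >= 0, and compares the count to the closed form (N² + 5N)/2 that both derivations produce:

#include <iostream>

int main() {
    // Count inner-loop iterations for a few N and compare against
    // the closed form (N^2 + 5N) / 2 derived above.
    for (int N : {1, 5, 10, 100}) {
        long long count = 0;
        for (int i = 0; i < N; i++)
            for (int j = i + 2; j >= 0; j--)  // intended condition: j >= 0
                count++;                       // one multiplication per iteration
        long long formula = (1LL * N * N + 5LL * N) / 2;
        std::cout << "N = " << N << ": counted " << count
                  << ", formula gives " << formula << '\n';
    }
}

The two values should agree for every N, confirming the ϴ(N²) count.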


Big O for this triple nested loop?

What's the big O of this?
for (int i = 1; i < n; i++) {
    for (int j = 1; j < (i*i); j++) {
        if (j % i == 0) {
            for (int k = 0; k < j; k++) {
                // Simple computation
            }
        }
    }
}
I can't really figure it out. I'm inclined to say O(n^4 log(n)) but I feel like I'm wrong here.
This one is quite confusing to analyze, so let's break it down bit by bit to make sense of the calculations:
The outermost loop runs for n-1 iterations (since 1 ≤ i < n).
The next loop inside it makes (i² - 1) iterations for each index i of the outer loop (since 1 ≤ j < i²).
In total, this means the number of iterations for these two loops equals the sum of (i² - 1) over all 1 ≤ i < n. This is comparable to computing the sum of the first n squares, which is on the order of O(n³).
Note the modulo operator % takes constant time (O(1)) to compute, therefore checking the condition if (j % i == 0) for all iterations of these two loops will not affect the O(n³) runtime.
Now let's talk about the inner loop inside the conditional.
We are interested in seeing how many times (and for which values of j) this if condition evaluates to true, since this would dictate how many iterations the innermost loop will run.
Practically speaking, (j % i) will never equal 0 if j < i, so the second loop could actually be shortened to start from i rather than from 1, however this will not impact the Big-O upper bound of the algorithm.
Notice that for a given number i, (j % i == 0) if and only if i is a divisor of j. Since our range is (1 ≤ j < i²), there will be a total of (i-1) values of j for which this will be true, for any given i. If this is confusing, consider this example:
Let's assume i = 4. Then our index j would iterate through all the values 1, ..., 15 (up to i² - 1 = 15, since j < i² = 16),
and (j%i == 0) would be true for j = 4, 8, 12 - exactly (i - 1) values.
The innermost loop would therefore make a total of (12 + 8 + 4 = 24) iterations. Thus for a general index i, we would look for the sum: i + 2i + 3i + ... + (i-1)i to indicate the number of iterations the innermost loop would make.
And this could be generalized by calculating the sum of this arithmetic progression. The first value is i and the last value is (i-1)i, which results in a sum of (i³ - i²)/2 iterations of the k loop for every value of i. In turn, the sum of this for all values of i could be computed by calculating the sum of cubes and the sum of squares - for a total runtime of O(n⁴) iterations of the innermost loop (the k loop) for all values of i.
Thus in total, the runtime of this algorithm would be the total of both runtimes we calculated above. We checked the if statement O(n³) times and the innermost loop ran for O(n⁴), so assuming // Simple computation runs in constant time, our total runtime would come down to:
O(n³) + O(n⁴)*O(1) = O(n⁴)
Let us assume that i = 2. Then j can be [1, 2, 3], and the "k" loop will run for j = 2 only.
Similarly, for i = 3, j can be [1, 2, ..., 8]; hence k can run for j = 3, 6. You can see a pattern here: for any value of i, the 'k' loop will run (i-1) times, and the lengths of those runs will be [i, 2*i, 3*i, ..., (i-1)*i].
Hence the number of iterations of the k loop for a given i is
= i + (2*i) + (3*i) + ..... + ((i-1)*i)
= (i^3 - i^2)/2
Summing this over all i < n gives on the order of n^4/8 iterations, hence the final complexity will be
= O(n^4)
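To double-check the O(n⁴) result empirically, here is a small sketch (mine, not from either answer) that counts iterations of the k loop and divides by n⁴; the ratio should settle near 1/8, the constant hidden in the (i³ - i²)/2 sum:

#include <iostream>

int main() {
    // Count iterations of the innermost (k) loop and compare to n^4 / 8,
    // the leading term of the sum of i^2 (i - 1) / 2 over all i.
    for (int n : {50, 100, 200}) {
        long long count = 0;
        for (long long i = 1; i < n; i++)
            for (long long j = 1; j < i * i; j++)
                if (j % i == 0)
                    for (long long k = 0; k < j; k++)
                        count++;
        double ratio = (double)count / ((double)n * n * n * n);
        std::cout << "n = " << n << ": k-loop iterations = " << count
                  << ", count / n^4 = " << ratio << '\n';
    }
}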

time complexity of three codes where variables depend on each other

1) i = s = 1;
   while (s <= n)
   {
       i++;
       s = s + i;
   }

2) for (int i = 1; i <= n; i++)
       for (int j = 1; j <= n; j += i)
           cout << "*";

3) j = 1;
   for (int i = 1; i <= n; i++)
       for (j = j*i; j <= n; j = j + i)
           cout << "*";
Can someone explain the time complexity of these three snippets to me?
I know the answers, but I can't understand how they were derived.
1) To figure this out, we need to work out how large s is on the xth iteration of the loop. Then we'll know how many iterations occur until the condition s > n is reached.
On the xth iteration, the variable i has value x + 1,
and the variable s equals the sum of all previous values of i. So, on that iteration, s has value equal to
sum_{y = 1 .. x} (y+1) = O(x^2)
This means that s reaches n on iteration x = O(√n). So that's the running time of the loop.
If you aren't sure about why the sum is O(x^2), I gave an answer to another question like this once here and the same technique applies. In this particular case you could also use an identity
sum_{y = 1 .. x} y = (x+1 choose 2) = x(x+1)/2
This identity can be easily proved by induction on x.
2) Try to analyze how long the inner loop runs, as a function of i and n. Since we start at one, end at n, and count up by i, it runs n/i times. So the total time for the outer loop is
sum_{i = 1 .. n} n/i = n * sum_{i = 1 .. n} 1 / i = O(n log n)
The series sum_{i = 1 .. n} 1 / i is called the harmonic series. Its partial sums are well known to grow as Θ(log n). I won't include a full proof here; it can be proved using calculus. This is a series you just have to know. If you want to see a simple argument, look at the "comparison test" on Wikipedia. The proof there only shows the series is >= log n, but the same technique can be used to show it is O(log n) as well.
3) This looks like kind of a trick question. The inner loop only does real work on the first outer iteration (i = 1): it runs n times and exits with j = n + 1, and we can never re-enter it, because no later line that runs will make j <= n again. We do still execute j = j * i once per outer iteration, where i is a positive number, so j is going to end up at least as large as n!. For any significant value of n, this will cause an overflow. Ignoring that possibility, the code performs O(n) operations in total.
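If the three derivations feel abstract, here is a sketch of my own that instruments each snippet with a counter (with a guard added in snippet 3 so j = j * i cannot overflow); the counts grow like √(2n), n·ln(n), and n respectively:

#include <cmath>
#include <iostream>

int main() {
    for (long long n : {1000LL, 1000000LL}) {
        // 1) s grows quadratically with the iteration count,
        //    so the loop runs about sqrt(2n) times.
        long long c1 = 0, i1 = 1, s = 1;
        while (s <= n) { i1++; s += i1; c1++; }

        // 2) the inner loop runs about n/i times for each i,
        //    so the total is n * H(n) = O(n log n).
        long long c2 = 0;
        for (long long i = 1; i <= n; i++)
            for (long long j = 1; j <= n; j += i) c2++;

        // 3) the inner loop only does real work when i == 1 (n iterations);
        //    afterwards j stays above n forever. Cap j at n + 1 instead of
        //    letting j *= i overflow (the answer notes j would reach ~n!).
        long long c3 = 0, j = 1;
        for (long long i = 1; i <= n; i++) {
            j = (j > n / i) ? n + 1 : j * i;
            for (; j <= n; j += i) c3++;
        }

        std::cout << "n = " << n
                  << "\n  1) " << c1 << " iterations (sqrt(2n) ~ "
                  << (long long)std::sqrt(2.0 * n) << ")"
                  << "\n  2) " << c2 << " iterations (n ln n ~ "
                  << (long long)(n * std::log((double)n)) << ")"
                  << "\n  3) " << c3 << " iterations (n = " << n << ")\n";
    }
}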

How to effectively calculate an algorithm's time complexity? [duplicate]

I'm studying algorithmic complexity and I'm still not able to determine the complexity of some algorithms... OK, I'm able to figure out basic O(N) and O(N^2) loops, but I'm having some difficulty with routines like this one:
// What is the time complexity of fun()?
int fun(int n)
{
    int count = 0;
    for (int i = n; i > 0; i /= 2)
        for (int j = 0; j < i; j++)
            count += 1;
    return count;
}
OK, I know that some people can calculate this with their eyes closed, but I would love to see a step-by-step solution if possible.
My first attempt to solve this would be to "simulate" an input and put the values in some sort of table, like below:
for n = 100
Step i
1 100
2 50
3 25
4 12
5 6
6 3
7 1
OK, at this point I'm assuming that this loop is O(log n), but unfortunately, as I said, nobody solves this problem step by step, so in the end I have no clue what was done...
In case of the inner loop I can build some sort of table like below:
for n = 100
Step i j
1 100 0..99
2 50 0..49
3 25 0..24
4 12 0..11
5 6 0..5
6 3 0..2
7 1 0..0
I can see that both loops are decreasing, and I suppose a formula can be derived from the data above...
Could someone clarify this problem? (The Answer is O(n))
Another simple way to look at it:
Your outer loop initializes i (which you can think of as the step or iterator) at n and divides i by 2 after every iteration, so it executes the i /= 2 statement log2(n) times. Whenever you repeatedly divide a number by a base until it reaches 0, you do that division a logarithmic number of times. Hence, the outer loop is O(log2(n)).
Your inner loop iterates j (now the iterator or the step) from 0 to i on every iteration of the outer loop. i takes the maximum value of n, hence the longest run that your inner loop will have is from 0 to n. Thus, it is O(n).
Now, your program runs like this:
Run 1: i = n, j = 0->n
Run 2: i = n/2, j = 0->n/2
Run 3: i = n/4, j = 0->n/4
.
.
.
Run x: i = n/(2^(x-1)), j = 0->[n/(2^(x-1))]
Now, running time "multiplies" for nested loops, but that only gives an upper bound: O(log2(n)) * O(n) = O(n log n). Here the bound is not tight, because the inner loop shrinks geometrically: the total work is n + n/2 + n/4 + ... <= 2n, which gives O(n) for your entire code.
Let's break this analysis up into a few steps.
First, start with the inner for loop. It is straightforward to see that this takes exactly i steps.
Next, think about which different values i will assume over the course of the algorithm. To start, consider the case where n is some power of 2. In this case, i starts at n, then n/2, then n/4, etc., until it reaches 1, and finally 0 and terminates. Because the inner loop takes i steps each time, then the total number of steps of fun(n) in this case is exactly n + n/2 + n/4 + ... + 1 = 2n - 1.
Lastly, convince yourself this generalizes to non-powers of 2. Given an input n, find the smallest power of 2 greater than n and call it m. Clearly, n < m <= 2n. Since fun is monotone (larger inputs only add steps), fun(n) takes at most fun(m) = 2m - 1 steps, which is at most 4n - 1. Thus fun(n) is O(n).
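To see the bound concretely, here is a quick check (not part of either answer) that prints fun(n) next to 2n; the count never exceeds 2n - 1, with equality exactly at powers of 2:

#include <iostream>

// fun() from the question, unchanged.
int fun(int n) {
    int count = 0;
    for (int i = n; i > 0; i /= 2)
        for (int j = 0; j < i; j++)
            count += 1;
    return count;
}

int main() {
    // fun(n) <= 2n - 1 for all n, confirming the O(n) bound;
    // equality holds when n is a power of 2.
    for (int n : {100, 128, 1000, 1024, 1000000})
        std::cout << "n = " << n << ": fun(n) = " << fun(n)
                  << ", 2n = " << 2 * n << '\n';
}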

Easiest way to calculate time complexity?

I find it pretty easy to calculate the time complexity for most problems, but my prof gives really complicated examples and I have trouble figuring them out. Here are two that he's given us that I couldn't get to the bottom of:
Example 1:
x = 0
j = n
while (j >= 1)
    for i = 1 to j do
        x += 1
    j *= 3/4
return x
Example2:
x = 0
j = 3
while (j <= n)
    x += 1
    j *= j
return x
Please note that for operations like x += 1 and j *= j, we only count this as 1 time unit.
If you could show me how you would calculate the time complexity for these examples, I should be able to deduce how I would do it for most of the ones he gives. Thanks!
Answers:
1. O(n)
2. O(log log(n))
Explanation:
See the inner loop. It performs n operations on the first pass, since j starts at n. Then j = 3j/4, so the second time it performs 3n/4 operations; the third time, 9n/16; and so on. The total number of operations will be:
n + 3n/4 + 9n/16 + ...
= 4n
So, the complexity will be O(4n) = O(n).
There is only one loop. The value of its controller (j) increases as:
3, 9, 81, 6561, ...
Now, the number of iterations it makes until it exceeds a certain number n is log log(n), because j squares each time (j = 3^(2^k) after k iterations). If it increased by a factor of 3 every time, like:
3, 9, 27, 81, 243...
the complexity would have been O(log n).
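A quick way to convince yourself of both answers is to run the examples with counters. The sketch below is mine, not the professor's code; note that j is kept as a double in the first example so that j *= 3/4 behaves like the pseudocode (with an int, 3/4 would truncate to 0):

#include <cmath>
#include <iostream>

int main() {
    for (long long n : {1000LL, 1000000LL}) {
        // Example 1: the inner loop does j, then 3j/4, then 9j/16, ... work;
        // the geometric series sums to about 4n, since j starts at n.
        long long x1 = 0;
        for (double j = n; j >= 1; j *= 0.75)
            for (long long i = 1; i <= (long long)j; i++)
                x1++;

        // Example 2: j squares each time (3, 9, 81, 6561, ...), i.e.
        // j = 3^(2^k) after k iterations, so it runs ~log2(log3(n)) times.
        long long x2 = 0;
        for (long long j = 3; j <= n; j *= j)
            x2++;

        std::cout << "n = " << n << ": example 1 did x = " << x1
                  << " increments (4n = " << 4 * n << "); example 2 looped "
                  << x2 << " times (log2(log3(n)) ~ "
                  << std::log2(std::log((double)n) / std::log(3.0)) << ")\n";
    }
}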

Analysis of for loop

Consider this fragment of code
int sum = 0;
for( int i = 0; i <= n*n; i = i*2 ){
    sum++ ;
}
How do you do a quick, proper analysis to get the order of growth of its worst-case running time?
How does changing the increment statement to i = i * 3 instead of i = i * 2 change the worst-case running time?
And is our analysis affected by changing the comparison operator to < instead of <=?
int sum = 0;
for( int i = 0; i <= n*n; i = i*2 ){
    sum++ ;
}
As it stands, this is an infinite loop which will never stop, since i never changes (0 * 2 = 0).
Since complexity is defined only for algorithms, which by definition must terminate in a finite amount of time, it is undefined for this snippet.
However, if you change the code to the following:
int sum = 0;
for( int i = 1; i <= n*n; i = i*2 ){
    sum++ ;
}
We can analyze the complexity as follows:
Let the loop run k - 1 times, and terminate at the kth update of i.
Since it's better to be redundant than to be unclear, here is what is happening:
Init(1) -> test(1) -> Loop(1) [i = 1]->
Update(2) -> test(2) -> Loop(2) [i = 2]->
...
Update(k - 1) -> test(k - 1) -> Loop(k - 1) [i = 2 ^ (k - 2)] ->
Update(k) -> test(k)->STOP [Test fails as i becomes 2 ^ (k - 1)]
Where Update(k) means kth update (i = i * 2).
Since, the increments in i are such that in the pth loop (or equivalently, after pth updation), the value of i will be 2 ^ (p - 1), we can say that at termination:
2 ^ (k - 1) > (n * n)
In other words, we terminated at the kth update: whatever the value of i was then, it must have been greater than (n * n), or we would have gone around the loop a kth time. Taking log base 2 on both sides:
k ~ 2 * log(n)
Which implies that k is O(log(n)).
Equivalently, the number of times the loop runs is O(log(n)).
You can easily extend this idea to any limit (say n*n*n) and any increments (i*3, i*4 etc.)
The Big O complexity will be unaffected by using < instead of <=
Actually, this loop is an infinite loop:
i = 0
i = i*2 // 0*2 = 0
So this loop will never end. Make i = 1; then the loop runs once per power of 2 up to n^2, so sum ends up holding the count of those powers.
For any loop, to analyze it you have to look at two things: the condition that will make it exit, and the update applied to the variable used in that condition.
For your code, you can notice that the loop stops when i grows from 1 to n*n (n²), and the variable i increases as i = i*2. As i increases in this manner, the loop runs about log(n²) = 2·log(n) times. You can see this by taking an example value of n², like 128, and iterating it manually one by one.
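To tie the three questions together, here is a counter sketch (mine, using the corrected i = 1 start) showing that i*2 gives about 2·log2(n) trips, i*3 gives about 2·log3(n), and switching <= to < changes the count only when n² is an exact power of the base:

#include <cmath>
#include <iostream>

int main() {
    // Count loop trips for the corrected loop (i starting at 1) under the
    // three variations discussed above. n = 1024 is included because n*n is
    // then an exact power of 2, the only case where <= and < differ.
    for (long long n : {100LL, 1024LL, 100000LL}) {
        long long c2 = 0, c3 = 0, c2strict = 0;
        for (long long i = 1; i <= n * n; i *= 2) c2++;        // ~ 2*log2(n)
        for (long long i = 1; i <= n * n; i *= 3) c3++;        // ~ 2*log3(n)
        for (long long i = 1; i <  n * n; i *= 2) c2strict++;  // strict bound
        std::cout << "n = " << n << ": i*=2 -> " << c2
                  << " trips (2*log2(n) = " << 2 * std::log2((double)n)
                  << "), i*=3 -> " << c3
                  << ", with '<' -> " << c2strict << '\n';
    }
}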
