Big O for this triple nested loop?

What's the big O of this?
for (int i = 1; i < n; i++) {
    for (int j = 1; j < (i*i); j++) {
        if (j % i == 0) {
            for (int k = 0; k < j; k++) {
                // Simple computation
            }
        }
    }
}
Can't really figure it out. I'm inclined to say O(n^4 log(n)) but I feel like I'm wrong here.

This is quite a confusing analysis, so let's break it down bit by bit to make sense of the calculations:
The outermost loop runs for n-1 iterations (since 1 ≤ i < n).
The next loop inside it makes (i² - 1) iterations for each index i of the outer loop (since 1 ≤ j < i²).
In total, this means the number of iterations for these two loops is equal to calculating the sum of (i²-1) for each 1 ≤ i < n. This is similar to computing the sum of the first n squares, and is order of magnitude of O(n³).
Note the modulo operator % takes constant time (O(1)) to compute, therefore checking the condition if (j % i == 0) for all iterations of these two loops will not affect the O(n³) runtime.
Now let's talk about the inner loop inside the conditional.
We are interested in seeing how many times (and for which values of j) this if condition evaluates to true, since this would dictate how many iterations the innermost loop will run.
Practically speaking, (j % i) will never equal 0 if j < i, so the second loop could actually be shortened to start from i rather than from 1; however, this will not impact the Big-O upper bound of the algorithm.
Notice that for a given number i, (j % i == 0) if and only if i is a divisor of j. Since our range is (1 ≤ j < i²), there will be a total of (i-1) values of j for which this will be true, for any given i. If this is confusing, consider this example:
Let's assume i = 4. Then our index j would iterate through all values 1, ..., 15 = i² - 1,
and (j%i == 0) would be true for j = 4, 8, 12 - exactly (i - 1) values.
The innermost loop would therefore make a total of (12 + 8 + 4 = 24) iterations. Thus for a general index i, we would look for the sum: i + 2i + 3i + ... + (i-1)i to indicate the number of iterations the innermost loop would make.
And this could be generalized by calculating the sum of this arithmetic progression. The first value is i and the last value is (i-1)i, which results in a sum of (i³ - i²)/2 iterations of the k loop for every value of i. In turn, the sum of this for all values of i could be computed by calculating the sum of cubes and the sum of squares - for a total runtime of O(n⁴) iterations of the innermost loop (the k loop) for all values of i.
Thus the total runtime of this algorithm is the sum of both counts we calculated above. We checked the if statement O(n³) times and the innermost loop ran a total of O(n⁴) iterations, so assuming // Simple computation runs in constant time, our total runtime comes down to:
O(n³) + O(n⁴)*O(1) = O(n⁴)
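If you want to double-check this result empirically, here is a minimal Java sketch (the harness and its names are my own, not from the question) that counts the innermost iterations and compares each i's contribution against the (i³ - i²)/2 formula derived above:

public class TripleLoopCount {
    public static void main(String[] args) {
        int n = 50;
        long total = 0;
        for (int i = 1; i < n; i++) {
            long perI = 0; // innermost iterations for this i
            for (int j = 1; j < i * i; j++) {
                if (j % i == 0) {
                    for (int k = 0; k < j; k++) {
                        perI++;
                    }
                }
            }
            // Derived above: i + 2i + ... + (i-1)i = (i^3 - i^2) / 2
            long predicted = ((long) i * i * i - (long) i * i) / 2;
            if (perI != predicted) {
                System.out.println("mismatch at i=" + i);
            }
            total += perI;
        }
        // total grows like n^4: it is the sum of (i^3 - i^2)/2 for 1 <= i < n
        System.out.println("total innermost iterations: " + total);
    }
}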

Let us assume that i = 2. Then j can be 1, 2, 3, and the k loop will run for j = 2 only. Similarly, for i = 3, j can be 1, 2, ..., 8, and the k loop will run for j = 3 and j = 6. You can see a pattern here: for any value of i, the k loop will run for (i - 1) values of j, and the lengths of those runs will be i, 2*i, 3*i, ..., (i-1)*i.
Hence the number of iterations of the k loop for a given i is
= i + (2*i) + (3*i) + ... + ((i-1)*i)
= (i^2)(i-1)/2
Summing this over all 1 <= i < n gives the final complexity
= O(n^4)

Related

Time complexity of two separate nested while loops

I just started my data structures course and I'm having troubles trying to figure out the time complexity in the following code:
{
    int j, y;
    for(j = n; j >= 1; j--)
    {
        y = 1;
        while (y < j)
            y *= 2;
        while (y > 2)
            y = sqrt(y);
    }
}
The outer 'for' loop runs n times in every run of the code, and the first 'while' loop runs about log2(j) times, if I'm not mistaken.
I'm not sure about the second 'while' loop and how to determine the overall time complexity of the code.
My initial thoughts were to determine which 'while' loop would "cost" more in each iteration of the 'for' loop, consider only the higher of the two, and sum that up, but obviously that didn't lead me to an answer.
Would appreciate any help, especially with the process and overall approach for computing the complexity of code such as this.
You are right that the first while loop has time complexity O(log(j)). The second while loop repeatedly executes square root on y until it's less than 2.
Since y is approximately j (it's between j and 2j), the question is: how often can you perform a square root on j until you get a number less than or equal to 2? But equivalently you could ask: how often can you square 2 until you get a number larger than or equal to j? Or as an equation:
(((2^2)^...)^2 >= j // k repeated squares
<=> 2^(2^k) >= j
<=> k >= log(log(j))
So the second while loop has time complexity O(log(log(j))). That's negligible compared to O(log(j)). Now since j <= n and the outer loop is iterated n times, we get the overall complexity O(n log(n)).
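To see the two growth rates concretely, here is a small Java sketch (my own harness; I use a double for y so that sqrt behaves as expected) that counts the iterations of both while loops across the whole run:

public class TwoWhileLoops {
    public static void main(String[] args) {
        int n = 1 << 20;
        long doublings = 0, sqrts = 0;
        for (int j = n; j >= 1; j--) {
            double y = 1;
            while (y < j) {       // about log2(j) iterations
                y *= 2;
                doublings++;
            }
            while (y > 2) {       // about log2(log2(j)) iterations
                y = Math.sqrt(y);
                sqrts++;
            }
        }
        // Expect doublings ~ n*log2(n) and sqrts to be far smaller
        System.out.println("doublings: " + doublings + ", sqrts: " + sqrts);
    }
}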

Finding best algorithm for sum of a section of an array's values

Given an array of n integers in the locations A[1], A[2], …, A[n], describe an O(n^2) time algorithm to
compute the sum A[i] + A[i+1] + … + A[j] for all i, j, 1 ≤ i < j ≤ n.
I've tried multiple ways of solving this problem but none of them run in O(n^2) time.
So for an array containing {1,2,3,4}
You would output:
1+2 = 3
1+2+3 = 6
1+2+3+4 = 10
2+3 = 5
2+3+4 = 9
3+4 = 7
The answer does not need to be in a specific language, pseudocode is preferred.
Good preparation is everything.
You could create an array of integrals:
I[0..n] = (0, I[0] + A[1], I[1] + A[2], ..., I[n-1]+A[n]);
This will cost you O(n) * O(1) (looping over all elements and doing one addition);
Now you can calculate each Sum(A, i, j) with just a single subtraction: I[j] - I[i-1];
so each sum has O(1).
Looping over all combinations of i and j with 1 <= i < j <= n has O(n^2).
So you end up with O(n) * O(1) + O(n^2) * O(1) = O(n^2).
Edit:
Your array A starts at index 1 - I adapted to this - which also solves the little quirk with i-1.
So the integral array I starts with index 0 and is 1 element larger than A
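As an illustration, a Java version of this integral idea might look like the following sketch (class and variable names are mine; A is stored 0-based, and I is one element longer, as described above):

public class PrefixSums {
    public static void main(String[] args) {
        int[] A = {1, 2, 3, 4};   // the question's A[1..n], stored 0-based
        int n = A.length;

        // Build the integral array: I[t] = A[1] + ... + A[t], with I[0] = 0.  O(n)
        long[] I = new long[n + 1];
        for (int t = 1; t <= n; t++) {
            I[t] = I[t - 1] + A[t - 1];
        }

        // Each range sum is a single subtraction: Sum(A, i, j) = I[j] - I[i-1].  O(1)
        // Looping over all pairs 1 <= i < j <= n gives O(n^2) in total.
        for (int i = 1; i < n; i++) {
            for (int j = i + 1; j <= n; j++) {
                System.out.println("sum(" + i + ".." + j + ") = " + (I[j] - I[i - 1]));
            }
        }
    }
}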
First you'll maybe have thought about the most naive idea:
Naive idea
Create a function that for given values of i and of j will return the sum A[i] + ... + A[j].
function sumRange(A, i, j):
    sum = 0
    for k = i to j
        sum = sum + A[k]
    return sum
Then generate all pairs of i and j (with i < j) and call the above function for each pair:
for i = 1 to n
    for j = i+1 to n
        output sumRange(A, i, j)
This is not O(n²), because already the two loops on i and j represent O(n²) iterations, and then the function will perform yet another loop, making it O(n³).
Better idea
The above can be improved. Look at the repetition it performs: the sum that was calculated for given values of i and j can be reused when j increases by 1, instead of starting from scratch and summing the values between i and (now) j-1 again, only to add one more value to it.
We should just remember what the previous sum was, and add A[j] to it.
So without a separate function:
for i = 1 to n
    sum = A[i]
    for j = i+1 to n
        sum = sum + A[j]
        output sum
Note how the sum is not reset to 0 once it is output. It is preserved, so that when j is incremented, only one value needs to be added to it.
Now it is O(n²). Note also how it does not require an extra array for storage. It only needs the memory for a few variables (i, j, sum), so its space complexity is O(1).
As the number of sums you need to output is O(n²), there is no way to improve this time complexity any further.
NB: I assume here that single array values do not constitute a "sum". As you stated in your question, i < j, and also in your example you only showed sums of at least two array values. The above can be easily adapted to also include single value "sums" if ever that were needed.
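For completeness, a direct Java translation of the pseudocode above (just a sketch; it uses 0-based indexing instead of the question's 1-based) could be:

public class RangeSums {
    public static void main(String[] args) {
        int[] A = {1, 2, 3, 4};
        int n = A.length;
        for (int i = 0; i < n; i++) {
            long sum = A[i];
            for (int j = i + 1; j < n; j++) {
                sum += A[j];          // extend the previous sum by one element
                System.out.println(sum);
            }
        }
    }
}

For the example array this prints 3, 6, 10, 5, 9, 7, matching the expected output.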

Run time analysis on a nested for loop

I am trying to find the run time on each line, when the best case and worst case would occur, and the Big-O in worst and best case.
What the code I pasted does is find the length of the longest sequence of ascending numbers in an array.
For example if we had [4,5,6,9,1,2,3,4,5,6] , the longest sequence would be 6.
will the first for loop will be executed n times?
will the second for loop be executed n times?
will the if statement be executed n times?
will the statement inside the if, be executed n times?
Will the best case occur when the array is in ascending order and the worst occur when the array is in descending order?
The reason I don't believe this to be true is that when the array is sorted in ascending order, the second loop will run all the way through, while when it is sorted in descending order, the second loop will always break immediately because the && condition does not hold.
for (i = 0, length = 1; i < n-1; i++) {
    for (i1 = i2 = k = i; k < n-1 && a[k] < a[k+1]; k++, i2++);
    if (length < i2 - i1 + 1)
        length = i2 - i1 + 1;
}
return length;
First, note that the variables i1 and i2 are not really needed, as i1 == i and i2 == k at each iteration of the inner loop, so we can just write:
for (i = 0, length = 1; i < n-1; i++) {
    for (k = i; k < n-1 && a[k] < a[k+1]; k++);
    if (length < k - i + 1)
        length = k - i + 1;
}
return length;
The outer loop will execute n-1 times. No difference in worst/best case there.
The best case occurs when a is a non-increasing sequence. In that case a[k] < a[k+1] will never be true, so the inner loop's condition is only evaluated once per outer iteration. The return value will in that case be 1, and the time complexity O(n).
The worst case occurs when a is an ever-increasing sequence. In that case a[k] < a[k+1] will always be true, so the inner loop will iterate n-1-i times, and its condition is evaluated one more time when k < n-1 becomes false.
The if condition and the adjustment of length execute in constant time.
In the worst case, the inner loop's steps execute in total as many times as one can make a multiset of 2 elements (represented by i and k) from a set of n-1 elements. The formula for that is (n-1)n/2, which is O(n²).
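To confirm the best and worst cases, here is a small Java harness (my own construction) that counts the inner loop's iterations for an ascending and a descending input:

public class LongestRun {
    // Returns the longest ascending run; steps[0] accumulates inner-loop iterations.
    static int longestAscendingRun(int[] a, long[] steps) {
        int n = a.length, length = 1;
        for (int i = 0; i < n - 1; i++) {
            int k = i;
            while (k < n - 1 && a[k] < a[k + 1]) {
                k++;
                steps[0]++;
            }
            if (length < k - i + 1)
                length = k - i + 1;
        }
        return length;
    }

    public static void main(String[] args) {
        int n = 1000;
        int[] asc = new int[n], desc = new int[n];
        for (int i = 0; i < n; i++) {
            asc[i] = i;        // worst case: ever-increasing
            desc[i] = n - i;   // best case: non-increasing
        }
        long[] worst = {0}, best = {0};
        System.out.println("ascending:  run=" + longestAscendingRun(asc, worst)
                + ", inner steps=" + worst[0]);  // expect (n-1)n/2 = 499500
        System.out.println("descending: run=" + longestAscendingRun(desc, best)
                + ", inner steps=" + best[0]);   // expect 0
    }
}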

Proposed analysis of algorithm

I have been practicing analyzing algorithms lately. I feel like I have a pretty good understanding of analyzing non-recursive algorithms, but I am unsure about recursive ones, which I have only just begun to study in depth. I also have not had a formal check on my methods, so I don't know whether what I have been doing is correct.
Would it be too much to ask for someone to check a few algorithms that I have implemented and analyzed, and to tell me whether my understanding is along the right lines or I am completely off?
Here they are:
1)
sum = 0;
for (i = 0; i < n; i++){
    for (j = 0; j < i*i; j++){
        if (j % i == 0) {
            for (k = 0; k < j; k++){
                sum++;
            }
        }
    }
}
My analysis of this one was O(n^5) due to:
Sum(i = 0 to n)[Sum(j = 0 to i^2)[Sum(k = 0 to j) of 1]]
which evaluated to:
(1/2)(n^5/5 + n^4/2 + n^3/3 - n/30) + (1/2)(n^3/3 + n^2/2 + n/6) + (1/2)(n^3/3 + n^2/2 + n/6) + n + 1.
Hence it is O(n^5)
Is this correct as an evaluation of the triple summation over the loops? I have assumed that the if statement will always pass, for worst-case complexity. Is that a correct assumption for the worst case?
2)
void tonyblair (int n, int a) {
    if (a < 12) {
        for (int i = 0; i < n; i++){
            System.out.println("*");
        }
        tonyblair(n-1, a);
    } else {
        for (int k = 0; k < 3000; k++){
            for (int j = 0; j < n*k; j++){
                System.out.println("#");
            }
        }
    }
}
My analysis of this algorithm is O(infinity) due to the infinite recursion in the if branch, if its condition is assumed to be true - that would be the worst case. Although, for pure analysis, I also analyzed it as if the condition were false, so the if branch would not run. I then got a complexity of O(nk) due to:
Sum(k = 0 to 3000)[Sum(j = 0 to n*k) of 1]
which then evaluated to nk(3001) + 3001. Hence it is O(nk), where k is not discarded because it controls the number of iterations of the loop.
Number 1
I can't tell how you've derived your formula. Usually adding terms happens when there are multiple steps in an algorithm, such as precomputing data and then looking up values from the data. Instead, nested for loops imply multiplication. Also, the worst case is the best case for this snippet of code, because given a value of n, sum will be the same at the end.
To find the complexity, we want to find the number of times that the inner loop is evaluated. Summations are often easy to solve if they go from 1 to n, so I'm going to drop the 0s from them later on. If i is 0, the middle loop won't run, and if j is 0, the inner loop won't run. We can rewrite the code equivalently as:
sum = 0;
for (i = 1; i < n; i++)
{
    for (j = 1; j < i*i; j++)
    {
        if (j % i == 0)
        {
            for (k = 0; k < j; k++)
            {
                sum++;
            }
        }
    }
}
I could make my life harder by forcing the outer loop to start at 2, but I'm not going to. The outer loop now runs from 1 to n-1. The middle loop runs based on the current value of i, so we need to do a summation:
Sum(1 <= i <= n-1 : (cost of the middle loop for this i))
The middle for loop always goes to (i^2 - 1), and j will only be divisible by i for a total of (i - 1) times (i, i*2, i*3, ..., i*(i-2), i*(i-1)). With this, we get:
Sum(1 <= i <= n-1 : Sum(1 <= j <= i-1 : (cost of the inner loop)))
The inner loop then executes j times, where j is the code's j. The j in our summation is not the same as the j in the code though. The j in the summation counts how many times the inner loop has been reached so far, and each time that happens, the j in the code equals i * (the j in the summation). Therefore, we have:
Sum(1 <= i <= n-1 : Sum(1 <= j <= i-1 : i*j))
We can move the i to in-between the two summations, as it is a constant for the inner summation. Then, the formula for the sum of 1 to n is well known: n*(n+1)/2. Because we are only going to i-1, we must subtract the last term i out. This gives:
Sum(1 <= i <= n-1 : i * (i*(i+1)/2 - i)) = Sum(1 <= i <= n-1 : (i^3 - i^2)/2)
The summations for the sum of squares and the sum of cubes are also well known. Keeping in mind that we are only summing to n-1 in both cases, we must remember to subtract n^3 and n^2, respectively, and we get:
(1/2) * ((n^2(n+1)^2/4 - n^3) - (n(n+1)(2n+1)/6 - n^2))
This is obviously n^4. If we solve it all the way, we get:
n^4/8 - 5n^3/12 + 3n^2/8 - n/12
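As a sanity check, a short Java sketch (my own harness, not part of the question) can compare the final value of sum against this polynomial for small n; (3n^4 - 10n^3 + 9n^2 - 2n)/24 is the same expression over a common denominator:

public class ClosedFormCheck {
    public static void main(String[] args) {
        for (int n = 1; n <= 40; n++) {
            long sum = 0;
            // The snippet from the question; i = 0 contributes nothing
            // (the bound is j < 0*0), so j % i is never evaluated for i == 0.
            for (int i = 0; i < n; i++)
                for (int j = 0; j < i * i; j++)
                    if (j % i == 0)
                        for (int k = 0; k < j; k++)
                            sum++;
            long m = n;
            long closed = (3 * m * m * m * m - 10 * m * m * m + 9 * m * m - 2 * m) / 24;
            if (sum != closed)
                System.out.println("mismatch at n=" + n);
        }
        System.out.println("done");
    }
}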
Number 2
For the last one, it is in fact O(infinity) if a < 12 because of the if statement. Well, technically everything is O(infinity), because Big-O only provides an upper bound on runtime. If a < 12, it is also omega(infinity) and theta(infinity). If only the else runs, then we have the summation from 1 to 2999 of i*n:
Sum(1 <= i <= 2999 : i*n) = n * Sum(1 <= i <= 2999 : i) = n * (2999 * 3000 / 2) = 4498500 * n
It's very important to notice that the summation from 1 to 2999 is a constant (it's 4498500). No matter how large a constant is, it's still a constant, and not dependent on n. We will end up throwing it out of the runtime calculations. Sometimes, when a theoretically fast algorithm has a large constant, it is practically slower than other algorithms that are theoretically slow. One example I can think of is Chazelle's linear time triangulation algorithm. No one has ever implemented it. In any case, we have 4498500 * n, which is theta(n).

Is this worst-case analysis correct?

Here is the code:
int Outcome = 0;
for (int i = 0; i < N; i++)
    for (int j = i+2; j = 0; j--)
        Outcome += i*j;
Here's my analysis. Since the first line is an assignment statement, it takes exactly one time unit, O(1). The breakdown for line 2 is: 1 + (N+1) + N = 2N + 2. For line 3, since the loop's content is a single operation, the loop and its block perform i+1 operations, and this is a nested for loop. Finally, line 4 takes exactly one time unit to execute. Therefore, the big-O notation for this code in terms of N is O(N^2).
To be exact: As you say, line 4 is 1 operation. For a specific i, you execute the inner loop i+3 times. Therefore, your total number of operations is
sum(0 <= i <= N-1 : i+3)
= 3N + sum(0 <= i <= N-1 : i)
= 3N + N(N-1) / 2
= N^2/2 + 5N/2
= O(N^2)
Your intuition is correct about the final efficiency class, but it is possible to be more rigorous. The first thing is that you usually just pick the most expensive basic operation to count for your analysis. In this case it would likely be the multiplication in the innermost loop, which is executed once per iteration. So how many times is it called? On the first iteration of the outermost loop, the inner loop will iterate three times (j = 2, 1, 0). On the second outer iteration, it will be four times, and similarly up to N+2 times on the last (I'm assuming the inner loop condition is meant to be j >= 0). So that leaves us with the following summation:
sum(3, 4, 5, 6, ..., N+2)
= sum(1, 2, 3, 4, ..., N+2) - 1 - 2
= (N+2)(N+3)/2 - 3
Which is in O(N²) (and actually since you have this specific result that will always be the same you can say it's in ϴ(N²)).
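That count is easy to verify with a short Java sketch (my own, using the assumed j >= 0 condition):

public class InnerLoopCount {
    public static void main(String[] args) {
        int N = 100;
        long multiplications = 0;
        long outcome = 0;
        for (int i = 0; i < N; i++) {
            for (int j = i + 2; j >= 0; j--) {  // assuming j >= 0 was intended
                outcome += i * j;
                multiplications++;
            }
        }
        // Expect sum(3, 4, ..., N+2) = (N+2)(N+3)/2 - 3
        long expected = (long) (N + 2) * (N + 3) / 2 - 3;
        System.out.println(multiplications + " vs expected " + expected);
    }
}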
