int sum = 0;
for (int i = 1; i < n; i++) {
    for (int j = 1; j < i * i; j++) {
        if (j % i == 0) {
            for (int k = 0; k < j; k++) {
                sum++;
            }
        }
    }
}
I don't understand how, when j = i, 2i, 3i, ..., the last for loop runs n times. I guess I just don't understand how we came to that conclusion based on the if statement.
Edit: I know how to compute the complexity for all the loops except for why the last loop executes i times based on the mod operator... I just don't see how it's i. Basically, why can't j % i go up to i * i rather than i?
Let's label the loops A, B and C:
int sum = 0;
// loop A
for (int i = 1; i < n; i++) {
    // loop B
    for (int j = 1; j < i * i; j++) {
        if (j % i == 0) {
            // loop C
            for (int k = 0; k < j; k++) {
                sum++;
            }
        }
    }
}
Loop A iterates O(n) times.
Loop B iterates O(i²) times per iteration of A. For each of these iterations:
j % i == 0 is evaluated, which takes O(1) time.
On 1/i of these iterations, loop C iterates j times, doing O(1) work per iteration. Since j is O(i²) on average, and this is only done for 1/i of the iterations of loop B, the average cost is O(i² / i) = O(i).
Multiplying all of this together, we get O(n × i² × (1 + i)) = O(n × i³). Since i is on average O(n), this is O(n⁴).
The tricky part of this is saying that the if condition is only true 1/i of the time:
Basically, why can't j % i go up to i * i rather than i?
In fact, j does go up to j < i * i, not just up to j < i. But the condition j % i == 0 is true if and only if j is a multiple of i.
The multiples of i within the range are i, 2*i, 3*i, ..., (i-1) * i. There are i - 1 of these, so loop C is reached i - 1 times despite loop B iterating i * i - 1 times.
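If you want to see this concretely, here is a minimal sketch of my own (not part of the code in the question) that counts how often the condition fires for a few small values of i; the printed count is always i - 1:

public class MultipleCount {
    public static void main(String[] args) {
        for (int i = 2; i <= 6; i++) {
            int hits = 0;
            // same bounds and condition as loop B
            for (int j = 1; j < i * i; j++) {
                if (j % i == 0) {
                    hits++;   // loop C would be entered here
                }
            }
            // prints i - 1 for every i
            System.out.println("i = " + i + ": loop C reached " + hits + " times");
        }
    }
}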
The first loop consumes n iterations.
The second loop consumes up to n*n iterations: imagine the case when i is close to n; then j runs up to nearly n*n.
The third loop contributes another factor of n: although it can run up to j ≈ n*n times, it is entered only when j is one of the roughly i multiples of i below i*i. Averaged over the second loop, that is on the order of i iterations per step, and i is bounded by n in the worst case.
Thus, the code complexity is O(n×n×n×n).
I hope this helps you understand.
All the other answers are correct; I just want to amend the following. I wanted to see whether the reduced number of executions of the inner k-loop was sufficient to bring the actual complexity below O(n⁴), so I wrote the following:
for (int n = 1; n < 363; ++n) {
    int sum = 0;
    for (int i = 1; i < n; ++i) {
        for (int j = 1; j < i * i; ++j) {
            if (j % i == 0) {
                for (int k = 0; k < j; ++k) {
                    sum++;
                }
            }
        }
    }
    long cubic = (long) Math.pow(n, 3);
    long hypCubic = (long) Math.pow(n, 4);
    double relative = sum / (double) hypCubic;
    System.out.println("n = " + n + ": iterations = " + sum +
            ", n³ = " + cubic + ", n⁴ = " + hypCubic + ", rel = " + relative);
}
After executing this, it becomes obvious that the complexity is in fact n⁴. The last lines of output look like this:
n = 356: iterations = 1989000035, n³ = 45118016, n⁴ = 16062013696, rel = 0.12383254507467704
n = 357: iterations = 2011495675, n³ = 45499293, n⁴ = 16243247601, rel = 0.12383580700180696
n = 358: iterations = 2034181597, n³ = 45882712, n⁴ = 16426010896, rel = 0.12383905075183874
n = 359: iterations = 2057058871, n³ = 46268279, n⁴ = 16610312161, rel = 0.12384227647628734
n = 360: iterations = 2080128570, n³ = 46656000, n⁴ = 16796160000, rel = 0.12384548432498857
n = 361: iterations = 2103391770, n³ = 47045881, n⁴ = 16983563041, rel = 0.12384867444612208
n = 362: iterations = 2126849550, n³ = 47437928, n⁴ = 17172529936, rel = 0.1238518469862343
What this shows is that the ratio between the actual iteration count of this code segment and n⁴ is asymptotic towards a value around 0.124... (actually 0.125). While it does not give us the exact constant, we can deduce the following:
Time complexity is n⁴/8 ~ f(n) where f is your function/method.
The Wikipedia page on Big O notation states, in the 'Family of Bachmann–Landau notations' table, that ~ means the two sides are asymptotically equal, i.e. their quotient tends to 1. Or:
f is equal to g asymptotically
(I chose 363 as the excluded upper bound because n = 362 is the last value for which we get a sensible result. Beyond that, the int counter sum overflows and the relative value becomes negative.)
User kaya3 figured out the following:
The asymptotic constant is exactly 1/8 = 0.125, by the way; here's the exact formula via Wolfram Alpha.
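For what it's worth, this matches the closed form derived in a later answer (the f4 method below); assuming that formula is correct, the exact iteration count is (n-1) * n * (n-2) * (3n-1) / 24, whose leading term is

3n⁴ / 24 = n⁴ / 8

which is exactly the 0.125 observed above.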
Remove if and modulo without changing the complexity
Here's the original method:
public static long f(int n) {
    int sum = 0;
    for (int i = 1; i < n; i++) {
        for (int j = 1; j < i * i; j++) {
            if (j % i == 0) {
                for (int k = 0; k < j; k++) {
                    sum++;
                }
            }
        }
    }
    return sum;
}
If you're confused by the if and the modulo, you can just refactor them away, letting j jump directly from i to 2*i to 3*i and so on:
public static long f2(int n) {
    int sum = 0;
    for (int i = 1; i < n; i++) {
        for (int j = i; j < i * i; j += i) {
            for (int k = 0; k < j; k++) {
                sum++;
            }
        }
    }
    return sum;
}
To make it even easier to calculate the complexity, you can introduce an intermediate variable j2, so that every loop variable is incremented by 1 at each iteration:
public static long f3(int n) {
    int sum = 0;
    for (int i = 1; i < n; i++) {
        for (int j2 = 1; j2 < i; j2++) {
            int j = j2 * i;
            for (int k = 0; k < j; k++) {
                sum++;
            }
        }
    }
    return sum;
}
You can use a debugger, or old-school System.out.println, to check that the sequence of (i, j, k) triplets is the same in each method.
Closed form expression
As mentioned by others, you can use the fact that the sum of the first n integers is equal to n * (n+1) / 2 (see triangular numbers). If you apply this simplification to every loop, you get:
public static long f4(int n) {
    // computed in long arithmetic to avoid int overflow for larger n
    return (long) (n - 1) * n * (n - 2) * (3L * n - 1) / 24;
}
It is obviously not the same complexity as the original code but it does return the same values.
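To convince yourself of that, here is a quick sanity check of my own (it assumes all four methods sit in the same class): compare the four methods for every n small enough to avoid overflow.

public static void main(String[] args) {
    for (int n = 0; n < 150; n++) {
        long expected = f(n);
        // f2 and f3 skip the modulo test, f4 is the closed form
        if (f2(n) != expected || f3(n) != expected || f4(n) != expected) {
            System.out.println("mismatch at n = " + n);
            return;
        }
    }
    System.out.println("f, f2, f3 and f4 agree for all tested n");
}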
If you google the first terms, you can notice that 0 0 0 2 11 35 85 175 322 546 870 1320 1925 2717 3731 appear in "Stirling numbers of the first kind: s(n+2, n).", with two 0s added at the beginning. It means that sum is the Stirling number of the first kind s(n, n-2).
Let's have a look at the first two loops.
The first one is simple: it loops from 1 to n. The second one is more interesting: it goes from 1 up to (but not including) i squared. Let's see some examples:
e.g. n = 4
i = 1: j loops from 1 to 1^2
i = 2: j loops from 1 to 2^2
i = 3: j loops from 1 to 3^2
In total, the i and j loops combined perform about 1^2 + 2^2 + 3^2 iterations.
There is a formula for the sum of the first n squares, n * (n+1) * (2n+1) / 6, which is O(n^3).
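For the n = 4 example above, that checks out: 1^2 + 2^2 + 3^2 = 14, and plugging n - 1 = 3 into the formula gives 3 * 4 * 7 / 6 = 14.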
You have one last k loop which loops from 0 to j, but only when j % i == 0. Since j goes from 1 to i^2, the condition is true for roughly i values of j (the multiples of i), and each time it is true the k loop runs j = O(i^2) times. Averaged over the j loop, that is an extra factor of O(i), which is O(n).
So you have O(n^3) from the i and j loops and another O(n) from the k loop, for a grand total of O(n^4).
Can I get some help in understanding how to solve this tutorial question? I still do not understand my professor's explanation. I am unsure how to count the big O for the third/innermost loop. She explains that the answer for this algorithm is O(n^2) and that the 2nd and 3rd loops have to be seen as one loop with a big O of O(n). Can someone please explain the big O notation for the 2nd/3rd loops in basic layman's terms?
Assuming n = 2^m
for (int i = n; i > 0; i--) {
    for (int j = 1; j < n; j *= 2) {
        for (int k = 0; k < j; k++) {
        }
    }
}
As far as I understand, the first loop has a big O notation of O(n)
Second loop = log(n)
Third loop = log(n) (since the number of times it will be looped has been reduced by log n) * 2^(2^m - 1) (to represent the increase in j?)
Let's add a print statement to the innermost loop:
for (int j = 1; j < n; j *= 2) {
    for (int k = 0; k < j; k++) {
        print(1);
    }
}
The output for each value of j is:
j = 1: 1
j = 2: 1 1
j = 4: 1 1 1 1
...
j = n/2: 1 1 1 ... 1 (n/2 times; the last value j takes below n = 2^m is 2^(m-1) = n/2).
The question boils down to how many 1s this prints per iteration of the outer loop. That number is
2^0 + 2^1 + 2^2 + ... + 2^(m-1)
= 2^m - 1
= n - 1
= O(n),
assuming you know why 1 + 2 + 4 + ... + n/2 = O(n): it is a geometric series, whose sum is proportional to its largest term.
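If you would rather check this count empirically, here is a small sketch of my own (with a counter in place of the print statement); it confirms the middle and inner loops together do exactly n - 1 steps:

public class GeometricCount {
    public static void main(String[] args) {
        for (int m = 1; m <= 20; m++) {
            int n = 1 << m;   // n = 2^m
            long count = 0;
            for (int j = 1; j < n; j *= 2) {
                for (int k = 0; k < j; k++) {
                    count++;   // stands in for print(1)
                }
            }
            System.out.println("n = " + n + ": count = " + count + ", n - 1 = " + (n - 1));
        }
    }
}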
O-notation is an upper bound, so you can certainly say the code is O(n^2). In fact that bound is tight: the second and third loops together do Θ(n) work (a geometric series), so the whole thing is Θ(n * n) = Θ(n^2).
It's because of the doubling. The third loop runs j times, and j takes the values 1, 2, 4, ..., n/2, which sum to n - 1. That is why your teacher says to view the second and third loop as O(n) together.
The second loop runs O(log(n)) times, but the third loop runs j times, and j doubles on each iteration of the second loop. So together the second and third loops cost O(1 + 2 + 4 + ... + n/2) = O(n).
for (int i = n; i > 0; i--) {          // This runs n times
    for (int j = 1; j < n; j *= 2) {   // This runs at most log(n) times, i.e. m times
        for (int k = 0; k < j; k++) {  // This runs j times; summed over the middle loop, that is 1 + 2 + ... + n/2 = O(n)
        }
    }
}
Hence you cannot simply take the product of all three bounds, because the inner loop's trip count depends on j: the middle and inner loops together do only O(n) work per iteration of the outer loop.
An upper bound can be loose or tight.
You can say the code is loosely bounded by O(n^2 log n) (the naive product of the three individual bounds) and tightly bounded by O(n^2).
I would like to know the time complexity of this algorithm and how it is calculated.
for (i = 1; i < 2 * n; i++) {
    for (j = 1; j < i; j++) {
        for (k = 1; k < j; k++) {
            // do something
        }
    }
}
Assume the inner statement takes constant time.
The inner loop runs (j-1) times, hence its run time is
t_inner(j) = Sum { k from 1 to j-1 } 1
           = j - 1
The middle loop runs i-1 times. Its run time is:
t_middle(i) = Sum { j from 1 to i-1 } t_inner(j)
            = Sum { j from 1 to i-1 } (j - 1)
            = 1/2 * (i^2 - 3i + 2)
The outer loop runs 2n-1 times. Its run time is:
t_outer(n) = Sum { i from 1 to 2n-1 } t_middle(i)
           = Sum { i from 1 to 2n-1 } 1/2 * (i^2 - 3i + 2)
           = 1/3 * (4n^3 - 12n^2 + 11n - 3)
From the last formula, we see that the time complexity is O(n^3).
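If you want to double-check that closed form, here is a small sketch of my own that brute-force counts the executions of the inner statement and compares them against the formula:

public class CubicCheck {
    public static void main(String[] args) {
        for (int n = 1; n <= 10; n++) {
            long count = 0;
            for (int i = 1; i < 2 * n; i++) {
                for (int j = 1; j < i; j++) {
                    for (int k = 1; k < j; k++) {
                        count++;   // the "do something" statement
                    }
                }
            }
            long formula = (4L * n * n * n - 12L * n * n + 11L * n - 3) / 3;
            System.out.println("n = " + n + ": count = " + count + ", formula = " + formula);
        }
    }
}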
I've got to analyze this loop, among others, and determine its running time using Big-O notation.
for (int i = 0; i < n; i += 4)
    for (int j = 0; j < n; j++)
        for (int k = 1; k < j * j; k *= 2)
Here's what I have so far:
for (int i = 0; i < n; i += 4)        = n
for (int j = 0; j < n; j++)           = n
for (int k = 1; k < j * j; k *= 2)    = log^2 n
Now the problem I'm coming to is the final running time of the loop. My best guess is O(n^2); however, I am uncertain whether this is correct. Can anyone help?
Edit: sorry about the Oh -> O thing. My textbook uses "Big-Oh"
First note that the outer loop is independent of the remaining two: it simply adds an (n/4) multiplier. We will account for it later.
Now let's consider the complexity of
for (int j = 0; j < n; j++)
    for (int k = 1; k < j * j; k *= 2)
We have the following sum:
0 + log2(1 * 1) + log2(2 * 2) + ... + log2(n * n)
It is good to note that log2(n^2) = 2 * log2(n). Thus we can rewrite the sum as:
2 * (0 + log2(1) + log2(2) + ... + log2(n))
It is not very easy to analyze this sum, but take a look at this post. Using Stirling's approximation one can show that it belongs to O(n * log(n)). Thus the overall complexity is O((n/4) * 2 * n * log(n)) = O(n^2 * log(n)).
In terms of j, the inner loop takes O(log_2(j^2)) time, but since log_2(j^2) = 2 * log_2(j), it is actually O(log(j)).
Each iteration of the middle loop takes O(log(j)) time (to do the inner loop), so we need to sum:
Sum { log(j) | j = 1, ..., n-1 } = log(1) + log(2) + ... + log(n-1) = log((n-1)!)
And since log((n-1)!) is in O((n-1) * log(n-1)) = O(n * log(n)), we can conclude that the middle loop takes O(n * log(n)) operations.
Note that both the middle and inner loops are independent of i, so to get the total complexity we can just multiply n/4 (the number of repeats of the outer loop) by the complexity of the middle loop, and get:
O((n/4) * n * log(n)) = O(n^2 * log(n))
So, the total complexity of this code is O(n^2 * log(n)).
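As a rough empirical check (a sketch of my own, not from the question or the answers above), you can count the innermost iterations and divide by n^2 * log2(n); the ratio settles near a constant, consistent with O(n^2 * log(n)):

public class N2LogNCheck {
    public static void main(String[] args) {
        for (int n = 64; n <= 4096; n *= 4) {
            long count = 0;
            for (int i = 0; i < n; i += 4) {
                for (int j = 0; j < n; j++) {
                    for (int k = 1; k < j * j; k *= 2) {
                        count++;
                    }
                }
            }
            double bound = (double) n * n * (Math.log(n) / Math.log(2));
            System.out.println("n = " + n + ": count = " + count
                    + ", count / (n^2 log2 n) = " + count / bound);
        }
    }
}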
The time complexity of a loop is considered O(n) if its loop variable is incremented/decremented by a constant amount (c in the examples below):
for (int i = 1; i <= n; i += c) {
    // some O(1) expressions
}

for (int i = n; i > 0; i -= c) {
    // some O(1) expressions
}
Time complexity of nested loops is equal to the number of times the innermost statement is executed. For example the following sample loops have O(n²) time complexity:
for (int i = 1; i <= n; i += c) {
    for (int j = 1; j <= n; j += c) {
        // some O(1) expressions
    }
}

for (int i = n; i > 0; i -= c) {
    for (int j = i + 1; j <= n; j += c) {
        // some O(1) expressions
    }
}
The time complexity of a loop is considered O(log n) if its loop variable is divided/multiplied by a constant amount:
for (int i = 1; i <= n; i *= c) {
    // some O(1) expressions
}

for (int i = n; i > 0; i /= c) {
    // some O(1) expressions
}
Now we have:
for (int i = 0; i < n; i += 4)          <----- runs n times
    for (int j = 0; j < n; j++)         <----- for every i, runs n times again
        for (int k = 1; k < j * j; k *= 2)   <--- for every j, runs a logarithmic number of times
So the complexity is O(n² log m) where m is n², which can be simplified to O(n² log n) because n² log m = n² log(n²) = n² * 2 log n ~ n² log n.
I'm working on a data structures course and I'm not sure how to proceed w/ this Big O analysis:
sum = 0;
for (i = 1; i < n; i++)
    for (j = 1; j < i * i; j++)
        if (j % i == 0)
            for (k = 0; k < j; k++)
                sum++;
My initial idea is that this is O(n^3) after reduction, because the innermost loop will only run when j/i has no remainder and the multiplication rule is inapplicable. Is my reasoning correct here?
Let's ignore the outer loop for a second here, and let's analyze it in terms of i.
The mid loop runs i^2 times and invokes the inner loop whenever j % i == 0, that is, for j = i, 2i, 3i, ..., (i-1)*i; each time, the inner loop runs up to the relevant j. This means the inner loop's total running time, summed over the mid loop, is:
i + 2i + 3i + ... + (i-1)*i = i * (1 + 2 + ... + (i-1)) = i * [i * (i-1) / 2]
The last equality comes from the formula for the sum of an arithmetic progression.
The above is in O(i^3).
Repeat this for the outer loop, which runs from 1 to n, and you get a running time of O(n^4), since you actually have:
C*1^3 + C*2^3 + ... + C*(n-1)^3 = C*(1^3 + 2^3 + ... + (n-1)^3) =
= C/4 * (n^4 - 2n^3 + n^2)
The last equality comes from the formula for the sum of cubes.
And the above is in O(n^4), which is your complexity.
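As a quick numeric check of the per-i total above (a sketch of my own, not part of the original answer), you can accumulate loop k's iteration counts for each i and compare them against i * [i * (i-1) / 2]:

public class PerICheck {
    public static void main(String[] args) {
        for (int i = 2; i <= 8; i++) {
            long count = 0;
            for (int j = 1; j < i * i; j++) {
                if (j % i == 0) {
                    count += j;   // the k loop runs j times here
                }
            }
            long formula = (long) i * i * (i - 1) / 2;
            System.out.println("i = " + i + ": count = " + count + ", formula = " + formula);
        }
    }
}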