int sum = 0;
for (int i = 1; i < n; i++) {
    for (int j = 1; j < i * i; j++) {
        if (j % i == 0) {
            for (int k = 0; k < j; k++) {
                sum++;
            }
        }
    }
}
I don't understand how, when j = i, 2i, 3i, ..., the last for loop runs n times. I guess I just don't understand how we came to that conclusion based on the if statement.
Edit: I know how to compute the complexity for all the loops except for why the last loop executes i times based on the mod operator... I just don't see how it's i. Basically, why can't j % i go up to i * i rather than i?
Let's label the loops A, B and C:
int sum = 0;
// loop A
for (int i = 1; i < n; i++) {
    // loop B
    for (int j = 1; j < i * i; j++) {
        if (j % i == 0) {
            // loop C
            for (int k = 0; k < j; k++) {
                sum++;
            }
        }
    }
}
Loop A iterates O(n) times.
Loop B iterates O(i²) times per iteration of A. For each of these iterations:
j % i == 0 is evaluated, which takes O(1) time.
On 1/i of these iterations, loop C iterates j times, doing O(1) work per iteration. Since j is O(i²) on average, and this is only done for 1/i iterations of loop B, the average cost is O(i² / i) = O(i).
Multiplying all of this together, we get O(n × i² × (1 + i)) = O(n × i³). Since i is on average O(n), this is O(n⁴).
The tricky part of this is saying that the if condition is only true 1/i of the time:
Basically, why can't j % i go up to i * i rather than i?
In fact, j does go up to j < i * i, not just up to j < i. But the condition j % i == 0 is true if and only if j is a multiple of i.
The multiples of i within the range are i, 2*i, 3*i, ..., (i-1) * i. There are i - 1 of these, so loop C is reached i - 1 times despite loop B iterating i * i - 1 times.
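If it helps to see that concretely, here is a small sketch (my addition, not part of the original answer) that counts, for each i, how many values of j in [1, i*i) satisfy j % i == 0:
for (int i = 1; i <= 10; i++) {
    int count = 0;
    // count the j in [1, i*i) for which loop C would be entered
    for (int j = 1; j < i * i; j++) {
        if (j % i == 0) {
            count++;
        }
    }
    System.out.println("i = " + i + ": loop C reached " + count
            + " times, i - 1 = " + (i - 1));
}
Every line prints a count equal to i - 1, matching the multiples i, 2*i, ..., (i-1)*i.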
The first loop consumes n iterations.
The second loop consumes n*n iterations; imagine the case when i = n, then j goes up to n*n.
The third loop contributes n iterations on average: it runs j times, but only on the 1-in-i iterations of the second loop where j is a multiple of i, so its average cost per iteration of the second loop is at most i*i / i = i, which is bounded by n.
Thus, the code complexity is O(n×n×n×n).
I hope this helps you understand.
All the other answers are correct, I just want to add the following.
I wanted to see whether the reduced number of executions of the inner k-loop was enough to bring the actual complexity below O(n⁴). So I wrote the following:
for (int n = 1; n < 363; ++n) {
    int sum = 0;
    for (int i = 1; i < n; ++i) {
        for (int j = 1; j < i * i; ++j) {
            if (j % i == 0) {
                for (int k = 0; k < j; ++k) {
                    sum++;
                }
            }
        }
    }
    long cubic = (long) Math.pow(n, 3);
    long hypCubic = (long) Math.pow(n, 4);
    double relative = sum / (double) hypCubic;
    System.out.println("n = " + n + ": iterations = " + sum +
            ", n³ = " + cubic + ", n⁴ = " + hypCubic + ", rel = " + relative);
}
After executing this, it becomes obvious that the complexity is in fact n⁴. The last lines of output look like this:
n = 356: iterations = 1989000035, n³ = 45118016, n⁴ = 16062013696, rel = 0.12383254507467704
n = 357: iterations = 2011495675, n³ = 45499293, n⁴ = 16243247601, rel = 0.12383580700180696
n = 358: iterations = 2034181597, n³ = 45882712, n⁴ = 16426010896, rel = 0.12383905075183874
n = 359: iterations = 2057058871, n³ = 46268279, n⁴ = 16610312161, rel = 0.12384227647628734
n = 360: iterations = 2080128570, n³ = 46656000, n⁴ = 16796160000, rel = 0.12384548432498857
n = 361: iterations = 2103391770, n³ = 47045881, n⁴ = 16983563041, rel = 0.12384867444612208
n = 362: iterations = 2126849550, n³ = 47437928, n⁴ = 17172529936, rel = 0.1238518469862343
What this shows is that the ratio between the actual iteration count and n⁴ is asymptotic towards a value around 0.124... (actually 0.125). While it does not give us the exact value, we can deduce the following:
Time complexity is n⁴/8 ~ f(n) where f is your function/method.
The Wikipedia page on Big O notation states, in the 'Family of Bachmann–Landau notations' table, that ~ means the two operand sides are asymptotically equal, i.e. the limit of their ratio is 1. Or:
f is equal to g asymptotically
(I chose 363 as the excluded upper bound because n = 362 is the last value for which we get a sensible result. After that, sum overflows the int range and the relative value becomes negative.)
User kaya3 figured out the following:
The asymptotic constant is exactly 1/8 = 0.125, by the way; here's the exact formula via Wolfram Alpha.
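For completeness, here is one way to see where the 1/8 comes from (a sketch of the arithmetic, using the j = m·i substitution from the next answer): for each i, loop C runs j = i, 2i, ..., (i−1)·i iterations in total, i.e. i·(1 + 2 + ... + (i−1)) = i²·(i−1)/2. Summing that over i = 1 ... n−1 gives
sum = (n−1)·n·(n−2)·(3n−1)/24
whose leading term is 3n⁴/24 = n⁴/8, matching the observed ratio of 0.125.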
Remove if and modulo without changing the complexity
Here's the original method:
public static long f(int n) {
    int sum = 0;
    for (int i = 1; i < n; i++) {
        for (int j = 1; j < i * i; j++) {
            if (j % i == 0) {
                for (int k = 0; k < j; k++) {
                    sum++;
                }
            }
        }
    }
    return sum;
}
If you're confused by the if and modulo, you can just refactor them away, with j jumping directly from i to 2*i to 3*i ... :
public static long f2(int n) {
    int sum = 0;
    for (int i = 1; i < n; i++) {
        for (int j = i; j < i * i; j += i) {
            for (int k = 0; k < j; k++) {
                sum++;
            }
        }
    }
    return sum;
}
To make it even easier to calculate the complexity, you can introduce an intermediary j2 variable, so that every loop variable is incremented by 1 at each iteration:
public static long f3(int n) {
    int sum = 0;
    for (int i = 1; i < n; i++) {
        for (int j2 = 1; j2 < i; j2++) {
            int j = j2 * i;
            for (int k = 0; k < j; k++) {
                sum++;
            }
        }
    }
    return sum;
}
You can use a debugger or old-school System.out.println to check that the (i, j, k) triplets visited are the same in each method.
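For example, a quick automated check (a sketch, assuming f, f2 and f3 from above are in scope) that all three variants return the same value:
// Sanity check (sketch): f, f2 and f3 should agree for every n.
for (int n = 0; n <= 50; n++) {
    if (f(n) != f2(n) || f(n) != f3(n)) {
        System.out.println("mismatch at n = " + n);
    }
}
System.out.println("f, f2 and f3 agree for n = 0..50");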
Closed form expression
As mentioned by others, you can use the fact that the sum of the first n integers is equal to n * (n+1) / 2 (see triangular numbers). If you use this simplification for every loop, you get:
public static long f4(int n) {
    // start with a long operand so the product doesn't overflow int for larger n
    return (n - 1L) * n * (n - 2) * (3 * n - 1) / 24;
}
It is obviously not the same complexity as the original code but it does return the same values.
If you google the first terms, you can notice that 0 0 0 2 11 35 85 175 322 546 870 1320 1925 2717 3731 appear in "Stirling numbers of the first kind: s(n+2, n).", with two 0s added at the beginning. It means that sum is the Stirling number of the first kind s(n, n-2).
Let's have a look at the first two loops.
The first one is simple, it's looping from 1 to n. The second one is more interesting. It goes from 1 to i squared. Let's see some examples:
e.g. n = 4
i = 1
j loops from 1 to 1^2
i = 2
j loops from 1 to 2^2
i = 3
j loops from 1 to 3^2
In total, the i and j loops combined run 1^2 + 2^2 + 3^2 iterations.
There is a formula for the sum of the first n squares, n * (n+1) * (2n+1) / 6, which is O(n^3).
You have one last k loop which loops from 0 to j, but only when j % i == 0. Since j goes from 1 to i^2, the condition j % i == 0 is true for about i values of j; when it is true, the k loop does up to i^2 work, so the average work per iteration of the j loop is on the order of i^2 / i = i, which is one extra O(n) factor.
So you have O(n^3) from the i and j loops and another O(n) from the k loop, for a grand total of O(n^4).
Related
Outer loop is O(n), 2nd loop is O(n^2) and 3rd loop is also O(n^2), but the 3rd loop is conditional.
Does that mean the 3rd loop only happens 1/n (1 every n) times and therefore total big O is O(n^4)?
for (int i = 1; i < n; i++) {
    for (int j = 1; j < (n*n); j++) {
        if (j % i == 0) {
            for (int k = 1; k < (n*n); k++) {
                // Simple computation
            }
        }
    }
}
For any given value of i between 1 and n, the complexity of this part:
for (int j = 1; j < (n*n); j++) {
    if (j % i == 0) {
        for (int k = 1; k < (n*n); k++) {
            // Simple computation
        }
    }
}
is O(n⁴/i), because the if-condition is true one i-th of the time. (Note: if i could be larger than n, then we'd need to write O(n⁴/i + n²) to include the cost of the loop iterations where the if-condition was false; but since i is known to be small enough that n⁴/i ≥ n², we don't need to worry about that.)
So the total complexity of your code, adding together the different loop iterations across all values of i, is O(n⁴/1 + n⁴/2 + n⁴/3 + ⋯ + n⁴/n) = O(n⁴ · (1/1 + 1/2 + 1/3 + ⋯ + 1/n)) = O(n⁴ log n).
(That last bit relies on the fact that, since ln(n) is the integral of 1/x from 1 to n, and 1/x is decreasing over that interval, we have ln(n) < ln(n+1) < (1/1 + 1/2 + 1/3 + ⋯ + 1/n) < 1 + ln(n).)
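If you want to check this empirically, here is a small sketch (my addition, not part of the original answer); to keep the runtime manageable it replaces the innermost k-loop by its known iteration count, n*n - 1:
for (int n = 10; n <= 80; n += 10) {
    long count = 0;
    for (int i = 1; i < n; i++) {
        for (int j = 1; j < n * n; j++) {
            if (j % i == 0) {
                count += (long) n * n - 1; // the k-loop would run n*n - 1 times
            }
        }
    }
    double estimate = Math.pow(n, 4) * Math.log(n);
    System.out.println("n = " + n + ": count = " + count
            + ", count / (n^4 ln n) = " + count / estimate);
}
The ratio settles near a constant (drifting slowly as the harmonic sum approaches ln(n) plus Euler's constant), consistent with Θ(n⁴ log n) rather than plain Θ(n⁴).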
I understand that the innermost for loop is Θ(log n),
and that the two outermost for loops together are Θ(n^2) because they form an arithmetic sum. The if-statement is my main problem. Does anyone know how to solve this?
int tally = 0;
for (int i = 1; i < n; i++)
{
    for (int j = i; j < n; j++)
    {
        if (j % i == 0)
        {
            for (int k = 1; k < n; k *= 2)
            {
                tally++;
            }
        }
    }
}
Edit:
Now I noticed loop order: i before j.
In this case, for a given value of i, j varies from i to n and there are about n/i successful if-conditions.
So the program will reach the innermost loop
n/1 + n/2 + n/3 + ... + n/n
times. This is n times the harmonic sum, which grows like n*ln(n).
Since each visit runs the innermost loop about log(n) times, the innermost statement executes about n*log^2(n) times.
As you wrote, the two outermost loops provide O(n^2) complexity, so the overall complexity is O(n^2 + n*log^2(n)); the first term dominates the second, and the overall complexity is quadratic.
int tally = 0;
for (int i = 1; i < n; i++)
{
    // N TIMES
    for (int j = i; j < n; j++)
    {
        // N*N/2 TIMES
        if (j % i == 0)
        {
            // NlogN TIMES
            for (int k = 1; k < n; k *= 2)
            {
                // N*logN*logN
                tally++;
            }
        }
    }
}
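A quick empirical sketch (my addition, not from the original answer) that compares the final tally against n·ln(n)·log2(n):
for (int n = 1000; n <= 8000; n *= 2) {
    long tally = 0;
    for (int i = 1; i < n; i++) {
        for (int j = i; j < n; j++) {
            if (j % i == 0) {
                for (int k = 1; k < n; k *= 2) {
                    tally++;
                }
            }
        }
    }
    double estimate = n * Math.log(n) * (Math.log(n) / Math.log(2));
    System.out.println("n = " + n + ": tally = " + tally
            + ", tally / (n * ln n * log2 n) = " + tally / estimate);
}
The ratio hovers slightly above 1, which supports the n*log^2(n) count for the innermost statement, while the n²/2 evaluations of the if-check still dominate the total running time.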
Old answer (wrong)
This complexity is linked with the sum of the sigma0(n) function (the number-of-divisors function), represented as sequence A006218 (the Dirichlet divisor problem).
We can see that the approximation for the sum of divisor counts for values up to n is
n * ( log(n) + 2*gamma - 1 ) + O(sqrt(n))
so the average number of successful if-conditions for loop counter j is ~log(j).
So I was wondering how I would work out the time complexity T(n) of a piece of code, for example the one below, in terms of the number of operations.
for( int i = n; i > 0; i /= 2 ) {
    for( int j = 1; j < n; j *= 2 ) {
        for( int k = 0; k < n; k += 2 ) {
            ... // constant number of operations
        }
    }
}
I'm sure it's simple, but this concept wasn't taught very well by my lecturer and I really want to know how to work out the time complexity!
Thank you in advance!
To approach this, one method is to break down the complexity of your three loops individually.
A key observation we can make is that:
(P): The number of steps in each of the loops does not depend on the value of the "index" of its parent loop.
Let's call
f(n) the number of operations aggregated in the outer loop (1)
g(n) in the intermediate inner loop (2)
h(n) in the innermost loop (3).
for( int i = n; i > 0; i /= 2 ) {         // (1): f(n)
    for( int j = 1; j < n; j *= 2 ) {     // (2): g(n)
        for( int k = 0; k < n; k += 2 ) { // (3): h(n)
            // constant number of operations => (P)
        }
    }
}
Loop (1)
Number of steps
i gets the values n, n/2, n/4, ... etc. until it reaches n/2^k where 2^k is greater than n (2^k > n), such that n/2^k = 0, at which point you exit the loop.
Another way to say it is that you have step 1 (i = n), step 2 (i = n/2), step 3 (i = n/4), ..., step k (i = n/2^(k-1)), after which the loop exits. These are k steps.
Now what is the value of k? The last step runs with n/2^(k-1) >= 1, and the loop exits when n/2^k = 0 (integer division), i.e. 2^(k-1) <= n < 2^k. Taking logarithms, k - 1 <= log2(n) < k, so k = INT(log2(n)) + 1, or loosely speaking k = log2(n).
Cost of each step
Now how many operations do you have for each individual step?
At each step of loop (1), it is g(n), according to the notations we chose and property (P).
Loop (2)
Number of steps
You have step 1 (j = 1), step 2 (j = 2), step 3 (j = 4), etc., until you reach step p (j = 2^(p-1)), where p is the smallest integer such that 2^p >= n; loosely speaking, that is log2(n) steps.
Cost of each step
The cost of each step is h(n), according to the notations we chose and property (P).
Loop (3)
Number of steps
Again, let's count the steps: step 1 (k = 0), step 2 (k = 2), step 3 (k = 4), ..., up to k = n - 1 or k = n - 2. This amounts to about n/2 steps.
Cost of each step
Because of (P), it is constant. Let's call this constant K.
All loops altogether
The number of aggregated operations is
T(n) = f(n) = sum(i = 0, i < log2(n), g(i))
            = sum(i = 0, i < log2(n), g(n))      (by property (P))
            = log2(n) * g(n)
            = log2(n) * sum(j = 0, j < log2(n), h(j))
            = log2(n) * log2(n) * h(n)
            = log2(n) * log2(n) * (n/2) * K
So T(n) = (K/2) * n * (log2(n))^2
Write a method, add a counter, return the result:
int nIterator(int n) {
    int counter = 0;
    for (int i = n; i > 0; i /= 2) {
        for (int j = 1; j < n; j *= 2) {
            for (int k = 0; k < n; k += 2) {
                ++counter;
            }
        }
    }
    return counter;
}
Run it for rapidly increasing N and document the results in a readable manner:
int old = 0;
for (int i = 0, j = 1; i < 18; ++i, j *= 2) {
    int res = nIterator(j);
    double quote = (old == 0) ? 0.0 : (res * 1.0) / old;
    System.out.printf("%6d %10d %3f\n", j, res, quote);
    old = res;
}
Result:
1 0 0,000000
2 2 0,000000
4 12 6,000000
8 48 4,000000
16 160 3,333333
32 480 3,000000
64 1344 2,800000
128 3584 2,666667
256 9216 2,571429
512 23040 2,500000
1024 56320 2,444444
2048 135168 2,400000
4096 319488 2,363636
8192 745472 2,333333
16384 1720320 2,307692
32768 3932160 2,285714
65536 8912896 2,266667
131072 20054016 2,250000
So while n increases by a factor of 2, the counter at first grows by more than a factor of 2², but the ratio then decreases steadily towards something not much higher than 2. This should help you find the way.
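In fact, for n = 2^m the counter has a simple closed form (my addition, easy to verify against the table): the outer loop runs m + 1 times, the middle loop m times, and the inner loop n/2 times, so counter = (m + 1) * m * n/2. For n = 65536 (m = 16) that gives 17 * 16 * 32768 = 8912896, exactly the value above. The ratio between successive rows therefore tends to 2 as n doubles, which is the signature of Θ(n * (log n)²) growth.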
I've got to analyze this loop, among others, and determine its running time using Big-O notation.
for ( int i = 0; i < n; i += 4 )
    for ( int j = 0; j < n; j++ )
        for ( int k = 1; k < j*j; k *= 2 )
Here's what I have so far:
for ( int i = 0; i < n; i += 4 ) = n
for ( int j = 0; j < n; j++ ) = n
for ( int k = 1; k < j*j; k *= 2 ) = log^2 n
Now the problem I'm coming to is the final running time of the loop. My best guess is O(n^2), however I am uncertain if this is correct. Can anyone help?
Edit: sorry about the Oh -> O thing. My textbook uses "Big-Oh"
First note that the outer loop is independent from the remaining two: it simply adds an (n/4) multiplier. We will consider that later.
Now let's consider the complexity of
for ( int j = 0; j < n; j++ )
    for ( int k = 1; k < j*j; k *= 2 )
We have the following sum:
0 + log2(1*1) + log2(2*2) + ... + log2((n-1)*(n-1))
It is good to note that log2(n^2) = 2 * log2(n). Thus we can re-factor the sum to:
2 * (0 + log2(1) + log2(2) + ... + log2(n-1))
It is not very easy to analyze this sum, but take a look at this post. Using Stirling's approximation one can show that it belongs to O(n*log(n)). Thus the overall complexity is O((n/4) * 2 * n * log(n)) = O(n^2 * log(n)).
In terms of j, the inner loop takes O(log_2(j^2)) time, but since
log_2(j^2) = 2*log(j), it is actually O(log(j)).
Each iteration of the middle loop takes O(log(j)) time (to do the
inner loop), so we need to sum:
sum { log(j) | j = 1, ..., n-1 } = log(1) + log(2) + ... + log(n-1) = log((n-1)!)
And since log((n-1)!) is in O((n-1)log(n-1)) = O(nlogn), we can conclude that the middle loop takes O(nlogn) operations.
Note that both the middle and inner loops are independent of i, so to
get the total complexity, we can just multiply n/4 (the number of
repeats of the outer loop) by the complexity of the middle loop, and get:
O(n/4 * nlogn) = O(n^2logn)
So, total complexity of this code is O(n^2 * log(n))
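As a sanity check (a sketch, not from either answer), you can count the innermost iterations and divide by n²·log2(n); the ratio should approach a constant (here about 1/2, i.e. the 1/4 from the outer loop's step of 4 times the 2 from log(j²)):
for (int n = 512; n <= 4096; n *= 2) {
    long count = 0;
    for (int i = 0; i < n; i += 4) {
        for (int j = 0; j < n; j++) {
            for (int k = 1; k < j * j; k *= 2) {
                count++;
            }
        }
    }
    double norm = (double) n * n * (Math.log(n) / Math.log(2));
    System.out.println("n = " + n + ": count = " + count
            + ", count / (n^2 * log2 n) = " + count / norm);
}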
The time complexity of a loop is considered O(n) if the loop variable is incremented/decremented by a constant amount (which is c in the examples below):
for (int i = 1; i <= n; i += c) {
    // some O(1) expressions
}
for (int i = n; i > 0; i -= c) {
    // some O(1) expressions
}
The time complexity of nested loops is equal to the number of times the innermost statement is executed. For example, the following sample loops have O(n²) time complexity:
for (int i = 1; i <= n; i += c) {
    for (int j = 1; j <= n; j += c) {
        // some O(1) expressions
    }
}
for (int i = n; i > 0; i -= c) {
    for (int j = i + 1; j <= n; j += c) {
        // some O(1) expressions
    }
}
The time complexity of a loop is considered O(log n) if the loop variable is divided/multiplied by a constant amount:
for (int i = 1; i <= n; i *= c) {
    // some O(1) expressions
}
for (int i = n; i > 0; i /= c) {
    // some O(1) expressions
}
Now we have:
for ( int i = 0; i < n; i += 4 )          <----- runs n/4 = O(n) times
    for ( int j = 0; j < n; j++ )          <----- for every i, again runs n times
        for ( int k = 1; k < j*j; k *= 2 ) <----- for every j, runs a logarithmic number of times
So the complexity is O(n²·log m) where m is n², which can be simplified to O(n²·log n), because n²·log(n²) = n²·2·log n ~ n²·log n.
i, j, N and sum are all integers. N is the input.
( Code1 )
i = N;
while (i > 1)
{
    i = i / 2;
    for (j = 0; j < 1000000; j++)
    {
        sum = sum + j;
    }
}
( Code2 )
sum = 0;
d = 1;
d = d << (N-1);
for (i = 0; i < d; i++)
{
    for (j = 0; j < 1000000; j++)
    {
        sum = sum + i;
    }
}
How do I calculate the step count and time complexity for Code1 and Code2?
To calculate the time complexity, try to understand what takes how much time, and in terms of which input (here N) you are counting.
If we say that an addition ("+") takes O(1) steps, then we can check how many times it is performed in terms of N.
The first code divides i by 2 at each step, so the while loop runs log(N) times. The time complexity is therefore
O(log(N) * 1000000) = O(log(N))
The second code counts i from 0 up to 2^(N-1), so the complexity is
O(2^(N-1) * 1000000) = O(2^(N-1))
But this is just theory: in practice d is capped at 2^32 or 2^64 (or some other word size), so it might not behave like O(2^(N-1)) in practice.
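To make the step count for Code1 concrete, here is a small sketch (my addition): a counter version shows that the while loop runs about log2(N) times, each pass contributing 1000000 constant-time additions:
static long code1Steps(int N) {
    long steps = 0;
    int i = N;
    while (i > 1) {
        i = i / 2;
        for (int j = 0; j < 1000000; j++) {
            steps++; // stands in for one addition of Code1
        }
    }
    return steps; // roughly floor(log2(N)) * 1000000
}
For example, code1Steps(1024) returns 10000000, i.e. log2(1024) * 10⁶; doubling N adds exactly one more pass of the inner loop.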