It looks like some sort of a partial-sort.
int n = a.length;
for (int i = 0; i < n; i++) {
    while (a[i] != i) {
        if (a[i] < 0 || a[i] >= n)  //avoid stepping out of range
            break;
        if (a[i] == a[a[i]])        //avoid inf loop by duplicates
            break;
        int t = a[i];
        a[i] = a[t];
        a[t] = t;
    }
}
return a;
On first look it seems like O(N^2), but when I run it, it behaves like O(N). Any ideas? Thanks in advance.
You're right that it's O(n):
To help explain this I'll make up a definition:
Reflective: An element, a[i], in an array, a, is reflective if a[i] = i.
Iterations of while loop that do result in a break:
For each value of i, the while loop can be exited at most once via a break (counting the case where the while condition a[i] != i itself fails as a "break" here). As there are n values of i, this means there are at most n iterations of the while loop that result in a break.
Iterations of while loop that don't result in a break:
For this part it might help to imagine our array where each element is either reflective (1), or non-reflective (0):
| 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 1 |
Once we have passed the break points, we know that a[i] != a[a[i]] (i.e. if we name a[i] as t, then we know that a[t] != t). And because we later assign a[t] = t, we have changed an element of the array from non-reflective to reflective. Note that nowhere in your code do we make a reflective element non-reflective: the assignment a[i] = a[t] could leave a[i] non-reflective, but we also know it wasn't reflective to begin with, because the while condition a[i] != i was true.
From our visual, this means that no 1 ever changes to a 0, and yet every iteration of the while loop (that passes the break points) results in at least one 0 flipping to a 1.
Once you observe that every (non-break) iteration of the inner loop takes at least one (possibly two) non-reflective elements from the array and converts it to become permanently reflective, then we realise that the total amount of (non-break) iterations of the inner loop cannot exceed n for the entire run-time of the program.
In summary: i is iterated and checked in the for loop n times, and each check does a constant amount of work, c1. There are n total iterations of the while loop that correspond to a break, and at most n iterations that don't correspond to a break. Hence there are at most 2n iterations of the while loop in total. The work done in a single iteration of the while loop is bounded by some constant, c2.
Hence time complexity <= c1*n + c2*2*n = O(n).
As for the function of the code, it rearranges elements to make as many of them reflective as possible: if after this function a[i] is non-reflective, then the value i isn't present in the array.
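If it helps to see that in action, here is a minimal, self-contained sketch in Java (the class name, the rearrange wrapper and the sample input are mine, not from the question):

import java.util.Arrays;

public class ReflectiveDemo {
    // Wrapper around the snippet from the question; the name "rearrange" is mine.
    static int[] rearrange(int[] a) {
        int n = a.length;
        for (int i = 0; i < n; i++) {
            while (a[i] != i) {
                if (a[i] < 0 || a[i] >= n)  // avoid stepping out of range
                    break;
                if (a[i] == a[a[i]])        // avoid infinite loop caused by duplicates
                    break;
                int t = a[i];
                a[i] = a[t];
                a[t] = t;
            }
        }
        return a;
    }

    public static void main(String[] args) {
        // The value 0 is missing from the input, so index 0 stays non-reflective.
        System.out.println(Arrays.toString(rearrange(new int[]{3, 1, 2, 5, 4, 5})));
        // prints [5, 1, 2, 3, 4, 5]: every value that occurs in the array ends up at its own index.
    }
}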
I am trying to find the run time on each line, when the best case and worst case would occur, and the Big-O in worst and best case.
What the code I pasted does is find the length of the longest ascending run of numbers in the array.
For example, if we had [4,5,6,9,1,2,3,4,5,6], the longest such sequence (1,2,3,4,5,6) has length 6.
Will the first for loop be executed n times?
Will the second for loop be executed n times?
Will the if statement be executed n times?
Will the statement inside the if be executed n times?
Will the best case occur when the array is in ascending order and the worst occur when the array is in descending order?
The reason I don't believe this to be true is that when the array is sorted in ascending order, the second loop runs all the way through, whereas when it is sorted in descending order, the second loop always stops immediately because the && condition does not hold.
for (i = 0, length = 1; i < n-1; i++) {
    for (i1 = i2 = k = i; k < n-1 && a[k] < a[k+1]; k++, i2++);
    if (length < i2 - i1 + 1)
        length = i2 - i1 + 1;
}
return length;
First, note that the variables i1 and i2 are not really needed, as i1 == i and i2 == k at each iteration of the inner loop, so we can just write:
for (i = 0, length = 1; i < n-1; i++) {
    for (k = i; k < n-1 && a[k] < a[k+1]; k++);
    if (length < k - i + 1)
        length = k - i + 1;
}
return length;
The outer loop will execute n-1 times. No difference in worst/best case there.
The best case occurs when a is a non-increasing sequence. In that case a[k] < a[k+1] will never be true, and thus the inner loop's condition is only evaluated once per outer iteration. The return value will in that case be 1, and the time complexity O(n).
The worst case occurs when a is an ever increasing sequence. In that case a[k] < a[k+1] will always be true, and thus the inner loop iterates n-1-i times, with its condition evaluated one more time at the end, when k < n-1 becomes false.
The if condition and the adjustment of length execute in constant time.
In the worst case the inner loop's body (the k++) executes in total as many times as one can make a multiset of 2 elements (represented by i and k) from a set of n-1 elements. The formula for that is (n-1)n/2, which is O(n²).
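To make the best-case/worst-case difference concrete, here is a rough Java sketch of the same loop with an added counter (the instrumentation and all names are mine, not part of the original C code):

public class RunCheckDemo {
    // Instrumented version of the simplified loop above; "checks" counts how many
    // times the inner loop's condition is evaluated.
    static long countInnerChecks(int[] a) {
        int n = a.length;
        long checks = 0;
        int length = 1;
        for (int i = 0; i < n - 1; i++) {
            int k = i;
            while (true) {
                checks++;                                  // one evaluation of the inner condition
                if (!(k < n - 1 && a[k] < a[k + 1])) break;
                k++;
            }
            if (length < k - i + 1)
                length = k - i + 1;
        }
        return checks;
    }

    public static void main(String[] args) {
        int n = 1000;
        int[] asc = new int[n], desc = new int[n];
        for (int i = 0; i < n; i++) { asc[i] = i; desc[i] = n - i; }
        System.out.println(countInnerChecks(desc));  // best case: n-1 = 999 checks, O(n)
        System.out.println(countInnerChecks(asc));   // worst case: n*(n+1)/2 - 1 = 500499 checks, O(n^2)
    }
}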
What's the big O of this?
for (int i = 1; i < n; i++) {
    for (int j = 1; j < (i*i); j++) {
        if (j % i == 0) {
            for (int k = 0; k < j; k++) {
                // Simple computation
            }
        }
    }
}
I can't really figure it out. I'm inclined to say O(n^4 log(n)), but I feel like I'm wrong here.
This is quite a confusing analysis, so let's break it down bit by bit to make sense of the calculations:
The outermost loop runs for n-1 iterations (since 1 ≤ i < n).
The next loop inside it makes (i² - 1) iterations for each index i of the outer loop (since 1 ≤ j < i²).
In total, this means the number of iterations for these two loops is equal to the sum of (i²-1) for each 1 ≤ i < n. This is essentially the sum of the first n squares, which is of order O(n³).
Note the modulo operator % takes constant time (O(1)) to compute, therefore checking the condition if (j % i == 0) for all iterations of these two loops will not affect the O(n³) runtime.
Now let's talk about the inner loop inside the conditional.
We are interested in seeing how many times (and for which values of j) this if condition evaluates to true, since this would dictate how many iterations the innermost loop will run.
Practically speaking, (j % i) will never equal 0 if j < i, so the second loop could actually be shortened to start from i rather than from 1; however, this will not impact the Big-O upper bound of the algorithm.
Notice that for a given number i, (j % i == 0) if and only if i is a divisor of j. Since our range is (1 ≤ j < i²), there will be a total of (i-1) values of j for which this will be true, for any given i. If this is confusing, consider this example:
Let's assume i = 4. Then our index j would iterate through all values 1,…,15 (since j < i² = 16),
and (j%i == 0) would be true for j = 4, 8, 12 - exactly (i - 1) values.
The innermost loop would therefore make a total of (12 + 8 + 4 = 24) iterations. Thus for a general index i, we would look for the sum: i + 2i + 3i + ... + (i-1)i to indicate the number of iterations the innermost loop would make.
And this could be generalized by calculating the sum of this arithmetic progression. The first value is i and the last value is (i-1)i, which results in a sum of (i³ - i²)/2 iterations of the k loop for every value of i. In turn, the sum of this for all values of i could be computed by calculating the sum of cubes and the sum of squares - for a total runtime of O(n⁴) iterations of the innermost loop (the k loop) for all values of i.
Thus in total, the runtime of this algorithm would be the total of both runtimes we calculated above. We checked the if statement O(n³) times and the innermost loop ran for O(n⁴), so assuming // Simple computation runs in constant time, our total runtime would come down to:
O(n³) + O(n⁴)*O(1) = O(n⁴)
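If you want to sanity-check these orders of growth empirically, here is a small, hedged Java harness (entirely my own instrumentation, not part of the question's code):

public class TripleLoopCount {
    // Rough empirical check of the two counts above: how often the if test runs,
    // and how often the innermost loop body runs.
    public static void main(String[] args) {
        for (int n : new int[]{50, 100, 200}) {
            long ifChecks = 0, innerIters = 0;
            for (int i = 1; i < n; i++) {
                for (int j = 1; j < i * i; j++) {
                    ifChecks++;                     // the if (j % i == 0) test
                    if (j % i == 0) {
                        for (int k = 0; k < j; k++) {
                            innerIters++;           // the "simple computation"
                        }
                    }
                }
            }
            // ifChecks grows roughly like n^3/3, innerIters roughly like n^4/8
            System.out.println(n + ": ifChecks=" + ifChecks + ", innerIters=" + innerIters);
        }
    }
}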
Let us assume that i = 2. Then j can be [1,2,3], and the "k" loop will run for j = 2 only.
Similarly, for i = 3, j can be [1,2,3,4,5,6,7,8]; hence the k loop runs for j = 3 and j = 6. You can see a pattern here: for any value of i, the 'k' loop is entered (i-1) times, and the lengths of those runs are [i, 2*i, 3*i, ..., (i-1)*i].
Hence the time complexity of the k loop, for a fixed i, is
= i + (2*i) + (3*i) + ..... + ((i-1)*i)
= (i^2)(i-1)/2
Hence the final complexity, summing this over i = 1..n, will be
= (Σ i^3 - Σ i^2)/2 ≈ n^4/8, i.e. O(n^4)
void foo(int n)
{
    int s = 0;
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= i*i; j++)
            if (j % i == 0)
                for (int k = 1; k <= j; k++)
                    s++;
}
What is the time complexity of the above code?
I am getting it as O(n^5) but it is not correct.
The complexity is O(n^4).
The innermost loop will be entered i times for each i (there are i multiples of i within 1..i*i).
It will be like this: the inner loop only really runs for

j =  1  2 ...  i   i+1 ... 2*i   2*i+1 ... 3*i   ...   i*i
               x             x               x           x
    \_________/ \___________/ \_____________/     \____/

Each x denotes an execution of the innermost for loop, with cost j. For every other j the innermost loop is not touched: just the test is done, and it fails.
So now look at it this way: each segment marked \____/ contains i evaluations of the test, and the segment ending at j = m*i contributes m*i iterations of the innermost loop (m = 1, 2, 3, ..., i). There are exactly i such segments.

So total work = i*(1 + 1 + 1 + ... + 1) + i*(1 + 2 + 3 + ... + i)
              = i*i + i*i*(i+1)/2 ~ i^3
With the outer loop it will be n^4.
Now what does this mean? The whole work can be divided up like this:

O(i*j + i)
  ^^^   ^
   |    +--- the other cases, where the test simply fails and the innermost loop is skipped
   +-------- the executions of the innermost loop
Now if we iterate over j then it will have complexity O(n^3).
With added external loop it will be O(n^4).
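As a quick numerical sanity check of the per-i work described above, here is a small sketch (the harness and the choice of i = 7 are mine):

public class PerIterationWork {
    // Verifies the per-i work claimed above: i*i evaluations of the test,
    // and i*i*(i+1)/2 iterations of the innermost loop.
    public static void main(String[] args) {
        int i = 7;                        // any fixed value of i
        long checks = 0, inner = 0;
        for (int j = 1; j <= i * i; j++) {
            checks++;                     // the j % i == 0 test
            if (j % i == 0) {
                for (int k = 1; k <= j; k++) {
                    inner++;
                }
            }
        }
        System.out.println(checks);       // i*i          = 49
        System.out.println(inner);        // i*i*(i+1)/2  = 196
    }
}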
Your function computes 4-dimensional pyramidal numbers (A001296). The number of increments to s can be computed using this formula:
a(n) = n*(1+n)*(2+n)*(1+3*n)/24
Therefore, the complexity of your function is O(n^4).
The reason it is not O(n^5) is that if (j%i == 0) proceeds with the "payload" loop only for multiples of i, of which we have exactly i among all js in the range from 1 to i^2, inclusive.
Hence, we add one for the outermost loop, one for the loop in the middle, and two for the innermost loop, because it iterates up to i^2, for a total of 4.
Why only one for the middle (j) loop? It runs up to i^2, right?
Perhaps it would be easier to see if we rewrite the code to exclude the condition:
int s = 0;
for (int i = 1; i <= n; i++)
    for (int j = 1; j <= i; j++)
        for (int k = 1; k <= i*j; k++)
            s++;
return s;
This code produces the same number of "payload loop" iterations, but rather than "filtering out" the iterations that skip the inner loop it removes them from consideration by computing the terminal value of k in the innermost loop as i*j.
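If you want to convince yourself numerically, here is a small sketch (the harness is mine) that compares the brute-force count of increments with the closed form quoted above:

public class PyramidalCheck {
    // Compares the brute-force count of increments to s with the closed form
    // n*(n+1)*(n+2)*(3*n+1)/24.
    public static void main(String[] args) {
        for (int n = 1; n <= 20; n++) {
            long s = 0;
            for (int i = 1; i <= n; i++)
                for (int j = 1; j <= i * i; j++)
                    if (j % i == 0)
                        for (int k = 1; k <= j; k++)
                            s++;
            long formula = (long) n * (n + 1) * (n + 2) * (3L * n + 1) / 24;
            System.out.println(n + ": s=" + s + ", formula=" + formula);  // the two values agree
        }
    }
}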
I'm having trouble understanding how to estimate the Big-O. We've had two lectures on this topic and the only thing I understand is to take the leading term from the largest polynomial in the function and wrap it in an O, so it would look like O(...)
During the first lecture this was shown
int i = length;
while (i > 0) {
    i--;
    int j = i - 1;
    while (j >= 0) {
        if (a[i] == a[j]) {
            return 1;
        }
        j--;
    }
}
return 0;
Followed by this on the following slide
int i = length;             // Counts as 1
while (i > 0) {             // Counts as N+1
    i--;                    // Counts as N
    int j = i - 1;          // Counts as N
    while (j >= 0) {        // Counts as i+1
        if (a[i] == a[j]) { // Counts as i
            return 1;
        }
        j--;                // Counts as i
    }
}
return 0;                   // Counts as 1
From this, I'm wondering why
return 1;
isn't counted as a step.
Following that slide, it tells us that the
Outer Loop count is 3N+1
Inner Loop count is 3i+1 ; for all possible i from 0 to N-1
I understand that the second [while] loop will occur N times and, following that, the [if] will occur i times where i is at most N-1, since if j < 0, the second while loop's condition will still be evaluated but nothing else will happen after it.
The slide shows that the Total from the Inner loop is equal to
3/2N^2 - 1/2N
and that the Grand Total is equal to 3/2N^2 + 5/2N + 3.
Wondering if anyone has time to walk me through how to arrive at the functions used in Big-O estimations like in the example above; I have no idea how 3i+1 translated into 3/2N^2 - 1/2N, or how the Grand Total is calculated.
I will try to explain the calculation of the complexity of your example.
First we notice that every single operation requires only constant time, written as O(1), which means that its run time does not depend on the input.
int i = length;             // O(1), executed only one time
while (i > 0) {             // outer loop, condition needs O(1)
    i--;                    // O(1)
    int j = i - 1;          // O(1)
    while (j >= 0) {        // inner loop, condition needs O(1)
        if (a[i] == a[j]) { // O(1)
            return 1;       // first return statement
        }
        j--;                // O(1)
    }
}
return 0;                   // second return statement, executed only one time
The number of operations in every loop is constant, so we only have to count how often they are executed.
The outer loop runs from i = n to i = 1. For each i the inner loop is executed once and executes i constant time operations itself. In total we get
3 + Σ_{i=0,…,n-1} (3 + 3i + 1)  =  3 + 4n + 3/2⋅(n-1)⋅n  =  3/2⋅n² + 5/2⋅n + 3
(1)                (2)  (3) (4)             (5)
Explanations:
The 3 contains the first line, the last line and an additional evaluation of the outer loop condition. The condition evaluates to true n times and to false one time.
This 3 contains one evaluation of the condition of the outer loop and the first two lines in the outer loop.
The factor 3 in front of the i contains one evaluation of the inner loop condition, the evaluation of the if statement and the last line of the inner loop.
The 1 is for the additional evaluation where the inner loop condition evaluates to false.
The sum of consecutive integers 1 to n evaluates to 1/2⋅n⋅(n+1). Notice the sum here is from 0 to n-1, so it evaluates to 1/2⋅(n-1)⋅n.
The first (inner) return statement is not counted, because if it is executed the algorithm terminates early. But we want to calculate the maximum number of steps, the so-called worst case. This case is when the algorithm terminates as late as possible.
Notice: The calculation of the steps is very precise. This is not necessary to get the big-O complexity. It would be enough to say, that each loop runs in O(n) and since they are nested the complexity has to be multiplied so you get O(n)⋅O(n) = O(n²).
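If you want to check the exact count rather than just the big-O, here is a small Java sketch (the counting harness is mine) that tallies the steps the same way the slide does and reproduces 3/2⋅n² + 5/2⋅n + 3 for a worst-case input:

public class StepCounter {
    // Counts the "steps" exactly the way the slide does, for an input with no equal
    // pair (the worst case, so the inner return never fires).
    static long countSteps(int[] a) {
        long steps = 0;
        int length = a.length;
        int i = length;            steps++;   // int i = length;
        while (true) {
            steps++;                          // while (i > 0) check
            if (!(i > 0)) break;
            i--;                   steps++;   // i--;
            int j = i - 1;         steps++;   // int j = i - 1;
            while (true) {
                steps++;                      // while (j >= 0) check
                if (!(j >= 0)) break;
                steps++;                      // if (a[i] == a[j]) check
                if (a[i] == a[j]) return -1;  // would be "return 1"; not counted, never hit here
                j--;               steps++;   // j--;
            }
        }
        steps++;                              // return 0;
        return steps;
    }

    public static void main(String[] args) {
        int n = 10;
        int[] a = new int[n];
        for (int k = 0; k < n; k++) a[k] = k; // all elements distinct -> worst case
        System.out.println(countSteps(a));    // 3/2*n^2 + 5/2*n + 3 = 178 for n = 10
    }
}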
I need to count the elementary operations of the code below:
public static int findmax(int[] a, int x) {
    int currentMax = a[0];
    for (int i = 1; i < a.length; i++) {
        if (a[i] > currentMax) {
            currentMax = a[i];
        }
    }
    return currentMax;
}
I understand that a primitive operation (such as assigning a value to a variable) is given a value of 1. So here assigning a[0] to currentMax accounts for 1 primitive operation executed.
Within the for loop: assigning 1 to i also accounts for 1. And i < a.length and i++ are n - 1 each (i.e. 2(n-1)). However, I get confused as to how to deal with the if statement. I'm aware that we're looking for the worst case (so we'd need to perform the if condition and the statement nested within that block), but I'm not sure what this is in terms of primitive operations.
Before the loop iterations
int currentMax = a[0];
Assignment: counts for 1.
int i = 1
Assignment: counts for 1
For each of the n iterations of the loop (note that here, n=a.length-1)
i < a.length
Comparison (returns true): counts as 1
i++
Incrementation: counts as 1
a[i] > currentMax
Comparison: counts as 1
currentMax = a[i];
Assignment: counts as 1
When exiting the loop
i < a.length
Comparison (return false): counts as 1
CONCLUSION
You have in the worst case 1 + 1 + n*(1+1+1+1) + 1 = 4*n + 3 elementary operations, hence the complexity of your algorithm is Θ(n).
More specifically, to handle the if statement, you have of course to take into account the computation of its condition, but the word "if" itself doesn't count: the processor just jumps to the next instruction depending on the result. Some may argue that this conditional jump should count as 1, but anyway this has no importance, since 4*n + 3 is the same complexity as 5*n + 3, i.e. Θ(n).
If you want to be precise and keep the constants, then you have to specify what exactly that means, such as:
n+2 assignments
n incrementations
2*n+1 comparisons
In which case it is clear what you decided to count as elementary operations or not. But for instance, you could have also decided that accessing the array like a[i] was worth counting (it is actually one pointer addition plus one memory access), so you would add:
2*n+1 array access
Or, if you want to be more precise and account for the fact that one of the accesses is a[0], which needs no pointer arithmetic, you would say:
2*n+1 memory access
2*n pointer additions
So you see that it is up to you to decide what you count as "elementary operations", and all such answers are equally valid.
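For completeness, here is a small sketch (the harness and the choice of what to count are mine, following the 4*n + 3 tally above) that reproduces that count for a worst-case input:

public class OperationCount {
    // Counts the operations the way the answer above does (assignments, comparisons
    // and increments only); the counter itself is of course not part of the tally.
    static long countOps(int[] a) {
        long ops = 0;
        int currentMax = a[0];    ops++;      // initial assignment
        int i = 1;                ops++;      // loop variable assignment
        while (true) {
            ops++;                            // i < a.length comparison
            if (!(i < a.length)) break;
            ops++;                            // a[i] > currentMax comparison
            if (a[i] > currentMax) {
                currentMax = a[i];  ops++;    // assignment, taken every time in the worst case
            }
            i++;                  ops++;      // increment
        }
        return ops;
    }

    public static void main(String[] args) {
        int n = 9;                            // here n = a.length - 1, as in the answer
        int[] a = new int[n + 1];
        for (int k = 0; k <= n; k++) a[k] = k;  // strictly increasing input -> worst case
        System.out.println(countOps(a));      // 4*n + 3 = 39
    }
}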