The algorithm is taken from the excellent "Algorithms and Programming: Problems and Solutions" by Alexander Shen (namely exercise 1.1.28).
What follows is my translation from Russian, so please excuse any mistakes or ambiguity, and correct me if you feel the need.
What the algorithm should do
Given a natural number n, the algorithm calculates the number of solutions of the inequality
x*x + y*y < n
in natural (non-negative) numbers, without using any manipulations on real numbers.
In Pascal
k := 0; s := 0;
{at this point of execution,
 (s) = number of solutions (x, y) of the inequality
 x*x + y*y < n with x < k}
while k*k < n do begin
  l := 0; t := 0;
  while k*k + l*l < n do begin
    l := l + 1;
    t := t + 1;
  end;
  {at this line,
   (t) = number of solutions y >= 0 of
   k*k + y*y < n for the given (k)}
  k := k + 1;
  s := s + t;
end;
{now k*k >= n, so s = number of solutions of the inequality}
Further in the text, Shen says briefly that the number of operations performed by this algorithm is "proportional to n, as one can calculate". So I ask you: how can one calculate that with rigorous mathematics?
You have two loops, one inside the other.
The outer loop has the condition k*k < n, so k goes from 0 up to SQRT(n),
and the inner loop has the condition k*k + l*l < n, so l goes from 0 up to SQRT(n - k^2), which is smaller than SQRT(n).
So the total number of iterations is less than SQRT(n) * SQRT(n) = n, and in every iteration a constant number of operations is done.
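To make the bound concrete, here is a sketch in C of the same algorithm with an iteration counter added (the counter ops and the driver loop over n are mine); the ratio ops/n stays roughly constant as n grows:
#include <stdio.h>

int main(void) {
    for (long n = 1000; n <= 64000; n *= 2) {
        long k = 0, s = 0, ops = 0;
        while (k * k < n) {
            long l = 0, t = 0;
            while (k * k + l * l < n) {
                l = l + 1;
                t = t + 1;
                ops++;  /* one step of the inner loop */
            }
            k = k + 1;
            s = s + t;
            ops++;      /* one step of the outer loop */
        }
        printf("n=%ld solutions=%ld ops=%ld ops/n=%.3f\n",
               n, s, ops, (double)ops / n);
    }
    return 0;
}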
When the loop bounds are independent, the number of operations done by nested loops is the product of the two lengths.
For example:
for i = 1 to 5
  for j = 1 to 10
    print j+i
  end
end
will print 5*10 = 50 times.
In your example the outer loop runs sqrt(n) times, that is, until k^2 >= n, i.e. k = sqrt(n).
The inner loop also runs at most sqrt(n) times: k is constant within it, and it stops when k^2 + l^2 >= n. It runs longest at k=0, where it stops once l^2 >= n, i.e. l = sqrt(n).
So the total number of iterations is at most sqrt(n)*sqrt(n) = n, which is O(n).
The time taken by your algorithm is proportional to the number of operations performed. Therefore you can check that the time grows proportionally to the input size n: time the algorithm's completion for a wide range of n's and plot n vs. time. Doing so should give you a roughly linear graph.
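For example, a rough timing harness along these lines (a sketch in C; the function name and the range of n are mine, and such measurements only suggest the growth rate, they don't prove it):
#include <stdio.h>
#include <time.h>

long count_solutions(long n) {
    long k = 0, s = 0;
    while (k * k < n) {
        long l = 0, t = 0;
        while (k * k + l * l < n) {
            l = l + 1;
            t = t + 1;
        }
        k = k + 1;
        s = s + t;
    }
    return s;
}

int main(void) {
    for (long n = 1000000; n <= 64000000; n *= 2) {
        clock_t start = clock();
        long s = count_solutions(n);
        double sec = (double)(clock() - start) / CLOCKS_PER_SEC;
        printf("n=%ld s=%ld time=%.4fs time/n=%.3e\n", n, s, sec, sec / n);
    }
    return 0;
}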
Related
x = 0;
for (int i = 1; i <= n; i++) {
    for (int j = 1; j <= n; j++) {
        x++;
        n--;
    }
}
By testing the code, I found that the nested for loop runs ⌈n/2⌉ times per step of the outer for loop.
But I don't know how to formulate these observations with sigmas. I would really appreciate it if anyone could help me with this.
You can express the inner-loop count as the recurrence T(n) = T(n-2) + 1, so its expected time complexity is O(n/2) => O(n).
Edit: the T(n-2) + 1 expression comes about as follows: if you increase n-2 by 2, i.e. when n-2 becomes n, the number of times the loop executes is 1 + the number of times it executed for n-2. The +1 is because you increment j and decrement n simultaneously, which is exactly the same as incrementing j by 2.
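Unrolling the recurrence shows where the O(n/2) comes from. With T(n) counting the inner-loop iterations for a given n:
T(n) = T(n-2) + 1 = T(n-4) + 2 = ... = T(n-2m) + m
and the argument reaches the base case after m ≈ n/2 steps, so T(n) = O(n/2) = O(n).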
Let's compute the exact value of x.
TL;DR: x(N) = N - [N/2^i], where i is the lowest number satisfying the condition (i+1) * 2^i > N. As Mariano Demarchi said, T(N) = O(N).
First let's check how the variables change after each run of the inner loop. Let (n, i, x) be the values between lines 2 and 3 of the code (just before the inner loop):
How many iterations will happen? Each iteration increases j and decreases n, so the distance between them decreases by two. The starting distance is n-1, and the loop runs as long as the distance is non-negative. Thus if n=2k, the answer is k, otherwise k+1. So the inner loop makes [(n+1)/2] = d iterations.
Thus x increases by d, n becomes n-d, and i becomes i+1:
(n, i, x) -> (n-d, i+1, x+d), or equivalently ([n/2], i+1, x + [(n+1)/2])
Now concentrate on the values of the n and i variables in the big loop:
They change like this: (n, i) -> ([n/2], i+1)
The stop condition is [N/2^i] < i+1, which is equivalent to (i+1) * 2^i > N. Of course, we need the minimal i satisfying the condition.
So i is the first number satisfying the condition, and we do NOT sum further:
x = [(N+1)/2] + [([N/2]+1)/2] + [([N/2^2]+1)/2] + ... + [([N/2^(i-1)]+1)/2]
Each term equals [N/2^j] - [N/2^(j+1)], so the series telescopes to N - [N/2^i]. Particularly, if N is a power of 2 minus 1, we can see it easily.
So, this code returns exactly the same value in O(log(N)) time.
// finding i
unsigned long long i = 0;
while ((i + 1) * (1ull << i) <= n)  // stop at the minimal i with (i+1)*2^i > n
    ++i;
// finding x
x = n - (n >> i);
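To double-check the closed form against the original double loop, a small harness can compare the two (a sketch in C; the function names and the range of n tested are mine):
#include <stdio.h>

/* The original double loop: returns the final value of x. */
unsigned long long by_loops(unsigned long long n) {
    unsigned long long x = 0;
    for (unsigned long long i = 1; i <= n; i++)
        for (unsigned long long j = 1; j <= n; j++) {
            x++;
            n--;
        }
    return x;
}

/* The O(log N) closed form derived above. */
unsigned long long by_formula(unsigned long long n) {
    unsigned long long i = 0;
    while ((i + 1) * (1ull << i) <= n)
        ++i;
    return n - (n >> i);
}

int main(void) {
    for (unsigned long long n = 1; n <= 1000000; n = 3 * n + 1)
        printf("n=%llu loops=%llu formula=%llu\n",
               n, by_loops(n), by_formula(n));
    return 0;
}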
In the inner loop, given that n decrements at the same time that j increments, n drops below j at about the middle of the gap between their initial values, that is, after about (n-1)/2 steps.
That's why your tests show that the inner loop runs ⌈n/2⌉ times per iteration of the outer loop.
Then the outer loop stops at the first i for which the shrunken n (about n/2^i) falls below i. This is what determines the outer loop's stopping condition.
T(n) = n/2 + T(n/2)
     = n/2 + n/4 + T(n/4)
     = n (1/2 + 1/4 + ... + 1/2^i)
The series in parentheses converges to 1, so T(n) is bounded by n and the algorithm is O(n).
This question is based on this resource: http://algs4.cs.princeton.edu/14analysis.
Can someone break down why Exercise 6, letter b, is linear? The outer loop seems to be cutting n in half each time, so I would assume it was logarithmic...
From the link:
int sum = 0;
for (int n = N; n > 0; n /= 2)
    for (int i = 0; i < n; i++)
        sum++;
This is a geometric series.
The inner loop runs n iterations per iteration of the outer loop (for the current value of n), and n decreases by half each time.
So, summing it up gives you:
N + N/2 + N/4 + ... + 1
This is a geometric series with r = 1/2 and a = N, which converges to a/(1-r) = N/(1/2) = 2N, so:
T(N) <= 2N
And since 2N is in O(N), the algorithm runs in linear time.
This is a perfect example showing that complexity is NOT obtained by multiplying the complexity of each nested loop (that would give you O(n log n)), but by actually analyzing how many iterations are needed.
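A quick empirical check of the 2N bound (a sketch in C; the counter and the driver loop are mine):
#include <stdio.h>

int main(void) {
    for (int N = 16; N <= 1 << 20; N <<= 4) {
        long count = 0;
        for (int n = N; n > 0; n /= 2)
            for (int i = 0; i < n; i++)
                count++;
        printf("N=%d count=%ld 2N=%d\n", N, count, 2 * N);
    }
    return 0;
}
For N = 16, for instance, it counts 16 + 8 + 4 + 2 + 1 = 31 < 32 = 2N.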
Yes, it's simple.
Note that the value of n is halved each time, while the inner loop runs n times for the current value of n.
So the first time i goes from 0 to N,
the next time from 0 to N/2,
and on the k-th pass from 0 to N/2^(k-1).
In total the inner loop is started about Log(N) times.
So the numbers of times i runs form a GP
with terms
N, N/2, N/4, N/8, ..., 1
so we can find the sum of the GP:
(2^(log(N)+1) - 1) / (2 - 1) = 2N - 1
hence O(N)
for i = 0 to n do
    for j = n to 0 do
        for k = 1 to j-i do
            print (k)
I'm wondering about the lower-bound runtime of the above code. In the notes I am reading, the lower bound on the running time is given as Omega(n^3), with the explanation:
To find the lower bound on the running time, consider the values of i, such that 0 <= i <= n/4 and values of j, such that 3n/4 <= j <= n. Note that for each of the n^2/16 different combinations of i and j, the innermost loop executes at least n/2 times.
Can someone explain where these numbers came from? They seem to be arbitrary to me.
There are n iterations of the first loop, and for each of them n iterations of the second loop. In total these are n^2 iterations of the second loop, i.e. n^2 combinations of i and j.
Now if you only consider the lower quarter of possible values for i, then n^2/4 of these combinations are left. If you also only consider the upper quarter of values for j, then n^2/16 combinations are left.
For each of these constrained combinations you have j-i >= 3n/4 - n/4 = n/2, and therefore the innermost loop iterates at least n/2 times for each of these n^2/16 combinations of the outer loops. Therefore the full number of iterations of the innermost loop is at least n^2/16 * n/2.
Because we considered only specific iterations, the actual number of iterations is higher and this result is a lower bound. Therefore the algorithm is in Omega(n^3).
The values are arbitrary insofar as you could use many others. But these are simple ones that make the argument j-i >= 3n/4 - n/4 = n/2 work. For example, if you took only the lower half of the i iterations and the upper half of the j iterations, then you would only get j-i >= n/2 - n/2 = 0, leading to Omega(0), which is not interesting. Something like the lower tenth and upper tenth would still work, but the numbers wouldn't be as nice.
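As a sanity check, one can count the innermost iterations by brute force and compare them with the n^3/32 bound from the argument above (a sketch in C; the counting harness is mine):
#include <stdio.h>

int main(void) {
    for (long long n = 8; n <= 512; n *= 4) {
        long long count = 0;
        for (long long i = 0; i <= n; i++)
            for (long long j = n; j >= 0; j--)
                for (long long k = 1; k <= j - i; k++)
                    count++;
        printf("n=%lld count=%lld n^3/32=%lld\n",
               n, count, n * n * n / 32);
    }
    return 0;
}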
I can't really explain the ranges from your book, but if you attempt to proceed via the methodology below, I hope it becomes clearer what you are looking for.
The ideal form of the outer loop (index i) and the inner loop (index j) is as follows, since j - i >= 1 must hold for the innermost loop to execute (at least once in every case).
The decomposition is made because the range of j from i down to 0 is ignored by the innermost loop.
for ( i = 0 ; i < n ; i ++ ) {
    for ( j = n; j > i ; j -- ) {
        for ( k = 1; k <= j - i ; k ++ ) {
            printf("%d\n", k);
        }
    }
}
This algorithm's order-of-growth complexity T(n) is:
T(n) = Σ (i=0..n-1) Σ (j=i+1..n) (j-i), which is Theta(n^3)
Hence, your algorithm iterates even more than the algorithm above (your j goes all the way from n down to 0).
Using sigma notation, you can do this:
T(n) = Σ (i=0..n) Σ (j=0..n) ( c' + Σ (k=1..j-i) c ), where the innermost sum is empty whenever j - i < 1,
c represents the execution cost of print(k), and c' is the execution cost of the iterations that don't involve the innermost loop.
Refreshing up on algorithm complexity, I was looking at this example:
int x = 0;
for ( int j = 1; j <= n; j++ )
for ( int k = 1; k < 3*j; k++ )
x = x + j;
I know this loop ends up being O(n^2). I believe the inner loop executes about 3j times for each j (3(1+2+...+n) in total), and the outer loop executes n times. So, O(3n*n) = O(3n^2) = O(n^2).
However, the source I'm looking at expands the execution count of the inner loop to: 3(1+2+3+...+n) = 3n^2/2 + 3n/2. Can anyone explain the 3n^2/2 + 3n/2 execution count?
For each j you execute about 3*j iterations of the internal loop, so the command x = x + j will be executed 3 * (1 + 2 + 3 + ... + n) times in total. The sum of this arithmetic progression is n*(n+1)/2, so the command will be executed:
3 * n * (n+1)/2 times, which equals (3*n^2)/2 + (3*n)/2
But big O is not about how many iterations there are exactly; it is an asymptotic measure. So in the expression 3*n*(n+1)/2 we drop the constants (set them all to 0 or 1), and we get 1*n*(n+0)/1 = n^2.
A small update about the big-O calculation for this case: to get the big O from 3n(n+1)/2, you can imagine that N goes to infinity, so:
infinity + 1 = infinity
3*infinity = infinity
infinity/2 = infinity
infinity*infinity = infinity^2
so after this you are left with N^2.
The sum of integers from 1 to m is m*(m+1)/2. In the given problem, j goes from 1 to n, and k goes from 1 to 3*j. So the inner loop on k is executed 3*(1+2+3+4+5+...+n) times, with each term in that series representing one value of j. That gives 3n(n+1)/2. If you expand that, you get 3n^2/2+3n/2. The whole thing is still O(n^2), though. You don't care if your execution time is going up both quadratically and linearly, since the linear gets swamped by the quadratic.
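Strictly speaking, the loop header k < 3*j means the inner loop runs 3j - 1 times for each j, so the exact count is 3n(n+1)/2 - n; the difference of n does not affect the quadratic order. A quick check (a sketch in C; the counting harness is mine):
#include <stdio.h>

int main(void) {
    for (long n = 1; n <= 1000; n *= 10) {
        long count = 0;
        for (long j = 1; j <= n; j++)
            for (long k = 1; k < 3 * j; k++)
                count++;   /* one execution of x = x + j */
        printf("n=%ld count=%ld 3n(n+1)/2-n=%ld\n",
               n, count, 3 * n * (n + 1) / 2 - n);
    }
    return 0;
}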
Big O notation gives an upper bound on the asymptotic running time of an algorithm. It does not take into account the lower-order terms or the constant factors. Therefore O(10n^2) and O(1000n^2 + 4n + 56) are both still O(n^2).
What you are doing is trying to count the exact number of operations in your algorithm. However, Big O does not say anything about the exact number of operations. It simply provides an upper bound on the worst-case running time that may occur with an unfavorable input.
The exact count for your algorithm can be found using sigma notation like this:
T(n) = Σ (j=1..n) Σ (k=1..3j-1) 1 = Σ (j=1..n) (3j - 1) = 3n(n+1)/2 - n
It's been empirically verified.
I can probably figure out part (b) if you can help me do part (a). I've been looking at this and similar problems all day, and I'm just having trouble grasping what to do with nested loops. For the first loop there are n iterations, for the second there are n-1, and for the third there are n-1... Am I thinking about this correctly?
Consider the following algorithm,
which takes as input a sequence of n integers a1, a2, ..., an
and produces as output a matrix M = {mij}
where mij is the minimum term
in the sequence of integers ai, ai+1, ..., aj for j >= i and mij = 0 otherwise.
initialize M so that mij = ai if j >= i and mij = 0 otherwise
for i := 1 to n do
    for j := i+1 to n do
        for k := i+1 to j do
            m[i][j] := min(m[i][j], a[k])
        end
    end
end
return M = {m[i][j]}
(a) Show that this algorithm uses Big-O(n^3) comparisons to compute the matrix M.
(b) Show that this algorithm uses Big-Omega(n^3) comparisons to compute the matrix M.
Using this fact and part (a), conclude that the algorithm uses Big-Theta(n^3) comparisons.
In part A, you need to find an upper bound for the number of min ops.
In order to do so, observe that the above algorithm performs fewer min operations than the following:
for i = 1 to n
    for j = 1 to n // bigger range than your algorithm
        for k = 1 to n // bigger range than your algorithm
            (something with min)
The above has exactly n^3 min ops - thus your algorithm performs fewer than n^3 min ops.
From this we can conclude: #minOps <= 1 * n^3 (for each n > 10, where 10 is arbitrary).
By definition of Big-O, this means the algorithm is O(n^3)
You said you can figure out B alone, so I'll let you try it :)
hint: the middle loop has more iterations than for j = i+1 to n/2
For each value i of the outer loop, the two inner nested loops do on the order of (n-i)^2 work, up to about n^2 when i is small. Summing over i = 1 to n therefore gives a series like: 1^2 + 2^2 + 3^2 + 4^2 + ... + n^2. This summation equals n(n+1)(2n+1)/6, and ignoring the lower-order terms, the order is O(n^3).
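As a quick sanity check (a sketch in C; the counting harness is mine), one can count the min operations directly; for this loop structure the exact count works out to (n^3 - n)/6, which is indeed Big-Theta(n^3):
#include <stdio.h>

int main(void) {
    for (long long n = 10; n <= 1000; n *= 10) {
        long long ops = 0;
        for (long long i = 1; i <= n; i++)
            for (long long j = i + 1; j <= n; j++)
                for (long long k = i + 1; k <= j; k++)
                    ops++;   /* one min comparison */
        printf("n=%lld ops=%lld (n^3-n)/6=%lld\n",
               n, ops, (n * n * n - n) / 6);
    }
    return 0;
}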