Time complexity of an algorithm with two nested loops

Given this algorithm:
m = 1
while(a > m*b){
    m = m*2
}
while(a >= b){
    while(a >= m*b){
        a = a - m*b
    }
    m = m/2
}
My question: What is the time complexity of this algorithm?
What I have done: I have to count the number of instructions. I found that the first while loop performs approximately log_2(a/b) iterations. For the inner while of the second part, I found the pattern a_i = a - i*m*b, where i is the number of iterations, so the inner while runs about a/(m*b) times.
But I don't know how to handle the outer loop, because its condition depends on what the inner while has done to a.

Let's begin by "normalizing" the function in the same way as in your previous question, noting that once again all changes in a and stopping conditions are proportional to b:
n = a/b
// 1)
m = 1
while(n > m){
    m = m*2
}
// 2)
while(n >= 1){
    while(n >= m){
        n = n - m
    }
    m = m/2
}
Unfortunately, this is where the similarity ends...
Snippet 1)
Note that m can be written as an integer power of 2, since it doubles every loop:
i = 0
while (n > pow(2, i)) {
    i++
}
// m = pow(2, i)
From the stopping condition, n <= pow(2, i) at exit, i.e. i = ceil(log2(n)), so snippet 1) performs O(log n) iterations.
Snippet 2)
Here m decreases in the exact opposite way to 1), so it can again be written as a power of 2:
// using i from the end of 1)
while (n >= 1) {
    k = pow(2, i)
    while (n >= k) {
        n = n - k
    }
    i--
}
The inner loop is simpler than the inner loop from your previous question, because m does not change inside it. It is easy to deduce the number of times c it executes, c = floor(n / k), and the value of n at the end, n - c*k = n mod k.
This is the exact definition of the Modulus operator % in the "C-family" of languages:
while (n >= 1) {
    k = pow(2, i)
    n = n % k // time complexity O(n / k) here instead of O(1)
    i--
}
Note that, because consecutive values of k only differ by a factor of 2, at no point will the value of n be greater than or equal to 2k; this means that the inner loop executes at most once per outer loop. Therefore the outer loop executes at most i + 1 times (once for each k = 2^i, 2^(i-1), ..., 1).
Both the first and second loops are O(log n), which means the total time complexity is O(log n) = O(log [a/b]).
Update: numerical tests in JavaScript, as before.
function T(n)
{
    let t = 0;
    let m = 1;
    while (n > m) {
        m *= 2; t++;
    }
    while (n >= 1) {
        while (n >= m) {
            n -= m; t++;
        }
        m /= 2;
    }
    return t;
}
Plotting T(n) against log(n) shows a nice straight line.
Edit: a more thorough explanation of snippet 2).
At the end of snippet 1), the value of i = ceil(log2(n)) represents the number of significant bits in the binary representation of the integer ceil(n) (when n is not an exact power of 2).
Computing the modulus of an integer with a positive power of 2, 2^i, is equivalent to discarding all but the i least significant bits. For example:
n = ...00011111111 (binary)
m = ...00000100000 (= 2^5)
n % m = ...00000011111
----- (5 least significant bits)
The operation of snippet 2) is therefore equivalent to removing the most significant bit of n, one at a time, until only zero is left. For example:
outer loop no | n
----------------------------
1 | ...110101101
| ^
2 | ...010101101
| ^
3 | ...000101101
| ^
4 | ...000001101
| ^
: | :
: | :
i (=9) | ...000000001
| ^
----------------------------
final | 000000000
When the current most significant bit (pointed to by ^) is:
0: the inner loop does not execute, because the value of n is already smaller than k = 2^i (the positional value of the bit at ^).
1: the inner loop executes once, because n is greater than or equal to k but less than 2k (which corresponds to the bit above the current position ^).
Hence the "worst" case occurs when all significant bits of n are 1, in which case the inner loop always executes once.
Regardless, the outer loop executes at most ceil(log2(n)) + 1 times for any value of n, which is O(log n).
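To make these counts concrete, here is a small Python port of snippets 1) and 2) (my own sketch, mirroring the JavaScript test above) that counts outer-loop iterations and checks them against the ceil(log2(n)) + 1 bound:
import math

def outer_iterations(n):
    # snippet 1): m becomes the smallest power of 2 >= n
    m = 1
    while n > m:
        m *= 2
    # snippet 2): count outer-loop iterations
    outer = 0
    while n >= 1:
        while n >= m:  # executes at most once, since n < 2m here
            n -= m
        m //= 2
        outer += 1
    return outer

for n in range(1, 1025):
    assert outer_iterations(n) <= math.ceil(math.log2(n)) + 1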

Big O for this triple nested loop?

What's the big O of this?
for (int i = 1; i < n; i++) {
    for (int j = 1; j < (i*i); j++) {
        if (j % i == 0) {
            for (int k = 0; k < j; k++) {
                // Simple computation
            }
        }
    }
}
Can't really figure it out. I'm inclined to say O(n^4 log(n)) but feel like I'm wrong here.
This is quite a confusing analysis, so let's break it down bit by bit to make sense of the calculations:
The outermost loop runs for n-1 iterations (since 1 ≤ i < n).
The next loop inside it makes (i² - 1) iterations for each index i of the outer loop (since 1 ≤ j < i²).
In total, this means the number of iterations for these two loops is equal to the sum of (i²-1) over all 1 ≤ i < n. This is similar to computing the sum of the first n squares, and is of order O(n³).
Note the modulo operator % takes constant time (O(1)) to compute, therefore checking the condition if (j % i == 0) for all iterations of these two loops will not affect the O(n³) runtime.
Now let's talk about the inner loop inside the conditional.
We are interested in seeing how many times (and for which values of j) this if condition evaluates to true, since this would dictate how many iterations the innermost loop will run.
Practically speaking, (j % i) will never equal 0 if j < i, so the second loop could actually be shortened to start from i rather than from 1, however this will not impact the Big-O upper bound of the algorithm.
Notice that for a given number i, (j % i == 0) if and only if i is a divisor of j. Since our range is (1 ≤ j < i²), there will be a total of (i-1) values of j for which this will be true, for any given i. If this is confusing, consider this example:
Let's assume i = 4. Then our index j would iterate through all values 1,...,15 (since j < i² = 16),
and (j % i == 0) would be true for j = 4, 8, 12 - exactly (i - 1) values.
The innermost loop would therefore make a total of (12 + 8 + 4 = 24) iterations. Thus for a general index i, we would look for the sum: i + 2i + 3i + ... + (i-1)i to indicate the number of iterations the innermost loop would make.
And this could be generalized by calculating the sum of this arithmetic progression. The first value is i and the last value is (i-1)i, which results in a sum of (i³ - i²)/2 iterations of the k loop for every value of i. In turn, the sum of this for all values of i could be computed by calculating the sum of cubes and the sum of squares - for a total runtime of O(n⁴) iterations of the innermost loop (the k loop) for all values of i.
Thus in total, the runtime of this algorithm would be the total of both runtimes we calculated above. We checked the if statement O(n³) times and the innermost loop ran for O(n⁴), so assuming // Simple computation runs in constant time, our total runtime would come down to:
O(n³) + O(n⁴)*O(1) = O(n⁴)
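As an empirical sanity check (my own sketch, not part of either answer), we can count the innermost iterations directly; dividing by n⁴ should approach a constant (about 1/8, matching the sum of (i³ - i²)/2):
def inner_iterations(n):
    # count iterations of the innermost (k) loop only;
    # when the condition fires, the k loop runs exactly j times, so add j
    t = 0
    for i in range(1, n):
        for j in range(1, i * i):
            if j % i == 0:
                t += j
    return t

for n in [50, 100, 200]:
    print(n, inner_iterations(n) / n ** 4)  # tends towards ~0.125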
Let us assume that i = 2. Then j can be [1,2,3], and the "k" loop will run for j = 2 only.
Similarly, for i = 3, j can be [1,2,3,4,5,6,7,8]; hence the k loop runs for j = 3, 6. You can see a pattern here: for any value of i, the 'k' loop will run (i-1) times. The lengths of these loops will be [i, 2*i, 3*i, ..., (i-1)*i].
Hence the total number of iterations of the k loop for a given i is
= i + (2*i) + (3*i) + ..... + ((i-1)*i)
= (i^2)(i-1)/2
Summing this over all values of i gives a final complexity of O(n^4).

How does this method, which finds the smallest factor of a given number, work?

I've recently come across a method which returns the smallest factor of a given number:
public static int findFactor(int n)
{
    int i = 1;
    int j = n - 1;
    int p = j; // invariant: p = i * j
    while (p != n && i < j)
    {
        i++;
        p += j;
        while (p > n)
        {
            j--;
            p -= i;
        }
    }
    return p == n ? i : n;
}
After examining the method, I've been able to (most likely incorrectly) determine the quantities which some of its variables respectively represent:
n = the int that is subject to factorization for the purposes of determining its smallest factor
i = the next potential factor of n to be tested
j = the smallest integer which i can be multiplied by to yield a value >= n
The problem is I don't know what quantity p represents. The inner loop seems to treat (p += j) - n as a potential multiple of i, but given what I believe j represents, I don't understand how that can be true for all i, or how the outer loop accounts for the "extra" iteration of the inner loop that is carried out before the latter terminates as a result of p < n.
Assuming I've correctly determined what n, i, and j represent, what quantity does p represent?
If any of my determinations are incorrect, what do each of the quantities represent?
p stands for “product”. The invariant, as stated, is p == i*j; and the algorithm tries different combinations of i and j until the product (p) equals n. If it never does (the while loop falls through), you get p != n, and hence n is returned (n is prime).
At the end of the outer while loop's body, j is the largest integer which i can be multiplied by to yield a value ≤ n.
The algorithm avoids explicit division, and tries to limit the number of j values inspected for each i. At the beginning of the outer loop, p==i*j is just less than n. As i is gradually increased, j needs to gradually shrink. In each outer loop, i is increased (and p is corrected to match the invariant). The inner loop then decreases j (and corrects p) until p is ≤ n again. Since i*j is only just less than n at the beginning of the next outer loop, increasing i makes the product greater than n again, and the process repeats.
The algorithm tries all divisors between 1 and n / i (continuing past n / i is of no use as the corresponding quotients have already been tried).
So the outer loop actually performs
i = 1
while (i * (n / i) != n && i < n / i)
{
    i++;
}
It does it in a clever way, by avoiding divisions. As the annotation says, the invariant p = i * j is maintained; more precisely, p is the largest multiple of i that doesn't exceed n, and this actually establishes j = n / i.
There is a little adjustment to perform when i is incremented: i becoming i + 1 makes p = i * j become (i + 1) * j = p + j, and p may become too large. This is fixed by decrementing j as many times as necessary (j--, p -= i) to compensate.
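To verify these invariants, here is a direct Python port of findFactor (my own sketch) with both properties asserted inside the loop, cross-checked against trial division:
def find_factor(n):
    # smallest factor of n, or n itself if n is prime (n >= 2)
    i, j = 1, n - 1
    p = j                   # invariant: p == i * j
    while p != n and i < j:
        i += 1
        p += j              # i grew by 1, so p grows by j
        while p > n:
            j -= 1
            p -= i          # j shrank by 1, so p shrinks by i
        assert p == i * j   # the stated invariant
        assert j == n // i  # p is the largest multiple of i not exceeding n
    return i if p == n else n

for n in range(2, 500):
    smallest = next(d for d in range(2, n + 1) if n % d == 0)
    assert find_factor(n) == smallest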

finding the maximum product of 2 primes below a given number

Given a number N, how do we find maximum P*Q < N, such that P and Q are prime numbers?
My (brute force) attempt:
find a list {P, N/P} for all primes P < √N
find a list of primes Q, such that Q is the largest prime just below N/P in the list above
Determine the maximum product P*Q from above
While this brute force approach will work, is there a formal (more sensible) solution to this question?
Example: N=27
√N = 5.196
Candidate primes: 2,3,5 --> [{2,13.5},{3,9},{5,5.4}] ->[{2,13},{3,7},{5,5}]
Solution: Max([2*13, 3*7, 5*5]) = 2*13 = 26
Hence, the brute force solution works.
Taking this one step further, we see that Q_max <= N/2 and if indeed we agree that P < Q, then we have Q >= √N.
We can refine our solution set to only those values {P, N\P} where N\P >= √N.
I have opted for integer division "\" since we are only interested in integer values, and integer division is indeed much faster than regular division "/".
The problem reduces to:
Example: N=27
√N = 5.196
Candidate P: 2,3 --> [{2,13},{3,9}] -->[{2,13},{3,7}]
(we drop {5,5} since N\P < √N i.e. 5 < 5.196)
Solution set: max([2*13, 3*7]) = 2*13 = 26
It might look trivial, but it just eliminated 1/3 of the possible solution set.
Are there other clever procedures we can add to reduce the set further?
This is similar to what @RalphMRickenback describes, but with tighter complexity bounds.
The prime-finding algorithm he describes is the sieve of Eratosthenes, which needs O(n) space and has O(n log log n) time complexity; you may want to see the discussion on Wikipedia if you want to be more careful about this.
After finding a list of primes smaller than n // 2, you can scan it a single time, i.e. with O(n) complexity, by having a pointer start at the beginning and another at the end. If the product of those two primes is larger than your value, reduce the high pointer. If the product is smaller, compare it to a stored maximum product, and increase the low pointer.
EDIT As mentioned in the comments, the time complexity of the scan of the primes is better than linear on n, since it is only over the primes less than n, so O(n / log n).
Rather than pseudo-code, here's a full implementation in Python:
def prime_sieve(n):
    sieve = [False, False] + [True] * (n - 1)
    for num, is_prime in enumerate(sieve):
        if num * num > n:
            break
        if not is_prime:
            continue
        for not_a_prime in range(num * num, n + 1, num):
            sieve[not_a_prime] = False
    return [num for num, is_prime in enumerate(sieve) if is_prime]

def max_prime_product(n):
    primes = prime_sieve(n // 2)
    lo, hi = 0, len(primes) - 1
    max_prod = 0
    max_pair = None
    while lo <= hi:
        prod = primes[lo] * primes[hi]
        if prod < n:  # the product must stay strictly below n
            if prod > max_prod:
                max_prod = prod
                max_pair = (primes[lo], primes[hi])
            lo += 1
        else:
            hi -= 1
    return max_prod, max_pair
With your example this produces:
>>> max_prime_product(27)
(26, (2, 13))
Another brute force attempt:
Find all prime numbers p <= N/2
Iterate over the array from the smallest p as long as p < √N and multiply it with the largest q < N/p, retaining the largest product.
If enough memory is available (N/2 bits), one could make a bit array of that size. Initialize it to all TRUE except the first position. Iterating over the bit array, calculate the multiples of the position you are at and set all of them to FALSE. If the next position is already FALSE, you do not need to recalculate its multiples; they are already set to FALSE.
Finding all primes therefore takes less than O(N^2) time.
a[1] := false;
m := n \ 2; // sizeof(a)
for i := 2 to m do
    a[i] := true;
for i := 2 to m do
    if a[i] then
        for j := 2*i to m step i do
            a[j] := false;
Step 2) is < O(n^2) as well:
result := 0;
for i := 2 to √N do
    if not a[i] then continue; // next i
    for j := (n \ i) downto i do
        if not a[j] then continue; // next j
        if i * j < N then
            result := max(result, i * j);
            break; // next i
    if result = N then break; // you are finished
This can be optimized further, I guess. You can keep (i,j) to know the two prime numbers.
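A Python rendering of these two steps (my own sketch; it uses a list of booleans in place of a real bit array and assumes N >= 4) could look like this:
from math import isqrt

def max_product_below(n):
    # step 1): sieve all primes <= n \ 2
    m = n // 2
    a = [True] * (m + 1)
    a[0] = a[1] = False
    for i in range(2, isqrt(m) + 1):
        if a[i]:
            for j in range(i * i, m + 1, i):
                a[j] = False
    # step 2): for each prime p <= sqrt(n), take the largest prime q <= (n-1) \ p
    result = 0
    for p in range(2, isqrt(n) + 1):
        if not a[p]:
            continue
        for q in range((n - 1) // p, p - 1, -1):  # (n-1)//p keeps p*q strictly below n
            if a[q]:
                result = max(result, p * q)
                break
    return result

assert max_product_below(27) == 26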

Analysis of for loop

Consider this fragment of code
int sum = 0;
for (int i = 0; i <= n*n; i = i*2) {
    sum++;
}
How do you do a quick, proper analysis of it to get the order of growth of the worst-case running time?
How does changing the increment statement to i = i*3 instead of i = i*2 change the worst-case running time?
And is our analysis affected by changing the comparison operator to < instead of <=?
int sum = 0;
for (int i = 0; i <= n*n; i = i*2) {
    sum++;
}
As it stands, this is an infinite loop which will never stop, since i never changes.
As complexity is defined for only Algorithms, which by definition should terminate in finite amount of time, it is undefined for this snippet.
However, if you change the code to the following :
int sum = 0;
for (int i = 1; i <= n*n; i = i*2) {
    sum++;
}
We can analyze the complexity as follows:
Let the loop run k - 1 times, terminating at the kth update of i.
Since it's better to be redundant than to be unclear, here is what is happening:
Init(1) -> test(1) -> Loop(1) [i = 1]->
Update(2) -> test(2) -> Loop(2) [i = 2]->
...
Update(k - 1) -> test(k - 1) -> Loop(k - 1) [i = 2 ^ (k - 2)] ->
Update(k) -> test(k)->STOP [Test fails as i becomes 2 ^ (k - 1)]
Where Update(k) means kth update (i = i * 2).
Since the increments in i are such that in the pth loop iteration (or equivalently, after the pth update), the value of i will be 2 ^ (p - 1), we can say that at termination:
2 ^ (k - 1) > (n * n)
In words: we terminated at the kth update. Whatever the value of i was, it must have been greater than (n * n), or we would have gone on to the kth loop iteration. Taking log base 2 on both sides:
k ~ 2 * log(n)
Which implies that k is O(log(n)).
Equivalently, the number of times the loop runs is O(log(n)).
You can easily extend this idea to any limit (say n*n*n) and any increments (i*3, i*4 etc.)
The Big O complexity will be unaffected by using < instead of <=
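A quick numerical check (my own snippet, not part of the answer) confirms that the count stays within a constant of 2 * log2(n):
import math

def loop_count(n):
    # mirrors the corrected loop: i = 1; i <= n*n; i = i*2
    i, count = 1, 0
    while i <= n * n:
        count += 1
        i *= 2
    return count

for n in [4, 16, 100, 1000]:
    print(n, loop_count(n), 2 * math.log2(n))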
Actually, this loop is an infinite loop:
i = 0
i = i*2 // 0*2 = 0
So this loop will never end. Make i = 1; then sum counts the powers of 2 up to n^2.
For any loop, to analyze it you have to look at two things: the condition that will make it exit, and the iteration applied to the variable used in the condition.
For your code, you can notice that the loop stops when i goes from 1 up past n*n (n^2), and the variable i increases like i = i*2. As i increases in this manner, the loop runs about log(n^2) times. You can see this by taking an example value for n^2, like 128, and iterating manually one by one.

Fastest unconditional sort algorithm

I have a function, which can take two elements and return them back in ascending order:
void Sort2(int &a, int &b) {
    if (a < b) return;
    int t = a;
    a = b;
    b = t;
}
what is the fastest way to sort an array with N entries using this function if I am not allowed to use extra conditional operators?
That means that my whole program should look like this:
int main() {
    int a[N];
    // fill the array a
    const int NS = ...; // number of comparisons, depending on N
    const int c[NS][2] = { {0,1}, {0,2}, ... }; // sequence of index pairs generated depending on N
    for (int i = 0; i < NS; i++) {
        Sort2(a[c[i][0]], a[c[i][1]]);
    }
    // sort is finished
    return 1;
}
Most of the fast sort algorithms use conditions to decide what to do. There is bubble sort of course, but it takes M = N(N-1)/2 comparisons. This is not optimal; for instance, with N = 4 it takes M = 6 comparisons, while 4 entries can be sorted with 5:
Sort2(a[0],a[1]);
Sort2(a[2],a[3]);
Sort2(a[1],a[3]);
Sort2(a[0],a[2]);
Sort2(a[1],a[2]);
The standard approach is known as Bitonic Mergesort. It is hella efficient when parallelized, and only slightly less efficient than conventional algorithms when not parallelized. Bitonic mergesort is a special kind of a wider class of algorithms known as "sorting networks"; it is unusual among sorting networks in that some of its reorderings are in reverse order of the desired sort (though everything is in the correct order once the algorithm completes). You can do that with your Sort2 by passing in a higher array slot for the first argument than the second.
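For illustration, here is a compact Python sketch of a bitonic sorting network for power-of-2 sizes (my own, not taken from the answer; sort2 mimics Sort2). The descending compare-exchanges are the "reverse order" steps mentioned above, obtained simply by reversing the argument order:
import itertools

def sort2(a, i, j):
    # ascending compare-exchange: afterwards a[i] <= a[j]
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def bitonic_sort(a):
    n = len(a)  # assumed to be a power of 2
    k = 2
    while k <= n:          # size of the blocks being merged
        j = k // 2
        while j > 0:       # compare-exchange distance within a block
            for i in range(n):
                l = i ^ j
                if l > i:
                    if (i & k) == 0:
                        sort2(a, i, l)  # ascending block
                    else:
                        sort2(a, l, i)  # descending block: arguments reversed
            j //= 2
        k *= 2
    return a

# the same fixed sequence of compare-exchanges sorts every input
for perm in itertools.permutations(range(8)):
    assert bitonic_sort(list(perm)) == list(range(8))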
For N a power of 2 you can generalize the approach you used, by using a "merge-sortish" kind of approach: you sort the first half and the last half separately, and then merge these using a few comparisons.
For instance, consider an array of size 8. And assume that the first half is sorted and the last half is sorted (by applying this same approach recursively):
A B C D P Q R S
In the first round, you do a comparison of 1 vs 1, 2 vs 2, etc:
---------
| |
| ---------
| | | |
A B C D P Q R S
| | | |
| ---------
| |
---------
After this round, the first and the last element are in the right place, so you need to repeat the process for the inner 6 elements (I keep the names of the elements the same, because it is unknown where they end up):
-------
| |
| -------
| | | |
A B C D P Q R S
| |
-------
In the next round, the inner 4 elements are compared, and in the last round the inner 2.
Let f(n) be the number of comparisons needed to sort an array of length n (where n is a power of 2, for the moment). Clearly, an array consisting of 1 element is sorted already:
f(1) = 0
For a longer array, you first need to sort both halves, and then perform the procedure described above. For n=8, that took 4+3+2+1 = (n/2)(n/2+1)/2 comparisons. Hence in general:
f(n) = 2 f(n/2) + (n/2)(n/2+1)/2
Note that for n=4, this indeed gives:
f(4) = 2 f(2) + 2*3/2
= 2 * (2 f(1) + 1*2/2) + 3
= 5
To accommodate values of n that are not a power of 2, the important thing is to do the merging step on an odd-length array. The simplest strategy seems to be to compare the smallest element of both subarrays (which yields the smallest element) and then just continue on the rest of the array (which now has even length).
If we write g(k) = k(k+1)/2, we can now have a short way of writing the recursive formula (I use 2k and 2k+1 to distinguish even and odd):
f(1) = 0
f(2k) = 2 f(k) + g(k)
f(2k+1) = f(k+1) + f(k) + 1 + g(k)
Some pseudocode on how to approach this:
function sort(A, start, length) {
    if (length == 1) {
        // do nothing
    } else if (length is even) {
        sort(A, start, length/2)
        sort(A, start+length/2, length/2)
        merge(A, start, length)
    } else if (length is odd) {
        sort(A, start, length/2+1)
        sort(A, start+length/2+1, length/2)
        Sort2(A[start], A[start+length/2+1])
        merge(A, start+1, length-1)
    }
}
function merge(A, start, length) {
    if (length > 0) {
        for (i = 0; i < length/2; i++)
            Sort2(A[start+i], A[start+i+length/2])
        merge(A, start+1, length-2)
    }
}
And you would run this on your array by
sort(A, 0, A.length)
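Here is a Python rendering of this pseudocode (my own sketch, restricted to power-of-2 lengths, so only the even branch is exercised), with an exhaustive test for n = 8 and a counter confirming f(8) = 2 f(4) + g(4) = 20 comparisons:
import itertools

comparisons = 0

def sort2(a, i, j):
    # ascending compare-exchange, counted so we can check f(n)
    global comparisons
    comparisons += 1
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def merge(a, start, length):
    # merge two sorted halves of equal length length/2
    if length > 0:
        half = length // 2
        for i in range(half):
            sort2(a, start + i, start + i + half)
        # first and last element are now in place; recurse on the inner part
        merge(a, start + 1, length - 2)

def net_sort(a, start, length):
    if length > 1:
        half = length // 2
        net_sort(a, start, half)
        net_sort(a, start + half, half)
        merge(a, start, length)

for perm in itertools.permutations(range(8)):
    a = list(perm)
    comparisons = 0
    net_sort(a, 0, 8)
    assert a == list(range(8)) and comparisons == 20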
