begin
Input: n (pos. Integer)
Output: y (pos. Integer)
Other: x, z (pos. Integer)
y := 0;
x := 0;
while x < n do
y := y + 1;
z := 0;
while z < 4 do
x := x + 1;
z := z + 1;
end;
for (i = 0; i < 2; i++) {
x = x - 1;
}
End;
How is this done? I know that when there is a for loop it's O(N), and when there is a while loop it's O(log N).
I would appreciate the help :)
Thank you
A good way to do this is to determine how many times the outer loop executes and how much work it does per iteration.
The body of the loop is the following:
y := y + 1;
z := 0;
while z < 4 do
x := x + 1;
z := z + 1;
end;
for (i = 0; i < 2; i++) {
x = x - 1;
}
The first line does O(1) work, as does the second. The next part is an inner loop that runs exactly four times, each iteration doing O(1) work, so this inner loop does O(1) work overall. Finally, the remaining for loop runs exactly twice and therefore also does O(1) work. Consequently, each iteration of the outer loop does O(1) work.
So how many times does the outer loop execute? Well, the loop is
while x < n do
x starts at zero, and note that on each iteration the loop increments x four times and decrements x twice. Consequently, on each iteration of the loop, x increases by two. Therefore, the number of loop iterations is roughly n / 2 = O(n).
Since the loop runs O(n) times and does O(1) work per iteration, the total work done here is O(n).
Hope this helps!
I just started my data structures course and I'm having troubles trying to figure out the time complexity in the following code:
{
int j, y;
for(j = n; j >= 1; j--)
{
y = 1;
while (y < j)
y *= 2;
while (y > 2)
y = sqrt(y);
}
The outer 'for' loop runs n times on every run of the code, and the first 'while' loop runs about log2(j) times, if I'm not mistaken.
I'm not sure about the second 'while' loop and how to determine the overall time complexity of the code.
My initial thoughts were to determine which 'while' loop would "cost" more in each iteration of the 'for' loop, consider only the higher of the two and sum it up but obviously it didn't lead me to an answer.
Would appreciate any help, especially regarding the process and overall approach for computing the complexity of code such as this.
You are right that the first while loop has time complexity O(log(j)). The second while loop repeatedly takes the square root of y until it is at most 2.
Since y is approximately j (it's between j and 2j), the question is: how often can you perform a square root on j until you get a number less than or equal to 2? But equivalently you could ask: how often can you square 2 until you get a number larger than or equal to j? Or as an equation:
(((2^2)^...)^2 >= j   // k repeated squarings
<=> 2^(2^k) >= j
<=> 2^k >= log(j)
<=> k >= log(log(j))
So the second while loop has time complexity O(log(log(j))). That's negligible compared to O(log(j)). Now since j <= n and the outer loop is iterated n times, we get the overall complexity O(n log(n)).
What's the big O of this?
for (int i = 1; i < n; i++) {
for (int j = 1; j < (i*i); j++) {
if (j % i == 0) {
for (int k = 0; k < j; k++) {
// Simple computation
}
}
}
}
Can't really figure it out. I'm inclined to say O(n^4 log(n)) but I feel like I'm wrong here.
This is quite a confusing analysis, so let's break it down bit by bit to make sense of the calculations:
The outermost loop runs for n-1 iterations (since 1 ≤ i < n).
The next loop inside it makes (i² - 1) iterations for each index i of the outer loop (since 1 ≤ j < i²).
In total, this means the number of iterations for these two loops is equal to the sum of (i²-1) for each 1 ≤ i < n. This is similar to computing the sum of the first n squares, which is O(n³).
Note the modulo operator % takes constant time (O(1)) to compute, therefore checking the condition if (j % i == 0) for all iterations of these two loops will not affect the O(n³) runtime.
Now let's talk about the inner loop inside the conditional.
We are interested in seeing how many times (and for which values of j) this if condition evaluates to true, since this would dictate how many iterations the innermost loop will run.
Practically speaking, (j % i) will never equal 0 if j < i, so the second loop could actually be shortened to start from i rather than from 1, however this will not impact the Big-O upper bound of the algorithm.
Notice that for a given number i, (j % i == 0) if and only if i is a divisor of j. Since our range is (1 ≤ j < i²), there will be a total of (i-1) values of j for which this will be true, for any given i. If this is confusing, consider this example:
Let's assume i = 4, so i² = 16. Then our index j would iterate through all the values 1, ..., 15,
and (j % i == 0) would be true for j = 4, 8, 12: exactly (i - 1) values.
The innermost loop would therefore make a total of (12 + 8 + 4 = 24) iterations. Thus for a general index i, we would look for the sum: i + 2i + 3i + ... + (i-1)i to indicate the number of iterations the innermost loop would make.
And this could be generalized by calculating the sum of this arithmetic progression. The first value is i and the last value is (i-1)i, which results in a sum of (i³ - i²)/2 iterations of the k loop for every value of i. In turn, the sum of this for all values of i could be computed by calculating the sum of cubes and the sum of squares - for a total runtime of O(n⁴) iterations of the innermost loop (the k loop) for all values of i.
Thus in total, the runtime of this algorithm would be the total of both runtimes we calculated above. We checked the if statement O(n³) times and the innermost loop ran for O(n⁴), so assuming // Simple computation runs in constant time, our total runtime would come down to:
O(n³) + O(n⁴)*O(1) = O(n⁴)
Let us assume that i = 2. Then j can be [1, 2, 3] (since j < i² = 4), and the "k" loop will run for j = 2 only.
Similarly, for i = 3, j can be [1, 2, 3, 4, 5, 6, 7, 8], hence the k loop runs for j = 3 and j = 6. You can see a pattern here: for any value of i, the 'k' loop will run for (i - 1) values of j, and the lengths of those runs will be [i, 2*i, 3*i, ..., (i-1)*i].
Hence the number of iterations of the k loop for a given i is
= i + (2*i) + (3*i) + ... + ((i-1)*i)
= (i³ - i²)/2
Summing this over all 1 ≤ i < n, the final complexity is O(n⁴).
I'm trying to find the worst-case complexity function of this algorithm, considering comparisons as the most relevant operation. That is the case when the if and the else if are both always executed inside the loop, so the function is 2 * (number of loop iterations).
Since the variable i is being increased by a bigger number each time, the complexity is probably O(log n), but how do I find the exact number of executions? Thanks.
int find ( int a[], int n, int x ) {
int i = 0, mid;
while ( i < n ) {
mid = ( n + i ) / 2;
if ( a[mid] < x )
n = mid;
else if ( a[mid] > x )
i = mid + 1;
else return mid;
}
return 0;
}
Qualitative Understanding
Well let's try to look at the loop invariant to figure out how long this function is going to run.
We can see that the function will continue to execute code until this while(i < n){ ... } condition is met.
Let's also note that within this while loop, i or n is always being mutated to some variation of mid:
if ( a[mid] < x )       // Condition 1:
n = mid;                //   we set n to mid
else if ( a[mid] > x )  // Condition 2:
i = mid + 1;            //   we set i to mid+1
else return mid;        // Condition 3: we exit the loop (let's not worry about this)
So now let's focus on mid since our while condition always seems to be getting cut down depending on this value (since the while condition is dependent on i and n, one of which will be set to the value of mid after each loop iteration):
mid = ( n + i ) / 2; // mid = average of n and i
So effectively we can see what's going on here after looking at these pieces of the function:
The function will execute code while i < n, and after each loop iteration the value of i or n is set to the average value, effectively cutting down the space between i and n by half each time the loop iterates.
This algorithm is known as a binary search, and the idea behind it is we keep cutting the array boundaries in half each time we iterate in the loop.
So you can think about it as we keep cutting n in half until we can't cut in half anymore.
Quantitative Analysis
A mathematical way to look at this is to see that we're effectively dividing n by 2 each iteration, until i and n are equal to each other (or n < i).
So let's think about it as how many times can we divide our n by 2 until it equals 1? We want our n to equal 1 in this case because that's when we are unable to split the list any further.
So we're left with an equation, where x is the amount of time we need to execute the while loop:
n/2^x = 1
n = 2^x
lg(n) = lg(2^x)
lg(n) = x lg(2)
lg(n) = x
As you can see, x = lg(n), so we can conclude that your algorithm runs in O(lg n).
I tried many ways and created a table of n, i, t values. I noticed that for n = 1 the loop runs 0 times, for n = 2 it runs 1 time, for n = 3 or 4 it runs 2 times, for n = 5, 6, or 7 it runs 3 times, and for n = 8, 9, 10, or 11 it runs 4 times. I understand these values fully, but I cannot derive the Big-O of this algorithm from them.
function func3(n)
i = 1;
t = 1;
while i < n do
i = i + t;
t = t + 1;
end while
The statements in your loop repeat as long as i < n.
What is i? After x iterations, i is 1 plus the sum of the first x natural numbers: i = 1 + (1 + 2 + 3 + ... + x) = 1 + x(x+1)/2, using the formula S = x(x+1)/2 for the sum of the first x natural numbers.
Your loop condition is i < n, so iteration number x runs only if 1 + (x-1)x/2 < n. Solving this inequality, we obtain x < (1 + sqrt(8n - 7))/2. Since the number of loop iterations is an integer, it is the largest integer strictly below this bound, so the loop runs Θ(sqrt(n)) times.
For example:
n = 1, x < 1 => number of iterations is 0
n = 2, x < 2 => number of iterations is 1
n = 11, x < 5 => number of iterations is 4
For the following code :
s = 0 ;
for(i=m ; i<=(2*n-1) ; i+=m) {
if(i<=n+1){
s+=(i-1)/2 ;
}
else{
s+=(2*n-i+1)/2 ;
}
}
I want to reduce the complexity of the code from O(n)
to O(1), so I wanted to eliminate the for loop. But since
the sum s accumulates floor values like (i-1)/2 or (2*n-i+1)/2, eliminating the loop involves a tedious calculation of the floor of each (i-1)/2 or (2*n-i+1)/2 term. This became very difficult for me, as I may have derived the wrong formula for these sums of floors. Can you please help me change the complexity from O(n) to O(1), or help me with these floor summations? Is there any other way to reduce the complexity? If yes, then how?
As Don Roby said, there is a plain old arithmetic solution to your problem. Let me show you how to do it for the first values of i.
* EDIT 2 : CODE FOR THE LOWER PART *
for(int i=m ; i<= n+1 ; i+=m)//old computation
s+=(i-1)/2 ;
int a = (n+1)/m; // maximum value of j (the rescaled index below)
int b = (a*(a+1))/2; // 1 + 2 + ... + a
int v = 0;
int p;
if(m % 2 == 0){
p = m/2;
v = b*p - a; // this term is always here
}
else{
p = (m - 1)/2;
int sum1 = ((a/2)*(a/2 +1))/2;
int sum2 = (((a-1)/2)*((a-1)/2 +1))/2;
v = b*p - a; // this term is always here
v += sum1 + a/2; // sum( 1 <= j <= a ) of (j-1), j even
v += sum2; // sum( 1 <= j <= a ) of (j-1), j odd
}
System.out.println("Are both results equal? " + (s == v));
How do I come up with it? I take
for(i=m ; i<= n+1 ; i+=m)
s+=(i-1)/2 ;
I make a change
for(j=1 ; j*m <= n+1 ; j++)
s+=(j*m-1)/2 ;
Let a = Math.floor((n+1)/m). There are 3 cases:
m is even, in which case we write m = 2p and the interior of the loop becomes s += p*j - 1. The result is
p*(a*(a+1))/2 - a
m is odd and the iterator j is even
m is odd and the iterator j is odd
When m is odd, you can write m = 2p + 1 and the interior of the loop becomes
s += p*j + (j-1)/2
p*j is the same as before; now you need to break up the division by assuming j is always even or always odd and summing both contributions.
The next loop you need to compute is
for(int i=a+1 ; i<= (2*n-1) ; i+=m)// a is (n+1)/m
s+=(2*n-i+1)/2;
which is the same as
for(int i=1 ; i<= (2*n-1)-a ; i+=m)
s += (2*n - a)/2 - (i-1)/2;
This loop is similar to the first one, so there is not much work to do...
Indeed, this is tedious.
My approach to this would be to first write characterizing tests asserting the values produced for different values of m and n, and then start refactoring.
Your main loop has a change of logic based on getting halfway through (the if(i<=n+1) choice), so I'd first split it into two loops based on that.
Then you have, in each of the resulting loops, a computation that varies principally on whether i is even or odd. Split each into 2 more loops separating these, and the floor computations may be simpler to understand. Alternatively, you might see a pattern of repeated values that lets you simplify these loops in a different way.
Each of the resulting loops will likely be something resembling a sum of an arithmetic progression, so you'll likely find that they can be replaced by closed form computations not requiring loops at all.
While you go along this path, you might also refactor to extract portions of the computation to functions. Write characterizing tests for these as you extract them.
Keep running all your tests as you proceed and you'll likely be able to reduce this to a sum of simple computations, which might then reduce further by plain old arithmetic.