for i := 1 to n do
  for j := 1 to i do
    for k := 1 to j do Mod A
It is known that O(Mod A) = M, a constant.
I tried, and this was my result: O[(1 + 2 + ... + i) * (1 + 2 + ... + n) * M] = O[(i^2 + i)(n^2 + n)/4 * M] = O(n^4), because the maximum value of i is n. I wonder whether this explanation is true.
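If it helps to check the bound empirically, here is a minimal counting sketch (mine, not from the question; count_mod_a is a made-up name) that tallies how many times Mod A would execute and compares the count against n^3 and n^4; whichever ratio settles toward a constant indicates the tight bound.

def count_mod_a(n):
    # count how many times the innermost statement "Mod A" would execute
    count = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            for k in range(1, j + 1):
                count += 1  # stands in for the constant-cost Mod A
    return count

for n in (10, 20, 40, 80):
    c = count_mod_a(n)
    print(n, c, c / n**3, c / n**4)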
In pseudo-code:
j = 5;
while (j <= n) {
    j = j * j * j * j;
}
What is the time complexity of this code?
It is way faster than O(log n); is there even any reason to go lower than that?
Let's trace through the execution of the code. Suppose we start with initial value j0:
0. j ← j0
1. j ← j0^4
2. j ← [j0^4]^4 = j0^(4^2)
3. j ← [j0^(4^2)]^4 = j0^(4^3)
4. j ← [j0^(4^3)]^4 = j0^(4^4)
...
m. j ← [j0^(4^(m-1))]^4 = j0^(4^m)
... after m loops.
The loop terminates once the value exceeds n:
j0^(4^m) > n
⇒ 4^m > log_j0(n)
⇒ m > log_4(log_j0(n))
Thus the time complexity is O(m) = O(log log n).
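As a quick sanity check (my own sketch, not part of the original answer; the helper name is made up), one can count the loop's iterations for growing n and compare them with log_4(log_j0(n)); the count is the smallest integer m above the predicted value.

import math

def iterations(n, j0=5):
    # the loop from the question: j is raised to the 4th power until it exceeds n
    j, m = j0, 0
    while j <= n:
        j = j ** 4
        m += 1
    return m

for n in (10**3, 10**6, 10**12, 10**24, 10**48):
    predicted = math.log(math.log(n, 5), 4)  # log_4(log_j0(n)) with j0 = 5
    print(n, iterations(n), round(predicted, 2))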
I used help from MathSE to find out how to solve this. The answer is the same as the one by #meowgoesthedog, but I understand it the following way:
On every iteration, the value of j is raised to its own 4th power. Or, looking at it from the side of n, on every iteration n is effectively reduced to its 4th root. Hence the recurrence looks like:
T(n) = 1 + T(n^(1/4))
For any integer k with 2^(4^k) + 1 <= n <= 2^(4^(k+1)), the recurrence becomes:
T(n) = 1 + k
if we go on to assume that the 4th root is always an integer. It won't matter if it is not, since a constant of +/- 1 is ignored in the big-O calculation.
Now, since the assumption of the 4th root being an integer simplifies things for us, we can solve the equation
n = 2^(4^k),
which yields k = (Log(Log(n)) - Log(2))/Log(4).
This implies that T(n) = O(Log(Log(n))).
I'm trying to calculate the time complexity of the following code snippet
sum = 0;
for (i = 0; i <= n; i++) {
    for (j = 1; j <= i; j++) {
        if (i % j == 0) {
            for (k = 0; k <= n; k++) {
                sum = sum + k;
            }
        }
    }
}
What I think is that out of the N iterations of the first loop, only one value of i (namely 0) is allowed to enter the k loop, and for i = 1..N the k loop never runs.
So only one value of i runs the j loop N times and the k loop N times, and for the other values of i only the j loop runs, N times.
So is the TC = O(N^2)?
Here let d(n) be the number of divisors of n.
I see your program doing O(n) work (the innermost loop) for the O(d(n)) divisors of each i (with i looping from 0 to n in the outermost loop: O(n)).
So its complexity is O(n * d(n) * n).
Reference: for large n, d(n) ~ O(exp(log(n) / log(log n))).
So the overall complexity is O(n^(2 + 1/log(log n))).
I've got another answer. Let's replace the inner loop with an abstract func():
for (i = 0; i <= n; i++) {
    for (j = 1; j <= i; j++) {
        if (i % j == 0) {
            func();
        }
    }
}
First, forgetting the calls to func(), the cost M of evaluating all the (i % j) checks is O(n^2).
Now we can ask ourselves how many times func() is called. It is called once for each divisor of i, that is, d(i) times for each i. Summed over i, this is the divisor summatory function D(n), and D(n) ~ n log n for large n.
So func() is called about n log n times. At the same time, func() itself has complexity O(n), so that part costs P = O(n * n log n).
The total complexity is therefore M + P = O(n^2) + O(n^2 log n) = O(n^2 log n).
Edit
Wow, thanks for the downvote! I guess I need to prove it using Python.
This code prints n, the number of times the inner loop has been entered so far, and the ratio of the latter to the divisor summatory function:
import math

n = 100000
i = 0
z = 0
gg = 2 * 0.5772156649 - 1  # 2*gamma - 1, gamma being the Euler-Mascheroni constant

while i < n:
    j = 1
    while j <= i:
        if i % j == 0:
            # ignoring the innermost loop, just count how many times it would be entered
            z += 1
        j += 1
    if i > 0 and i % 1000 == 0:
        # asymptotic divisor summatory function, so that z/Di converges to 1.0 quickly
        Di = i * math.log(i) + i * gg
        # prints i, z and z/Di
        print("%d: %d: %s" % (i, z, z / Di))
    i += 1
Output sample:
24000: 245792: 1.00010672544
25000: 257036: 1.00003672445
26000: 268353: 1.00009554815
So the innermost loop is entered about n log n times, and the total complexity is O(n^2 log n).
I have analyzed the running time of the following algorithm. I derived a Θ bound, but can its running time also be expressed with big-O?
                             Cost   Time
1. for i ← 1 to n            c1     n
2.   do for j ← i to n       c2     n
3.     do k ← k + j          c3     n-1
T(n) = c1*n + c2*n + c3*(n-1)
     = n*(c1 + c2) + n - 1
     = n + n - 1
Or T(n) = Θ(n).
So the running time is Θ(n).
Your loop will execute the inner statement n + (n-1) + ... + 1 = n(n+1)/2 times (the well-known arithmetic progression formula), which can also be estimated as O(n^2), since big-O gives an upper (majorizing) estimate.
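A quick counting sketch (mine, not from the original answer; count_ops is a made-up name) confirms the arithmetic-progression count: the body of the inner loop runs exactly n(n+1)/2 times.

def count_ops(n):
    # count executions of the innermost statement k <- k + j
    ops = 0
    for i in range(1, n + 1):
        for j in range(i, n + 1):
            ops += 1
    return ops

for n in (10, 100, 1000):
    print(n, count_ops(n), n * (n + 1) // 2)  # the two counts match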
                             Cost   Time
1. for i ← 1 to n            c1     n
2.   do for j ← i to n       c2     n
3.     do k ← k + j          c3     1
T(n) = n * n * 1 = O(n^2)   (#Giulio Franco)
It's a nested loop that performs a constant-time operation inside.
do k ← k + j is constant time because the operation k + j takes a fixed amount of time no matter what inputs you give it.
loop (n)
    loop (n)
        constant time (1)
When it's a loop inside a loop, you multiply: n * n * 1 = O(n^2).
loop (n)
loop (n)
These loops aren't nested, so you add instead:
this would be n + n,
i.e. O(n + n), which reduces to O(n).
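Here is a tiny illustration of that rule of thumb (my own sketch, with made-up function names): counters show why nested loops multiply while consecutive loops add.

def nested(n):
    # loop inside a loop: the body runs n * n times
    ops = 0
    for i in range(n):
        for j in range(n):
            ops += 1
    return ops

def sequential(n):
    # two separate loops: the bodies run n + n times in total
    ops = 0
    for i in range(n):
        ops += 1
    for j in range(n):
        ops += 1
    return ops

print(nested(100), sequential(100))  # 10000 vs 200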
The algorithm is taken from the great "Algorithms and Programming: Problems and Solutions" by Alexander Shen (namely exercise 1.1.28).
The following is my translation from Russian, so excuse any mistakes or ambiguity; please correct me if you feel like it.
What the algorithm should do
Given a natural number n, the algorithm calculates the number of solutions of the inequality
x*x + y*y < n
in natural (non-negative) numbers, without using operations on real numbers.
In Pascal:
k := 0; s := 0;
{at this moment of execution
 (s) = number of solutions of the inequality
 x*x + y*y < n with x < k}
while k*k < n do begin
  l := 0; t := 0;
  while k*k + l*l < n do begin
    l := l + 1;
    t := t + 1;
  end;
  {at this line
   (t) = number of solutions of k*k + y*y < n
   for the given (k) with y >= 0}
  k := k + 1;
  s := s + t;
end;
{k*k >= n, so s = number of solutions of the inequality}
Further in the text Shen says briefly that the number of operations performed by this algorithm is "proportional to n, as one can calculate". So I ask you: how can one calculate that with strict mathematics?
You have two loops, one inside the other.
The outer loop has the condition k*k < n, so k goes from 0 up to SQRT(n),
and the inner loop has the condition k*k + l*l < n, so l goes from 0 up to SQRT(n - k^2). But this is smaller than SQRT(n).
So the maximum number of iterations is less than SQRT(n) * SQRT(n), which is n, and in every iteration a constant number of operations is done.
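To see the bound concretely, here is a direct Python translation of the Pascal code (my own sketch; count_solutions is a made-up name) with a counter for the inner-loop steps. The ratio ops/n stays bounded (it approaches pi/4, the area of the quarter disk), which is exactly the "proportional to n" claim.

def count_solutions(n):
    # returns (number of solutions of x*x + y*y < n, number of inner-loop steps)
    k = s = ops = 0
    while k * k < n:
        l = t = 0
        while k * k + l * l < n:
            l += 1
            t += 1
            ops += 1
        k += 1
        s += t
    return s, ops

for n in (10, 100, 1000, 10000, 100000):
    s, ops = count_solutions(n)
    print(n, s, ops / n)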
The number of operations done by nested loops is the product of the two loop lengths;
for example:
for i = 1 to 5
    for j = 1 to 10
        print j + i
    end
end
will print 5*10 = 50 times
In your example the outer loop runs sqrt(n) times, that is, until k^2 = n, i.e. k = sqrt(n).
The inner loop runs at most sqrt(n) times as well:
k is constant within the inner loop, and the loop stops when k^2 + l^2 >= n; the largest number of iterations occurs at k = 0, where it runs until l^2 >= n, i.e. l >= sqrt(n).
So the total number of iterations is at most sqrt(n) * sqrt(n) = O(n).
The time taken by your algorithm is proportional to the number of operations it performs. So you can also verify empirically that this time grows linearly with the input size n: time the algorithm's completion for a wide range of n values and plot n versus time. Doing so should give you a roughly linear graph.
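A minimal sketch of that experiment (my own, using a direct Python translation of the Pascal algorithm; the names are made up) could look like this. time.perf_counter provides the wall-clock measurement, and elapsed/n should stay roughly constant, up to measurement noise.

import time

def count_solutions(n):
    k = s = 0
    while k * k < n:
        l = t = 0
        while k * k + l * l < n:
            l += 1
            t += 1
        k += 1
        s += t
    return s

for n in (10000, 20000, 40000, 80000, 160000):
    start = time.perf_counter()
    count_solutions(n)
    elapsed = time.perf_counter() - start
    print(n, round(elapsed, 4), elapsed / n)  # elapsed/n stays roughly constant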
I have a short program here:
Given any n:
i = 0;
while (i < n) {
    k = 2;
    while (k < n) {
        sum += a[j] * b[k];
        k = k * k;
    }
    i++;
}
The asymptotic running time of this is O(n log log n). Why is this the case? I get that the entire program will run at least n times. But I'm not sure how to find the log log n part. The inner loop depends on k * k, so it's obviously going to run fewer than n times. And it would just be n log n if k were doubled each time. But how would you figure out that the answer is log log n?
For a mathematical proof, the inner loop can be written as:
T(n) = T(sqrt(n)) + 1
W.l.o.g. assume 2^(2^(t-1)) <= n <= 2^(2^t). We know that 2^(2^t) = 2^(2^(t-1)) * 2^(2^(t-1)), so
T(2^(2^t)) = T(2^(2^(t-1))) + 1 = T(2^(2^(t-2))) + 2 = ... = T(2^(2^0)) + t
Therefore
T(2^(2^(t-1))) <= T(n) <= T(2^(2^t)) = T(2^(2^0)) + log log 2^(2^t) = O(1) + log log n
==> O(1) + log log n - 1 <= T(n) <= O(1) + log log n ==> T(n) = Theta(log log n),
and then the total time is O(n log log n).
Why is the inner loop T(n) = T(sqrt(n)) + 1?
First see when the inner loop breaks: when k >= n. That means one step before, k was at least sqrt(n), and two steps before, it was at most sqrt(n), so the running time satisfies T(sqrt(n)) + 2 >= T(n) >= T(sqrt(n)) + 1.
The time complexity of a loop is O(log log n) if the loop variable is squared (or has its square root taken) on each iteration. If the loop variable is divided or multiplied by a constant amount, then the complexity is O(log n).
E.g., in your case the value of k evolves as follows; the number in parentheses denotes how many times the loop body has executed:
2 (0), 2^2 (1), 2^4 (2), 2^8 (3), 2^16 (4), 2^32 (5), 2^64 (6), ... until n is reached.
The number of iterations here is O(log log n).
For the sake of illustration, let's assume that n is 2^64. Now log(2^64) = 64 and log 64 = log(2^6) = 6. Hence the inner loop runs 6 times when n is 2^64.
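A quick check of that worked example (my own sketch; inner_iterations is a made-up name): count how many times the inner loop body runs for n = 2**64 and compare it with log2(log2(n)).

import math

def inner_iterations(n):
    # inner loop from the question: k starts at 2 and is squared each time
    k, count = 2, 0
    while k < n:
        count += 1
        k = k * k
    return count

n = 2 ** 64
print(inner_iterations(n), math.log2(math.log2(n)))  # prints 6 and 6.0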
I think if the code were like this, it would be O(n log n):
i = 0;
while (i < n) {
    k = 2;
    while (k < n) {
        sum += a[j] * b[k];
        k *= c; // c is a constant bigger than 1 and less than k
    }
    i++;
}
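For contrast, here is a small sketch (mine, with hypothetical helper names) that counts the inner iterations under the two update rules: multiplying k by a constant c gives about log_c(n) iterations, while squaring k gives about log2(log2(n)).

import math

def iters_multiply(n, c=3):
    # k *= c each time: about log_c(n) iterations
    k, count = 2, 0
    while k < n:
        k *= c
        count += 1
    return count

def iters_square(n):
    # k = k * k each time: about log2(log2(n)) iterations
    k, count = 2, 0
    while k < n:
        k = k * k
        count += 1
    return count

n = 10 ** 9
print(iters_multiply(n), round(math.log(n, 3), 1))
print(iters_square(n), round(math.log2(math.log2(n)), 1))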
Okay, so let's break this down first.
Given any n:
i = 0;
while (i < n) {
    k = 2;
    while (k < n) {
        sum += a[j] * b[k];
        k = k * k;
    }
    i++;
}
The condition of while (i < n) is checked n + 1 times, but the body runs n times, so we'll call it n.
Now here comes the fun part: the inner loop while (k < n) will not run n times; instead it runs about log log n times, because in each iteration, instead of incrementing k by 1, we square it. That means it only takes log log n iterations for k to reach n; you'll see this again when you study design and analysis of algorithms.
Now we combine the two and get n * log log n time. I hope you get it now.