What is the time complexity of the below code?

for (i = 1; i <= n; i++)
{
    for (j = 1; j <= i; j++)
    {
        for (k = 1; k <= n^5; k = 15 * k)
        {
            x = y + z;
        }
    }
}
It seems to be O(n^2 log n) to me, but when I analyzed the k loop it did not seem to follow the log n pattern, which is what is confusing me.

It should be O(n^2 log(n)): the inner loop body is reached (n/2)(n+1) times in total (once for each pair i, j), and the k loop runs log base 15 of n^5 = 5 * log base 15 of n times, because k grows exponentially with the number of passes.
This results in about 5(n^2 + n)(log base 15 of n)/2 assignments to x, which is O(n^2 * log(n)).
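As a quick sanity check (a sketch of mine, not part of the original answer), the following Python snippet counts the exact number of assignments by brute force and compares it with the closed form n(n+1)/2 * (floor(5 * log_15(n)) + 1):

import math

def count_assignments(n):
    # brute-force count of how many times x = y + z executes
    ops = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            k = 1
            while k <= n**5:
                ops += 1
                k *= 15
    return ops

n = 50
k_iters = math.floor(5 * math.log(n, 15)) + 1   # length of the k loop
predicted = n * (n + 1) // 2 * k_iters          # triangular number of (i, j) pairs
print(count_assignments(n), predicted)          # both print 10200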

Time complexity of your problem is O(n^2 * log_15(n^5)) = O(n^2 * log(n)).
Explanation:
When we say O(log n) we usually mean log_2(n), not log_10(n).
The base 2 of the log function comes from the repeated division by 2 inside loops (as in binary search), not from the binary nature of computers.
But in your case the scaling value is not 2 but 15, because of the k = 15 * k update, so the base of the log function must be 15, not 2.
You can see the correlation between these by replacing the k *= 15 line with k *= 2 and the
print(n * n * int(math.log(n**5, 15) + 1))
line with
print(n * n * int(math.log(n**5, 2) + 1))
in the Python code below. The results will continue to match.
Also, because the log value is generally not a whole number once we leave base 2, it has to be rounded to an integer, hence the int(...) around the log:
Python Code:
import math

n = 100
i = 1
while i <= n:
    j = 1
    while j <= i:
        k = 1
        counter = 1
        while k <= n**5:
            x = 1 + 1
            k *= 15
            counter += 1
            #print(k)
        #print(counter)
        j += 1
        #print(j)
    i += 1
    #print(i)

print("\nTime Complexity Prediction:")
print(n * n * int(math.log(n**5, 15) + 1))
print("\nReal World Result:")
print((i - 1) * (j - 1) * (counter - 1))
print("")
Example results of the program:
For n = 10:
Time Complexity Prediction:
500
Real World Result:
500
For n = 100:
Time Complexity Prediction:
90000
Real World Result:
90000
For n = 1000:
Time Complexity Prediction:
13000000
Real World Result:
13000000
For n = 3000:
Time Complexity Prediction:
135000000
Real World Result:
135000000

Actually it is log base 15 of n.
Now, look at the powers of 15: 15, 225, 3375, 50625, 759375, 11390625, ...
See how fast they grow. When you run the innermost (k) loop, its cost is nearly negligible, because this sequence of powers of 15 passes n^5 after only a handful of iterations.
That's why there is no dramatic effect from the log(n) factor.
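To make that concrete, here is a tiny check of mine (not the answerer's):

n = 1000
k, steps = 1, 0
while k <= n**5:
    k *= 15
    steps += 1
print(steps)   # 13: the k loop body runs just 13 times even though n^5 = 10^15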

Related

Time Complexity - While loop divided by 2 with for loop nested

I am stuck on a review question for my upcoming midterms, and any help is greatly appreciated.
Please see the function below:
void george(int n) {
    int m = n;                        // c1 - 1 step
    while (m > 1)                     // c2 - log(n) steps
    {
        for (int i = 1; i < m; i++)   // c3 - log(n) * <stuck here>
            int S = 1;                // c4 - log(n) * <stuck here>
        m = m / 2;                    // c5 - log(n) steps
    }
}
I am stuck on the inner for loop since i is incrementing and m is being divided by 2 after every iteration.
If m = 100:
1st iteration, m = 100: the inner loop runs about 100 times (i iterates 100 times, + 1 for the last check)
2nd iteration, m = 50: the inner loop runs about 50 times (+ 1 for the last check)
... and so on
Would this also be considered log(n), since m is being divided by 2?
The external loop executes log(n) times.
The internal loop executes n + n/2 + n/4 + ... + 1 ~ 2*n times in total (a geometric progression sum).
Overall time is O(n + log(n)) = O(n).
Note - if we replace i < m with i < n in the inner loop, we obtain O(n*log(n)) complexity, because in this case we have n + n + n + ... + n operations for the inner loops, where the number of summands is log(n).
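A quick empirical check of the geometric-sum argument (my own sketch, not part of the original answer):

def george_ops(n):
    # count how many times the body of the inner for loop executes in george(n)
    ops = 0
    m = n
    while m > 1:
        ops += m - 1      # the for loop runs m - 1 times (i goes from 1 to m - 1)
        m //= 2
    return ops

for n in (100, 1000, 10000):
    print(n, george_ops(n), 2 * n)   # the count stays below 2n, as predicted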

Time complexity of the code segment

I'm trying to calculate the time complexity of the following code snippet
sum = 0;
for (i = 0; i <= n; i++) {
    for (j = 1; j <= i; j++) {
        if (i % j == 0) {
            for (k = 0; k <= n; k++) {
                sum = sum + k;
            }
        }
    }
}
What I think: out of the N iterations of the first loop, only one value, i = 0, is allowed to enter the k loop, and for i = 1 ... N the k loop never runs.
So only one value of i runs the j loop N times and the k loop N times, and for the other values of i only the j loop runs N times.
So, is the TC O(N^2)?
Here, let d(n) be the number of divisors of n.
I see your program doing O(n) work (the innermost loop) for each of the O(d(i)) divisors of each i (with i looping from 0 to n in the outermost loop: O(n)).
So its complexity is O(n * d(n) * n).
Reference: for large n, d(n) ~ O(exp(log(n) / log(log n))).
So the overall complexity is O(n^(2 + 1/log(log n))).
I've got another answer. Let's replace the innermost loop with an abstract func():
for (i = 0; i <= n; i++) {
    for (j = 1; j <= i; j++) {
        if (i % j == 0) {
            func();
        }
    }
}
First, forgetting the calls to func(), the cost M of evaluating all the (i % j) checks is O(n^2).
Now we can ask ourselves how many times func() is called. It is called once for each divisor of i, that is, d(i) times for each i. Summed over i this is the divisor summatory function D(n), and D(n) ~ n log n for large n.
So func() is called about n log n times, and func() itself has complexity O(n), which gives P = O(n * n log n).
The total complexity is M + P = O(n^2) + O(n^2 log n) = O(n^2 log n).
Edit
Wow, thanks for the downvote! I guess I need to prove it using Python.
This code prints n, how many times the innermost loop would be called up to n, and the ratio of the latter to the divisor summatory function:
import math

n = 100000
i = 0
z = 0
gg = 2 * 0.5772156649 - 1   # 2*gamma - 1, with gamma the Euler-Mascheroni constant
while i < n:
    j = 1
    while j <= i:
        if i % j == 0:
            # ignoring the innermost loop, just count the number of times it would be called
            z += 1
        j += 1
    if i > 0 and i % 1000 == 0:
        # divisor summatory function D(i) ~ i*log(i) + (2*gamma - 1)*i,
        # which makes z/Di converge to 1.0 quicker
        Di = i * math.log(i) + i * gg
        # prints i, z, z/Di
        print(str(i) + ": " + str(z) + ": " + str(z / Di))
    i += 1
Output sample:
24000: 245792: 1.00010672544
25000: 257036: 1.00003672445
26000: 268353: 1.00009554815
So the innermost loop is called about n * log n times, and the total complexity is O(n^2 * log n).

Determining and analyzing the big-O runtimes of these different loops

Here is the simple code; I want to find out the time complexity of it.
I have already done an analysis of it, but my teacher told me there is a mistake in it. I am not able to figure out where I am wrong and need help with it. Thanks.
j = 2
while (j < n)
{
    k = j
    while (k < n)
    {
        sum += a[k] * b[k]
        k = k * k
    }
    k = log(n)
    j += log(k)
}
Here is the answer I got:
time complexity = O(n / loglogn)
I just want to know where I am wrong.
You go from 2 to n, adding log log n to j each step, so you do indeed have n / log log n steps.
However, what is done per step? Each step, you go from j up to n, squaring k each time. How many operations is that? I'm not 100% sure, but based on messing around a bit and on this answer, it seems to come out to log log (n - j) steps, or log log n for short.
So: n / log log n steps, doing log log n operations each, gives you an O(n / log log n * log log n), or O(n), algorithm.
Some experimentation seems to more or less bear this out (Python), although n_ops appears to flag a bit as n gets bigger:
import math

def doit(n):
    n_ops = 0
    j = 2
    while j < n:
        k = j
        while k < n:
            # sum += a[k] * b[k]
            k = k * k
            n_ops += 1
        k = math.log(n, 2)
        j += math.log(k, 2)
        n_ops += 1
    return n_ops
Results:
>>> doit(100)
76
>>> doit(1000)
614
>>> doit(10000)
5389
>>> doit(100000)
49418
>>> doit(1000000)
463527
Ok, let's see. The
k = j
while (k < n)
{
    sum += a[k] * b[k]
    k = k * k
}
bit takes as long as it takes j^(2^i) to reach n, i.e. as long as 2^i takes to reach log_j(n), which is log_2(log_j(n)) steps. Now you have
j = 2
while (j < n)
{
    // stuff that takes log_2(log_j(n))
    j += log(log(n))
}
This would require n / log(log(n)) steps, but those steps take different amounts of time. If they took equal time, you would be right. But instead you have to sum log_2(log_j(n)) over all the values j takes between 2 and n, which is
sum over those j of [log_2(log(n)) - log_2(log(j))]
and that is not so simple. Well, at least I think I've pointed out where you are probably wrong, which was the question.
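To see the varying per-step cost concretely, here is a sketch of mine (same setup as the Python snippet in the previous answer) comparing the actual inner-loop count against ceil(log_2(log_j(n))) summed over the outer steps:

import math

def check(n):
    actual = 0
    predicted = 0
    step = math.log(math.log(n, 2), 2)   # j += log(log n), as in the question
    j = 2.0
    while j < n:
        k = j
        while k < n:                     # squaring k until it reaches n
            k = k * k
            actual += 1
        predicted += math.ceil(math.log(math.log(n, j), 2))
        j += step
    print(n, actual, predicted)          # the two counts agree (up to float rounding)

for n in (10**3, 10**5, 10**7):
    check(n)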

Homework - Big O analysis

My homework involves Big O analysis and I think I've got the hang of it, but I'm not 100% sure. Would any of you lovely people mind taking a look and telling me if I'm on the right track?
The assignment is below. For questions 1 and 3, my analysis and answers are on the right, after the // marks. For question 2, my analysis and answers are below the algorithm type.
Thanks in advance for your help! :-)
1.For each of the following program fragments, give a Big-Oh analysis of the running time in terms of N:
(a) // Fragment (a)
for ( int i = 0, Sum = 0; i < N; i++ )        // N operations
    for ( int j = 0; j < N; j++ )             // N operations
        Sum++;                                // Total: N^2 operations => O(N^2)
(b) // Fragment (b)
for ( int i = 0, Sum = 0; i < N * N; i++ )    // N^2 operations
    for ( int j = 0; j < N; j++ )             // N operations
        Sum++;                                // Total: N^3 operations => O(N^3)
(c) // Fragment (c)
for ( int i = 0, Sum = 0; i < N; i++ )        // N operations
    for ( int j = 0; j < i; j++ )             // up to N-1 operations
        Sum++;                                // Total: N(N-1) = N^2 - N operations => O(N^2)
(d) // Fragment (d)
for ( int i = 0, Sum = 0; i < N; i++ )        // N operations
    for ( int j = 0; j < N * N; j++ )         // N^2 operations
        for ( int k = 0; k < j; k++ )         // N^2 operations
            Sum++;                            // Total: N^5 operations => O(N^5)
2. An algorithm takes 0.5 milliseconds for input size 100. How long will it take for input size 500 if the running time is:
a. Linear
0.5 * 5 = 2.5 milliseconds
b. O( N log N)
O(N log N) - treat the first N as a constant, so O(N log N) = O(log N)
Input size 100 = (log 100) + 1 = 2 + 1 = 3 operations
Input size 500 = (log 500) + 1 = 2.7 + 1 = 3.7 ≈ 4 operations
Input size 100 runs in 0.5 milliseconds, so input size 500 takes 0.5 * (4/3) ≈ 0.67 milliseconds
c. Quadratic
Input size 100 in quadratic runs 100^2 operations = 10,000 operations
Input size 500 in quadratic runs 500^2 operations = 250,000 operations = 25 times as many
Input size of 100 runs in 0.5 milliseconds, so input size of 500 takes 25 * 0.5 = 12.5 milliseconds
d. Cubic
Input size 100 in cubic runs 100^3 operations = 1,000,000 operations
Input size 500 in cubic runs 500^3 operations = 125,000,000 operations = 125 times as many
Input size of 100 runs in 0.5 milliseconds, so input size of 500 takes 125 * 0.5 = 62.5 milliseconds
3. Find the Big-O for the following:
(a) f(x) = 2x^3 + x^2 log x // O(x^3)
(b) f(x) = x^4 - 34x^3 + x^2 - 20 // O(x^4)
(c) f(x) = x^3 – 1/log(x) // O(x^3)
4. Order the following functions by growth rate: (1 is slowest growth rate; 11 is fastest growth rate)
__6_ (a) N
__5_ (b) √N
__7_ (c) N^1.5
__9_ (d) N^2
__4_ (e) N log N
__2_ (f) 2/N
_11_ (g) 2^N
__3_ (h) 37
_10_ (i) N^3
__1_ (j) 1/ N^2
__8_ (k) N^2 /log N
* My logic in putting (j) and (f) as the slowest is that as N grows, 1/N^2 and 2/N decrease, so their growth rates are negative and therefore slower than the rest which have positive growth rates (or a 0 growth rate in the case of 37 (h)). Is that correct?
I looked at your questions 1 to 3 and it looks alright.
Follow these rules and check for yourself:
1) Multiplicative constants can be omitted.
Example: 50n^2 simplifies to n^2.
2) n^a dominates n^b if a > b.
Example: n^3 dominates n^2, so n^3 + n^2 + n simplifies to n^3.
3) Any exponential dominates any polynomial.
Example: 3^n dominates n^5.
Example: 2^n dominates n^2 + 5n + 100.
4) Any polynomial dominates any logarithm.
Example: n dominates (log n)^3.
As for question 4, use the below as a guide (from least to greatest):
log2 n < n < n log2 n < n^2 < n^3 < 2^n
The answer for (b) of the time calculation is wrong. You cannot treat one of the n's as a constant: by that logic n log n would become 1 log 1, and log 1 is 0.
The right comparison is 100 log 100 operations versus 500 log 500 operations.
Coming to the least-to-greatest ordering:
(b) is 4 and (a) is 5. (c), (e), (k) compete for positions 6, 7 and 8.
The positions 1, 2, 3 you gave are correct, and so are 9, 10, 11.
I will check the analysis of 6, 7, 8 and let you know.
If you need any clarification about my answer, you can comment on it.
#op Can you please tell me why you considered O(n lg n) = O(lg n)? As far as I understand, your analysis for Q2 part b is actually the analysis of an O(lg n) algorithm; to analyze n lg n algorithms you need to factor in the n on the left.
(a) Correct
(b) Correct
(c) Correctish. 0 + 1 + 2 + ... + (n - 1) = n(n - 1) / 2 = 0.5n^2 - 0.5n = O(n^2)
(d) Correct (there is a 1/2 in there like for (c), but the complexity is still O(N^5))
a. Correct
b. Let K be the duration of one step (taking logs base 2).
K * (100 log 100) = 0.5, so K ≈ 7.5 * 10^-4
K * (500 log 500) = 7.5 * 10^-4 * 500 log 500 ≈ 3.3737 ms
Alternatively, (500 log 500) / (100 log 100) ≈ 6.7474.
When n = 500 it will be about 6.7474 times slower: 6.7474 * 0.5 ms ≈ 3.3737 ms (see the quick check after this list).
c. Correct
d. Correct
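A two-line check of part (b) above (my own sketch, assuming log base 2 as in that calculation):

import math

ratio = (500 * math.log2(500)) / (100 * math.log2(100))
print(ratio, 0.5 * ratio)   # ~6.7474, ~3.3737 milliseconds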
(a) Correct
(b) Correct
(c) Correct
__5_ (a) N
__4_ (b) √N
__7_ (c) N^1.5
__9_ (d) N^2
__6_ (e) N log N
__2_ (f) 2/N
_11_ (g) 2^N
__3_ (h) 37
_10_ (i) N^3
__1_ (j) 1 / N^2
__8_ (k) N^2 /log N
I agree with the positioning of (f) and (j). However, you should be aware that they don't occur out there 'in the wild' because every algorithm has at least one step, and therefore cannot beat O(1). See Are there any O(1/n) algorithms? for a more detailed explanation.

O(n log log n) time complexity

I have a short program here:
Given any n:
i = 0;
while (i < n) {
    k = 2;
    while (k < n) {
        sum += a[j] * b[k];
        k = k * k;
    }
    i++;
}
The asymptotic running time of this is O(n log log n). Why is this the case? I get that the entire program will run at least n times, but I'm not sure how to find the log log n part. The inner loop depends on k * k, so it's obviously going to run fewer than n times, and it would just be n log n if the update were k * 2 each time. But how would you figure out the answer to be log log n?
For a mathematical proof, the inner loop can be written as the recurrence
T(n) = T(sqrt(n)) + 1.
W.l.o.g. assume 2^(2^(t-1)) <= n <= 2^(2^t). We know 2^(2^t) = 2^(2^(t-1)) * 2^(2^(t-1)), so
T(2^(2^t)) = T(2^(2^(t-1))) + 1 = T(2^(2^(t-2))) + 2 = ... = T(2^(2^0)) + t
and therefore
T(2^(2^(t-1))) <= T(n) <= T(2^(2^t)) = T(2^(2^0)) + log log(2^(2^t)) = O(1) + log log n.
So O(1) + log log n - 1 <= T(n) <= O(1) + log log n, i.e. T(n) = Theta(log log n),
and the total time is O(n log log n).
Why is the inner loop T(n) = T(sqrt(n)) + 1?
First see when the inner loop breaks: when k > n. Just before that, k was at least sqrt(n), and one level before that it was at most sqrt(n), so the running time satisfies T(sqrt(n)) + 1 <= T(n) <= T(sqrt(n)) + 2.
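A small numerical illustration of the recurrence (my own sketch, not from the answer): repeatedly taking square roots reaches a constant after about log log n levels.

import math

def levels(n):
    # how many times can we take sqrt(n) before it drops below 2?
    count = 0
    while n >= 2:
        n = math.sqrt(n)
        count += 1
    return count

for t in range(3, 8):
    n = 2.0 ** (2 ** t)
    print(n, levels(n))   # levels(n) = t + 1 = log2(log2(n)) + 1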
The time complexity of a loop is O(log log n) if the loop variable is squared (or square-rooted) on each iteration. If the loop variable is divided or multiplied by a constant factor, the complexity is O(log n).
E.g., in your case the values of k are as follows (the number in parentheses denotes how many times the loop has executed):
2 (0), 2^2 (1), 2^4 (2), 2^8 (3), 2^16 (4), 2^32 (5), 2^64 (6), ... until n is reached.
The exponent doubles each time, so the number of times the loop executes is O(log log n).
For the sake of illustration, assume n is 2^64. Now log(2^64) = 64 and log 64 = log(2^6) = 6. Hence the inner loop runs 6 times when n is 2^64.
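The mirror-image check for the loop as written (again my own sketch), squaring k upward and counting iterations:

def squarings(n):
    # k = 2, 2^2, 2^4, ...: square k until it reaches n
    count = 0
    k = 2
    while k < n:
        k = k * k
        count += 1
    return count

for t in range(3, 8):
    print(t, squarings(2 ** (2 ** t)))   # prints t twice: the loop runs log2(log2(n)) times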
I think if the code were like this, it would be n * log n:
i = 0;
while (i < n) {
    k = 2;
    while (k < n) {
        sum += a[j] * b[k];
        k *= c;   // c is a constant bigger than 1 and less than k
    }
    i++;
}
Okay, so let's break this down first.
Given any n:
i = 0;
while (i < n) {
    k = 2;
    while (k < n) {
        sum += a[j] * b[k];
        k = k * k;
    }
    i++;
}
while (i < n) will run n times (n + 1 condition checks, but we'll round that off to n).
Now here comes the fun part: k < n will not run n times; instead it runs log log n times, because rather than incrementing k by 1 on each pass we square it, and that means the loop takes only log log n time. You'll understand this when you study design and analysis of algorithms.
Now we combine all the time complexities and get n * log log n. I hope you get it now.
