What is the asymptotic growth rate of this function:

int i = 3;
while (i < n) {
    i *= 5;
}
I measured it:

when n = 3, i < n is executed 1 time
when n = 16, i < n is executed 2 times
when n = 80, i < n is executed 3 times

I need to find the right growth rate, but I'm stuck.
I believe the growth rate follows from:

3 * 5^x >= n
5^x >= n / 3

therefore

x * log 5 >= log n - log 3
x >= (log n - log 3) / log 5

You can require that 3 * 5^x must be >= n; that inequality is the basis for the first line of the derivation. Since x >= (log n - log 3) / log 5, the number of iterations x grows as O(log n).
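If you want to sanity-check the bound, here is a small Python sketch (my own addition, not from the original post) that counts the loop-body executions and compares them with ceil(log5(n / 3)):

import math

def count_iterations(n):
    # Reproduces the loop from the question: i starts at 3 and is
    # multiplied by 5 until it reaches n.
    i, count = 3, 0
    while i < n:
        i *= 5
        count += 1
    return count

for n in (16, 80, 400, 10**6):
    predicted = math.ceil(math.log(n / 3, 5))
    print(n, count_iterations(n), predicted)

Both columns grow together, confirming the O(log n) growth rate.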
A related question:
for (i = 1; i <= n; i++)
{
for (j = n; j >= i; j--)
}
I'm struggling with this algorithm. I can't even tell what the time complexity of this algorithm is. I checked it with an online tool, and it shows me only O(n).
First of all, the algorithm should be something like this:
for (i = 1; i <= n; i++)
    for (j = n; j >= i; j--)
        DoSomeWork(i, j); // <- payload, which is O(1)
To find out the time complexity, let's count how many times DoSomeWork will be executed:
i  :  j   : executed (times)
----------------------------
1  : 1..n : n
2  : 2..n : n - 1
3  : 3..n : n - 2
.. : ...  : ...
n  : n..n : 1
So far so good: DoSomeWork will be executed

n + (n - 1) + (n - 2) + ... + 2 + 1 = (n + 1) * n / 2

times; the time complexity for your case is

O((n + 1) * n / 2) = O(n * n + n) = O(n * n)
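Here is a quick empirical check of that count (my own sketch, with DoSomeWork reduced to a counter increment):

def count_calls(n):
    # Mirrors the two loops above and counts how often the payload runs.
    calls = 0
    for i in range(1, n + 1):
        for j in range(n, i - 1, -1):
            calls += 1
    return calls

for n in (5, 10, 100):
    print(n, count_calls(n), n * (n + 1) // 2)  # both columns match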
Nested loops do not necessarily have quadratic time complexity, e.g.
for (i = 1; i <= n; i++)
    for (j = n; j >= i; j /= 2) // note j /= 2, instead of j--
        DoSomeWork(i, j);

has O(n * log(n)) time complexity.
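This is again easy to check empirically (a quick sketch of mine; the counts only roughly track n * log2(n), since each inner loop makes about log2(n) + 1 passes):

import math

def count_calls_halving(n):
    calls = 0
    for i in range(1, n + 1):
        j = n
        while j >= i:
            calls += 1
            j //= 2  # note j //= 2, instead of j -= 1
    return calls

for n in (10, 100, 1000):
    print(n, count_calls_halving(n), round(n * math.log2(n)))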
Think about this a little. How many times do we enter the outer loop? It's n times, as you surely already know, since we have step 1, ..., step n, in total n steps.
So, we have
n * average(inner)
In the first step, the inner loop has n steps; then it has n - 1 steps, and so on; on the final step the inner loop has just 1 step.
So the inner loop has
n, n - 1, ..., 1
steps in the respective iterations.
Since addition is both commutative and associative, we can pair the first term with the last, the second with the second-to-last, and so on:
(n + 1), (n - 1 + 2), ...
Each pair sums to n + 1, so the terms average (n + 1) / 2.
Since we worry about the scenario when n -> infinity, adding 1 to n or dividing it by 2 is of little concern, so we roughly have n * n steps. Hence the complexity is O(n * n); don't listen to the tool that says otherwise, unless you have some extra information about your algorithm to share with us.
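For instance, with n = 5 the pairing gives:

5 + 4 + 3 + 2 + 1 = (5 + 1) + (4 + 2) + 3 = 15 = 5 * (5 + 1) / 2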
Why is the time-complexity of
function (n)
{
    // this loop executes n times
    for (i = 1; i <= n; i++)
        // this inner loop executes about n/i times, since j increases in steps of i
        for (j = 1; j <= n; j += i)
            print("*");
}
Its running time is n * n^(1/2) = n^(3/2),
so O(n^(3/2)).
Please explain with mathematical steps.
Thank you in advance.
The running time is bounded by O(n^(3/2)), but that's not tight!
Notice that the inner loop makes O(n/i) iterations for each i = 1, 2, ..., n; thus the time complexity is O(n * (1 + 1/2 + 1/3 + ... + 1/n)) = O(n log n).
This is because 1 + 1/2 + 1/3 + ... + 1/n = O(log n): it is the well-known harmonic series.
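A short Python sketch (my own addition) that counts the inner-loop iterations and compares the total with n * H_n, where H_n = 1 + 1/2 + ... + 1/n:

def count_prints(n):
    # Mirrors the loops from the question, counting the print("*") calls.
    count = 0
    for i in range(1, n + 1):
        j = 1
        while j <= n:
            count += 1
            j += i
    return count

for n in (10, 100, 1000):
    h_n = sum(1.0 / k for k in range(1, n + 1))
    print(n, count_prints(n), round(n * h_n))

The two columns track each other closely, confirming the O(n log n) bound.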
for (i = 1; i <= n; i++)
{
    for (j = 1; j <= i; j++)
    {
        for (k = 1; k <= n^5; k = 15 * k)
        {
            x = y + z;
        }
    }
}
It seems to be O(n^2 log n) to me, but when I analyzed the k loop, it doesn't seem to follow the log n pattern, which is confusing me.
It should be O(n^2 log n), because the inner loop body will be reached n * (n + 1) / 2 times, and each time it will loop log base 15 of n^5 = 5 * (log base 15 of n) times, since k grows exponentially with the number of iterations.
This results in about 5 * (n^2 + n) * (log base 15 of n) / 2 assignments to x, which is O(n^2 * log n).
Time complexity of your problem is n^2 * log_15(n^5), i.e. O(n^2 log n).

Explanation:

When we write O(log n), we usually mean the base-2 logarithm, log_2 n. That base 2 comes from the divide-by-2 steps in loop definitions such as binary search, not from the binary nature of computers. But in your case the multiplier is not 2 but 15, because of the k = 15 * k definition, so the base of the log function must be 15, not 2.
You can see the correlation between the two by replacing the k *= 15 line with k *= 2, and the

print(n * n * int(math.log(n**5, 15) + 1))

line with

print(n * n * int(math.log(n**5, 2) + 1))

in the Python code below. The results will continue to match.

Also, because the loop runs until k first exceeds n^5, the iteration count is floor(log_15(n^5)) + 1 rather than the bare logarithm, which is why the prediction rounds with int(... + 1):
Python Code:

import math

n = 100
i = 1
while i <= n:
    j = 1
    while j <= i:
        k = 1
        counter = 1
        while k <= n**5:
            x = 1 + 1
            k *= 15
            counter += 1
        #print(k)
        #print(counter)
        j += 1
    #print(j)
    i += 1
#print(i)

print("\nTime Complexity Prediction:")
print(n * n * int(math.log(n**5, 15) + 1))

print("\nReal World Result:")
print((i - 1) * (j - 1) * (counter - 1))
print("")
Example results of the program:
For n = 10:
Time Complexity Prediction:
500
Real World Result:
500
For n = 100:
Time Complexity Prediction:
90000
Real World Result:
90000
For n = 1000:
Time Complexity Prediction:
13000000
Real World Result:
13000000
For n = 3000:
Time Complexity Prediction:
135000000
Real World Result:
135000000
Actually, it is log base 15 of n.
Now look at the powers of 15: 15, 225, 3375, 50625, 759375, 11390625, ...
See how fast they grow. When you run the innermost loop, its effect is nearly negligible, because this sequence of powers of 15 passes the value of n after only a handful of iterations.
That's why there is no significant visible effect of the log(n) factor.
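You can see how slowly that factor grows with a tiny sketch of my own (it prints the exact innermost-loop iteration count, floor(log_15(n^5)) + 1):

import math
for n in (10, 100, 1000, 10**6):
    print(n, math.floor(math.log(n**5, 15)) + 1)

Even at n = 10^6 the innermost loop makes only 26 passes.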
In the book Programming Interviews Exposed it says that the complexity of the program below is O(N), but I don't understand how this is possible. Can someone explain why this is?
int var = 2;
for (int i = 0; i < N; i++) {
    for (int j = i + 1; j < N; j *= 2) {
        var += var;
    }
}
You need a bit of math to see that. The inner loop iterates Θ(1 + log [N/(i+1)]) times (the 1 + is necessary since for i >= N/2, [N/(i+1)] = 1 and the logarithm is 0, yet the loop iterates once). j takes the values (i+1)*2^k until it is at least as large as N, and
(i+1)*2^k >= N <=> 2^k >= N/(i+1) <=> k >= log_2 (N/(i+1))
using exact (real) division. So the update j *= 2 is called ceiling(log_2 (N/(i+1))) times and the condition is checked 1 + ceiling(log_2 (N/(i+1))) times. Thus we can write the total work as
∑_{i=0}^{N-1} (1 + log(N/(i+1))) = N + N*log N - ∑_{j=1}^{N} log j
                                 = N + N*log N - log N!
Now, Stirling's formula tells us
log N! = N*log N - N + O(log N)
so we find the total work done is indeed O(N).
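A numerical sanity check (my own sketch, not from the book or the answer): if the total work is O(N), the inner-loop body count divided by N should stay bounded as N grows.

def total_work(N):
    # Counts how many times var += var runs in the double loop above.
    count = 0
    for i in range(N):
        j = i + 1
        while j < N:
            count += 1
            j *= 2
    return count

for N in (10, 100, 1000, 10000):
    t = total_work(N)
    print(N, t, t / N)

The ratio creeps toward 2 and never blows up, exactly as the analysis predicts.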
The outer loop runs n times. Now it all depends on the inner loop.
The inner loop is the tricky one.
Let's follow it:

i = 0          --> j = 1         ---> ~log(n) iterations
...
i = (n/2) - 1  --> j = n/2       ---> 1 iteration
i = (n/2)      --> j = (n/2) + 1 ---> 1 iteration

Grouping the values of i by how many iterations the inner loop makes:

i > (n/2)              ---> 1 iteration
(n/2) - 1 >= i > (n/4) ---> 2 iterations
(n/4) >= i > (n/8)     ---> 3 iterations
(n/8) >= i > (n/16)    ---> 4 iterations
(n/16) >= i > (n/32)   ---> 5 iterations

Summing the work over all these groups:
(n/2)*1 + (n/4)*2 + (n/8)*3 + (n/16)*4 + ... + [n/(2^i)]*i
= n * ∑_{i=1}^{log n} (i / 2^i) <= n * ∑_{i=1}^{∞} (i / 2^i) = 2n
--> O(n)
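That last series bound is easy to check numerically (my own one-liner):

print(sum(i / 2**i for i in range(1, 60)))  # converges to 2 from below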
@Daniel Fischer's answer is correct.
I have a short program here:
Given any n:
i = 0;
while (i < n) {
    k = 2;
    while (k < n) {
        sum += a[j] * b[k];
        k = k * k;
    }
    i++;
}
The asymptotic running time of this is O(n log log n). Why is this the case? I get that the entire program will run at least n times. But I'm not sure how to find log log n. The inner loop depends on k * k, so it's obviously going to run fewer than n times; it would just be n log n if the update were k * 2 each time. But how would you figure out that the answer is log log n?
For a mathematical proof, the inner loop can be written as:

T(n) = T(sqrt(n)) + 1

W.l.o.g. assume 2^(2^(t-1)) <= n <= 2^(2^t). We know 2^(2^t) = 2^(2^(t-1)) * 2^(2^(t-1)), so unrolling the recurrence gives:

T(2^(2^t)) = T(2^(2^(t-1))) + 1 = T(2^(2^(t-2))) + 2 = ... = T(2^(2^0)) + t

Hence

T(2^(2^(t-1))) <= T(n) <= T(2^(2^t)) = T(2^(2^0)) + log log 2^(2^t) = O(1) + log log n

==> O(1) + (log log n) - 1 <= T(n) <= O(1) + log log n => T(n) = Theta(log log n),

and the total time is O(n log log n).
Why is the inner loop T(n) = T(sqrt(n)) + 1?
First, see when the inner loop breaks: when k > n. That means one squaring earlier k was at least sqrt(n), and two squarings earlier it was at most sqrt(n), so the running time satisfies T(sqrt(n)) + 2 >= T(n) >= T(sqrt(n)) + 1.
The time complexity of a loop is O(log log n) if the loop variable is squared (or square-rooted) on each iteration. If the loop variable is instead divided or multiplied by a constant factor, the complexity is O(log n).
E.g., in your case the value of k evolves as follows, where the number in parentheses denotes how many times the loop has executed:
2 (0), 2^2 (1), 2^4 (2), 2^8 (3), 2^16 (4), 2^32 (5), 2^64 (6), ... until n is reached.
The number of iterations here is O(log log n).
For the sake of illustration, assume that n is 2^64. Now log(2^64) = 64 and log 64 = log(2^6) = 6. Hence the inner loop ran 6 times when n is 2^64.
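A quick empirical check (my own sketch) of the inner loop's iteration count against log2(log2(n)):

import math

def inner_iterations(n):
    # Mirrors the inner loop: k starts at 2 and is squared until it reaches n.
    k, count = 2, 0
    while k < n:
        count += 1
        k = k * k
    return count

for n in (2**4, 2**8, 2**16, 2**32, 2**64):
    print(n, inner_iterations(n), int(math.log2(math.log2(n))))

For each n the two counts agree exactly, e.g. 6 iterations for n = 2^64.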
I think if the code were like this, it would be n * log n:

i = 0;
while (i < n) {
    k = 2;
    while (k < n) {
        sum += a[j] * b[k];
        k *= c;  // c is a constant bigger than 1
    }
    i++;
}
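And indeed, with a constant multiplier the inner loop tracks log base c of n (a quick sketch of mine):

import math

def inner_count(n, c):
    k, count = 2, 0
    while k < n:
        count += 1
        k *= c
    return count

for n in (10**3, 10**6):
    for c in (2, 3, 10):
        print(n, c, inner_count(n, c), round(math.log(n, c)))

so the whole thing becomes O(n log n) for any fixed c > 1.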
Okay, so let's break this down first. Given any n:

i = 0;
while (i < n) {
    k = 2;
    while (k < n) {
        sum += a[j] * b[k];
        k = k * k;
    }
    i++;
}
while (i < n) will run n + 1 times (the condition is checked n + 1 times; the body runs n times), but we'll round that off to n.
Now here comes the fun part: k < n will not run n times; instead it will run about log log n times, because instead of incrementing k by 1, each pass squares it, so the loop needs only log log n steps to push k past n. You'll see this again when you study design and analysis of algorithms.
Now we combine all of this and we get n * log log n time. I hope you get it now.