Complexity when loop runs log times - algorithm

If we're finding the number of factors of a number, we can use the following efficient loop:
for (i = 1; i <= sqrt(n); i++), where n is the number whose factors are to be found. This loop has a complexity of O(sqrt(n)).
What would be the time complexity of the below code snippet? (Assume that log(x) returns the log value in base 2.) Is it O(n^2) or O(n log n)? (I assume that log n is the complexity when the loop variable is divided by two each iteration, i.e. i /= 2.)
void fun()
{
    int i, j;
    for (i = 1; i <= n; i++)
        for (j = 1; j <= log(i); j++)
            printf("hello world");
}

The actual number of "Hello world" prints in your code is
log(1) + log(2) + ... + log(n) = log(n!)
You can then use the Srinivasa Ramanujan approximation of log(n!), whose leading term is n*log(n), to get the actual complexity of the whole code, which is O(n log n).

The inner loop calls printf approximately log(i) times, for i in range [1..n]. The total number of calls is approximately
log(1) + log(2) + log(3) + ... + log(n) = log(n!)
Now, the Stirling asymptotic formula, log(n!) = n*log(n) - O(n), gives you the solution: the total is Θ(n log n).
For the base 2 logarithm, the exact count is given by
0 + 1 + 1 + 2 + 2 + 2 + 2 + 3 + 3 + 3 + 3 + 3 + 3 + 3 + 3 + ... + floor(Lg(n))
or
1*0 + 2*1 + 4*2 + 8*3 + ... + k*floor(Lg(n))
For convenience, assume that n is of the form n=2^m-1, so that the last run is complete (and k=2^(m-1)).
Now take the sum of x^k for k from 0 to m-1, which equals (x^m - 1)/(x - 1), and differentiate with respect to x to get the sum of k*x^(k-1); multiply by x and evaluate at x = 2 to get the sum of k*2^k:
s = m*2^m - 2*2^m + 2 = (n+1)*Lg(n+1) - 2n
For other n, you need to add a correction term for the last partial run. With m = floor(Lg(n+1)), the complete blocks still contribute s = m*2^m - 2*2^m + 2, and the partial run adds
t = m*(n + 1 - 2^m)
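A quick, minimal C sketch (my own addition, not part of the original answer) to sanity-check the closed form s + t against a brute-force count of the prints:

#include <stdio.h>

/* floor(log2(x)) using integer arithmetic, to avoid floating-point rounding */
static long long floor_log2(long long x)
{
    long long k = 0;
    while (x > 1) {
        x >>= 1;
        k++;
    }
    return k;
}

int main(void)
{
    for (long long n = 1; n <= 1000; n++) {
        /* brute force: the inner loop body runs floor(log2(i)) times for each i */
        long long brute = 0;
        for (long long i = 1; i <= n; i++)
            brute += floor_log2(i);

        /* closed form: complete blocks plus the last partial run */
        long long m = floor_log2(n + 1);
        long long p = 1LL << m;            /* 2^m */
        long long s = m * p - 2 * p + 2;
        long long t = m * (n + 1 - p);

        if (brute != s + t)
            printf("mismatch at n = %lld\n", n);
    }
    printf("checked n = 1..1000\n");
    return 0;
}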

An upper bound of O(n*Log(n)) can be proven without any math.
void fun()
{
    int i, j;
    for (i = 1; i <= n; i++)
        for (j = 1; j <= log(n); j++) // << notice I changed "i" to "n"
            printf("hello world");
}
The above function runs the outer loop N times, and the inner loop runs log(N) times on each pass.
Hence, the function executes printf exactly N*log(N) times.
Since this function's total count
(log(n) + log(n) + ... + log(n)) // n times
is larger than the OP's version
(log(1) + log(2) + ... + log(n))
it is an upper bound on the original version, which is therefore O(n log(n)).
Also note that
(log(n) + log(n) + ... + log(n)) // n times
= log(n^n)
= n*log(n)
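As a side note (not in the original comments), the same sum also gives a matching lower bound: the last n/2 terms are each at least log(n/2), so
log(1) + log(2) + ... + log(n) >= (n/2)*log(n/2) = (n/2)*(log(n) - 1)
which is Θ(n log n), so the upper bound is in fact tight.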

j's bound depends on i, so unroll the dependency and analyze in terms of i only:
if i=1 ----> inner loop executes log(1) times
if i=2 ----> inner loop executes log(2) times
if i=3 ----> inner loop executes log(3) times
.
.
if i=n ----> inner loop executes log(n) times.
combine them ==> log(1) + log(2) + ... + log(n) = log(1*2*3*...*n) = log(n!) ≈ n log(n)


Computing complexity and Big o of an algorithm
My analysis: T(n) = 5n log n + 3 log n + 2 (logs to base 2), so Big O = O(n log n).
for (int x = 0, i = 1; i <= N; i *= 2)
{
    for (int j = 1; j <= i; j++)
    {
        x++;
    }
}
The expected Big O was linear, whereas mine is logarithmic.
Your Big-Oh analysis is not correct. While it is true that the outer loop is executed log n times, the inner loop is linear in i at each iteration.
If you count the total number of iterations of the inner loop, you will see that the whole thing is linear:
The inner loop will do 1 + 2 + 4 + 8 + 16 + ... + (the last power of 2 <= N) iterations. This sum will be between N and 2*N, which makes the whole loop linear.
Let me explain why your analysis is wrong.
It is clear that the inner loop will execute 1 + 2 + 4 + ... + 2^k times, where k is the biggest integer satisfying 2^k <= n. This implies that the upper bound for k is log2(n).
Without loss of generality we can take that upper bound and assume k = log2(n) is an integer; the complexity then equals 1 + 2 + 4 + ... + 2^(log2(n)), which is a geometric series equal to 2^(log2(n) + 1) - 1 = 2n - 1.
Therefore in O notation it is O(n)
First, you should notice that your analysis is not logarithmic! N log N is not logarithmic.
Also, the time complexity is T(N) = sum_{j = 0}^{log(N)} 2^j (as the value of i is doubled each time). Hence, T(N) = 2^(log(N) + 1) - 1 = 2N - 1 = Θ(N).
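As a rough check (not from the original answers), this minimal C sketch counts x for powers of two and compares it with 2N - 1:

#include <stdio.h>

int main(void)
{
    /* Count the inner-loop iterations of the doubling loop and compare
       with 2*N - 1 (exact when N is a power of two). */
    for (long long N = 1; N <= (1LL << 20); N *= 2) {
        long long x = 0;
        for (long long i = 1; i <= N; i *= 2)
            for (long long j = 1; j <= i; j++)
                x++;
        printf("N=%lld  count=%lld  2N-1=%lld\n", N, x, 2 * N - 1);
    }
    return 0;
}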

What is the time complexity of this pseudocode?

This is pseudocode. I tried to calculate the time complexity of this function as this answer said. It should be like:
n + n/3 + n/9 + ...
Maybe the time complexity is something like O(n log(n)), I guess? Or should the log(n) be taken in base 3? Someone said the time complexity is O(n), which is totally unacceptable to me.
j = n
while j >= 1 {
    for i = 1 to j {
        x += 1
    }
    j /= 3
}
The algorithm will run in:
n + n/3 + n/9 + ... ≈ 3/2 * n = O(n)
since 3/2 is a constant: the k-th pass of the outer loop runs the inner loop n/3^k times, and the geometric series sums to 3n/2.
Please notice the crucial difference from the linked question, where the outer loop runs n times and that is fixed.
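A minimal C sketch (my own, using the integer division from the pseudocode) that counts x and compares it with the 3n/2 bound:

#include <stdio.h>

int main(void)
{
    /* j = n; while (j >= 1) { inner runs j times; j /= 3; } */
    for (long long n = 1; n <= 1000000; n *= 10) {
        long long x = 0;
        for (long long j = n; j >= 1; j /= 3)
            for (long long i = 1; i <= j; i++)
                x++;
        printf("n=%lld  count=%lld  3n/2=%lld\n", n, x, 3 * n / 2);
    }
    return 0;
}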

Big-O complexity of algorithms

I'm trying to figure out the exact big-O value of algorithms. I'll provide an example:
for (int i = 0; i < n; i++)       // 2N + 2
{
    for (int x = i; x < n; x++)   // N * 2N + 2 ?
    {
        sum += i;                 // N
    }
}                                 // Extra N?
So if I break some of this down, int i = 0 would be O(1), i < n is N+1, i++ is N, multiply the inner loop by N:
2N + 2 + N(1 + N + 1 + N) = 2N^2 + 2N + 2N + 2 = 2N^2 + 4N + 2
Add an N for the loop termination and the sum constant, = 3N^2 + 5N + 2...
Basically, I'm not 100% sure how to calculate the exact O notation for an algorithm, my guess is O(3N^2 + 5N + 2).
What do you mean by exact? Big O is an asymptotic upper bound, so it's by definition not exact.
Thinking about i=0 as O(1) and i<n as O(N+1) is not correct. Instead, think of the outer loop as doing something n times, and for every iteration of the outer loop, the inner loop is executed at most n times. The calculation inside the loop takes constant time (the calculation is not getting more complex as n gets bigger). So you end up with O(n*n*1) = O(n^2), quadratic complexity.
When asking about "exact", you're running the inner loop from 0 to n, then from 1 to n, then from 2 to n, ... , from (n-1) to n, each time doing a constant time operation. So you do n + (n-1) + (n-2) + ... + 1 = n*(n+1)/2 = n^2/2 + n/2 iterations. To get from the exact number of calculations to big O notation, omit constants and lower-order terms, and you'll end up with O(n^2) (the 1/2 and +n/2 are omitted).
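For a concrete check (a sketch I added, not part of the answer), this counts the inner-loop iterations of the code above and compares them with n*(n+1)/2:

#include <stdio.h>

int main(void)
{
    for (int n = 1; n <= 1000; n *= 10) {
        long long sum = 0, iterations = 0;
        for (int i = 0; i < n; i++)
            for (int x = i; x < n; x++) {
                sum += i;        /* the constant-time body from the question */
                iterations++;    /* count how often it runs */
            }
        printf("n=%d  iterations=%lld  n(n+1)/2=%lld\n",
               n, iterations, (long long)n * (n + 1) / 2);
    }
    return 0;
}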
Big O means worst-case complexity.
Here the worst case occurs when both loops run n times each, i.e. n*n iterations.
So, the complexity is O(n^2).

Analyzing worst case order-of-growth

I'm trying to analyze the worst case order of growth as a function of N for this algorithm:
for (int i = N*N; i > 1; i = i/2)
    for (int j = 0; j < i; j++) {
        total++;
    }
What I'm trying to do is work out how many times the line total++ will run by looking at the inner and outer loops. The inner loop should run (N^2)/2 times. The outer loop I don't know. Could anyone point me in the right direction?
The statement total++; shall run the following number of times:
= N^2 + N^2 / 2 + N^2 / 4 ... N^2 / 2^k
= N^2 * ( 1 + 1/2 + 1/4 + ... 1/2^k )
The number of terms in the above expression = log(N^2) = 2log(N).
Hence sum of series = N^2 * (1 - 1/2^(2logN)) / (1/2)
= N^2 * (1 - 1/N^2) / (1/2) = 2 * (N^2 - 1).
Hence according to me the order of complexity = O(N^2)
The outer loop runs with a complexity of O(log(N)), since its counter halves on every iteration, just like in a binary search.
The outer loop runs exactly 2*log2(N) + 1 times (with the float-to-int conversion dropping decimal places); the value of i decreases like N^2, N^2/2, N^2/4, ..., 1.
So the total number of times total++ runs is
Sum (x from 0 to int(2*log2(N) + 1)) of N^2 / 2^x
For this question, since the inner loop depends on a variable that is changed by the outer loop, you can't solve it simply by multiplying the counts of the inner and outer loops. You have to write out the values, figure out the series, and then solve the series to get the answer.
In your question, total++ will run
n^2 + n^2/2 + n^2/2^2 + n^2/2^3 + ...
Then, taking n^2 common, we get
n^2 [ 1 + 1/2 + 1/2^2 + 1/2^3 + ... ]
Solve this series (the bracket converges to 2) to get the answer, which is again O(n^2).
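A small C sketch (my addition, not from the answers) that counts total++ and compares it with the 2*N^2 limit of the series:

#include <stdio.h>

int main(void)
{
    for (long long N = 2; N <= 1024; N *= 2) {
        long long total = 0;
        for (long long i = N * N; i > 1; i = i / 2)
            for (long long j = 0; j < i; j++)
                total++;
        printf("N=%lld  total=%lld  2*N^2=%lld\n", N, total, 2 * N * N);
    }
    return 0;
}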

What is the time complexity?

What is the time complexity for the following function?
for (int i = 0; i < a.size; i++) {
    for (int j = i; j < a.size; j++) {
        // ...
    }
}
I think it is less than O(n^2) because we aren't iterating over all of the elements in the second for loop. I believe the time complexity comes out to be something like this:
n[ (n) + (n-1) + (n-2) + ... + (n-n) ]
But when I solve this formula it comes out to be
n^2 - n + n^2 - 2n + n^2 - 3n + ... + n^2 - n^2
Which doesn't seem correct at all. Can somebody tell me exactly how to solve this problem, and where I am wrong.
That is O(n^2). If you consider the iteration where i = a.size() - 1, and you work your way backwards (i = a.size() - 2, i = a.size() - 3, etc.), you are looking at the following sum of numbers of iterations, where n = a.size.
1 + 2 + 3 + 4 + ... + n
The sum of this series is n(n+1)/2, which is O(n^2). Note that big-O notation ignores constants and takes the highest polynomial power when it is applied to a polynomial function.
It will run for:
1 + 2 + 3 + .. + n
which is 1/2 n(n+1), giving us O(n^2).
The Big-O notation will only keep the dominant term, neglecting constants too
Big-O is only useful for comparing algorithms on the same variation of a problem, using the same complexity-analysis standard, and only when the dominant terms are different.
If the dominant terms are the same, you need to compare Big-Theta or the exact time complexity, which will show the minor differences.
Example
A
for i = 1 .. n
    for j = i .. n
        ..
B
for i = 1 .. n
    for j = 1 .. n
        ..
We have
Time(A) = 1/2 n(n+1) ~ O(n^2)
Time(B) = n^2 ~ O(n^2)
O(A) = O(B)
T(A) < T(B)
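To make the comparison concrete, here is a small C sketch (not from the original answer) that counts the iterations of A and B:

#include <stdio.h>

int main(void)
{
    int n = 1000;
    long long a = 0, b = 0;

    /* A: for i = 1..n, for j = i..n */
    for (int i = 1; i <= n; i++)
        for (int j = i; j <= n; j++)
            a++;

    /* B: for i = 1..n, for j = 1..n */
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= n; j++)
            b++;

    printf("T(A)=%lld  (n(n+1)/2 = %lld)\n", a, (long long)n * (n + 1) / 2);
    printf("T(B)=%lld  (n^2 = %lld)\n", b, (long long)n * n);
    return 0;
}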
Analysis
To visualize how we got 1 + 2 + 3 + .. n:
for i = 1 .. n:
    print "(1 + "
    sum = 0
    for j = i .. n:
        sum++
    print sum ") + "
will print the following:
(1+n) + (1+(n-1)) + .. + (1+3) + (1+2) + (1+1)
n+1 + n + n-1 + .. + 4 + 3 + 2
2 + 3 + 4 + .. + n + n+1
1/2 n(n+1) + n
1/2 n^2 + 1/2 n + n
1/2 n^2 + 3/2 n
Yes, the number of iterations is strictly less than n^2, but it's still Θ(n^2). It will eventually be greater than n^k for any k<2, and it will eventually be less than n^k for any k>2.
(As a side note, computer scientists often say big-O when they really mean big-theta (Θ). It's technically correct to say that almost every algorithm you've seen has O(n!) running time; all reasonable algorithms have running times that grow no more quickly than n!. But it's not really useful to say that the complexity is O(n!) if it's also O(n log n), so by some kind of Gricean maxim we assume that when someone says an algorithm's complexity is O(f(x)), that f(x) is as small as possible.)
