I have a very basic question here
for(int i = 1; i < N/2; i++) {
}
My initial understanding was that the time complexity of the above loop would be O(log n), but after reading through some articles it is pretty evident that it is simply O(n), and that an O(log n) loop would look like for (i = 1; i <= n; i *= 2).
Now my question is how does O(log log N) loop look like?
O(log n) loop:
for (i = 1; i <= n; i *= 2)
So you double i at each step. Basically:
Increment => O(n)
Doubling => O(log n)
??? => O(log log n)
What comes after multiplication? Exponentiation. So this would be O(log log n):
for (i = 2; i <= n; i *= i) // we are squaring i at each step
Note: your loop is O(n), not O(log n). Keeping in line with the increment / double / exponentiate idea above, you can rewrite your loop using incrementation:
for(int i = 1; i < n; i += 2)
Even if you increment by more, it's still incrementation, and still O(n).
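To see the difference concretely, here is a minimal C sketch (mine, not from the question) that counts how many times each of the three loop shapes runs for the same n:

#include <stdio.h>

/* Count how many times each loop shape runs for the same n,
   illustrating O(n), O(log n) and O(log log n) growth. */
int main(void) {
    long long n = 1000000;
    long long inc = 0, dbl = 0, sqr = 0;

    for (long long i = 1; i < n; i++)      /* increment: ~n iterations */
        inc++;
    for (long long i = 1; i <= n; i *= 2)  /* double: ~log2(n) iterations */
        dbl++;
    for (long long i = 2; i <= n; i *= i)  /* square: ~log2(log2(n)) iterations */
        sqr++;

    printf("increment: %lld, double: %lld, square: %lld\n", inc, dbl, sqr);
    return 0;
}

For n = 1,000,000 this prints roughly 999999, 20 and 5 - matching n, log2(n) and log2(log2(n)).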
That loop doesn't look like O(log N); it is what it is, an O(N/2) loop. To quote the definition:
A function f(x) is said to be O(g(x)) iff (if and only if) there exists a positive real number c and a real number x0 such that |f(x)| <= c|g(x)| for all x >= x0. By that definition you could also call that loop O(N), O(N^2) or O(N^3), since you can find the required parameters easily. But you cannot find parameters that make O(log N) fit.
As for O(log log N), I suppose you could rewrite the interpolation search implementation given at https://en.wikipedia.org/wiki/Interpolation_search to use a for loop. It is O(log log N) on average!
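If it helps, here is a minimal sketch of that idea (my own, loosely following the Wikipedia page, and using a while loop rather than a for loop); it assumes a sorted int array:

#include <stdio.h>

/* Interpolation search sketch: returns the index of key in the sorted
   array a[0..n-1], or -1 if absent. Expected O(log log n) probes on
   uniformly distributed data, O(n) in the worst case. */
int interpolation_search(const int *a, int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi && key >= a[lo] && key <= a[hi]) {
        if (a[hi] == a[lo])                /* avoid division by zero */
            return (a[lo] == key) ? lo : -1;
        /* estimate the position of key by linear interpolation */
        int pos = lo + (int)((long long)(key - a[lo]) * (hi - lo)
                             / (a[hi] - a[lo]));
        if (a[pos] == key)
            return pos;
        else if (a[pos] < key)
            lo = pos + 1;
        else
            hi = pos - 1;
    }
    return -1;
}

int main(void) {
    int a[] = {2, 4, 8, 16, 32, 64, 128, 256};
    printf("%d\n", interpolation_search(a, 8, 64)); /* prints 5 */
    return 0;
}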
Your cost is not O(log N); your cost is O(N*log N).
Read the link and you will see an example function. No matter what constants appear in front of the terms, the cost is dominated by the largest term of the polynomial.
In your case it is
1/2 * n * log(n), and since the 1/2 makes no difference, your complexity is O(N*log N).
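This answer seems to read the loop as doing O(log n) work per iteration (for example, a second inner loop that doubles its counter). Under that hypothetical reading, the count comes out to about (1/2)*n*log2(n), as the following sketch (mine, not from the answer) illustrates:

#include <stdio.h>
#include <math.h>

int main(void) {
    long n = 1024;
    long count = 0;

    /* Outer loop runs about n/2 times; the hypothetical inner loop doubles j,
       so it runs about log2(n) times. Total work ~ (1/2) * n * log2(n). */
    for (long i = 1; i < n / 2; i++)
        for (long j = 1; j <= n; j *= 2)
            count++;

    printf("count = %ld, (1/2)*n*log2(n) = %.0f\n",
           count, 0.5 * n * log2((double)n));
    return 0;
}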
Related
I'm struggling to understand the concepts of calculating time complexity. I have this code in C; why is the time complexity O(n) and not O(n log n)?
The first loop runs for at most 3 iterations, the outer for loop has logarithmic complexity, and each of its iterations does linear work.
int f2 (int n)
{
    int j, k, cnt=0;
    do
    {
        ++n;             /* at most 3 iterations: round n up to a multiple of 3 */
    } while (n%3);
    for (j=n; j>0; j/=3) /* j = n, n/3, n/9, ... : about log3(n) iterations */
    {
        k=j;
        while (k>0)      /* about j/3 iterations for the current j */
        {
            cnt++;
            k-=3;
        }
    }
    return cnt;
}
Why do we neglect the log factor?
For each outer value j the inner loop does at most j steps, so the total is bounded by
T = n + n/3 + n/9 + ... + 1
3*T - T = 3*n - 1
T = 1.5*n - 0.5
so it's O(n).
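As an illustrative sanity check (not part of the original answer), you can drive f2 from the question and watch the ratio f2(n)/n settle to a constant, which is the empirical signature of linear growth:

#include <stdio.h>

int f2(int n);   /* the f2 from the question above */

int main(void) {
    /* If f2 grows linearly, f2(n)/n approaches a constant. */
    for (int n = 1000; n <= 1000000; n *= 10)
        printf("n = %8d   f2(n) = %8d   f2(n)/n = %.3f\n",
               n, f2(n), (double)f2(n) / n);
    return 0;
}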
It is a common beginner's mistake to reason as follows:
the outer loop follows a decreasing geometric progression, so it iterates O(log n) times.
the inner loop follows an arithmetic progression, so its complexity is O(n).
hence the global complexity is O(n+n+n+...)=O(n log n).
The mistake is due to the lack of rigor in the notation. The correct approach is
the outer loop follows a decreasing geometric progression, so it iterates O(log n) times.
the inner loop follows an arithmetic progression, so its complexity is O(j) (not O(n) !), where j decreases geometrically.
hence the global complexity is O(n+n/3+n/9+...)=O(n).
The time complexity of your program is O(n).
Because the total number of calculations is log(n)/log(3) + 2*(n/3 + n/9 + n/27 + ...), which is about log3(n) + n, and that is less than 2*n for large n.
I have pseudo code for a function (below). I understand that if each of i, j and k simply went from 1 to n in steps of 1, the worst case run time would be O(n^3). I am struggling to understand the impact of n/2 though - if any - on the run time. Any guidance would be great.
for (i = n/2; i < n; i++)
    for (j = 1; j < n/2; j++)
        for (k = 1; k < n; k += k*2)
            ;   /* execute a statement */
Your understanding is not correct.
k is increased by k*2, which leads to logarithmic time; the complexity is actually O(n^2 * log n).
O(n/2) = O(n), therefore the n/2 does not have any impact on asymptotic growth.
If you are not sure, the general approach is to count as precisely as possible and then remove the constants.
The for i loop will execute n/2 times, the for j loop also n/2 times, and the k loop will execute log n times.
n/2 * n/2 * log n = (n^2/4) * log n. You can remove the constants, so it is O(n^2 * log n).
The worst case time complexity is not O(N^3).
Check the innermost for loop: K increases by K * 2.
That means the innermost for loop will take O(lg N) time.
The other two outer loops would take O(N) time each, and the N/2 would not have any effect on the asymptotic growth of the run time.
So, the overall time complexity would be O(N^2 * lgN)
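As a rough check (my own sketch, not from either answer), you can count the statement executions directly and compare them with (n^2/4)*log2(n); only the growth rate is expected to match, since k actually triples here, so the log base differs:

#include <stdio.h>
#include <math.h>

int main(void) {
    for (long n = 100; n <= 10000; n *= 10) {
        long count = 0;
        for (long i = n / 2; i < n; i++)
            for (long j = 1; j < n / 2; j++)
                for (long k = 1; k < n; k += k * 2)   /* k = 1, 3, 9, ... */
                    count++;
        printf("n = %6ld  count = %12ld  (n^2/4)*log2(n) = %12.0f\n",
               n, count, (n / 2.0) * (n / 2.0) * log2((double)n));
    }
    return 0;
}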
I know the big-O complexity of this algorithm is O(n^2), but I cannot understand why.
int sum = 0;
int i = 1, j = n * n;
while (i++ < j--)
    sum++;
Even though we set j = n * n at the beginning, we increment i and decrement j during each iteration, so shouldn't the resulting number of iterations be a lot less than n*n?
During every iteration you increment i and decrement j, which is equivalent to just incrementing i by 2. Therefore, the total number of iterations is n^2 / 2, and that is still O(n^2).
big-O complexity ignores coefficients. For example: O(n), O(2n), and O(1000n) are all the same O(n) running time. Likewise, O(n^2) and O(0.5n^2) are both O(n^2) running time.
In your situation, you're essentially incrementing your loop counter by 2 each time through your loop (since j-- has the same effect as i++). So your running time is O(0.5n^2), but that's the same as O(n^2) when you remove the coefficient.
You will have exactly n*n/2 loop iterations (or (n*n-1)/2 if n is odd).
In the big O notation we have O((n*n-1)/2) = O(n*n/2) = O(n*n) because constant factors "don't count".
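If you want to see this concretely, here is a tiny sketch (mine, not from the answers) that counts the iterations and prints n*n/2 next to them:

#include <stdio.h>

int main(void) {
    for (long n = 10; n <= 1000; n *= 10) {
        long i = 1, j = n * n, iterations = 0;
        while (i++ < j--)            /* the loop from the question */
            iterations++;
        printf("n = %5ld  iterations = %8ld  n*n/2 = %8ld\n",
               n, iterations, n * n / 2);
    }
    return 0;
}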
Your algorithm is equivalent to
while ((i += 2) < n*n)
...
which is O(n^2/2), which is the same as O(n^2), because big O complexity does not care about constants.
Let m be the number of iterations taken. The loop stops when the two counters meet, i.e. when
i + m = n^2 - m
which gives
m = (n^2 - i)/2 = (n^2 - 1)/2 for the initial i = 1
In Big-O notation, this implies a complexity of O(n^2).
Yes, this algorithm is O(n^2).
To calculate complexity, we have a table of the common complexity classes:
O(1)
O(log n)
O(n)
O(n log n)
O(n²)
O(n^a)
O(a^n)
O(n!)
Each row represents a set of algorithms. An algorithm that is in O(1) is also in O(n), in O(n^2), and so on, but not the other way round. Your algorithm executes n*n/2 statements.
n < n*log n < n*n/2 <= n²
So the smallest set in the table that contains your algorithm is O(n²), because O(n) and O(n log n) are too small.
For example:
For n = 100, sum = 5000: 100 (n) < 200 (n*log10 n) < 5000 (n*n/2) < 10000 (n^2)
I'm sorry for my english.
Even though we set j = n * n at the beginning, we increment i and decrement j during each iteration, so shouldn't the resulting number of iterations be a lot less than n*n?
Yes! That's why it's O(n^2). By the same logic, it's a lot less than n * n * n, which makes it O(n^3). It's even O(6^n), by similar logic.
big-O gives you information about upper bounds.
I believe you are really trying to ask why the complexity is Theta(n^2) or Omega(n^2), but if you're just trying to understand what big-O is, you really need to understand that it gives upper bounds on functions first and foremost.
How do I find time complexity as a function of the problem size n?
sum = 0;
if (EVEN(n)) {
    for (i = 0; i < n; i++) {
        if (i % 2 == 0) {
            /* O(log n) work happens here */
        }
        else {
            sum++;
        }
    }
}
else {
    sum = sum + n;
}
The answer is: O(N log N)
Considering the worst case scenario EVEN(n), the for loop will execute N times or in O(N) time.
The worst case complexity of the code inside the for loop is O(log N).
You then multiply the for loop's complexity by the complexity of its contents.
Therefore, O(N) * O(log N) = O(N log N).
EDIT: With regards to the code inside the for loop...
Since the O(log N) work only runs when i % 2 == 0, it runs on every other iteration of the for loop. Therefore the total is really about 0.5 * N * log N, but since you drop all constants when calculating complexity, the answer is still O(N log N).
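To make the structure concrete, here is a hypothetical instantiation (my own sketch) in which the O(log N) placeholder is a doubling loop; counting the work done in that branch confirms the 0.5 * N * log N total:

#include <stdio.h>
#include <math.h>

#define EVEN(x) (((x) % 2) == 0)

int main(void) {
    long n = 65536;          /* an even n, so the expensive branch is taken */
    long sum = 0, work = 0;

    if (EVEN(n)) {
        for (long i = 0; i < n; i++) {
            if (i % 2 == 0) {
                /* stand-in for the O(log N) block: a doubling loop */
                for (long m = 1; m < n; m *= 2)
                    work++;
            } else {
                sum++;
            }
        }
    } else {
        sum = sum + n;
    }

    printf("work = %ld, 0.5 * n * log2(n) = %.0f\n",
           work, 0.5 * n * log2((double)n));
    return 0;
}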
I would like to understand asymptotic analysis better, since I don't believe I have a solid understanding of it. I would appreciate it if someone could highlight a better approach to it. Here are two examples.
for (int i = 1; i <= n; i *= 2) {
    for (int j = 0; j < n; j++) {
        count++;
    }
}
This question is from a quiz and its answer is O(n log n).
I watched a lecture from Stanford University and its example is below:
for i = 1 to n
    for j = i + 1 to n
        if A[i] == A[j] return TRUE
return FALSE
The asymptotic analysis for the second problem gives quadratic time, O(n^2).
How can I know when it is O(n log n) and when it is O(n^2)? They both have nested for loops.
Any answer is highly appreciated. Thanks in advance.
The first example is O(n log n) because the outer loop repeats log(n) times, and each of its repeats requires O(n) iterations of the inner loop - so this totals O(log(n) * n) = O(n log n).
However, in the 2nd example the outer loop requires O(n) iterations, and for each iteration i it requires O(n-i) iterations of the inner loop. This means it will take n + (n-1) + (n-2) + ... + 2 + 1 total time. This is an arithmetic progression, and its sum is in O(n^2).
There is no "easy way" to know the complexity without understanding some of what happens - complexity analysis is case dependent.
However, there are some hints that might help you. For example, if the iteration counter is multiplied at each iteration, it is a strong indication that a logarithm will be involved in the complexity function, like in your first example.
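One practical habit (my own sketch, not from the answer above) is to instrument both snippets with a counter and watch how the count grows as n doubles: roughly doubling (plus a little) suggests n log n, while quadrupling suggests n^2:

#include <stdio.h>

int main(void) {
    for (long n = 1000; n <= 16000; n *= 2) {
        /* First example: doubling outer loop, linear inner loop. */
        long c1 = 0;
        for (long i = 1; i <= n; i *= 2)
            for (long j = 0; j < n; j++)
                c1++;

        /* Second example: worst case of the duplicate check (no duplicates). */
        long c2 = 0;
        for (long i = 1; i <= n; i++)
            for (long j = i + 1; j <= n; j++)
                c2++;

        printf("n = %6ld   n log n-ish = %10ld   n^2-ish = %12ld\n", n, c1, c2);
    }
    return 0;
}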