Asymptotic complexity of an algorithm (Big-O)

I've got the following example:
i = 2;
while i <= n {
    O(1)
    j = 2*i
    while j <= n {
        O(1)
        j = j + i
    }
    i = i + 1
}
I'm a beginner at calculating asymptotic complexity. I think it is O((n-1)*(n/4)), but I'm not sure whether this answer is correct or I'm missing something.

In the inner loop, j goes from 2i to n in steps of i, so the inner loop runs (n-2i)/i + 1 times, which is n/i - 1 (integer division).
The outer loop then runs with i from 2 to n, for a total of (n/2 - 1) + (n/3 - 1) + (n/4 - 1) + ... + (n/(n/2) - 1) inner iterations (for larger i, there are no iterations).
This quantity is difficult to estimate exactly, but it is bounded above by n(H(n/2) - 1) - n/2, where H(k) denotes the k-th Harmonic Number.
We know that H(n) = O(log n); hence, asymptotically, the running time is O(n log n).
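To see this bound concretely, here is a minimal C sketch (my addition, not part of the original question) that counts the inner-loop iterations of the pseudocode above and compares the count against n*ln(n):

#include <stdio.h>
#include <math.h>

int main(void) {
    /* count inner-loop iterations of the nested loops above */
    for (long n = 1000; n <= 1000000; n *= 10) {
        long count = 0;
        for (long i = 2; i <= n; i++)
            for (long j = 2 * i; j <= n; j += i)
                count++;
        printf("n=%8ld  iterations=%10ld  n*ln(n)=%12.0f\n",
               n, count, n * log((double)n));
    }
    return 0;
}

The ratio of the last two columns settles to a constant, which is exactly what O(n log n) predicts.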

Time complexity of this code confusing me

I'm struggling to understand the concepts of calculating time complexity. I have this code in C; why is the time complexity O(n) and not O(n log n)?
The first loop runs for a maximum of 3 iterations, the outer for loop has logarithmic complexity, and each of its iterations does linear work.
int f2 (int n)
{
    int j, k, cnt = 0;
    do
    {
        ++n;                     /* advance n to the next multiple of 3: at most 3 iterations */
    } while (n % 3);
    for (j = n; j > 0; j /= 3)   /* j = n, n/3, n/9, ...: O(log n) iterations */
    {
        k = j;
        while (k > 0)            /* about j/3 iterations for this j */
        {
            cnt++;
            k -= 3;
        }
    }
    return cnt;
}
Why do we neglect the log factor?
T = n + n/3 + n/9 + ... + 1
3T - T = 2T = 3n - 1
T = 1.5n - 0.5
so it's O(n)
It is a common beginner's mistake to reason as follows:
the outer loop follows a decreasing geometric progression, so it iterates O(log n) times.
the inner loop follows an arithmetic progression, so its complexity is O(n).
hence the global complexity is O(n+n+n+...)=O(n log n).
The mistake is due to the lack of rigor in the notation. The correct approach is
the outer loop follows a decreasing geometric progression, so it iterates O(log n) times.
the inner loop follows an arithmetic progression, so its complexity is O(j) (not O(n) !), where j decreases geometrically.
hence the global complexity is O(n+n/3+n/9+...)=O(n).
The time complexity of your program is O(n).
Because the total number of calculations is: log(n)/log(3) + 2*(n/3 + n/9 + n/27 + ...) < log(n) + n < 2*n
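As an empirical check, here is a small harness around f2 (my addition, not from the original post); cnt counts the inner-loop iterations, and cnt/n settles near 0.5, i.e. the work is linear:

#include <stdio.h>

int f2(int n)   /* as defined in the question */
{
    int j, k, cnt = 0;
    do { ++n; } while (n % 3);     /* round n up to a multiple of 3 */
    for (j = n; j > 0; j /= 3)     /* O(log n) outer iterations */
        for (k = j; k > 0; k -= 3) /* ~j/3 inner iterations */
            cnt++;
    return cnt;
}

int main(void) {
    for (int n = 100; n <= 10000000; n *= 10) {
        int c = f2(n);
        printf("n=%9d  f2(n)=%9d  cnt/n=%.3f\n", n, c, (double)c / n);
    }
    return 0;
}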

Asymptotic Growth of Run Time of function with three nested for loops

I have pseudo code for a function (below). I understand that if each of i, j and k were increased by 1, the worst-case run time would be O(n^3). I am struggling to understand the impact of n/2 though - if any - on the run time. Any guidance would be great.
for i = n/2; i < n; increase i by 1
    for j = 1; j < n/2; increase j by 1
        for k = 1; k < n; increase k by k*2
            Execute a Statement
Your understanding is not correct.
k is increased by k*2, which leads to logarithmic time, so the complexity is actually O(n^2 * log n).
O(n/2) = O(n), therefore the n/2 does not have any impact on the asymptotic growth.
If you are not sure, the general approach is to count as precisely as possible and then remove the constants.
The for i loop will execute n/2 times, the for j loop also n/2 times, and the k loop log n times.
n/2 * n/2 * log n = (n^2/4) * log n. You can remove the constants, so O(n^2 * log n).
The worst-case time complexity is not O(N^3).
Check the innermost for loop: k increases by k*2.
That means the innermost for loop will take O(lg N) time.
The other two outer loops would take O(N) time each, and the N/2 would not have any effect on the asymptotic growth of the run time.
So the overall time complexity would be O(N^2 * lg N).
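Both answers can be verified numerically. Below is a C sketch (my addition; I read "increase k by k*2" as geometric growth and use k *= 2, but any constant factor greater than 1 gives the same O(log n) behavior):

#include <stdio.h>
#include <math.h>

int main(void) {
    /* count executions of the innermost statement */
    for (long n = 64; n <= 4096; n *= 4) {
        long count = 0;
        for (long i = n / 2; i < n; i++)
            for (long j = 1; j < n / 2; j++)
                for (long k = 1; k < n; k *= 2)
                    count++;
        printf("n=%5ld  count=%12ld  (n*n/4)*log2(n)=%12.0f\n",
               n, count, (n * (double)n / 4) * log2((double)n));
    }
    return 0;
}

The measured count tracks (n^2/4)*log2(n), confirming O(n^2 * log n).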

Time complexity of O(log n) in double nested loop function

I don't know how to calculate the time complexity of this algorithm. I know nested loops are O(n^2), but I don't know what to do with .insert(). I came to the wrong conclusion of it being O(n^2 + n log n), but I know I can't just sum in big O. Any help would be appreciated.
for i in range(arr_len):
    for j in range(arr_len):
        if i == arr[j]:
            max_bin_heap.insert(...)  # whatever; O(log n)
At first glance, most people would say that this is O(n*n*log n) because of the two nested loops and the O(log n) max_bin_heap.insert call within the inner loop. However, it is not! Pay attention to the if i == arr[j] condition. For each j in the inner loop, at most one value of i will be equal to arr[j], so the two for loops will not induce n^2 invocations of max_bin_heap.insert, but only n of them. Since there are n^2 comparisons and at most n*log n heap operations, the total complexity is O(n*log n + n*n) = O(n^2).
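The same counting argument can be demonstrated in C (a hypothetical harness, my addition; the heap insert is replaced by a plain counter):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int n = 1000;
    int *arr = malloc(n * sizeof *arr);
    for (int j = 0; j < n; j++)
        arr[j] = rand() % n;          /* values in [0, n) */

    long comparisons = 0, inserts = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            comparisons++;
            if (i == arr[j])
                inserts++;            /* stand-in for max_bin_heap.insert */
        }
    /* prints comparisons = n^2 = 1000000 but inserts = n = 1000 */
    printf("comparisons=%ld  inserts=%ld\n", comparisons, inserts);
    free(arr);
    return 0;
}

So the heap operations contribute at most O(n log n), which is dominated by the O(n^2) comparisons.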

Calculating tilde-complexity of for-loop with cubic index

Say I have following algorithm:
for (int i = 1; i < N; i *= 3) {
    sum++;
}
I need to calculate the complexity using tilde-notation, which basically means that I have to find a tilde-function such that when I divide the complexity of the algorithm by this tilde-function, the limit at infinity is 1.
I don't think there's any need to calculate the exact complexity; we can ignore the constants and then we have a tilde-complexity.
By looking at the growth of the index, I assume that this algorithm is
~ log N
But rather than a binary logarithm, the base in this case is 3.
Does this matter for the exact notation? Is the order of growth exactly the same, so that we can ignore the base when using tilde-notation? Am I approaching this correctly?
You are right, the for loop executes ceil(log_3 N) times, where log_3 N denotes the base-3 logarithm of N.
No, you cannot ignore the base when using the tilde notation.
Here's how we can derive the time complexity.
We will assume that each iteration of the for loop costs C, for some constant C>0.
Let T(N) denote the number of executions of the for-loop. Since at the j-th iteration the value of i is 3^j, the number of iterations we make is the smallest j for which 3^j >= N. Taking base-3 logarithms of both sides, we get j >= log_3 N. Because j is an integer, j = ceil(log_3 N). Thus T(N) ~ ceil(log_3 N).
Let S(N) denote the time complexity of the for-loop. The "total" time complexity is thus C * T(N), because the cost of each of the T(N) iterations is C, which in tilde notation we can write as S(N) ~ C * ceil(log_3 N).
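Here is a quick numerical check of T(N) = ceil(log_3 N) (a minimal sketch, my addition):

#include <stdio.h>
#include <math.h>

int main(void) {
    for (long N = 10; N <= 100000000; N *= 10) {
        long iters = 0;
        for (long i = 1; i < N; i *= 3)   /* the loop from the question */
            iters++;
        printf("N=%10ld  iterations=%3ld  ceil(log_3 N)=%3.0f\n",
               N, iters, ceil(log((double)N) / log(3.0)));
    }
    return 0;
}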

Why is the Big-O complexity of this algorithm O(n^2)?

I know the big-O complexity of this algorithm is O(n^2), but I cannot understand why.
int sum = 0;
int i = 1, j = n * n;
while (i++ < j--)
    sum++;
Even though we set j = n * n at the beginning, we increment i and decrement j during each iteration, so shouldn't the resulting number of iterations be a lot less than n*n?
During every iteration you increment i and decrement j, which is equivalent to just incrementing i by 2. Therefore, the total number of iterations is n^2 / 2, and that is still O(n^2).
big-O complexity ignores coefficients. For example: O(n), O(2n), and O(1000n) are all the same O(n) running time. Likewise, O(n^2) and O(0.5n^2) are both O(n^2) running time.
In your situation, you're essentially incrementing your loop counter by 2 each time through your loop (since j-- has the same effect as i++). So your running time is O(0.5n^2), but that's the same as O(n^2) when you remove the coefficient.
You will have exactly n*n/2 loop iterations (or (n*n-1)/2 if n is odd).
In the big O notation we have O((n*n-1)/2) = O(n*n/2) = O(n*n) because constant factors "don't count".
Your algorithm is equivalent to
while ((i += 2) < n * n)
    ...
which is O(n^2/2), which is the same as O(n^2) because big-O complexity does not care about constants.
Let m be the number of iterations taken. Then, with the initial i = 1,
i + m = n^2 - m
which gives
m = (n^2 - i)/2 = (n^2 - 1)/2
In Big-O notation, this implies a complexity of O(n^2).
Yes, this algorithm is O(n^2).
To determine the complexity, consider the usual hierarchy of complexity classes:
O(1)
O(log n)
O(n)
O(n log n)
O(n²)
O(n^a)
O(a^n)
O(n!)
Each row represents a set of algorithms. An algorithm that is in O(1) is also in O(n), in O(n²), and so on, but not the reverse. Your algorithm performs n*n/2 statements, and
n < n·log n < n*n/2 ≤ n²
So the smallest class in this hierarchy that contains your algorithm is O(n²), because O(n) and O(n log n) are too small.
For example, for n = 100: sum = 5000, and 100 (n) < 200 (n·log n) < 5000 (n*n/2) < 10000 (n²).
Even though we set j = n * n at the beginning, we increment i and decrement j during each iteration, so shouldn't the resulting number of iterations be a lot less than n*n?
Yes! That's why it's O(n^2). By the same logic, it's a lot less than n * n * n, which makes it O(n^3). It's even O(6^n), by similar logic.
big-O gives you information about upper bounds.
I believe you are really asking why the complexity is Theta(n^2) or Omega(n^2), but if you're just trying to understand what big-O is, you first need to understand that it gives upper bounds on functions.
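A direct count (a small C sketch, my addition) confirms the n*n/2 figure derived in the answers above:

#include <stdio.h>

int main(void) {
    for (long n = 10; n <= 10000; n *= 10) {
        long i = 1, j = n * n, iters = 0;
        while (i++ < j--)     /* same loop as in the question */
            iters++;
        printf("n=%6ld  iterations=%10ld  n*n/2=%10ld\n",
               n, iters, n * n / 2);
    }
    return 0;
}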
