Calculate the big-O and big-Omega of the following piece of code

I was asked to find the big-O and big-Omega of the code below, given that the function f is O(log(n)) and Ω(1), and the function g is O(n) and Ω((log(n))^2):
for (int i = n; i >= 0; i /= 2)
    if (f(i) <= g(i))
        for (int j = f(i)*f(i); j < n; j++)
            f(j);
The big problem that I have is that I don't know how to incorporate the complexity of the functions into the calculation. I mean, I know how to calculate the complexity of loops that look like this:
for(int i = 0; i < n*2; i++) {
    ....
}
or like this
for(int i = 0; i < n; i++) {
    for(int j = 0; j < n; j++) {
    }
}
Thank you in advance.
This is what I've tried:
for (int i = n; i >= 0; i /= 2)             // this is approximately O(log(n))
    if (f(i) <= g(i))                       // because O(log(n)) < O(n), this line is O(n)
        for (int j = f(i)*f(i); j < n; j++) // O(n*log(n))
            f(j);                           // O(log(n))
So by my calculation I get O(log(n) * n * n * log(n) * log(n)) = O(n^2 * log^3(n)).

This is a tricky question, because the loop execution depends on the values returned by the functions f and g. However, remember that you need to estimate the worst case, so you need to assume two things:
f(i) <= g(i) is always true, so the internal loop always executes
the internal loop starts from 0, because that is the minimal value you can get by squaring f(i)
So, your piece of code becomes much simpler:
for (int i = n; i >= 0; i /= 2)
{
    f(i);   // from the condition f(i) <= g(i)
    g(i);
    f(i);   // from the initializer f(i)*f(i)
    f(i);
    for (int j = 0; j < n; j++)
        f(j);
}
I think you can take over from here.
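To sanity-check that simplification, here is a small counting experiment. It is only a sketch under stated assumptions: f and g are hypothetical stand-ins that "cost" exactly their worst-case bounds from the question (f(i) roughly log(i) work, g(i) roughly i work), and the outer condition is changed to i >= 1 because with integer division i /= 2 the original i >= 0 would never become false.

#include <cmath>
#include <iostream>

// Global operation counter incremented by the stand-in functions.
long long ops = 0;

// Hypothetical stand-ins, not the real f and g from the assignment.
void f(long long i) { ops += (i > 2) ? (long long)std::log2((double)i) : 1; }  // ~log(i) work
void g(long long i) { ops += (i > 0) ? i : 1; }                                // ~i work

int main()
{
    for (long long n = 1024; n <= 1048576; n *= 32) {
        ops = 0;
        // The simplified worst case from the answer above.
        for (long long i = n; i >= 1; i /= 2) {   // i >= 1 instead of i >= 0 so the loop terminates
            f(i); g(i); f(i); f(i);               // the comparison plus the two f(i) calls in j's initializer
            for (long long j = 0; j < n; j++)
                f(j);                             // inner loop body
        }
        double lg = std::log2((double)n);
        std::cout << "n=" << n << "  ops=" << ops
                  << "  ops/(n*log^2 n)=" << (double)ops / ((double)n * lg * lg) << '\n';
    }
    return 0;
}

If the printed ratio settles near a constant as n grows, that hints at where the worst case lands; the summation should still be done by hand to confirm it.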

Related

How to determine the precise (i.e., not just the order of magnitude) Big-Oh values based on the number of statement executions?

I am studying Data Structures, but I am a bit confused about my homework assignment. I am trying to figure out how to count the precise Big-Oh values of the following:
for (int i = 0; i < n; i++)
{
    for (int j = i; j < n; j++)
    {
        for (int k = j; k < n; k++)
        {
            sum += j;
        }
    }
}
I know how to find the Big-Oh in terms of order of magnitude, but I don't know how to count it precisely. An example answer would be of the form: 10n + 3n^2 + 2.
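Since no answer was posted for this one, here is one way to approach it, as a sketch: count exactly how many times the innermost statement executes and compare that with a closed form. For this fragment, the innermost statement runs once for every triple 0 <= i <= j <= k <= n-1, which is n(n+1)(n+2)/6. A fully "precise" answer in the 10n + 3n^2 + 2 style would also have to count the loop-header comparisons and increments, which depends on the cost model the course uses, so only the innermost statement is counted below.

#include <iostream>

int main()
{
    for (long long n = 1; n <= 10; n++) {
        long long sum = 0, executions = 0;
        // The fragment from the question, instrumented with a counter.
        for (long long i = 0; i < n; i++)
            for (long long j = i; j < n; j++)
                for (long long k = j; k < n; k++) {
                    sum += j;
                    executions++;
                }
        long long closedForm = n * (n + 1) * (n + 2) / 6;
        std::cout << "n=" << n << "  executions=" << executions
                  << "  n(n+1)(n+2)/6=" << closedForm << '\n';
    }
    return 0;
}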

Algorithmic big o order of growth code

I'm doing an online course and I'm stuck on this question. I know there are similar questions, but they don't help me.
What is the order of growth of the worst case running time of the
following code fragment as a function of N?
int sum = 0;
for (int i = 0; i*i*i < N; i++)
    for (int j = 0; j*j*j < N; j++)
        for (int k = 0; k*k*k < N; k++)
            sum++;
I thought that the order would be n^3, but I don't think this is correct because the loops only go through a third of n each time. So would that make it n log n?
Also
int sum = 0;
for (int i = 1; i <= N; i++)
    for (int j = 1; j <= N; j++)
        for (int k = 1; k <= N; k = k*2)
            for (int h = 1; h <= k; h++)
                sum++;
I think this one would be n^4 because you have n * n * 0.5n * 0.5n
The loops in fact only go up to the cube root of N (i^3 < N, etc.).
Three nested loops of this length give O((N^(1/3))^3), which is O(N).
Of note, even if you were correct and each loop went up to one third of N, cubing that would give O(N^3/27); 1/27 is a constant, so that would still be O(N^3).
If you examine the value of sum for various values of N, then it becomes pretty clear what the time complexity of the algorithm is:
#include <iostream>
int main()
{
    for (int N = 1; N <= 100; ++N) {
        int sum = 0;
        for (int i = 0; i*i*i < N; i++)
            for (int j = 0; j*j*j < N; j++)
                for (int k = 0; k*k*k < N; k++)
                    sum++;
        std::cout << "For N=" << N << ", sum=" << sum << '\n';
    }
    return 0;
}
You can then draw your own conclusions with greater insight.
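The same empirical approach works for the second fragment in the question. A sketch (my addition, with the same caveat that measuring is a sanity check, not a proof): if the count were on the order of N^4, the ratio sum/N^3 would keep growing; if it is on the order of N^3, the ratio stays bounded.

#include <iostream>

int main()
{
    for (long long N = 16; N <= 256; N *= 4) {
        long long sum = 0;
        // Second fragment from the question: the k loop doubles, and the h loop
        // runs k times, so the two inner loops together do roughly 2N work.
        for (long long i = 1; i <= N; i++)
            for (long long j = 1; j <= N; j++)
                for (long long k = 1; k <= N; k = k * 2)
                    for (long long h = 1; h <= k; h++)
                        sum++;
        std::cout << "N=" << N << "  sum=" << sum
                  << "  sum/N^3=" << (double)sum / ((double)N * N * N) << '\n';
    }
    return 0;
}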

Algorithmic complexity of O(n)

I recently started playing with algorithms from this Princeton course and I observed the following pattern:
O(N)
double max = a[0];
for (int i = 1; i < N; i++)
    if (a[i] > max) max = a[i];
O(N^2)
for (int i = 0; i < N; i++)
    for (int j = i+1; j < N; j++)
        if (a[i] + a[j] == 0)
            cnt++;
O(N^3)
for (int i = 0; i < N; i++)
    for (int j = i+1; j < N; j++)
        for (int k = j+1; k < N; k++)
            if (a[i] + a[j] + a[k] == 0)
                cnt++;
The common pattern here is that as the nesting of the loops grows, the exponent also increases.
Is it safe to assume that if I have 20 for loops my complexity would be O(N^20)?
PS: Note that 20 is just a random number I picked, and yes, if you nest 20 for loops in your code there is clearly something wrong with you.
It depends on what the loops do. For example, if I change the bound of the 2nd loop so it does just 3 iterations, like this:
for (int i = 0; i < N; i++)
    for (int j = i; j < i+3; j++)
        if (a[i] + a[j] == 0)
            cnt++;
we get back to O(N)
The key is whether the number of iterations in the loop is related to N and increases linearly as N does.
Here is another example where the 2nd loop goes up to N^2:
for (int i = 0; i < N; i++)
    for (int j = i; j < N*N; j++)
        if (a[i] + a[j] == 0)
            cnt++;
This would be O(N^3)
Yes, if the length of the loop is proportional to N and the loops are nested within each other like you described.
In your specific pattern, yes. But it is not safe to assume that in general. You need to check whether the number of iterations in each loop is O(n) regardless of the state of all the enclosing loops. Only after you have verified that this is the case can you conclude that the complexity is O(n^(loop nesting level)).
Yes. Even though you decrease the interval of iteration, big-O notation is about N growing towards infinity, and since all of your loops' lengths grow proportionally to N, it is true that such an algorithm would have time complexity O(N^20).
I strongly recommend that you understand why a doubly nested loop, with each loop running from 0 to N, is O(N^2). Use summations to evaluate the number of steps involved in the for loops, and then, dropping constants and lower-order terms, you will get the Big-Oh of that algorithm.
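As a sketch of that summation step (my notation, not part of the original answer): for the plain doubly nested loop, and for the j = i+1 variant from the question, the iteration counts of the innermost statement come out as

\[
\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} 1 = N \cdot N = N^2,
\qquad
\sum_{i=0}^{N-1}\sum_{j=i+1}^{N-1} 1 = \sum_{i=0}^{N-1}(N-1-i) = \frac{N(N-1)}{2} = \frac{1}{2}N^2 - \frac{1}{2}N .
\]

Dropping the constant factor 1/2 and the lower-order N term leaves O(N^2) in both cases, which is why the triangular loop pattern still counts as quadratic.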

Big O notation of nested sequential loops

I have been searching through the forums on big O notation and have learned quite a bit. My problem is pretty specific, and I think a unique case will better help me understand big O; I am ignoring constants.
To my understanding, if a loop goes through all n elements, then it's O(n).
for(int i = 0; i < n; i++)
{
}
If a loop that goes through all of n is nested inside another loop that goes through all of n, the counts are multiplied: n * n = n^2.
for(int i = 0; i < n; i++)
{
    for(int j = 0; j < n; j++)
    {
    }
}
Lastly, if a loop is followed by another loop that goes through all elements, the counts are added: n + n = 2n.
for(int j = 0; j < n; j++)
{
}
for(int k = 0; k < n; k++)
{
}
My question is about the following lines of code:
for(int i = 0; i < n; i++)
{
    for(int j = 0; j < n; j++)
    {
    }
    for(int k = 0; k < n; k++)
    {
    }
    for(int l = 0; l < n; l++)
    {
        for(int m = 0; m < n; m++)
        {
        }
    }
}
So based on the rules above, I am calculating big O to be n * (n + n + n * n), which is n^3 + 2n^2. So would that make my big O O(n^3), or would my big O be O(n^3 + 2n^2)? Am I going about this all wrong, or am I somewhere close in the ballpark? Mainly I'm trying to figure out if these loops would be less than O(n^4). Thanks in advance.
Big-O notation is used to characterize the asymptotic behavior of an algorithm depending on some value n that indicates the data volume, independently of any constant factor such as processor speed.
In your example, n^3 grows faster than 2n^2, i.e., for large n, 2n^2 can be neglected compared to n^3. The asymptotic behavior of your nested loops thus has order O(n^3).
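If it helps to see that numerically, here is a small check (my addition, just a sketch): it counts the innermost iterations of the exact structure from the question and compares the total with n^3 + 2n^2 and with n^3 alone. The ratio against n^3 approaches 1 as n grows, which is exactly why the 2n^2 term is dropped.

#include <iostream>

int main()
{
    for (long long n = 8; n <= 512; n *= 8) {
        long long count = 0;
        // The structure from the question, with a counter in each innermost position.
        for (long long i = 0; i < n; i++) {
            for (long long j = 0; j < n; j++)
                count++;
            for (long long k = 0; k < n; k++)
                count++;
            for (long long l = 0; l < n; l++)
                for (long long m = 0; m < n; m++)
                    count++;
        }
        std::cout << "n=" << n << "  count=" << count
                  << "  n^3+2n^2=" << (n * n * n + 2 * n * n)
                  << "  count/n^3=" << (double)count / ((double)n * n * n) << '\n';
    }
    return 0;
}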

What will be the complexity of a for loop if nothing is happening in the body of the loop?

Code:
int c = 0;
for (int i = 0; i < n; i++)
{
    for (int j = 0; j < n; j++)
    {
        c = i * j;
    }
}
Time complexity: O(n^2)
Now what will be the complexity of the following code?
for (int i = 0; i < n; i++)
{
    for (int j = 0; j < n; j++)
    {
        // c = i * j;
        // nothing is happening inside the loop
    }
}
Will the complexity be the same as above (O(n^2)) or something else?
Theoretically, yes, because incrementing i and j still has to happen, along with comparing them to the end value in each iteration.
However, compilers might optimize this to run in constant time and simply set the final values of i and j.
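For a sense of what such an optimization can look like, here is a sketch of one possible outcome under the as-if rule; this is not a guarantee that any particular compiler produces exactly this, only what it is allowed to do because the empty-bodied loops have no observable effect.

// Original:
// for (int i = 0; i < n; i++)
//     for (int j = 0; j < n; j++)
//     {
//         // nothing is happening inside the loop
//     }
//
// One possible optimized form: the loops are removed entirely, since i and j
// are local to the loops and nothing observable depends on them. The
// source-level analysis is still O(n^2); the generated code just no longer
// performs those iterations.
void emptyLoops(int n)
{
    (void)n;  // nothing left to do at run time
}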
For both, the complexity is O(n^2).
