I need to determine the running time for each program.
constTimeFunction(n): T(n) = O(1)
linTimeFunction(n): T(n) = O(n)
quadTimeFunction(n): T(n) = O(n^2)
cubeTimeFunction(n): T(n) = O(n^3)
I've provided my answers. Am I wrong?
Program 1:
for (i=0; i<4*n; ++i) {
linTimeFunction( n );
quadTimeFunction( n );
constTimeFunction( n );
}
The running time is O(n^3).
Program 2:
for (i=0; i<3; ++i) {
for (j=0; j<n; ++j) {
linTimeFunction( n );
linTimeFunction( n );
linTimeFunction( n );
}
}
The running time is O(n^2).
Program 3:
for (i=0;i<9*n;++i) {
cubeTimeFunction( n );
for (j=0;j<5;++j) {
quadTimeFunction( n );
}
linTimeFunction( n );
}
The running time is O(n^5).
(1) is indeed O(n^3), assuming quadTimeFunction is quadratic: the loop runs 4n times and the dominant call inside it costs O(n^2), so the total is 4n * O(n^2) = O(n^3). (2) is also correct. (3), however, is O(n^4), not O(n^5): each of the 9n iterations is dominated by the O(n^3) call to cubeTimeFunction, giving 9n * O(n^3) = O(n^4).
One formal way, among many others, to deduce the order of growth of an iterative algorithm is to sum the cost of the loop body over all iterations.
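For Program 1, for instance, with hypothetical per-call cost constants c1, c2, and c3 (my notation, not from the original post):

T(n) = sum over i = 0 .. 4n-1 of (c1*n + c2*n^2 + c3)
     = 4n * (c2*n^2 + c1*n + c3)
     = O(n^3)

The 4n repetitions of the dominant O(n^2) call to quadTimeFunction are what produce the cubic bound.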
If the number of statements a function executes increases with the input, but only up to a fixed limit, would it be considered O(n) or O(1)?
For example:
void func(int n)
{
if (n > 1000)
{
for (int i = 0; i < 1000; i++)
{
//do thing
}
}
else
{
for (int i = 0; i < n; i++)
{
//do same thing
}
}
}
Is this function O(n) or O(1)?
It is O(1), not O(n).
Big-O analysis is asymptotic: it intentionally ignores an arbitrarily large initial segment of the performance function in order to describe the large-scale behavior accurately. Here, for every n > 1000 the loop body runs exactly 1000 times, so the running time is bounded by a constant.
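A way to see the constant bound concretely: the two branches can be folded into one loop whose iteration count is min(n, 1000). This rewrite is my own sketch, not from the question:
#include <algorithm>

void func(int n)
{
    // Iteration count is min(n, 1000): it grows with n at first,
    // but never exceeds the constant 1000, so T(n) = O(1).
    int iters = std::min(n, 1000);
    for (int i = 0; i < iters; i++)
    {
        // do thing
    }
}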
Can anyone help me identify the steps for analyzing the following example, and give more explanation of how those steps determine that the Big-O notation is O(2^n)?
int i, j = 1;
for(i = 1; i <= n; i++)
{
j = j * 2;
}
for(i = 1; i <= j; i++)
{
cout << j << "\n";
}
Thank you in advance.
The first loop has n iterations and assigns 2^n to j.
The second loop has j = 2^n iterations.
Each cout prints j = 2^n, a number with Theta(n) digits, so a single cout costs O(log j) = O(n).
Hence the overall complexity is O(n) + 2^n * O(n) = O(n * 2^n), which grows strictly faster than 2^n.
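To see the n * 2^n growth concretely, here is a small counting harness (my own sketch, not part of the question) that tallies the characters the second loop prints: 2^n iterations, each printing a number with Theta(n) digits.
#include <iostream>
#include <string>

int main()
{
    for (int n = 1; n <= 20; ++n)
    {
        long long j = 1;
        for (int i = 1; i <= n; i++)
            j = j * 2;                                  // j = 2^n
        // Each of the j couts prints the value j, which has
        // Theta(n) decimal digits.
        long long chars = j * (long long)std::to_string(j).size();
        std::cout << "n=" << n << "  iterations=" << j
                  << "  characters printed=" << chars << "\n";
    }
    return 0;
}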
I'd be very grateful if someone explained to me how to analyse the time complexity of this loop:
int t;
for (i = 1; i <= n; i++){
t = i;
while(t > 0){
t = t/2;
}
}
I'm inclined to think that it is O(n*log(n)), since it's very similar to:
int t;
for (i = 1; i <= n; i++){
t = n;
while(t > 0){
t = t/2;
}
}
but it does fewer assignments to the variable t. Is this the case?
If so, how could I arrive at this conclusion more rigorously?
For the first snippet, the inner loop runs log(1) + log(2) + ... + log(n) times in total, which is the same as log(1 * 2 * ... * n) = log(n!), and by Stirling's approximation that is Theta(n log(n)). Your second snippet has the same complexity. Even if the first happens to do fewer assignments, we only care about the overall growth; in this case both are linearithmic.
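As a sanity check, you can tally the inner-loop iterations of the first snippet and compare the total against n*log2(n). This counting harness is my own sketch, not from the question:
#include <cmath>
#include <iostream>

int main()
{
    int sizes[] = {1000, 10000, 100000};
    for (int n : sizes)
    {
        long long count = 0;
        for (int i = 1; i <= n; i++)
            for (int t = i; t > 0; t = t / 2)  // floor(log2(i)) + 1 iterations
                count++;
        std::cout << "n=" << n << "  count=" << count
                  << "  n*log2(n)=" << (long long)(n * std::log2(n)) << "\n";
    }
    return 0;
}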
How does the if-statement in this code affect its time complexity?
Based on this question: Runtime analysis, the for loop in the if-statement would run n*n times. But in this code, j outpaces i, so that by the end of the second loop, j = i^2. What does this make the time complexity of the third for loop, then? I understand that the first for loop runs n times, the second runs n^2 times, and the third runs n^2 times whenever it is triggered. So the complexity would be given by n * n^2 * (x * n^2), where x is the number of times the if-statement is true. The complexity is not simply O(n^6), because the if-statement is not true n times, right?
int n;
int sum;
for (int i = 1; i < n; i++)
{
for (int j = 0; j < i*i; j++)
{
if (j % i == 0)
{
for (int k = 0; k < j; k++)
{
sum++;
}
}
}
}
The if condition is true when j is a multiple of i; this happens i times as j goes from 0 to i * i, so the third for loop is entered only i times for each i. The overall complexity is O(n^4).
for (int i = 1; i < n; i++)
{
for (int j = 0; j < i*i; j++) // Runs O(n) times
{
if (j % i == 0) // Runs O(n) × O(n^2) = O(n^3) times
{
for (int k = 0; k < j; k++) // Runs O(n) × O(n) = O(n^2) times
{
sum++; // Runs O(n^2) × O(n^2) = O(n^4) times
}
}
}
}
The complexity is not simply O(n^6) because the if-statement is not true n times right?
No, it is not.
At worst it would be O(n^5), but it is less than that, since j % i == 0 only i times per outer iteration.
The first loop runs n times.
The second loop runs O(n^2) times per outer iteration, so the if-check executes O(n^3) times in total.
The if is true, and the third loop is entered, only O(n) times per outer iteration, and each time it is entered it runs j = O(n^2) times.
The combined worst-case complexity is therefore O(n) x O(n) x O(n^2) = O(n^4), which dominates the O(n^3) spent on the checks.
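For a sanity check (again, a counting harness of my own, not from the answer), count how often sum++ actually executes and compare it with n^4/8, the leading term of the exact count sum over i of i^2*(i-1)/2:
#include <iostream>

int main()
{
    int sizes[] = {50, 100, 200};
    for (int n : sizes)
    {
        long long sum = 0;
        for (int i = 1; i < n; i++)
            for (int j = 0; j < i * i; j++)
                if (j % i == 0)
                    for (int k = 0; k < j; k++)
                        sum++;
        std::cout << "n=" << n << "  sum=" << sum
                  << "  n^4/8=" << (long long)n * n * n * n / 8 << "\n";
    }
    return 0;
}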
I'm still learning about complexity measurement using Big-O notation, and I was wondering if I'm correct to say that the following method's complexity is O(n*log_4(n)), where the "4" is a subscript (a base-4 logarithm).
public static void f(int n)
{
for (int i=n; i>0; i--)
{
int j = n;
while (j>0)
j = j/4;
}
}
Yes, you are correct: the complexity of the function is O(n*log_4(n)).
log_4(n) = ln(n) / ln(4), and ln(4) is a constant, so if your function has complexity O(n*log_4(n)), it also has complexity O(n*ln(n)); the base of the logarithm only changes a constant factor.
Did you mean
public static void f(int n)
{
for (int i=n; i>0; i--)
{
int j = i; // Not j = n.
while (j>0)
j = j/4;
}
}
?
In that case, too, you are correct: it is O(n log n). Using the 4 as a subscript is correct as well, but it only makes the bound more cumbersome to write, since logarithms in different bases differ only by a constant factor.
Even with the j = n line, it is O(n log n).
In fact, to be more precise, you can say it is Theta(n log n).
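As a quick check of the j = n version, you can count the inner-loop iterations and compare the total to n*log_4(n). This counting harness is my own sketch, written in C++ like the earlier snippets rather than Java:
#include <cmath>
#include <iostream>

int main()
{
    int sizes[] = {1000, 10000, 100000};
    for (int n : sizes)
    {
        long long count = 0;
        for (int i = n; i > 0; i--)
            for (int j = n; j > 0; j = j / 4)  // floor(log4(n)) + 1 iterations
                count++;
        std::cout << "n=" << n << "  count=" << count
                  << "  n*log4(n)=" << (long long)(n * std::log(n) / std::log(4)) << "\n";
    }
    return 0;
}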
Yes, you are right; the complexity is O(n*log_4(n)).