for (int i = 1; i < a; i++) {
    for (int j = 1; j < b; j = j + a) {
        Function();  // O(1)
    }
}
In this case, the outer loop will be executed 'a' times (O(a)), and
the inner loop will be executed 'b/a' times (O(b/a)).
Then the total time complexity will be O(a * b/a) = O(b)?
I am not sure whether this interpretation is right or not.
Well, O(a * b/a) = O(b) is obviously right, because the identity is right there: O(b*a/a) = O(b*1) = O(b).
However, it seems like the time complexity is O(a*b*1) (assuming the looping itself adds no time overhead). The computational effort increases linearly with each individual loop size; that is the reason for O(a*b).
I think it is a good question; my thought is:
The original complexity should be O(a) * O(b/a).
But before you jump to conclusions, you have to consider the cases:
If b <= a, then O(b/a) = O(1), so O(a) * O(b/a) = O(a)
If b > a, then O(b/a) = O(b/a), so O(a) * O(b/a) = O(b)
So, combining these cases, I would say it is O(max(a,b)).
Armen is correct; the answer is O(ab). We can get the answer by these steps:
O((a-1)(b-1)(1))
= O(ab - a - b + 1), where the -a - b + 1 part can be ignored
= O(ab)
O(b) is incorrect since you pass through the outer loop a times, hence the answer must be at least O(a) (and, for all you know, b might be much smaller than a). The answer should depend on both a and b rather than on b alone.
Counting carefully, you pass through the inner loop ceil((b-1)/a) times, hence the complexity is
O(a*ceil((b-1)/a))
But,
ceil((b-1)/a) <= (b-1)/a + 1
Thus
a*ceil((b-1)/a) <= a*((b-1)/a + 1) = b - 1 + a = a + b - 1
The -1 is asymptotically negligible, hence the complexity is O(a+b).
Since O(a+b) = O(max(a,b)), this agrees with #shole's answer.
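
To convince yourself empirically, here is a quick sketch I put together (not from the original thread): count how many times the O(1) body executes for a few values of a and b, and compare against a + b.

#include <iostream>

// Count the calls to the O(1) body for given a and b.
long long countCalls(long long a, long long b) {
    long long calls = 0;
    for (long long i = 1; i < a; i++)
        for (long long j = 1; j < b; j += a)  // runs ceil((b-1)/a) times
            calls++;
    return calls;
}

int main() {
    // a=10, b=1000 -> 900 calls; a=1000, b=10 -> 999; a=1000, b=1000 -> 999.
    // All of the same order as a + b, and nowhere near a * b.
    std::cout << countCalls(10, 1000) << "\n"
              << countCalls(1000, 10) << "\n"
              << countCalls(1000, 1000) << "\n";
}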
Related
1) int p = 0;
2) for (int i = 1; i < n; i*=2) p++;
3) for (int j = 1; j < p; j*=2) stmt;
In my analysis, line #1 is O(1), line #2 is O(lg(n)), and line #3 is O(lg(p)). I believe the second and third lines are independent, so the asymptotic time complexity should be O(lg(n) + lg(p)). However, the lecturer said it is O(lglg(n)) because p = lg(n). At this point, I have three questions.
How does the second line relate to the third line? Could you please explain it in detail with some examples? I don't understand where p = lg(n) comes from.
Is O(lg(n) + lg(p)) wrong? Would you please explain why, if so?
If the complexity in my second question is correct, I don't understand how O(lglg(n)) can be the answer, because I think O(lg(n) + lg(p)) > O(lglg(n)).
Please comment if my questions are unclear.
It can be shown that p will be O(log n) after line 2 is finished. Therefore, the overall time complexity is O(O(stmt) * log(p) + log(n)), and since we know p, this reduces to O(O(stmt) * log(log(n)) + log(n)). Assuming stmt is O(1), the real runtime is O(log(n) + log(log(n))). This can be further reduced to O(log(n)), since for any non-trivial n, log(n) > log(log(n)).
Why is p O(log n)? Well, consider what p evaluates to after line 2 is complete when n is 2, 4, 8, 16. Each time, p will end up being 1, 2, 3, 4. Thus, to increase p by one, you need to double n; so p, as a function of n, is the inverse of 2^n, which is log(n). This same logic carries over to line 3, and the final construction of the runtime is detailed in the first paragraph of this post.
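
Here is a tiny sketch of my own, just to illustrate the paragraph above: it prints p for doubling values of n, and each doubling of n bumps p by exactly one.

#include <iostream>

int main() {
    for (int n : {2, 4, 8, 16, 1024}) {
        int p = 0;
        for (int i = 1; i < n; i *= 2) p++;  // line 2 from the question
        std::cout << "n=" << n << " -> p=" << p << "\n";  // prints 1, 2, 3, 4, 10
    }
}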
For your question, I made this C program, and we can do the complexity analysis step by step so you can follow:
#include <stdio.h>
int main() {
    //-----------first line to analyse-----------//
    // O(1), as the input size siz(p) = 1
    int p = 0;
    int i = 1, j = 1, n = 100;
    //-----------second line to analyse----------//
    // O(log(n)), as the input size siz(loop1) = n
    for (i = 1; i < n; i = i * 2)
        p++;  // as in the question: p counts the doublings, so p = log2(n)
    //-----------third line to analyse-----------//
    // O(log(p)), as the input size siz(loop2) = p,
    // and since p = log(n) this is O(log(log(n)))
    for (j = 1; j < p; j = j * 2)
        printf("%d", j);
    return 0;
}
As for the first line, there is one variable p and it can take only one input at a time, so the time complexity is constant.
We can say that int p = 0 is O(1), and we take the function f(n) = O(1).
After that we have the first loop, whose counter grows on a logarithmic scale (log base 2), so it is O(log(n)), as its input size depends on the variable n.
So the worst-case time complexity is now f(n) = O(1) + O(log(n)).
The third case is the same as the second loop, so we can say its time complexity is O(log(p)), as its input size is p. The 3rd line of code (the 2nd loop) is an independent part of the source code; if it were nested inside the first loop, it would depend on it.
So the time complexity is now f(n) = O(1) + O(log(n)) + O(log(p)).
Now we have the time complexity formula and need to keep the dominant term.
** O(LogLogn): the time complexity of a loop is considered O(LogLogn) if the loop variable is decreased or increased exponentially (raised to a constant power) each iteration.
// Here c is a constant greater than 1
for (int i = 2; i <=n; i = pow(i, c)) {
// some O(1) expressions
}
//Here fun is sqrt or cuberoot or any other constant root
for (int i = n; i > 0; i = fun(i)) {
// some O(1) expressions
}
So, by the ** reference above, we can easily see that the time complexity will be O(log(log(n))), since the input size of the second loop is p = log(n). This is the answer to your 3rd question.
reference: time complexity analysis
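
To make the ** pattern concrete, here is a runnable sketch of my own (assuming the constant c = 2, i.e. i = i*i): the body executes about log2(log2(n)) times.

#include <iostream>

int main() {
    long long n = 1000000000;  // 10^9
    int iterations = 0;
    for (long long i = 2; i <= n; i = i * i)  // i: 2, 4, 16, 256, 65536, ...
        iterations++;
    std::cout << iterations << "\n";  // prints 5, and log2(log2(10^9)) ~ 4.9
}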
1) i = s = 1;
   while (s <= n)
   {
       i++;
       s = s + i;
   }

2) for (int i = 1; i <= n; i++)
       for (int j = 1; j <= n; j += i)
           cout << "*";

3) j = 1;
   for (int i = 1; i <= n; i++)
       for (j = j*i; j <= n; j = j + i)
           cout << "*";
Can someone explain the time complexity of these three snippets to me?
I know the answers, but I can't understand how they were derived.
1) To figure this out, we need to figure out how large s is on the x'th iteration of the loop. Then we'll know how many iterations occur before the condition s > n is reached.
On the x'th iteration, the variable i has value x + 1,
and the variable s has value equal to the sum of all the values i has taken so far. So, on that iteration, s has value equal to
sum_{y = 1 .. x} (y+1) = O(x^2)
This means that we reach s > n on the x = O(sqrt(n))'th iteration. So that's the running time of the loop.
If you aren't sure why the sum is O(x^2), I gave an answer to another question like this once here, and the same technique applies. In this particular case you could also use the identity
sum_{y = 1 .. x} y = (x+1 choose 2) = x(x+1) / 2
This identity can be easily proved by induction on x.
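
A quick empirical check (my own sketch): count the iterations of snippet 1 and compare with sqrt(2n), since s grows like x^2/2.

#include <cmath>
#include <iostream>

int main() {
    for (long long n : {100, 10000, 1000000}) {
        long long i = 1, s = 1, iterations = 0;
        while (s <= n) {
            i++;
            s = s + i;
            iterations++;
        }
        std::cout << "n=" << n << " iterations=" << iterations
                  << " sqrt(2n)=" << std::sqrt(2.0 * n) << "\n";
    }
}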
2) Try to analyze how long the inner loop runs as a function of i and n. Since j starts at one, ends at n, and counts up by i, it runs about n/i times. So the total time for the outer loop is
sum_{i = 1 .. n} n/i = n * sum_{i = 1 .. n} 1 / i = O(n log n)
The series sum_{i = 1 .. n} 1 / i is called the harmonic series. It is well known that its partial sums grow as O(log n). I can't include a simple proof here; it can be proved using calculus, though. This is a series you just have to know. If you want to see a proof, you can look on Wikipedia at the "comparison test". The proof there only bounds the partial sums from below by about log n, but the same technique can be used to show they are O(log n) from above as well.
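
This is also easy to sanity-check (my sketch): the inner loop of snippet 2 runs floor((n-1)/i) + 1 times for each i, so summing that should land close to n*ln(n).

#include <cmath>
#include <iostream>

int main() {
    for (long long n : {100, 10000, 1000000}) {
        long long total = 0;
        for (long long i = 1; i <= n; i++)
            total += (n - 1) / i + 1;  // exact count of j = 1, 1+i, 1+2i, ... <= n
        std::cout << "n=" << n << " total=" << total
                  << " n*ln(n)=" << (long long)(n * std::log((double)n)) << "\n";
    }
}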
3) This looks like kind of a trick question. The inner loop does real work only on the first outer iteration (when i = 1, it runs about n times); once it exits with j = n + 1, we can never re-enter it, because no later line makes j <= n again. We keep running j = j * i, where i is a positive number, so j ends up at least as large as n!. For any significant value of n, this will cause an overflow. Ignoring that possibility, the code performs O(n) operations in total.
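
Here is a sketch confirming the O(n) total for snippet 3 (using long double for j to sidestep the overflow mentioned above):

#include <iostream>

int main() {
    long long n = 1000, inner = 0;
    long double j = 1;  // j would overflow any integer type (it approaches n!)
    for (long long i = 1; i <= n; i++)
        for (j = j * i; j <= n; j = j + i)
            inner++;
    std::cout << "inner iterations = " << inner << "\n";  // prints 1000, i.e. n
}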
Consider this fragment of code
int sum = 0;
for (int i = 0; i <= n*n; i = i*2) {
    sum++;
}
How does one do a quick, proper analysis to get the order of growth of the worst-case running time?
How does changing the increment statement to i = i*3 instead of i = i*2 change the worst-case running time?
And is our analysis affected by changing the comparison operator to < instead of <=?
int sum = 0;
for (int i = 0; i <= n*n; i = i*2) {
    sum++;
}
As it stands, this is an infinite loop which will never stop, since i never changes (0*2 = 0).
As complexity is defined only for algorithms, which by definition must terminate in a finite amount of time, it is undefined for this snippet.
However, if you change the code to the following :
int sum = 0;
for (int i = 1; i <= n*n; i = i*2) {
    sum++;
}
We can analyze the complexity as follows:
Let the loop run k - 1 times, and terminate at the kth update of i.
Since it's better to be redundant than to be unclear, here is what is happening:
Init(1) -> test(1) -> Loop(1) [i = 1]->
Update(2) -> test(2) -> Loop(2) [i = 2]->
...
Update(k - 1) -> test(k - 1) -> Loop(k - 1) [i = 2 ^ (k - 2)] ->
Update(k) -> test(k)->STOP [Test fails as i becomes 2 ^ (k - 1)]
Where Update(k) means the kth update (i = i * 2).
Since the increments in i are such that in the pth loop (equivalently, after the pth update) the value of i is 2 ^ (p - 1), we can say that at termination:
2 ^ (k - 1) > (n * n)
In words: we have terminated at the kth update. Whatever the value of i was, it was greater than (n * n), or we would have gone on to the kth pass of the loop. Taking log base 2 on both sides:
k ~ 2 * log(n)
Which implies that k is O(log(n)).
Equivalently, the number of times the loop runs is O(log(n)).
You can easily extend this idea to any limit (say n*n*n) and any increments (i*3, i*4 etc.)
The Big O complexity will be unaffected by using < instead of <=
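
A short sketch (mine) to verify both claims, the iteration count of about 2*log2(n) and the irrelevance of < vs. <=:

#include <iostream>

// Count loop iterations for i = 1; i <= limit (or i < limit); i *= 2.
long long count(long long limit, bool inclusive) {
    long long iters = 0;
    for (long long i = 1; inclusive ? i <= limit : i < limit; i *= 2)
        iters++;
    return iters;
}

int main() {
    for (long long n : {16, 1024, 32768}) {
        std::cout << "n=" << n
                  << "  <=: " << count(n * n, true)   // floor(2*log2(n)) + 1
                  << "  <: "  << count(n * n, false)  // differs by at most 1
                  << "\n";
    }
}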
Actually, this is an infinite loop:
i = 0
i = i*2 // 0*2 = 0
So this loop will never end. Make it i = 1; then sum ends up as the count of the powers of 2 up to n^2, not their sum.
For any loop, to analyse it you have to look at two things: the condition that will make it exit, and the update applied to the variable used in that condition.
For your code, you can notice that the loop stops once i has grown from 1 to n*n (n^2), and the variable i increases as i = i*2. Increasing i in this manner, the loop runs about log(n^2) times. You can see this by taking an example value of n^2, like 128, and iterating manually one by one, as in the trace below.
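
Doing that manual trace in code (a sketch, taking n*n = 128 as the example):

#include <iostream>

int main() {
    int limit = 128, steps = 0;
    for (int i = 1; i <= limit; i = i * 2) {  // i: 1, 2, 4, ..., 128
        std::cout << i << " ";
        steps++;
    }
    std::cout << "\nsteps = " << steps << "\n";  // 8, i.e. log2(128) + 1
}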
This is my question. I've managed to work out an answer for part a, but I'm not really confident about my answer for part b.
In a recent court case, a judge cited a city for contempt and ordered a fine of $2 for the first day.
Each subsequent day, until the city followed the judge’s order, the fine was squared
(i.e., the fine progressed as follows: $2, $4, $16, $256, $65,536,...).
a. What would be the fine on day N?
b. How many days would it take for the fine to reach D dollars (a Big-Oh answer will do)?
Ans a: 2^(2^(n-1))
For answer b, I made the following program to find the big Oh.
int n = 5, result = 1, res = 1;  // initializations the snippet needs to run
for (int i = 0; i < n - 1; i++) {
    result = 2 * result;  // result = 2^(n-1)
}
printf("%d\t", result);
for (int j = 0; j < result; j++) {
    res = 2 * res;  // res = 2^result = 2^(2^(n-1))
}
printf("%d\n", res);
I have calculated the big Oh of the first loop to be O(n) (a summation over n terms).
And since the second loop runs 2^(n-1) times, its big Oh is 2^n, and adding them both gives (2^n) + n.
According to my algorithm, my answer is O(N).
int days = 5;
int fine = 2;
for (int i = 0; i < days - 1; i++)
    fine = fine * fine;
cout << fine;
The first loop runs n-1 times, the second runs 2^(n-1) times. The time-complexity
is the sum of these so (n-1) + 2^(n-1) = O(2^n + n) = O(2^n).
The question doesn't seem to be asking for the time-complexity, though. It's asking
how many days would pass before the fine reaches D dollars. This is the inverse of
the answer to a): O(log log D) ($65,536 is reached after log2(log2(65536)) + 1 = 5 days, for example).
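
A sketch of that inverse computation (my own): simulate the fine until it reaches D and compare with log2(log2(D)) + 1.

#include <cmath>
#include <iostream>

int main() {
    double D = 65536;
    int days = 1;
    for (double fine = 2; fine < D; fine = fine * fine)  // 2, 4, 16, 256, ...
        days++;
    std::cout << "simulated: " << days << " days, formula: "
              << std::log2(std::log2(D)) + 1 << "\n";  // both give 5
}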
You don't really need any software to answer these questions. Big O is a math term that happens to be used in software development.
Let's look at the progression:
2 = 2^1 = 2^(2^0)
4 = 2^2 = 2^(2^1)
16 = 2^4 = 2^(2^2)
256 = 2^8 = 2^(2^3)
65536 = 2^16 = 2^(2^4)
Answer to question a.
The penalty on day n would be 2^(2^(n-1)).
You could program it like this:
pow(2, pow(2, n-1));
Answer to question b.
x = log2 (log2 D) + 1
Or without the "+ 1" if we're not to count the first day.
This will return a positive real number, so you may want to ceil it depending on the requirements.
Now, in big-O notation, this is O(log(log D)), which describes how the number of days grows with D. It grows extremely slowly: squaring D adds exactly one day, since log2(log2(D^2)) = log2(2 * log2(D)) = log2(log2(D)) + 1.
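
You can see that slow growth directly (my sketch): squaring D adds exactly one day.

#include <cmath>
#include <iostream>

int main() {
    for (double D : {256.0, 65536.0}) {
        std::cout << "D=" << D
                  << " days=" << std::log2(std::log2(D)) + 1             // 4, 5
                  << " days for D^2=" << std::log2(std::log2(D * D)) + 1 // 5, 6
                  << "\n";
    }
}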
Here is the code:
int Outcome = 0;
for (int i = 0; i < N; i++)
    for (int j = i+2; j = 0; j--)
        Outcome += i*j;
Here's my analysis. Since the first line is an assignment statement, it takes exactly one time unit, O(1). The breakdown for line 2 is: 1 + (N+1) + N = 2N + 2 (one initialization, N+1 tests, N increments). For line 3,
since the loop's body is a single operation, the loop and its block perform i+1 operations; this is also a nested for loop. Finally, line 4 takes exactly one time unit to execute. Therefore, the big-Oh notation for this code in terms of N is O(N^2).
To be exact: As you say, line 4 is 1 operation. For a specific i, you execute the inner loop i+3 times. Therefore, your total number of operations is
sum(0 <= i <= N-1 : i+3)
= 3N + sum(0 <= i <= N-1 : i)
= 3N + N(N-1) / 2
= N^2/2 + 5N/2
= O(N^2)
Your intuition is correct about the final efficiency class, but it is possible to be more rigorous. The first thing is that you usually just pick the most expensive basic operation to count for your analysis. In this case it would likely be the multiplication in the innermost loop, which is executed once per iteration. So how many times is it called? On the first iteration of the outermost loop, the inner loop will iterate three times (j = 2, 1, 0). On the second outer iteration, it will be four times, and similarly up to N+2 times on the last (I'm assuming the inner loop condition is meant to be j >= 0). So that leaves us with the following summation:
sum(3, 4, 5, 6, ..., N+2)
= sum(1, 2, 3, 4, ..., N+2) - 1 - 2
= (N+2)(N+3)/2 - 3
Which is in O(N²) (and actually, since this count is exact and always the same, you can say it's in ϴ(N²)).
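
And to confirm the count, here is a sketch (assuming, as above, that the condition is j >= 0):

#include <iostream>

int main() {
    for (long long N : {10, 100, 1000}) {
        long long ops = 0, outcome = 0;
        for (long long i = 0; i < N; i++)
            for (long long j = i + 2; j >= 0; j--) {
                outcome += i * j;
                ops++;  // one multiplication per inner iteration
            }
        std::cout << "N=" << N << " ops=" << ops
                  << " formula=" << (N + 2) * (N + 3) / 2 - 3 << "\n";
    }
}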