Can you explain to me how to find the time complexity of this?
sum = 0;
for (k = 1; k <= n; k *= 2)
    for (j = 1; j <= k; j++)
        sum++;
So, I know the outer loop has time complexity O(log n), but since the number of iterations of the inner loop depends on the value of the outer loop variable, the complexity of this algorithm is not O(n log n).
The book says it is O(n).
I really don't understand how it is O(n). Can someone please explain it?
I'd be really grateful if you could go into the details.
A mathematical solution would help me understand better...
Just see how many times the inner loop runs:
1 + 2 + 4 + 8 + 16 +...+ n
Note that if n = 32, then this sum = 31 + 32 ≈ 2n.
This is because the sum of all the terms except the last is almost equal to the last term.
Hence the overall complexity = O(n).
EDIT:
The geometric series sum (http://www.mathsisfun.com/algebra/sequences-sums-geometric.html) comes out to:
(2^(log2 n + 1) - 1)/(2 - 1) = 2n - 1, which is of the order of n.
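If you want to convince yourself numerically, here is a minimal C sketch (the counting harness is my own) that runs the two loops and compares the inner loop's iteration count to 2n:

#include <stdio.h>

int main(void) {
    for (long n = 1; n <= (1L << 20); n *= 2) {
        long count = 0;                      /* inner-loop iterations */
        for (long k = 1; k <= n; k *= 2)
            for (long j = 1; j <= k; j++)
                count++;
        /* when n is a power of two, count is exactly 2n - 1 */
        printf("n = %8ld  count = %9ld  2n - 1 = %9ld\n", n, count, 2 * n - 1);
    }
    return 0;
}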
The outer loop executes log2(n) times, so it is O(log2 n).
The inner loop executes k times for each iteration of the outer loop, and in each iteration of the outer loop, k gets doubled (k *= 2).
so the total number of inner loop iterations = 1 + 2 + 4 + 8 + ... + 2^(log2 n)
= 2^0 + 2^1 + 2^2 + ... + 2^(log2 n)   (a geometric series)
= (2^(log2 n + 1) - 1)/(2 - 1)
= 2n - 1
= O(n)
so the inner loop is O(n).
So the total time complexity = O(n), since O(n + log2 n) = O(n).
UPDATE: It is also O(n log n), because n log n >> n for large n, but that bound is not asymptotically tight. You can say it is o(n log n) [little-o].
I believe you should proceed like the following to formally obtain your algorithm's order of growth complexity, using Mathematics (Sigma Notation):
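A sketch of one way to write that out (assuming n is a power of two, so the outer variable k takes the values 2^0, 2^1, ..., 2^(log2 n)):

T(n) = \sum_{i=0}^{log2 n} \sum_{j=1}^{2^i} 1
     = \sum_{i=0}^{log2 n} 2^i
     = 2^(log2 n + 1) - 1
     = 2n - 1
     = O(n)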
Trying to calculate the time complexity of some simple code, but I do not know how to calculate the time complexity of summing a subarray. The code is as follows:
for i = 1 to n {
    for j = i+1 to n {
        s = sum(A[i...j])
        B[i,j] = s
    }
}
So I know the nested for loops inevitably give us O(n^2), and I believe the function summing the subarray is also O(n^2). However, I think the time complexity for the whole algorithm is O(n^3). How do I get there with this information? Thank you!
I like to think of for loops as summations. As such, the number of steps (written as a function, T(n)) is:
T(n) = \sum_{i=1}^n numStepsInInnerForLoop
Here, I'm using something written in pseudo-MathJax, and have written the outer for loop as a summation from i=1 to n of the number of steps in the inner for loop (the one from j=i+1 to n). You can think of this analogously as summing the number of steps in the inner for loop, from i=1 to n. Substituting in numStepsInInnerForLoop results in:
T(n) = \sum_{i=1}^n [\sum_{j=i+1}^n numStepsOfSumFunction]
This function now represents the number of steps where both for loops have been fleshed out as summations. Assuming that s = sum(A[i...j]) takes j-i+1 steps and B[i,j]=s takes just one step, we can substitute numStepsOfSumFunction with these more useful parameters and the equation now becomes:
T(n) = \sum_{i=1}^n [\sum_{j=i+1}^n (j-i+1 + 1)]
When you solve these summations (using the kind of formulas you see on this summation tutorial page) you'll get a cubic function for T(n) which corresponds to O(n^3).
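For reference, here is one way those summations can be worked out (a sketch; I substitute m = j - i to put the inner sum in a standard form, writing j - i + 1 + 1 as m + 2):

T(n) = \sum_{i=1}^n \sum_{j=i+1}^n (j - i + 2)
     = \sum_{i=1}^n \sum_{m=1}^{n-i} (m + 2)
     = \sum_{i=1}^n [ (n-i)(n-i+1)/2 + 2(n-i) ]

The dominant term is \sum_{i=1}^n (n-i)^2 / 2 = (1/2) \sum_{k=0}^{n-1} k^2, which is roughly n^3/6, so T(n) is Θ(n^3) and hence O(n^3).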
Your reasoning leads me to believe that you're running this algorithm on an array of size n. If so, then every time you call the sum method in the inner for loop, you're calling it on a specific range of values (indices i to j). For each iteration of that for loop, the sum method will iterate through 2, 3, ..., then finally n elements in the last iteration as j increases from (i + 1) to n. Note that this is when i = 1; as i increases, the ranges won't go all the way to n anymore, since the largest one technically covers only n - i + 1 elements. Big O, though, is the worst case, so we use that scenario.
1 + 2 + 3 + ... + n = n(n+1)/2, which is O(n^2). The runtime of the sum method depends on the values of i and j; however, when run in the for loop with the given conditions, the total time complexity of all the calls to sum across one full run of the inner for loop is O(n^2). And finally, since the inner for loop is executed n times (once for each i), the total time complexity for the whole algorithm is O(n^3).
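To make the hidden third loop explicit, here is a minimal C sketch of the same algorithm with sum(A[i...j]) expanded inline (the array contents and 0-based indexing are my own choices):

#include <stdio.h>

#define N 5

int main(void) {
    int A[N] = {1, 2, 3, 4, 5};
    long B[N][N] = {0};

    for (int i = 0; i < N; i++) {
        for (int j = i + 1; j < N; j++) {
            long s = 0;
            for (int t = i; t <= j; t++)   /* this loop is sum(A[i...j]) */
                s += A[t];
            B[i][j] = s;                   /* three nested loops -> O(n^3) */
        }
    }

    printf("B[0][%d] = %ld\n", N - 1, B[0][N - 1]);  /* sum of the whole array */
    return 0;
}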
I calculated it to be O(N^2), but my instructor marked it incorrect in the exam. The correct answer was O(1). Can anyone help me: how did the time complexity come out to be O(1)?
The outer loop will run 2N times (int j = 2 * N, decrementing by 1 each time).
And since N is not changing, and i is reset to N every time (int i = N), the inner loop will always run log N (base 2) times.
(Notice the way i changes: i = i div 2.)
Therefore, the complexity is O(N log N).
Question: What happens when you repeatedly halve the input (or search space), like in binary search?
Answer: Well, you get log(N) complexity. (Reference: The Algorithm Design Manual by Steven S. Skiena.)
See the inner loop in your algorithm: i = i div 2 makes it a log(N)-complexity loop. Therefore the overall complexity will be N log(N).
Take this with a pinch of salt: whenever you divide your input (search space) by 2, 3, 4, or any other constant greater than 1, you get log(N) complexity.
P.S.: the complexity of your algorithm is nowhere near O(1).
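The question's code isn't shown here, but a minimal C sketch of the loop structure the answers describe (outer loop counting down from 2*N, inner loop halving i from N; the counting harness is my own) makes the N log N growth easy to verify:

#include <stdio.h>

int main(void) {
    for (long N = 16; N <= 1024; N *= 4) {
        long count = 0;                  /* body executions */
        for (long j = 2 * N; j > 0; j--)
            for (long i = N; i > 0; i /= 2)
                count++;
        /* each of the 2N outer iterations does floor(log2 N) + 1 inner steps */
        printf("N = %5ld  count = %8ld\n", N, count);
    }
    return 0;
}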
For this question, part A: I know the big-O is n^2, because the outer loop can run at most (n-1) times, and the inner loop can run at most n(n+1)/2 = n^2/2 + n/2 times in total; since we are calculating the big-O, we only take the dominant term, hence we have O(n^2).
But for part B, I know the array is A[1.....n] = {1,1,4,7,10,..,3(n-2)+1}, and
From my understanding, the outer loop has at least (n-1) iterations, and the inner loop has at least (n/2) iterations. So we have (n * n/2) = c*n^2 = Big-Omega(n^2); is this correct?
According to the answer sheet, there are at least n^2/4 iterations, which is Big-Omega(n^2). I just don't understand how they get n^2/4 and not n^2/2. Can someone explain how to do part B in detail please? Thanks.
You are correct: the best-case time complexity of the bizzare() procedure is Big-Omega(n^2) (note that Big-Omega(n^2/2) is the same class), assuming that the inner loop gets executed for all i.
Look at it this way:
Let n = A.size(),
so the first time, when i=2, the inner loop will run at least once,
when i=3, the inner loop will run at least twice,
when i=4, the inner loop runs at least thrice, and so on.
So the total best-case complexity is actually Big-Omega(sum of the first n-1 natural numbers) = Big-Omega(n*(n-1)/2) = Big-Omega(n^2). Also note that Big-Omega(n^2/2) = Big-Omega(n^2/4) = Big-Omega(n^2). If you take the average of the outer loop times the average of the inner loop, that gives you n^2/4 iterations on average, assuming the distribution of the data is uniform, which means half the iterations go to the if block and half go to the else block. The constant really doesn't matter.
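As for where the answer sheet's n^2/4 likely comes from (a sketch of the standard lower-bound trick, not specific to this exact procedure since its code isn't shown here): if you can show that for at least n/2 values of i the inner loop performs at least n/2 iterations, then the total number of iterations is at least
(n/2) * (n/2) = n^2/4 = Big-Omega(n^2).
The 1/4 comes from throwing away half of each loop's range to get a clean bound; as noted above, the constant doesn't change the Omega class, so n^2/4 and n^2/2 give the same answer.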
Find the big-O characterization:
input: n
s <- 0
for i <- 1 to n^2 do
    for j <- 1 to i do
        s <- s + i
Both loops iterate n^2 times. I'm not sure if I should add or multiply the loops. Right now I think the answer is O(n^4).
The outer loop runs i from 1 to n^2, while the inner one runs j from 1 to i.
Thus, the inner loop has a complexity of i (the inner loop needs i steps, and i is varying). Let us denote the runtime of the inner loop by IL(i) = i.
To get the total runtime, we see that the inner loop is executed n^2 times with varying "parameter" i. Thus, we obtain as total runtime:
\sum_{i=1}^{n^2} IL(i) = \sum_{i=1}^{n^2} i
That is, you have to sum up all numbers from 1 to n^2. There are many basic textbooks explaining how to evaluate this.
The result is (n^2)(n^2+1)/2, which leads to an overall complexity of O(n^4).
You are correct: the answer is O(n^4). First, the inner loop costs i. Then the outer loop costs \sum_{i=1}^{p} i = p(p+1)/2, where p = n^2. That gives a cost of O(n^4).
Your argument is almost correct.
The number of loop iterations:
1 + 2 + ... + n^2
= (n^2+1)*(n^2)/2
= (n^4 + n^2)/2
So the answer is Θ(n^4) (as a side note it also is O(n^4)).
You can prove that formally by choosing three constants c1, c2 and n0 such that:
c1*n^4 <= (n^4 + n^2)/2 <= c2*n^4   for all n >= n0
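For instance (one concrete choice; others work too): take c1 = 1/2, c2 = 1 and n0 = 1. The left inequality n^4/2 <= (n^4 + n^2)/2 always holds, and the right inequality (n^4 + n^2)/2 <= n^4 is equivalent to n^2 <= n^4, which holds for all n >= 1.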
Since the maximum value of i is n^2, that gives us O(n^2) * O(n^2) which is equal to O(n^4).
You can simply check the basic sum formula. Your sum goes not to N but to N^2, which gives you
n^2(n^2+1)/2
and this is indeed O(n^4)
for (int j = 0, k = 0; j < n; j++)
    for (double m = 1; m < n; m *= 2)
        k++;
I think it's O(n^2) but I'm not certain. I'm working on a practice problem and I have the following choices:
O(n^2)
O(2^n)
O(n!)
O(n log(n))
Hmmm... well, break it down.
It seems obvious that the outer loop is O(n). It is increasing by 1 each iteration.
The inner loop, however, multiplies by 2 each iteration. Exponentials are certainly related (in fact, inversely) to logarithms.
Why have you come to the O(n^2) solution? Prove it.
It's O(n log2 n). The code block runs n * log2 n times.
Suppose n = 16. Then the first loop runs 16 (= n) times, and the second loop runs 4 (= log2 n) times (m = 1, 2, 4, 8). So the inner statement k++ runs 64 times = (n * log2 n) times.
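A quick numerical check (a minimal C sketch; the printing harness is my own):

#include <stdio.h>

int main(void) {
    for (int n = 16; n <= 1024; n *= 4) {
        int k = 0;
        for (int j = 0; j < n; j++)
            for (double m = 1; m < n; m *= 2)
                k++;
        printf("n = %5d  k = %7d\n", n, k);   /* n = 16 gives k = 64 */
    }
    return 0;
}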
Let's look at the worst-case behaviour. In the second loop, m goes 1, 2, 4, 8, .... Say n is 2^k for some k >= 0; in the worst case the loop keeps doubling until m reaches 2^k and the condition m < n fails. The number of values of m examined by then is O(k), which is O(log n). The first loop is O(n) (too simple to need working out). So the order of the whole code is O(n log n).
A generic way to approach these sorts of problems is to consider the order of each loop, and because they are nested, you can multiply the "O" notations.
Some simple rules for big "O":
O(n)O(m) = O(nm)
O(n) + O(m) = O(n + m)
O(kn) = O(n) where 'k' is some constant
The 'j' loop iterates across n elements, so clearly it is O(n).
The 'm' loop iterates across log(n) elements, so it is O(log(n)).
Since the loops are nested, our final result would be O(n) * O(log(n)) = O(n*log(n)).