Here's some code segment I'm trying to find the big-theta for:
i = 1
while i ≤ n do #loops Θ(n) times
A[i] = i
i = i + 1
for j ← 1 to n do #loops Θ(n) times
i = j
while i ≤ n do #loops n times at worst when j = 1, once at best when j = n.
A[i] = i
i = i + j
So given that the inner while loop will be a summation from 1 to n, the big theta is Θ(n^2). Does that mean the big theta is Θ(n^2) for the entire code?
The first while loop and the inner while loop should be equal to Θ(n) + Θ(n^2), which should just equal Θ(n^2).
Thanks!
for j = 1 to n step 1
for i = j to n step j
# constant time op
The double loop is O(n⋅log(n)) because the number of iterations in the inner loop falls off inversely with j. Counting the total number of iterations gives:
floor(n/1) + floor(n/2) + ... + floor(n/n) <= n⋅(1/1 + 1/2 + ... + 1/n) ∼ n⋅log(n)
The partial sums of the harmonic series have logarithmic behavior asymptotically, so the above shows that the double loop is O(n⋅log(n)). That can be strengthened to Θ(n⋅log(n)) with a math argument involving the Dirichlet Divisor Problem.
[ EDIT ] For an alternative derivation of the lower bound that establishes the Θ(n⋅log(n)) asymptote, it is enough to use the < part of the x - 1 < floor(x) <= x inequality, avoiding the more elaborate math (mentioned above) that gives the exact expression.
floor(n/1) + floor(n/2) + ... + floor(n/n) > (n/1 - 1) + (n/2 - 1) + ... + (n/n - 1)
= n⋅(1/1 + 1/2 + ... + 1/n) - n
∼ n⋅log(n) - n
∼ n⋅log(n)
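As a quick empirical check (my own addition, not part of the original answer; the test sizes are arbitrary), the following C++ harness counts the inner-loop iterations directly and divides by n⋅ln(n). The ratio drifts toward 1, consistent with the Θ(n⋅log(n)) bound:

#include <cmath>
#include <iostream>

int main() {
    // Tally the inner-loop iterations of the j-stepped double loop and
    // divide by n*ln(n); the ratio tends toward 1 (slowly), as derived above.
    for (long long n : {1000LL, 100000LL, 10000000LL}) {
        long long count = 0;
        for (long long j = 1; j <= n; j++)
            for (long long i = j; i <= n; i += j)
                count++;  // stands in for the constant-time A[i] = i
        std::cout << "n = " << n << "  iterations = " << count
                  << "  count / (n*ln n) = " << count / (n * std::log(n)) << "\n";
    }
    return 0;
}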
I feel that even in the worst case, the condition is true only two times, when j = i or j = i^2, and then the loop runs for an extra i + i^2 times.
In the worst case, if we take the sum of the inner 2 loops, it will be theta(i^2) + i + i^2, which is equal to theta(i^2) itself;
summation of theta(i^2) over the outer loop gives theta(n^3).
So, is the answer theta(n^3)?
I would say that the overall performance is theta(n^4). Here is your pseudo-code, given in text format:
for (i = 1 to n) do
for (j = 1 to i^2) do
if (j % i == 0) then
for (k = 1 to j) do
sum = sum + 1
Appreciate first that the j % i == 0 condition will only be true when j is a multiple of i. For the worst case i = n, this occurs only n times, so the final inner for loop would only be hit n times coming from the for loop in j. The final for loop requires n^2 steps for the case where j is near the end of the range. On the other hand, it takes only roughly n steps for the start of the range. So the overall performance here should be somewhere between O(n^3) and O(n^4), but theta(n^4) should be valid.
For fixed i, the i integers 1 ≤ j ≤ i^2 such that j % i = 0 are {i, 2i, ..., i^2}. It follows that the inner loop is executed i times with arguments i * m for 1 ≤ m ≤ i, and the guard is executed i^2 times. Thus, the complexity function T(n) ∈ Θ(n^4) is given by:
T(n) = ∑[i=1,n] (∑[j=1,i^2] 1 + ∑[m=1,i] ∑[k=1,i*m] 1)
= ∑[i=1,n] ∑[j=1,i^2] 1 + ∑[i=1,n] ∑[m=1,i] ∑[k=1,i*m] 1
= n^3/3 + n^2/2 + n/6 + ∑[i=1,n] ∑[m=1,i] ∑[k=1,i*m] 1
= n^3/3 + n^2/2 + n/6 + n^4/8 + 5n^3/12 + 3n^2/8 + n/12
= n^4/8 + 3n^3/4 + 7n^2/8 + n/4
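To double-check that closed form (a verification sketch I added, not from the original answer; the test sizes are arbitrary), this brute-force counter tallies the guard checks and inner-loop steps exactly as in T(n) above; the two printed columns should agree for every n:

#include <iostream>

int main() {
    // Brute-force count of guard checks plus inner-loop iterations,
    // compared against the closed form n^4/8 + 3n^3/4 + 7n^2/8 + n/4,
    // written over a common denominator so integer division is exact.
    for (long long n : {1LL, 2LL, 5LL, 10LL, 30LL, 60LL}) {
        long long count = 0;
        for (long long i = 1; i <= n; i++)
            for (long long j = 1; j <= i * i; j++) {
                count++;                          // the j % i == 0 guard
                if (j % i == 0)
                    for (long long k = 1; k <= j; k++)
                        count++;                  // sum = sum + 1
            }
        long long closed = (n*n*n*n + 6*n*n*n + 7*n*n + 2*n) / 8;
        std::cout << "n = " << n << "  counted = " << count
                  << "  closed form = " << closed << "\n";
    }
    return 0;
}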
Computing complexity and Big O of an algorithm
T(n) = 5n log n + 3 log n + 2 // logs to the base 2, so Big O = O(n log n)
for(int x = 0,i = 1;i <= N;i*=2)
{
for(int j = 1 ;j <= i ;j++)
{
x++;
}
}
The Big O expected was linear, whereas mine is logarithmic.
Your Big-Oh analysis is not correct. While it is true that the outer loop is executed log n times, the inner loop is linear in i at each iteration.
If you count the total number of iterations of the inner loop, you will see that the whole thing is linear:
The inner loop will do 1 + 2 + 4 + 8 + 16 + ... + (the last power of 2 <= N) iterations. This sum will be between N and 2*N, which makes the whole loop linear.
Let me explain why your analysis is wrong.
It is clear that the inner loop will execute 1 + 2 + 4 + ... + 2^k times, where k is the biggest integer which satisfies 2^k <= n. This implies that the upper bound for k is log2(n).
Without loss of generality we can take the upper bound for k and assume that k = log2(n) is an integer; the complexity then equals 1 + 2 + 4 + ... + 2^(log2(n)), which is a geometric series, so it is equal to 2^(log2(n) + 1) - 1 = 2n - 1.
Therefore in O notation it is O(n).
First, you should notice that your analysis is not logarithmic! N log N is not logarithmic.
Also, the time complexity is T(N) = sum_{j = 0}^{log(N)} 2^j (as the value of i doubles each time). Hence, T(N) = 2^(log(N) + 1) - 1 = 2N - 1 = Θ(N).
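A quick way to convince yourself (my sketch, not from either answer; the values of N are arbitrary): count the iterations for a few values of N and divide by N. The ratio stays between 1 and 2, so the nest is linear:

#include <iostream>

int main() {
    // Count how many times x++ runs; the total 1 + 2 + 4 + ... + 2^k
    // always lands between N and 2N, i.e. the loop nest is linear in N.
    for (long long N : {10LL, 1000LL, 1000000LL, 100000000LL}) {
        long long x = 0;
        for (long long i = 1; i <= N; i *= 2)
            for (long long j = 1; j <= i; j++)
                x++;
        std::cout << "N = " << N << "  iterations = " << x
                  << "  iterations/N = " << static_cast<double>(x) / N << "\n";
    }
    return 0;
}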
I'm trying to figure out the exact big-O value of algorithms. I'll provide an example:
for (int i = 0; i < n; i++) // 2N + 2
{
for (int x = i; x < n; x++) // N * 2N + 2 ?
{
sum += i; // N
}
} // Extra N?
So if I break some of this down, int i = 0 would be O(1), i < n is N+1, i++ is N, multiply the inner loop by N:
2N + 2 + N(1 + N + 1 + N) = 2N^2 + 2N + 2N + 2 = 2N^2 + 4N + 2
Add an N for the loop termination and the sum constant, = 3N^2 + 5N + 2...
Basically, I'm not 100% sure how to calculate the exact O notation for an algorithm, my guess is O(3N^2 + 5N + 2).
What do you mean by exact? Big O is an asymptotic upper bound, so it's by definition not exact.
Thinking about i=0 as O(1) and i<n as O(N+1) is not correct. Instead, think of the outer loop as doing something n times, and for every iteration of the outer loop, the inner loop is executed at most n times. The calculation inside the loop takes constant time (the calculation is not getting more complex as n gets bigger). So you end up with O(n*n*1) = O(n^2), quadratic complexity.
When asking about "exact", you're running the inner loop from 0 to n, then from 1 to n, then from 2 to n, ... , from (n-1) to n, each time doing a constant time operation. So you do n + (n-1) + (n-2) + ... + 1 = n*(n+1)/2 = n^2/2 + n/2 iterations. To get from the exact number of calculations to big O notation, omit constants and lower-order terms, and you'll end up with O(n^2) (the 1/2 and +n/2 are omitted).
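If you want to see that exact count without doing the algebra (a small sketch I'm adding here, not part of the answer; the test sizes are arbitrary), this mirrors the original loops and prints the iteration count next to n*(n+1)/2:

#include <iostream>

int main() {
    // Count exactly how many times the loop body runs and compare it
    // with n*(n+1)/2; dropping constants and lower-order terms gives O(n^2).
    for (long long n : {10LL, 100LL, 1000LL, 10000LL}) {
        long long count = 0;
        for (long long i = 0; i < n; i++)
            for (long long x = i; x < n; x++)
                count++;  // stands in for the constant-time sum += i
        std::cout << "n = " << n << "  count = " << count
                  << "  n*(n+1)/2 = " << n * (n + 1) / 2 << "\n";
    }
    return 0;
}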
Big O means worst-case complexity.
And here the worst case will occur only if both loops run n times each, i.e. n*n.
So, the complexity is O(n^2).
What is the time complexity for the following function?
for(int i = 0; i < a.size; i++) {
for(int j = i; j < a.size; j++) {
//
}
}
I think it is less than big O n^2 because we aren't iterating over all of the elements in the second for loop. I believe the time complexity comes out to be something like this:
n[ (n) + (n-1) + (n-2) + ... + (n-n) ]
But when I solve this formula it comes out to be
n^2 - n + n^2 - 2n + n^2 - 3n + ... + n^2 - n^2
Which doesn't seem correct at all. Can somebody tell me exactly how to solve this problem, and where I went wrong?
That is O(n^2). If you consider the iteration where i = a.size - 1, and you work your way backwards (i = a.size - 2, i = a.size - 3, etc.), you are looking at the following sum of number of iterations, where n = a.size:
1 + 2 + 3 + 4 + ... + n
The sum of this series is n(n+1)/2, which is O(n^2). Note that big-O notation ignores constants and takes the highest polynomial power when it is applied to a polynomial function.
It will run for:
1 + 2 + 3 + .. + n
Which is 1/2 n(n+1), which gives us O(n^2).
The Big-O notation will only keep the dominant term, neglecting constants too.
Big-O on its own only distinguishes algorithms for the same problem when their dominant terms differ.
If the dominant terms are the same, you need to compare the Big-Theta or the exact time complexity, which will show the minor differences.
Example
A
for i = 1 .. n
for j = i .. n
..
B
for i = 1 .. n
for j = 1 .. n
..
We have
Time(A) = 1/2 n(n+1) ~ O(n^2)
Time(B) = n^2 ~ O(n^2)
O(A) = O(B)
T(A) < T(B)
Analysis
To visualize how we got 1 + 2 + 3 + .. + n:
for i = 1 .. n:
print "(1 + "
sum = 0
for j = i .. n:
sum++
print sum") + "
will print the following:
(1+n) + (1+(n-1)) + .. + (1+3) + (1+2) + (1+1)
= (n+1) + n + (n-1) + .. + 3 + 2
= (1 + 2 + 3 + .. + n + (n+1)) - 1
= 1/2 (n+1)(n+2) - 1
= 1/2 n^2 + 3/2 n
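Here is a small counting harness (my addition, not part of the answer; the test sizes are arbitrary) that makes the Time(A) vs Time(B) comparison above concrete:

#include <iostream>

int main() {
    // Exact iteration counts for variants A and B above: same O(n^2)
    // growth, but A does roughly half the work of B (the ratio -> 1/2).
    for (long long n : {10LL, 100LL, 1000LL}) {
        long long a = 0, b = 0;
        for (long long i = 1; i <= n; i++)
            for (long long j = i; j <= n; j++)
                a++;  // variant A: j starts at i
        for (long long i = 1; i <= n; i++)
            for (long long j = 1; j <= n; j++)
                b++;  // variant B: j starts at 1
        std::cout << "n = " << n << "  A = " << a << "  B = " << b
                  << "  A/B = " << static_cast<double>(a) / b << "\n";
    }
    return 0;
}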
Yes, the number of iterations is strictly less than n^2, but it's still Θ(n^2). It will eventually be greater than n^k for any k<2, and it will eventually be less than n^k for any k>2.
(As a side note, computer scientists often say big-O when they really mean big-theta (Θ). It's technically correct to say that almost every algorithm you've seen has O(n!) running time; all reasonable algorithms have running times that grow no more quickly than n!. But it's not really useful to say that the complexity is O(n!) if it's also O(n log n), so by some kind of Gricean maxim we assume that when someone says an algorithm's complexity is O(f(x)), f(x) is as small as possible.)
Here's the pseudocode:
Baz(A):
big = −∞
for i = 1 to length(A)
for j = 1 to length(A) - i + 1
sum = 0
for k = j to j + i - 1
sum = sum + A(k)
if sum > big
big = sum
return big
So line 3 (the for i loop) will be O(n) (n being the length of the array, A).
I'm not sure what line 4 (the for j loop) would be... I know it decreases by 1 each time it is run, because i will increase.
And I can't get line 6 (the for k loop) without getting line 4...
All help is appreciated, thanks in advance.
Let us first understand how the first two for loops work.
for i = 1 to length(A)
for j = 1 to length(A) - i + 1
The first for loop will run from 1 to n (the length of array A), and the second for loop will depend on the value of i. So when i = 1, the second for loop will run n times. When i increments to 2, your second for loop will run (n - 1) times... and so on, down to 1.
So your second for loop will run as follows:
n + (n - 1) + (n - 2) + (n - 3) + .... + 1 times...
You can use the following formula: sum(1 to n) = n * (n + 1) / 2, which gives (n^2 + n)/2. So we have Big Oh for these two loops as
O(n^2) (Big Oh of n squared)
Now let us consider the third loop also...
Your third for loop looks like this
for k = j to j + i - 1
But this actually means,
for k = 0 to i - 1 (you are just shifting the range of values by adding/subtracting j, but the number of times the loop runs does not change, as the difference remains the same)
So your third loop will run 1 time (the value of i) for each of the n iterations of the second loop, then 2 times (the new value of i) for each of the (n - 1) iterations of the second loop, and so on...
So you get:
n + 2(n-1) + 3(n-2) + 4(n-3) + .... + n*1
= sum(1 to n) of i * (n - i + 1)
= (n + 1) * [1 + 2 + ... + n] - [1^2 + 2^2 + ... + n^2]
= (n + 1) * n(n + 1)/2 - n(n + 1)(2n + 1)/6
= n(n + 1)(n + 2)/6
which is O(n^3).
So your time complexity will be n^3 (Big Oh of n cubed).
Hope this helps!
Methodically, you can follow the steps using Sigma Notation:
Baz(A):
big = −∞
for i = 1 to length(A)
for j = 1 to length(A) - i + 1
sum = 0
for k = j to j + i - 1
sum = sum + A(k)
if sum > big
big = sum
return big
For Big-O, you need to look for the worst-case scenario.
Also, the easiest way to find the Big-O is to look at the most important parts of the algorithm; these are usually the loops or the recursion.
So we have this part of the algorithm consisting of loops
for i = 1 to length(A)
for j = 1 to length(A) - i + 1
for k = j to j + i - 1
sum = sum + A(k)
We have,
SUM { SUM { i } for j = 1 to n-i+1 } for i = 1 to n
= 1/6 n (n+1) (n+2)
= (1/6 n^2 + 1/6 n) (n + 2)
= 1/6 n^3 + 2/6 n^2 + 1/6 n^2 + 2/6 n
= 1/6 n^3 + 3/6 n^2 + 2/6 n
= 1/6 n^3 + 1/2 n^2 + 1/3 n
T(n) ~ O(n^3)
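To tie it together (a verification sketch of my own, not from either answer; the test sizes are arbitrary), running Baz's loop nest and counting the innermost line reproduces the n(n+1)(n+2)/6 closed form derived above:

#include <iostream>

int main() {
    // Count executions of "sum = sum + A(k)" in Baz's loop nest and
    // compare with the closed form n(n+1)(n+2)/6.
    for (long long n : {5LL, 50LL, 500LL}) {
        long long count = 0;
        for (long long i = 1; i <= n; i++)
            for (long long j = 1; j <= n - i + 1; j++)
                for (long long k = j; k <= j + i - 1; k++)
                    count++;  // the innermost statement
        std::cout << "n = " << n << "  counted = " << count
                  << "  n(n+1)(n+2)/6 = " << n * (n + 1) * (n + 2) / 6 << "\n";
    }
    return 0;
}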