I do not understand why there is no factor of √n in the time complexity.
Here is the code
def sieve_of_erastothenes(max):
    flags = [True] * (max + 1)
    num = 2
    while num * num <= max:
        if flags[num] is True:  # if it is a prime number
            cross_off(flags, num)
        num += 1
    return [i for i in range(2, len(flags)) if flags[i] is True]

def cross_off(flags, prime):
    # no point in crossing anything off before prime*prime
    for num in range(prime * prime, len(flags), prime):
        flags[num] = False
The outer loop (num*num <= max) runs √n times. If the flag is still set (i.e. num is a prime p), the inner loop runs n/p times. Summing n/p over primes p gives n*loglogn.
So the time complexity should be O(√n * n*loglogn). But everywhere I read, it is given as O(nloglogn).
I think I have figured it out with the help of comments from Matt Timmermans and rici. I had 2 misunderstandings:
1. The inner loop for a given prime runs n / prime times.
2. The summation n/2 + n/3 + n/5 + n/7 + n/11 + ... already accounts for the outer √n factor; the summation itself comes from the outer loop.
The failure case of the if condition IS being counted, but it is dominated by the other term. So we can write it as:
num => 2 3 4 5 6 7 8 9 10 11
sum => n/2 + n/3 + 1 + n/5 + 1 + n/7 + 1 + 1 + 1 + n/11...
sum => (n/2 + n/3 + n/5 ...) + (1 + 1 + 1 + 1....)
sum => (nloglogn) + (n) # Worst case. 1+1+1.. will always be less than n
O(nloglogn + n) = O(nloglogn)
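To see the n*loglogn behaviour concretely, here is a small instrumented variant of the sieve (my own sketch, not the original code; sieve_work is a hypothetical helper name) that counts both the inner-loop crossings and the outer-loop checks. The total grows like n*loglog(n), not like √n * n*loglog(n):

import math

def sieve_work(n):
    flags = [True] * (n + 1)
    crossings = 0                  # inner-loop iterations (the n/p terms)
    checks = 0                     # outer-loop iterations (the 1 + 1 + ... terms)
    num = 2
    while num * num <= n:
        checks += 1
        if flags[num]:
            for multiple in range(num * num, n + 1, num):
                flags[multiple] = False
                crossings += 1
        num += 1
    return crossings + checks

for n in (10**4, 10**5, 10**6):
    print(n, sieve_work(n), round(n * math.log(math.log(n))))   # same order of growth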
I am struggling with this question and would like some help, thank you.
Determine the big O running time of the method myMethod() by counting the
approximate number of operations it performs. Show all details of your answer.
Note: the symbol % represents the modulus operator, that is, the remainder of a
number divided by another number.
static int myMethod(A[], n) {
    count ← 0
    for i ← 0 to n - 1 {
        for j ← i to n - 1 {
            count ← count + A[j]
            k ← 1
            while (k < n + 2) {
                if (k % 2 == 0) {
                    count = count + 1
                }
                k++
            }
        }
    }
    return count
}
The outer for loop with the variable i runs n times.
The inner for loop with the variable j runs n - i times for each iteration of the outer loop. In aggregate the inner loop therefore runs n + (n-1) + (n-2) + ... + 1 times, which equals n * (n + 1) / 2.
The while loop inside the inner for loop runs n + 1 times for each iteration of the inner for loop (its condition is tested n + 2 times, counting the final failing test).
Counting the condition tests, the while loop therefore contributes (n * (n + 1) / 2) * (n + 2) operations. This produces ((n^2 + n) / 2) * (n + 2) = (n^3 + 2n^2 + n^2 + 2n) / 2 = (n^3 + 3n^2 + 2n) / 2.
Dropping lower-degree terms and constants we get O(n^3).
You could also have argued that n + (n-1) + ... + 1 is O(n^2), and multiplying by a linear inner operation gives O(n^3), which would have been more intuitive and faster.
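To double-check the arithmetic, here is a small sketch (my own addition; count_ops is a hypothetical helper name) that runs the loops of myMethod and counts the while-condition tests; the count should equal (n^3 + 3n^2 + 2n) / 2 exactly:

def count_ops(n):
    tests = 0
    for i in range(0, n):              # for i <- 0 to n - 1
        for j in range(i, n):          # for j <- i to n - 1
            k = 1
            while True:
                tests += 1             # one evaluation of (k < n + 2)
                if not (k < n + 2):
                    break
                k += 1
    return tests

for n in (5, 10, 20):
    print(n, count_ops(n), (n**3 + 3*n**2 + 2*n) // 2)   # the two counts should match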
Say n here is any number less than 256: the inner loop condition is going to be true 4 times. Now, what kind of sequence does the inner loop follow as n increases?
for (int i = 1; i <= n; i++) {
    for (int j = 2; j <= n; j = j*j) {
    }
}
The outer loop will be iterated n times. The inner loop squares the previous value each time, i.e., j takes the values 2, 2^2, 2^4, 2^8, ..., 2^{2^k}, and it stops once j exceeds n. Hence, the time complexity is n * k, where k is the number of inner iterations. To compute k, we need the k for which 2^{2^k} ≈ n:
2^{2^k} ≈ n  =>  2^k ≈ log(n)  =>  k = Theta(log(log(n)))
Therefore, the time complexity is Theta(n log(log(n))).
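To get a concrete feel for the sequence, here is a small sketch (my own addition; inner_iterations is a hypothetical helper name) that counts the inner-loop iterations for a single pass of the outer loop; the count tracks log2(log2(n)) up to an additive constant:

import math

def inner_iterations(n):
    count = 0
    j = 2
    while j <= n:          # mirrors: for (int j = 2; j <= n; j = j*j)
        count += 1
        j = j * j
    return count

for n in (16, 256, 65536, 2**32):
    print(n, inner_iterations(n), math.log2(math.log2(n)))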
I feel that even in the worst case the condition is true only two times, when j = i or j = i^2, and then the loop runs for an extra i + i^2 steps.
In the worst case, if we take the sum of the inner 2 loops it will be theta(i^2) + i + i^2, which is equal to theta(i^2) itself.
Summing theta(i^2) over the outer loop gives theta(n^3).
So, is the answer theta(n^3)?
I would say that the overall performance is theta(n^4). Here is your pseudo-code, given in text format:
for (i = 1 to n) do
    for (j = 1 to i^2) do
        if (j % i == 0) then
            for (k = 1 to j) do
                sum = sum + 1
Appreciate first that the j % i == 0 condition will only be true when j is a multiple of i. For a given i, this occurs only i times (so at most n times), so the innermost for loop is only reached i times from the for loop in j. The innermost for loop requires about i^2 steps (up to n^2) when j is near the end of the range, but only about i steps at the start of the range. So the overall performance should be somewhere between O(n^3) and O(n^4), and theta(n^4) is in fact valid.
For fixed i, the i integers 1 ≤ j ≤ i^2 such that j % i == 0 are {i, 2i, ..., i^2}. It follows that the innermost loop is executed i times, with arguments i * m for 1 ≤ m ≤ i, and the guard is executed i^2 times. Thus, the complexity function T(n) ∈ Θ(n^4) is given by:
T(n) = Σ[i=1,n] (Σ[j=1,i^2] 1 + Σ[m=1,i] Σ[k=1,i*m] 1)
= Σ[i=1,n] Σ[j=1,i^2] 1 + Σ[i=1,n] Σ[m=1,i] Σ[k=1,i*m] 1
= n^3/3 + n^2/2 + n/6 + Σ[i=1,n] Σ[m=1,i] Σ[k=1,i*m] 1
= n^3/3 + n^2/2 + n/6 + n^4/8 + 5n^3/12 + 3n^2/8 + n/12
= n^4/8 + 3n^3/4 + 7n^2/8 + n/4
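As a sanity check (my own addition, not part of the answer above), a brute-force count of the guard tests and innermost steps can be compared against the closed form n^4/8 + 3n^3/4 + 7n^2/8 + n/4:

def brute_force(n):
    ops = 0
    for i in range(1, n + 1):
        for j in range(1, i * i + 1):
            ops += 1               # one evaluation of the guard (j % i == 0)
            if j % i == 0:
                ops += j           # for (k = 1 to j): j steps of sum = sum + 1
    return ops

for n in (4, 8, 16):
    closed_form = (n**4 + 6*n**3 + 7*n**2 + 2*n) // 8
    print(n, brute_force(n), closed_form)   # should agree exactly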
Here's the pseudocode:
Baz(A) {
    big = -∞
    for i = 1 to length(A)
        for j = 1 to length(A) - i + 1
            sum = 0
            for k = j to j + i - 1
                sum = sum + A(k)
            if sum > big
                big = sum
    return big
}
So line 3 will be O(n) (n being the length of the array, A)
I'm not sure what line 4 would be... I know it decreases by 1 each time it runs, because i increases.
And I can't get line 6 without getting line 4...
All help is appreciated, thanks in advance.
Let us first understand how the first two for loops work:
for i = 1 to length(A)
    for j = 1 to length(A) - i + 1
The first for loop runs from 1 to n (the length of array A), and the second for loop depends on the value of i. So when i = 1, the second for loop runs n times; when i increments to 2, the second for loop runs (n - 1) times, and so on down to 1.
So your second for loop will run as follows:
n + (n - 1) + (n - 2) + (n - 3) + ... + 1 times
You can use the formula sum(1 to n) = n * (n + 1) / 2, which gives (n^2 + n) / 2. So the Big Oh for these two loops is
O(n^2)
Now let us also consider the third loop.
Your third for loop looks like this:
for k = j to j + i - 1
But this actually means
for k = 0 to i - 1 (you are just shifting the range of values by adding/subtracting j, but the number of times the loop runs does not change, as the difference remains the same)
So your third loop runs once (i = 1) for each of the first n iterations of the second loop, then twice (i = 2) for each of the next (n - 1) iterations of the second loop, and so on.
So you get:
n + 2(n-1) + 3(n-2) + 4(n-3) + ...
= n + 2n - 2 + 3n - 6 + 4n - 12 + ...
= n(1 + 2 + 3 + ... + n) - (a sum of positive terms, which we can drop to get an upper bound)
<= n * (n(n+1)/2)
= O(n^3)
So your time complexity is O(n^3) (Big Oh of n cube).
Hope this helps!
Methodically, you can follow the steps using Sigma Notation:
Baz(A):
    big = -∞
    for i = 1 to length(A)
        for j = 1 to length(A) - i + 1
            sum = 0
            for k = j to j + i - 1
                sum = sum + A(k)
            if sum > big
                big = sum
    return big
For Big-O, you need to look at the worst-case scenario.
Also, the easiest way to find the Big-O is to look at the most important parts of the algorithm; these are usually loops or recursion.
So we have this part of the algorithm, consisting of loops:
for i = 1 to length(A)
    for j = 1 to length(A) - i + 1
        for k = j to j + i - 1
            sum = sum + A(k)
We have,
SUM { SUM { i } for j = 1 to n-i+1 } for i = 1 to n
= 1/6 n (n+1) (n+2)
= (1/6 n^2 + 1/6 n) (n + 2)
= 1/6 n^3 + 2/6 n^2 + 1/6 n^2 + 2/6 n
= 1/6 n^3 + 3/6 n^2 + 2/6 n
= 1/6 n^3 + 1/2 n^2 + 1/3 n
T(n) ~ O(n^3)
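If you want to confirm that count empirically, here is a small sketch (my own addition; count_baz_inner is a hypothetical helper name) that executes the three loops of Baz, counting only the innermost statement, and compares the total with n(n+1)(n+2)/6:

def count_baz_inner(n):
    count = 0
    for i in range(1, n + 1):
        for j in range(1, n - i + 2):      # j = 1 to length(A) - i + 1
            for k in range(j, j + i):      # k = j to j + i - 1
                count += 1                 # stands in for: sum = sum + A(k)
    return count

for n in (3, 5, 10):
    print(n, count_baz_inner(n), n * (n + 1) * (n + 2) // 6)   # should match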
I'm taking a course in complexity theory, and it needs some mathematical background that I have trouble with.
So while trying to do some practice, I got stuck on the example below:
1)  for (i = 1; i < n; i++) {
2)      SmallPos = i;
3)      Smallest = Array[SmallPos];
4)      for (j = i+1; j <= n; j++)
5)          if (Array[j] < Smallest) {
6)              SmallPos = j;
7)              Smallest = Array[SmallPos];
            }
8)      Array[SmallPos] = Array[i];
9)      Array[i] = Smallest;
    }
Thus, the total computing time is:
T(n) = (n) + 4(n-1) + [n(n+1)/2 - 1] + 3[n(n-1)/2]
= n + 4n - 4 + (n^2 + n)/2 - 1 + (3n^2 - 3n)/2
= 5n - 5 + (4n^2 - 2n)/2
= 5n - 5 + 2n^2 - n
= 2n^2 + 4n - 5
= O(n^2)
What I don't understand, or am confused about, is line 4 being analyzed as n(n+1)/2 - 1,
and line 5 as 3[n(n-1)/2].
I know that the sum of such a series is n(first + last)/2, but when I tried to calculate it as I understand it, it gave me a different result.
For line 4 I calculated that it should be n((n-1)+2)/2, according to n(first+last)/2, but here it's n(n+1)/2 - 1.
And the same for 3[n(n-1)/2]... I don't understand this either.
Also, here's what is written in the analysis; it might help if anyone can explain it to me:
Statement 1 is executed n times (n - 1 + 1); statements 2, 3, 8, and 9 (each representing O(1) time) are executed n - 1 times each, once on each pass through the outer loop. On the first pass through this loop with i = 1, statement 4 is executed n times; statement 5 is executed n - 1 times, and assuming a worst case where the elements of the array are in descending order, statements 6 and 7 (each O(1) time) are executed n - 1 times.
On the second pass through the outer loop with i = 2, statement 4 is executed n - 1 times and statements 5, 6, and 7 are executed n - 2 times, etc. Thus, statement 4 is executed (n) + (n-1) +... + 2 times and statements 5, 6, and 7 are executed (n-1) + (n-2) + ... + 2 + 1 times. The first sum is equal to n(n+1)/2 - 1, and the second is equal to n(n-1)/2.
Here's the link to the file containing this example:
http://www.google.com.eg/url?sa=t&rct=j&q=Consider+the+sorting+algorithm+shown+below.++Find+the+number+of+instructions+executed+&source=web&cd=1&cad=rja&ved=0CB8QFjAA&url=http%3A%2F%2Fgcu.googlecode.com%2Ffiles%2FAnalysis%2520of%2520Algorithms%2520I.doc&ei=3H5wUNiOINDLswaO3ICYBQ&usg=AFQjCNEBqgrtQldfp6eqdfSY_EFKOe76yg
Line 4: as the analysis says, it is executed n + (n-1) + ... + 2 times. This is a sum of (n-1) terms. In the formula you use, n(first+last)/2, n represents the number of terms. If you apply the formula to your sequence of n-1 terms, it gives (n-1)((n)+(2))/2 = (n^2 + n - 2)/2 = n(n+1)/2 - 1.
Line 5: the same formula can be used. As the analysis says, you have to calculate (n-1) + ... + 1. This is a sum of n-1 terms, with the first and last being n-1 and 1. The sum is given by (n-1)(n-1+1)/2 = n(n-1)/2. The factor 3 comes from the 3 statements (5, 6, 7) that are each executed n(n-1)/2 times.
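As a quick numeric check (my own addition; total_count is a hypothetical helper name), the following sketch adds up the per-statement counts exactly as the quoted analysis describes and compares the total with 2n^2 + 4n - 5:

def total_count(n):
    count = n                                        # statement 1: n times
    count += 4 * (n - 1)                             # statements 2, 3, 8, 9
    count += sum(n - i + 1 for i in range(1, n))     # statement 4: n + (n-1) + ... + 2
    count += 3 * sum(n - i for i in range(1, n))     # statements 5, 6, 7 (worst case)
    return count

for n in (2, 5, 10):
    print(n, total_count(n), 2*n*n + 4*n - 5)        # the two values should agree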