I believe it is O(n) because there are no nested loops and no dividing/multiplying going on. Am I right?
i <- 0
count <- 0
while (i < n)
    x <- random()
    y <- random()
    if (x^2 + y^2 <= 1)
        count <- count + 1
    i <- i + 1
done
pi <- 4*count/n
It is O(n) because the while loop runs n times.
Let's take a look at all the statements, this time adding some labels in brackets:
[1] i <- 0
[2] count <- 0
[3] while(i < n)
[3.1] x <- random()
[3.2] y <- random()
[3.3] if (x^2 + y^2 <= 1)
[3.3.1] count <- count + 1
[3.4] i <- i + 1
Now let's calculate the complexity of each of the statements:
[1] O(1)
[2] O(1)
[3] n times
[3.1] O(1)
[3.2] O(1)
[3.3] O(1)
[3.3.1] O(1)
[3.4] O(1)
So, the total complexity is
O(1) + O(1) + n * 4 * O(1) = O(n)
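For reference, here is the same pseudocode as a small runnable Python sketch (my own rendering, not from the question). Each statement inside the loop is O(1), so the whole estimate is n constant-time iterations, i.e. O(n):

import random

def estimate_pi(n):
    """Monte Carlo estimate of pi; the loop body is all O(1) work."""
    count = 0
    for _ in range(n):           # [3] runs n times
        x = random.random()      # [3.1] O(1)
        y = random.random()      # [3.2] O(1)
        if x * x + y * y <= 1:   # [3.3] O(1)
            count += 1           # [3.3.1] O(1)
    return 4 * count / n         # O(1)

print(estimate_pi(1_000_000))    # prints something close to 3.14159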
Here's a code segment I'm trying to find the big-theta for:
i = 1
while i ≤ n do              # loops Θ(n) times
    A[i] = i
    i = i + 1
for j ← 1 to n do           # loops Θ(n) times
    i = j
    while i ≤ n do          # loops n times at worst (j = 1), once at best (j = n)
        A[i] = i
        i = i + j
So given that the inner while loop amounts to a summation from 1 to n, the big theta is Θ(n^2). So does that mean the big theta is Θ(n^2) for the entire code?
The first while loop and the inner while loop should be equal to Θ(n) + Θ(n^2), which should just equal Θ(n^2).
Thanks!
for j = 1 to n step 1
    for i = j to n step j
        # constant time op
The double loop is O(n⋅log(n)) because the number of iterations of the inner loop is inversely proportional to j. Counting the total number of iterations gives:
floor(n/1) + floor(n/2) + ... + floor(n/n) <= n⋅(1/1 + 1/2 + ... + 1/n) ∼ n⋅log(n)
The partial sums of the harmonic series have logarithmic behavior asymptotically, so the above shows that the double loop is O(n⋅log(n)). That can be strengthened to Θ(n⋅log(n)) with a math argument involving the Dirichlet Divisor Problem.
[ EDIT ] For an alternative derivation of the lower bound that establishes the Θ(n⋅log(n)) asymptote, it is enough to use the < part of the x - 1 < floor(x) <= x inequality, avoiding the more elaborate math (linked above) that gives the exact expression.
floor(n/1) + floor(n/2) + ... + floor(n/n) > (n/1 - 1) + (n/2 - 1) + ... + (n/n - 1)
= n⋅(1/1 + 1/2 + ... + 1/n) - n
∼ n⋅log(n) - n
∼ n⋅log(n)
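If you want to convince yourself numerically (a quick throwaway Python check of my own, not part of the answer), count the iterations directly and compare them against the harmonic-sum bounds used above:

def double_loop_iterations(n):
    """Total iterations of: for j = 1..n: for i = j..n step j."""
    total = 0
    for j in range(1, n + 1):
        for i in range(j, n + 1, j):   # floor(n/j) iterations for this j
            total += 1
    return total

n = 10_000
H = sum(1.0 / k for k in range(1, n + 1))   # harmonic number H_n ~ log(n)
t = double_loop_iterations(n)
print(n * H - n, t, n * H)                  # t lies between n*H - n and n*H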
I feel that even in the worst case the condition is true only twice, when j = i or j = i^2, and then the loop runs for an extra i + i^2 times.
In the worst case, the sum of the inner two loops is theta(i^2) + i + i^2, which is equal to theta(i^2) itself;
summing theta(i^2) over the outer loop gives theta(n^3).
So, is the answer theta(n^3)?
I would say that the overall performance is theta(n^4). Here is your pseudo-code, given in text format:
for (i = 1 to n) do
    for (j = 1 to i^2) do
        if (j % i == 0) then
            for (k = 1 to j) do
                sum = sum + 1
Appreciate first that the j % i == 0 condition will only be true when j is a multiple of i. For a given i this happens only i times (at most n times), so the final inner for loop is only entered i times per iteration of the outer loop. That inner loop requires up to i^2 steps (as many as n^2) when j is near the end of its range, but only roughly i steps near the start of the range. So the overall performance here should be somewhere between O(n^3) and O(n^4), but theta(n^4) should be valid.
For fixed i, the i integers 1 ≤ j ≤ i^2 such that j % i = 0 are {i, 2i, ..., i^2}. It follows that the inner loop is executed i times, with arguments i * m for 1 ≤ m ≤ i, and the guard is executed i^2 times. Thus, the complexity function T(n) ∈ Θ(n^4) is given by:
T(n) = ∑[i=1,n] (∑[j=1,i^2] 1 + ∑[m=1,i] ∑[k=1,i*m] 1)
     = ∑[i=1,n] ∑[j=1,i^2] 1 + ∑[i=1,n] ∑[m=1,i] ∑[k=1,i*m] 1
     = n^3/3 + n^2/2 + n/6 + ∑[i=1,n] ∑[m=1,i] ∑[k=1,i*m] 1
     = n^3/3 + n^2/2 + n/6 + n^4/8 + 5n^3/12 + 3n^2/8 + n/12
     = n^4/8 + 3n^3/4 + 7n^2/8 + n/4
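As a sanity check (my own Python script, not part of the original question), counting the guard evaluations and innermost iterations directly reproduces the closed form above:

def T(n):
    """Count guard evaluations plus innermost-loop iterations."""
    ops = 0
    for i in range(1, n + 1):
        for j in range(1, i * i + 1):
            ops += 1                      # the j % i == 0 guard
            if j % i == 0:
                for k in range(1, j + 1):
                    ops += 1              # innermost body (sum = sum + 1)
    return ops

n = 20
print(T(n), n**4 / 8 + 3 * n**3 / 4 + 7 * n**2 / 8 + n / 4)   # 26355 and 26355.0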
for i = 1 to n do
    for j = 1 to i do
        for k = 1 to j do
            # constant-time operations
What is its time complexity in terms of 'n'?
The inner-most loop will obviously run j times. Assuming that it contains operations worth 1 time unit, this will be:
T_inner(j) = j
The middle loop will run i times, i.e.
T_middle(i) = Sum {j from 1 to i} T_inner(j)
= Sum {j from 1 to i} j
= i/2 * (1 + i)
Finally:
T_outer(n) = Sum {i from 1 to n} T_middle(i)
= Sum {i from 1 to n} (i/2 * (1 + i))
= 1/6 * n * (1 + n) * (2 + n)
= 1/6 n^3 + 1/2 n^2 + 1/3 n
And this is obviously O(n^3).
Note: This only counts the operations in the innermost block. It neglects the operations needed to run the loops themselves, but if you include those, you will see that the time complexity is the same.
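A quick empirical confirmation (my own sketch, not from the answer): counting executions of the innermost block matches the closed form exactly:

def inner_block_count(n):
    """How many times the innermost (constant-time) block runs."""
    ops = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            for k in range(1, j + 1):
                ops += 1
    return ops

n = 50
print(inner_block_count(n), n**3 / 6 + n**2 / 2 + n / 3)   # 22100 and 22100.0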
What is the big-O (O) and big-Omega (Ω) time complexity of the Count(A, B, n) algorithm below, in terms of n?
I did the following:
Algorithm Count (A, B, n)
Input: Arrays A and B of size n, where n is even. A, B store integers.

                                        # of operations:
i <- 0                                  1
sum <- 0                                1
while i < n/2 do                        n/2 + 1
    if A[i + n/2] < 0 then              n/2 + 1
        for j <- i + n/2 to n do        n/2 + n/2+1 + … + n = n(n+1)/2
            sum <- sum + B[j]           n/2 + … + n = n(n+1)/2
    i <- i + 1                          n/2 + 1
return sum                              1
The Algorithm Count runs in big-O and big-Omega of O(n^2) and Ω(n^2). The algorithm involves a nested "for" loop. The maximum number of primitive operations executed by the algorithm is 0.5n^2 + n + 1 in both the worst case and the best case.
I think the worst case is O(n^2), while the best case is Ω(1). The best case is when you have an array of size 2, and A[1] >= 0, in which case the algorithm only has to pass through the loop once. If n can be 0 (i.e. empty arrays), then that's even better.
To illustrate the best case (n = 2, assuming n = 0 is not acceptable, and A[1] >= 0), let's assume that an assignment operation takes constant time C1.
i <- 0
sum <- 0
These take constant time 2*C1.
while 0 < 2/2 do
takes constant time C2.
if A[0 + 2/2] < 0 then //evaluates to false when A[1] >= 0
takes constant time C3, and the nested for loop will never get executed.
i <- i + 1
takes constant time C4. The loop condition is then checked again:
while 1 < 2/2 do
which takes C2. Then the operation returns:
return sum
which takes C5.
So the total cost in the best-case scenario, where n = 2 and A[1] >= 0, is:
2*C1 + 2*C2 + C3 + C4 + C5 = O(1)
Now, you can argue that it should have been:
2*C1 + (n/2 + 1)*C2 + C3 + C4 + C5 = O(n)
but we already know that at best n = 2, which is constant.
In the case where n = 0 is allowed, the best scenario is:
2*C1 + C2 + C5 = O(1)
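For completeness, here is a 0-indexed Python rendering of the pseudocode (a sketch of my own; the original is 1-indexed, so the ranges are shifted accordingly), with the worst- and best-case behaviour noted in comments:

def count_algorithm(A, B, n):
    """0-indexed version of Algorithm Count(A, B, n); n is assumed even."""
    i = 0
    total = 0                               # 'sum' would shadow a Python builtin
    while i < n // 2:                       # n/2 iterations
        if A[i + n // 2] < 0:               # inner loop runs only for negative entries
            for j in range(i + n // 2, n):  # up to ~n/2 iterations
                total += B[j]
        i += 1
    return total

# Worst case: every A[i + n/2] < 0 -> the nested loop runs each time -> Theta(n^2).
# Best case:  no A[i + n/2] < 0   -> only the while loop runs (about n/2 checks),
#             which the answer above treats as O(1) for the smallest allowed n.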
I am trying to understand Big-O time complexity and am unfortunately struggling; I cannot seem to grasp the concept. I know my results are correct for the following two code fragments, but how I got there seems to be wrong. Would someone be able to help explain where I'm misunderstanding (without using sigma notation, please)? Thank you!
Code Fragment                      Time Complexity
sum ← 0                            O(1)
for i ← 1 to n do                  O(n)
    for j ← 1 to n do              O(n)
        k ← 1                      O(1)
        while k < n do
            k ← k * C              O(log n) - affected by C (due to multiplication)
            sum ← sum + 1          O(1)
-----------
O(1 x n x n x 1 x [log n] x 1)
O(n^2 log n)
Code Fragment                      Time Complexity
sum ← 0                            O(1)
for i ← 1 to n do                  O(n)
    for j ← 1 to i do              O(n)
        k ← 1                      O(1)
        while k < n do
            k ← k + C              O(n) – not affected by C, k is arbitrarily large
            sum ← sum + 1          O(1)
-----------
O(1 x n x n x 1 x n x 1)
O(n^3)
I see minor errors in the computation, though the final results are correct.
In the first algorithm:
O(1 x n x n x 1 x [log n] x 1)
should be
1 + n x n x (1 + (O(log n) x 2)) = O(n^2 * log n)
(the "x 2" accounts for the two statements inside the while loop).
In the second algorithm:
O(1 x n x n x 1 x n x 1)
should be
1 + n x O(n) x (1 + (O(n) x 2)) = O(n^3)
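If it helps, here is a short Python check of my own (not part of the answer) that counts the inner while-loop iterations for both fragments and compares them with the expected growth rates:

from math import log

def fragment1_ops(n, C=2):
    """While-loop iterations in the first fragment (k is multiplied by C)."""
    ops = 0
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            k = 1
            while k < n:
                k *= C
                ops += 1
    return ops

def fragment2_ops(n, C=2):
    """While-loop iterations in the second fragment (k is incremented by C)."""
    ops = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            k = 1
            while k < n:
                k += C
                ops += 1
    return ops

n = 200
print(fragment1_ops(n), n * n * log(n, 2))   # grows like n^2 * log n
print(fragment2_ops(n), n**3 / 4)            # grows like n^3 (about n^3 / (2*C) here)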