Big O runtime for this algorithm?

Here's the pseudocode:
Baz(A) {
    big = -∞
    for i = 1 to length(A)
        for j = 1 to length(A) - i + 1
            sum = 0
            for k = j to j + i - 1
                sum = sum + A(k)
            if sum > big
                big = sum
    return big
}
So line 3 will be O(n) (n being the length of the array A).
I'm not sure what line 4 would be... I know it decreases by 1 each time it runs, because i will increase.
And I can't get line 6 without getting line 4...
All help is appreciated, thanks in advance.
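In case it helps to run the pseudocode while following the answers below, here is a minimal C translation (my own sketch: the 1-based A(k) of the pseudocode is mapped onto a 0-based C array, and the array in main is just sample data):

#include <limits.h>
#include <stdio.h>

/* Sketch of Baz in C. A(k) in the pseudocode is 1-based, so it becomes
   A[k - 1] here. big ends up as the largest sum of any contiguous subarray. */
int baz(const int A[], int n) {
    int big = INT_MIN;                      /* stands in for -infinity */
    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= n - i + 1; j++) {
            int sum = 0;
            for (int k = j; k <= j + i - 1; k++)
                sum += A[k - 1];
            if (sum > big)
                big = sum;
        }
    }
    return big;
}

int main(void) {
    int A[] = {3, -1, 4, -1, 5};
    printf("%d\n", baz(A, 5));              /* prints 10 for this sample */
    return 0;
}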

Let us first understand how the first two for loops work.
for i = 1 to length(A)
for j = 1 to length(A) - i + 1
The first for loop runs from 1 to n (the length of array A), and the number of iterations of the second for loop depends on the value of i. So when i = 1 the second for loop runs n times; when i increments to 2, the second for loop runs (n - 1) times; and so on, down to 1.
So your second for loop will run as follows:
n + (n - 1) + (n - 2) + (n - 3) + .... + 1 times...
You can use the formula sum(1 to N) = N * (N + 1) / 2, which gives (N^2 + N)/2. So the Big O for these two loops is
O(n^2) (Big O of n squared)
Now let us also consider the third loop.
Your third for loop looks like this
for k = j to j + i - 1
But this actually means
for k = 0 to i - 1 (you are just shifting the range of values by adding/subtracting j, but the number of iterations does not change, since the difference stays the same)
So the third loop runs 1 time (the value of i) for each of the n iterations of the second loop, then 2 times for each of its (n - 1) iterations, and so on.
So you get:
n + 2(n-1) + 3(n-2) + 4(n-3) + ..... + n(1)
= n + 2n - 2 + 3n - 6 + 4n - 12 + ....
= n(1 + 2 + 3 + 4 + ....) - (a sum of nonnegative terms), which is at most
N * (N(N+1)/2)
= O(N^3)
So your time complexity will be O(n^3) (Big O of n cubed)
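As a quick numeric check of that bound (an illustrative snippet of my own, not part of the question), the series and the n * n(n+1)/2 estimate can be compared for small n:

#include <stdio.h>

/* Evaluates the series n + 2(n-1) + 3(n-2) + ... + n*1 from the answer
   above and compares it with the upper bound n * n(n+1)/2 used there. */
int main(void) {
    for (long n = 1; n <= 6; n++) {
        long series = 0;
        for (long i = 1; i <= n; i++)
            series += i * (n - i + 1);      /* i-th term of the series */
        printf("n=%ld  series=%ld  bound n*n(n+1)/2=%ld\n",
               n, series, n * n * (n + 1) / 2);
    }
    return 0;
}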
Hope this helps!

Methodically, you can follow the steps using Sigma Notation:

Baz(A):
    big = -∞
    for i = 1 to length(A)
        for j = 1 to length(A) - i + 1
            sum = 0
            for k = j to j + i - 1
                sum = sum + A(k)
            if sum > big
                big = sum
    return big
For Big-O, you need to look at the worst-case scenario.
Also, the easiest way to find the Big-O is to look at the most important parts of the algorithm; these are usually the loops or the recursion.
So we have this part of the algorithm, consisting of the loops:
for i = 1 to length(A)
    for j = 1 to length(A) - i + 1
        for k = j to j + i - 1
            sum = sum + A(k)
We have
SUM { SUM { i } for j = 1 to n-i+1 } for i = 1 to n
The inner sum is i(n - i + 1), and summing that over i = 1 to n gives
= 1/6 n (n+1) (n+2)
= 1/6 (n^2 + n) (n + 2)
= 1/6 (n^3 + 2n^2 + n^2 + 2n)
= 1/6 n^3 + 1/2 n^2 + 1/3 n
T(n) ~ O(n^3)
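To sanity-check the 1/6 n (n+1) (n+2) count, here is a small C sketch (illustrative only) that tallies how many times the innermost statement would execute and prints the closed form next to it:

#include <stdio.h>

/* Counts how often "sum = sum + A(k)" would run in Baz and compares
   the tally with the closed form n(n+1)(n+2)/6. */
int main(void) {
    for (long n = 1; n <= 6; n++) {
        long count = 0;
        for (long i = 1; i <= n; i++)
            for (long j = 1; j <= n - i + 1; j++)
                for (long k = j; k <= j + i - 1; k++)
                    count++;
        printf("n=%ld  count=%ld  n(n+1)(n+2)/6=%ld\n",
               n, count, n * (n + 1) * (n + 2) / 6);
    }
    return 0;
}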

Related

Determine the big O running time

I am struggling with this question and would like some help, thank you.
Determine the big O running time of the method myMethod() by counting the approximate number of operations it performs. Show all details of your answer.
Note: the symbol % represents the modulus operator, that is, the remainder of a number divided by another number.
๐‘ ๐‘ก๐‘Ž๐‘ก๐‘–๐‘ ๐‘–๐‘›๐‘ก ๐‘š๐‘ฆ๐‘€๐‘’๐‘กโ„Ž๐‘œ๐‘‘(๐ด[], ๐‘›) {
๐‘๐‘œ๐‘ข๐‘›๐‘ก โ† 0
๐‘“๐‘œ๐‘Ÿ ๐‘– โ† 0 ๐‘ก๐‘œ ๐‘› โˆ’ 1 {
๐‘“๐‘œ๐‘Ÿ ๐‘— โ† ๐‘– ๐‘ก๐‘œ ๐‘› โˆ’ 1 {
๐‘๐‘œ๐‘ข๐‘›๐‘ก โ† ๐‘๐‘œ๐‘ข๐‘›๐‘ก+ ๐ด[๐‘—]
๐‘˜ โ† 1
๐‘คโ„Ž๐‘–๐‘™๐‘’ (๐‘˜ < ๐‘› + 2) {
๐‘–๐‘“ (๐‘—%2 == 0) {
๐‘๐‘œ๐‘ข๐‘›๐‘ก = ๐‘๐‘œ๐‘ข๐‘›๐‘ก + 1
}
๐‘˜ + +
}
}
}
๐‘Ÿ๐‘’๐‘ก๐‘ข๐‘Ÿ๐‘› ๐‘๐‘œ๐‘ข๐‘›๐‘ก
}
The outer for loop with the variable i runs n times.
The inner for loop with the variable j runs n - i times for each iteration of the outer loop. This would make the inner loop run n + n-1 + n-2 +...+ 1 times in aggregation which is the equivalent of n * (n+1) / 2.
The while loop inside the inner for loop runs n + 1 times for each iteration of the inner for loop.
This makes the while-loop body run (n * (n+1) / 2) * (n + 1) times in total. This produces ((n^2 + n) / 2) * (n + 1) = (n^3 + 2n^2 + n) / 2.
Dropping lower degrees of n and constants we get O(n^3).
You could also have argued that n + (n-1) + ... + 1 is O(n^2), and multiplying by the linear while loop gives O(n^3). That would have been more intuitive and faster.
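As a rough check of that count (an illustrative C sketch of my own, not part of the assignment), the following tallies how many times the while-loop body of myMethod runs and compares it with (n^3 + 2n^2 + n)/2:

#include <stdio.h>

/* Mirrors the loop structure of myMethod and counts passes through the
   while-loop body, comparing with n(n+1)^2 / 2 = (n^3 + 2n^2 + n)/2. */
int main(void) {
    for (long n = 1; n <= 6; n++) {
        long ops = 0;
        for (long i = 0; i <= n - 1; i++)
            for (long j = i; j <= n - 1; j++)
                for (long k = 1; k < n + 2; k++)
                    ops++;                  /* one pass through the while body */
        printf("n=%ld  ops=%ld  (n^3+2n^2+n)/2=%ld\n",
               n, ops, (n * n * n + 2 * n * n + n) / 2);
    }
    return 0;
}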

How to find the big theta?

Here's a code segment I'm trying to find the big-theta for:
i = 1
while i ≤ n do          # loops Θ(n) times
    A[i] = i
    i = i + 1
for j ← 1 to n do       # loops Θ(n) times
    i = j
    while i ≤ n do      # loops n times at worst when j = 1, 1 time at best when j = n
        A[i] = i
        i = i + j
So given that the inner while loop will be a summation from 1 to n, the big theta is Θ(n^2). So does that mean the big theta is Θ(n^2) for the entire code?
The first while loop plus the inner while loop should be Θ(n) + Θ(n^2), which should just equal Θ(n^2).
Thanks!
for j = 1 to n step 1
for i = j to n step j
# constant time op
The double loop is O(n⋅log(n)) because the number of iterations in the inner loop falls inversely to j. Counting the total number of iterations gives:
floor(n/1) + floor(n/2) + ... + floor(n/n) <= n⋅(1/1 + 1/2 + ... + 1/n) ∼ n⋅log(n)
The partial sums of the harmonic series have logarithmic behavior asymptotically, so the above shows that the double loop is O(n⋅log(n)). That can be strengthened to Θ(n⋅log(n)) with a math argument involving the Dirichlet Divisor Problem.
[ EDIT ] For an alternative derivation of the lower bound that establishes the Θ(n⋅log(n)) asymptote, it is enough to use the < part of the x - 1 < floor(x) <= x inequality, avoiding the more elaborate math (linked above) that gives the exact expression.
floor(n/1) + floor(n/2) + ... + floor(n/n) > (n/1 - 1) + (n/2 - 1) + ... + (n/n - 1)
= n⋅(1/1 + 1/2 + ... + 1/n) - n
∼ n⋅log(n) - n
∼ n⋅log(n)
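For a concrete look at the harmonic-sum bound, here is a small C sketch (illustrative only; it mirrors the j/i double loop above) that counts the total iterations and prints n*ln(n) beside them:

#include <math.h>
#include <stdio.h>

/* Total iterations of the double loop are
   floor(n/1) + floor(n/2) + ... + floor(n/n); compare with n*ln(n). */
int main(void) {
    for (long n = 10; n <= 100000; n *= 10) {
        long total = 0;
        for (long j = 1; j <= n; j++)
            for (long i = j; i <= n; i += j)
                total++;                    /* inner loop runs floor(n/j) times */
        printf("n=%ld  iterations=%ld  n*ln(n)=%.0f\n",
               n, total, n * log((double)n));
    }
    return 0;
}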

Calculate the code complexity of the code below

I feel that even in the worst case the condition is true only two times, when j = i or j = i^2, and then the loop runs for an extra i + i^2 times.
In the worst case, if we take the sum of the inner two loops, it will be theta(i^2) + i + i^2, which is equal to theta(i^2) itself;
Summation of theta(i^2) on outer loop gives theta(n^3).
So, is the answer theta(n^3) ?
I would say that the overall performance is theta(n^4). Here is your pseudo-code, given in text format:
for (i = 1 to n) do
    for (j = 1 to i^2) do
        if (j % i == 0) then
            for (k = 1 to j) do
                sum = sum + 1
Appreciate first that the j % i == 0 condition will only be true when j is a multiple of i. For the largest value i = n, this occurs only n times, so the innermost for loop is only entered n times per pass of the for loop in j. That innermost loop requires about n^2 steps when j is near the end of its range, and only roughly n steps near the start of the range. So the overall performance here should be somewhere between O(n^3) and O(n^4), but theta(n^4) should be valid.
For fixed i, the i integers 1 ≤ j ≤ i^2 such that j % i = 0 are {i, 2i, ..., i^2}. It follows that the inner loop is executed i times with arguments i * m for 1 ≤ m ≤ i and the guard executed i^2 times. Thus, the complexity function T(n) ∈ Θ(n^4) is given by:
T(n) = ∑[i=1,n] (∑[j=1,i^2] 1 + ∑[m=1,i] ∑[k=1,i*m] 1)
= ∑[i=1,n] ∑[j=1,i^2] 1 + ∑[i=1,n] ∑[m=1,i] ∑[k=1,i*m] 1
= n^3/3 + n^2/2 + n/6 + ∑[i=1,n] ∑[m=1,i] ∑[k=1,i*m] 1
= n^3/3 + n^2/2 + n/6 + n^4/8 + 5n^3/12 + 3n^2/8 + n/12
= n^4/8 + 3n^3/4 + 7n^2/8 + n/4
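If it helps, here is a small C sketch (illustrative only) that brute-counts the guard checks plus the innermost-loop iterations and compares the total with the closed form n^4/8 + 3n^3/4 + 7n^2/8 + n/4:

#include <stdio.h>

/* Counts guard evaluations (j % i == 0) plus innermost-loop iterations
   and compares with (n^4 + 6n^3 + 7n^2 + 2n) / 8. */
int main(void) {
    for (long n = 1; n <= 8; n++) {
        long count = 0;
        for (long i = 1; i <= n; i++)
            for (long j = 1; j <= i * i; j++) {
                count++;                    /* the j % i == 0 guard */
                if (j % i == 0)
                    for (long k = 1; k <= j; k++)
                        count++;            /* sum = sum + 1 */
            }
        long closed = (n * n * n * n + 6 * n * n * n + 7 * n * n + 2 * n) / 8;
        printf("n=%ld  count=%ld  closed form=%ld\n", n, count, closed);
    }
    return 0;
}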

Time complexity of fun()?

I was going through this question to calculate time complexity.
int fun(int n)
{
    int count = 0;
    for (int i = n; i > 0; i /= 2)
        for (int j = 0; j < i; j++)
            count += 1;
    return count;
}
My first impression was O(n log n) but the answer is O(n). Please help me understand why it is O(n).
The inner loop does n iterations, then n/2, then n/4, etc. So the total number of inner loop iterations is:
n + n/2 + n/4 + n/8 + ... + 1
<= n * (1 + 1/2 + 1/4 + 1/8 + ...)
= 2n
(See Geometric series), and therefore is O(n).
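Since count is exactly the number of inner-loop iterations, you can also just run fun(n) and compare it with 2n. A small test harness around the original function (the driver in main is my own addition):

#include <stdio.h>

int fun(int n)
{
    int count = 0;
    for (int i = n; i > 0; i /= 2)
        for (int j = 0; j < i; j++)
            count += 1;
    return count;
}

int main(void) {
    for (int n = 1; n <= 1000000; n *= 10)
        printf("n=%d  fun(n)=%d  2n=%d\n", n, fun(n), 2 * n);
    return 0;
}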
For an input integer n:
for i = n, the j loop will run n times,
then for i = n/2, the j loop will run n/2 times,
and so on,
until i = 1, where the j loop will run 1 time.
So,
T(n) = n + n/2 + n/4 + n/8 + ...... + 1
T(n) = n(1 + 1/2 + 1/4 + 1/8 + ..... + 1/n) .....(1)
Let 1/n = 1/2^k, so k = log n.
Now, using the sum of a geometric series in eq. (1):
T(n) = n((1 - 1/2^(k+1)) / (1 - 1/2))
T(n) = 2n(1 - 1/2^(k+1))
Using k = log n: for large values of n, log n tends to infinity and 1/2^(k+1) tends to 0,
so T(n) ≈ 2n,
so T(n) = O(n).
For an input integer n, the innermost statement of fun() is executed n + n/2 + n/4 + ... + 1 times, so the time complexity T(n) can be written as
T(n) = O(n + n/2 + n/4 + ... + 1) = O(n)
The value of count is also n + n/2 + n/4 + ... + 1.
The outermost loop iterates O(log n) times, but that is dominated by the O(n) total work of the inner loop, so it can be ignored.
Let's build a table of i and j, where n = 8
(i runs while i > 0, and j runs while j < i):

i    j
8    [0, 7]
4    [0, 3]
2    [0, 1]
1    [0, 0]

"i", the outer loop, runs log n times.
Now check the inner loop on "j":
[0, 7] -> b - a + 1 -> 8 -> n
[0, 3] -> b - a + 1 -> 4 -> n/2
[0, 1] -> b - a + 1 -> 2 -> n/4
[0, 0] -> b - a + 1 -> 1 -> n/n
As you can see, the inner-loop counts form the series n + n/2 + n/4 + .... + 1, not log n equal-sized terms.
Bounding it by the infinite geometric series n[1 + 1/2 + 1/4 + .......], where r < 1, the bracketed sum is 2 (the standard GP sum), which gives
n[1 + 1/2 + 1/4 + .......] -> 2n
so log n + 2n => O(n)

Complexity theory - sorting algorithm

I'm taking a course in complexity theory, and it needs some mathematical background that I have trouble with.
While trying to do some practice, I got stuck on the example below.
1) for (i = 1; i < n; i++) {
2)     SmallPos = i;
3)     Smallest = Array[SmallPos];
4)     for (j = i+1; j <= n; j++)
5)         if (Array[j] < Smallest) {
6)             SmallPos = j;
7)             Smallest = Array[SmallPos];
           }
8)     Array[SmallPos] = Array[i];
9)     Array[i] = Smallest;
   }
Thus, the total computing time is:
T(n) = (n) + 4(n-1) + n(n+1)/2 - 1 + 3[n(n-1)/2]
= n + 4n - 4 + (n^2 + n)/2 - 1 + (3n^2 - 3n)/2
= 5n - 5 + (4n^2 - 2n)/2
= 5n - 5 + 2n^2 - n
= 2n^2 + 4n - 5
= O(n^2)
What I don't understand, or am confused about, is why line 4 is analyzed as n(n+1)/2 - 1,
and line 5 as 3[n(n-1)/2].
I know that the sum of an arithmetic series is n(first + last)/2, but when I try to calculate it as I understand it, I get a different result.
For line 4, I calculated it as n((n-1)+2)/2, according to n(first + last)/2, but here it's n(n+1)/2 - 1.
And the same for 3[n(n-1)/2]... I don't understand this either.
Also, here's what is written in the analysis; it might help if anyone can explain it to me:
Statement 1 is executed n times (n - 1 + 1); statements 2, 3, 8, and 9 (each representing O(1) time) are executed n - 1 times each, once on each pass through the outer loop. On the first pass through this loop with i = 1, statement 4 is executed n times; statement 5 is executed n - 1 times, and assuming a worst case where the elements of the array are in descending order, statements 6 and 7 (each O(1) time) are executed n - 1 times.
On the second pass through the outer loop with i = 2, statement 4 is executed n - 1 times and statements 5, 6, and 7 are executed n - 2 times, etc. Thus, statement 4 is executed (n) + (n-1) +... + 2 times and statements 5, 6, and 7 are executed (n-1) + (n-2) + ... + 2 + 1 times. The first sum is equal to n(n+1)/2 - 1, and the second is equal to n(n-1)/2.
Here's the link to the file containing this example:
http://www.google.com.eg/url?sa=t&rct=j&q=Consider+the+sorting+algorithm+shown+below.++Find+the+number+of+instructions+executed+&source=web&cd=1&cad=rja&ved=0CB8QFjAA&url=http%3A%2F%2Fgcu.googlecode.com%2Ffiles%2FAnalysis%2520of%2520Algorithms%2520I.doc&ei=3H5wUNiOINDLswaO3ICYBQ&usg=AFQjCNEBqgrtQldfp6eqdfSY_EFKOe76yg
Line 4: as the analysis says, it is executed n + (n-1) + ... + 2 times. This is a sum of (n-1) terms. In the formula you use, n(first+last)/2, n represents the number of terms. If you apply the formula to your sequence of n-1 terms, then it should be (n-1)((n)+(2))/2 = (n^2 + n - 2)/2 = n(n+1)/2 - 1.
Line 5: the same formula can be used. As the analysis says, you have to calculate (n-1) + ... + 1. This is a sum of n-1 terms, with the first and last being n-1 and 1. The sum is given by (n-1)(n-1+1)/2. The factor 3 is from the 3 lines (5, 6, 7) that are each being done (n-1)(n)/2 times.
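To see the two counts concretely, here is a small C sketch (illustrative only) that counts the executions of statement 4 and of statements 5-7 in the worst case and compares them with n(n+1)/2 - 1 and n(n-1)/2:

#include <stdio.h>

/* Counts executions of statement 4 (the j-loop test) and of statements
   5-7 (the loop body, assuming the worst case where they run on every pass)
   and compares them with n(n+1)/2 - 1 and n(n-1)/2. */
int main(void) {
    for (long n = 2; n <= 8; n++) {
        long stmt4 = 0, stmt567 = 0;
        for (long i = 1; i < n; i++) {
            for (long j = i + 1; j <= n; j++)
                stmt567++;                  /* lines 5-7, worst case */
            stmt4 += (n - i) + 1;           /* the loop test fires once more than the body */
        }
        printf("n=%ld  stmt4=%ld  n(n+1)/2-1=%ld  stmt5-7=%ld  n(n-1)/2=%ld\n",
               n, stmt4, n * (n + 1) / 2 - 1, stmt567, n * (n - 1) / 2);
    }
    return 0;
}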
