Big-O running time of the following algorithm

What's the big-O running time of the following algorithm (pseudocode form):
for i = 1, 2, ..., n do
    for j = i+1, i+2, ..., n do
        Add up array entries A[i] through A[j]
        Store the result in B[i,j]
    end for
end for
Calculating it myself, I thought the nested for loops would result in a worst case of O(n^2); however, I am unsure of the worst-case complexity of the adding and storing steps.
Thanks

The loop on i is n iterations.
The loop on j is close to n/2 iterations on average.
The sum of the A[k] ranges over about n/3 terms on average.
Therefore you need around (n^3)/6 additions. That is O(n^3).
But if you keep a running total of A[i]+...+A[j] in your loop on j, or use previously computed values of B[i,j], you can bring it down to O(n^2).
The loop on j runs n-1 iterations when i = 1 and goes down to 0 when i = n. The average of successive numbers is the average of the two extremes, that is, (n-1 + 0)/2, which is almost n/2.
The average length of the sum is more difficult to explain: it is a weighted average with more weight on short sums than on long ones. The resulting factor of 1/3 isn't important anyway.
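To make the running-total optimization concrete, here is a minimal Python sketch (my own illustration, not part of the original answer; it uses 0-based indices and stores B as a dictionary):

def subarray_sums(A):
    # Fill B[(i, j)] with sum(A[i..j]) in O(n^2) by keeping a running total.
    n = len(A)
    B = {}
    for i in range(n):
        running = A[i]  # sum of the one-element range A[i..i]
        for j in range(i + 1, n):
            running += A[j]  # extend the previous sum by one element: O(1)
            B[(i, j)] = running
    return B

Each table entry now costs O(1) on top of the loop iteration itself, so the whole table is filled with O(n^2) additions rather than O(n^3).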


Program complexity of nested loops

I have found the following algorithm:
for i in range(1, n):
    for j in range(i+1, n+1):
        for k in range(1, j+1):
            # some instructions
            pass
and I would like to determine its complexity, so far what I have done is the following:
I have converted the three loops into summations, so I have:
T(n) = \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \sum_{k=1}^{j} c
When I analyze the loops j and k, it is easy to see that when j starts with 2 then k will make 2 loops, when j starts with 3 then k will do 3 loops, and so on. At this point I can have something like:
T(n) = c \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} j
I am considering c to be the instructions that are inside the k loop. To finish, I can say that I have:
T(n) = O(n^3)
Is this analysis correct or am I missing something?
Thanks
There are O(n^3) steps here, as you believe. Your heuristic has shown that O(n^3) is an upper bound, but one might ask whether it is a tight upper bound. In fact, it is (assuming that the content of each loop is essentially a constant-time operation).
One way to see this is to make some loose upper and lower bounds.
For an upper bound, note that i ranges from 1 to n, j ranges over a subset of 1 to n+1, and k ranges over a subset of 1 to n+2. There are then fewer steps than if i, j, and k ran over (1,n), (1,n+1), and (1,n+2), respectively. And this is O(n^3) steps. Thus there are at most O(n^3) steps. I note that this is roughly the heuristic that you used to produce your answer.
For a lower bound, we can note that when i is in (n/3, 2n/3), the j index will always include the range (2n/3 + 1, n + 1). And for i and j in these ranges, the k index will always include the range (1, 2n/3 + 2). The lengths of these ranges are n/3 (for i), n/3 (for j), and 2n/3 + 1 (for k). Their product is also of order n^3, and so O(n^3) is the right estimate. Some people would then say that this is Θ(n^3).
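If you want to check this empirically, here is a short Python count of my own that mirrors the question's loops; the ratio of steps to n^3 settles near 1/3, confirming the cubic growth:

def count_steps(n):
    # Count how many times the innermost body runs.
    steps = 0
    for i in range(1, n):
        for j in range(i + 1, n + 1):
            for k in range(1, j + 1):
                steps += 1
    return steps

for n in (10, 50, 200):
    print(n, count_steps(n), count_steps(n) / n**3)  # ratio tends to 1/3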

How will summing a sub array affect time complexity in a nested for loop?

Trying to calculate time complexity of some simple code but I do not know how to calculate time complexity while summing a sub array. The code is as follows:
for i = 1 to n {
    for j = i+1 to n {
        s = sum(A[i...j])
        B[i,j] = s
    }
}
So I know the nested for loops inevitably give us O(n^2), and I believe the function that sums the subarray is also O(n^2). However, I think the time complexity for the whole algorithm is O(n^3). How do I get here with this information? Thank you!
I like to think of for loops as summations. As such, the number of steps (written as a function, T(n)) is:
T(n) = \sum_{i=1}^n numStepsInInnerForLoop
Here, I'm using something written in pseudo-MathJax, and have written the outer for loop as a summation from i=1 to n of the number of steps in the inner for loop (the one from j=i+1 to n). You can think of this analogously as summing the number of steps in the inner for loop, from i=1 to n. Substituting in numStepsInInnerForLoop results in:
T(n) = \sum_{i=1}^n [\sum_{j=i+1}^n numStepsOfSumFunction]
This function now represents the number of steps where both for loops have been fleshed out as summations. Assuming that s = sum(A[i...j]) takes j-i+1 steps and B[i,j]=s takes just one step, we can substitute numStepsOfSumFunction with these more useful parameters and the equation now becomes:
T(n) = \sum_{i=1}^n [\sum_{j=i+1}^n (j-i+1 + 1)]
When you solve these summations (using the kind of formulas you see on this summation tutorial page) you'll get a cubic function for T(n) which corresponds to O(n^3).
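For the record, here is that working in the same pseudo-MathJax style (my own derivation, using the substitution t = n - i and the standard formulas for \sum t and \sum t^2):

T(n) = \sum_{i=1}^n [\sum_{j=i+1}^n (j-i+2)]
     = \sum_{i=1}^n [(n-i)(n-i+1)/2 + 2(n-i)]
     = \sum_{t=0}^{n-1} [t(t+1)/2 + 2t]
     = n^3/6 + O(n^2)

which is cubic, hence O(n^3).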
Your reasoning leads me to believe that you're running this algorithm on an array of size n. If so, then every time you call the sum method in the inner for loop, you're calling it on a specific range of values (indices i to j). For each iteration of that for loop, the sum method iterates through 1, 2, 3, ..., and finally n elements in the last iteration as j increases from i+1 to n. Note that this is when i = 1; as i increases, it only goes up to n - i elements. Big O, though, is the worst case, so we use the i = 1 scenario.
1 + 2 + 3 + ... + n is n(n+1)/2, which is O(n^2). The runtime of the sum method depends on the values of i and j; however, when run in the for loop with the given conditions, the total time-complexity of all the calls to sum in one pass of the inner for loop is O(n^2). And finally, since this inner for loop is executed n times, the total time-complexity for the whole algorithm is O(n^3).

Calculating big O swaps, calculations, and comparisons

Looking at the code below:
Algorithm sort
    Declare A(1 to n)
    n = length(A)
    for i = 1 to n
        for j = 1 to n-1 inclusive do
            if A[i-1] > A[i] then
                swap( A[i-1], A[i] )
            end if
        next j
    next i
I would say that there are:
2 loops, both n, n*n = n^2 (n-1 truncated to n)
1 comparison, in the j loop, that will execute n^2 times
A swap that will execute n^2 times
There are also the additions incrementing the two loop counters, each executing up to n^2 times, so 2n^2
The answers given in a mark scheme:
Evaluation of algorithm
Comparisons
The only comparison appears in the j loop. Since this loop will iterate a total of n^2 times, it will execute exactly n^2 times.
Data swaps
There may be a swap operation carried out in the j loop. Swap( A[i-1], A[i] ): each of these will happen n^2 times. Therefore there are 2n^2 operations carried out within the j loop.
The i loop has one addition operation incrementing i, which happens n times.
Adding these up gives the number of addition operations, which is 2n^2 + n.
As n gets very big, n^2 will dominate, therefore it is O(n^2).
NOTE: Calculations might include assignment operations, but these will not affect overall time, so ignore them.
Marking overview:
1 mark for identifying that the i loop will execute n times.
1 mark for identifying that the j loop will execute 2n^2 times. [Isn't this meant to be n*n = n^2, for i and j?]
1 mark for the correct number of calculations, 2n^2 + n. [Why is this not +2n?]
1 mark for determining that the order will be dominated by n^2 as n gets very big, giving O(n^2) for the algorithm.
Edit: As can be seen from the mark scheme, I am expected to count:
Loop numbers, but n-1 can be truncated to n
Comparisons e.g. if statements
Data swaps (counted as one statement, i.e. arr[i] = arr[i+1], temp = arr[i], etc. are considered one swap)
Calculations
Space - just n for array, etc.
Could someone kindly explain how these answers are derived?
Thank you!
Here's my take on the marking scheme, explicitly marking the operations they're counting. It seems they're counting assignments (but conveniently forgetting that it takes 2 or 3 assignments to do a swap). That explains why they count the increments but not the [i-1] indexing.
Counting swaps:
  i loop runs n times
  j loop runs n-1 times (total ~n^2 - n)
  swap happens ~n^2 times (truncating n-1 to n): n^2
Counting additions (+=):
  i loop runs n times
  j loop runs n-1 times (total ~n^2)
  increment of j happens ~n^2 times: n^2
  increment of i happens n times: n
Sum: 2n^2 + n
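As a sanity check on those totals, here is a quick Python tally of my own that mirrors the pseudocode and assumes the worst case (a swap on every comparison):

def tally(n):
    comparisons = swaps = increments = 0
    for i in range(1, n + 1):      # i loop: n iterations
        increments += 1            # i = i + 1
        for j in range(1, n):      # j loop: n - 1 iterations per pass
            increments += 1        # j = j + 1
            comparisons += 1       # the test A[i-1] > A[i]
            swaps += 1             # worst case: every comparison swaps
    return comparisons, swaps, increments

print(tally(100))  # (9900, 9900, 10000) = (n(n-1), n(n-1), n(n-1) + n)

Truncating n-1 to n as the mark scheme does, the operations inside the loops come to n^2 swaps + n^2 j-increments + n i-increments, i.e. the quoted 2n^2 + n.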

How is Summation(n) Theta(n^2) according to its formula, but Theta(n) if we just look at it as a single for loop?

Our prof and various materials say Summation(n) = n(n+1)/2 and hence is Theta(n^2). But intuitively, we just need one loop to find the sum of the first n terms, so it has to be Theta(n). I'm wondering what I am missing here.
All of these answers are misunderstanding the problem, just like the original question: the point is not to measure the runtime complexity of an algorithm for summing integers; it's about how to reason about the complexity of an algorithm which takes i steps during each pass, for i in 1..n. Consider insertion sort: on step i, inserting one member of the original list into an output list that is already i elements long takes i steps on average. What is the complexity of insertion sort? It's the sum of all of those steps, i.e. the sum of i for i in 1..n. That sum is n(n+1)/2, which has an n^2 in it, thus insertion sort is O(n^2).
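To see that sum show up in practice, here is a small Python insertion sort of my own that counts element shifts; on reversed input the count is exactly 1 + 2 + ... + (n-1) = n(n-1)/2:

def insertion_sort_steps(a):
    # Sort a in place, counting shifts; pass i inspects up to i elements.
    steps = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]  # shift one element to the right
            steps += 1
            j -= 1
        a[j + 1] = key
    return steps

print(insertion_sort_steps(list(range(100, 0, -1))))  # 100*99/2 = 4950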
The running time of this code is Θ(1) (assuming addition/subtraction and multiplication are constant-time operations):
result = n*(n + 1)/2 // This statement executes once
The running time of the following pseudocode, which is what you described, is indeed Θ(n):
result = 0
for i from 1 up to n:
    result = result + i // This statement executes exactly n times
Here is another way to compute it which has a running time of Θ(n²):
result = 0
for i from 1 up to n:
    for j from i up to n:
        result = result + 1 // This statement executes exactly n*(n + 1)/2 times
All three of those code blocks compute the sum of the natural numbers from 1 to n.
This Θ(n²) loop is probably the type you are being asked to analyse. Whenever you have a loop of the form:
for i from 1 up to n:
    for j from i up to n:
        // Some statements that run in constant time
You have a running time complexity of Θ(n²), because those statements execute exactly summation(n) times.
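Spelling that count out: the inner statements run

\sum_{i=1}^{n} (n - i + 1) = n + (n-1) + ... + 1 = n(n+1)/2

times, which is exactly summation(n).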
I think the problem is that you're incorrectly assuming that the summation formula has time complexity theta(n^2).
The formula has an n^2 in it, but it doesn't require a number of computations or amount of time proportional to n^2.
Summing everything up to n in a loop would be theta(n), as you say, because you would have to iterate through the loop n times.
However, calculating the result of the equation n(n+1)/2 would just be theta(1) as it's a single calculation that is performed once regardless of how big n is.
Summation(n) being n(n+1)/2 refers to the sum of the numbers from 1 to n. That is a closed-form mathematical formula, so it can be evaluated without a loop in O(1) time. If you iterate over an array to sum all its values, that is an O(n) algorithm.

Big O of this equation?

for (int j=0,k=0; j<n; j++)
    for (double m=1; m<n; m*=2)
        k++;
I think it's O(n^2) but I'm not certain. I'm working on a practice problem and I have the following choices:
O(n^2)
O(2^n)
O(n!)
O(n log(n))
Hmmm... well, break it down.
It seems obvious that the outer loop is O(n). It is increasing by 1 each iteration.
The inner loop however, increases by a power of 2. Exponentials are certainly related (in fact inversely) to logarithms.
Why have you come to the O(n^2) solution? Prove it.
It's O(n log2 n). The code block runs n*log2(n) times.
Suppose n = 16. Then the first loop runs 16 (= n) times, and the second loop runs 4 (= log2 n) times (m = 1, 2, 4, 8). So the inner statement k++ runs 64 = n*log2(n) times.
Let's look at the worst-case behaviour of the second loop: m goes 1, 2, 4, 8, .... Say n is 2^k for some k >= 0; in the worst case we keep doubling until m reaches 2^k and the loop stops. The number of values examined by then is O(k), which is O(log n). For the first loop it's O(n) (too simple to need working). So the order of the whole code is O(n log n).
A generic way to approach these sorts of problems is to consider the order of each loop, and because they are nested, you can multiply the "O" notations.
Some simple rules for big "O":
O(n)O(m) = O(nm)
O(n) + O(m) = O(n + m)
O(kn) = O(n) where 'k' is some constant
The 'j' loop iterates across n elements, so clearly it is O(n).
The 'm' loop iterates across log(n) elements, so it is O(log(n)).
Since the loops are nested, our final result would be O(n) * O(log(n)) = O(n*log(n)).
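A quick empirical check (my own Python sketch of the question's two loops):

import math

def count_k(n):
    # Replicate the loops and return the final value of k.
    k = 0
    for j in range(n):     # outer loop: n iterations
        m = 1.0
        while m < n:       # doubling loop: ceil(log2(n)) iterations
            k += 1
            m *= 2
    return k

for n in (16, 1024, 4096):
    print(n, count_k(n), n * math.ceil(math.log2(n)))  # the two counts agree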
