program complexity of nested loops - algorithm

I have found the following algorithm:
for i in range(1, n):
    for j in range(i + 1, n + 1):
        for k in range(1, j + 1):
            # some instructions
and I would like to determine its complexity, so far what I have done is the following:
I have converted the three loops into summations, so I have
Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} Σ_{k=1}^{j} c
so when I analyze the loops j and k it is easy to see that when j starts with 2 then k will make 2 loops, when j starts with 3 then k will do 3 loops, and so on. At this point I can have something like:
Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} j·c
I am considering c as the instructions that are inside the k loop. To finish I can say that I have roughly c·n^3 operations, so the algorithm is O(n^3).
Is this analysis correct or am I missing something?
Thanks

There are O(n^3) steps here, as you believe. Your heuristic has shown that O(n^3) is an upper bound --- but one might ask if it is a tight upper bound. In fact, it is (assuming that the content of each loop is essentially a constant-time operation).
One way to see this is to make some loose upper and lower bounds.
For an upper bound, note that i ranges from 1 to n, j ranges over a subset of 1 to n+1, and k ranges over a subset of 1 to n+2. There are then fewer steps than if i, j, and k ran over (1,n), (1,n+1), and (1,n+2), respectively. And this is O(n^3) steps. Thus there are at most O(n^3) steps. I note that this is roughly the heuristic that you used to produce your answer.
For a lower bound, we can note that when i is in (n/3, 2n/3), the j index will always include the range (2n/3 + 1, n + 1). And for i and j in these ranges, the k index will always include the range (1, 2n/3 + 2). The lengths of these ranges are n/3 (for i), n/3 (for j), and 2n/3 + 1 (for k). This is also of order n^3, and so O(n^3) is the right estimate. Some people would then say that this is BigTheta(n^3).
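If you want to see the count concretely, here is a small Python sketch (my own illustration, not from the original question or answer) that simply counts how often the innermost body runs and compares that with n^3:
def iterations(n):
    # count how many times the innermost body of the three loops runs
    count = 0
    for i in range(1, n):
        for j in range(i + 1, n + 1):
            for k in range(1, j + 1):
                count += 1
    return count

for n in (10, 100, 200):
    count = iterations(n)
    print(n, count, count / n**3)   # the ratio settles near a constant (about 1/3), i.e. Theta(n^3)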

Related

How did they find Lower Bound of the following code?

procedure stars(n)
for i = 1, ..., n do
print “∗” i many times
Question - Using the Ω-notation, lowerbound the running time of stars to show that your upperbound is in fact asymptotically tight.
Solution - Assume for simplicity that n is even. We lowerbound the number of stars printed during iterations n/2 through n:
Σ_{i=n/2}^{n} i ≥ Σ_{i=n/2}^{n} n/2 ≥ (n/2)·(n/2) = n^2/4
I didn't understand why they are going from n/2 to n. How do I do this Question?
For Omega it does not matter where you start! The only requirement for the lower bound is that it must be less than the specified sum. The solution just wants to find a tight lower bound for the sum, as the sum is in Theta(n^2) (the sum equals n(n+1)/2).
Notice that the sum is not over j/2, but over n/2. For each n/2 ≤ j ≤ n we have n/2 ≤ j, so the inequality holds, with the possible exception of n = 2: the full sum is three, the second sum is 2 (not 2²/4 = 1; the discrepancy comes from starting at n/2 instead of n/2 + 1).
Choosing n/2 (+1) as the lower limit of summation just makes the sum conveniently trivial, in conjunction with letting each summand equal n/2.
Notice that when you are summing from n/2 to n you are summing fewer elements, so the inequality is correct.
It is done in order to simplify the final expression and to obtain a concrete lower bound.
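As a quick numeric illustration (my own sketch, assuming n is even as in the quoted solution), you can compare the full star count with the tail from n/2 to n and with n^2/4:
for n in (4, 8, 16, 32):
    total = sum(range(1, n + 1))        # all stars printed: n(n+1)/2
    tail = sum(range(n // 2, n + 1))    # stars printed during iterations n/2 .. n
    print(n, total, tail, n * n // 4)   # tail >= n^2/4, so the total is Omega(n^2)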

time complexity of some recursive and non-recursive algorithms

I have two pseudo-code algorithms:
RandomAlgorithm(modVec[0 to n − 1])
b = 0;
for i = 1 to n do
b = 2·b + modVec[n − i];
for i = 1 to b do
modVec[i mod n] = modVec[(i + 1) mod n];
return modVec;
Second:
AnotherRecursiveAlgo(multiplyVec[1 to n])
if n ≤ 2 do
return multiplyVec[1] × multiplyVec[1];
return
multiplyVec[1] × multiplyVec[n] +
AnotherRecursiveAlgo(multiplyVec[1 to n/3]) +
AnotherRecursiveAlgo(multiplyVec[2n/3 to n]);
I need to analyse the time complexity for these algorithms:
For the first algorithm I got that the first loop is in O(n). The second loop has a best case and a worst case: in the best case the loop runs once, which is O(1); in the worst case b is huge after the first loop, but I don't know how to write this idea as a time complexity because I usually get b = Σ_{i=1}^{n} 2^(n−i)·modVec[n−i] and I get stuck there.
For the second algorithm I just don't get how to work out the time complexity; it usually depends on n, so I think we need a formula.
Thanks for the help.
The first problem is a little strange, all right.
If it helps, envision modVec as an array of 1's and 0's.
In this case, the first loop converts this array to a value.
This is O(n).
For instance, (1, 1, 0, 1, 1) will yield b = 27.
Your second loop runs b times. The dominating term for the value of b is 2^(n-1), a.k.a. O(2^n). The assignment you do inside the loop is O(1).
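For concreteness, here is a small Python transcription of the first algorithm (my own sketch, treating modVec as a 0/1 array as suggested above):
def random_algorithm(mod_vec):
    n = len(mod_vec)
    b = 0
    for i in range(1, n + 1):           # first loop: O(n), reads the bits as a binary number
        b = 2 * b + mod_vec[n - i]
    for i in range(1, b + 1):           # second loop: runs b times, and b can be as large as 2^n - 1
        mod_vec[i % n] = mod_vec[(i + 1) % n]
    return mod_vec

print(random_algorithm([1, 1, 0, 1, 1]))   # here b = 27, matching the example above
So the total work is O(n + b), which is O(2^n) in the worst case.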
The second algorithm does depend on n. Your base case is a simple multiplication, O(1). The recursion step has three terms:
simple multiplication
recur on n/3 elements
recur on n/3 elements (from 2n/3 to the end is n/3 elements)
Just as binary partitions lead to recursions of depth log_2 n, this one has depth log_3 n. Here, though, the coefficient (two recursive calls) does matter: the recurrence is T(n) = 2T(n/3) + O(1), which the master theorem solves to O(n^(log_3 2)), roughly O(n^0.63), not O(log n).
Does that push you to a solution?
First Algorithm
To me this boils down to the O(First-For-Loop) + O(Second-For-Loop).
O(First-For-Loop) is simple = O(n).
O(Second-For-Loop) interestingly depends on n. Therefore, to me it can be depicted as O(f(n)), where f(n) is some function of n. I am not completely sure I understand f(n) based on the code presented.
The answer consequently becomes O(n) + O(f(n)). This boils down to O(n) or O(f(n)), whichever is larger and more dominant (since the lower-order terms don't matter in big-O notation).
Second Algorithm
In this case, I see that each call to the function does three things...
The first is a simple multiplication, an O(1) operation, so it won't matter.
The second and third are recursive calls on the function.
Therefore each function call results in 2 additional recursive calls, each on roughly a third of the input, so the recursion depth is about log_3 n.
Consequently, the total number of calls is roughly 2^(log_3 n) = n^(log_3 2), and the time complexity would be O(n^(log_3 2)) rather than O(2^n).
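A small counting sketch (my own addition, assuming the two sub-ranges each contain about n/3 elements, as written) makes that growth visible: tripling n roughly doubles the number of calls.
def calls(n):
    # count the calls made by AnotherRecursiveAlgo on an input of length n,
    # assuming it recurses on multiplyVec[1 .. n/3] and multiplyVec[2n/3 .. n]
    if n <= 2:
        return 1
    return 1 + calls(n // 3) + calls(n - (2 * n) // 3 + 1)

for n in (27, 81, 243, 729):
    print(n, calls(n))   # 19, 39, 79, 159: roughly doubling each time n triples, i.e. growing like n^(log_3 2)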

O(.) Running time of the following algorithm

What's the big-O running time of the following algorithm (pseudocode form):
for i = 1,2,...,n do
for j = i+1, i+2, ... n do
Add up array entries A[i] through A[j]
Store the result in B[i,j]
end for
end for
Calculating it myself, I thought the nested for loops would result in a worst case of O(n^2); however, I am unsure of the worst-case complexity of the adding and storing.
Thanks
The loop on i is n iterations.
The loop on j is close to n/2 iterations on average.
The sum of the A[k] is over n/3 terms on average.
Therefore you need around (n^3)/6 additions. That is O(n^3).
But if you keep a running total of A[i]+...+A[j] in your loop on j, or use previously computed values of B[i,j], you can bring it down to O(n^2).
The loop on j runs at most n − 1 iterations and goes down to 0. The average of successive counts is the average of the two extremes, that is (n − 1 + 0)/2, which is almost n/2.
The length of the sum is more difficult to explain. It is a weighted average with more weight on small numbers than large ones. The resulting factor 1/3 isn't important anyway.
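To illustrate the running-total idea mentioned above, here is a small Python sketch (my own 0-indexed code; the names A and B just follow the pseudocode) that fills B with one addition per (i, j) pair:
def fill_table(A):
    # B[i][j] holds the sum A[i] + ... + A[j]; the running total avoids re-adding the whole range
    n = len(A)
    B = [[0] * n for _ in range(n)]
    for i in range(n):
        total = A[i]
        for j in range(i + 1, n):
            total += A[j]          # one addition per (i, j) pair, so O(n^2) overall
            B[i][j] = total
    return B

print(fill_table([2, 1, 4, 3]))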

Discrete Mathematics Big-O notation Algorithm Complexity

I can probably figure out part b if you can help me do part a. I've been looking at this and similar problems all day, and I'm just having problems grasping what to do with nested loops. For the first loop there are n iterations, for the second there are n-1, and for the third there are n-1. Am I thinking about this correctly?
Consider the following algorithm,
which takes as input a sequence of n integers a1, a2, ..., an
and produces as output a matrix M = {mij}
where mij is the minimum term
in the sequence of integers ai, ai+1, ..., aj for j >= i, and mij = 0 otherwise.
initialize M so that mij = ai if j >= i and mij = 0 otherwise
for i:=1 to n do
for j:=i+1 to n do
for k:=i+1 to j do
m[i][j] := min(m[i][j], a[k])
end
end
end
return M = {m[i][j]}
(a) Show that this algorithm uses Big-O(n^3) comparisons to compute the matrix M.
(b) Show that this algorithm uses Big-Omega(n^3) comparisons to compute the matrix M.
Using this fact and part (a), conclude that the algorithm uses Big-Theta(n^3) comparisons.
In part A, you need to find an upper bound for the number of min ops.
In order to do so, it is clear that the above algorithm performs fewer min ops than the following:
for i=1 to n
for j=1 to n //bigger range than your algorithm
for k=1 to n //bigger range than your algorithm
(something with min)
The above has exactly n^3 min ops - thus in your algorithm, there are fewer than n^3 min ops.
From this we can conclude: #minOps <= 1 * n^3 (for each n > 10, where 10 is arbitrary).
By definition of Big-O, this means the algorithm is O(n^3)
You said you can figure out B alone, so I'll let you try it :)
hint: the middle loop has more iterations than for j=i+1 to n/2
For each iteration of the outer loop, the two inner nested loops do at most about n^2 work. The outer loop runs for i = 1 to n, so the total is bounded by a series like 1^2 + 2^2 + 3^2 + 4^2 + ... + n^2. This summation's value is n(n+1)(2n+1)/6. Ignoring the lower-order terms of this summation, the order is O(n^3).
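A brute-force count (my own sketch, mirroring the triple loop above) shows both bounds at once: the number of min operations grows like n^3/6.
def min_ops(n):
    # count the min operations performed by the matrix algorithm from the question
    count = 0
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            for k in range(i + 1, j + 1):
                count += 1
    return count

for n in (10, 50, 100):
    print(n, min_ops(n), n**3 // 6)   # min_ops(n) = (n-1)n(n+1)/6, i.e. Theta(n^3)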

Number of Comparisons in Merge-Sort

I was studying merge sort when I ran into the concept that the number of comparisons in merge sort (in the worst case, and according to Wikipedia) equals n⌈lg n⌉ − 2^⌈lg n⌉ + 1; in fact it's between (n lg n − n + 1) and (n lg n + n + O(lg n)). The problem is that I cannot figure out what these complexities are trying to say. I know O(n log n) is the complexity of merge sort, but the number of comparisons?
Why to count comparisons
There are basically two operations to any sorting algorithm: comparing data and moving data. In many cases, comparing will be more expensive than moving. Think about long strings in a reference-based typing system: moving data will simply exchange pointers, but comparing might require iterating over a large common part of the strings before the first difference is found. So in this sense, comparison might well be the operation to focus on.
Why an exact count
The numbers appear to be more detailed: instead of simply giving some Landau symbol (big-Oh notation) for the complexity, you get an actual number. Once you have decided what a basic operation is, like a comparison in this case, this approach of actually counting operations becomes feasible. This is particularly important when comparing the constants hidden by the Landau symbol, or when examining the non-asymptotic case of small inputs.
Why this exact count formula
Note that throughout this discussion, lg denotes the logarithm with base 2. When you merge-sort n elements, you have ⌈lg n⌉ levels of merges. Assume you place ⌈lg n⌉ coins on each element to be sorted, and that a comparison costs one coin. This will certainly be enough to pay for all the merges, as each element will be included in ⌈lg n⌉ merges, and each merge won't take more comparisons than the number of elements involved. So this is the n⌈lg n⌉ from your formula.
As a merge of two arrays of length m and n takes only m + n − 1 comparisons, you still have coins left at the end, one from each merge. Let us for the moment assume that all our array lengths are powers of two, i.e. that you always have m = n. Then the total number of merges is n − 1 (sum of powers of two). Using the fact that n is a power of two, this can also be written as 2^⌈lg n⌉ − 1, and subtracting that number of returned coins from the number of all coins yields n⌈lg n⌉ − 2^⌈lg n⌉ + 1 as required.
If n is 1 less than a power of two, then there are ⌈lg n⌉ merges where one element less is involved. This includes a merge of two one-element lists which used to take one coin and which now disappears altogether. So the total cost reduces by ⌈lg n⌉, which is exactly the number of coins you'd have placed on the last element if n were a power of two. So you have to place fewer coins up front, but you get back the same number of coins. This is the reason why the formula has 2^⌈lg n⌉ instead of n: the value remains the same unless you drop to a smaller power of two. The same argument holds if the difference between n and the next power of two is greater than 1.
On the whole, this results in the formula given in Wikipedia:
n⌈lg n⌉ − 2^⌈lg n⌉ + 1
Note: I'm pretty happy with the above proof. For those who like my formulation, feel free to distribute it, but don't forget to attribute it to me as the license requires.
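As a quick sanity check of the closed form (my own sketch, not part of the answer above), one can compare it with the standard worst-case recurrence W(n) = W(⌈n/2⌉) + W(⌊n/2⌋) + n − 1 with W(1) = 0, which encodes the coin argument directly:
from functools import lru_cache

@lru_cache(maxsize=None)
def worst_case(n):
    # worst-case comparisons of top-down merge sort: merging two runs of total length n
    # costs at most n - 1 comparisons
    if n <= 1:
        return 0
    return worst_case(n // 2) + worst_case(n - n // 2) + n - 1

def closed_form(n):
    c = (n - 1).bit_length()    # ceil(lg n) for n >= 1
    return n * c - 2 ** c + 1

print(all(worst_case(n) == closed_form(n) for n in range(1, 1025)))   # True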
Why this lower bound
To prove the lower bound formula, let's write ⌈lg n⌉ = lg n + d with 0 ≤ d < 1. Now the formula above can be written as
n(lg n + d) − 2^(lg n + d) + 1 =
n lg n + nd − n·2^d + 1 =
n lg n − n(2^d − d) + 1 ≥
n lg n − n + 1
where the inequality holds because 2^d − d ≤ 1 for 0 ≤ d < 1.
Why this upper bound
I must confess, I'm rather confused why anyone would name n lg n + n + O(lg n) as an upper bound. Even if you wanted to avoid the floor function, the computation above suggests something like n lg n − 0.9n + 1 as a much tighter upper bound for the exact formula. 2^d − d has its minimum (ln(ln(2)) + 1)/ln(2) ≈ 0.914 at d = −ln(ln(2))/ln(2) ≈ 0.529.
I can only guess that the quoted formula occurs in some publication, either as a rather loose bound for this algorithm, or as the exact number of comparisons for some other algorithm which is compared against this one.
(Two different counts)
This issue has been resolved by the comment below; one formula was originally quoted incorrectly.
equals (n lg n - n + 1); in fact it's between (n lg n - n + 1) and (n lg n + n + O(lg n))
If the first part is true, the second is trivially true as well, but explicitly stating the upper bound seems kind of pointless. I haven't looked at the details myself, but these two statements appear strange when taken together like this. Either the first one really is true, in which case I'd omit the second one as it is only confusing, or the second one is true, in which case the first one is wrong and should be omitted.
