Matrix multiplication: worst case, best case, and average case complexity

What is the (a) worst case, (b) best case, and (c) average case complexity of the following function, which does matrix multiplication?
for i = 1 to n do
    for j = 1 to n do
        C[i,j] = 0
        for k = 1 to n do
            C[i,j] = C[i,j] + A[i,k] * B[k,j]
        end {for}
    end {for}
end {for}
How would you justify the complexity?

i, j, and k all go from 1 to n regardless of the input values.
Therefore the best, average, and worst cases are all O(n * n * n) = O(n^3).
For each of the n possible values of i there are n values of j, and for each of those n values of j there are n values of k.
That gives n * n * n executions of the innermost statement.

O(n^3): each of the three nested loops runs over the full range from 1 to n, so the body of the innermost loop executes n * n * n = n^3 times.
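For concreteness, here is the same triple loop as a small, self-contained C program. It's a minimal sketch: the fixed size N, the sample matrices, and the operation counter are additions for illustration, not part of the original question.

#include <stdio.h>

#define N 3  /* illustrative size; the analysis holds for any n */

int main(void) {
    /* small sample matrices; the values are arbitrary */
    double A[N][N] = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
    double B[N][N] = {{9, 8, 7}, {6, 5, 4}, {3, 2, 1}};
    double C[N][N];
    long ops = 0;  /* counts executions of the innermost statement */

    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            C[i][j] = 0;
            for (int k = 0; k < N; k++) {
                C[i][j] += A[i][k] * B[k][j];
                ops++;
            }
        }

    printf("innermost statement ran %ld times (N^3 = %d)\n", ops, N * N * N);
    return 0;
}

Running it prints 27 for N = 3, matching the n^3 count regardless of the matrix contents, which is why best, average, and worst cases coincide.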


O(n) runtime algorithm

The algorithm below has runtime O(n) according to our professor, but I am confused as to why it is not
O(n log(n)): the outer loop can run up to log(n) times, and the inner loop can run up to n times.
Algorithm Loop5(n)
    i = 1
    while i ≤ n
        j = 1
        while j ≤ i
            j = j + 1
        i = i * 2
Your professor was right, the running time is O(n).
In the iteration of the outer while-loop where i = 2^k, for k = 0, 1, ..., log n, the inner while-loop makes O(i) iterations. (When I say log n I mean the base-2 logarithm log_2 n.)
The running time is therefore O(1 + 2 + 2^2 + 2^3 + ... + 2^k) for k = floor(log n). This sums to O(2^{k+1}), which is O(2^{log n}). (This follows from the formula for the partial sum of a geometric series.)
Because 2^{log n} = n, the total running time is O(n).
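If you want to check this empirically, here is a small C sketch (the iteration counter and the sample values of n are assumptions added for illustration) that counts the total number of inner-loop iterations of Loop5 and compares it to 2n:

#include <stdio.h>

/* counts the inner-loop iterations performed by Loop5(n) */
long loop5_count(long n) {
    long count = 0;
    for (long i = 1; i <= n; i *= 2)      /* outer loop: i doubles */
        for (long j = 1; j <= i; j++)     /* inner loop: i iterations */
            count++;
    return count;
}

int main(void) {
    for (long n = 10; n <= 100000; n *= 10)
        printf("n = %6ld  inner iterations = %7ld  2n = %7ld\n",
               n, loop5_count(n), 2 * n);
    return 0;
}

The count always stays below 2n, exactly as the geometric-series bound predicts.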
For the interested, here's a proof that the powers of two really sum to what I claim they sum to. (This is a very special case of a more general result.)
Claim. For any natural k, we have 1+2+2^2+...+2^k = 2^{k+1}-1.
Proof. Note that (2-1)*(1+2+2^2+...+2^k) = (2 - 1) + (2^2 - 2) + ... + (2^{k+1} - 2^k). Every power 2^i with 0 < i ≤ k appears once with a plus sign and once with a minus sign, so all of those terms cancel, leaving only -1 (from i = 0) and 2^{k+1}. Hence the sum equals 2^{k+1} - 1. QED.

Algorithm Analysis: Big-O explanation

I'm currently taking a class in algorithms. The following is a question I got wrong on a quiz. Basically, we have to indicate the worst-case running time in Big-O notation:
int foo(int n)
{
    int m = 0;
    while (n >= 2)
    {
        n = n / 4;
        m = m + 1;
    }
    return m;
}
I don't understand how the worst case time for this just isn't O(n). Would appreciate an explanation. Thanks.
foo calculates floor(log4(n)) by repeatedly dividing n by 4 and counting the divisions in m; at the end, m holds how many times 4 divides into n before the quotient drops below 2. For example, foo(256) returns 4, since 256 = 4^4. The running time is linear in the final value of m, which is about log base 4 of n, so the algorithm is O(log n), which in turn is also (more loosely) O(n).
Suppose the worst case really were on the order of n; that would mean the loop runs roughly n times.
Now look at the loop: n is divided by 4 (that is, by 2^2) at each step, so after the first iteration n is reduced to n/4, after the second to n/16, and so on. n is not being reduced linearly but by a constant factor each time, so in the worst case the running time is O(log n).
The computation can be expressed as a recurrence formula:
f(r) = 4 * f(r+1)
The solution is
f(r) = k * 4^(-r)
where ^ means exponent and k is a constant fixed by the initial condition. In our case f(0) = n, so k = n and
f(r) = n * 4^(-r)
Solving for r at the end condition, we have 2 = n * 4^(-r).
Taking log on both sides, log(2) = log(n) - r * log(4), we see that
r = (log(n) - log(2)) / log(4), that is, r = P * log(n) plus a constant, with P = 1 / log(4).
Not having more branches or inner loops, and assuming division and addition are O(1), we can confidently say the algorithm runs about P * log(n) steps and is therefore O(log(n)).
http://www.wolframalpha.com/input/?i=f%28r%2B1%29+%3D+f%28r%29%2F4%2C+f%280%29+%3D+n
Nitpickers' corner: a C int is usually 32 bits with a largest value of 2^31 - 1, so in practice that means at most 15 iterations, which is of course O(1). But I think your teacher really means O(log(n)).
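As a quick empirical check, here is a C sketch that compares foo's return value with log base 4 of n (the test values and the comparison printout are additions for illustration; foo itself is from the question, with the missing declaration of m filled in):

#include <stdio.h>
#include <math.h>

/* the function from the question, with m declared */
int foo(int n) {
    int m = 0;
    while (n >= 2) {
        n = n / 4;
        m = m + 1;
    }
    return m;
}

int main(void) {
    int tests[] = {4, 16, 100, 1000, 1000000};
    for (int t = 0; t < 5; t++) {
        int n = tests[t];
        printf("foo(%7d) = %2d   log4(%7d) = %5.2f\n",
               n, foo(n), n, log(n) / log(4));
    }
    return 0;
}

The returned m stays within one of log4(n), so the step count grows logarithmically, not linearly, in n.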

Calculating the Computational Complexity (Big-O)

Algorithm 1. QUEUESTUFF(n)
Input: Integer n
1) Let Q = an empty Queue
2) For i = 1 to n
3)     Q.Enqueue(i)
4) End For
5) For i = 1 to n-1
6)     Let X = Q.Dequeue()
7) End For
8) Let X = Q.Dequeue()
Output: The contents of X
What is the computational complexity (Big-O) of algorithm QUEUESTUFF?
The first For loop runs n times and the second one runs n-1 times. So would this make the complexity
O(2n-1), by doing (n + n) - 1,
or would it be O(n^2 - 1), by doing (n * n) - 1?
Thanks for any help, I just wanted to clarify this. My guess is that because we have a nested For loop, we would have to multiply n by n-1, but I thought I could assure myself better by getting someone else's opinion.
Got the answer thanks to Smoore. Since the loops are not nested, the Big-O would be O(2n-1), which simplifies to O(n).
There are two independent (not nested) for loops. n items are enqueued and then n items are dequeued, giving a complexity of O(n) times the complexity of enqueueing or dequeueing. If queue-operation complexity is O(1), then the procedure's complexity is O(n), but if queue-operation complexity is O(ln n), then the procedure's complexity is O(n ln n).
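To make the sequential (not nested) structure concrete, here is a C sketch of QUEUESTUFF; the array-backed queue with O(1) enqueue/dequeue and the operation counter are assumptions added for illustration:

#include <stdio.h>

#define MAX 1000

/* minimal array-backed queue with O(1) operations */
typedef struct { int data[MAX]; int head, tail; } Queue;

void enqueue(Queue *q, int x) { q->data[q->tail++] = x; }
int  dequeue(Queue *q)        { return q->data[q->head++]; }

int queuestuff(int n, long *ops) {
    Queue q = { .head = 0, .tail = 0 };
    int x = 0;
    for (int i = 1; i <= n; i++) {       /* first loop: n enqueues */
        enqueue(&q, i);
        (*ops)++;
    }
    for (int i = 1; i <= n - 1; i++) {   /* second loop: n-1 dequeues */
        x = dequeue(&q);
        (*ops)++;
    }
    x = dequeue(&q);                     /* final dequeue */
    (*ops)++;
    return x;
}

int main(void) {
    long ops = 0;
    int n = 100;
    int x = queuestuff(n, &ops);
    printf("output = %d, queue operations = %ld (2n = %d)\n", x, ops, 2 * n);
    return 0;
}

The operation count comes out to exactly 2n, which is linear in n, confirming O(n) when each queue operation is O(1).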

Discrete Mathematics Big-O notation Algorithm Complexity

I can probably figure out part b if you can help me do part a. I've been looking at this and similar problems all day, and I'm just having trouble grasping what to do with nested loops. For the first loop there are n iterations, for the second there are at most n-1, and for the third there are at most n-1. Am I thinking about this correctly?
Consider the following algorithm, which takes as input a sequence of n integers a1, a2, ..., an and produces as output a matrix M = {mij}, where mij is the minimum term in the sequence of integers ai, ai+1, ..., aj for j >= i, and mij = 0 otherwise.
initialize M so that mij = ai if j >= i and mij = 0 otherwise
for i := 1 to n do
    for j := i+1 to n do
        for k := i+1 to j do
            m[i][j] := min(m[i][j], a[k])
        end
    end
end
return M = {m[i][j]}
(a) Show that this algorithm uses Big-O(n^3) comparisons to compute the matrix M.
(b) Show that this algorithm uses Big-Omega(n^3) comparisons to compute the matrix M.
Using this fact and part (a), conclude that the algorithm uses Big-Theta(n^3) comparisons.
In part A, you need to find an upper bound for the number of min ops.
To find one, observe that the algorithm above clearly performs fewer min operations than the following:
for i = 1 to n
    for j = 1 to n    // bigger range than your algorithm
        for k = 1 to n    // bigger range than your algorithm
            (something with min)
The above performs exactly n^3 min operations, so your algorithm performs fewer than n^3 min operations.
From this we can conclude: #minOps <= 1 * n^3 (for each n > 10, where 10 is arbitrary).
By the definition of Big-O, this means the algorithm is O(n^3).
You said you can figure B alone, so I'll let you try it :)
hint: the middle loop has more iterations than for j = i+1 to n/2
For each iteration of the outer loop, the two inner nested loops do at most on the order of n^2 work (the worst case is at i = 1, where they perform about n^2/2 min operations). Summing over the outer loop, the total work behaves like the series 1^2 + 2^2 + 3^2 + 4^2 + ... + n^2, whose value is n(n+1)(2n+1)/6. Ignoring the lower-order terms of this sum, the order is O(n^3).
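As a sanity check on both parts, here is a C sketch that counts the min operations directly (the counter and the sample sizes are assumptions added for illustration; the input values themselves don't affect the count):

#include <stdio.h>

/* counts how many min operations the triple loop performs */
long count_min_ops(long n) {
    long ops = 0;
    for (long i = 1; i <= n; i++)
        for (long j = i + 1; j <= n; j++)
            for (long k = i + 1; k <= j; k++)
                ops++;  /* one min(m[i][j], a[k]) per innermost step */
    return ops;
}

int main(void) {
    for (long n = 10; n <= 1000; n *= 10)
        printf("n = %5ld  min ops = %12ld  n^3/6 = %12ld\n",
               n, count_min_ops(n), n * n * n / 6);
    return 0;
}

The count grows like n^3/6, which is bounded above and below by constant multiples of n^3, exactly what Theta(n^3) asserts.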

How to prove logarithmic complexity

for (int i = 1; i < N; i *= 2) { ... }
Things like that are the signatures of logarithmic complexity.
But how do we get log(N)?
Could you give a mathematical argument?
Useful reference on algorithmic complexity: http://en.wikipedia.org/wiki/Big_O_notation
On the nth iteration,
i = 2^n
The loop keeps running only while i < N, so on every iteration that actually executes,
2^n = i < N
Taking the base-2 logarithm of both sides,
log2 N > log2 (2^n)
log2 N > n
So the iteration number n is always less than log2 N, which means the total number of iterations is O(log N).
QED. Logarithmic complexity.
Multiplying N by 2 adds one more iteration, regardless of the size of N. That's pretty much the definition of the log function -- it goes up by a constant amount every time you multiply N by a constant.
Your code runs as long as i < N, and at each step i is doubled (i *= 2). We say the loop has logarithmic complexity because it runs about log(N) + const times: since 2^log(N) = N, after floor(log(N)) + 1 doublings i exceeds N and the loop stops.
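A quick empirical confirmation (a sketch; the iteration counter and the sample values of N are assumptions added for illustration):

#include <stdio.h>
#include <math.h>

int main(void) {
    /* count how many times the doubling loop runs for a few N */
    for (int N = 10; N <= 1000000; N *= 10) {
        int iters = 0;
        for (int i = 1; i < N; i *= 2)
            iters++;
        printf("N = %8d  iterations = %2d  log2(N) = %6.2f\n",
               N, iters, log2(N));
    }
    return 0;
}

The iteration count is always ceil(log2(N)), within one of log2(N), and multiplying N by 10 adds only three or four iterations.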
