What is the time complexity of multiplying two matrices of unequal dimension? - algorithm

I've looked over the big-O complexity of multiplying two n × n matrices, which takes time O(n^3). But how do you get big-O complexity for multiplying two rectangular matrices which are of dimensions m × n and n × r? I've been told the answer is O(mnr), but I'm not sure where this comes from. Can anyone explain this?
Thanks!

I assume that you're talking about the complexity of multiplying two square matrices of dimensions n × n working out to O(n^3) and are asking the complexity of multiplying an m × n matrix and an n × r matrix. There are specialized algorithms that can solve this problem faster than the naive approach, but for the purposes of this question I'll just talk about the standard "multiply each row by each column for each entry" algorithm.
First, let's see where the O(n^3) term comes from in multiplying two n × n matrices. Note that for each value of the resulting matrix, the entry at position (i, j) is given by the inner product of the ith row of the left matrix and the jth column of the right matrix. There are n elements in each row and column, so computing each element takes time Θ(n). Doing this Θ(n^2) times (once for each element of the resulting matrix) takes time Θ(n^3).
Now think about this in the context of the product of an m × n matrix and an n × r matrix. Entry (i, j) in the matrix is given by the inner product of the ith row of the left matrix (which has n entries) and the jth column of the right matrix (which has n entries), so computing it takes time Θ(n). You do this once per element of the resulting matrix. Since the resulting matrix has dimension m × r, there are Θ(mr) elements to consider. Therefore, the total work done is Θ(mnr).
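For concreteness, here is a minimal Java sketch (the helper name is my own) of the standard row-times-column algorithm described above, for an m × n matrix A and an n × r matrix B; the three nested loops make the Θ(mnr) bound easy to read off.

static double[][] multiply(double[][] A, double[][] B) {
    int m = A.length, n = B.length, r = B[0].length;
    double[][] C = new double[m][r];         // the m × r result
    for (int i = 0; i < m; i++)              // m choices of row
        for (int j = 0; j < r; j++)          // r choices of column
            for (int k = 0; k < n; k++)      // n multiply-adds per entry
                C[i][j] += A[i][k] * B[k][j];
    return C;
}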
Hope this helps!

Related

How computationally expensive is matrix multiplication?

What advantage does the function
N(x;θ) = θ1(θ2·x)
have over
G(x;θ) = θ·x
for an input vector x ∈ R^n, where
θ1 ∈ R^(n×1)
θ2 ∈ R^(1×n)
θ ∈ R^(n×n)?
For the first case, θ2, with dimension 1×n, is multiplied with x, which has dimension n×1. That gives a 1×1 output. Multiplying that scalar by θ1 (n×1) gives N(x;θ) an output dimension of n×1. So there are n elements in θ2 and n elements in θ1, for a total of n + n = 2n parameters.
For the second case, θ, with dimension n×n, is multiplied with x (dimension n×1), which gives G(x;θ) an output dimension of n×1. In this case there are n·n = n^2 elements in θ.
Therefore, the advantage is that the first case is computationally cheaper to evaluate (and has fewer parameters) than the second.
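To make the operation counts concrete, here is a small Java sketch (the method names are my own) that evaluates both functions with plain loops; N uses 2n parameters and roughly 2n multiplications, while G uses n^2 parameters and n^2 multiplications.

static double[] applyN(double[] theta1, double[] theta2, double[] x) {
    double s = 0;                                    // θ2 (1×n) times x (n×1): a scalar, n multiplications
    for (int i = 0; i < x.length; i++) s += theta2[i] * x[i];
    double[] out = new double[theta1.length];        // θ1 (n×1) times that scalar: n more multiplications
    for (int i = 0; i < theta1.length; i++) out[i] = theta1[i] * s;
    return out;
}

static double[] applyG(double[][] theta, double[] x) {
    double[] out = new double[theta.length];         // θ (n×n) times x (n×1): n^2 multiplications
    for (int i = 0; i < theta.length; i++)
        for (int j = 0; j < x.length; j++)
            out[i] += theta[i][j] * x[j];
    return out;
}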

Does the asymptotic complexity of a multiplication algorithm only rely on the larger of the two operands?

I'm taking an algorithms class and I repeatedly have trouble when I'm asked to analyze the runtime of code when there is a line with multiplication or division. How can I find big-theta for multiplying an n-digit number by an m-digit number (where n > m)? Is it the same as multiplying two n-digit numbers?
For example, right now I'm attempting to analyze the following line of code:
return n*count/100
where count is at most 100. Is the asymptotic complexity of this any different from n*n/100? or n*n/n?
You can always look it up here: Computational complexity of mathematical operations.
In your case, the complexity of n*count/100 is O(length(n)), as 100 is a constant and length(count) is at most 3.
In general, multiplication of two numbers of n and m digits takes O(nm), and the same holds for division. Here I assume we are talking about long division. There are many sophisticated algorithms that beat this complexity.
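As a rough illustration of where the O(nm) digit-operation count comes from, here is a sketch of schoolbook (long) multiplication on little-endian decimal digit arrays; the names and representation are my own and it is not optimized.

static int[] longMultiply(int[] a, int[] b) {       // a has n digits, b has m digits, least significant first
    int[] result = new int[a.length + b.length];
    for (int i = 0; i < a.length; i++) {             // n outer iterations
        int carry = 0;
        for (int j = 0; j < b.length; j++) {         // m inner iterations each: O(nm) digit operations in total
            int t = result[i + j] + a[i] * b[j] + carry;
            result[i + j] = t % 10;
            carry = t / 10;
        }
        result[i + b.length] += carry;
    }
    return result;                                   // may contain a leading zero digit
}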
To make things clearer, I will provide an example. Suppose you have three numbers:
A - n digits long
B - m digits long
C - p digits long
Find complexity of the following formula:
A * B / C
Multiply first. The complexity of A * B is O(nm), and the result is a number D, which is n + m digits long. Now consider D / C; here the complexity is O((n+m)p), so the overall complexity is the sum of the two: O(nm + (n+m)p) = O(m(n+p) + np).
Divide first. We divide B / C; the complexity is O(mp) and we get an m-digit number E. Now we calculate A * E; here the complexity is O(nm). Again the overall complexity is O(mp + nm) = O(m(n+p)).
From this analysis you can see that it is beneficial to divide first. Of course, in a real situation you would have to account for numerical stability as well.
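A toy check of the two cost estimates derived above (n, m, p are digit counts; the example numbers below are ones I picked arbitrarily):

static void compareOrders(long n, long m, long p) {
    long multiplyFirst = n * m + (n + m) * p;   // cost of computing (A * B) / C
    long divideFirst   = m * p + n * m;         // cost of computing A * (B / C)
    System.out.println(multiplyFirst + " vs " + divideFirst);
}

For example, compareOrders(1000, 100, 500) prints 650000 vs 150000, so dividing first is clearly cheaper for those sizes.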
From Modern Computer Arithmetic:
Assume the larger operand has size m, and the smaller has size n ≤ m, and denote by M(m,n) the corresponding multiplication cost.
When m is an exact multiple of n, say m = kn, a trivial strategy is to cut the larger operand into k pieces, giving M(kn,n) = kM(n) + O(kn).
Suppose m ≥ n and n is large. To use an evaluation-interpolation scheme, we need to evaluate the product at m + n points, whereas balanced k by k multiplication needs 2k points. Taking k ≈ (m+n)/2, we see that M(m,n) ≤ M((m+n)/2)(1 + o(1)) as n → ∞. On the other hand, from the discussion above, we have M(m,n) ≤ ⌈m/n⌉M(n)(1 + o(1)).
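As a rough sketch of the "cut the larger operand into k pieces" strategy from the quote (using java.math.BigInteger and decimal pieces purely for illustration; real implementations work on machine words):

import java.math.BigInteger;

// Multiply a large non-negative A by an n-digit B by peeling off n decimal
// digits of A at a time: each piece costs one balanced n-by-n multiplication,
// plus roughly O(n) work per piece to shift and add the partial product.
static BigInteger multiplyByPieces(BigInteger a, BigInteger b, int n) {
    BigInteger base = BigInteger.TEN.pow(n);
    BigInteger result = BigInteger.ZERO;
    int shift = 0;
    while (a.signum() != 0) {
        BigInteger piece = a.mod(base);                   // the low n digits of A
        result = result.add(piece.multiply(b).multiply(BigInteger.TEN.pow(shift)));
        a = a.divide(base);
        shift += n;
    }
    return result;
}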

Algorithm to beat Strassen's Algorithm

Professor Caesar wishes to develop a matrix-multiplication algorithm that is asymptotically faster than Strassen's algorithm. His algorithm will use the divide-and-conquer method, dividing each matrix into pieces of size n/4 × n/4, and the divide and combine steps together will take Θ(n^2) time.
You don't really specify what the question is here, but I guess it is to disprove that this trivial algorithm runs faster than Strassen.
Say you divide your matrices into blocks, each of dimension (n/k) × (n/k) (in your question, k is 4). Then each matrix will have k^2 blocks, and there will be k^3 block multiplications (each block in the first matrix is multiplied by k blocks in the second matrix). Consequently, the complexity recurrence is
T(n) = k^3 T(n/k) + Θ(n^2).
By case 1 of the Master theorem, this implies
T(n) = Θ(n^(log_k(k^3))) = Θ(n^3).
This is the same as ordinary matrix multiplication. It does not beat Strassen, obviously.
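If you want a quick sanity check of that recurrence, here is a small counting sketch (my own helper, assuming n is a power of k) that tallies the scalar multiplications done by the naive blocked scheme:

static long blockMultiplications(long n, int k) {
    if (n == 1) return 1;
    return (long) k * k * k * blockMultiplications(n / k, k);   // k^3 subproblems of size n/k
}

For instance, blockMultiplications(64, 4) returns 262144 = 64^3, and you get the same n^3 count for any fixed k, matching the Θ(n^3) solution above.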

Big O notation for a matrix algorithm

I have a simple algorithm that prints a two-dimensional matrix (m*n, where m and n are different numbers):
for (i = 0; i < m; i++)
    for (j = 0; j < n; j++)
        Console.WriteLine("{0}", A[i, j]);
I read that the big O notation for this algorithm is O(n^2);
Could somebody explain me what is the "n^2" in that statement? If this is number of elementary operations, then it should be m*n, not n^2?
In reality it should be m*n. We can take it to be the number of elementary operations in this case, but the actual definition is "the upper bound on the number of elementary operations."
Yeah, the time complexity for the specified block of code is O(n * m).
In simple words, that means your algorithm does at most k * n * m operations, where k is some small constant factor.
With for-loops, the complexity is measured as O(N) times the complexity of the block inside the for-loop. The outer for-loop contains a second for-loop, so the complexity would be O(N) * O(N) = O(N^2). The inner for-loop contains a simple output statement, which has a complexity of O(1). The N corresponds to the number of inputs (treating m and n as the same N here), so the time taken to execute the code is proportional to the number of items squared.
The first for loop runs m times and the second for loop runs n times, so the print statement executes m * n times in total.
In Big O notation we keep only the most dominant term and ignore constant coefficients, so the complexity is O(m * n); if m and n are of the same order, that is commonly written as O(n^2).

Big O, what is the complexity of summing a series of n numbers?

I always thought the complexity of:
1 + 2 + 3 + ... + n is O(n), and summing two n by n matrices would be O(n^2).
But today I read in a textbook, "by the formula for the sum of the first n integers, this is n(n+1)/2", which is (1/2)n^2 + (1/2)n, and thus O(n^2).
What am I missing here?
The big O notation can be used to determine the growth rate of any function.
In this case, it seems the book is not talking about the time complexity of computing the value, but about the value itself. And n(n+1)/2 is O(n^2).
You are confusing complexity of runtime and the size (complexity) of the result.
The running time of summing, one after the other, the first n consecutive numbers is indeed O(n).[1]
But the complexity of the result, that is the size of "sum from 1 to n" = n(n + 1)/2, is O(n^2).
[1] But for arbitrarily large numbers this is simplistic since adding large numbers takes longer than adding small numbers. For a precise runtime analysis, you indeed have to consider the size of the result. However, this isn’t usually relevant in programming, nor even in purely theoretical computer science. In both domains, summing numbers is usually considered an O(1) operation unless explicitly required otherwise by the domain (i.e. when implementing an operation for a bignum library).
n(n+1)/2 is the quick way to sum a consecutive sequence of N integers (starting from 1). I think you're confusing an algorithm with big-oh notation!
If you thought of it as a function, then the big-oh complexity of this function is O(1):
public int sum_of_first_n_integers(int n) {
    return (n * (n + 1)) / 2;
}
The naive implementation would have big-oh complexity of O(n).
public int sum_of_first_n_integers(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i++) {
        sum += i;
    }
    return sum;
}
Even just looking at each cell of a single n-by-n matrix is O(n^2), since the matrix has n^2 cells.
There really isn't a complexity of a problem, but rather a complexity of an algorithm.
In your case, if you choose to iterate through all the numbers, the complexity is, indeed, O(n).
But that's not the most efficient algorithm. A more efficient one is to apply the formula n*(n+1)/2, which takes constant time, and thus the complexity is O(1).
So my guess is that this is actually a reference to Cracking the Coding Interview, which has this paragraph on a StringBuffer implementation:
On each concatenation, a new copy of the string is created, and the two strings are copied over, character by character. The first iteration requires us to copy x characters. The second iteration requires copying 2x characters. The third iteration requires 3x, and so on. The total time therefore is O(x + 2x + ... + nx). This reduces to O(xn²). (Why isn't it O(xnⁿ)? Because 1 + 2 + ... + n equals n(n+1)/2, or O(n²).)
For whatever reason I found this a little confusing on my first read-through, too. The important bit to see is that n is multiplying n, or in other words that n² is happening, and that dominates. This is why ultimately O(xn²) is just O(n²) -- the x is sort of a red herring.
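A small sketch (assumed setup: joining n strings of length x by repeated concatenation) that simply tallies the characters copied, to show where the x + 2x + ... + nx total comes from:

static long charsCopied(int n, int x) {
    long total = 0;
    long lengthSoFar = 0;
    for (int i = 0; i < n; i++) {
        lengthSoFar += x;          // the accumulated string grows by x characters
        total += lengthSoFar;      // and the whole accumulated string is copied again
    }
    return total;                  // equals x * n * (n + 1) / 2, i.e. O(x n^2)
}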
You have a formula that doesn't depend on the number of numbers being added, so it's a constant-time algorithm, or O(1).
If you add each number one at a time, then it's indeed O(n). The formula is a shortcut; it's a different, more efficient algorithm. The shortcut works when the numbers being added are all 1..n. If you have a non-contiguous sequence of numbers, then the shortcut formula doesn't work and you'll have to go back to the one-by-one algorithm.
None of this applies to the matrix of numbers, though. To add two matrices, it's still O(n^2) because you're adding n^2 distinct pairs of numbers to get a matrix of n^2 results.
There's a difference between summing N arbitrary integers and summing N that are all in a row. For 1+2+3+4+...+N, you can take advantage of the fact that they can be divided into pairs with a common sum, e.g. 1+N = 2+(N-1) = 3+(N-2) = ... = N + 1. So that's N+1, N/2 times. (If there's an odd number, one of them will be unpaired, but with a little effort you can see that the same formula holds in that case.)
That is not O(N^2), though. It's just a formula that uses N^2, actually O(1). O(N^2) would mean (roughly) that the number of steps to calculate it grows like N^2, for large N. In this case, the number of steps is the same regardless of N.
Adding the first n numbers:
Consider the algorithm:
Series_Add(n)
    return n*(n+1)/2
This algorithm indeed runs in O(|n|^2), where |n| is the length (the number of bits) of n and not its magnitude, simply because multiplication of two numbers, one of k bits and the other of l bits, runs in O(k*l) time.
Careful
Considering this algorithm:
Series_Add_pseudo(n):
    sum = 0
    for i = 1 to n:
        sum += i
    return sum
which is the naive approach, you might assume that this algorithm runs in linear time, or at least in polynomial time. This is not the case.
The input representation (length) of n is O(log n) bits (for any base other than unary), and although the algorithm runs linearly in the magnitude of n, it runs exponentially (n = 2^(log n)) in the length of the input.
This is actually the pseudo-polynomial algorithm case. It appears to be polynomial, but it is not.
You can even try it in Python (or any programming language) with a medium-length number, say 200 bits.
Applying the first algorithm, the result comes back in a split second; applying the second, you would have to wait a century...
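Here is one way to run that experiment in Java (using java.math.BigInteger; the setup is my own): the closed-form formula on a ~200-bit n returns instantly, while the naive loop would need about 2^200 iterations and is therefore omitted.

import java.math.BigInteger;

public class SeriesSum {
    static BigInteger formulaSum(BigInteger n) {
        return n.multiply(n.add(BigInteger.ONE)).divide(BigInteger.valueOf(2));
    }

    public static void main(String[] args) {
        BigInteger n = BigInteger.ONE.shiftLeft(200);   // n = 2^200, a 201-bit number
        System.out.println(formulaSum(n));              // prints in a split second
        // Series_Add_pseudo(n) from above would loop ~2^200 times:
        // linear in the magnitude of n, but exponential in its bit-length.
    }
}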
1+2+3+...+n is always less than n+n+n+...+n (n times). You can rewrite n+n+...+n as n*n.
f(n) = O(g(n)) if there exists a positive integer n0 and a positive constant c, such that f(n) ≤ c * g(n) for all n ≥ n0,
since Big-O represents an upper bound on the function, and here the function f(n) is the sum of the natural numbers up to n.
Now, talking about time complexity: for small numbers, an addition should be a constant amount of work. But the size of n could be humongous; you can't rule that possibility out.
Adding integers can take a linear amount of time when n is really large. So you can say that addition is an O(n) operation and you're adding n items, so that alone would make it O(n^2). Of course it will not always take n^2 time, but it's the worst case when n is really large (upper bound, remember?).
Now, let's say you try to compute it directly using n(n+1)/2. Just one multiplication and one division; this should be a constant-time operation, no?
No.
Using a natural size metric of number of digits, the time complexity of multiplying two n-digit numbers using long multiplication is Θ(n^2). When implemented in software, long multiplication algorithms must deal with overflow during additions, which can be expensive. (Wikipedia)
That again leaves us to O(n^2).
It's O(n^2), because it is equivalent to (n^2 + n)/2, and in Big O you ignore constant factors, so even though the squared term is divided by 2, you still have quadratic growth.
Think about O(n) and O(n/2): we similarly don't distinguish the two. O(n/2) is just O(n) scaled by a constant, but the growth rate is still linear.
What that means is that as n increases, if you were to plot the number of operations on a graph, you would see an n^2 curve appear.
You can see that already:
when n = 2 you get 3
when n = 3 you get 6
when n = 4 you get 10
when n = 5 you get 15
when n = 6 you get 21
If you plot these values, you will see that the curve is similar to that of n^2: the value at each point is smaller, but the shape of the curve is the same. Thus we say the order of growth is the same, because it grows similarly to n^2 as n gets bigger.
The sum of the series of the first n natural numbers can be found in two ways. The first way is by adding all the numbers in a loop. In this case the algorithm is linear and the code will look like this:
int sum = 0;
for (int i = 1; i <= n; i++) {
    sum += i;
}
return sum;
It is analogous to 1+2+3+4+...+n. In this case the complexity of the algorithm is measured as the number of times the addition operation is performed, which is O(n).
The second way of finding the sum of the series of n natural numbers is the direct formula n*(n+1)/2. This formula uses multiplication instead of repetitive addition. The multiplication operation does not have linear time complexity; there are various algorithms available for multiplication with time complexities ranging from roughly O(N^1.45) to O(N^2), so the time complexity of multiplication depends on the processor's architecture. But for analysis purposes the time complexity of multiplication is considered to be O(N^2). Therefore, when one uses the second way to find the sum, the time complexity will be O(N^2).
Here the multiplication operation is not the same as the addition operation. Anyone familiar with computer organisation can easily understand the internal workings of the multiplication and addition operations: the multiplication circuit is more complex than the adder circuit and requires much more time to compute its result. So the time complexity of summing the series this way can't be considered constant.
