How computationally expensive is matrix multiplication? - performance

What advantage does the function
N(x;θ) = θ1(θ2·x)
have over
G(x;θ) = θ·x
for an input vector
x ∈ R^n, with
θ1 ∈ R^(n×1)
θ2 ∈ R^(1×n)
θ ∈ R^(n×n)?

In the first case, θ2 with dimension 1×n is multiplied by x with dimension n×1, which gives a 1×1 output. Multiplying that scalar by θ1 (n×1) gives N(x;θ) with dimension n×1. There are n elements in θ2 and n elements in θ1, so n+n (2n) parameters in total, and the whole evaluation takes O(n) multiplications.
In the second case, θ with dimension n×n is multiplied by x with dimension n×1, so G(x;θ) is also n×1, but θ has n·n (n²) parameters and the product takes O(n²) multiplications.
Therefore, the advantage is that the first form is computationally cheaper to evaluate (and to store) than the second.
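As a quick illustration (a minimal numpy sketch of my own; the sizes and variable names are not from the question), the factored form stores 2n numbers and does O(n) work, while the full matrix stores n² numbers and does O(n²) work:

import numpy as np

n = 1000
x = np.random.rand(n, 1)       # input vector, n x 1

theta1 = np.random.rand(n, 1)  # n parameters
theta2 = np.random.rand(1, n)  # n parameters (2n in total)
theta = theta1 @ theta2        # n x n: storing this costs n^2 parameters

N_x = theta1 @ (theta2 @ x)    # 1x1 inner product first, then scale: O(n) work
G_x = theta @ x                # full matrix-vector product: O(n^2) work

assert np.allclose(N_x, G_x)   # identical outputs when theta = theta1 @ theta2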

Related

Does the asymptotic complexity of a multiplication algorithm only rely on the larger of the two operands?

I'm taking an algorithms class and I repeatedly have trouble when I'm asked to analyze the runtime of code when there is a line with multiplication or division. How can I find big-theta of multiplying an n-digit number by an m-digit number (where n > m)? Is it the same as multiplying two n-digit numbers?
For example, right now I'm attempting to analyze the following line of code:
return n*count/100
where count is at most 100. Is the asymptotic complexity of this any different from that of n*n/100, or of n*n/n?
You can always look this up in Computational complexity of mathematical operations.
In your case, the complexity of n*count/100 is O(length(n)), because 100 is a constant and length(count) is at most 3.
In general, multiplying two numbers of n and m digits takes O(nm) time, and the same holds for division (I assume we are talking about long division here). There are many sophisticated algorithms that beat this complexity.
To make things clearer, I will provide an example. Suppose you have three numbers:
A - n digits length
B - m digits length
C - p digits length
Find complexity of the following formula:
A * B / C
Multiply first. The complexity of A * B is O(nm), and the result is a number D with n+m digits. Now consider D / C: its complexity is O((n+m)p), so the overall complexity is the sum of the two, O(nm + (n+m)p) = O(m(n+p) + np).
Divide first. We divide B / C with complexity O(mp), giving a number E of at most m digits. Then we calculate A * E with complexity O(nm). Again, the overall complexity is O(mp + nm) = O(m(n+p)).
From the analysis you can see that it is beneficial to divide first. Of course, in a real-life situation you would have to account for numerical stability as well.
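A quick way to see the difference is to time both orders on big integers (a rough sketch of my own; the digit counts are arbitrary, CPython's int multiplication uses Karatsuba so the schoolbook O(nm) model is only approximate, and note that dividing first changes the rounded integer result):

import random
import time

def random_ndigit(d):
    return random.randrange(10 ** (d - 1), 10 ** d)

A = random_ndigit(200000)  # n digits
B = random_ndigit(100000)  # m digits
C = random_ndigit(50000)   # p digits

t0 = time.perf_counter()
multiply_first = (A * B) // C  # O(nm) product, then (n+m)-digit by p-digit division
t1 = time.perf_counter()
divide_first = A * (B // C)    # O(mp) division, then n-digit by m-digit product
t2 = time.perf_counter()

print(f"multiply first: {t1 - t0:.3f}s, divide first: {t2 - t1:.3f}s")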
From Modern Computer Arithmetic:
Assume the larger operand has size m, and the smaller has size n ≤ m, and denote by M(m,n) the corresponding multiplication cost.
When m is an exact multiple of n, say m = kn, a trivial strategy is to cut the larger operand into k pieces, giving M(kn,n) = kM(n) + O(kn).
Suppose m ≥ n and n is large. To use an evaluation-interpolation scheme, we need to evaluate the product at m + n points, whereas balanced k by k multiplication needs 2k points. Taking k ≈ (m+n)/2, we see that M(m,n) ≤ M((m+n)/2)(1 + o(1)) as n → ∞. On the other hand, from the discussion above, we have M(m,n) ≤ ⌈m/n⌉M(n)(1 + o(1)).
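The "cut the larger operand into k pieces" strategy for m = kn is easy to sketch (a toy decimal version of my own; real libraries work in binary limbs, and the multiplication by the weight stands in for what would be a cheap digit shift):

def unbalanced_mul(a, b, n_digits):
    # Split a into chunks of n_digits decimal digits (least significant first)
    # and do one balanced n-by-n product per chunk: M(kn, n) = k*M(n) + O(kn).
    base = 10 ** n_digits
    result, weight = 0, 1
    while a:
        a, chunk = divmod(a, base)    # peel off the low n_digits digits
        result += chunk * b * weight  # one balanced product; '* weight' models a shift
        weight *= base
    return result

print(unbalanced_mul(123456789012, 3456, 4) == 123456789012 * 3456)  # True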

Algorithm to beat Strassen's Algorithm

Professor Caesar wishes to develop a matrix-multiplication algorithm that is
asymptotically faster than Strassen's algorithm. His algorithm will use the divide-and-conquer method, dividing each matrix into pieces of size n/4 × n/4, and the divide and combine steps together will take Θ(n²) time.
You don't really specify what the question is here, but I guess it is to disprove that this trivial algorithm runs faster than Strassen.
Say you divide your matrices into blocks, each of dimension (n/k) × (n/k) (in your question, k is 4). Then each matrix will have k² blocks, and there will be k³ block multiplications (each block in the first matrix will be multiplied by k blocks in the second matrix). Consequently, the complexity recurrence is
T(n) = k³ T(n/k) + Θ(n²).
By case 1 of the Master theorem, this implies
T(n) = Θ(n^(log_k k³)) = Θ(n³).
This is the same as ordinary matrix multiplication. It does not beat Strassen, obviously.
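You can confirm the Θ(n³) count with a tiny recursion that just tallies the block multiplications (a sketch of my own; the Θ(n²) combine work is ignored since it doesn't affect the leading term):

def mults(n, k=4):
    # number of scalar multiplications done by the trivial blocked scheme
    if n == 1:
        return 1
    return k ** 3 * mults(n // k, k)  # k^3 recursive products of size n/k

for n in [4, 16, 64, 256]:
    print(n, mults(n), n ** 3)  # the two counts agree exactly: Θ(n^3)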

Matrix multiplication when one dimension is much larger than the other

So I read that Strassen's matrix multiplication algorithm has complexity O(n^2.8),
but it only works if A is n × n and B is n × n.
What if
A is m × n and B is n × o,
and m is much, much bigger than n and o, but n and o are still very big?
Padding with zeroes might make the multiplication take longer.
I'm doing a project that requires multiplication of such matrices, so I was hoping to get some advice.
Should I use the conventional algorithm, or is there a way to modify Strassen's algorithm to do it faster?
https://en.m.wikipedia.org/wiki/Strassen_algorithm
A product of size [2N x N] * [N x 10N] can be done as 20 separate [N x N] * [N x N] operations, arranged to form the result;
A product of size [N x 10N] * [10N x N] can be done as 10 separate [N x N] * [N x N] operations, summed to form the result.
These techniques will make the implementation more complicated, compared to simply padding to a power-of-two square; however, it is a reasonable assumption that anyone undertaking an implementation of Strassen rather than conventional multiplication will place a higher priority on computational efficiency than on simplicity of the implementation.
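Here's a minimal numpy sketch of the second decomposition (my own illustration; the @ operator stands in for a real Strassen routine applied to each N × N block):

import numpy as np

N = 64
A = np.random.rand(N, 10 * N)  # [N x 10N]
B = np.random.rand(10 * N, N)  # [10N x N]

C = np.zeros((N, N))
for i in range(10):
    # i-th N x N column block of A times i-th N x N row block of B, summed
    C += A[:, i * N:(i + 1) * N] @ B[i * N:(i + 1) * N, :]

assert np.allclose(C, A @ B)   # matches the direct product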

What is the time complexity of multiplying two matrices of unequal dimension?

I've looked over the big-O complexity of multiplying two n × n matrices, which takes time O(n³). But how do you get big-O complexity for multiplying two rectangular matrices which are of dimensions m × n and n × r? I've been told the answer is O(mnr), but I'm not sure where this comes from. Can anyone explain this?
Thanks!
I assume that you're talking about the complexity of multiplying two square matrices of dimensions n × n working out to O(n³) and are asking the complexity of multiplying an m × n matrix and an n × r matrix. There are specialized algorithms that can solve this problem faster than the naive approach, but for the purposes of this question I'll just talk about the standard "multiply each row by each column for each entry" algorithm.
First, let's see where the O(n³) term comes from in multiplying two n × n matrices. Note that for each value of the resulting matrix, the entry at position (i, j) is given by the inner product of the ith row of the left matrix and the jth column of the right matrix. There are n elements in each row and column, so computing each element takes time Θ(n). Doing this Θ(n²) times (once for each element of the resulting matrix) takes time Θ(n³).
Now think about this in the context of the product of an m × n matrix and an n × r matrix. Entry (i, j) in the matrix is given by the inner product of the ith row of the left matrix (which has n entries) and the jth column of the right matrix (which has n entries), so computing it takes time Θ(n). You do this once per element of the resulting matrix. Since the resulting matrix has dimension m × r, there are Θ(mr) elements to consider. Therefore, the total work done is Θ(mnr).
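For reference, here is the standard algorithm spelled out (a minimal Python version of my own, using plain lists), where the three loop bounds make the Θ(mnr) count visible:

def matmul(A, B):
    m, n = len(A), len(A[0])    # A is m x n
    r = len(B[0])               # B is n x r
    C = [[0] * r for _ in range(m)]
    for i in range(m):          # m rows of the result
        for j in range(r):      # r columns of the result
            for k in range(n):  # inner product of length n
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matmul([[1, 2, 3], [4, 5, 6]], [[1, 0], [0, 1], [1, 1]]))  # 2x3 by 3x2 -> [[4, 5], [10, 11]]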
Hope this helps!

What is the Big-O of linear regression?

How large a system is it reasonable to attempt to do a linear regression on?
Specifically: I have a system with ~300K sample points and ~1200 linear terms. Is this computationally feasible?
The linear regression is computed as (X'X)^-1 X'Y.
If X is an (n × k) matrix:
(X' X) takes O(n·k²) time and produces a (k × k) matrix
The matrix inversion of a (k × k) matrix takes O(k³) time
(X' Y) takes O(n·k) time and produces a (k × 1) vector
The final multiplication of a (k × k) matrix and a (k × 1) vector takes O(k²) time
So the Big-O running time is O(k²·(n + k)).
See also: http://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations#Matrix_algebra
If you get fancy, it looks like you can get the time down to O(k²·(n + k^0.376)) with the Coppersmith–Winograd algorithm.
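In practice, a minimal numpy sketch of this normal-equations solve looks like the following (my own illustration, with sizes scaled down from the question's 300K × 1200 just to keep the example light; np.linalg.solve on the k × k system avoids forming the explicit inverse):

import numpy as np

n, k = 30000, 120                 # scaled-down stand-ins for 300K and 1200
X = np.random.rand(n, k)
Y = np.random.rand(n)

XtX = X.T @ X                     # O(n*k^2): the dominant term when n >> k
XtY = X.T @ Y                     # O(n*k)
beta = np.linalg.solve(XtX, XtY)  # O(k^3)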
You can express this as a matrix equation A·x = b, where the matrix A is 300K rows by 1200 columns, the coefficient vector x is 1200×1, and the RHS vector b is 300K×1.
If you multiply both sides by the transpose of the matrix, you have a square system of equations for the unknowns that's 1200×1200. You can use LU decomposition or any other algorithm you like to solve for the coefficients. (This is what least squares is doing.)
So the Big-O behavior is something like O(mn²), where m = 300K and n = 1200. You'd account for the transpose, the matrix multiplication, the LU decomposition, and the forward-back substitution to get the coefficients.
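A sketch of that LU route with scipy (my own example; scipy.linalg.lu_factor does the decomposition once, and lu_solve performs the forward-back substitution):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

m, n = 30000, 120         # scaled-down stand-ins for 300K and 1200
A = np.random.rand(m, n)
b = np.random.rand(m)

AtA = A.T @ A             # O(m*n^2)
Atb = A.T @ b             # O(m*n)
lu, piv = lu_factor(AtA)  # O(n^3) LU decomposition
x = lu_solve((lu, piv), Atb)  # O(n^2) forward-back substitution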
The linear regression is computed as (X'X)^-1 X'y.
As far as I learned, y is a vector of results (or in other words: dependent variables).
Therefore, if X is an (n × m) matrix and y is an (n × 1) vector:
Transposing an (n × m) matrix takes O(n⋅m) time and produces an (m × n) matrix
(X' X) takes O(n⋅m²) time and produces an (m × m) matrix
The matrix inversion of an (m × m) matrix takes O(m³) time
(X' y) takes O(n⋅m) time and produces an (m × 1) vector
The final multiplication of an (m × m) matrix and an (m × 1) vector takes O(m²) time
So the Big-O running time is O(n⋅m + n⋅m² + m³ + n⋅m + m²).
Now, we know that:
m² ≤ m³
n⋅m ≤ n⋅m²
so asymptotically, the actual Big-O running time is O(n⋅m² + m³) = O(m²(n + m)).
And that's what we have from
http://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations#Matrix_algebra
But, we know that there's a significant difference between the case n → ∞ and m → ∞.
https://en.wikipedia.org/wiki/Big_O_notation#Multiple_variables
So which one should we choose? Obviously it's the number of observations which is more likely to grow, rather than the number of attributes.
So my conclusion is that if we assume the number of attributes remains constant, we can ignore the m terms, which is a relief, because the time complexity of multivariate linear regression then becomes a mere linear O(n). On the other hand, we can expect our computing time to explode when the number of attributes increases substantially.
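A quick empirical check of that claim (a rough timing sketch of my own; with m = 50 attributes held fixed, the run time should grow roughly linearly with n):

import time
import numpy as np

m = 50
for n in [100000, 200000, 400000]:
    X, y = np.random.rand(n, m), np.random.rand(n)
    t0 = time.perf_counter()
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    print(n, f"{time.perf_counter() - t0:.3f}s")  # roughly doubles as n doubles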
The closed-form linear regression model is computed as follows:
The derivative of the RSS is
∇RSS(W) = -2H^t (y - HW)
So, we solve for
-2H^t (y - HW) = 0
Then, the W value is
W = (H^t H)^-1 H^t y
where:
W: is the vector of expected weights
H: is the N×D features matrix, where N is the number of observations and D is the number of features
y: is the vector of actual values
Then, the complexity of
H^t H is O(N·D^2)
The complexity of the inversion of (H^t H) is O(D^3)
So, the complexity of
(H^t H)^-1 is O(N·D^2 + D^3)
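A small numpy check of this closed form against a library solver (my own sanity test; the names N, D, H, y follow the answer above):

import numpy as np

N, D = 1000, 10
H = np.random.rand(N, D)
y = np.random.rand(N)

W_closed = np.linalg.inv(H.T @ H) @ (H.T @ y)    # W = (H^t H)^-1 H^t y
W_lstsq, *_ = np.linalg.lstsq(H, y, rcond=None)  # reference least-squares solve
assert np.allclose(W_closed, W_lstsq)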
