I am accelerating a model by replacing all of its linear algebra operations with cuBLAS functions, and I want to estimate the time complexity or FLOP count of the model so I can evaluate its performance on a roofline model.
The model uses two kinds of operations: GEMM and TRSM.
I know the FLOP count of GEMM is about 2 * k * m * n, from the question "How to compute the achieved FLOPS of a MPI program which calls cuBlas function":
The standard BLAS GEMM operation is C <- alpha * (A dot B) + beta * C. For A (m by k), B (k by n) and C (m by n), each inner product of a row of A and a column of B, multiplied by alpha, is 2 * k + 1 flop, there are m * n such inner products in A dot B, and there is another 2 * m * n flop for adding beta * C to that product. So the total FLOP count is (2 * k + 3) * (m * n) when alpha and beta are both non-zero.
But for TRSM, I have no idea about its computational cost. All the documents I have found only say it is about O(n^3), which is not precise enough to derive a FLOP count.
Thank you sincerely for your answers!
TRSM covers a number of different cases depending on the side, transpose and diagonal options, but for the canonical left-side solve B <- alpha * inv(A) dot B, where A is a non-unit (m by m) upper triangular matrix and B is (m by n), the operation count is roughly n * m * (m + 1): each of the n right-hand sides needs m divisions by the diagonal entries plus about m * (m - 1) multiply-add operations over the non-zero off-diagonal entries of A. So TRSM costs about m^2 * n flops, the same order as a GEMM with k = m, which is where the O(n^3) in the documentation comes from when the matrices are roughly square.
If you want to know the exact FLOP count for the reference implementation of any of the BLAS or LAPACK routines, the obvious way to do it is to inspect the reference serial code and count the floating point operations, either by hand or by instrumenting the code itself.
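For the roofline analysis it may help to turn these counts into a small helper. This is only a sketch based on the counts discussed above; the exact numbers depend on whether the alpha/beta scaling and the diagonal divisions are included:

def gemm_flops(m, n, k, alpha_beta_nonzero=True):
    # C <- alpha*A*B + beta*C with A (m x k), B (k x n), C (m x n)
    if alpha_beta_nonzero:
        return (2 * k + 3) * m * n   # count quoted above, including alpha/beta work
    return 2 * m * n * k             # the usual rule of thumb

def trsm_flops(m, n):
    # left-side solve B <- alpha*inv(A)*B with A (m x m) triangular, B (m x n)
    return n * m * (m + 1)           # ~m^2 flops per right-hand side, n right-hand sides

# Example: a 4096^3 GEMM and a matching TRSM
print(gemm_flops(4096, 4096, 4096))   # ~1.4e11 flops
print(trsm_flops(4096, 4096))         # ~6.9e10 flops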
I'm having a tough time understanding Big O time complexity.
Formal definition of Big O:
f(n) = O(g(n)) means there are positive constants c and k, such that 0 ≤ f(n) ≤ cg(n) for all n ≥ k. The values of c and k must be fixed for the function f and must not depend on n.
And worst time complexity of insertion sort is O(n^2).
I want to understand what f(n), g(n), c and k are here in the case of insertion sort.
Explanation
It is not that easy to formalize an algorithm so that you can apply Big-O to it formally; Big-O is a mathematical concept that does not translate easily to algorithms. Typically, you measure the number of "computation steps" needed to perform the operation, as a function of the size of the input.
So f is the function that measures how many computation steps the algorithm performs.
n is the size of the input, for example 5 for a list like [4, 2, 9, 8, 2].
g is the function you measure against, so g = n^2 if you check for O(n^2).
c and k heavily depend on the specific algorithm and on what exactly f looks like.
Example
The biggest issue with formalizing an algorithm is that you can not really tell exactly how many computation steps are performed. Let's say we have the following Java code:
public static void printAllEven(int n) {
    for (int i = 0; i < n; i++) {
        if (i % 2 == 0) {
            System.out.println(i);
        }
    }
}
How many steps does it perform? How deep should we go? What about for (int i = 0; i < n; i++)? Those are multiple statements which are executed during the loop. What about i % 2? Can we assume this is a "single operation"? On which level, one CPU cycle? One assembly instruction? What about println(i), how many computation steps does it need, 1 or 5 or maybe 200?
This is not practical. We do not know the exact amounts, so we abstract and say each of these takes some constant number of steps, A, B and C respectively, which is okay since each runs in constant time.
After simplifying the analysis, we can say that we are effectively only interested in how often println(i) is called.
This leads to the observation that we call it about n / 2 times (since that is how many even numbers there are between 0 and n).
The exact formula for f, using the constants above, would be something like
n * A + n * B + n/2 * C
But since constants do not really play any role (they vanish in c), we could also just ignore this and simplify.
Now you are left with proving that n / 2 is in O(n^2), for example. By doing that, you will also get concrete numbers for c and k. Example:
n / 2 <= n <= 1 * n^2 // for all n >= 0
So by choosing c = 1 and k = 0 you have proven the claim. Other values for c and k work as well, example:
n / 2 <= 100 * n <= 5 * n^2 // for all n >= 20
Here we have chosen c = 5 and k = 20.
You could play the same game with the full formula as well and get something like
n * A + n * B + n/2 * C
<= n * (A + B + C)
= D * n
<= D * n^2 // for all n > 0
with c = D and k = 0.
As you see, it does not really play any role, the constants just vanish in c.
In the case of insertion sort, f(n) is the actual number of operations your processor performs to do the sort, and g(n) = n². Minimal values for c and k are implementation-defined, but they are not that important. The main idea this Big-O notation conveys is that if you double the size of the array, the time insertion sort takes grows by approximately a factor of 4 (2²). (For insertion sort it can be smaller, but Big-O only gives an upper bound.)
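To see the quadratic growth concretely, here is a small sketch (my own addition, not part of the original answer) that counts insertion sort's inner-loop shifts; on random input, doubling n roughly quadruples the count:

import random

def insertion_sort_steps(arr):
    # sort a copy of arr with insertion sort and return the number of inner-loop steps
    a = list(arr)
    steps = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]   # shift the larger element one slot to the right
            j -= 1
            steps += 1
        a[j + 1] = key
    return steps

random.seed(0)
print(insertion_sort_steps([random.random() for _ in range(1000)]))   # ~n^2/4 shifts on average
print(insertion_sort_steps([random.random() for _ in range(2000)]))   # roughly 4x the first count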
So I read that Strassen's matrix multiplication algorithm has complexity O(n^2.8)
but it works only if A is n x n and B is n x n
What if
A is m x n and B is n x o
and m is much, much bigger than n and o, but n and o are still very big?
Padding with zeroes might make the multiplication take longer
I'm doing a project that requires multiplying matrices like this, so I was hoping to get some advice.
Should I use the conventional algorithm or is there a way to modify Strassen's algorithm to do it faster?
https://en.m.wikipedia.org/wiki/Strassen_algorithm
Strassen's algorithm can be applied to rectangular matrices by partitioning the product into square blocks. For example, a product of size [2N x N] * [N x 10N] can be done as 20 separate [N x N] * [N x N] operations, arranged to form the result;
A product of size [N x 10N] * [10N x N] can be done as 10 separate [N x N] * [N x N] operations, summed to form the result.
These techniques will make the implementation more complicated, compared to simply padding to a power-of-two square; however, it is a reasonable assumption that anyone undertaking an implementation of Strassen, rather than conventional, multiplication, will place a higher priority on computational efficiency than on simplicity of the implementation.
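As an illustration of the blocking idea, here is a sketch (my own, not the answer's code); strassen_square stands in for any N x N Strassen routine, with plain numpy multiplication used as a placeholder:

import numpy as np

def strassen_square(a, b):
    # placeholder for a real N x N Strassen implementation
    return a @ b

def blocked_matmul(A, B, N):
    # multiply A (p*N x q*N) by B (q*N x r*N) using only N x N square multiplications
    p, q, r = A.shape[0] // N, A.shape[1] // N, B.shape[1] // N
    C = np.zeros((p * N, r * N))
    for i in range(p):
        for j in range(r):
            block = np.zeros((N, N))
            for k in range(q):   # sum of square block products
                block += strassen_square(A[i*N:(i+1)*N, k*N:(k+1)*N],
                                         B[k*N:(k+1)*N, j*N:(j+1)*N])
            C[i*N:(i+1)*N, j*N:(j+1)*N] = block
    return C

# e.g. a [2N x N] * [N x 10N] product becomes 20 independent N x N multiplications
N = 64
A = np.random.rand(2 * N, N)
B = np.random.rand(N, 10 * N)
assert np.allclose(blocked_matmul(A, B, N), A @ B)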
I was trying to analyse the Karatsuba algorithm for multiplying an m-digit and an n-digit integer. As I understand it, it is most efficient if the integers are divided into m/2- and n/2-digit subproblems. My issues are as follows:
Can we apply the Gauss trick in this case, and do we need some adjustments to apply it? Padding the smaller integer to match the size may be a solution, but will it affect my running time?
In the Karatsuba algorithm, applying the Gauss trick, i.e. (a+b)*(c+d), requires T(n/2 + 1), since the size of the subproblem may increase by one digit. Can I somehow limit the size of the subproblems to strictly n/2?
Actually, the Karatsuba recurrence is T(n) = 3 * T(n / 2) + O(n); solving it with the Master method gives O(n^(log2 3)) ≈ O(n^1.585),
where n is the maximum of the two numbers' lengths.
And even if one subproblem is bigger by one digit, we don't care, since that is just a small constant.
Also, I'm not sure which Gauss trick you're talking about. Usually it is the trick where we split the first number into two parts (a, b) and the second number into two parts (c, d).
So (10 ^ (n / 2) * a + b) * (10 ^ (n / 2) * c + d) = 10^n * a * c + 10 ^ (n / 2) * a * d + 10 ^ (n / 2) * b * c + b * d;
So we need to calculate ac, bd and ad + bc = (a + b) * (c + d) - ac - bd, i.e. only 3 products of smaller size. And yes, we can apply it in the Karatsuba algorithm.
For more information, check this lecture from Stanford University: http://openclassroom.stanford.edu/MainFolder/VideoPage.php?course=IntroToAlgorithms&video=CS161L1P9
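Here is a minimal Python sketch of Karatsuba with the Gauss trick (my own illustration; it splits on decimal digits, assumes non-negative integers, and makes no attempt at the padding optimizations discussed above):

def karatsuba(x, y):
    # multiply non-negative integers x and y using three recursive products
    if x < 10 or y < 10:
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    base = 10 ** half
    a, b = divmod(x, base)          # x = a*base + b
    c, d = divmod(y, base)          # y = c*base + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # Gauss trick: ad + bc = (a+b)*(c+d) - ac - bd, one product instead of two
    ad_plus_bc = karatsuba(a + b, c + d) - ac - bd
    return ac * base * base + ad_plus_bc * base + bd

assert karatsuba(123456789, 987654321) == 123456789 * 987654321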
I was curious if there was a good way to do this. My current code is something like:
def factorialMod(n, modulus):
    ans = 1
    for i in range(1, n+1):
        ans = ans * i % modulus
    return ans % modulus
But it seems quite slow!
I also can't calculate n! and then apply the prime modulus because sometimes n is so large that n! is just not feasible to calculate explicitly.
I also came across http://en.wikipedia.org/wiki/Stirling%27s_approximation and wonder if this can be used at all here in some way?
Or, how might I create a recursive, memoized function in C++?
n can be arbitrarily large
Well, n can't be arbitrarily large - if n >= m, then n! ≡ 0 (mod m) (because m is one of the factors, by the definition of factorial).
Assuming n << m and you need an exact value, your algorithm can't get any faster, to my knowledge. However, if n > m/2, you can use the following identity (Wilson's theorem - thanks @Daniel Fischer!) to cap the number of multiplications at about m - n:
(m-1)! ≡ -1 (mod m)
1 * 2 * 3 * ... * (n-1) * n * (n+1) * ... * (m-2) * (m-1) ≡ -1 (mod m)
n! * (n+1) * ... * (m-2) * (m-1) ≡ -1 (mod m)
n! ≡ -[(n+1) * ... * (m-2) * (m-1)]^(-1) (mod m)
This gives us a simple way to calculate n! (mod m) in m-n-1 multiplications, plus a modular inverse:
def modinv(a, m):
    # modular inverse of a mod m (m prime); in Python 3.8+ this is just pow(a, -1, m)
    return pow(a, -1, m)

def factorialMod(n, modulus):
    ans = 1
    if n <= modulus//2:
        # calculate the factorial normally (right argument of range() is exclusive)
        for i in range(1, n+1):
            ans = (ans * i) % modulus
    else:
        # Fancypants method for large n
        for i in range(n+1, modulus):
            ans = (ans * i) % modulus
        ans = modinv(ans, modulus)
        ans = -1*ans + modulus
    return ans % modulus
We can rephrase the above equation in another way, which may or may not perform slightly faster. Using the identity a ≡ a - m (mod m) on each factor, we can rewrite the equation as
n! ≡ -[(n+1) * ... * (m-2) * (m-1)]^(-1) (mod m)
n! ≡ -[(n+1-m) * ... * (m-2-m) * (m-1-m)]^(-1) (mod m)
(reverse the order of terms)
n! ≡ -[(-1) * (-2) * ... * (-(m-n-2)) * (-(m-n-1))]^(-1) (mod m)
n! ≡ -[(1) * (2) * ... * (m-n-2) * (m-n-1) * (-1)^(m-n-1)]^(-1) (mod m)
n! ≡ [(m-n-1)!]^(-1) * (-1)^(m-n) (mod m)
This can be written in Python as follows:
def factorialMod(n, modulus):
    ans = 1
    if n <= modulus//2:
        # calculate the factorial normally (right argument of range() is exclusive)
        for i in range(1, n+1):
            ans = (ans * i) % modulus
    else:
        # Fancypants method for large n
        for i in range(1, modulus-n):
            ans = (ans * i) % modulus
        ans = modinv(ans, modulus)
        # Since m is an odd prime, (-1)^(m-n) = -1 if n is even, +1 if n is odd
        if n % 2 == 0:
            ans = -1*ans + modulus
    return ans % modulus
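As a quick sanity check of both branches (my own addition, relying on the modinv helper defined above):

import math

m = 101   # a small odd prime
for n in range(1, m):
    assert factorialMod(n, m) == math.factorial(n) % m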
If you don't need an exact value, life gets a bit easier - you can use Stirling's approximation to calculate an approximate value in O(log n) time (using exponentiation by squaring).
Finally, I should mention that if this is time-critical and you're using Python, try switching to C++. From personal experience, you should expect about an order-of-magnitude increase in speed or more, simply because this is exactly the sort of CPU-bound tight-loop that natively-compiled code excels at (also, for whatever reason, GMP seems much more finely-tuned than Python's Bignum).
Expanding my comment to an answer:
Yes, there are more efficient ways to do this. But they are extremely messy.
So unless you really need that extra performance, I don't suggest trying to implement these.
The key is to note that the modulus (which is essentially a division) is going to be the bottleneck operation. Fortunately, there are some very fast algorithms that allow you to perform modulus over the same number many times.
Division by Invariant Integers using Multiplication
Montgomery Reduction
These methods are fast because they essentially eliminate the modulus.
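As a rough sketch of the first idea (dividing by an invariant integer using multiplication), here is a classic Barrett-style reduction. This is my own illustration, not the paper's exact algorithm, and in CPython the built-in % is already implemented in C, so the point is only to show how the per-iteration division can be replaced by multiplications and shifts:

def barrett_setup(m):
    # precompute the shift k and factor mu for reducing values < m*m modulo m
    k = m.bit_length()
    mu = (1 << (2 * k)) // m      # the only true division, done once per modulus
    return k, mu

def barrett_reduce(x, m, k, mu):
    # return x mod m for 0 <= x < m*m, using only multiplications and shifts
    q = (x * mu) >> (2 * k)       # q is an under-estimate of x // m
    r = x - q * m
    while r >= m:                 # the estimate is low by at most a couple of m's
        r -= m
    return r

m = 1_000_000_007
k, mu = barrett_setup(m)
x = 123456789 * 987654321
assert barrett_reduce(x, m, k, mu) == x % m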
Those methods alone should give you a moderate speedup. To be truly efficient, you may need to unroll the loop to allow for better IPC:
Something like this:
ans0 = 1
ans1 = 1
for i in range(1, (n+1) // 2):
    ans0 = ans0 * (2*i + 0) % modulus
    ans1 = ans1 * (2*i + 1) % modulus
return ans0 * ans1 % modulus
but taking into account an odd number of iterations and combining it with one of the methods I linked to above.
Some may argue that loop-unrolling should be left to the compiler. I will counter-argue that compilers are currently not smart enough to unroll this particular loop. Have a closer look and you will see why.
Note that although my answer is language-agnostic, it is meant primarily for C or C++.
n! mod m can be computed in O(n^(1/2 + ε)) operations instead of the naive O(n). This requires use of FFT polynomial multiplication, and is only worthwhile for very large n, e.g. n > 10^4.
An outline of the algorithm and some timings can be seen here: http://fredrikj.net/blog/2012/03/factorials-mod-n-and-wilsons-theorem/
If we want to calculate M = a*(a+1) * ... * (b-1) * b (mod p), we can use the following approach, assuming we can add, subtract and multiply quickly (mod p), and get a running time complexity of O(sqrt(b-a) * polylog(b-a)).
For simplicity, assume (b-a+1) = k^2 is a perfect square. Now we can divide our product into k parts, i.e. M = [a*..*(a+k-1)] *...* [(b-k+1)*..*b]. Each factor in this product is of the form p(x) = x*..*(x+k-1), for an appropriate x.
By using a fast polynomial multiplication algorithm, such as the Schönhage–Strassen algorithm, in a divide & conquer manner, one can find the coefficients of the polynomial p(x) in O(k * polylog(k)). There is also an algorithm for evaluating the same degree-k polynomial at k points in O(k * polylog(k)), which means we can calculate p(a), p(a+k), ..., p(b-k+1) fast.
This algorithm for evaluating a polynomial at many points is described in the book "Prime Numbers" by C. Pomerance and R. Crandall. Once you have these k values, you can multiply them together in O(k) and get the desired value.
Note that all of our operations are taken (mod p).
The exact running time is O(sqrt(b-a) * log(b-a)^2 * log(log(b-a))).
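Here is a small sketch of the blocking structure only (my own illustration; it builds the coefficients of p(x) naively and evaluates each block with Horner's rule, so it does not reach the stated bound, which requires fast polynomial multiplication and fast multipoint evaluation):

import math

def poly_mul(f, g, p):
    # naive polynomial multiplication mod p; the real algorithm needs an FFT-based version
    res = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            res[i + j] = (res[i + j] + fi * gj) % p
    return res

def product_range_mod(a, b, p):
    # return a*(a+1)*...*b (mod p), split into blocks of size ~sqrt(b-a+1)
    k = max(1, math.isqrt(b - a + 1))
    poly = [1]                        # coefficients of p(x) = x*(x+1)*...*(x+k-1), low degree first
    for j in range(k):
        poly = poly_mul(poly, [j % p, 1], p)
    result, x = 1, a
    while x + k - 1 <= b:
        val = 0
        for c in reversed(poly):      # Horner evaluation of p(x) at this block's start
            val = (val * x + c) % p
        result = (result * val) % p
        x += k
    while x <= b:                     # leftover factors that do not fill a whole block
        result = (result * x) % p
        x += 1
    return result

assert product_range_mod(1, 20, 1_000_000_007) == math.factorial(20) % 1_000_000_007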
Expanding on my comment, this takes about 50% of the time for all n in [100, 100007] where m=(117 | 1117):
Function facmod(n As Integer, m As Integer) As Integer
    Dim f As Integer = 1
    For i As Integer = 2 To n
        f = f * i
        If f > m Then
            f = f Mod m
        End If
    Next
    Return f
End Function
I found the following function on Quora, with f(n, m) = n! mod m:
function f(n, m: int64): int64;
begin
    if n <= 1 then f := 1
    else f := ((n mod m) * (f(n-1, m) mod m)) mod m;
end;
This probably beats using a time-consuming loop and multiplying large numbers stored in strings. Also, it is applicable to any integer modulus m.
The link where I found this function : https://www.quora.com/How-do-you-calculate-n-mod-m-where-n-is-in-the-1000s-and-m-is-a-very-large-prime-number-eg-n-1000-m-10-9+7
If n = m - 1 for a prime m, then by Wilson's theorem (http://en.wikipedia.org/wiki/Wilson's_theorem), n! mod m = m - 1.
Also, as has already been pointed out, n! mod m = 0 if n >= m.
Assuming that the "mod" operator of your chosen platform is sufficiently fast, you're bounded primarily by the speed at which you can calculate n! and the space you have available to compute it in.
Then it's essentially a 2-step operation:
Calculate n! (there are lots of fast algorithms so I won't repeat any here)
Take the mod of the result
There's no need to overcomplicate things, especially if speed is the critical component. In general, do as few operations inside the loop as you can.
If you need to calculate n! mod m repeatedly, then you may want to memoize the values coming out of the function doing the calculations. As always, it's the classic space/time tradeoff, but lookup tables are very fast.
Lastly, you can combine memoization with recursion (and trampolines as well if needed) to get things really fast.
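A minimal sketch of that idea in Python (my own illustration; functools.lru_cache caches results across repeated calls, and the recursion depth limits how large n can get before the function should be rewritten iteratively):

import sys
from functools import lru_cache

sys.setrecursionlimit(20_000)   # the recursive formulation is depth-limited

@lru_cache(maxsize=None)
def fact_mod(n, m):
    # n! mod m; repeated queries with the same m reuse previously computed values
    if n <= 1:
        return 1 % m
    return (n * fact_mod(n - 1, m)) % m

print(fact_mod(1000, 10**9 + 7))   # a later call like fact_mod(1001, 10**9 + 7) extends the cache by one step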
How large a system is it reasonable to attempt to do a linear regression on?
Specifically: I have a system with ~300K sample points and ~1200 linear terms. Is this computationally feasible?
The linear regression is computed as (X'X)^-1 X'Y.
If X is an (n x k) matrix:
(X' X) takes O(n*k^2) time and produces a (k x k) matrix
The matrix inversion of a (k x k) matrix takes O(k^3) time
(X' Y) takes O(n*k) time and produces a (k x 1) vector
The final multiplication of the (k x k) matrix and the (k x 1) vector takes O(k^2) time
So the Big-O running time is O(k^2*(n + k)).
See also: http://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations#Matrix_algebra
If you get fancy it looks like you can get the time down to O(k^2*(n+k^0.376)) with the Coppersmith–Winograd algorithm.
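A short numpy sketch of those steps (my own illustration, scaled down from the 300K x 1200 system in the question; np.linalg.solve is used instead of forming an explicit inverse):

import numpy as np

n, k = 30_000, 120                # scaled-down stand-in for 300K observations x 1200 terms
rng = np.random.default_rng(0)
X = rng.random((n, k))
y = rng.random(n)

XtX = X.T @ X                     # O(n*k^2), the dominant term
Xty = X.T @ y                     # O(n*k)
beta = np.linalg.solve(XtX, Xty)  # O(k^3) via an LU-style factorization

# sanity check against the library least-squares solver
assert np.allclose(beta, np.linalg.lstsq(X, y, rcond=None)[0])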
You can express this as a matrix equation: matrix times coefficient vector equals right-hand-side vector, where the matrix is 300K rows by 1200 columns, the coefficient vector is 1200x1, and the right-hand-side vector is 300Kx1.
If you multiply both sides by the transpose of the matrix, you get a 1200x1200 system of normal equations for the unknowns. You can use LU decomposition or any other algorithm you like to solve for the coefficients. (This is what least squares is doing.)
So the Big-O behavior is something like O(m*n^2), where m = 300K and n = 1200. You'd account for the transpose, the matrix multiplication, the LU decomposition, and the forward/back substitution to get the coefficients.
The linear regression is computed as (X'X)^-1 X'y.
As far as I learned, y is a vector of results (or in other words: dependent variables).
Therefore, if X is an (n × m) matrix and y is an (n × 1) matrix:
Transposing an (n × m) matrix takes O(n⋅m) time and produces an (m × n) matrix
(X' X) takes O(n⋅m²) time and produces an (m × m) matrix
The matrix inversion of an (m × m) matrix takes O(m³) time
(X' y) takes O(n⋅m) time and produces an (m × 1) matrix
The final matrix multiplication of an (m × m) and an (m × 1) matrix takes O(m²) time
So the Big-O running time is O(n⋅m + n⋅m² + m³ + n⋅m + m²).
Now, we know that:
m² ≤ m³
n⋅m ≤ n⋅m²
so asymptotically, the actual Big-O running time is O(n⋅m² + m³) = O(m²(n + m)).
And that's what we have from
http://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations#Matrix_algebra
But, we know that there's a significant difference between the case n → ∞ and m → ∞.
https://en.wikipedia.org/wiki/Big_O_notation#Multiple_variables
So which one should we choose? Obviously it's the number of observations which is more likely to grow, rather than the number of attributes.
So my conclusion is that if we assume the number of attributes remains constant, we can ignore the m terms, and that's a relief, because then the time complexity of multivariate linear regression is merely linear in the number of observations, O(n). On the other hand, we should expect the computing time to blow up when the number of attributes increases substantially.
The closed-form solution for linear regression is computed as follows.
The derivative of RSS(W) with respect to W is
-2H^t (y - HW)
So, we solve for
-2H^t (y - HW) = 0
Then, the W value is
W = (H^t H)^-1 H^t y
where:
W is the vector of estimated weights,
H is the N × D feature matrix, where N is the number of observations and D is the number of features,
y is the vector of actual values.
Then, the complexity of
H^t H is O(N * D^2)
The complexity of the matrix inversion is O(D^3)
So, the complexity of
(H^t H)^-1 is O(N * D^2 + D^3)
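Plugging the sizes from the original question (300K observations, 1200 terms) into these counts gives a rough feasibility estimate; this is my own back-of-the-envelope addition:

N, D = 300_000, 1_200

normal_eq = N * D**2          # forming H^t H: ~4.3e11 multiply-adds
inversion = D**3              # factorizing/inverting the D x D matrix: ~1.7e9
memory_gb = N * D * 8 / 1e9   # dense float64 storage for H: ~2.9 GB

print(f"{normal_eq:.1e} ops, {inversion:.1e} ops, {memory_gb:.1f} GB")

At the tens of GFLOP/s a modern BLAS sustains, that is on the order of seconds to a minute of compute, so the 300K x 1200 system in the question is computationally feasible.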