I want to overload Times and Plus for matrix multiplication in Mathematica; for example, let Times be BitAnd and Plus be BitOr, then do the matrix multiplication.
Is there any way to do this simply, without writing my own matrix multiplication?
Thanks.
The question is what you want to alter - the behavior of Times and Plus, or of Dot. Generally, the Block trick is often the simplest way. In this case, since Dot does not call the high-level Plus or Times, you can do:
mat1 = {{1,2},{3,4}};
mat2 = {{5,6},{7,8}};
Block[{Dot = Inner[BitAnd,#1,#2,BitOr]&},
mat1.mat2]
{{3,0},{5,2}}
But note that this is effectively re-implementing the matrix multiplication (using Inner) - there is no other way, since Dot is implemented internally and does not use Plus or Times.
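For readers outside Mathematica, here is a minimal sketch in Python of the same generalized inner product that Inner[BitAnd, #1, #2, BitOr] & computes (bit_dot is a made-up name, just for illustration):

import operator
from functools import reduce

def bit_dot(m1, m2):
    # ordinary matrix product with Times -> BitAnd (&) and Plus -> BitOr (|)
    return [[reduce(operator.or_, (a & b for a, b in zip(row, col)))
             for col in zip(*m2)]
            for row in m1]

print(bit_dot([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[3, 0], [5, 2]]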
It might not be evident, but Prolog also offers arrays out of the box. A Prolog compound has a functor and a number of arguments. This means we could represent an array such as:
[[1,2],[3,4]]
Replacing the Prolog lists by the following Prolog compounds:
matrice(vector(1,2), vector(3,4))
The advantage would be faster element access from an integer index. Can this representation be used to realize a matrix multiplication?
There is yet another approach, as implemented in R (the statistical environment). The dimensions of the array and the values are kept separately. So your square could also be represented as:
array(dims(2, 2), v(1,2,3,4))
This approach has some (questionable) benefits and drawbacks. You can start reading here, if you are at all interested: https://stat.ethz.ch/R-manual/R-devel/library/base/html/dim.html
To your question: yes, you can implement matrix multiplication, regardless of how you decide to represent the matrix. It would be interesting to see how the two approaches (array of arrays vs. one flat array plus index arithmetic from the dimensions) compare in terms of efficiency.
What algorithm do you want to use for the matrix multiplication? Is it any of the ones described here: https://en.wikipedia.org/wiki/Matrix_multiplication_algorithm?
EDIT: do you want to allow the client code to provide the product and sum operations? Do you want to allow specialization of the values? For example, if you want to use matrix multiplication to find the transitive closure of a graph, you could represent the boolean square matrix as an unbounded integer. This would at least keep the matrix itself quite small.
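As a sketch of a variant of that last idea (the names here are mine, not from any library), boolean matrix multiplication in Python with each row packed into one unbounded integer could look like this:

def bool_mat_mul(a_rows, b_rows, n):
    # a_rows[i] is an integer whose bit j is a[i][j]; likewise b_rows.
    # Row i of the product ORs together every row j of b for which
    # bit j of a's row i is set: (a.b)[i][k] = OR_j (a[i][j] AND b[j][k]).
    result = []
    for row in a_rows:
        acc = 0
        for j in range(n):
            if row >> j & 1:
                acc |= b_rows[j]
        result.append(acc)
    return result

# adjacency matrix of the graph 0 -> 1, 1 -> 1, rows as bitmasks
a = [0b10, 0b10]
print([bin(r) for r in bool_mat_mul(a, a, 2)])   # ['0b10', '0b10']

Squaring the matrix this way is one step towards the transitive closure of the graph.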
I'm new to OpenCL programming, and I have to multiply two complex matrices, but I don't know how to deal with complex matrices in OpenCL. Can anyone help? I have already implemented matrix multiplication with real numbers.
One way, though probably not the most efficient, would be to regard your complex matrix, Z say, as two real matrices: X (the real parts) and Y (the imaginary parts), i.e.
X[i,j] = Real(Z[i,j])
Y[i,j] = Imag(Z[i,j])
If you have another complex matrix, W say, split as above into U and V, then the product is:
Z*W = (X*U - Y*V) + i*(X*V + Y*U)
where on the right-hand side we have only real matrices, with real matrix multiplication and addition.
In terms of multiplies and adds this is the same amount of computation as doing the complex multiplications and additions (of the elements) directly. The inefficiency comes if you are given, and must return, arrays of complex numbers: then you have to split the matrices you are going to multiply into real ones, as above, and combine the product back into a complex array.
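To make the split concrete, here is a small NumPy sketch (not OpenCL; it only checks the algebra above) comparing the four real products against the direct complex product:

import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
W = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

X, Y = Z.real, Z.imag    # split Z into real and imaginary parts
U, V = W.real, W.imag    # same for W

re = X @ U - Y @ V       # real part of Z*W
im = X @ V + Y @ U       # imaginary part of Z*W

print(np.allclose(re + 1j * im, Z @ W))   # True

In OpenCL you could then reuse your existing real-matrix kernel for each of the four products.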
I have my beautiful triangular n x n matrix, say L (for lower triangular), and I want to solve a system like
LX=B
where B and X are n x k matrices (that is, I want to solve a triangular linear system with multiple right-hand sides). Additionally, my triangular matrix is stored in packed format, i.e. I only store the lower triangular part. I am using BLAS and LAPACK, but I have realised that there is no routine for exactly this problem, although there are functions that solve similar ones:
stpsv(): Takes a triangular matrix in packed format and solves for a single right-hand side.
strsm(): Takes a triangular matrix in dense format and solves for multiple right-hand sides.
What I really need is a combination of the two: a function accepting a packed triangular matrix, as in stpsv(), and multiple right-hand sides, as in strsm(). But it seems that no such function is readily available.
So my questions are:
Is there any function that can accept a packed triangular matrix and solve for multiple right-hand sides?
If the answer is NO, what would be more efficient: calling stpsv() in a loop, once for every column of B, or creating a dense matrix from L (storing all those useless zeros) and then calling strsm()? Or maybe I am missing a cleverer way of doing all this.
Packed storage implies BLAS level-2 routines. BLAS level-3 functions are more efficient for solving linear systems because they use blocked algorithms, but they need full (dense) storage; if you fall back to level-2 calls, you are essentially solving one vector at a time, so there is not much to be gained.
Note also that the BLAS level-2 routines do not perform any conditioning checks: they are optimized for raw speed, since a triangular solve with a single right-hand side is a direct backward substitution.
For multiple RHS you can convert your matrix via, say, stpttr, and then use strtrs.
Yes, there is a function to solve Ax=B for a packed triangular matrix A and multiple right-hand sides B: stptrs() from LAPACK. There are other routines for triangular packed matrices too, all featuring tp in their names, following LAPACK's naming conventions.
However, looking at the source reveals that this function simply calls stpsv() from BLAS in a loop, once for each right-hand side. So it does exactly what you suggested!
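For illustration, here is roughly what the convert-then-solve route looks like in Python with SciPy, using the column-major 'L' packing that LAPACK expects; the unpacking step plays the role of stpttr, and solve_triangular plays the role of strtrs:

import numpy as np
from scipy.linalg import solve_triangular

def unpack_lower(ap, n):
    # LAPACK packed 'L' storage: column j contributes A[j:, j]
    L = np.zeros((n, n))
    pos = 0
    for j in range(n):
        L[j:, j] = ap[pos:pos + n - j]
        pos += n - j
    return L

n, k = 4, 3
ap = np.arange(1.0, n * (n + 1) // 2 + 1)   # packed lower-triangular entries
L = unpack_lower(ap, n)
B = np.ones((n, k))                          # multiple right-hand sides
X = solve_triangular(L, B, lower=True)       # one dense triangular solve
print(np.allclose(L @ X, B))                 # True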
My function contains many element-wise matrix multiplications, all independent of each other. Is there a way to compute them in parallel?
Each one is a very simple operation, but 70% of my run time is spent in these parts of the code, because the function is invoked millions of times.
function [r1, r2, r3] = backward(A, B, C, D, E, F, r1, r2, r3)
    r1 = A.*B;   % element-wise product
    r2 = C.*D;   % element-wise product
    r3 = E*F;    % ordinary matrix product
end

for i = 1:300
    [r1, r2, r3] = backward(A, B, C, D, E, F, r1, r2, r3);
end
EDIT: After writing the answer, I observed that you are not multiplying all the input matrices by means of matrix multiplication. Some of them are elementwise multiplications. If this is what you intended, the following answer won't apply.
You are looking for the optimal order in which to compute a product of multiple matrices. This problem was studied long ago, and there is a classic dynamic programming algorithm that decides the optimal order.
For example, if A is of size 10000 x 1, B is of size 1 x 10000 and C is of size 10000 x 1, it is a lot more efficient to compute A*B*C as A*(B*C) instead of (A*B)*C: computing (A*B) alone takes 10^8 scalar multiplications, while B*C takes only 10^4. The technique is valid because matrix multiplication is associative. You can read more about this on Wikipedia.
If you want a good-quality MATLAB implementation of this, you can find it here. It takes the matrices as input and returns the product. This implementation seems to do a decent job of finding the optimal order for up to 10 matrices.
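For reference, here is a minimal Python sketch of the same dynamic program; dims[i] and dims[i+1] are the row and column counts of matrix i:

def matrix_chain_order(dims):
    # cost[i][j] = minimal number of scalar multiplications
    # needed to compute the product of matrices i..j
    n = len(dims) - 1
    cost = [[0] * n for _ in range(n)]
    split = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):          # length of the sub-chain
        for i in range(n - length + 1):
            j = i + length - 1
            cost[i][j] = float('inf')
            for s in range(i, j):           # try every split point
                c = (cost[i][s] + cost[s + 1][j]
                     + dims[i] * dims[s + 1] * dims[j + 1])
                if c < cost[i][j]:
                    cost[i][j], split[i][j] = c, s
    return cost, split

# the A (10000x1), B (1x10000), C (10000x1) example from above:
cost, split = matrix_chain_order([10000, 1, 10000, 1])
print(cost[0][2])   # 20000, versus 200000000 for (A*B)*C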
First thing to note: the last three input variables you provide are not being used. I don't think this matters much, but it would be better to clean it up.
Now the actual answer:
MATLAB is all about matrix operations, and these are highly optimized. Even in C++ you should not expect a significant speedup (and be wary of a slowdown). As such, given the information provided in the question, the conclusion is that you cannot do much to speed up independent matrix calculations.
That being said: If you could reduce the number of sequential function calls, there may be something to gain.
It is hard to say how to do this in general, but here are two ideas:
If you call the function in a for loop, use a parfor loop instead (assuming you have the Parallel Computing Toolbox; otherwise, manually break up the loop and open four MATLAB instances to parallelize it, which can be automated if needed).
See whether you really need this many function calls on small matrix operations. Improving the algorithm could offer a huge gain, but even otherwise you may be able to combine multiple matrices (multiple versions of A with multiple versions of B, for instance) and do one big multiplication rather than a hundred tiny ones, as sketched below.
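As an illustration of that second idea (in NumPy terms; in MATLAB the analogue would be 3-D arrays with vectorized element-wise operations): stack the small matrices along a leading dimension and issue one call instead of hundreds. The sizes 300 and 8 below are made up; the point is that one call over a stacked array amortizes the per-call overhead.

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((300, 8, 8))   # 300 small A matrices in one array
B = rng.standard_normal((300, 8, 8))   # 300 matching B matrices

R1 = A * B        # one vectorized element-wise product for all 300 pairs

E = rng.standard_normal((300, 8, 8))
F = rng.standard_normal((300, 8, 8))
R3 = E @ F        # batched matrix product: matmul maps over the first axis

print(R1.shape, R3.shape)   # (300, 8, 8) (300, 8, 8)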
I want to do some calculations with matrices of arbitrary size. A simple example: take two matrices, N x M and M x K, with arbitrary elements, and see each element of the product expressed as a sum.
But I can't find a way to do such symbolic calculations without specifying the matrix size as an integer: matrix() wants an integer, and makelist() wants an integer.
Is there a way to do things like this in Maxima, or in any other CAS?
Unfortunately, Maxima does not know about arbitrary-size matrices, and I don't see an easy way to implement it.
The only way that I see is to define a new kind of expression and provide simplification rules for operations on it. E.g. (and this is just a sketch of a possible solution): use defstruct to define a structure comprising the size and a formula for a typical element, and define a simplification rule for "." (noncommutative multiplication) which creates a new expression whose typical element is a summation.
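On the "or any other CAS" part of the question: SymPy, for instance, already supports this through MatrixSymbol, whose dimensions may be symbols instead of integers, and indexing a product yields the element as a symbolic sum:

from sympy import MatrixSymbol, symbols

n, m, k = symbols('n m k', integer=True, positive=True)
A = MatrixSymbol('A', n, m)
B = MatrixSymbol('B', m, k)

# element (0, 0) of the product prints as a Sum over a dummy index
# running from 0 to m - 1, with no concrete sizes ever specified
print((A * B)[0, 0])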