Julia package for managing distributed SpGEMM - multiprocessing

Does anyone know a performant package in Julia to compute sparse matrix-matrix multiplication (SpGEMM) on a distributed cluster (MPI)?
I'm not sure if Elemental.jl is able to manage such computations.
I'm looking for something simple (such as COSMA.jl for dense systems); any help would be welcome...
Thanks

Elemental does appear to be able to handle this. In particular, using Elemental.jl you should be able to create a distributed sparse array with Elemental.DistSparseMatrix, which, AFAICT, you should then be able to multiply with mul! or similar.
This does not appear to be extensively documented, and in particular filling the DistSparseMatrix with the desired values does not appear to be trivial, but some examples appear in https://github.com/JuliaParallel/Elemental.jl/blob/master/test/lav.jl, among a few other places in the package source.
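For concreteness, here is a rough sketch of the reserve/queue/process fill pattern from that test file. The function names (zeros!, localHeight, globalRow, queueLocalUpdate, processQueues) are taken from test/lav.jl as best I can tell; treat them as assumptions and check them against the package source for your version of Elemental.jl.

    using Elemental

    # Build a distributed sparse tridiagonal matrix; run under MPI,
    # e.g. `mpiexec -n 4 julia thisfile.jl`.
    n = 1000
    A = Elemental.DistSparseMatrix(Float64)
    Elemental.zeros!(A, n, n)                 # set the global dimensions
    localHeight = Elemental.localHeight(A)    # number of rows owned by this rank
    Elemental.reserve(A, 3 * localHeight)     # reserve space for the queued entries
    for iLoc in 1:localHeight
        i = Elemental.globalRow(A, iLoc)      # global index of this local row
        Elemental.queueLocalUpdate(A, iLoc, i, 2.0)
        i > 1 && Elemental.queueLocalUpdate(A, iLoc, i - 1, -1.0)
        i < n && Elemental.queueLocalUpdate(A, iLoc, i + 1, -1.0)
    end
    Elemental.processQueues(A)                # communicate and assemble

Once assembled, multiplying two such matrices should, per the above, go through mul! or a similar routine, though I haven't benchmarked the SpGEMM path myself.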
Beyond this, while there are of course packages such as DistributedArrays.jl and the SparseArrays stdlib, there is not to my knowledge any sparse, distributed array package in pure Julia yet, so a wrapper package like Elemental.jl is going to be your best bet.
Other packages that should be generically capable of sparse distributed matrix multiplication include PETSc and Trilinos, both of which have Julia wrappers (the latter appears unmaintained, though see also the presentation on it). With PETSc.jl, it appears that you should be able to make a "MATSEQ" sparse matrix by passing a Julia SparseMatrixCSC to PETSc.Mat.
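A minimal sketch of that PETSc.jl route, assuming the constructor behaves as just described (I have not verified this against a current PETSc.jl; check its README for the exact API of the version you install):

    using SparseArrays, PETSc

    S = sprand(100, 100, 0.05)   # an ordinary Julia sparse matrix
    A = PETSc.Mat(S)             # assumed per the answer above: wraps S as a
                                 # sequential ("MATSEQ") AIJ matrix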

Related

Structure from Motion or SLAM for Windows?

Are there any libraries that can be used on Windows for SfM or SLAM?
This will be in Python, btw.
So far, everything I am seeing is Linux-only.
Since you asked about SfM, I assume you are looking for visual SLAM solutions. These are computationally expensive, because you are basically dealing with a lot of iterative minimizations over large parameter spaces. Because of that, high-level languages are poorly suited to the task. So I can recommend one of two things (depending on the precision you need):
1) Don't use SfM or SLAM, but just some simple visual odometry Python package (there are quite a few on GitHub). If you are not familiar with the term: it is essentially local pose computation without the global optimizations used in SLAM or SfM. So you might get locally decent results, but forget about globally coherent trajectories.
2) Use one of the freely available state-of-the-art libraries, such as ORBSLAM_2_windows, and write your own Python wrappers.

Performing many small matrix operations in parallel in OpenCL

I have a problem that requires me to do eigendecomposition and matrix multiplication of many (~4k) small (~3x3) square Hermitian matrices. In particular, I need each work item to perform eigendecomposition of one such matrix, and then perform two matrix multiplications. Thus, the work that each thread has to do is rather minimal, and the full job should be highly parallelizable.
Unfortunately, it seems all the available OpenCL LAPACKs are for delegating operations on large matrices to the GPU rather than for doing smaller linear algebra operations inside an OpenCL kernel. As I'd rather not implement matrix multiplication and eigendecomposition for arbitrarily sized matrices in OpenCL myself, I was hoping someone here might know of a suitable library for the job.
I'm aware that OpenCL might be getting built-in matrix operations at some point since the matrix type is reserved, but that is not really of much use right now. There is a similar question here from 2011, but it pretty much just says to roll your own, so I'm hoping the situation has improved since then.
In general, my experience with libraries like LAPACK, fftw, cuFFT, etc. is that when you want to do many really small problems like this, you are better off writing your own for performance. Those libraries are usually written for generality, so you can often beat their performance for specific small problems, especially if you can use unique properties of your particular problem.
I realize you don't want to hear "roll your own" but for this type of problem it is really the best thing to do IMO. You might find a library to do this, but considering the code that you really want (for performance) will not generalize, I doubt it exists. You'll be looking specifically for code to find the eigenvalues of 3x3 matrices. That's less of a library and more of a random code snippet with a suitable license that you can manipulate to take advantage of your specific problem.
In this specific case, you can find the eigenvalues of a 3x3 matrix with the textbook method using the characteristic polynomial. Remember that there is a relatively simple closed form solution for cubic equations: http://en.wikipedia.org/wiki/Cubic_function#General_formula_for_roots.
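For reference, the cubic in question is the characteristic polynomial, which for a 3x3 matrix A can be written in terms of traces and the determinant:

    \det(A - \lambda I) = -\lambda^3 + \operatorname{tr}(A)\,\lambda^2 - \tfrac{1}{2}\left(\operatorname{tr}(A)^2 - \operatorname{tr}(A^2)\right)\lambda + \det(A)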
While I think it is very likely that this approach would be much faster than iterative methods, it would be wise to verify that if performance is an issue.
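To illustrate how little code the analytic route needs, here is a sketch for the real symmetric 3x3 case (the Hermitian case is analogous), written in Julia purely for readability; the same arithmetic ports line by line into an OpenCL kernel. This is the standard trigonometric solution of the characteristic cubic, not code from any library:

    using LinearAlgebra

    # Eigenvalues of a real symmetric 3x3 matrix via the closed-form
    # (trigonometric) solution of its characteristic cubic.
    function eigvals3x3(A::AbstractMatrix{<:Real})
        q  = tr(A) / 3
        p1 = A[1,2]^2 + A[1,3]^2 + A[2,3]^2               # off-diagonal magnitude
        p1 == 0 && return sort([A[1,1], A[2,2], A[3,3]])  # already diagonal
        p2 = (A[1,1] - q)^2 + (A[2,2] - q)^2 + (A[3,3] - q)^2 + 2 * p1
        p  = sqrt(p2 / 6)
        B  = (A - q * I) / p
        r  = clamp(det(B) / 2, -1.0, 1.0)   # clamp guards against roundoff
        ϕ  = acos(r) / 3
        λ1 = q + 2 * p * cos(ϕ)             # largest eigenvalue
        λ3 = q + 2 * p * cos(ϕ + 2π / 3)    # smallest eigenvalue
        λ2 = 3 * q - λ1 - λ3                # the trace fixes the third
        return [λ3, λ2, λ1]                 # ascending order
    end

    A = [2.0 1.0 0.0; 1.0 3.0 1.0; 0.0 1.0 2.0]
    @assert eigvals3x3(A) ≈ eigvals(Symmetric(A))  # matches the LAPACK answer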

Is there a fast way to use Clojure vectors as matrices?

I am trying to use Clojure to process images, and I would like to represent images using Clojure data structures. Basically, my first approach was to use a vector of vectors and mapv to operate on each pixel value and return a new image representation in the same data structure. However, some basic operations are taking too much time.
Using the JVisualVM profiler, I got the results shown below. Could somebody give me a tip to improve the performance? I can give more details if necessary, but maybe just looking at the costs of seq and next is enough to make a good guess.
You should check out core.matrix and associated libraries for anything to do with matrix computation. core.matrix is a general purpose Clojure API for matrix computation, supporting multiple back-end implementations.
Clojure's persistent data structures are great for most purposes, but are really not suited for fast processing of large matrices. The main problems are:
Immutability: usually a good thing, but it can be a killer for low-level code where you need to do things like accumulate results in a mutable array for performance reasons.
Boxing: Clojure data structures generally box results (as java.lang.Double etc.), which adds a lot of overhead compared to using primitives.
Sequences: traversing most Clojure data structures as sequences involves creating temporary heap objects to hold the sequence elements. Normally not a problem, but when you are dealing with large matrices it becomes problematic.
The related libraries that you may want to look at are:
vectorz-clj: a very fast matrix library that works as a complete core.matrix implementation. The underlying code is pure Java but with a nice Clojure wrapper. I believe it is currently the fastest way of doing general-purpose matrix computation in Clojure without resorting to native code. Under the hood, it uses arrays of Java primitives, but you don't need to deal with this directly.
Clatrix: another fast matrix library for Clojure which is also a core.matrix implementation. Uses JBLAS under the hood.
image-matrix: represents a Java BufferedImage as a core.matrix implementation, so you can perform matrix operations on images. A bit experimental right now, but it should work for basic use cases.
Clisk: a library for procedural image processing. Not so much a matrix library itself, but very useful for creating and manipulating digital images using a Clojure-based DSL.
Depending on what you want to do, the best approach may be to use image-matrix to convert the images into vectorz-clj matrices and do your processing there. Alternatively, Clisk might be able to do what you want out of the box (it has a lot of ready-made filters / distortion effects etc.)
Disclaimer: I'm the lead developer for most of the above libraries. But I'm using them all myself for serious work, so very willing to vouch for their usefulness and help fix any issues you find.
I really think that you should use arrays of primitives for this. Clojure has array support built in, even though it's not highlighted, and it's for cases just like this, where you have a high volume of numerical data.
Any other approach (vectors, or even Java collections) will result in all of your numbers being boxed individually, which is very wasteful. Arrays of primitives (int, double, byte, whatever is appropriate) don't have this problem, and that's why they're there. People feel shy about using arrays in Clojure, but they are there for a reason, and this is it. And it'll be good portable Clojure code: int-array works in both JVM Clojure and ClojureScript.
Try arrays and benchmark.
Clojure's transients offer a middle ground between full persistence and no persistence, like you would get with a standard Java array. This allows you to build the image using fast mutate-in-place operations (which are limited to the current thread) and then call persistent! to convert it in constant time to a proper persistent structure for manipulation in the rest of the program.
It also looks like you are seeing a lot of overhead from working with sequences over the contents of the image. If transients don't make enough of a difference, the next thing to consider is using normal Java arrays and structuring the code to access the array elements directly.

How can this linear solver be linked within Mathematica?

Here is a good linear solver named GotoBLAS. It is available for download and runs on most computing platforms. My question is: is there an easy way to link this solver with the Mathematica kernel, so that we can call it like LinearSolve? One thing most of you may agree on is that if we have a very large linear system, we had better get it solved by some industry-standard linear solver. The built-in solver is not meant for really large problems.
Now that Mathematica 8 has come out with better compilation and library-link capabilities, we can expect to use some of those solvers from within Mathematica. The question is: does that require a little tuning of the source code, or do you need to be an advanced wizard to do it? Here in this forum we could start linking excellent open-source programs like GotoBLAS with Mathematica and exchange our views. Less experienced people can get some insight from the pro users, and in the end we get a much stronger Mathematica. It would be an open project for the ever-growing Mathematica community, and a platform where these newly introduced capabilities of Mathematica 8 could be transparently documented for future users.
I hope some of you here will offer solid ideas on how we can get GotoBLAS running from within Mathematica. As the newer compilation and library-link capabilities are not very well documented, they are not often used by ordinary users. This question can act as a toy example with which to document these new capabilities of Mathematica. Help in this direction from the experienced forum members will really lift the motivation of new users like me, and it will teach us a very useful way to extend Mathematica's number-crunching arsenal.
The short answer, I think, is that this is not something you really want to do.
GotoBLAS, as I understand it, is a specific implementation of BLAS, which stands for Basic Linear Algebra Subprograms. "Basic" really means quite basic here: multiply a matrix by a vector, for example. Thus, BLAS is not a solver that a function like LinearSolve would call. LinearSolve would (depending on the exact form of the arguments) call a LAPACK routine; LAPACK is a higher-level package built on top of BLAS. Thus, to really link GotoBLAS (or any BLAS) into Mathematica, one would need to recompile the whole kernel.
Of course, one could write a C/Fortran program that was compiled against GotoBLAS and then link that into Mathematica. The resulting program would use GotoBLAS only when running whatever specific commands you've linked into Mathematica, however, which rather misses the whole point of BLAS.
The Wolfram kernel (Mathematica) is already linked against the highly optimized Intel Math Kernel Library, which is distributed with Mathematica. The MKL is multithreaded and vectorized, so I'm not sure what GotoBLAS would improve upon.

Efficient EigenSolver Implementation

I am looking for an efficient eigensolver (language not important, although I would be programming in C#) that utilizes the multi-core features found in modern CPUs. Being able to work directly with the PARDISO solver is a major plus. My matrices are mostly sparse, so an ideal solver should be able to take advantage of this fact and greatly improve memory usage and performance.
So far I have only found LAPACK and ARPACK. LAPACK, as implemented in Intel MKL, is a good candidate, as it offers multi-core optimization. But it seems that the drivers inside LAPACK don't work directly with the PARDISO solver; furthermore, it seems that they don't take advantage of sparse matrices (but I am not sure on this point).
ARPACK, on the other hand, seems to be pretty hard to set up in a Windows environment, and the parallel version, PARPACK, doesn't work so well. The bonus point is that it can work with the PARDISO solver.
The best would be Intel MKL + ARPACK with multi-core speedup. I'm not sure whether there are any existing implementations that already do what I want to do?
I'm working on a problem with needs very similar to the ones you state. I'm considering FEAST:
http://www.ecs.umass.edu/~polizzi/feast/index.htm
I'm trying to make it work right now, but it seems perfect. I'm interested in hearing what your experience with it is, if you use it.
cheers
Ned
Have a look at the Eigen2 library.
I've implemented it already, in C#.
The idea is that one must first convert the matrix to CSR format. Then one can use MKL for the linear equation solving (using the PARDISO solver) and for the matrix-vector manipulation.
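To make the CSR step concrete, here is a sketch in Julia (the language of the main thread above; the answerer's C# code isn't shown). Julia's SparseMatrixCSC is column-compressed, so the CSR arrays of a matrix S are exactly the CSC arrays of its transpose, and this three-array layout (row pointers, column indices, values) is the form MKL's sparse routines consume:

    using SparseArrays

    S  = sprand(6, 6, 0.3)        # random sparse matrix, stored CSC
    St = sparse(transpose(S))     # materialize the transpose, still CSC
    row_ptr = St.colptr           # CSR row pointers of S
    col_idx = St.rowval           # CSR column indices of S
    vals    = St.nzval            # CSR nonzero values of S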
