mkl_?dnscsr deprecated, what should I use? - matrix

MKL's routine mkl_?dnscsr to convert CSR format into dense format is deprecated.
Documentation says "Use the matrix manipulation routines from the Intel® oneAPI Math Kernel Library Inspector-executor Sparse BLAS interface instead."
There's no routine to convert CSR to dense in the Inspector-executor Sparse BLAS interface documentation that I can see.
What should I use?
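If the matrix already lives in an Inspector-executor handle, one workable pattern is to pull the CSR arrays out (e.g. with mkl_sparse_d_export_csr) and expand them into a dense buffer yourself, since there does not appear to be a dedicated CSR-to-dense routine. Below is a minimal sketch of that expansion only, assuming 0-based indexing and a row-major dense result; the function name csr_to_dense is just for illustration.

#include <cstddef>
#include <vector>

// Expand a CSR matrix (0-based indices assumed) into a row-major dense array.
// row_ptr has rows+1 entries; col_ind and values have row_ptr[rows] entries.
std::vector<double> csr_to_dense(int rows, int cols,
                                 const int* row_ptr,
                                 const int* col_ind,
                                 const double* values)
{
    std::vector<double> dense((std::size_t)rows * cols, 0.0);
    for (int i = 0; i < rows; ++i)
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            dense[(std::size_t)i * cols + col_ind[k]] = values[k];
    return dense;
}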

Related

State-of-the-art out-of-place matrix transposition in libraries such as LAPACK?

I'm looking for the most efficient way to compute out-of-place transposition for large matrices (>> 1024x1024) in C/C++. I've already come across several answers on SO, but I need more "trustworthy" sources for my work (like BLAS/LAPACK).
From a quick online search I understood that BLAS has no such function, but it was implied that LAPACK implements matrix transposition. I've been looking for a while (including in the LAPACK documentation) but found no answer.
I know MKL BLAS implements matrix transposition, but I'm working on a remote server and I'm not able to install it there.
OpenBLAS (a BLAS implementation) supports those:
https://github.com/xianyi/OpenBLAS/wiki/OpenBLAS-Extensions
?omatcopy s,d,c,z out-of-place transposition/copying
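For reference, a rough sketch of calling that extension, assuming your OpenBLAS build exposes cblas_domatcopy in cblas.h; the sizes are only illustrative.

#include <vector>
#include <cblas.h>   // OpenBLAS header declaring the ?omatcopy extensions

int main() {
    const int rows = 2048, cols = 4096;        // illustrative sizes
    std::vector<double> A(rows * cols, 1.0);
    std::vector<double> B(A.size());           // will hold A^T (cols x rows)

    // B = 1.0 * A^T, out of place, with row-major storage on both sides
    cblas_domatcopy(CblasRowMajor, CblasTrans,
                    rows, cols, 1.0,
                    A.data(), cols,            // lda = cols for row-major A
                    B.data(), rows);           // ldb = rows for row-major A^T
    return 0;
}

MKL offers the same operation under the name mkl_domatcopy, so code written against one can usually be adapted to the other.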

Boost compressed-matrix & Cholesky factorization & BLAS/LAPACK

I'd like to solve the following steps:
fill a boost::numeric::ublas::compressed_matrix;
now, I'd need to apply Cholesky factorization.
However, Boost has no such function.
So I started looking for another library; I was thinking about BLAS or LAPACK. But although they provide the algorithms I need, how do I bind a boost::numeric::ublas::compressed_matrix to a BLAS or LAPACK function/algorithm? Is there a way to do that?
Finally, I'd need to solve 'Ax=b', where 'A' is Boost's compressed_matrix factorized by the Cholesky algorithm. So how do I solve for 'x' using Boost's and/or LAPACK's algorithms or functions?
Thanks in advance.
LuP
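If the system is small enough to densify, one common route is to copy the compressed_matrix into a plain column-major array and hand it to LAPACK's dense Cholesky routines. Below is a rough sketch using LAPACKE; the values of n, A and b are only illustrative, and for a genuinely large sparse system a dedicated sparse Cholesky package such as CHOLMOD would be more appropriate.

#include <vector>
#include <boost/numeric/ublas/matrix_sparse.hpp>
#include <lapacke.h>

namespace ublas = boost::numeric::ublas;

int main() {
    const int n = 3;
    ublas::compressed_matrix<double> A(n, n);
    A(0, 0) = 4; A(1, 1) = 5; A(2, 2) = 6;     // fill a small SPD example
    A(0, 1) = 1; A(1, 0) = 1;

    // Copy the nonzeros into a dense column-major array for LAPACK
    std::vector<double> Ad(n * n, 0.0);
    for (auto it1 = A.begin1(); it1 != A.end1(); ++it1)
        for (auto it2 = it1.begin(); it2 != it1.end(); ++it2)
            Ad[it2.index2() * n + it2.index1()] = *it2;

    std::vector<double> b = {1.0, 2.0, 3.0};   // right-hand side

    // Cholesky factorization A = L*L^T, then solve A x = b
    lapack_int info = LAPACKE_dpotrf(LAPACK_COL_MAJOR, 'L', n, Ad.data(), n);
    if (info == 0)
        info = LAPACKE_dpotrs(LAPACK_COL_MAJOR, 'L', n, 1, Ad.data(), n, b.data(), n);
    // on success, b now holds the solution x
    return info;
}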

sparse matrix multiplication program openmp

I'm looking for any standard C program that uses OpenMP APIs for sparse matrix-vector or matrix-matrix multiplication. Can anyone let me know if there are any such programs?
If you are not looking for an open-source library, you can have a look at the Intel MKL Sparse BLAS level 2 and level 3 routines:
http://software.intel.com/sites/products/documentation/hpc/mkl/updates/10.3.5/mklman/index.htm
These routines should be multithreaded using OpenMP, as stated on the following page:
http://software.intel.com/en-us/articles/intel-math-kernel-library-intel-mkl-using-intel-mkl-with-threaded-applications/
I don't understand why you are looking for a third-party library to perform sparse matrix-matrix multiplication.
Have a look at this great book (Introduction to Parallel Computing): http://www.scribd.com/doc/60118054/72/The-matrix%E2%80%93vector-multiplication-with-OpenMP
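If you end up writing it yourself rather than using a library, here is a minimal sketch of a CSR matrix-vector product parallelized over rows with OpenMP; the array names are illustrative, and you would compile with -fopenmp or your compiler's equivalent.

/* y = A*x for a CSR matrix with 0-based indices:
   row_ptr has n_rows+1 entries, col_ind/values have row_ptr[n_rows] entries. */
void csr_spmv(int n_rows,
              const int *row_ptr, const int *col_ind, const double *values,
              const double *x, double *y)
{
    /* each row's dot product is independent, so parallelize over rows */
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < n_rows; ++i) {
        double sum = 0.0;
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            sum += values[k] * x[col_ind[k]];
        y[i] = sum;
    }
}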

eigenvalue solver based on BOOST UBLAS

These days I am starting to learn Boost uBLAS and Boost Math for my tasks.
I was a bit surprised to find that there is no eigenvalue/eigenvector solver in it.
Since I would like to stick with the Boost libraries and their matrix classes, do you know of any library built on top of Boost uBLAS that can find eigenvalues and other such things, that extends it, or that is (at least) capable of accepting a Boost matrix as input?
Boost uBLAS does not implement the gory details of numeric algorithms; it just provides a nice template interface. Access to matrix libraries is provided through Boost bindings, e.g. LAPACK bindings.
I used MKL for that problem. Of course, it isn't connected with uBLAS.
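For what it's worth, one pragmatic pattern is to copy the uBLAS matrix into a plain array and call a LAPACK driver directly (or go through the bindings mentioned above). Here is a rough sketch with LAPACKE's symmetric eigensolver dsyev, where the matrix contents and names are only illustrative.

#include <vector>
#include <boost/numeric/ublas/matrix.hpp>
#include <lapacke.h>

namespace ublas = boost::numeric::ublas;

int main() {
    const int n = 3;
    ublas::matrix<double> A(n, n);
    A(0,0) = 2; A(0,1) = 1; A(0,2) = 0;        // a small symmetric example
    A(1,0) = 1; A(1,1) = 2; A(1,2) = 1;
    A(2,0) = 0; A(2,1) = 1; A(2,2) = 2;

    // Copy into a column-major buffer that LAPACK can overwrite in place
    std::vector<double> a(n * n);
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < n; ++i)
            a[j * n + i] = A(i, j);

    std::vector<double> w(n);                  // eigenvalues, ascending
    lapack_int info = LAPACKE_dsyev(LAPACK_COL_MAJOR, 'V', 'U',
                                    n, a.data(), n, w.data());
    // on success, w holds the eigenvalues and 'a' the eigenvectors (as columns)
    return info;
}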

BLAS and CUBLAS

I'm wondering about NVIDIA's cuBLAS library. Does anybody have experience with it? For example, if I write a C program using BLAS, will I be able to replace the calls to BLAS with calls to cuBLAS? Or, even better, implement a mechanism which lets the user choose at runtime?
What if I use the BLAS library provided by Boost with C++?
The answer by janneb is incorrect: cuBLAS is not a drop-in replacement for a CPU BLAS. It assumes data is already on the device, and the function signatures have an extra parameter to keep track of a cuBLAS context.
However, coming in CUDA 6.0 is a new library called NVBLAS which provides exactly this "drop-in" functionality. It intercepts Level 3 BLAS calls (GEMM, TRSM, etc.) and automatically sends them to the GPU, effectively tiling the PCIe transfer with on-GPU computation.
There is some information here: https://developer.nvidia.com/cublasxt, and CUDA 6.0 is available to CUDA registered developers today.
Full docs will be online once CUDA 6.0 is released to the general public.
CUBLAS is not a wrapper around BLAS.
CUBLAS also accesses matrices in column-major ordering, as Fortran code and BLAS do.
I am more used to writing code in C, even for CUDA.
A code written with CBLAS (which is a C wrapper of BLAS) can easily be changed into CUDA code.
Be aware that Fortran codes that use BLAS are quite different from C/C++ codes that use CBLAS.
Fortran and BLAS normally store matrices or double arrays in column-major ordering,
but C/C++ normally handles row-major ordering.
I normally handle this problem by storing the matrices in 1D arrays,
and using #define to write a macro to access the element (i,j) of a matrix as:
/* define macro to access Aij (1-based i,j) in the row-major array A[M*N] */
#define indrow(ii,jj,N) (((ii)-1)*(N)+(jj)-1) /* does not depend on the number of rows M */
/* define macro to access Aij (1-based i,j) in the column-major array A[M*N] */
#define indcol(ii,jj,M) (((jj)-1)*(M)+(ii)-1)
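For example, a tiny usage sketch of those macros (the array names are only illustrative), filling the same 2x3 matrix in both storage orders:

#include <cstdio>

#define indrow(ii,jj,N) (((ii)-1)*(N)+(jj)-1)  /* row-major, 1-based i,j */
#define indcol(ii,jj,M) (((jj)-1)*(M)+(ii)-1)  /* column-major, 1-based i,j */

int main(void) {
    const int M = 2, N = 3;                    /* 2x3 matrix */
    double A_row[2 * 3], A_col[2 * 3];

    /* store A(i,j) = 10*i + j in both layouts */
    for (int i = 1; i <= M; ++i)
        for (int j = 1; j <= N; ++j) {
            A_row[indrow(i, j, N)] = 10 * i + j;
            A_col[indcol(i, j, M)] = 10 * i + j;
        }

    /* the same element A(2,3) lives at different offsets in the two layouts */
    printf("%g %g\n", A_row[indrow(2, 3, N)], A_col[indcol(2, 3, M)]);
    return 0;
}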
The CBLAS library has well-organized parameters and conventions (enum constants)
for telling each function the ordering of the matrix.
Beware that the storage of matrices also varies: a row-major banded matrix is not stored the same way as a column-major banded matrix.
I don't think there are mechanisms to allow the user to choose between using BLAS or CUBLAS
without writing the code twice.
CUBLAS also takes, on most function calls, a "handle" variable that does not appear in BLAS.
I thought of using #define to change the name at each function call, but this might not work.
I've been porting BLAS code to CUBLAS. The BLAS library I use is ATLAS, so what I say may be correct only for that choice of BLAS library.
ATLAS BLAS requires you to specify whether you are using column-major or row-major ordering, and I chose column-major ordering since I was using CLAPACK, which uses column-major ordering. LAPACKE, on the other hand, can use row-major ordering. CUBLAS uses column-major ordering. You may need to adjust accordingly.
Even when ordering is not an issue, porting to CUBLAS is by no means a drop-in replacement. The largest issue is that you must move the data onto and off of the GPU's memory space. That memory is set up using cudaMalloc() and released with cudaFree(), which act as one might expect. You move data into GPU memory using cudaMemcpy(). The time to do this will be a large determining factor in whether it's worthwhile to move from CPU to GPU.
Once that's done, however, the calls are fairly similar. CblasNoTrans becomes CUBLAS_OP_N and CblasTrans becomes CUBLAS_OP_T. If your BLAS library allows you to pass scalars by value (as ATLAS does), you will have to convert that to pass-by-reference (as is normal for Fortran).
Given this, any switch that allows for a choice of CPU/GPU would most easily sit at a higher level than within the function using BLAS. In my case I have CPU and GPU variants of the algorithm and choose between them at a higher level depending on the size of the problem.
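To make the difference concrete, here is a rough sketch of the same column-major DGEMM done first with CBLAS and then with cuBLAS; error checking is omitted, and the size n and the array names are only illustrative.

#include <vector>
#include <cblas.h>
#include <cublas_v2.h>
#include <cuda_runtime.h>

int main() {
    const int n = 512;                         // square matrices, illustrative size
    std::vector<double> A(n * n, 1.0), B(n * n, 1.0), C(n * n, 0.0);
    const double alpha = 1.0, beta = 0.0;

    // CPU: CBLAS takes alpha/beta by value and works on host pointers
    cblas_dgemm(CblasColMajor, CblasNoTrans, CblasNoTrans,
                n, n, n, alpha, A.data(), n, B.data(), n, beta, C.data(), n);

    // GPU: data must be copied to device memory and a handle created first
    double *dA, *dB, *dC;
    cudaMalloc((void**)&dA, n * n * sizeof(double));
    cudaMalloc((void**)&dB, n * n * sizeof(double));
    cudaMalloc((void**)&dC, n * n * sizeof(double));
    cudaMemcpy(dA, A.data(), n * n * sizeof(double), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, B.data(), n * n * sizeof(double), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    // cuBLAS takes alpha/beta by pointer and uses CUBLAS_OP_* instead of CblasNoTrans
    cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                n, n, n, &alpha, dA, n, dB, n, &beta, dC, n);
    cublasDestroy(handle);

    cudaMemcpy(C.data(), dC, n * n * sizeof(double), cudaMemcpyDeviceToHost);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}

The handle, the explicit copies, and the pointer form of alpha/beta are exactly the kinds of differences that push the CPU/GPU choice up to a higher level, as described above.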
