I am working on a problem that requires solving 3 simultaneous equations in 3 variables. Reading:
https://www.mathsisfun.com/algebra/systems-linear-equations-matrices.html
I see that this can be solved by computing the inverse of the matrix representation of the 3 equations. I found this C code on GitHub, which uses the GSL library to calculate inverse matrices:
https://gist.github.com/bjd2385/7f4685e703f7437e513608f41c65bbd7
(Many thanks to its author, Mr. Doyle.)
I had read that if one multiplies a matrix by its inverse, one should get an identity matrix (a matrix with 1.0s down the diagonal and 0.0s everywhere else). So I figured that, as a sanity check for the GitHub code above, I could modify it to multiply the resulting inverse by the original matrix and display the result. If the result is an identity matrix, that validates the inverse calculation.
What I am finding is that, at least for the simple 2x2 case, while the result of the inverse calculation looks correct, the subsequent matrix multiplication does not produce an identity matrix.
I'm new to the GSL library, so perhaps I am just not calling the gsl_blas_dgemm() function correctly to perform the matrix multiplication.
I've copied the modified code below:
/* A simple example of inverting a matrix using the gsl */
/* 1-26-2021, modified to sanity check result */
/* code doesn't seem to work */
#define HAVE_INLINE
#include <stdio.h>
#include <time.h>
#include <stdlib.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_blas.h>
#include <gsl/gsl_blas_types.h>
#include <gsl/gsl_matrix_double.h>
#include <gsl/gsl_linalg.h>
gsl_matrix *invert_a_matrix(gsl_matrix *matrix);
void print_mat_contents(gsl_matrix *matrix);
void randomize_mat_contents(gsl_matrix *matrix);
static size_t size = 2;
/************************************************************
* PROCEDURE: invert_a_matrix
*
* DESCRIPTION: Invert a matrix using GSL.
*
* RETURNS:
* gsl_matrix pointer
*/
gsl_matrix *
invert_a_matrix(gsl_matrix *matrix)
{
    gsl_permutation *p = gsl_permutation_alloc(size);
    int s;

    // Compute the LU decomposition of this matrix
    gsl_linalg_LU_decomp(matrix, p, &s);

    // Compute the inverse of the LU decomposition
    gsl_matrix *inv = gsl_matrix_alloc(size, size);
    gsl_linalg_LU_invert(matrix, p, inv);

    gsl_permutation_free(p);
    return inv;
}
/************************************************************
* PROCEDURE: print_mat_contents
*
* DESCRIPTION: Print the contents of a gsl-allocated matrix
*
* RETURNS:
* None.
*/
void
print_mat_contents(gsl_matrix *matrix)
{
    size_t i, j;
    double element;

    for (i = 0; i < size; ++i) {
        for (j = 0; j < size; ++j) {
            element = gsl_matrix_get(matrix, i, j);
            printf("%f ", element);
        }
        printf("\n");
    }
}
/************************************************************
* PROCEDURE: randomize_mat_contents
*
* DESCRIPTION: Overwrite entries in matrix with randomly
* generated values.
*
* RETURNS:
* None.
*/
void
randomize_mat_contents(gsl_matrix *matrix)
{
    size_t i, j;
    double random_value;
    double range = 1.0 * RAND_MAX;

    for (i = 0; i < size; ++i) {
        for (j = 0; j < size; ++j) {
            // generate a random value
            random_value = rand() / range;
            // set entry at i, j to random_value
            gsl_matrix_set(matrix, i, j, random_value);
        }
    }
}
int
main(void)
{
    srand(time(NULL));

    gsl_matrix *mat = gsl_matrix_alloc(size, size);

    // fill this matrix with random doubles
    randomize_mat_contents(mat);

    // let's see the original now
    printf("Original matrix:\n");
    print_mat_contents(mat);
    printf("\n");

    // compute the matrix inverse
    gsl_matrix *inverse = invert_a_matrix(mat);
    printf("Inverted matrix:\n");
    print_mat_contents(inverse);
    printf("\n");

    gsl_matrix *product = gsl_matrix_calloc(size, size);

    // if inverse is truly the inverse of mat, then mat * inverse should = identity matrix
    printf("product before:\n");
    print_mat_contents(product);
    printf("\n");

    int error;
    // neither of these results in an identity matrix, 8^(
    error = gsl_blas_dgemm(CblasNoTrans, CblasNoTrans, 1.0, inverse, mat, 0.0, product);
    // error = gsl_blas_dgemm(CblasNoTrans, CblasNoTrans, 1.0, mat, inverse, 0.0, product);
    if (error) {
        fprintf(stderr, "gsl_blas_dgemm returned %d\n", error);
    }

    printf("inverse * mat:\n");
    print_mat_contents(product);

    gsl_matrix_free(mat);
    gsl_matrix_free(inverse);
    gsl_matrix_free(product);

    return 0;
}
In summary:
Create a matrix with random contents, print it.
Calculate its inverse, print the inverse.
Call gsl_blas_dgemm() to multiply the matrix by its inverse, print what should be an identity matrix.
When I compile, link and run the program on my Ubuntu 20.04 laptop, I get this:
~/opengl/matrix_code/2by2_inverter$ gcc inverter.c -lgsl -lgslcblas -lm
~/opengl/matrix_code/2by2_inverter$ ./a.out
Original matrix:
0.317588 0.113800
0.280836 0.190114
Inverted matrix:
6.689703 -4.004360
-9.882010 11.175224
product before:
0.000000 0.000000
0.000000 0.000000
inverse * mat:
-1.416400 0.402961
6.743603 -0.124569
~/opengl/matrix_code/2by2_inverter$
Now if I do the matrix multiplication of the original matrix times its inverse "by hand", I find that the result is a 2x2 identity matrix:
Upper left corner: 0.317588*6.689703+0.113800*-9.882010 = .999996658 ~= 1.0
Upper right corner: 0.317588*-4.004360+0.113800*11.175224 = .000003808 ~= 0.0
Lower left corner: 0.280836*6.689703+0.190114*-9.882010 = .000000982 ~= 0.0
Lower right corner: 0.280836*-4.004360+0.190114*11.175224 = .999998091 ~= 1.0
Granted, the entries of the identity matrix are not exactly 1.0 and 0.0, but some numerical error is expected. From this I conclude that the function invert_a_matrix() is doing the right thing, at least for a 2x2 matrix.
But try as I might I cannot figure out how to get the call to gsl_blas_dgemm() to produce the identity matrix.
Note that I installed the GSL libraries from the Ubuntu repository via:
~$ sudo apt-get install libgsl-dev
Any clues as to what I am doing wrong?
Thanks in advance
I figured out the problem. The call to invert_a_matrix() modifies the matrix passed in (gsl_linalg_LU_decomp() overwrites it with its LU decomposition). So by the time I got to the call to gsl_blas_dgemm(), I was no longer multiplying the inverse by the original matrix.
The fix was to allocate a copy of the original matrix before the call to invert_a_matrix() and pass that copy to gsl_blas_dgemm().
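For anyone who hits the same thing, here is a minimal sketch of that fix (mat_copy is just my name for the extra buffer; everything else comes from the code above):
gsl_matrix *mat_copy = gsl_matrix_alloc(size, size);
gsl_matrix_memcpy(mat_copy, mat);              /* copy BEFORE inverting; LU_decomp clobbers mat */
gsl_matrix *inverse = invert_a_matrix(mat);    /* mat now holds its LU decomposition */
gsl_blas_dgemm(CblasNoTrans, CblasNoTrans, 1.0, inverse, mat_copy, 0.0, product);
gsl_matrix_free(mat_copy);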
How would I go about getting a random number in a Metal shader?
I searched for "random" in The Metal Shading Language Specification, but found nothing.
It looks like there's not one built in. This example code for MetalShaderShowcase/AAPLWoodShader.metal defines its own simple rand function.
// Generate a random float in the range [0.0f, 1.0f] using x, y, and z (based on the xor128 algorithm)
float rand(int x, int y, int z)
{
    int seed = x + y * 57 + z * 241;
    seed = (seed << 13) ^ seed;
    return ((1.0 - ((seed * (seed * seed * 15731 + 789221) + 1376312589) & 2147483647) / 1073741824.0f) + 1.0f) / 2.0f;
}
I had been working on a random number generator for another project and had been meaning to package it into a neat framework for a while.
Your question pushed me to do just that. If you don't mind the shameless plug, here is a very simple framework that will generate a random number for you in a Metal shader, based on (up to) three seeds that you give it. The code is based on a research paper that describes how to create random numbers on parallel processors for Monte Carlo simulations. It also has a (theoretical) period of 2^121, so it should be good for most reasonable calculations that can be done on a GPU.
All you have to call in your shader is an initializer, then you call rand(), like so:
// Initialize a random number generator, seeds 2 and 3 are optional
Loki rng = Loki(seed1, seed2, seed3);
// get a random float [0,1)
float random_float = rng.rand();
I also included a sample project in the repo so you can see how it is used.
Instead of computing the random numbers on the GPU, you can also compute a bunch of random numbers on the CPU and pass them into the shader using a uniform / MTLBuffer.
Please take a look at pcg-random (https://www.pcg-random.org/); it's very simple and, more importantly, fast. And it's super easy to adapt their C code for Metal.
typedef struct { uint64_t state; uint64_t inc; } pcg32_random_t;

uint32_t pcg32_random_r(thread pcg32_random_t* rng)
{
    uint64_t oldstate = rng->state;
    rng->state = oldstate * 6364136223846793005ULL + rng->inc;
    uint32_t xorshifted = ((oldstate >> 18u) ^ oldstate) >> 27u;
    uint32_t rot = oldstate >> 59u;
    return (xorshifted >> rot) | (xorshifted << ((-rot) & 31));
}

// Note: pcg32_random_r must appear before pcg32_srandom_r, which calls it.
void pcg32_srandom_r(thread pcg32_random_t* rng, uint64_t initstate, uint64_t initseq)
{
    rng->state = 0U;
    rng->inc = (initseq << 1u) | 1u;
    pcg32_random_r(rng);
    rng->state += initstate;
    pcg32_random_r(rng);
}
How do I use it?
float randomF(thread pcg32_random_t* rng)
{
    //return pcg32_random_r(rng)/float(UINT_MAX);
    return ldexp(float(pcg32_random_r(rng)), -32);
}

pcg32_random_t rng;
pcg32_srandom_r(&rng, pos_grid.x*int_time, pos_grid.y*int_time);
auto randomFloat = randomF(&rng);
I looked around the forum but couldn't find the answer to my specific problem.
Objective
To compute A(x,y)^N where N is large, say 500. I store A(x,y) in Matlab in the form of a matrix where A(x,y) = sum_{i,j} A(i+1,j+1) * x^i * y^j. The critical point to note is that I can't compute the actual values of the coefficients because they are huge. So at each step I just store the logs of the coefficients, i.e. I always keep the exponents of the coefficients and never the actual values, which is important to me. If not for this complication, things would have been much easier, I guess.
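(For reference, the log-domain addition used throughout is the usual log-sum-exp identity: log(e^x + e^y) = max(x,y) + log(1 + e^(-|x-y|)); both the Matlab log_MatMat and the MEX logAdd below implement exactly this, dropping the correction term once |x-y| > 20.)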
I am attaching the code here. For small power values, i.e. n=50 or 100 instead of 500, I am seeing about a 1.5x speedup with the C MEX code. But for values like n=500, I observe that the MEX function is at best the same speed as Matlab, if not slower. Is that the best I can do with a C++ MEX file? I am not very experienced with C++, so I am hoping that some experienced users can point out any inefficiencies in my C++ code.
Matlab Code:
C = log(A);
P1 = log(1);
n = 500;
for i = 1:n
    P1 = log_MatMat(P1, C, zeroApprox);
    %P1 = reshape(log_MatMat_CPP(P1, C, zeroApprox), size(P1,1)+size(C,1)-1, []); % Using the cpp mex file.
end
Matlab Function: log_MatMat
function OP = log_MatMat(A, B, zeroApprox)
xDeg1 = size(A,1) - 1;
yDeg1 = size(A,2) - 1;
xDeg2 = size(B,1) - 1;
yDeg2 = size(B,2) - 1;
OP = -Inf*ones(xDeg1+xDeg2+1, yDeg1+yDeg2+1);
[xAInd, yAInd] = find(A~=-Inf);
linAInd = sub2ind(size(A), xAInd, yAInd)';
[xBInd, yBInd] = find(B~=-Inf);
for i = 0:numel(xBInd)-1
    xdegOffset = xBInd(i+1) - 1;
    ydegOffset = yBInd(i+1) - 1;
    tmp = A(linAInd) + B(xBInd(i+1), yBInd(i+1));
    opIndx = xAInd + xdegOffset;
    opIndy = yAInd + ydegOffset;
    opIndLin = sub2ind(size(OP), opIndx, opIndy)';
    maxExp = max([OP(opIndLin); tmp]);
    diffExp = abs(OP(opIndLin) - tmp);
    for j = 1:numel(tmp)
        % Does the addition in log-domain without converting the values into the real domain: e^x+e^y = e^y*(1+e^(x-y))
        if maxExp(j) < zeroApprox
            OP(opIndx(j), opIndy(j)) = -Inf;
        else
            if OP(opIndx(j), opIndy(j)) == -Inf
                OP(opIndx(j), opIndy(j)) = tmp(j);
            elseif diffExp(j) > 20
                OP(opIndx(j), opIndy(j)) = maxExp(j);
            else
                OP(opIndx(j), opIndy(j)) = maxExp(j) + log(1+exp(-1*diffExp(j)));
            end
        end
    end
    clear tmp;
end
Mex Code:
#include "mex.h"
#include "math.h"
#include <iostream>
#include <limits>
#define MIN(a, b) ((a < b) ? a : b)
#define MAX(a, b) ((a > b) ? a : b)
#define A(i,j) a[i+j*Xa]
#define B(i,j) b[i+j*Xb]
#define C(i,j) c[i+j*Xc]
const double Inf = std::numeric_limits<double>::max();
int zeroApprox;
/* Addition in log-domain */
double logAdd(double x, double y)
{
    double z;
    if (x > y) {
        (x-y > 20) ? (z = x) : (z = x + log(1+exp(y-x)));
    }
    else if (y >= x) {
        (y-x > 20) ? (z = y) : (z = y + log(1+exp(x-y)));
    }
    if (z < zeroApprox)
        z = -Inf;
    return z;
}
/* The bivariate polynomial product routine: c(x,y)=a(x,y)*b(x,y) */
void matProduct(double *a, double *b, double *c, int Xa, int Ya, int Xb, int Yb)
{
    int opX, opY;
    int Xc, Yc;
    unsigned int nonInfA[Xa*Ya][3];
    bool cInf_Flag[Xa+Xb-1][Ya+Yb-1];
    Xc = Xa+Xb-1;
    Yc = Ya+Yb-1;

    /* Initializing C with -Inf */
    for (int i = 0; i < Xc; i++) {
        for (int j = 0; j < Yc; j++) {
            C(i,j) = -Inf;
        }
    }

    /* Matrix Mult in Log-domain */
    for (int jX = 0; jX < Xb; jX++) {
        for (int jY = 0; jY < Yb; jY++) {
            if (!isinf(B(jX,jY))) {
                for (int iX = 0; iX < Xa; iX++) {
                    for (int iY = 0; iY < Ya; iY++) {
                        if (!isinf(A(iX,iY))) {
                            opX = iX+jX; opY = iY+jY;
                            C(opX,opY) = logAdd(C(opX,opY), A(iX,iY)+B(jX,jY));
                        }
                    }
                }
            }
        }
    }
}
/* The gateway function */
void mexFunction(int nlhs, mxArray *plhs[],
                 int nrhs, const mxArray *prhs[])
{
    int nrowsA, nrowsB, ncolsA, ncolsB; /* size of Matrices */
    int nrowsC, ncolsC;                 /* size of output Matrix */
    double *outMatrix;                  /* output matrix */
    double *A, *B;

    /* create a pointer to the real data in the input matrix */
    A = mxGetPr(prhs[0]);
    B = mxGetPr(prhs[1]);
    zeroApprox = mxGetScalar(prhs[2]);

    /* get dimensions of the input matrix */
    nrowsA = mxGetM(prhs[0]);
    ncolsA = mxGetN(prhs[0]);
    nrowsB = mxGetM(prhs[1]);
    ncolsB = mxGetN(prhs[1]);

    /* Compute output dimensions */
    nrowsC = nrowsA+nrowsB-1;
    ncolsC = ncolsA+ncolsB-1;

    /* create the output matrix */
    plhs[0] = mxCreateDoubleMatrix(1, nrowsC*ncolsC, mxREAL);

    /* get a pointer to the real data in the output matrix */
    outMatrix = mxGetPr(plhs[0]);

    /* call the computational routine */
    if (ncolsA <= ncolsB) {
        matProduct(B, A, outMatrix, nrowsB, ncolsB, nrowsA, ncolsA);
    }
    else {
        matProduct(A, B, outMatrix, nrowsA, ncolsA, nrowsB, ncolsB);
    }
}
The situation is the following: I have a large number (thousands) of elements, each given by a small matrix of dimensions 4x2, 9x3 ... you get the idea. All matrices have the same dimensions.
I want to multiply each of these matrices with a fixed vector of precalculated values. In short:
for(i = 1...n)
X[i] = M[i] . N;
What is the best approach to do this in parallel using Thrust? How do I lay out my data in memory?
NB: There might be specialized, more suitable libraries to do this on GPUs. I'm interested in Thrust because it allows me to deploy to different backends, not just CUDA.
One possible approach:
1. Flatten the arrays (matrices) into a single data vector. This is an advantageous step for enabling general thrust processing anyway.
2. Use a strided range mechanism to take your scaling vector and extend it to the overall length of your flattened data vector.
3. Use thrust::transform with thrust::multiplies to multiply the two vectors together.
If you need to access the matrices later out of your flattened data vector (or result vector), you can do so with pointer arithmetic, or a combination of fancy iterators.
If you need to re-use the extended scaling vector, you may want to use the method outlined in step 2 exactly (i.e. create an actual vector using that method, length = N matrices, repeated). If you are only doing this once, you can achieve the same effect with a counting iterator, followed by a transform iterator (modulo the length of your matrix in elements), followed by a permutation iterator, to index into your original scaling vector (length = 1 matrix).
The following example implements the above, without using the strided range iterator method:
#include <iostream>
#include <stdlib.h>
#include <thrust/device_vector.h>
#include <thrust/host_vector.h>
#include <thrust/functional.h>
#include <thrust/iterator/permutation_iterator.h>
#include <thrust/iterator/counting_iterator.h>
#include <thrust/iterator/transform_iterator.h>
#include <thrust/transform.h>
#define N_MAT 1000
#define H_MAT 4
#define W_MAT 3
#define RANGE 1024
struct my_modulo_functor : public thrust::unary_function<int, int>
{
    __host__ __device__
    int operator() (int idx) {
        return idx % (H_MAT*W_MAT);
    }
};

int main(){

    thrust::host_vector<int> data(N_MAT*H_MAT*W_MAT);
    thrust::host_vector<int> scale(H_MAT*W_MAT);

    // synthetic; instead flatten/copy matrices into data vector
    for (int i = 0; i < N_MAT*H_MAT*W_MAT; i++) data[i] = rand()%RANGE;
    for (int i = 0; i < H_MAT*W_MAT; i++) scale[i] = rand()%RANGE;

    thrust::device_vector<int> d_data = data;
    thrust::device_vector<int> d_scale = scale;
    thrust::device_vector<int> d_result(N_MAT*H_MAT*W_MAT);

    thrust::transform(d_data.begin(), d_data.end(),
                      thrust::make_permutation_iterator(
                          d_scale.begin(),
                          thrust::make_transform_iterator(thrust::counting_iterator<int>(0), my_modulo_functor())),
                      d_result.begin(),
                      thrust::multiplies<int>());

    thrust::host_vector<int> result = d_result;

    for (int i = 0; i < N_MAT*H_MAT*W_MAT; i++)
        if (result[i] != data[i] * scale[i%(H_MAT*W_MAT)]) {
            std::cout << "Mismatch at: " << i << " cpu result: " << (data[i] * scale[i%(H_MAT*W_MAT)]) << " gpu result: " << result[i] << std::endl;
            return 1;
        }

    std::cout << "Success!" << std::endl;
    return 0;
}
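If you do want to materialize the extended scaling vector once so it can be reused (step 2 done explicitly), a small sketch reusing my_modulo_functor from the example above could look like this (d_scale_ext is an illustrative name, and it needs #include <thrust/gather.h>):
// build a vector of length N_MAT*H_MAT*W_MAT that repeats d_scale N_MAT times
thrust::device_vector<int> d_scale_ext(N_MAT*H_MAT*W_MAT);
thrust::gather(thrust::make_transform_iterator(thrust::counting_iterator<int>(0), my_modulo_functor()),
               thrust::make_transform_iterator(thrust::counting_iterator<int>(N_MAT*H_MAT*W_MAT), my_modulo_functor()),
               d_scale.begin(),
               d_scale_ext.begin());
// d_scale_ext can then be passed directly to thrust::transform alongside d_data and reused later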
EDIT: Responding to a question below:
The benefit of fancy iterators (i.e. transform(numbers, iterator)) is that they often allow for the elimination of extra data copies/data movement, as compared to assembling the "other numbers" first (which requires extra steps and data movement) and then passing them to transform(numbers, other numbers). If you're only going to use the other numbers once, then the fancy iterators will generally be better. If you're going to use the other numbers again, then you may want to assemble them explicitly. This preso is instructive, in particular "Fusion".
For a one-time use of the other numbers, the overhead of assembling them on the fly using fancy iterators and the functor is generally lower than explicitly creating a new vector and then passing that new vector to the transform routine.
If you are looking for a software library made specifically for multiplying small matrices, you may have a look at https://github.com/hfp/libxsmm. Below, the code requests a specialized matrix kernel according to the typical GEMM parameters (please note that some limitations apply).
double alpha = 1, beta = 1;
const char transa = 'N', transb = 'N';
int flags = LIBXSMM_GEMM_FLAGS(transa, transb);
int prefetch = LIBXSMM_PREFETCH_AUTO;
libxsmm_blasint m = 23, n = 23, k = 23;
libxsmm_dmmfunction xmm = NULL;
xmm = libxsmm_dmmdispatch(m, n, k,
&m/*lda*/, &k/*ldb*/, &m/*ldc*/,
&alpha, &beta, &flags, &prefetch);
Given the above code, one can proceed and run "xmm" for an entire series of (small) matrices without a particular data structure (below code also uses "prefetch locations").
if (0 < n) { /* check that n is at least 1 */
#pragma omp parallel for private(i)
    for (i = 0; i < (n - 1); ++i) {
        const double *const ai = a + i * asize;
        const double *const bi = b + i * bsize;
        double *const ci = c + i * csize;
        xmm(ai, bi, ci, ai + asize, bi + bsize, ci + csize);
    }
    xmm(a + (n - 1) * asize, b + (n - 1) * bsize, c + (n - 1) * csize,
        /* pseudo prefetch for last element of batch (avoids page fault) */
        a + (n - 1) * asize, b + (n - 1) * bsize, c + (n - 1) * csize);
}
In addition to the manual loop control as shown above, libxsmm_gemm_batch (or libxsmm_gemm_batch_omp) can be used (see ReadTheDocs). The latter is useful if data structures exist that describe the series of operands (A, B, and C matrices).
There are two reasons why this library gives superior performance: (1) on-the-fly code specialization using an in-memory code generation technique, and (2) loading the next matrix operands while calculating the current product.
(If one is looking for something that blends well with C/C++, this library supports that. However, it does not aim at CUDA/Thrust.)
I'm currently working on porting a TERCOM algorithm from using only 1 thread to using multiple threads. Briefly explained, the TERCOM algorithm receives 5 measurements and the heading, and compares these measurements to a prestored map. The algorithm chooses the best match, i.e. the lowest Mean Absolute Difference (MAD), and returns the position.
The code works perfectly with one thread and for-loops, but when I try to use multiple threads and blocks it returns the wrong answer. It seems like the multithreaded version doesn't "run through" the calculation in the same way as the single-threaded version. Does anyone know what I am doing wrong?
Here's the code using for-loops
__global__ void kernel (int m, int n, int h, int N, float *f, float heading, float *measurements)
{
    //Without threads
    float pos[2]={0};
    float theta=heading*(PI/180);
    float MAD=0;

    // Calculate how much to move in x and y direction
    float offset_x = h*cos(theta);
    float offset_y = -h*sin(theta);

    float min=100000; //Some High value

    //Calculate Mean Absolute Difference
    for(float row=0;row<m;row++)
    {
        for(float col=0;col<n;col++)
        {
            for(float g=0; g<N; g++)
            {
                f[(int)g] = tex2D (tex, col+(g-2)*offset_x+0.5f, row+(g-2)*offset_y+0.5f);
                MAD += abs(measurements[(int)g]-f[(int)g]);
            }
            if(MAD<min)
            {
                min=MAD;
                pos[0]=col;
                pos[1]=row;
            }
            MAD=0; //Reset MAD
        }
    }

    f[0]=min;
    f[1]=pos[0];
    f[2]=pos[1];
}
This is my attempt to use multiple threads
__global__ void kernel (int m, int n, int h, int N, float *f, float heading, float *measurements)
{
    // With threads
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    int idy = blockIdx.y * blockDim.y + threadIdx.y;

    float pos[2]={0};
    float theta=heading*(PI/180);
    float MAD=0;

    // Calculate how much to move in x and y direction
    float offset_x = h*cos(theta);
    float offset_y = -h*sin(theta);

    float min=100000; //Some High value

    if(idx < n && idy < m)
    {
        for(float g=0; g<N; g++)
        {
            f[(int)g] = tex2D (tex, idx+(g-2)*offset_x+0.5f, idy+(g-2)*offset_y+0.5f);
            MAD += abs(measurements[(int)g]-f[(int)g]);
        }
        if(MAD<min)
        {
            min=MAD;
            pos[0]=idx;
            pos[1]=idy;
        }
        MAD=0; //Reset MAD
    }

    f[0]=min;
    f[1]=pos[0];
    f[2]=pos[1];
}
To launch the kernel
dim3 dimBlock( 16,16 );
dim3 dimGrid;
dimGrid.x = (n + dimBlock.x - 1)/dimBlock.x;
dimGrid.y = (m + dimBlock.y - 1)/dimBlock.y;
kernel <<< dimGrid,dimBlock >>> (m, n, h, N, dev_results, heading, dev_measurements);
The basic problem here is that you have a memory race in the code, centered around the use of f as both some sort of thread local scratch space and an output variable. Every concurrent thread will be trying to write values into the same locations in f simultaneously, which will produce undefined behaviour.
As best as I can tell, the use of f as scratch space isn't even necessary at all and the main computational section of the kernel could be written as something like:
if(idx < n && idy < m)
{
    for(float g=0; g<N; g++)
    {
        float fval = tex2D (tex, idx+(g-2)*offset_x+0.5f, idy+(g-2)*offset_y+0.5f);
        MAD += abs(measurements[(int)g]-fval);
    }
    min=MAD;
    pos[0]=idx;
    pos[1]=idy;
}
[disclaimer: written in browser, use at own risk]
At the end of that calculation, each thread has its own values of min and pos. At a minimum these must be stored in unique global memory (i.e. the output must have enough space for each thread's result). You will then need to perform some sort of reduction operation to obtain the global minimum from the set of thread-local values. That could be done on the host, in device code, or in some combination of the two. There is a lot of code already available for CUDA parallel reductions, which you should be able to find by searching and/or looking in the examples supplied with the CUDA toolkit. It should be trivial to adapt it to your specific case, where you need to retain the position along with the minimum value.
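To make the first part concrete, here is a rough sketch (not the complete fix) of the kernel writing one result per thread into dedicated output arrays. d_min, d_posx and d_posy are names I've made up and would be allocated by the caller with one slot per map cell; tex, PI and measurements are as in the original code:
__global__ void kernel(int m, int n, int h, int N, float heading,
                       const float *measurements,
                       float *d_min, int *d_posx, int *d_posy)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;   // column
    int idy = blockIdx.y * blockDim.y + threadIdx.y;   // row
    if (idx >= n || idy >= m) return;

    float theta = heading*(PI/180);
    float offset_x = h*cos(theta);
    float offset_y = -h*sin(theta);

    float MAD = 0.0f;
    for (int g = 0; g < N; g++) {
        float fval = tex2D(tex, idx+(g-2)*offset_x+0.5f, idy+(g-2)*offset_y+0.5f);
        MAD += fabsf(measurements[g] - fval);
    }

    // one result per thread, stored at a unique index
    int out = idy * n + idx;
    d_min[out]  = MAD;
    d_posx[out] = idx;
    d_posy[out] = idy;
}
A reduction over d_min (carrying the matching d_posx/d_posy entries along) then yields the global minimum and its position.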
I need to compute the nullspace of several thousand small matrices (8x9, not 4x3 as I wrote previously) in parallel (CUDA). All references point to SVD but the algorithm in numerical recipes seems very expensive, and gives me lots of things other than the null space that I don't really need. Is Gaussian elimination really not an option? Are there any other commonly used methods?
To answer your question directly... yes! QR decomposition!
Let A be an m-by-n matrix with rank n. QR decomposition finds orthonormal m-by-m matrix Q and upper triangular m-by-n matrix R such that A = QR. If we define Q = [Q1 Q2], where Q1 is m-by-n and Q2 is m-by-(m-n), then the columns of Q2 form the null space of A^T.
QR decomposition is computed either by Gram-Schmidt, Givens rotations, or Householder reflections. They have different stability properties and operation counts.
You are right: SVD is expensive! I can't speak for what state-of-the-art stuff uses, but when I hear "compute null space" (EDIT: in a way that is simple for me to understand), I think QR.
I don't think the above proposed method always gives the whole null space. To recap: "A = QR, where Q = [Q1 Q2], and Q1 is m-by-n and Q2 is m-by-(m-n). Then the columns of Q2 form the null space of A^T."
Indeed, this may only give a subspace of the null space. A simple counter-example is A = 0, in which case the null space of A^T is the whole of R^m.
Therefore, it is necessary to check R too. Based on my experience with Matlab, if a row of R is entirely zero, then the corresponding column of Q should also be a basis vector of the null space of A^T. Clearly this observation is heuristic and hinges on the particular algorithm used for the QR decomposition.
Gaussian elimination is plenty fast for 4x3 matrices. IIRC I've done about 5 million per second with Java without parallelism. With such a small problem, your best bet is to code the routine (row reduce etc.) yourself; otherwise you'll waste most of the time putting the data into the right format for the external routine.
In the answers above, it has already been pointed out how the null space of a matrix can be calculated using the QR or the SVD approach. SVD should be preferred when accuracy is required; see also Null-space of a rectangular dense matrix.
As of February 2015, CUDA 7 (now in release candidate) makes SVD available through its new cuSOLVER library. Below I report an example of how to use cuSOLVER's SVD to calculate the null space of a matrix.
Be aware that the problem you are focusing on concerns the calculation of several small matrices, so you should adapt the example below by using streams for it to make sense in your case. To associate a stream with each task you can use
cudaStreamCreate()
and
cusolverDnSetStream()
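A minimal sketch of that association (the stream variable is illustrative; in practice you would typically create one handle/stream pair per concurrent task):
cudaStream_t stream;
gpuErrchk(cudaStreamCreate(&stream));
cusolveSafeCall(cusolverDnSetStream(solver_handle, stream));
// ... queue the gesvd work for this small matrix on "stream" ...
gpuErrchk(cudaStreamDestroy(stream));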
kernel.cu
#include "cuda_runtime.h"
#include "device_launch_paraMeters.h"
#include<iostream>
#include<iomanip>
#include<stdlib.h>
#include<stdio.h>
#include<assert.h>
#include<math.h>
#include <cusolverDn.h>
#include <cuda_runtime_api.h>
#include "Utilities.cuh"
/********/
/* MAIN */
/********/
int main(){
// --- gesvd only supports Nrows >= Ncols
// --- column major memory ordering
const int Nrows = 7;
const int Ncols = 5;
// --- cuSOLVE input/output parameters/arrays
int work_size = 0;
int *devInfo; gpuErrchk(cudaMalloc(&devInfo, sizeof(int)));
// --- CUDA solver initialization
cusolverDnHandle_t solver_handle;
cusolverDnCreate(&solver_handle);
// --- Singular values threshold
double threshold = 1e-12;
// --- Setting the host, Nrows x Ncols matrix
double *h_A = (double *)malloc(Nrows * Ncols * sizeof(double));
for(int j = 0; j < Nrows; j++)
for(int i = 0; i < Ncols; i++)
h_A[j + i*Nrows] = (i + j*j) * sqrt((double)(i + j));
// --- Setting the device matrix and moving the host matrix to the device
double *d_A; gpuErrchk(cudaMalloc(&d_A, Nrows * Ncols * sizeof(double)));
gpuErrchk(cudaMemcpy(d_A, h_A, Nrows * Ncols * sizeof(double), cudaMemcpyHostToDevice));
// --- host side SVD results space
double *h_U = (double *)malloc(Nrows * Nrows * sizeof(double));
double *h_V = (double *)malloc(Ncols * Ncols * sizeof(double));
double *h_S = (double *)malloc(min(Nrows, Ncols) * sizeof(double));
// --- device side SVD workspace and matrices
double *d_U; gpuErrchk(cudaMalloc(&d_U, Nrows * Nrows * sizeof(double)));
double *d_V; gpuErrchk(cudaMalloc(&d_V, Ncols * Ncols * sizeof(double)));
double *d_S; gpuErrchk(cudaMalloc(&d_S, min(Nrows, Ncols) * sizeof(double)));
// --- CUDA SVD initialization
cusolveSafeCall(cusolverDnDgesvd_bufferSize(solver_handle, Nrows, Ncols, &work_size));
double *work; gpuErrchk(cudaMalloc(&work, work_size * sizeof(double)));
// --- CUDA SVD execution
cusolveSafeCall(cusolverDnDgesvd(solver_handle, 'A', 'A', Nrows, Ncols, d_A, Nrows, d_S, d_U, Nrows, d_V, Ncols, work, work_size, NULL, devInfo));
int devInfo_h = 0; gpuErrchk(cudaMemcpy(&devInfo_h, devInfo, sizeof(int), cudaMemcpyDeviceToHost));
if (devInfo_h != 0) std::cout << "Unsuccessful SVD execution\n\n";
// --- Moving the results from device to host
gpuErrchk(cudaMemcpy(h_S, d_S, min(Nrows, Ncols) * sizeof(double), cudaMemcpyDeviceToHost));
gpuErrchk(cudaMemcpy(h_U, d_U, Nrows * Nrows * sizeof(double), cudaMemcpyDeviceToHost));
gpuErrchk(cudaMemcpy(h_V, d_V, Ncols * Ncols * sizeof(double), cudaMemcpyDeviceToHost));
for(int i = 0; i < min(Nrows, Ncols); i++)
std::cout << "d_S["<<i<<"] = " << std::setprecision(15) << h_S[i] << std::endl;
printf("\n\n");
int count = 0;
bool flag = 0;
while (!flag) {
if (h_S[count] < threshold) flag = 1;
if (count == min(Nrows, Ncols)) flag = 1;
count++;
}
count--;
printf("The null space of A has dimension %i\n\n", min(Ncols, Nrows) - count);
for(int j = count; j < Ncols; j++) {
printf("Basis vector nr. %i\n", j - count);
for(int i = 0; i < Ncols; i++)
std::cout << "d_V["<<i<<"] = " << std::setprecision(15) << h_U[j*Ncols + i] << std::endl;
printf("\n");
}
cusolverDnDestroy(solver_handle);
return 0;
}
Utilities.cuh
#ifndef UTILITIES_CUH
#define UTILITIES_CUH
extern "C" int iDivUp(int, int);
extern "C" void gpuErrchk(cudaError_t);
extern "C" void cusolveSafeCall(cusolverStatus_t);
#endif
Utilities.cu
#include <stdio.h>
#include <assert.h>
#include "cuda_runtime.h"
#include <cuda.h>
#include <cusolverDn.h>
/*******************/
/* iDivUp FUNCTION */
/*******************/
extern "C" int iDivUp(int a, int b){ return ((a % b) != 0) ? (a / b + 1) : (a / b); }
/********************/
/* CUDA ERROR CHECK */
/********************/
// --- Credit to http://stackoverflow.com/questions/14038589/what-is-the-canonical-way-to-check-for-errors-using-the-cuda-runtime-api
void gpuAssert(cudaError_t code, char *file, int line, bool abort=true)
{
if (code != cudaSuccess)
{
fprintf(stderr,"GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line);
if (abort) { exit(code); }
}
}
extern "C" void gpuErrchk(cudaError_t ans) { gpuAssert((ans), __FILE__, __LINE__); }
/**************************/
/* CUSOLVE ERROR CHECKING */
/**************************/
static const char *_cudaGetErrorEnum(cusolverStatus_t error)
{
switch (error)
{
case CUSOLVER_STATUS_SUCCESS:
return "CUSOLVER_SUCCESS";
case CUSOLVER_STATUS_NOT_INITIALIZED:
return "CUSOLVER_STATUS_NOT_INITIALIZED";
case CUSOLVER_STATUS_ALLOC_FAILED:
return "CUSOLVER_STATUS_ALLOC_FAILED";
case CUSOLVER_STATUS_INVALID_VALUE:
return "CUSOLVER_STATUS_INVALID_VALUE";
case CUSOLVER_STATUS_ARCH_MISMATCH:
return "CUSOLVER_STATUS_ARCH_MISMATCH";
case CUSOLVER_STATUS_EXECUTION_FAILED:
return "CUSOLVER_STATUS_EXECUTION_FAILED";
case CUSOLVER_STATUS_INTERNAL_ERROR:
return "CUSOLVER_STATUS_INTERNAL_ERROR";
case CUSOLVER_STATUS_MATRIX_TYPE_NOT_SUPPORTED:
return "CUSOLVER_STATUS_MATRIX_TYPE_NOT_SUPPORTED";
}
return "<unknown>";
}
inline void __cusolveSafeCall(cusolverStatus_t err, const char *file, const int line)
{
    if (CUSOLVER_STATUS_SUCCESS != err) {
        fprintf(stderr, "CUSOLVE error in file '%s', line %d\nerror %d: %s\nterminating!\n",
                file, line, err, _cudaGetErrorEnum(err));
        cudaDeviceReset();
        assert(0);
    }
}
extern "C" void cusolveSafeCall(cusolverStatus_t err) { __cusolveSafeCall(err, __FILE__, __LINE__); }
I think the most important thing for CUDA is to find an algorithm that doesn't depend on conditional branching (which is quite slow on graphics hardware). Simple if statements that can be optimized into conditional assignment are much better (or you can use the ?: operator).
If necessary, you should be able to do some form of pivoting using conditional assignment. It might actually be harder to determine how to store your result: if your matrix is rank-deficient, what do you want your CUDA program to do about it?
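For instance, choosing between two candidate pivot rows without a branch can be as simple as the following illustrative fragment (m, r0, r1 and k are placeholder names, not from the question's code):
float a0 = fabsf(m[r0][k]), a1 = fabsf(m[r1][k]);  // magnitudes of the two candidate pivots
int pivot = (a1 > a0) ? r1 : r0;                   // usually compiles to a select rather than a branch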
If you assume your 4x3 matrix is not actually rank-deficient, you can find your (single) null-space vector without any conditionals at all: the matrix is small enough that you can use Cramer's rule efficiently.
Actually, since you don't actually care about the scale of your null vector, you don't have to divide by the determinant -- you can just take the determinants of the minors:
        [ x1 x2 x3 ]
    M = [ y1 y2 y3 ]
        [ z1 z2 z3 ]
        [ w1 w2 w3 ]

              | y1 y2 y3 |          | x1 x2 x3 |          | x1 x2 x3 |           | x1 x2 x3 |
    ->  x0 =  | z1 z2 z3 |   y0 = - | z1 z2 z3 |   z0 =   | y1 y2 y3 |   w0 = -  | y1 y2 y3 |
              | w1 w2 w3 |          | w1 w2 w3 |          | w1 w2 w3 |           | z1 z2 z3 |
Note that these 3x3 determinants are just triple products; you can save computation by reusing the cross products.
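A small illustrative sketch of this for one 4x3 matrix with rows x, y, z, w (all names are mine, not from the question's code; the same arithmetic drops straight into a per-thread CUDA routine):
#include <array>

using Vec3 = std::array<double, 3>;

static Vec3 cross(const Vec3 &a, const Vec3 &b) {
    return { a[1]*b[2] - a[2]*b[1],
             a[2]*b[0] - a[0]*b[2],
             a[0]*b[1] - a[1]*b[0] };
}

static double dot(const Vec3 &a, const Vec3 &b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// Returns (x0, y0, z0, w0) with x0*x + y0*y + z0*z + w0*w = 0 (unnormalized null vector).
static std::array<double, 4> nullvec4x3(const Vec3 &x, const Vec3 &y,
                                        const Vec3 &z, const Vec3 &w) {
    Vec3 zxw = cross(z, w);              // reused for the first two cofactors
    double x0 =  dot(y, zxw);            //  det[y; z; w]
    double y0 = -dot(x, zxw);            // -det[x; z; w]
    double z0 =  dot(x, cross(y, w));    //  det[x; y; w]
    double w0 = -dot(x, cross(y, z));    // -det[x; y; z]
    return { x0, y0, z0, w0 };
}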
"seems very expensive" - what data do you have that supports this?
Maybe Block Lanczos is the answer you seek.
Or maybe this.
Both JAMA and Apache Commons Math have SVD implementations in Java. Why not take those and try them out? Get some real data for your case instead of impressions. It won't cost you much, since the code is already written and tested.
I wondered if the matrices are related rather than just being random, so that the null spaces you are seeking can be considered to be like 1-dimensional tangents to a curve in N-space (N = 9). If so, you may be able to speed things up by using Newton's method to solve successive instances of the system of quadratic equations Ax = 0, |x|^2 = 1, starting from a previous null space vector. Newton's method uses first derivatives to converge to a solution, and so would use Gaussian elimination to solve 9x9 systems. Using this technique would require that you be able to make small steps from matrix to matrix by, say, varying a parameter.
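(One way to write that step concretely, in my formulation consistent with the description above: with F(x) = [A x ; x^T x - 1], the Jacobian is the 9x9 matrix J(x) = [A ; 2 x^T], and each Newton iteration solves J(x_k) d = -F(x_k) by Gaussian elimination and updates x_{k+1} = x_k + d.)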
So the idea is that you initialize using SVD on the first matrix, but thereafter you step from matrix to matrix, using the null space vector of one as the starting point for the iteration on the next one. You need one or two iterations to get convergence. If you don't get convergence, you use SVD to restart. If this situation is what you have, it is much faster than starting fresh on each matrix.
I used this a long time ago to map contours in the solutions of sets of 50 x 50 quadratic equations associated with the behavior of electric power systems.