LAPACK: Triangular system of equations with multiple right-hand sides

I have my beautiful triangular n x n matrix, say L (for lower triangular), and I want to solve a system like
LX=B
where B and X are n x k matrices (that is, I want to solve a triangular linear system with multiple right-hand sides). Additionally, I have my triangular matrix stored in PACKED FORMAT, i.e. I only store the lower triangular part. I am using BLAS and LAPACK, but I have realised that there is no routine that solves exactly my problem, although there are functions that solve similar ones:
stpsv(): Takes a triangular matrix in packed format and solves for a single right-hand side.
strsm(): Takes a triangular matrix in dense format and solves for multiple right-hand sides.
What I really need is a combination of both: a function that accepts the packed triangular format, as in stpsv(), and also accepts multiple right-hand sides, as in strsm(). But it seems that no such function is readily available.
So my questions are:
Is there any function that can accept a packed triangular matrix and solve for multiple right-hand sides?
If the answer is NO, which would be more efficient: calling stpsv() in a for loop for every column of B, or creating a dense matrix from L (with all those useless zeros in it) and then calling strsm()? Or maybe I am missing a cleverer way of doing all this.
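For concreteness, the loop version I have in mind would look roughly like this (just a sketch, assuming CBLAS, column-major storage, and a non-unit-diagonal lower triangle):

    #include <cblas.h>
    #include <cstddef>

    // Sketch: solve L * X = B column by column with the packed BLAS level-2
    // routine. Lp holds the packed lower triangle of L (column-major,
    // n*(n+1)/2 floats); B is n x k, column-major, and is overwritten with X.
    void packed_trsm_loop(int n, int k, const float *Lp, float *B)
    {
        for (int j = 0; j < k; ++j)
            cblas_stpsv(CblasColMajor, CblasLower, CblasNoTrans, CblasNonUnit,
                        n, Lp, B + (std::size_t)j * n, 1);
    }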

Packed storage implies BLAS level-2 routines. BLAS level-3 functions are more efficient at solving linear systems because they use optimized blocked algorithms, but they require full (dense) storage. If you fall back to level-2 calls you are essentially back to the column-by-column vectored version, so there is not much to gain.
Note that the level-2 routines also do not perform any conditioning checks; they are optimized purely for speed, since a triangular solve with a single RHS is a direct backward substitution.
For multiple RHS you can unpack your matrix via, say, stpttr() and then use strtrs().
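A minimal sketch of that conversion, assuming the LAPACKE C interface and column-major storage (error handling and names are simplified):

    #include <lapacke.h>
    #include <vector>
    #include <cstddef>

    // Sketch: unpack the packed lower-triangular L into a full n x n array with
    // stpttr(), then solve L * X = B for all k right-hand sides with strtrs().
    // Lp holds the packed triangle; B is n x k, column-major, overwritten with X.
    int packed_solve_multi_rhs(int n, int k, const float *Lp, float *B)
    {
        std::vector<float> Lfull((std::size_t)n * n, 0.0f);
        int info = LAPACKE_stpttr(LAPACK_COL_MAJOR, 'L', n, Lp, Lfull.data(), n);
        if (info != 0) return info;
        return LAPACKE_strtrs(LAPACK_COL_MAJOR, 'L', 'N', 'N',
                              n, k, Lfull.data(), n, B, n);
    }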

Yes, there is a function to solve AX=B for a packed triangular matrix A and a multiple right-hand-side matrix B: it is stptrs() from LAPACK. In addition, there are other routines for triangular packed matrices, all featuring tp in their name according to LAPACK's naming conventions.
However, looking at the source reveals that this function calls stpsv() from BLAS in a loop, once for each right-hand side. It's exactly what you suggested!
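For completeness, a call through the LAPACKE C interface would look roughly like this (a sketch for the lower-triangular, no-transpose, non-unit-diagonal, column-major case):

    #include <lapacke.h>

    // Sketch: stptrs() solves L * X = B for a packed triangular L and an
    // n x k right-hand-side matrix B (column-major; B is overwritten with X).
    int solve_with_stptrs(int n, int k, const float *Lp, float *B)
    {
        return LAPACKE_stptrs(LAPACK_COL_MAJOR, 'L', 'N', 'N', n, k, Lp, B, n);
    }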

Related

Matrix multiplication using Prolog arrays

It might not be evident, but Prolog also offers arrays out of the box. A Prolog compound has a functor and a number of arguments. This means we could represent an array such as:
[[1,2],[3,4]]
Replacing the Prolog lists by the following Prolog compounds:
matrice(vector(1,2), vector(3,4))
The advantage would be faster element access from an integer index. Can this representation be used to realize a matrix multiplication?
There is yet another approach, as implemented in R (the statistical environment). The dimensions of the array and the values are kept separately. So your square could also be represented as:
array(dims(2, 2), v(1,2,3,4))
This approach has some (questionable) benefits and drawbacks. You can start reading here, if you are at all interested: https://stat.ethz.ch/R-manual/R-devel/library/base/html/dim.html
To your question: yes, you can implement matrix multiplication regardless of how you decide to represent the matrix. It would be interesting to see how the two approaches (array of arrays vs. one array and calculating indexes from the dimensions) compare in terms of efficiency.
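The index arithmetic of the "dimensions kept separately" representation is the same in any language; purely for illustration, here is a sketch in C++ of the row-major lookup (the type and field names are made up):

    #include <vector>
    #include <cstddef>

    // Sketch of the "dims kept separately" representation: the 2x2 matrix
    // [[1,2],[3,4]] is stored as a flat vector plus its dimensions, and
    // element (i, j) is found by computing the index from the dimensions.
    struct Array2D {
        std::size_t rows, cols;
        std::vector<int> values;           // row-major: {1, 2, 3, 4}
        int at(std::size_t i, std::size_t j) const {
            return values[i * cols + j];   // index computed from the dims
        }
    };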
What algorithm do you want to use for the matrix multiplication? Is it any of the ones described here: https://en.wikipedia.org/wiki/Matrix_multiplication_algorithm?
EDIT: do you want to allow the client code to provide the product and sum operations? Do you want to allow specialization of the values? For example, if you want to use matrix multiplication to find the transitive closure of a graph, you could represent the boolean square matrix as an unbounded integer. That would at least keep the matrix itself quite small.

Placing a matrix in echelon form with the Eigen library

This post concerns very short, wide arrays (# columns can be several orders of magnitude larger than the number of rows).
Due to the disparity in row/column number and the large size of the matrices I work with, it's usually infeasible to hold the U part of an LU decomposition in memory. Does Eigen have functionality to compute just the L? Equivalently, to place the input matrix in echelon form using row operations?
General notes
(1) I saw a related question here
https://forum.kde.org/viewtopic.php?f=74&t=138686&p=371097&hilit=echelon#p371097
The answer suggested looking at the image() method under FullPivLU, but I wasn't able to find the necessary information in the docs. In particular, in practice it's often important to obtain the matrix L; an arbitrary basis for the column space of the matrix does not suffice.
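For reference, this is roughly what the suggested image() call looks like (a sketch; it returns some basis of the column space, not the L factor I'm after):

    #include <Eigen/Dense>

    int main()
    {
        // Sketch: a short, wide matrix (2 rows; in practice many more columns).
        Eigen::MatrixXd A(2, 5);
        A << 1, 2, 3, 4, 5,
             2, 4, 6, 8, 10;   // rank 1 on purpose

        Eigen::FullPivLU<Eigen::MatrixXd> lu(A);

        // image(A) returns some basis of the column space of A (here a single
        // column), which is not the same thing as the L factor of an
        // LU/echelon decomposition.
        Eigen::MatrixXd basis = lu.image(A);
        return 0;
    }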
(2) There was another question here
https://forum.kde.org/viewtopic.php?f=74&t=130430&p=348923&hilit=echelon#p348923
but it did not seem to get a response.
(3) Issues of stability are of less concern in the (fairly specialized) application domain that motivates this question, since we usually work over finite fields.
Thanks!

Evolving a matrix using a genetic algorithm

I recently discovered genetic algorithms, and after doing a little research I can't find any example of how to evolve structures more complex than a vector or a string.
Let's say that I'm using a covariance matrix for a certain computation (to compute a Mahalanobis distance, for example) and I want to look for a better matrix to do the job and minimize a certain criterion. Are there any classic examples of how to evolve the matrix, and which crossover operators should I use?
Thanks!
Any structure of fixed size and shape that is made of numbers (or any other elements) can be rewritten to a 1-D vector and back. You can then use any operator you like which works on vectors.
If you want to work with matrices (or any other structures) directly you can always design your own operators, but a matrix basically is a vector, just written in a different way. For the matrix case there are a number of possibilities for crossover operators:
Swap rows/columns (between the parents)
Swap submatrices (generalization of the above)
Continuous-space crossover methods like BLX-alpha, PCX, arithmetic crossover... These are all designed for vectors, but you can just treat the matrix as a vector (it's really not that different).
Mutation is probably going to be more or less identical to the vector case - you just mutate the elements (or some of them).
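Purely as an illustration, a sketch of the flattened representation and a row-swap crossover (the names and representation are arbitrary):

    #include <vector>
    #include <random>
    #include <cstddef>

    // A fixed-size matrix stored as a flat vector: any vector crossover or
    // mutation operator can be applied to `genes` directly.
    struct MatrixGenome {
        std::size_t rows, cols;
        std::vector<double> genes;                    // row-major, rows*cols values
        double &at(std::size_t i, std::size_t j) { return genes[i * cols + j]; }
    };

    // Row-swap crossover: the child takes each row from one parent or the other.
    MatrixGenome row_swap_crossover(const MatrixGenome &a, const MatrixGenome &b,
                                    std::mt19937 &rng)
    {
        MatrixGenome child = a;
        std::bernoulli_distribution coin(0.5);
        for (std::size_t i = 0; i < a.rows; ++i)
            if (coin(rng))
                for (std::size_t j = 0; j < a.cols; ++j)
                    child.at(i, j) = b.genes[i * a.cols + j];
        return child;
    }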

Is it possible to calculate the mathematical function of a 2D image?

The question basically says it all. Let's suppose I have an image, a photograph, and I wish to calculate its mathematical function, so that when I input x and y pixel values, it returns a vector consisting of the R,G,B values at that x,y point. I could then use a for loop to reconstruct the whole image from just that function. I am not asking for the whole solution or algorithm here, just whether this is possible and, if so, which direction I should take to go about doing it. References to relevant papers would be really nice.
Thanks
Azmuh
Yes, it is absolutely always possible. Basically, if you choose some points, there are always (infinitely many) smooth explicit functions (that is, nice functions) whose value at those points is exactly the one you chose.
For example, you can have a look at http://en.wikipedia.org/wiki/Lagrange_polynomial or http://en.wikipedia.org/wiki/Trigonometric_interpolation. They are two different methods to compute an explicit function that passes exactly through the data points you have. So you can apply those methods to your image, seen as a set of data points, separately for R, G, and B.
At the end, you get one explicit function (a polynomial or a trigonometric series, depending on what you chose), and you can compute its values wherever you want.
However, note that I would definitely not recommend using those methods to actually reproduce the data. The functions you get are not practical at all: they have a very high degree (for an n×m image, each colour channel gives a polynomial of degree nm-1) and very large coefficients, and furthermore they take extremely large values between your original points (look up Runge's phenomenon).
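To make the first reference concrete, here is a sketch of a 1-D Lagrange evaluation (for an image you would do this per colour channel over the 2-D grid, which is exactly where the huge degree and Runge's phenomenon bite):

    #include <vector>
    #include <cstddef>

    // Sketch: evaluate the Lagrange interpolating polynomial that passes
    // exactly through the points (xs[i], ys[i]) at an arbitrary x.
    double lagrange_eval(const std::vector<double> &xs,
                         const std::vector<double> &ys, double x)
    {
        double result = 0.0;
        for (std::size_t i = 0; i < xs.size(); ++i) {
            double term = ys[i];                    // basis polynomial scaled by y_i
            for (std::size_t j = 0; j < xs.size(); ++j)
                if (j != i)
                    term *= (x - xs[j]) / (xs[i] - xs[j]);
            result += term;
        }
        return result;
    }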
This is not possible in general... Imagine an image that has been generated by assigning random values to each pixel. You can't find a mathematical expression that will give you the value of a pixel given its 2D coordinates.
Now it may be possible for some images that have been generated using a function. In that case, it's not a problem specific to image processing; it's recovering a function from some of its points (in your case, you have all the points). It's exactly the same thing as fitting a curve to a set of points when you draw a graph in Excel. The more points you have, the more precise the function you find will be.
Look for information about regression analysis. I can't help you much more, but algorithms for this do exist.

Difference between a linear problem and a non-linear problem? Essence of Dot-Product and Kernel trick

The kernel trick maps a non-linear problem into a linear problem.
My questions are:
1. What is the main difference between a linear and a non-linear problem? What is the intuition behind the difference between these two classes of problems? And how does the kernel trick help us use linear classifiers on a non-linear problem?
2. Why is the dot product so important in the two cases?
Thanks.
When people say "linear problem" with respect to a classification problem, they usually mean a linearly separable problem. Linearly separable means that there is some function that can separate the two classes and that is a linear combination of the input variables. For example, if you have two input variables, x1 and x2, there are some numbers theta1 and theta2 such that the function theta1*x1 + theta2*x2 is sufficient to predict the output. In two dimensions this corresponds to a straight line, in 3D it becomes a plane, and in higher-dimensional spaces it becomes a hyperplane.
You can get some kind of intuition about these concepts by thinking about points and lines in 2D/3D. Here's a very contrived pair of examples...
This is a plot of a linearly inseparable problem. There is no straight line that can separate the red and blue points.
However, if we give each point an extra coordinate (specifically 1 - sqrt(x*x + y*y)... I told you it was contrived), then the problem becomes linearly separable since the red and blue points can be separated by a 2-dimensional plane going through z=0.
Hopefully, these examples demonstrate part of the idea behind the kernel trick:
Mapping a problem into a space with a larger number of dimensions makes it more likely that the problem will become linearly separable.
The second idea behind the kernel trick (and the reason why it is so tricky) is that it is usually very awkward and computationally expensive to work in a very high-dimensional space. However, if an algorithm only uses the dot products between points (which you can think of as distances), then you only have to work with a matrix of scalars. You can implicitly perform the calculations in the higher-dimensional space without ever actually having to do the mapping or handle the higher-dimensional data.
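In code, the contrived lift from the example above is just one extra coordinate (a sketch):

    #include <cmath>

    // Sketch of the contrived feature map from the example: points inside the
    // unit circle get z > 0, points outside get z < 0, so the plane z = 0
    // separates them even though no straight line in the (x, y) plane does.
    struct Point3 { double x, y, z; };

    Point3 lift(double x, double y)
    {
        return {x, y, 1.0 - std::sqrt(x * x + y * y)};
    }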
Many classifiers, among them the linear Support Vector Machine (SVM), can only solve problems that are linearly separable, i.e. where the points belonging to class 1 can be separated from the points belonging to class 2 by a hyperplane.
In many cases, a problem that is not linearly separable can be solved by applying a transform phi() to the data points; this transform is said to transform the points to feature space. The hope is that, in feature space, the points will be linearly separable. (Note: This is not the kernel trick yet... stay tuned.)
It can be shown that, the higher the dimension of the feature space, the greater the number of problems that are linearly separable in that space. Therefore, one would ideally want the feature space to be as high-dimensional as possible.
Unfortunately, as the dimension of feature space increases, so does the amount of computation required. This is where the kernel trick comes in. Many machine learning algorithms (among them the SVM) can be formulated in such a way that the only operation they perform on the data points is a scalar product between two data points. (I will denote a scalar product between x1 and x2 by <x1, x2>.)
If we transform our points to feature space, the scalar product now looks like this:
<phi(x1), phi(x2)>
The key insight is that there exists a class of functions called kernels that can be used to optimize the computation of this scalar product. A kernel is a function K(x1, x2) that has the property that
K(x1, x2) = <phi(x1), phi(x2)>
for some function phi(). In other words: We can evaluate the scalar product in the low-dimensional data space (where x1 and x2 "live") without having to transform to the high-dimensional feature space (where phi(x1) and phi(x2) "live") -- but we still get the benefits of transforming to the high-dimensional feature space. This is called the kernel trick.
Many popular kernels, such as the Gaussian kernel, actually correspond to a transform phi() that maps into an infinite-dimensional feature space. The kernel trick allows us to compute scalar products in this space without having to represent points in it explicitly (which, obviously, is impossible on computers with finite amounts of memory).
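As a concrete example, the Gaussian (RBF) kernel evaluates the feature-space scalar product directly from the low-dimensional points (a sketch; the exact bandwidth parameterization varies):

    #include <cmath>
    #include <vector>
    #include <cstddef>

    // Sketch: K(x1, x2) = exp(-||x1 - x2||^2 / (2 * sigma^2)) equals
    // <phi(x1), phi(x2)> for a phi() mapping into an infinite-dimensional
    // feature space, yet it only ever touches the original coordinates.
    double gaussian_kernel(const std::vector<double> &x1,
                           const std::vector<double> &x2, double sigma)
    {
        double sq_dist = 0.0;
        for (std::size_t i = 0; i < x1.size(); ++i) {
            double d = x1[i] - x2[i];
            sq_dist += d * d;
        }
        return std::exp(-sq_dist / (2.0 * sigma * sigma));
    }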
The main difference (for practical purposes) is: a linear problem either has a solution (and then it's easily found), or you get a definite answer that there is no solution at all. You know this much before you even look at the specific problem. As long as it's linear, you'll get an answer, and quickly.
The intuition behind this is the fact that if you have two straight lines in some space, it's pretty easy to see whether they intersect or not, and if they do, it's easy to know where.
If the problem is not linear -- well, it can be anything, and you know just about nothing.
The dot product of two vectors just means the following: The sum of the products of the corresponding elements. So if your problem is
c1 * x1 + c2 * x2 + c3 * x3 = 0
(where you usually know the coefficients c, and you're looking for the variables x), the left hand side is the dot product of the vectors (c1,c2,c3) and (x1,x2,x3).
The above equation is (pretty much) the very definition of a linear problem, so there's your connection between the dot product and linear problems.
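As a tiny illustration of that definition (a sketch):

    // Sketch: the left-hand side c1*x1 + c2*x2 + c3*x3 is the dot product
    // of (c1, c2, c3) and (x1, x2, x3).
    double dot3(const double c[3], const double x[3])
    {
        return c[0] * x[0] + c[1] * x[1] + c[2] * x[2];
    }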
Linear equations are homogeneous, and superposition applies. You can create solutions using combinations of other known solutions; this is one reason why Fourier transforms work so well. Non-linear equations are not homogeneous, and superposition does not apply. Non-linear equations usually have to be solved numerically using iterative, incremental techniques.
I'm not sure how to express the importance of the dot product, but it takes two vectors and returns a scalar. Certainly a solution to a scalar equation is less work than solving a vector or higher-order tensor equation, simply because there are fewer components to deal with.
My intuition in this matter is based more on physics, so I'm having a hard time translating to AI.
I think the following link is also useful:
http://www.simafore.com/blog/bid/113227/How-support-vector-machines-use-kernel-functions-to-classify-data
