Definition of the matrix-vector division operator in Julia

I stumbled upon something that I consider very strange.
As an example, consider the code
A = reshape(1:6, 3,2)
A/[1 1]
which gives
3×1 Array{Float64,2}:
2.5
3.5
4.5
As I understand it, in general such a division gives a weighted average of the columns, where each weight is inversely proportional to the corresponding element of the vector.
So my question is: why is it defined this way?
What is the mathematical justification of this definition?

It's the minimum-error solution v to ‖A - v*[1 1]‖₂, which, being overconstrained, has no exact solution in general (i.e. there is no value v for which the norm is exactly zero). The behavior of / and \ is heavily overloaded, solving both under- and overconstrained systems by a variety of techniques and heuristics. Whether this kind of overloading is a good idea or not is debatable, but it's what people have come to expect from these operations in Matlab and Octave, and it's often quite convenient to have so much functionality available in a single operator.
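To see this concretely, here is a small check (a sketch using only the standard LinearAlgebra library) that the result of A/[1 1] matches the explicit least-squares minimizer, which for this right-hand side is just the average of A's columns:

using LinearAlgebra

A = reshape(1:6, 3, 2)      # the 3×2 matrix from the question
v = A / [1 1]               # 3×1 result: 2.5, 3.5, 4.5

# Minimizing ‖A - v*[1 1]‖ row by row: for row i the objective is
# (A[i,1] - v[i])^2 + (A[i,2] - v[i])^2, minimized at the row mean.
v_ls = sum(A, dims=2) / 2

v ≈ v_ls                    # true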

Let A be an NxN matrix and b an Nx1 column vector. Then \ solves Ax=b, and / solves xA=b.
As Stefan mentions, this is extended to non-square cases: overdetermined systems get the least squares solution (and underdetermined ones a minimum-norm solution). This is done via the QR or SVD decompositions. See the details of these algorithms to see why this is the case. Hint: the linear form of the OLS estimator can actually be written as the solution of these matrix decompositions, so it's the same thing.
Now you might ask, how does it actually solve it? That's a complicated question. Essentially, it uses a matrix factorization. But which matrix factorization is used depends on the matrix type. The reason is that Gaussian elimination is O(n^3), so treating the problem fully generally is usually not a good idea. Whenever you can specialize, you can get speedups. So essentially \ (and /, which transposes and calls \) checks for a bunch of special types and picks a factorization or other algorithm (LU, QR, SVD, Cholesky, etc.) based on the matrix type. The flow chart from MATLAB explains this very well. There are a lot of details here, and it gets even more detailed when the matrix is sparse. IterativeSolvers.jl should also be mentioned, because it is another set of algorithms for solving Ax=b.
Most applied math problems reduce to linear algebra, with solving Ax=b being one of the most important and difficult problems, which is why there is a huge amount of research on the subject. In fact, you could say that the vast majority of the field of numerical linear algebra is devoted to finding fast methods for solving Ax=b for specific matrix types. \ essentially puts all of the direct (non-iterative) methods into one convenient operator.
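As a rough illustration of that dispatch (a sketch using only the standard LinearAlgebra library; the exact factorization chosen for each type can vary between Julia versions):

using LinearAlgebra

A = rand(100, 100)
b = rand(100)

factorize(A)                   # dense square general matrix: LU with pivoting
factorize(Symmetric(A' * A))   # symmetric matrix: a symmetric (Bunch-Kaufman) factorization
factorize(rand(100, 50))       # rectangular matrix: pivoted QR, used for least squares

# \ performs the same kind of inspection internally, so these agree:
x1 = A \ b
x2 = lu(A) \ b
x1 ≈ x2                        # true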

Related

Should one calculate QR decomposition before Least Squares to speed up the process?

I am reading the book "Introduction to Linear Algebra" by Gilbert Strang, in the section called "Orthonormal Bases and Gram-Schmidt". The author emphasises several times that with an orthonormal basis it is very easy and fast to calculate the Least Squares solution, since Qᵀ*Q = I, where Q is the design matrix with orthonormal columns. So the equation becomes x̂ = Qᵀb.
I got the impression that it is a good idea to always calculate a QR decomposition before applying Least Squares. But later I worked out the time complexity of the QR decomposition, and it turned out that computing the QR decomposition and then applying Least Squares is more expensive than the regular x̂ = inv(AᵀA)Aᵀb.
Is it right that there is no point in using the QR decomposition to speed up Least Squares? Or did I get something wrong?
So the only purpose of QR decomposition regarding Least Squares is numerical stability?
There are many ways to do least squares; typically these vary in applicability, accuracy and speed.
Perhaps the Rolls-Royce method is to use SVD. This can be used to solve under-determined (fewer obs than states) and singular systems (where A'*A is not invertible) and is very accurate. It is also the slowest.
QR can only be used to solve non-singular systems (that is we must have A'*A invertible, ie A must be of full rank), and though perhaps not as accurate as SVD is also a good deal faster.
The normal equations, i.e.
compute P = A'*A
solve P*x = A'*b
are the fastest (perhaps by a large margin if P can be computed efficiently, for example if A is sparse) but are also the least accurate. They too can only be used to solve non-singular systems.
Inaccuracy should not be taken lightly nor dismissed as some academic fanciness. If you happen to know that the problems you will be solving are nicely behaved, then it might well be fine to use an inaccurate method. But otherwise the inaccurate routine might well fail (i.e. say there is no solution when there is one, or, worse, come up with a totally bogus answer).
I'm a bit confused that you seem to be suggesting forming and solving the normal equations after performing the QR decomposition. The usual way to use QR in least squares is, if A is nObs x nStates:
decompose A as A = Q*[R; 0], where R is upper triangular
transform b into b~ = Q'*b
solve R*x = b# for x, where b# is the first nStates entries of b~
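A compact Julia sketch of those routes, with the thin QR playing the role of the A = Q*[R; 0] split above (the data here is made up for illustration):

using LinearAlgebra

nObs, nStates = 1000, 5
A = randn(nObs, nStates)
b = randn(nObs)

# Normal equations: fast, but the conditioning of A'*A is cond(A)^2.
x_ne = (A' * A) \ (A' * b)

# QR route: factor A, transform b, then back-substitute against R.
F = qr(A)
bt = F.Q' * b                    # b~ = Q'*b
x_qr = F.R \ bt[1:nStates]       # solve R*x = first nStates entries of b~

# SVD route (what pinv uses): slowest, but handles rank deficiency.
x_svd = pinv(A) * b

x_ne ≈ x_qr ≈ x_svd              # true for this well-conditioned A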

LAPACK library in Code::Blocks, boundary value problem

I am a student from Germany, currently working on my master's thesis. For the thesis, I am writing Fortran code in Code::Blocks, and in my code I am using some of the LAPACK functions.
I would like help with adding the LAPACK library to Code::Blocks. I searched a lot on the internet but couldn't find anything, so links to relevant sources would be appreciated. As an extension of the previous question:
In my code, I need to solve the following system of equations, {K}{p} = {m}
Where
{K} = Matrix
{p} = vector
{m} = vector
I have all the elements of the vector {m} and the matrix {K} computed, and I have some known values of the vector {p}. It's a boundary value problem.
Now I want to find only the unknown elements of the vector {p}.
Which function should I use?
I went through the LAPACK manual available online but couldn't find one.
Well, you did not go through the documentation thoroughly enough :) You are looking for solvers of linear systems.
http://www.netlib.org/lapack/lug/node38.html
Please specify a little more: complex or real, double or float? Ill-posed? Overdetermined, underdetermined or square? If square, then symmetric or upper triangular? Banded?
There is a horde of different algorithms one could consider. Runtime, stability and convergence behaviour differ greatly depending on K. The most stable would be [x]gelsd. It's a divide-and-conquer algorithm based on the SVD and, with proper conditioning, gives you the Moore-Penrose generalised inverse. But it is by far the slowest algorithm too.
By the way, http://www.netlib.org/lapack/lug/node27.html outlines all general solvers.
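To get a feel for what that driver computes, here is a sketch from Julia (which wraps the same LAPACK routines; K and m are made-up stand-ins for the real problem):

using LinearAlgebra

K = randn(8, 5)          # made-up overdetermined system
m = randn(8)

# Julia's \ uses a pivoted QR factorization for dense rectangular systems.
p_qr = K \ m

# The SVD-based divide-and-conquer driver mentioned above ([x]gelsd).
# It overwrites its arguments, so pass copies; rcond = -1.0 uses the
# default threshold for treating small singular values as zero.
p_svd, rank_K = LinearAlgebra.LAPACK.gelsd!(copy(K), copy(m), -1.0)

p_qr ≈ p_svd[1:size(K, 2)]    # true for a well-conditioned K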
If you already have some values of your p, you would like to go a different route than a straightforward inversion. You are a lot better off using an iterative method such as conjugate gradient least squares (CGLS) with regularization. This is discussed at length in Stoer, Bulirsch, Introduction to Numerical Analysis, Chapter 8.7 (The Conjugate-Gradient Method of Hestenes and Stiefel).
There are multiple implementations online. One you will find in a library I wrote during my PhD: https://github.com/kvahed/codeare/blob/master/src/optimisation/CGLS.hpp
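For reference, a minimal CGLS sketch in Julia (no regularization, dense K for clarity; the linked C++ implementation adds more machinery):

using LinearAlgebra

# CGLS: iteratively solves min ‖K*p - m‖₂ using only products with K and K',
# never forming K'*K explicitly.
function cgls(K, m; maxiter = 500, tol = 1e-10)
    p = zeros(size(K, 2))
    r = copy(m)                   # residual m - K*p (p starts at zero)
    s = K' * r                    # gradient direction
    q = copy(s)                   # search direction
    γ = dot(s, s)
    for _ in 1:maxiter
        t = K * q
        α = γ / dot(t, t)
        p .+= α .* q
        r .-= α .* t
        s = K' * r
        γnew = dot(s, s)
        sqrt(γnew) < tol && break
        q = s .+ (γnew / γ) .* q
        γ = γnew
    end
    return p
end

K = randn(50, 10); m = randn(50)
cgls(K, m) ≈ K \ m                # agrees with the direct least-squares solve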

Why Gauss-Seidel uses less memory than Gauss elimination

I am studying numerical methods from Steven C. Chapra's book. The book says "Gauss-Seidel uses less memory than Gauss elimination because it does not store the '0' values in the matrix"; however, the algorithm written in the book handles the same matrix as Gauss elimination. I don't understand how Gauss-Seidel uses less memory. I searched this issue on the internet; people say the same thing, but nobody explains how.
Note: I can share the algorithm from the book, if it won't be a copyright problem.
The Gauss elimination method has to store the zeros while computing. This is because, in the course of eliminating the lower triangular part, zero entries can become non-zero values (fill-in). On the other hand, the Gauss-Seidel method, if written to handle sparse matrices, can operate on only the non-zero values.
Put simply, the Gauss-Seidel method works on one equation at a time, solving for the i-th variable (whose coefficient must be non-zero), so it can easily skip the terms with zero coefficients.
Gauss elimination works on the complete matrix, making all the coefficients below the i-th pivot zero, but in the process the coefficients in the upper triangular part are changed. I think there is no easy way of writing the Gauss elimination method for sparse matrices.
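A small sketch of the point in Julia (using SparseArrays; each Gauss-Seidel sweep only reads the stored non-zeros of a row, whereas elimination on the same matrix generates fill-in):

using LinearAlgebra, SparseArrays

# Gauss-Seidel sweeps over a sparse matrix: for each row we only
# touch the structurally non-zero entries.
function gauss_seidel!(x, A::SparseMatrixCSC, b; sweeps = 100)
    At = sparse(A')                     # CSC stores columns, so iterate rows via the transpose
    rows = rowvals(At); vals = nonzeros(At)
    for _ in 1:sweeps
        for i in 1:length(b)
            s = 0.0; aii = 0.0
            for k in nzrange(At, i)     # non-zeros of row i only
                j = rows[k]; v = vals[k]
                if j == i
                    aii = v
                else
                    s += v * x[j]
                end
            end
            x[i] = (b[i] - s) / aii
        end
    end
    return x
end

# Diagonally dominant test problem, for which Gauss-Seidel converges.
n = 100
A = spdiagm(-1 => fill(-1.0, n - 1), 0 => fill(4.0, n), 1 => fill(-1.0, n - 1))
b = rand(n)
x = gauss_seidel!(zeros(n), A, b)
norm(A * x - b)                         # small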

Accurate least-squares fit algorithm needed

I've experimented with the two ways of implementing a least-squares fit (LSF) algorithm shown here.
The first code is simply the textbook approach, as described by Wolfram's page on LSF. The second code re-arranges the equation to minimize machine errors. Both codes produce similar results for my data. I compared these results with Matlab's p=polyfit(x,y,1) function, using correlation coefficients to measure the "goodness" of fit and compare each of the 3 routines. I observed that while all 3 methods produced good results, at least for my data, Matlab's routine had the best fit (the other 2 routines had similar results to each other).
Matlab's p=polyfit(x,y,1) function uses a Vandermonde matrix, V (n x 2 matrix) and QR factorization to solve the least-squares problem. In Matlab code, it looks like:
V = [x(:) ones(numel(x),1)];   % n-by-2 Vandermonde matrix [x1 1; x2 1; ... ; xn 1]
[Q,R] = qr(V,0);
p = R\(Q'*y); % performs same as p = V\y
I'm not a mathematician, so I don't understand why it would be more accurate. Although the difference is slight, in my case I need to obtain the slope from the LSF and multiply it by a large number, so any improvement in accuracy shows up in my results.
For reasons I can't get into, I cannot use Matlab's routine in my work. So, I'm wondering if anyone has a more accurate equation-based approach recommendation I could use that is an improvement over the above two approaches, in terms of rounding errors/machine accuracy/etc.
Any comments appreciated! thanks in advance.
For polynomial fitting, you can create a Vandermonde matrix and solve the linear system, as you have already done.
Another solution is to use a method like Gauss-Newton to fit the data (since the system is linear, one iteration should do). There are differences between the methods; one possible reason is Runge's phenomenon.
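Since the MATLAB routine itself is off limits, here is the same QR-based route sketched in Julia (any environment with a QR factorization will do; the data below is made up):

using LinearAlgebra

x = collect(1.0:100.0)
y = 3.0 .* x .+ 2.0 .+ 0.01 .* randn(100)     # made-up noisy line

V = [x ones(length(x))]       # n-by-2 Vandermonde matrix, as in the MATLAB code

# Textbook/normal-equations route: effectively squares the condition number of V.
p_ne = (V' * V) \ (V' * y)

# QR route, the same as p = R\(Q'*y) above.
F = qr(V)
p_qr = F.R \ (F.Q' * y)[1:2]  # p_qr[1] is the slope, p_qr[2] the intercept

# In Julia, V \ y performs the pivoted-QR equivalent in a single call.
p_ne ≈ p_qr                   # true here; the QR route degrades more gracefully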

Large-scale pseudoinverse

I would like to compute the Moore–Penrose pseudoinverse of an enormous matrix. Ideally, I would like to do it on a matrix that has 23 million rows and 1000 columns, but if necessary I can reduce the number of rows to 4 million by only running on one part of my experiment.
Obviously, loading the matrix into memory and running SVD on it is not going to work. Wikipedia points to Krylov subspace methods and mentions the Arnoldi, Lanczos, Conjugate gradient, GMRES (generalized minimum residual), BiCGSTAB (biconjugate gradient stabilized), QMR (quasi-minimal residual), TFQMR (transpose-free QMR), and MINRES (minimal residual) methods as being among the best Krylov subspace methods. But I don't know where to go from here. Is computing the pseudoinverse of such a huge matrix even feasible? If so, using which algorithms or software libraries? I have a large computing cluster available, so parallel approaches are welcome.
This answer points to the R package biglm. Would that work? Has anyone used it? I normally work in Python, but don't mind using other languages and tools for this particular task.
You might be better off using a block iterative algorithm that converges directly to the least squares solution, rather than computing the least squares solution through the pseudoinverse. See "Applied Iterative Methods" by Charlie Byrne. These algorithms are closely related to the Krylov subspace methods, but are tuned for easy computation. You can get an introduction by looking at chapter 3 of this preprint of another of his books.
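If the end goal is really the least-squares solves A⁺b rather than the pseudoinverse as an explicit matrix, a Krylov-type solver along these lines may be enough. A sketch assuming the IterativeSolvers.jl package mentioned earlier, with the sizes scaled down and the data made up:

using LinearAlgebra, SparseArrays
using IterativeSolvers              # provides lsqr

# Stand-in for the 23-million-row problem: a tall, sparse design matrix.
A = sprandn(100_000, 1000, 0.01)
b = randn(100_000)

# lsqr converges to the minimum-norm least-squares solution pinv(A)*b
# without ever forming pinv(A); it only needs A*v and A'*u products,
# so A can also be an out-of-core or distributed operator.
x = lsqr(A, b; maxiter = 200)

norm(A' * (A * x - b))              # small once it has converged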
