Non-linear correlation formula

There are formulas that measure the linear (or monotonic) correlation between two random variables, for example the Pearson or Spearman correlation. My question: is there any formula that can measure the non-linear correlation between two random variables?

While searching, I found that the mutual information between two random variables can reveal non-linear relationships between them, as described here.
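As a quick illustration of the idea, here is a minimal histogram-based (plug-in) estimate of mutual information, assuming NumPy is available. The `mutual_information` helper and its bin count are my own illustrative choices, not from the question; a deterministic but non-linear relation like y = x² has near-zero Pearson correlation yet clearly positive mutual information:

```python
import numpy as np

def mutual_information(x, y, bins=20):
    """Plug-in estimate of I(X; Y) in nats from a 2-D histogram.

    Accuracy depends on the bin count and sample size; this is a
    sketch, not a production-quality estimator."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()              # joint probabilities
    px = pxy.sum(axis=1, keepdims=True)    # marginal of X
    py = pxy.sum(axis=0, keepdims=True)    # marginal of Y
    nz = pxy > 0                           # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 10_000)
y = x ** 2                                 # purely non-linear dependence

print(np.corrcoef(x, y)[0, 1])             # near 0: Pearson misses it
print(mutual_information(x, y))            # clearly positive
```

Note that, unlike Pearson's r, mutual information is not normalized to [-1, 1]; variants such as the maximal information coefficient exist if a bounded score is needed.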

Related

Find Independent Vectors (High Performance)

I'm in desperate need of a high-performance algorithm to reduce a matrix to its independent vectors (row echelon form), i.e. to find the basis vectors. I've looked at the Bareiss algorithm and row reduction, but they are all too slow. If anyone could recommend a faster implementation I'd be grateful! Happy to use TBB parallelisation.
Thanks!
What are you trying to do with the reduced echelon form? Do you just need the basis vectors, or are you trying to solve a system of equations? If you're solving a system of equations, you can do an LU factorization and probably get faster calculation times. Otherwise, Gaussian elimination with partial pivoting is your fastest option.
Also, do you know if your matrix has a special form, like upper or lower triangular? If so, you can rewrite some of these algorithms to be faster based on the type of matrix that you have.
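The payoff of the LU route is that you factor once and reuse the factors for every right-hand side. A minimal sketch (assuming SciPy is available; the question itself is about C++ with TBB, so treat this as a language-neutral illustration of the idea):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(1)
A = rng.standard_normal((500, 500))

# Factor once: O(n^3) work, done a single time.
lu, piv = lu_factor(A)

# Each subsequent solve reuses the factors and costs only O(n^2).
b = rng.standard_normal(500)
x = lu_solve((lu, piv), b)

print(np.allclose(A @ x, b))   # residual check
```

In C++, Eigen's `PartialPivLU` plays the same role, and its dense kernels are already heavily optimized, so it is worth benchmarking against any hand-rolled TBB version.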

Precise matrix inversion in Q

Given an invertible matrix M over the rationals Q, the inverse matrix M^(-1) is again a matrix over Q. Are there (efficient) libraries to compute the inverse precisely?
I am aware of high-performance linear algebra libraries such as BLAS/LAPACK, but these libraries are based on floating point arithmetic and are thus not suitable for computing precise (analytical) solutions.
Motivation: I want to compute the absorption probabilities of a large absorbing Markov chain using its fundamental matrix. I would like to do so precisely.
Details: By large, I mean a 1000x1000 matrix in the best case, and a several million dimensional matrix in the worst case. The further I can scale things the better. (I realize that the worst case is likely far out of reach.)
You can use the Eigen matrix library, which with little effort works on arbitrary scalar types. There is an example in the documentation of how to use it with GMP's mpq_class: http://eigen.tuxfamily.org/dox/TopicCustomizing_CustomScalar.html
Of course, as #btilly noted, most of the time you should not compute the inverse itself but a matrix decomposition, and use that to solve the equation systems. For rational numbers you can use any LU decomposition, or, if the matrix is symmetric, the LDLt decomposition. See here for a catalogue of decompositions.

Do linear algebra packages accept matrix elements that are infinite or other number-like input?

Background
I'm doing research about stability analysis of some dynamical system. In the literature, others have used analytical methods to query the signs of the eigenvalues of the linearized system matrix. My approach is to use a numerical eigenvalue solver. In some cases, input to the stability analysis consists of some coefficients that become infinite. In the analytical approach, this is tackled by taking the limit of the resulting stability criteria to infinity. "Taking the limit" is however not possible in a numerical approach, so I have reformulated the problem to avoid infinite coefficients in my numerical implementation.
Question
Now my question should be clear. Would a linear algebra package accept infinite coefficients? My direct application only needs eigenvalue solvers, but I don't want to narrow things down to that. Any answer regarding infinite coefficients as input to linear algebra algorithms (matrix solve, eigenvalue problem, singular value decomposition, LU, etc.) is welcome.
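For at least one mainstream package the answer can be checked directly: NumPy's LAPACK-backed eigenvalue routines validate their input and refuse non-finite entries outright, which supports the reformulation approach described above. A minimal demonstration:

```python
import numpy as np

A = np.array([[1.0, np.inf],
              [0.0, 2.0]])

# NumPy checks for finiteness before calling LAPACK and raises
# LinAlgError if the matrix contains inf or NaN.
try:
    np.linalg.eigvals(A)
    rejected = False
except np.linalg.LinAlgError:
    rejected = True

print("matrix with inf rejected:", rejected)
```

Packages that skip the check typically propagate inf into NaN rather than computing a meaningful limit, so reformulating (e.g. working with reciprocals of the diverging coefficients so the limit becomes 0) remains the robust approach.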

Inverting a sparse Matrix

I have a sparse, square, symmetric matrix with the following structure:
(Let's say the size of the matrix is N x N)
Here, the area under the blue stripes is the non-zero elements. Could someone tell me if there is an algorithm to invert this kind of matrix that is simple yet more efficient than Gaussian elimination and LU decomposition? Thank you in advance.
Cholesky factorization is faster (roughly half the cost of LU) if the matrix is also positive definite, and for a banded matrix a band solver runs in O(n·b²), where b is the bandwidth. There are also specialized multi-band solvers, if you know the number of non-zero off-diagonals.
You can also apply iterative methods, maybe with preconditioning, it depends on your purpose.
There are a lot of sparse solvers. This can easily be solved using libeigen. What solver you choose is really going to depend on the properties of the sparse matrix besides the structure. Hope this helps.
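To make the libeigen suggestion concrete without the C++ boilerplate, here is the same workflow sketched with SciPy's sparse module (an assumption on my part that Python is acceptable for illustration): build the matrix from its diagonals so only the non-zeros are stored, then let a sparse direct solver exploit the band structure.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n = 1000
# Illustrative symmetric banded matrix: main diagonal plus
# off-diagonals at offsets +-1 and +-5 (stand-ins for the blue stripes).
main = 4.0 * np.ones(n)
off1 = -1.0 * np.ones(n - 1)
off5 = -0.5 * np.ones(n - 5)
A = sp.diags([off5, off1, main, off1, off5],
             [-5, -1, 0, 1, 5], format="csc")

b = np.ones(n)
x = spsolve(A, b)          # sparse LU; fill-in stays within the band

print(np.allclose(A @ x, b))
```

As with the dense case, if you need the action of the inverse on many vectors, factor once with `scipy.sparse.linalg.splu` and reuse the factorization rather than ever forming the (generally dense) explicit inverse.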

Find a bijection that best preserves distances

I have two spaces (not necessarily equal in dimension) with N points.
I am trying to find a bijection (pairing) of the points, such that the distances are preserved as well as possible.
I can't seem to find a discussion of possible solutions or algorithms to this question online. Can anyone suggest keywords that I could search for? Does this problem have a name, or does it come up in any domain?
I believe you are looking for a Multidimensional Scaling algorithm where you are minimizing the total change in distance. Unfortunately, I have very little experience in this area and can't be of much more help.
I haven't heard of the exact same problem. There are two similar types of problems:
Non-linear dimensionality reduction: you're given N high-dimensional points and you want to find N low-dimensional points that preserve distances as well as possible. MDS, mentioned by Michael Koval, is one such method.
This might be more promising: algorithms for the assignment problem, for example Kuhn-Munkres (the Hungarian algorithm). You're given an N×N matrix that encodes the cost of matching point i in the first space with point j in the second, and you want to find the minimum-cost bijection. There are many generalizations of this problem, for example b-matching (Kuhn-Munkres solves 1-matching).
Depending on how you define "preserves distances as well as possible", I think you want either (2) or a generalization of (2) in which the cost depends not only on the two points being matched but also on the assignment of all the other points.
Finally, Kuhn-Munkres comes up everywhere in operations research.
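A sketch of route (2), assuming SciPy is available: `scipy.optimize.linear_sum_assignment` implements the Hungarian-style solver. The cost matrix below, comparing each point's sorted profile of within-space distances, is a heuristic of my own choosing to linearize the objective; the exact version, where cost depends on all other assignments, is the (NP-hard) quadratic assignment problem.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
N = 50
P = rng.standard_normal((N, 3))    # N points in a 3-D space
Q = rng.standard_normal((N, 2))    # N points in a 2-D space

DP = squareform(pdist(P))          # within-space pairwise distances
DQ = squareform(pdist(Q))

# Heuristic cost: how different are the sorted distance profiles of
# point i (in P) and point j (in Q)?
cost = np.abs(np.sort(DP, axis=1)[:, None, :] -
              np.sort(DQ, axis=1)[None, :, :]).sum(axis=2)

row, col = linear_sum_assignment(cost)   # minimum-cost bijection
print("total profile distortion:", cost[row, col].sum())
```

Searching for "quadratic assignment problem" and "Gromov-Wasserstein distance" should also surface literature on exactly this kind of distance-preserving matching between two metric spaces.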