Matrix-valued undefined functions in SymPy

I'm looking for a way to specify matrix quantities that depend on a variable. For scalars this works as follows, using undefined functions:
from sympy import *
t = symbols('t')
x = Function('f')(t)
diff(x, t)
For Matrix Symbols like
x = MatrixSymbol('x',3,3)
I cannot find an equivalent. There is
i, j = symbols('i j')
x = FunctionMatrix(6, 1, Lambda((i, j), f))
but this is not what I need, as it requires specifying the contents of the matrix. The context is that I have equations
that should be differentiated with respect to time and contain matrix-valued elements.
I cannot deal with the elements of the matrices one by one.
Thanks!

I'm not sure exactly what you want, but I think you want to make a Matrix with differentiable elements. In that case, see if this works for you.
Create a matrix with function elements:
import sympy as sym
t = sym.symbols('t')
X = sym.FunctionMatrix(6, 1, lambda i, j: sym.Function("x_%d%d" % (i, j))(t))
M = sym.Matrix(X)
M.diff(t)
This results in
Matrix([
[Derivative(x_00(t), t)],
[Derivative(x_10(t), t)],
[Derivative(x_20(t), t)],
[Derivative(x_30(t), t)],
[Derivative(x_40(t), t)],
[Derivative(x_50(t), t)]])
You may then replace stuff as you need.
Also, it may be preferable to populate the matrix with the expressions you need before differentiating. Leaving them as undefined functions may make it harder for you to simplify after substitution.
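For instance, here is a minimal sketch of that workflow, assuming the same setup as above; the concrete expressions sin(t) and t**2 are made up and only stand in for whatever your model actually provides:
import sympy as sym

t = sym.symbols('t')
# The same 6x1 vector of undefined functions, built directly with the Matrix constructor.
M = sym.Matrix(6, 1, lambda i, j: sym.Function("x_%d%d" % (i, j))(t))
# Populate the first two entries with concrete time dependencies, then differentiate.
M_known = M.subs({M[0]: sym.sin(t), M[1]: t**2})
print(M_known.diff(t))
# The populated entries differentiate explicitly (cos(t) and 2*t);
# the remaining ones stay as Derivative(x_i0(t), t).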

Related

How do I calculate inner product of two vectors in nalgebra?

From the following
let v = OVector::<f64, U2>::from_column_slice(&[3_f64, 4_f64]);
let x = &v.transpose() * &v; // get the inner product, i.e. <v,v>
I expected x to be a f64 scalar, i.e. x = 25.0.
But actually, I can only obtain x as OMatrix::<f64, Const<1>, Const<1>>.
Things can be even worse in matrix product operations. For example, the following code doesn't compile, since v^T v is not treated as a scalar.
let m = OMatrix::<f64, U2, U2>::from_element(1.0);
let v = OVector::<f64, U2>::from_column_slice(&[3_f64, 4_f64]);
// not working
let y = &v.transpose() * &v * m; // types conflict
// working
let y = 25.0 * m; // expected to behave like this
What is the correct way to do this?
Usually, in maths, you would identify 1x1 matrices with scalars (because, for some definition of being equivalent, they are equivalent...). When doing this, the dot product of two vectors is exactly the dot product between two matrices, when we see vectors as matrix columns (which are also equivalent for some equivalence...).
However, that is not the case here: Rust has to know the type of the data. So, since you are using matrices to start with, I would suggest using the actual matrix dot product, not the vector one. It's simply (v.transpose()*v).trace(). This is a more general dot product, but taking the trace exactly "extracts" the scalar from the 1x1 matrix.
Otherwise, this operation is already defined as the vector dot product (unsurprisingly): v.dot(&v).

Computing a single element of the adjugate or inverse of a symbolic binary matrix

I'm trying to get a single element of an adjugate A_adj of a matrix A, both of which need to be symbolic expressions, where the symbols x_i are binary and the matrix A is symmetric and sparse. Python's sympy works great for small problems:
from sympy import zeros, symbols
size = 4
A = zeros(size,size)
x_i = [x for x in symbols(f'x0:{size}')]
for i in range(size-1):
    A[i,i] += 0.5*x_i[i]
    A[i+1,i+1] += 0.5*x_i[i]
    A[i,i+1] = A[i+1,i] = -0.3*(i+1)*x_i[i]
A_adj_0 = A[1:,1:].det()
A_adj_0
This calculates the first element A_adj_0 of the cofactor matrix (which is the corresponding minor) and correctly gives me 0.125x_0x_1x_2 - 0.28x_0x_2^2 - 0.055x_1^2x_2 - 0.28x_1x_2^2, which is the expression I need, but there are two issues:
1. This is completely unfeasible for larger matrices (I need this for sizes of ~100).
2. The x_i are binary variables (i.e. either 0 or 1) and there seems to be no way for sympy to simplify expressions of binary variables, i.e. simplifying polynomials x_i^n = x_i.
The first issue can be partly addressed by instead solving a linear equation system Ay = b, where b is set to the first basis vector [1, 0, 0, 0], such that y is the first column of the inverse of A. The first entry of y is the first element of the inverse of A:
b = zeros(size,1)
b[0] = 1
y = A.LUsolve(b)
s = {x_i[i]: 1 for i in range(size)}
print(y[0].subs(s) * A.subs(s).det())
print(A_adj_0.subs(s))
The problem here is that the expression for the first element of y is extremely complicated, even after using simplify() and so on. It would be a very simple expression if the binary simplification mentioned in point 2 above were applied. This method is faster, but still unfeasible for larger matrices.
This boils down to my actual question:
Is there an efficient way to compute a single element of the adjugate of a sparse and symmetric symbolic matrix, where the symbols are binary values?
I'm open to using other software as well.
Addendum 1:
It seems simplifying binary expressions in sympy is possible with a simple custom substitution which I wasn't aware of:
A_subs = A_adj_0
for i in range(size):
    A_subs = A_subs.subs(x_i[i]*x_i[i], x_i[i])
A_subs
You should make sure to use Rational rather than floats in sympy, so S(1)/2 or Rational(1, 2) rather than 0.5.
There is a new (undocumented and, for the moment, internal) implementation of matrices in sympy called DomainMatrix. It always produces polynomial results in fully expanded form and is likely to be a lot faster for a problem like this. It still seems fairly slow here, though, because it is not sparse internally (yet; that will probably change in the next release) and it does not take advantage of the simplification from the symbols being binary-valued. It can be made to work over GF(2), but not with symbols that are assumed to be in GF(2), which is something different.
In case it is helpful, though, this is how you would use it in sympy 1.7.1:
from sympy import zeros, symbols, Rational
from sympy.polys.domainmatrix import DomainMatrix
size = 10
A = zeros(size,size)
x_i = [x for x in symbols(f'x0:{size}')]
for i in range(size-1):
    A[i,i] += Rational(1, 2)*x_i[i]
    A[i+1,i+1] += Rational(1, 2)*x_i[i]
    A[i,i+1] = A[i+1,i] = -Rational(3, 10)*(i+1)*x_i[i]
# Convert to DomainMatrix:
dM = DomainMatrix.from_list_sympy(size-1, size-1, A[1:, 1:].tolist())
# Compute determinant and convert back to normal sympy expression:
# Could also use dM.det().as_expr() although it might be slower
A_adj_0 = dM.charpoly()[-1].as_expr()
# Reduce powers:
A_adj_0 = A_adj_0.replace(lambda e: e.is_Pow, lambda e: e.args[0])
print(A_adj_0)
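If other Pow nodes could ever appear in the expression, a more targeted variant of the power reduction above (a sketch, reusing dM and x_i from the code above) only collapses powers whose base is one of the binary symbols:
# Restrict the x**n -> x reduction to the binary symbols x_i.
binary = set(x_i)
A_adj_0 = dM.charpoly()[-1].as_expr().replace(
    lambda e: e.is_Pow and e.base in binary,
    lambda e: e.base,
)
print(A_adj_0)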

sympy matrix to explicit sum and back (to matrix notation)

I am working in sympy with symbolic matrices.
Once made explicit, I cannot return to the implicit representation.
I tried to work something out with the pair .as_explicit() and MatrixExpr.from_index_summation(expr),
but the latter seems to expect an explicit sigma-notation sum, not a sum of indexed elements.
As a minimal working example here is my approach on matrix multiplication:
from sympy import MatrixSymbol, MatrixExpr
A = MatrixSymbol('A',3,4)
B = MatrixSymbol('B',4,3)
Matrix_Notation = A * B
Expanded = (A * B).as_explicit()
FromSummation = MatrixExpr.from_index_summation(Expanded)
Here we can see that FromSummation is still the same as Expanded.
I suppose that the Expanded expression should be converted to sigma sums such that .from_index_summation can be expected to work. But how can this be done?
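For reference, the kind of input from_index_summation is designed for looks like the following (a minimal sketch adapted from the pattern in the SymPy docstring): a Sum over indexed entries, which it turns back into matrix notation.
from sympy import MatrixSymbol, MatrixExpr, Sum
from sympy.abc import i, j, k

A = MatrixSymbol('A', 3, 4)
B = MatrixSymbol('B', 4, 3)
# A sigma-notation sum over the contracted index k ...
expr = Sum(A[i, k]*B[k, j], (k, 0, 3))
# ... should be recognized and converted back to matrix notation, i.e. A*B.
print(MatrixExpr.from_index_summation(expr))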

flatten a BlockMatrix into a Matrix in Sympy

Sympy has a BlockMatrix class, but it is not a regular Matrix;
e.g. you cannot matrix-multiply a BlockMatrix.
BlockMatrix is a convenient way to build a structured matrix, but I do not see a way to use it together with unstructured matrices.
Is there a way to flatten a BlockMatrix, or another convenient way to build a regular Matrix from blocks, similar to numpy.block?
You can use the method as_explicit() to get a flat explicit matrix, like this:
from sympy import *
n = 3
X = Identity(n)
Y = Identity(n)
Z = Identity(n)
W = Identity(n)
R = BlockMatrix([[X,Y],[Z,W]])
print (R.as_explicit())
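If you need a mutable regular Matrix rather than the immutable explicit form, here is a small sketch following on from the answer above:
from sympy import Matrix, BlockMatrix, Identity

n = 3
X, Y, Z, W = Identity(n), Identity(n), Identity(n), Identity(n)
R = BlockMatrix([[X, Y], [Z, W]])
# Flatten the block structure into a regular 6x6 Matrix ...
M = Matrix(R.as_explicit())
# ... which now supports ordinary matrix operations such as multiplication.
print(M * M)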

sympy nsolve with MatrixSymbol

I'd like to numerically solve an equation involving a MatrixSymbol. Here's a basic example:
import sympy as sy
v = sy.MatrixSymbol('v', 2, 1)
equation = (v - sy.Matrix([17, 23])).as_explicit()
I'd like something like:
sy.nsolve(equation, v, sy.Matrix([0,0]))
But because nsolve does not accept MatrixSymbols, I've made a kludgy workaround that gives the correct output of Matrix([[17.0], [23.0]]):
vx, vy = sy.symbols('v_x v_y')
sy.nsolve(equation.subs(v, sy.Matrix([vx, vy])), [vx, vy], [0,0])
Essentially, I've converted a MatrixSymbol to a matrix of Symbols to make nsolve happy.
Is there a better way I should be doing this?
Edit: the workaround can be simplified to:
vseq = sy.symbols('a b') #names must be distinct
sy.nsolve(equation.subs(v, sy.Matrix(vseq)), vseq, [0,0])
But there ought to be a cleaner way to convert a MatrixSymbol to a sequence of Symbols, or a way to avoid needing to do so in the first place.
A cleaner way is to create a Matrix from symarray:
v = sy.Matrix(sy.symarray("v", (2,)))
equation = v - sy.Matrix([17, 23])
sy.nsolve(equation, v, [0, 0])
Here, symarray creates a (NumPy) array of symbols [v_0, v_1], which is then turned into a Matrix. One could also use sy.symarray("v", (2, 1)) to get a 2-D array, but since SymPy's Matrix constructor is cool with 1-D inputs, this is not necessary.
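As a quick check that this approach is not limited to the linear example, here is a sketch with a made-up nonlinear system in the two components:
import sympy as sy

v = sy.Matrix(sy.symarray("v", (2,)))
# Hypothetical system v_0**2 + v_1 = 7, v_0 + v_1**2 = 11, for illustration only;
# one exact solution is (2, 3).
equation = sy.Matrix([v[0]**2 + v[1] - 7, v[0] + v[1]**2 - 11])
print(sy.nsolve(equation, v, [1, 1]))  # should converge to Matrix([[2.0], [3.0]])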
