In Maxima, how is it possible to simplify equations that are components of a matrix? I have a rather big matrix and want to simplify its components (e.g. factor and cancel terms).
Thanks.
Most functions (where appropriate) already thread over lists, matrices, equations, etc...
For example:
(%i1) a : [[cos(x)^2+sin(x)^2,1],[0,sin(x)*cos(x)]];
(%o1) [[sin(x)^2 + cos(x)^2, 1], [0, cos(x) sin(x)]]
(%i2) trigsimp(a);
(%o2) [[1, 1], [0, cos(x) sin(x)]]
(%i3) trigreduce(a);
(%o3) [[(cos(2 x) + 1)/2 + (1 - cos(2 x))/2, 1], [0, sin(2 x)/2]]
(%i4) expand(%o3);
(%o4) [[1, 1], [0, sin(2 x)/2]]
If this doesn't help you, can you give more details of the problem that you're having?
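As an aside, the same elementwise pattern exists in Python's sympy, in case that's your environment; here is a minimal sketch (this is sympy, not Maxima):

import sympy as sp

x = sp.symbols('x')
m = sp.Matrix([[sp.cos(x)**2 + sp.sin(x)**2, 1],
               [0, sp.sin(x)*sp.cos(x)]])

# applyfunc maps a simplifier over every entry of the matrix
print(m.applyfunc(sp.trigsimp))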
Find a matrix for the linear transformation T: R^2 → R^3 defined by
T(x, y) = (13x - 9y, -x - 2y, -11x - 6y)
with respect to the bases B = {(2, 3), (-3, -4)} and C = {(-1, 2, 2), (-4, 1, 3), (1, -1, -1)} for R^2 and R^3 respectively.
Here, the process should be to apply T to the vectors of B, express the images as linear combinations of the vectors of C, and take the resulting coordinate vectors as the columns of the matrix of the transformation. Is my approach correct, or do I need to change something?
I will show you how to do it in Python with the sympy module.
import sympy
# Assuming that B(x, y) = (2, 3)*x + (-3, -4)*y, it can be expressed as a left multiplication by
B = sympy.Matrix(
[[2, -3],
[3, -4]])
# Then you apply T as a left multiplication by
T = sympy.Matrix(
[[13, -9],
[-1, -2],
[-11, -6]])
# And finally, to get the representation in the basis C, you multiply the result
# by the inverse of C
C = sympy.Matrix(
[[-1, -4, 1],
[2, 1, -1],
[2, 3, -1]])
combined = C.inv() * T * B
The combined transformation matrix yields
[[-57, 77],
 [-16, 23],
 [-122, 166]]
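As a quick sanity check of the reasoning in the question (reusing the matrices defined above): applying T to the first basis vector of B and rewriting the image in C-coordinates should reproduce the first column of combined.

b1 = sympy.Matrix([2, 3])             # first vector of the basis B
image = T * b1                        # T(b1) in standard coordinates
coords_in_C = C.inv() * image         # the same vector in C-coordinates
print(coords_in_C == combined[:, 0])  # True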
This may be very obvious, but how do you multiply a column vector by a matrix? For example:
let n matrix:from-column-list [[1 0]]
let m matrix:from-row-list [[1 0] [0 1]]
Using matrix:times does not work as the dimensions are not the same, and I can't find any other way of doing it. Any help would be great.
Matrix multiplication is not commutative. This works fine:
let n matrix:from-column-list [[1 0]]
let m matrix:from-row-list [[1 0] [0 1]]
show matrix:times m n
hth
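For comparison, the same dimension rule illustrated in Python with numpy (a sketch, not NetLogo):

import numpy as np

m = np.array([[1, 0],
              [0, 1]])   # 2x2 matrix
n = np.array([[1],
              [0]])      # 2x1 column vector

print(m @ n)   # fine: (2x2)(2x1) -> (2x1)
# n @ m would raise a ValueError, since (2x1)(2x2) is not defined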
I need to create a 3 by 3 real orthonormal symbolic matrix in Mathematica.
How can I do so?
Not that I recommend this, but...
m = Array[a, {3, 3}];
{q, r} = QRDecomposition[m];
q2 = Simplify[q /. Conjugate -> Identity]
So q2 is a symbolic orthogonal matrix (assuming we work over reals).
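For a purely numeric analog of the same QR trick, here is a numpy sketch (starting from a random real matrix instead of a symbolic one):

import numpy as np

rng = np.random.default_rng(0)
q, r = np.linalg.qr(rng.standard_normal((3, 3)))

# the Q factor of a full-rank real matrix is orthonormal
print(np.allclose(q.T @ q, np.eye(3)))   # True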
You seem to want some SO(3) group parametrization in Mathematica I think. You will only have 3 independent symbols (variables), since you have 6 constraints from mutual orthogonality of vectors and the norms equal to 1. One way is to construct independent rotations around the 3 axes, and multiply those matrices. Here is the (perhaps too complex) code to do that:
makeOrthogonalMatrix[p_Symbol, q_Symbol, t_Symbol] :=
Module[{permute, matrixGeneratingFunctions},
permute = Function[perm, Permute[Transpose[Permute[#, perm]], perm] &];
matrixGeneratingFunctions =
Function /@ FoldList[
permute[#2][#1] &,
{{Cos[#], 0, Sin[#]}, {0, 1, 0}, {-Sin[#], 0, Cos[#]}},
{{2, 1, 3}, {3, 2, 1}}];
#1.#2.#3 & @@ MapThread[Compose, {matrixGeneratingFunctions, {p, q, t}}]];
Here is how this works:
In[62]:= makeOrthogonalMatrix[x,y,z]
Out[62]=
{{Cos[x] Cos[z]+Sin[x] Sin[y] Sin[z],Cos[z] Sin[x] Sin[y]-Cos[x] Sin[z],Cos[y] Sin[x]},
{Cos[y] Sin[z],Cos[y] Cos[z],-Sin[y]},
{-Cos[z] Sin[x]+Cos[x] Sin[y] Sin[z],Cos[x] Cos[z] Sin[y]+Sin[x] Sin[z],Cos[x] Cos[y]}}
You can check that the matrix is orthonormal, by using Simplify over the various column (or row) dot products.
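The construction itself (one rotation per axis, multiplied together) is easy to sanity-check numerically; here is a small Python/numpy sketch of the same idea (the axis order and sample angles are arbitrary choices):

import numpy as np

def rx(t):  # rotation about the x-axis
    return np.array([[1, 0, 0],
                     [0, np.cos(t), -np.sin(t)],
                     [0, np.sin(t),  np.cos(t)]])

def ry(t):  # rotation about the y-axis
    return np.array([[ np.cos(t), 0, np.sin(t)],
                     [ 0,         1, 0        ],
                     [-np.sin(t), 0, np.cos(t)]])

def rz(t):  # rotation about the z-axis
    return np.array([[np.cos(t), -np.sin(t), 0],
                     [np.sin(t),  np.cos(t), 0],
                     [0,          0,         1]])

R = rx(0.1) @ ry(0.2) @ rz(0.3)          # three free parameters
print(np.allclose(R.T @ R, np.eye(3)))   # True: the product is orthonormal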
I have found a "direct" way to impose special orthogonality.
See below.
(*DEFINITION OF ORTHOGONALITY AND SELF ADJUNCTNESS CONDITIONS:*)
MinorMatrix[m_List?MatrixQ] := Map[Reverse, Minors[m], {0, 1}]
CofactorMatrix[m_List?MatrixQ] := MapIndexed[#1 (-1)^(Plus @@ #2) &, MinorMatrix[m], {2}]
UpperTriangle[ m_List?MatrixQ] := {m[[1, 1 ;; 3]], {0, m[[2, 2]], m[[2, 3]]}, {0, 0, m[[3, 3]]}};
FlatUpperTriangle[m_List?MatrixQ] := Flatten[{m[[1, 1 ;; 3]], m[[2, 2 ;; 3]], m[[3, 3]]}];
Orthogonalityconditions[m_List?MatrixQ] := Thread[FlatUpperTriangle[m.Transpose[m]] == FlatUpperTriangle[IdentityMatrix[3]]];
Selfadjunctconditions[m_List?MatrixQ] := Thread[FlatUpperTriangle[CofactorMatrix[m]] == FlatUpperTriangle[Transpose[m]]];
SO3conditions[m_List?MatrixQ] := Flatten[{Selfadjunctconditions[m], Orthogonalityconditions[m]}];
(*Building of an SO(3) matrix*)
mat = Table[Subscript[m, i, j], {i, 3}, {j, 3}];
$Assumptions = SO3conditions[mat]
Then
Simplify[Det[mat]]
gives 1;...and
MatrixForm[Simplify[mat.Transpose[mat]]]
gives the identity matrix;
...finally
MatrixForm[Simplify[CofactorMatrix[mat] - Transpose[mat]]]
gives a Zero matrix.
This is what I was looking for when I asked my question!
However, let me know your thoughts on this method.
Marcellus
Marcellus, you have to use some parametrization of SO(3), since your general matrix has to reflect the RP3 topology of the group. No single parametrization will cover the whole group without either multivaluedness or singular points. Wikipedia has a nice page about the various charts on SO(3).
Maybe one of the conceptually simplest is the exponential map from the Lie algebra so(3).
Define an antisymmetric, real A (which spans so(3))
A = {{0, a, -c},
{-a, 0, b},
{c, -b, 0}};
Then MatrixExp[A] is an element of SO(3).
We can check that this is so, using
Transpose[MatrixExp[A]].MatrixExp[A] == IdentityMatrix[3] // Simplify
If we write t^2 = a^2 + b^2 + c^2, we can simplify the matrix exponential down to
{{ b^2 + (a^2 + c^2) Cos[t] , b c (1 - Cos[t]) + a t Sin[t], a b (1 - Cos[t]) - c t Sin[t]},
{b c (1 - Cos[t]) - a t Sin[t], c^2 + (a^2 + b^2) Cos[t] , a c (1 - Cos[t]) + b t Sin[t]},
{a b (1 - Cos[t]) + c t Sin[t], a c (1 - Cos[t]) - b t Sin[t], a^2 + (b^2 + c^2) Cos[t]}} / t^2
Note that this is basically the same parametrization as RotationMatrix gives.
Compare with the output from
RotationMatrix[s, {b, c, a}] // ComplexExpand // Simplify[#, Trig -> False] &;
% /. a^2 + b^2 + c^2 -> 1
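A quick numerical cross-check of the exponential map in Python (a sketch assuming scipy is available; the values of a, b, c are arbitrary samples):

import numpy as np
from scipy.linalg import expm

a, b, c = 0.3, -1.2, 0.7
A = np.array([[ 0,  a, -c],
              [-a,  0,  b],
              [ c, -b,  0]])

R = expm(A)   # matrix exponential of a real antisymmetric matrix
print(np.allclose(R.T @ R, np.eye(3)))   # True: R is orthogonal
print(np.isclose(np.linalg.det(R), 1))   # True: R is in SO(3)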
Although I really like the idea of Marcellus' answer to his own question, it's not completely correct. Unfortunately, the conditions he arrives at also result in
Simplify[Transpose[mat] - mat]
evaluating to a zero matrix! This is clearly not right. Here's an approach that's both correct and more direct:
OrthogonalityConditions[m_List?MatrixQ] := Thread[Flatten[m.Transpose[m]] == Flatten[IdentityMatrix[3]]];
SO3Conditions[m_List?MatrixQ] := Flatten[{OrthogonalityConditions[m], Det[m] == 1}];
i.e. multiplying a rotation matrix by its transpose results in the identity matrix, and the determinant of a rotation matrix is 1.
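To see concretely that symmetry does not follow from these conditions, here is a small numpy check (Python rather than Mathematica) with a 90-degree rotation about the z-axis:

import numpy as np

R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])

print(np.allclose(R @ R.T, np.eye(3)))   # True:  orthogonal
print(np.isclose(np.linalg.det(R), 1))   # True:  determinant 1
print(np.allclose(R, R.T))               # False: not symmetric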
This is an example; I want to know if there is a general way to deal with this kind of problem.
Suppose I have a function (a ∈ ℝ):
f[a_, n_Integer, m_Integer] := Sum[a^i k[i],{i,0,n}]^m
And I need a closed form for the coefficient of a^p. What is the best way to proceed?
Note 1: In this particular case, one could try to represent the sum manually through Multinomial[ ], but it seems difficult to write down the multinomial terms for a variable number of arguments, and besides, I want Mma to do it.
Note 2: Of course
Collect[f[a, 3, 4], a]
will do the job, but only for a given m and n.
Note 3: This question is related to this other one. My application is different, but probably the same methods apply. So, feel free to answer both with a single shot.
Note 4:
You can model the multinomial theorem with a function like:
f[n_, m_] :=
 Sum[KroneckerDelta[m - Sum[r[i], {i, n}]]
   (Multinomial @@ Sequence@Array[r, n])
   Product[x[i]^r[i], {i, n}],
  Evaluate@(Sequence @@ Table[{r[i], 0, m}, {i, 1, n}])];
So, for example
f[2,3]
is the cube of a binomial
x[1]^3 + 3 x[1]^2 x[2] + 3 x[1] x[2]^2 + x[2]^3
The coefficient of a^k can be viewed as the derivative of order k at zero divided by k!. In version 8, there is a function BellY, which allows one to construct a derivative at a point of a composition of functions out of the derivatives of the individual components. Basically, for f[g[x]], expanding around x == 0, we find Derivative[p][Function[x, f[g[x]]]][0] as
BellY[ Table[ { Derivative[k][f][g[0]], Derivative[k][g][0]}, {k, 1, p} ] ]/p!
This is also known as the generalized Bell polynomial; see the Wikipedia page on Bell polynomials.
In the case at hand:
f[a_, n_Integer, m_Integer] := Sum[a^i k[i], {i, 0, n}]^m
With[{n = 3, m = 4, p = 7},
BellY[ Table[{FactorialPower[m, s] k[0]^(m - s),
If[s <= n, s! k[s], 0]}, {s, 1, p}]]/p!] // Distribute
(*
Out[80]= 4 k[1] k[2]^3 + 12 k[1]^2 k[2] k[3] + 12 k[0] k[2]^2 k[3] +
12 k[0] k[1] k[3]^2
*)
With[{n = 3, m = 4, p = 7}, Coefficient[f[a, n, m], a, p]]
(*
Out[81]= 4 k[1] k[2]^3 + 12 k[1]^2 k[2] k[3] + 12 k[0] k[2]^2 k[3] +
12 k[0] k[1] k[3]^2
*)
Doing it this way is more computationally efficient than building the entire expression and extracting coefficients.
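For comparison, the brute-force route (build the whole expression, then read off the coefficient) looks like this in Python with sympy (a sketch; k0..k3 stand in for the k[i] above):

import sympy as sp

a = sp.symbols('a')
k = sp.symbols('k0:4')   # k0, k1, k2, k3

# n = 3, m = 4: expand the full power, then extract the coefficient of a^7
expr = sp.expand(sum(a**i * k[i] for i in range(4))**4)
print(expr.coeff(a, 7))
# 4*k1*k2**3 + 12*k0*k1*k3**2 + 12*k0*k2**2*k3 + 12*k1**2*k2*k3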
EDIT The approach outlined here will work for symbolic orders n and m, but requires an explicit value for p. When using it in these circumstances, it is better to replace If with its piecewise analog, e.g. Boole:
With[{p = 2},
BellY[Table[{FactorialPower[m, s] k[0]^(m - s),
Boole[s <= n] s! k[s]}, {s, 1, p}]]/p!]
(* 1/2 (Boole[1 <= n]^2 FactorialPower[m, 2] k[0]^(-2 + m)
k[1]^2 + 2 m Boole[2 <= n] k[0]^(-1 + m) k[2]) *)
Does anyone know of any standard algorithms to determine an affine transformation matrix based upon a set of known points in two co-ordinate systems?
Affine transformations are given by 2x3 matrices. We perform an affine transformation M by taking our 2D input (x y), bumping it up to a 3D vector (x y 1), and then multiplying (on the left) by M.
So if we have three points (x1 y1) (x2 y2) (x3 y3) mapping to (u1 v1) (u2 v2) (u3 v3) then we have
    [x1 x2 x3]   [u1 u2 u3]
M * [y1 y2 y3] = [v1 v2 v3]
    [ 1  1  1]
You can get M simply by multiplying on the right by the inverse of
[x1 x2 x3]
[y1 y2 y3]
[ 1 1 1].
A 2x3 matrix multiplied on the right by a 3x3 matrix gives us the 2x3 we want. (You don't actually need the full inverse, but if matrix inverse is available it's easy to use.)
Easily adapted to other dimensions. If you have more than 3 points you may want a least squares best fit. You'll have to ask again for that, but it's a little harder.
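A minimal numpy sketch of exactly this construction (the sample points here are hypothetical):

import numpy as np

# three source points (as columns) and their images
src = np.array([[0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
dst = np.array([[1.0, 3.0, 1.0],
                [2.0, 2.0, 5.0]])

# lift the source points to homogeneous coordinates: append a row of ones
X = np.vstack([src, np.ones(3)])   # 3x3

# multiply the target points on the right by the inverse of X
M = dst @ np.linalg.inv(X)         # the 2x3 affine matrix

print(np.allclose(M @ X, dst))     # True: M maps each source point correctly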
I'm not sure how standard it is, but there is a nice formula especially for your case presented in "Beginner's guide to mapping simplexes affinely" and "Workbook on mapping simplexes affinely".
Putting it into code should look something like this (sorry for the bad code style -- I'm a mathematician, not a programmer):
import numpy as np
# input data
ins = [[1, 1, 2], [2, 3, 0], [3, 2, -2], [-2, 2, 3]] # <- points
out = [[0, 2, 1], [1, 2, 2], [-2, -1, 6], [4, 1, -3]] # <- mapped to
# calculations
l = len(ins)
B = np.vstack([np.transpose(ins), np.ones(l)])
D = 1.0 / np.linalg.det(B)
entry = lambda r,d: np.linalg.det(np.delete(np.vstack([r, B]), (d+1), axis=0))
M = [[(-1)**i * D * entry(R, i) for i in range(l)] for R in np.transpose(out)]
A, t = np.hsplit(np.array(M), [l-1])
t = np.transpose(t)[0]
# output
print("Affine transformation matrix:\n", A)
print("Affine transformation translation vector:\n", t)
# unittests
print("TESTING:")
for p, P in zip(np.array(ins), np.array(out)):
    image_p = np.dot(A, p) + t
    result = "[OK]" if np.allclose(image_p, P) else "[ERROR]"
    print(p, " mapped to: ", image_p, " ; expected: ", P, result)
This code recovers the affine transformation from the given points ("ins" mapped to "out") and tests that it works.