How to define trigonometric results in a certain form in Mathematica? - wolfram-mathematica

I have a problem with the trigonometric form of results in Mathematica.
Tabc2dqInv = {{Cos[\[Omega]t], -Sin[\[Omega]t], 1},
   {Cos[\[Omega]t - 2/3 Pi], -Sin[\[Omega]t - 2/3 Pi], 1},
   {Cos[\[Omega]t + 2/3 Pi], -Sin[\[Omega]t + 2/3 Pi], 1}};
Print["dq->abc Transformation Matrix is: ", Tabc2dqInv // MatrixForm]
The printed result shows the entries rewritten in terms of "±1/6 Pi". The question is: how can I constrain the results to stay in the "±2/3 Pi" form rather than being converted to "±1/6 Pi"?
Thanks in advance!

Use HoldForm:
Tabc2dqInv =
  HoldForm[{{Cos[\[Omega] t], -Sin[\[Omega] t], 1},
    {Cos[\[Omega] t - 2/3 Pi], -Sin[\[Omega] t - 2/3 Pi], 1},
    {Cos[\[Omega] t + 2/3 Pi], -Sin[\[Omega] t + 2/3 Pi], 1}}]
Print["dq->abc Transformation Matrix is: ", Tabc2dqInv // MatrixForm]
Notice the form is preserved, but MatrixForm doesn't work because the expression now has head HoldForm rather than List. To fix that you could wrap each individual entry in HoldForm:
Tabc2dqInv = {{Cos[\[Omega] t], -Sin[\[Omega] t], 1},
   {HoldForm[Cos[\[Omega] t - 2/3 Pi]], HoldForm[-Sin[\[Omega] t - 2/3 Pi]], 1},
   {HoldForm[Cos[\[Omega] t + 2/3 Pi]], HoldForm[-Sin[\[Omega] t + 2/3 Pi]], 1}}
Print["dq->abc Transformation Matrix is: ", Tabc2dqInv // MatrixForm]
Also be aware that you'll need to release the hold to do most other things, e.g.:
ReleaseHold[Tabc2dqInv /. \[Omega] -> 0]
{{1, 0, 1}, {-(1/2), Sqrt[3]/2, 1}, {-(1/2), -(Sqrt[3]/2), 1}}
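If you need both the nicely printed form and a matrix you can actually compute with, one pattern (illustrative names, not from the answer above) is to keep the held matrix for display and a released copy for algebra:
Tdisplay = Tabc2dqInv;               (* entries held, prints with 2/3 Pi intact *)
Tcompute = ReleaseHold[Tabc2dqInv];  (* ordinary matrix for further algebra *)
Simplify[Det[Tcompute]]              (* a constant, independent of \[Omega] t *)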
My general advice is not to get too uptight about the way Mathematica decides to simplify things, so long as it's mathematically correct.

Related

Mathematica FindMinimum redundant evaluations

The following code evaluates the objective function at the same point more than once, as evidenced by the output of the Print function. Why does Mathematica perform these redundant steps? It seems inefficient.
obj[x_?NumberQ, y_?NumberQ, z_?NumberQ] := Module[{},
  Print[x, " ", y, " ", z];
  x^2 + y^2 + z^2]
minisub =
 FindMinimum[obj[x, y, z], {{x, 1}, {y, 2}, {z, 3}}, Method -> "Newton"]
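One possible workaround (my own sketch, not from this thread, and not an explanation of FindMinimum's behaviour) is to memoize the objective so that repeated evaluations at an identical point are looked up rather than recomputed:
(* Memoized objective: the Set inside SetDelayed caches each computed value *)
ClearAll[objCached];
objCached[x_?NumberQ, y_?NumberQ, z_?NumberQ] :=
 objCached[x, y, z] = (Print[x, " ", y, " ", z]; x^2 + y^2 + z^2)

FindMinimum[objCached[x, y, z], {{x, 1}, {y, 2}, {z, 3}}, Method -> "Newton"]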

Constructing a tridiagonal matrix in Mathematica where nonzero elements contain functions and variables

Suppose I want to construct a matrix A such that A[[i,i]] = f[x_,y_] + d[i], A[[i,i+1]] = u[i], A[[i+1,i]] = l[i], for i = 1, ..., N. Say, f[x_,y_] = x^2 + y^2.
How can I code this in Mathematica?
Additionally, if I want to integrate the first diagonal element of A, i.e. A[[1,1]] over x and y, both running from 0 to 1, how can I do that?
In[1]:= n = 4;
f[x_, y_] := x^2 + y^2;
A = Normal[SparseArray[{
     {i_, i_} /; i > 1 -> f[x, y] + d[i],
     {i_, j_} /; j - i == 1 -> u[i],
     {i_, j_} /; i - j == 1 -> l[i - 1],
     {1, 1} -> Integrate[f[x, y] + d[1], {x, 0, 1}, {y, 0, 1}]},
    {n, n}]]
Out[3]= {{2/3 + d[1], u[1], 0, 0},
 {l[1], x^2 + y^2 + d[2], u[2], 0},
 {0, l[2], x^2 + y^2 + d[3], u[3]},
 {0, 0, l[3], x^2 + y^2 + d[4]}}
Band is tailored specifically for this:
myTridiagonalMatrix@n_Integer?Positive :=
 SparseArray[
  { Band@{1, 1} -> f[x, y] + Array[d, n]
  , Band@{1, 2} -> Array[u, n - 1]
  , Band@{2, 1} -> Array[l, n - 1]}
  , {n, n}]
Check it out (no need to define f, d, u, l):
myTridiagonalMatrix@5 // MatrixForm
Note that MatrixForm should not be part of a definition. For example, it's a bad idea to set A = (something) // MatrixForm. You will get a MatrixForm object instead of a table (that is, an array of arrays) or a SparseArray, and its only purpose is to be pretty-printed in the front end. Trying to use MatrixForm in calculations will yield errors and lead to unnecessary confusion.
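A minimal illustration of that point, using the definition above: keep the matrix itself as data and apply MatrixForm only when displaying it.
mTri = Normal@myTridiagonalMatrix@4;  (* a table (list of lists) you can compute with *)
mTri // MatrixForm                    (* pretty-printed view; mTri itself stays a plain list *)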
Integrating the element at {1, 1}:
myTridiagonalMatrixWithFirstDiagonalElementIntegrated@n_Integer?Positive :=
 MapAt[
  Integrate[#, {x, 0, 1}, {y, 0, 1}] &
  , myTridiagonalMatrix@n
  , {1, 1}]
You may check it out without defining f or d, as well:
myTridiagonalMatrixWithFirstDiagonalElementIntegrated@5
The latter operation, however, looks suspicious. For example, it does not leave your matrix (or its corresponding linear system) invariant w.r.t. reasonable transformations. (This operation does not even preserve linearity of matrices.) You probably don't want to do it.
Comment on comment above: there's no need to define A[x_, y_] := … to Integrate[A[[1,1]], {x,0,1}, {y,0,1}]. Note that A[[1,1]] is totally different from A[1, 1]: the former is Part[A, 1, 1] which is a certain element of table A. A[1, 1] is a different expression: if A is some table then A[1, 1] is (that table)[1, 1], which is a valid expression but is normally considered meaningless.
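To make the A[[1,1]] point concrete, here is a small sketch of mine (the matrix is rebuilt with the plain f[x, y] + d[1] entry kept at position {1, 1}) showing Part extraction followed by Integrate:
n = 4;
f[x_, y_] := x^2 + y^2;
A = Normal[SparseArray[{
     {i_, i_} -> f[x, y] + d[i],
     {i_, j_} /; j - i == 1 -> u[i],
     {i_, j_} /; i - j == 1 -> l[i - 1]},
    {n, n}]];
(* A[[1, 1]] is Part[A, 1, 1], an ordinary expression in x and y *)
Integrate[A[[1, 1]], {x, 0, 1}, {y, 0, 1}]  (* gives 2/3 + d[1] *)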

How to plot this function?

I am trying to plot the region $x^{p}+y^{p}\le 1$ in the xy-plane. But when I ran commands like this:
RegionPlot[x^0.7 + y^0.7 <= 1, {x, -500, 500}, {y, -500, 500}]
I always encounter error messages like:
LessEqual::nord: Invalid comparison with -91.0952+125.382 I attempted. >>
I am confused - how can I make Mathematica know I am seeking the region in R^{2}, not in C^{2}?
The invalid comparison error is actually not the problem here. RegionPlot[] will plot the region where the expression evaluates to True. The regions where the expression is complex do not evaluate to True, and RegionPlot will leave them blank.
The reason you see a fully blank plot is simply that your plot range is too large. RegionPlot uses a coarse grid by default and misses the small True region altogether.
This works (throwing the invalid comparison as a warning):
RegionPlot[TrueQ[( x^0.7 + y^0.7 <= 1)], {x, -1, 1}, {y, -1, 1},
PlotPoints -> 100]
You can suppress the warning:
Quiet[RegionPlot[TrueQ[( x^0.7 + y^0.7 <= 1)], {x, -1, 1}, {y, -1, 1},
PlotPoints -> 100], {LessEqual::nord}]
Your plotting range is invalid. You're calculating (-500)^0.7, which is a complex number (-45.5509762 + 62.69554i to be specific).
RegionPlot[Table[x^i + y^i <= 1, {i,.1,1,.1}], {x,0,1}, {y,0,1}, Evaluated->True]
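If the intended region is really the "p-ball" |x|^p + |y|^p <= 1 (an assumption on my part; the question does not say so), a real-valued formulation avoids the complex powers entirely:
(* Assumes the region meant is Abs[x]^0.7 + Abs[y]^0.7 <= 1, which is real for all x, y *)
RegionPlot[Abs[x]^0.7 + Abs[y]^0.7 <= 1, {x, -1.2, 1.2}, {y, -1.2, 1.2},
 PlotPoints -> 60]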

Create a symbolic orthonormal matrix in Mathematica

I need to create a 3 by 3 real orthonormal symbolic matrix in Mathematica.
How can I do so?
Not that I recommend this, but...
m = Array[a, {3, 3}];
{q, r} = QRDecomposition[m];
q2 = Simplify[q /. Conjugate -> Identity]
So q2 is a symbolic orthogonal matrix (assuming we work over reals).
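A quick numeric sanity check of that construction (my sketch; random real values are substituted for the symbolic entries a[i, j]):
q2num = q2 /. Thread[Flatten[m] -> RandomReal[{-1, 1}, 9]];
Chop[q2num.Transpose[q2num]]  (* should be the 3 x 3 identity matrix *)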
You seem to want some parametrization of the SO(3) group in Mathematica, I think. You will only have 3 independent symbols (variables), since you have 6 constraints from the mutual orthogonality of the vectors and their norms being equal to 1. One way is to construct independent rotations around the 3 axes and multiply those matrices. Here is the (perhaps too complex) code to do that:
makeOrthogonalMatrix[p_Symbol, q_Symbol, t_Symbol] :=
 Module[{permute, matrixGeneratingFunctions},
  permute = Function[perm, Permute[Transpose[Permute[#, perm]], perm] &];
  matrixGeneratingFunctions =
   Function /@ FoldList[
     permute[#2][#1] &,
     {{Cos[#], 0, Sin[#]}, {0, 1, 0}, {-Sin[#], 0, Cos[#]}},
     {{2, 1, 3}, {3, 2, 1}}];
  #1.#2.#3 & @@ MapThread[Compose, {matrixGeneratingFunctions, {p, q, t}}]];
Here is how this works:
In[62]:= makeOrthogonalMatrix[x,y,z]
Out[62]=
{{Cos[x] Cos[z]+Sin[x] Sin[y] Sin[z],Cos[z] Sin[x] Sin[y]-Cos[x] Sin[z],Cos[y] Sin[x]},
{Cos[y] Sin[z],Cos[y] Cos[z],-Sin[y]},
{-Cos[z] Sin[x]+Cos[x] Sin[y] Sin[z],Cos[x] Cos[z] Sin[y]+Sin[x] Sin[z],Cos[x] Cos[y]}}
You can check that the matrix is orthonormal by using Simplify over the various column (or row) dot products.
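For instance, a sketch of that check, collecting all pairwise row dot products:
R = makeOrthogonalMatrix[x, y, z];
Simplify[Table[R[[i]].R[[j]], {i, 3}, {j, 3}]]  (* should simplify to IdentityMatrix[3] *)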
I have found a "direct" way to impose special orthogonality.
See below.
(*DEFINITION OF ORTHOGONALITY AND SELF ADJUNCTNESS CONDITIONS:*)
MinorMatrix[m_List?MatrixQ] := Map[Reverse, Minors[m], {0, 1}]
CofactorMatrix[m_List?MatrixQ] := MapIndexed[#1 (-1)^(Plus @@ #2) &, MinorMatrix[m], {2}]
UpperTriangle[ m_List?MatrixQ] := {m[[1, 1 ;; 3]], {0, m[[2, 2]], m[[2, 3]]}, {0, 0, m[[3, 3]]}};
FlatUpperTriangle[m_List?MatrixQ] := Flatten[{m[[1, 1 ;; 3]], m[[2, 2 ;; 3]], m[[3, 3]]}];
Orthogonalityconditions[m_List?MatrixQ] := Thread[FlatUpperTriangle[m.Transpose[m]] == FlatUpperTriangle[IdentityMatrix[3]]];
Selfadjunctconditions[m_List?MatrixQ] := Thread[FlatUpperTriangle[CofactorMatrix[m]] == FlatUpperTriangle[Transpose[m]]];
SO3conditions[m_List?MatrixQ] := Flatten[{Selfadjunctconditions[m], Orthogonalityconditions[m]}];
(*Building of an SO(3) matrix*)
mat = Table[Subscript[m, i, j], {i, 3}, {j, 3}];
$Assumptions = SO3conditions[mat]
Then
Simplify[Det[mat]]
gives 1,
MatrixForm[Simplify[mat.Transpose[mat]]]
gives the identity matrix, and finally
MatrixForm[Simplify[CofactorMatrix[mat] - Transpose[mat]]]
gives a zero matrix.
This is what I was looking for when I asked my question! However, let me know your thoughts on this method.
Marcellus
Marcellus, you have to use some parametrization of SO(3), since your general matrix has to reflect the RP3 topology of the group. No single parametrization will cover the whole group without either multivaluedness or singular points. Wikipedia has a nice page about the various charts on SO(3).
Maybe one of the conceptually simplest is the exponential map from the Lie algebra so(3).
Define an antisymmetric, real A (which spans so(3))
A = {{0, a, -c},
{-a, 0, b},
{c, -b, 0}};
Then MatrixExp[A] is an element of SO(3).
We can check that this is so, using
Transpose[MatrixExp[A]].MatrixExp[A] == IdentityMatrix[3] // Simplify
If we write t^2 = a^2 + b^2 + c^2, we can simplify the matrix exponential down to
{{ b^2 + (a^2 + c^2) Cos[t] , b c (1 - Cos[t]) + a t Sin[t], a b (1 - Cos[t]) - c t Sin[t]},
{b c (1 - Cos[t]) - a t Sin[t], c^2 + (a^2 + b^2) Cos[t] , a c (1 - Cos[t]) + b t Sin[t]},
{a b (1 - Cos[t]) + c t Sin[t], a c (1 - Cos[t]) - b t Sin[t], a^2 + (b^2 + c^2) Cos[t]}} / t^2
Note that this is basically the same parametrization as RotationMatrix gives.
Compare with the output from
RotationMatrix[s, {b, c, a}] // ComplexExpand // Simplify[#, Trig -> False] &;
% /. a^2 + b^2 + c^2 -> 1
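As an extra sanity check (mine, not part of the original answer), the closed form above can be compared numerically with MatrixExp at a sample point; Block is used so a, b, c only get values locally:
Block[{a = 0.3, b = -0.7, c = 0.5, t, A, closed},
 t = Sqrt[a^2 + b^2 + c^2];
 A = {{0, a, -c}, {-a, 0, b}, {c, -b, 0}};
 closed = {{b^2 + (a^2 + c^2) Cos[t], b c (1 - Cos[t]) + a t Sin[t], a b (1 - Cos[t]) - c t Sin[t]},
    {b c (1 - Cos[t]) - a t Sin[t], c^2 + (a^2 + b^2) Cos[t], a c (1 - Cos[t]) + b t Sin[t]},
    {a b (1 - Cos[t]) + c t Sin[t], a c (1 - Cos[t]) - b t Sin[t], a^2 + (b^2 + c^2) Cos[t]}}/t^2;
 Max[Abs[MatrixExp[A] - closed]]]  (* should be of order 10^-16 *)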
Although I really like the idea of Marcellus' answer to his own question, it's not completely correct. Unfortunately, the conditions he arrives at also result in
Simplify[Transpose[mat] - mat]
evaluating to a zero matrix! This is clearly not right. Here's an approach that's both correct and more direct:
OrthogonalityConditions[m_List?MatrixQ] := Thread[Flatten[m.Transpose[m]] == Flatten[IdentityMatrix[3]]];
SO3Conditions[m_List?MatrixQ] := Flatten[{OrthogonalityConditions[m], Det[m] == 1}];
i.e. multiplying a rotation matrix by its transpose results in the identity matrix, and the determinant of a rotation matrix is 1.
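A short usage sketch of the corrected conditions, following the same $Assumptions pattern as above (how far Simplify gets may depend on the version):
mat = Table[Subscript[m, i, j], {i, 3}, {j, 3}];
$Assumptions = SO3Conditions[mat];
Simplify[mat.Transpose[mat]]    (* should give the identity matrix *)
Simplify[Det[mat]]              (* should give 1 *)
Simplify[Transpose[mat] - mat]  (* does not collapse to the zero matrix *)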

What is the most efficient way to construct large block matrices in Mathematica?

Inspired by Mike Bantegui's question on constructing a matrix defined as a recurrence relation, I wonder if there is any general guidance on setting up large block matrices in the least computation time. In my experience, constructing the blocks and then putting them together can be quite inefficient (thus my answer was actually slower than Mike's original code). Join and possibly ArrayFlatten are less efficient than they could be.
Obviously if the matrix is sparse, one can use SparseArray constructs, but there will be times when the block matrix you are constructing is not sparse.
What is best practice for this kind of problem? I am assuming the elements of the matrix are numeric.
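For concreteness, here is a minimal sketch of my own (not from the thread) of the block-assembly style the question refers to, joining four dense blocks with ArrayFlatten:
(* Four m x m random blocks assembled into a 2m x 2m matrix *)
m = 1000;
a11 = RandomReal[1, {m, m}]; a12 = RandomReal[1, {m, m}];
a21 = RandomReal[1, {m, m}]; a22 = RandomReal[1, {m, m}];
AbsoluteTiming[big = ArrayFlatten[{{a11, a12}, {a21, a22}}];]
Dimensions[big]  (* {2000, 2000} *)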
The code shown below is available here: http://pastebin.com/4PWWxGhB. Just copy and paste it into a notebook to test it out.
I was actually trying out several functional ways of calculating matrices, since I figured the functional way (which is typically idiomatic in Mathematica) would be more efficient.
As one example, I had this matrix which was composed of two lists:
In: L = 1200;
e = Table[..., {L}];
f = Table[..., {L}];
h = Table[0, {2 L}, {2 L}];
Do[h[[i, i]] = e[[i]], {i, 1, L}];
Do[h[[i, i]] = e[[i - L]], {i, L + 1, 2 L}];
Do[h[[i, j]] = f[[i]] f[[j - L]], {i, 1, L}, {j, L + 1, 2 L}];
Do[h[[i, j]] = h[[j, i]], {i, 1, 2 L}, {j, 1, i}];
My first step was to time everything.
In: h = Table[0, {2 L}, {2 L}];
AbsoluteTiming[Do[h[[i, i]] = e[[i]], {i, 1, L}];]
AbsoluteTiming[Do[h[[i, i]] = e[[i - L]], {i, L + 1, 2 L}];]
AbsoluteTiming[
Do[h[[i, j]] = f[[i]] f[[j - L]], {i, 1, L}, {j, L + 1, 2 L}];]
AbsoluteTiming[Do[h[[i, j]] = h[[j, i]], {i, 1, 2 L}, {j, 1, i}];]
Out: {0.0020001, Null}
{0.0030002, Null}
{5.0012861, Null}
{4.0622324, Null}
DiagonalMatrix[...] was slower than the Do loops, so I decided to just use Do loops on the last step. As you can see, using Outer[Times, f, f] was much faster in this case.
I then wrote the equivalent using Outer for the blocks in the upper right and bottom left of the matrix, and DiagonalMatrix for the diagonal:
AbsoluteTiming[h1 = ArrayPad[Outer[Times, f, f], {{0, L}, {L, 0}}];]
AbsoluteTiming[h1 += Transpose[h1];]
AbsoluteTiming[h1 += DiagonalMatrix[Join[e, e]];]
Out: {0.9960570, Null}
{0.3770216, Null}
{0.0160009, Null}
The DiagonalMatrix was actually slower. I could replace this with just the Do loops, but I kept it because it was cleaner looking.
The current tally is 9.06 seconds for the naive Do loop, and 1.389 seconds for my next version using Outer and DiagonalMatrix. About a 6.5 times speedup, not too bad.
Sounds a lot faster now, doesn't it? Let's try using Compile.
In: cf = Compile[{{L, _Integer}, {e, _Real, 1}, {f, _Real, 1}},
Module[{h},
h = Table[0.0, {2 L}, {2 L}];
Do[h[[i, i]] = e[[i]], {i, 1, L}];
Do[h[[i, i]] = e[[i - L]], {i, L + 1, 2 L}];
Do[h[[i, j]] = f[[i]] f[[j - L]], {i, 1, L}, {j, L + 1, 2 L}];
Do[h[[i, j]] = h[[j, i]], {i, 1, 2 L}, {j, 1, i}];
h]];
AbsoluteTiming[cf[L, e, f];]
Out: {0.3940225, Null}
Now it's running 3.56 times faster than my last version, and 23.23 times faster than the first one. Next version:
In: cf = Compile[{{L, _Integer}, {e, _Real, 1}, {f, _Real, 1}},
Module[{h},
h = Table[0.0, {2 L}, {2 L}];
Do[h[[i, i]] = e[[i]], {i, 1, L}];
Do[h[[i, i]] = e[[i - L]], {i, L + 1, 2 L}];
Do[h[[i, j]] = f[[i]] f[[j - L]], {i, 1, L}, {j, L + 1, 2 L}];
Do[h[[i, j]] = h[[j, i]], {i, 1, 2 L}, {j, 1, i}];
h], CompilationTarget->"C", RuntimeOptions->"Speed"];
AbsoluteTiming[cf[L, e, f];]
Out: {0.1370079, Null}
Most of the speed came from CompilationTarget->"C". Here I got another 2.84 times speedup over the fastest version, and a 66.13 times speedup over the first version. But all I did was just compile it!
Now, this is a very simple example. But this is real code I'm using to solve a problem in condensed matter physics. So don't dismiss it as possibly being a "toy example."
How about another example of a technique we can use? I have a relatively simple matrix to build up: it's composed of nothing but ones on the diagonal from the start to some arbitrary point. The naive way may look something like this:
In: k = L;
AbsoluteTiming[p = Table[If[i == j && j <= k, 1, 0], {i, 2L}, {j, 2L}];]
Out: {5.5393168, Null}
Instead, let's build it up using ArrayPad and IdentityMatrix:
In: AbsoluteTiming[ArrayPad[IdentityMatrix[k], {{0, 2L-k}, {0, 2L-k}}];]
Out: {0.0140008, Null}
This actually doesn't work for k = 0, but you can special-case that if you need to. Furthermore, depending on the size of k, this can be faster or slower. It's always faster than the Table[...] version, though.
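A hedged sketch of that k = 0 special case (the helper name is mine):
(* Ones on the diagonal up to position k, zeros elsewhere *)
partialIdentity[k_Integer?NonNegative, L_Integer?Positive] :=
 If[k == 0,
  ConstantArray[0, {2 L, 2 L}],
  ArrayPad[IdentityMatrix[k], {{0, 2 L - k}, {0, 2 L - k}}]]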
You could even write this using SparseArray:
In: AbsoluteTiming[SparseArray[{i_, i_} /; i <= k -> 1, {2 L, 2 L}];]
Out: {0.0040002, Null}
I could go on about some other things, but I'm afraid if I do I'll make this answer unreasonably large. I've accumulated a number of techniques for forming these various matrices and lists in the time I spent trying to optimize some code. The base code I worked with took over 6 days for one calculation to run, and now it takes only 6 hours to do the same thing.
I'll see if I can pick out the general techniques I've come up with and just stick them in a notebook to use.
TL;DR: It seems like for these cases, the functional way outperforms the procedural way. But when compiled, the procedural code outperforms the functional code.
Looking at what Compile does to Do loops is instructive. Consider this:
L=1200;
Do[.7, {i, 1, 2 L}, {j, 1, i}] // Timing
Do[.3 + .4, {i, 1, 2 L}, {j, 1, i}] // Timing
Do[.3 + .4 + .5, {i, 1, 2 L}, {j, 1, i}] // Timing
Do[.3 + .4 + .5 + .8, {i, 1, 2 L}, {j, 1, i}] // Timing
(*
{0.390163, Null}
{1.04115, Null}
{1.95333, Null}
{2.42332, Null}
*)
First, it seems safe to assume that Do does not automatically compile its argument if it's over some length (as Map, Nest, etc. do): you can keep adding constants, and the derivative of time taken vs. the number of constants is constant. This is further supported by the nonexistence of such an option in SystemOptions["CompileOptions"].
Next, this loops around n(n-1)/2 times with n = 2L, so around 3*10^6 times for our L = 1200; the time taken for each addition indicates that there is a lot more going on than is necessary.
Next let us try
Compile[{{L,_Integer}},Do[.7,{i,1,2 L},{j,1,i}]]@1200//Timing
Compile[{{L,_Integer}},Do[.7+.7,{i,1,2 L},{j,1,i}]]@1200//Timing
Compile[{{L,_Integer}},Do[.7+.7+.7+.7,{i,1,2 L},{j,1,i}]]@1200//Timing
(*
{0.032081, Null}
{0.032857, Null}
{0.032254, Null}
*)
So here things are more reasonable. Let's take a look:
Needs["CompiledFunctionTools`"]
f1 = Compile[{{L, _Integer}},
Do[.7 + .7 + .7 + .7, {i, 1, 2 L}, {j, 1, i}]];
f2 = Compile[{{L, _Integer}}, Do[2.8, {i, 1, 2 L}, {j, 1, i}]];
CompilePrint[f1]
CompilePrint[f2]
The two CompilePrint calls give the same output, namely:
1 argument
9 Integer registers
Underflow checking off
Overflow checking off
Integer overflow checking on
RuntimeAttributes -> {}
I0 = A1
I5 = 0
I2 = 2
I1 = 1
Result = V255
1 I4 = I2 * I0
2 I6 = I5
3 goto 8
4 I7 = I6
5 I8 = I5
6 goto 7
7 if[ ++ I8 < I7] goto 7
8 if[ ++ I6 < I4] goto 4
9 Return
f1==f2 returns True.
Now, do
f5 = Compile[{{L, _Integer}}, Block[{t = 0.},
Do[t = Sin[i*j], {i, 1, 2 L}, {j, 1, i}]; t]];
f6 = Compile[{{L, _Integer}}, Block[{t = 0.},
Do[t = Sin[.45], {i, 1, 2 L}, {j, 1, i}]; t]];
CompilePrint[f5]
CompilePrint[f6]
I won't show the full listings, but in the first there is a line R3 = Sin[ R1] while in the second there is an assignment to a register R1 = 0.43496553411123023 (which, however, is reassigned in the innermost part of the loop by R2 = R1; perhaps if we output to C this will be optimized by gcc eventually).
So, in these very simple cases, uncompiled Do just blindly executes the body without inspecting it, while Compile does perform various simple optimizations (in addition to outputting byte code). While I am choosing examples here that exaggerate how literally Do interprets its argument, this kind of thing partly explains the large speedup after compiling.
As for the huge speedup in Mike Bantegui's question yesterday, I think the speedup in such simple problems (just looping and multiplying things) is because there is no reason that automatically produced C code can't be optimized by the compiler to get things running as fast as possible. The C code produced is too hard to understand for me, but the bytecode is readable and I don't think that there is anything all that wasteful. So it is not that shocking that it is so fast when compiled to C. Using built-in functions shouldn't be any faster than that, since there shouldn't be any difference in the algorithm (if there is, the Do loop shouldn't have been written that way).
All this should be checked case by case, of course. In my experience, Do loops usually are the fastest way to go for this kind of operation. However, compilation has its limits: if you are producing large objects and trying to pass them around between two compiled functions (as arguments), the bottleneck can be this transfer. One solution is to simply put everything into one giant function and compile that; this ends up being harder and harder to do (you are forced to write C in mma, so to speak). Or you can try compiling the individual functions and using CompilationOptions -> {"InlineCompiledFunctions" -> True} in the Compile. Things can get tricky very fast, though.
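A small sketch (illustrative functions, not from the thread) of the inlining option mentioned above:
(* inner is compiled separately and then inlined into outer *)
inner = Compile[{{x, _Real}}, x^2 + 1.];
outer = Compile[{{x, _Real}}, inner[x] + inner[2. x],
   CompilationOptions -> {"InlineCompiledFunctions" -> True}];
outer[1.5]  (* 13.25 *)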
But this is getting too long.
