Factorization in Mathematica

I am completely new to Mathematica and I need some help (sorry if it is trivial, but I haven't found the answer). I have this code
K[Q_, n_Integer] := Module[{z, x},
  SymmetricReduction[
    SeriesCoefficient[
      Product[ComposeSeries[Series[Q[z], {z, 0, n}],
        Series[x[i] z, {z, 0, n}]], {i, 1, n}], n],
    Table[x[i], {i, 1, n}], Table[Subscript[c, i], {i, 1, n}]][[1]]]
Then I call this:
K[ Sqrt[#]/Tanh[Sqrt[#]]&, 8] /. c->p
The output is a polynomial in p1, p2 ... p8 (in this case) with coefficients written as fractions (23412/7538293, for example). What should I do in order to output the numerator and denominator of each fraction in its prime factorization (preferably factoring out the common divisor of the denominators)?
Thank you!
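One possible approach (a sketch of my own, not from the original thread): use Together to put the whole polynomial over a common denominator, and FactorInteger, which returns {prime, exponent} pairs, on the resulting integers:
poly = K[Sqrt[#]/Tanh[Sqrt[#]] &, 8] /. c -> p;
den = Denominator[Together[poly]];  (* the common denominator, an integer *)
FactorInteger[den]                  (* its prime factorization as {prime, exponent} pairs *)
(* factor the numerator and denominator of every rational coefficient *)
Cases[poly, q_Rational :> {q, FactorInteger[Numerator[q]], FactorInteger[Denominator[q]]}, Infinity]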

Related

How does this deviation come out after I use Evaluate and Plot in Mathematica?

I ran into this problem when trying to solve a partial differential equation. Here is my code:
dd = NDSolve[{D[tes[t, x], t] == D[tes[t, x], x, x] + Exp[-1/(tes[t, x])],
   tes[t, 0] == 1, tes[t, -1] == 1, tes[0, x] == 1}, {tes[t, x]}, {t, 0, 5}, {x, -1, 0}]
f[t_, x_] = tes[t, x] /. dd
kkk = FunctionInterpolation[Integrate[Exp[-1.1/( Evaluate[f[t, x]])], {x, -1, 0}], {t, 0, 0.05}]
kkg[t_] = Integrate[Exp[-1.1/( Evaluate[f[t, x]])], {x, -1, 0}]
Plot[Evaluate[kkk[t]] - Evaluate[kkg[t]], {t, 0, 0.05}]
N[kkg[0.01] - kkk[0.01], 1]
It's strange that the deviation shown in the graph reaches more than 5*10^-7 around t=0.01, while it's only -3.88578*10^-16 when calculated by N[kkg[0.01] - kkk[0.01], 1]. I wonder where this error comes from.
By the way, I find it strange that the output of N[kkg[0.01] - kkk[0.01], 1] has so many decimal places; I've set the precision to 1, right?
Using Mathematica 7 the plot I get does not show a peak at 0.01:
Plot[kkk[t] - kkg[t], {t, 0, 0.05}, GridLines -> Automatic]
There is a peak at about 0.00754:
kkk[0.00754] - kkg[0.00754] // N
{6.50604*10^-7}
Regarding N: it does not change the precision of machine-precision numbers as it does for exact or arbitrary-precision ones:
N[{1.23456789, Pi, 1.23456789`50}, 2]
Precision /@ %
{1.23457, 3.1, 1.2}
{MachinePrecision, 2., 2.}
Look at SetPrecision if you want to force (fake) a precision, and NumberForm if you want to print a number in a specific format.
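For instance, to see the difference (a quick sketch):
SetPrecision[1.23456789, 2]   (* returns 1.2, now an arbitrary-precision number *)
NumberForm[1.23456789, 2]     (* prints 1.2, but the underlying value is unchanged *)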

Generate terms of an m-order power series in n variables

Consider a situation where you have data in a list of the form
data = {{x1, x2, x3, ..., xn, y}, {...}, ..., {...}}
For example,
data = {{0, 2, 3, 2}, {0, 0, 1, 4}, {7, 6, 8, 3}}
I'd like to fit the data to a multivariate polynomial of order, say, 2.
So, the 3-variable function values are:
{2, 4, 3}
in respective points
{{0, 2, 3}, {0, 0, 1}, {7, 6, 8}}
I'd say something like
Fit[data, {1, x, y, z, x^2, y^2, z^2, x y, x z, y z}, {x, y, z}]
This is all very nice, but I may not have only 3-variate data; there may be an arbitrary number of variables, and I don't know how to programmatically generate all the linear, quadratic, or even higher-order terms to insert as the second argument of Fit[].
For 4-variate data to second order, it would be something like:
{1, x1, x2, x3, x4, x1^2, x2^2, x3^2, x4^2, x1 x2, x1 x3, x1 x4, x2 x3, x2 x4, x3 x4}
Is there any way I can generate such a list for n variables, to m-th order?
Like terms (without coefficients) in an m-order power series expansion of an n-variable function.
Does this do what you want?
Union[Times @@@ Tuples[{1, x, y, z}, 2]]
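For reference, this returns every monomial in x, y, z up to total degree 2:
{1, x, x^2, x y, x z, y, y^2, y z, z, z^2}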
While the solution of @ruebenko is perfectly correct, I'd like to mention that it will be quite slow for higher powers / larger numbers of variables, because of the complexity of Tuples and lots of duplicates for higher powers. Here is an algebraic method with a much better performance for those cases (both run-time and memory-wise):
List @@ Expand[(1 + x + y + z + t)^2] /. a_Integer*b_ :> b
Here is a comparison for large number of variables:
In[257]:= (res1 = Union[Times @@@ Tuples[{1, x, y, z, t}, 9]]) // Short // Timing
Out[257]= {19.345, {1, t, t^2, t^3, t^4, t^5, t^6, t^7, t^8, t^9, x, <<694>>, x^2 z^7, y z^7,
  t y z^7, x y z^7, y^2 z^7, z^8, t z^8, x z^8, y z^8, z^9}}
In[259]:= (res2 = List @@ Expand[(1 + x + y + z + t)^9] /. a_Integer*b_ :> b) // Short // Timing
Out[259]= {0.016, {1, t, t^2, t^3, t^4, t^5, t^6, t^7, t^8, t^9, x, <<694>>, x^2 z^7, y z^7,
  t y z^7, x y z^7, y^2 z^7, z^8, t z^8, x z^8, y z^8, z^9}}
In[260]:= res1===res2
Out[260]= True
In this case, we observe a 1000x speedup, but generally the two methods just have different computational complexities. The above code is an application of a general and nice method, called Algebraic Programming. For an interesting discussion of it in the context of Mathematica, see this Mathematica Journal paper by Andrzej Kozlowski.
Using @ruebenko's neat solution,
varsList[y_, n_?IntegerQ, k_?IntegerQ] :=
  Union[Times @@@
    Tuples[Prepend[Table[Subscript[y, i], {i, 1, n}], 1], k]]
you can generate the desired list via varsList[x, 4, 2].
Here is another method that I believe is worth knowing:
set = {1, x, y, z};
Union @@ Outer[Times, set, set]
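Wrapping the algebraic method above into a reusable function (a sketch of mine; the name monomials is arbitrary):
(* all monomials in the given variables up to total degree m *)
monomials[vars_List, m_Integer] :=
  List @@ Expand[(1 + Total[vars])^m] /. a_Integer*b_ :> b
monomials[{x, y, z}, 2]  (* same terms as the Tuples-based version above *)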

What is the most efficient way to construct large block matrices in Mathematica?

Inspired by Mike Bantegui's question on constructing a matrix defined as a recurrence relation, I wonder if there is any general guidance that could be given on setting up large block matrices in the least computation time. In my experience, constructing the blocks and then putting them together can be quite inefficient (thus my answer was actually slower than Mike's original code). Join and possibly ArrayFlatten are less efficient than they could be.
Obviously if the matrix is sparse, one can use SparseArray constructs, but there will be times when the block matrix you are constructing is not sparse.
What is best practice for this kind of problem? I am assuming the elements of the matrix are numeric.
The code shown below is available here: http://pastebin.com/4PWWxGhB. Just copy and paste it into a notebook to test it out.
I was actually trying out several functional ways of calculating matrices, since I figured the functional way (which is typically idiomatic in Mathematica) would be more efficient.
As one example, I had this matrix which was composed of two lists:
In: L = 1200;
e = Table[..., {2 L}];
f = Table[..., {2 L}];
h = Table[0, {2 L}, {2 L}];
Do[h[[i, i]] = e[[i]], {i, 1, L}];
Do[h[[i, i]] = e[[i - L]], {i, L + 1, 2 L}];
Do[h[[i, j]] = f[[i]] f[[j - L]], {i, 1, L}, {j, L + 1, 2 L}];
Do[h[[i, j]] = h[[j, i]], {i, 1, 2 L}, {j, 1, i}];
My first step was to time everything.
In: h = Table[0, {2 L}, {2 L}];
AbsoluteTiming[Do[h[[i, i]] = e[[i]], {i, 1, L}];]
AbsoluteTiming[Do[h[[i, i]] = e[[i - L]], {i, L + 1, 2 L}];]
AbsoluteTiming[
Do[h[[i, j]] = f[[i]] f[[j - L]], {i, 1, L}, {j, L + 1, 2 L}];]
AbsoluteTiming[Do[h[[i, j]] = h[[j, i]], {i, 1, 2 L}, {j, 1, i}];]
Out: {0.0020001, Null}
{0.0030002, Null}
{5.0012861, Null}
{4.0622324, Null}
DiagonalMatrix[...] was slower than the Do loops, so I decided to just use Do loops on the last step. As you'll see below, using Outer[Times, f, f] was much faster in this case.
I then wrote the equivalent using Outer for the blocks in the upper right and bottom left of the matrix, and DiagonalMatrix for the diagonal:
AbsoluteTiming[h1 = ArrayPad[Outer[Times, f, f], {{0, L}, {L, 0}}];]
AbsoluteTiming[h1 += Transpose[h1];]
AbsoluteTiming[h1 += DiagonalMatrix[Join[e, e]];]
Out: {0.9960570, Null}
{0.3770216, Null}
{0.0160009, Null}
The DiagonalMatrix was actually slower. I could replace this with just the Do loops, but I kept it because it was cleaner looking.
The current tally is 9.06 seconds for the naive Do loop, and 1.389 seconds for my next version using Outer and DiagonalMatrix. About a 6.5 times speedup, not too bad.
Sounds a lot faster now, doesn't it? Let's try using Compile.
In: cf = Compile[{{L, _Integer}, {e, _Real, 1}, {f, _Real, 1}},
Module[{h},
h = Table[0.0, {2 L}, {2 L}];
Do[h[[i, i]] = e[[i]], {i, 1, L}];
Do[h[[i, i]] = e[[i - L]], {i, L + 1, 2 L}];
Do[h[[i, j]] = f[[i]] f[[j - L]], {i, 1, L}, {j, L + 1, 2 L}];
Do[h[[i, j]] = h[[j, i]], {i, 1, 2 L}, {j, 1, i}];
h]];
AbsoluteTiming[cf[L, e, f];]
Out: {0.3940225, Null}
Now it's running 3.56 times faster than my last version, and 23.23 times faster than the first one. Next version:
In: cf = Compile[{{L, _Integer}, {e, _Real, 1}, {f, _Real, 1}},
Module[{h},
h = Table[0.0, {2 L}, {2 L}];
Do[h[[i, i]] = e[[i]], {i, 1, L}];
Do[h[[i, i]] = e[[i - L]], {i, L + 1, 2 L}];
Do[h[[i, j]] = f[[i]] f[[j - L]], {i, 1, L}, {j, L + 1, 2 L}];
Do[h[[i, j]] = h[[j, i]], {i, 1, 2 L}, {j, 1, i}];
h], CompilationTarget->"C", RuntimeOptions->"Speed"];
AbsoluteTiming[cf[L, e, f];]
Out: {0.1370079, Null}
Most of the speed came from CompilationTarget -> "C". Here I got another 2.84x speedup over the fastest version, and a 66.13x speedup over the first version. But all I did was just compile it!
Now, this is a very simple example. But this is real code I'm using to solve a problem in condensed matter physics. So don't dismiss it as possibly being a "toy example."
How about another example of a technique we can use? I have a relatively simple matrix to build up: ones along the diagonal from the start up to some arbitrary point k, and zeros everywhere else. The naive way may look something like this:
In: k = L;
AbsoluteTiming[p = Table[If[i == j && j <= k, 1, 0], {i, 2L}, {j, 2L}];]
Out: {5.5393168, Null}
Instead, let's build it up using ArrayPad and IdentityMatrix:
In: AbsoluteTiming[ArrayPad[IdentityMatrix[k], {{0, 2 L - k}, {0, 2 L - k}}];]
Out: {0.0140008, Null}
This actually doesn't work for k = 0, but you can special-case that if you need to. Furthermore, depending on the size of k, this can be faster or slower. It's always faster than the Table[...] version, though.
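Here is a hedged sketch of handling that special case (the helper name onesDiag is mine):
onesDiag[0, L_Integer] := ConstantArray[0, {2 L, 2 L}]
onesDiag[k_Integer, L_Integer] :=
  ArrayPad[IdentityMatrix[k], {{0, 2 L - k}, {0, 2 L - k}}]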
You could even write this using SparseArray:
In: AbsoluteTiming[SparseArray[{i_, i_} /; i <= k -> 1, {2 L, 2 L}];]
Out: {0.0040002, Null}
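If a dense matrix is needed downstream, the sparse result converts directly with Normal (worth remembering that the timing above excludes this conversion):
dense = Normal[SparseArray[{i_, i_} /; i <= k -> 1, {2 L, 2 L}]];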
I could go on about some other things, but I'm afraid if I do I'll make this answer unreasonably large. I've accumulated a number of techniques for forming these various matrices and lists in the time I spent trying to optimize some code. The base code I worked with took over 6 days for one calculation to run, and now it takes only 6 hours to do the same thing.
I'll see if I can pick out the general techniques I've come up with and just stick them in a notebook to use.
TL;DR: It seems like for these cases, the functional way outperforms the procedural way. But when compiled, the procedural code outperforms the functional code.
Looking at what Compile does to Do loops is instructive. Consider this:
L=1200;
Do[.7, {i, 1, 2 L}, {j, 1, i}] // Timing
Do[.3 + .4, {i, 1, 2 L}, {j, 1, i}] // Timing
Do[.3 + .4 + .5, {i, 1, 2 L}, {j, 1, i}] // Timing
Do[.3 + .4 + .5 + .8, {i, 1, 2 L}, {j, 1, i}] // Timing
(*
{0.390163, Null}
{1.04115, Null}
{1.95333, Null}
{2.42332, Null}
*)
First, it seems safe to assume that Do does not automatically compile its argument if it's over some length (as Map, Nest, etc. do): you can keep adding constants, and the derivative of time taken vs. number of constants is constant. This is further supported by the nonexistence of such an option in SystemOptions["CompileOptions"].
Next: this loops around n(n+1)/2 times with n = 2 L, so around 3*10^6 times for our L = 1200, and the time taken for each addition indicates that there is a lot more going on than is necessary.
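For reference, the exact iteration count:
With[{n = 2*1200}, n (n + 1)/2]  (* 2881200, i.e. roughly 3*10^6 iterations *)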
Next let us try
Compile[{{L, _Integer}}, Do[.7, {i, 1, 2 L}, {j, 1, i}]]@1200 // Timing
Compile[{{L, _Integer}}, Do[.7 + .7, {i, 1, 2 L}, {j, 1, i}]]@1200 // Timing
Compile[{{L, _Integer}}, Do[.7 + .7 + .7 + .7, {i, 1, 2 L}, {j, 1, i}]]@1200 // Timing
(*
{0.032081, Null}
{0.032857, Null}
{0.032254, Null}
*)
So here things are more reasonable. Let's take a look:
Needs["CompiledFunctionTools`"]
f1 = Compile[{{L, _Integer}},
Do[.7 + .7 + .7 + .7, {i, 1, 2 L}, {j, 1, i}]];
f2 = Compile[{{L, _Integer}}, Do[2.8, {i, 1, 2 L}, {j, 1, i}]];
CompilePrint[f1]
CompilePrint[f2]
the two CompilePrints give the same output, namely,
1 argument
9 Integer registers
Underflow checking off
Overflow checking off
Integer overflow checking on
RuntimeAttributes -> {}
I0 = A1
I5 = 0
I2 = 2
I1 = 1
Result = V255
1 I4 = I2 * I0
2 I6 = I5
3 goto 8
4 I7 = I6
5 I8 = I5
6 goto 7
7 if[ ++ I8 < I7] goto 7
8 if[ ++ I6 < I4] goto 4
9 Return
f1==f2 returns True.
Now, do
f5 = Compile[{{L, _Integer}}, Block[{t = 0.},
Do[t = Sin[i*j], {i, 1, 2 L}, {j, 1, i}]; t]];
f6 = Compile[{{L, _Integer}}, Block[{t = 0.},
Do[t = Sin[.45], {i, 1, 2 L}, {j, 1, i}]; t]];
CompilePrint[f5]
CompilePrint[f6]
I won't show the full listings, but in the first there is a line R3 = Sin[ R1] while in the second there is an assignment to a register R1 = 0.43496553411123023 (which, however, is reassigned in the innermost part of the loop by R2 = R1; perhaps if we output to C this will be optimized by gcc eventually).
So, in these very simple cases, uncompiled Do just blindly executes the body without inspecting it, while Compile does do various simple optimizations (in addition to outputting byte code). While here I am choosing examples that exaggerate how literally Do interprets its argument, this kind of thing partly explains the large speedup after compiling.
As for the huge speedup in Mike Bantegui's question yesterday, I think the speedup in such simple problems (just looping and multiplying things) is because there is no reason that automatically produced C code can't be optimized by the compiler to get things running as fast as possible. The C code produced is too hard to understand for me, but the bytecode is readable and I don't think that there is anything all that wasteful. So it is not that shocking that it is so fast when compiled to C. Using built-in functions shouldn't be any faster than that, since there shouldn't be any difference in the algorithm (if there is, the Do loop shouldn't have been written that way).
All this should be checked case by case, of course. In my experience, Do loops are usually the fastest way to go for this kind of operation. However, compilation has its limits: if you are producing large objects and trying to pass them around between two compiled functions (as arguments), the bottleneck can be this transfer. One solution is to simply put everything into one giant function and compile that; this ends up being harder and harder to do (you are forced to write C in mma, so to speak). Or you can try compiling the individual functions and using CompilationOptions -> {"InlineCompiledFunctions" -> True} in the Compile. Things can get tricky very fast, though.
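A minimal sketch of that inlining option (the function names here are mine; CompilePrint from CompiledFunctionTools`, used above, can show whether the inner call was actually inlined or goes through MainEvaluate):
inner = Compile[{{x, _Real}}, x^2 + 1.];
outer = Compile[{{x, _Real}}, inner[x] + inner[2. x],
   CompilationOptions -> {"InlineCompiledFunctions" -> True}];
outer[1.5]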
But this is getting too long.

Extract contours from ContourPlot in Mathematica

I have a function f(x,y) of two variables, and I need to know the location of the curves along which it crosses zero. ContourPlot does that very efficiently (that is: it uses clever multi-grid methods, not just a brute-force fine-grained scan) but just gives me a plot. I would like to have a set of values {x,y} (with some specified resolution), or perhaps some interpolating function, which allows me to get access to the location of these contours.
I have thought of extracting this from the FullForm of ContourPlot, but this seems to be a bit of a hack. Any better way to do this?
If you end up extracting points from ContourPlot, this is one easy way to do it:
points = Cases[
Normal@ContourPlot[Sin[x] Sin[y] == 1/2, {x, -3, 3}, {y, -3, 3}],
Line[pts_] -> pts,
Infinity
]
Join @@ points (* if you don't want disjoint components to be separate *)
EDIT
It appears that ContourPlot does not produce very precise contours. They're of course meant for plotting and are good enough for that, but the points don't lie precisely on the contours:
In[78]:= Take[Join @@ points /. {x_, y_} -> Sin[x] Sin[y] - 1/2, 10]
Out[78]= {0.000163608, 0.0000781187, 0.000522698, 0.000516078,
0.000282781, 0.000659909, 0.000626086, 0.0000917416, 0.000470424,
0.0000545409}
We can try to come up with our own method to trace the contour, but it's a lot of trouble to do it in a general way. Here's a concept that works for smoothly varying functions that have smooth contours:
1. Start from some point (pt0), and find the intersection with the contour along the gradient of f.
2. Now we have a point on the contour. Move along the tangent of the contour by a fixed step (resolution), then repeat from step 1.
Here's a basic implementation that only works with functions that can be differentiated symbolically:
rot90[{x_, y_}] := {y, -x}
step[f_, pt : {x_, y_}, pt0 : {x0_, y0_}, resolution_] :=
Module[
{grad, grad0, t, contourPoint},
grad = D[f, {pt}];
grad0 = grad /. Thread[pt -> pt0];
contourPoint =
grad0 t + pt0 /. First@FindRoot[f /. Thread[pt -> grad0 t + pt0], {t, 0}];
Sow[contourPoint];
grad = grad /. Thread[pt -> contourPoint];
contourPoint + rot90[grad] resolution
]
result = Reap[
NestList[step[Sin[x] Sin[y] - 1/2, {x, y}, #, .5] &, {1, 1}, 20]
];
ListPlot[{result[[1]], result[[-1, 1]]}, PlotStyle -> {Red, Black},
Joined -> True, AspectRatio -> Automatic, PlotMarkers -> Automatic]
The red points are the "starting points", while the black points are the trace of the contour.
EDIT 2
Perhaps it's an easier and better solution to use a similar technique to make the points that we get from ContourPlot more precise. Start from the initial point, then move along the gradient until we intersect the contour.
Note that this implementation will also work with functions that can't be differentiated symbolically. Just define the function as f[x_?NumericQ, y_?NumericQ] := ... if this is the case.
f[x_, y_] := Sin[x] Sin[y] - 1/2
refine[f_, pt0 : {x_, y_}] :=
Module[{grad, t},
grad = N[{Derivative[1, 0][f][x, y], Derivative[0, 1][f][x, y]}];
pt0 + grad*t /. FindRoot[f @@ (pt0 + grad*t), {t, 0}]
]
points = Join @@ Cases[
Normal@ContourPlot[f[x, y] == 0, {x, -3, 3}, {y, -3, 3}],
Line[pts_] -> pts,
Infinity
]
refine[f, #] & /@ points
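As a quick sanity check (my addition), the residuals of the refined points should drop to near machine epsilon:
Max[Abs[f @@@ (refine[f, #] & /@ points)]]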
A slight variation for extracting points from ContourPlot (possibly due to David Park):
pts = Cases[
ContourPlot[Cos[x] + Cos[y] == 1/2, {x, 0, 4 Pi}, {y, 0, 4 Pi}],
x_GraphicsComplex :> First@x, Infinity];
or (as a list of {x,y} points)
ptsXY = Cases[
Cases[ContourPlot[
Cos[x] + Cos[y] == 1/2, {x, 0, 4 Pi}, {y, 0, 4 Pi}],
x_GraphicsComplex :> First@x, Infinity], {x_, y_}, Infinity];
Edit
As discussed here, an article by Paul Abbott in the Mathematica Journal (Finding Roots in an Interval) gives the following two alternative methods for obtaining a list of {x,y} values from ContourPlot, including (!)
ContourPlot[...][[1, 1]]
For the above example
ptsXY2 = ContourPlot[
Cos[x] + Cos[y] == 1/2, {x, 0, 4 Pi}, {y, 0, 4 Pi}][[1, 1]];
and
ptsXY3 = Cases[
Normal#ContourPlot[
Cos[x] + Cos[y] == 1/2, {x, 0, 4 Pi}, {y, 0, 4 Pi}],
Line[{x__}] :> x, Infinity];
where
ptsXY2 == ptsXY == ptsXY3

How to draw a triangular grid of length n in Mathematica

I am wondering if anyone could please help me draw a triangular grid (equilateral) with edge length n in Mathematica. Thanks.
A Simple Grid:
p = Table[Table[
    Polygon[{j - 1/2 i, i Sqrt[3]/2} + # & /@ {{0, 0}, {1/2, Sqrt[3]/2}, {1, 0}}],
    {j, i, 9}], {i, 0, 9}];
Graphics[{EdgeForm[Black], FaceForm[White], p}]
Edit
A clearer version, I guess:
s3 = Sqrt[3];
templateTriangleVertex = {{0, 0}, {1, s3}, {2, 0}};
p = Table[Table[
    Polygon[{2 j - i, s3 i} + # & /@ templateTriangleVertex],
    {j, i, 9}], {i, 0, 9}];
Graphics[{EdgeForm[Black], FaceForm[White], p}]
Something like this?
(source: yaroslavvb.com)
This is the code I used. It is perhaps too complicated for the specific task above; it's part of code I had for visualizing integer lattices like this:
A = Sqrt[2/3] {Cos[#], Sin[#], Sqrt[1/2]} & /@
    Table[Pi/2 + 2 Pi/3 + 2 k Pi/3, {k, 0, 2}] // Transpose;
p2r[{x_, y_, z_}] := Most[A.{x, y, z}];
n = 10;
types = 1/n Permutations /@ IntegerPartitions[n, {3}, Range[1, n]] //
    Flatten[#, 1] &;
points = p2r /@ types;
Needs["ComputationalGeometry`"]
Graphics[{EdgeForm[Black], FaceForm[Transparent],
GraphicsComplex[points,
Polygon /@ DelaunayTriangulation[points // N][[All, 2]]]}]
What this does:
types contains all 3-tuples of positive integers that add up to n; those points lie on a 2-dimensional subspace of R^3.
A is a linear transformation that rotates those 3-tuples into the x-y plane.
Delaunay triangulation finds all triangles connecting nearby points.
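A quick check of the projection (my addition): the {1, 1, 1} lattice direction should map to the plane's origin, since cosines and sines at angles 120 degrees apart sum to zero.
p2r[{1, 1, 1}]  (* {0, 0}: the (1,1,1) direction is projected away *)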
Here is a variation on belisarius' method.
p = Table[{2 j - i, Sqrt[3] i}, {i, 0, 9}, {j, i, 9}]
Graphics[Line@Join[p, Riffle @@@ Partition[p, 2, 1]]]
