I need to solve a differential equation of the form w'=g(t,w(t)) where g is defined as follows
g[t_, w_] := {f1[t, {w[[3]], w[[4]]}], f2[t, {w[[3]], w[[4]]}], w[[1]],w[[2]]};
and f1, f2 are
f1[t_, y_] := Sum[\[Mu][[i]] (s[[i]] - y)/Norm[s[[i]] - y]^2, {i, 1, 5}][[1]];
f2[t_, y_] := Sum[\[Mu][[i]] (s[[i]] - y)/Norm[s[[i]] - y]^2, {i, 1, 5}][[2]];
Everything else is defined properly and is not the cause of the error.
Yet when I use
sout = NDSolve[{y'[tvar] == g[tvar, y[tvar]],
y[0] == {Cos[Pi/6], Sin[Pi/6], 0, 0}}, y, {tvar, 0, 2}, Method -> "ExplicitRungeKutta"];
I get the error
Part::partw: Part 3 of y[tvar] does not exist.
Part::partw: Part 4 of y[tvar] does not exist.
I have looked in other questions and none of them solved this problem.
You want to find a function in $R^4$ satisfying a differential equation.
I don't think DSolve and NDSolve have a standard way of manipulating vectorial differential equations except by representing each component with an explicit name or an index for the dimensions.
Here is a working example in dimension 4, with notation similar to your problem, that can be executed without a Method specification:
sout={w1[t],w2[t],w3[t],w4[t]} /. NDSolve[{
w1'[t]== t*w2[t],
w2'[t]== 2*t*w1[t],
w3'[t]==-2*w2[t]+w1[t],
w4'[t]== t*w3[t]-w1[t]+w2[t],
w1[0]==0,
w2[0]==1,
w3[0]==1,
w4[0]==0
},{w1[t],w2[t],w3[t],w4[t]},{t,0,2}]
ParametricPlot[{{sout[[1, 1]], sout[[1, 3]]}, {sout[[1, 2]], sout[[1, 4]]}}, {t, 0, 2}]
I think you will be able to adapt this working example to your needs.
I didn't use your original problem because I wanted to focus on the specification for Mathematica, not on the mathematics of your equation; there are also constants involved, such as \[Mu] and s, that you do not give.
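For instance, a rough, untested sketch of how you might adapt it to your g, using made-up placeholder values for \[Mu] and s (which the post does not give), could look like this:
\[Mu] = {1, 1, 1, 1, 1};                          (* placeholder values, not from the post *)
s = {{1, 0}, {0, 1}, {-1, 0}, {0, -1}, {1, 1}};   (* placeholder values, not from the post *)
(* restricting f1, f2 to numeric arguments keeps them from evaluating prematurely *)
f1[t_?NumericQ, y_ /; VectorQ[y, NumericQ]] :=
  Sum[\[Mu][[i]] (s[[i]] - y)[[1]]/Norm[s[[i]] - y]^2, {i, 1, 5}];
f2[t_?NumericQ, y_ /; VectorQ[y, NumericQ]] :=
  Sum[\[Mu][[i]] (s[[i]] - y)[[2]]/Norm[s[[i]] - y]^2, {i, 1, 5}];
sout = NDSolve[{
    w1'[t] == f1[t, {w3[t], w4[t]}],
    w2'[t] == f2[t, {w3[t], w4[t]}],
    w3'[t] == w1[t],
    w4'[t] == w2[t],
    w1[0] == Cos[Pi/6], w2[0] == Sin[Pi/6], w3[0] == 0, w4[0] == 0},
   {w1, w2, w3, w4}, {t, 0, 2}, Method -> "ExplicitRungeKutta"];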
I need to find a square matrix A satisfying the equation
A.L.A = -17/18 A - 2 (A.L.L + L.A.L + (L.L).A) + 3 (A.L + L.A) - 4 L.L.L + 8 L.L - 44/9 L + 8/9 (ID)
where L is the diagonal matrix L = {{2/3,0,0,0},{0,5/12,0,0},{0,0,11/12,0},{0,0,0,2/3}}.
I can find the answers in the cases where A is of dimension 2 or 3, but there is a problem in dimension 4 and above.
Actually, the matrix A also has to satisfy the equation A.A = A, but with a suitable matrix L the equation above is enough.
This is my code:
A = Table[a[i,j],{i,1,4},{j,1,4}]
B = A.L.A
ID = IdentityMatrix[4]
M = -17/18A -2(A.L.L + L.A.L + (L.L).A) + 3(A.L + L.A) -4L.L.L + 8L.L - 44/9L + 8/9*(ID)
diff = (B - M)//ExpandAll//Flatten  (* so I get a system of 16 nonlinear equations here *)
A1 = A/.Solve[diff == 0][[1]]
After running this code for quite some time, an error comes up saying there is not enough memory to compute.
In this case there are 16 equations and 16 unknowns. Some of the entries turn out to be free parameters, but I do not know which ones until I get the result.
I am not sure if there is any way to solve this problem. I need the answer to be rational (probably integers), which is theoretically possible.
Could this problem be solved as a matrix equation, or by some other method? One problem I see is that there are too many equations and variables.
This evaluates fairly quickly and with modest memory for a problem this size.
L = {{2/3, 0, 0, 0}, {0, 5/12, 0, 0}, {0, 0, 11/12, 0}, {0, 0, 0, 2/3}};
A = {{a, b, c, d}, {e, f, g, h}, {i, j, k, l}, {m, n, o, p}};
Reduce[{A.L.A == -17/18 A - 2 (A.L.L + L.A.L + (L.L).A) + 3 (A.L + L.A) -
4 L.L.L + 8 L.L - 44/9 L + 8/9*IdentityMatrix[4]},
{a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p}, Backsubstitution->True
]
Then you just have to sort through the 143 potential solutions that it returns.
You might be able to Select from those that satisfy your A.A==A. You can also use ToRules on the result returned from Reduce to put this into a form similar to that returned from Solve, but check this carefully to make certain it is doing what you expect.
Check this very carefully to make certain I haven't made any mistakes.
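For example, a rough sketch (not tested against your actual output) of how ToRules and Select might be combined, assuming the Reduce result above was assigned to a variable called sols:
rules = {ToRules[sols]};                               (* one rule list per potential solution *)
candidates = A /. rules;                               (* the corresponding matrices *)
idempotent = Select[candidates, Simplify[#.# == #] &]; (* keep only those that also satisfy A.A == A *)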
I have a question about NDSolve function in Mathematica.
I have an oscillator defined by these two equations:
x' = v
v' = -x - u*v^3
where u is some constant.
How do I set up an NDSolve call that solves this? I tried the following code (it has to depend on time) but it doesn't work:
eq1 = x'[t] == v;
eq2 = v' == -x[t] - u*v^3;
eq3 = x[0] == 2;
(initial displacement is 2m).
s = NDSolve[{eq1, eq2, eq3}, x, {t, 0, 30}]
Thank you very much...
You need to observe that the first equation, once differentiated with respect to t, gives v'[t] == x''[t], which can be substituted into the second equation. The second equation then becomes a second-order ODE in x and needs one extra initial condition; we will give
v[0]==x'[0]==some number
Then, after solving this ODE for x, you can recover v[t] == x'[t].
I give you the solution in terms of a Manipulate so that the situation becomes geometrically clear to you.
(* First equation *)
v[t] = x'[t];
(*
Differentiate this equation once and substitute
for v[t] in the second equation
*)
Manipulate[
With[{u = Constant, der = derval},
res = NDSolve[{x''[t] == -x[t] - u*x'[t]^3, x[0.] == 2,x'[0.] == der},
x, {t, 0., 30.}] // First;
Plot[Evaluate[{x[t], v[t]} /. res], {t, 0, 30}, PlotRange -> All,
Frame -> True,Axes -> None, ImageSize -> 600]
],
{{Constant, 0., TraditionalForm@(u)}, 0., 3, .1},
{{derval, -3., TraditionalForm@(v[0] == x'[0])}, -3, 3, .1}
]
Hope this helps you, but next time, before you ask, you should brush up on the theory first: as you can see, the question you asked concerns very basic and elementary mathematics, not Mathematica programming. Good luck!!
You need to specify a numeric value for your u as well as an initial condition for v[t]:
u=1.0;
solution=NDSolve[{x'[t]==v[t], v'[t]==-x[t]-u v[t]^3,x[0]==2,v[0]==-1},{x,v},{t,0,1}]
Plot[{solution[[1,1,2]][t],solution[[1,2,2]][t]},{t,0,1}]
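If the positional indexing solution[[1,1,2]] feels fragile, an equivalent sketch using replacement rules (same result, assuming the NDSolve call above succeeded) is:
{xSol, vSol} = {x, v} /. First[solution];  (* pull out the two InterpolatingFunctions by name *)
Plot[{xSol[t], vSol[t]}, {t, 0, 1}]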
I am trying to get Mathematica to approximate an integral that is a function of various parameters. I don't need it to be extremely precise -- the answer will be a fraction, and 5 digits would be nice, but I'd settle for as few as 2.
The problem is that there is a symbolic integral buried in the main integral, and I can't use NIntegrate on it since it's symbolic.
F[x_, c_] := (1 - (1 - x)^c)^c;
a[n_, c_, x_] := F[a[n - 1, c, x], c];
a[0, c_, x_] = x;
MyIntegral[n_,c_] :=
NIntegrate[Integrate[(D[a[n,c,y],y]*y)/(1-a[n,c,x]),{y,x,1}],{x,0,1}]
Mathematica starts hanging when n is greater than 2 and c is greater than 3 or so (generally as both n and c get a little higher).
Are there any tricks for rewriting this expression so that it can be evaluated more easily? I've played with different WorkingPrecision and AccuracyGoal and PrecisionGoal options on the outer NIntegrate, but none of that helps the inner integral, which is where the problem is. In fact, for the higher values of n and c, I can't even get Mathematica to expand the inner derivative, i.e.
Expand[D[a[4,6,y],y]]
hangs.
I am using Mathematica 8 for Students.
If anyone has any tips for how I can get Mathematica to approximate this, I would appreciate it.
Since you only want a numerical output (or that's what you'll get anyway), you can convert the symbolic integration into a numerical one using just NIntegrate as follows:
Clear[a,myIntegral]
a[n_Integer?Positive, c_Integer?Positive, x_] :=
a[n, c, x] = (1 - (1 - a[n - 1, c, x])^c)^c;
a[0, c_Integer, x_] = x;
myIntegral[n_, c_] :=
NIntegrate[D[a[n, c, y], y]*y/(1 - a[n, c, x]), {x, 0, 1}, {y, x, 1},
WorkingPrecision -> 200, PrecisionGoal -> 5]
This is much faster than performing the integration symbolically. Here's a comparison:
yoda:
myIntegral[2,2]//Timing
Out[1]= {0.088441, 0.647376595...}
myIntegral[5,2]//Timing
Out[2]= {1.10486, 0.587502888...}
rcollyer:
MyIntegral[2,2]//Timing
Out[3]= {1.0029, 0.647376}
MyIntegral[5,2]//Timing
Out[4]= {27.1697, 0.587503006...}
(* Obtained with WorkingPrecision->500, PrecisionGoal->5, MaxRecursion->20 *)
Jand's function has timings similar to rcollyer's. Of course, as you increase n, you will have to increase your WorkingPrecision way higher than this, as you've experienced in your previous question. Since you said you only need about 5 digits of precision, I've explicitly set PrecisionGoal to 5. You can change this as per your needs.
To codify the comments, I'd try the following. First, to eliminate infinite recursion with regards to the variable, n, I'd rewrite your functions as
F[x_, c_] := (1 - (1-x)^c)^c;
(* see note below *)
a[n_Integer?Positive, c_, x_] := F[a[n - 1, c, x], c];
a[0, c_, x_] = x;
that way n==0 will actually be a stopping point. The ?Positive form is a PatternTest, and useful for applying additional conditions to the parameters. I suspect the issue is that NIntegrate is re-evaluating the inner Integrate for every value of x, so I'd pull that evaluation out, like
MyIntegral[n_,c_] :=
With[{ int = Integrate[(D[a[n,c,y],y]*y)/(1-a[n,c,x]),{y,x,1}] },
NIntegrate[int,{x,0,1}]
]
where With is one of several scoping constructs specifically for creating local constants.
Your comments indicate that the inner integral takes a long time; have you tried simplifying the integrand, since it is a derivative of a times a function of a? It looks like the result of a chain-rule expansion to me.
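For what it's worth, here is an untested sketch of that chain-rule idea, using a hypothetical helper dF for the derivative of F with respect to its first argument:
dF[x_, c_] := c^2 (1 - x)^(c - 1) (1 - (1 - x)^c)^(c - 1);   (* d/dx of (1 - (1 - x)^c)^c *)
da[0, c_, y_] = 1;
da[n_Integer?Positive, c_, y_] :=
  da[n, c, y] = dF[a[n - 1, c, y], c] da[n - 1, c, y];        (* chain rule: F'[a[n-1]] * a[n-1]' *)
That way D[a[n, c, y], y] is built up as a product of n factors evaluated along the iterates instead of being expanded symbolically.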
Note: as per Yoda's suggestion in the comments, you can add a caching, or memoization, mechanism to a. Change its definition to
d:a[n_Integer?Positive, c_, x_] := d = F[a[n - 1, c, x], c];
The trick here is that in d : a[...], d is a named pattern that is used again in d = F[...], caching the value of a for those particular parameter values.
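A quick (hypothetical) way to see the cache at work is to watch DownValues[a] grow:
Length @ DownValues[a]   (* just the two general definitions *)
a[2, 2, y];              (* builds and stores a[1, 2, y] and a[2, 2, y] *)
Length @ DownValues[a]   (* larger now: the cached special cases have been added *)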
I'm trying to figure out how to use Mathematica to solve systems of equations where some of the variables and coefficients are vectors. A simple example would be something like
A + t*V == t*P
where I know A, V, and the magnitude of P, and I have to solve for t and the direction of P. (Basically, given two rays A and B, where I know everything about A but only the origin and magnitude of B, figure out what the direction of B must be such that it intersects A.)
Now, I know how to solve this sort of thing by hand, but that's slow and error-prone, so I was hoping I could use Mathematica to speed things along and error-check me. However, I can't see how to get Mathematica to symbolically solve equations involving vectors like this.
I've looked in the VectorAnalysis package, without finding anything there that seems relevant; meanwhile the Linear Algebra package only seems to have a solver for linear systems (which this isn't, since I don't know t or P, just |P|).
I tried doing the simpleminded thing: expanding the vectors into their components (pretend they're 3D) and solving them as if I were trying to equate two parametric functions,
Solve[
{ Function[t, {Bx + Vx*t, By + Vy*t, Bz + Vz*t}][t] ==
Function[t, {Px*t, Py*t, Pz*t}][t],
Px^2 + Py^2 + Pz^2 == Q^2 } ,
{ t, Px, Py, Pz }
]
but the "solution" that spits out is a huge mess of coefficients and congestion. It also forces me to expand out each of the dimensions I feed it.
What I want is a nice symbolic solution in terms of dot products, cross products, and norms:
But I can't see how to tell Solve that some of the coefficients are vectors instead of scalars.
Is this possible? Can Mathematica give me symbolic solutions on vectors? Or should I just stick with No.2 Pencil technology?
(Just to be clear, I'm not interested in the solution to the particular equation at top -- I'm asking if I can use Mathematica to solve computational geometry problems like that generally without my having to express everything as an explicit matrix of {Ax, Ay, Az}, etc.)
With Mathematica 7.0.1.0
Clear[A, V, P];
A = {1, 2, 3};
V = {4, 5, 6};
P = {P1, P2, P3};
Solve[A + V t == P, P]
outputs:
{{P1 -> 1 + 4 t, P2 -> 2 + 5 t, P3 -> 3 (1 + 2 t)}}
Typing out P = {P1, P2, P3} can be annoying if the array or matrix is large.
Clear[A, V, PP, P];
A = {1, 2, 3};
V = {4, 5, 6};
PP = Array[P, 3];
Solve[A + V t == PP, PP]
outputs:
{{P[1] -> 1 + 4 t, P[2] -> 2 + 5 t, P[3] -> 3 (1 + 2 t)}}
Matrix vector inner product:
Clear[A, xx, bb];
A = {{1, 5}, {6, 7}};
xx = Array[x, 2];
bb = Array[b, 2];
Solve[A.xx == bb, xx]
outputs:
{{x[1] -> 1/23 (-7 b[1] + 5 b[2]), x[2] -> 1/23 (6 b[1] - b[2])}}
Matrix multiplication:
Clear[A, BB, d];
A = {{1, 5}, {6, 7}};
BB = Array[B, {2, 2}];
d = {{6, 7}, {8, 9}};
Solve[A.BB == d]
outputs:
{{B[1, 1] -> -(2/23), B[2, 1] -> 28/23, B[1, 2] -> -(4/23), B[2, 2] -> 33/23}}
The dot product has a built-in infix notation; just use a period for the dot.
I do not think the cross product does, however. This is how you can use the Notation package to make one; "X" will become our infix form of Cross. I suggest copying the example from the Notation, Symbolize and InfixNotation tutorial. Also use the Notation Palette, which helps abstract away some of the Box syntax.
Clear[X]
Needs["Notation`"]
Notation[x_ X y_\[DoubleLongLeftRightArrow]Cross[x_, y_]]
Notation[NotationTemplateTag[
RowBox[{x_, , X, , y_, }]] \[DoubleLongLeftRightArrow]
NotationTemplateTag[RowBox[{ ,
RowBox[{Cross, [,
RowBox[{x_, ,, y_}], ]}]}]]]
{a, b, c} X {x, y, z}
outputs:
{-c y + b z, c x - a z, -b x + a y}
The above looks horrible but when using the Notation Palette it looks like:
Clear[X]
Needs["Notation`"]
Notation[x_ X y_\[DoubleLongLeftRightArrow]Cross[x_, y_]]
{a, b, c} X {x, y, z}
I have run into some quirks using the Notation package in past versions of Mathematica, so be careful.
I don't have a general solution for you by any means (MathForum may be the better way to go), but there are some tips that I can offer you. The first is to do the expansion of your vectors into components in a more systematic way. For instance, I would solve the equation you wrote as follows.
rawSol = With[{coords = {x, y, z}},
Solve[
Flatten[
     {A[#] + V[#] t == P[#] t & /@ coords,
      Total[P[#]^2 & /@ coords] == P^2}],
Flatten[{t, P /# coords}]]];
Then you can work with the rawSol variable more easily. Next, because you are referring the vector components in a uniform way (always matching the Mathematica pattern v_[x|y|z]), you can define rules that will aid in simplifying them. I played around a bit before coming up with the following rules:
vectorRules =
{forms___ + vec_[x]^2 + vec_[y]^2 + vec_[z]^2 :> forms + vec^2,
forms___ + c_. v1_[x]*v2_[x] + c_. v1_[y]*v2_[y] + c_. v1_[z]*v2_[z] :>
forms + c v1\[CenterDot]v2};
These rules will simplify the relationships for vector norms and dot products (cross-products are left as a likely painful exercise for the reader). EDIT: rcollyer pointed out that you can make c optional in the rule for dot products, so you only need two rules for norms and dot products.
With these rules, I was immediately able to simplify the solution for t into a form very close to yours:
In[3] := t /. rawSol //. vectorRules // Simplify // InputForm
Out[3] = {(A \[CenterDot] V - Sqrt[A^2*(P^2 - V^2) +
(A \[CenterDot] V)^2])/(P^2 - V^2),
(A \[CenterDot] V + Sqrt[A^2*(P^2 - V^2) +
(A \[CenterDot] V)^2])/(P^2 - V^2)}
Like I said, it's not a complete way of solving these kinds of problems by any means, but if you're careful about casting the problem into terms that are easy to work with from a pattern-matching and rule-replacement standpoint, you can go pretty far.
I've taken a somewhat different approach to this issue. I've made some definitions for working with symbolic vector expressions.
Patterns that are known to be vector quantities may be specified using vec[_], patterns that have an OverVector[] or OverHat[] wrapper (symbols with a vector or hat over them) are assumed to be vectors by default.
The definitions are experimental and should be treated as such, but they seem to work well. I expect to add to this over time.
Here are the definitions. They need to be pasted into a Mathematica Notebook cell and converted to StandardForm to see them properly.
Unprotect[vExpand,vExpand$,Cross,Plus,Times,CenterDot];
(* vec[pat] determines if pat is a vector quantity.
vec[pat] can be used to define patterns that should be treated as vectors.
Default: Patterns are assumed to be scalar unless otherwise defined *)
vec[_]:=False;
(* Symbols with a vector hat, or vector operations on vectors are assumed to be vectors *)
vec[OverVector[_]]:=True;
vec[OverHat[_]]:=True;
vec[u_?vec+v_?vec]:=True;
vec[u_?vec-v_?vec]:=True;
vec[u_?vec\[Cross]v_?vec]:=True;
vec[u_?VectorQ]:=True;
(* Placeholder for matrix types *)
mat[a_]:=False;
(* Anything not defined as a vector or matrix is a scalar *)
scal[x_]:=!(vec[x]\[Or]mat[x]);
scal[x_?scal+y_?scal]:=True;scal[x_?scal y_?scal]:=True;
(* Scalars times vectors are vectors *)
vec[a_?scal u_?vec]:=True;
mat[a_?scal m_?mat]:=True;
vExpand$[u_?vec\[Cross](v_?vec+w_?vec)]:=vExpand$[u\[Cross]v]+vExpand$[u\[Cross]w];
vExpand$[(u_?vec+v_?vec)\[Cross]w_?vec]:=vExpand$[u\[Cross]w]+vExpand$[v\[Cross]w];
vExpand$[u_?vec\[CenterDot](v_?vec+w_?vec)]:=vExpand$[u\[CenterDot]v]+vExpand$[u\[CenterDot]w];
vExpand$[(u_?vec+v_?vec)\[CenterDot]w_?vec]:=vExpand$[u\[CenterDot]w]+vExpand$[v\[CenterDot]w];
vExpand$[s_?scal (u_?vec\[Cross]v_?vec)]:=Expand[s] vExpand$[u\[Cross]v];
vExpand$[s_?scal (u_?vec\[CenterDot]v_?vec)]:=Expand[s] vExpand$[u\[CenterDot]v];
vExpand$[Plus[x__]]:=vExpand$/@Plus[x];
vExpand$[s_?scal,Plus[x__]]:=Expand[s](vExpand$/@Plus[x]);
vExpand$[Times[x__]]:=vExpand$/@Times[x];
vExpand[e_]:=e//.e:>Expand[vExpand$[e]]
(* Some simplification rules *)
(u_?vec\[Cross]u_?vec):=\!\(\*OverscriptBox["0", "\[RightVector]"]\);
(u_?vec+\!\(\*OverscriptBox["0", "\[RightVector]"]\)):=u;
0v_?vec:=\!\(\*OverscriptBox["0", "\[RightVector]"]\);
\!\(\*OverscriptBox["0", "\[RightVector]"]\)\[CenterDot]v_?vec:=0;
v_?vec\[CenterDot]\!\(\*OverscriptBox["0", "\[RightVector]"]\):=0;
(a_?scal u_?vec)\[Cross]v_?vec :=a u\[Cross]v;u_?vec\[Cross](a_?scal v_?vec ):=a u\[Cross]v;
(a_?scal u_?vec)\[CenterDot]v_?vec :=a u\[CenterDot]v;
u_?vec\[CenterDot](a_?scal v_?vec) :=a u\[CenterDot]v;
(* Stealing behavior from Dot *)
Attributes[CenterDot]=Attributes[Dot];
Protect[vExpand,vExpand$,Cross,Plus,Times,CenterDot];
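A couple of hypothetical quick checks (not from the original post) of the simplification rules, entering vectors as symbols wrapped in OverVector:
OverVector[u]\[Cross]OverVector[u]            (* the u\[Cross]u rule returns the zero vector *)
(3 OverVector[a])\[CenterDot]OverVector[b]    (* the scalar is pulled out of the dot product *)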