I've just started to learn Mathematica, so forgive me if this is a simple question. I'm trying to find out why a Plot that contains an expression with ReplaceAll works differently from a Plot used after Set. I have:
Clear["Global`*"]
I0[t_] = HeavisidePi[(t - 1/2 10^-9)/10^-9];
sol = DSolve[{D[I2[t], t]*R == I1[t]/C0, I0[t] == I1[t] + I2[t],
I2[0] == 0}, {I1[t], I2[t]}, t];
I2 = I2[t] /. sol[[1]];
Plot[I2 /. {C0 -> 5*10^-12, R -> 500}, {t, -2 10^-9, 10^-8}]
C0 = 5*10^-12;
R = 500;
Plot[I2, {t, -2 10^-9, 10^-8}]
For some reason the first Plot gives the right answer and the second one a wrong one. I expected the same answer from both. What is the reason for the difference?
Yes, that's interesting. If t is substituted first, the value at t = 0.5*10^-9 is 0.181269, but if it stays symbolic until later the result is 0.402672:
a = Plot[
I2 /. {C0 -> 5*10^-12, R -> 500}, {t, -2 10^-9, 10^-8}];
b = Plot[Evaluate[
I2 /. {C0 -> 5*10^-12, R -> 500}], {t, -2 10^-9, 10^-8}];
x = 0.5* 10^-9;
c = I2 /. t -> x /. {C0 -> 5*10^-12, R -> 500}
0.181269
d = I2 /. {t -> x, C0 -> 5*10^-12, R -> 500}
0.402672
Show[{a, b, ListPlot[{{x, c}, {x, d}}]}, PlotRange -> All]
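As a general aside (this is not necessarily what is going on in this particular example, just an illustration): chained ReplaceAll applies its rules in separate passes, so a later rule can act on the result of an earlier one, whereas a single rule list is applied simultaneously. A minimal sketch with throwaway symbols u, v, w:
expr = u + v;
{expr /. u -> v /. v -> w, expr /. {u -> v, v -> w}}
(* {2 w, v + w} *)
So c and d above are not guaranteed to be the same expression even before any numerics enter.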
Here's the problem statement:
Two non-linear, inter-dependent, first-order initial-value differential equations were solved using NDSolve to yield a numerical (InterpolatingFunction) solution. The solution was used to calculate another parameter as a function of the same independent variable.
Let's say we have the ODEs as:
X'[t]=a*S[t]*X[t]/(b+S[t]
S'[t]=-a*S[t]*X[t]/(c(b+S[t])) where a,b,c are also known constants
X[0]=constant
S[0]=constant
soln = NDSolve[{X'[t]=a*S[t]*X[t]/(b+S[t],S'[t]=-a*S[t]*X[t]/(c(b+S[t])),X[0]=constant,S[0]=constant},{X,S},{t,0,50}]
The solution is of the form
{{X -> InterpolatingFunction[{{0., 50.}}, <>], S -> InterpolatingFunction[{{0., 50.}}, <>]}}
Now the new parameter is: Yvalue=(S[t]/.soln)+(X[t]/.soln)
I'm trying to figure out the correct syntax to calculate Yvalue for an entered t value.
Example: one needs to calculate Yvalue at t = 0.1, 0.56, 2.3, etc.
Thank you for your time.
Regards,
Ankur
NDSolve demands that all parameters be given specific numeric values. If you assign values to a, b, c, X[0], S[0], carefully match up all your parentheses, and use == versus = correctly, then this can work:
In[1]:= a = 2; b = 3; c = 5;
soln = NDSolve[{X'[t] == a*S[t]*X[t]/(b + S[t]),
S'[t] == -a*S[t]*X[t]/(c(b+S[t])), X[0]==7, S[0]==11}, {X,S}, {t,0,50}][[1]]
Out[2]= {X -> InterpolatingFunction[{{0.,50.}}, <>],
S -> InterpolatingFunction[{{0.,50.}}, <>]}
In[3]:= Yvalue = S[t] + X[t] /. soln /. t -> 0.1
Out[3]= 18.9506
In[4]:= Yvalue = S[t] + X[t] /. soln /. t -> 0.56
Out[4]= 25.6919
In[5]:= Yvalue = S[t] + X[t] /. soln /. t -> 2.3
Out[5]= 61.9823
and even
In[6]:= Plot[S[t] + X[t] /. soln, {t, 0, 50}, PlotRange -> {0, 70}]
Out[6]= ...PlotSnipped...
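As a small follow-up (my own wrapper, not part of the original answer): the three separate evaluations above can be collapsed into a single definition and mapped over a list of times.
(* hedged convenience wrapper; assumes soln from In[2] above *)
Yvalue[tval_] := S[tval] + X[tval] /. soln
Yvalue /@ {0.1, 0.56, 2.3}
(* should match Out[3] to Out[5] above *)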
Does anyone know the Mathematica code that will trace the graph below?
Here is the equation for the graph, a second-order linear differential equation with constant coefficients: m z''(t) + c z'(t) + k z(t) = a δ(t).
Here is the graph traced by this equation:
Quote from the book "Time Series Analysis and Forecasting by Example":
... where δ(t ) is an impulse (delta) function that, like a pea shot, at
time t = 0 forces the pendulum away from its equilibrium and a is the
size of the impact by the pea. It is easy to imagine that the curve
traced by this second order differential equation is a damped
sinusoidal function of time although, if the friction or viscosity is
sufficiently large, the (overdamped) pendulum may gradually come to
rest following an exponential curve without ever crossing the
centerline.
eq = m z''[t] + c z'[t] + k z[t] == a DiracDelta[t];
parms = {m -> 1, c -> .1, k -> 1, a -> 1};
sol = First@DSolve[{eq /. parms, z[0] == 1, z'[0] == 0}, z[t], t];
Plot[z[t] /. sol, {t, 0, 70}, PlotRange -> All, Frame -> True,
FrameLabel -> {{z[t], None}, {Row[{t, " (sec)"}], eq}},
GridLines -> Automatic]
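The book quote above also contrasts the underdamped and overdamped regimes. As a side sketch of my own (the c values here are illustrative; for m = k = 1 the motion is overdamped when c > 2), the same equation can be solved for a small and a large damping coefficient:
(* underdamped (c = 1/10, solid) vs. overdamped (c = 3, dashed) *)
damped = Table[
   z[t] /. First@DSolve[{eq /. {m -> 1, k -> 1, a -> 1, c -> cval},
       z[0] == 1, z'[0] == 0}, z[t], t], {cval, {1/10, 3}}];
Plot[Evaluate[damped], {t, 0, 40}, PlotRange -> All, Frame -> True,
 PlotStyle -> {Automatic, Dashed}]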
Notice that, for zero initial conditions, another option is to use the control-system functions in Mathematica, as follows:
parms = {m -> 10, c -> 1.2, k -> 4.3, a -> 1};
tf = TransferFunctionModel[a/(m s^2 + c s + k) /. parms, s]
sol = OutputResponse[tf, DiracDelta[t], t];
Plot[sol, {t, 0, 60}, PlotRange -> All, Frame -> True,
FrameLabel -> {{z[t], None}, {Row[{t, " (sec)"}], eq}},
GridLines -> Automatic]
Update
Strictly speaking, the result of DSolve above is not what a hand derivation of this problem gives. The correct solution should come out as follows
(see this also for reference)
The correct analytical solution is given by
z(t) = Exp[-ζ ω t] (u0 Cos[ωd t] + ((v0 + u0 ζ ω)/ωd) Sin[ωd t] + (a/(m ωd)) Sin[ωd t]),
where ω = Sqrt[k/m], ζ = c/(2 m ω), and ωd = ω Sqrt[1 - ζ^2],
which I derived for this problem and similar cases here (first chapter).
Using the above solution, the correct response will look like this:
parms = {m -> 1, c -> .1, k -> 1, a -> 1};
w = Sqrt[k/m];
z = c/(2 m w);
wd = w Sqrt[1 - z^2];
analytical =
Exp[-z w t] (u0 Cos[wd t] + (v0 + (u0 z w))/wd Sin[wd t] +
a/(m wd) Sin[wd t]);
analytical /. parms /. {u0 -> 1, v0 -> 0}
(* E^(-0.05 t) (Cos[0.998749 t] + 1.05131 Sin[0.998749 t]) *)
Plotting it:
Plot[analytical /. parms /. {u0 -> 1, v0 -> 0}, {t, 0, 70},
PlotRange -> All, Frame -> True,
FrameLabel -> {{y[t], None}, {Row[{t, " (sec)"}],
"analytical solution"}}, GridLines -> Automatic, ImageSize -> 300]
If you compare the above plot with the first one shown above using DSolve you can see the difference near t=0.
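To actually see that difference side by side, the two curves can be overlaid. A hedged sketch (dsol and y are my own names; the ODE is re-solved here because z was reassigned above as the damping ratio, and parms/analytical are assumed to be still defined from the code above):
dsol = First@DSolve[{(m y''[t] + c y'[t] + k y[t] == a DiracDelta[t]) /. parms,
     y[0] == 1, y'[0] == 0}, y[t], t];
Plot[{y[t] /. dsol, analytical /. parms /. {u0 -> 1, v0 -> 0}} // Evaluate,
 {t, 0, 10}, PlotRange -> All, Frame -> True, PlotStyle -> {Automatic, Dashed}]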
I need to find the fixed points of the iterative map x[n] == 1/2 x[n-1]^2 - Mu.
My approach:
Subscript[g, n_ ][Mu_, x_] := Nest[0.5 * x^2 - Mu, x, n]
fixedPoints[n_] := Solve[Subscript[g, n][Mu, x] == x, x]
Plot[
Evaluate[{x,
Table[Subscript[g, 1][Mu, x], {Mu, 0.5, 4, 0.5}]}
], {x, 0, 0.5}, Frame -> True]
I'll change notation slightly (mostly so I myself can understand it). You might want something like this.
y[n_, mu_, x_] := Nest[#^2/2 - mu &, x, n]
fixedPoints[n_] := Solve[y[n, mu, x] == x, x]
The salient feature is that the "function" being nested now really is a function, in correct format.
Example:
fixedPoints[2]
Out[18]= {{x -> -1 - Sqrt[-3 + 2*mu]},
{x -> -1 + Sqrt[-3 + 2*mu]},
{x -> 1 - Sqrt[ 1 + 2*mu]},
{x -> 1 + Sqrt[ 1 + 2*mu]}}
Daniel Lichtblau
First of all, there is an error in your approach: Nest takes a pure function. Also, I would use exact input, i.e. 1/2 instead of 0.5, since Solve is a symbolic rather than a numeric solver.
Subscript[g, n_Integer][Mu_, x_] := Nest[Function[z, 1/2 z^2 - Mu], x, n]
Then
In[17]:= fixedPoints[1]
Out[17]= {{x -> 1 - Sqrt[1 + 2 Mu]}, {x -> 1 + Sqrt[1 + 2 Mu]}}
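As a quick sanity check (my own addition, with Mu = 3/2 chosen arbitrarily), one iteration starting from the second fixed point should return it unchanged:
With[{Mu = 3/2}, Subscript[g, 1][Mu, 1 + Sqrt[1 + 2 Mu]] == 1 + Sqrt[1 + 2 Mu]]
(* True *)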
A side note:
Look what happens when you start very near to a fixed point (weird :) :
f[z_, Mu_, n_] := Abs[N@Nest[1/2 #^2 - Mu &, z, n] - z]
g[mu_] := f[1 + Sqrt[1 + 2*mu] - mu 10^-8, mu, 10^4]
Plot[g[mu], {mu, 0, 3}, PlotRange -> {0, 7}]
Edit
In fact, it seems you have a self-similar structure there:
I am doing a brute force search for "gradient extremals" on the following example function
fv[{x_, y_}] = ((y - (x/4)^2)^2 + 1/(4 (1 + (x - 1)^2)))/2;
This involves finding the following zeros
gecond = With[{g = D[fv[{x, y}], {{x, y}}], h = D[fv[{x, y}], {{x, y}, 2}]},
g.RotationMatrix[Pi/2].h.g == 0]
Which Reduce happily does for me:
geyvals = y /. Cases[List@ToRules@Reduce[gecond, {x, y}], {y -> _}];
geyvals is the three roots of a cubic polynomial, but the expression is a bit large to put here.
Now to my question: for different values of x, different numbers of these roots are real, and I would like to pick out the values of x where the solutions branch, in order to piece together the gradient extremals along the valley floor (of fv). In the present case, since the polynomial is only cubic, I could probably do it by hand, but I am looking for a simple way of having Mathematica do it for me.
Edit: To clarify: the gradient extremals stuff is just background, and a simple way to set up a hard problem. I am not so interested in the specific solution to this problem as in a general, hands-off way of spotting the branch points for polynomial roots. I have added an answer below with a working approach.
Edit 2: Since it seems that the actual problem is much more fun than root branching: rcollyer suggests using ContourPlot directly on gecond to get the gradient extremals. To make this complete we need to separate valleys and ridges, which is done by looking at the eigenvalue of the Hessian perpendicular to the gradient. Putting a check for "valleyness" in as a RegionFunction, we are left with only the valley line:
valleycond = With[{
g = D[fv[{x, y}], {{x, y}}],
h = D[fv[{x, y}], {{x, y}, 2}]},
g.RotationMatrix[Pi/2].h.RotationMatrix[-Pi/2].g >= 0];
gbuf["gevalley"]=ContourPlot[gecond // Evaluate, {x, -2, 4}, {y, -.5, 1.2},
RegionFunction -> Function[{x, y}, Evaluate@valleycond],
PlotPoints -> 41];
Which gives just the valley floor line. Including some contours and the saddle point:
fvSaddlept = {x, y} /. First@Solve[Thread[D[fv[{x, y}], {{x, y}}] == {0, 0}]]
gbuf["contours"] = ContourPlot[fv[{x, y}],
{x, -2, 4}, {y, -.7, 1.5}, PlotRange -> {0, 1/2},
Contours -> fv@fvSaddlept (Range[6]/3 - .01),
PlotPoints -> 41, AspectRatio -> Automatic, ContourShading -> None];
gbuf["saddle"] = Graphics[{Red, Point[fvSaddlept]}];
Show[gbuf /@ {"contours", "saddle", "gevalley"}]
We end up with a plot like this:
Not sure if this (belatedly) helps, but it seems you are interested in discriminant points, that is, where both polynomial and derivative (wrt y) vanish. You can solve this system for {x,y} and throw away complex solutions as below.
fv[{x_, y_}] = ((y - (x/4)^2)^2 + 1/(4 (1 + (x - 1)^2)))/2;
gecond = With[{g = D[fv[{x, y}], {{x, y}}],
h = D[fv[{x, y}], {{x, y}, 2}]}, g.RotationMatrix[Pi/2].h.g]
In[14]:= Cases[{x, y} /.
NSolve[{gecond, D[gecond, y]} == 0, {x, y}], {_Real, _Real}]
Out[14]= {{-0.0158768, -15.2464}, {1.05635, -0.963629}, {1., 0.0625}, {1., 0.0625}}
If you only want to plot the result then use StreamPlot[] on the gradients:
grad = D[fv[{x, y}], {{x, y}}];
StreamPlot[grad, {x, -5, 5}, {y, -5, 5},
RegionFunction -> Function[{x, y}, fv[{x, y}] < 1],
StreamScale -> 1]
You may have to fiddle around with the plot's precision, StreamStyle, and the RegionFunction to get it perfect. Especially useful would be using the solution for the valley floor to seed StreamPoints programmatically.
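For instance (my own sketch; the exact nesting of the StreamPoints specification should be double-checked against the documentation), a couple of streamlines could be seeded near the saddle point found earlier:
(* seed two streamlines just above and below fvSaddlept *)
seeds = {fvSaddlept + {0, 0.05}, fvSaddlept - {0, 0.05}};
StreamPlot[D[fv[{x, y}], {{x, y}}], {x, -2, 4}, {y, -1, 1.5},
 RegionFunction -> Function[{x, y}, fv[{x, y}] < 1],
 StreamPoints -> {seeds}]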
Updated: see below.
I'd approach this first by visualizing the imaginary parts of the roots:
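(The original plotting code and the images are not included in this copy of the post. A rough reconstruction of the kind of plot being described, assuming geyvals from the question and a plot range of my own choosing, could be:)
Plot[Evaluate[Im[geyvals]], {x, -2, 2}, PlotRange -> All,
 AxesLabel -> {"x", "Im[root]"}]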
This tells you three things immediately: 1) the first root is always real, 2) the second two are the conjugate pairs, and 3) there is a small region near zero in which all three are real. Additionally, note that the exclusions only got rid of the singular point at x=0, and we can see why when we zoom in:
We can then use EvaluationMonitor to generate the list of roots directly:
Map[Module[{f, fcn = #1},
   f[x_] := Im[fcn];
   Reap[Plot[f[x], {x, 0, 1.5},
       Exclusions -> {True, f[x] == 1, f[x] == -1},
       EvaluationMonitor :> Sow[{x, f[x]}]]][[2, 1]] //
    SortBy[#, First] &
   ] &, geyvals]
(Note: the Part specification is a little odd; Reap returns a List of what is sown as the second item in a List, so this results in a nested list. Also, Plot doesn't sample the points in a straightforward manner, so SortBy is needed.) There may be a more elegant route to determine where the last two roots become complex, but since their imaginary parts are piecewise continuous, it just seemed easier to brute-force it.
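To make that Part specification concrete, here is a tiny standalone illustration of Reap's return structure (my own example):
Reap[Sow[1]; Sow[2]; "result"]
(* {"result", {{1, 2}}} *)
%[[2, 1]]
(* {1, 2} *)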
Edit: Since you've mentioned that you want an automatic method for determining where some of the roots become complex, I've been exploring what happens when you substitute in y -> p + I q. This assumes that x is real, but you've already done that in your solution. Specifically (with g and h being the gradient and Hessian of fv, as in the question), I do the following
In[1]:= poly = g.RotationMatrix[Pi/2].h.g /. {y -> p + I q} // ComplexExpand;
In[2]:= {pr, pi} = poly /. Complex[a_, b_] :> a + z b // CoefficientList[#, z] & //
          Simplify[#, {x, p, q} \[Element] Reals] &;
where the second step allows me to isolate the real and imaginary parts of the equation and simplify them independently of each other. Doing the same thing with the generic 2D polynomial, f + d x + a x^2 + e y + 2 c x y + b y^2, but making both x and y complex, I noted that Im[poly] = Im[x] D[poly, Im[x]] + Im[y] D[poly, Im[y]], and this may hold for your equation, also. By making x real, the imaginary part of poly becomes q times some function of x, p, and q. So, setting q = 0 always gives Im[poly] == 0. But that does not tell us anything new. However, if we
In[3]:= qvals = Cases[List@ToRules@Reduce[pi == 0 && q != 0, {x, p, q}],
          {q -> a_} :> a];
we get several formulas for q involving x and p. For some values of x and p, those formulas may be imaginary, and we can use Reduce to determine where Re[qvals] == 0. In other words, we want the "imaginary" part of y to be real and this can be accomplished by allowing q to be zero or purely imaginary. Plotting the region where Re[q]==0 and overlaying the gradient extremal lines via
With[{rngs = Sequence[{x, -2, 2}, {y, -10, 10}]},
 Show@{
   RegionPlot[Evaluate[Thread[Re[qvals] == 0] /. p -> y], rngs],
   ContourPlot[g.RotationMatrix[Pi/2].h.g == 0, rngs,
    ContourStyle -> {Darker@Red, Dashed}]}]
gives
which confirms the regions in the first two plots showing the 3 real roots.
Ended up trying it myself since the goal really was to do it "hands off". I'll leave the question open for a good while to see if anybody finds a better way.
The code below uses bisection to bracket the points where CountRoots changes value. This works for my case (spotting the singularity at x=0 is pure luck):
In[214]:= findRootBranches[Function[x, Evaluate@geyvals[[1, 1]]], {-5, 5}]
Out[214]= {{{-5., -0.0158768}, 1}, {{-0.0158768, -5.96046*10^-9}, 3}, {{0., 0.}, 2}, {{5.96046*10^-9, 1.05635}, 3}, {{1.05635, 5.}, 1}}
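(For readers who haven't met CountRoots: a tiny, self-contained illustration of the quantity being bracketed. The polynomial here is a hypothetical example of mine, unrelated to geyvals; its real-root count changes from 3 to 1 as x passes 2.)
p[x_] := Function[y, y^3 - 3 y + x];  (* toy x-dependent polynomial in y *)
{CountRoots[p[0][y], y], CountRoots[p[5][y], y]}
(* {3, 1} *)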
Implementation:
Options[findRootBranches] = {
AccuracyGoal -> $MachinePrecision/2,
"SamplePoints" -> 100};
findRootBranches::usage =
    "findRootBranches[f,{x0,x1}]: Find the points in [x0,x1] \
where the number of real roots of a polynomial changes.
Returns a list of {<interval>,<root count>} pairs.
f: Real -> Polynomial as pure function, e.g. f = Function[x, #^2 - x &].";
findRootBranches[f_, {xa_, xb_}, OptionsPattern[]] := Module[
{bisect, y, rootCount, acc = 10^-OptionValue[AccuracyGoal]},
rootCount[x_] := {x, CountRoots[f[x][y], y]};
(* Define a recursive bisector w/ automatic subdivision *)
bisect[{{x1_, n1_}, {x2_, n2_}} /; Abs[x1 - x2] > acc] :=
Module[{x3, n3},
{x3, n3} = rootCount[(x1 + x2)/2];
Which[
n1 == n3, bisect[{{x3, n3}, {x2, n2}}],
n2 == n3, bisect[{{x1, n1}, {x3, n3}}],
True, {bisect[{{x1, n1}, {x3, n3}}],
bisect[{{x3, n3}, {x2, n2}}]}]];
(* Find initial brackets and bisect *)
Module[{xn, samplepoints, brackets},
samplepoints = N@With[{sp = OptionValue["SamplePoints"]},
If[NumberQ[sp], xa + (xb - xa) Range[0, sp]/sp, Union[{xa, xb}, sp]]];
(* Start by counting roots at initial sample points *)
xn = rootCount /@ samplepoints;
(* Then, identify and refine the brackets *)
brackets = Flatten[bisect /@
Cases[Partition[xn, 2, 1], {{_, a_}, {_, b_}} /; a != b]];
(* Reinclude the endpoints and partition into same-rootcount segments: *)
With[{allpts = Join[{First@xn},
     Flatten[brackets /. bisect -> List, 2], {Last@xn}]},
   {#1, Last[#2]} & @@@ Transpose /@ Partition[allpts, 2]
]]]