Follow up to: TransformedDistribution in Mathematica

I have a follow up question to Sasha's answer of my earlier question at TransformedDistribution in Mathematica.
As I already accepted the answer a while back, I thought it made sense to ask this as a new question.
As part of the answer Sasha defined 2 functions:
LogNormalStableCDF[{alpha_, beta_, gamma_, sigma_, delta_}, x_Real] :=
Block[{u},
NExpectation[
CDF[StableDistribution[alpha, beta, gamma, sigma], (x - delta)/u],
u \[Distributed] LogNormalDistribution[Log[gamma], sigma]]]
LogNormalStablePDF[{alpha_, beta_, gamma_, sigma_, delta_}, x_Real] :=
Block[{u},
NExpectation[
PDF[StableDistribution[alpha, beta, gamma, sigma], (x - delta)/u]/u,
u \[Distributed] LogNormalDistribution[Log[gamma], sigma]]]
The PDF function seems to work fine:
Plot[LogNormalStablePDF[{1.5, 1, 1, 0.5, 1}, x], {x, -4, 6},
PlotRange -> All]
But if I try to plot the CDF variation:
Plot[LogNormalStableCDF[{1.5, 1, 1, 0.5, 1}, x], {x, -4, 6},
PlotRange -> All]
The evaluation doesn't seem to ever finish.
I've done something similar with the following - substituting a NormalDistribution for the StableDistribution above:
LogNormalNormalCDF[{gamma_, sigma_, delta_}, x_Real] :=
Block[{u},
NExpectation[CDF[NormalDistribution[0, Sqrt[2]], (x - delta)/u],
u \[Distributed] LogNormalDistribution[Log[gamma], sigma]]]
LogNormalNormalPDF[{gamma_, sigma_, delta_}, x_Real] :=
Block[{u},
NExpectation[PDF[NormalDistribution[0, Sqrt[2]], (x - delta)/u]/u,
u \[Distributed] LogNormalDistribution[Log[gamma], sigma]]]
The plots of both the CDF and PDF versions work fine.
Plot[LogNormalNormalPDF[{0.01, 0.4, 0.0003}, x], {x, -0.10, 0.10}, PlotRange -> All]
Plot[LogNormalNormalCDF[{0.01, 0.4, 0.0003}, x], {x, -0.10, 0.10}, PlotRange -> All]
This has me puzzled. Clearly the general approach works in the LogNormalNormalCDF case. Also, LogNormalStablePDF and LogNormalStableCDF are almost identical; in fact, judging from the code itself, the CDF version has less to do than the PDF version.
So, I hoped someone could:
explain why LogNormalStableCDF doesn't appear to work (at least not in what I consider a reasonable time; I'll try running it overnight and see if it ever completes the evaluation), and
suggest a way to get LogNormalStableCDF to work more quickly.
Many thanks,
J.

The new distribution functionality has amazing potential, but its newness shows. There are several bugs that I and others have encountered and that will hopefully be dealt with in upcoming bugfixes. However, this does not seem to be one of them.
In this case the problem is that the argument x is restricted to the pattern x_Real while the plot range is given with integer endpoints. When Plot starts, it tries the exact endpoints -4 and 6, for which the function returns unevaluated because the pattern doesn't match. Removing Real from the definition makes it work.
The LogNormalNormal functions work because their plot range is given as machine-precision numbers.
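For example, a minimal sketch of the fix (here using the common x_?NumericQ idiom rather than just dropping the restriction; the body is unchanged from the question):
LogNormalStableCDF[{alpha_, beta_, gamma_, sigma_, delta_}, x_?NumericQ] :=
 Block[{u},
  NExpectation[
   CDF[StableDistribution[alpha, beta, gamma, sigma], (x - delta)/u],
   u \[Distributed] LogNormalDistribution[Log[gamma], sigma]]]
(* x_?NumericQ matches integers, rationals and reals alike, so Plot's exact endpoints no longer fall through *)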
Be prepared to wait a bit, because the function evaluates pretty slowly. In fact, you have to curb MaxRecursion a bit, because Plot gets too enthusiastic and adds way too many points here (maybe due to small-scale inaccuracies):
Plot[LogNormalStableCDF[{1.5, 1, 1, 0.5, 1}, x], {x, -4, 6},
PlotRange -> All, PlotPoints -> 10, MaxRecursion -> 4]
yields
It took about 9 minutes to generate and, as you can see, Plot placed a lot of points on the flanks of the graph.

Related

NSolve unable to solve implicit differentiation

I'm trying to solve for the slope with this implicit differentiation problem at x=1, but NSolve is unable to solve it. How can I get around this issue?
eqn[x_, y_] := x*Sin[y] - y*Sin[x] == 2 (*note: bound is -5<=x<=5,-5<=y<=5*)
yPrime = Solve[D[eqn[x, y[x]], x], y'[x]] /. {y[x] -> y,
y'[x] -> y'} // Simplify
{{Derivative[1][y] -> (y Cos[x] - Sin[y])/(x Cos[y] - Sin[x])}}
NSolve[eqn[x, y] /. x -> 1, y] (*this doesn't work*)
NSolve is not really the right tool for the job.
From Wolfram Documentation
If your equations involve only linear functions or polynomials, then
you can use NSolve to get numerical approximations to all the
solutions. However, when your equations involve more complicated
functions, there is in general no systematic procedure for finding all
solutions, even numerically. In such cases, you can use FindRoot to
search for solutions. You have to give FindRoot a place to start its
search.
Since your equation is not polynomial, the following works:
FindRoot[eqn[x, y] /. x -> 1, {y, -2}]
{y -> -2.78814}
which you can plug into yPrime.
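Concretely, a small sketch of that last step (the starting value -2 is taken from the FindRoot call above):
sol = FindRoot[eqn[x, y] /. x -> 1, {y, -2}];   (* the root found above *)
(y' /. First[yPrime]) /. {x -> 1} /. sol        (* numeric slope at x == 1 *)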
Picking the starting point is not obvious; however, a quick ContourPlot always helps:
ContourPlot[x*Sin[y] - y*Sin[x], {x, -5, 5}, {y, -5, 5}, Contours -> {2}]
It shows that the solution of your equation is not unique for all x.

How to solve an equation system in Mathematica 9

Got a simple equation but Mathematica just can't get it:
Solve[{Sin[x] == y, x + y == 5}, {x, y}]
Error: this system cannot be solved with the methods available to Solve
Am I using the right function? If not, what should I use?
Mathematica knows a lot, but it surely doesn't know everything about math. When things break down, you can try a few different approaches:
First let's graph it:
ContourPlot[{Sin[x] == y, x + y == 5}, {x, -10, 10}, {y, -10, 10}]
It's a line intersecting a sinusoidal wave, and it looks like there is only one solution. The point is close to (5, 0), so let's use Newton's method to find the root:
FindRoot[{Sin[x] == y, x + y == 5}, {x, 5}, {y, 0}]
This gives the answer {x -> 5.61756, y -> -0.617555}. You can verify it by replacing x and y in the equation with the values provided in the solution:
{Sin[x] == y, x + y == 5} /. {x -> 5.6175550052727`,y -> -0.6175550052726998`}
That gives {True,True} so the solution is correct. Interestingly, as another commenter pointed out, Wolfram Alpha gives the same solution when you type in this:
solve Sin[x]==y,x+y==5
You can access Wolfram Alpha directly from Mathematica by typing == at the beginning of a new line.

Is this a bug in Mathematica 8?

Clear["Global`*"]
Integrate[t f[x, y], {y, 0, 1}] -
t Integrate[f[x, y], {y, 0, 1}] // FullSimplify
Why doesn't Mathematica know the result is zero?
It is not a bug. Since your f[x,y] has no definition, Mathematica can't assume anything about the integrand t f[x, y], so both integrals stay unevaluated and FullSimplify does not cancel them.
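To see that the identity does hold once f is concrete, here is a quick check with a made-up definition (f = x y^2 is purely illustrative):
f[x_, y_] := x y^2;  (* hypothetical concrete integrand *)
Integrate[t f[x, y], {y, 0, 1}] - t Integrate[f[x, y], {y, 0, 1}]
(*---> 0 *)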
You can make a rule to help Mathematica as mentioned below. But without a rule, Mathematica is doing the right thing here.
This has been discussed many places before. Here are some links
https://groups.google.com/forum/#!msg/comp.soft-sys.math.mathematica/jsiYo9tRj04/rQYCy-X3SXQJ
https://mathematica.stackexchange.com/questions/5610/how-to-simplify-symbolic-integration
For example, you can add this rule:
Clear["Global`*"]
Unprotect[Integrate];
Integrate[t_Symbol*f_,dom_]:=t*Integrate[f,dom];
Protect[Integrate];
Now it will give zero
Simplify[Integrate[t f[x, y], {y, 0, 1}] - t Integrate[f[x, y], {y, 0, 1}]]
(*---> 0 *)

Mathematica: branch points for real roots of polynomial

I am doing a brute force search for "gradient extremals" on the following example function
fv[{x_, y_}] = ((y - (x/4)^2)^2 + 1/(4 (1 + (x - 1)^2)))/2;
This involves finding the following zeros
gecond = With[{g = D[fv[{x, y}], {{x, y}}], h = D[fv[{x, y}], {{x, y}, 2}]},
g.RotationMatrix[Pi/2].h.g == 0]
Which Reduce happily does for me:
geyvals = y /. Cases[List@ToRules@Reduce[gecond, {x, y}], {y -> _}];
geyvals is the three roots of a cubic polynomial, but the expression is a bit large to put here.
Now to my question: for different values of x, different numbers of these roots are real, and I would like to pick out the values of x where the solutions branch in order to piece together the gradient extremals along the valley floor (of fv). In the present case, since the polynomial is only cubic, I could probably do it by hand -- but is there a simple way of having Mathematica do it for me?
Edit: To clarify: The gradient extremals stuff is just background -- and a simple way to set up a hard problem. I am not so interested in the specific solution to this problem as in a general hand-off way of spotting the branch points for polynomial roots. Have added an answer below with a working approach.
Edit 2: Since it seems that the actual problem is much more fun than root branching: rcollyer suggests using ContourPlot directly on gecond to get the gradient extremals. To make this complete, we need to separate valleys from ridges, which is done by looking at the eigenvalue of the Hessian perpendicular to the gradient. Putting a check for "valleyness" in as a RegionFunction, we are left with only the valley line:
valleycond = With[{
g = D[fv[{x, y}], {{x, y}}],
h = D[fv[{x, y}], {{x, y}, 2}]},
g.RotationMatrix[Pi/2].h.RotationMatrix[-Pi/2].g >= 0];
gbuf["gevalley"]=ContourPlot[gecond // Evaluate, {x, -2, 4}, {y, -.5, 1.2},
RegionFunction -> Function[{x, y}, Evaluate@valleycond],
PlotPoints -> 41];
Which gives just the valley floor line. Including some contours and the saddle point:
fvSaddlept = {x, y} /. First@Solve[Thread[D[fv[{x, y}], {{x, y}}] == {0, 0}]]
gbuf["contours"] = ContourPlot[fv[{x, y}],
{x, -2, 4}, {y, -.7, 1.5}, PlotRange -> {0, 1/2},
Contours -> fv@fvSaddlept (Range[6]/3 - .01),
PlotPoints -> 41, AspectRatio -> Automatic, ContourShading -> None];
gbuf["saddle"] = Graphics[{Red, Point[fvSaddlept]}];
Show[gbuf /@ {"contours", "saddle", "gevalley"}]
We end up with a plot like this:
Not sure if this (belatedly) helps, but it seems you are interested in discriminant points, that is, points where both the polynomial and its derivative (with respect to y) vanish. You can solve this system for {x,y} and throw away the complex solutions, as below.
fv[{x_, y_}] = ((y - (x/4)^2)^2 + 1/(4 (1 + (x - 1)^2)))/2;
gecond = With[{g = D[fv[{x, y}], {{x, y}}],
h = D[fv[{x, y}], {{x, y}, 2}]}, g.RotationMatrix[Pi/2].h.g]
In[14]:= Cases[{x, y} /.
NSolve[{gecond, D[gecond, y]} == 0, {x, y}], {_Real, _Real}]
Out[14]= {{-0.0158768, -15.2464}, {1.05635, -0.963629}, {1., 0.0625}, {1., 0.0625}}
If you only want to plot the result then use StreamPlot[] on the gradients:
grad = D[fv[{x, y}], {{x, y}}];
StreamPlot[grad, {x, -5, 5}, {y, -5, 5},
RegionFunction -> Function[{x, y}, fv[{x, y}] < 1],
StreamScale -> 1]
You may have to fiddle around with the plot's precision, StreamStyle, and the RegionFunction to get it perfect. Especially useful would be using the solution for the valley floor to seed StreamPoints programmatically.
Updated: see below.
I'd approach this first by visualizing the imaginary parts of the roots:
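A minimal sketch of the kind of command that produces such a picture (the plot range and the x == 0 exclusion are chosen by hand here):
Plot[Evaluate[Im[geyvals]], {x, -2, 2},
 Exclusions -> {x == 0}, PlotRange -> All]
(* three curves: Im of each root of the cubic as a function of x *)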
This tells you three things immediately: 1) the first root is always real, 2) the second two are the conjugate pairs, and 3) there is a small region near zero in which all three are real. Additionally, note that the exclusions only got rid of the singular point at x=0, and we can see why when we zoom in:
We can then use the EvaluationMonitor to generate the list of roots directly:
Map[
 Module[{f, fcn = #1},
   f[x_] := Im[fcn];
   Reap[Plot[f[x], {x, 0, 1.5},
      Exclusions -> {True, f[x] == 1, f[x] == -1},
      EvaluationMonitor :> Sow[{x, f[x]}]]][[2, 1]] //
    SortBy[#, First] &] &, geyvals]
(Note: the Part specification is a little odd; Reap returns what was sown as the second item of a List, so this results in a nested list. Also, Plot doesn't sample the points in a straightforward manner, so SortBy is needed.) There may be a more elegant route to determine where the last two roots become complex, but since their imaginary parts are piecewise continuous, it just seemed easier to brute force it.
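For reference, a tiny example of the structure Reap returns, which is why the [[2, 1]] part is needed:
Reap[Sow[1]; Sow[2]]
(*---> {2, {{1, 2}}} -- the sown values sit at position [[2, 1]] *)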
Edit: Since you've mentioned that you want an automatic method for generating where some of the roots become complex, I've been exploring what happens when you substitute in y -> p + I q. Now this assumes that x is real, but you've already done that in your solution. Specifically, I do the following
In[1] := poly = g.RotationMatrix[Pi/2].h.g /. {y -> p + I q} // ComplexExpand;
In[2] := {pr, pi} = poly /. Complex[a_, b_] :> a + z b // CoefficientList[#, z] & //
   Simplify[#, {x, p, q} \[Element] Reals] &;
where the second step allows me to isolate the real and imaginary parts of the equation and simplify them independently of each other. Doing the same thing with the generic 2D polynomial, f + d x + a x^2 + e y + 2 c x y + b y^2, but making both x and y complex, I noted that Im[poly] = Im[x] D[poly, Im[x]] + Im[y] D[poly, Im[y]], and this may hold for your equation, too. By making x real, the imaginary part of poly becomes q times some function of x, p, and q. So, setting q = 0 always gives Im[poly] == 0. But that does not tell us anything new. However, if we
In[3] := qvals = Cases[List@ToRules@Reduce[pi == 0 && q != 0, {x, p, q}],
   {q -> a_} :> a];
we get several formulas for q involving x and p. For some values of x and p, those formulas may be imaginary, and we can use Reduce to determine where Re[qvals] == 0. In other words, we want the "imaginary" part of y to be real and this can be accomplished by allowing q to be zero or purely imaginary. Plotting the region where Re[q]==0 and overlaying the gradient extremal lines via
With[{rngs = Sequence[{x,-2,2},{y,-10,10}]},
Show@{
RegionPlot[Evaluate[Thread[Re[qvals]==0]/.p-> y], rngs],
ContourPlot[g.RotationMatrix[Pi/2].h.g == 0, rngs,
 ContourStyle -> {Darker@Red, Dashed}]}]
gives
which confirms the regions in the first two plots showing the 3 real roots.
Ended up trying myself since the goal really was to do it 'hands off'. I'll leave the question open for a good while to see if anybody finds a better way.
The code below uses bisection to bracket the points where CountRoots changes value. This works for my case (spotting the singularity at x=0 is pure luck):
In[214]:= findRootBranches[Function[x, Evaluate#geyvals[[1, 1]]], {-5, 5}]
Out[214]= {{{-5., -0.0158768}, 1}, {{-0.0158768, -5.96046*10^-9}, 3}, {{0., 0.}, 2}, {{5.96046*10^-9, 1.05635}, 3}, {{1.05635, 5.}, 1}}
Implementation:
Options[findRootBranches] = {
AccuracyGoal -> $MachinePrecision/2,
"SamplePoints" -> 100};
findRootBranches::usage =
"findRootBranches[f,{x0,x1}]: Find the the points in [x0,x1] \
where the number of real roots of a polynomial changes.
Returns list of {<interval>,<root count>} pairs.
f: Real -> Polynomial as pure function, e.g f=Function[x,#^2-x&]." ;
findRootBranches[f_, {xa_, xb_}, OptionsPattern[]] := Module[
{bisect, y, rootCount, acc = 10^-OptionValue[AccuracyGoal]},
rootCount[x_] := {x, CountRoots[f[x][y], y]};
(* Define a recursive bisector with automatic subdivision *)
bisect[{{x1_, n1_}, {x2_, n2_}} /; Abs[x1 - x2] > acc] :=
Module[{x3, n3},
{x3, n3} = rootCount[(x1 + x2)/2];
Which[
n1 == n3, bisect[{{x3, n3}, {x2, n2}}],
n2 == n3, bisect[{{x1, n1}, {x3, n3}}],
True, {bisect[{{x1, n1}, {x3, n3}}],
bisect[{{x3, n3}, {x2, n2}}]}]];
(* Find initial brackets and bisect *)
Module[{xn, samplepoints, brackets},
samplepoints = N@With[{sp = OptionValue["SamplePoints"]},
If[NumberQ[sp], xa + (xb - xa) Range[0, sp]/sp, Union[{xa, xb}, sp]]];
(* Start by counting roots at initial sample points *)
xn = rootCount /@ samplepoints;
(* Then, identify and refine the brackets *)
brackets = Flatten[bisect /@
Cases[Partition[xn, 2, 1], {{_, a_}, {_, b_}} /; a != b]];
(* Reinclude the endpoints and partition into same-rootcount segments: *)
With[{allpts = Join[{First@xn},
 Flatten[brackets /. bisect -> List, 2], {Last@xn}]},
 {#1, Last[#2]} & @@@ Transpose /@ Partition[allpts, 2]
]]]

Debugging a working program on Mathematica 5 with Mathematica 7

I'm currently reading the Mathematica Guidebooks for Programming and I was trying to work out one of the very first program of the book. Basically, when I run the following program:
Plot3D[{Re[Exp[1/(x + I y)]]}, {x, -0.02, 0.022}, {y, -0.04, 0.042},
PlotRange -> {-1, 8}, PlotPoints -> 120, Mesh -> False,
ColorFunction -> Function[{x1, x2, x3}, Hue[Arg[Exp[1/(x1 + I x2)]]]]]
either I get a 1/0 error and an E^Infinity error or, if I lower the PlotPoints option to, say, 60, an overflow error. I do get an output, but it's not what it's supposed to be: the hue seems to be diffusing off the left corner, whereas it should be diffusing from the origin (as can be seen in the original output).
Here is the original program which apparently runs on Mathematica 5 (Trott, Mathematica Guidebook for Programming):
Off[Plot3D::gval];
Plot3D[{Re[Exp[1/(x + I y)]], Hue[Arg[Exp[1/(x + I y)]]]},
{x, -0.02, 0.022}, {y, -0.04, 0.042},
PlotRange -> {-1, 8}, PlotPoints -> 120, Mesh -> False]
On[Plot3D::gval];
However, ColorFunction used this way (inside the first Plot3D argument) no longer works, so I tried to simply adapt the code to the new way of specifying it.
Well, thanks I guess!
If you are satisfied with Mathematica's defaults, you can use the old version of the code: simply cut out , Hue[Arg[Exp[1/(x + I y)]]] and the function works fine.
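That is, something along these lines (the book's call from above with the second list element, the Hue term, removed):
Plot3D[Re[Exp[1/(x + I y)]], {x, -0.02, 0.022}, {y, -0.04, 0.042},
 PlotRange -> {-1, 8}, PlotPoints -> 120, Mesh -> False]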
The problems you are having with the new version of the code seem to stem from the expression Exp[1/(x1 + I x2)] -- sometimes this will require the evaluation of 1/0. At least, if I cut out the 1/, the program executes (on Mathematica 7) without complaint, though obviously with the wrong colours. So you probably need to rewrite your colour function.
I finally found two alternative ways to solve my problem. The first is to simply use the << Version5`Graphics` command to get the Plot3D function behaving the way it did in Mathematica V5. The code taken from the book then works just like it used to.
However, if one wishes to display the hue correctly (that is, without diffusion off the left-hand corner) with the latest version, the Rescale function must be used, just like this:
f[x_, y_] := Exp[1/(x + I y)];  (* f as used throughout this question *)
Plot3D[Evaluate[Re[f[x, y]]], {x, -.02, .022}, {y, -0.04, 0.042},
 PlotRange -> {-1, 2}, PlotPoints -> 120, Mesh -> False,
 ColorFunction -> Function[{x, y, z}, Hue@Rescale[Arg[f[x, y]], {-π, π}]],
 ColorFunctionScaling -> False,
 ClippingStyle -> None]
I suppose Arg's output is not automatically rescaled for use with Hue, so it must be mapped from [-Pi, Pi] onto the unit interval by hand. The result is quite good-looking, although there are some minor differences from the original plot.
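For reference, a quick check of what that Rescale call does, mapping [-Pi, Pi] linearly onto [0, 1], the range Hue works with:
Rescale[#, {-Pi, Pi}] & /@ {-Pi, -Pi/2, 0, Pi/2, Pi}
(*---> {0, 1/4, 1/2, 3/4, 1} *)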
