Mathematica: FindRoot errors

FindRoot[
 27215. - 7.27596*10^-12 x + 52300. x^2 - 9977.4 Log[1. - 1. x] == 0,
 {x, 0.000001}
]
This converges to the solution {x -> -0.0918521}, but how can I get Mathematica to avoid the following error message that appears before the solution:
FindRoot::nlnum: The function value {Indeterminate} is not a list of numbers with dimensions {1} at {x} = {1.}. >>
I am using FindRoot to solve some pretty messy expressions. I also sometimes receive the following error; although Mathematica still yields an answer, I am wondering if there is a way to avoid it as well:
FindRoot::lstol: The line search decreased the step size to within tolerance specified by AccuracyGoal and PrecisionGoal but was unable to find a sufficient decrease in the merit function. You may need more than MachinePrecision digits of working precision to meet these tolerances. >>

The solution you are getting is not an actual solution. The message indicates that something went wrong, and FindRoot returns the last value of x it tried. This is the last item under 'More Information' in the FindRoot documentation:
If FindRoot does not succeed in finding a solution to the accuracy you specify within MaxIterations steps, it returns the most recent approximation to a solution that it found. You can then apply FindRoot again, with this approximation as a starting point.
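A minimal sketch of that restart pattern, applied to the equation from the question (the helper name f is introduced here for brevity; as explained below, this particular equation has no real root, so the restart will not succeed either, and the sketch only shows the mechanics):
f[x_] := 27215. - 7.27596*10^-12 x + 52300. x^2 - 9977.4 Log[1. - 1. x];
approx = x /. FindRoot[f[x] == 0, {x, 0.000001}]  (* first attempt; may emit messages *)
FindRoot[f[x] == 0, {x, approx}]                  (* restart from the returned approximation *)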
For example, in this case there is also no solution:
FindRoot[x^2 + 1 == 0, {x, 1}]
You will get a FindRoot::jsing warning and Mathematica returns {x -> 0.} (which is the most recent approximation).
A similar case, but with a Log function:
FindRoot[1 + Log[1 + x]^2 == 0, {x, 2}]
This gives a FindRoot::nlnum message similar to the one you are seeing and returns {x -> 0.000269448} (again, the most recent approximation).
A plot of 1 + Log[1 + x]^2 (not reproduced here) illustrates this: the function stays above zero for real x > -1.
If you want to include complex roots, consider this part of the documentation for FindRoot (under 'More Information' also):
You can always tell FindRoot to search for complex roots by adding 0.I to the starting value.
So, for example, you can take a starting value near one complex root, like so:
FindRoot[x^2 + 1 == 0, {x, 1 + 1. I}]
Which converges (without messages) to {x -> 8.46358*10^-23 + 1. I} (so basically I).
Or with a starting value near the other complex root:
FindRoot[x^2 + 1 == 0, {x, 1 - 1. I}]
You will get basically -I (to be precise you get {x -> 8.46358*10^-23 - 1. I}).

There is no real solution to this equation. Mathematica ends up somewhere near the minimum of the function and reports that point, because that is where the search settles.
Plot[27215. - 7.27596*10^-12 x + 52300. x^2 - 9977.4 Log[1. - 1. x],
{x, -2, 0.09}, AxesOrigin -> {0, 0}]
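One way to confirm this (a quick check, not part of the original answer) is to ask for the minimum directly; if the reported minimum value is positive, the expression never reaches zero on the reals:
FindMinimum[
 27215. - 7.27596*10^-12 x + 52300. x^2 - 9977.4 Log[1. - 1. x],
 {x, -0.1}]  (* a positive minimum value means there is no real root *)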
Mathematica does warn you about this:
In[30]:= x /.
Table[FindRoot[
27215. - 7.27596*10^-12 x + 52300. x^2 - 9977.4 Log[1. - 1. x] ==
0, {x, y}], {y, -0.01, 0.01, 0.0002}]
During evaluation of In[30]:= FindRoot::nlnum: The function value {Indeterminate} is not a list of numbers with dimensions {1} at {x} = {1.}. >>
During evaluation of In[30]:= FindRoot::nlnum: The function value {Indeterminate} is not a list of numbers with dimensions {1} at {x} = {1.}. >>
During evaluation of In[30]:= FindRoot::nlnum: The function value {Indeterminate} is not a list of numbers with dimensions {1} at {x} = {1.}. >>
During evaluation of In[30]:= General::stop: Further output of FindRoot::nlnum will be suppressed during this calculation. >>
During evaluation of In[30]:= FindRoot::lstol: The line search decreased the step size to within tolerance specified by AccuracyGoal and PrecisionGoal but was unable to find a sufficient decrease in the merit function. You may need more than MachinePrecision digits of working precision to meet these tolerances. >>
During evaluation of In[30]:= FindRoot::lstol: The line search decreased the step size to within tolerance specified by AccuracyGoal and PrecisionGoal but was unable to find a sufficient decrease in the merit function. You may need more than MachinePrecision digits of working precision to meet these tolerances. >>
During evaluation of In[30]:= FindRoot::lstol: The line search decreased the step size to within tolerance specified by AccuracyGoal and PrecisionGoal but was unable to find a sufficient decrease in the merit function. You may need more than MachinePrecision digits of working precision to meet these tolerances. >>
During evaluation of In[30]:= General::stop: Further output of FindRoot::lstol will be suppressed during this calculation. >>
Out[30]= {-0.0883278, -0.0913649, -0.0901617, -0.0877546, -0.0877383, \
-0.088508, -0.0937041, -0.0881606, -0.0912122, -0.0899562, \
-0.0876965, -0.0879619, -0.0877441, -0.101551, -0.0915088, \
-0.0880611, -0.0959972, -0.0930364, -0.0902243, -0.0877198, \
-0.0881157, -0.107205, -0.103746, -0.100439, -0.0972646, -0.094208, \
-0.0912554, -0.0878633, -0.089473, -0.0884659, -0.0876997, \
-0.0876936, -0.0879112, -0.104396, -0.100987, -0.0976638, -0.0879892, \
-0.087777, -0.0881334, -0.0880071, -0.0880255, -0.0880285, \
-0.0880345, -0.0911966, -0.0879797, -0.0890295, -0.087701, \
-0.0952537, -0.0941312, -0.0929994, -0.0918578, -0.0885677, \
-0.0895444, -0.0883719, -0.103914, -0.102701, -0.0885007, -0.0915083, \
-0.098988, -0.0963068, -0.0891533, -0.0907357, -0.0881215, \
-0.0893928, -0.108191, -0.104756, -0.101456, -0.0982737, -0.0951949, \
-0.0922072, -0.0892996, -0.0878794, -0.0877164, -0.0896659, \
-0.0886859, -0.0876952, -0.0909219, -0.0899049, -0.0888758, \
-0.0878343, -0.0952044, -0.0941281, -0.0887345, -0.0919322, \
-0.0886726, -0.0876955, -0.0877232, -0.0878879, -0.0877578, \
-0.101642, -0.0916633, -0.0991254, -0.0877255, -0.0936139, \
-0.0907846, -0.0877205, -0.0877454, -0.0881589, -0.0893507, \
-0.0878747, -0.0876961}
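If, after understanding why the messages appear, you still want them suppressed (which is what the question literally asks), Quiet can silence specific messages. A minimal sketch:
Quiet[
 FindRoot[
  27215. - 7.27596*10^-12 x + 52300. x^2 - 9977.4 Log[1. - 1. x] == 0,
  {x, 0.000001}],
 {FindRoot::nlnum, FindRoot::lstol}]
Keep in mind that this only hides the messages; the value returned is still just the last approximation, not a root.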

Related

How to make Mathematica Simplify Square Root Expression

I would like Mathematica to evaluate the square root of a squared variable; instead, it just returns the squared variable under the square root. I wrote a simple example:
x = y^2
z = FullSimplify[Sqrt[x]]
But it is returning y^2 under a square root sign!
This behavior is documented on the Sqrt reference page:
Sqrt[z^2] is not automatically converted to z.
[…]
These conversions can be done using PowerExpand, but will typically be correct only for positive real arguments.
Thus:
In[1]:= x = y^2
Out[1]= y^2
In[15]:= PowerExpand[Sqrt[x]]
Out[15]= y
You can also get simplifications by supplying various assumptions:
In[10]:= Simplify[Sqrt[x], Assumptions -> Element[y, Reals]]
Out[10]= Abs[y]
In[13]:= Simplify[Sqrt[x], Assumptions -> y > 0]
Out[13]= y
In[14]:= Simplify[Sqrt[x], Assumptions -> y < 0]
Out[14]= -y
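To see why the PowerExpand result is only reliable for positive real arguments, compare what happens when a negative value is substituted afterwards (a small illustration added here, not part of the original answer):
PowerExpand[Sqrt[y^2]] /. y -> -3   (* gives -3, because PowerExpand assumed y > 0 *)
Sqrt[y^2] /. y -> -3                (* gives 3, the correct value of Sqrt[9] *)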
If you want more help, I suggest asking on the Mathematica Stack Exchange.

Find the 1st intersection of 2 PDF functions with Mathematica

With Mathematica 8.0.1.0, I have used FindRoot[] to identify the intersection of two PDF functions.
But if the PDFs intersect at more than one point and the upper limit of the x-axis range lies beyond the second intersection, FindRoot[] returns only the second intersection.
pdf1 = 1/x 0.5795367855565214` (E^(
11.170058830053032` (-1.525439351903338` - Log[x]))
Erfc[1.6962452696714152` (-0.5548887795964352` - Log[x])] +
E^(1.2932713057519` (2.60836043407439` + Log[x]))
Erfc[1.6962452696714152` (2.720730943938539` + Log[x])]);
pdf2 = 1/x 0.4648445097126269` (E^(
5.17560914275408` (-2.5500941338198615` - Log[x]))
Erfc[1.7747318880142482` (-2.139288893723375` - Log[x])] +
E^(1.1332542415053757` (3.050849516581922` + Log[x]))
Erfc[1.7747318880142482` (3.1407996592474956` + Log[x])]);
Plot[{pdf1, pdf2}, {x, 0, 0.5}, PlotRange -> All] (* Shows 1st intersection *)
Plot[{pdf1, pdf2}, {x, 0.4, 0.5}, PlotRange -> All] (* Shows 2nd intersection *)
{x /. FindRoot[pdf1 == pdf2, {x, 0.00001, 0.5}],
x /. FindRoot[pdf1 == pdf2, {x, 0.00001, 0.4}]}
The plots above show the issue: the curves intersect at two points. The two FindRoot[] calls return
{0.464719, 0.0452777}
respectively.
As I can't know before hand if I'll have a second intersection and I don't know where it might fall on the x axis if I did, can anyone suggest a way to have FindRoot[] only return the first intersection rather than the second?
If not, can anyone suggest another way to go about it?
With FindRoot[], you can only get a single root for a given starting point. Iterating over different starting points is cumbersome, and for certain edge cases you might not get the desired result unless you hit upon the right choice of starting point.
In this case, something like NSolve or Reduce is a better option. If you know that your expressions decay, so that there is a reasonable upper bound for possible values of x, you can use the following, which is quick and gives you all the roots.
NSolve[{pdf1 == pdf2, 0 < x < 1}, x] // Timing
Out[1]= {0.073495, {{x -> 0.0452777}, {x -> 0.464719}}}
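Taking the smallest of the returned roots then gives the first intersection; for example (a small follow-up to the NSolve call above):
Min[x /. NSolve[{pdf1 == pdf2, 0 < x < 1}, x]]   (* 0.0452777, the first intersection *)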
How about the following: first, find all the roots in one step. I do this with
roots = Reduce[pdf1 == pdf2 && 0.000001 < x < 0.5, x]
and then take the minimum (the first root on the x axis, in your special case):
rootMin = Min[N[x /. {ToRules[roots]}]]

quantiles of sums using copula distributions too slow

I am trying to create a table of quantiles of the sum of two dependent random variables, using the built-in copula distributions (Clayton, Frank, Gumbel) with Beta marginals. I have tried NProbability and FindRoot with various methods, but they are not fast enough.
An example of the copula-marginal combinations I need to explore is the following:
nProbClayton[t_?NumericQ, c_?NumericQ] :=
NProbability[ x + y <= t, {x, y} \[Distributed]
CopulaDistribution[{"Clayton", c}, {BetaDistribution[8, 2],
BetaDistribution[8, 2]}]]
For a single evaluation of the numeric probability using
nProbClayton[1.9, 1/10] // Timing // Quiet
I get
{4.914, 0.939718}
on a Vista 64bit Core2 Duo T9600 2.80GHz machine (MMA 8.0.4)
To get a quantile of the sum, using
FindRoot[nProbClayton[q, 1/10] == 1/100, {q, 1, 0, 2}] // Timing // Quiet
with various methods (Method -> Automatic, Method -> "Brent", Method -> "Secant") takes about a minute to find a single quantile. The timings are
{48.781, {q -> 0.918646}}
{50.045, {q -> 0.918646}}
{65.396, {q -> 0.918646}}
For other copula-marginal combinations timings are marginally better.
What I need: any tricks or methods to improve these timings.
The CDF of a Clayton-Pareto copula with parameter c can be calculated according to
cdf[c_] := Module[{c1 = CDF[BetaDistribution[8, 2]]},
(c1[#1]^(-1/c) + c1[#2]^(-1/c) - 1)^(-c) &]
Then, cdf[c][t1,t2] is the probability that x<=t1 and y<=t2. This means that you can calculate the probability that x+y<=t according to
prob[t_?NumericQ, c_?NumericQ] :=
NIntegrate[Derivative[1, 0][cdf[c]][x, t - x], {x, 0, t}]
The timings I get on my machine are
prob[1.9, .1] // Timing
(* ==> {0.087518, 0.939825} *)
Note that I get a different value for the probability than the one in the original post. However, running nProbClayton[1.9, 0.1] produces a warning about slow convergence, which could mean that the result in the original post is off. Also, if I change x + y <= t to x + y > t in the original definition of nProbClayton and calculate 1 - nProbClayton[1.9, 0.1], I get 0.939825 (without warnings), which is the same result as above.
For the quantile of the sum I get
FindRoot[prob[q, .1] == .01, {q, 1, 0, 2}] // Timing
(* ==> {1.19123, {q -> 0.912486}} *)
Again, I get a different result from the one in the original post, but as before, changing x + y <= t to x + y > t and calculating FindRoot[nProbClayton[q, 1/10] == 1 - 1/100, {q, 1, 0, 2}] returns the same value for q as above.
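With the faster prob in hand, the table of quantiles asked for in the question could be built along these lines (a sketch; quantileClayton is a helper name introduced here, and the probability level and parameter range are placeholders):
quantileClayton[p_?NumericQ, c_?NumericQ] :=
 q /. FindRoot[prob[q, c] == p, {q, 1, 0, 2}]

Table[{c, quantileClayton[1/100, c]}, {c, 0.1, 0.5, 0.1}]  (* placeholder range of copula parameters *)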

NIntegrate fails to converge near a point that is not inside my definite integral?

I am trying to calculate a definite integral. I write:
NIntegrate[expression, {x, 0, 1}, WorkingPrecision -> 100]
"expression" is described below. The WorkingPrecision was added in to help with another error.
I get an error:
"NIntegrate::ncvb: NIntegrate failed to converge to prescribed
accuracy after 9 recursive bisections in x near {x} = {<<156>>}.
NIntegrate obtained <<157>> and <<160>> for the integral and error
estimates. >>"
Why am I getting this error near {x} = {<<156>>} when I am only integrating over 0 < x < 1? And what do the double pointy brackets around the number mean?
The expression is really long, so I think it is more meaningful to show how I generate it. This is a basic version (some of the exponents need to be variables, but these are the lowest values, and I still get the error):
F[n_] := (1 - (1 - F[n - 1])^2)^2;
F[0] = x;
Expr[n_] := (1/(1 - F[n])) Integrate[D[F[n], x]*x, {x, x, 1}];
I get the error when I integrate Expr[3] or higher. Oddly, when I use regular Integrate and then //N at the end, I get a complex number for n=2.
The <<156>> does not mean that the integral is being evaluated at x=156. <<>> is called Skeleton and is used to indicate that a large output was suppressed. From the documentation:
Skeleton[n]
represents a sequence of n omitted elements in an expression printed with Short or Shallow. The standard print form for Skeleton is <<n>>.
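As a quick illustration of Skeleton (added here, not part of the original answer), Short abbreviates long output in exactly this way:
Short[Expand[(1 + x)^20]]
(* prints something like 1 + 20 x + 190 x^2 + <<16>> + 20 x^19 + x^20;
   the <<n>> marks the omitted terms, and how many are shown depends on the window width *)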
Coming to your integral, here is the error that I get (screenshot not reproduced here). You can see that the long number was suppressed in your case as well (depending on your output preferences). The last >> is a link that takes you to the corresponding error message in the documentation.
If you try the advice in the documentation, which is to increase MaxRecursion, you eventually get a new warning, NIntegrate::slwcon. This tells you that either your WorkingPrecision is too small or that you have a singularity (which, in this case, is brought on by the small working precision). Increasing WorkingPrecision to 200 gives a clean numerical result (output not reproduced here).
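For concreteness, the kind of call being described would look something like this (the MaxRecursion value is only an illustrative guess; the original answer does not specify one):
NIntegrate[Expr[3], {x, 0, 1}, WorkingPrecision -> 200, MaxRecursion -> 20]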
You can look a little further into the nature of your expressions.
num = Numerator@Expr@3;
den = Denominator@Expr@3;
Plot[{num, den}, {x, 0, 1}, WorkingPrecision -> 100, PlotRange -> All]
So beyond roughly x = 0.7, your expression has the potential for serious stability issues, resulting in near-singularities. It is the numerator, rather than the denominator, that requires high precision to converge to the right value.
num /. x -> 0.99
num /. x -> 0.99`100
Out[1]= -0.015625
Out[2]= 1.2683685178049112809413795626911317545171610885215799438968\
06379991565*10^-14
den /. x -> 0.99
den /. x -> 0.99`100
Out[3]= 1.28786*10^-14
Out[4]= 1.279743968014714505561671861369465844697720803022743298030747945923286\
915425027352809730413954909*10^-14
You can see the difference here: at machine precision the numerator evaluates to -0.015625 instead of roughly 1.27*10^-14, while the denominator is about right, so the ratio blows up and behaves like a near-singularity.

Problem performing a substitution in a multiple derivative

I have a basic problem in Mathematica that has puzzled me for a while. I want to take the m'th derivative of x*Exp[t*x] and then evaluate it at x = 0, but the following does not work correctly. Please share your thoughts.
D[x*Exp[t*x], {x, m}] /. x -> 0
Also, what does the following error mean?
General::ivar: 0 is not a valid variable.
Edit: my previous example (D[Exp[t*x], {x, m}] /. x -> 0) was trivial, so I made it harder. :)
My question is: how can I force Mathematica to evaluate the derivative first and only then perform the substitution?
As others have pointed out, Mathematica does not (in general) know how to take the derivative an arbitrary number of times, even if you specify that the number is a positive integer.
This means that the D[expr, {x, m}] command remains unevaluated, and when you then set x -> 0, Mathematica tries to take a derivative with respect to a constant, which yields the error message.
In general, what you want is the m'th derivative of the function evaluated at zero.
This can be written as
Derivative[m][Function[x,x Exp[t x]]][0]
or
Derivative[m][# Exp[t #]&][0]
You then get the table of derivative values:
In[2]:= Table[%, {m, 1, 10}]
Out[2]= {1, 2 t, 3 t^2, 4 t^3, 5 t^4, 6 t^5, 7 t^6, 8 t^7, 9 t^8, 10 t^9}
But a little more thought shows that this is closely related to the m'th term of the series, so SeriesCoefficient is also useful here:
In[3]:= SeriesCoefficient[x*Exp[t*x], {x, 0, m}]
Out[3]= Piecewise[{{t^(-1 + m)/(-1 + m)!, m >= 1}}, 0]
The final output is the general form of the m'th series coefficient; since the m'th derivative at zero is m! times that coefficient, this gives a closed form for the derivative as well. The Piecewise is not really necessary, since the expression actually holds for all non-negative integers.
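A quick check of that relationship (added here for illustration):
Table[m! SeriesCoefficient[x*Exp[t*x], {x, 0, m}], {m, 1, 5}]
(* {1, 2 t, 3 t^2, 4 t^3, 5 t^4}, matching the Derivative-based table above *)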
Thanks to your update, it's clear what's happening here. Mathematica doesn't actually calculate the derivative; you then replace x with 0, and it ends up looking at this:
D[Exp[t*0],{0,m}]
which obviously is going to run into problems, since 0 isn't a variable.
I'll assume that you want the mth partial derivative of that function w.r.t. x. The t variable suggests that it might be a second independent variable.
It's easy enough to do without Mathematica: D[Exp[t*x], {x, m}] = t^m Exp[t*x]
And if you evaluate the limit as x approaches zero, you get t^m, since lim(Exp[t*x]) = 1. Right?
Update: Let's try it for x*exp(t*x)
the mth partial derivative w.r.t. x is easily had from Wolfram Alpha:
t^(m-1)*exp(t*x)(t*x + m)
So if x = 0 you get m*t^(m-1).
Q.E.D.
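That closed form is easy to spot-check in Mathematica for a specific m (a quick verification added here):
Simplify[D[x*Exp[t*x], {x, 5}] - t^(5 - 1) Exp[t*x] (t*x + 5)]
(* 0, so the formula matches the explicit 5th derivative *)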
Let's see what is happening with a little more detail:
When you write:
D[Sin[x], {x, 1}]
you get an expression with x in it:
Cos[x]
That is because the x in the {x, 1} part matches the x in the Sin[x] part, so Mathematica understands that you want the derivative with respect to that symbol.
But this x does NOT act like a Block variable for that statement, isolating its meaning from any other x in your program; instead, it enables the chain rule. For example:
In[85]:= z=x^2;
D[Sin[z],{x,1}]
Out[86]= 2 x Cos[x^2]
See? That's perfect! But there is a price.
The price is that the symbols inside the derivative get evaluated as the derivative is taken, and that is spoiling your code.
Of course there are a lot of tricks to get around this; some have already been mentioned. From my point of view, one clear way to understand what is happening is:
f[x_] := x*Exp[t*x];
g[y_, m_] := D[f[x], {x, m}] /. x -> y;
{g[p, 2], g[0, 1]}
Out:
{2 E^(p t) t + E^(p t) p t^2, 1}
HTH!
