Clear["Global`*"]
Integrate[t f[x, y], {y, 0, 1}] -
t Integrate[f[x, y], {y, 0, 1}] // FullSimplify
Why doesn't Mathematica know the result is zero?
It is not a bug. Since your f[x, y] has no definition, Mathematica cannot assume anything about the integrand t f[x, y], so it will not pull the constant factor t out of the unevaluated integral on its own.
You can add a rule to help Mathematica, as shown below, but without such a rule Mathematica is doing the right thing here.
This has been discussed in many places before. Here are some links:
https://groups.google.com/forum/#!msg/comp.soft-sys.math.mathematica/jsiYo9tRj04/rQYCy-X3SXQJ
https://mathematica.stackexchange.com/questions/5610/how-to-simplify-symbolic-integration
For example, you can add this rule:
Clear["Global`*"]
Unprotect[Integrate];
Integrate[t_Symbol*f_, dom_] := t*Integrate[f, dom]; (* pull a lone symbolic factor out of the integral *)
Protect[Integrate];
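With this rule in place, a lone symbolic factor should be pulled out of an otherwise unevaluated integral, for example:
Integrate[t f[x, y], {y, 0, 1}]
(*---> t Integrate[f[x, y], {y, 0, 1}] *)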
Now the original difference gives zero:
Simplify[Integrate[t f[x, y], {y, 0, 1}] - t Integrate[f[x, y], {y, 0, 1}]]
(*---> 0 *)
I'm trying to solve for the slope in this implicit differentiation problem at x = 1, but NSolve is unable to solve it. How can I get around this issue?
eqn[x_, y_] := x*Sin[y] - y*Sin[x] == 2 (*note: bound is -5<=x<=5,-5<=y<=5*)
yPrime = Solve[D[eqn[x, y[x]], x], y'[x]] /. {y[x] -> y,
y'[x] -> y'} // Simplify
{{Derivative[1][y] -> (y Cos[x] - Sin[y])/(x Cos[y] - Sin[x])}}
NSolve[eqn[x, y] /. x -> 1, y] (*this doesn't work*)
NSolve is not really the right tool for the job.
From the Wolfram documentation:
If your equations involve only linear functions or polynomials, then
you can use NSolve to get numerical approximations to all the
solutions. However, when your equations involve more complicated
functions, there is in general no systematic procedure for finding all
solutions, even numerically. In such cases, you can use FindRoot to
search for solutions. You have to give FindRoot a place to start its
search.
Since your equation is not polynomial, the following works:
FindRoot[eqn[x, y] /. x -> 1, {y, -2}]
{y -> -2.7881400978783035}
which you can plug into yPrime.
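For instance, a quick sketch of that step (substituting the FindRoot solution and x -> 1 into the derivative found above):
sol = FindRoot[eqn[x, y] /. x -> 1, {y, -2}];
(y Cos[x] - Sin[y])/(x Cos[y] - Sin[x]) /. sol /. x -> 1
(* the slope on this branch, roughly 0.65 *)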
Picking a starting point is not always obvious, but a quick ContourPlot helps:
ContourPlot[x*Sin[y] - y*Sin[x], {x, -5, 5}, {y, -5, 5}, Contours -> {2}]
It shows that your equation does not have a unique solution y for every x.
I am trying to numerically solve a partial differential equation, where the inhomogeneous term is an integral of another function. Something like this:
NDSolve[{D[f[x, y], x] == NIntegrate[h[x,y+y2],{y2, x, y}],f[0,y] == 0}, f, {x, 0, 1}, {y,0,1}]
where h[x, y] is a well-known function defined earlier.
But it seems that Mathematica does not know how to evaluate the integral.
I do not use Mathematica too often, so I am sure there is a simple solution to this.
Could someone tell me what I am doing wrong?
Thanks.
It is not entirely clear from the question, but I had a similar problem, and the solution in my case was to follow the advice from the Wolfram forums: put the integral in a separate function and force real input.
So in your case this would be:
integral[x_Real,y_Real] := NIntegrate[h[x,y+y2],{y2, x, y}];
NDSolve[{D[f[x, y], x] == integral[x,y], f[0,y] == 0}, f, {x, 0, 1}, {y,0,1}]
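I cannot test it against your actual h, but with a stand-in such as h[x_, y_] := Sin[x + y] (assumed here purely for illustration), the pattern looks like this:
h[x_, y_] := Sin[x + y]; (* stand-in for the real h *)
integral[x_Real, y_Real] := NIntegrate[h[x, y + y2], {y2, x, y}];
sol = NDSolve[{D[f[x, y], x] == integral[x, y], f[0, y] == 0}, f, {x, 0, 1}, {y, 0, 1}];
Plot3D[f[x, y] /. First[sol], {x, 0, 1}, {y, 0, 1}]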
I wanted to try to make a rule to do norm-squared integrals. For example, instead of writing out the following:
Integrate[ Conjugate[Exp[-\[Beta] Abs[x]]] Exp[-\[Beta] Abs[x]],
{x, -Infinity, Infinity}]
I tried creating a function to do this, requiring the function to take a function as its argument:
Clear[complexNorm, g, x]
complexNorm[ g_[ x_ ] ] := Integrate[ Conjugate[g[x]] * g[x],
{x, -Infinity, Infinity}]
v = complexNorm[ Exp[ - \[Beta] Abs[x]]] // Trace
Mathematica doesn't have any trouble doing the first integral, but when my helper function is used, the final result of the trace shows just
complexNorm[E^(-\[Beta] Abs[x])]
with no evaluation of the desired integral.
The syntax closely follows an example I found in http://www.mathprogramming-intro.org/download/MathProgrammingIntro.pdf [page 155], but it doesn't work for me.
Your expression doesn't evaluate to what you expect because complexNorm expects an argument matching the pattern g_[x_]. The expression E^(-\[Beta] Abs[x]) has head Power with two arguments, so it doesn't match that pattern, and the call is returned unevaluated. If you still want to use your function, you can modify it to the following:
complexNorm[g_] := Integrate[ Conjugate[g] * g, {x, -Infinity, Infinity}]
Notice that you're now matching anything. You then call it as complexNorm[expr]. This requires expr to contain the global x somewhere, though, or you'll get all kinds of funny business.
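For example, the original integrand can now be passed directly:
complexNorm[Exp[-\[Beta] Abs[x]]]
(* 1/\[Beta] for \[Beta] > 0; without assumptions, Mathematica may return a ConditionalExpression *)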
Also, can't you just use Abs[x]^2 for the norm squared? Abs[x] usually gives you a result of the form Sqrt[Conjugate[x] x].
That way you can just write:
complexNorm[g_] := Integrate[Abs[g]^2, {x, -Infinity, Infinity}]
Since you're doing quantum mechanics, you may find the following syntactic sugar nice to use:
\[LeftAngleBracket] f_ \[RightAngleBracket] :=
Integrate[Abs[f]^2, {x, -\[Infinity], \[Infinity]}]
That way you can write your expectation values exactly as you would expect them.
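For instance, the original norm-squared integral then reads:
\[LeftAngleBracket] Exp[-\[Beta] Abs[x]] \[RightAngleBracket]
(* the same integral as complexNorm[Exp[-\[Beta] Abs[x]]] above *)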
I have a follow up question to Sasha's answer of my earlier question at TransformedDistribution in Mathematica.
As I already accepted the answer a while back, I thought it made sense to ask this as a new question.
As part of the answer, Sasha defined two functions:
LogNormalStableCDF[{alpha_, beta_, gamma_, sigma_, delta_}, x_Real] :=
Block[{u},
NExpectation[
CDF[StableDistribution[alpha, beta, gamma, sigma], (x - delta)/u],
u \[Distributed] LogNormalDistribution[Log[gamma], sigma]]]
LogNormalStablePDF[{alpha_, beta_, gamma_, sigma_, delta_}, x_Real] :=
Block[{u},
NExpectation[
PDF[StableDistribution[alpha, beta, gamma, sigma], (x - delta)/u]/u,
u \[Distributed] LogNormalDistribution[Log[gamma], sigma]]]
The PDF function seems to work fine:
Plot[LogNormalStablePDF[{1.5, 1, 1, 0.5, 1}, x], {x, -4, 6},
PlotRange -> All]
But if I try to plot the CDF variation:
Plot[LogNormalStableCDF[{1.5, 1, 1, 0.5, 1}, x], {x, -4, 6},
PlotRange -> All]
The evaluation doesn't seem to ever finish.
I've done something similar with the following, substituting a NormalDistribution for the StableDistribution above:
LogNormalNormalCDF[{gamma_, sigma_, delta_}, x_Real] :=
Block[{u},
NExpectation[CDF[NormalDistribution[0, Sqrt[2]], (x - delta)/u],
u \[Distributed] LogNormalDistribution[Log[gamma], sigma]]]
LogNormalNormalPDF[{gamma_, sigma_, delta_}, x_Real] :=
Block[{u},
NExpectation[PDF[NormalDistribution[0, Sqrt[2]], (x - delta)/u]/u,
u \[Distributed] LogNormalDistribution[Log[gamma], sigma]]]
The plots of both the CDF and PDF versions work fine.
Plot[LogNormalNormalPDF[{0.01, 0.4, 0.0003}, x], {x, -0.10, 0.10}, PlotRange -> All]
Plot[LogNormalNormalCDF[{0.01, 0.4, 0.0003}, x], {x, -0.10, 0.10}, PlotRange -> All]
This has me puzzled. Clearly the general approach works in the LogNormalNormalCDF case. Also, the LogNormalStablePDF and LogNormalStableCDF are almost identical. In fact from the code itself, the CDF version seems to have to do less than the PDF version.
So, I hoped someone could:
explain why the LogNormalStableCDF doesn't appear to work (at least in what I consider a reasonable time, I'll try running it over night and see if it ever completes the evaluation) and
suggest a way to get LogNormalStableCDF to work more quickly.
Many thanks,
J.
The new distribution functionality has amazing potential, but its newness shows. There are several bugs that I and others have encountered and that hopefully will be dealt with in upcoming bugfixes. However, this seems not to be one of them.
In this case the problem is that the argument x is defined with the pattern x_Real, while the plot range is given as integers. So when Plot starts, it tries the end points, for which the function returns unevaluated because the pattern doesn't match. Removing Real from the definition makes it work.
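For example, restating the first definition without the Real restriction (x_?NumericQ is a common alternative I would also expect to work):
LogNormalStableCDF[{alpha_, beta_, gamma_, sigma_, delta_}, x_] :=
 Block[{u},
  NExpectation[
   CDF[StableDistribution[alpha, beta, gamma, sigma], (x - delta)/u],
   u \[Distributed] LogNormalDistribution[Log[gamma], sigma]]]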
The second set of functions works because its plot range is given with machine-precision numbers.
Be prepared to wait a bit, because the function evaluates pretty slowly. In fact, you have to curb MaxRecursion a bit, because Plot gets too enthusiastic and adds way too many points here (maybe due to small-scale inaccuracies):
Plot[LogNormalStableCDF[{1.5, 1, 1, 0.5, 1}, x], {x, -4, 6},
PlotRange -> All, PlotPoints -> 10, MaxRecursion -> 4]
yields the plot. It took about 9 minutes to generate and, as you can see, Plot placed a lot of points on the flanks of the graph.
I have a function f[x_, y_, z_] := Limit[g[x + eps, y, z], eps -> 0]; and I plot f[x, y, z] in the next step. Earlier, I used to evaluate the limit and copy the resulting expression into the definition of f. I tried to do it all in one step. However, the Limit is evaluated only when I try to plot f. As a result, every time I change the variables around and replot, the limit is evaluated all over again (it takes about a minute to evaluate, so it becomes annoying). I tried evaluating the limit first and then doing f[x_, y_, z_] := %, but that doesn't work either. How do I get the function to evaluate the limit upon declaration?
The function you need is, logically enough, called Evaluate, and you can use it within the Plot command.
Here is a contrived example:
f[x_, y_, z_] := Limit[Multinomial[x, y, z], x -> 0]
Plot3D[ Evaluate[ f[x, y, z] ], {y, 1, 5}, {z, 1, 5}]
Addressing your follow-up question, perhaps all you seek is something like:
ff = f[x, y, z]
Plot3D[ff, {y, 1, 5}, {z, 1, 5}]
or possibly merely
ClearAll[f, x, y, z]
f[x_, y_, z_] = Limit[Multinomial[x, y, z], x -> 0] (* Set (=) evaluates the Limit once, at definition time *)
Plot3D[f[x, y, z], {y, 1, 5}, {z, 1, 5}]
It would be helpful if you would post a more complete version of your code.
An alternative to Mr Wizard's solution is to put the Evaluate in the function's definition:
f[x_, y_, z_] := Evaluate[Limit[Multinomial[x, y, z], x->0]]
Plot3D[f[x, y, z], {y, 1, 5}, {z, 1, 5}]
You can compare these two versions with the one without an Evaluate by timing the Plot.
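For instance, a rough comparison along those lines (fDelayed and fImmediate are hypothetical names for the two variants):
fDelayed[x_, y_, z_] := Limit[Multinomial[x, y, z], x -> 0] (* limit redone at every call *)
fImmediate[x_, y_, z_] := Evaluate[Limit[Multinomial[x, y, z], x -> 0]] (* limit done once, at definition time *)
Plot3D[fDelayed[x, y, z], {y, 1, 5}, {z, 1, 5}] // AbsoluteTiming
Plot3D[fImmediate[x, y, z], {y, 1, 5}, {z, 1, 5}] // AbsoluteTiming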