I aim to calculate and preserve the results from the maximization of a function with two arguments and one exogenous parameter, when the maximum cannot be derived (in closed form) by Maximize. For instance, let
f[x_, y_, a_] = Max[0, Min[a - y, 1 - x - y]]
be the objective function, where a is positive. The maximization is to take place over [0,1]^2, so I set
m[a_]=Maximize[{f[x, y, a], 0 <= x <= 1 && 0 <= y <= 1 && 0 <= a}, {x,y}]
Obviously m can be evaluated at any point a and it is therefore possible to plot the maximizing x by employing
Plot[x /. m[a][[2]], {a, 0.01, 1}]
As I need to do several plots and further derivations containing the optimal solutions x and y (which of course are functions of a), I would like to preserve/save the results from the optimization for further use. Is there an elegant way to do this, or do I have to write some kind of loop to extract the values myself?
Now that I've seen the full text of your comment on my original comment, I suspect that you do understand the differences between Set and SetDelayed well enough. I think what you may be looking for is memoisation, sometimes implemented a bit like this:
f[x_,y_] := f[x,y] = Max[0,Min[a-y,1-x-y]]
When you evaluate, for example, f[3,4] for the first time, it evaluates the entire expression to the right of the :=. That rhs is itself an assignment, f[3,4] = Max[0, Min[a-4, 1-3-4]]. The next time you evaluate f[3,4], Mathematica already has a value for it, so it doesn't need to recompute anything; it just recalls the stored value, which in this example is Max[0, Min[a-4, -6]] of course.
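If it is the Maximize results themselves that you want to keep, the same idiom can be applied directly to m; here is a minimal sketch reusing f and m from the question (the ?NumericQ test is my addition, so cached definitions are only created for concrete values of a):
f[x_, y_, a_] = Max[0, Min[a - y, 1 - x - y]];
(* memoised version of m: each numeric a is maximised only once, then the result is stored *)
m[a_?NumericQ] := m[a] = Maximize[{f[x, y, a], 0 <= x <= 1 && 0 <= y <= 1 && 0 <= a}, {x, y}]
Plot[x /. m[a][[2]], {a, 0.01, 1}]  (* later evaluations at the same a reuse the stored rules *)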
I remain a little uncertain of what you are trying to do so this answer may not be any use to you at all.
Simple approach
results = Table[{x, y, a} /. m[a][[2]], {a, 0.01, 1, .01}]
ListPlot[{#[[3]], #[[1]]} & /@ results, Joined -> True]
(The Set = is OK here so long as a is not previously defined.)
If you want to utilise Plot[]'s automatic evaluation, take a look at Reap[]/Sow[]:
{p, data} = Reap[Plot[x /. Sow[m[a]][[2]], {a, 0.01, 1}]];
Show[p]
(This takes a few minutes, as the function output is a mess.)
Hmm, try this again, assuming you want x, y, a and the maximum value:
{p, data} = Reap[Plot[x /. Sow[{a, m[a]}][[2, 2]], {a, 0.01, .1}]];
Show[p]
results = {#[[1]], x /. #[[2, 2]], y /. #[[2, 2]], #[[2, 1]]} & /@ data[[1]]
BTW, your function appears to be independent of x over some ranges, which is why the plot is a mess.
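Since the optimal x and y are needed again in further derivations, another option is to turn the sampled solutions from the simple approach above into interpolating functions of a. A rough sketch, building on the results table computed earlier (xOpt/yOpt are just names chosen here, and the interpolation is only as reliable as the sampling if the optimum moves abruptly):
(* results has the form {{xval, yval, aval}, ...} *)
xOpt = Interpolation[{#[[3]], #[[1]]} & /@ results];
yOpt = Interpolation[{#[[3]], #[[2]]} & /@ results];
Plot[{xOpt[a], yOpt[a]}, {a, 0.01, 1}]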
Related
With Mathematica 8.0.1.0, I have used FindRoot[] to identify the intersection of two PDF functions.
But if I have PDF functions that intersect at more than one point, and the upper limit of the x-axis range extends beyond the second intersection, FindRoot[] only returns the second intersection.
pdf1 = 1/x 0.5795367855565214` (E^(
11.170058830053032` (-1.525439351903338` - Log[x]))
Erfc[1.6962452696714152` (-0.5548887795964352` - Log[x])] +
E^(1.2932713057519` (2.60836043407439` + Log[x]))
Erfc[1.6962452696714152` (2.720730943938539` + Log[x])]);
pdf2 = 1/x 0.4648445097126269` (E^(
5.17560914275408` (-2.5500941338198615` - Log[x]))
Erfc[1.7747318880142482` (-2.139288893723375` - Log[x])] +
E^(1.1332542415053757` (3.050849516581922` + Log[x]))
Erfc[1.7747318880142482` (3.1407996592474956` + Log[x])]);
Plot[{pdf1, pdf2}, {x, 0, 0.5}, PlotRange -> All] (* Shows 1st intersection *)
Plot[{pdf1, pdf2}, {x, 0.4, 0.5}, PlotRange -> All] (* Shows 2nd intersection *)
{x /. FindRoot[pdf1 == pdf2, {x, 0.00001, 0.5}],
x /. FindRoot[pdf1 == pdf2, {x, 0.00001, 0.4}]}
The above plots show the issue: when plotted, the two PDFs intersect at two points. The two FindRoot[] calls above return
{0.464719, 0.0452777}
respectively.
As I can't know beforehand if I'll have a second intersection, and I don't know where it might fall on the x axis if I did, can anyone suggest a way to have FindRoot[] return only the first intersection rather than the second?
If not, can anyone suggest another way to go about it?
With FindRoot[], you can only get a single root for a given starting point. Iterating through different options is cumbersome and you might not even get the desired result for certain edge cases unless you hit upon the right choice of starting point.
In this case, something like NSolve or Reduce might be a better option. If you know that your expressions decay, you can use a reasonable upper bound for possible values of x; the following is pretty quick and will give you all the roots.
NSolve[{pdf1 == pdf2, 0 < x < 1}, x] // Timing
Out[1]= {0.073495, {{x -> 0.0452777}, {x -> 0.464719}}}
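Since only the first (leftmost) intersection is wanted, you can then simply take the smallest of the returned roots; a small follow-up using the NSolve call above (firstIntersection is just an illustrative name):
firstIntersection = Min[x /. NSolve[{pdf1 == pdf2, 0 < x < 1}, x]]
(* 0.0452777 *)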
How about the following:
First you have to find all roots in one step. I do this with
roots=Reduce[pdf1==pdf2&&0.000001<x<0.5,x]
And then you could take the minimum (first root on the x axis in your special case).
rootMin=Min[N[x/.{ToRules[roots]}]]
I am trying to get Mathematica to approximate an integral that is a function of various parameters. I don't need it to be extremely precise -- the answer will be a fraction, and 5 digits would be nice, but I'd settle for as few as 2.
The problem is that there is a symbolic integral buried in the main integral, and I can't use NIntegrate on it since it's symbolic.
F[x_, c_] := (1 - (1 - x)^c)^c;
a[n_, c_, x_] := F[a[n - 1, c, x], c];
a[0, c_, x_] = x;
MyIntegral[n_,c_] :=
NIntegrate[Integrate[(D[a[n,c,y],y]*y)/(1-a[n,c,x]),{y,x,1}],{x,0,1}]
Mathematica starts hanging when n is greater than 2 and c is greater than 3 or so (generally as both n and c get a little higher).
Are there any tricks for rewriting this expression so that it can be evaluated more easily? I've played with different WorkingPrecision and AccuracyGoal and PrecisionGoal options on the outer NIntegrate, but none of that helps the inner integral, which is where the problem is. In fact, for the higher values of n and c, I can't even get Mathematica to expand the inner derivative, i.e.
Expand[D[a[4,6,y],y]]
hangs.
I am using Mathematica 8 for Students.
If anyone has any tips for how I can get Mathematica to approximate this, I would appreciate it.
Since you only want a numerical output (or that's what you'll get anyway), you can convert the symbolic integration into a numerical one using just NIntegrate as follows:
Clear[a,myIntegral]
a[n_Integer?Positive, c_Integer?Positive, x_] :=
a[n, c, x] = (1 - (1 - a[n - 1, c, x])^c)^c;
a[0, c_Integer, x_] = x;
myIntegral[n_, c_] :=
NIntegrate[D[a[n, c, y], y]*y/(1 - a[n, c, x]), {x, 0, 1}, {y, x, 1},
WorkingPrecision -> 200, PrecisionGoal -> 5]
This is much faster than performing the integration symbolically. Here's a comparison:
yoda:
myIntegral[2,2]//Timing
Out[1]= {0.088441, 0.647376595...}
myIntegral[5,2]//Timing
Out[2]= {1.10486, 0.587502888...}
rcollyer:
MyIntegral[2,2]//Timing
Out[3]= {1.0029, 0.647376}
MyIntegral[5,2]//Timing
Out[4]= {27.1697, 0.587503006...}
(* Obtained with WorkingPrecision->500, PrecisionGoal->5, MaxRecursion->20 *)
Jand's function has timings similar to rcollyer's. Of course, as you increase n, you will have to increase your WorkingPrecision way higher than this, as you've experienced in your previous question. Since you said you only need about 5 digits of precision, I've explicitly set PrecisionGoal to 5. You can change this as per your needs.
To codify the comments, I'd try the following. First, to eliminate infinite recursion with regard to the variable n, I'd rewrite your functions as
F[x_, c_] := (1 - (1-x)^c)^c;
(* see note below *)
a[n_Integer?Positive, c_, x_] := F[a[n - 1, c, x], c];
a[0, c_, x_] = x;
that way n==0 will actually be a stopping point. The ?Positive form is a PatternTest, which is useful for applying additional conditions to the parameters. I suspect the issue is that NIntegrate is re-evaluating the inner Integrate for every value of x, so I'd pull that evaluation out, like
MyIntegral[n_,c_] :=
With[{ int = Integrate[(D[a[n,c,y],y]*y)/(1-a[n,c,x]),{y,x,1}] },
NIntegrate[int,{x,0,1}]
]
where With is one of several scoping constructs specifically for creating local constants.
Your comments indicate that the inner integral takes a long time; have you tried simplifying the integrand, as it is a derivative of a times a function of a? It looks like the result of a chain-rule expansion to me.
Note: as per Yoda's suggestion in the comments, you can add a caching, or memoization, mechanism to a. Change its definition to
d:a[n_Integer?Positive, c_, x_] := d = F[a[n - 1, c, x], c];
The trick here is that in d:a[ ... ], d is a named pattern that is used again in d = F[...], caching the value of a for those particular parameter values.
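If the named-pattern form looks unusual, here is a toy illustration of the same trick on a Fibonacci function (not part of the original answer, just to show the mechanics):
fib[0] = 0; fib[1] = 1;
(* the whole left-hand side is named lhs; on first evaluation the rhs
   stores its own result via lhs = ..., exactly like f[x_] := f[x] = ... *)
lhs : fib[n_Integer?Positive] := lhs = fib[n - 1] + fib[n - 2];
fib[20]  (* 6765; fib[2] through fib[20] are now cached as DownValues *)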
Suppose I write a black-box function, which evaluates an expensive complex-valued function numerically and then returns the real and imaginary parts.
fun[x_?InexactNumberQ] := Module[{f = Sin[x]}, {Re[f], Im[f]}]
Then I can use it in Plot as usual, but Plot does not recognize that the function returns a pair, and colors both curves the same color. How does one tell Mathematica that the function specified always returns a vector of a fixed length? Or how does one style this plot?
EDIT: Given the attempts at answering the problem, I think that avoiding double evaluation is only possible if styling is performed as post-processing of the graphics obtained. Most likely the following is not robust, but it seems to work for my example:
gr = Plot[fun[x + I], {x, -1, 1}, ImageSize -> 250];
k = 1;
{gr, gr /. {el_Line :> {ColorData[1][k++], el}}}
One possibility is:
Plot[{#[[1]], #[[2]]}, {x, -1, 1}, PlotStyle -> {{Red}, {Blue}}] &@ fun[x + I]
Edit
If your functions are not really smooth (i.e., not almost linear), there is not much you can do to prevent the double evaluation, as it will happen (sort of) anyway due to the nature of the Plot[] mesh exploration algorithm.
For example:
fun[x_?InexactNumberQ] := Module[{f = Sin[3 x]}, {Re[f], Im[f]}];
Plot[{#[[1]], #[[2]]}, {x, -1, 1}, Mesh -> All,
 PlotStyle -> {{Red}, {Blue}}] &@ fun[x + I]
I don't think there's a good solution to this if your function is expensive to compute. Plot will only acknowledge that there are several curves to be styled if you either give it an explicit list of functions as argument, or you give it a function that it can evaluate to a list of values.
The reason you might not want to do what @belisarius suggested is that it would compute the function twice (twice as slow).
However, you could use memoization to avoid this (i.e. the f[x_] := f[x] = ... construct) and go with his solution. But this can fill up your memory quickly if the function is evaluated at many real-valued arguments. To prevent this, you may want to try what I wrote about caching only a limited number of values, to avoid filling up the memory: http://szhorvat.net/pelican/memoization-in-mathematica.html
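A minimal sketch of that combination, assuming the simple unbounded cache is acceptable (the linked post covers limiting its size):
ClearAll[fun];
(* store the pair for each numeric argument, so the second lookup at the same point hits the cache *)
fun[x_?InexactNumberQ] := fun[x] = Module[{f = Sin[x]}, {Re[f], Im[f]}];
Plot[{#[[1]], #[[2]]}, {x, -1, 1}, PlotStyle -> {{Red}, {Blue}}] &@ fun[x + I]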
If possible for your actual application, one way is to allow fun to take symbolic input in addition to just numeric, and then Evaluate it inside of Plot:
fun2[x_] := Module[{f = Sin[x]}, {Re[f], Im[f]}]
Plot[Evaluate[fun2[x + I]], {x, -1, 1}]
This has the same effect as if you had instead evaluated:
Plot[{-Im[Sinh[1 - I x]], Re[Sinh[1 - I x]]}, {x, -1, 1}]
What is the simplest way to map an arbitrarily funky nested list expr to a function unflatten so that expr == unflatten @@ Flatten @ expr?
Motivation:
Compile can only handle full arrays (something I just learned -- but not from the error message), so the idea is to use unflatten together with a compiled version of the flattened expression:
fPrivate = Compile[{x, y}, Evaluate@Flatten@expr];
f[x_?NumericQ, y_?NumericQ] := unflatten @@ fPrivate[x, y]
Example of a solution to a less general problem:
What I actually need to do is to calculate all the derivatives for a given multivariate function up to some order. For this case, I hack my way along like so:
expr=Table[D[x^2 y+y^3,{{x,y},k}],{k,0,2}];
unflatten=Module[{f,x,y,a,b,sslot,tt},
tt=Table[D[f[x,y],{{x,y},k}],{k,0,2}] /.
{Derivative[a_,b_][_][__]-> x[a,b], f[__]-> x[0,0]};
(Evaluate[tt/.MapIndexed[#1->sslot[#2[[1]]]&,
Flatten[tt]]/. sslot-> Slot]&) ]
Out[1]= {x^2 y + y^3, {2 x y, x^2 + 3 y^2}, {{2 y, 2 x}, {2 x, 6 y}}}
Out[2]= {#1, {#2, #3}, {{#4, #5}, {#5, #7}}} &
This works, but it is neither elegant nor general.
Edit: Here is the "job security" version of the solution provided by aaz:
makeUnflatten[expr_List]:=Module[{i=1},
Function@Evaluate@ReplaceAll[
If[ListQ[#1],Map[#0,#1],i++]&@expr,
i_Integer-> Slot[i]]]
It works a charm:
In[2]:= makeUnflatten[expr]
Out[2]= {#1,{#2,#3},{{#4,#5},{#6,#7}}}&
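Tying this back to the Compile motivation at the top, the pieces might be combined like this (a sketch reusing fPrivate and f from above; unflat is just a name chosen here, and on newer versions the Rule/RuleDelayed fix discussed further down applies):
expr = Table[D[x^2 y + y^3, {{x, y}, k}], {k, 0, 2}];
unflat = makeUnflatten[expr];                (* build the structure function once *)
fPrivate = Compile[{x, y}, Evaluate@Flatten@expr];
f[x_?NumericQ, y_?NumericQ] := unflat @@ fPrivate[x, y]
f[1., 2.]  (* {value, gradient, Hessian} of x^2 y + y^3 at (1, 2) *)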
You obviously need to save some information about list structure, because Flatten[{a,{b,c}}]==Flatten[{{a,b},c}].
If ArrayQ[expr], then the list structure is given by Dimensions[expr] and you can reconstruct it with Partition. E.g.
expr = {{a, b, c}, {d, e, f}};
dimensions = Dimensions[expr]
{2,3}
unflatten = Fold[Partition, #1, Reverse[Drop[dimensions, 1]]]&;
expr == unflatten @ Flatten[expr]
(The Partition man page actually has a similar example called unflatten.)
If expr is not an array, you can try this:
expr = {a, {b, c}};
indexes = Module[{i=0}, If[ListQ[#1], Map[#0, #1], ++i]& @ expr]
{1, {2, 3}}
slots = indexes /. {i_Integer -> Slot[i]}
{#1, {#2, #3}}
unflatten = Function[Release[slots]]
{#1, {#2, #3}} &
expr == unflatten @@ Flatten[expr]
I am not sure what you are trying to do with Compile. It is used when you want to evaluate procedural or functional expressions very quickly on numerical values, so I don't think it is going to help here. If repeated calculations of D[f,...] are impeding your performance, you can precompute and store them with something like
Table[d[k]=D[f,{{x,y},k}],{k,0,kk}];
Then just call d[k] to get the kth derivative.
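For the example function from the question, that might look like the following (fExpr is just a name used here, and the order 2 is arbitrary):
fExpr = x^2 y + y^3;
(* precompute and store the k'th derivatives once *)
Table[d[k] = D[fExpr, {{x, y}, k}], {k, 0, 2}];
d[2]  (* {{2 y, 2 x}, {2 x, 6 y}}, the Hessian *)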
I just wanted to update the excellent solutions by aaz and Janus. It seems that, at least in Mathematica 9.0.1.0 on Mac OS X, the replacement rule (see aaz's solution)
{i_Integer -> Slot[i]}
fails. If, however, we use
{i_Integer :> Slot[i]}
instead, we succeed. The same holds, of course, for the ReplaceAll call in Janus's "job security" version.
For good measure, I include my own function.
unflatten[ex_List, exOriginal_List] :=
Module[
{indexes, slots, unflat},
indexes =
Module[
{i = 0},
If[ListQ[#1], Map[#0, #1], ++i] &@ exOriginal
];
slots = indexes /. {i_Integer :> Slot[i]};
unflat = Function[Release[slots]];
unflat ## ex
];
(* example *)
expr = {a, {b, c}};
expr // Flatten // unflatten[#, expr] &
It might seem a little like a cheat to use the original expression in the function, but as aaz points out, we need some information from the original expression. You don't need all of it, but passing the whole original expression is what lets a single function unflatten anything.
My application is similar to Janus's: I am parallelizing calls to Simplify for a tensor. Using ParallelTable I can significantly improve performance, but I wreck the tensor structure in the process. This gives me a quick way to reconstruct my original tensor, simplified.
I have a basic problem in Mathematica which has puzzled me for a while. I want to take the m'th derivative of x*Exp[t*x], then evaluate this at x=0. But the following does not work correctly. Please share your thoughts.
D[x*Exp[t*x], {x, m}] /. x -> 0
Also, what does the following error mean?
General::ivar: 0 is not a valid variable.
Edit: my previous example (D[Exp[t*x], {x, m}] /. x -> 0) was trivial. So I made it harder. :)
My question is: how do I force it to do the derivative evaluation first, and then the substitution?
As pointed out by others, (in general) Mathematica does not know how to take the derivative an arbitrary number of times, even if you specify that number is a positive integer.
This means that the D[expr,{x,m}] command remains unevaluated, and when you then set x->0, it ends up trying to take the derivative with respect to a constant, which yields the error message.
In general, what you want is the m'th derivative of the function evaluated at zero.
This can be written as
Derivative[m][Function[x,x Exp[t x]]][0]
or
Derivative[m][# Exp[t #]&][0]
You then get a table of the derivatives at zero:
In[2]:= Table[%, {m, 1, 10}]
Out[2]= {1, 2 t, 3 t^2, 4 t^3, 5 t^4, 6 t^5, 7 t^6, 8 t^7, 9 t^8, 10 t^9}
But a little more thought shows that you really just want the m'th term in the series, so SeriesCoefficient does what you want:
In[3]:= SeriesCoefficient[x*Exp[t*x], {x, 0, m}]
Out[3]= Piecewise[{{t^(-1 + m)/(-1 + m)!, m >= 1}}, 0]
The final output is the general form of the m'th series coefficient rather than the derivative itself; multiplying by m! recovers the m'th derivative at zero. The Piecewise is not really necessary, since the expression actually holds for all non-negative integers.
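A quick check of that relation (the range of m here is just for illustration):
(* the m'th Taylor coefficient times m! recovers the m'th derivative at 0 *)
Table[m! SeriesCoefficient[x*Exp[t*x], {x, 0, m}] ==
   Derivative[m][# Exp[t #] &][0], {m, 1, 5}]
(* {True, True, True, True, True} *)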
Thanks to your update, it's clear what's happening here. Mathematica doesn't actually calculate the derivative; you then replace x with 0, and it ends up looking at this:
D[Exp[t*0],{0,m}]
which obviously is going to run into problems, since 0 isn't a variable.
I'll assume that you want the mth partial derivative of that function w.r.t. x. The t variable suggests that it might be a second independent variable.
It's easy enough to do without Mathematica: D[Exp[t*x], {x, m}] = t^m Exp[t*x]
And if you evaluate the limit as x approaches zero, you get t^m, since lim(Exp[t*x]) = 1. Right?
Update: Let's try it for x*exp(t*x)
the mth partial derivative w.r.t. x is easily had from Wolfram Alpha:
t^(m-1)*exp(t*x)(t*x + m)
So if x = 0 you get m*t^(m-1).
Q.E.D.
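If you want to confirm this inside Mathematica, note that the original replacement works fine once m is an explicit integer, because D then evaluates before x -> 0 is applied. A quick check:
Table[D[x*Exp[t*x], {x, m}] /. x -> 0, {m, 1, 4}]
(* {1, 2 t, 3 t^2, 4 t^3}, i.e. m t^(m-1) *)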
Let's see what is happening with a little more detail:
When you write:
D[Sin[x], {x, 1}]
you get an expression with x in it
Cos[x]
That is because the x in the {x,1} part matches the x in the Sin[x] part, and so Mma understands that you want to make the derivative for that symbol.
But this x does NOT act as a Block variable for that statement, isolating its meaning from any other x you have in your program, and that is what enables the chain rule. For example:
In[85]:= z=x^2;
D[Sin[z],{x,1}]
Out[86]= 2 x Cos[x^2]
See? That's perfect! But there is a price.
The price is that the symbols inside the derivative get evaluated as the derivative is taken, and that is spoiling your code.
Of course there are a lot of tricks to get around this. Some have already been mentioned. From my point of view, one clear way to understand what is happening is:
f[x_] := x*Exp[t*x];
g[y_, m_] := D[f[x], {x, m}] /. x -> y;
{g[p, 2], g[0, 1]}
Out:
{2 E^(p t) t + E^(p t) p t^2, 1}
HTH!