Weird behavior of the InterpolationOrder option of Interpolation

When trying to recreate an InterpolatingFunction produced by NDSolve, I faced a very strange problem with the InterpolationOrder option of Interpolation. Consider the following InterpolatingFunction (an example from the Documentation):
ifun = First[
x /. NDSolve[{x'[t] == Exp[x[t]] - x[t], x[0] == 1}, x, {t, 0, 10}]]
Now let us try to reconstruct it. Here is the data:
Needs["DifferentialEquations`InterpolatingFunctionAnatomy`"]
data = Transpose@{InterpolatingFunctionGrid[ifun],
InterpolatingFunctionValuesOnGrid[ifun]};
And here is InterpolationOrder:
interpolationOrder = InterpolatingFunctionInterpolationOrder[ifun]
(*=> {3}*)
Now we try to construct the InterpolatingFunction:
Interpolation[data, InterpolationOrder -> interpolationOrder];
and get an error message:
Interpolation::inord: Value of option InterpolationOrder -> {3} should
be a non-negative machine-sized integer or a list of integers with
length equal to the number of dimensions, 1. >>
But if we specify InterpolationOrder by hand, it is OK:
Interpolation[data, InterpolationOrder -> {3}]
(*=> InterpolatingFunction[{{0.,0.516019}},<>]*)
Can anyone explain why InterpolationOrder -> interpolationOrder does not work while InterpolationOrder -> {3} does, although interpolationOrder must be replaced with {3} BEFORE Interpolation is called, according to the standard evaluation sequence?
P.S. The problem occurs in Mathematica 7.0.1 and 8.0.1 but not in Mathematica 5.2.
UPDATE
I have found one workaround for this bug:
Interpolation[data,
ToExpression@ToString[InterpolationOrder -> interpolationOrder]]
works as expected.
It seems that the expressions generated by evaluating Rule[InterpolationOrder, interpolationOrder] and Rule[InterpolationOrder, {3}] have different internal structures, despite being identical:
ByteCount // Attributes
ByteCount[InterpolationOrder -> interpolationOrder]
ByteCount[InterpolationOrder -> {3}]
Order[InterpolationOrder -> interpolationOrder,
InterpolationOrder -> {3}]
(*=>
{Protected}
192
112
0
*)

It seems that I have found the reason for this behavior: the InterpolatingFunctionInterpolationOrder function returns a packed array:
Developer`PackedArrayQ@InterpolatingFunctionInterpolationOrder[ifun]
(*=> True*)
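This also explains why the ToString/ToExpression workaround above helps: the round trip through a string silently unpacks the array. A quick check (a sketch, assuming ifun and the definitions above):

```mathematica
(* the string round trip produces an unpacked copy of the same list *)
Developer`PackedArrayQ[
 ToExpression@ToString[InterpolatingFunctionInterpolationOrder[ifun]]]
(*=> False*)
```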
We can convert {3} into a packed array ourselves:
Interpolation[data,
InterpolationOrder -> Developer`ToPackedArray@{3}];
(*=> gives the error Message*)
So the reason is that Interpolation does not support a packed array as a value for the InterpolationOrder option. The workaround is to unpack it manually:
Interpolation[data,
InterpolationOrder -> Developer`FromPackedArray@interpolationOrder]
(*=> InterpolatingFunction[{{0.,0.516019}},<>]*)

Very strange behaviour indeed. Something like
a = {3};
Interpolation[data, InterpolationOrder -> a]
works fine, and both ??interpolationOrder and OwnValues[interpolationOrder] seem to indicate that interpolationOrder is just equal to {3}. Even weirder, this does seem to work:
interpolationOrder = 2 InterpolatingFunctionInterpolationOrder[ifun]/2
Interpolation[data, InterpolationOrder -> interpolationOrder]


Which of the following two pieces of code is faster or the same, and why?

-record(test, {a = 10}).
test(Test) when is_record(Test,test) -> somethings.
or
test(#test{} = Test) -> somethings.
Which is faster, or are they the same? Why?
It's not too hard to test with the compiler.
To do that, I wrote this module…
-module(my_mod).
-export([t1/1, t2/1]).
-record(test, {a = 10}).
t1(Test) when is_record(Test,test) -> somethings.
t2(#test{} = _Test) -> somethings.
Then I ran erlc -E my_mod.erl, and this is the resulting expanded code:
-file("my_mod.erl", 1).
-module(my_mod).
-export([t1/1,t2/1]).
-record(test,{a = 10}).
t1({test, _} = Test) when true ->
somethings.
t2({test, _} = _Test) ->
somethings.
So, basically… it's the same. Using is_record(Test, test) adds a useless guard (true) but that shouldn't make a difference in terms of speed.
Furthermore, if you use erlc -S my_mod.erl to generate assembly listings, you get:
{module, my_mod}. %% version = 0
{exports, [{module_info,0},{module_info,1},{t1,1},{t2,1}]}.
{attributes, []}.
{labels, 9}.
{function, t1, 1, 2}.
{label,1}.
{line,[{location,"my_mod.erl",6}]}.
{func_info,{atom,my_mod},{atom,t1},1}.
{label,2}.
{test,is_tagged_tuple,{f,1},[{x,0},2,{atom,test}]}.
{move,{atom,somethings},{x,0}}.
return.
{function, t2, 1, 4}.
{label,3}.
{line,[{location,"my_mod.erl",8}]}.
{func_info,{atom,my_mod},{atom,t2},1}.
{label,4}.
{test,is_tagged_tuple,{f,3},[{x,0},2,{atom,test}]}.
{move,{atom,somethings},{x,0}}.
return.
{function, module_info, 0, 6}.
{label,5}.
{line,[]}.
{func_info,{atom,my_mod},{atom,module_info},0}.
{label,6}.
{move,{atom,my_mod},{x,0}}.
{line,[]}.
{call_ext_only,1,{extfunc,erlang,get_module_info,1}}.
{function, module_info, 1, 8}.
{label,7}.
{line,[]}.
{func_info,{atom,my_mod},{atom,module_info},1}.
{label,8}.
{move,{x,0},{x,1}}.
{move,{atom,my_mod},{x,0}}.
{line,[]}.
{call_ext_only,2,{extfunc,erlang,get_module_info,2}}.
As you can see, the two functions are, in fact, identical:
{function, …, 1, …}.
{label,…}.
{line,[{location,"my_mod.erl",…}]}.
{func_info,{atom,my_mod},{atom,…},1}.
{label,…}.
{test,is_tagged_tuple,{f,…},[{x,0},2,{atom,test}]}.
{move,{atom,somethings},{x,0}}.
return.
Maybe the second one. There's no concept of a record at runtime, just tuples.
21> rd(test, {a = 10}).
test
22> erlang:is_record({test, 1}, test).
true

Processing KMZ in Mathematica

I'm stuck on a conversion.
I have a KMZ file with some coordinates. I read the file like this:
m=Import["~/Desktop/locations.kmz","Data"]
I get something like this:
{{LayerName->Point Features,
Geometry->{
Point[{-120.934,49.3321,372}],
Point[{-120.935,49.3275,375}],
Point[{-120.935,49.323,371}]},
Labels->{},LabeledData->{},ExtendedData->{},
PlacemarkNames->{1,2,3},
Overlays->{},NetworkLinks->{}
}}
I want to extract the {x,y,z} from each of the points and also the placemark names {1,2,3} associated with the points. Even if I can just get the points out of Geometry->{}, that would be fine, because I can extract them into a list with List @@@, but I'm lost at the fundamental step: I can't extract the Geometry "Rule".
Thanks for any help,
Ron
While Leonid's answer is correct, you will likely find that it does not work with your code. The reason is that the output of your Import command contains strings, such as "LayerName", rather than symbols, such as LayerName. I've uploaded a KML file to my webspace so we can try this using an actual Import command. Try something like the following:
in = Import["http://facstaff.unca.edu/mcmcclur/my.kml", "Data"];
pointList = "Geometry" /.
Cases[in, Verbatim[Rule]["Geometry", _], Infinity];
pointList /. Point[stuff_] -> stuff
Again, note that "Geometry" is a string. In fact, the contents of in look like this (in InputForm):
{{"LayerName" -> "Waypoints",
"Geometry" -> {Point[{-82.5, 32.5, 0}]},
"Labels" -> {}, "LabeledData" -> {},
"ExtendedData" -> {}, "PlacemarkNames" -> {"asheville"},
"Overlays" -> {}, "NetworkLinks" -> {}}}
Context: KML refers to Keyhole Markup Language. Keyhole was a company that developed tools that ultimately became Google Earth, after they were acquired by Google. KMZ is a zipped version of KML.
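Putting the string-keyed pieces together, both the coordinates and the names can be read off in one go (a sketch, assuming the in expression shown above):

```mathematica
(* look up the string keys in the first (and only) layer of in *)
points = ("Geometry" /. First[in]) /. Point[p_] :> p
names = "PlacemarkNames" /. First[in]
(*=> {{-82.5, 32.5, 0}} and {"asheville"} *)
```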
A simplification to Leonid's and Mark's answers that I believe can safely be made is to remove the fancy Verbatim construct. That is:
Leonid's first operation can be written:
Join @@ Cases[expr, (Geometry -> x_) :> (x /. Point -> Sequence), Infinity]
Leonid's second operation:
Join @@ Cases[expr, (PlacemarkNames -> x_) :> x, Infinity]
I had trouble importing Mark's data, but from what I can guess, one could write:
pointList = Cases[in, ("Geometry" -> x_) :> x, Infinity, 1]
I'll let the votes on this answer tell me if I am correct.
Given your expression
expr = {{LayerName -> Point Features,
Geometry -> {
Point[{-120.934, 49.3321, 372}],
Point[{-120.935, 49.3275, 375}],
Point[{-120.935, 49.323, 371}]},
Labels -> {}, LabeledData -> {}, ExtendedData -> {},
PlacemarkNames -> {1, 2, 3}, Overlays -> {}, NetworkLinks -> {}}}
This will extract the points:
In[121]:=
Flatten[Cases[expr, Verbatim[Rule][Geometry, x_] :> (x /. Point -> Sequence),
Infinity], 1]
Out[121]= {{-120.934, 49.3321, 372}, {-120.935, 49.3275,375}, {-120.935, 49.323, 371}}
And this will extract the placemarks:
In[124]:= Flatten[Cases[expr, Verbatim[Rule][PlacemarkNames, x_] :> x, Infinity], 1]
Out[124]= {1, 2, 3}
Here is a more elegant method exploiting that we are looking for rules, that will extract both:
In[127]:=
{Geometry, PlacemarkNames} /.Cases[expr, _Rule, Infinity] /. Point -> Sequence
Out[127]=
{{{-120.934, 49.3321, 372}, {-120.935, 49.3275,375}, {-120.935, 49.323, 371}}, {1, 2, 3}}
How about Transpose[{"PlacemarkNames", "Geometry"} /. m[[1]]] ?

Making customized InputForm and ShortInputForm

I often wish to see the internal representation of Mathematica's graphical objects, not in FullForm but in the much more readable InputForm, with the ability to select parts of the code by double-clicking on it and easily copy this code to a new input Cell. But the default InputForm does not allow this, since InputForm is displayed by default as a String, not as Mathematica code. Is there a way to have InputForm displayed as Mathematica code?
I also often wish to see a shortened version of such InputForm, in which every long list of coordinates is displayed as its first coordinate followed by the number of skipped coordinate values wrapped in Skeleton, all empty Lists are removed, and all numbers are shortened to display no more than 6 digits. It would be even better to use 6 digits only for coordinates, and for color directives such as Hue to display only 2 significant digits. For example,
Plot[{Sin[x], .5 Sin[2 x]}, {x, 0, 2 \[Pi]},
Filling -> {1 -> {2}}] // ShortInputForm
should give:
Graphics[GraphicsComplex[{{1.28228*^-7, 1.28228*^-7}, <<1133>>},
{{{EdgeForm[], Directive[{Opacity[0.2], Hue[0.67, 0.6, 0.6]}],
GraphicsGroup[{Polygon[{{1133, <<578>>}}]}]},
{EdgeForm[], Directive[{Opacity[0.2], Hue[0.67, 0.6, 0.6]}],
GraphicsGroup[{Polygon[{{432, <<556>>}}]}]}}, {{Hue[0.67, 0.6,
0.6], Line[{1, <<431>>}]}, {Hue[0.91, 0.6, 0.6],
Line[{432, <<701>>}]}}}], {AspectRatio -> GoldenRatio^(-1),
Axes -> True, AxesOrigin -> {0, 0},
Method -> {"AxesInFront" -> True},
PlotRange -> {{0, 2*Pi}, {-1., 1}},
PlotRangeClipping -> True,
PlotRangePadding -> {Scaled[0.02], Scaled[0.02]}}]
(note that -0.9999998592131705 is converted to -1., 1.2822827157509358*^-7 is converted to 1.28228*^-7, and Hue[0.9060679774997897, 0.6, 0.6] is converted to Hue[0.91, 0.6, 0.6]).
In short, I wish to have the output of InputForm as Mathematica code, and also to have a ShortInputForm function that gives the shortened version of this code. Can anybody help me?
As to the first part of the question, I have found one way to achieve what I want:
Plot[{Sin[x], .5 Sin[2 x]}, {x, 0, 2 \[Pi]}, Filling -> {1 -> {2}}] //
InputForm // StandardForm
UPDATE
The most recent version of the shortInputForm function can be found here.
Original post
Here is another, even better solution (compatible with Mathematica 5):
myInputForm[expr_] :=
Block[{oldContexts, output, interpretation, skeleton},
output = ToString[expr, InputForm];
oldContexts = {$Context, $ContextPath};
$Context = "myTemp`"; $ContextPath = {$Context};
output = DisplayForm@ToBoxes[ToExpression[output] /.
{myTemp`interpretation -> If[$VersionNumber >= 6,
System`Interpretation, System`First#{#} &],
myTemp`Row -> System`Row,
myTemp`skeleton -> System`Skeleton,
myTemp`sequence :> (System`Sequence @@ # &)}, StandardForm];
{$Context, $ContextPath} = oldContexts; output]
shortInputForm[expr_] := myInputForm[expr /. {{} -> Sequence[],
lst : {x_ /; VectorQ[x, NumberQ], y__} /;
(MatrixQ[lst, NumberQ] && Length[lst] > 3) :>
{x /. v : {a_, b__} /; Length[v] > 3 :>
{a, interpretation[skeleton[Length[{b}]], sequence@{b}]},
interpretation[skeleton[Length[{y}]], sequence@{y}]},
lst : {x_, y__} /; VectorQ[lst, NumberQ] && Length[lst] > 3 :>
{x, interpretation[skeleton[Length[{y}]], sequence@{y}]}}]
How it works
This solution is based on a simple idea: we need to block the conversion of things like Graphics, Point, and others to typeset expressions, in order to get them displayed in internal form (as expressions suitable for input). Happily, if we do this, the resulting StandardForm output turns out to be just the formatted (two-dimensional) InputForm of the original expression. This is exactly what is needed!
But how to do this?
First of all, this conversion is performed by the FormatValues defined for Symbols like Graphics, Point, etc. One can get the full list of such Symbols by evaluating the following:
list = Symbol /@
Select[DeleteCases[Names["*"], "I" | "Infinity"],
ToExpression[#, InputForm,
Function[symbol, Length[FormatValues@symbol] > 0, HoldAll]] &]
My first idea was simply to Block all these Symbols (and it works!):
myInputForm[expr_] :=
With[{list = list}, Block[list, RawBoxes@MakeBoxes@expr]]
But this method leads to the evaluation of all these Symbols, and it also evaluates all FormatValues for all Symbols on the $ContextPath. I think this should be avoided.
Another way to block these FormatValues is simply to remove the "System`" context from the $ContextPath. But this works only if these Symbols have not yet been resolved to the "System`" context. So we first need to convert our expression to a String, then remove the "System`" context from the $ContextPath, and finally convert the string back to the original expression. All new Symbols will then be associated with the current $Context (Graphics, Point, etc. too, since they are not on the $ContextPath). To prevent context-shadowing conflicts and littering of the "Global`" context, I switch $Context to "myTemp`", which can easily be cleared if necessary.
This is how myInputForm works.
Now about shortInputForm. The idea is not just to display a shortened version of myInputForm, but also to preserve the ability to select and copy parts of the shortened code into a new input cell and use this code as if it were the full code without abbreviations. In version 6 and higher it is possible to achieve the latter with Interpretation. For compatibility with pre-6 versions of Mathematica, I have added a piece of code that removes this ability if $VersionNumber is less than 6.
The only problem I faced when working with Interpretation is that it does not have the SequenceHold attribute, so we cannot simply specify Sequence as the second argument to Interpretation. But this problem is easily avoided by wrapping the sequence in List and then Applying Sequence to it:
System`Sequence @@ # &
Note that I need to specify the exact context for all built-in Symbols I use because at the moment of calling them the "System`" context is not in the $ContextPath.
These are all the non-standard decisions I made in the development of these functions. Suggestions and comments are welcome!
At this moment I have come to the following solution:
round[x_, n_] := (10^-n*Round[10^n*MantissaExponent[x]]) /.
{m_, e_} :> N[m*10^e];
ShortInputForm[expr_] := ((expr /.
{{} -> Sequence[],
lst : {x_ /; VectorQ[x, NumberQ], y__} /;
(MatrixQ[lst, NumberQ] && Length[lst] > 2) :>
{x, Skeleton[Length[{y}]]},
lst : {x_, y__} /; VectorQ[lst, NumberQ] && Length[lst] > 2 :>
{x, Skeleton[Length[{y}]]}} /.
{exp : Except[List | Point][x__] /;
VectorQ[{x}, MachineNumberQ] :>
(round[#, 2] & /@ exp),
x_Real /; MachineNumberQ[x] :> round[x, 6]})
// InputForm // StandardForm)
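As a quick sanity check of the round helper above, it keeps n significant decimal digits of the mantissa (input values taken from the earlier example; assumes the definitions just given):

```mathematica
round[0.9060679774997897, 2]      (* -> 0.91 *)
round[1.2822827157509358*^-7, 6]  (* -> 1.28228*^-7 *)
round[-0.9999998592131705, 6]     (* -> -1. *)
```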

Follow up to: TransformedDistribution in Mathematica

I have a follow up question to Sasha's answer of my earlier question at TransformedDistribution in Mathematica.
As I already accepted the answer a while back, I thought it made sense to ask this as a new question.
As part of the answer Sasha defined 2 functions:
LogNormalStableCDF[{alpha_, beta_, gamma_, sigma_, delta_}, x_Real] :=
Block[{u},
NExpectation[
CDF[StableDistribution[alpha, beta, gamma, sigma], (x - delta)/u],
u \[Distributed] LogNormalDistribution[Log[gamma], sigma]]]
LogNormalStablePDF[{alpha_, beta_, gamma_, sigma_, delta_}, x_Real] :=
Block[{u},
NExpectation[
PDF[StableDistribution[alpha, beta, gamma, sigma], (x - delta)/u]/u,
u \[Distributed] LogNormalDistribution[Log[gamma], sigma]]]
The PDF function seems to work fine:
Plot[LogNormalStablePDF[{1.5, 1, 1, 0.5, 1}, x], {x, -4, 6},
PlotRange -> All]
But if I try to plot the CDF variation:
Plot[LogNormalStableCDF[{1.5, 1, 1, 0.5, 1}, x], {x, -4, 6},
PlotRange -> All]
The evaluation doesn't seem to ever finish.
I've done something similar with the following - substituting a NormalDistribution for the StableDistribution above:
LogNormalNormalCDF[{gamma_, sigma_, delta_}, x_Real] :=
Block[{u},
NExpectation[CDF[NormalDistribution[0, Sqrt[2]], (x - delta)/u],
u \[Distributed] LogNormalDistribution[Log[gamma], sigma]]]
LogNormalNormalPDF[{gamma_, sigma_, delta_}, x_Real] :=
Block[{u},
NExpectation[PDF[NormalDistribution[0, Sqrt[2]], (x - delta)/u]/u,
u \[Distributed] LogNormalDistribution[Log[gamma], sigma]]]
The plots of both the CDF and PDF versions work fine.
Plot[LogNormalNormalPDF[{0.01, 0.4, 0.0003}, x], {x, -0.10, 0.10}, PlotRange -> All]
Plot[LogNormalNormalCDF[{0.01, 0.4, 0.0003}, x], {x, -0.10, 0.10}, PlotRange -> All]
This has me puzzled. Clearly the general approach works in the LogNormalNormalCDF case. Also, LogNormalStablePDF and LogNormalStableCDF are almost identical. In fact, judging from the code itself, the CDF version seems to have less to do than the PDF version.
So, I hoped someone could:
explain why LogNormalStableCDF doesn't appear to work (at least in what I consider a reasonable time; I'll try running it overnight and see if it ever completes the evaluation), and
suggest a way to get LogNormalStableCDF to work more quickly.
Many thanks,
J.
The new distribution functionality has amazing potential, but its newness shows. There are several bugs that I and others have encountered, which hopefully will be dealt with in upcoming bugfixes. However, this seems not to be one of them.
In this case the problem is that the definition restricts the variable x to be Real while the plot range is given as integers. So when Plot starts, it tries the end points, for which the function returns unevaluated because the pattern does not match. Removing Real from the definition (leaving just x_) makes it work.
The second pair of functions works because their plot range is given as machine-precision numbers.
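The pattern mismatch is easy to reproduce in isolation (hypothetical function g):

```mathematica
g[x_Real] := x^2
g[2]   (* no match: returns unevaluated as g[2] *)
g[2.]  (* matches: returns 4. *)
```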
Be prepared to wait a bit, because the function evaluates pretty slowly. In fact, you have to curb MaxRecursion a bit, because Plot gets too enthusiastic and adds way too many points here (maybe due to small-scale inaccuracies):
Plot[LogNormalStableCDF[{1.5, 1, 1, 0.5, 1}, x], {x, -4, 6},
PlotRange -> All, PlotPoints -> 10, MaxRecursion -> 4]
yields the plot (image omitted). It took about 9 minutes to generate, and Plot placed a lot of points on the flanks of the graph.

Debugging a working program on Mathematica 5 with Mathematica 7

I'm currently reading the Mathematica GuideBook for Programming and I was trying to work out one of the very first programs of the book. Basically, when I run the following program:
Plot3D[{Re[Exp[1/(x + I y)]]}, {x, -0.02, 0.022}, {y, -0.04, 0.042},
PlotRange -> {-1, 8}, PlotPoints -> 120, Mesh -> False,
ColorFunction -> Function[{x1, x2, x3}, Hue[Arg[Exp[1/(x1 + I x2)]]]]]
I get either a 1/0 error and an E^∞ error or, if I lower the PlotPoints option to, say, 60, an overflow error. I do get output, but it's not what it's supposed to be: the hue seems to be diffusing off the left corner, whereas it should be diffusing from the origin (as can be seen in the original output).
Here is the original program, which apparently runs on Mathematica 5 (Trott, The Mathematica GuideBook for Programming):
Off[Plot3D::gval];
Plot3D[{Re[Exp[1/(x + I y)]], Hue[Arg[Exp[1/(x + I y)]]]},
{x, -0.02, 0.022}, {y, -0.04, 0.042},
PlotRange -> {-1, 8}, PlotPoints -> 120, Mesh -> False]
On[Plot3D::gval];
However, ColorFunction specified this way (as part of the first Plot3D argument) no longer works, so I tried simply to adapt the code to the new way of using it.
Well, thanks I guess!
If you are satisfied with Mathematica's defaults, you can use the old version of the code: simply cut out , Hue[Arg[Exp[1/(x + I y)]]] and the function works fine.
The problems you are having with the new version of the code seem to stem from the expression Exp[1/(x1 + I x2)]: sometimes this requires evaluating 1/0. At least, if I cut out 1/ the program executes (on Mathematica 7) without complaint, though obviously with the wrong colours. So you probably need to rewrite your colour function.
I finally found two alternative ways to solve my problem. The first is simply to use the << Version5`Graphics` command to make the Plot3D function work the way it did in Mathematica 5. The code taken from the book then works just as it used to.
However, if one wishes to display the hue correctly (that is, without diffusion off the left-hand corner) with the latest version, the Rescale function must be used, like this:
Plot3D[Evaluate[Re[f[x, y]]], {x, -.02, .022}, {y, -0.04, 0.042},
PlotRange -> {-1, 2}, PlotPoints -> 120, Mesh -> False,
ColorFunction -> Function[{x, y, z}, Hue#Rescale[Arg[f[x, y]], {-π, π}]],
ColorFunctionScaling -> False,
ClippingStyle -> None]
I suppose the output of the argument function, which lies in the [-Pi, Pi] range, is not automatically rescaled to the [0, 1] range that Hue expects (ColorFunctionScaling is turned off here), so it must be rescaled explicitly. The result is quite good-looking, although there are some minor differences from the original plot.
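Concretely, Rescale[a, {-π, π}] performs the linear map (a + π)/(2π) into [0, 1], which is what Hue needs here:

```mathematica
Rescale[Pi/2, {-Pi, Pi}]  (* -> 3/4 *)
Rescale[-Pi, {-Pi, Pi}]   (* -> 0 *)
Rescale[Pi, {-Pi, Pi}]    (* -> 1 *)
```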
