Related
I wanted to calculate the probability associated to a given chi-squared value for any given number of degrees of freedom k. I could easily come up with this code:
P[chisquare_, k_] = Manipulate[NIntegrate[PDF[ChiSquareDistribution[k], x], {x, chisquare, Infinity}], {chisquare, 0, 10}, {k, 1, 10}]
but I was wondering: is there any way to do the opposite? I mean having the probability P as an input and getting the associated chi squared value? like if I wanted to compile a chi-squared table such as this https://www.medcalc.org/manual/chi-square-table.php
I tried using Solve but did not accomplish anything, is there an easy way around this?
You can do the integration with CFD and reverse with Quantile, e.g.
NIntegrate[PDF[ChiSquareDistribution[2], a], {a, 0, 3.0}]
0.77687
p = CDF[ChiSquareDistribution[2], 3.0]
0.77687
Quantile[ChiSquareDistribution[2], p]
Re. your link
Quantile[ChiSquareDistribution[2], 1 - #] & /# {0.995, 0.975, 0.20, 0.10, 0.05};
SetPrecision[#, If[# < 1, 3, 4]] & /# %
{0.0100, 0.0506, 3.219, 4.605, 5.991}
Consider following simple, illustrating example
cf = Block[{a, x, degree = 3},
With[{expr = Product[x - a[[i]], {i, degree}]},
Compile[{{x, _Real, 0}, {a, _Real, 1}}, expr]
]
]
This is one of the possible ways to transfer code in the body of a Compile statement. It produces the Part::partd error, since a[[i]] is at the moment of evaluation not a list.
The easy solution is to just ignore this message or turn it off. There are of course other ways around it. For instance one could circumvent the evaluation of a[[i]] by replacing it inside the Compile-body before it is compiled
cf = ReleaseHold[Block[{a, x, degree = 3},
With[{expr = Product[x - a[i], {i, degree}]},
Hold[Compile[{{x, _Real, 0}, {a, _Real, 1}}, expr]] /.
a[i_] :> a[[i]]]
]
]
If the compiled function a large bit of code, the Hold, Release and the replacement at the end goes a bit against my idea of beautiful code. Is there a short and nice solution I have not considered yet?
Answer to the post of Szabolcs
Could you tell me though why you are using With here?
Yes, and it has to do with the reason why I cannot use := here. I use With, to have something like a #define in C, which means a code-replacement at the place I need it. Using := in With delays the evaluation and what the body of Compile sees is not the final piece of code which it is supposed to compile. Therefore,
<< CompiledFunctionTools`
cf = Block[{a, x, degree = 3},
With[{expr := Product[x - a[[i]], {i, degree}]},
Compile[{{x, _Real, 0}, {a, _Real, 1}}, expr]]];
CompilePrint[cf]
shows you, that there is a call to the Mathematica-kernel in the compiled function
I4 = MainEvaluate[ Function[{x, a}, degree][ R0, T(R1)0]]
This is bad because Compile should use only the local variables to calculate the result.
Update
Szabolcs solution works in this case but it leaves the whole expression unevaluated. Let me explain, why it is important that the expression is expanded before it is compiled. I have to admit, my toy-example was not the best. So lets try a better one using With and SetDelayed like in the solution of Szabolcs
Block[{a, x}, With[
{expr := D[Product[x - a[[i]], {i, 3}], x]},
Compile[{{x, _Real, 0}, {a, _Real, 1}}, expr]
]
]
Say I have a polynomial of degree 3 and I need the derivative of it inside the Compile. In the above code I want Mathematica to calculate the derivative for unassigned roots a[[i]] so I can use the formula very often in the compiled code. Looking at the compiled code above
one sees, that the D[..] cannot be compiled as nicely as the Product and stays unevaluated
11 R1 = MainEvaluate[ Hold[D][ R5, R0]]
Therefore, my updated question is: Is it possible to evaluate a piece of code without evaluating the Part[]-accesses in it better/nicer than using
Block[{a, x}, With[
{expr = D[Quiet#Product[x - a[[i]], {i, 3}], x]},
Compile[{{x, _Real, 0}, {a, _Real, 1}}, expr]
]
]
Edit: I put the Quiet to the place it belongs. I had it in front of code block to make it visible to everyone that I used Quiet here to suppress the warning. As Ruebenko pointed already out, it should in real code always be as close as possible to where it belongs. With this approach you probably don't miss other important warnings/errors.
Update 2
Since we're moving away from the original question, we should move this discussion maybe to a new thread. I don't know to whom I should give the best answer-award to my question since we discussed Mathematica and Scoping more than how to suppress the a[[i]] issue.
Update 3
To give the final solution: I simply suppress (unfortunately like I did all the time) the a[[i]] warning with Quiet. In a real example below, I have to use Quiet outside the complete Block to suppress the warning.
To inject the required code into the body of Compile I use a pure function and give the code to inline as argument. This is the same approach Michael Trott is using in, e.g. his Numerics book. This is a bit like the where clause in Haskell, where you define stuff you used afterwards.
newtonC = Function[{degree, f, df, colors},
Compile[{{x0, _Complex, 0}, {a, _Complex, 1}},
Block[{x = x0, xn = 0.0 + 0.0 I, i = 0, maxiter = 256,
eps = 10^(-6.), zeroId = 1, j = 1},
For[i = 0, i < maxiter, ++i,
xn = x - f/(df + eps);
If[Abs[xn - x] < eps,
Break[]
];
x = xn;
];
For[j = 1, j <= degree, ++j,
If[Abs[xn - a[[j]]] < eps*10^2,
zeroId = j + 1;
Break[];
];
];
colors[[zeroId]]*(1 - (i/maxiter)^0.3)*1.5
],
CompilationTarget -> "C", RuntimeAttributes -> {Listable},
RuntimeOptions -> "Speed", Parallelization -> True]]##
(Quiet#Block[{degree = 3, polynomial, a, x},
polynomial = HornerForm[Product[x - a[[i]], {i, degree}]];
{degree, polynomial, HornerForm[D[polynomial, x]],
List ### (ColorData[52, #] & /# Range[degree + 1])}])
And this function is now fast enough to calculate the Newton-fractal of a polynomial where the position of the roots is not fixed. Therefore, we can adjust the roots dynamically.
Feel free to adjust n. Here it runs up to n=756 fluently
(* ImageSize n*n, Complex plange from -b-I*b to b+I*b *)
With[{n = 256, b = 2.0},
DynamicModule[{
roots = RandomReal[{-b, b}, {3, 2}],
raster = Table[x + I y, {y, -b, b, 2 b/n}, {x, -b, b, 2 b/n}]},
LocatorPane[Dynamic[roots],
Dynamic[
Graphics[{Inset[
Image[Reverse#newtonC[raster, Complex ### roots], "Real"],
{-b, -b}, {1, 1}, 2 {b, b}]}, PlotRange -> {{-b, b}, {-
b, b}}, ImageSize -> {n, n}]], {{-b, -b}, {b, b}},
Appearance -> Style["\[Times]", Red, 20]
]
]
]
Teaser:
Ok, here is the very oversimplified version of the code generation framework I am using for various purposes:
ClearAll[symbolToHideQ]
SetAttributes[symbolToHideQ, HoldFirst];
symbolToHideQ[s_Symbol, expandedSymbs_] :=! MemberQ[expandedSymbs, Unevaluated[s]];
ClearAll[globalProperties]
globalProperties[] := {DownValues, SubValues, UpValues (*,OwnValues*)};
ClearAll[getSymbolsToHide];
Options[getSymbolsToHide] = {
Exceptions -> {List, Hold, HoldComplete,
HoldForm, HoldPattern, Blank, BlankSequence, BlankNullSequence,
Optional, Repeated, Verbatim, Pattern, RuleDelayed, Rule, True,
False, Integer, Real, Complex, Alternatives, String,
PatternTest,(*Note- this one is dangerous since it opens a hole
to evaluation leaks. But too good to be ingored *)
Condition, PatternSequence, Except
}
};
getSymbolsToHide[code_Hold, headsToExpand : {___Symbol}, opts : OptionsPattern[]] :=
Join ## Complement[
Cases[{
Flatten[Outer[Compose, globalProperties[], headsToExpand]], code},
s_Symbol /; symbolToHideQ[s, headsToExpand] :> Hold[s],
Infinity,
Heads -> True
],
Hold /# OptionValue[Exceptions]];
ClearAll[makeHidingSymbol]
SetAttributes[makeHidingSymbol, HoldAll];
makeHidingSymbol[s_Symbol] :=
Unique[hidingSymbol(*Unevaluated[s]*) (*,Attributes[s]*)];
ClearAll[makeHidingRules]
makeHidingRules[symbs : Hold[__Symbol]] :=
Thread[List ## Map[HoldPattern, symbs] -> List ## Map[makeHidingSymbol, symbs]];
ClearAll[reverseHidingRules];
reverseHidingRules[rules : {(_Rule | _RuleDelayed) ..}] :=
rules /. (Rule | RuleDelayed)[Verbatim[HoldPattern][lhs_], rhs_] :> (rhs :> lhs);
FrozenCodeEval[code_Hold, headsToEvaluate : {___Symbol}] :=
Module[{symbolsToHide, hidingRules, revHidingRules, result},
symbolsToHide = getSymbolsToHide[code, headsToEvaluate];
hidingRules = makeHidingRules[symbolsToHide];
revHidingRules = reverseHidingRules[hidingRules];
result =
Hold[Evaluate[ReleaseHold[code /. hidingRules]]] /. revHidingRules;
Apply[Remove, revHidingRules[[All, 1]]];
result];
The code works by temporarily hiding most symbols with some dummy ones, and allow certain symbols evaluate. Here is how this would work here:
In[80]:=
FrozenCodeEval[
Hold[Compile[{{x,_Real,0},{a,_Real,1}},D[Product[x-a[[i]],{i,3}],x]]],
{D,Product,Derivative,Plus}
]
Out[80]=
Hold[Compile[{{x,_Real,0},{a,_Real,1}},
(x-a[[1]]) (x-a[[2]])+(x-a[[1]]) (x-a[[3]])+(x-a[[2]]) (x-a[[3]])]]
So, to use it, you have to wrap your code in Hold and indicate which heads you want to evaluate. What remains here is just to apply ReleseHold to it. Note that the above code just illustrates the ideas, but is still quite limited. The full version of my method involves other steps which make it much more powerful but also more complex.
Edit
While the above code is still too limited to accomodate many really interesting cases, here is one additional neat example of what would be rather hard to achieve with the traditional tools of evaluation control:
In[102]:=
FrozenCodeEval[
Hold[f[x_, y_, z_] :=
With[Thread[{a, b, c} = Map[Sqrt, {x, y, z}]],
a + b + c]],
{Thread, Map}]
Out[102]=
Hold[
f[x_, y_, z_] :=
With[{a = Sqrt[x], b = Sqrt[y], c = Sqrt[z]}, a + b + c]]
EDIT -- Big warning!! Injecting code using With or Function into Compile that uses some of Compile's local variables is not reliable! Consider the following:
In[63]:= With[{y=x},Compile[x,y]]
Out[63]= CompiledFunction[{x$},x,-CompiledCode-]
In[64]:= With[{y=x},Compile[{{x,_Real}},y]]
Out[64]= CompiledFunction[{x},x,-CompiledCode-]
Note the renaming of x to x$ in the first case. I recommend you read about localization here and here. (Yes, this is confusing!) We can guess about why this only happens in the first case and not the second, but my point is that this behaviour might not be intended (call it a bug, dark corner or undefined behaviour), so relying on it is fragile ...
Replace-based solutions, like my withRules function do work though (this was not my intended use for that function, but it fits well here ...)
In[65]:= withRules[{y->x},Compile[x,y]]
Out[65]= CompiledFunction[{x},x,-CompiledCode-]
Original answers
You can use := in With, like so:
cf = Block[{a, x, degree = 3},
With[{expr := Product[x - a[[i]], {i, degree}]},
Compile[{{x, _Real, 0}, {a, _Real, 1}}, expr]
]
]
It will avoid evaluating expr and the error from Part.
Generally, = and := work as expected in all of With, Module and Block.
Could you tell me though why you are using With here? (I'm sure you have a good reason, I just can't see it from this simplified example.)
Additional answer
Addressing #halirutan's concern about degree not being inlined during compilation
I see this as exactly the same situation as if we had a global variable defined that we use in Compile. Take for example:
In[18]:= global=1
Out[18]= 1
In[19]:= cf2=Compile[{},1+global]
Out[19]= CompiledFunction[{},1+global,-CompiledCode-]
In[20]:= CompilePrint[cf2]
Out[20]=
No argument
3 Integer registers
Underflow checking off
Overflow checking off
Integer overflow checking on
RuntimeAttributes -> {}
I0 = 1
Result = I2
1 I1 = MainEvaluate[ Function[{}, global][ ]]
2 I2 = I0 + I1
3 Return
This is a common issue. The solution is to tell Compile to inline globals, like so:
cf = Block[{a, x, degree = 3},
With[{expr := Product[x - a[[i]], {i, degree}]},
Compile[{{x, _Real, 0}, {a, _Real, 1}}, expr,
CompilationOptions -> {"InlineExternalDefinitions" -> True}]]];
CompilePrint[cf]
You can check that now there's no callback to the main evaluator.
Alternatively you can inject the value of degree using an extra layer of With instead of Block. This will make you wish for something like this very much.
Macro expansion in Mathematica
This is somewhat unrelated, but you mention in your post that you use With for macro expansion. Here's my first (possibly buggy) go at implementing macro expansion in Mathematica. This is not well tested, feel free to try to break it and post a comment.
Clear[defineMacro, macros, expandMacros]
macros = Hold[];
SetAttributes[defineMacro, HoldAllComplete]
defineMacro[name_Symbol, value_] := (AppendTo[macros, name]; name := value)
SetAttributes[expandMacros, HoldAllComplete]
expandMacros[expr_] := Unevaluated[expr] //. Join ## (OwnValues /# macros)
Description:
macros is a (held) list of all symbols to be expanded.
defineMacro will make a new macro.
expandMacros will expand macro definitions in an expression.
Beware: I didn't implement macro-redefinition, this will not work while expansion is on using $Pre. Also beware of recursive macro definitions and infinite loops.
Usage:
Do macro expansion on all input by defining $Pre:
$Pre = expandMacros;
Define a to have the value 1:
defineMacro[a, 1]
Set a delayed definition for b:
b := a + 1
Note that the definition of b is not fully evaluated, but a is expanded.
?b
Global`b
b:=1+1
Turn off macro expansion ($Pre can be dangerous if there's a bug in my code):
$Pre =.
One way:
cf = Block[{a, x, degree = 3},
With[{expr = Quiet[Product[x - a[[i]], {i, degree}]]},
Compile[{{x, _Real, 0}, {a, _Real, 1}}, expr]]]
be careful though, it you really want this.
Original code:
newtonC = Function[{degree, f, df, colors},
Compile[{{x0, _Complex, 0}, {a, _Complex, 1}},
Block[{x = x0, xn = 0.0 + 0.0 I, i = 0, maxiter = 256,
...
RuntimeOptions -> "Speed", Parallelization -> True]]##
(Quiet#Block[{degree = 3, polynomial, a, x},
polynomial = HornerForm[Product[x - a[[i]], {i, degree}]];
...
Modified code:
newtonC = Function[{degree, f, df, colors},
Compile[{{x0, _Complex, 0}, {a, _Complex, 1}},
Block[{x = x0, xn = 0.0 + 0.0 I, i = 0, maxiter = 256,
...
RuntimeOptions -> "Speed", Parallelization -> True],HoldAllComplete]##
( (( (HoldComplete###)/.a[i_]:>a[[i]] )&)#Block[{degree = 3, polynomial, a, x},
polynomial = HornerForm[Product[x - a[i], {i, degree}]];
...
Add HoldAllComplete attribute to the function.
Write a[i] in place of a[[i]].
Replace Quiet with (( (HoldComplete###)/.a[i_]:>a[[i]] )&)
Produces the identical code, no Quiet, and all of the Hold stuff is in one place.
In Mathematica there are a number of functions that return not only the final result or a single match, but all results. Such functions are named *List. Exhibit:
FoldList
NestList
ReplaceList
ComposeList
Something that I am missing is a MapList function.
For example, I want:
MapList[f, {1, 2, 3, 4}]
{{f[1], 2, 3, 4}, {1, f[2], 3, 4}, {1, 2, f[3], 4}, {1, 2, 3, f[4]}}
I want a list element for each application of the function:
MapList[
f,
{h[1, 2], {4, Sin[x]}},
{2}
] // Column
{h[f[1], 2], {4, Sin[x]}}
{h[1, f[2]], {4, Sin[x]}}
{h[1, 2], {f[4], Sin[x]}}
{h[1, 2], {4, f[Sin[x]]}}
One may implement this as:
MapList[f_, expr_, level_: 1] :=
MapAt[f, expr, #] & /#
Position[expr, _, level, Heads -> False]
However, it is quite inefficient. Consider this simple case, and compare these timings:
a = Range#1000;
#^2 & /# a // timeAvg
MapList[#^2 &, a] // timeAvg
ConstantArray[#^2 & /# a, 1000] // timeAvg
0.00005088
0.01436
0.0003744
This illustrates that on average MapList is about 38X slower than the combined total of mapping the function to every element in the list and creating a 1000x1000 array.
Therefore, how may MapList be most efficiently implemented?
I suspect that MapList is nearing the performance limit for any transformation that performs structural modification. The existing target benchmarks are not really fair comparisons. The Map example is creating a simple vector of integers. The ConstantArray example is creating a simple vector of shared references to the same list. MapList shows poorly against these examples because it is creating a vector where each element is a freshly generated, unshared, data structure.
I have added two more benchmarks below. In both cases each element of the result is a packed array. The Array case generates new elements by performing Listable addition on a. The Module case generates new elements by replacing a single value in a copy of a. These results are as follows:
In[8]:= a = Range#1000;
#^2 & /# a // timeAvg
MapList[#^2 &, a] // timeAvg
ConstantArray[#^2 & /# a, 1000] // timeAvg
Array[a+# &, 1000] // timeAvg
Module[{c}, Table[c = a; c[[i]] = c[[i]]^2; c, {i, 1000}]] // timeAvg
Out[9]= 0.0005504
Out[10]= 0.0966
Out[11]= 0.003624
Out[12]= 0.0156
Out[13]= 0.02308
Note how the new benchmarks perform much more like MapList and less like the Map or ConstantArray examples. This seems to show that there is not much scope for dramatically improving the performance of MapList without some deep kernel magic. We can shave a bit of time from MapList thus:
MapListWR4[f_, expr_, level_: {1}] :=
Module[{positions, replacements}
, positions = Position[expr, _, level, Heads -> False]
; replacements = # -> f[Extract[expr, #]] & /# positions
; ReplacePart[expr, #] & /# replacements
]
Which yields these timings:
In[15]:= a = Range#1000;
#^2 & /# a // timeAvg
MapListWR4[#^2 &, a] // timeAvg
ConstantArray[#^2 & /# a, 1000] // timeAvg
Array[a+# &, 1000] // timeAvg
Module[{c}, Table[c = a; c[[i]] = c[[i]]^2; c, {i, 1000}]] // timeAvg
Out[16]= 0.0005488
Out[17]= 0.04056
Out[18]= 0.003
Out[19]= 0.015
Out[20]= 0.02372
This comes within factor 2 of the Module case and I expect that further micro-optimizations can close the gap yet more. But it is with eager anticipation that I join you awaiting an answer that shows a further tenfold improvement.
(Updated my function)
I think I can offer another 2x boost on top of WReach's attempt.
Remove[MapListTelefunken];
MapListTelefunken[f_, dims_] :=
With[{a = Range[dims], fun = f[[1]]},
With[{replace = ({#, #} -> fun) & /# a},
ReplacePart[ConstantArray[a, {dims}], replace]
]
]
Here are the timings on my machine (Sony Z laptop; i7, 8GB ram, 256 SSD in Raid 0):
a = Range#1000;
#^2 & /# a; // timeAvg
MapList[#^2 &, a]; // timeAvg
MapListWR4[#^2 &, a]; // timeAvg
MapListTelefunken[#^2 &, 1000]; // timeAvg
0.0000296 (* just Mapping the function over a Range[1000] for baseline *)
0.0297 (* the original MapList proposed by Mr.Wizard *)
0.00936 (* MapListWR4 from WReach *)
0.00468 (* my attempt *)
I think you'd still need to create the 1000x1000 array, and I don't think there's any cleverer way than a constant array. More to the point, your examples are better served with the following definition, although I admit that it lacks the finesse of levels.
MapList[f_, list_] := (Table[MapAt[f, #, i], {i, Length##}] &)#list;
The culprit in your own definition is the Position[] call, which creates a complex auxiliary structure.
Provide a more complex use case, that will better cover your intentions.
When plotting a function using Plot, I would like to obtain the set of data points plotted by the Plot command.
For instance, how can I obtain the list of points {t,f} Plot uses in the following simple example?
f = Sin[t]
Plot[f, {t, 0, 10}]
I tried using a method of appending values to a list, shown on page 4 of Numerical1.ps (Numerical Computation in Mathematica) by Jerry B. Keiper, http://library.wolfram.com/infocenter/Conferences/4687/ as follows:
f = Sin[t]
flist={}
Plot[f, {t, 0, 10}, AppendTo[flist,{t,f[t]}]]
but generate error messages no matter what I try.
Any suggestions would be greatly appreciated.
f = Sin[t];
plot = Plot[f, {t, 0, 10}]
One way to extract points is as follows:
points = Cases[
Cases[InputForm[plot], Line[___],
Infinity], {_?NumericQ, _?NumericQ}, Infinity];
ListPlot to 'take a look'
ListPlot[points]
giving the following:
EDIT
Brett Champion has pointed out that InputForm is superfluous.
ListPlot#Cases[
Cases[plot, Line[___], Infinity], {_?NumericQ, _?NumericQ},
Infinity]
will work.
It is also possible to paste in the plot graphic, and this is sometimes useful. If,say, I create a ListPlot of external data and then mislay the data file (so that I only have access to the generated graphic), I may regenerate the data by selecting the graphic cell bracket,copy and paste:
ListPlot#Transpose[{Range[10], 4 Range[10]}]
points = Cases[
Cases[** Paste_Grphic _Here **, Point[___],
Infinity], {_?NumericQ, _?NumericQ}, Infinity]
Edit 2.
I should also have cross-referenced and acknowledged this very nice answer by Yaroslav Bulatov.
Edit 3
Brett Champion has not only pointed out that FullForm is superfluous, but that in cases where a GraphicsComplex is generated, applying Normal will convert the complex into primitives. This can be very useful.
For example:
lp = ListPlot[Transpose[{Range[10], Range[10]}],
Filling -> Bottom]; Cases[
Cases[Normal#lp, Point[___],
Infinity], {_?NumericQ, _?NumericQ}, Infinity]
gives (correctly)
{{1., 1.}, {2., 2.}, {3., 3.}, {4., 4.}, {5., 5.}, {6., 6.}, {7.,
7.}, {8., 8.}, {9., 9.}, {10., 10.}}
Thanks to Brett Champion.
Finally, a neater way of using the general approach given in this answer, which I found here
The OP problem, in terms of a ListPlot, may be obtained as follows:
ListPlot#Cases[g, x_Line :> First#x, Infinity]
Edit 4
Even simpler
ListPlot#Cases[plot, Line[{x__}] -> x, Infinity]
or
ListPlot#Cases[** Paste_Grphic _Here **, Line[{x__}] -> x, Infinity]
or
ListPlot#plot[[1, 1, 3, 2, 1]]
This evaluates to True
plot[[1, 1, 3, 2, 1]] == Cases[plot, Line[{x__}] -> x, Infinity]
One way is to use EvaluationMonitor option with Reap and Sow, for example
In[4]:=
(points = Reap[Plot[Sin[x],{x,0,4Pi},EvaluationMonitor:>Sow[{x,Sin[x]}]]][[2,1]])//Short
Out[4]//Short= {{2.56457*10^-7,2.56457*10^-7},<<699>>,{12.5621,-<<21>>}}
In addition to the methods mentioned in Leonid's answer and my follow-up comment, to track plotting progress of slow functions in real time to see what's happening you could do the following (using the example of this recent question):
(* CPU intensive function *)
LogNormalStableCDF[{alpha_, beta_, gamma_, sigma_, delta_}, x_] :=
Block[{u},
NExpectation[
CDF[StableDistribution[alpha, beta, gamma, sigma], (x - delta)/u],
u \[Distributed] LogNormalDistribution[Log[gamma], sigma]]]
(* real time tracking of plot process *)
res = {};
ListLinePlot[res // Sort, Mesh -> All] // Dynamic
Plot[(AppendTo[res, {x, #}]; #) &#
LogNormalStableCDF[{1.5, 1, 1, 0.5, 1}, x], {x, -4, 6},
PlotRange -> All, PlotPoints -> 10, MaxRecursion -> 4]
etc.
Here is a very efficient way to get all the data points:
{plot, {points}} = Reap # Plot[Last#Sow#{x, Sin[x]}, {x, 0, 4 Pi}]
Based on the answer of Sjoerd C. de Vries, I've now written the following code which automates a plot preview (tested on Mathematica 8):
pairs[x_, y_List]:={x, #}& /# y
pairs[x_, y_]:={x, y}
condtranspose[x:{{_List ..}..}]:=Transpose # x
condtranspose[x_]:=x
Protect[SaveData]
MonitorPlot[f_, range_, options: OptionsPattern[]]:=
Module[{data={}, plot},
Module[{tmp=#},
If[FilterRules[{options},SaveData]!={},
ReleaseHold[Hold[SaveData=condtranspose[data]]/.FilterRules[{options},SaveData]];tmp]]&#
Monitor[Plot[(data=Union[data, {pairs[range[[1]], #]}]; #)& # f, range,
Evaluate[FilterRules[{options}, Options[Plot]]]],
plot=ListLinePlot[condtranspose[data], Mesh->All,
FilterRules[{options}, Options[ListLinePlot]]];
Show[plot, Module[{yrange=Options[plot, PlotRange][[1,2,2]]},
Graphics[Line[{{range[[1]], yrange[[1]]}, {range[[1]], yrange[[2]]}}]]]]]]
SetAttributes[MonitorPlot, HoldAll]
In addition to showing the progress of the plot, it also marks the x position where it currently calculates.
The main problem is that for multiple plots, Mathematica applies the same plot style for all curves in the final plot (interestingly, it doesn't on the temporary plots).
To get the data produced into the variable dest, use the option SaveData:>dest
Just another way, possibly implementation dependent:
ListPlot#Flatten[
Plot[Tan#t, {t, 0, 10}] /. Graphics[{{___, {_, y__}}}, ___] -> {y} /. Line -> List
, 2]
Just look into structure of plot (for different type of plots there would be a little bit different structure) and use something like that:
plt = Plot[Sin[x], {x, 0, 1}];
lstpoint = plt[[1, 1, 3, 2, 1]];
Many times I find myself counting occurrences with Tally[ ] and then, once I discarded the original list, having to add (and join) to that counters list the results from another list.
This typically happens when I am counting configurations, occurrences, doing some discrete statistics, etc.
So I defined a very simple but handy function for Tally aggregation:
aggTally[listUnTallied__List:{},
listUnTallied1_List,
listTallied_List] :=
Join[Tally#Join[listUnTallied, listUnTallied1], listTallied] //.
{a___, {x_, p_}, b___, {x_, q_}, c___} -> {a, {x, p + q}, b, c};
Such that
l = {x, y, z}; lt = Tally#l;
n = {x};
m = {x, y, t};
aggTally[n, {}]
{{x, 1}}
aggTally[m, n, {}]
{{x, 2}, {y, 1}, {t, 1}}
aggTally[m, n, lt]
{{x, 3}, {y, 2}, {t, 1}, {z, 1}}
This function has two problems:
1) Performance
Timing[Fold[aggTally[Range##2, #1] &, {}, Range[100]];]
{23.656, Null}
(* functional equivalent to *)
Timing[s = {}; j = 1; While[j < 100, s = aggTally[Range#j, s]; j++]]
{23.047, Null}
2) It does not validate that the last argument is a real Tallied list or null (less important for me, though)
Is there a simple, elegant, faster and more effective solution? (I understand that these are too many requirements, but wishing is free)
Perhaps, this will suit your needs?
aggTallyAlt[listUnTallied__List : {}, listUnTallied1_List, listTallied : {{_, _Integer} ...}] :=
{#[[1, 1]], Total##[[All, 2]]} & /#
GatherBy[Join[Tally#Join[listUnTallied, listUnTallied1], listTallied], First]
The timings are much better, and there is a pattern-based check on the last arg.
EDIT:
Here is a faster version:
aggTallyAlt1[listUnTallied__List : {}, listUnTallied1_List, listTallied : {{_, _Integer} ...}] :=
Transpose[{#[[All, 1, 1]], Total[#[[All, All, 2]], {2}]}] &#
GatherBy[Join[Tally#Join[listUnTallied, listUnTallied1], listTallied], First]
The timings for it:
In[39]:= Timing[Fold[aggTallyAlt1[Range##2, #1] &, {}, Range[100]];]
Timing[s = {}; j = 1; While[j < 100, s = aggTallyAlt1[Range#j, s]; j++]]
Out[39]= {0.015, Null}
Out[40]= {0.016, Null}
The following solution is just a small modification of your original function. It applies Sort before using ReplaceRepeated and can thus use a less general replacement pattern which makes it much faster:
aggTally[listUnTallied__List : {}, listUnTallied1_List,
listTallied : {{_, _Integer} ...}] :=
Sort[Join[Tally#Join[listUnTallied, listUnTallied1],
listTallied]] //. {a___, {x_, p_}, {x_, q_}, c___} -> {a, {x, p + q}, c};
Here's the fastest thing I've come up with yet, (ab)using the tagging available with Sow and Reap:
aggTally5[untallied___List, tallied_List: {}] :=
Last[Reap[
Scan[((Sow[#2, #] &) ### Tally[#]) &, {untallied}];
Sow[#2, #] & ### tallied;
, _, {#, Total[#2]} &]]
Not going to win any beauty contests, but it's all about speed, right? =)
If you stay purely symbolic, you may try something along the lines of
(Plus ## Times ### Join[#1, #2] /. Plus -> List /. Times -> List) &
for joining tally lists. This is stupid fast but returns something that isn't a tally list, so it needs some work (after which it may not be so fast anymore ;) ).
EDIT: So I've got a working version:
aggT = Replace[(Plus ## Times ### Join[#1, #2]
/. Plus -> List
/. Times[a_, b_] :> List[b, a]),
k_Symbol -> List[k, 1], {1}] &;
Using a couple of random symbolic tables I get
a := Tally#b;
b := Table[f[RandomInteger#99 + 1], {i, 100}];
Timing[Fold[aggT[#1, #2] &, a, Table[a, {i, 100}]];]
--> {0.104954, Null}
This version only adds tally lists, doesn't check anything, still returns some integers, and comparing to Leonid's function:
Timing[Fold[aggTallyAlt1[#2, #1] &, a, Table[b, {i, 100}]];]
--> {0.087039, Null}
it's already a couple of seconds slower :-(.
Oh well, nice try.