Why are the following computed limits different (1 by Sage, 0 by Wolfram), and which (if either) is correct?
EDIT: Revised per @Bill's suggestion to increase the numerical precision in Wolfram. (I don't know how to do the same in Sage.) The Wolfram plot strongly suggests that the limit is indeed $0$, and that the issue is entirely about numerical precision.
Sage: (you can cut/paste/execute this code here)
#in()=
f(x) = exp(-x^2/2)/sqrt(2*pi)
F(x) = (1 + erf(x/sqrt(2)))/2
num1(a,w) = (a+w)*f(a+w) - a*f(a)
num2(a,w) = f(a+w) - f(a)
den(a,w) = F(a+w) - F(a)
V(a,w) = 1 - num1(a,w)/den(a,w) - (num2(a,w)/den(a,w))^2
assume(w>0); print(limit(V(a,w), a=oo))
plot(V(a,1),a,0,8)
#out()=
1 #<--------- computed limit = 1
Wolfram: (you can execute this code here)
#in()=
f[x_]:=Exp[-x^2/2]/Sqrt[2*Pi]
F[x_]:=(1 + Erf[x/Sqrt[2]])/2
num1[a_,w_] := (a+w)*f[a+w] - a*f[a]
num2[a_,w_] := f[a+w] - f[a]
den[a_,w_] := F[a+w] - F[a]
V[a_,w_] := 1 - num1[a,w]/den[a,w] - (num2[a,w]/den[a,w])^2
Assuming[w>0, Limit[V[a,w], a -> Infinity]]
Plot[V[a, 10], {a, 0, 100}, WorkingPrecision -> 128]
#out()=
0 (* <--------- computed limit = 0 *)
(This is supposed to compute the limit, as a -> oo, of the variance of a standard normal distribution when truncated to the interval (a,a+w).)
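For reference, V(a, w) is the textbook expression for the variance of a standard normal truncated to $(a, a+w)$: writing $\varphi$ for the density f and $\Phi$ for the CDF F,
$$V(a,w) = 1 - \frac{(a+w)\,\varphi(a+w) - a\,\varphi(a)}{\Phi(a+w) - \Phi(a)} - \left(\frac{\varphi(a+w) - \varphi(a)}{\Phi(a+w) - \Phi(a)}\right)^{2},$$
which is exactly what both scripts compute.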
I am a very novice Mathematica user and still can't get my head around its evaluation control, all possible constructs related to it (e.g. Hold, Unevaluated, etc.) and how they work, despite the thorough documentation and the numerous StackExchange and StackOverflow questions discussing this topic. So, apologies for any possible duplicates.
My use case is the following: I have a function (say f) defined by thousands of rules and patterns (DownValues). I want to start from an unrolled representation of f[expr] (for some expr) and get the result of applying a single, particular rule to f[expr]. I want the result to stay unrolled as well.
As a particular example, suppose we have the following:
In[1]: nat[0] := 0
In[2]: nat[n_] := 1 + nat[n - 1]
In[3]: DownValues[nat]
Out[3]: {HoldPattern[nat[0]] :> 0, HoldPattern[nat[n_]] :> 1 + nat[n - 1]}
In[4]: nat[10]
Out[4]: 10
Now, I want to start from an expression represented as nat[10] (unevaluated!) and want to specifically apply the second rule (HoldPattern[nat[n_]] :> 1 + nat[n - 1]) to obtain the expression in the form of 1 + nat[9]. Analogously, should I wish to apply the first rule (HoldPattern[nat[0]] :> 0), I would expect the result to stay unchanged in its original form, i.e. nat[10].
Thank you for your help!
This should help with your understanding of Mathematica's method of operation.
Wolfram reference: The Ordering of Definitions
I.e. when Mathematica "looks for the value of an expression of the form f[n], it tries the special case f[1] first, and only if this does not apply, it tries the general case f[n_]."
So with the functions below, nat[0] is always tried first, but of course it only applies if the argument is 0. Then nat[n_] is tried.
nat[0] := 0
nat[n_] := 1 + nat[n - 1]
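You can confirm the stored ordering directly; the special case sits ahead of the general one, matching Out[3] in the question:
DownValues[nat]
(* {HoldPattern[nat[0]] :> 0, HoldPattern[nat[n_]] :> 1 + nat[n - 1]} *)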
For your question as to obtaining 1 + nat[9], here is one way:
Clear[nat]
nat[0] := 0
nat[n_] := HoldForm[1 + nat[o]] /. o -> n - 1
ans = nat[10]
1 + nat[9]
Do[ans = ReleaseHold[ans], 10]
ans
10
Alternatively (and better)
Clear[nat]
nat[0] := 0
nat[n_] := With[{m = n - 1}, HoldForm[1 + nat[m]]]
ans = nat[10]
1 + nat[9]
Do[ans = ReleaseHold[ans], 9]
ans
9 + (1 + nat[0])
Note this is the result after 10 applications of the rule (the initial nat[10] plus the nine ReleaseHolds). The final ReleaseHold results in nat[0] evaluating to 0.
ReleaseHold[ans]
10
You might find it easier to see what is happening if you use Hold instead of HoldForm in the above demonstration.
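For instance, here is a minimal sketch of the second variant with Hold swapped in; the wrapper then stays visible in the output:
Clear[nat]
nat[0] := 0
nat[n_] := With[{m = n - 1}, Hold[1 + nat[m]]]
nat[10]
(* Hold[1 + nat[9]] *)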
As posted in a parallel discussion on Mathematica StackExchange, I found a more direct and straightforward way of dealing with the problem:
In[6] rules = DownValues[nat]
Out[6] {HoldPattern[nat[0]] :> 0, HoldPattern[nat[n_]] :> 1 + nat[n - 1]}
In[7] DownValues[nat] = {}
Out[7] {}
In[8] nat[10]
Out[8] nat[10]
In[9] nat[10] /. rules[[1]]
Out[9] nat[10]
In[10] nat[10] /. rules[[2]]
Out[10] 1 + nat[9]
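One follow-up worth noting: since the rules were saved before clearing, the original behaviour can be restored afterwards by assigning them back:
In[11] DownValues[nat] = rules;
In[12] nat[10]
Out[12] 10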
The following is the question given by my teacher:
Generate a sequence of N = 1000 independent observations of a random variable with distribution: (c) Exponential with parameter λ = 1, by the inversion method.
Present graphically the obtained sequences (except for those generated in point e), e.g. for (a): i. plot in the coordinates (no. of obs., value of the obs.); ii. plot in the coordinates (obs. no. n, obs. no. n + i) for i = 1, 2, 3; iii. plot the so-called covariance function for some values, and the averages.
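For reference, the inversion method in part (c) is the inverse-CDF transform: if $U \sim \mathrm{Uniform}(0,1)$, then $X = F^{-1}(U)$ has CDF $F$. For the exponential distribution,
$$F(x) = 1 - e^{-\lambda x}, \qquad F^{-1}(u) = -\frac{\ln(1-u)}{\lambda},$$
and since $1 - U$ is again uniform on $(0,1)$, one may take $X = -\ln(U)/\lambda$; with $\lambda = 1$ this is the -Log[RandomReal[]] used in the code below.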
I have written the following code,
(*****************************************************************)
(*Task 01(c) and 02(a)*)
(*****************************************************************)
n = 1000;
taskC = Table[-Log[RandomReal[]], {n}];
ListPlot[taskC, AxesLabel->{"No. obs", "value of the obs"}]
i = 1;
ListPlot[Table[
{taskC[[k]], taskC[[k+i]]},
{k, 1, n-i,1}],
AxesLabel->{"obs.no.n", "obs.no.n+1"}]
i++;
ListPlot[Table[
{taskC[[k]], taskC[[k+i]]},
{k, 1, n-i,1}],
AxesLabel-> {"obs.no.n", "obs.no.n+2"}]
i++;
ListPlot[Table[
{taskC[[k]], taskC[[k+i]]},
{k,1,n-i,1}],
AxesLabel->{"obs.no.n", "obs.no.n+3"}]
avg = (1/n)*Sum[taskC[[i]], {i,n}];
ListPlot[Table[1/(n-tau) * Sum[(taskC[[i]]-avg)*(taskC[[i+tau]] - avg), {i, n-tau}], {tau, 1,100}],
Joined->True,
AxesLabel->"Covariance Function"]
He has commented,
The plots of co-variance functions should start from 0-shift. Note that for larger than 0 shifts you are estimating co-variance between independent observations which is zero, while for 0 shift you are estimating variance of observation which is large. Thus the contrast between these two cases is a clear indication that the observations are uncorrelated.
What did I do wrong?
How can I correct my code?
Zero-shift means calculating the covariance for tau = 0, which is simply the variance.
Labeled[ListPlot[Table[{tau,
1/(n - tau)*Sum[(taskC[[i]] - avg)*(taskC[[i + tau]] - avg), {i, n - tau}]},
{tau, 0, 5}], Filling -> Axis, FillingStyle -> Thick, PlotRange -> All,
Frame -> True, PlotRangePadding -> 0.2, AspectRatio -> 1],
{"Covariance Function K(n)", "n"}, {{Top, Left}, Bottom}]
Variance[taskC]
0.93484
Covariance[taskC, taskC]
0.93484
(* n = 1 *)
Covariance[Most[taskC], Rest[taskC]]
0.00926913
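These estimates line up with theory: an Exp(1) sample has variance 1, so the ≈ 0.93 at zero shift against ≈ 0.009 at shift 1 is precisely the contrast between variance and (zero) covariance that the teacher's comment describes.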
I am a new user of Mathematica and I can't figure out how to solve this problem.
I have a computation S that gives me 10 results for 10 random variates:
Xi = RandomVariate[NormalDistribution[], 10]
Mu = -0.00644131
Sigma= 0.0562005
t = 0.1
s = 100
fnmc[s_,Mu_,Sigma_, t_,Xi_] := s Exp[(Mu - Sigma^2/2) t + Sigma Sqrt[t ] Xi]
S = fnmc[s, Mu, Sigma, t, Xi]
Now I need to compute formula S 10 times - so I'll have 100 numbers as the result.
I can't find a way to do it with a TABLE. Further, I will have to sum those 10 results and calculate the Mean etc. I wanted to use TABLE because of the further computation - SUM, MEAN - I thought it is the easiest "form" of results to work with... is it?
I had in mind something like:
Table[S(i),{i,10}]
but of course it multiplies S by (i). Any suggestions?
S(i) multiplies S by i. S[i] calls the function S with parameter i.
The four kinds of bracketing in Mathematica
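In short, the four kinds are (schematic examples):
f[x]      (* square brackets: function application *)
(x + y)   (* parentheses: grouping only *)
{a, b, c} (* curly braces: lists *)
v[[i]]    (* double square brackets: Part, i.e. element extraction *)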
I just realized that S isn't a function at all, so you don't want to call it with parameter i. You can get the result of S 10 times simply by Table[S,{10}], but since Xi is only calculated once, this will just give you 10 times the same vector. Maybe you want to do the whole calculation 10 times? That would be:
Table[
(
Xi = RandomVariate[NormalDistribution[], 10];
Mu = -0.00644131;
Sigma = 0.0562005;
t = 0.1; s = 100;
s*Exp[(Mu - Sigma^2/2)*t + Sigma*Sqrt[t]*Xi]
), {10}]
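If you assign this to a variable, say res (a name used here just for illustration), the result is a 10×10 array (ten runs of ten values each), so the further computations are direct: Mean /@ res gives the ten per-run means and Mean[Flatten[res]] the overall mean.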
You could use a functional programming approach and map (/@) your function over the Xis you've created.
Mu = -0.00644131;
Sigma= 0.0562005;
t = 0.1;
s = 100;
(* if you wanted ten scalar random numbers, with each one used in one application of your equation *)
Xi = RandomVariate[NormalDistribution[], 10];
ans = s Exp[(Mu - Sigma^2/2) t + Sigma Sqrt[t] #] & /@ Xi;
(* if you wanted ten 10-dimensional random vectors, with each one used in one application of your equation *)
Xi = RandomVariate[NormalDistribution[], {10, 10}];
ans = s Exp[(Mu - Sigma^2/2) t + Sigma Sqrt[t] #] & /@ Xi;
Memoized functions are functions which remember values they have found.
Look in the doc center for some background on this in Mathematica, if necessary.
Suppose you have the following definition
f[0] = f[1] = 1
f[x_] := f[x] = f[x - 1] + f[x - 2]
in one of your packages. A user may load the package and right away ask for f[1000].
This will trigger a $RecursionLimit::reclim error message and abort.
Even if the user then tries something smaller, say f[20], by now the definition of f is corrupt and the result is not good anymore. Of course the package developer might increase the recursion limit and warn the user, but my question is:
How can you improve the f definition so that if the user asks for f[1000] he/she gets the answer without any problem? I am interested in a way to trap the user input, analyze it and take whatever steps are necessary to evaluate f[1000].
I can easily imagine that one can change the recursion limit if the input is more than 255 (and then bring it back to the original level), but what I would really like to see is, if there is a way for the f to find out how many values it "knows" (fknownvalues) and accept any input <=fknownvalues+$RecursionLimit without problems or increase the $RecursionLimit if the input is higher.
Thank you for your help
Here is the code assuming that you can determine a value of $RecursionLimit from the value of the input argument:
Clear[f];
Module[{ff},
ff[0] = ff[1] = 1;
ff[x_] := ff[x] = ff[x - 1] + ff[x - 2];
f[x_Integer] := f[x] =
Block[{$RecursionLimit = x + 5},
ff[x]
]]
I am using a local function ff to do the main work, while f just calls it wrapped in Block with a proper value for $RecursionLimit:
In[1552]:= f[1000]
Out[1552]= 7033036771142281582183525487718354977018126983635873274260490508715453711819693357974224
9494562611733487750449241765991088186363265450223647106012053374121273867339111198139373125
598767690091902245245323403501
EDIT
If you want to be more precise with the setting of $RecursionLimit, you can modify the part of the code above as follows (as before, this definition replaces the one for f inside the Module, which the final closing bracket belongs to):
f[x_Integer] :=
f[x] =
Block[{$RecursionLimit = x - Length[DownValues[ff]] + 10},
Print["Current $RecursionLimit: ", $RecursionLimit];
ff[x]]]
The Print statement is here for illustration. The value 10 is rather arbitrary - to get a lower bound on it, one has to compute the necessary depth of recursion, and take into account that the largest argument with an already-known result is Length[DownValues[ff]] - 2 (DownValues[ff] holds the stored results ff[0], ff[1], ff[2], ... together with the single general definition for ff[x_]). Here is some usage:
In[1567]:= f[500]//Short
During evaluation of In[1567]:= Current $RecursionLimit: 507
Out[1567]//Short= 22559151616193633087251269<<53>>83405015987052796968498626
In[1568]:= f[800]//Short
During evaluation of In[1568]:= Current $RecursionLimit: 308
Out[1568]//Short= 11210238130165701975392213<<116>>44406006693244742562963426
If you also want to limit the maximal $RecursionLimit possible, this is also easy to do, along the same lines. Here, for example, we will limit it to 10000 (again, this goes inside Module):
f::tooLarge =
"The parameter value `1` is too large for single recursive step. \
Try building the result incrementally";
f[x_Integer] :=
With[{reclim = x - Length[DownValues[ff]] + 10},
(f[x] =
Block[{$RecursionLimit = reclim },
Print["Current $RecursionLimit: ", $RecursionLimit];
ff[x]]) /; reclim < 10000];
f[x_Integer] := "" /; Message[f::tooLarge, x]]
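(A note on the last definition: Message returns Null, so the /; condition never yields True and the rule never rewrites f[x]; the message is issued purely as a side effect, and the input is returned unevaluated, as the example below shows.)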
For example:
In[1581]:= f[11000]//Short
During evaluation of In[1581]:= f::tooLarge: The parameter value 11000 is too
large for single recursive step. Try building the result incrementally
Out[1581]//Short= f[11000]
In[1582]:=
f[9000];
f[11000]//Short
During evaluation of In[1582]:= Current $RecursionLimit: 9007
During evaluation of In[1582]:= Current $RecursionLimit: 2008
Out[1583]//Short= 5291092912053548874786829<<2248>>91481844337702018068766626
A slight modification of Leonid's code. I guess I should post it as a comment, but the lack of comment formatting makes it impossible.
Self-adaptive recursion limit:
Clear[f];
$RecursionLimit = 20;
Module[{ff},
ff[0] = ff[1] = 1;
ff[x_] :=
ff[x] = Block[{$RecursionLimit = $RecursionLimit + 2}, ff[x - 1] + ff[x - 2]];
f[x_Integer] := f[x] = ff[x]]
f[30]
(*
-> 1346269
*)
$RecursionLimit
(*
-> 20
*)
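(Why this works: Block gives $RecursionLimit a fresh dynamic value for the duration of each recursive call, two higher than the caller's, so the recursion can never exhaust the limit; on exit the global setting of 20 is restored, as the output above shows.)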
Edit
Trying to set $RecursionLimit sparingly:
Clear[f];
$RecursionLimit = 20;
Module[{ff}, ff[0] = ff[1] = 1;
ff[x_] := ff[x] =
Block[{$RecursionLimit =
If[Length@Stack[] > $RecursionLimit - 5, $RecursionLimit + 5, $RecursionLimit]},
ff[x - 1] + ff[x - 2]];
f[x_Integer] := f[x] = ff[x]]
Not sure how useful it is ...
I've read a bunch of tutorials about the proper way to generate a logarithmic distribution of tag-cloud weights. Most of them group the tags into steps. This seems somewhat silly to me, so I developed my own algorithm, based on what I've read, that dynamically distributes the tags' counts along the logarithmic curve between the threshold and the maximum. Here's the essence of it in Python:
from math import log
count = [1, 3, 5, 4, 7, 5, 10, 6]
def logdist(count, threshold=0, maxsize=1.75, minsize=.75):
countdist = []
# mincount is either the threshold or the minimum if it's over the threshold
mincount = threshold<min(count) and min(count) or threshold
maxcount = max(count)
spread = maxcount - mincount
# the slope of the line (rise over run) between (mincount, minsize) and ( maxcount, maxsize)
delta = (maxsize - minsize) / float(spread)
for c in count:
logcount = log(c - (mincount - 1)) * (spread + 1) / log(spread + 1)
size = delta * logcount - (delta - minsize)
countdist.append({'count': c, 'size': round(size, 3)})
return countdist
Basically, without the logarithmic calculation of the individual count, it would generate a straight line between the points, (mincount, minsize) and (maxcount, maxsize).
The algorithm does a good approximation of the curve between the two points, but suffers from one drawback. The mincount is a special case, and the logarithm of it produces zero. This means the size of the mincount would be less than minsize. I've tried cooking up numbers to try to solve this special case, but can't seem to get it right. Currently I just treat the mincount as a special case and add "or 1" to the logcount line.
Is there a more correct algorithm to draw a curve between the two points?
Update Mar 3: If I'm not mistaken, I am taking the log of the count and then plugging it into a linear equation. To put the description of the special case in other words: in y = ln(x), y = 0 at x = 1. This is what happens at the mincount. But the mincount can't be zero; the tag has not been used 0 times.
Try the code and plug in your own numbers to test. Treating the mincount as a special case is fine by me; I have a feeling it would be easier than whatever the actual solution to this problem is. I just feel like there must be a solution and that someone has probably already come up with it.
UPDATE Apr 6: A simple Google search turns up many of the tutorials I've read, but this is probably the most complete example of stepped tag clouds.
UPDATE Apr 28: In response to antti.huima's solution: When graphed, the curve that your algorithm creates lies below the line between the two points. I've been trying to juggle the numbers around but still can't seem to come up with a way to flip that curve to the other side of the line. I'm guessing that if the function was changed to some form of logarithm instead of an exponent it would do exactly what I'd need. Is that correct? If so, can anyone explain how to achieve this?
Thanks to antti.huima's help, I rethought what I was trying to do.
Taking his method of solving the problem, I want an equation where the logarithm of the mincount is equal to the linear equation between the two points.
weight(MIN) = ln(MIN-(MIN-1)) + min_weight
min_weight = ln(1) + min_weight
While this gives me a good starting point, I need to make it pass through the point (MAX, max_weight). It's going to need a constant:
weight(x) = ln(x-(MIN-1))/K + min_weight
Solving for K we get:
K = ln(MAX-(MIN-1))/(max_weight - min_weight)
So, to put this all back into some python code:
from math import log
count = [1, 3, 5, 4, 7, 5, 10, 6]
def logdist(count, threshold=0, maxsize=1.75, minsize=.75):
countdist = []
# mincount is either the threshold or the minimum if it's over the threshold
mincount = threshold<min(count) and min(count) or threshold
maxcount = max(count)
constant = log(maxcount - (mincount - 1)) / (maxsize - minsize)
for c in count:
size = log(c - (mincount - 1)) / constant + minsize
countdist.append({'count': c, 'size': round(size, 3)})
return countdist
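With the sample count list, mincount is 1 and constant works out to ln(10) ≈ 2.303, so the mapping reduces to size = log10(c) + 0.75: count 1 maps to exactly 0.75, count 10 to exactly 1.75, and, say, count 5 to about 1.449, with both endpoints landing exactly on minsize and maxsize.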
Let's begin with your mapping from the logged count to the size. That's the linear mapping you mentioned:
size
|
max |_____
| /
| /|
| / |
min |/ |
| |
/| |
0 /_|___|____
0 a
where min and max are the min and max sizes, and a = log(maxcount) - b. The line is of the form y = mx + c, where x = log(count) - b.
From the graph, we can see that the gradient, m, is (maxsize - minsize)/a.
We need x = 0 at y = minsize, so log(mincount) - b = 0, i.e. b = log(mincount).
This leaves us with the following python:
mincount = min(count)
maxcount = max(count)
xoffset = log(mincount)
gradient = (maxsize-minsize)/(log(maxcount)-log(mincount))
for c in count:
x = log(c)-xoffset
size = gradient * x + minsize
If you want to make sure that the minimum count is always at least 1, replace the first line with:
mincount = min(count+[1])
which appends 1 to the count list before doing the min. The same goes for making sure the maxcount is always at least 1. Thus your final code, per the above, is:
from math import log
count = [1, 3, 5, 4, 7, 5, 10, 6]
def logdist(count, maxsize=1.75, minsize=.75):
countdist = []
mincount = min(count+[1])
maxcount = max(count+[1])
xoffset = log(mincount)
gradient = (maxsize-minsize)/(log(maxcount)-log(mincount))
for c in count:
x = log(c)-xoffset
size = gradient * x + minsize
countdist.append({'count': c, 'size': round(size, 3)})
return countdist
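For the sample data this gives the same mapping as the solution above: mincount is 1, so xoffset is 0 and gradient is 1/ln(10), and size = log10(c) + 0.75 again, e.g. 0.75 for count 1 and 1.75 for count 10.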
What you have is tags whose counts range from MIN to MAX; the threshold issue can be ignored here, because it amounts to setting every count below the threshold to the threshold value and taking the minimum and maximum only afterwards.
You want to map the tag counts to "weights" but in a "logarithmic fashion", which basically means (as I understand it) the following. First, the tags with count MAX get max_weight weight (in your example, 1.75):
weight(MAX) = max_weight
Secondly, the tags with the count MIN get min_weight weight (in your example, 0.75):
weight(MIN) = min_weight
Finally, it holds that when your count decreases by 1, the weight is multiplied by a constant K < 1, which indicates the steepness of the curve:
weight(x) = weight(x + 1) * K
Solving this, we get:
weight(x) = weight_max * (K ^ (MAX - x))
Note that with x = MAX, the exponent is zero and the multiplicand on the right becomes 1.
Now we have the extra requirement that weight(MIN) = min_weight, and we can solve:
weight_min = weight_max * (K ^ (MAX - MIN))
from which we get
K ^ (MAX - MIN) = weight_min / weight_max
and taking logarithm on both sides
(MAX - MIN) ln K = ln weight_min - ln weight_max
i.e.
ln K = (ln weight_min - ln weight_max) / (MAX - MIN)
The right hand side is negative as desired, because K < 1. Then
K = exp((ln weight_min - ln weight_max) / (MAX - MIN))
So now you have the formula to calculate K. After this you just apply for any count x between MIN and MAX:
weight(x) = max_weight * (K ^ (MAX - x))
And you are done.
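As a quick numeric check with the example weights min_weight = 0.75 and max_weight = 1.75 and counts running from MIN = 1 to MAX = 10: K = (0.75/1.75)^(1/9) ≈ 0.910, and indeed weight(MIN) = 1.75 · K^9 = 1.75 · (0.75/1.75) = 0.75, as required.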
On a log scale, you just plot the log of the numbers linearly (in other words, pretend you're plotting linearly, but take the log of the numbers to be plotted first).
The zero problem can't be solved analytically--you have to pick a minimum order of magnitude for your scale, and no matter what you can't ever reach zero. If you want to plot something at zero, your choices are to arbitrarily give it the minimum order of magnitude of the scale, or to omit it.
I don't have the exact answer, but I think you want to look up Linearizing Exponential Data. Start by calculating the equation of the line passing through the points, then take the log of both sides of that equation.