Is there a "normal" EqualQ function in Mathematica? - wolfram-mathematica

On the documentation page for Equal we read that
Approximate numbers with machine precision or higher are considered equal if they differ in at most their last seven binary digits (roughly their last two decimal digits).
Here are examples (32-bit system; for a 64-bit system add some more zeros in the middle):
In[1]:= 1.0000000000000021 == 1.0000000000000022
1.0000000000000021 === 1.0000000000000022
Out[1]= True
Out[2]= True
I'm wondering: is there a "normal" analog of the Equal function in Mathematica that does not drop the last 7 binary digits?

Thanks to a recent post on the official newsgroup by Oleksandr Rasputinov, I have now learned of two undocumented settings which control the tolerance of Equal and SameQ: $EqualTolerance and $SameQTolerance. In Mathematica version 5 and earlier these live in the Experimental` context and are well documented: $EqualTolerance, $SameQTolerance. Starting from version 6 they were moved to the Internal` context and became undocumented, but they still work and even have built-in diagnostic messages which appear when one tries to assign them illegal values:
In[1]:= Internal`$SameQTolerance = a
During evaluation of In[2]:= Internal`$SameQTolerance::tolset:
Cannot set Internal`$SameQTolerance to a; value must be a real
number or +/- Infinity.
Out[1]= a
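Analogously for SameQ, a sketch following the same pattern (by default SameQ ignores only the last binary digit; outputs marked as expected, not from a tested session):
Block[{Internal`$SameQTolerance = 0},
 1.0000000000000021 === 1.0000000000000022]
(* expected: False, since the numbers differ in their last binary digit *)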
Citing Oleksandr Rasputinov:
Internal`$EqualTolerance ... takes a machine real value indicating the number of decimal digits' tolerance that should be applied, i.e. Log[2]/Log[10] times the number of least significant bits one wishes to ignore.
In this way, setting Internal`$EqualTolerance to zero will force Equal to consider numbers equal only when they are identical in all binary digits (not considering out-of-Precision digits):
In[2]:= Block[{Internal`$EqualTolerance = 0},
1.0000000000000021 == 1.0000000000000022]
Out[2]= False
In[5]:= Block[{Internal`$EqualTolerance = 0},
1.00000000000000002 == 1.000000000000000029]
Block[{Internal`$EqualTolerance = 0},
1.000000000000000020 == 1.000000000000000029]
Out[5]= True
Out[6]= False
Note the following case:
In[3]:= Block[{Internal`$EqualTolerance = 0},
1.0000000000000020 == 1.0000000000000021]
RealDigits[1.0000000000000020, 2] === RealDigits[1.0000000000000021, 2]
Out[3]= True
Out[4]= True
In this case both numbers have MachinePrecision which effectively is
In[5]:= $MachinePrecision
Out[5]= 15.9546
(53*Log[10, 2]). With such precision these numbers are identical in all binary digits:
In[6]:= RealDigits[1.0000000000000020` $MachinePrecision, 2] ===
RealDigits[1.0000000000000021` $MachinePrecision, 2]
Out[6]= True
Increasing precision to 16 makes them different arbitrary-precision numbers:
In[7]:= RealDigits[1.0000000000000020`16, 2] ===
RealDigits[1.0000000000000021`16, 2]
Out[7]= False
In[8]:= Row@First@RealDigits[1.0000000000000020`16, 2]
Row@First@RealDigits[1.0000000000000021`16, 2]
Out[9]= 100000000000000000000000000000000000000000000000010010
Out[10]= 100000000000000000000000000000000000000000000000010011
But unfortunately Equal still fails to distinguish them:
In[11]:= Block[{Internal`$EqualTolerance = 0},
{1.00000000000000002`16 == 1.000000000000000021`16,
1.00000000000000002`17 == 1.000000000000000021`17,
1.00000000000000002`18 == 1.000000000000000021`18}]
Out[11]= {True, True, False}
There is an infinite number of such cases:
In[12]:= Block[{Internal`$EqualTolerance = 0},
Cases[Table[a = SetPrecision[1., n];
b = a + 10^-n; {n, a == b, RealDigits[a, 2] === RealDigits[b, 2],
Order[a, b] == 0}, {n, 15, 300}], {_, True, False, _}]] // Length
Out[12]= 192
Interestingly, sometimes RealDigits returns identical digits while Order shows that internal representations of expressions are not identical:
In[13]:= Block[{Internal`$EqualTolerance = 0},
Cases[Table[a = SetPrecision[1., n];
b = a + 10^-n; {n, a == b, RealDigits[a, 2] === RealDigits[b, 2],
Order[a, b] == 0}, {n, 15, 300}], {_, _, True, False}]] // Length
Out[13]= 64
But it seems that the opposite situation never happens:
In[14]:=
Block[{Internal`$EqualTolerance = 0},
Cases[Table[a = SetPrecision[1., n];
b = a + 10^-n; {n, a == b, RealDigits[a, 2] === RealDigits[b, 2],
Order[a, b] == 0}, {n, 15, 3000}], {_, _, False, True}]] // Length
Out[14]= 0

Try this:
realEqual[a_, b_] := SameQ @@ RealDigits[{a, b}, 2, Automatic]
The choice of base 2 is crucial to ensure that you are comparing the internal representations.
In[54]:= realEqual[1.0000000000000021, 1.0000000000000021]
Out[54]= True
In[55]:= realEqual[1.0000000000000021, 1.0000000000000022]
Out[55]= False
In[56]:= realEqual[
1.000000000000000000000000000000000000000000000000000000000000000022
, 1.000000000000000000000000000000000000000000000000000000000000000023
]
Out[56]= False

In[12]:= MyEqual[x_, y_] := Order[x, y] == 0
In[13]:= MyEqual[1.0000000000000021, 1.0000000000000022]
Out[13]= False
In[14]:= MyEqual[1.0000000000000021, 1.0000000000000021]
Out[14]= True
This tests if two objects are identical; since 1.0000000000000021 and 1.000000000000002100 differ in precision, they won't be considered identical.
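Following that claim, a quick sketch (outputs marked as expected rather than copied from a tested session):
MyEqual[1.0000000000000021, 1.000000000000002100]
(* expected: False, the second number carries Precision 19 rather than MachinePrecision *)
MyEqual[1.0000000000000021, 1.0000000000000021]
(* expected: True *)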

I'm not aware of an already defined operator. But you may define for example:
longEqual[x_, y_] := Block[{$MaxPrecision = 20, $MinPrecision = 20},
Equal[x - y, 0.]]
Such as:
longEqual[1.00000000000000223, 1.00000000000000223]
True
longEqual[1.00000000000000223, 1.00000000000000222]
False
Edit
If you want to generalize for an arbitrary number of digits, you can do for example:
longEqual[x_, y_] :=
Block[{
$MaxPrecision = Max @@ StringLength /@ ToString /@ {x, y},
$MinPrecision = Max @@ StringLength /@ ToString /@ {x, y}},
Equal[x - y, 0.]]
So that your counterexample in your comment also works.
HTH!

I propose a strategy that uses RealDigits to compare the actual digits of the numbers. The only tricky bit is stripping out trailing zeroes.
trunc = {Drop[First@#, Plus @@ First /@ {-Dimensions@First@#,
      Last@Position[First@#, n_?(# != 0 &)]}], Last@#} &@RealDigits@# &;
exactEqual = SameQ @@ trunc /@ {#1, #2} &;
In[1]:= exactEqual[1.000000000000000000000000000000000000000000000000000111,
 1.000000000000000000000000000000000000000000000000000111000]
Out[1]= True
In[2]:= exactEqual[1.000000000000000000000000000000000000000000000000000111,
 1.000000000000000000000000000000000000000000000000000112000]
Out[2]= False

I think that you really have to specify what you want... there's no way to compare approximate real numbers that will satisfy everyone in every situation.
Anyway, here's a couple more options:
In[1]:= realEqual[lhs_,rhs_,tol_:$MachineEpsilon] := 0==Chop[lhs-rhs,tol]
In[2]:= Equal[1.0000000000000021,1.0000000000000021]
realEqual[1.0000000000000021,1.0000000000000021]
Out[2]= True
Out[3]= True
In[4]:= Equal[1.0000000000000022,1.0000000000000021]
realEqual[1.0000000000000022,1.0000000000000021]
Out[4]= True
Out[5]= False
As the precision of both numbers gets higher, they can always be distinguished if you set tol small enough.
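For instance, a sketch with 25-digit inputs (values chosen for illustration, outputs marked as expected):
realEqual[1.0000000000000000002`25, 1.0000000000000000001`25]
(* expected: True, the difference of 10^-19 is chopped by the default $MachineEpsilon tolerance *)
realEqual[1.0000000000000000002`25, 1.0000000000000000001`25, 10^-20]
(* expected: False, the tolerance is now smaller than the difference *)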
Note that the subtraction is done at the precision of the lower of the two numbers. You could make it happen at the precision of the higher number (which seems a bit pointless) by doing something like
maxEqual[lhs_, rhs_] := With[{prec = Max[Precision /@ {lhs, rhs}]},
0 === Chop[SetPrecision[lhs, prec] - SetPrecision[rhs, prec], 10^-prec]]
Maybe using the minimum precision makes more sense:
minEqual[lhs_, rhs_] := With[{prec = Min[Precision /@ {lhs, rhs}]},
0 === Chop[SetPrecision[lhs, prec] - SetPrecision[rhs, prec], 10^-prec]]

One other way to define such a function is by using SetPrecision:
MyEqual[a_, b_] := SetPrecision[a, Precision[a] + 3] == SetPrecision[b, Precision[b] + 3]
This seems to work in all cases, but I'm still wondering whether there is a built-in function. It is ugly to use high-level functions for such a primitive task...
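For example, a quick sanity check (padding by three decimal digits appends roughly ten binary digits, more than the seven that Equal ignores, so a one-ulp difference becomes visible; outputs marked as expected):
MyEqual[1.0000000000000021, 1.0000000000000022]
(* expected: False *)
MyEqual[1.0000000000000021, 1.0000000000000021]
(* expected: True *)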

Related

Constructing a tridiagonal matrix in Mathematica where nonzero elements contain functions and variables

Suppose I want to construct a matrix A such that A[[i,i]] = f[x,y] + d[i], A[[i,i+1]] = u[i], A[[i+1,i]] = l[i], for i = 1, ..., N. Say f[x_, y_] = x^2 + y^2.
How can I code this in Mathematica?
Additionally, if I want to integrate the first diagonal element of A, i.e. A[[1,1]] over x and y, both running from 0 to 1, how can I do that?
In[1]:= n = 4;
f[x_, y_] := x^2 + y^2;
A = Normal[SparseArray[{
{i_,i_}/;i>1 -> f[x,y]+ d[i],
{i_,j_}/;j-i==1 -> u[i],
{i_,j_}/;i-j==1 -> l[i-1],
{1, 1} -> Integrate[f[x,y]+d[1], {x,0,1}, {y,0,1}]},
{n, n}]]
Out[3]= {{2/3+d[1], l[1], 0, 0},
{u[1], x^2+y^2+ d[2], l[2], 0},
{0, u[2], x^2+y^2+d[3], l[3]},
{0, 0, u[3], x^2+y^2+d[4]}}
Band is tailored specifically for this:
myTridiagonalMatrix@n_Integer?Positive :=
 SparseArray[
  { Band@{1, 1} -> f[x, y] + Array[d, n]
  , Band@{1, 2} -> Array[u, n - 1]
  , Band@{2, 1} -> Array[l, n - 1]}
  , {n, n}]
Check it out (no need to define f, d, u, l):
myTridiagonalMatrix@5 // MatrixForm
Note that MatrixForm should not be part of a definition. For example, it's a bad idea to set A = (something) // MatrixForm. You will get a MatrixForm object instead of a table (= array of arrays) or a sparse array, and its only purpose is to be pretty-printed in the front end. Trying to use MatrixForm in calculations will yield errors and will lead to unnecessary confusion.
Integrating the element at {1, 1}:
myTridiagonalMatrixWithFirstDiagonalElementIntegrated@n_Integer?Positive :=
 MapAt[
  Integrate[#, {x, 0, 1}, {y, 0, 1}] &
  , myTridiagonalMatrix@n
  , {1, 1}]
You may check it out without defining f or d, as well:
myTridiagonalMatrixWithFirstDiagonalElementIntegrated@5
The latter operation, however, looks suspicious. For example, it does not leave your matrix (or its corresponding linear system) invariant w.r.t. reasonable transformations. (This operation does not even preserve linearity of matrices.) You probably don't want to do it.
Comment on the comment above: there's no need to define A[x_, y_] := … in order to Integrate[A[[1,1]], {x,0,1}, {y,0,1}]. Note that A[[1,1]] is totally different from A[1, 1]: the former is Part[A, 1, 1], which is a certain element of the table A. A[1, 1] is a different expression: if A is some table then A[1, 1] is (that table)[1, 1], which is a valid expression but is normally considered meaningless.
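A minimal illustration of that difference (hypothetical symbols):
A = {{a11, a12}, {a21, a22}};
A[[1, 1]] (* Part[A, 1, 1]: extracts a11 *)
A[1, 1]   (* the table applied as a head: {{a11, a12}, {a21, a22}}[1, 1], syntactically valid but meaningless *)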

Mathematica won't NIntegrate a conditional function

To illustrate my problem here is a toy example:
F[x_] := Module[{out},
If[x > 1,
out = 1/2,
out = 1
];
out
];
The function can be evaluated and plotted. However, when I try to numerically integrate it, I get an error:
NIntegrate[F[x], {x, 0, 2}]
NIntegrate::inumr: The integrand out$831 has evaluated to non-numerical values for all sampling points in the region with boundaries {{0,2}}. >>
NIntegrate and some other functions will do some probing with symbolic values first. Your Module, when given only the symbol x, will return only the (aliased) out.
Fix this by defining your F only for numeric values. NIntegrate will discover this and then use numeric values.
In[1]:= F[x_?NumericQ] := Module[{out},
If[x > 1, out = 1/2, out = 1]; out];
NIntegrate[F[x], {x, 0, 2}]
Out[2]= 1.5
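For the record, a sketch of why the original definition fails under symbolic probing (the localized name, e.g. out$831, varies per session):
Clear[F];
F[x_] := Module[{out}, If[x > 1, out = 1/2, out = 1]; out];
F[t]
(* If[t > 1, ...] cannot decide for symbolic t, so neither assignment fires and the
   localized, valueless symbol (e.g. out$831) is returned: a non-numerical value *)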

How to replace each 0 with the preceding element in a list in an idiomatic way in Mathematica?

This is a fun little problem, and I wanted to check with the experts here if there is a better functional/Mathematica way to approach solving it than what I did. I am not too happy with my solution since it uses a big IF THEN ELSE, but I could not find a Mathematica command that easily does it (such as Select, Cases, Sow/Reap, Map, etc.).
Here is the problem: given a list of values (numbers or symbols; for simplicity, let's assume a list of numbers for now), the list can contain zeros, and the goal is to replace each zero with the element seen before it.
At the end, the list should contain no zeros in it.
Here is an example, given
a = {1, 0, 0, -1, 0, 0, 5, 0};
the result should be
a = {1, 1, 1, -1, -1, -1, 5, 5}
It should of course be done in the most efficient way.
This is what I could come up with
Scan[(a[[#]] = If[a[[#]] == 0, a[[#-1]], a[[#]]]) &, Range[2, Length[a]]];
I wanted to see if I can use Sow/Reap on this, but did not know how.
Question: can this be solved in a more functional/Mathematica way? The shorter the better, of course :)
update 1
Thanks everyone for the answers; all are very good to learn from. This is the result of the speed test, on V 8.04, using Windows 7, 4 GB RAM, Intel 930 @ 2.8 GHz:
I've tested the methods given for n from 100,000 to 4 million. The ReplaceRepeated method does not do well for large lists.
update 2
Removed earlier result that was shown above in update1 due to my error in copying one of the tests.
The updated results are below. Leonid's method is the fastest. Congratulations, Leonid. A very fast method.
The test program is the following:
(*version 2.0 *)
runTests[sizeOfList_?(IntegerQ[#] && Positive[#] &)] :=
Module[{tests, lst, result, nasser, daniel, heike, leonid, andrei,
sjoerd, i, names},
nasser[lst_List] := Module[{a = lst},
Scan[(a[[#]] = If[a[[#]] == 0, a[[# - 1]], a[[#]]]) &,
Range[2, Length[a]]]
];
daniel[lst_List] := Module[{replaceWithPrior},
replaceWithPrior[ll_, n_: 0] :=
Module[{prev}, Map[If[# == 0, prev, prev = #] &, ll]
];
replaceWithPrior[lst]
];
heike[lst_List] := Flatten[Accumulate /@ Split[lst, (#2 == 0) &]];
andrei[lst_List] := Module[{x, y, z},
ReplaceRepeated[lst, {x___, y_, 0, z___} :> {x, y, y, z},
MaxIterations -> Infinity]
];
leonid[lst_List] :=
FoldList[If[#2 == 0, #1, #2] &, First@#, Rest@#] &@lst;
sjoerd[lst_List] :=
FixedPoint[(1 - Unitize[#]) RotateRight[#] + # &, lst];
lst = RandomChoice[Join[ConstantArray[0, 10], Range[-1, 5]],
sizeOfList];
tests = {nasser, daniel, heike, leonid, sjoerd};
names = {"Nasser","Daniel", "Heike", "Leonid", "Sjoerd"};
result = Table[0, {Length[tests]}, {2}];
Do[
result[[i, 1]] = names[[i]];
Block[{j, r = Table[0, {5}]},
Do[
r[[j]] = First#Timing[tests[[i]][lst]], {j, 1, 5}
];
result[[i, 2]] = Mean[r]
],
{i, 1, Length[tests]}
];
result
]
To run the tests for length 1000 the command is:
Grid[runTests[1000], Frame -> All]
Thanks everyone for the answers.
Much (order of magnitude) faster than other solutions still:
FoldList[If[#2 == 0, #1, #2] &, First@#, Rest@#] &
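Applied to the example list from the question, a quick sanity check:
FoldList[If[#2 == 0, #1, #2] &, First@#, Rest@#] &@{1, 0, 0, -1, 0, 0, 5, 0}
(* {1, 1, 1, -1, -1, -1, 5, 5} *)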
The speedup is due to Fold autocompiling. Will not be so dramatic for non-packed arrays. Benchmarks:
In[594]:=
a=b=c=RandomChoice[Join[ConstantArray[0,10],Range[-1,5]],150000];
(b=Flatten[Accumulate/@Split[b,(#2==0)&]]);//Timing
Scan[(a[[#]]=If[a[[#]]==0,a[[#-1]],a[[#]]])&,Range[2,Length[a]]]//Timing
(c=FoldList[If[#2==0,#1,#2]&,First@#,Rest@#]&@c);//Timing
SameQ[a,b,c]
Out[595]= {0.187,Null}
Out[596]= {0.625,Null}
Out[597]= {0.016,Null}
Out[598]= True
This seems to be a factor 4 faster on my machine:
a = Flatten[Accumulate /@ Split[a, (#2 == 0) &]]
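Presumably the trick is that Split groups each nonzero entry with the run of zeros that follows it, and Accumulate then copies that entry across the run:
Split[{1, 0, 0, -1, 0, 0, 5, 0}, (#2 == 0) &]
(* {{1, 0, 0}, {-1, 0, 0}, {5, 0}} *)
Flatten[Accumulate /@ %]
(* {1, 1, 1, -1, -1, -1, 5, 5} *)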
The timings I get are
a = b = RandomChoice[Join[ConstantArray[0, 10], Range[-1, 5]], 10000];
(b = Flatten[Accumulate /@ Split[b, (#2 == 0) &]]); // Timing
Scan[(a[[#]] = If[a[[#]] == 0, a[[# - 1]], a[[#]]]) &,
Range[2, Length[a]]] // Timing
SameQ[a, b]
(* {0.015815, Null} *)
(* {0.061929, Null} *)
(* True *)
FixedPoint[(1 - Unitize[#]) RotateRight[#] + # &, d]
is about 10 and 2 times faster than the Scan- and Split-based solutions above, respectively, but slower than Leonid's.
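A sketch of how a single step works: Unitize marks the nonzero entries, so (1 - Unitize[#]) masks the zeros, which then receive the right-rotated (i.e. preceding) value; FixedPoint iterates until every run of zeros is filled:
d = {1, 0, 0, -1, 0, 0, 5, 0};
(1 - Unitize[#]) RotateRight[#] + # &[d]
(* {1, 1, 0, -1, -1, 0, 5, 5}: one zero per run is filled per iteration *)
FixedPoint[(1 - Unitize[#]) RotateRight[#] + # &, d]
(* {1, 1, 1, -1, -1, -1, 5, 5} *)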
Your question looks exactly like a task for the ReplaceRepeated function. What it basically does is apply the same set of rules to the expression until no more rules are applicable. In your case the expression is a list, and the rule is to replace each 0 with its predecessor wherever it occurs in the list. So here is the solution:
a = {1, 0, 0, -1, 0, 0, 5, 0};
a //. {x___, y_, 0, z___} -> {x, y, y, z};
The pattern for the rule here is the following:
x___ - any symbol, zero or more repetitions, the beginning of the list
y_ - exactly one element before zero
0 - zero itself, this element will be replaced with y later
z___ - any symbol, zero or more repetitions, the end of the list
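Evaluating it on the example list (the trailing semicolon above suppresses the output) gives:
a //. {x___, y_, 0, z___} -> {x, y, y, z}
(* {1, 1, 1, -1, -1, -1, 5, 5} *)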

Unevaluated form of a[[i]]

Consider following simple, illustrating example
cf = Block[{a, x, degree = 3},
With[{expr = Product[x - a[[i]], {i, degree}]},
Compile[{{x, _Real, 0}, {a, _Real, 1}}, expr]
]
]
This is one of the possible ways to transfer code into the body of a Compile statement. It produces a Part::partd error, since a is not a list at the moment of evaluation.
The easy solution is to just ignore this message or turn it off. There are of course other ways around it. For instance, one could circumvent the evaluation of a[[i]] by replacing it inside the Compile body before it is compiled:
cf = ReleaseHold[Block[{a, x, degree = 3},
With[{expr = Product[x - a[i], {i, degree}]},
Hold[Compile[{{x, _Real, 0}, {a, _Real, 1}}, expr]] /.
a[i_] :> a[[i]]]
]
]
If the compiled function is a large piece of code, the Hold, ReleaseHold, and the replacement at the end go a bit against my idea of beautiful code. Is there a short and nice solution I have not considered yet?
Answer to the post of Szabolcs
Could you tell me though why you are using With here?
Yes, and it has to do with the reason why I cannot use := here. I use With to have something like a #define in C, which means a code replacement at the place I need it. Using := in With delays the evaluation, and what the body of Compile sees is not the final piece of code which it is supposed to compile. Therefore,
<< CompiledFunctionTools`
cf = Block[{a, x, degree = 3},
With[{expr := Product[x - a[[i]], {i, degree}]},
Compile[{{x, _Real, 0}, {a, _Real, 1}}, expr]]];
CompilePrint[cf]
shows you that there is a call to the Mathematica kernel in the compiled function:
I4 = MainEvaluate[ Function[{x, a}, degree][ R0, T(R1)0]]
This is bad because Compile should use only the local variables to calculate the result.
Update
Szabolcs' solution works in this case, but it leaves the whole expression unevaluated. Let me explain why it is important that the expression is expanded before it is compiled. I have to admit, my toy example was not the best. So let's try a better one using With and SetDelayed, as in Szabolcs' solution:
Block[{a, x}, With[
{expr := D[Product[x - a[[i]], {i, 3}], x]},
Compile[{{x, _Real, 0}, {a, _Real, 1}}, expr]
]
]
Say I have a polynomial of degree 3 and I need the derivative of it inside the Compile. In the above code I want Mathematica to calculate the derivative for unassigned roots a[[i]] so I can use the formula very often in the compiled code. Looking at the compiled code above
one sees that the D[..] cannot be compiled as nicely as the Product and stays unevaluated:
11 R1 = MainEvaluate[ Hold[D][ R5, R0]]
Therefore, my updated question is: is it possible to evaluate a piece of code without evaluating the Part[] accesses in it, better/nicer than using
Block[{a, x}, With[
{expr = D[Quiet#Product[x - a[[i]], {i, 3}], x]},
Compile[{{x, _Real, 0}, {a, _Real, 1}}, expr]
]
]
Edit: I put the Quiet in the place it belongs. I had it in front of the code block to make it visible to everyone that I used Quiet here to suppress the warning. As Ruebenko already pointed out, in real code it should always be as close as possible to where it belongs. With this approach you probably won't miss other important warnings/errors.
Update 2
Since we're moving away from the original question, we should maybe move this discussion to a new thread. I don't know whom I should give the best-answer award for my question, since we discussed Mathematica and scoping more than how to suppress the a[[i]] issue.
Update 3
To give the final solution: I simply suppress (unfortunately, like I did all the time) the a[[i]] warning with Quiet. In the real example below, I have to use Quiet outside the complete Block to suppress the warning.
To inject the required code into the body of Compile, I use a pure function and give the code to inline as an argument. This is the same approach Michael Trott uses in, e.g., his Numerics book. This is a bit like the where clause in Haskell, where you define stuff you use afterwards.
newtonC = Function[{degree, f, df, colors},
Compile[{{x0, _Complex, 0}, {a, _Complex, 1}},
Block[{x = x0, xn = 0.0 + 0.0 I, i = 0, maxiter = 256,
eps = 10^(-6.), zeroId = 1, j = 1},
For[i = 0, i < maxiter, ++i,
xn = x - f/(df + eps);
If[Abs[xn - x] < eps,
Break[]
];
x = xn;
];
For[j = 1, j <= degree, ++j,
If[Abs[xn - a[[j]]] < eps*10^2,
zeroId = j + 1;
Break[];
];
];
colors[[zeroId]]*(1 - (i/maxiter)^0.3)*1.5
],
CompilationTarget -> "C", RuntimeAttributes -> {Listable},
RuntimeOptions -> "Speed", Parallelization -> True]]##
(Quiet#Block[{degree = 3, polynomial, a, x},
polynomial = HornerForm[Product[x - a[[i]], {i, degree}]];
{degree, polynomial, HornerForm[D[polynomial, x]],
List ### (ColorData[52, #] & /# Range[degree + 1])}])
And this function is now fast enough to calculate the Newton-fractal of a polynomial where the position of the roots is not fixed. Therefore, we can adjust the roots dynamically.
Feel free to adjust n. Here it runs up to n=756 fluently
(* ImageSize n*n, complex plane from -b-I*b to b+I*b *)
With[{n = 256, b = 2.0},
DynamicModule[{
roots = RandomReal[{-b, b}, {3, 2}],
raster = Table[x + I y, {y, -b, b, 2 b/n}, {x, -b, b, 2 b/n}]},
LocatorPane[Dynamic[roots],
Dynamic[
Graphics[{Inset[
Image[Reverse@newtonC[raster, Complex @@@ roots], "Real"],
{-b, -b}, {1, 1}, 2 {b, b}]}, PlotRange -> {{-b, b}, {-
b, b}}, ImageSize -> {n, n}]], {{-b, -b}, {b, b}},
Appearance -> Style["\[Times]", Red, 20]
]
]
]
Teaser:
Ok, here is the very oversimplified version of the code generation framework I am using for various purposes:
ClearAll[symbolToHideQ]
SetAttributes[symbolToHideQ, HoldFirst];
symbolToHideQ[s_Symbol, expandedSymbs_] :=! MemberQ[expandedSymbs, Unevaluated[s]];
ClearAll[globalProperties]
globalProperties[] := {DownValues, SubValues, UpValues (*,OwnValues*)};
ClearAll[getSymbolsToHide];
Options[getSymbolsToHide] = {
Exceptions -> {List, Hold, HoldComplete,
HoldForm, HoldPattern, Blank, BlankSequence, BlankNullSequence,
Optional, Repeated, Verbatim, Pattern, RuleDelayed, Rule, True,
False, Integer, Real, Complex, Alternatives, String,
PatternTest, (* Note: this one is dangerous since it opens a hole
to evaluation leaks. But too good to be ignored. *)
Condition, PatternSequence, Except
}
};
getSymbolsToHide[code_Hold, headsToExpand : {___Symbol}, opts : OptionsPattern[]] :=
Join @@ Complement[
Cases[{
Flatten[Outer[Compose, globalProperties[], headsToExpand]], code},
s_Symbol /; symbolToHideQ[s, headsToExpand] :> Hold[s],
Infinity,
Heads -> True
],
Hold /@ OptionValue[Exceptions]];
ClearAll[makeHidingSymbol]
SetAttributes[makeHidingSymbol, HoldAll];
makeHidingSymbol[s_Symbol] :=
Unique[hidingSymbol(*Unevaluated[s]*) (*,Attributes[s]*)];
ClearAll[makeHidingRules]
makeHidingRules[symbs : Hold[__Symbol]] :=
Thread[List @@ Map[HoldPattern, symbs] -> List @@ Map[makeHidingSymbol, symbs]];
ClearAll[reverseHidingRules];
reverseHidingRules[rules : {(_Rule | _RuleDelayed) ..}] :=
rules /. (Rule | RuleDelayed)[Verbatim[HoldPattern][lhs_], rhs_] :> (rhs :> lhs);
FrozenCodeEval[code_Hold, headsToEvaluate : {___Symbol}] :=
Module[{symbolsToHide, hidingRules, revHidingRules, result},
symbolsToHide = getSymbolsToHide[code, headsToEvaluate];
hidingRules = makeHidingRules[symbolsToHide];
revHidingRules = reverseHidingRules[hidingRules];
result =
Hold[Evaluate[ReleaseHold[code /. hidingRules]]] /. revHidingRules;
Apply[Remove, revHidingRules[[All, 1]]];
result];
The code works by temporarily hiding most symbols behind dummy ones, while allowing certain symbols to evaluate. Here is how this would work here:
In[80]:=
FrozenCodeEval[
Hold[Compile[{{x,_Real,0},{a,_Real,1}},D[Product[x-a[[i]],{i,3}],x]]],
{D,Product,Derivative,Plus}
]
Out[80]=
Hold[Compile[{{x,_Real,0},{a,_Real,1}},
(x-a[[1]]) (x-a[[2]])+(x-a[[1]]) (x-a[[3]])+(x-a[[2]]) (x-a[[3]])]]
So, to use it, you have to wrap your code in Hold and indicate which heads you want to evaluate. What remains here is just to apply ReleaseHold to it. Note that the above code just illustrates the ideas, but is still quite limited. The full version of my method involves other steps which make it much more powerful but also more complex.
Edit
While the above code is still too limited to accommodate many really interesting cases, here is one additional neat example of something that would be rather hard to achieve with the traditional tools of evaluation control:
In[102]:=
FrozenCodeEval[
Hold[f[x_, y_, z_] :=
With[Thread[{a, b, c} = Map[Sqrt, {x, y, z}]],
a + b + c]],
{Thread, Map}]
Out[102]=
Hold[
f[x_, y_, z_] :=
With[{a = Sqrt[x], b = Sqrt[y], c = Sqrt[z]}, a + b + c]]
EDIT -- Big warning!! Injecting code using With or Function into Compile that uses some of Compile's local variables is not reliable! Consider the following:
In[63]:= With[{y=x},Compile[x,y]]
Out[63]= CompiledFunction[{x$},x,-CompiledCode-]
In[64]:= With[{y=x},Compile[{{x,_Real}},y]]
Out[64]= CompiledFunction[{x},x,-CompiledCode-]
Note the renaming of x to x$ in the first case. I recommend you read about localization here and here. (Yes, this is confusing!) We can guess about why this only happens in the first case and not the second, but my point is that this behaviour might not be intended (call it a bug, dark corner or undefined behaviour), so relying on it is fragile ...
Replace-based solutions, like my withRules function, do work though (this was not my intended use for that function, but it fits well here ...)
In[65]:= withRules[{y->x},Compile[x,y]]
Out[65]= CompiledFunction[{x},x,-CompiledCode-]
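For reference, a minimal sketch of a withRules-style injector (an assumed, simplified version, not necessarily the original code): the idea is to substitute the rules into the held expression before evaluation.
ClearAll[withRules];
SetAttributes[withRules, HoldAll];
withRules[rules_, expr_] := ReleaseHold[Hold[expr] /. rules];
(* the replacement happens before Compile ever sees the body, so no renaming can occur *)
withRules[{y -> x}, Compile[x, y]]
(* CompiledFunction[{x}, x, -CompiledCode-], matching Out[65] above *)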
Original answers
You can use := in With, like so:
cf = Block[{a, x, degree = 3},
With[{expr := Product[x - a[[i]], {i, degree}]},
Compile[{{x, _Real, 0}, {a, _Real, 1}}, expr]
]
]
It will avoid evaluating expr and the error from Part.
Generally, = and := work as expected in all of With, Module and Block.
Could you tell me though why you are using With here? (I'm sure you have a good reason, I just can't see it from this simplified example.)
Additional answer
Addressing @halirutan's concern about degree not being inlined during compilation
I see this as exactly the same situation as if we had a global variable defined that we use in Compile. Take for example:
In[18]:= global=1
Out[18]= 1
In[19]:= cf2=Compile[{},1+global]
Out[19]= CompiledFunction[{},1+global,-CompiledCode-]
In[20]:= CompilePrint[cf2]
Out[20]=
No argument
3 Integer registers
Underflow checking off
Overflow checking off
Integer overflow checking on
RuntimeAttributes -> {}
I0 = 1
Result = I2
1 I1 = MainEvaluate[ Function[{}, global][ ]]
2 I2 = I0 + I1
3 Return
This is a common issue. The solution is to tell Compile to inline globals, like so:
cf = Block[{a, x, degree = 3},
With[{expr := Product[x - a[[i]], {i, degree}]},
Compile[{{x, _Real, 0}, {a, _Real, 1}}, expr,
CompilationOptions -> {"InlineExternalDefinitions" -> True}]]];
CompilePrint[cf]
You can check that now there's no callback to the main evaluator.
Alternatively you can inject the value of degree using an extra layer of With instead of Block. This will make you wish for something like this very much.
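A sketch of that alternative (assumed, following the suggestion): an outer With substitutes the value of degree textually, while the inner With with := still injects the unevaluated body:
cf = With[{degree = 3},
   Block[{a, x},
    With[{expr := Product[x - a[[i]], {i, degree}]},
     Compile[{{x, _Real, 0}, {a, _Real, 1}}, expr]]]];
(* degree is now a literal 3 inside Compile, so no MainEvaluate callback is needed *)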
Macro expansion in Mathematica
This is somewhat unrelated, but you mention in your post that you use With for macro expansion. Here's my first (possibly buggy) go at implementing macro expansion in Mathematica. This is not well tested, feel free to try to break it and post a comment.
Clear[defineMacro, macros, expandMacros]
macros = Hold[];
SetAttributes[defineMacro, HoldAllComplete]
defineMacro[name_Symbol, value_] := (AppendTo[macros, name]; name := value)
SetAttributes[expandMacros, HoldAllComplete]
expandMacros[expr_] := Unevaluated[expr] //. Join @@ (OwnValues /@ macros)
Description:
macros is a (held) list of all symbols to be expanded.
defineMacro will make a new macro.
expandMacros will expand macro definitions in an expression.
Beware: I didn't implement macro redefinition; this will not work while expansion is on using $Pre. Also beware of recursive macro definitions and infinite loops.
Usage:
Do macro expansion on all input by defining $Pre:
$Pre = expandMacros;
Define a to have the value 1:
defineMacro[a, 1]
Set a delayed definition for b:
b := a + 1
Note that the definition of b is not fully evaluated, but a is expanded.
?b
Global`b
b:=1+1
Turn off macro expansion ($Pre can be dangerous if there's a bug in my code):
$Pre =.
One way:
cf = Block[{a, x, degree = 3},
With[{expr = Quiet[Product[x - a[[i]], {i, degree}]]},
Compile[{{x, _Real, 0}, {a, _Real, 1}}, expr]]]
Be careful though, if you really want this.
Original code:
newtonC = Function[{degree, f, df, colors},
Compile[{{x0, _Complex, 0}, {a, _Complex, 1}},
Block[{x = x0, xn = 0.0 + 0.0 I, i = 0, maxiter = 256,
...
RuntimeOptions -> "Speed", Parallelization -> True]]##
(Quiet#Block[{degree = 3, polynomial, a, x},
polynomial = HornerForm[Product[x - a[[i]], {i, degree}]];
...
Modified code:
newtonC = Function[{degree, f, df, colors},
Compile[{{x0, _Complex, 0}, {a, _Complex, 1}},
Block[{x = x0, xn = 0.0 + 0.0 I, i = 0, maxiter = 256,
...
RuntimeOptions -> "Speed", Parallelization -> True],HoldAllComplete]##
( (( (HoldComplete###)/.a[i_]:>a[[i]] )&)#Block[{degree = 3, polynomial, a, x},
polynomial = HornerForm[Product[x - a[i], {i, degree}]];
...
Add HoldAllComplete attribute to the function.
Write a[i] in place of a[[i]].
Replace Quiet with (((HoldComplete @@ #) /. a[i_] :> a[[i]]) &)
Produces the identical code, no Quiet, and all of the Hold stuff is in one place.

Mathematica: NExpectation vs Expectation - inconsistent results

The following code returns different values for NExpectation and Expectation.
If I try the same for NormalDistribution[] I get convergence errors for NExpectation (but the final result is still 0 for all of them).
What is causing the problem?
U[x_] := If[x >= 0, Sqrt[x], -Sqrt[-x]]
N[Expectation[U[x], x \[Distributed] NormalDistribution[1, 1]]]
NExpectation[U[x], x \[Distributed] NormalDistribution[1, 1]]
Output:
-0.104154
0.796449
I think it might actually be an Integrate bug.
Let's define your
U[x_] := If[x >= 0, Sqrt[x], -Sqrt[-x]]
and the equivalent
V[x_] := Piecewise[{{Sqrt[x], x >= 0}, {-Sqrt[-x], x < 0}}]
which are equivalent over the reals
FullSimplify[U[x] - V[x], x \[Element] Reals] (* Returns 0 *)
For both U and V, the analytic Expectation command uses the Method option "Integrate"; this can be seen by running
Table[Expectation[U[x], x \[Distributed] NormalDistribution[1, 1],
Method -> m], {m, {"Integrate", "Moment", "Sum", "Quantile"}}]
Thus, what it's really doing is the integral
Integrate[U[x] PDF[NormalDistribution[1, 1], x], {x, -Infinity, Infinity}]
which returns
(Sqrt[Pi] (BesselI[-(1/4), 1/4] - 3 BesselI[1/4, 1/4] +
BesselI[3/4, 1/4] - BesselI[5/4, 1/4]))/(4 Sqrt[2] E^(1/4))
The integral for V
Integrate[V[x] PDF[NormalDistribution[1, 1], x], {x, -Infinity, Infinity}]
gives the same answer but multiplied by a factor of 1 + I. This is clearly a bug.
The numerical integral using U or V returns the expected value of 0.796449:
NIntegrate[U[x] PDF[NormalDistribution[1, 1], x], {x, -Infinity, Infinity}]
This is presumably the correct solution.
Edit: The reason that kguler's answer returns the same value for all versions is that the u[x_?NumericQ] definition prevents the analytic integrals from being performed, so Expectation remains unevaluated and reverts to using NExpectation when asked for its numerical value.
Edit 2:
Breaking down the problem a little bit more, you find
In[1]:= N@Integrate[E^(-(1/2) (-1 + x)^2) Sqrt[x], {x, 0, Infinity}]
NIntegrate[E^(-(1/2) (-1 + x)^2) Sqrt[x] , {x, 0, Infinity}]
Out[1]= 0. - 0.261075 I
Out[2]= 2.25748
In[3]:= N@Integrate[Sqrt[-x] E^(-(1/2) (-1 + x)^2), {x, -Infinity, 0}]
NIntegrate[Sqrt[-x] E^(-(1/2) (-1 + x)^2) , {x, -Infinity, 0}]
Out[3]= 0.261075
Out[4]= 0.261075
Over both ranges, the integrand is real and non-oscillatory, with exponential decay. There should not be any need for imaginary/complex results.
Finally note that the above results hold for Mathematica version 8.0.3.
In version 7, the integrals return 1F1 hypergeometric functions and the analytic result matches the numeric result. So this bug (which is also currently present in Wolfram|Alpha) is a regression.
If you change the argument of your function u to avoid evaluation for non-numeric values, all three methods give the same result:
u[x_?NumericQ] := If[x >= 0, Sqrt[x], -Sqrt[-x]] ;
Expectation[u[x], x \[Distributed] NormalDistribution[1, 1]] // N;
N[Expectation[u[x], x \[Distributed] NormalDistribution[1, 1]]] ;
NExpectation[u[x], x \[Distributed] NormalDistribution[1, 1]];
{% === %% === %%%, %}
with the result
{True, 0.796449}
