The following code returns different values for NExpectation and Expectation.
If I try the same for NormalDistribution[] I get convergence errors for NExpectation (but the final result is still 0 for all of them).
What is causing the problem?
U[x_] := If[x >= 0, Sqrt[x], -Sqrt[-x]]
N[Expectation[U[x], x \[Distributed] NormalDistribution[1, 1]]]
NExpectation[U[x], x \[Distributed] NormalDistribution[1, 1]]
Output:
-0.104154
0.796449
I think it might actually be an Integrate bug.
Let's define your
U[x_] := If[x >= 0, Sqrt[x], -Sqrt[-x]]
and the equivalent
V[x_] := Piecewise[{{Sqrt[x], x >= 0}, {-Sqrt[-x], x < 0}}]
which are equivalent over the reals
FullSimplify[U[x] - V[x], x \[Element] Reals] (* Returns 0 *)
For both U and V, the analytic Expectation command uses the Method option "Integrate"; this can be seen by running
Table[Expectation[U[x], x \[Distributed] NormalDistribution[1, 1],
Method -> m], {m, {"Integrate", "Moment", "Sum", "Quantile"}}]
Thus, what it's really doing is the integral
Integrate[U[x] PDF[NormalDistribution[1, 1], x], {x, -Infinity, Infinity}]
which returns
(Sqrt[Pi] (BesselI[-(1/4), 1/4] - 3 BesselI[1/4, 1/4] +
BesselI[3/4, 1/4] - BesselI[5/4, 1/4]))/(4 Sqrt[2] E^(1/4))
The integral for V
Integrate[V[x] PDF[NormalDistribution[1, 1], x], {x, -Infinity, Infinity}]
gives the same answer but multiplied by a factor of 1 + I. This is clearly a bug.
The numerical integral using U or V returns the expected value of 0.796449:
NIntegrate[U[x] PDF[NormalDistribution[1, 1], x], {x, -Infinity, Infinity}]
This is presumably the correct solution.
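Given the claim above that the integral for V comes out as the U result multiplied by 1 + I, the spurious factor can be isolated directly (a quick check in the affected version; the common Bessel expression should cancel structurally):
intU = Integrate[U[x] PDF[NormalDistribution[1, 1], x], {x, -Infinity, Infinity}];
intV = Integrate[V[x] PDF[NormalDistribution[1, 1], x], {x, -Infinity, Infinity}];
Simplify[intV/intU]  (* 1 + I in the affected version 8.0.3 *)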
Edit: The reason that kguler's answer returns the same value for all versions is that the u[x_?NumericQ] definition prevents the analytic integrals from being performed, so Expectation remains unevaluated and reverts to using NExpectation when asked for its numerical value.
Edit 2:
Breaking down the problem a little bit more, you find
In[1]:= N@Integrate[E^(-(1/2) (-1 + x)^2) Sqrt[x], {x, 0, Infinity}]
NIntegrate[E^(-(1/2) (-1 + x)^2) Sqrt[x], {x, 0, Infinity}]
Out[1]= 0. - 0.261075 I
Out[2]= 2.25748
In[3]:= N@Integrate[Sqrt[-x] E^(-(1/2) (-1 + x)^2), {x, -Infinity, 0}]
NIntegrate[Sqrt[-x] E^(-(1/2) (-1 + x)^2), {x, -Infinity, 0}]
Out[3]= 0.261075
Out[4]= 0.261075
Over both ranges the integrand is real and non-oscillatory, with exponential decay. There should not be any need for imaginary/complex results.
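One can ask Mathematica to confirm this (a minimal check; both should reduce to True):
Simplify[Im[Sqrt[x] E^(-(1/2) (-1 + x)^2)] == 0, x >= 0]   (* True *)
Simplify[Im[Sqrt[-x] E^(-(1/2) (-1 + x)^2)] == 0, x <= 0]  (* True *)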
Finally, note that the above results hold for Mathematica version 8.0.3.
In version 7, the integrals return 1F1 hypergeometric functions and the analytic result matches the numeric result. So this bug (which is also currently present in Wolfram|Alpha) is a regression.
If you change the argument of your function u to avoid evaluation for non-numeric values, all three methods give the same result:
u[x_?NumericQ] := If[x >= 0, Sqrt[x], -Sqrt[-x]] ;
Expectation[u[x], x \[Distributed] NormalDistribution[1, 1]] // N;
N[Expectation[u[x], x \[Distributed] NormalDistribution[1, 1]]] ;
NExpectation[u[x], x \[Distributed] NormalDistribution[1, 1]];
{% === %% === %%%, %}
with the result
{True, 0.796449}
Suppose I want to construct a matrix A such that A[[i,i]] = f[x,y] + d[i], A[[i,i+1]] = u[i], A[[i+1,i]] = l[i], for i = 1, …, N. Say f[x_, y_] = x^2 + y^2.
How can I code this in Mathematica?
Additionally, if I want to integrate the first diagonal element of A, i.e. A[[1,1]] over x and y, both running from 0 to 1, how can I do that?
In[1]:= n = 4;
f[x_, y_] := x^2 + y^2;
A = Normal[SparseArray[{
{i_,i_}/;i>1 -> f[x,y]+ d[i],
{i_,j_}/;j-i==1 -> u[i],
{i_,j_}/;i-j==1 -> l[i-1],
{1, 1} -> Integrate[f[x,y]+d[1], {x,0,1}, {y,0,1}]},
{n, n}]]
Out[3]= {{2/3+d[1], l[1], 0, 0},
{u[1], x^2+y^2+ d[2], l[2], 0},
{0, u[2], x^2+y^2+d[3], l[3]},
{0, 0, u[3], x^2+y^2+d[4]}}
Band is tailored specifically for this:
myTridiagonalMatrix@n_Integer?Positive :=
 SparseArray[
  { Band@{1, 1} -> f[x, y] + Array[d, n]
  , Band@{1, 2} -> Array[u, n - 1]
  , Band@{2, 1} -> Array[l, n - 1]}
  , {n, n}]
Check it out (no need to define f, d, u, l):
myTridiagonalMatrix@5 // MatrixForm
Note that MatrixForm should not be part of a definition. For example, it's a bad idea to set A = (something) // MatrixForm: you will get a MatrixForm object instead of a table (an array of arrays) or a sparse array, and its only purpose is to be pretty-printed in the front end. Trying to use a MatrixForm object in calculations will yield errors and lead to unnecessary confusion.
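A minimal illustration of the difference:
mat = myTridiagonalMatrix@4;   (* mat is a SparseArray, usable in further calculations *)
mat // MatrixForm              (* pretty-printed display only; nothing is stored *)
bad = myTridiagonalMatrix@4 // MatrixForm;   (* bad is a MatrixForm wrapper, not a matrix *)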
Integrating the element at {1, 1}:
myTridiagonalMatrixWithFirstDiagonalElementIntegrated@n_Integer?Positive :=
 MapAt[
  Integrate[#, {x, 0, 1}, {y, 0, 1}] &
  , myTridiagonalMatrix@n
  , {1, 1}]
You may check it out without defining f or d, as well:
myTridiagonalMatrixWithFirstDiagonalElementIntegrated@5
The latter operation, however, looks suspicious. For example, it does not leave your matrix (or its corresponding linear system) invariant w.r.t. reasonable transformations. (This operation does not even preserve linearity of matrices.) You probably don't want to do it.
Comment on the comment above: there's no need to define A[x_, y_] := … in order to Integrate[A[[1,1]], {x,0,1}, {y,0,1}]. Note that A[[1,1]] is totally different from A[1, 1]: the former is Part[A, 1, 1], which is a certain element of the table A. A[1, 1] is a different expression: if A is some table, then A[1, 1] is (that table)[1, 1], which is a valid expression but is normally considered meaningless.
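A tiny example of the distinction (with a throwaway table B):
B = {{1, 2}, {3, 4}};
B[[1, 1]]  (* Part[B, 1, 1]: the element 1 *)
B[1, 1]    (* the table applied to arguments: {{1, 2}, {3, 4}}[1, 1], which stays unevaluated *)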
I ran into this problem when I try to solve a partial differential equation. Here is my code:
dd = NDSolve[{D[tes[t, x], t] ==D[tes[t, x], x, x] + Exp[-1/(tes[t, x])],
tes[t, 0] == 1, tes[t, -1] == 1, tes[0, x] == 1}, {tes[t, x]}, {t, 0, 5}, {x, -1, 0}]
f[t_, x_] = tes[t, x] /. dd
kkk = FunctionInterpolation[Integrate[Exp[-1.1/( Evaluate[f[t, x]])], {x, -1, 0}], {t, 0, 0.05}]
kkg[t_] = Integrate[Exp[-1.1/( Evaluate[f[t, x]])], {x, -1, 0}]
Plot[Evaluate[kkk[t]] - Evaluate[kkg[t]], {t, 0, 0.05}]
N[kkg[0.01] - kkk[0.01], 1]
It's strange that the deviation shown in the graph reaches more than 5*10^-7 around t = 0.01, while it's only -3.88578*10^-16 when calculated by N[kkg[0.01] - kkk[0.01], 1]. I wonder where this error comes from.
By the way, I find it strange that the output of N[kkg[0.01] - kkk[0.01], 1] has so many decimal places; I've set the precision to 1, right?
Using Mathematica 7 the plot I get does not show a peak at 0.01:
Plot[kkk[t] - kkg[t], {t, 0, 0.05}, GridLines -> Automatic]
There is a peak at about 0.00754:
kkk[0.00754] - kkg[0.00754] // N
{6.50604*10^-7}
Regarding N, it does not change the precision of machine precision numbers as it does for exact or arbitrary precision ones:
N[{1.23456789, Pi, 1.23456789`50}, 2]
Precision /@ %
{1.23457, 3.1, 1.2}
{MachinePrecision, 2., 2.}
Look at SetPrecision if you want to force (fake) a precision, and NumberForm if you want to print a number in a specific format.
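For instance (a small sketch of the difference between the two):
SetPrecision[1.23456789, 2]   (* a new arbitrary-precision number: 1.2 with Precision 2 *)
NumberForm[1.23456789, 2]     (* prints 1.2, but the underlying number is unchanged *)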
I don't use Mathematica in general; I need it to compare with another program. I want to solve a system of three nonlinear differential equations. For this I use DSolve. Everything goes wrong when I include the nonlinear (exponential) term.
Here is my code:
equa = {x'[t] == z[t] - Exp[y[t]],
y'[t] == z[t] - y[t],
z'[t] == x[t] + y[t] - z[t],
x[0] == 0,
y[0] == 0,
z[0] == 0};
slt = DSolve[equa, {x, y, z}, t]
Plot[{x[t] /. slt}, {t, 0, 10}]
And the errors are like this :
DSolve::dsvar: 0.1 cannot be used as a variable.
ReplaceAll::reps: {DSolve[<<1>>]} is neither a list of replacement rules nor a valid dispatch table, and so cannot be used for replacing.
Does someone know why the exponential term causes trouble?
Thanks
DSolve cannot find a symbolic solution of this nonlinear system, so you may try solving it numerically with NDSolve instead:
s = NDSolve[equa, {x, y, z}, {t, 0, 10}];
Plot[Evaluate[({x[t], y[t], z[t]} /. s)], {t, 0, 1}]
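If you then need the solutions as standalone functions (a small usage sketch, assuming the NDSolve call above succeeds):
{xs, ys, zs} = {x, y, z} /. First[s];  (* three InterpolatingFunctions *)
xs[0.5]                                (* value of x(t) at t = 0.5 *)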
What's the best way of getting Mathematica 7 or 8 to do the integral
NIntegrate[Exp[-x]/Sin[Pi x], {x, 0, 50}]
There are poles at every integer, and we want the Cauchy principal value.
The idea is to get a good approximation for the integral from 0 to infinity.
With Integrate there is the option PrincipalValue -> True.
With NIntegrate I can give it the option Exclusions -> (Sin[Pi x] == 0), or manually give it the poles by
NIntegrate[Exp[-x]/Sin[Pi x], Evaluate[{x, 0, Sequence @@ Range[50], 50}]]
The original command and the above two NIntegrate tricks give the result 60980 +/- 10. But they all spit out errors. What is the best way of getting a quick reliable result for this integral without Mathematica wanting to give errors?
Simon, is there reason to believe your integral is convergent?
In[52]:= f[k_Integer, eps_Real] :=
NIntegrate[Exp[-x]/Sin[Pi x], {x, k + eps, k + 1 - eps}]
In[53]:= Sum[f[k, 1.0*10^-4], {k, 0, 50}]
Out[53]= 2.72613
In[54]:= Sum[f[k, 1.0*10^-5], {k, 0, 50}]
Out[54]= 3.45906
In[55]:= Sum[f[k, 1.0*10^-6], {k, 0, 50}]
Out[55]= 4.19199
It looks like the problem is at x == 0. Splitting the integration range into pieces from k + eps to k + 1 - eps for integer values of k:
In[65]:= int =
Sum[(-1)^k Exp[-k ], {k, 0, Infinity}] Integrate[
Exp[-x]/Sin[Pi x], {x, eps, 1 - eps}, Assumptions -> 0 < eps < 1/2]
Out[65]= (E/((1 + E) (I + \[Pi]))) (2 E^(-1 + eps - I eps \[Pi]) *
     Hypergeometric2F1[1, (I + \[Pi])/(2 \[Pi]), 3/2 + I/(2 \[Pi]), E^(-2 I eps \[Pi])] +
   2 E^(I eps (I + \[Pi])) *
     Hypergeometric2F1[1, (I + \[Pi])/(2 \[Pi]), 3/2 + I/(2 \[Pi]), E^(2 I eps \[Pi])])
In[73]:= N[int /. eps -> 10^-6, 20]
Out[73]= 4.1919897038160855098 + 0.*10^-20 I
In[74]:= N[int /. eps -> 10^-4, 20]
Out[74]= 2.7261330651934049862 + 0.*10^-20 I
In[75]:= N[int /. eps -> 10^-5, 20]
Out[75]= 3.4590554287709991277 + 0.*10^-20 I
As you see there is a logarithmic singularity.
In[79]:= ser =
Assuming[0 < eps < 1/32, FullSimplify[Series[int, {eps, 0, 1}]]]
Out[79]= SeriesData[eps, 0, {(I*(-1 + E)*Pi -
2*(1 + E)*HarmonicNumber[-(-I + Pi)/(2*Pi)] +
Log[1/(4*eps^2*Pi^2)] - 2*E*Log[2*eps*Pi])/(2*(1 + E)*Pi),
(-1 + E)/((1 + E)*Pi)}, 0, 2, 1]
In[80]:= Normal[
ser] /. {{eps -> 1.*^-6}, {eps -> 0.00001}, {eps -> 0.0001}}
Out[80]= {4.191989703816426 - 7.603403526913691*^-17*I,
3.459055428805136 -
7.603403526913691*^-17*I,
2.726133068607085 - 7.603403526913691*^-17*I}
EDIT
Out[79] of the code above gives the series expansion for eps -> 0; if the two logarithmic terms are combined, we get
In[7]:= ser = SeriesData[eps, 0,
{(I*(-1 + E)*Pi - 2*(1 + E)*HarmonicNumber[-(-I + Pi)/(2*Pi)] +
Log[1/(4*eps^2*Pi^2)] - 2*E*Log[2*eps*Pi])/(2*(1 + E)*
Pi),
(-1 + E)/((1 + E)*Pi)}, 0, 2, 1];
In[8]:= Collect[Normal[PowerExpand //@ (ser + O[eps])],
  Log[eps], FullSimplify]
Out[8]= -(Log[eps]/\[Pi]) + (
I (-1 + E) \[Pi] -
2 (1 + E) (HarmonicNumber[-((-I + \[Pi])/(2 \[Pi]))] +
Log[2 \[Pi]]))/(2 (1 + E) \[Pi])
Clearly the -Log[eps]/Pi came from the pole at x == 0. So if one subtracts this, just as the principal value method does for the other poles, you end up with a finite value:
In[9]:= % /. Log[eps] -> 0
Out[9]= (I (-1 + E) \[Pi] -
2 (1 + E) (HarmonicNumber[-((-I + \[Pi])/(2 \[Pi]))] +
Log[2 \[Pi]]))/(2 (1 + E) \[Pi])
In[10]:= N[%, 20]
Out[10]= -0.20562403655659928968 + 0.*10^-21 I
Of course, this result is difficult to verify numerically, but you might know more than I do about your problem.
EDIT 2
This edit is to justify the In[65] input that computes the original regularized integral. We are computing
Sum[ Integrate[ Exp[-x]/Sin[Pi*x], {x, k+eps, k+1-eps}], {k, 0, Infinity}] ==
Sum[ Integrate[ Exp[-x-k]/Sin[Pi*(k+x)], {x, eps, 1-eps}], {k, 0, Infinity}] ==
Sum[ (-1)^k*Exp[-k]*Integrate[ Exp[-x]/Sin[Pi*x], {x, eps, 1-eps}],
{k, 0, Infinity}] ==
Sum[ (-1)^k*Exp[-k], {k, 0, Infinity}] *
Integrate[ Exp[-x]/Sin[Pi*x], {x, eps, 1-eps}]
In the third line, the identity Sin[Pi*(k+x)] == (-1)^k*Sin[Pi*x] for integer k was used.
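The identity itself is easy to confirm:
Simplify[Sin[Pi (k + x)] == (-1)^k Sin[Pi x], Element[k, Integers]]  (* True *)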
Simon, I haven't spent much time with your integral, but you should try looking at the stationary phase approximation. What you have is a smooth function (the exponential) and a highly oscillatory function (the sine). The work involved is now in brow-beating the 1/Sin[x] into the form Exp[I f[x]].
Alternatively, you could use the series expansion of the cosecant (not valid at poles):
In[1]:=Series[Csc[x], {x, 0, 5}]
(formatted) Out[1]=1/x + x/6 + 7/360 x^3 + 31/15120 x^5 +O[x]^6
Note that for all m>-1, you have the following:
In[2]:=Integrate[x^m Exp[-x], {x, 0, Infinity}, Assumptions -> m > -1]
Out[2]=Gamma[1+m]
However, summing the series with the cosecant coefficients (from Wikipedia), not including the 1/x Exp[-x] term (which doesn't converge on [0, Infinity)),
c[m_] := (-1)^(m + 1) 2 (2^(2 m - 1) - 1) BernoulliB[2 m]/Factorial[2 m];
Sum[c[m] Gamma[1 + 2 m - 1], {m, 1, Infinity}]
does not converge either...
So, I'm not sure that you can work out an approximation for the integral to infinity, but if you're satisfied with a solution up to some large N, I hope these help.
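The divergence can be watched directly (a small numerical sketch using the c[m] defined above): the cosecant coefficients decay only geometrically while the Gamma factors grow factorially, so the terms eventually grow:
Table[N[c[m] Gamma[2 m]], {m, 1, 8}]  (* the terms stop shrinking and blow up *)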
I have to agree with Sasha: the integral does not appear to be convergent. However, if you exclude x == 0 and break the integral into pieces
Integrate[Exp[-x]/Sin[Pi x], {x, n + 1/2, n + 3/2}, PrincipalValue -> True]
where n >= 0 && Element[n, Integers], then it seems you may get an alternating series
I Sum[ (-1/E)^n, {n, 1, Infinity}] == - I / (1 + E )
Now, I only took it out to n == 4, but it looks reasonable. However, for the integral above with Assumptions -> Element[n, Integers] && n >= 0, Mathematica gives
If[ 2 n >= 1, - I / E, Integrate[ ... ] ]
which just doesn't conform to the individual cases. As an additional note, if the pole lies at the boundary of the integration region, i.e. your limits are {x, n, n + 1}, you only get DirectedInfinitys. A quick look at the plot implies that with the limits {x, n, n + 1} you only have a strictly positive or negative integrand, so the infinite value may be due to the lack of compensation which {x, n + 1/2, n + 3/2} gives you. Checking with {x, n, n + 2}, however, it only spits out the unevaluated integral.
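If you want to push the experiment past n == 4, the terms can be tabulated directly (slow, and offered only as a sketch; each entry is a separate principal-value integral):
Table[Integrate[Exp[-x]/Sin[Pi x], {x, n + 1/2, n + 3/2},
  PrincipalValue -> True], {n, 0, 6}]  (* compare successive ratios against -1/E *)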
On the documentation page for Equal we read that
Approximate numbers with machine precision or higher are considered equal if they differ in at most their last seven binary digits (roughly their last two decimal digits).
Here are examples (32 bit system; for 64 bit system add some more zeros in the middle):
In[1]:= 1.0000000000000021 == 1.0000000000000022
1.0000000000000021 === 1.0000000000000022
Out[1]= True
Out[2]= True
I'm wondering: is there a "normal" analog of the Equal function in Mathematica that does not drop the last 7 binary digits?
Thanks to a recent post on the official newsgroup by Oleksandr Rasputinov, I have now learned of two undocumented settings which control the tolerance of Equal and SameQ: $EqualTolerance and $SameQTolerance. In Mathematica version 5 and earlier these live in the Experimental` context and are well documented: $EqualTolerance, $SameQTolerance. Starting from version 6 they were moved to the Internal` context and became undocumented, but they still work and even have built-in diagnostic messages which appear when one tries to assign them illegal values:
In[1]:= Internal`$SameQTolerance = a
During evaluation of In[1]:= Internal`$SameQTolerance::tolset:
Cannot set Internal`$SameQTolerance to a; value must be a real number or +/- Infinity.
Out[1]= a
Citing Oleksandr Rasputinov:
Internal`$EqualTolerance ... takes a machine real value indicating the number of decimal digits' tolerance that should be applied, i.e. Log[2]/Log[10] times the number of least significant bits one wishes to ignore.
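This also ties in with the documentation quote at the top: the default tolerance of seven binary digits corresponds to roughly two decimal digits, since
N[7 Log[2]/Log[10]]  (* 2.10721 *)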
In this way, setting Internal`$EqualTolerance to zero will force Equal to consider numbers equal only when they are identical in all binary digits (not considering out-of-Precision digits):
In[2]:= Block[{Internal`$EqualTolerance = 0},
1.0000000000000021 == 1.0000000000000022]
Out[2]= False
In[5]:= Block[{Internal`$EqualTolerance = 0},
1.00000000000000002 == 1.000000000000000029]
Block[{Internal`$EqualTolerance = 0},
1.000000000000000020 == 1.000000000000000029]
Out[5]= True
Out[6]= False
Note the following case:
In[3]:= Block[{Internal`$EqualTolerance = 0},
1.0000000000000020 == 1.0000000000000021]
RealDigits[1.0000000000000020, 2] === RealDigits[1.0000000000000021, 2]
Out[3]= True
Out[4]= True
In this case both numbers have MachinePrecision which effectively is
In[5]:= $MachinePrecision
Out[5]= 15.9546
(53*Log[10, 2]). With such precision these numbers are identical in all binary digits:
In[6]:= RealDigits[1.0000000000000020` $MachinePrecision, 2] ===
RealDigits[1.0000000000000021` $MachinePrecision, 2]
Out[6]= True
Increasing precision to 16 makes them different arbitrary-precision numbers:
In[7]:= RealDigits[1.0000000000000020`16, 2] ===
RealDigits[1.0000000000000021`16, 2]
Out[7]= False
In[8]:= Row@First@RealDigits[1.0000000000000020`16, 2]
Row@First@RealDigits[1.0000000000000021`16, 2]
Out[9]= 100000000000000000000000000000000000000000000000010010
Out[10]= 100000000000000000000000000000000000000000000000010011
But unfortunately Equal still fails to distinguish them:
In[11]:= Block[{Internal`$EqualTolerance = 0},
{1.00000000000000002`16 == 1.000000000000000021`16,
1.00000000000000002`17 == 1.000000000000000021`17,
1.00000000000000002`18 == 1.000000000000000021`18}]
Out[11]= {True, True, False}
There is an infinite number of such cases:
In[12]:= Block[{Internal`$EqualTolerance = 0},
Cases[Table[a = SetPrecision[1., n];
b = a + 10^-n; {n, a == b, RealDigits[a, 2] === RealDigits[b, 2],
Order[a, b] == 0}, {n, 15, 300}], {_, True, False, _}]] // Length
Out[12]= 192
Interestingly, sometimes RealDigits returns identical digits while Order shows that internal representations of expressions are not identical:
In[13]:= Block[{Internal`$EqualTolerance = 0},
Cases[Table[a = SetPrecision[1., n];
b = a + 10^-n; {n, a == b, RealDigits[a, 2] === RealDigits[b, 2],
Order[a, b] == 0}, {n, 15, 300}], {_, _, True, False}]] // Length
Out[13]= 64
But it seems that the opposite situation never happens:
In[14]:=
Block[{Internal`$EqualTolerance = 0},
Cases[Table[a = SetPrecision[1., n];
b = a + 10^-n; {n, a == b, RealDigits[a, 2] === RealDigits[b, 2],
Order[a, b] == 0}, {n, 15, 3000}], {_, _, False, True}]] // Length
Out[14]= 0
Try this:
realEqual[a_, b_] := SameQ @@ RealDigits[{a, b}, 2, Automatic]
The choice of base 2 is crucial to ensure that you are comparing the internal representations.
In[54]:= realEqual[1.0000000000000021, 1.0000000000000021]
Out[54]= True
In[55]:= realEqual[1.0000000000000021, 1.0000000000000022]
Out[55]= False
In[56]:= realEqual[
1.000000000000000000000000000000000000000000000000000000000000000022
, 1.000000000000000000000000000000000000000000000000000000000000000023
]
Out[56]= False
In[12]:= MyEqual[x_, y_] := Order[x, y] == 0
In[13]:= MyEqual[1.0000000000000021, 1.0000000000000022]
Out[13]= False
In[14]:= MyEqual[1.0000000000000021, 1.0000000000000021]
Out[14]= True
This tests if two objects are identical; since 1.0000000000000021 and 1.000000000000002100 differ in precision, they won't be considered identical.
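A quick check of that claim:
MyEqual[1.0000000000000021, 1.000000000000002100]
(* False: the first is a machine-precision number, the second an arbitrary-precision one *)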
I'm not aware of an already defined operator, but you may define, for example:
longEqual[x_, y_] := Block[{$MaxPrecision = 20, $MinPrecision = 20},
Equal[x - y, 0.]]
Such as:
longEqual[1.00000000000000223, 1.00000000000000223]
True
longEqual[1.00000000000000223, 1.00000000000000222]
False
Edit
If you want to generalize for an arbitrary number of digits, you can do for example:
longEqual[x_, y_] :=
Block[{
$MaxPrecision = Max @@ StringLength /@ ToString /@ {x, y},
$MinPrecision = Max @@ StringLength /@ ToString /@ {x, y}},
Equal[x - y, 0.]]
So that your counterexample in your comment also works.
HTH!
I propose a strategy that uses RealDigits to compare the actual digits of the numbers. The only tricky bit is stripping out trailing zeroes.
trunc = {Drop[First@#, Plus @@ First /@ {-Dimensions@First@#,
      Last@Position[First@#, n_?(# != 0 &)]}], Last@#} &@RealDigits@# &;
exactEqual = SameQ @@ trunc /@ {#1, #2} &;
In[1]:= exactEqual[1.000000000000000000000000000000000000000000000000000111,
  1.000000000000000000000000000000000000000000000000000111000]
Out[1]= True
In[2]:= exactEqual[1.000000000000000000000000000000000000000000000000000111,
  1.000000000000000000000000000000000000000000000000000112000]
Out[2]= False
I think that you really have to specify what you want... there's no way to compare approximate real numbers that will satisfy everyone in every situation.
Anyway, here's a couple more options:
In[1]:= realEqual[lhs_,rhs_,tol_:$MachineEpsilon] := 0==Chop[lhs-rhs,tol]
In[2]:= Equal[1.0000000000000021,1.0000000000000021]
realEqual[1.0000000000000021,1.0000000000000021]
Out[2]= True
Out[3]= True
In[4]:= Equal[1.0000000000000022,1.0000000000000021]
realEqual[1.0000000000000022,1.0000000000000021]
Out[4]= True
Out[5]= False
As the precision of both numbers gets higher, they can always be distinguished if you set tol small enough.
Note that the subtraction is done at the precision of the lower of the two numbers. You could make it happen at the precision of the higher number (which seems a bit pointless) by doing something like
maxEqual[lhs_, rhs_] := With[{prec = Max[Precision /@ {lhs, rhs}]},
  0 === Chop[SetPrecision[lhs, prec] - SetPrecision[rhs, prec], 10^-prec]]
Maybe using the minimum precision makes more sense:
minEqual[lhs_, rhs_] := With[{prec = Min[Precision /@ {lhs, rhs}]},
  0 === Chop[SetPrecision[lhs, prec] - SetPrecision[rhs, prec], 10^-prec]]
One other way to define such a function is by using SetPrecision:
MyEqual[a_, b_] := SetPrecision[a, Precision[a] + 3] == SetPrecision[b, Precision[b] + 3]
This seems to work in all cases, but I'm still wondering whether there is a built-in function. It is ugly to use high-level functions for such a primitive task...