How to identify the order of evaluation of a mathematical expression and its correctness? - addition

Is a - b + c - d = a + c - b - d mathematically correct?
I believe this statement can be correct, but only sometimes, when the order of evaluation does not matter. For instance, if I evaluate
{(a - b) + c} - d and choose numbers for which it gives the same value as {(a + c) - b} - d, then the statement holds for those numbers.
Is there a more mathematical and logical explanation for this?
I also think it has to do with associativity, but that would suggest the statement is never correct, since addition and multiplication are each associative on their own, while addition and subtraction together are not.

This depends heavily on the definition of + and -. As written, they are just free, untyped infix symbols, so it's hard to tell.
A simple example: suppose the values are of a fixed-width floating-point type (one of those defined in IEEE 754, for instance). If we have
a = 10e100
b = -10e-100
c = -10e100
d = -10e-100
and the expressions are evaluated greedily left-to-right, then
a - b + c - d = ((a - b) + c) - d
If the type has enough exponent bits to cover decimal exponents of -100 and 100, but its mantissa is not wide enough to represent 10e100 + 10e-100 exactly (the smaller addend is simply lost in this sum), then the value of the whole large expression is
((10e100 - -10e-100) + -10e100) - -10e-100 =
= (10e100 + -10e100) - -10e-100 = 0 - -10e-100 = 10e-100
But the second expression would evaluate to
((a + c) - b) - d = ((10e100 + -10e100) - -10e-100) - -10e-100 = 20e-100
So you see, the result can differ by 100% depending on the order of evaluation.
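The same effect is easy to reproduce with ordinary IEEE-754 doubles; here is a minimal sketch in Python using the numbers from the example above:
a = 10e100    # 1e101
b = -10e-100  # -1e-99
c = -10e100
d = -10e-100

print(((a - b) + c) - d)  # 1e-99: b is absorbed when added to the huge a
print(((a + c) - b) - d)  # 2e-99: a + c cancels exactly before b is added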

Related

Algorithm to precisely compare two exponentiations for very large integers (order of 1 billion)

We want to compare a^b to c^d, and tell if the first is smaller, greater, or equal (where ^ denotes exponentiation).
Obviously, for very large numbers, we cannot explicitly compute these values.
The most common approach in this situation is to apply log on both sides and compare b * log(a) to d * log(c). The issue here is that logs are floating-point operations, and as such we cannot trust our answer with 100% confidence (there might be some values which are incredibly close, and because of floating-point error we get a wrong answer).
Is there an algorithm for solving this problem? I've been scouring the internet for this, but I can only find solutions which work for particular cases only (e.g. in which one exponent is a multiple of the other), or which use floating point in some way (logarithms, division), etc.
This is sort of two questions in one:
Are they equal?
If not, which one is greater?
As Peter O. observes, it's easiest to work in a language that provides an arbitrary-precision fraction type. I'll use Python 3.
Let's assume without loss of generality that a ≤ c (swap if necessary) and b is relatively prime to d (divide both by the greatest common divisor).
To get at the core of the question, I'm going to assume that a, c > 0 and b, d ≥ 0. Removing this assumption is tedious but not difficult.
Equality test
There are some easy cases where a = 1 or b = 0 or c = 1 or d = 0.
Separately, two necessary conditions for a^b = c^d are
i. b ≥ d, since otherwise b < d, which together with a ≤ c implies a^b < c^d;
ii. a is a divisor of c: from (i), a^b = c^d is a divisor of c^b = c^(b−d) · c^d, and a^b dividing c^b forces a to divide c (compare the exponents of each prime on both sides).
When these conditions hold, we can divide through by a^d to reduce the problem to testing whether a^(b−d) = (c/a)^d.
In Python 3:
def equal_powers(a, b, c, d):
    while True:
        lhs_is_one = a == 1 or b == 0
        rhs_is_one = c == 1 or d == 0
        if lhs_is_one or rhs_is_one:
            return lhs_is_one and rhs_is_one
        if a > c:
            # Keep a <= c.
            a, b, c, d = c, d, a, b
        if b < d:
            return False  # necessary condition (i) fails
        q, r = divmod(c, a)
        if r != 0:
            return False  # necessary condition (ii) fails
        # Divide through by a^d: now test a^(b-d) = (c/a)^d.
        b -= d
        c = q

def test_equal_powers():
    for a in range(1, 25):
        for b in range(25):
            for c in range(1, 25):
                for d in range(25):
                    assert equal_powers(a, b, c, d) == (a ** b == c ** d)

test_equal_powers()
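For instance, a quick usage sketch on powers of two (my own, not part of the original answer):
print(equal_powers(2 ** 10, 3, 2 ** 5, 6))  # True:  (2^10)^3 = 2^30 = (2^5)^6
print(equal_powers(4, 3, 8, 2))             # True:  64 = 64
print(equal_powers(4, 3, 8, 3))             # False: 64 != 512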
Inequality test
Once we've established that the two quantities are not equal, it's time to figure out which one is greater. (Without the equality test, the code here could run forever.)
If you're doing this for real, you should consult an actual reference on computing elementary functions. I'm just going to try to do the simplest thing that works.
Time for a calculus refresher. We have the Taylor series
−log x = (1−x) + (1−x)^2/2 + (1−x)^3/3 + (1−x)^4/4 + ...
To get a lower bound on −log x, truncate the series. To get an upper bound, truncate but replace the final term (1−x)^n/n with (1−x)^n/n (1/x), since
(1−x)^n/n (1/x)
= (1−x)^n/n (1 + (1−x) + (1−x)^2 + ...)
= (1−x)^n/n + (1−x)^(n+1)/n + (1−x)^(n+2)/n + ...
> (1−x)^n/n + (1−x)^(n+1)/(n+1) + (1−x)^(n+2)/(n+2) + ...
To get a good convergence rate, we're going to want 0.5 ≤ x < 1, which we can achieve by dividing x by a power of two.
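As a sanity check, here is a small sketch (my own, under the assumptions above) that computes both bounds with exact fractions; neg_log_bounds is a hypothetical helper name:
from fractions import Fraction

def neg_log_bounds(x, n):
    # Bounds on -log(x) for 1/2 <= x < 1: the truncated series is a lower
    # bound; multiplying its final term by 1/x gives an upper bound.
    s = Fraction(0)
    term = Fraction(1)
    for k in range(1, n + 1):
        term *= 1 - x
        s += term / k
    return s, s + term / n * (1 / x - 1)

lo, hi = neg_log_bounds(Fraction(1, 2), 10)
print(float(lo), float(hi))  # brackets -log(1/2) = 0.693147...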
In Python, we'll represent a real number as an infinite generator of shrinking intervals that contain the true value. Once the intervals for b log a and d log c are disjoint, we can determine how they compare.
import fractions

def minus(x, y):
    # Interval subtraction of two interval streams.
    while True:
        x_lo, x_hi = next(x)
        y_lo, y_hi = next(y)
        yield x_lo - y_hi, x_hi - y_lo

def times(b, x):
    # Multiply an interval stream by a nonnegative constant b.
    for lo, hi in x:
        yield b * lo, b * hi

def restricted_log(a):
    # Interval stream converging to log(a) for 1/2 <= a < 1.
    series = 0
    n = 0
    numerator = 1
    while True:
        n += 1
        numerator *= 1 - a
        series += fractions.Fraction(numerator, n)
        yield -(series + fractions.Fraction(numerator * (1 - a), (n + 1) * a)), -series

def log(a):
    # Reduce a >= 1 into [1/2, 1) by halving: log a = log(a/2^n) - n*log(1/2).
    n = 0
    while a >= 1:
        a = fractions.Fraction(a, 2)
        n += 1
    return minus(restricted_log(a), times(n, restricted_log(fractions.Fraction(1, 2))))

def less_powers(a, b, c, d):
    lhs = times(b, log(a))
    rhs = times(d, log(c))
    while True:
        lhs_lo, lhs_hi = next(lhs)
        rhs_lo, rhs_hi = next(rhs)
        if lhs_hi < rhs_lo:
            return True
        if rhs_hi < lhs_lo:
            return False

def test_less_powers():
    for a in range(1, 10):
        for b in range(10):
            for c in range(1, 10):
                for d in range(10):
                    if a ** b != c ** d:
                        assert less_powers(a, b, c, d) == (a ** b < c ** d)

test_less_powers()
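For example (my own usage sketch, not from the original answer), comparing 2^100 with 10^30:
print(less_powers(2, 100, 10, 30))  # False: 2^100 > 10^30, since 2^10 > 10^3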

Probability of a disjunction on N dependent events in Prolog

Does anybody know where to find a Prolog algorithm for computing the probability of a disjunction for N dependent events? For N = 2 I know that P(E1 OR E2) = P(E1) + P(E2) - P(E1) * P(E2), so one could do:
prob_disjunct(E1, E2, P) :- P is E1 + E2 - E1 * E2.
But how can this predicate be generalised to N events when the input is a list? Maybe there is a package which does this?
Kind regards/JCR
The recursive formula from Robert Dodier's answer directly translates to
p_or([], 0).
p_or([P|Ps], Or) :-
    p_or(Ps, Or1),
    Or is P + Or1*(1-P).
Although this works fine, e.g.
?- p_or([0.5,0.3,0.7,0.1],P).
P = 0.9055
hardcore Prolog programmers can't help noticing that the definition isn't tail-recursive. This would really only be a problem when you are processing very long lists, but since the order of list elements doesn't matter, it is easy to turn things around. This is a standard technique, using an auxiliary predicate and an "accumulator pair" of arguments:
p_or(Ps, Or) :-
    p_or(Ps, 0, Or).

p_or([], Or, Or).
p_or([P|Ps], Or0, Or) :-
    Or1 is P + Or0*(1-P),
    p_or(Ps, Or1, Or).    % tail-recursive call
I don't know anything about Prolog, but anyway it's convenient to write the probability of a disjunction of a number of independent items p_m = Pr(S_1 or S_2 or S_3 or ... or S_m) recursively as
p_m = Pr(S_m) + p_{m - 1} (1 - P(S_m))
You can prove this by just peeling off the last item -- look at Pr((S_1 or ... or S_{m - 1}) or S_m) and just write that in terms of the usual formula, writing Pr(A or B) = Pr(A) + Pr(B) - Pr(A) Pr(B) = Pr(B) + Pr(A) (1 - Pr(B)), for A and B independent.
The formula above is item C.3.10 in my dissertation: http://riso.sourceforge.net/docs/dodier-dissertation.pdf It is a simple result, and I suppose it must be an exercise in some textbooks, although I don't remember seeing it.
For any event E I'll write E' for the complementary event (ie E' occurs iff E doesn't).
Then we have:
P(E') = 1 - P(E)
(A union B)' = A' inter B'
A and B are independent iff A' and B' are independent
so for independent E1..En
P( E1 union .. union En ) = 1 - P( E1' inter .. inter En' )
= 1 - product{ 1 <= i <= n | 1 - P(E[i]) }
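For readers who want to check both forms outside Prolog, here is a minimal sketch in Python (my own, for independent events): the left fold implements the recursive formula, and the product implements the complement identity; both reproduce the 0.9055 from the example above.
import math

def p_or_fold(ps):
    # Fold P(A or B) = P(B) + P(A) * (1 - P(B)) over the list.
    p = 0.0
    for q in ps:
        p = q + p * (1 - q)
    return p

def p_or_product(ps):
    # Complement form: 1 - prod(1 - p_i).
    return 1 - math.prod(1 - q for q in ps)

print(p_or_fold([0.5, 0.3, 0.7, 0.1]))     # ≈ 0.9055
print(p_or_product([0.5, 0.3, 0.7, 0.1]))  # ≈ 0.9055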

Maxima: how to reform an equation in a defined format

In Maxima, I want to change the following equation:
ax+b-c-d=0
into the following format
(ax+b)/(c+d)=1
Note:
something like ax+b-c-d+1=1 is not what I want.
Basically I want to have positive elements in one side and negative elements in another side, then divide the positive elements by the negative elements.
Here is a quick attempt. It handles some equations of the form you described, but it's probably easy to find some which it can't handle. Maybe it works well enough, or at least provides some inspiration.
ptermp (e) := symbolp(e) or (numberp(e) and e > 0)
or ((op(e) = "+" or op(e) = "*") and every (ptermp, args(e)));
matchdeclare (pterm, ptermp);
matchdeclare (otherterm, all);
defrule (r1, pterm + otherterm = 0, ratsimp (pterm/(-otherterm)) = 1);
NOTE: the catch-all otherterm must precede pterm alphabetically! This is a useful, but obscure, consequence of the simplification of "+" expressions and the pattern-matching process ... sorry for the obscurity.
Examples:
apply1 (a*x - b - c + d = 0, r1);
a x + d
------- = 1
c + b
apply1 (a*x - (b + g) - 2*c + d*e*f = 0, r1);
a x + d e f
----------- = 1
g + 2 c + b
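For comparison, here is a rough sketch of the same positive/negative split in Python with SymPy (my own translation, not part of the Maxima thread); could_extract_minus_sign is used as a crude sign test:
import sympy as sp

a, b, c, d, x = sp.symbols('a b c d x')
lhs = a*x + b - c - d  # the equation is lhs = 0

terms = sp.Add.make_args(lhs)
pos = [t for t in terms if not t.could_extract_minus_sign()]
neg = [-t for t in terms if t.could_extract_minus_sign()]
print(sp.Eq(sp.Add(*pos) / sp.Add(*neg), 1))  # Eq((a*x + b)/(c + d), 1)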

Why won't this Mathematica code maximize?

f[n_] := ((A*n^a)^(1/s) +
c*(B*(a*c*(B/A)^(1/s)*n^(1 - (a/s)))^(-(a*s)/(a - s)))^(1/s))^s +
b*log (1 - n - ((a*c*(B/A)^(1/s)*n^(1 - (a/s)))^(-(a*s)/(a - s))))
d/dn (f (n))
d/dn (f[n])
D[f[n], n]
solve (D[f[n], n] = 0)
0
Solve[D[f[n], n] = 0, n]
Solve[0, n]
Maximize[f[n], n]
Maximize[b log (1 - n - (a (B/A)^(1/s) c n^(1 - a/s))^(-((a s)/(a - s)))) + ((A n^a)^(1/s)
+ c (B (a (B/A)^(1/s) c n^(1 - a/s))^(-((a s)/(a - s))))^(1/s))^s, n]
I am not getting anything returning for any of these functions. Any idea why?
Attaching a photo of the mathematica script:
First of all, you're writing solve with a lowercase s, which is just an undefined symbol. To use the built-in function Solve you need to capitalize it. In the same way, you have to write Log with a capital letter, not lowercase log, since it's a built-in function.
Second, your open parenthesis is not a bracket. Functions in Mathematica require square brackets, like Solve[ ... ], not Solve( ... ).
Third, you're using = instead of ==. The single equals = is used to assign variables; the double equals == is used to represent equality, so the call should look like Solve[D[f[n], n] == 0, n].
See if you can get it to work after remedying these errors.

Get mathematica to simplify expression with another equation

I have a very complicated mathematica expression that I'd like to simplify by using a new, possibly dimensionless parameter.
An example of my expression is:
K=a*b*t/((t+f)c*d);
(the actual expression is monstrously large, thousands of characters). I'd like to replace all occurrences of the expression t/(t+f) with p
p=t/(t+f);
The goal here is to find a replacement so that all t's and f's are replaced by p. In this case, the replacement p is a nondimensionalized parameter, so it seems like a good candidate replacement.
I've not been able to figure out how to do this in Mathematica (or if it's possible). I tried:
eq1= K==a*b*t/((t+f)c*d);
eq2= p==t/(t+f);
Solve[{eq1,eq2},K]
Not surprisingly, this doesn't work. If there were a way to force it to solve for K in terms of p,a,b,c,d, this might work, but I can't figure out how to do that either. Thoughts?
Edit #1 (11/10/11 - 1:30)
[deleted to simplify]
OK, new tack. I've taken p=ton/(ton+toff) and multiplied p by several expressions. I know that p can be completely eliminated. The new expression (in terms of p) is
testEQ = A B p + A^2 B p^2 + (A+B)p^3;
Then I made the substitution for p, and called (normal) FullSimplify, giving me this expression.
testEQ2= (ton (B ton^2 + A^2 B ton (toff + ton) +
A (ton^2 + B (toff + ton)^2)))/(toff + ton)^3;
Finally, I tried all of the suggestions below, except the last (not sure how it works yet!)
Only the Eliminate option worked. So I guess I'll try this method from now on. Thank you.
EQ1 = a1 == (ton (B ton^2 + A^2 B ton (toff + ton) +
A (ton^2 + B (toff + ton)^2)))/(toff + ton)^3;
EQ2 = P1 == ton/(ton + toff);
Eliminate[{EQ1, EQ2}, {ton, toff}]
A B P1 + A^2 B P1^2 + (A + B) P1^3 == a1
I should add, if the goal is to make all substitutions that are possible, leaving the rest, I still don't know how to do that. But it appears that if a substitution can completely eliminate a few variables, Eliminate[] works best.
Have you tried this?
K = a*b*t/((t + f) c*d);
Solve[p == t/(t + f), t]
-> {{t -> -((f p)/(-1 + p))}}
Simplify[K /. %[[1]] ]
-> (a b p)/(c d)
EDIT: Oh, and are you aware of Eliminate?
Eliminate[{eq1, eq2}, {t,f}]
-> a b p == c d K && c != 0 && d != 0
Solve[%, K]
-> {{K -> (a b p)/(c d)}}
EDIT 2: Also, in this simple case, solving for K and t simultaneously seems to do the trick, too:
Solve[{eq1, eq2}, {K, t}]
-> {{K -> (a b p)/(c d), t -> -((f p)/(-1 + p))}}
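The same simultaneous-solve idea can be sketched outside Mathematica as well, for instance with SymPy (my own sketch, not from the thread):
import sympy as sp

a, b, c, d, t, f, K, p = sp.symbols('a b c d t f K p')
eq1 = sp.Eq(K, a*b*t/((t + f)*c*d))
eq2 = sp.Eq(p, t/(t + f))
print(sp.solve([eq1, eq2], [K, t]))
# expect K -> a*b*p/(c*d) and t -> f*p/(1 - p), up to sign conventions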
Something along these lines is discussed in the MathGroup post at
http://forums.wolfram.com/mathgroup/archive/2009/Oct/msg00023.html
(I see it has an apocryphal note that is quite relevant, at least to the author of that post.)
Here is how it might be applied in the example above. For purposes of keeping this self contained I'll repeat the replacement code.
replacementFunction[expr_, rep_, vars_] :=
 Module[{num = Numerator[expr], den = Denominator[expr],
   hed = Head[expr], base, expon},
  If[PolynomialQ[num, vars] && PolynomialQ[den, vars] && ! NumberQ[den],
   replacementFunction[num, rep, vars]/
    replacementFunction[den, rep, vars],
   If[hed === Power && Length[expr] == 2,
    base = replacementFunction[expr[[1]], rep, vars];
    expon = replacementFunction[expr[[2]], rep, vars];
    PolynomialReduce[base^expon, rep, vars][[2]],
    If[Head[hed] === Symbol &&
      MemberQ[Attributes[hed], NumericFunction],
     Map[replacementFunction[#, rep, vars] &, expr],
     PolynomialReduce[expr, rep, vars][[2]]]]]]
Your example is now as follows. We take the input, and also the replacement. For the latter we make an equivalent polynomial by clearing denominators.
kK = a*b*t/((t + f) c*d);
rep = Numerator[Together[p - t/(t + f)]];
Now we can invoke the replacement. We list the variables we are interested in replacing, treating 'p' as a parameter. This way it will get ordered lower than the others, meaning the replacements will try to remove them in favor of 'p'.
In[127]:= replacementFunction[kK, rep, {t, f}]
Out[127]= (a b p)/(c d)
This approach has a bit of magic in figuring out what should be the listed "variables". Possibly some further tweakage could be done to improve on that. But I believe that, generally, simply not listing the things we want to use as new replacements is the right way to go.
Over the years there have been variants of this idea on MathGroup. It is possible that some others may be better suited to the specific expression(s) you wish to handle.
--- edit ---
The idea behind this is to use PolynomialReduce to do algebraic replacement. That is to say, we do not try for pattern matching but instead use a polynomial "canonicalization" method. But in general we're not working with polynomial inputs. So we apply this idea recursively on PolynomialQ arguments inside functions carrying the NumericFunction attribute.
Earlier versions of this idea, along with some more explanation, can be found at the note referenced below, as well as in notes it references (how's that for explanatory recursion?).
http://forums.wolfram.com/mathgroup/archive/2006/Aug/msg00283.html
--- end edit ---
--- edit 2 ---
As observed in the wild, this approach is not always a simplifier. It does algebraic replacement, which involves, under the hood, a notion of "term ordering" (roughly, "which things get replaced by which others?") and thus simple variables may expand to longer expressions.
Another form of term rewriting is syntactic replacement via pattern matching, and other responses discuss using that approach. It has a different drawback, insofar as the generality of patterns to consider might become overwhelming. For example, what does one do with k^2/(w + p^4)^3 when the rule is to replace k/(w + p^4) with q? (Specifically, how do we recognize this as being equivalent to (k/(w + p^4))^2*1/(w + p^4)?)
The upshot is one needs to have an idea of what is desired and what methods might be feasible. This of course is generally problem specific.
One thing that occurs is perhaps you want to find and replace all commonly occurring "complicated" expressions with simpler ones. This is referred to as common subexpression elimination (CSE). In Mathematica this can be done using a function called Experimental`OptimizeExpression[]. Here are several links to MathGroup posts that discuss this.
http://forums.wolfram.com/mathgroup/archive/2009/Jul/msg00138.html
http://forums.wolfram.com/mathgroup/archive/2007/Nov/msg00270.html
http://forums.wolfram.com/mathgroup/archive/2006/Sep/msg00300.html
http://forums.wolfram.com/mathgroup/archive/2005/Jan/msg00387.html
http://forums.wolfram.com/mathgroup/archive/2002/Jan/msg00369.html
Here is an example from one of those notes.
InputForm[Experimental`OptimizeExpression[(3 + 3*a^2 + Sqrt[5 + 6*a + 5*a^2] +
a*(4 + Sqrt[5 + 6*a + 5*a^2]))/6]]
Out[206]//InputForm=
Experimental`OptimizedExpression[Block[{Compile`$1, Compile`$3, Compile`$4,
Compile`$5, Compile`$6}, Compile`$1 = a^2; Compile`$3 = 6*a;
Compile`$4 = 5*Compile`$1; Compile`$5 = 5 + Compile`$3 + Compile`$4;
Compile`$6 = Sqrt[Compile`$5]; (3 + 3*Compile`$1 + Compile`$6 +
a*(4 + Compile`$6))/6]]
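Common subexpression elimination is available outside Mathematica too; for instance, SymPy exposes it as sympy.cse. A small sketch of the same example (my own, not from the MathGroup notes):
import sympy as sp

a = sp.symbols('a')
expr = (3 + 3*a**2 + sp.sqrt(5 + 6*a + 5*a**2)
        + a*(4 + sp.sqrt(5 + 6*a + 5*a**2))) / 6
replacements, reduced = sp.cse(expr)
print(replacements)  # e.g. [(x0, sqrt(5*a**2 + 6*a + 5))]
print(reduced)       # the expression rewritten in terms of x0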
--- end edit 2 ---
Daniel Lichtblau
K = a*b*t/((t+f)c*d);
FullSimplify[ K,
TransformationFunctions -> {(# /. t/(t + f) -> p &), Automatic}]
(a b p) / (c d)
Corrected update to show another method:
EQ1 = a1 == (ton (B ton^2 + A^2 B ton (toff + ton) +
A (ton^2 + B (toff + ton)^2)))/(toff + ton)^3;
f = # /. ton + toff -> ton/p &;
FullSimplify[f /@ EQ1]
a1 == p (A B + A^2 B p + (A + B) p^2)
I don't know if this is of any value at this point, but hopefully at least it works.
