In Maxima, I want to change the following equation:
ax+b-c-d=0
into the following format
(ax+b)/(c+d)=1
Note:
something like ax+b-c-d+1=1 is not what I want.
Basically, I want to have the positive terms on one side and the negative terms on the other side, and then divide the positive terms by the negative terms.
Here is a quick attempt. It handles some equations of the form you described, but it's probably easy to find some which it can't handle. Maybe it works well enough, or at least provides some inspiration.
ptermp (e) := symbolp(e) or (numberp(e) and e > 0)
or ((op(e) = "+" or op(e) = "*") and every (ptermp, args(e)));
matchdeclare (pterm, ptermp);
matchdeclare (otherterm, all);
defrule (r1, pterm + otherterm = 0, ratsimp (pterm/(-otherterm)) = 1);
NOTE: the catch-all otherterm must precede pterm alphabetically! This is a useful, but obscure, consequence of the simplification of "+" expressions and the pattern-matching process ... sorry for the obscurity.
Examples:
apply1 (a*x - b - c + d = 0, r1);
a x + d
------- = 1
c + b
apply1 (a*x - (b + g) - 2*c + d*e*f = 0, r1);
a x + d e f
----------- = 1
g + 2 c + b
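For comparison, here is a rough sketch of the same idea in Python with SymPy (an aside, not Maxima; the helper name to_ratio is my own). It splits the left-hand side of lhs = 0 into terms that carry a minus sign and terms that don't, and then forms the ratio.

import sympy as sp

def to_ratio(lhs):
    # Rewrite lhs = 0 as (positive terms)/(negated negative terms) = 1.
    terms = sp.Add.make_args(sp.expand(lhs))
    pos = [t for t in terms if not t.could_extract_minus_sign()]
    neg = [-t for t in terms if t.could_extract_minus_sign()]
    return sp.Eq(sp.Add(*pos) / sp.Add(*neg), 1)

a, x, b, c, d = sp.symbols('a x b c d')
print(to_ratio(a*x + b - c - d))   # Eq((a*x + b)/(c + d), 1)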
We want to compare a^b to c^d, and tell if the first is smaller, greater, or equal (where ^ denotes exponentiation).
Obviously, for very large numbers, we cannot explicitly compute these values.
The most common approach in this situation is to apply log on both sides and compare b * log(a) to d * log(c). The issue here is that logs are floating-point operations, and as such we cannot trust our answer with 100% confidence (there might be some values which are incredibly close, and because of floating-point error we get a wrong answer).
Is there an algorithm for solving this problem? I've been scouring the internet for this, but I can only find solutions that work for particular cases (e.g. where one exponent is a multiple of the other), or that use floating point in some way (logarithms, division), etc.
This is sort of two questions in one:
Are they equal?
If not, which one is greater?
As Peter O. observes, it's easiest to do this in a language that provides an arbitrary-precision fraction type. I'll use Python 3.
Let's assume without loss of generality that a ≤ c (swap if necessary) and b is relatively prime to d (divide both by the greatest common divisor).
To get at the core of the question, I'm going to assume that a, c > 0 and b, d ≥ 0. Removing this assumption is tedious but not difficult.
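A small helper for these normalizations might look like this (a sketch; the name normalize and the use of math.gcd are mine, not from the original answer). Swapping is recorded because it flips the direction of an inequality, and dividing both exponents by their gcd preserves the comparison because taking g-th roots is monotone on positive reals.

import math

def normalize(a, b, c, d):
    # Ensure a <= c, remembering whether we swapped the sides.
    swapped = a > c
    if swapped:
        a, b, c, d = c, d, a, b
    # Make the exponents coprime: a**b ? c**d iff a**(b//g) ? c**(d//g).
    g = math.gcd(b, d)
    if g > 1:
        b, d = b // g, d // g
    return a, b, c, d, swapped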
Equality test
There are some easy cases where a = 1 or b = 0 or c = 1 or d = 0.
Separately, necessary conditions for a^b = c^d are
i. b ≥ d, since otherwise b < d, which together with a ≤ c implies a^b < c^d;
ii. a is a divisor of c: by (i) we have b ≥ d, so a^b = c^d divides c^b = c^(b−d) c^d, and a^b dividing c^b implies a divides c.
When these conditions hold, we can divide through by a^d to reduce the problem to testing whether a^(b−d) = (c/a)^d.
In Python 3:
def equal_powers(a, b, c, d):
    while True:
        # A side equals 1 exactly when its base is 1 or its exponent is 0.
        lhs_is_one = a == 1 or b == 0
        rhs_is_one = c == 1 or d == 0
        if lhs_is_one or rhs_is_one:
            return lhs_is_one and rhs_is_one
        if a > c:
            # Keep the smaller base on the left.
            a, b, c, d = c, d, a, b
        if b < d:
            # Necessary condition (i) fails.
            return False
        q, r = divmod(c, a)
        if r != 0:
            # Necessary condition (ii) fails: a does not divide c.
            return False
        # Divide both sides by a**d: compare a**(b-d) with (c/a)**d.
        b -= d
        c = q

def test_equal_powers():
    for a in range(1, 25):
        for b in range(25):
            for c in range(1, 25):
                for d in range(25):
                    assert equal_powers(a, b, c, d) == (a ** b == c ** d)

test_equal_powers()
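For instance (a usage example added here, not part of the original answer):

print(equal_powers(8, 4, 16, 3))    # True,  since 8**4 == 4096 == 16**3
print(equal_powers(2, 10, 10, 3))   # False, since 1024 != 1000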
Inequality test
Once we've established that the two quantities are not equal, it's time to figure out which one is greater. (Without the equality test, the code here could run forever.)
If you're doing this for real, you should consult an actual reference on computing elementary functions. I'm just going to try to do the simplest thing that works.
Time for a calculus refresher. We have the Taylor series
−log x = (1−x) + (1−x)^2/2 + (1−x)^3/3 + (1−x)^4/4 + ...
To get a lower bound, truncate the series. To get an upper bound, truncate it but replace the final term (1−x)^n/n with (1−x)^n/n · (1/x), since (expanding 1/x = 1/(1 − (1−x)) as a geometric series)
(1−x)^n/n · (1/x)
= (1−x)^n/n · (1 + (1−x) + (1−x)^2 + ...)
= (1−x)^n/n + (1−x)^(n+1)/n + (1−x)^(n+2)/n + ...
> (1−x)^n/n + (1−x)^(n+1)/(n+1) + (1−x)^(n+2)/(n+2) + ...
To get a good convergence rate, we're going to want 0.5 ≤ x < 1, which we can achieve by dividing x by a power of two.
In Python, we'll represent a real number as an infinite generator of shrinking intervals that contain the true value. Once the intervals for b log a and d log c are disjoint, we can determine how they compare.
import fractions

def minus(x, y):
    # Interval subtraction: bounds for x - y.
    while True:
        x_lo, x_hi = next(x)
        y_lo, y_hi = next(y)
        yield x_lo - y_hi, x_hi - y_lo

def times(b, x):
    # Interval scaling by a nonnegative integer b.
    for lo, hi in x:
        yield b * lo, b * hi

def restricted_log(a):
    # Shrinking intervals containing log(a) for 0 < a < 1,
    # from the truncated series and the corrected final term.
    series = 0
    n = 0
    numerator = 1
    while True:
        n += 1
        numerator *= 1 - a
        series += fractions.Fraction(numerator, n)
        yield -(series + fractions.Fraction(numerator * (1 - a), (n + 1) * a)), -series

def log(a):
    # Reduce a into [1/2, 1) by repeated halving; log(a) = log(a/2**n) - n*log(1/2).
    n = 0
    while a >= 1:
        a = fractions.Fraction(a, 2)
        n += 1
    return minus(restricted_log(a), times(n, restricted_log(fractions.Fraction(1, 2))))

def less_powers(a, b, c, d):
    # Compare b*log(a) with d*log(c) by refining the intervals until they separate.
    lhs = times(b, log(a))
    rhs = times(d, log(c))
    while True:
        lhs_lo, lhs_hi = next(lhs)
        rhs_lo, rhs_hi = next(rhs)
        if lhs_hi < rhs_lo:
            return True
        if rhs_hi < lhs_lo:
            return False

def test_less_powers():
    for a in range(1, 10):
        for b in range(10):
            for c in range(1, 10):
                for d in range(10):
                    if a ** b != c ** d:
                        assert less_powers(a, b, c, d) == (a ** b < c ** d)

test_less_powers()
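If you want a single three-way comparison, the two tests compose naturally. The wrapper below is my addition (the name compare_powers is not from the original answer); the equality check has to come first so that less_powers is never called on equal inputs.

def compare_powers(a, b, c, d):
    # -1 if a**b < c**d, 0 if equal, 1 if greater.
    if equal_powers(a, b, c, d):
        return 0
    return -1 if less_powers(a, b, c, d) else 1

print(compare_powers(4, 3, 8, 2))   # 0, since 4**3 == 64 == 8**2
print(compare_powers(3, 7, 7, 4))   # -1, since 2187 < 2401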
Is a - b + c - d = a + c - b - d mathematically correct?
I believe this statement can be correct, but only sometimes, when the order of evaluation does not matter. For example, if I evaluate {(a - b) + c} - d and choose numbers such that it gives the same result as {(a + c) - b} - d (say, where b and c are the same number), then it may be correct.
Is there a more mathematical and logical explanation for this?
I also think it has to do with associativity, but that would suggest the statement is never correct, since addition and multiplication are each associative separately, while addition and subtraction together are not.
This depends highly on the definition of + and -. As far as you have written them, they are just free untyped infix symbols, so it's hard to tell.
A simple example. Suppose values are of fixed-width floating point type (like one of those defined in IEEE-754, for instance). Next, if we have
a = 10e100
b = -10e-100
c = -10e100
d = -10e-100
and the expressions are evaluated greedily left-to-right, then
a - b + c - d = ((a - b) + c) - d
Assume the type has enough exponent bits to represent decimal orders of -100 and 100, but its mantissa is not wide enough to represent 10e100 + 10e-100 exactly; specifically, the smaller operand is simply lost in that addition. Then the value of the whole expression is
((10e100 - -10e-100) + -10e100) - -10e-100 =
= (10e100 + -10e100) - -10e-100 = 0 - -10e-100 = 10e-100
But the second expression would evaluate to
((a + c) - b) - d = ((10e100 + -10e100) - -10e-100) - -10e-100 = 20e-100
So you see, the result can differ by 100% depending on the order of evaluation.
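For what it's worth, the worked example above is easy to reproduce with IEEE-754 double-precision floats (a quick check using the same values):

a, b, c, d = 10e100, -10e-100, -10e100, -10e-100

left_to_right = ((a - b) + c) - d   # the tiny addend is absorbed by the huge one early on
regrouped     = ((a + c) - b) - d   # the huge terms cancel first, so nothing is lost

print(left_to_right)   # 1e-99, i.e. 10e-100
print(regrouped)       # 2e-99, i.e. 20e-100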
Infix notation: x = a + b*(c - d + e/f)/(g*h) + i
I transform the infix expression into postfix and I get two answers. I can't tell which is correct.
1. x a b c d - e f / + g h * / * + i + =
2. x a b c d - e f / + * g h * / + i + =
I converted each postfix expression into a sequence of stack operations and found that both leave the stack empty at the end.
So I want to ask whether it is possible to have two correct answers.
If you do the evaluation, you'll see that your question comes down to whether (b*(c-d+e/f))/(g*h) is the same as b*((c-d+e/f)/(g*h)).
The answer is that they're the same. That is:
(x*y)/z == x*(y/z)
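One way to convince yourself is to run both postfix strings through a small stack evaluator with exact rational arithmetic (a sketch of mine, not from the original answer; the leading x and the trailing = of the assignment are left out):

from fractions import Fraction

def eval_postfix(tokens, env):
    # Evaluate a postfix token list with a stack; operand names are looked up in env.
    ops = {'+': lambda x, y: x + y, '-': lambda x, y: x - y,
           '*': lambda x, y: x * y, '/': lambda x, y: x / y}
    stack = []
    for tok in tokens:
        if tok in ops:
            y = stack.pop()
            x = stack.pop()
            stack.append(ops[tok](x, y))
        else:
            stack.append(env[tok])
    assert len(stack) == 1
    return stack[0]

env = {name: Fraction(value) for value, name in enumerate('abcdefghi', start=2)}
p1 = "a b c d - e f / + g h * / * + i +".split()
p2 = "a b c d - e f / + * g h * / + i +".split()
print(eval_postfix(p1, env) == eval_postfix(p2, env))   # True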
I'm struggling to figure out an algorithm to find the intersection of two linear equations like:
f(x)=2x+4
g(x)=x+2
I'd like to use the method where you set f(x) = g(x) and solve for x, and I'd like to stay away from cross products.
Does anyone have a suggestion for what an algorithm like that would look like?
If your input lines are in slope-intercept form, an algorithm is overkill, as there is a direct formula for calculating their point of intersection. It's given on a Wikipedia page, and you can understand it as explained below.
Given the equations of the lines: The x and y coordinates of the point of intersection of two non-vertical lines can easily be found using the following substitutions and rearrangements.
Suppose that two lines have the equations y = ax + c and y = bx + d, where a and b are the slopes (gradients) of the lines and where c and d are the y-intercepts of the lines. At the point where the two lines intersect (if they do), both y coordinates will be the same, hence the following equality:
ax + c = bx + d.
We can rearrange this expression in order to extract the value of x:
ax - bx = d - c, and so
x = (d - c)/(a - b).
To find the y coordinate, all we need to do is substitute the value of x into either one of the two line equations. For example, into the first:
y = (a*(d - c)/(a - b)) + c.
Hence, the point of intersection is {(d - c)/(a - b), (a*(d - c)/(a - b)) + c}.
Note: If a = b then the two lines are parallel. If c ≠ d as well, the lines are different and there is no intersection; otherwise the two lines are identical.
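A direct transcription of that formula into Python might look like this (a sketch; the function name is my own). The lines are given in slope-intercept form as y = a*x + c and y = b*x + d.

def intersect(a, c, b, d):
    # Returns the intersection point of y = a*x + c and y = b*x + d,
    # or None if the lines are parallel or identical (a == b).
    if a == b:
        return None
    x = (d - c) / (a - b)
    y = a * x + c
    return (x, y)

print(intersect(2, 4, 1, 2))   # (-2.0, 0.0) for f(x) = 2x + 4 and g(x) = x + 2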
Given:
ax + b = cx + d
ax = cx + d - b
ax - cx = d - b
x(a - c) = d - b
Therefore, x = (d - b) / (a - c)
In your example, let a = 2, b = 4, c = 1, d = 2
x = (2 - 4) / (2 - 1)
x = -2 / 1
x = -2
General solution. Let
f(x) = a1*x + b1 and g(x) = a2*x + b2
Special cases:
a1 == a2 and b1 == b2 : lines coincide
a1 == a2 and b1 != b2 : lines are parallel, no intersection
General case: a1 != a2
X = (b2 - b1) / (a1 - a2) and Y = (a1*b2 - a2*b1) / (a1 - a2)
I don't remember what cross products are in the context of equations.
One way to solve these is to set them equal to each other, solve for x, then use that value to solve for y:
2x + 4 = x + 2
2x + 2 = x
x = -2
y = f(x)
= g(x)
= x + 2
= -2 + 2
= 0
Solution: (-2, 0)
I have a very complicated Mathematica expression that I'd like to simplify by using a new, possibly dimensionless parameter.
An example of my expression is:
K=a*b*t/((t+f)c*d);
(the actual expression is monstrously large, thousands of characters). I'd like to replace all occurrences of the expression t/(t+f) with p
p=t/(t+f);
The goal here is to find a replacement so that all t's and f's are replaced by p. In this case, the replacement p is a nondimensionalized parameter, so it seems like a good candidate replacement.
I've not been able to figure out how to do this in Mathematica (or whether it's possible). I tried:
eq1= K==a*b*t/((t+f)c*d);
eq2= p==t/(t+f);
Solve[{eq1,eq2},K]
Not surprisingly, this doesn't work. If there were a way to force it to solve for K in terms of p,a,b,c,d, this might work, but I can't figure out how to do that either. Thoughts?
Edit #1 (11/10/11 - 1:30)
[deleted to simplify]
OK, new tack. I've taken p = ton/(ton + toff) and multiplied p by several expressions. I know that p can be completely eliminated. The new expression (in terms of p) is
testEQ = A B p + A^2 B p^2 + (A+B)p^3;
Then I made the substitution for p, and called (normal) FullSimplify, giving me this expression.
testEQ2= (ton (B ton^2 + A^2 B ton (toff + ton) +
A (ton^2 + B (toff + ton)^2)))/(toff + ton)^3;
Finally, I tried all of the suggestions below, except the last (not sure how it works yet!)
Only the Eliminate option worked. So I guess I'll use this method from now on. Thank you.
EQ1 = a1 == (ton (B ton^2 + A^2 B ton (toff + ton) +
A (ton^2 + B (toff + ton)^2)))/(toff + ton)^3;
EQ2 = P1 == ton/(ton + toff);
Eliminate[{EQ1, EQ2}, {ton, toff}]
A B P1 + A^2 B P1^2 + (A + B) P1^3 == a1
I should add, if the goal is to make all substitutions that are possible, leaving the rest, I still don't know how to do that. But it appears that if a substitution can completely eliminate a few variables, Eliminate[] works best.
Have you tried this?
K = a*b*t/((t + f) c*d);
Solve[p == t/(t + f), t]
-> {{t -> -((f p)/(-1 + p))}}
Simplify[K /. %[[1]] ]
-> (a b p)/(c d)
EDIT: Oh, and are you aware of Eliminate?
Eliminate[{eq1, eq2}, {t,f}]
-> a b p == c d K && c != 0 && d != 0
Solve[%, K]
-> {{K -> (a b p)/(c d)}}
EDIT 2: Also, in this simple case, solving for K and t simultaneously seems to do the trick, too:
Solve[{eq1, eq2}, {K, t}]
-> {{K -> (a b p)/(c d), t -> -((f p)/(-1 + p))}}
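As an aside (not part of the original Mathematica answer), the same substitute-and-solve idea is easy to check in Python with SymPy:

import sympy as sp

a, b, c, d, t, f, p = sp.symbols('a b c d t f p')
K = a*b*t/((t + f)*c*d)

t_of_p = sp.solve(sp.Eq(p, t/(t + f)), t)[0]   # t expressed in terms of p and f
print(sp.simplify(K.subs(t, t_of_p)))          # a*b*p/(c*d)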
Something along these lines is discussed in the MathGroup post at
http://forums.wolfram.com/mathgroup/archive/2009/Oct/msg00023.html
(I see it has an apocryphal note that is quite relevant, at least to the author of that post.)
Here is how it might be applied in the example above. For purposes of keeping this self contained I'll repeat the replacement code.
replacementFunction[expr_, rep_, vars_] :=
Module[{num = Numerator[expr], den = Denominator[expr],
hed = Head[expr], base, expon},
If[PolynomialQ[num, vars] &&
PolynomialQ[den, vars] && ! NumberQ[den],
replacementFunction[num, rep, vars]/
replacementFunction[den, rep, vars],
If[hed === Power && Length[expr] == 2,
base = replacementFunction[expr[[1]], rep, vars];
expon = replacementFunction[expr[[2]], rep, vars];
PolynomialReduce[base^expon, rep, vars][[2]],
If[Head[hed] === Symbol &&
MemberQ[Attributes[hed], NumericFunction],
Map[replacementFunction[#, rep, vars] &, expr],
PolynomialReduce[expr, rep, vars][[2]]]]]]
Your example is now as follows. We take the input, and also the replacement. For the latter we make an equivalent polynomial by clearing denominators.
kK = a*b*t/((t + f) c*d);
rep = Numerator[Together[p - t/(t + f)]];
Now we can invoke the replacement. We list the variables we are interested in replacing, treating 'p' as a parameter. This way it will get ordered lower than the others, meaning the replacements will try to remove them in favor of 'p'.
In[127]:= replacementFunction[kK, rep, {t, f}]
Out[127]= (a b p)/(c d)
This approach has a bit of magic in figuring out what should be the listed "variables". Possibly some further tweakage could be done to improve on that. But I believe that, generally, simply not listing the things we want to use as new replacements is the right way to go.
Over the years there have been variants of this idea on MathGroup. It is possible that some others may be better suited to the specific expression(s) you wish to handle.
--- edit ---
The idea behind this is to use PolynomialReduce to do algebraic replacement. That is to say, we do not try pattern matching but instead use a polynomial "canonicalization" method. In general, though, we are not working with polynomial inputs, so we apply this idea recursively, to polynomial (PolynomialQ) arguments inside functions that carry the NumericFunction attribute.
Earlier versions of this idea, along with some more explanation, can be found at the note referenced below, as well as in notes it references (how's that for explanatory recursion?).
http://forums.wolfram.com/mathgroup/archive/2006/Aug/msg00283.html
--- end edit ---
--- edit 2 ---
As observed in the wild, this approach is not always a simplifier. It does algebraic replacement, which involves, under the hood, a notion of "term ordering" (roughly, "which things get replaced by which others?") and thus simple variables may expand to longer expressions.
Another form of term rewriting is syntactic replacement via pattern matching, and other responses discuss using that approach. It has a different drawback, insofar as the generality of patterns to consider might become overwhelming. For example, what does one do with k^2/(w + p^4)^3 when the rule is to replace k/(w + p^4) with q? (Specifically, how do we recognize this as being equivalent to (k/(w + p^4))^2*1/(w + p^4)?)
The upshot is one needs to have an idea of what is desired and what methods might be feasible. This of course is generally problem specific.
One thing that occurs is perhaps you want to find and replace all commonly occurring "complicated" expressions with simpler ones. This is referred to as common subexpression elimination (CSE). In Mathematica this can be done using a function called Experimental`OptimizeExpression[]. Here are several links to MathGroup posts that discuss this.
http://forums.wolfram.com/mathgroup/archive/2009/Jul/msg00138.html
http://forums.wolfram.com/mathgroup/archive/2007/Nov/msg00270.html
http://forums.wolfram.com/mathgroup/archive/2006/Sep/msg00300.html
http://forums.wolfram.com/mathgroup/archive/2005/Jan/msg00387.html
http://forums.wolfram.com/mathgroup/archive/2002/Jan/msg00369.html
Here is an example from one of those notes.
InputForm[Experimental`OptimizeExpression[(3 + 3*a^2 + Sqrt[5 + 6*a + 5*a^2] +
a*(4 + Sqrt[5 + 6*a + 5*a^2]))/6]]
Out[206]//InputForm=
Experimental`OptimizedExpression[Block[{Compile`$1, Compile`$3, Compile`$4,
Compile`$5, Compile`$6}, Compile`$1 = a^2; Compile`$3 = 6*a;
Compile`$4 = 5*Compile`$1; Compile`$5 = 5 + Compile`$3 + Compile`$4;
Compile`$6 = Sqrt[Compile`$5]; (3 + 3*Compile`$1 + Compile`$6 +
a*(4 + Compile`$6))/6]]
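As an aside (not from the original post), SymPy offers the same kind of common subexpression elimination in Python via cse(), if that happens to be the environment you work in:

import sympy as sp

a = sp.Symbol('a')
expr = (3 + 3*a**2 + sp.sqrt(5 + 6*a + 5*a**2) + a*(4 + sp.sqrt(5 + 6*a + 5*a**2)))/6
replacements, reduced = sp.cse(expr)
print(replacements)   # e.g. [(x0, sqrt(5*a**2 + 6*a + 5))]
print(reduced)        # e.g. [(3*a**2 + a*(x0 + 4) + x0 + 3)/6]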
--- end edit 2 ---
Daniel Lichtblau
K = a*b*t/((t+f)c*d);
FullSimplify[ K,
TransformationFunctions -> {(# /. t/(t + f) -> p &), Automatic}]
(a b p) / (c d)
Corrected update to show another method:
EQ1 = a1 == (ton (B ton^2 + A^2 B ton (toff + ton) +
A (ton^2 + B (toff + ton)^2)))/(toff + ton)^3;
f = # /. ton + toff -> ton/p &;
FullSimplify[f@EQ1]
a1 == p (A B + A^2 B p + (A + B) p^2)
I don't know if this is of any value at this point, but hopefully at least it works.