Weird behaviour with GroebnerBasis in v7 - wolfram-mathematica

I came across some weird behaviour when using GroebnerBasis. In m1 below, I used a Greek letter as my variable and in m2, I used a Latin letter. Both of them have no rules associated with them. Why do I get vastly different answers depending on what variable I choose?
Copyable code:
Clear["Global`*"]
g = Module[{x},
   x /. Solve[
      z - x (1 - b -
           b x ((a (3 - 2 a (1 + x)))/(1 - 3 a x + 2 a^2 x^2))) == 0,
      x]][[3]];
m1 = First@GroebnerBasis[\[Kappa] - g, z]
m2 = First@GroebnerBasis[k - g, z]
EDIT:
As pointed out by belisarius, my usage of GroebnerBasis is not entirely correct as it requires a polynomial input, whereas mine is not. This error, introduced by a copy-pasta, went unnoticed until now, as I was getting the answer that I expected when I followed through with the rest of my code using m1 from above. However, I'm not fully convinced that it is an unreasonable usage. Consider the example below:
x = (-b + Sqrt[b^2 - 4 a c])/(2 a);
p = First@GroebnerBasis[k - x, {a, b, c}]; (*get relation or cover for Riemann surface*)
q = First@GroebnerBasis[{D[p, k] == 0, p == 0}, {a, b, c}, k,
   MonomialOrder -> EliminationOrder];
Solve[q==0, b] (*get condition on b for double root or branch point*)
{{b -> -2 Sqrt[a] Sqrt[c]}, {b -> 2 Sqrt[a] Sqrt[c]}}
which is correct. So my interpretation is that it is OK to use GroebnerBasis in such cases, but I'm not all too familiar with the deep theory behind it, so I could be completely wrong here.
P.S. I heard that if you mention GroebnerBasis three times in your post, Daniel Lichtblau will answer your question :)

The bug that was shown by these examples will be fixed in version 9. Offhand I do not know how to evade it in versions 8 and prior. If I recall correctly it was caused by an intermediate numeric overflow in some code that was checking whether a symbolic polynomial coefficient might be zero.
For some purposes it might be suitable to specify more variables and possibly a non-default term order. Also clearing denominators can be helpful at least in cases where that is a valid thing to do. That said, I do not know if these tactics would help in this example.
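A minimal sketch of the denominator-clearing tactic on a toy rational input of my own (whether it helps with the g from the question is untested):
expr = z - t/(1 + t^2);              (* rational in t, not a polynomial *)
poly = Numerator[Together[expr]];    (* -t + z + t^2 z, now a genuine polynomial *)
GroebnerBasis[poly, z]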
I'll look some more at this code but probably not in the near future.
Daniel Lichtblau

This may be related to the fact that Mathematica does not try all variable orders in functions like Simplify. Here is an example:
ClearAll[a, b, c]
expr = (c^4 b^2)/(c^4 b^2 + a^4 b^2 + c^2 a^2 (1 - 2 b^2));
Simplify[expr]
Simplify[expr /. {a -> b, b -> a}]
(b^2 c^4)/(a^4 b^2 + a^2 (1 - 2 b^2) c^2 + b^2 c^4)
(a^2 c^4)/(b^2 c^2 + a^2 (b^2 - c^2)^2)
Adam Strzebonski explained that:
...one can try FullSimplify with all possible orderings of chosen variables. Of course, this multiplies the computation time by Factorial[Length[variables]]...
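A minimal sketch of that brute-force idea (simplifyAllOrders is a made-up name; LeafCount serves as the complexity measure, and Simplify could be swapped for FullSimplify as in the quote):
simplifyAllOrders[expr_, vars_List] := Module[{results},
  results = Table[
    (* relabel the variables according to the permutation, simplify, relabel back *)
    Simplify[expr /. Thread[vars -> perm]] /. Thread[perm -> vars],
    {perm, Permutations[vars]}];
  First@SortBy[results, LeafCount]]   (* keep the structurally smallest result *)

simplifyAllOrders[expr, {a, b, c}]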

Related

How to Implement a Trig Identity Proving Algorithm

How could I implement a program that takes in the two sides of a trig equation (could be generalized to anything but for now I'll leave it at just trig identities) and outputs the steps to transform one side into the other (or transform them both) to show that they are in fact equal? The program will assume that they are equal in the first place. I am quite stumped as to how I might implement an algorithm to do this. My first thought was something to do with graphs, but I couldn't think of anything beyond this. From there, I thought that I should first parse both sides of the equation into trees. For example, (cot x * sin x) / (sin x + cos x) would look like this:
        division
       /        \
      *          +
     / \        / \
  cot   sin  sin   cos
After this, I had two similar ideas, both of which have problems. The first idea was to pick the side with the least number of leaves and try to manipulate it into the other side by using equivalencies that would be represented by "tree regexs." Examples of these "tree regexs" would be csc = 1 / sin or cot = cos / sin (in tree form of course), etc. My second idea would be to pick the side with more leaves and try to find some expression that when multiplied by that expression would equal the other side. Using reciprocals this wouldn't be too bad, however, I would then have to prove that the thing I multiplied by equals 1. Again I am back to this "tree regex" thing.
The major flaw with both of these is in what order/how I could apply these substitutions. Will it just have to be a big mess of if statements, or is there a more elegant solution? Is there actually a graph-based solution that I'm not seeing? What (if any) might be a good algorithm to prove trig identities?
To be clear, I am not talking about the "solve for x" type of problem, such as tan(x)sin(x) = 5, find all values of x, but rather proving that sqrt((1 + sin x) / (1 - sin x)) = sec x + tan x.
This is a simple algorithm for deciding trigonometric identities that can be brought into the form polynomial(sin x, cos x) = 0:
1) Get rid of tan x, cot x, sec x, ..., sin 2x, ... by the obvious substitutions (tan x -> (sin x)/(cos x), ..., sin 2x -> 2 (sin x) (cos x), ...).
2) Transform the identity into a polynomial by squaring (isolated) roots (getting rid of multiple roots in an identity can be tricky, though), multiplying with denominators, and bringing all expanded terms to one side.
3) Replace every occurrence of cos^2 x in the polynomial (cos^3 x = (cos^2 x)(cos x), cos^4 x = (cos^2 x)(cos^2 x), ...) by 1 - sin^2 x and expand the polynomial.
4) Finally a polynomial without cos^2 x is computed. If it is identically 0, the identity is proven; otherwise the identity does not hold.
Your example sqrt((1 + sin x)/(1 - sin x)) = sec x + tan x:
Using the substitutions sec x -> 1/(cos x) and tan x -> (sin x)/(cos x) we get
sqrt((1 + sin x)/(1 - sin x)) = 1/(cos x) + (sin x)/(cos x).
For brevity let us write s instead of sin x and c instead of cos x, which gives us:
sqrt((1 + s)/(1 - s)) = 1/c + s/c
Squaring the equation and multiplying both sides with (1 - s)c^2 we get
(1 + s)c^2 = (1 + s)^2(1 - s).
Expanding the parentheses and bringing everything to one side we get
c^2 + sc^2 + s^3 + s^2 - s - 1 = 0
Substituting c^2 = 1 - s^2 into the polynomial we get
(1 - s^2) + s(1 - s^2) + s^3 + s^2 - s - 1, which expands to 0.
Hence the identity is proven.
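A minimal Mathematica sketch of this procedure (trigIdentityQ is a made-up name; it assumes a single variable x and that any roots have already been squared away, e.g. by testing the squared form of the identity):
trigIdentityQ[lhs_ == rhs_, x_] := Module[{poly, rem},
  (* step 1: rewrite tan, cot, sec, csc; TrigExpand handles multiple angles *)
  poly = TrigExpand[(lhs - rhs) /. {Tan[u_] :> Sin[u]/Cos[u],
       Cot[u_] :> Cos[u]/Sin[u], Sec[u_] :> 1/Cos[u], Csc[u_] :> 1/Sin[u]}];
  (* step 2: clear denominators to get a polynomial in Sin[x] and Cos[x] *)
  poly = Numerator[Together[poly]];
  (* steps 3 and 4: reduce modulo Cos[x]^2 + Sin[x]^2 - 1; a zero remainder proves it *)
  rem = PolynomialReduce[poly, {Cos[x]^2 + Sin[x]^2 - 1}, {Cos[x], Sin[x]}][[2]];
  Expand[rem] === 0]

trigIdentityQ[(1 + Sin[x])/(1 - Sin[x]) == (Sec[x] + Tan[x])^2, x]   (* True *)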
Look out for texts on computer algebra (which I haven't read myself); I'm sure you'll find clever ideas there.
My approach would be a graph-based search, as I doubt that a linear application of transformations will reliably lead to a solution.
Express the whole equation as an expression-tree the way you already started, but including an "equals" node above.
For the search-graph view, take one expression-tree as one search-state. The search-target is a decidable expression-tree like 1=1 or 1=0. When searching (expanding a search-state), create the child states by applying equivalence transformations on your expression (regex-like sounds quite plausible to me). Define an evaluation function that counts the overall complexity of an expression (e.g. number of nodes in the expression-tree). Do a directed search minimizing the evaluation function (expanding the lowest-complexity expression first), thus simplifying the expression until you reach a decidable form.
Depending on the expressions, it's quite possible that an unrestricted search never terminates. I don't know how you'd handle that, maybe by limiting the allowed complexity of expressions to some multiple of the original one. That would reduce the risk of running indefinitely, but leave you with undecided cases.
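A toy version of the directed search just described (the move set, the LeafCount heuristic, and the step limit are arbitrary choices; here a state is the difference lhs - rhs rather than a full equation tree, and the target state is 0):
proveByDirectedSearch[lhs_ == rhs_, maxSteps_: 100] := Catch@Module[
   {moves, neighbors, frontier, visited = {}, state, steps = 0},
   moves = {
     # /. Tan[u_] :> Sin[u]/Cos[u] &,
     # /. Sec[u_] :> 1/Cos[u] &,
     # /. Cos[u_]^2 :> 1 - Sin[u]^2 &,
     Together, Expand, Factor};
   neighbors[e_] := DeleteDuplicates[Through[moves[e]]];
   frontier = {lhs - rhs};
   While[frontier =!= {} && steps < maxSteps,
    steps++;
    frontier = SortBy[frontier, LeafCount];    (* expand the simplest state first *)
    state = First[frontier]; frontier = Rest[frontier];
    If[state === 0, Throw[True]];
    AppendTo[visited, state];
    frontier = Join[frontier, Complement[neighbors[state], visited, frontier]]];
   "undecided"]

proveByDirectedSearch[Tan[x] Cos[x] == Sin[x]]   (* True *)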

Get Mathematica to simplify expression with another equation

I have a very complicated mathematica expression that I'd like to simplify by using a new, possibly dimensionless parameter.
An example of my expression is:
K=a*b*t/((t+f)c*d);
(the actual expression is monstrously large, thousands of characters). I'd like to replace all occurrences of the expression t/(t+f) with p
p=t/(t+f);
The goal here is to find a replacement so that all t's and f's are replaced by p. In this case, the replacement p is a nondimensionalized parameter, so it seems like a good candidate replacement.
I've not been able to figure out how to do this in Mathematica (or if it's possible). I tried:
eq1= K==a*b*t/((t+f)c*d);
eq2= p==t/(t+f);
Solve[{eq1,eq2},K]
Not surprisingly, this doesn't work. If there were a way to force it to solve for K in terms of p,a,b,c,d, this might work, but I can't figure out how to do that either. Thoughts?
Edit #1 (11/10/11 - 1:30)
[deleted to simplify]
OK, new tack. I've taken p=ton/(ton+toff) and multiplied p by several expressions. I know that p can be completely eliminated. The new expression (in terms of p) is
testEQ = A B p + A^2 B p^2 + (A+B)p^3;
Then I made the substitution for p, and called (normal) FullSimplify, giving me this expression.
testEQ2= (ton (B ton^2 + A^2 B ton (toff + ton) +
A (ton^2 + B (toff + ton)^2)))/(toff + ton)^3;
Finally, I tried all of the suggestions below, except the last (not sure how it works yet!)
Only the Eliminate option worked. So I guess I'll try this method from now on. Thank you.
EQ1 = a1 == (ton (B ton^2 + A^2 B ton (toff + ton) +
A (ton^2 + B (toff + ton)^2)))/(toff + ton)^3;
EQ2 = P1 == ton/(ton + toff);
Eliminate[{EQ1, EQ2}, {ton, toff}]
A B P1 + A^2 B P1^2 + (A + B) P1^3 == a1
I should add, if the goal is to make all substitutions that are possible, leaving the rest, I still don't know how to do that. But it appears that if a substitution can completely eliminate a few variables, Eliminate[] works best.
Have you tried this?
K = a*b*t/((t + f) c*d);
Solve[p == t/(t + f), t]
-> {{t -> -((f p)/(-1 + p))}}
Simplify[K /. %[[1]] ]
-> (a b p)/(c d)
EDIT: Oh, and are you aware of Eliminate?
Eliminate[{eq1, eq2}, {t,f}]
-> a b p == c d K && c != 0 && d != 0
Solve[%, K]
-> {{K -> (a b p)/(c d)}}
EDIT 2: Also, in this simple case, solving for K and t simultaneously seems to do the trick, too:
Solve[{eq1, eq2}, {K, t}]
-> {{K -> (a b p)/(c d), t -> -((f p)/(-1 + p))}}
Something along these lines is discussed in the MathGroup post at
http://forums.wolfram.com/mathgroup/archive/2009/Oct/msg00023.html
(I see it has an apocryphal note that is quite relevant, at least to the author of that post.)
Here is how it might be applied in the example above. For purposes of keeping this self contained I'll repeat the replacement code.
replacementFunction[expr_, rep_, vars_] :=
 Module[{num = Numerator[expr], den = Denominator[expr],
   hed = Head[expr], base, expon},
  If[PolynomialQ[num, vars] &&
    PolynomialQ[den, vars] && ! NumberQ[den],
   replacementFunction[num, rep, vars]/
    replacementFunction[den, rep, vars],
   If[hed === Power && Length[expr] == 2,
    base = replacementFunction[expr[[1]], rep, vars];
    expon = replacementFunction[expr[[2]], rep, vars];
    PolynomialReduce[base^expon, rep, vars][[2]],
    If[Head[hed] === Symbol &&
      MemberQ[Attributes[hed], NumericFunction],
     Map[replacementFunction[#, rep, vars] &, expr],
     PolynomialReduce[expr, rep, vars][[2]]]]]]
Your example is now as follows. We take the input, and also the replacement. For the latter we make an equivalent polynomial by clearing denominators.
kK = a*b*t/((t + f) c*d);
rep = Numerator[Together[p - t/(t + f)]];
Now we can invoke the replacement. We list the variables we are interested in replacing, treating 'p' as a parameter. This way it will get ordered lower than the others, meaning the replacements will try to remove them in favor of 'p'.
In[127]:= replacementFunction[kK, rep, {t, f}]
Out[127]= (a b p)/(c d)
This approach has a bit of magic in figuring out what should be the listed "variables". Possibly some further tweakage could be done to improve on that. But I believe that, generally, simply not listing the things we want to use as new replacements is the right way to go.
Over the years there have been variants of this idea on MathGroup. It is possible that some others may be better suited to the specific expression(s) you wish to handle.
--- edit ---
The idea behind this is to use PolynomialReduce to do algebraic replacement. That is to say, we do not try for pattern matching but instead use polynomial "canonicalization" as a method. But in general we're not working with polynomial inputs. So we apply this idea recursively on polynomial arguments inside functions that carry the NumericFunction attribute.
Earlier versions of this idea, along with some more explanation, can be found at the note referenced below, as well as in notes it references (how's that for explanatory recursion?).
http://forums.wolfram.com/mathgroup/archive/2006/Aug/msg00283.html
--- end edit ---
--- edit 2 ---
As observed in the wild, this approach is not always a simplifier. It does algebraic replacement, which involves, under the hood, a notion of "term ordering" (roughly, "which things get replaced by which others?") and thus simple variables may expand to longer expressions.
Another form of term rewriting is syntactic replacement via pattern matching, and other responses discuss using that approach. It has a different drawback, insofar as the generality of patterns to consider might become overwhelming. For example, what does one do with k^2/(w + p^4)^3 when the rule is to replace k/(w + p^4) with q? (Specifically, how do we recognize this as being equivalent to (k/(w + p^4))^2*1/(w + p^4)?)
The upshot is one needs to have an idea of what is desired and what methods might be feasible. This of course is generally problem specific.
One thing that occurs is perhaps you want to find and replace all commonly occurring "complicated" expressions with simpler ones. This is referred to as common subexpression elimination (CSE). In Mathematica this can be done using a function called Experimental`OptimizeExpression[]. Here are several links to MathGroup posts that discuss this.
http://forums.wolfram.com/mathgroup/archive/2009/Jul/msg00138.html
http://forums.wolfram.com/mathgroup/archive/2007/Nov/msg00270.html
http://forums.wolfram.com/mathgroup/archive/2006/Sep/msg00300.html
http://forums.wolfram.com/mathgroup/archive/2005/Jan/msg00387.html
http://forums.wolfram.com/mathgroup/archive/2002/Jan/msg00369.html
Here is an example from one of those notes.
InputForm[Experimental`OptimizeExpression[(3 + 3*a^2 + Sqrt[5 + 6*a + 5*a^2] +
a*(4 + Sqrt[5 + 6*a + 5*a^2]))/6]]
Out[206]//InputForm=
Experimental`OptimizedExpression[Block[{Compile`$1, Compile`$3, Compile`$4,
Compile`$5, Compile`$6}, Compile`$1 = a^2; Compile`$3 = 6*a;
Compile`$4 = 5*Compile`$1; Compile`$5 = 5 + Compile`$3 + Compile`$4;
Compile`$6 = Sqrt[Compile`$5]; (3 + 3*Compile`$1 + Compile`$6 +
a*(4 + Compile`$6))/6]]
--- end edit 2 ---
Daniel Lichtblau
K = a*b*t/((t+f)c*d);
FullSimplify[ K,
TransformationFunctions -> {(# /. t/(t + f) -> p &), Automatic}]
(a b p) / (c d)
Corrected update to show another method:
EQ1 = a1 == (ton (B ton^2 + A^2 B ton (toff + ton) +
A (ton^2 + B (toff + ton)^2)))/(toff + ton)^3;
f = # /. ton + toff -> ton/p &;
FullSimplify[f@EQ1]
a1 == p (A B + A^2 B p + (A + B) p^2)
I don't know if this is of any value at this point, but hopefully at least it works.

Blanks in the denominator of a replacement rule

Mathematica 7.0 seems to dislike having blanks in the denominator. Can anyone explain why this is?
Input:
ClearAll["Global`*"];
(*Without blanks:*)
a^2 / b^2 /. a^2 / b^2 -> d
(*with:*)
a^2 / b^2 /. a^c_ / b^c_ -> d
(*Without blanks:*)
a^2 / b^2 /. (a / b)^2 -> d
(*With:*)
a^2 / b^2 /. (a / b)^c_ -> d
(*Without blanks:*)
a^2 / b^2 /. a^2 * b^(-2) -> d
(*With:*)
a^2 / b^2 /. a^c_ * b^(-c_) -> d
Output:
d
a^2/b^2
d
a^2/b^2
d
a^2/b^2
I'm trying to work this for a more complicated problem. The substitution that I want to make is in an expression of the form:
(a ^ c_. * Coefficient1_. / b ^ c_. / Coefficient2_.) + (a ^ d_. * Coefficient3_. / b ^ d_. / Coefficient4_.)
Where the coefficients may involve sums, products, and quotients of variables that may or may not include a and b.
Possibly relevant:
The FullForm shows that the power in the denominator is stored as a product of -1 and c:
Input:
FullForm[a^2/b^2]
FullForm[a^c_/b^c_]
FullForm[ (a / b)^2 ]
FullForm[(a / b)^c_ ]
FullForm[a^2 * b^(-2) ]
FullForm[a^c_ * b^(-c_)]
Output:
Times[Power[a,2],Power[b,-2]]
Times[Power[a,Pattern[c,Blank[]]],Power[b,Times[-1,Pattern[c,Blank[]]]]]
Times[Power[a,2],Power[b,-2]]
Power[Times[a,Power[b,-1]],Pattern[c,Blank[]]]
Times[Power[a,2],Power[b,-2]]
Times[Power[a,Pattern[c,Blank[]]],Power[b,Times[-1,Pattern[c,Blank[]]]]]
Edit: Bolded change to my actual case.
Generally speaking you should try to avoid doing mathematical manipulation using ReplaceAll which is a structural tool.
As opposed to FullForm, I will use TreeForm to illustrate these expressions:
a^2/b^2 // TreeForm
a^c_/b^c_ // TreeForm
You can see that while these expressions are mathematically similar, they are structurally quite different. You may be able to hammer out a functioning replacement rule for a specific case, but you will usually be better off using the Formula Manipulation (or Polynomial Algebra) tools that Mathematica provides.
If you carefully describe the mathematical manipulation you wish to achieve, I will attempt to provide a better solution.
As belisarius humorously points out in a comment, trying to force Mathematica to "see" or display expressions the way you do is often largely futile. This is one of the reasons that the opening statement above is true.
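For the a^2/b^2 -> d replacement specifically, one algebraic route (in the spirit of the PolynomialReduce approach shown earlier; the input expression here is just a toy) could be:
expr = a^4/b^4 + 3 a^2/b^2;   (* toy input containing powers of a^2/b^2 *)
rel = a^2 - d b^2;            (* encodes d == a^2/b^2 as a polynomial relation *)
tog = Together[expr];
Cancel[PolynomialReduce[Numerator[tog], {rel}, {a, b}][[2]]/Denominator[tog]]
(* 3 d + d^2 *)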
I agree with everything Mr.Wizard wrote. Having said that, a replacement rule that would work in this specific case would be:
a^2/b^2 /. (Times[Power[a,c_],Power[b,e_]]/; e == -c )-> d
or
a^2/b^2 /. (a^c_ b^e_/; e == -c )-> d
Note that I added the constraint /; e == -c so that I effectively have a -c_ without actually creating the corresponding Times[-1,c_] expression.
The primary reason a^2 / b^2 /. a^c_ / b^c_ -> d doesn't work is that you are using Rule (->), not RuleDelayed (:>). The second reason, as you found with FullForm, is that a/b is interpreted as Times[a, Power[b,-1]], so it is best to not use division. Making these changes,
a^2 / b^2 /. a^n_ b^m_ :> {n,m}
returns {2, -2}. Usually, you'll want to have a default value, so that
a / b^2 /. a^n_. b^m_. :> {n,m}
returns {1,-2}.
Edit: ensuring that the two exponents match requires adding a Condition (/;):
a^2 / b^2 /. a^n_. b^m_. /; n == -m :> n
Note: by using _. this will also catch a/b.

How do I force Mathematica to include user defined functions in Simplify and FullSimplify?

Let's say I have a relation r^2 = x^2 + y^2. Now suppose after a calculation I get a complicated output in x and y, which could in theory be simplified a lot by using the above relation. How do I tell Mathematica to do that?
I'm referring to situations where a replacement rule like x^2+y^2 -> r^2, or Simplify/FullSimplify with Assumptions, won't work, e.g. if the output is x/y + y/x = (x^2+y^2)/(xy) = r^2/(xy).
Simplification works really well with built in functions but not with user defined functions! So essentially I would like my functions to be treated like the built in functions!
I believe you are looking for TransformationFunctions.
f = # /. x^2 + y^2 -> r^2 &;
Simplify[x/y + y/x, TransformationFunctions -> {Automatic, f}]
(* Out= r^2/(x y) *)
In the example you give
(x/y + y/x // Together) /. {x^2 + y^2 -> r^2}
==> r^2/(x y)
works. But I've learned that on many occasions replacements like this don't work. A tip I once got was to replace this replacement with one that has a simpler LHS, like x^2 -> r^2-y^2 (or even x -> Sqrt[r^2-y^2] if you know that the values of x and y allow this).
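For instance (a toy example), the x -> Sqrt[r^2 - y^2] form fires on powers that the structural x^2 + y^2 -> r^2 rule misses:
(x^4 + 2 x^2 y^2 + y^4) /. x -> Sqrt[r^2 - y^2] // Simplify
(* r^4, whereas the rule x^2 + y^2 -> r^2 leaves this expression untouched *)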

How to store a bunch of equations / constants to solve for any element's equation or numerical value

Let's say the problems are fairly simple, something a pre-degree theoretical physics student would solve. And the student does the hardest part of the task, the functional reading: parsing the linguistically free-form text to get the input and output variables and the input variable values.
For example: a problem about kinematic equations, where there are variables {a,d,t,va,vf} and a few functions that describe how they depend on each other. So, using skills acquired from playing with fitting blocks where they fit, you play with the equations to get the output variable you were looking for.
In any case, there are exactly 2 possible outputs you might want, and they are (with working examples):
1) Equation for that variable
Physics[have_, find_] := Solve[Flatten[{
d == vf * t - (a * t^2) /2, (* etc. *)
have }], find]
Physics[True, {d}]
{{d -> (1/2)*(2*t*vf - a*t^2)}}
2) Exact or general numerical value for that variable
Physics[have_, find_] := Solve[Flatten[{
d == vf * t - (a * t^2) /2, (* etc. *)
have }], find]
Physics[{t == 9.7, vf == -104.98, a == -9.8}, {d}]
{{d->-557.265}}
I am not sure that I am approaching the problem correctly.
I think that I would probably prefer an approach like
In[1]:= Physics[find_, have_:{}] := Solve[
{d == vf*t - (a*t^2)/2 (* , etc *)} /. have, find]
In[2]:= Physics[d]
Out[2]= {{d -> 1/2 (-a t^2 + 2 t vf)}}
In[2]:= Physics[d, {t -> 9.7, vf -> -104.98, a -> -9.8}]
Out[2]= {{d -> -557.265}}
Where the have variables are given as a list of replacement rules.
As an aside, in these types of physics problems, a nice thing to do is define your physical constants like
N[g] = -9.8;
which produces an NValue for g. Then
N[tf] = 9.7;N[vf] = -104.98;
Physics[d, {t -> tf, vf -> vf, a -> g}]
%//N
produces
{{d->1/2 (-g tf^2+2 tf vf)}}
{{d->-557.265}}
Let me show some advantages of Simon's approach:
You are at least approaching this problem reasonably. I see a fine general-purpose function and I see you're getting results, which is what matters primarily. There is no 'correct' solution, since there might be a large range of acceptable solutions. In some scenarios some solutions may be preferred over others, for instance because of performance, while it might be the other way around in other scenarios.
The only slight problem I have with your example is the dubious parameter name 'have'.
Why do you think this would be a wrong approach?
