Prove Theorem with Groebner Basis - computational-geometry

I'm trying to prove some theorems using Gröbner bases (as described in Cox, Little and O'Shea).
The book gives proving Pappus's theorem with this methodology as an exercise, but I really can't make it work. I've tried using Sage, Mathematica and Singular, but the Gröbner basis computation doesn't terminate.
Any idea what I can do? Has anybody else done this exercise before? Thanks.
This is the Singular code:
ring R= (0,u1,u2,u3,u4,u5,u6,u7),(y,x1,x2,x3,x4,x5,x6,x7),dp;
poly h1=(u3 - u5)*(u4 - u6) - (u5 - u7)*(u6 - x1);
poly h2=-(u1 - u4)*u3 + (u1 - x3)*x2;
poly h3=-(u5 - x2)*(u6 - x3) + u5*u6;
poly h4=-(u2 - u4)*u3 + (u2 - x5)*x4;
poly h5=-(u7 - x4)*(x1 - x5) + u7*x1;
poly h6=-(u2 - u6)*u5 + (u2 - x7)*x6;
poly h7=-(u1 - x1)*u7 - (u7 - x6)*(x1 - x7);
poly g=(x2 - x4)*(x3 - x5) - (x4 - x6)*(x5 - x7);
poly g2=1-y*g;
ideal V=h1,h2,h3,h4,h5,h6,h7,g2;
std(V);
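This is not a substitute for the Gröbner computation, but it can help to sanity-check that the conclusion polynomial g really does vanish when the hypotheses hold. Here is a quick exact-arithmetic check of Pappus's theorem on a concrete configuration in Python; the coordinates and helper names are my own, not from the book:

```python
from fractions import Fraction as F

def line(p, q):
    # Homogeneous coefficients (a, b, c) of the line a*x + b*y + c = 0 through p and q.
    (x1, y1), (x2, y2) = p, q
    return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

def meet(l1, l2):
    # Affine intersection point of two non-parallel lines.
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    d = a1 * b2 - a2 * b1
    return (F(b1 * c2 - b2 * c1, d), F(a2 * c1 - a1 * c2, d))

def collinear(p, q, r):
    # Zero determinant of the 3x3 matrix of homogenized points.
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    return x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2) == 0

# Pappus: A, B, C on one line and a, b, c on another imply that the
# three "cross" intersections X, Y, Z are collinear.
A, B, C = (0, 0), (1, 0), (3, 0)
a, b, c = (0, 1), (2, 1), (5, 1)
X = meet(line(A, b), line(a, B))
Y = meet(line(A, c), line(a, C))
Z = meet(line(B, c), line(b, C))
```

With exact rationals there is no numerical doubt: the determinant comes out exactly zero for any configuration satisfying the hypotheses, which at least confirms the statement the polynomials are meant to encode.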


How to Implement a Trig Identity Proving Algorithm

How could I implement a program that takes the two sides of a trig equation (it could be generalized to anything, but for now I'll leave it at just trig identities) and outputs the steps to transform one side into the other (or transform them both) to show that they are in fact equal? The program will assume that they are equal in the first place. I am quite stumped as to how I might implement an algorithm to do this. My first thought was something to do with graphs, but I couldn't think of anything beyond that. From there, I thought that I should first parse both sides of the equation into trees. For example, (cot x * sin x) / (sin x + cos x) would look like this:
        division
       /        \
      *          +
     / \        / \
  cot   sin  sin   cos
After this, I had two similar ideas, both of which have problems. The first idea was to pick the side with the smaller number of leaves and try to manipulate it into the other side by using equivalences that would be represented by "tree regexes." Examples of these "tree regexes" would be csc = 1 / sin or cot = cos / sin (in tree form, of course), etc. My second idea was to pick the side with more leaves and try to find some expression that, when multiplied by it, would equal the other side. Using reciprocals this wouldn't be too bad; however, I would then have to prove that the thing I multiplied by equals 1. Again I am back to this "tree regex" thing.
The major flaw with both of these is the order in which, and the way in which, I could apply these substitutions. Will it just have to be a big mess of if statements, or is there a more elegant solution? Is there actually a graph-based solution that I'm not seeing? What (if any) might be a good algorithm to prove trig identities?
To be clear I am not talking about the "solve for x" type problem such as tan(x)sin(x) = 5, find all values of x but rather prove that sqrt((1 + sin x) / (1 - sin x)) = sec x + tan x
This is a simple algorithm for deciding trigonometric identities that can be brought into the form polynomial(sin x, cos x) = 0:
1. Get rid of tan x, cot x, sec x, ..., sin 2x, ... by the obvious substitutions (tan x -> (sin x)/(cos x), ..., sin 2x -> 2 (sin x)(cos x), ...).
2. Transform the identity into a polynomial by squaring (isolated) roots (getting rid of multiple roots in an identity can be tricky, though), multiplying by denominators and bringing all expanded terms to one side.
3. Replace every cos^2 x in the polynomial (cos^3 x = (cos^2 x)(cos x), cos^4 x = (cos^2 x)(cos^2 x), ...) by 1 - sin^2 x and expand the polynomial.
4. Finally a polynomial without cos^2 x (i.e. at most linear in cos x) is computed. If it is identically 0, the identity is proven; otherwise the identity does not hold.
Your example sqrt((1 + sin x)/(1 - sin x)) = sec x + tan x:
Using the substitutions sec x -> 1/(cos x) and tan x -> (sin x)/(cos x) we get
sqrt((1 + sin x)/(1 - sin x)) = 1/(cos x) + (sin x)/(cos x).
For brevity let us write s instead of sin x and c instead of cos x, which gives us:
sqrt((1 + s)/(1 - s)) = 1/c + s/c
Squaring the equation and multiplying both sides by (1 - s)c^2 we get
(1 + s)c^2 = (1 + s)^2(1 - s).
Expanding the parentheses and bringing everything to one side we get
c^2 + sc^2 + s^3 + s^2 - s - 1 = 0
Substituting c^2 = 1 - s^2 into the polynomial we get
(1 - s^2) + s(1 - s^2) + s^3 + s^2 - s - 1, which expands to 0.
Hence the identity is proven.
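The cos^2 x elimination and the final zero test can be sketched in a few lines of Python. The sparse representation (a dict mapping (s-exponent, c-exponent) to a coefficient) is my own choice, not part of the answer:

```python
def reduce_cos_squared(poly):
    """Rewrite every c^2 factor as 1 - s^2 until no c-exponent exceeds 1.

    poly: dict mapping (i, j) -> coefficient, representing sum of coeff * s^i * c^j.
    """
    poly = dict(poly)
    while True:
        term = next(((i, j) for (i, j) in poly if j >= 2), None)
        if term is None:
            # Drop cancelled terms; what remains is at most linear in c.
            return {k: v for k, v in poly.items() if v != 0}
        i, j = term
        coeff = poly.pop(term)
        # coeff * s^i * c^j  =  coeff * s^i * c^(j-2) * (1 - s^2)
        for key, dc in (((i, j - 2), coeff), ((i + 2, j - 2), -coeff)):
            poly[key] = poly.get(key, 0) + dc

def is_identity(poly):
    """The identity holds iff the reduced polynomial is identically zero."""
    return reduce_cos_squared(poly) == {}
```

Running it on the worked example c^2 + sc^2 + s^3 + s^2 - s - 1 reduces everything to zero, confirming the identity; a non-identity like s^2 + c^2 leaves a nonzero remainder.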
Look out for texts on computer algebra (which I haven't read); I'm sure you'll find clever ideas there.
My approach would be a graph-based search, as I doubt that a linear application of transformations will reliably lead to a solution.
Express the whole equation as an expression-tree the way you already started, but including an "equals" node above.
For the search-graph view, take one expression-tree as one search-state. The search-target is a decidable expression-tree like 1=1 or 1=0. When searching (expanding a search-state), create the child states by applying equivalence transformations on your expression (regex-like sounds quite plausible to me). Define an evaluation function that counts the overall complexity of an expression (e.g. number of nodes in the expression-tree). Do a directed search minimizing the evaluation function (expanding the lowest-complexity expression first), thus simplifying the expression until you reach a decidable form.
Depending on the expressions, it's quite possible that an unrestricted search never terminates. I don't know how you'd handle that, maybe by limiting the allowed complexity of expressions to some multiple of the original one. That would reduce the risk of running indefinitely, but leave you with undecided cases.
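As a toy sketch of the directed search described above: expressions are plain strings, the rewrite rules are illustrative placeholders of mine (not a real term-rewriting system), and the score is simply expression length:

```python
import heapq

# Illustrative one-directional rewrite rules (placeholders, not a full rule set).
RULES = [
    ("sec(x)", "1/cos(x)"),
    ("tan(x)", "sin(x)/cos(x)"),
    ("sin(x)^2+cos(x)^2", "1"),
    ("1/cos(x)+sin(x)/cos(x)", "(1+sin(x))/cos(x)"),
]

def neighbors(expr):
    # Apply each rule at every occurrence, yielding all single-step rewrites.
    for lhs, rhs in RULES:
        start = 0
        while True:
            i = expr.find(lhs, start)
            if i < 0:
                break
            yield expr[:i] + rhs + expr[i + len(lhs):]
            start = i + 1

def prove(start, goal, limit=10000):
    # Best-first search expanding the lowest-complexity expression first.
    seen = {start}
    heap = [(len(start), start)]
    while heap and limit > 0:
        limit -= 1
        _, expr = heapq.heappop(heap)
        if expr == goal:
            return True
        for nxt in neighbors(expr):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(heap, (len(nxt), nxt))
    return False   # search space exhausted or step limit reached
```

The `limit` parameter is one crude way to handle the non-termination risk mentioned above: the search gives up after a fixed number of expansions, leaving the case undecided.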

How to solve this equation for solving "Finding duplicate in integer array"

I was looking at the problem and the discussion here: Easy interview question got harder: given numbers 1..100, find the missing number(s)
One of the users provided a solution using the following equations.
k1 + k2 = x
k1^2 + k2^2 = y
Substituting k1 = x - k2 into the second equation gives (x - k2)^2 + k2^2 = y
I am trying to solve this equation further and come up with a C program to solve the problem of finding duplicates.
In spite of spending a lot of time, I couldn't solve this equation to get k1 or k2 on one side. I always ended up with k1 or k2 on both sides of the equation.
Any help is appreciated.
Expand the equation
(x - k2)^2 + k2^2 = y
and get
x^2 - 2xk2 + 2k2^2 = y
or
2k2^2 - 2xk2 + x^2 - y = 0
Now use the formula for solving the quadratic equation az^2 + bz + c = 0, which is z = (-b +/- sqrt(b^2 - 4ac))/(2a). In our case z = k2, with a = 2, b = -2x and c = x^2 - y. So
k2 = (2x +/- sqrt(4x^2 - 8(x^2 - y))) / 4
or
k2 = (x +/- sqrt(x^2 - 2(x^2 - y))) / 2
= (x +/- sqrt(2y - x^2)) / 2
and you can put
k2 = (x + sqrt(2y - x^2)) / 2
k1 = (x - sqrt(2y - x^2)) / 2.
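Putting the closed form to work, here is a small Python sketch for the variant with two missing numbers from 1..n (the function name and interface are my own; the asker wanted C, but the arithmetic is identical):

```python
import math

def find_two_missing(arr, n):
    """Return the two numbers missing from arr, which holds 1..n minus two values."""
    # x = k1 + k2 and y = k1^2 + k2^2, from the known sums over 1..n.
    x = n * (n + 1) // 2 - sum(arr)
    y = n * (n + 1) * (2 * n + 1) // 6 - sum(v * v for v in arr)
    # k2 = (x + sqrt(2y - x^2)) / 2 and k1 = (x - sqrt(2y - x^2)) / 2.
    d = math.isqrt(2 * y - x * x)
    return (x - d) // 2, (x + d) // 2
```

Because 2y - x^2 is a perfect square whenever the input really is 1..n with two values removed, integer square root suffices and no floating point is needed.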

Mathematica integration

Hi guys, sorry in advance about this.
Let's say I want to convolve two functions (f and g), a Gaussian with a Breit-Wigner:
f[x_] := 1/(Sqrt[2 \[Pi]] \[Sigma])Exp[-(1/2) ((x - \[Mu])/\[Sigma])^2];
g[x_] := 1/\[Pi] (\[Gamma]/((x - \[Mu])^2 + \[Gamma]^2));
One way is to use Convolve like:
Convolve[f[x],g[x],x,y];
But that gives:
(\[Gamma] Convolve[E^(-((x - \[Mu])^2/(2 \[Sigma]^2))),1/(\[Gamma]^2 + (x - \[Mu])^2), x, y])/(Sqrt[2] \[Pi]^(3/2) \[Sigma])
,which means it couldn't do the convolution.
I then tried the integration (the definition of the convolution):
Integrate[f[x]*g[y - x], {x, 0, y}, Assumptions -> {x > 0, y > 0}]
But again, it couldn't integrate. I know that there are functions that can't be integrated analytically, but it seems to me that whenever I go into convolution, I find another function that can't be integrated.
Is numerical integration the only way to do convolution in Mathematica (besides the simple functions in the examples), or am I doing something wrong?
My target is to convolve a Crystal Ball function with a Breit-Wigner. The CB is something like:
Piecewise[{
  {norm*Exp[-(1/2) ((x - \[Mu])/\[Sigma])^2], (x - \[Mu])/\[Sigma] > -\[Alpha]},
  {norm*(n/Abs[\[Alpha]])^n*Exp[-(1/2) \[Alpha]^2]*
    ((n/Abs[\[Alpha]] - Abs[\[Alpha]]) - (x - \[Mu])/\[Sigma])^-n,
   (x - \[Mu])/\[Sigma] <= -\[Alpha]}}]
I've done this in C++, but I thought I'd try it in Mathematica and use it to fit some data. So please tell me whether I have to write a numerical integration routine in Mathematica, or whether there's more I can get out of analytic integration.
Thank you,
Adrian
I simplified your functions a little bit (it may look like a minor change, but it's huge in spirit). In this case I have set \[Mu] to zero:
\[Mu] = 0;
Now we have:
f[x_] := 1/(Sqrt[2 \[Pi]] \[Sigma]) Exp[-(1/2) ((x)/\[Sigma])^2];
g[x_] := 1/\[Pi] (\[Gamma]/((x)^2 + \[Gamma]^2));
Asking Mathematica to Convolve:
Convolve[f[x], g[x], x, y]
-((I E^(-((y + I \[Gamma])^2/(2 \[Sigma]^2))) (E^((2 I y \[Gamma])/\[Sigma]^2) \[Pi] Erfi[((y - I \[Gamma]) Sqrt[1/\[Sigma]^2])/Sqrt[2]] - \[Pi] Erfi[((y + I \[Gamma]) Sqrt[1/\[Sigma]^2])/Sqrt[2]] - Log[-y - I \[Gamma]] - E^((2 I y \[Gamma])/\[Sigma]^2) Log[y - I \[Gamma]] + E^((2 I y \[Gamma])/\[Sigma]^2) Log[-y + I \[Gamma]] + Log[y + I \[Gamma]]))/(2 Sqrt[2] \[Pi]^(3/2) \[Sigma]))
Although this is not precisely what you asked for, it shows that if your function were a tiny bit simpler, Mathematica would be able to do the integration. In the case of your question, unless we know some more information about \[Mu], I don't think the result of Convolve has a closed form. You can probably ask the math.stackexchange.com guys about your integral and see if someone comes up with a closed form.
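If no closed form turns up, a brute-force numerical convolution is easy to sketch. This pure-Python midpoint rule (my own stand-in for what NIntegrate would do in Mathematica, with arbitrary grid choices) evaluates the Gaussian-Breit-Wigner convolution, i.e. the Voigt profile:

```python
import math

def gaussian(x, sigma=1.0, mu=0.0):
    # Normalized Gaussian density.
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

def breit_wigner(x, gamma=1.0, mu=0.0):
    # Normalized Breit-Wigner (Cauchy/Lorentzian) density.
    return gamma / (math.pi * ((x - mu) ** 2 + gamma ** 2))

def convolve_at(y, f, g, lo=-40.0, hi=40.0, n=4001):
    """Midpoint-rule approximation of (f * g)(y) = integral of f(t) g(y - t) dt."""
    h = (hi - lo) / n
    return h * sum(f(lo + (k + 0.5) * h) * g(y - (lo + (k + 0.5) * h))
                   for k in range(n))
```

The truncation window and step count here are rough guesses; the Breit-Wigner's heavy tails mean the window must be generous. For fitting, Mathematica's NIntegrate inside the model function plays the same role.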

Mathematica 8.0, obvious simplification missed, why?

I apologize beforehand if there’s an obvious answer, I’m not a user of Mathematica but I’m working on a borrowed laptop and that’s what I have available for the moment.
For some reason Simplify and FullSimplify are missing obvious simplifications, for instance:
Simplify[1/2 (2/5 (x - y)^2 + 2/3 z)]
Yields:
1/2 (2/5 (x - y)^2 + (2 z)/3)
For some reason, it doesn't get rid of the 1/2 factor, try it yourself!
Of course I can do it manually but I have much bigger expressions with the same problem.
Am I missing something?
PS: This laptop has Mathematica 8.0
EDIT: FullSimplify works for the previous example but it doesn't for
FullSimplify[1/2 (2 (x - y)^2 + 2/5 (y - z)^2)]
FullSimplify works for me:
In[693]:= Simplify[1/2 (2/5 (x - y)^2 + 2/3 z)]
Out[693]= 1/2 (2/5 (x - y)^2 + (2 z)/3)
In[694]:= FullSimplify[1/2 (2/5 (x - y)^2 + 2/3 z)]
Out[694]= 1/5 (x - y)^2 + z/3
In[695]:= $Version
Out[695]= "8.0 for Mac OS X x86 (64-bit) (October 5, 2011)"
I don't know why Simplify misses this case, but FullSimplify helps out here:
FullSimplify[1/2 (2/5 (x - y)^2 + 2/3 z)]
gives:
1/5 (x - y)^2 + z/3
Sometimes Collect can be more appropriate:
In[1]:= Collect[1/2 (2/5 (x - y)^2 + 2/3 z), {z}]
Out[1]= 1/5 (x - y)^2 + z/3
Edit
In[2]:= Collect[1/2 (2 (x - y)^2 + 2/5 (y - z)^2), {x - y, y - z}]
Out[2]= (x - y)^2 + 1/5 (y - z)^2
In this specific case Verbeia's approach using Distribute seems to be the simplest way to get what you want; however, Collect[expr, list] can be adapted to generic cases by choosing the list appropriately. In Mathematica there are many functions which may help in various cases. Though Simplify and FullSimplify could be a bit smarter, they can do quite a lot. For a nice demonstration of what one may expect from them in general, take a closer look at Simplifying Some Algebraic Expressions Using Mathematica.
For your second example, Distribute works:
Distribute[1/2 (2 (x - y)^2 + 2/5 (y - z)^2)]
results in
(x - y)^2 + 1/5 (y - z)^2
which is what I assume you want.

Weird behaviour with GroebnerBasis in v7

I came across some weird behaviour when using GroebnerBasis. In m1 below, I used a Greek letter as my variable and in m2, I used a Latin letter. Both of them have no rules associated with them. Why do I get vastly different answers depending on what variable I choose?
Copyable code:
Clear["Global`*"]
g = Module[{x},
x /. Solve[
z - x (1 - b -
b x ( (a (3 - 2 a (1 + x)))/(1 - 3 a x + 2 a^2 x^2))) == 0,
x]][[3]];
m1 = First@GroebnerBasis[\[Kappa] - g, z]
m2 = First@GroebnerBasis[k - g, z]
EDIT:
As pointed out by belisarius, my usage of GroebnerBasis is not entirely correct, as it requires polynomial input, whereas mine is not polynomial. This error, introduced by a copy-paste, went unnoticed until now, as I was getting the answer that I expected when I followed through with the rest of my code using m1 from above. However, I'm not fully convinced that it is an unreasonable usage. Consider the example below:
x = (-b + Sqrt[b^2 - 4 a c])/(2 a);
p = First@GroebnerBasis[k - x, {a, b, c}]; (*get relation or cover for Riemann surface*)
q = First@GroebnerBasis[{D[p, k] == 0, p == 0}, {a, b, c}, k,
    MonomialOrder -> EliminationOrder];
Solve[q==0, b] (*get condition on b for double root or branch point*)
{{b -> -2 Sqrt[a] Sqrt[c]}, {b -> 2 Sqrt[a] Sqrt[c]}}
which is correct. So my interpretation is that it is OK to use GroebnerBasis in such cases, but I'm not all too familiar with the deep theory behind it, so I could be completely wrong here.
P.S. I heard that if you mention GroebnerBasis three times in your post, Daniel Lichtblau will answer your question :)
The bug that was shown by these examples will be fixed in version 9. Offhand I do not know how to evade it in versions 8 and prior. If I recall correctly it was caused by an intermediate numeric overflow in some code that was checking whether a symbolic polynomial coefficient might be zero.
For some purposes it might be suitable to specify more variables and possibly a non-default term order. Also clearing denominators can be helpful at least in cases where that is a valid thing to do. That said, I do not know if these tactics would help in this example.
I'll look some more at this code but probably not in the near future.
Daniel Lichtblau
This may be related to the fact that Mathematica does not try all variable orders in functions like Simplify. Here is an example:
ClearAll[a, b, c]
expr = (c^4 b^2)/(c^4 b^2 + a^4 b^2 + c^2 a^2 (1 - 2 b^2));
Simplify[expr]
Simplify[expr /. {a -> b, b -> a}]
(b^2 c^4)/(a^4 b^2 + a^2 (1 - 2 b^2) c^2 + b^2 c^4)
(a^2 c^4)/(b^2 c^2 + a^2 (b^2 - c^2)^2)
Adam Strzebonski explained that:
...one can try FullSimplify with all possible orderings of chosen variables. Of course, this multiplies the computation time by Factorial[Length[variables]]...
