I have the following problem. Defining two simple functions in Mathematica, say foo[x_] := x and bar[y_] := y, I would expect the expression foo[x]^(-bar[y]) - (1/foo[x])^(bar[y]) to evaluate to zero. However, I find (oddly) that Mathematica insists on keeping the whole thing symbolic, not willing to simplify at all. I've tried many things to overcome this behaviour, but they all failed. Any help much appreciated :)
You have to tell Mathematica that x>0:
Simplify[foo[x]^(-bar[y]) - (1/foo[x])^(bar[y]), x > 0]
0
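An alternative, if you know the base is positive and real, is PowerExpand, which applies transformations like (1/x)^y -> x^(-y) that in general are valid only under exactly those assumptions (so use it with that caveat in mind):

PowerExpand[foo[x]^(-bar[y]) - (1/foo[x])^(bar[y])]

0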
Years ago in college, I tinkered with some Prolog, but that's long forgotten, so I count as a complete beginner again (humbling!)
Anyway, I was playing with some of Bruce Tate's code and came up with what I thought was a sudoku solver for the full (9x9) game. But when I run it, it generates some very odd output:
Solution = [_#3(2..3),_#24(2:7),_#45(2..3:5:7),_#66(2..3:8),_#87(2..3:5..6:8),4,_#121(2:5..6),1,9,6,8,_#194(2..5:7:9),_#215(1..3:9),_#236(2..3:5:9),_#257(1..2:5:9),_#278(2:4..5),_#299(4:7),_#320(5:7),_#341(1..2),_#362(2:4),_#383(2:4..5:9),_#404(1..2:9),_#425(2:5..6:9),7,3,_#472(4:6),8,4,1,_#532(2:8),_#553(2:8),7,3,9,5,6,7,5,_#689(6:8),_#710(4:8..9),_#731(4:6:8..9),_#752(6:8..9),1,2,3,_#828(2..3),9,_#862(2..3:6),5,1,_#909(2:6),7,8,4,8,_#990(2:4:7),1,6,_#1037(2..5:9),_#1058(2:5:9),_#1079(4..5),_#1100(3..4:7),_#1121(5:7),5,_#1163(4:6..7),_#1184(4:6..7),_#1205(1:3..4:8),_#1226(3..4:8),_#1247(1:8),_#1268(4:6:8),9,2,9,3,_#1341(2:4:6),7,_#1375(2:4..5:8),_#1396(1..2:5:8),_#1417(4..6:8),_#1438(4:6),_#1459(1:5)]
yes
I was expecting ... well, frankly, I was half expecting total failure :) but I thought that only numbers could show up in this output. What's it trying to tell me with those #-tagged things, and the stuff in parens that looks like ranges? Is it trying to say there are many possible solutions and it's telling me them all at once (seems unlikely, as it would be very unhelpful if so), or is this some kind of error state (in which case, why does it compile my code and say "yes" to this query)?
Any insight gratefully received!
I think it's the result of a set of constraints that isn't strong enough to determine a solution without search. For instance, _#3(2..3) could mean that a variable named _#3 can take values in the range 2..3. You could try to label the variables, with something like
..., labeling([], Solution).
Syntax details depend on your solver, of course...
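To make the difference concrete, here is a minimal sketch. The _#N(...) output style suggests GNU Prolog's FD solver (which Bruce Tate's book uses); there, constraining a variable without labeling leaves a residual domain, while fd_labeling forces the search to enumerate concrete values:

| ?- fd_domain(X, 2, 3).
X = _#3(2..3)

| ?- fd_domain(X, 2, 3), fd_labeling([X]).
X = 2 ? ;
X = 3

So at the end of your solver you would call fd_labeling(Solution) (or your system's equivalent) before printing the result.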
My question is about an apparent failure of Mathematica's FullSimplify to simplify an easy algebraic expression.
This is the expression that I ask Mathematica to evaluate:
FullSimplify[Re[a^(I*b)] - Re[a^(-I*b)], Element[a, Reals] && a > 0 && Element[b, Reals]]
This should give the result 0. Instead, Mathematica only restates my expression:
Re[a^(-I b) (-1 + a^(2 I b))]
Replacing a and b by actual numbers solves the problem.
What could be the cause of this? How can I use FullSimplify (and Simplify, Expand, Integrate, and so on) effectively?
I read that the order of variables could play a role here, but I couldn't wrap my head around it.
I tried to check for similar problems on the website as well, but I couldn't find any answer that could explain this phenomenon.
Thanks in advance for your support.
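One thing worth trying (a hedged suggestion, not tested on your exact setup): rewrite the Re[...] parts into explicitly real form with ComplexExpand, which assumes its variables are real, and then simplify under the positivity assumption:

Simplify[ComplexExpand[Re[a^(I*b)] - Re[a^(-I*b)]], a > 0]

0

The point is that Re[a^(I b)] equals Cos[b Log[a]] only when a > 0; ComplexExpand makes that structure explicit so Simplify can see that the two terms cancel.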
I'm running some Lattice proofs through Prover9/Mace4. I'm using a non-standard axiomatization of the lattice join operation, from which it is not immediately obvious that the join is commutative, associative and idempotent. (I can get Prover9 to prove that it is -- eventually.)
I know that Prover9 looks for those properties to help it search faster. I've tried putting those properties in the Additional Input section (I'm running the GUI, version 0.5), with
formulas(hints).
x v y = y v x.
% etc
end_of_list.
Q1: Is this the way to get it to look at hints?
Q2: Is there a good place to look for help on speeding up proofs/tips and tricks?
(If I can get this to work, there are further operators I'd like to give hints for.)
For ref, my axioms are (bi-lattice with only one primitive operation):
x ^ y = y ^ x. % lattice meet
x ^ x = x.
(x ^ y) ^ z = x ^ (y ^ z).
x ^ (x v y) = x. % standard absorption for join
x ^ z = x & y ^ z = y <-> z ^ (x v y) = (x v y).
% non-standard absorption
(EDIT after DougS's answer was posted.)
Wow! Thank you. Orders-of-magnitude speed-up.
Some follow-on q's if I might ...
Q3: The generated hints seem to include all of the initial axioms plus the goal -- is that what I should expect? (Presumably hence your comment about not needing all of the hints. I've certainly experienced that removing axioms makes a proof go faster.)
Q4: What if I add hints that (as it turns out) aren't justified by the axioms? Are they ignored?
Q5: What if I add hints that contradict the axioms? (From a few trials, this doesn't make Prover9 mis-infer.)
Q6: For a proof (attempt) that times out, is there any way to retrieve the formulas inferred so far and recycle them for hints to speed up the next attempt? (I have a feeling in my waters that this would drag in some sort of fallacy, despite what I've seen re Q3 and Q4.)
Q3: Yes, you should expect the axiom(s) and the goal(s) to be included as hints; both can be useful. What I meant more is that a hint like "$F" doesn't seem to add much, and that hints lead the search down a particular path first, which can make it easier or harder to find shorter proofs. However, if you just want a faster proof, then using all of the suggested hints is probably the way to go.
Q4: Hints do NOT need to be deducible from the axioms.
Q5: Hints can contradict the axioms, sure.
The manual says "A derived clause matches a hint if it subsumes the hint.
...
In short, the default value of the hints_part parameter says to select clauses that match hints (lightest first) whenever any are available."
"Clause C subsumes clause D if the variables of C can be instantiated in such a way that it becomes a subclause of D. If C subsumes D, then D can be discarded, because it is weaker than or equivalent to C. (There are some proof procedures that require retention of subsumed clauses.)"
So let's say that you put

1. x ^ ((y ^ z) v u) = x v y

as a hint (writing the join as v, as in your axioms, and using u as a variable). Then if Prover9 generates

2. x ^ ((x ^ x) v u) = x v x

that clause will get selected whenever it's available, since it matches the hint.
This explanation isn't complete, because I'm not exactly sure how "subclause" gets defined.
Still, the effect is that formulas matching hints get moved to the front of the queue Prover9 draws from when generating new formulas, instead of being processed in the default order. This can speed the program up, and from what I've read about other problems, many difficult ones basically wouldn't have been proved automatically if it weren't for things like hints, weighting, and other strategies.
Q6: I'm not sure which formulas you're referring to. In Prover9, of course, you can click on "show output" and look through the dozens of formulas it has generated. You could also set lemmas you think of as useful as additional goals, and then use Prooftrans to generate hints from the proofs of those lemmas for the next run; or you could use the steps of those proofs directly as hints. There's no fallacy in terms of reasoning if you do this, because hints don't actually add any assumptions to the initial set. The hint mechanism works, at least on my somewhat rough understanding, by changing the search procedure to prefer a clause that matches a hint once one has been derived (that is, the program has to deduce something that matches a hint before the hint can have any effect).
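For what it's worth, from the command line the same round trip looks roughly like this (a sketch; the GUI's "reformat" option, see Q1 below, does the same job):

prover9 -f lattice.in > lattice.out
prooftrans hints < lattice.out > lattice.hints

Here lattice.in stands for your input file (the name is just for illustration), and lattice.hints will contain a formulas(hints) ... end_of_list. block you can paste into the next run's additional input.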
Q1: Yes, that should work for hints. But to test it better, take the proof you have, use the "reformat" option, and check the "hints" part. Then copy and paste all of those hints into your "formulas(hints)." list (well, you don't necessarily need them all... and using only some of them might lead to a shorter proof, if one exists, but I digress). Then run the proof again, and if it runs like my proofs in propositional calculi with hints do, you'll get the proof "in an instant".
Just in case... you'll need to click on the "additional input" tab, and put your hint list there.
Q2: For strategies, the Prover9 manual has useful information on weighting, hints, and semantic guidance (I haven't tried semantic guidance). You might also want to see Bob Veroff's page (some of his work was done in OTTER, but the programs are similar). There is also useful information in Larry Wos's notebooks, as well as in Dr. Wos's published work, though all of Wos's recent work has been done with OTTER (again, the programs are similar).
I've just started working with Mathematica (5.0), and while the manual has been helpful, I'm not entirely sure I've been using (Full)Simplify correctly. I am using the program to check my work on a derived transform for changing between reference frames, which consisted of multiplying a trio of relatively large square matrices.
A colleague and I each did the work by hand, separately, to make sure there were no mistakes. We hoped to get a third check from the program, which seemed simple enough to ask for. The hand calculations took some time due to the matrix size, but we came to the same conclusions. The fact that we had the same answer made me skeptical when the program produced different results.
I've checked and double checked my inputs.
I am definitely using . (Dot) to multiply the matrices, for correct matrix multiplication.
FullSimplify made no difference.
Neither have combinations of TrigReduce and algebraic expansion before simplifying.
I've taken entries from the final matrix and tried to simplify them in isolation, to no avail, so the problem isn't due to the use of matrices.
I've also tried to multiply the first two matrices, simplify, and then multiply that with the third matrix; however, this produced the same results as before.
I thought Simplify automatically descended into all levels of the expression, so I didn't need to worry about mapping it; but even where we would expect zeros in the output matrix there are leftover terms, and where we would expect terms there are only near misses, plus a host of sine and cosine terms that do not reduce.
Does anyone regularly use a particular technique with Simplify to get better results than Simplify alone?
If there are assumptions on parameter ranges you will want to feed them to Simplify. The following simple examples will indicate why this might be useful.
In[218]:= Simplify[a*Sqrt[1 - x^2] - Sqrt[a^2 - a^2*x^2]]
Out[218]= a Sqrt[1 - x^2] - Sqrt[-a^2 (-1 + x^2)]
In[219]:= Simplify[a*Sqrt[1 - x^2] - Sqrt[a^2 - a^2*x^2],
Assumptions -> a > 0]
Out[219]= 0
Assuming this and other responses miss the mark, if you could provide an example that in some way shows the possibly bad behavior, that would be very helpful. Disguise it howsoever necessary in order to hide proprietary features: bleach out watermarks, file down registration numbers, maybe dress it in a moustache.
Daniel Lichtblau
Wolfram Research
As you didn't give many details to chew on, I can only give you a few tips:
Mma 5 is pretty old. The current version is 8. If you have access to someone with version 8, you might ask them to try it to see whether that makes a difference. You could also try WolframAlpha online (http://www.wolframalpha.com/), which also understands some (all?) Mma syntax.
Have you tried comparing your own result and Mma's numerically? Generate a Table of differences for various parameter values, or use Plot. If the differences are negligible (use Chop to cut off small residuals), the results are probably equivalent.
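For example, something along these lines (a sketch: myResult and mmaResult stand in for your hand-derived matrix and Mathematica's, and th is whatever parameter they depend on; I use Random[Real, ...] rather than the newer RandomReal since you're on 5.0):

diffs = Table[
  Chop[N[(myResult - mmaResult) /. th -> Random[Real, {0, 2*Pi}]]],
  {20}];
Union[Flatten[diffs]]

If the two matrices really agree, the last line should return {0}.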
Cheers -- Sjoerd
Well, I know it might sound a bit strange, but yes, my question is: "What is a unification algorithm?"
Well, I am trying to develop an application in F# to act like Prolog. It should take a series of facts and process them when queries are made.
I was advised to get started by implementing a good unification algorithm, but I don't have a clue what that is.
Please refer to this question if you want to get a bit deeper to what I want to do.
Thank you very much and Merry Christmas.
If you have two expressions containing variables, then a unification algorithm tries to match the two expressions and gives you an assignment for the variables that makes the two expressions the same.
For example, if you represented expressions in F#:
type Expr =
  | Var of string               // Represents a variable
  | Call of string * Expr list  // Call named function with arguments
And had two expressions like this:
Call("foo", [ Var("x"), Call("bar", []) ])
Call("foo", [ Call("woo", [ Var("z") ], Call("bar", []) ])
Then the unification algorithm should give you an assignment:
"x" -> Call("woo", [ Var("z") ]
This means that if you replace all occurrences of the variable "x" in the two expressions, the results of the two replacements will be the same expression. If the expressions called different functions (e.g. Call("foo", ...) and Call("bar", ...)), then the algorithm would tell you that they are not unifiable.
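In case it helps to see the idea in code, here is a minimal sketch of syntactic unification over the Expr type above (a naive substitution-as-Map version with an occurs check; apply, occurs and unify are just illustrative names, not a standard API):

// Apply substitution s to e, following chains of variable bindings.
let rec apply (s: Map<string, Expr>) e =
    match e with
    | Var v ->
        match Map.tryFind v s with
        | Some t -> apply s t
        | None -> e
    | Call (f, args) -> Call (f, List.map (apply s) args)

// Occurs check: does variable v occur in e (under s)? Prevents cyclic bindings.
let rec occurs s v e =
    match apply s e with
    | Var u -> u = v
    | Call (_, args) -> List.exists (occurs s v) args

// Unify a and b under substitution s; None means "not unifiable".
let rec unify s (a, b) =
    match apply s a, apply s b with
    | Var u, Var v when u = v -> Some s
    | Var v, t | t, Var v ->
        if occurs s v t then None else Some (Map.add v t s)
    | Call (f, fargs), Call (g, gargs)
        when f = g && List.length fargs = List.length gargs ->
        List.fold (fun acc pair -> Option.bind (fun s2 -> unify s2 pair) acc)
                  (Some s) (List.zip fargs gargs)
    | _ -> None

Running it on the two expressions above, unify Map.empty (e1, e2) (with e1 and e2 bound to those expressions), should produce Some with the single binding "x" -> Call("woo", [ Var("z") ]).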
There is also some explanation on Wikipedia, and if you search the internet you'll surely find some useful descriptions (and perhaps even an implementation in some functional language similar to F#).
I found Baader and Snyder's work to be most informative. In particular, they describe several unification algorithms (including Martelli and Montanari's near-linear algorithm using union-find), and describe both syntactic unification and various kinds of semantic unification.
Once you have unification, you'll also need backtracking. Kiselyov/Shan/Friedman's LogicT framework will help here.
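If you want a flavour of the same idea in F# without pulling in a framework (a sketch; Goal is a hypothetical name building on the unify sketch above): lazy sequences already give you cheap backtracking, since each goal can map a substitution to the stream of substitutions that satisfy it.

// A goal takes the current substitution and yields every way to extend it.
type Goal = Map<string, Expr> -> seq<Map<string, Expr>>

// Conjunction: feed each solution of g1 into g2.
let (&&&) (g1: Goal) (g2: Goal) : Goal =
    fun s -> g1 s |> Seq.collect g2

// Disjunction: try g1's solutions, then g2's (the backtracking point).
let (|||) (g1: Goal) (g2: Goal) : Goal =
    fun s -> Seq.append (g1 s) (g2 s)

// A unification goal built from the unify function sketched earlier.
let (===) (a: Expr) (b: Expr) : Goal =
    fun s ->
        match unify s (a, b) with
        | Some s2 -> Seq.singleton s2
        | None -> Seq.empty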
Obviously, destructive unification would be much more efficient than a purely functional one, but much less F-sharpish as well. If it's performance you're after, you will probably end up implementing a subset of the WAM anyway:
https://en.wikipedia.org/wiki/Warren_Abstract_Machine
And this could probably help: Andre Marien, Bart Demoen: A new Scheme for Unification in WAM.