Intro rule for ∀x∈S (Isabelle) - set

Usually when you have a goal beginning ∀x you can write something like
show "∀x. Px"
proof (rule allI)
but this doesn't seem to work when you have something beginning ∀x∈S. For example, I tried
show "∀x∈S. P x"
proof (rule allI)
which gives the message
Failed to apply initial proof method
This surprised me, since I thought ∀x∈S. P x was probably syntactic sugar for ∀x. x∈S --> P x, in which case it should work.
This is similar to a question I previously asked
Intro rule for "∀r>0" in Isabelle
but I think that this time the answer might be different.

It is not just syntax; it is its own constant called Ball, and the introduction rule is called ballI.
If you ctrl-click onto the "∀x∈A" syntax, it should take you straight to the definition, where you can see what it is called. Additionally, you can use the ‘Find theorems’ panel in Isabelle to find lemmas related to it.
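For example, a minimal sketch of how the proof might then open (assuming P and S are available in the surrounding context):

show "∀x∈S. P x"
proof (rule ballI)
  fix x
  assume "x ∈ S"
  then show "P x" sorry
qed

After rule ballI the goal becomes "⋀x. x ∈ S ⟹ P x", so you get the membership assumption for free inside the block.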

Related

Reporting *why* a query failed in Prolog in a systematic way

I'm looking for an approach, pattern, or built-in feature in Prolog that I can use to return why a set of predicates failed, at least as far as the predicates in the database are concerned. I'm trying to be able to say more than "That is false" when a user poses a query to the system.
For example, let's say I have two predicates. blue/1 is true if something is blue, and dog/1 is true if something is a dog:
blue(X) :- ...
dog(X) :- ...
If I pose the following query to Prolog and foo is a dog, but not blue, Prolog would normally just return "false":
?- blue(foo), dog(foo).
false.
What I want is to find out why the conjunction of predicates was not true, even if it is via an out-of-band call such as:
?- getReasonForFailure(X).
X = not(blue(foo))
I'm OK if the predicates have to be written in a certain way, I'm just looking for any approaches people have used.
The way I've done this to date, with some success, is by writing the predicates in a stylized way and using some helper predicates to find out the reason after the fact. For example:
blue(X) :-
    recordFailureReason(not(blue(X))),
    isBlue(X).
And then implementing recordFailureReason/1 such that it always remembers the "reason" that happened deepest in the stack. If a query fails, whatever failure happened the deepest is recorded as the "best" reason for failure. That heuristic works surprisingly well for many cases, but does require careful building of the predicates to work well.
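For illustration, a minimal sketch of how such helpers might look in SWI-Prolog (the frame-inspection built-ins are SWI-specific; recordFailureReason/1 and getReasonForFailure/1 follow the question's naming, and the reset helper is added for completeness):

:- dynamic best_reason/2.   % best_reason(Depth, Reason)

% Always succeeds; keeps only the reason recorded deepest in the call stack.
recordFailureReason(Reason) :-
    prolog_current_frame(Frame),
    prolog_frame_attribute(Frame, level, Depth),
    (   best_reason(D0, _), D0 >= Depth
    ->  true                                % an equally deep reason is already stored
    ;   retractall(best_reason(_, _)),
        assertz(best_reason(Depth, Reason))
    ).

% Call before each top-level query to forget the previous reason.
resetFailureReason :-
    retractall(best_reason(_, _)).

getReasonForFailure(Reason) :-
    best_reason(_, Reason).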
Any ideas? I'm willing to look outside of Prolog if there are predicate logic systems designed for this kind of analysis.
As long as you remain within the pure monotonic subset of Prolog, you may consider generalizations as explanations. To take your example, the following generalizations might be thinkable depending on your precise definition of blue/1 and dog/1.
?- blue(foo), * dog(foo).
false.
In this generalization, the entire goal dog(foo) was removed. The prefix * is actually a predicate, defined like this:
:- op(950, fy, *).
*(_).
Informally, the above can be read as: not only does this query fail, even this generalized query fails. There is no blue foo at all (provided there is none). Or maybe there is a blue foo, but no blue dog at all...
?- blue(_X/*foo*/), dog(_X/*foo*/).
false.
Now we have generalized the program by replacing foo with the new variable _X. In this manner the sharing between the two goals is retained.
There are more such generalizations possible like introducing dif/2.
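For instance (still assuming the same blue/1 and dog/1), one might ask whether there is any blue dog at all other than foo:

?- dif(X, foo), blue(X), dog(X).
false.

Informally: there is not even a blue dog distinct from foo.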
This technique can be applied both manually and automatically. For more, there is a collection of example sessions. Also see Declarative program development in Prolog with GUPU.
Some thoughts:
Why did the logic program fail: The answer to "why" is of course "because there is no variable assignment that fulfills the constraints given by the Prolog program".
This is evidently rather unhelpful, but it is exactly the case of the "blue dog": there is no such thing (at least in the problem you model).
In fact the only acceptable answer to the blue dog problem is obtained when the system goes into full theorem-proving mode and outputs:
blue(X) <=> ~dog(X)
or maybe just
dog(X) => ~blue(X)
or maybe just
blue(X) => ~dog(X)
depending on assumptions. "There is no evidence of blue dogs". Which is true, as that's what the program states. So a "why" in this question is a demand to rewrite the program...
There may not be a good answer: "Why is there no x such that x² < 0" is ill-posed and may have as answer "just because" or "because you are restricting yourself to the reals" or "because that 0 in the equation is just wrong" ... so it depends very much.
To make a "why" more helpful, you will have to qualify this "why" somehow. which may be done by structuring the program and extending the query so that additional information collecting during proof tree construction is bubbling up, but you will have to decide beforehand what information that is:
query(Sought, [Info1, Info2, Info3])
And this query will always succeed (for query/2, "success" no longer means "success in finding a solution to the modeled problem" but "success in finishing the computation").
Variable Sought will be the reified answer of the actual query you want answered, i.e. one of the atoms true or false (and maybe unknown if you have had enough of two-valued logic), and Info1, Info2, Info3 will be additional details to help you answer the "why" in case Sought is false.
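A minimal sketch of this reified pattern for the blue-dog example (blue_dog_query/2 is a made-up name; here Infos just carries the failing conjunct):

% Always succeeds; Sought reifies the outcome of (blue(foo), dog(foo)).
blue_dog_query(Sought, Infos) :-
    (   \+ blue(foo) -> Sought = false, Infos = [not(blue(foo))]
    ;   \+ dog(foo)  -> Sought = false, Infos = [not(dog(foo))]
    ;   Sought = true,  Infos = []
    ).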
Note that much of the time, the desire to ask "why" is down to the mix-up between the two distinct failures: "failure in finding a solution to the modeled problem" and "failure in finishing the computation". For example, you want to apply maplist/3 to two lists and expect this to work but erroneously the two lists are of different length: You will get false - but it will be a false from computation (in this case, due to a bug), not a false from modeling. Being heavy-handed with assertion/1 may help here, but this is ugly in its own way.
In fact, compare with imperative or functional languages w/o logic programming parts: In the event of failure (maybe an exception?), what would be a corresponding "why"? It is unclear.
Addendum
This is a great question but the more I reflect on it, the more I think it can only be answered in a task-specific way: You must structure your logic program to be why-able, and you must decide what kind of information why should actually return. It will be something task-specific: something about missing information, "if only this or that were true" indications, where "this or that" is chosen from a dedicated set of predicates. This is of course expected, as there is no general way to make imperative or functional programs explain their results (or lack thereof) either.
I have looked a bit for papers on this (including IEEE Xplore and ACM Library), and have just found:
Reasoning about Explanations for Negative Query Answers in DL-Lite which is actually for Description Logics and uses abductive reasoning.
WhyNot: Debugging Failed Queries in Large Knowledge Bases which discusses a tool for Cyc.
I also took a random look at the documentation for Flora-2 but they basically seem to say "use the debugger". But debugging is just debugging, not explaining.
There must be more.

Representing syntactically different terms in TPTP

I am having a look at first-order logic theorem provers such as Vampire and E-Prover, and the TPTP syntax seems to be the way to go. I am more familiar with logic programming syntaxes such as Answer Set Programming and Prolog, and although I have tried referring to a detailed description of the TPTP syntax, I still don't seem to grasp how to properly distinguish between interpreted and uninterpreted functors (and I might be using the terminology wrong).
Essentially, I am trying to prove a theorem by showing that no model acts as a counter-example. My first difficulty was that I did not expect the following logic program to be satisfiable.
fof(all_foo, axiom, ![X] : (pred(X) => (X = foo))).
fof(exists_bar, axiom, pred(bar)).
It is indeed satisfiable because nothing prevents bar from being equal to foo. So a first solution would be to insist that these two terms are distinct and we obtain the following unsatisfiable program.
fof(all_foo, axiom, ![X] : (pred(X) => (X = foo))).
fof(exists_bar, axiom, pred(bar)).
fof(foo_not_bar, axiom, foo != bar).
The Technical Report clarifies that different double-quoted strings are indeed different objects, so another solution is to put quotes here and there, so as to obtain the following unsatisfiable program.
fof(all_foo, axiom, ![X] : (pred(X) => (X = "foo"))).
fof(exists_bar, axiom, pred("bar")).
I am happy not to have to manually specify the inequality, as that would obviously not scale to a more realistic scenario. Moving closer to my real situation, I actually have to handle composed terms, and the following program is unfortunately satisfiable.
fof(all_foo, axiom, ![X] : (pred(X) => (X = f("foo")))).
fof(exists_bar, axiom, pred(g("bar"))).
I guess f("foo") is not a term but the function f applied to the object "foo". So it could potentially coincide with function g. Although a manual specification that f and g never coincide does the trick, the following program is unsatisfiable, I feel like I'm doing it wrong. And it probably wouldn't scale to my real setting with plenty of terms all to be interpreted as distinct when they are syntactically distinct.
fof(all_foo, axiom, ![X] : (pred(X) => (X = f("foo")))).
fof(exists_bar, axiom, pred(g("bar"))).
fof(f_not_g, axiom, ![X, Y] : f(X) != g(Y)).
I have tried throwing single quotes around, but I didn't find the proper way to do it.
How do I make syntactically different (composed) terms and test for syntactical equality?
Subsidiary question: the following program is satisfiable, because the automated theorem prover understands f as a function rather than an uninterpreted functor.
fof(exists_f_g, axiom, (?[I] : ((f(foo) = f(I)) & pred(g(I))))).
fof(not_g_foo, axiom, ~pred(g(foo))).
To make it unsatisfiable, I need to manually specify that f is injective. What would be the natural way to obtain this behaviour without specifying injectivity of all functors that occur in my program?
fof(exists_f_g, axiom, (?[I] : ((f(foo) = f(I)) & pred(g(I))))).
fof(not_g_foo, axiom, ~pred(g(foo))).
fof(f_injective, axiom, ![X,Y] : (f(X) = f(Y) => (X = Y))).
First of all, let me point you to the Syntax BNF of TPTP. In principle, you have Prolog terms with some predefined infix/prefix operators of appropriate precedences. This means that variables are written in upper case and constants in lower case. Also as in Prolog, escaping with single quotes allows us to write a constant starting with a capital letter, i.e. 'X'. I have never seen double-quoted atoms so far, so you might want to look up the instructions of the prover on how to interpret them.
But even though the syntax is Prolog-ish, automated theorem proving is a different kind of beast. There is no closed world assumption nor are different constants assumed to be different - that's why you cannot find a proof for:
fof(c1, conjecture, a=b ).
and neither for:
fof(c1, conjecture, ~(a=b) ).
So if you want to have syntactic dis-equality, you need to axiomatize it. Now, assuming that a differs from b trivially shows that they are different, so let's claim something slightly less trivial instead: "Suppose there are two different constants a and b; then there exists some X which is not b."
fof(a1, axiom, ~(a=b)).
fof(c1, conjecture, ?[X]: ~(X=b)).
Since functions in first-order logic are not necessarily injective, you also can't get around adding that assumption.
Please also note the different roles of input formulas: so far you have only stated axioms and no conjectures, i.e. you ask the prover to show your axiom set to be inconsistent. Some provers might even give up because they use resolution refinements (e.g. set of support) which restrict resolution between axioms[1]. In any case, you need to be aware that the formula you are trying to prove is of the form A1 ∧ ... ∧ An → C1 ∨ ... ∨ Cm, where the A are axioms and the C are conjectures.[2]
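For example, the earlier composed-term case could be recast as axioms plus a conjecture (a sketch reusing the question's symbols; a prover should then report a theorem rather than mere unsatisfiability of the axiom set):

fof(all_foo, axiom, ![X] : (pred(X) => (X = f(foo)))).
fof(f_not_g, axiom, ![X, Y] : f(X) != g(Y)).
fof(no_bar, conjecture, ~pred(g(bar))).

Here pred(g(bar)) would force g(bar) = f(foo), contradicting f_not_g, so the conjecture follows.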
I hope that at least the syntax is a bit clearer now - unfortunately, the answer to the questions is rather that automated theorem provers don't make the same assumptions as you expect, so you have to axiomatize them. These axiomatizations are also often inefficient, and you might get better performance from specialized tools.
[1] As you already noticed, advanced provers like Vampire or E Prover tell you about (counter-)satisfiability instead.
[2] A resolution-based theorem prover will first negate that formula and perform a CNF transformation, but even though most TPTP-accepting provers are resolution-based, that's not a requirement.

Negated possibilities in Prolog

This is a somewhat silly example but I'm trying to keep the concept pretty basic for better understanding. Say I have the following unary relations:
person(steve).
person(joe).
fruit(apples).
fruit(pears).
fruit(mangos).
And the following binary relations:
eats(steve, apples).
eats(steve, pears).
eats(joe, mangos).
I know that querying eats(steve, F). will return all the fruit that Steve eats (apples and pears). My problem is that I want to get all of the fruits that Steve doesn't eat. I know that \+ eats(steve, F) alone will just return "no", because F can't be bound to an infinite number of possibilities; however, I would like it to return mangos, as that's the only existing fruit possibility that Steve doesn't eat. Is there a way to write this that would produce the desired result?
I tried this but no luck here either: \+eats(steve,F), fruit(F).
If a better title is appropriate for this question I would appreciate any input.
Prolog provides only a very crude form of negation; in fact, (\+)/1 simply means "not provable at this point in time of the execution". So you have to take into account the exact moment when (\+)/1 is executed. In your particular case, there is an easy way out:
fruit(F), \+eats(steve,F).
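With the facts above, this reordered query now enumerates exactly the missing fruit:

?- fruit(F), \+ eats(steve, F).
F = mangos.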
In the general case, however, this is far from being fixed easily. Think of \+ X = Y, see this answer.
Another issue is that negation, even if used properly, will introduce non-monotonic properties into your program: by adding further facts for eats/2, less might be deduced. So unless you really want this (as in this example, where it does make sense), avoid the construct.

Prover9 hints not being used

I'm running some Lattice proofs through Prover9/Mace4. I'm using a non-standard axiomatization of the lattice join operation, from which it is not immediately obvious that the join is commutative, associative and idempotent. (I can get Prover9 to prove that it is -- eventually.)
I know that Prover9 looks for those properties to help it search faster. I've tried putting those properties in the Additional Input section (I'm running the GUI version 0.5), with
formulas(hints).
x v y = y v x.
% etc
end_of_list.
Q1: Is this the way to get it to look at hints?
Q2: Is there a good place to look for help on speeding up proofs/tips and tricks?
(If I can get this to work, there are further operators I'd like to give hints for.)
For ref, my axioms are (bi-lattice with only one primitive operation):
x ^ y = y ^ x. % lattice meet
x ^ x = x.
(x ^ y) ^ z = x ^ (y ^ z).
x ^ (x v y) = x. % standard absorption for join
x ^ z = x & y ^ z = y <-> z ^ (x v y) = (x v y).
% non-standard absorption
(EDIT after DougS's answer posted.)
Wow! Thank you. Orders-of-magnitude speed-up.
Some follow-on q's if I might ...
Q3: The generated hints seem to include all of the initial axioms plus the goal -- is that what I should expect? (Presumably hence your comment about not needing all of the hints. I've certainly experienced that removing axioms makes a proof go faster.)
Q4: What if I add hints that (as it turns out) aren't justified by the axioms? Are they ignored?
Q5: What if I add hints that contradict the axioms? (From a few trials, this doesn't make Prover9 mis-infer.)
Q6: For a proof (attempt) that times out, is there any way to retrieve the formulas inferred so far and recycle them for hints to speed up the next attempt? (I have a feeling in my waters that this would drag in some sort of fallacy, despite what I've seen re Q3 and Q4.)
Q3: Yes, you should expect the axiom(s) and the goal(s) to be included as hints. Both of them can serve as useful. I more meant that something like "$F" as a hint doesn't seem to add much to me, and that hints also lead you down a particular path first, which can make it more difficult or easier to find shorter proofs. However, if you just want a faster proof, then using all of the suggested hints probably comes as the way to go.
Q4: Hints do NOT need to come as deducible from the axioms.
Q5: Hints can contradict the axioms, sure.
The manual says "A derived clause matches a hint if it subsumes the hint.
...
In short, the default value of the hints_part parameter says to select clauses that match hints (lightest first) whenever any are available."
"Clause C subsumes clause D if the variables of C can be instantiated in such a way that it becomes a subclause of D. If C subsumes D, then D can be discarded, because it is weaker than or equivalent to C. (There are some proof procedures that require retention of subsumed clauses.)"
So let's say that you put the following as a hint:
1. x ^ ((y^z) V v) = x V y
Then if Prover9 generates
2. x ^ ((x^x) V v) = x V x
clause 2 will get selected whenever it's available, since it matches the hint.
This explanation isn't complete, because I'm not exactly sure how "subclause" gets defined.
Still, instead of generating formulas with the original axioms and whatever procedure Prover9 uses to generate formulas, formulas that match hints will get put at the front of the list for generating formulas. This can speed the program up, but from what I've read about some other problems, it seems that many difficult problems basically wouldn't have gotten proved automatically if it weren't for things like hints, weighting, and other strategies.
Q6: I'm not sure which formulas you're referring to. In Prover9, of course, you can click on "show output" and look through the dozens of formulas it has generated. You could also set lemmas you think of as useful as additional goals, and then use Prooftrans to generate hints from those lemmas to use as hints on the next run. Or you could use the steps of the proofs of those lemmas as hints for the next run. There's no fallacy in terms of reasoning if you use steps of those proofs as hints, or the hints suggested by Prooftrans, because hints don't actually add any assumptions to the initial set. The hint mechanism works, at least according to my somewhat rough understanding, by changing the search procedure to use a clause that matches a hint once we have something which matches a hint (that is, the program has to deduce something that matches a hint, and only then can what matches the hint get used).
Q1: Yes, that should work for hints. But, to better test it, take the proof you have, and then use the "reformat" option and check the "hints" part. Then copy and paste all of those hints into your "formulas(hints)." list. (well you don't necessarily need them all... and using only some of them might lead to a shorter proof if it exists, but I digress). Then run the proof again, and if it runs like my proofs in propositional calculi with hints do, you'll get the proof "in an instant".
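Outside the GUI, the same hint extraction works with the LADR command-line tools, along these lines (file names here are hypothetical):

prover9 -f lattice.in > lattice.out
prooftrans hints -f lattice.out > lattice.hints

The formulas(hints). ... end_of_list. block in lattice.hints can then be pasted into the Additional Input tab.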
Just in case... you'll need to click on the "additional input" tab, and put your hint list there.
Q2: For strategies, the Prover9 manual has useful information on weighting, hints, and semantic guidance (I haven't tried semantic guidance). You might also want to see Bob Veroff's page (some of his work got done in OTTER, but the programs are similar). There also exists useful information in Larry Wos's notebooks, as well as in Dr. Wos's published work, though all of Wos's recent work has gotten done using OTTER (again, the programs are similar).

Prolog negation and logical negation

Assume we have the following program:
a(tom).
v(pat).
and the query (which returns false):
\+ a(X), v(X).
When tracing, I can see that X becomes instantiated to tom, the predicate a(tom) succeeds, therefore \+ a(tom) fails.
I have read in some tutorials that the not (\+) in Prolog is just a test and does not cause instantiation.
Could someone please clarify the above point for me? As I can see the instantiation.
I understand there are differences between not (negation as failure) and logical negation. Could you refer me to a good article that explains in which cases they behave the same and when they behave differently?
Great question.
Short answer: you stumbled upon "floundering".
The problem is that the implementation of the operator \+ only works
when applied to a literal containing no variables, i.e., a ground
literal. It is not able to generate bindings for variables, but only
test whether subgoals succeed or fail. So to guarantee reasonable
answers to queries to programs containing negation, the negation
operator must be allowed to apply only to ground literals. If it is
applied to a nonground literal, the program is said to flounder.
link
If you invert the query
v(X), \+ a(X).
You'll get the right answer. Some implementations or meta-interpreters detect floundering goals and delay them until all their variables are ground.
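With the two facts above, the difference shows directly at the top level:

?- \+ a(X), v(X).
false.

?- v(X), \+ a(X).
X = pat.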
About your point 1): you see the instantiation inside the NAF tree. What happens there shouldn't affect variables that are outside (in this case, in v(X)). Prolog often acts in this naive way to avoid inefficiencies. In theory it should just report an error instead of instantiating the variable.
2) This is my favourite article on the topic: Nonmonotonic Logic Programming.
WRT point 2, Wikipedia article seems a good starting point.
You already experienced that understanding NAF can be difficult. Part of this could be because (logical) negation is inherently difficult to define even in contexts simpler than predicate calculus (see for instance Russell's paradox), and part because the powerful variables of Prolog are doomed to hold the actual counterexamples of proofs that fail under negation. See if you can understand the actual library definition of forall/2 (please read the documentation; it's concise and interesting), which is the preferred way to run a failure-driven loop:
%% forall(+Condition, +Action)
%
% True if Action is true for all variable bindings for which Condition
% is true.
forall(Cond, Action) :-
    \+ (Cond, \+ Action).
I remember the first time I saw it, it looked like magic...
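For example, checking that a property holds for every solution of a goal:

?- forall(member(X, [1,2,3]), X > 0).
true.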
edit: about a tutorial, I found, while 'spelunking' my links collection, a good site by J.R. Fisher. It's full of interesting stuff; just a pity it's a bit terse in the explanations, requiring the student to work out the answers through frequent exercises. See paragraph 2.5, devoted to negation by failure. I think you could also enjoy section 3, How Prolog Works.
