Where to find resources/information about proof control in Prolog

As part of an assignment I've been asked to check if proofs in natural deduction are either correct or incorrect, using Prolog. An example text file called "valid.txt" containing a proof looks like this:
[imp(p, q), p].
q.
[
[1, imp(p,q), premise],
[2, p, premise],
[3, q, impel(2,1)]
].
This would be the input into my program, which should respond "yes" or "true" to a correct proof (which the above is), and "no" or "false" to an incorrect proof.
I have no idea where to begin, so my question is whether there are resources out there where I could learn about verifying/checking proofs in Prolog. I have a small amount of experience programming in Prolog, but I feel I need some specific instruction on how to construct a program that can verify proofs. I've looked for textbooks and websites, but have been unable to find anything that could help me.
In the end my program should probably resemble something like this: Checking whether or not a logical sequence that has assumptions in it is valid
As this is my first time asking a question on here I apologize if I missed something.

In part I totally understand the close votes, comments and such, but on the other hand I can also see your side of the story.
I do view your question as just barely on the wrong side of what is allowed here, but not so far over the line that other newcomers don't get away with the same thing.
Book recommendation questions are not allowed here: normally a book recommendation is seen as subjective. But when only one or two books meet the requirements, how subjective can it really be?
"Handbook of Practical Logic and Automated Reasoning" by John Harrison (WorldCat) (Amazon) (Webpage)
Note: The book uses the programming language OCaml, but implements a subset of Prolog. The book also uses a domain-specific language and implements a parser for it.
The book clearly goes way beyond what you need so you won't need to get or read the entire book.
You might only need this, which is found in prop.ml:
(* evaluate formula fm under valuation v, which maps atoms to booleans *)
let rec eval fm v =
  match fm with
    False -> false
  | True -> true
  | Atom(x) -> v(x)
  | Not(p) -> not (eval p v)
  | And(p,q) -> (eval p v) && (eval q v)
  | Or(p,q) -> (eval p v) || (eval q v)
  | Imp(p,q) -> not (eval p v) || (eval q v)
  | Iff(p,q) -> (eval p v) = (eval q v);;
Example query
tautology <<p \/ q ==> q \/ (p <=> q)>>;;
The book is more of a detailed introduction to the ideas behind HOL Light, a more feature-rich system that is itself a simplified version of HOL.
The links should put you in the ballpark of working applications; these all fall under the broader topic of automated theorem proving, which is different from proof assistants.
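Beyond the book recommendation, here is a minimal sketch of the checking idea itself, to give you a concrete starting point. It is written in Python purely for illustration, with a hypothetical tuple encoding of formulas (imp(p, q) becomes ('imp', 'p', 'q')); only the premise and impel justifications from your example are handled:

def check_line(premises, derived, formula, justification):
    # A line is valid if it is a premise or follows from earlier lines.
    if justification == 'premise':
        return formula in premises
    if justification[0] == 'impel':            # implication elimination
        _, i, j = justification                # impel(i, j) in the file
        antecedent, implication = derived.get(i), derived.get(j)
        return implication == ('imp', antecedent, formula)
    return False                               # unknown rule

def check_proof(premises, conclusion, lines):
    derived = {}                               # line number -> formula
    for number, formula, justification in lines:
        if not check_line(premises, derived, formula, justification):
            return False
        derived[number] = formula
    # the proof must end with exactly the stated conclusion
    return bool(lines) and lines[-1][1] == conclusion

# The "valid.txt" example:
premises = [('imp', 'p', 'q'), 'p']
conclusion = 'q'
lines = [(1, ('imp', 'p', 'q'), 'premise'),
         (2, 'p', 'premise'),
         (3, 'q', ('impel', 2, 1))]
print(check_proof(premises, conclusion, lines))   # True

The same shape carries over to Prolog almost clause for clause: one clause per inference rule, with the already-derived lines threaded through as a list.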

Related

Substitution system

I am currently developing an advanced text editor with some features.
One of the features is a substitution system; it will help the user replace strings quickly.
It has the following format:
((a x) | x) (d e)
We can divide the string into two parts: the left ((a x) | x) and the right (d e). If there is a letter (x, in the current case) after a |, then we can perform a substitution action: replace every x on the left side with the string on the right side.
So after the action we will receive (a (d e)).
Of course, these parenthesized expressions might be nested: ((a | a) (d x | d)) (e f | f) -> ((e f | f) x) -> (e x).
Unfortunately, the reductions may continue infinitely: (a a | a) (a a | a).
I need to show a warning if the user writes a string that can never be reduced to a form with no remaining reductions.
Any suggestions on how to do it?
Congratulations, you have just invented the λ-calculus (pronounced "lambda calculus").
Why this problem is hard
Let's use the original notation for this, since it was already invented by Church in the 1930s:
((λx.f) u) is the rewriting of f where all x's have been replaced by u's.
Note that this is equivalent to your notation (f | x) u, where f is a string that can contain x's. It is a mathematical tool introduced to understand which functions are "computable", i.e. can be given to a computer with an adequate program so that, for all inputs, the computer will run its program and output the correct answer. Spoiler: the computable functions are exactly the functions that can be written as λ-terms (i.e. as rewritings, in your setting).
Sometimes λ-terms can be simplified, as you have noted yourself. For instance
(λx.yx)((λu.uu)(λz.z)i) -> (λx.yx)((λz.z)(λz.z)i) -> (λx.yx)((λz.z)i) -> (λx.yx)i -> yi
Sometimes the sequence of simplifications (also called "evaluation", or "beta-reduction") is infinite. One very famous example is the Omega combinator (λx.xx)(λx.xx) (rings a bell?).
Ideally, we would like to restrict the λ-calculus so that all terms are "simplifiable" to a final usable form in a finite number of steps. This property is called "normalization". What we actually want, however, goes one step further: we want every sequence of simplifications to reach the final form in a finite number of steps, so that when faced with multiple choices you can pick either one and never get stuck in an infinite loop because of a bad choice. This is called "strong normalization".
Here's the issue: the λ-calculus is not strongly normalizing. There are terms that never reach a final, "normal", form.
You have found one yourself.
How this problem has been solved theoretically
The key to gaining the strong normalization property was to rule out λ-terms that do not satisfy it (print a warning and spit out an error, in your case) so that we only consider strongly normalizing λ-terms. This "ruling out" was put in place via a typing system: λ-terms that have a valid type are strongly normalizing, and λ-terms that are not strongly normalizing cannot have a valid type. Awesome, now we know which ones should give errors! Let's check that with simple cases.
From the way you are able to phrase your problem in a very clear way, I'm assuming you already have experience with programming and static type systems, but I can modify this answer if you'd rather have a full explanation. I'll be using Caml-like notation for types so s is a string, s -> s is a function that to a string associates a string, and s -> (s -> s) is a function that to a string associates a function from string to string, etc. I denote x : t when variable x has type t.
λx.ax : s -> s provided a : s -> s
(λx.yx)((λu.uu)(λz.z)i) : s provided y : s -> s, i : s, as we have seen by the reductions above (strictly speaking, it is the reduced forms that carry this type: the subterm λu.uu on its own has no valid simple type)
λx.x : s -> s. But watch out: λx.x : (s -> s) -> (s -> s) is also true. Deciding on the type is hard.
How you may solve this problem programmatically
Your problem is slightly easier, because you are only dealing with string replacements, so you know that the base type of everything is a string, and you can try to "type your way up" until you are either able to type the whole rewriting (i.e. no errors) or able to prove that the rewriting is not typable (see Omega) and spit out an error. Watch out though: just because you are not able to type a rewriting does not mean that it cannot be typed!
In your example (λx.ax)(de), you know the actual values for a, d and e, so you may have, for instance, a : s -> s, d : s -> s, e : s, hence de : s and λx.ax : s -> s, so the whole thing has type s and you're good to go. You can probably write a compiler that will try to type the rewriting and figure out whether it's typable based on a set of cleverly crafted decision rules for the specific use that you want. You can even decide that if the compiler fails to type a rewriting, then it is rejected (even when it's valid), because cases so intricate that they fail despite being correct should never happen in a reasonable editor text-substitution scenario.
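To make "typing your way up" concrete, here is a minimal sketch of simple-type inference via unification. It is Python with a hypothetical tuple encoding of terms (('var', x), ('lam', x, body), ('app', f, a)), one possible realization rather than the only one. A term that receives a type is guaranteed to normalize; a rejected term may still be fine, as noted above:

import itertools

fresh = itertools.count()

def new_tv():
    return ('tv', next(fresh))               # fresh type variable

def walk(t, subst):
    # Follow substitution links until we hit an unbound or non-variable type.
    while t[0] == 'tv' and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    # Does type variable v occur in t? Rules out circular types like t = t -> r.
    t = walk(t, subst)
    if t == v:
        return True
    return t[0] == 'fun' and (occurs(v, t[1], subst) or occurs(v, t[2], subst))

def unify(a, b, subst):
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return True
    if a[0] == 'tv':
        if occurs(a, b, subst):              # this check is what rejects λx.xx
            return False
        subst[a] = b
        return True
    if b[0] == 'tv':
        return unify(b, a, subst)
    if a[0] == b[0] == 'fun':
        return unify(a[1], b[1], subst) and unify(a[2], b[2], subst)
    return False

def infer(term, env, subst):
    kind = term[0]
    if kind == 'var':
        return env[term[1]]
    if kind == 'lam':
        tv = new_tv()
        body = infer(term[2], {**env, term[1]: tv}, subst)
        return ('fun', tv, body)
    if kind == 'app':
        f = infer(term[1], env, subst)
        a = infer(term[2], env, subst)
        r = new_tv()
        if not unify(f, ('fun', a, r), subst):
            raise TypeError('untypable: may not normalize')
        return r

identity = ('lam', 'x', ('var', 'x'))
print(infer(identity, {}, {}))               # ('fun', ('tv', 0), ('tv', 0))

# λx.x x, the building block of Omega, is rejected:
omega_half = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))
try:
    infer(omega_half, {}, {})
except TypeError as e:
    print(e)                                 # untypable: may not normalize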
Do you want to solve this programmatically?
No. I mean it. You really don't want to.
Remember, λ-terms describe all computable functions.
If you were to really implement a fully correct warning generator as you seem to intend, it would mean that one could encode any program as a string substitution in your editor, so your editor would essentially be a programming language of its own, which is typically not what you want your editor to be.
I mean: writing a full program that queries the webcam, analyzes the image, guesses who is typing, and loads an ad based on Google's suggestions when a file is opened, all as a cleverly written text-substitution command? Really? Is that what you're trying to achieve?
PS: If this is indeed what you're trying to achieve, have a look at Lisp and Haskell; their syntax should look somewhat... familiar, given what you've seen here.
And good luck with your editor !

Formal Reason Why LL Grammar excludes Left-Recursion

I am currently reading the Dragon Book in my free time. The book states that a grammar is LL(1) if and only if, for any production A -> a|b, the following two conditions apply.
1) FIRST(a) and FIRST(b) are disjoint. This implies that they cannot both derive EMPTY
2) If 'b' can derive EMPTY, then 'a' cannot derive any string that begins with FOLLOW(A)
I know that LL parsers cannot handle left recursion in general, but if I make a grammar
S -> S(S) | EMPTY,
FIRST(S) = {'('} and FOLLOW(S) = {EOF}. This does not seem to contradict either of the two rules; am I missing something?
Thank you in advance,
Michael
It has been a while, but I think FOLLOW(S) = {EOF, ')', '('}: in the production S -> S(S), '(' follows the first S and ')' follows the inner S. And with '(' in FOLLOW(S), rule 2 is violated after all: the EMPTY alternative derives EMPTY, while S(S) can derive strings beginning with '(' (take the leading S to EMPTY), and '(' is in FOLLOW(S).
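If you want to double-check such hand calculations, here is a minimal sketch of the standard FIRST/FOLLOW fixpoint computation (Python; the grammar encoding is ad hoc, with S -> S(S) entered as the symbol sequence S ( S ) and EMPTY as the empty alternative):

EPS, EOF = 'EPS', '$'
prods = {'S': [['S', '(', 'S', ')'], []]}    # [] encodes the EMPTY alternative

def first_of_seq(seq, FIRST):
    # FIRST of a symbol sequence; EPS included iff the whole sequence can vanish.
    out = set()
    for sym in seq:
        f = FIRST[sym] if sym in prods else {sym}
        out |= f - {EPS}
        if EPS not in f:
            return out
    out.add(EPS)
    return out

FIRST = {nt: set() for nt in prods}
changed = True
while changed:
    changed = False
    for nt, alts in prods.items():
        for alt in alts:
            f = first_of_seq(alt, FIRST)
            if not f <= FIRST[nt]:
                FIRST[nt] |= f
                changed = True

FOLLOW = {nt: set() for nt in prods}
FOLLOW['S'].add(EOF)                         # S is the start symbol
changed = True
while changed:
    changed = False
    for nt, alts in prods.items():
        for alt in alts:
            for i, sym in enumerate(alt):
                if sym not in prods:         # only nonterminals get FOLLOW sets
                    continue
                rest = first_of_seq(alt[i + 1:], FIRST)
                add = (rest - {EPS}) | (FOLLOW[nt] if EPS in rest else set())
                if not add <= FOLLOW[sym]:
                    FOLLOW[sym] |= add
                    changed = True

print(FIRST['S'])    # {'(', 'EPS'}
print(FOLLOW['S'])   # {'$', '(', ')'}

With '(' in both FIRST(S(S)) and FOLLOW(S), condition 2 is indeed violated, matching the hand calculation above.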

Where might I find a method to convert an arbitrary boolean expression into conjunctive or disjunctive normal form?

I've written a little app that parses expressions into abstract syntax trees. Right now, I use a bunch of heuristics against the expression in order to decide how to best evaluate the query. Unfortunately, there are examples which make the query plan extremely bad.
I've found a way to provably make better guesses as to how queries should be evaluated, but I need to put my expression into CNF or DNF first in order to get provably correct answers. I know this could result in potentially exponential time and space, but for typical queries my users run this is not a problem.
Now, converting to CNF or DNF is something I do by hand all the time in order to simplify complicated expressions. (Well, maybe not all the time, but I do know how it's done, using e.g. De Morgan's laws, distributive laws, etc.) However, I'm not sure how to begin translating that into an implementable algorithm. I've looked at papers on query optimization, and several start with "well, first we put things into CNF" or "first we put things into DNF", but they never seem to explain their method for accomplishing that.
Where should I start?
Look at https://github.com/bastikr/boolean.py
Example:
def test(self):
    expr = parse("a*(b+~c*d)")
    print(expr)
    dnf_expr = normalize(boolean.OR, expr)
    print(list(map(str, dnf_expr)))
    cnf_expr = normalize(boolean.AND, expr)
    print(list(map(str, cnf_expr)))
Output is:
a*(b+(~c*d))
['a*b', 'a*~c*d']
['a', 'b+~c', 'b+d']
UPDATE: Now I prefer this sympy logic package:
>>> from sympy.logic.boolalg import to_dnf
>>> from sympy.abc import A, B, C
>>> to_dnf(B & (A | C))
Or(And(A, B), And(B, C))
>>> to_dnf((A & B) | (A & ~B) | (B & C) | (~B & C), True)
Or(A, C)
The naive vanilla algorithm, for quantifier-free formulae, is (a sketch follows the list):
for CNF, convert to negation normal form with De Morgan laws then distribute OR over AND
for DNF, convert to negation normal form with De Morgan laws then distribute AND over OR
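For what it's worth, here is a minimal sketch of that naive algorithm, CNF direction, over a hypothetical tuple AST in Python; the DNF direction is the same with the roles of AND and OR swapped:

# AST: ('var', name), ('not', t), ('and', a, b), ('or', a, b), ('imp', a, b)

def nnf(t):
    # Negation normal form: push negations down to the atoms.
    op = t[0]
    if op == 'var':
        return t
    if op == 'imp':                          # a -> b  ==  ~a \/ b
        return nnf(('or', ('not', t[1]), t[2]))
    if op in ('and', 'or'):
        return (op, nnf(t[1]), nnf(t[2]))
    u = t[1]                                 # op == 'not'
    if u[0] == 'var':
        return t                             # already a literal
    if u[0] == 'not':                        # ~~a == a
        return nnf(u[1])
    if u[0] == 'imp':                        # ~(a -> b) == a /\ ~b
        return ('and', nnf(u[1]), nnf(('not', u[2])))
    inner = 'or' if u[0] == 'and' else 'and' # De Morgan's laws
    return (inner, nnf(('not', u[1])), nnf(('not', u[2])))

def cnf(t):
    # Distribute OR over AND; assumes t is already in NNF.
    op = t[0]
    if op == 'and':
        return ('and', cnf(t[1]), cnf(t[2]))
    if op == 'or':
        a, b = cnf(t[1]), cnf(t[2])
        if a[0] == 'and':                    # (x & y) | b == (x | b) & (y | b)
            return ('and', cnf(('or', a[1], b)), cnf(('or', a[2], b)))
        if b[0] == 'and':
            return ('and', cnf(('or', a, b[1])), cnf(('or', a, b[2])))
        return ('or', a, b)
    return t                                 # literal

# a | (b & ~(c | d))  ->  (a | b) & ((a | ~c) & (a | ~d))
t = ('or', ('var', 'a'),
     ('and', ('var', 'b'), ('not', ('or', ('var', 'c'), ('var', 'd')))))
print(cnf(nnf(t)))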
It's unclear to me whether your formulae are quantified. But even if they aren't, the ends of the Wikipedia articles on conjunctive normal form and on its rough equivalent in the automated-theorem-proving world, clausal normal form, outline a usable algorithm (and point to references if you want to make the transformation a bit cleverer). If you need more than that, please tell us more about where you run into difficulty.
I came across this page: How to Convert a Formula to CNF. It shows the algorithm for converting a Boolean expression to CNF in pseudocode, and it helped me get started on this topic.

How to parse a list of words according to a simplified grammar?

Just to clarify, this isn't homework. I've been asked for help on this and am unable to do it, so it turned into a personal quest to solve it.
Imagine you have a grammar for an English sentence like this:
S => NP VP | VP
NP => N | Det N | Det Adj N
VP => V | V NP
N => i you bus cake bear
V => hug love destroy am
Det => a the
Adj => pink stylish
I've searched for several hours and really am out of ideas.
I found articles talking about shallow parsing, depth-first backtracking and related things, and while I'm familiar with most of them, I still can't apply them to this problem. I tagged Lisp and Haskell because those are the languages I plan to implement this in, but I don't mind if you use other languages in your replies.
I'd appreciate hints, good articles and everything in general.
Here's a working Haskell example. It turns out there are a few tricks to learn before you can make it work! The zeroth thing to do is boilerplate: turn off the dreaded monomorphism restriction, import some libraries, and define some functions that aren't in the libraries (but should be):
{-# LANGUAGE NoMonomorphismRestriction #-}
import Control.Applicative ((<*))
import Control.Monad
import Text.ParserCombinators.Parsec
ensure p x = guard (p x) >> return x
singleToken t = tokenPrim id (\pos _ _ -> incSourceColumn pos 1) (ensure (==t))
anyOf xs = choice (map singleToken xs)
Now that the zeroth thing is done... first, we define a data type for our abstract syntax trees. We can just follow the shape of the grammar here. However, to make it more convenient, I've factored a few of the grammar's rules; in particular, the two rules
NP => N | Det N | Det Adj N
VP => V | V NP
are more conveniently written this way when it comes to actually writing a parser:
NP => N | Det (Adj | empty) N
VP => V (NP | empty)
Any good book on parsing will have a chapter on why this kind of factoring is a good idea. So, the AST type:
data Sentence
= Complex NounPhrase VerbPhrase
| Simple VerbPhrase
data NounPhrase
= Short Noun
| Long Article (Maybe Adjective) Noun
data VerbPhrase
= VerbPhrase Verb (Maybe NounPhrase)
type Noun = String
type Verb = String
type Article = String
type Adjective = String
Then we can make our parser. This one follows the (factored) grammar even more closely! The one wrinkle here is that we always want our parser to consume an entire sentence, so we have to explicitly ask for it to do that by demanding an "eof" -- or end of "file".
s = (liftM2 Complex np vp <|> liftM Simple vp) <* eof
np = liftM Short n <|> liftM3 Long det (optionMaybe adj) n
vp = liftM2 VerbPhrase v (optionMaybe np)
n = anyOf ["i", "you", "bus", "cake", "bear"]
v = anyOf ["hug", "love", "destroy", "am"]
det = anyOf ["a", "the"]
adj = anyOf ["pink", "stylish"]
The last piece is the tokenizer. For this simple application, we'll just tokenize based on whitespace, so the built-in words function works just fine. Let's try it out! Load the whole file in ghci:
*Main> parse s "stdin" (words "i love the pink cake")
Right (Complex (Short "i") (VerbPhrase "love" (Just (Long "the" (Just "pink") "cake"))))
*Main> parse s "stdin" (words "i love pink cake")
Left "stdin" (line 1, column 3):
unexpected "pink"
expecting end of input
Here, Right indicates a successful parse, and Left indicates an error. The "column" number reported in the error is actually the word number where the error occurred, due to the way we're computing source positions in singleToken.
There are several different approaches for syntactic parsing using a Context-free grammar.
If you want to implement this yourself you could start by familiarizing yourself with parsing algorithms: you can have a look here and here, or, if you prefer something on paper, the chapter on syntactic parsing in Jurafsky & Martin might be a good start.
I know that it is not too difficult to implement a simple syntactic parser in the Prolog programming language. Just google for 'prolog shift reduce parser' or 'prolog scan predict parser'. I don't know Haskell or Lisp, but there might be similarities to Prolog, so maybe you can get some ideas from there.
If you don't have to implement the complete parser on your own I'd have a look at the Python NLTK which offers tools for CFG-Parsing. There is a chapter about this in the NLTK book.
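For example, assuming the nltk package is installed, the question's grammar can be handed to NLTK's chart parser more or less verbatim (a sketch, not the only way to drive NLTK):

from nltk import CFG, ChartParser

grammar = CFG.fromstring("""
    S   -> NP VP | VP
    NP  -> N | Det N | Det Adj N
    VP  -> V | V NP
    N   -> 'i' | 'you' | 'bus' | 'cake' | 'bear'
    V   -> 'hug' | 'love' | 'destroy' | 'am'
    Det -> 'a' | 'the'
    Adj -> 'pink' | 'stylish'
""")

parser = ChartParser(grammar)
for tree in parser.parse("i love the pink cake".split()):
    print(tree)   # e.g. (S (NP (N i)) (VP (V love) (NP (Det the) (Adj pink) (N cake))))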
Okay, there are a number of algorithms that you could use. Below are some popular dynamic-programming parsing algorithms:
1) The CKY algorithm (the grammar must be in Chomsky normal form)
2) The Earley algorithm
3) Chart parsing
Please google to find implementations of these. Basically, given a sentence, these algorithms allow you to assign a context-free parse tree to it; a sketch of CKY follows below.
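As an illustration of the first of these, here is a minimal CKY sketch in Python. The grammar is the question's example hand-converted to Chomsky normal form, with unit rules such as NP => N and S => VP folded into the lexicon and rule set (which is why some words carry several categories):

from itertools import product

binary = {                        # A -> B C rules
    ('NP', 'VP'): {'S'},
    ('V', 'NP'): {'VP', 'S'},     # 'S' comes from folding the unit rule S => VP
    ('Det', 'N'): {'NP'},
    ('Det', 'AdjN'): {'NP'},
    ('Adj', 'N'): {'AdjN'},
}
lexical = {w: {'N', 'NP'} for w in ['i', 'you', 'bus', 'cake', 'bear']}
lexical.update({w: {'V', 'VP', 'S'} for w in ['hug', 'love', 'destroy', 'am']})
lexical.update({w: {'Det'} for w in ['a', 'the']})
lexical.update({w: {'Adj'} for w in ['pink', 'stylish']})

def cky(words):
    n = len(words)
    # chart[i][j] = set of categories that derive words[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = set(lexical.get(w, ()))
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):         # try every split point
                for b, c in product(chart[i][k], chart[k][j]):
                    chart[i][j] |= binary.get((b, c), set())
    return 'S' in chart[0][n]

print(cky("i love the pink cake".split()))    # True
print(cky("i love pink cake".split()))        # False

Recovering an actual tree rather than a yes/no answer means storing back-pointers in the chart instead of bare category sets.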
You provided an example of a non-probabilistic grammar, so you can use tools such as ANTLR, JFlex, Scala parser combinators, or the Parsers Python library to implement a parser for this grammar with code very similar to what you provided.
I think the problem for you might be that the way you'd parse a computer language is a lot different than the way you'd parse natural language.
Computer languages are designed to be unambiguous and relatively easily to get the exact meaning of from a computer.
Natural languages evolved to be compact and expressive and usually understood by people. You might manage to make the deterministic parsing that compilers use work for some very simple subset of English grammar, but it's nothing like what is used to parse real natural language.

Are there Mathematica packages for presenting proofs/derivations?

When I write out a proof or derivation on paper I frequently make sign errors or drop terms as I move from one step to the next. I'd like to use Mathematica to save myself from these silly mistakes. I don't want Mathematica to solve the expression; I just want to use it to carry out and display a series of algebraic manipulations. For a (trivial) example:
In[111]:= MultBothSides[Equal[a_, b_], c_] := Equal[c a, c b];
In[112]:= expression = 2 a == a b
Out[112]= 2 a == a b
In[113]:= MultBothSides[expression, 1/a]
Out[113]= 2 == b
Can anyone point me to a package that would support this kind of manipulation?
Edit
Thanks for the input, not quite what I'm looking for though. The symbol manipulation isn't really the problem. I'm really looking for something that will make explicit the algebraic or mathematical justification of each step of a derivation. My goal here is really pedagogical.
Mathematica also provides a number of high-level functions for manipulating algebraic expressions. Among these are Expand, Apart, Together, and Cancel, though there are quite a few more.
Also, for your specific example of applying the same transformation to both sides of an equation (that is, an expression with the head Equal), you can use the Thread function, which works just like your MultBothSides function, but with a great deal more generality.
In[1]:= expression = 2 a == a b
Out[1]= 2 a == a b
In[2]:= Thread[expression/a, Equal]
Out[2]= 2 == b
In[3]:= Thread[expression - c, Equal]
Out[3]= 2 a - c == a b - c
In either of the presented solutions, it should be relatively easy to see what the step entailed. If you want something a little more explicit, you can write your own function like so:
In[4]:= ApplyToBothSides[f_, eq_Equal] := Map[f, eq]
In[5]:= ApplyToBothSides[4 * #&, expression]
Out[5]= 8 a == 4 a b
It's a generalization of your MultBothSides function that takes advantage of the fact that Map works on expressions with any head, not just head List. If you're trying to communicate with an audience that is unfamiliar with Mathematica, using these sorts of names can help you communicate more clearly. In a related vein, if you want to use replacement rules as suggested by Ira Baxter, it may be helpful to write out Replace or ReplaceAll instead of using the /. syntactic sugar.
In[6]:= ReplaceAll[expression, a -> (x + y)]
Out[6]= 2 (x + y) == b (x + y)
If you think it would be clearer to have the actual equation, instead of the variable name expression, in your input, and you're using the notebook interface, highlight the word expression with your mouse, call up the contextual menu, and select "Evaluate in Place".
The notebook interface is also a very pleasant environment for doing "literate programming", so you can also explain any steps that are not immediately obvious in words. I believe this is a good practice when writing mathematical proofs regardless of the medium.
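For readers without Mathematica, here is a rough SymPy analogue of the ApplyToBothSides idea, as a sketch only; SymPy equations expose their sides as lhs and rhs, and SymPy's automatic simplification plays the role Mathematica's evaluator does above:

from sympy import Eq, symbols

a, b = symbols('a b')
expression = Eq(2*a, a*b)

def apply_to_both_sides(f, eq):
    # Build a new equation out of the two transformed sides.
    return Eq(f(eq.lhs), f(eq.rhs))

print(apply_to_both_sides(lambda side: side / a, expression))   # Eq(2, b)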
I don't think you need a package. What you want to do is to manipulate each formula according to an inference rule. In MMa, you can model inference rules on a formula using transformations. So, if you have a formula f, you can apply an inference rule I by executing (my MMa syntax is 15 years rusty)
f /. I
to produce the next formula in your sequence.
MMa will of course try to simplify your formulas if they contain standard algebraic operators and terms, such as constant numbers and arithmetic operators. You can prevent MMa from applying its own "inference" rules by enclosing your formula in a Hold[...] form.
