":-" serves as an infix operator in the Prolog logic programming language that, in the following context, roughly means:
H :- B1, B2, ... BN
H is provable if bodies B1 through BN are all provable.
Somewhat remarkably, in all my time studying Prolog, I've neglected to assign a name to this symbol. Does anybody know what the agreed-upon name for :- is?
The :- sign represents an implication arrow. If you write your example with logical symbols it reads:
H ← B1 ∧ B2 ∧ ... ∧ BN
So you can also say: "H is implied by B1 and B2 and ... and BN" or "The body of the rule implies its head."
It is also correct to call the operator itself "implication arrow" or just "implication".
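For a concrete instance of this reading, here is a small sketch (the predicate names are illustrative, not taken from the question):

% "X is a grandparent of Z if X is a parent of some Y and Y is a parent of Z."
grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

Read as an implication: parent(X, Y) ∧ parent(Y, Z) → grandparent(X, Z), i.e. the body implies the head.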
I'm not sure how agreed upon it is, but here's a reference to naming it "neck":
http://www.cse.unsw.edu.au/~billw/prologdict.html#neck
How can I tell Isabelle to expand all my definitions? That way the proof would be trivial. Unfortunately, no expansion or simplification happens by default, and I basically get back the original expression as the subgoal.
Example:
theory Test
imports Main
begin
definition b0 :: "nat⇒nat"
where "b0 n ≡ (n mod 2)"
definition b1 :: "nat⇒nat"
where "b1 n ≡ (n div 2)"
lemma "(a::nat)≤3 ∧ (b::nat)≤3 ⟶
2*(b1 a)+(b0 a)+2*(b1 b)+(b0 b) = a+b"
apply auto
oops
end
Response before oops:
proof (prove)
goal (1 subgoal):
1. a ≤ 3 ⟹
b ≤ 3 ⟹ 2 * b1 a + b0 a + 2 * b1 b + b0 b = a + b
My recommendation: unfolding
There is a special keyword unfolding for unpacking definitions at the start of proofs. For your example this would read:
unfolding b0_def b1_def by simp
I consider unfolding the most elegant way. It also helps while writing the proofs. Internally, this is (mostly?) equivalent to using the unfold-method:
apply (unfold b0_def b1_def) by simp
This will recursively (!) use the set of equalities you supply to rewrite the proof goal. (Due to the recursion, you should not supply a set of equalities that could generate cycles...)
Alternative: Using the simplifier
In cases with possible loops, the simplifier might be able to reach a nice unfolding without running into these cycles, maybe by interleaving with other simplifications. In such cases, by (simp add: b0_def b1_def), as you've suggested, is great!
Alternative definition: Maybe it's just an abbreviation (and not a definition)?
If you find yourself unfolding a definition in every single instance, you could consider using abbreviation instead of definition. Then some Isabelle magic will do the packing/unpacking for you without further hints. abbreviation only affects how the user communicates with Isabelle. It does not introduce new symbols at the object level, and consequently there would be no b1_def facts and the like.
abbreviation b0 :: "nat⇒nat"
where "b0 n ≡ (n mod 2)"
Usually not recommended: Building something like an abbreviation using the simplifier
If you (for whatever reason) want to have a defined name at the object level, but unfold it in almost every instance, you can also feed the defining equality directly into the simplifier.
definition b0 :: "nat⇒nat"
where [simp]: "b0 n ≡ (n mod 2)"
(Usually there should be little reason for the last option.)
Yes, I keep forgetting that definitions are not used in simplifications by default.
Adding the definitions explicitly to the simplification rules solves this problem:
lemma "(a::nat)≤3 ∧ (b::nat)≤3 ⟶
2*(b1 a)+(b0 a)+2*(b1 b)+(b0 b) = a+b"
by (simp add: b0_def b1_def)
This way the definitions (b0, b1) are correctly used.
Translate the following formula into a horn formula in Skolem form:
∀w¬∀x∃z(H(w)∧(¬G(x,x)∨¬H(z)))
It's translated from German to English. How do I write it in Horn form and then in Skolem form? I didn't find anything on the internet. Please help me.
I will always use the satisfiability-preserving version of Skolemization, i.e. the one that replaces those quantifiers which would become existential when moved to the head of the formula.
To make life a bit simpler, let's push the negations to the atoms. We can also see that w doesn't occur in ¬G(x,x)∨¬H(z) and that x,z don't occur in H(w), allowing us to distribute the quantifiers a bit inside.
Then we obtain the formula ∀w¬H(w) ∨ ∃x∀z (G(x,x)∧ H(z)) .
If we want to refute the formula:
We skolemize ∃x and delete ∀w, ∀z and obtain:
¬H(w) ∨ (G(c,c)∧ H(z))
after CNF transformation, we have:
(¬H(w) ∨ G(c,c)) ∧ (¬H(w) ∨ H(z))
Both clauses have exactly one positive literal, so they are Horn clauses. Translated to Prolog syntax we get:
g(c,c) :- h(W).
h(Z) :- h(W).
If we want to prove the formula:
We have to negate before we skolemize, leading to:
∃w H(w) ∧ ∀x∃z (¬G(x,x) ∨ ¬H(z))
after skolemizing ∃w and ∃z, deleting ∀x and CNF transformation, we obtain:
H(c) ∧ (¬G(x,x) ∨ ¬H(f(x)))
That could be interpreted as a fact h(c) and a query ?- g(X,X), h(f(X)).
To be honest, both variants don't make much sense: the first does not terminate for any input, and in the second version the query will fail because g/2 is not defined.
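To see this concretely, here is a sketch of both variants as Prolog programs (the same clauses as above):

% First variant: any query loops, because h/1 can only be proved via h/1 itself.
g(c, c) :- h(W).
h(Z) :- h(W).
% ?- g(X, X).   unifies X = c, then loops on h(W)

% Second variant: the fact h(c) with the query ?- g(X, X), h(f(X)).
% fails immediately, since no clause for g/2 exists.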
Does this page help?
6.3 Convert first-order logic expressions to normal form
A Horn clause consists of various goals that all have to be satisfied in order for the whole clause to be true.
∀w¬∀x∃z(H(w)∧(¬G(x,x)∨¬H(z)))
First you want to translate the whole statement into human language for clarity. ¬ means NOT, ∧ means AND, and ∨ means OR. Parentheses are used to group goals.
∀w¬∀x∃z
For all w, NOT for all x, there exists at least one z. In other words: for every w, it is not the case that for every x there is some z satisfying the inner formula.
H(w)
Is w an H? There must be a fact that says H(w) is true in your knowledge base.
¬G(x,x)
Is there a fact G(x,x)? If yes, return false.
¬H(z)
Is there a fact H(z)? If yes, return false.
∃z(H(w) ∧ (¬G(x,x) ∨ ¬H(z)))
What this says is that z is only true if H(w) is true AND either G(x,x) OR H(z) is false.
In Prolog you'd write this as
factCheck(W, X, Z) :- h(W), ( not(g(X, X)) ; not(checkZ(Z)) ).
where Z is a list with at least one entry in it. checkZ(Z) succeeds if h is true for ANY element of the list Z, so not(checkZ(Z)) fails in that case.
% does h hold for the first element of the list?
checkZ([Head|_]) :- h(Head), !.
% otherwise, check the rest of the list
checkZ([_|Tail]) :- checkZ(Tail).
% (an empty list fails: no element satisfied h)
Briefly, I have an EBNF grammar and hence a parse tree, but I do not know whether there is a procedure to translate it into first-order logic.
For example:
DR ::= E and P
P ::= B | (and P)* | (or P)*
B ::= L | P (and L P)
L ::= a
Yes, there is. The general pattern for translating a production of the form
A ::= B C ... D
is to paraphrase it declaratively as saying
A sequence of terminals s is an A (or: A generates the sequence s, if you prefer that formulation) if:
s is the concatenation of s_1, s_2, ... s_n, and
s_1 is a B / B generates the sequence s_1, and
s_2 is a C / C generates the sequence s_2, and
...
s_n is a D / D generates the sequence s_n.
Assuming we write these in the obvious way using a generates predicate, and that we can write concatenation using a || operator, your first rule becomes (if I am right to guess that E and P are non-terminals and "and" is a terminal symbol) something like
generates(E, s1)
∧ generates(and, s2)
∧ generates(P, s3)
∧ s = s1 || s2 || s3
⊃ generates(DR, s)
To establish the consequent (i.e. prove that s is an A), prove the antecedents. As long as the grammar does actually generate some sentences, and as long as you have some premises defining the "generates" relation for terminal symbols, the proof will be straightforward.
Prolog definite-clause grammars are a beautiful instantiation of this pattern. It takes some of us a while to understand and appreciate the use of difference lists in DCGs, but they handle the partitioning of s into subsequences and the association of the subsequences with the different parts of the right hand side much more elegantly than the simple translation into logic given above.
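As a small, self-contained illustration of the pattern (a toy grammar of my own, not the one from the question), here is the production S ::= 'a' S 'b' | ε written as a DCG; the difference lists silently handle the splitting of the input into subsequences:

s --> [].
s --> [a], s, [b].

% ?- phrase(s, [a, a, b, b]).   succeeds
% ?- phrase(s, [a, b, a]).      fails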
I haven't been able to understand what the resolution rule is in propositional logic. Does resolution simply state some rules by which a sentence can be expanded and written in another form?
Following is a simple resolution algorithm for propositional logic. The function returns the set of all possible clauses obtained by resolving its two inputs. I can't understand how the algorithm works; could someone explain it to me?
function PL-RESOLUTION(KB, α) returns true or false
  inputs: KB, the knowledge base, a sentence in propositional logic
          α, the query, a sentence in propositional logic
  clauses ← the set of clauses in the CNF representation of KB ∧ ¬α
  new ← {}
  loop do
    for each pair Ci, Cj in clauses do
      resolvents ← PL-RESOLVE(Ci, Cj)
      if resolvents contains the empty clause then return true
      new ← new ∪ resolvents
    if new ⊆ clauses then return false
    clauses ← clauses ∪ new
It's a whole topic of discussion, but I'll try to explain one simple example.
The input of your algorithm is KB, a set of rules for performing resolution. It is easy to understand it as a set of facts like:
An apple is red
If something is red, then this something is sweet
We introduce two predicates: R(x) (x is red) and S(x) (x is sweet). Then we can write our facts in the formal language:
R('apple')
R(X) -> S(X)
We can rewrite the 2nd fact as ¬R(X) ∨ S(X) to make it eligible for the resolution rule.
The resolvent-calculation step in your program deletes two opposite literals:
Examples: 1) a ∧ ¬a → empty. 2) a('b') ∧ (¬a(x) ∨ s(x)) → s('b')
Note that in the second example the variable x is substituted with the actual value 'b'.
The goal of our program is to determine whether the sentence "the apple is sweet" is true. We also write this sentence in the formal language, as S('apple'), and assert it in negated form. The formal definition of the problem is then:
CLAUSE1 = R('apple')
CLAUSE2 = ¬R(X) ∨ S(X)
Goal? = ¬S('apple')
The algorithm works as follows:
Take clauses c1 and c2.
Calculating the resolvents of c1 and c2 gives the new clause c3 = S('apple').
Calculating the resolvents of c3 and the goal gives the empty clause.
That means our sentence is true. If you cannot derive the empty clause with such resolutions, the sentence is false (though in most practical applications this is due to a lack of KB facts).
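Incidentally, the example maps directly onto a Prolog program, and Prolog's execution mechanism performs exactly these two resolution steps internally (a sketch, writing the quoted constants in Prolog style):

r(apple).          % CLAUSE1: R('apple')
s(X) :- r(X).      % CLAUSE2: ¬R(X) ∨ S(X)

% ?- s(apple).   resolves the goal with the rule, then with the fact: true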
Consider clauses X and Y, with X = {a, x1, x2, ..., xm} and Y = {~a, y1, y2, ..., yn}, where a is a variable, ~a is its negation, and the xi and yi are literals (i.e., possibly-negated variables).
The interpretation of X is the proposition (a \/ x1 \/ x2 \/ ... \/ xm) -- that is, at least one of a or one of the xi must be true, assuming X is true. Likewise for Y.
We assume that X and Y are true.
We also know that (a \/ ~a) is always true, regardless of the value of a.
If ~a is true, then a is false, so ~a /\ X => {x1, x2, ..., xm}.
If a is true, then ~a is false. In this case a /\ Y => {y1, y2, ..., yn}.
We know, therefore, that {x1, x2, ..., xm, y1, y2, ..., yn} must be true, assuming X and Y are true. Observe that the new clause does not refer to variable a.
This kind of deduction is known as resolution.
How does this work in a resolution based theorem prover? Simple: we use proof by contradiction. That is, we start by turning our "facts" into clauses and add the clauses corresponding to the negation of our "goal". Then, if we can eventually resolve to the empty clause, {}, we will have reached a contradiction since the empty clause is equivalent to falsity. Because the facts are given, this means that our negated goal must be wrong, hence the (unnegated) goal must be true.
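A minimal sketch of a single resolution step in Prolog (my own illustration, assuming clauses are represented as lists of literals, with n(A) standing for ~A):

% complement/2 relates a literal to its negation
complement(n(A), A) :- !.
complement(A, n(A)).

% pick a literal from C1 whose complement occurs in C2;
% the resolvent is everything else from both clauses
resolve(C1, C2, Resolvent) :-
    select(L, C1, Rest1),
    complement(L, NL),
    select(NL, C2, Rest2),
    append(Rest1, Rest2, Rs),
    sort(Rs, Resolvent).    % sort/2 also removes duplicate literals

% ?- resolve([a, x1], [n(a), y1], R).   R = [x1, y1]
% ?- resolve([a], [n(a)], R).           R = []   (the empty clause)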
Resolution is a procedure used to prove that arguments which are expressible in predicate logic are correct.
Resolution leads to a refutation theorem-proving technique for sentences in propositional logic.
Resolution provides proof by refutation: i.e., to show that a statement is valid, resolution attempts to show that the negation of the statement produces a contradiction with the known statements.
Algorithm:
1) Convert all the propositions of the axioms to clause form.
2) Negate the proposition and convert the result to clause form.
3) Resolve them.
4) If the resolvent is the empty clause, then a contradiction has been found.
I'm trying to parse a string in a self-made language into a sort of tree, e.g.:
# a * b1 b2 -> c * d1 d2 -> e # f1 f2 * g
should result in:
# a
* b1 b2
-> c
* d1 d2
-> e
# f1 f2
* g
#, * and -> are symbols. a, b1, etc. are texts.
For the moment I know only the RPN method for evaluating expressions, so my current solution is as follows. If I allow only a single text token after each symbol, I can easily convert the expression into RPN notation first (b = b1 b2; d = d1 d2; f = f1 f2) and parse it from there:
a b c -> * d e -> * # f g * #
However, merging text tokens with whatever else comes along seems problematic. My idea was to create marker tokens (M), so the RPN looks like:
a M b2 b1 M c -> * M d2 d1 M e -> * # f2 f1 M g * #
which is also parseable and seems to solve the problem.
That said:
Does anyone have experience with something like this and can say whether or not it is a viable solution going forward?
Are there better methods for parsing expressions with undefined arity of operators?
Can you point me at some good resources?
Note. Yes, I know this example very much resembles Lisp prefix notation and maybe the way to go would be to add some brackets, but I don't have any experience here. However, the source text must not contain any artificial brackets and also I'm not sure what to do about potential infix mixins like # a * b -> [if value1 = value2] c -> d.
Thanks for any help.
EDIT: It seems that what I'm looking for are sources on postfix notation with a variable number of arguments.
I couldn't fully understand your question, but it seems what you want is a grammar definition and a parser generator. I suggest you take a look at ANTLR, it should be pretty straightforward with it to define a grammar for either your original syntax or the RPN.
Edit: (After exercising self-criticism, and making some effort to understand the question details.) Actually, the language grammar is unclear from your example. However, it seems to me that the advantages of the prefix/postfix notations (i.e. that you need neither parentheses nor a precedence-aware parser) stem from the fact that you know the number of arguments every time you encounter an operator, therefore you know exactly how many elements to read (for prefix notation) or to pop from the stack (for postfix notation). OTOH, I believe that having operators which can take a variable number of arguments makes prefix/postfix notations not simply difficult to parse but outright ambiguous. Take the following expression for example:
# a * b c d
Which of the following three is the canonical form?
(a, *(b, c, d))
(a, *(b, c), d)
(a, *(b), c, d)
Without knowing more about the operators, it is impossible to tell. Of course you could define some sort of greediness for the operators, e.g. * is greedier than #, so it gobbles up all the arguments. But this would defeat the purpose of a prefix notation, because you simply wouldn't be able to write down the second variant from the above three; not without additional syntactic elements.
Now that I think of it, it is probably not by sheer chance that none of the programming languages I know support operators with a variable number of arguments, only functions/procedures.