Resolution Inference Rule Algorithm

I have the following doubts about the Resolution Inference Rule.
1. In "for each Ci, Cj in clauses do", does each pair Ci, Cj necessarily contain complementary symbols (e.g. one contains A and the other contains ~A)?
2. In the above example, what if both clauses contain the same symbol (e.g. A and A)? Should I consider the pair for inference? If so, what result does it return?
3. When does "if new ⊆ clauses then return false" run? After all the clauses have been explored?
4. What is the use of "if new ⊆ clauses then return false"?
5. What is the use of "if new ⊆ clauses then return false"?
function PL-RESOLUTION(KB, α) returns true or false
  inputs: KB, the knowledge base, a sentence in propositional logic
          α, the query, a sentence in propositional logic
  clauses ← the set of clauses in the CNF representation of KB ∧ ¬α
  new ← {}
  loop do
    for each Ci, Cj in clauses do
      resolvents ← PL-RESOLVE(Ci, Cj)
      if resolvents contains the empty clause then return true
      new ← new ∪ resolvents
    if new ⊆ clauses then return false
    clauses ← clauses ∪ new

Yes.
No, you can't apply resolution in that case.
Well, why shouldn't it happen?
I guess it's to avoid infinite loops: if a full pass over all the pairs produces no clause that isn't already in clauses, then no later pass can produce anything new either, so the procedure stops and returns false (you are back to the conditions of the previous iteration).
Same question?
More about 1 and 2. The idea behind resolution is that if you have (A v B) ^ (not A v C) then you can safely infer (B v C) (informally, because A is either true or false). If you have (A v B) ^ (A v C) you can't apply the same reasoning.
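To make 1 and 2 concrete, here is a small Python sketch of what PL-RESOLVE could do for a single pair of clauses (my own illustration, not part of the book's pseudocode; the names negate and pl_resolve are just assumptions for this sketch):

def negate(lit):
    """Return the complementary literal: A <-> ~A."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def pl_resolve(ci, cj):
    """All resolvents of two clauses, where a clause is a frozenset of
    string literals such as "A" or "~A". Only a complementary pair
    (A in one clause, ~A in the other) produces a resolvent; a literal
    shared with the same sign (A and A) produces nothing."""
    resolvents = set()
    for lit in ci:
        if negate(lit) in cj:
            resolvents.add(frozenset((ci - {lit}) | (cj - {negate(lit)})))
    return resolvents

# (A v B) and (~A v C) resolve to (B v C):
print(pl_resolve(frozenset({"A", "B"}), frozenset({"~A", "C"})))
# (A v B) and (A v C) contain no complementary pair, so there is no resolvent:
print(pl_resolve(frozenset({"A", "B"}), frozenset({"A", "C"})))
# A and ~A resolve to the empty clause, the contradiction PL-RESOLUTION looks for:
print(pl_resolve(frozenset({"A"}), frozenset({"~A"})))

The empty clause produced in the last call is exactly what makes PL-RESOLUTION return true.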

Related

How to write it in Skolem form? (Prolog)

Translate the following formula into a Horn formula in Skolem form:
∀w¬∀x∃z(H(w)∧(¬G(x,x)∨¬H(z)))
It was translated from German to English. How do I write it in Horn form and then in Skolem form? I didn't find anything on the internet; please help.
I will always use the satisfiability-preserving version of Skolemization, i.e. the one in which we replace those quantifiers that would become existential quantifiers when moved to the front of the formula.
To make life a bit simpler, let's push the negations to the atoms. We can also see that w doesn't occur in ¬G(x,x)∨¬H(z) and that x,z don't occur in H(w), allowing us to distribute the quantifiers a bit inside.
Then we obtain the formula ∀w¬H(w) ∨ ∃x∀z (G(x,x)∧ H(z)) .
If we want to refute the formula:
We skolemize ∃x and delete ∀w, ∀z and obtain:
¬H(w) ∨ (G(c,c)∧ H(z))
after CNF transformation, we have:
(¬H(w) ∨ G(c,c)) ∧ (¬H(w) ∨ H(z))
Both clauses have exactly one positive literal, so they are Horn clauses. Translated to Prolog syntax, we get:
g(c,c) :- h(W).
h(Z) :- h(W).
If we want to prove the formula:
We have to negate before we skolemize, leading to:
∃w H(w) ∧ ∀x∃z (¬G(x,x) ∨ ¬H(z))
after skolemizing ∃w and ∃z, deleting ∀x and CNF transformation, we obtain:
H(c) ∧ (¬G(x,x) ∨ ¬H(f(x)))
That could be interpreted as a fact h(c) and a query ?- g(X,X), h(f(X)).
To be honest, neither variant makes much sense: the first does not terminate for any input, and in the second version the query will fail because g/2 is not defined.
Does this page help?
6.3 Convert first-order logic expressions to normal form
A Horn clause, written as a Prolog rule, consists of a head and a body of goals that all have to be satisfied in order for the head to be true.
∀w¬∀x∃z(H(w)∧(¬G(x,x)∨¬H(z)))
First you want to translate the whole statement into human language for clarity. ¬ means NOT, ∧ means AND, and ∨ means OR. The parentheses are used to group subformulas.
∀w¬∀x∃z
For all w, it is NOT the case that for all x there is at least one z such that the inner formula holds.
H(w)
Is w a H? There must be a fact that says H(w) is true in your knowledge base.
¬G(x,x)
Is there a fact G(x,x)? If yes, return false.
¬H(z)
Is there a fact H(z)? If yes, return false.
H(w)∧(¬G(x,x)∨¬H(z))
What this says is that the inner formula is true only if H(w) is true AND either G(x,x) OR H(z) is false.
In Prolog you'd write this as
factCheck(W,X,Z) :- h(W), ( not(g(X,X)) ; not(checkZ(Z)) ).
where Z is a list with at least 1 entry in it. If ANY element in list Z is true, fail.
%is the list empty?
checkZ([]).
%is h true for the first element of the list?
checkZ([Head|Tail]) :- h(Head), !.
%remove the first element of the list
checkZ([Head|Tail]) :- checkZ(Tail).

Reasoning with Conjunctive Normal Forms

I have this program, which I need to translate into CNF (this is in preparation for the exam, so it is not homework!):
p,q
r :- q
false :- p , s
s :- t
t
Here's what I did:
p ^ q ^ (r V ~q) ^ (~p V ~s) ^ (s V ~t) ^ t
= r
Is my reasoning correct?
There is another question here:
You want to query the database with r. What clause should you add to your database?
I don't understand this at all. After the simplification the database is basically r.
r is true, isn't it?
The question "You want to query the database with r. What clause, should you add to your database?" refers to so called refutation proofs. In a refutation proof one does not proof:
Database |- Query
Instead one proofs:
Database, ~Query |- f
In classical logic the two are the same. So in your example you would need to show that p ^ q ^ (r V ~q) ^ (~p V ~s) ^ (s V ~t) ^ t ^ ~r leads to a contradiction.
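To see the refutation idea mechanically, here is a minimal brute-force sketch in Python (my own addition, not part of this answer; the clause encoding and the helper name satisfiable are assumptions of the sketch): add the negated query as one more clause and test whether the whole set still has a satisfying assignment.

from itertools import product

# A clause is a set of (atom, sign) pairs: ('p', True) stands for p,
# ('p', False) for ~p.
def satisfiable(clauses, atoms):
    """Brute-force: is there a truth assignment satisfying every clause?"""
    for values in product([True, False], repeat=len(atoms)):
        model = dict(zip(atoms, values))
        if all(any(model[a] == sign for a, sign in clause) for clause in clauses):
            return True
    return False

# CNF of the database from the question ...
db = [{('p', True)}, {('q', True)},
      {('r', True), ('q', False)},
      {('p', False), ('s', False)},
      {('s', True), ('t', False)},
      {('t', True)}]
# ... plus the negated query ~r: if this set is unsatisfiable,
# the refutation succeeds and r follows from the database.
print(satisfiable(db + [{('r', False)}], ['p', 'q', 'r', 's', 't']))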
Bye
Edit 14.02.2019:
In case somebody is interested in Prolog code for converting propositional formulas into CNF, see here https://gist.github.com/jburse/ca8d01e26c7cf176ea65eeb1bf916ea0#file-aspsat-p (lines 43-87, requires Prolog Commons lists and ordset); you can convert a formula F into a CNF C by calling norm(F,H), cnf(H,C).
The obtained CNF is already cleaned of trivial and subsumed clauses. If somebody is further interested in CNF test cases, see here http://gist.github.com/jburse/bf99239903847322321fabf6f49a5b84#file-casescls-p ; it contains some hundred tautologies from Principia Mathematica and some tens of fallacies collected elsewhere.

prolog and translating propositional logic

My ultimate goal is to load a set of propositional formulas into Prolog from a file in order to deduce some facts. Suppose I have the propositional formula:
p implies not(q).
In Prolog this would be:
not(q) :- p
Prolog does not seem to like the not operator in the head of the rule. I get the following error:
'$record_clause'/2: No permission to redefine built-in predicate `not/1'
Use :- redefine_system_predicate(+Head) if redefinition is intended
I know two ways to rewrite the general formula for p implies q. First, use the fact that the contrapositive is logically equivalent.
p implies q iff not(q) implies not(p)
Second, use the fact that p implies q is logically equivalent to not(p) or q (the truth tables are the same).
The first method leads me to my current problem. The second method is then just a conjunction or disjunction. You cannot write only conjunctions and disjunctions in Prolog as they are not facts or rules.
What is the best way around my problem so that I can express p implies not(q)?
Is it possible to write all propositional formulas in Prolog?
EDIT: Now I wish to connect my results with other propositional formulae. Suppose I have the following rule:
something :- formula(P, Q).
How does this connect? If I enter formula(false, true) (which evaluates to true) into the interpreter, this does not automatically make something true, which is what I want.
p => ~q === ~p \/ ~q === ~( p /\ q )
So we can try to model this with a Prolog program,
formula(P,Q) :- P, Q, !, fail.
formula(_,_).
Or you can use the built-in \+ i.e. "not", to define it as formula(P,Q) :- \+( (P, Q) ).
This just checks the compliance of the passed values to the formula. If we combine this with domain generation first, we can "deduce" i.e. generate the compliant values:
13 ?- member(Q,[true, false]), formula(true, Q). %// true => ~Q, what is Q?
Q = false.
14 ?- member(Q,[true, false]), formula(false, Q). %// false => ~Q, what is Q?
Q = true ;
Q = false.
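As a quick sanity check of the equivalence p => ~q === ~(p /\ q) used above, here is a tiny truth-table sketch in Python (my own addition, not part of the answer):

from itertools import product

# Enumerate all truth values of p and q and compare the two formulas.
for p, q in product([True, False], repeat=2):
    implication = (not q) if p else True   # p => ~q
    negated_conj = not (p and q)           # ~(p /\ q)
    print(p, q, implication, negated_conj, implication == negated_conj)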
You are using the wrong tool. Try Answer Set Programming.

How does a Resolution algorithm work for propositional logic?

I haven't been able to understand what the resolution rule is in propositional logic. Does resolution simply state some rules by which a sentence can be expanded and rewritten in another form?
The following is a simple resolution algorithm for propositional logic. The function returns the set of all possible clauses obtained by resolving its two inputs. I can't understand how the algorithm works; could someone explain it to me?
function PL-RESOLUTION(KB, α) returns true or false
  inputs: KB, the knowledge base, a sentence in propositional logic
          α, the query, a sentence in propositional logic
  clauses ← the set of clauses in the CNF representation of KB ∧ ¬α
  new ← {}
  loop do
    for each Ci, Cj in clauses do
      resolvents ← PL-RESOLVE(Ci, Cj)
      if resolvents contains the empty clause then return true
      new ← new ∪ resolvents
    if new ⊆ clauses then return false
    clauses ← clauses ∪ new
It's a whole topic of discussion, but I'll try to explain one simple example.
The input of your algorithm is KB, a set of clauses to perform resolution on. It is easiest to understand it as a set of facts like:
Apple is red
If something is red Then this something is sweet
We introduce two predicates: R(x) (x is red) and S(x) (x is sweet). Then we can write our facts in a formal language:
R('apple')
R(X) -> S(X)
We can rewrite the second fact as ¬R(X) v S(X) to make it eligible for the resolution rule.
The "calculate resolvents" step in your program removes a pair of opposite literals:
Examples: 1) a & ¬a -> empty clause. 2) a('b') & (¬a(x) v s(x)) -> s('b')
Note that in the second example the variable x is substituted with the actual value 'b'.
The goal of our program is to determine whether the sentence "apple is sweet" is true. We write this sentence in the formal language as S('apple') and add it in negated form. The formal definition of the problem is then:
CLAUSE1 = R('apple')
CLAUSE2 = ¬R(X) v S(X)
Goal? = ¬S('apple')
The algorithm works as follows:
Take clauses c1 and c2.
Calculating the resolvents of c1 and c2 gives a new clause c3 = S('apple').
Calculating the resolvents of c3 and the goal gives us the empty clause.
That means our sentence is true. If you can't derive the empty clause with such resolutions, the sentence is not entailed (in most practical applications this is simply a lack of facts in the KB).
Consider clauses X and Y, with X = {a, x1, x2, ..., xm} and Y = {~a, y1, y2, ..., yn}, where a is a variable, ~a is its negation, and the xi and yi are literals (i.e., possibly-negated variables).
The interpretation of X is the proposition (a \/ x1 \/ x2 \/ ... \/ xm) -- that is, at least one of a or one of the xi must be true, assuming X is true. Likewise for Y.
We assume that X and Y are true.
We also know that (a \/ ~a) is always true, regardless of the value of a.
If ~a is true, then a is false, so ~a /\ X => {x1, x2, ..., xm}.
If a is true, then ~a is false. In this case a /\ Y => {y1, y2, ..., yn}.
We know, therefore, that {x1, x2, ..., xm, y1, y2, ..., yn} must be true, assuming X and Y are true. Observe that the new clause does not refer to variable a.
This kind of deduction is known as resolution.
How does this work in a resolution based theorem prover? Simple: we use proof by contradiction. That is, we start by turning our "facts" into clauses and add the clauses corresponding to the negation of our "goal". Then, if we can eventually resolve to the empty clause, {}, we will have reached a contradiction since the empty clause is equivalent to falsity. Because the facts are given, this means that our negated goal must be wrong, hence the (unnegated) goal must be true.
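Here is a minimal Python sketch of that refutation loop at the propositional level (my own illustration, not the poster's code; the names resolve and pl_resolution are assumptions of the sketch). It is run on the apple example from the previous answer with the variable already substituted, so R stands for "the apple is red" and S for "the apple is sweet":

from itertools import combinations

def negate(lit):
    """Return the complementary literal: a <-> ~a."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(ci, cj):
    """All resolvents of two clauses (frozensets of string literals)."""
    out = set()
    for lit in ci:
        if negate(lit) in cj:
            out.add(frozenset((ci - {lit}) | (cj - {negate(lit)})))
    return out

def pl_resolution(clauses):
    """Saturate the clause set with resolvents. Returns True as soon as the
    empty clause (a contradiction) is derived, False once a full pass over
    all pairs adds nothing new."""
    clauses = set(clauses)
    while True:
        new = set()
        for ci, cj in combinations(clauses, 2):
            resolvents = resolve(ci, cj)
            if frozenset() in resolvents:
                return True
            new |= resolvents
        if new <= clauses:        # nothing new: the goal is not entailed
            return False
        clauses |= new

# Facts R ("the apple is red") and ~R v S ("if it is red it is sweet"),
# plus the negated goal ~S ("the apple is not sweet"):
print(pl_resolution([frozenset({"R"}),
                     frozenset({"~R", "S"}),
                     frozenset({"~S"})]))   # prints True: S follows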
Resolution is a procedure used to prove that arguments expressible in predicate logic are correct.
Resolution leads to a refutation theorem-proving technique for sentences in propositional logic.
Resolution provides proof by refutation, i.e. to show that a statement is valid, resolution attempts to show that the negation of the statement produces a contradiction with the known statements.
Algorithm:
1) Convert all the propositions of the axioms to clause form.
2) Negate the proposition to be proved and convert the result to clause form.
3) Resolve them.
4) If the resolvent is the empty clause, then a contradiction has been found.

DPLL algorithm definition

I am having some problems understanding the DPLL algorithm and I was wondering if anyone could explain it to me because I think my understanding is incorrect.
The way I understand it is: I take some set of literals, and if every clause is true, the model is true, but if some clause is false then the model is false.
I recursively check the model by looking for a unit clause; if there is one, I set the value for that unit clause to make it true, then update the model, removing all clauses that are now true and removing all literals which are now false.
When there are no unit clauses left, I choose any other literal and assign values for that literal which make it true and which make it false, then again remove all clauses which are now true and all literals which are now false.
DPLL requires a problem to be stated in conjunctive normal form, that is, as a set of clauses, each of which must be satisfied.
Each clause is a set of literals {l1, l2, ..., ln}, representing the disjunction of those literals (i.e., at least one literal must be true for the clause to be satisfied).
Each literal l asserts that some variable is true (x) or that it is false (~x).
If any literal is true in a clause, then the clause is satisfied.
If all literals in a clause are false, then the clause is unsatisfiable and hence the problem is unsatisfiable.
A solution is an assignment of true/false values to the variables such that every clause is satisfied. The DPLL algorithm is an optimised search for such a solution.
DPLL is essentially a depth first search that alternates between three tactics. At any stage in the search there is a partial assignment (i.e., an assignment of values to some subset of the variables) and a set of undecided clauses (i.e., those clauses that have not yet been satisfied).
(1) The first tactic is Pure Literal Elimination: if an unassigned variable x only appears in its positive form in the set of undecided clauses (i.e., the literal ~x doesn't appear anywhere) then we can just add x = true to our assignment and satisfy all the clauses containing the literal x (similarly if x only appears in its negative form, ~x, we can just add x = false to our assignment).
(2) The second tactic is Unit Propagation: if all but one of the literals in an undecided clause are false, then the remaining one must be true. If the remaining literal is x, we add x = true to our assignment; if the remaining literal is ~x, we add x = false to our assignment. This assignment can lead to further opportunities for unit propagation.
(3) The third tactic is to simply choose an unassigned variable x and branch the search: one side trying x = true, the other trying x = false.
If at any point we end up with an unsatisfiable clause then we have reached a dead end and have to backtrack.
There are all sorts of clever further optimisations, but this is the core of almost all SAT solvers.
Hope this helps.
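For concreteness, here is a compact Python sketch of these three tactics (my own illustration, not part of this answer; the function name dpll and the encoding are assumptions of the sketch). A clause is a frozenset of integer literals in the style used in the next answer: variable 3 is the literal 3, its negation is -3.

def dpll(clauses, assignment=None):
    """Return a satisfying assignment {var: bool} or None if unsatisfiable."""
    assignment = dict(assignment or {})

    def simplify(cls):
        out = []
        for c in cls:
            if any(assignment.get(abs(l)) == (l > 0) for l in c):
                continue                      # clause already satisfied
            rest = frozenset(l for l in c if abs(l) not in assignment)
            if not rest:
                return None                   # clause falsified: dead end
            out.append(rest)
        return out

    while True:
        clauses = simplify(clauses)
        if clauses is None:
            return None
        if not clauses:
            return assignment                 # every clause satisfied
        # (2) unit propagation
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit:
            lit = next(iter(unit))
            assignment[abs(lit)] = lit > 0
            continue
        # (1) pure literal elimination
        literals = {l for c in clauses for l in c}
        pure = next((l for l in literals if -l not in literals), None)
        if pure is not None:
            assignment[abs(pure)] = pure > 0
            continue
        # (3) branch on some unassigned variable
        var = abs(next(iter(clauses[0])))
        for value in (True, False):
            result = dpll(clauses, {**assignment, var: value})
            if result is not None:
                return result
        return None

# The 3-SAT instance from the next answer: (A v B v C) ^ (C' v D v B) ^ (B v A' v C) ^ (C' v A' v B')
print(dpll([frozenset(c) for c in [[1, 2, 3], [-3, 4, 2], [2, -1, 3], [-3, -1, -2]]]))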
The Davis–Putnam–Logemann–Loveland (DPLL) algorithm is a backtracking-based search algorithm for deciding the satisfiability of propositional logic formulae in conjunctive normal form, also known as the satisfiability problem or SAT.
Any boolean formula can be expressed in conjunctive normal form (CNF), which means a conjunction of clauses, i.e. ( … ) ^ ( … ) ^ ( … )
where a clause is a disjunction of literals (possibly negated boolean variables), e.g. (A v B v C’ v D)
an example of boolean formula expressed in CNF is
(A v B v C) ^ (C’ v D) ^ (D’ v A)
and solving the SAT problem means finding a combination of values for the variables that satisfies the formula, e.g. A=1, B=0, C=0, D=0
This is an NP-complete problem. In fact it was the first problem proven to be NP-complete, by Stephen Cook and Leonid Levin.
A particular type of SAT problem is 3-SAT, a SAT problem in which every clause has three literals.
The DPLL algorithm is a way to solve the SAT problem (how well it does in practice depends on the hardness of the input) that recursively creates a tree of potential solutions.
Suppose you want to solve a 3-SAT problem like this
(A v B v C) ^ (C’ v D v B) ^ (B v A’ v C) ^ (C’ v A’ v B’)
If we enumerate the variables as A=1, B=2, C=3, D=4 and use negative numbers for negated variables (e.g. A’ = -1), then the same formula can be written in Python like this
[[1,2,3],[-3,4,2],[2,-1,3],[-3,-1,-2]]
Now imagine creating a tree in which each node contains a partial solution. (In the original post a vector of the clauses satisfied by each partial solution is also depicted.)
The root node is [-1,-1,-1,-1], which means no values (neither 0 nor 1) have been assigned to the variables yet.
at each iteration:
we take the first unsatisfied clause, then
if there are no more unassigned variables we could use to satisfy that clause, then there can't be any valid solution in this branch of the search tree and the algorithm shall return None;
otherwise we take the first unassigned variable and set it such that it satisfies the clause, and start recursively from step 1. If the inner invocation of the algorithm returns None, we flip the value of the variable so that it does not satisfy the clause and set the next unassigned variable so as to satisfy the clause. If all three variables have been tried, or there are no more unassigned variables for that clause, it means there are no valid solutions in this branch and the algorithm shall return None.
See the following example:
From the root node we choose the first variable (A) of the first clause (A v B v C) and set it so that it satisfies the clause, hence A=1 (second node of the search tree).
Then we continue with the second clause (C’ v D v B), pick the first unassigned variable (C) and set it so that it satisfies the clause, which means C=0 (third node on the left).
We do the same thing for the third clause (B v A’ v C) and set B to 1.
When we try to do the same thing for the last clause, we realize we no longer have unassigned variables and the clause is false. We then have to backtrack to the previous position in the search tree. We change the value we assigned to B and set B to 0. Then we look for another unassigned variable that can satisfy the third clause, but there is none. Then we have to backtrack again, to the second node.
Once there, we have to flip the assignment of the first variable (C) so that it no longer satisfies the clause, and set the next unassigned variable (D) so as to satisfy it (i.e. C=1 and D=1). This also satisfies the third clause, which contains C.
The last clause to satisfy, (C’ v A’ v B’), has one unassigned variable, B, which can then be set to 0 in order to satisfy the clause.
At this link http://lowcoupling.com/post/72424308422/a-simple-3-sat-solver-using-dpll you can also find Python code implementing it.

Resources