p(x)⇒∀x.p(x) is contingent? - logic

I've encountered a question asking whether the following sentence is valid, contingent, or unsatisfiable:
p(x)⇒∀x.p(x)
I think the answer is that the sentence is valid. Section 6.10 of the textbook at http://logic.stanford.edu/intrologic/secondary/notes/chapter_06.html says:
a sentence with free variables is equivalent to the sentence in which all of the free variables are universally quantified.
Therefore I think the first relational sentence p(x) is equivalent to ∀x.p(x), and so the whole sentence is valid, i.e. it is always true.
However, the given correct answer is that the sentence is contingent, i.e. under some truth assignments it is true and under others it is false.
So why is the sentence contingent? Is the given answer wrong?

I think it depends on how you read the sentence.
If you read it as a definition, then it is not contingent.
However, if you read it as pure logic, then there are actually two meanings of x in the statement: the x on the left of the implication is different from the x bound by the quantifier on the right.
p(x) => for all x . p(x)
means the same as
p(x) => for all y . p(y)
and that is clearly contingent. It is not true for all predicates p.
For example:
Let p(x) stand for the predicate "x is left-handed".
The statement then says:
"x is left-handed" implies that everyone is left-handed
... which is not a logically valid statement.
See sawa's answer below for a more "mathematically rigorous" explanation.

You have a statement:
p(x)⇒∀x.p(x)
If you universally close the free variable, you get:
∀x.(p(x)⇒∀x.p(x))
in other words:
∀x.(p(x)⇒∀y.p(y))
which is not a tautology, but contingent. In non-technical terms, this reads:
for any x, if p(x) is true, then p(y) is true for all y
or, to transform it into an equivalent form:
(∃x.p(x))⇒(∀y.p(y))
it reads:
if p(x) is true for some x, then p(y) is true for all y
In other words,
p(x) is either always true or always false
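As a concrete illustration (my own countermodel, not part of the original answer): take a two-element domain {a, b} with p(a) true and p(b) false. Then ∃x.p(x) is true (witness a) but ∀y.p(y) is false (because of b), so (∃x.p(x))⇒(∀y.p(y)) is false in this interpretation, while it is true in any interpretation where p is always true or always false. That is exactly what "contingent" means here.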

Related

Prolog: what is the difference between, for example, X is 3 and 3 is X?

three(X) :- 3 is X.
three2(X) :- X is 3.
The queries three(3) and three2(3) have the same answers, as do three(5) and three2(5).
But three2(X) answers X = 3, while three(X) gives the error "Arguments are not sufficiently instantiated".
If there is enough data to decide that three(3) is true and three(5) is false, why is there not enough data to find that X equals 3 when we ask for the value of X?
That's because is/2 is Prolog's arithmetic expression evaluator. Everything on the right-hand side of is/2 must be fully instantiated so that the expression can be evaluated to a number (leaving aside the possibility that it evaluates to something other than a number). The result is then unified with the left-hand side of is/2, which succeeds if the LHS is an unbound variable or is identical to the result obtained.
In your case, you can make the predicate three/1 symmetric by just unifying, as there is really nothing to evaluate:
three_sym(X) :- 3 = X.
This succeeds when called with 3 and answers X = 3 when called with an unbound X.
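For comparison, a few example queries as they might appear at the SWI-Prolog top level (a sketch; the exact error wording differs between systems):

?- X is 3 + 4.      % RHS is a ground arithmetic expression, so it can be evaluated
X = 7.

?- 3 is X.          % RHS is an unbound variable: nothing to evaluate
ERROR: Arguments are not sufficiently instantiated

?- three_sym(X).    % plain unification with =/2, nothing needs to be evaluated
X = 3.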

How to write it in Skolem form? (Prolog)

Translate the following formula into a Horn formula in Skolem form:
∀w¬∀x∃z(H(w)∧(¬G(x,x)∨¬H(z)))
It was translated from German to English. How do I write it in Horn form and then in Skolem form? I couldn't find anything about it on the internet. Please help me.
I will use the satisfiability-preserving version of skolemization, i.e. the one that replaces exactly those variables which would become existentially quantified if all quantifiers were moved to the front of the formula.
To make life a bit simpler, let's push the negations to the atoms. We can also see that w doesn't occur in ¬G(x,x)∨¬H(z) and that x,z don't occur in H(w), allowing us to distribute the quantifiers a bit inside.
Then we obtain the formula ∀w¬H(w) ∨ ∃x∀z(G(x,x) ∧ H(z)).
If we want to refute the formula:
We skolemize ∃x (replacing x by a fresh constant c), delete ∀w and ∀z, and obtain:
¬H(w) ∨ (G(c,c)∧ H(z))
after CNF transformation, we have:
(¬H(w) ∨ G(c,c)) ∧ (¬H(w) ∨ H(z))
Both clauses have exactly one positive literal, so they are Horn clauses. Translated to Prolog syntax we get:
g(c,c) :- h(W).
h(Z) :- h(W).
If we want to prove the formula:
We have to negate before we skolemize, leading to:
∃w H(w) ∧ ∀x∃z (¬G(x,x) ∨ ¬H(z))
After skolemizing ∃w (with a fresh constant c) and ∃z (with a fresh function f(x)), deleting ∀x and transforming to CNF, we obtain:
H(c) ∧ (¬G(x,x) ∨ ¬H(f(x)))
That could be interpreted as a fact h(c) and a query ?- g(X,X), h(f(X)).
To be honest, neither variant makes much sense: the first does not terminate for any input, and in the second version the query will fail because g/2 is not defined.
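For completeness, here is the second variant written out as a small program and query (my own rendering; c and f/1 are the Skolem symbols introduced above):

h(c).

% ?- g(X, X), h(f(X)).
% raises an existence error in most systems (or simply fails, if unknown
% procedures are set to fail), since there are no clauses for g/2.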
does this page help?
6.3 Convert first-order logic expressions to normal form
A Horn clause consists of various goals that all have to be satisfied in order for the whole clause to be true.
∀w¬∀x∃z(H(w)∧(¬G(x,x)∨¬H(z)))
First you want to translate the whole statement to human language for clarity. ¬ means NOT, ∧ means AND and ∨ means OR. The () are used to group goals.
∀w¬∀x∃z
For all w, it is NOT the case that for all x there is at least one z such that the rest of the formula holds.
H(w)
Is w an H? There must be a fact in your knowledge base that says H(w) is true.
¬G(x,x)
Is there a fact G(x,x)? If yes, return false.
¬H(z)
Is there a fact H(z)? If yes, return false.
∃z(H(w)∧(¬G(x,x)∨¬H(z)))
What this says is that there is a z such that H(w) is true AND either G(x,x) OR H(z) is false.
In Prolog you'd write this as
factCheck(W,X,Z) :- h(W), not((g(X,X) ; not(checkZ(Z)))).
where Z is a list with at least 1 entry in it. If ANY element in list Z is true, fail.
% is the list empty?
checkZ([]).
% is h true for the first element of the list?
checkZ([Head|_Tail]) :- h(Head), !.
% otherwise, drop the first element of the list and check the rest
checkZ([_Head|Tail]) :- checkZ(Tail).

Steadfastness: Definition and its relation to logical purity and termination

So far, I have always taken steadfastness in Prolog programs to mean:
If, for a query Q, there is a subterm S, such that there is a term T that makes ?- S=T, Q. succeed although ?- Q, S=T. fails, then one of the predicates invoked by Q is not steadfast.
Intuitively, I thus took steadfastness to mean that we cannot use instantiations to "trick" a predicate into giving solutions that are otherwise not only never given, but rejected. Note the difference for nonterminating programs!
In particular, at least to me, logical-purity always implied steadfastness.
Example. To better understand the notion of steadfastness, consider an almost classical counterexample of this property that is frequently cited when introducing advanced students to operational aspects of Prolog, using a wrong definition of a relation between two integers and their maximum:
integer_integer_maximum(X, Y, Y) :-
        Y >= X,
        !.
integer_integer_maximum(X, _, X).
A glaring mistake in this—shall we say "wavering"—definition is, of course, that the following query incorrectly succeeds:
?- M = 0, integer_integer_maximum(0, 1, M).
M = 0. % wrong!
whereas exchanging the goals yields the correct answer:
?- integer_integer_maximum(0, 1, M), M = 0.
false.
A good solution of this problem is to rely on pure methods to describe the relation, using for example:
:- use_module(library(clpfd)).  % needed for (#=)/2, e.g. in SWI-Prolog or SICStus

integer_integer_maximum(X, Y, M) :-
        M #= max(X, Y).
This works correctly in both cases, and can even be used in more situations:
?- integer_integer_maximum(0, 1, M), M = 0.
false.
?- M = 0, integer_integer_maximum(0, 1, M).
false.
| ?- X in 0..2, Y in 3..4, integer_integer_maximum(X, Y, M).
X in 0..2,
Y in 3..4,
M in 3..4 ? ;
no
Now the paper Coding Guidelines for Prolog by Covington et al., co-authored by the very inventor of the notion, Richard O'Keefe, contains the following section:
5.1 Predicates must be steadfast.
Any decent predicate must be “steadfast,” i.e., must work correctly if its output variable already happens to be instantiated to the output value (O’Keefe 1990).
That is,
?- foo(X), X = x.
and
?- foo(x).
must succeed under exactly the same conditions and have the same side effects.
Failure to do so is only tolerable for auxiliary predicates whose call patterns are strongly constrained by the main predicates.
Thus, the definition given in the cited paper is considerably stricter than what I stated above.
For example, consider the pure Prolog program:
nat(s(X)) :- nat(X).
nat(0).
Now we are in the following situation:
?- nat(0).
true.
?- nat(X), X = 0.
nontermination
This clearly violates the property of succeeding under exactly the same conditions, because one of the queries no longer succeeds at all.
Hence my question: Should we call the above program not steadfast? Please justify your answer with an explanation of the intention behind steadfastness and its definition in the available literature, its relation to logical-purity as well as relevant termination notions.
In 'The Craft of Prolog', page 96, Richard O'Keefe says: 'we call the property of refusing to give wrong answers even when the query has an unexpected form (typically supplying values for what we normally think of as inputs*) steadfastness'.
* I am not sure whether this should say outputs, i.e. in your query ?- M = 0, integer_integer_maximum(0, 1, M). (which wrongly answers M = 0), M is used as an input, but the clause has been designed for it to be an output.
In nat(X), X = 0. we are using X as an output variable, not an input variable, but it has not given a wrong answer, as it does not give any answer at all. So I think that under that definition it could be called steadfast.
A rule of thumb he gives is 'postpone output unification until after the cut.' Here we have no cut, but we still want to postpone the unification.
However, I would have thought it sensible to put the base case first rather than the recursive case, so that nat(X), X = 0. would initially succeed... but you would still have other problems.
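For illustration, here is that reordering (my own sketch, not from the original post); the first solution of nat(X) is then 0, so the conjunction succeeds, although backtracking past it still does not terminate:

nat(0).
nat(s(X)) :- nat(X).

% ?- nat(X), X = 0.
% X = 0 ;
% (on further backtracking, s(0), s(s(0)), ... are tried and the query never terminates)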

How does a Resolution algorithm work for propositional logic?

I haven't been able to understand what the resolution rule is in propositional logic. Does resolution simply state some rules by which a sentence can be expanded and written in another form?
Following is a simple resolution algorithm for propositional logic. The function PL-RESOLVE returns the set of all possible clauses obtained by resolving its two inputs. I can't understand how the algorithm works; could someone explain it to me?
function PL-RESOLUTION(KB, α) returns true or false
  inputs: KB, the knowledge base, a sentence in propositional logic
          α, the query, a sentence in propositional logic
  clauses ← the set of clauses in the CNF representation of KB ∧ ¬α
  new ← {}
  loop do
    for each pair Ci, Cj in clauses do
      resolvents ← PL-RESOLVE(Ci, Cj)
      if resolvents contains the empty clause then return true
      new ← new ∪ resolvents
    if new ⊆ clauses then return false
    clauses ← clauses ∪ new
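For concreteness, here is a rough Prolog sketch of this procedure (my own code, not taken from the question or any answer). Ground propositional clauses are represented as sorted lists of literals, with neg(A) marking a negated atom:

% complement(?Literal, ?Complement)
complement(neg(A), A) :- !.
complement(A, neg(A)).

% resolve(+ClauseX, +ClauseY, -Resolvent)
% Pick a literal of ClauseX whose complement occurs in ClauseY and
% return the union of the remaining literals.
resolve(X, Y, Resolvent) :-
    member(L, X),
    complement(L, NL),
    member(NL, Y),
    subtract(X, [L], X1),
    subtract(Y, [NL], Y1),
    append(X1, Y1, R0),
    sort(R0, Resolvent).

% pl_resolution(+Clauses, -Result)
% Result is 'unsatisfiable' if the empty clause can be derived,
% 'satisfiable' if the clause set saturates without deriving it.
pl_resolution(Clauses0, Result) :-
    sort(Clauses0, Clauses),
    findall(R,
            ( member(C1, Clauses), member(C2, Clauses), resolve(C1, C2, R) ),
            New0),
    sort(New0, New),
    (   member([], New)
    ->  Result = unsatisfiable
    ;   subtract(New, Clauses, [])      % nothing new: saturation reached
    ->  Result = satisfiable
    ;   append(Clauses, New, Clauses1),
        pl_resolution(Clauses1, Result)
    ).

For example, for the clause set {p}, {¬p ∨ q}, {¬q} (which is KB ∧ ¬α for KB = {p, p ⇒ q} and query q), the query ?- pl_resolution([[p], [neg(p), q], [neg(q)]], Result). yields Result = unsatisfiable, i.e. the query follows from the KB.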
It's a whole topic of discussion, but I'll try to explain it to you with one simple example.
The input of your algorithm is KB, a set of rules on which to perform resolution. It is easiest to understand it as a set of facts like:
Apple is red
If smth is red Then this smth is sweet
We introduce two predicates, R(x) ("x is red") and S(x) ("x is sweet"). Then we can write our facts in a formal language:
R('apple')
R(X) -> S(X)
We can rewrite the second fact as ¬R(X) ∨ S(X) so that it is eligible for the resolution rule.
The resolvent-calculation step in your program deletes two complementary literals:
Examples: 1) a & ¬a -> empty. 2) a('b') & (¬a(x) ∨ s(x)) -> s('b')
Note that in the second example the variable x is substituted with the actual value 'b'.
The goal of our program is to determine whether the sentence "apple is sweet" is true. We write this sentence in the formal language as S('apple') and add it in negated form. The formal definition of the problem is then:
CLAUSE1 = R('apple')
CLAUSE2 = ¬R(X) v S(X)
Goal? = ¬S('apple')
The algorithm works as follows:
Take clauses c1 and c2.
Calculating the resolvent of c1 and c2 gives the new clause c3 = S('apple').
Calculating the resolvent of c3 and the goal gives the empty clause.
That means our sentence is true. If you cannot derive the empty clause with such resolutions, the sentence is considered false (although in most practical applications this just reflects missing facts in the KB).
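As an aside, the same example can be written as a tiny Prolog program (my own transcription; Prolog's own execution mechanism, SLD resolution, performs essentially this refutation for you):

red(apple).              % R('apple')
sweet(X) :- red(X).      % R(X) -> S(X), i.e. ¬R(X) ∨ S(X)

% ?- sweet(apple).
% true.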
Consider clauses X and Y, with X = {a, x1, x2, ..., xm} and Y = {~a, y1, y2, ..., yn}, where a is a variable, ~a is its negation, and the xi and yi are literals (i.e., possibly-negated variables).
The interpretation of X is the proposition (a \/ x1 \/ x2 \/ ... \/ xm) -- that is, at least one of a or one of the xi must be true, assuming X is true. Likewise for Y.
We assume that X and Y are true.
We also know that (a \/ ~a) is always true, regardless of the value of a.
If ~a is true, then a is false, so ~a /\ X => {x1, x2, ..., xm}.
If a is true, then ~a is false. In this case a /\ Y => {y1, y2, ..., yn}.
We know, therefore, that {x1, x2, ..., xm, y1, y2, ..., yn} must be true, assuming X and Y are true. Observe that the new clause does not refer to variable a.
This kind of deduction is known as resolution.
How does this work in a resolution based theorem prover? Simple: we use proof by contradiction. That is, we start by turning our "facts" into clauses and add the clauses corresponding to the negation of our "goal". Then, if we can eventually resolve to the empty clause, {}, we will have reached a contradiction since the empty clause is equivalent to falsity. Because the facts are given, this means that our negated goal must be wrong, hence the (unnegated) goal must be true.
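This single step is exactly what the resolve/3 predicate in the sketch after the pseudocode above performs; for instance:

?- resolve([a, x1, x2], [neg(a), y1], R).
R = [x1, x2, y1].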
Resolution is a procedure used to prove that arguments which are expressible in predicate logic are correct.
Resolution leads to a refutation theorem-proving technique for sentences in propositional logic.
Resolution provides proof by refutation, i.e. to show that a statement is valid, resolution attempts to show that the negation of the statement produces a contradiction with the known statements.
The algorithm:
1) convert all the propositions of the axioms to clause form
2) negate the proposition and convert the result to clause form
3) resolve the clauses
4) if the resolvent is the empty clause, then a contradiction has been found

Can anyone help me to understand this recursive Prolog example?

Here is the plus code that I don't understand:
plus(0,X,X):-natural_number(X).
plus(s(X),Y,s(Z)) :- plus(X,Y,Z).
while we are given:
natural_number(0).
natural_number(s(X)) :- natural_number(X).
I don't understand this recursion. If I have plus(s(0),s(s(s(0))),Z), how can I get the answer 1+3=4?
I need some explanation of the first clause. I expected plus(0,X,X) to stop the recursion, but I think I am doing something wrong.
So, let's start with natural_number(P). Read this as "P is a natural number". We're given natural_number(0)., which tells us that 0 is always a natural number (i.e. there are no conditions that must be met for it to be a fact). natural_number(s(X)) :- natural_number(X). tells us that s(X) is a natural number if X is a natural number. This is the normal inductive definition of natural numbers, but written "backwards", as we read Prolog "Q :- P" as "Q is true if P is true".
Now we can look at plus(P, Q, R). Read this as "plus is true if P plus Q equals R". We then look at the cases we're given:
plus(0,X,X) :- natural_number(X).. Read as Adding 0 to X results in X if X is a natural number. This is our inductive base case, and is the natural definition of addition.
plus(s(X),Y,s(Z)) :- plus(X,Y,Z). Read as "Adding the successor of X to Y results in the successor Z if adding X to Y is Z'. If we change the notation, we can read it algebraically as "X + 1 + Y = Z + 1 if X + Y = Z", which is very natural again.
So, to answer your direct question "If I have plus(s(0),s(s(s(0))),z), how can I get the answer 1+3=4?", let's consider how we can unify something with z at each step of the induction:
Apply the second definition of plus, as it's the only one that unifies with the query. plus(s(0),s(s(s(0))), s(z')) is true if plus(0, s(s(s(0))), z') is true for some z'.
Now apply the first definition of plus, as it's the only unifying definition: plus(0, s(s(s(0))), z') if z' is s(s(s(0))) and s(s(s(0))) is a natural number.
Unwind the definition of natural_number a few times on s(s(s(0))) to see that it is true.
So the overall statement is true, if s(s(s(0))) is unified with z' and s(z') is unified with z.
So the interpreter returns true, with z' = s(s(s(0))) and z = s(z'), i.e. z = s(s(s(s(0)))). So, z is 4.
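For reference, here are a couple of example queries with these clauses loaded (a sketch; in SWI-Prolog the name plus/3 clashes with a built-in, as the next answer notes, so there you may need to rename the predicate). The second query shows that the same definition can also decompose a sum:

?- plus(s(0), s(s(s(0))), Z).
Z = s(s(s(s(0)))).

?- plus(X, Y, s(s(0))).
X = 0, Y = s(s(0)) ;
X = s(0), Y = s(0) ;
X = s(s(0)), Y = 0 ;
false.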
That code is a straightforward implementation of addition in Peano arithmetic.
In Peano arithmetic, natural numbers are represented using the constant 0 and the unary function s. So s(0) is a representation of 1, s(s(s(0))) is representation of 3. And plus(s(0),s(s(s(0))),Z) will give you Z = s(s(s(s(0)))), which is a representation of 4.
You won't get numeric terms like 1+3=4; all you get are terms built from s/1, which can be nested to any depth and can thus represent any natural number. You can combine such terms (using plus/3) and thereby implement addition.
Note that your definition of plus/3 has nothing to do with SWI-Prolog's built-in plus/3 (which works with integers and not with the s/1 terms):
?- help(plus).
plus(?Int1, ?Int2, ?Int3)
True if Int3 = Int1 + Int2.
At least two of the three arguments must be instantiated to integers.
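As an aside, a small conversion predicate between the s/1 encoding and ordinary integers can be handy when experimenting (a sketch of my own, not part of this answer; it uses library(clpfd) so that it works in both directions):

:- use_module(library(clpfd)).

% peano(?PeanoTerm, ?Integer)
peano(0, 0).
peano(s(P), N) :-
    N #> 0,
    N0 #= N - 1,
    peano(P, N0).

% ?- peano(s(s(s(s(0)))), N).
% N = 4.
% ?- peano(P, 3).
% P = s(s(s(0))).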
