I'm stuck with a big proof in my homework. I have to use natural deduction to prove something, and I think if I can prove this somehow then I can finish the full proof. Can anyone help?
P ∨ Q, ¬P ⊢ Q
I have to do it from first principles, though; I can't use De Morgan's laws.
I can use the following rules:
implication intro, implication elim, conjunction intro, conjunction elim, disjunction intro, disjunction elim, (double) negation elimination, negation introduction (using Reductio Ad Absurdum)
Yeah, the question might be off-topic, but you can find the solution here (the rules used are in the right column of the derivation). It's part of this tutorial on natural deduction. You can check the notation and the abbreviations of the rule names there; it uses Fitch-style derivations rather than e.g. tree notation, but it should be easy to read nonetheless.
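In outline, the derivation can go like this (a Fitch-style sketch; the exact rule names, and whether you need explicit reiteration steps, depend on your particular system):

    1   P ∨ Q       premise
    2   ¬P          premise
    3   | P             assumption (first case, for ∨E)
    4   | | ¬Q          assumption (for RAA)
    5   | | P           reiteration, 3
    6   | | ¬P          reiteration, 2  (contradicts 5)
    7   | ¬¬Q           negation introduction (RAA), 4–6
    8   | Q             double negation elimination, 7
    9   | Q         assumption (second case, for ∨E)
    10  Q           ∨E, 1, 3–8, 9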
Is it possible, and how, to use Coq as a calculator or as a rule engine in forward chaining mode? A Coq script usually requires you to declare the goal for which a proof is to be found. But is it possible to go in the other direction, e.g. to compute a set of consequences bounded by some rule, e.g. by some number of steps? I am especially interested in the sequent calculus of full first-order logic. I guess (but I don't know) that there are implementations or packages of some type of sequent calculus for first-order logic, but for theorem proving. I would like to use such a sequent calculus to derive consequences in some directed order. Is that possible in Coq, and how?
Coq can be used for forward reasoning as well, in particular with the assert tactic. When you write assert (H : P)., Coq generates a subgoal that asks you to prove P. Once that subgoal is proved, Coq resumes the original proof, extending its context with a hypothesis H : P.
The Ltac language used to write Coq scripts has a match goal operator that allows you to inspect the shape of your goal. This allows you to progressively saturate your proof context with new facts derived from your current assumptions using the assert tactic, and to stop once certain conditions are met. Adam Chlipala's CPDT book has a nice chapter covering these features of tactic programming.
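As a minimal sketch of the two ideas (the lemma, the hypothesis names, and the saturation condition are all made up for illustration):

    (* forward reasoning with assert: each step adds a proved fact
       to the context instead of refining the goal *)
    Lemma forward_example (P Q R : Prop) :
      (P -> Q) -> (Q -> R) -> P -> R.
    Proof.
      intros HPQ HQR HP.
      assert (HQ : Q) by (apply HPQ; exact HP).  (* derive Q from P *)
      assert (HR : R) by (apply HQR; exact HQ).  (* derive R from Q *)
      exact HR.
    Qed.

    (* a toy Ltac saturation loop: keep splitting conjunctions found
       in the context until none are left *)
    Ltac saturate :=
      repeat match goal with
             | [ H : ?A /\ ?B |- _ ] => destruct H
             end.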
What is the exact difference between a well-formed formula and a proposition in propositional logic?
There's really not much about WFFs in my book.
My book says: "Propositions are also called sentences or statements. Another term formulae or well-formed formulae also refer to the same. That is, we may also call Well formed formula to refer to a preposition". Does that mean they are exactly the same thing?
Proposition: A statement which is true or false; easy for people to read, but hard to manipulate using logical equivalences.
WFF: An accurate logical statement which is true or false; there should be an official rigorous definition in your textbook. There are four rules they must follow. Harder for humans to read, but much more precise and easier to manipulate.
Example:
Proposition: All men are mortal.
WFF: Let P be the set of people, let M(x) denote "x is a man", and let S(x) denote "x is mortal". Then: ∀x ∈ P, M(x) → S(x)
It is most likely that there is a typo in the book. In the quote "Propositions are also called sentences or statements. Another term formulae or well-formed formulae also refer to the same. That is, we may also call Well formed formula to refer to a preposition", the word "preposition" should be "proposition".
Proposition: A statement which is either true or false, but not both.
Propositional Form (necessary to understand Well-Formed Formula): An assertion which contains at least one propositional variable.
Well-Formed Formula: A propositional form satisfying the following rules; any wff (Well-Formed Formula) can be derived using them:
If P is a propositional variable, then P is a wff.
If P is a wff, then ~P is a wff.
If P and Q are two wffs, then (P and Q), (P or Q), (P implies Q), (P is equivalent to Q) are all wffs.
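For a quick check of how the rules compose: ((P and Q) implies ~R) is a wff, because P, Q, and R are wffs by rule 1; (P and Q) is a wff by rule 3; ~R is a wff by rule 2; and so ((P and Q) implies ~R) is a wff by rule 3 again.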
I want to use universal quantifier in the body of a predicate rule, i.e., something like
A(x,y) <- ∀B(x,a), C(y,a).
It means that A(x,y) holds only if, for every a such that C(y,a) holds, B(x,a) holds as well.
Since in Datalog every variable bound in a rule body is existentially quantified by default, the a would be existentially quantified too. What should I do to express a universal quantifier in the body of a predicate rule?
Thank you.
P.S. The Datalog engine I am using is LogicBlox.
The basic idea is to use the logical axiom
∀x φ(x) ⇔ ¬∃x ¬φ(x)
to put your rules in a form where only existential quantifiers are required (along with negation). Intuitively, this usually means computing the complement of your answer first, and then computing its complement to produce the final answer.
For example, suppose you are given a graph G(V,E) and you want to find the vertices which are adjacent to all others in the graph. If universal quantification were allowed in a Datalog rule body, you might write something like
Q(x) <- ∀y E(x,y).
To write this without the universal quantifier, you first compute the vertices which fail to be adjacent to some other vertex (the x != y guard keeps a vertex from being required to be adjacent to itself):
NQ(x) <- V(x), V(y), x != y, !E(x,y).
then return its complement as the answer
Q(x) <- V(x), !NQ(x).
The same kind of trick can be used in SQL, which also lacks universal quantifiers.
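To see the trick on a concrete instance (the facts here are made up for illustration): take vertices V(1), V(2), V(3) and edges E(1,2), E(1,3), E(2,1). The two rules then derive

    NQ(2)   (2 is not adjacent to 3)
    NQ(3)   (3 is not adjacent to 1 or 2)
    Q(1)    (1 is adjacent to both 2 and 3)

so vertex 1 is the only one adjacent to all others, as intended.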
I have been trying to acclimate to Prolog and Horn clauses, but the transition from formal logic still feels awkward and forced. I understand there are advantages to having everything in a standard form, but:
What is the best way to define the material conditional operator --> in Prolog, where A --> B succeeds when either A = true and B = true, OR A = false? That is, an if-then statement that doesn't fail when the if is false and there is no else.
Also, what exactly are the non-obvious advantages of Horn clauses?
What is the best way to define the material conditional operator --> in Prolog
When A and B are just variables to be bound to the atoms true and false, this is easy:
cond(false, _).    % antecedent false: the conditional holds vacuously
cond(_, true).     % consequent true: the conditional holds
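For instance (a hypothetical session):

    ?- cond(true, false).
    false.

    ?- cond(false, false).
    true.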
But in general, there is no best way because Prolog doesn't offer proper negation, only negation as failure, which is non-monotonic. The closest you can come with actual propositions A and B is often
(\+ A ; B)
which tries to prove A, then goes on to B if A cannot be proven (failure to prove A does not mean that A is false; Prolog treats it as false under the closed-world assumption).
Negation, however, should be used with care in Prolog.
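A classic illustration of why care is needed (a made-up one-fact program): \+ does not behave like logical negation when its argument contains unbound variables.

    p(1).

    ?- \+ p(X).
    false.    % p(X) has a solution (X = 1), so the negation fails,
              % even though p(2), say, is not provable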
Also, what exactly are the non-obvious advantages of Horn clauses?
That they have a straightforward procedural reading. Prolog is a programming language, not a theorem prover. It's possible to write programs that have a clear logical meaning, but they're still programs.
To see the difference, consider the classical problem of sorting. If L is a list of numbers without duplicates, then
sort(L, S) :-
    permutation(L, S),
    sorted(S).

sorted([]).
sorted([_]).
sorted([X,Y|L]) :-
    X < Y,
    sorted([Y|L]).
is a logical specification of what it means for S to contain the elements of L in sorted order. However, it also has a procedural meaning, which is: try all the permutations of L until you find one that is sorted. This procedure, in the worst case, runs through all n! permutations, even though sorting can be done in O(n lg n) time, making it a very poor sorting program.
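If you want to actually run it, note that sort/2 is a built-in predicate in most Prolog systems, so the top-level predicate needs another name (perm_sort/2 here is made up), and permutation/2 lives in library(lists) in e.g. SWI-Prolog:

    :- use_module(library(lists)).        % for permutation/2

    % same definition as above, renamed to avoid the built-in sort/2;
    % relies on sorted/1 as defined earlier
    perm_sort(L, S) :-
        permutation(L, S),
        sorted(S).

    % ?- perm_sort([3,1,2], S).
    % S = [1, 2, 3] .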
See also this question.
I was wondering what sort of sentences you can't express in Prolog? I've been researching logic programming in general and have learned that first-order logic is more expressive than the definite clause logic (Horn clauses) that Prolog is based on. It's a tough subject for me to get my head around.
So, for instance, can the following sentence be expressed:
For all cars, there does not exist at least 1 car without an engine
If so, are there any other sentences that CAN'T be expressed? If not, why?
You can express your sentence straightforwardly in Prolog using negation (\+).
E.g.:
car(bmw).
car(honda).
...
car(toyota).
engine(bmw, dohv).
engine(toyota, wankel).
no_car_without_engine :-
    \+ ( car(Car),
         \+ engine(Car, _)
       ).
Procedure no_car_without_engine/0 will succeed if every car has an engine, and fail otherwise.
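For example, taking the listed facts at face value (and assuming nothing relevant hides behind the "..."), honda has no engine/2 fact, so:

    ?- no_car_without_engine.
    false.

    % after adding a (made-up) fact such as engine(honda, vtec):
    ?- no_car_without_engine.
    true.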
The most problematic definitions in Prolog are those which are left-recursive.
Definitions like
g(X) :- g(A), r(A,X).
are likely to loop forever, due to Prolog's search algorithm, which is plain depth-first search and will run to infinity and beyond.
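A common fix is to reorder the body so the recursive call comes last, together with a base case; a sketch with made-up b/1 base facts and an acyclic r/2:

    b(0).                    % a made-up base-case fact
    r(0, 1).                 % made-up, acyclic r/2 facts
    r(1, 2).

    % left-recursive version: loops before ever consulting r/2
    % g(X) :- g(A), r(A, X).

    % reordered: each recursive step consumes an r/2 edge first
    g(X) :- b(X).
    g(X) :- r(A, X), g(A).

    % ?- g(2).
    % true.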
The general problem with Horn clauses, however, is that they are defined to have at most one positive literal. It is easy to find formulas that do not satisfy this restriction,
for example:
A ∨ B
As a consequence, facts like ∀ X: cat(X) ∨ dog(X) can't be expressed directly.
There are ways to work around those and there are ways to allow such statements (see below).
Reading material:
These slides (p. 3) give an example of a sentence you can't express in Prolog.
This work (p. 10) also explains Horn Clauses and their implications and introduces a method to allow 'invalid' Horn Clauses.
Prolog is a programming language, not a natural language interface.
The sentence you show is expressed in such a convoluted way that I had a hard time attempting to understand it. Effectively, I must thank gusbro, who took the pains to express it in an understandable way. But he entirely glossed over the knowledge representation problems that any programming language poses when applied to natural language, or even simply to negation in first-order logic. These problems are so pressing that the language selected is often perceived as 'unimportant'.
Relating to programming, Prolog lacks the ability to access any linear data structure (i.e. arrays) in O(1) (constant time). A QuickSort, for instance, which requires O(1) access to array elements, can't be implemented efficiently.
But it's nevertheless a Turing-complete language, for what it's worth. So there is no computation that can't be expressed in Prolog.
So you are looking for sentences that can't be expressed in clausal logic but can be expressed in first-order logic.
Strictly speaking, there are many, simply because clausal logic is a restriction of FOL. So that's true by definition.
What you can do, though, is rewrite any set of FOL sentences into a logic program that is not equivalent but has good properties. For example, if you want to know whether p is a consequence of your theory, you can equivalently use the transformed logic program.
A few notes on the other answers:
Negation in Prolog (\+) is negation as failure, not first-order logic negation.
Prolog is a programming language; as correctly pointed out, we should be talking about clausal logic instead.
Left recursion is not a problem. You can easily use a different selection rule, or some other inference mechanism.
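For instance, with tabling (available via a table directive in SWI-Prolog and XSB), even a left-recursive transitive closure terminates; a minimal sketch with made-up edge/2 facts:

    :- table path/2.

    edge(a, b).
    edge(b, c).

    path(X, Y) :- edge(X, Y).
    path(X, Y) :- path(X, Z), edge(Z, Y).

    % ?- path(a, Y).
    % Y = b ;
    % Y = c.      (in some order)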