Remove useless productions first and then unit productions? - logic

When transforming a context-free grammar into Chomsky Normal Form, we first remove null-productions, then unit-productions, and then useless productions, in this exact order.
I understand that removing null-productions can give rise to unit-productions, which is why unit-productions are removed after null-productions.
What I don't understand is what could go wrong if we removed useless productions first and unit-productions afterwards.

If you remove the unit production A → B and that was the only place in the grammar where B was referenced, then B will become unreachable as a result of unit-production elimination, and will need to be removed along with its productions.
That condition requires B to be non-recursive (since a recursive non-terminal refers to itself, and presumably not with a unit production), and any non-terminals referenced in the productions of B will still be referenced, having been absorbed into productions for A.
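For example, in this small hypothetical grammar, B is referenced only in the unit production A → B:

S → A b
A → B
B → c
B → d

Eliminating A → B yields A → c and A → d, after which B is unreachable and B → c, B → d must be removed as useless.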
If the grammar does not have a cycle of unit productions allowing A →* A, then unit productions can be topologically sorted and removed in reverse topological order, which guarantees that the unit production elimination doesn't create a new unit production. That makes it possible to remove newly-unreachable non-terminals as you do the unit-production elimination. But I think that textbook algorithms probably don't do that, which is presumably why your textbook wants you to remove useless productions after the grammar has been converted to CNF. (And, of course, there's nothing stopping a grammar from having a cycle of unit productions. Such a grammar would be ambiguous, making it difficult to use in a parser, but this exercise doesn't require that the grammar be useful in a parser.)
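To see the ordering at work, consider a hypothetical grammar with the unit chain A → B, B → C:

S → A x
A → B
A → a
B → C
B → b
C → c

In reverse topological order, B → C is eliminated first (giving B → c alongside B → b), and A → B afterwards (giving A → c, A → b alongside A → a); no new unit production appears. Eliminating A → B first would instead create the new unit production A → C.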
Similarly, if the only production for a non-terminal is an ε-production, then that non-terminal will end up with no productions after null-productions are removed (and it will also be unreachable). Again, that could be handled in a way which doesn't require deferring reachability analysis, but the textbook algorithm probably doesn't do that.
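For instance, in a hypothetical grammar with

S → A b
S → c
A → ε

null-production elimination adds S → b and deletes A → ε, leaving A with no productions at all; S → A b can then never derive a terminal string and must itself be removed as useless, which in turn makes A unreachable.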

Related

Why is unit-propagation performed first in DPLL algorithm?

Why is the pure-literal rule performed after the unit propagation and not before?
Unit propagation is done first because it might produce pure literals. DPLL might otherwise recurse on the variables associated with these literals, potentially wasting exponential time uselessly backtracking over them in the future. By eliminating pure literals after unit propagation, the procedure is assured of recursing on a variable whose value legitimately might be either TRUE or FALSE. A pure literal can always be immediately set to TRUE.
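A small hypothetical instance: take the clauses a, (a ∨ ¬c), (¬a ∨ b ∨ c), (¬b ∨ c). No literal is pure at first: a occurs negatively in the third clause, b negatively in the fourth, and c negatively in the second. Propagating the unit clause a satisfies and removes the first two clauses and shrinks the third to (b ∨ c); now c occurs only positively, so it is pure and can immediately be set to TRUE, satisfying both remaining clauses with no branching at all.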

How to use Coq as a calculator or as a forward-chaining rule engine/sequence application tool?

Is it possible to use Coq as a calculator or as a rule engine in forward-chaining mode, and if so, how? A Coq script usually requires you to declare the goal for which a proof is to be found. But is it possible to go in the other direction, e.g., to compute a set of consequences bounded by some rule, e.g., by some number of steps? I am especially interested in the sequent calculus of full first-order logic. I guess (but I don't know) that there are implementations or packages for some type of sequent calculus for first-order logic, but they are for theorem proving. I would like to use such a sequent calculus to derive consequences in some directed order. Is that possible in Coq, and how?
Coq can be used for forward reasoning as well, in particular with the assert tactic. When you write assert (H : P)., Coq generates a subgoal that asks you to prove P. When this goal is complete, it resumes the original proof, extending its context with a hypothesis H : P.
The Ltac language used to write Coq scripts has a match goal operator that allows you to inspect the shape of your goal. This lets you progressively saturate your proof context with new facts derived from your current assumptions using the assert tactic, and stop once certain conditions are met. Adam Chlipala's CPDT book has a nice chapter covering these features of tactic programming.
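As a minimal sketch of this forward style (forward_demo and the hypothesis names are made up for illustration):

Lemma forward_demo (P Q R : Prop) :
  P -> (P -> Q) -> (Q -> R) -> R.
Proof.
  intros HP HPQ HQR.
  (* forward step 1: extend the context with a proof of Q *)
  assert (HQ : Q) by (apply HPQ; exact HP).
  (* forward step 2: extend the context with a proof of R *)
  assert (HR : R) by (apply HQR; exact HQ).
  exact HR.
Qed.

Each assert extends the context with a derived fact, so the proof reads as a forward chain from the hypotheses rather than backwards from the goal.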

Prolog - Avoid Infinite Loop

I am currently studying logic programming, and am learning Prolog for that purpose.
We can have a knowledge base from which some results follow, yet Prolog will get into an infinite loop, due to the way it expands the predicates.
Let's assume we have the following logic program:
p(X):- p(X).
p(X):- q(X).
q(X).
The query p(john) will lead to an infinite loop because Prolog by default expands the first clause that unifies. However, we could conclude that p(john) is true if we started by expanding the second clause.
So why doesn't Prolog expand all the matching clauses (implemented like a threads/processes model with time slices), in order to conclude something whenever the KB can conclude something?
In our case, for example, two processes could be created, one expanded with p(X) and the other with q(X). When we later expand q(X), our program will conclude q(john).
Because Prolog's search algorithm for matching clauses is depth-first. So, in your example, once it matches the first rule, it will match the first rule again, and will never explore the others.
This would not happen if the algorithm were breadth-first or iterative deepening.
Usually it is up to you to reorder the KB so that these situations never happen.
However, it is possible to encode breadth-first/iterative-deepening search in Prolog using a meta-interpreter that changes the search order. This is an extremely powerful technique that is not well known outside of the Prolog world. 'The Art of Prolog' describes this technique in detail.
You can find some examples of meta-interpreters here, here and here.
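As a minimal sketch of the iterative-deepening variant — solve/2 and solve_id/1 are made-up names, and clause/2 requires the program's predicates to be declared dynamic in most Prolog systems:

:- dynamic p/1, q/1.
p(X) :- p(X).
p(X) :- q(X).
q(_).

% solve(Goal, D): prove Goal using at most D clause expansions per branch.
solve(true, _) :- !.
solve((A, B), D) :- !, solve(A, D), solve(B, D).
solve(A, D) :-
    D > 0,
    clause(A, Body),
    D1 is D - 1,
    solve(Body, D1).

% length(_, D) generates the depth bounds D = 0, 1, 2, ... on backtracking.
solve_id(Goal) :- length(_, D), solve(Goal, D).

Now ?- solve_id(p(john)). succeeds (at depth 2), even though the plain query ?- p(john). loops forever.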

Logic programming - Is a subset with only one function symbol Turing-complete?

If I have a subset of logic programming which contains only one function symbol, am I able to do everything?
I think that I cannot, but I am not sure at all.
A programming language can do anything a user wants if it is a Turing-complete language. I was taught that this means it has to be able to execute if..then..else commands and recursion, and that the natural numbers should be definable.
Any help and opinions would be appreciated!
In classical predicate logic, there is a distinction between the formula level and the term level. Since an n-ary function can be represented as an (n+1)-ary predicate (e.g., a unary function f becomes a binary predicate F, with F(x, y) holding iff f(x) = y), restricting only the number of function symbols does not lessen the expressivity.
In Prolog, there is no difference between the formula and the term level. You might pick an n-ary symbol p and try to encode Turing machines or an equivalent notion (e.g., recursive functions) via nestings of p.
From my intuition I would assume this is not possible: you can basically describe n-ary trees with variables as leaves, but then you can always unify these trees. This means that every rule head will match during recursive derivations, and therefore you are unable to express any case distinction. Still, this is just an informal argument, not a proof.
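A tiny hypothetical illustration of that unification problem, using a single symbol p/2 and a made-up predicate r/1:

r(p(_, _)).
r(p(p(_, _), _)).

The query ?- r(p(A, B)). matches both heads (the second one via A = p(_, _)), so the shapes of p-nestings alone cannot drive the kind of case distinction that distinct constants or functors would allow.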
P.S.: You might also be interested in monadic logic, where only unary predicates are allowed. This fragment of first-order logic is decidable.

Prolog - what sort of sentences can't be expressed

I was wondering what sort of sentences you can't express in Prolog? I've been researching logic programming in general and have learned that first-order logic is more expressive than the definite clause logic (Horn clauses) that Prolog is based on. It's a tough subject for me to get my head around.
So, for instance, can the following sentence be expressed:
For all cars, there does not exist at least 1 car without an engine
If so, are there any other sentences that CAN'T be expressed? If not, why?
You can express your sentence straightforwardly in Prolog using negation (\+).
E.g.:
car(bmw).
car(honda).
...
car(toyota).
engine(bmw, dohv).
engine(toyota, wenkel).
% Note the space after \+ — writing \+(Goal1, Goal2) would be parsed as a
% call to a nonexistent predicate \+/2 rather than negating the conjunction.
no_car_without_engine :-
    \+ ( car(Car),
         \+ engine(Car, _)
       ).
Procedure no_car_without_engine/0 will succeed if every car has an engine, and fail otherwise.
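With the facts exactly as listed above, the query ?- no_car_without_engine. fails, because there is no engine/2 fact for honda; adding one, say the made-up fact engine(honda, vtec)., makes it succeed.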
The most problematic definitions in Prolog are those which are left-recursive.
Definitions like
g(X) :- g(A), r(A,X).
are most likely to loop, due to Prolog's search algorithm, which is plain depth-first search and will run to infinity and beyond.
The general problem with Horn clauses, however, is that they're defined to have at most one positive literal. It is easy to find a clause that violates this restriction,
for example:
A ∨ B
As a consequence, facts like ∀ X: cat(X) ∨ dog(X) can't be expressed directly.
There are ways to work around those and there are ways to allow such statements (see below).
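One common workaround uses negation as failure as a default rule; note that this is a sketch with made-up facts, and it is not logically equivalent to the classical disjunction:

animal(rex).
animal(tom).
dog(rex).

% approximate ∀ X: cat(X) ∨ dog(X) by defaulting to cat
% whenever dog is not provable:
cat(X) :- animal(X), \+ dog(X).

Here ?- cat(tom). succeeds and ?- cat(rex). fails, but unlike the true disjunction, this commits tom to being a cat rather than leaving the choice open.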
Reading material:
These slides (p. 3) give an example of a sentence you can't build using Prolog.
This work (p. 10) also explains Horn Clauses and their implications and introduces a method to allow 'invalid' Horn Clauses.
Prolog is a programming language, not a natural language interface.
The sentence you show is expressed in such a convoluted way that I had a hard time trying to understand it. Effectively, I must thank gusbro, who took the pains to express it in an understandable way. But he entirely glossed over the knowledge representation problems that any programming language poses when applied to natural language, or even simply to negation in first-order logic. These problems are so pressing that the language selected is often perceived as 'unimportant'.
Relating to programming, Prolog lacks the ability to access any linear data structure (i.e., arrays) in O(1) (constant time). So QuickSort, for instance, which requires O(1) access to array elements, can't be implemented efficiently.
But it's nevertheless a Turing-complete language, for what it's worth. So there are no statements that can't be expressed in Prolog.
So you are looking for sentences that can't be expressed in clausal logic that can be expressed in first order logic.
Strictly speaking, there are many, simply because clausal logic is a restriction of FOL. So that's true by definition.
What you can do, though, is rewrite any set of FOL sentences into a logic program that is not equivalent but has good properties. For example, if you want to know whether p is a consequence of your theory, you can equivalently ask the transformed logic program.
A few notes on the other answers:
Negation in Prolog (\+) is negation as failure and not first order logic negation
Prolog is a programming language; as correctly pointed out, we should be talking about clausal logic instead.
Left recursion is not a problem. You can easily use a different selection rule, or some other inference mechanism.
