I want to use a universal quantifier in the body of a predicate rule, i.e., something like
A(x,y) <- ∀B(x,a), C(y,a).
The intended meaning is: A(x,y) is true only if B(x,a) holds for every a such that C(y,a) holds.
Since in Datalog every variable bound in a rule body is existentially quantified by default, a would be existentially quantified too. What should I do to express a universal quantifier in the body of a predicate rule?
Thank you.
P.S. The Datalog engine I am using is LogicBlox.
The basic idea is to use the logical axiom
∀x φ(x) ⇔ ¬∃x ¬φ(x)
to put your rules in a form where only existential quantifiers are required (along with negation). Intuitively, this usually means computing the complement of your answer first, and then computing its complement to produce the final answer.
For example, suppose you are given a graph G(V,E) and you want to find the vertices which are adjacent to all others in the graph. If universal quantification were allowed in a Datalog rule body, you might write something like
Q(x) <- ∀y E(x,y).
To write this without the universal quantifier, you first compute the vertices which are not adjacent to all others
NQ(x) <- V(x), V(y), !E(x,y).
then return its complement as the answer
Q(x) <- V(x), !NQ(x).
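Applied to the rule in the question, the same rewriting might look roughly like this (a sketch; Dx and Dy stand for whatever domain predicates bind x and y, since every variable in a negated atom must also occur positively):
NotA(x,y) <- Dx(x), C(y,a), !B(x,a).
A(x,y) <- Dx(x), Dy(y), !NotA(x,y).
Here NotA(x,y) holds when some a satisfies C(y,a) but not B(x,a); A(x,y) then holds exactly when no such counterexample exists.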
The same kind of trick can be used in SQL, which also lacks universal quantifiers.
When attempting to solve logic problems on a computer, it is usual to first convert them to CNF, because the best solving algorithms expect CNF as input.
For propositional logic, the textbook rules for this conversion are simple, but if you apply them as is, the result is one of the very rare cases where a program encounters double exponential resource consumption without being specifically constructed to do so:
a <=> (b <=> (c <=> ...))
with N variables, generates on the order of 2^(2^N) clauses: one exponential blowup in the conversion of equivalence to AND/OR, and another in the distribution of OR into AND.
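To see where the first blowup comes from, take the three-variable instance and apply the textbook rewrite of p <=> q into (p & q) | (~p & ~q), innermost equivalence first:
b <=> c            becomes   (b & c) | (~b & ~c)
a <=> (b <=> c)    becomes   (a & ((b & c) | (~b & ~c))) | (~a & ~((b & c) | (~b & ~c)))
Each level copies the already-expanded argument twice, once positively and once under a negation, so the AND/OR formula grows roughly as 2^N; distributing | over & in that formula is the second blowup.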
The solution to this is to rename subterms. If we rewrite the above as something like
r <=> (c <=> ...)
a <=> (b <=> r)
where r is a fresh symbol that is being defined to be equal to a subterm - in general, we may need O(N) such symbols - the exponential blowups can be avoided.
Unfortunately, this runs into a problem when we try to extend it to first-order logic. Using TPTP notation where ? means 'there exists' and variables begin with capital letters, consider
a <=> ?[X]:p(X)
Admittedly this case is simple enough that there is no actual need to rename the subterm, but it's necessary to use a simple case for illustration, so suppose we are using an algorithm that just automatically renames arguments of the equivalence operator; the point generalizes to more complex cases.
If we try the above trick and rename the ? subterm, we get
r <=> ?[X]:p(X)
Existential variables are converted to Skolem symbols, so that ends up as
r <=> p(s)
The original formula then expands to
(~a | r) & (a | ~r)
Which is by construction equivalent to
(~a | p(s)) & (a | ~p(s))
But this is not correct! Suppose we had not done the renaming, but just expanded the original formula as it was, we would get
(~a | ?[X]:p(X)) & (a | ~?[X]:p(X))
(~a | ?[X]:p(X)) & (a | ![X]:~p(X))
(~a | p(s)) & (a | ~p(X))
which is critically different from the version we got with the renaming.
The problem is that equivalence needs both the positive and negative versions of each argument, but applying negation to terms that contain universal or existential quantifiers structurally changes those terms; you cannot just encapsulate them in a definition and then apply the negation to the defined symbol.
The upshot of this is that when you have equivalence and the arguments may contain such quantifiers, you actually need to recur through each argument twice, once for the positive version, once for the negative. This suffices to bring back the exponential blowup we hoped to avoid by doing the renaming. As far as I can see, this problem is not caused by the way a particular algorithm works, but by the nature of the task.
So my question:
Given an input formula that may contain arbitrary nesting of equivalence and quantifiers, is there any algorithm that will correctly turn this to CNF with a polynomial rather than exponential number of clauses?
As you observed, an existential such as ∃X.p(X) is not in fact equivalent to a Skolemized expression p(S). Its negation ¬∃X.p(X) is not equivalent to ¬p(S), but to ∀Y.¬p(Y).
Possible approaches that avoid the exponential blow-up:
Convert existentials such as ∃X.p(X) to negated universals such as ¬∀Y.¬p(Y), or vice versa, so you have a canonical form. Skolemize at a later step.
Remember when you convert that your p(S) is a Skolemized existential, and that its negation is ∀Y.¬p(Y).
Define terms equivalent to universals and existentials, such that a represents ∀Y.p(Y) and ¬a then represents ¬∀Y.p(Y), or equivalently, ∃X.¬p(X).
Use the symmetry of Boolean duals: the same transformations apply with AND and OR swapped (De Morgan's laws), and existentials are equivalent to negated universals, which restores the symmetry between the expansions of r and ~r. The negations introduced by the universal/existential conversion and by De Morgan's laws cancel each other out, and the AND/OR duality means you can re-use the expansion of r to generate the expansion of ~r mechanically.
Given that you need to support ALL and NOT ALL statements anyway, this should not create any new problems. Just canonicalize and use the same approach you would for a universal.
If you’re solving by converting to SAT, your terms can represent universals, too. So, in your example, you’re trying to replace a with r, but you can still use ~a, equivalent to the negative universal.
In your expressions, you'd still use (~a | r) & (a | ~r), but expand ~r to its correct value rather than the incorrect one. That example is trivial, since that's just ~a, but you'd normally define r as equivalent to a more complex transformation, and in that case you need to remember what both r and ~r represent. It is not really a simple mechanical transformation of the Skolemized expression.
In this example, I’m not sure why it’s a problem that (~a | r) & (a | ~r) is equivalent to (~a | r) & (a | ~a), which simplifies to (~a | r). That’s not going to give you exponential blow-up? When you translate back to first-order predicate logic, make the correct translation.
Update
Thanks for clarifying what the problem was in chat. As I currently think I understand it, what you have is an equivalence with a left and a right side, which contains other nested equivalences, and you want to expand both the equivalence and its negation. The problem is that, because the negation does not have symmetrical form, you need to recurse twice for each nested equivalence in the tree, once when expanding the equivalence and once when expanding its negation?
You should define a transformation that generates the negative expansion from the positive expansion in linear time, and divide-and-conquer the expressions containing nested equivalences using that. This seems to be what you were after with the ~p(S) transformation.
To do this, you recall that ¬∃X.p(X) is equivalent to ∀X.¬p(X), and vice versa. Then if you've expanded p(X) into normal form as conjunctions and disjunctions, De Morgan's laws let you turn an expression like ¬(a ∨ ¬b) into ¬a ∧ b. The inner ¬ on the quantifier transformation and the outer ¬ on the De Morgan transformation cancel each other out. Finally, the dual of any Boolean equivalence remains valid when you replace each ∨ and ∧ with the other and any atom a or ¬a with its inverse.
So, while I might be making an error, especially at 1 AM, it looks to me like what you want is the dual transformation that substitutes:
An outer ∃ for ∀ and vice versa
∧ for ∨ and vice versa
Each term t with ¬t and vice versa
Apply this to the expansion of the positive equivalence to generate the negative dual in time proportional to its length, without further recursion.
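As a rough sketch of that dual transformation in Prolog, over a hypothetical term representation all(X,F), exists(X,F), and(F,G), or(F,G), not(A), assuming negation only occurs on atoms (i.e. the positive expansion is already in negation normal form) and anything not matching these functors is treated as an atom:
dual(all(X,F),    exists(X,G)) :- !, dual(F,G).
dual(exists(X,F), all(X,G))    :- !, dual(F,G).
dual(and(F,G),    or(DF,DG))   :- !, dual(F,DF), dual(G,DG).
dual(or(F,G),     and(DF,DG))  :- !, dual(F,DF), dual(G,DG).
dual(not(A), A) :- !.   % ~t becomes t
dual(A, not(A)).        % an atom t becomes ~t
This runs in time linear in the size of the already-expanded positive formula, which is the point: the negative expansion is produced without recursing through the nested equivalences a second time.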
What is the exact difference between Well-formed formula and a proposition in propositional logic?
There's really not much given about Wff in my book.
My book says: "Propositions are also called sentences or statements. Another term formulae or well-formed formulae also refer to the same. That is, we may also call Well formed formula to refer to a proposition". Does that mean they both are the exact same thing?
Proposition: a statement which is true or false; easy for people to read, but hard to manipulate using logical equivalences.
WFF: a precise logical statement which is true or false; there should be an official, rigorous definition in your textbook. There are four rules they must follow. Harder for humans to read, but much more precise and easier to manipulate.
Example:
Proposition : All men are mortal
WFF: Let P be the set of people, M(x) denote "x is a man", and S(x) denote "x is mortal". Then: for all x in P, M(x) -> S(x)
It is most likely that there is a typo in the book. In the quote "Propositions are also called sentences or statements. Another term formulae or well-formed formulae also refer to the same. That is, we may also call Well formed formula to refer to a preposition", the word "preposition" should be "proposition".
Proposition: a statement which is either true or false, but not both.
Propositional form (necessary to understand "well-formed formula"): an assertion which contains at least one propositional variable.
Well-formed formula: a propositional form satisfying the following rules; any wff (well-formed formula) can be derived using these rules:
If P is a propositional variable then it is a wff.
If P is a wff, then ~P is a wff.
If P and Q are two wffs, then (P and Q), (P or Q), (P implies Q), (P is equivalent to Q) are all wffs.
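Read as a program, these rules amount to a recognizer. For instance, a small Prolog sketch (hypothetical functors neg/1, and/2, or/2, implies/2, equiv/2 for formulas, with the propositional variables listed by prop_var/1):
prop_var(p).  prop_var(q).  prop_var(r).

wff(P)            :- prop_var(P).
wff(neg(P))       :- wff(P).
wff(and(P,Q))     :- wff(P), wff(Q).
wff(or(P,Q))      :- wff(P), wff(Q).
wff(implies(P,Q)) :- wff(P), wff(Q).
wff(equiv(P,Q))   :- wff(P), wff(Q).
For example, wff(implies(and(p, neg(q)), r)) succeeds, while wff(neg(s)) fails because s is not a listed propositional variable.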
Is there any logical difference between these two implementations of a variant predicate?
variant1(X,Y) :-
subsumes_term(X,Y),
subsumes_term(Y,X).
variant2(X_,Y_) :-
copy_term(X_,X),
copy_term(Y_,Y),
numbervars(X, 0, N),
numbervars(Y, 0, N),
X == Y.
Neither variant1/2 nor variant2/2 implement a test for being a syntactic variant. But for different reasons.
The goal variant1(f(X,Y),f(Y,X)) should succeed but fails. For some cases where the same variable appears on both sides, variant1/2 does not behave as expected. To fix this, use:
variant1a(X, Y) :-
copy_term(Y, YC),
subsumes_term(X, YC),
subsumes_term(YC, X).
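For example, on a system that provides subsumes_term/2 (e.g. SWI-Prolog):
?- variant1(f(X,Y), f(Y,X)).
false.

?- variant1a(f(X,Y), f(Y,X)).
true.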
The goal variant2(f('$VAR'(0),_),f(_,'$VAR'(0))) should fail but succeeds. Clearly, variant2/2 assumes that no '$VAR'/1 terms occur in its arguments.
ISO/IEC 13211-1:1995 defines variants as follows:
7.1.6.1 Variants of a term
Two terms are variants if there is a bijection s of the
variables of the former to the variables of the latter such that
the latter term results from replacing each variable X in the
former by Xs.
NOTES
1 For example, f(A, B, A) is a variant of f(X, Y, X),
g(A, B) is a variant of g(_, _), and P+Q is a variant of
P+Q.
2 The concept of a variant is required when defining bagof/3
(8.10.2) and setof/3 (8.10.3).
Note that the Xs above is not a variable name but rather (X)s. So s is here a bijection, which is a special case of a substitution.
Here, all examples refer to typical usages in bagof/3 and setof/3 where variables happen to be always disjoint, but the more subtle case is when there are common variables.
In logic programming, the usual definition is rather:
V is a variant of T iff there exists σ and θ such that
Vσ and T are identical
Tθ and V are identical
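For example, f(X,Y) and f(Y,X) are variants in this sense: with σ = θ = { X ↦ Y, Y ↦ X } (applied simultaneously), f(X,Y)σ is identical to f(Y,X), and f(Y,X)θ is identical to f(X,Y).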
In other words, they are variants if both match each other. However, the notion of matching is pretty alien to Prolog programmers, that is, the notion of matching as used in formal logic. Here is a case which lets many Prolog programmers panic:
Consider f(X) and f(g(X)). Does f(g(X)) match f(X) or not? Many Prolog programmers will now shrug their shoulders and mumble something about the occurs-check. But this is entirely unrelated to the occurs-check. They match, yes, because
f(X){ X ↦ g(X) } is identical to f(g(X)).
Note that this substitution replaces every occurrence of X with g(X). How can this happen? In fact, it cannot happen with Prolog's typical term representation as a graph in memory. In Prolog the variable X is an actual address in memory, and you cannot perform such an operation at all. But in logic things are on a purely textual level. It's just like
sed 's/\<X\>/g(X)/g'
except that one can also replace variables simultaneously. Think of { X ↦ Y, Y ↦ X}. They have to be replaced at once, otherwise f(X,Y) would shrink into f(X,X) or f(Y,Y).
So this definition, while formally perfect, relies on notions that have no direct correspondence in Prolog systems.
Similar problems happen when one-sided unification is considered, which is not matching but the common case between unification and matching.
According to ISO/IEC 13211-1:1995 Cor.2:2012 (draft):
8.2.4 subsumes_term/2
This built-in predicate provides a test for syntactic one-sided unification.
8.2.4.1 Description
subsumes_term(General, Specific) is true iff there is a substitution θ such that
a) Generalθ and Specificθ are identical, and
b) Specificθ and Specific are identical.
Procedurally, subsumes_term(General, Specific) simply
succeeds or fails accordingly. There is no side effect or
unification.
For your definition of variant1/2, subsumes_term(f(X,Y),f(Y,X)) already fails.
Ok, I have the given relation:
If F(x) is not true then no case satisfies G(x) and H(y,x).
((∀x ¬F(x)) ⇒ ¬(∀y G(y) ∧ H(y,x)))
Now, Can I possibly convert this into:
(∀y G(y) ∧ H(y,x)) ⇒ (∀x F(x)) ????
If not: the left-hand side essentially just says "If F(x) is not true...", which mentions nothing about universal or existential quantifiers. Can I take the negation outside of the quantifier, i.e. write it as ¬(∀x F(x)), because this would make the job much easier?
I'm not sure this is the right place, but no, you can't.
Moving the negation out would change the quantifier. Also, the initial formula may not be what you want: the last x is a free variable.
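The relevant equivalences are ¬(∀x F(x)) ⇔ ∃x ¬F(x) and ∀x ¬F(x) ⇔ ¬(∃x F(x)), so ∀x ¬F(x) is not the same as ¬(∀x F(x)). A concrete counterexample: over the domain {1, 2} with F true of 1 only, ¬(∀x F(x)) holds (F fails at 2), but ∀x ¬F(x) does not (F holds at 1).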
So I'm trying to understand how Datalog works and one of the differences between it and Prolog is that it has stratification limitations placed upon negation and recursion.
To quote Wikipedia:
If a predicate P is positively derived from a predicate Q (i.e., P is
the head of a rule, and Q occurs positively in the body of the same
rule), then the stratification number of P must be greater than or
equal to the stratification number of Q
If a predicate P is derived from a negated predicate Q (i.e., P is the
head of a rule, and Q occurs negatively in the body of the same rule),
then the stratification number of P must be greater than the
stratification number of Q,
So, going by this, the two following rules do not result in a stratification error, as their predicates can simply be assigned the same stratification number. So these rules are fine, despite the circular definition.
A(x) :- B(x)
B(x) :- A(x)
But contrast that with what happens if we have a definition which has some negation involved (Where ~ is negation)
A(x) :- ~ B(x)
B(x) :- ~ A(x)
Here a stratification is impossible: A must have a stratification number greater than that of B, and B must have a stratification number greater than that of A. My first thought was that this was not okay because it is a circular definition, but stratification is fine with circularity so long as the predicates are not negated. But why? Truth values are simply binary. It seems extremely arbitrary to treat formulas which have a negation symbol differently in this manner. What is this stratification trying to prevent in the second case which isn't in the first?
I think the problem with:
A(x) :- \+ B(x)
B(x) :- \+ A(x)
...is that it has ambiguous semantics. For any given x, this program has two minimal models, {A(x)} and {B(x)}, so it is not well-defined under the model-theoretic semantics (no unique minimal model), and the bottom-up fixed-point construction does not converge to a least fixed point either.
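To make this concrete, ground the program over a one-element domain (a sketch; D is a hypothetical domain predicate added to make the rules safe):
A(x) :- D(x), \+ B(x).
B(x) :- D(x), \+ A(x).
D(c).
Both {D(c), A(c)} and {D(c), B(c)} are minimal models, neither contained in the other, and the naive bottom-up iteration just oscillates between deriving A(c) together with B(c) and deriving neither.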
In order to address this problem, stratified semantics for Datalog imposes restrictions on the syntax of Datalog programs such that, if a stratification exists for the program, then it will also have a unique, minimal model in both the fixed point and model theoretic semantics (and vice-versa, I believe).
You can find more on the precise details of stratified semantics for Datalog in the text "Foundations of Databases" by Serge Abiteboul, Richard Hull, and Victor Vianu, which happens to be freely available online; the relevant detail is in Chapter 15. This excellent text also explains most of the other terms I've used above, like model, fixed point, etc., if you're stuck.