Ok, I have the given relation:
If F(x) is not true then no case satisfies G(x) and H(y,x).
((∀x ¬F(x)) ⇒ ¬(∀y G(y) ∧ H(y,x)))
Now, Can I possibly convert this into:
(∀y G(y) ∧ H(y,x)) ⇒ (∀x F(x)) ????
If not, then the left-hand side essentially has to read:
"If F(x) is not true...", which says nothing about universal or existential quantifiers. Can I take the negation outside of the quantifier, i.e. write it as ¬(∀x F(x))? That would make the job much easier.
I'm not sure this is the right place, but no, you can't.
Moving the negation outside would change the quantifier: ¬(∀x F(x)) is equivalent to ∃x ¬F(x), not to ∀x ¬F(x). Also, the initial formula may not be what you want: the last x is a free variable.
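The difference is easy to see on a small finite domain. A minimal sketch (the domain and the predicate F are made up for illustration):

```python
# Moving a negation inside a quantifier flips it: ¬(∀x F(x)) ≡ ∃x ¬F(x).
# So ¬(∀x F(x)) and ∀x ¬F(x) are NOT equivalent in general.

domain = [1, 2, 3]
F = lambda x: x > 1  # arbitrary predicate: false for 1, true for 2 and 3

not_forall = not all(F(x) for x in domain)   # ¬(∀x F(x))
forall_not = all(not F(x) for x in domain)   # ∀x ¬F(x)
exists_not = any(not F(x) for x in domain)   # ∃x ¬F(x)

print(not_forall)   # True  (x = 1 falsifies F)
print(forall_not)   # False (x = 2 satisfies F)
print(exists_not)   # True  — matches ¬(∀x F(x)), as expected
```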
In predicate logic, why does P(x) and P(f(x)) have no unifiers? One of my solutions is replacing x with f(x), but I'm not sure why I am wrong.
Let's see what happens if you replace x with f(x):
P(x) becomes P(f(x))
P(f(x)) becomes P(f(f(x)))
And the result isn't the same; so it's not a unifier.
In general, when one term occurs inside the other, as here (x versus f(x)), you cannot unify them: whatever you substitute for x changes both terms, so they remain unequal (assuming they were unequal to start with).
Another way to think about this is the so-called occurs check: since x occurs in f(x), the two terms cannot be unified. You can read about the occurs check here: https://en.wikipedia.org/wiki/Occurs_check
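To make the failure concrete, here is a minimal first-order unifier with an occurs check. The term representation (variables as strings, compound terms as tuples) is an illustrative convention of this sketch, not a standard API:

```python
# Terms: a variable is a string; a compound term is (functor, arg1, ...).
# unify returns a substitution dict, or None when unification fails.

def occurs(var, term, subst):
    """True if var occurs anywhere inside term (following bindings)."""
    if term == var:
        return True
    if isinstance(term, str) and term in subst:
        return occurs(var, subst[term], subst)
    if isinstance(term, tuple):
        return any(occurs(var, arg, subst) for arg in term[1:])
    return False

def unify(t1, t2, subst=None):
    if subst is None:
        subst = {}
    if t1 == t2:
        return subst
    if isinstance(t1, str):                      # t1 is a variable
        if t1 in subst:
            return unify(subst[t1], t2, subst)
        if occurs(t1, t2, subst):                # occurs check fails
            return None
        return {**subst, t1: t2}
    if isinstance(t2, str):                      # t2 is a variable
        return unify(t2, t1, subst)
    if t1[0] != t2[0] or len(t1) != len(t2):     # different functors/arity
        return None
    for a, b in zip(t1[1:], t2[1:]):
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

# P(x) vs P(f(x)): x occurs in f(x), so unification fails.
print(unify(('P', 'x'), ('P', ('f', 'x'))))   # None
# P(x) vs P(f(y)): fine, x maps to f(y).
print(unify(('P', 'x'), ('P', ('f', 'y'))))   # {'x': ('f', 'y')}
```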
What is the exact difference between Well-formed formula and a proposition in propositional logic?
There's really not much given about well-formed formulas (WFFs) in my book.
My book says: "Propositions are also called sentences or statements. Another term formulae or well-formed formulae also refer to the same. That is, we may also call Well formed formula to refer to a proposition". Does that mean they both are the exact same thing?
Proposition: a statement which is true or false; easy for people to read, but hard to manipulate using logical equivalences.
WFF: a precise logical statement which is true or false; there should be an official, rigorous definition in your textbook, with a short list of formation rules a formula must follow. Harder for humans to read, but much more precise and easier to manipulate.
Example:
Proposition : All men are mortal
WFF: Let P be the set of people, let M(x) denote "x is a man", and let S(x) denote "x is mortal". Then: for all x in P, M(x) -> S(x).
It is most likely that there is a typo in the book. In the quote "Propositions are also called sentences or statements. Another term formulae or well-formed formulae also refer to the same. That is, we may also call Well formed formula to refer to a preposition", the word "preposition" should be "proposition".
Proposition: a statement which is either true or false, but not both.
Propositional form (needed to understand well-formed formulas): an assertion which contains at least one propositional variable.
Well-formed formula (wff): a propositional form satisfying the following rules; every wff can be derived using them:
If P is a propositional variable, then P is a wff.
If P is a wff, then ~P is a wff.
If P and Q are two wffs, then (P and Q), (P or Q), (P implies Q), and (P is equivalent to Q) are all wffs.
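The formation rules above translate directly into a recursive checker. A sketch, with a made-up term representation (variables as strings, connectives as tagged tuples):

```python
# is_wff implements the three formation rules for well-formed formulas:
#   1. a propositional variable is a wff
#   2. the negation of a wff is a wff
#   3. two wffs joined by a binary connective form a wff

CONNECTIVES = {'and', 'or', 'implies', 'iff'}

def is_wff(f):
    if isinstance(f, str):                       # rule 1: a variable
        return True
    if isinstance(f, tuple):
        if len(f) == 2 and f[0] == 'not':        # rule 2: ~P
            return is_wff(f[1])
        if len(f) == 3 and f[0] in CONNECTIVES:  # rule 3: (P op Q)
            return is_wff(f[1]) and is_wff(f[2])
    return False

print(is_wff(('implies', ('and', 'P', 'Q'), 'R')))   # True: (P ∧ Q) → R
print(is_wff(('and', 'P')))                          # False: 'and' needs two wffs
```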
In a question I asked here: is p(x)⇒∀x.p(x) contingent?
It seems that there's a tendency to agree that p(x)⇒∀x.p(x) is the same as ∀x.(p(x)⇒∀y.p(y)), where ∀x.(p(x)⇒∀y.p(y)) is read as: if p(x) is true for some x, then it is true for all x.
However, I don't understand where the quantifier SOME came from, since there is no quantifier '∃' in '∀x.(p(x)⇒∀y.p(y))'.
Is there some quantifier distribution law that makes the quantifier change in ∀x.(p(x)⇒∀y.p(y))?
there's a tendency to agree that p(x)⇒∀x.p(x) is the same as ∀x.(p(x)⇒∀y.p(y))
No, it isn't the same (the truth of the first depends on x, the truth of the second doesn't); the second is the universal closure of the first. The linked textbook does consider them the same, but it's far from universal. I believe the more common definition is the one in Wikipedia, by which the first is not a sentence.
Is there some quantifier distribution law that makes the quantifier change in ∀x.(p(x)⇒∀y.p(y))?
Yes; if q doesn't depend on x, you can see this chain of equivalences:
∀x.(p(x)⇒q) ≡
∀x.(¬p(x)∨q) ≡
(∀x.¬p(x))∨q ≡
¬(∃x.p(x))∨q ≡
(∃x.p(x))⇒q
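The end-to-end equivalence ∀x.(p(x)⇒q) ≡ (∃x.p(x))⇒q can be checked exhaustively on a small finite domain. A sketch (the three-element domain is arbitrary):

```python
# Enumerate every predicate p over a 3-element domain (2^3 = 8 of them)
# and both truth values of q, and confirm the two sides always agree.

from itertools import product

domain = [0, 1, 2]

for values in product([False, True], repeat=len(domain)):
    p = dict(zip(domain, values))                    # one of the 8 predicates
    for q in (False, True):
        lhs = all((not p[x]) or q for x in domain)   # ∀x (p(x) ⇒ q)
        rhs = (not any(p[x] for x in domain)) or q   # (∃x p(x)) ⇒ q
        assert lhs == rhs

print("equivalent in all 16 cases")
```

This is where the SOME comes from: pulling ∀x out of the antecedent of an implication turns it into ∃x, exactly as the chain of equivalences shows.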
I want to know why the order of quantifiers is important in a logic formula.
When I read books about logic programming, this point is mentioned, but they do not say why.
Could anyone explain with some examples?
Also, how can we determine the order of quantifiers from a given logic formula?
Thanks in advance!
You would be well advised to read a book about first-order logic before the books about
logic programming.
Consider the true statement:
1. Everybody has a mother
Let's formalize it in FOL. To keep it simple, we'll say
that the universe of discourse is the set of people, i.e.
our individual variables x, y, z... range over people. Then
1 becomes:
1F. (x)(Ey)Mother(y,x)
which we can read as: For every person x there exists
some person y such that y is the mother of x.
Now let's swap the order of the universal quantifier (x) and existential
quantifier (Ey):
2F. (Ey)(x)Mother(y,x)
That reads: There is some person y such that for every person x,
y is the mother of x. Or in plain English:
2. There is somebody who is the mother of everybody
You see that swapping the quantifiers changes the meaning of the statement,
taking us from the true statement 1 to the false statement 2. Indeed, to the absurdly false statement
2, which entails that somebody is their own mother.
That's why the order of quantifiers matters.
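The swap can be checked on a tiny finite model. A sketch: the three people and the Mother relation below are made up so that everyone has a mother but no one is everyone's mother.

```python
people = ['alice', 'bob', 'carol']
# Mother(y, x) means y is the mother of x.
mother = {('alice', 'bob'), ('alice', 'carol'), ('carol', 'alice')}

# 1F: (x)(Ey) Mother(y, x) — everybody has a mother
f1 = all(any((y, x) in mother for y in people) for x in people)

# 2F: (Ey)(x) Mother(y, x) — somebody is the mother of everybody
f2 = any(all((y, x) in mother for x in people) for y in people)

print(f1)   # True
print(f2)   # False — swapping the quantifiers changed the truth value
```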
how can we determine order of quantifiers from a given logic formula?
Well, in 1F and 2F, for example, all the variables are already bound by quantifiers,
so there's nothing to determine. The order of the quantifiers is what you see,
left to right.
Suppose one of the variables was free (not bound), e.g.
3F. (Ey)Mother(y,x)
You might read that as: There is someone who is the mother of x, for variable person x.
But this formula really doesn't express any statement. It expresses a unary predicate of persons, the predicate Someone is the mother of x. If you free up the remaining variable:
4F. Mother(x,y)
then you have the binary predicate, or relation: x is the mother of y.
A formula with 1,2,...,n free variables expresses a unary, binary,...,n-ary predicate.
Given a predicate, you can make a statement by binding free variables with quantifiers and/or substituting individual constants for the free variables. From 4F you can make:
(x)(y)Mother(x,y) (Everybody is everybody's mother)
(Ex)(y)Mother(x,y) (Somebody is everybody's mother)
(Ex)(Ey)Mother(x,y) (Somebody is somebody's mother)
(x)Mother(x,Arnold) (Everybody is the mother of Arnold)
(x)Mother(Bernice,x) (Bernice is the mother of everybody)
Mother(Arnold,Bernice) (Arnold is the mother of Bernice)
...
...
and so on ad nauseam.
What this should make clear is that if a formula has free variables, and therefore expresses
a predicate, the formula as such does not imply any particular way of quantifying
the free variables, or that they should be quantified at all.
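The different ways of binding the free variables of 4F can also be played out on a finite model. A sketch, with a made-up relation in which Bernice mothers Arnold and Carl:

```python
people = ['arnold', 'bernice', 'carl']
mother = {('bernice', 'arnold'), ('bernice', 'carl')}

M = lambda x, y: (x, y) in mother      # the predicate 4F: Mother(x, y)

# (x)(y)Mother(x,y): everybody is everybody's mother
print(all(M(x, y) for x in people for y in people))        # False
# (Ex)(y)Mother(x,y): somebody is everybody's mother
print(any(all(M(x, y) for y in people) for x in people))   # False (Bernice isn't her own mother)
# (Ex)(Ey)Mother(x,y): somebody is somebody's mother
print(any(M(x, y) for x in people for y in people))        # True
# Mother(Bernice, Arnold): constants substituted for both free variables
print(M('bernice', 'arnold'))                              # True
```

Each line quantifies or instantiates the same two-place predicate differently, and each produces a different statement with its own truth value.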
I want to use a universal quantifier in the body of a predicate rule, i.e., something like
A(x,y) <- ∀B(x,a), C(y,a).
It means that A(x,y) is true only if B(x,a) holds for every a such that C(y,a) holds.
Since in Datalog every variable bound in a rule body is existentially quantified by default, a would be existentially quantified too. What should I do to express a universal quantifier in the body of a predicate rule?
Thank you.
P.S. The Datalog engine I am using is logicblox.
The basic idea is to use the logical axiom
∀x φ(x) ⇔ ¬∃x ¬φ(x)
to put your rules in a form where only existential quantifiers are required (along with negation). Intuitively, this usually means computing the complement of your answer first, and then computing its complement to produce the final answer.
For example, suppose you are given a graph G(V,E) and you want to find the vertices which are adjacent to all others in the graph. If universal quantification were allowed in a Datalog rule body, you might write something like
Q(x) <- ∀y E(x,y).
To write this without the universal quantifier, you first compute the vertices which are not adjacent to all others
NQ(x) <- V(x), V(y), !E(x,y).
then return its complement as the answer
Q(x) <- V(x), !NQ(x).
The same kind of trick can be used in SQL, which also lacks universal quantifiers.
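As a sanity check, the same complement trick can be simulated over sets in Python. The graph below is made up; self-loops are included so that the literal reading of ∀y E(x,y) can hold at all:

```python
# Q(x) should hold when x has an edge to every vertex (including itself).
V = {1, 2, 3, 4}
E = {(1, 1), (1, 2), (1, 3), (1, 4), (2, 1), (2, 3)}

# NQ(x) <- V(x), V(y), !E(x,y): x misses an edge to some y
NQ = {x for x in V for y in V if (x, y) not in E}

# Q(x) <- V(x), !NQ(x): complement of the complement
Q = V - NQ

print(Q)   # {1} — only vertex 1 has edges to every vertex
```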