Express the following statements as formulas in first-order predicate logic

Let:
• B(x) for “x has bifurcated horns”
• D(x) for “x suffers from dermal asthenia”
• F(x) for “x is female”
• M(x, y) for “x is the mother of y”
• S(x) for “x is Syldavian”
• U(x) for “x is a unicorn”
How do I express
1) "Mother unicorns with dermal asthenia pass the condition on to all their offspring"
2) "Any unicorn whose mother is Syldavian suffers from dermal asthenia"
in first-order predicate logic?
My attempt
1)
There exists an x such that, for all y:
if x is the mother of y
and x is a unicorn
and x has dermal asthenia,
then y has dermal asthenia too.
∃x∀y( (M(x,y) ∧ U(x) ∧ D(x)) -> D(y) )
2)
For all x and y:
if y is a unicorn
and x is the mother of y
and x is Syldavian,
then y has dermal asthenia.
∀x∀y( (U(y) ∧ M(x,y) ∧ S(x)) -> D(y) )
Any help would be appreciated, especially on when to use ∀ and when to use ∃.
Thank you.

"Mother unicorns with dermal asthenia pass the condition on to all their offspring"
∀x∀y((M(x,y) ∧ U(x) ∧ D(x)) -> D(y))
"Any unicorn whose mother is Syldavian suffers from dermal asthenia"
∀x∀y((M(x,y) ∧ U(y) ∧ S(x)) -> D(y))
Here there are no phrases like "there exists" or "at least one". These statements are about all unicorns, so we do not use ∃.
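Over a finite domain you can brute-force such ∀∀ formulas to sanity-check them. A minimal Python sketch; the creatures and data below are invented purely for illustration:

```python
from itertools import product

# Hypothetical finite domain, invented to illustrate the check.
unicorn   = {"Ava": True, "Bo": True, "Cy": False}
asthenia  = {"Ava": True, "Bo": True, "Cy": False}
mother_of = {("Ava", "Bo")}           # M(x, y): x is the mother of y

U = unicorn.__getitem__
D = asthenia.__getitem__
M = lambda x, y: (x, y) in mother_of
domain = unicorn.keys()

# ∀x ∀y ((M(x,y) ∧ U(x) ∧ D(x)) -> D(y)), with -> written as ¬p ∨ q
stmt1 = all((not (M(x, y) and U(x) and D(x))) or D(y)
            for x, y in product(domain, repeat=2))
print(stmt1)  # True: every offspring of an affected unicorn mother is affected
```

Note how the universal claim becomes `all(...)` over every pair; an existential claim would become `any(...)` instead, which mirrors the ∀-vs-∃ choice discussed above.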

First order logic - position of quantifier

When you have a statement with →, does it matter whether you place a quantifier before or after the implication?
e.g. the statement "Every man loves a king" (2 different semantic interpretations):
Every man loves a king and these kings may all differ from each other. ∀x (IsMan(x) → ∃y (IsKing(y) ∧ Loves(x,y)))
There is a single king that every man loves. ∃y, ∀x (IsKing(y) ∧ IsMan(x)) → Loves(x,y)
For #1, would it be equally correct to write it as ∀x, ∃y, (IsMan(x) ∧ IsKing(y)) → Loves(x,y)?
And for #2, what about ∃y IsKing(y) → ∀x (IsMan(x) ∧ Loves(x,y))?
Yes, the order of quantifiers matters for the satisfiability/validity of a formula.
One way to be sure of this is to remember that (A → B) is the same as (¬A ∨ B), and that ¬(∀x P(x)) is the same as ∃x ¬P(x).
Thus ((∃x P(x)) → (∀y Q(y))) is the same as (∀x ¬P(x)) ∨ (∀y Q(y)), i.e. ∀x ∀y (¬P(x) ∨ Q(y)).
This is different from ∃x (P(x) → ∀y Q(y)), which becomes (∃x ¬P(x)) ∨ (∀y Q(y)).
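To make the difference concrete, the two readings of "Every man loves a king" can be brute-forced over a small finite domain. A Python sketch; the men, kings and loves data are invented for illustration:

```python
men   = {"alice", "bob"}    # hypothetical domain of men
kings = {"rex", "lex"}      # hypothetical domain of kings
loves = {("alice", "rex"), ("bob", "lex")}  # each man loves a different king

# ∀x ∃y: every man loves some king (the king may differ per man)
forall_exists = all(any((m, k) in loves for k in kings) for m in men)

# ∃y ∀x: there is one single king whom every man loves
exists_forall = any(all((m, k) in loves for m in men) for k in kings)

print(forall_exists, exists_forall)  # True False — the order changes the meaning
```

Since each man loves a different king, the ∀∃ reading holds but the ∃∀ reading fails, which is exactly the asymmetry the answer describes.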

Reduce Lambda Term to Normal Form

I just learned about lambda calculus and I'm having issues trying to reduce
(λx. (λy. y x) (λz. x z)) (λy. y y)
to its normal form. I get to (λy. y (λy. y y) (λz. (λy. y y) z) then get kind of lost. I don't know where to go from here, or if it's even correct.
(λx. (λy. y x) (λz. x z)) (λy. y y)
As #ymonad notes, one of the y parameters needs to be renamed to avoid capture (conflating different variables that only coincidentally share the same name). Here I rename the latter instance (using α-equivalence):
(λx. (λy. y x) (λz. x z)) (λm. m m)
Next step is to β-reduce. In this expression we can do so in one of two places: we can either reduce the outermost application (λx) or the inner application (λy). I'm going to do the latter, mostly on arbitrary whim / because I thought ahead a little bit and think it will result in shorter intermediate expressions:
(λx. (λz. x z) x) (λm. m m)
Still more β-reduction to do. Again I'm going to choose the inner expression because I can see where this is going, but it doesn't actually matter in this case, I'll get the same final answer regardless:
(λx. x x) (λm. m m)
Side note: these two lambda expressions (each known as the "Mockingbird", as per Raymond Smullyan) are actually α-equivalent, and the entire expression is the (in)famous Ω-combinator. If we ignore all that, however, and apply another β-reduction:
(λm. m m) (λm. m m)
Ah, that's still β-reducible. Or is it? This expression is α-equivalent to the previous. Oh dear, we appear to have found ourselves stuck in an infinite loop, as is always possible in Turing-complete (or should we say Lambda-complete?) languages. One might denote this as our original expression equalling "bottom" (in Haskell parlance), denoted ⊥:
(λx. (λy. y x) (λz. x z)) (λy. y y) = ⊥
Is this a mistake? Well, some good LC theory to know is:
if an expression has a β-normal form, then it will be the same β-normal form no matter what order of reductions was used to reach it, and
if an expression has a β-normal form, then normal order evaluation is guaranteed to find it.
So what is normal order? In short, it is β-reducing the outermost expression at every step. Let's take this expression for another spin!
(λx. (λy. y x) (λz. x z)) (λm. m m)
(λy. y (λm. m m)) (λz. (λm. m m) z)
(λz. (λm. m m) z) (λm. m m)
(λm. m m) (λm. m m)
Darn. Looks like this expression has no normal form – it diverges (doesn't terminate).
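As an aside, the divergence of Ω can be observed directly in any language with first-class functions. A tiny Python sketch, using Python's call stack as a stand-in for β-reduction:

```python
omega = lambda m: m(m)   # the Mockingbird: a function that applies its argument to itself

try:
    omega(omega)         # Ω: the Mockingbird applied to itself
    diverged = False
except RecursionError:   # the stack gives out where the lambda calculus loops forever
    diverged = True
print(diverged)  # True
```

Each call to `omega(omega)` immediately makes the same call again, so the interpreter's recursion limit trips — the operational analogue of the β-reduction loop above.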

Can you help me transform a universally quantified FO formula into an equivalent "not exists" formula?

I have this formula:
∀x ∀y (A(x,y) ∨ A(y,x) → B(y,c1) ∧ B(x,c2) ∧ c1≠c2)
How do I transform it into an equivalent formula that uses the existential quantifier?
∀x ∀y X is the same as ¬∃x ∃y ¬X
'X → Y' is the same as 'There is no counterexample when X but not Y'
¬(A(x,y) ∨ A(y,x) → B(y,c1) ∧ B(x,c2) ∧ c1≠c2) = (A(x,y) ∨ A(y,x)) ∧ ¬(B(y,c1) ∧ B(x,c2) ∧ c1≠c2) — our counterexample. If we push the negation into the second part and collect everything together, we get:
¬∃x ∃y ((A(x,y) ∨ A(y,x)) ∧ (¬B(y,c1) ∨ ¬B(x,c2) ∨ c1 = c2))
Update: replaced ¬∃x ¬∃y with ¬∃x ∃y. I suppose that's what you originally meant, right?
When you want to make that change, you basically want to find the opposite of what the inner statement says, because if a statement is true for every x, then its opposite never happens; "not exists" means exactly that: there is no x that makes the negated statement true.
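Mechanically, you can check that the ∀∀ original and the ¬∃∃ rewrite agree by brute force over a small finite domain. A Python sketch; the interpretations of A, B, c1 and c2 below are invented purely for illustration:

```python
from itertools import product

domain = [0, 1, 2]
A = lambda x, y: x < y                 # hypothetical interpretation of A
B = lambda x, c: (x + c) % 2 == 0      # hypothetical interpretation of B
c1, c2 = 0, 1

def implies(p, q):
    return (not p) or q

# ∀x ∀y (A(x,y) ∨ A(y,x) → B(y,c1) ∧ B(x,c2) ∧ c1≠c2)
original = all(implies(A(x, y) or A(y, x),
                       B(y, c1) and B(x, c2) and c1 != c2)
               for x, y in product(domain, repeat=2))

# ¬∃x ∃y ((A(x,y) ∨ A(y,x)) ∧ (¬B(y,c1) ∨ ¬B(x,c2) ∨ c1 = c2))
rewritten = not any((A(x, y) or A(y, x)) and
                    (not B(y, c1) or not B(x, c2) or c1 == c2)
                    for x, y in product(domain, repeat=2))

print(original == rewritten)  # True: the two forms agree
```

Because the rewrite is a logical equivalence, the two evaluations agree for any choice of interpretation, not just this one.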

Find unique tuples in a relation represented by a BDD

Let's suppose I use a BDD to represent the set of tuples in a relation. For simplicity consider tuples with 0 or 1 values. For instance:
R = {<0,0,1>, <1,0,1>, <0,0,0>} represents a ternary relation as a BDD over three variables, say x, y and z (one for each element of a tuple). What I want is to implement an operation that, given a BDD like the one for R and a cube C, returns the subset of R containing the tuples that are unique once the variables in C are abstracted away.
For instance, if C contains the variable x (which represents the first element in each tuple) the result of the function must be {<0,0,0>}. Notice that when x is abstracted away tuples <0,0,1> and <1,0,1> become "the same".
Now suppose R = {<0,0,1>, <1,0,1>, <0,0,0>, <1,0,0>} and we want to abstract x again. In this case I would expect the constant false as result because there is no unique tuple in R after abstracting x.
Any help is highly appreciated.
This can be done in three simple steps:
Make two BDDs with the variable you want to abstract restricted to each value:
R[x=0] = restrict R with x = 0
R[x=1] = restrict R with x = 1
Apply the XOR operation to these new BDDs:
Q = R[x=0] xor R[x=1]
Enumerate all models of Q.
Applying this to your examples:
R = {<0,0,1>, <1,0,1>, <0,0,0>} = (¬x ∧ ¬y ∧ z) ∨ (x ∧ ¬y ∧ z) ∨ (¬x ∧ ¬y ∧ ¬z)
R[x=1] = {<0,1>} = (¬y ∧ z)
R[x=0] = {<0,1>,<0,0>} = (¬y ∧ z) ∨ (¬y ∧ ¬z)
Q = R[x=1] xor R[x=0] = (¬y ∧ ¬z)
Intuition here is that XOR will cancel entries that occur in both BDDs.
This is easily (but with exponential complexity) generalized to the case with several abstracted variables.
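The three steps above can be prototyped without a BDD library by using plain sets of bit-tuples as a stand-in: restriction becomes filtering on a column, and XOR becomes symmetric difference. A minimal Python sketch (the function name and representation are mine, not from any BDD package):

```python
def unique_after_abstracting(R, pos):
    """Return the tuples of R that are unique once column `pos` is abstracted.

    Sets of tuples stand in for BDDs; restrict + XOR mirrors the BDD recipe:
    r0, r1 are R[x=0] and R[x=1] with the abstracted column dropped, and the
    symmetric difference keeps exactly the residual tuples occurring in one
    restriction but not the other."""
    r0 = {t[:pos] + t[pos + 1:] for t in R if t[pos] == 0}  # R[x=0]
    r1 = {t[:pos] + t[pos + 1:] for t in R if t[pos] == 1}  # R[x=1]
    q = r0 ^ r1  # XOR cancels residual tuples present in both restrictions
    return {t for t in R if t[:pos] + t[pos + 1:] in q}

R = {(0, 0, 1), (1, 0, 1), (0, 0, 0)}
print(unique_after_abstracting(R, 0))   # {(0, 0, 0)}

R2 = {(0, 0, 1), (1, 0, 1), (0, 0, 0), (1, 0, 0)}
print(unique_after_abstracting(R2, 0))  # set()
```

Both example results match the expected answers from the question: abstracting x makes <0,0,1> and <1,0,1> collide, leaving only <0,0,0> unique in the first relation and nothing unique in the second.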

Existential search and query without the fuss

Is there an extensible, efficient way to write existential statements in Haskell without implementing an embedded logic programming language? Oftentimes when I'm implementing algorithms, I want to express existentially quantified first-order statements like
∃x.∃y.x,y ∈ xs ∧ x ≠ y ∧ p x y
where ∈ is overloaded on lists. If I'm in a hurry, I might write perspicuous code that looks like
find p [] = False
find p (x:xs) = any (\y -> x /= y && (p x y || p y x)) xs || find p xs
or
find p xs = or [ x /= y && (p x y || p y x) | x <- xs, y <- xs]
But this approach doesn't generalize well to queries returning values or predicates or functions of multiple arities. For instance, even a simple statement like
∃x.∃y.∃z. x,y,z ∈ xs ∧ x ≠ y ≠ z ∧ f x y z = g x y z
requires writing another search procedure. And this means a considerable amount of boilerplate code. Of course, languages like Curry or Prolog that implement narrowing or a resolution engine allow the programmer to write statements like:
find(p,xs,z) = x ∈ xs & y ∈ xs & x =/= y & f x y =:= g x y =:= z
to abuse the notation considerably, which performs both a search and returns a value. This problem arises often when implementing formally specified algorithms, and is often solved by combinations of functions like fmap, foldr, and mapAccum, but mostly explicit recursion. Is there a more general and efficient, or just general and expressive, way to write code like this in Haskell?
There's a standard transformation that allows you to convert
∃x ∈ xs : P
to
exists (\x -> P) xs
If you need to produce a witness you can use find instead of exists.
The real nuisance of doing this kind of abstraction in Haskell as opposed to a logic language is that you really must pass the "universe" set xs as a parameter. I believe this is what brings in the "fuss" to which you refer in your title.
Of course you can, if you prefer, stuff the universal set (through which you are searching) into a monad. Then you can define your own versions of exists or find to work with the monadic state. To make it efficient, you can try Control.Monad.Logic, but it may involve breaking your head against Oleg's papers.
Anyway, the classic encoding is to replace all binding constructs, including existential and universal quantifiers, with lambdas, and proceed with appropriate function calls. My experience is that this encoding works even for complex nested queries with a lot of structure, but that it always feels clunky.
Maybe I don't understand something, but what's wrong with list comprehensions? Your second example becomes:
[ (x,y,z) | x <- xs, y <- xs, z <- xs
          , x /= y && y /= z && x /= z
          , f x y z == g x y z ]
This also lets you return values; to check whether the formula is satisfiable, just test the result with null (thanks to laziness, no more of the list is evaluated than needed).