Knights and Knaves Logical Proposition - logic

I have a question regarding Knights and Knaves and logical propositions. In the puzzle there are two kinds of citizens: knights, who always tell the truth, and knaves, who always tell lies. On the basis of utterances from some citizens, I must decide what kind they are.
There are three citizens, a, b and c, who are talking about themselves:
a says: "All of us are knaves."
b says: "Exactly one of us is a knight."
To solve the puzzle I should determine what kind of citizen each of a, b and c is. I am supposed to solve the puzzle by modelling the two utterances above in propositional logic, and I assume that I can use p to describe a knight and ¬p to describe a knave. How would I go about doing that? Any hint for someone who hasn't done much discrete mathematics in college?

A and C are Knaves. B is a Knight.
If A is a Knight, "All of us are knaves" is true. So, A would also be a Knave. This is a contradiction. Hence, A is a Knave.
If B is a Knave, then "Exactly one of us is a knight." is false, meaning that either none of them is a Knight or two or more are. Since A is a Knave, his claim that all of them are Knaves is a lie, so at least one of the three is a Knight; that rules out zero Knights. So two or more would have to be Knights. But neither A nor B is a Knight, so C is the only one who could possibly be a Knight, and there cannot be two or more. This is also a contradiction. So, B is a Knight.
We just showed that B is a Knight. So, he himself is the only Knight he is talking about. So, C is a Knave.
Now, I don't think you can model this argument in propositional logic. For one, notice the universal and existential quantifiers ("All" and "Exactly one") in the statements "All of us are knaves" and "Exactly one of us is a knight." For another, notice that A and B are talking about themselves. Modeling situations like this is one of the hardest problems in history (not kidding!). Look at the following links for more info:
https://en.wikipedia.org/wiki/Liar_paradox
https://en.wikipedia.org/wiki/Self-reference

You can create a truth table.
Just by looking at the statements, I can say A must be a knave and B must be a knight: if A were a knight, he couldn't truthfully say he is a knave, and as a knave he can't be right that all of them are knaves. If B were a knave, his statement would have to be false; but with A and B both knaves, A's lie forces C to be the single knight, which would make B's statement true, a contradiction. So B is a knight, and then C is a knave.
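For what it's worth, here is a small Python sketch that brute-forces the truth table mentioned above. p_a, p_b, p_c stand for "a/b/c is a knight" (the p / ¬p idea from the question), and the only constraint is that a knight's statement is true and a knave's is false:

from itertools import product

# Enumerate all 8 knight/knave assignments; p_x is True when x is a knight.
for p_a, p_b, p_c in product([True, False], repeat=3):
    says_a = not p_a and not p_b and not p_c            # "All of us are knaves."
    says_b = [p_a, p_b, p_c].count(True) == 1           # "Exactly one of us is a knight."
    if p_a == says_a and p_b == says_b:                 # each speaker's status matches his statement
        print(p_a, p_b, p_c)

Only (False, True, False) survives, i.e. A and C are knaves and B is a knight, matching the answers above.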

Related

XOR and logical conjunction

I'm struggling in order to understand the meaning of the following expression:
aᵢ ⊕ bᵢ = xᵢ ∧ yᵢ
I know the symbol ⊕ is actually an exclusive OR, and the ∧ is an and symbol.
But I cannot grasp the overall meaning. What does that mean in simple words?
The context is what is stated here.
Can someone help me?
Thanks a lot
The paper you reference uses the notation a⊕b = x·y, but · and ∧ mean the same thing in this context: the logical AND operation on single-bit variables.
This equality describes the requirement of the CHSH game. The game involves two players, Alice and Bob, who cannot communicate with one another. They are each given a single random bit (Alice gets X and Bob gets Y). Alice and Bob then output a single bit they choose independently based on their input bits (A from Alice and B from Bob) with the goal of satisfying the formula X · Y = A ⊕ B.
This game illustrates that quantum entanglement enables strategies that are dramatically better than the purely classical strategies. The best classical strategy is for Alice and Bob to output 0 regardless of the input - this strategy wins the game 75% of the time. But a quantum strategy exists that allows them to win 85% of the time if they share an entangled pair of qubits before the start of the game.
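A quick way to convince yourself of the 75% classical bound is to brute-force every deterministic strategy; a shared-randomness classical strategy is just a mixture of deterministic ones, so it can never do better. A small Python sketch (the variable names are mine, just for illustration):

from itertools import product

# The four possible deterministic answers a player can give to a one-bit question.
strategies = [
    lambda v: 0,        # always answer 0
    lambda v: 1,        # always answer 1
    lambda v: v,        # copy the question bit
    lambda v: 1 - v,    # flip the question bit
]

best = 0.0
for f, g in product(strategies, repeat=2):      # Alice uses f, Bob uses g
    wins = sum((f(x) ^ g(y)) == (x & y)         # win condition: A XOR B == X AND Y
               for x, y in product((0, 1), repeat=2))
    best = max(best, wins / 4)                  # the four input pairs are equally likely

print(best)   # prints 0.75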
You can read more on the CHSH game here.

Flattening quantification over relations

I have a relation f defined as f: A -> B × C. I would like to write a first-order formula that constrains this relation to be a bijective function from A to B × C.
To be more precise, I would like the first-order counterpart of the following formulae (actually the conjunction of the three):
∀a: A, ∃! bc : B × C, f(a)=bc -- f is function
∀a1,a2: A, f(a1)=f(a2) → a1=a2 -- f is injective
∀ bc : B × C, ∃ a : A, f(a)=bc -- f is surjective
As you can see, the above formulae are in higher-order logic, as I quantified over the relations. What is the first-order logic equivalent of these formulae, if it is possible at all?
PS:
This is a more general (math) question rather than one specific to any theorem prover, but to get help from these communities --as I think there is a mature understanding of mathematics in them-- I put the theorem-provers tag on this question.
(Update: Someone's unhappy with my answer, and SO gets me fired up in general, so I say what I want here, and will probably delete it later, I suppose.
I understand that SO is not a place for debates and soapboxes. On the other hand, the OP, qartal, whom I assume is the unhappy one, wants to apply the answer from math.stackexchange.com, where ZFC sets dominates, to a question here which is tagged, at this moment, with isabelle and logic.
First, notation is important, and sloppy notation can result in a question that's ambiguous to the point of being meaningless.
Second, having a B.S. in math, I have full appreciation for the logic of ZFC sets, so I have full appreciation for math.stackexchange.com.
I make the argument here that the answer given on math.stackexchange.com, linked to below, is wrong in the context of Isabelle/HOL. (First hmmm, me making claims under ill-defined circumstances can be annoying to people.)
If I'm wrong, and someone teaches me something, the situation here will be redeemed.
The answerer says this:
First of all in logic B x C is just another set.
There's not just one logic. My immediate reaction when I see the symbol x is to think of a type, not a set. Consider this, which kind of looks like your f: A -> BxC:
definition foo :: "nat => int × real" where "foo x = (x,x)"
I guess I should be proficient in going back and forth between sets and types, and in reading minds, but I did learn something by entering this term:
term "B × C" (* shows it's of type "('a × 'b) set" *)
Feeling paranoid, I did this to see if I had fallen into a major gotcha:
term "f : A -> B × C"
It gives a syntax error. Here I am, getting all pedantic, and our discussion is ill-defined because the notation is ill-defined.
The crux: the formula in the other answer is not first-order in this context
(Another hmmm, after writing what I say below, I'm full circle. Saying things about stuff when the context of the stuff is ill-defined.)
Context is everything. The context of the other site is generally ZFC sets. Here, it's HOL. That answerer says to assume these for his formula, which I give below:
Ax is true iff x∈A
Bx is true iff x∈B×C
Rxy is true iff f(x)=y
Syntax. No one has defined it here, but the tag here is isabelle, so I take it to mean that I can substitute the left-hand side of the iff for the right-hand side.
Also, the expression x ∈ A is what would be in the formula in a typical set theory textbook, not Rxy. Therefore, for the answerer's formula to have meaning, I can rightfully insert f(x) = y into it.
This then is why I did a lot of hedging in my first answer. The variable f cannot be in the formula. If it's in the formula, then it's a free variable which is implicitly quantified. Here's the formula in Isar syntax:
term "∀x. (Ax --> (∃y. By ∧ Rxy ∧ (∀z. (Bz ∧ Rxz) --> y = z)))"
Here it is with the substitutions:
∀x. (x∈A --> (∃y. y∈B×C ∧ f(x)=y ∧ (∀z. (z∈B×C ∧ f(x)=z) --> y = z)))
In HOL, f(x) is f x, and so f is implicitly universally quantified. If this is the case, then the formula is not first-order.
Really, I should dig deep to recall what I was taught, that f(x)=y means:
(x, f(x)) = (x, y), which means we have to have (x, y) ∈ A × (B×C)
which finally gets me:
∀x. (x∈A -->
(∃y. y∈B×C ∧ (x,y)∈A×(B×C) ∧ (∀z. (z∈B×C ∧ (x,z)∈A×(B×C)) --> y = z)))
Finally, I guess it turns out that in the context of math.stackexchange.com, it's 100% on.
Am I the only one who feels compulsive about questioning what this means in the context of Isabelle/HOL? I don't accept that everything here is defined well enough to show that it's first order.
Really, qartal, your notation should be specific to a particular logic.
First answer
With Isabelle, I answer the question based on my interpretation of your f: A -> B x C, which I take as a ZFC set, in particular a subset of the Cartesian product A x (B x C).
You're sort of mixing notation from the two logics, that of ZFC sets and that of HOL. Consequently, I might be off on what I think you're asking.
You don't define your relation, so I keep things simple. I define a simple ZFC function, and prove the first part of your first condition, that f is a function. The second part would be proving uniqueness. It can be seen that f satisfies that, so once a formula for uniqueness is stated correctly, auto might easily prove it.
Please notice that the theorem is a first-order formula. The characters ! and ? are ASCII equivalents for \<forall> and \<exists>.
(Clarifications must abound when working with HOL. It's first-order logic if the variables are atomic. In this case, the type of the variables is numeral. The basic concept is there. That I'm wrong in some detail is highly likely.)
definition "A = {1,2}"
definition "B = A"
definition "C = A"
definition "f = {(1,(1,1)), (2,(1,1))}"
theorem
"!a. a \<in> A --> (? z. z \<in> (B × C) & (a,z) \<in> f)"
by(auto simp add: A_def B_def C_def f_def)
(To completely give you an example of what you asked for, I would have to redefine my function so it's bijective. Little examples can take a ton of work.)
That's the basic idea, and the rest of proving that f is a function will follow that basic pattern.
If there's a problem, it's that your f is a ZFC set function/relation, and the logical infrastructure of Isabelle/HOL is set up for functions as a type. Functions as ordered pairs, ZFC style, can be formalized in Isabelle/HOL, but it hasn't been done in a reasonably complete way.
Generalizing it all is where the work would be. For a particular relation, as I defined above, I can limit myself to first-order formulas, if I ignore that the foundation, Isabelle/HOL, is, of course, higher-order logic.
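For comparison, and leaving Isabelle aside, the ZFC-sets reading that the math.stackexchange answer has in mind can be written out as plain first-order sentences if you treat f as a ternary relation symbol R, reading R(a,b,c) as "f maps a to the pair (b,c)", with A, B and C as unary predicates. This is only a sketch of that reading, in the same informal notation as above, not the other answerer's exact formula:
∀a. (A(a) --> (∃b. ∃c. (B(b) ∧ C(c) ∧ R(a,b,c) ∧ (∀b'. ∀c'. (R(a,b',c') --> b = b' ∧ c = c')))))   -- f is a total, single-valued function
∀a1. ∀a2. ∀b. ∀c. ((R(a1,b,c) ∧ R(a2,b,c)) --> a1 = a2)   -- f is injective
∀b. ∀c. ((B(b) ∧ C(c)) --> (∃a. (A(a) ∧ R(a,b,c))))   -- f is surjective
Nothing here quantifies over f itself, so in that relational reading the three conditions are first-order.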

Herbrand universe and Least herbrand Model

I read the question asked in Herbrand universe, Herbrand Base and Herbrand Model of binary tree (prolog) and the answers given, but I have a slightly different question, more of a request for confirmation, and hopefully my confusion will be clarified.
Let P be a program such that we have the following facts and rule:
q(a, g(b)).
q(b, g(b)).
q(X, g(X)) :- q(X, g(g(g(X)))).
From the above program, the Herbrand Universe is
Up = {a, b, g(a), g(b), q(a, g(a)), q(a, g(b)), q(b, g(a)), q(b, g(b)), g(g(a)), g(g(b)), ... etc.}
Herbrand base:
Bp = {q(s, t) | s, t ∈ Up}
Now to my question (forgive me for my ignorance): I included q(a, g(a)) as an element in my Herbrand Universe, but the fact states q(a, g(b)). Does that mean that q(a, g(a)) is not supposed to be there?
Also, since the Herbrand models are subsets of the Herbrand base, how do I determine the least Herbrand model by induction?
Note: I have done a lot of research on this, and some parts are quite clear to me, but I still have this doubt, which is why I want to seek the community's opinion. Thank you.
From having the fact q(a,g(b)) you cannot conclude whether or not q(a,g(a)) is in the model. You will have to generate the model first.
For determining the model, start with the facts {q(a,g(b)), q(b,g(b))} and now try to apply your rules to extend it. In your case, however, there is no way to match the right-hand side of the rule q(X,g(X)) :- q(X,g(g(g(X)))). to the above facts. Therefore, you are done.
Now imagine the rule
q(a,g(Y)) :- q(b,Y).
This rule could be used to extend our set. In fact, the instance
q(a,g(g(b))) :- q(b,g(b)).
is used: if q(b,g(b)) is present, conclude q(a,g(g(b))). Note that we are using the rule right-to-left here. So we obtain
{q(a,g(b)), q(b,g(b)), q(a,g(g(b)))}
thereby reaching a fixpoint.
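If it helps to see this bottom-up iteration as a computation, here is a small Python sketch for the program with the extra rule q(a,g(Y)) :- q(b,Y). Terms are encoded as nested tuples, and the helper names are my own, just for illustration:

def g(t):                      # the unary function symbol g
    return ('g', t)

def q(x, y):                   # a ground atom q(x, y)
    return ('q', x, y)

facts = {q('a', g('b')), q('b', g('b'))}

def step(model):
    # One application of the immediate-consequence operator T_P.
    new = set(model)
    # rule q(a, g(Y)) :- q(b, Y): for every q(b, Y) in the model, add q(a, g(Y))
    for atom in model:
        if atom[0] == 'q' and atom[1] == 'b':
            new.add(q('a', g(atom[2])))
    return new

model = set(facts)
while True:
    extended = step(model)
    if extended == model:      # nothing new: we reached the fixpoint,
        break                  # which is the least Herbrand model
    model = extended

print(model)   # the three atoms q(a,g(b)), q(b,g(b)), q(a,g(g(b))), in some order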
Now take, as another example, the rule you suggested:
q(X, g(g(g(X)))) :- q(X, g(X)).
which permits us (I will no longer show the instantiated rule) to generate in one step:
{q(a,g(b)), q(b,g(b)), q(a,g(g(g(b)))), q(b, g(g(g(b))))}
But this is not the end, since, again, the rule can be applied to produce even more! In fact, you now have an infinite model!
{q(a, g^(n+1)(b)), q(b, g^(n+1)(b))}
This right-to-left reading is often very helpful when you are trying to understand recursive rules in Prolog. The top-down reading (left-to-right) is often quite difficult, in particular, since you have to take into account backtracking and general unification.
Concerning your question:
"Also since the Herbrand models are subset of the Herbrand base, how do i determine the least Herbrand model by induction?"
If you have a set P of Horn clauses, the definite program, then you can define a program operator:
T_P(M) := { Hσ | σ is a ground substitution, (H :- B) is in P and Bσ is in M }
The least model is:
inf(P) := intersect { M | M |= P }
Please note that not all models of a definite program are fixpoints of the program operator. For example, the full Herbrand model is always a model of the program P, which shows that definite programs are always consistent, but it is not necessarily a fixpoint.
On the other hand, each fixpoint of the program operator is a model of the definite program. Namely, if you have T_P(M) = M, then one can conclude M |= P. So after some further mathematical reasoning(*) one finds that the least fixpoint is also the least model:
lfp(T_P) = inf(P)
But we need some further considerations so that we can say that we can determine the least model by a kind of computation. Namely, one easily observes that the program operator is continuous, i.e. it preserves infinite unions of chains, since Horn clauses do not have universal quantifiers in their bodies:
union_i T_P(M_i) = T_P(union_i M_i)
So, again after some further mathematical reasoning(*), one finds that we can compute the least fixpoint via iteration, which can be used for simple induction. Every element of the least model has a simple derivation of finite depth:
union_i T_P^i({}) = lfp(T_P)
Bye
(*)
Most likely you will find further hints on the exact mathematical reasoning needed in this book, but unfortunately I can't recall which sections are relevant:
Foundations of Logic Programming, John Wylie Lloyd, 1984
http://www.amazon.de/Foundations-Programming-Computation-Artificial-Intelligence/dp/3642968287

Deterministic Finite Automata pattern

I'm trying to solve this problem using Deterministic Finite Automata:
inputs: {a,b}
conditions:
a. must have exactly 2 a's
b. must have more than 2 b's
so a correct input would be something like abbba or bbbaa or babab
Now my question is, "is there a pattern for solving these things?"
Yes, there is a pattern. You can take each condition and derive states from it. Then you take the cross product of those states, which will make up the states of the final automaton. In this example:
a. will yield states: 0a, 1a, 2a, 2+a (you've seen 0 a's, 1 a, 2 a's, or more than 2 a's)
b. will yield states: 0b, 1b, 2b, 2+b (you've seen 0 b's, 1 b, 2 b's, or more than 2 b's)
The cross product of these states results in 4x4 = 16 states. You'll start from the (0a, 0b) state. The inputs can be of 3 types: a, b, or something else.
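If it helps, here is a minimal Python sketch of that product machine, simulated by just tracking the two counters and capping each at 3 so the state space stays at 4x4 = 16 states (the cap value 3 stands for "more than 2"); the function name is mine, just for illustration:

def accepts(word):
    a_count, b_count = 0, 0                    # start state (0a, 0b)
    for ch in word:
        if ch == 'a':
            a_count = min(a_count + 1, 3)      # 0, 1, 2, or "more than 2"
        elif ch == 'b':
            b_count = min(b_count + 1, 3)
        else:
            return False                       # symbol outside the alphabet {a, b}
    return a_count == 2 and b_count == 3       # exactly 2 a's and more than 2 b's

for w in ["abbba", "bbbaa", "babab", "aabb", "aaabbb"]:
    print(w, accepts(w))    # the first three are accepted, the last two are not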
From that you should be able to go. Do you need more help?
(Are we solving homework?)
Always draw such things first.
Feel free to give states any meaning you like. What you need here are states like q2: (1 b, 2 a's). Draw states like this up to the accept state and connect them with lines. The accept state is qx: (2 a's, 3 or more b's).
After reaching the accept state, if the input is "b" the transition loops back to the accept state itself. If the input is "a", draw a new trap state that loops back to itself no matter what the input is.
(are we helping for an exam here?)

Algorithm for 2-Satisfiability problem

Can anyone explain the algorithm for the 2-satisfiability problem or provide me with links for it? I could not find good links that helped me understand it.
If you have n variables and m clauses:
Create a graph with 2n vertices: intuitively, each vertex represents a true or negated literal for each variable. For each clause (a v b), where a and b are literals, create an edge from !a to b and from !b to a. These edges mean that if a is not true, then b must be true, and vice versa.
Once this digraph is created, go through the graph and see if there is a cycle that contains both a and !a for some variable a. If there is, then the 2SAT instance is not satisfiable (because a implies !a and vice versa). Otherwise, it is satisfiable, and this can even give you a satisfying assignment (pick some literal a so that a doesn't imply !a, force all implications from there, repeat). You can do this part with any of your standard graph algorithms, such as breadth-first search or Floyd-Warshall, depending on how sensitive you are to the time complexity of your algorithm.
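Here is a hedged Python sketch of that implication-graph construction, using Kosaraju's strongly-connected-components algorithm to do the reachability check in linear time (the function name and literal encoding are my own, just for illustration). A clause is a pair of nonzero integers, where k stands for variable k and -k for its negation:

def solve_2sat(n, clauses):
    # Return a satisfying assignment {var: bool} for variables 1..n, or None.
    def node(lit):                             # literal -> vertex index
        return 2 * (abs(lit) - 1) + (0 if lit > 0 else 1)

    size = 2 * n
    graph = [[] for _ in range(size)]
    rgraph = [[] for _ in range(size)]
    for a, b in clauses:                       # clause (a OR b)
        for u, v in ((node(-a), node(b)), (node(-b), node(a))):
            graph[u].append(v)                 # !a -> b and !b -> a
            rgraph[v].append(u)

    # Pass 1: record vertices in order of DFS finishing time on the graph.
    visited, order = [False] * size, []
    def dfs(u):
        stack = [(u, iter(graph[u]))]
        visited[u] = True
        while stack:
            vertex, it = stack[-1]
            for w in it:
                if not visited[w]:
                    visited[w] = True
                    stack.append((w, iter(graph[w])))
                    break
            else:
                order.append(vertex)
                stack.pop()
    for u in range(size):
        if not visited[u]:
            dfs(u)

    # Pass 2: DFS on the reversed graph in decreasing finishing time;
    # components get numbered in topological order of the condensation.
    comp, c = [-1] * size, 0
    for u in reversed(order):
        if comp[u] == -1:
            stack, comp[u] = [u], c
            while stack:
                x = stack.pop()
                for w in rgraph[x]:
                    if comp[w] == -1:
                        comp[w] = c
                        stack.append(w)
            c += 1

    assignment = {}
    for k in range(1, n + 1):
        if comp[node(k)] == comp[node(-k)]:
            return None                        # k and !k in one SCC: unsatisfiable
        assignment[k] = comp[node(k)] > comp[node(-k)]   # the "later" literal is made true
    return assignment

print(solve_2sat(2, [(1, 2), (-1, 2), (-2, 1)]))   # e.g. {1: True, 2: True}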
You can solve it with a greedy approach, or using graph theory. Here is a link which explains the solution using graph theory:
http://www.cs.tau.ac.il/~safra/Complexity/2SAT.ppt
Here is the Wikipedia page on the subject, which describes a polynomial time algorithm. (The brute force algorithm of just trying all the different truth assignments is exponential time.) Maybe a bit of further explanation will help.
The expression "if P then Q" is only false when P is true and Q is false. So the expression has the same truth table values as "Q or not P". It is also equivalent to its contrapositive, "if not Q then not P", and that in turn is equivalent to "not P or Q" (the same as the other one).
So the algorithm involves replacing every expression of the form "A or B" with the two expressions "if not A then B" and "if not B then A". (Putting it another way, A and B can't both be false.)
Next, construct a graph representing these implications. Create nodes for each "A" and "not A", and add links for each of the implications obtained above.
The last step is to make sure that none of the variables is equivalent to its own negation. That is, for each variable A (or not A), follow the links to discover all the nodes that can be reached from it, taking care to detect loops.
If one of the variables, A, can reach "not A", and "not A" can also reach A, then the original expression is not satisfiable. (It is a paradox.) If none of the variables do this, then it is satisfiable.
(It's okay if A implies "not A", but not the other way around. That just means that A must be negated to satisfy the expression.)
2-satisfiability:
If x and !x are strongly connected (i.e. they lie in the same strongly connected component), then from !x we can reach x, and from x we can reach !x. So for x we have only 2 options:
1. taking x (x), which leads to !x
2. rejecting x (!x), which leads to x
Both choices lead to the paradox of taking and rejecting a choice at the same time, so satisfiability is impossible. :D
