LTL: in Fp = TUp, is T really necessary when rewriting F? - logic

I just came up with this question. As stated in the book Logic in Computer Science, one of the important equivalences of LTL is this:
Fp = TUp, where T (true) imposes no constraint.
Yet what if I replace the T with (not p)? Does Fp = (not p)Up hold? In this case I actually put a constraint, (not p), into the formula, but at the same time no state can satisfy (not p) and p together. I tried different LTL formulas for p, and as long as p is satisfiable, every path along which p eventually holds satisfies both Fp and (not p)Up.
Does this mean that I can rewrite F in this way, or is there a counterexample?

The short answer:
Yes, both formulas are equivalent, and you can also rewrite Fp as (¬p)Up.
and a proof:
We can investigate the problem by looking at the definition of pUq (I think it's defined this way in the book Model Checking by Clarke, Grumberg, Peled).
A path s is a model for the formula (written s ⊨ pUq):
s ⊨ pUq <=> ∃k: s^k ⊨ q
∧ ∀i: 0<=i<k => s^i ⊨ p
(With s^i being the path s with the first i steps removed.)
We have (1):
s ⊨ (¬p)Up <=> ∃k: s^k ⊨ p
∧ ∀i: 0<=i<k => s^i ⊨ ¬p
and (2):
s ⊨ TUp <=> ∃k: s^k ⊨ p
∧ ∀i: 0<=i<k => s^i ⊨ true
<=> ∃k: s^k ⊨ p
We want to show (1) <=> (2) (I renamed the ks to k1 and k2 to avoid confusion):
∃k1: s^k1 ⊨ p
∧ ∀i: 0<=i<k1 => s^i ⊨ ¬p
<=>
∃k2: s^k2 ⊨ p
The direction (1) => (2) is trivial.
For (2) => (1) we have to show that from
∃k2: s^k2 ⊨ p
follows
∃k1: s^k1 ⊨ p ∧ ∀i: 0<=i<k1 => s^i ⊨ ¬p
We know that there exists a value for k1 (namely k2) such that s^k1 ⊨ p holds. But what about the second part? We can simply take for k1 the smallest value such that s^k1 ⊨ p holds. Then the second part is true, because if there were an i < k1 such that s^i ⊨ ¬p does not hold, then s^i ⊨ p would hold. But in that case we would have chosen i for k1, because i is strictly smaller than k1.
So both formulas (1) and (2) are equivalent.
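To make this concrete, here is a quick mechanical sanity check (a Python sketch of my own, not from the book): it evaluates lhs U rhs on finite path prefixes, which suffices to decide TUp and (¬p)Up for an atomic proposition p, and confirms that the two agree on every boolean path up to length 6.

from itertools import product

def holds_until(path, lhs, rhs):
    # path |= lhs U rhs: rhs holds at some position k,
    # and lhs holds at every position i < k.
    for k in range(len(path)):
        if rhs(path[k]) and all(lhs(path[i]) for i in range(k)):
            return True
    return False

p     = lambda s: s        # the atomic proposition p
true  = lambda s: True     # T: no constraint
not_p = lambda s: not s    # ¬p

# T U p and (¬p) U p agree on all boolean paths up to length 6.
for n in range(1, 7):
    for path in product([False, True], repeat=n):
        assert holds_until(path, true, p) == holds_until(path, not_p, p)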

Related

Simple Cardinality Proof

So I'm trying to perform a simple proof using cardinalities. It looks like:
⟦(A::nat set) ∩ B = {}⟧ ⟹ (card (A ∪ B) = card A + card B)
This seems to make sense, but for some reason blast hangs, the rest of the provers fail to apply, and sledgehammer times out. Is there a gap in what I think I know about cardinalities? If not, how can I prove this lemma?
Thanks in advance!
I believe that the lemma you are trying to prove does not appropriately consider the case of infinite sets.
In Isabelle/HOL, the cardinality of an infinite set is defined to be zero, as we can see from the following lemma:
lemma "¬(finite A) ⟹ card A = 0"
by simp
Consider an infinite set A and a one-element set B, and assume the intersection A ∩ B is empty.
We are left with:
card (A ∪ B) = 0, as their union will also be infinite.
card A = 0
card B = 1
So we can see that in this case, the lemma does not hold.
The lemma can be corrected by asserting both sets are finite:
lemma
"⟦finite A; finite B; ((A::nat set) ∩ B) = {}⟧ ⟹ (card (A ∪ B) = card A + card B)"
by (simp add: card_Un_disjoint)
This is essentially the same as the lemma card_Un_disjoint used by the proof:
lemma card_Un_disjoint: "finite A ⟹ finite B ⟹ A ∩ B = {} ⟹ card (A ∪ B) = card A + card B"
using card_Un_Int [of A B] by simp
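If you just want to convince yourself of the finite identity outside of Isabelle, here is a minimal sanity check (a Python sketch; the concrete sets are arbitrary examples of mine):

A = {1, 2, 3}
B = {4, 5}
assert A & B == set()                  # disjointness
assert len(A | B) == len(A) + len(B)   # card_Un_disjoint, finite case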

Proof by resolution - Artificial Intelligence

I'm working with an exercise where I need to show that KB |= ~D.
And I know that the Knowledge Base is:
- (B v ¬C) => ¬A
- (¬A v D) => B
- A ∧ C
After converting to CNF:
A ∧ C ∧ (¬A v ¬B) ∧ (¬A v C) ∧ (A v B) ∧ (B v ¬D)
So now I have converted to CNF but from there, I don't know how to go any further. Would appreciate any help. Thanks!
The general resolution rule is that, for any two clauses
(that is, disjunctions of literals)
P_1 v ... v P_n
and
Q_1 v ... v Q_m
in your CNF such that there is i and j with P_i and Q_j being the negation of each other,
you can add a new clause
P_1 v ... v P_{i-1} v P_{i+1} ... v P_n v Q_1 v ... v Q_{j-1} v Q_{j+1} ... v Q_m
This is just a rigorous way to say that you can form a new clause by joining two of them, minus a literal with opposite "signs" in each.
For example
(A v ¬B)∧(B v ¬C)
is equivalent to
(A v ¬B)∧(B v ¬C)∧(A v ¬C),
by joining the two clauses while removing the opposites B and ¬B, obtaining A v ¬C.
Another example is
A∧(¬A v ¬C)
which is equivalent to
A∧(¬A v ¬C) ∧ ¬C.
since A counts as a clause with a single literal (A itself). So the two clauses are joined, while A and ¬A are removed, yielding a new clause ¬C.
Applying this to your problem, we can resolve A and ¬A v ¬B, obtaining ¬B.
We then resolve this new clause ¬B with B v ¬D, obtaining ¬D.
Because the CNF is a conjunction, the fact that it holds means that every clause in it holds. That is to say, the CNF implies all of its clauses. Since ¬D is one of its clauses, ¬D is implied by the CNF. Since the CNF is equivalent to the original KB, the KB implies ¬D.
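As an independent cross-check (a Python sketch, not part of the resolution proof): enumerating all 2^4 truth assignments confirms that every model of the CNF makes D false, i.e. KB ⊨ ¬D.

from itertools import product

def kb(A, B, C, D):
    # the CNF: A ∧ C ∧ (¬A ∨ ¬B) ∧ (¬A ∨ C) ∧ (A ∨ B) ∧ (B ∨ ¬D)
    return (A and C and (not A or not B) and (not A or C)
            and (A or B) and (B or not D))

assert all(not D
           for A, B, C, D in product([False, True], repeat=4)
           if kb(A, B, C, D))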

Case distinction for propositional logic

I would like to prove
P ==> P
by case distinction, to understand the latter.
lemma "P ⟹ P"
proof (cases P)
goal (2 subgoals):
1. P ⟹ P ⟹ P
2. P ⟹ ¬ P ⟹ P
I am not quite sure if these are what I want. I wanted to assume that P is true and then show that P is true by assumption, then assume not P and prove not P by assumption, like in a truth table.
The not P in the second subgoal seems strange, is that provable at all?
assume P then show P by assumption
Successful attempt to solve goal by exported rule:
(P) ⟹ P
next
goal (1 subgoal):
1. P ⟹ ¬ P ⟹ P
assume P assume "¬P" then show "¬P" by (rule HOL.FalseE)
This went completely wrong.
How can I take P and not P as the cases?
P and not P already are your cases. If you write "cases P", Isabelle copies the current goal and adds P to the assumptions of the first new subgoal and ¬ P to the assumptions of the second. The right-hand side of the goals is not affected by the cases method when it is used in this way.
In your case, you don't have to prove the ¬ P in the second subgoal; you may use it as an additional assumption, introduced by the case distinction.
Obviously you can't prove P from ¬ P in the second subgoal. Luckily, the global assumption P is still there, so both cases still prove P from P, which is as trivial as it already is without the case distinction ;).
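For orientation, the rule behind this use of cases is, if I recall the name correctly, HOL's case_split:
⟦P ⟹ Q; ¬ P ⟹ Q⟧ ⟹ Q
Both subgoals conclude the same Q; the case assumption is just an extra fact you may use.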
If you want to prove something by having Isabelle insert the possible values of a variable directly, you could try:
lemma "P ⟹ P"
proof (induct P)
goal (2 subgoals):
1. True ⟹ True
2. False ⟹ False

Formal Methods, Logic and VDM past exam paper questions

I was hoping someone could help me with the following questions; answers would be best, but if you can point me in the right direction that will be helpful too.
I am a final-year uni student and these questions are from a previous exam on Formal Methods, and I could do with knowing the answers ready for this year's paper. Our lecturer does not seem the best and has not covered a lot of this, so finding the exact answers has been proving impossible. Google has not been much help, nor have the recommended books.
1 - Given that ∃x • P(x) is logically equivalent to ¬∀x • ¬P(x) and that
∀x ∈ S • P(x) means ∀x • x ∈ S ⇒ P(x), deduce that ∃x ∈ S • P(x)
means ∃x • x ∈ S ∧ P(x)
2 - Describe the two statements that would have to be proved to show that
the definition:
max(i, j)
if i>j
then i
else j
is a correct implementation of the specification:
max(i : Z, j : Z)r : Z
pre true
post (r = i ∨ r = j) ∧ i ≤ r ∧ j ≤ r
The first is really just manipulation of symbols using the given and two other well-known logical equivalences:
(1) ∃x • P(x) is logically equivalent to ¬∀x • ¬P(x)
(2) ∀x∈S • P(x) means ∀x • x∈S ⇒ P(x)
∃x∈S • P(x)
== ¬∀x∈S • ¬P(x) (from (1))
== ¬∀x • x∈S ⇒ ¬P(x) (from (2))
== ¬∀x • ¬x∈S v ¬P(x) (from def. of ⇒)
== ¬∀x • ¬(x∈S ∧ P(x)) (from ¬A v ¬B == ¬(A ∧ B))
== ∃x • x∈S ∧ P(x) (from (1) -- the other way around)
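If it helps, the whole chain can be sanity-checked on a finite toy universe (a Python sketch; the particular U, S and P are arbitrary choices of mine):

U = range(10)                  # toy universe
S = {1, 3, 6}                  # an arbitrary subset
P = lambda x: x % 2 == 1       # an arbitrary predicate

bounded_exists = any(P(x) for x in U if x in S)                # ∃x∈S • P(x)
unbounded_form = any(x in S and P(x) for x in U)               # ∃x • x∈S ∧ P(x)
negated_forall = not all(x not in S or not P(x) for x in U)    # ¬∀x • x∈S ⇒ ¬P(x)

assert bounded_exists == unbounded_form == negated_forall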
For the second, you need to recognize that the outcome of max(i, j) will be computed along one of two paths: one when i > j (returning i) and the other when i ≤ j (the logical negation of i > j, returning j).
So you need to show that
if true ∧ i > j (precondition), then (r = i ∨ r = j) ∧ i ≤ r ∧ j ≤ r (postcondition), and
if true ∧ i ≤ j (precondition), then (r = i ∨ r = j) ∧ i ≤ r ∧ j ≤ r (postcondition),
where r is the result of max(i, j)
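Both obligations can also be checked mechanically over a small range of inputs (a Python sketch of the given implementation against the given postcondition; the range is an arbitrary choice):

for i in range(-5, 6):
    for j in range(-5, 6):
        r = i if i > j else j                            # the implementation
        assert (r == i or r == j) and i <= r and j <= r  # the postcondition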
Note that the postcondition in part 2 is stronger than it may look: (r = i ∨ r = j) alone would admit any implementation that returns either i or j, but together with i ≤ r ∧ j ≤ r it forces r to be the larger of the two values, so an implementation returning the smaller value fails the proof.
An equivalent way to state the postcondition is
post (i > j ⟹ r = i) ∧ (i ≤ j ⟹ r = j)

Proving the Associativity of OR

I need help proving the following:
(a ∨ b) ∨ c = a ∨ (b ∨ c)
I don't want the answer... just a hint that will help me understand the process of proving this.
Thank you.
Why not just prove it by checking all possible values of a, b and c in {True, False}? There are only 2^3 = 8 different cases.
Here's a start, for a=T, b=F, c=T
(a ∨ b) ∨ c = a ∨ (b ∨ c)
(T ∨ F) ∨ T = T ∨ (F ∨ T)
T ∨ T = T ∨ T
T = T
(However, this isn't really a programming question...)
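If you want a machine to grind through the eight cases, a sketch in Python:

from itertools import product

# All 2^3 assignments; the two groupings of ∨ always agree.
assert all(((a or b) or c) == (a or (b or c))
           for a, b, c in product([False, True], repeat=3))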
What is your axiom set? Not knowing the set, you could build a truth table.
