Linear Temporal Logic (LTL) questions - logic

[] = always
O = next
! = negation
<> = eventually
I'm wondering: is []<> equivalent to just []?
I'm also having a hard time understanding how to distribute the temporal operators in formulas like these:
[][] (a OR !b)
!<>(!a AND b)
[]([] a ==> <> b)

I'll use the following notations:
F = eventually
G = always
X = next
U = until
In my model-checking course, we defined LTL the following way:
LTL: p | φ ∧ ψ | ¬φ | Xφ | φ U ψ
With F being syntactic sugar:
F (future)
Fφ = True U φ
and G:
G (global)
Gφ = ¬F¬φ
With that, your question is:
Is it true that Gφ ?= GFφ
GFφ <=> G (True U φ)
Knowing that :
P ⊧ φ U ψ <=> exists i >= 0: P_(>= i) ⊧ ψ AND forall 0 <= j < i: P_(>= j) ⊧ φ
From that, we can see that GFφ says that at every point in time, φ must eventually hold at some later time i, and before that (for every j before i) only True has to hold, which is trivial.
But Gφ indicates that φ must always be true, "from now to forever" and not "from i to forever".

G p indicates that p holds at all times. GF p indicates that at all times, p will eventually hold. So while the infinite trace pppppp... satisfies both of the specifications, an infinite trace of the form p(!p)(!p)p(!p)p... satisfies only GF p but not G p.
To be clear, both of these example traces need to contain infinitely many positions where p holds. But in the case of GF p, and only in this case, it is acceptable that there are positions in between where p does not hold.
So the short answer to the above question by counterexample is: no, those two specifications aren't the same.
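To make that concrete, here is a small Python sketch (my own illustration, not part of the original answers) that checks G p and GF p on ultimately periodic traces given as a finite prefix plus a loop that repeats forever:

# Illustrative sketch: each trace position is True iff p holds there.
def always_p(prefix, loop):
    # G p: p must hold at every position of the prefix and the loop.
    return all(prefix) and all(loop)

def always_eventually_p(prefix, loop):
    # GF p: from every position, p holds again later; on a lasso-shaped
    # trace this just means p holds somewhere inside the loop.
    return any(loop)

# pppppp... satisfies both specifications:
print(always_p([], [True]), always_eventually_p([], [True]))                # True True
# p(!p)(!p)p(!p)p... style traces satisfy GF p but not G p:
print(always_p([], [True, False]), always_eventually_p([], [True, False]))  # False True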

Related

Probability of a disjunction on N dependent events in Prolog

Does anybody know where to find a Prolog algorithm for computing the probability of a disjunction for N dependent events? For N = 2 I know that P(E1 OR E2) = P(E1) + P(E2) - P(E1) * P(E2), so one could do:
prob_disjunct(E1, E2, P) :- P is E1 + E2 - E1 * E2.
But how can this predicate be generalised to N events when the input is a list? Maybe there is a package which does this?
Kind regards/JCR
The recursive formula from Robert Dodier's answer directly translates to
p_or([], 0).
p_or([P|Ps], Or) :-
p_or(Ps, Or1),
Or is P + Or1*(1-P).
Although this works fine, e.g.
?- p_or([0.5,0.3,0.7,0.1],P).
P = 0.9055
hardcore Prolog programmers can't help noticing that the definition isn't tail-recursive. This would really only be a problem when you are processing very long lists, but since the order of list elements doesn't matter, it is easy to turn things around. This is a standard technique, using an auxiliary predicate and an "accumulator pair" of arguments:
p_or(Ps, Or) :-
p_or(Ps, 0, Or).
p_or([], Or, Or).
p_or([P|Ps], Or0, Or) :-
Or1 is P + Or0*(1-P),
p_or(Ps, Or1, Or). % tail-recursive call
I don't know anything about Prolog, but anyway it's convenient to write the probability of a disjunction of a number of independent events p_m = Pr(S_1 or S_2 or S_3 or ... or S_m) recursively as
p_m = Pr(S_m) + p_{m - 1} (1 - Pr(S_m))
You can prove this by just peeling off the last item -- look at Pr((S_1 or ... or S_{m - 1}) or S_m) and just write that in terms of the usual formula, writing Pr(A or B) = Pr(A) + Pr(B) - Pr(A) Pr(B) = Pr(B) + Pr(A) (1 - Pr(B)), for A and B independent.
The formula above is item C.3.10 in my dissertation: http://riso.sourceforge.net/docs/dodier-dissertation.pdf It is a simple result, and I suppose it must be an exercise in some textbooks, although I don't remember seeing it.
For any event E I'll write E' for the complementary event (ie E' occurs iff E doesn't).
Then we have:
P(E') = 1 - P(E)
(A union B)' = A' inter B'
A and B are independent iff A' and B' are independent
so for independent E1..En
P( E1 union .. union En ) = 1 - P( E1' inter .. inter En')
= 1 - product{ 1 <= i <= n | 1 - P(E[i]) }
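For cross-checking, here is a small Python sketch (purely illustrative, not part of either answer) that computes the union probability both ways, via the peeling recursion and via the complement product:

from functools import reduce

def p_or_recursive(ps):
    # p_m = Pr(S_m) + p_{m-1} * (1 - Pr(S_m)), peeling off one event at a time.
    result = 0.0
    for p in ps:
        result = p + result * (1 - p)
    return result

def p_or_complement(ps):
    # P(E1 or ... or En) = 1 - prod(1 - P(Ei)) for independent events.
    return 1 - reduce(lambda acc, p: acc * (1 - p), ps, 1.0)

ps = [0.5, 0.3, 0.7, 0.1]
print(p_or_recursive(ps), p_or_complement(ps))  # both 0.9055, up to float rounding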

Is this CTL formula equivalent and what makes it hold?

I'm wondering if the CTL formulas below are equivalent and if so, can you help me persuade myself that they are?
A(p U ( A(q U r) )) = A(A(p U q) U r)
I can't come up with any model that contradicts it, and my gut tells me the formulas are equivalent, but I can't find any equivalences that support that statement. I've tried to rewrite the equivalence
A(p U q) == not(E ((not q) U not(p or q)) or EG (not q))
into something helpful but failed several times.
I've looked through my course material as well as Google but I can't find anything. I did, however, find another question here asking about the same equivalence but with no answer, so I'm making a second attempt.
Note: this answer might be late.
However, since the question was raised multiple times, I think it's still useful.
Question: Is A[p U A[q U r]] equivalent to A[A[p U q] U r]?
Answer: no.
To prove that the two formulas are not equivalent, it is sufficient to provide a single Kripke structure on which A[p U A[q U r]] is verified but A[A[p U q] U r] is not (or the converse).
Now, for simplicity, we assume a Kripke structure in which every state has only one possible future state. Therefore, we can forget about the A quantifier and consider the LTL version of the given problem: is [p U [q U r]] equivalent to [[p U q] U r]?
Let's break down [p U [q U r]]:
[q U r] is true on paths which match the expression {q}*{r}
[p U [q U r]] is true on paths that match {p}*{[q U r]} = {p}*{q}*{r}
What about [[p U q] U r]?
[p U q] is true on paths which match the expression {p}*{q}
[[p U q] U r] is true on paths that match {[p U q]}*{r} = {{p}*{q}}*{r}
Now, {p}*{q}*{r} != {{p}*{q}}*{r}.
In fact, {p}*{q}*{r} matches any path in which a sequence of p is followed by r and there is no q along the way.
However, {{p}*{q}}*{r} does not. If a path contains a sequence of p, then the occurrence of q before r is mandatory.
Thus, the two formulas are not equivalent.
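One quick way to convince yourself of the difference (an illustration I'm adding, using ordinary regular expressions as a stand-in for the path expressions above) is to test the path p·r against both patterns:

import re

# Path "pr": some p-states followed directly by r, with no q along the way.
path = "pr"

# {p}*{q}*{r}: the q-block may be empty, so "pr" matches.
print(bool(re.fullmatch(r"p*q*r", path)))      # True

# {{p}*{q}}*{r}: every p-block must be closed by a q, so "pr" does not match.
print(bool(re.fullmatch(r"(?:p*q)*r", path)))  # False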
Hands-On Answer:
Let's encode a Kripke structure that provides the same counterexample using NuSMV:
MODULE main
VAR
p: boolean;
q: boolean;
r: boolean;
INVAR !q;
INIT
!q & p & !r
TRANS
r -> next(r);
TRANS
p & !r -> next(r);
CTLSPEC A[p U A[q U r]];
CTLSPEC A[A[p U q] U r];
and check it:
~$ NuSMV -int
NuSMV > reset; read_model -i test.smv; go; check_property
-- specification A [ p U A [ q U r ] ] is true
-- specification A [ A [ p U q ] U r ] is false
-- as demonstrated by the following execution sequence
Trace Description: CTL Counterexample
Trace Type: Counterexample
-> State: 1.1 <-
p = TRUE
q = FALSE
r = FALSE
Indeed, one property is verified but the other is not.

Are the following functions in O(x³)?

I'm trying to decide whether the following functions are or can be O(x³), assuming k = 1. I have what I think are the right answers, but I'm confused on a few, so I figured someone on here could look them over. If one is wrong, could you please explain why? From what I understand, if a function is above x³ then it can be referred to as O(x³), and if it's below it can't be? I think I may be viewing it wrong or have the concept backwards.
Thanks
a. 3^x = TRUE
b. 10x + 42 = FALSE
c. 2 + 4x + 8x² = FALSE
d. (log x + 1)⋅(2x + 3) = TRUE
e. 2^x + x! = TRUE
What is meant by Big-O?
A function f is in O(g) if you can find a constant c such that f(x) ≤ c·g(x) for all x > x₀.
The results
3^x is not in O(x³), due to lim_(x→∞) 3^x/x³ = ∞
10x + 42 is in O(x³). c = 52 holds for all x ≥ 1.
2 + 4x + 8x² is in O(x³). c = 14 holds for all x ≥ 1.
(log x + 1)(2x + 3) is in O(x³). c = 7 holds for all x ≥ 1.
2^x + x! is not in O(x³), due to lim_(x→∞) (2^x + x!)/x³ = ∞
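To make the constants concrete, here is a small Python sketch (mine, purely illustrative; a finite numeric check is evidence, not a proof) that tests f(x) ≤ c·x³ with the constants above and shows the exponential cases outgrowing x³:

import math

def bounded_by(f, c, xs):
    # Check f(x) <= c * x**3 on the sampled points (evidence, not a proof).
    return all(f(x) <= c * x**3 for x in xs)

xs = range(1, 1000)
print(bounded_by(lambda x: 10*x + 42, 52, xs))                     # True
print(bounded_by(lambda x: 2 + 4*x + 8*x**2, 14, xs))              # True
print(bounded_by(lambda x: (math.log(x) + 1) * (2*x + 3), 7, xs))  # True

# For 3^x and 2^x + x!, the ratio f(x)/x^3 grows without bound:
print(3**20 / 20**3)                           # already huge
print((2**20 + math.factorial(20)) / 20**3)    # even larger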
Think of O(f) as an upper bound. So when we write g = O(f) we mean g grows/runs at most as fast as f. It's like <= for running times. The correct answers are:
a. FALSE
b. TRUE
c. TRUE
d. TRUE
e. FALSE

Checking a Zero Polynomial

Can anyone explain how the ISZERO function in the algorithm below checks whether the polynomial is zero or not? Here the REM(P,e) function removes all terms with exponent e.
What I'm not able to understand is the significance of "if COEF(P,e) = - c". Also, what is this SMULT function?
structure POLYNOMIAL
declare ZERO( ) poly; ISZERO(poly) Boolean
COEF(poly,exp) coef;
ATTACH(poly,coef,exp) poly
REM(poly,exp) poly
SMULT(poly,coef,exp) poly
ADD(poly,poly) poly; MULT(poly,poly) poly;
for all P,Q, poly c,d, coef e,f exp let
REM(ZERO,f) :: = ZERO
REM(ATTACH(P,c,e),f) :: =
if e = f then REM(P,f) else ATTACH(REM(P,f),c,e)
***ISZERO(ZERO) :: = true
ISZERO(ATTACH(P,c,e)):: =
if COEF(P,e) = - c then ISZERO(REM(P,e)) else false***
COEF(ZERO,e) :: = 0
COEF(ATTACH(P,c,e),f) :: =
if e = f then c + COEF(P,f) else COEF(P,f)
SMULT(ZERO,d,f) :: = ZERO
SMULT(ATTACH(P,c,e),d,f) :: =
ATTACH(SMULT(P,d,f),c·d,e + f)
ADD(P,ZERO):: = P
ADD(P,ATTACH(Q,d,f)) :: = ATTACH(ADD(P,Q),d,f)
MULT(P,ZERO) :: = ZERO
MULT(P,ATTACH(Q,d,f)) :: =
ADD(MULT(P,Q),SMULT(P,d,f))
end
end POLYNOMIAL
Without knowing what language this is, it looks like these lines
ISZERO(ATTACH(P,c,e)):: =
if COEF(P,e) = - c then ISZERO(REM(P,e)) else false
is specifying ISZERO recursively. We are trying to determine whether ATTACH(P, c, e), otherwise known as P(x) + cx^e, is zero. It first checks whether the x^e coefficient of P is -c. If not, then P(x) + cx^e is definitely not zero, and you can return false immediately. Otherwise, P(x) + cx^e = REM(P, e), so you have to check ISZERO(REM(P, e)).
I believe SMULT is multiplication, so SMULT(P, a, b) is equivalent to a * x^b * P(x).
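To see the recursion in action, here is a small Python sketch of the same operations (my own illustrative encoding, not from the original source: a polynomial is simply the list of (coef, exp) pairs that were ATTACHed, and ZERO is the empty list):

# Illustrative encoding: a polynomial is a list of (coef, exp) attachments.
ZERO = []

def ATTACH(P, c, e):
    return P + [(c, e)]

def COEF(P, e):
    # Sum of all coefficients attached with exponent e.
    return sum(c for c, f in P if f == e)

def REM(P, e):
    # Remove every term whose exponent is e.
    return [(c, f) for c, f in P if f != e]

def ISZERO(P):
    if not P:                        # ISZERO(ZERO) ::= true
        return True
    *rest, (c, e) = P                # view P as ATTACH(rest, c, e)
    if COEF(rest, e) == -c:          # the x^e terms cancel out...
        return ISZERO(REM(rest, e))  # ...so drop them and keep checking
    return False                     # otherwise the x^e coefficient is nonzero

# x^2 attached, then -x^2 attached: the polynomial is zero overall.
print(ISZERO(ATTACH(ATTACH(ZERO, 1, 2), -1, 2)))  # True
print(ISZERO(ATTACH(ZERO, 3, 1)))                 # False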

Reduction from Atm to A (of my choice), and from A to Atm

Many-one reduction is not symmetric. I'm trying to prove it, but it doesn't work
so well.
Given two languages A and B, where A is defined as
A = {w | |w| is even}, i.e. `w` has an even length
and B = A_TM, where A_TM is undecidable but Turing-recognizable!
Given the following reduction:
f(w) = ( P(x):{accept;}, epsilon )   if |w| is even
f(w) = ( P(x):{reject;}, epsilon )   otherwise
(Please forgive me for not using Latex , I have no experience with it)
As I can see, a reduction A <= B (from A to A_TM) is possible, and it works great.
However, I don't understand why B <= A is not possible.
Can you please clarify and explain ?
Thanks
Ron
Assume for a moment that B <= A. Then there is a function f:Sigma*->Sigma* such that:
f(w) in A      if w is in B
f(w) not in A  if w is not in B
Therefore, we can describe the following algorithm [turing machine] M on input w:
1. w' <- f(w)
2. if |w'| is even return true
3. return false
It is easy to prove that M accepts w if and only if w is in B [left as an exercise to the reader], thus L(M) = B.
Also, M stops for any input w [from its construction]. Thus L(M) is decidable.
But we got that L(M) = B is decidable, and that is a contradiction, because B = A_TM is undecidable!
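As a small Python sketch (mine; f stands for the assumed computable reduction from B to A, which is exactly what cannot exist):

def make_decider(f):
    # Given an assumed computable reduction f from B = A_TM to
    # A = {w : |w| is even}, build the machine M from the answer.
    def M(w):
        w_prime = f(w)                 # 1. w' <- f(w)
        return len(w_prime) % 2 == 0   # 2./3. accept iff |w'| is even, i.e. w' in A
    return M

Since f is total and the length check always terminates, M halts on every input, which is what yields the contradiction above.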
