I am learning about sequential consistency in distributed systems but could not understand the explanation I was given. I would appreciate it if someone could shed some light, in layman's terms, on why (a) and (c) below are sequentially consistent while (b) is not.
Thanks.
An execution e of operations is sequentially consistent if and only if it can be permuted into a sequence s of these operations such that:
the sequence s respects the program order of each process; that is, for any two operations o1 and o2 of the same process, if o1 precedes o2 in e, then o1 must be placed before o2 in s;
in the sequence s, each read operation returns the value of the last preceding write operation over the same variable.
For (a), s can be:
W(x)b [P2], R(x)b [P3], R(x)b [P4], W(x)a [P1], R(x)a [P3], R(x)a [P4]
For (c), s can be:
W(x)a [P1], R(x)a [P2], R(x)a [P3], R(x)a [P4], W(x)b [P3], R(x)b [P1], R(x)b [P2], R(x)b [P4]
However, for (b):
the operations R(x)b, R(x)a from P3 require that W(x)b come before W(x)a
the operations R(x)a, R(x)b from P4 require that W(x)a come before W(x)b
It is impossible to construct such a sequence s.
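To make the definition concrete, here is a tiny brute-force checker, a sketch of my own (in Haskell; the representation is mine, not from the original material). It enumerates the interleavings that respect program order and accepts if any of them satisfies the read-last-write rule. It assumes every variable is written before it is read, as in the examples.

data Op = W Char Char | R Char Char   -- W var val / R var val

-- procs: one list of operations per process, in program order
seqConsistent :: [[Op]] -> Bool
seqConsistent procs = go procs []
  where
    -- env: writes seen so far, most recent first
    go ps env
      | all null ps = True
      | otherwise =
          or [ ok op env && go (take i ps ++ [rest] ++ drop (i + 1) ps)
                               (upd op env)
             | (i, op:rest) <- zip [0 ..] ps ]
    ok (W _ _) _   = True
    ok (R x v) env = lookup x env == Just v   -- read sees the last write to x
    upd (W x v) env = (x, v) : env
    upd _       env = env

-- (a): True, via s = W(x)b, R(x)b, R(x)b, W(x)a, R(x)a, R(x)a
-- seqConsistent [[W 'x' 'a'], [W 'x' 'b'], [R 'x' 'b', R 'x' 'a'], [R 'x' 'b', R 'x' 'a']]
-- (b): False, since P3 and P4 need the two writes in opposite orders
-- seqConsistent [[W 'x' 'a'], [W 'x' 'b'], [R 'x' 'b', R 'x' 'a'], [R 'x' 'a', R 'x' 'b']]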
I need to construct a pushdown automaton for the following language: L = {a^n b^m | 2n >= m}
Can someone help me with it?
There are two approaches that come to mind:
try to write a PDA from scratch using the stack in a sensible way
write a CFG and then rely on the construction that shows PDAs accept the languages of all CFGs
To take approach 1, recognize that PDAs are like NFAs that can push symbols onto a stack and pop them off. If we want 2n >= m, that means we want the number of a's to be at least half the number of b's. That is, we want at least one a for every two b's. That means if we push all the a's on the stack, we need to read no more than two b's for every a on the stack. This suggests a PDA that works like this:
1. read all the a's, pushing into the stack
2. read b's, popping an a for every two b's you see
3. if you run out of stack but still have b's to process, reject
4. if you run out of input at the same time or sooner than you run out of stack, accept
In terms of states and transitions:
Q S E Q' S'   (current state, stack top, input symbol read, next state, replacement for the stack top; - means none)
q0 Z a q0 aZ // these transitions read all the a's and
q0 a a q0 aa // push them onto the stack
q0 a b q1 a // these transitions read all the b's and
q1 a b q2 - // pop a's off the stack for every complete
q2 a b q1 a // pair of b's we see. if we run out of a's
// it will crash and reject
q0 Z - q3 Z // these transitions just let us guess at any
q0 a - q3 a // point that we have read all the input and
q1 Z - q3 Z // go to the accepting state. note that if
q1 a - q3 a // we guessed wrong the PDA will crash there
q2 Z - q3 Z // since we have no transitions defined.
q2 a - q3 a // crashing rejects the input.
Here, the accept condition is being in state q3 with no more input to read (acceptance by final state; leftover a's on the stack are fine, since they just mean we had more than enough a's).
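If you want to sanity-check the table, here is a minimal Haskell sketch of this machine (my own translation, not part of the original answer), modelling acceptance as being in q3 with the input exhausted:

data St = Q0 | Q1 | Q2 | Q3

accepts :: String -> Bool
accepts w = go Q0 w "Z"        -- 'Z' is the bottom-of-stack marker
  where
    go :: St -> String -> String -> Bool
    go Q3 inp _  = null inp    -- accept: q3 with no input left
    go q  inp st = guess || step
      where
        guess = go Q3 inp st   -- the epsilon-moves: guess the input is done
        step = case (q, inp, st) of
          (Q0, 'a':cs, _      ) -> go Q0 cs ('a':st)  -- read an a, push it
          (Q0, 'b':cs, 'a':_  ) -> go Q1 cs st        -- first b of a pair
          (Q1, 'b':cs, 'a':st') -> go Q2 cs st'       -- second b: pop one a
          (Q2, 'b':cs, 'a':_  ) -> go Q1 cs st        -- start the next pair
          _                     -> False              -- no transition: crash

-- accepts "aabbb" == True  (2*2 >= 3)
-- accepts "abbb"  == False (2*1 <  3)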
To do the second option, you'd write a CFG like this:
S -> aSbb | aSb | aS | e
(The aSb production is needed for odd numbers of b's: for example, ab is in L since 2·1 >= 1, but it has no derivation without aSb.)
Then your PDA could do the following:
push S onto the stack
repeatedly pop the top of the stack:
if you popped a terminal, consume that same terminal from the input (a mismatch kills this branch of the nondeterminism)
if you popped a nonterminal, push some production for that nonterminal, chosen nondeterministically
This nondeterministically generates every possible derivation according to the CFG. If the input is a string in the language, then one of these derivations will consume all the input and leave the stack empty, which is the halt/accept condition.
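As a sanity check, here is a small Haskell sketch of my own of exactly this machine, using the corrected grammar above: the stack is a String, 'S' marks the nonterminal, and each production is tried nondeterministically via (||).

accepts :: String -> Bool
accepts w = go "S" w
  where
    go ('S':st) inp =               -- expand S by one of its productions
         go ("aSbb" ++ st) inp
      || go ("aSb"  ++ st) inp
      || go ("aS"   ++ st) inp
      || go st inp                  -- S -> e
    go (t:st) (c:cs) | t == c = go st cs   -- match a terminal against input
    go [] [] = True                 -- input consumed, stack empty: accept
    go _  _  = False                -- mismatch or leftovers: this branch dies

-- accepts "ab"   == True   (2*1 >= 1, needs the aSb production)
-- accepts "abbb" == False  (2*1 <  3)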
We need to construct a pushdown automaton for the following language: L = {a^n b^m | 2n >= m}
The given condition is 2n >= m, which means we want the number of a's to be at least half the number of b's. That is, we want at least one a for every two b's. That means if we push all the a's on the stack, we need to read no more than two b's for every a on the stack. This suggests a PDA that works like this:
read all the a's, pushing into the stack
read b's, popping an a for every two b's you see
if you run out of stack but still have b's to process, reject
if you run out of input at the same time or sooner than you run out of stack, accept.
The PDA for the problem is as follows:
Transition functions
The transition functions are as follows:
Step 1: δ(q0, a, Z) = (q0, aZ)
Step 2: δ(q0, a, a) = (q0, aa)
Step 3: δ(q0, b, a) = (q1, a)
Step 4: δ(q1, b, a) = (q2, ε)
Step 5: δ(q2, b, a) = (q1, a)
Step 6: δ(q2, b, Z) = ∅ (no move is defined here, so the machine dies and rejects: the stack has run out of a's while b's remain)
Step 7: δ(q0, ε, Z) = (qf, Z) and δ(q0, ε, a) = (qf, a)
Step 8: δ(q1, ε, a) = (qf, a)
Step 9: δ(q2, ε, Z) = (qf, Z) and δ(q2, ε, a) = (qf, a)
Acceptance is by final state qf at the end of the input. Steps 7-9 are the "guess that the input has ended" moves; without them, strings such as a, ab, and aabbb, which are all in L, would be rejected.
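To see the transitions in action, here is a worked trace (the steps refer to the list above). On the input aabbb (n = 2, m = 3, so 2n >= m holds), one accepting run is:
(q0, aabbb, Z) ⊢ (q0, abbb, aZ) ⊢ (q0, bbb, aaZ) ⊢ (q1, bb, aaZ) ⊢ (q2, b, aZ) ⊢ (q1, ε, aZ) ⊢ (qf, ε, aZ), and the machine accepts.
On the input abbb (2n = 2 < 3 = m), every run dies: (q0, abbb, Z) ⊢ (q0, bbb, aZ) ⊢ (q1, bb, aZ) ⊢ (q2, b, Z), and no move reads b with Z on top of the stack, so the input is rejected.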
(No, this isn't one of those translate-SQL-to-RA questions ;-) I have a formula in First-Order Logic that I want to express in RA. That ought to be easy: Codd's 1972 approach in the Relational Completeness paper is to show each FOL operator can be equivalently expressed in RA.
Given relation SP:
Heading {S# CHAR, P# CHAR, QTY INT}
Key {S#, P#}
Characteristic predicate SP(s, p, q) = 'Supplier s supplies Part p in quantity q.'
Express: 'Supplier 'S1' and Supplier 'S2' supply exactly the same set of Parts (disregarding quantities).'
Formula:
∀p. (∃q1. SP('S1', p, q1) ) ⇔ (∃q2. SP('S2', p, q2) )
Note that in the case of S1 supplying no parts at all, this formula is true just in case S2 also supplies no parts.
This is a Yes/No question (the formula has no free variables); so I'd expect the RA expression must result in a relation with no attributes, returning an empty relation if the two Suppliers do not supply the same set of Parts (formula evaluates to False); otherwise the non-empty relation with no attributes (formula evaluates to True).
To explain a bit further: usually queries return a list of something -- such as the list of Parts supplied by S1, disregarding quantities: SP WHERE (S# = 'S1') {P#} (or in Greek π{P#}(σS# = 'S1'(SP))). For a Yes/No question, we're interested only in whether the query returns something vs nothing, e.g. does Supplier S1 supply Part P456?: SP WHERE (S# = 'S1' AND P# ='P456') {} (π{}(σS# = 'S1'(σP# = 'P456'(SP)))).
You'll notice I'm using a variant of RA: Tutorial D from Date & Darwen. This is easier to read and typeset than Codd's original RA (I've also included the Greek characters and subscripts form). I'll limit myself to Tutorial D operators that correspond to Codd's RA.
I can do the inverse of the query I want: 'Are there any Parts Supplied by S1 but not by S2, or vice versa?'
Firstly a couple of shorthands for common subexpressions
WITH S1P := SP WHERE (S# = 'S1'){P#},
S2P := SP WHERE (S# = 'S2'){P#} :
( S1P MINUS S2P )
UNION
( S2P MINUS S1P );
For those who prefer Greek:
S1P := π{P#}(σS# = 'S1'(SP))
S2P := π{P#}(σS# = 'S2'(SP))
(S1P \ S2P) ∪ (S2P \ S1P)
This'll return an empty result just in case the two Suppliers supply exactly the same set of Parts. So all that remains to do is project that result on to no attributes, and flip empty result to non-empty and vice versa. But Codd's RA doesn't have a way to express that flip, AFAICT.
Applying Codd's 1972 method to the formula, the outermost operation is a forall quantifier, so convert that to a negation of an existential:
¬∃p. ¬( (∃q1. SP('S1', p, q1) ) ⇔ (∃q2. SP('S2', p, q2) ) )
But now the outermost operation is negation. Codd's method only allows negation to appear nested inside conjunction.
I'm stuck. Any ideas?
There is no RA expression that answers the question, if we limit to RA operators and semantics per Codd's 1972 specification.
Even if we add the operators commonly included in RA these days, we can't answer the question as posed -- for example, the operators covered on Wikipedia, such as Rename aka ρ, Extend (for calculated columns), Grouping/Aggregation, and Outer Joins.
From the discussion, arguably, the desired result (a degree-zero relation) is not countenanced by Codd. I say "arguably" because Codd never rigorously defines 'relation'. There's Codd 1970 footnote 1: "R is a subset of the Cartesian product S1 x S2 x ... x Sn."; but no lower bound is given for n. Clearly it's intended to include the degenerate 'product' for n = 1; then why not allow zero?
For example, SQL does not support degree-zero tables. SQL does support pseudo-extending a would-be degree-zero table with a dummy column:
SELECT 'Yes' AS Dummy FROM SP WHERE...
Even allowing that, I claim the question as posed can't be answered in SQL. (Consider the case where SP is empty: then the two Suppliers do supply the same set of Products, viz. the empty set; but the FROM SP ... can only return an empty relation.)
Various non-standard operators or primitives have been suggested (see Comments on q and on other answers). AFAICT there is no authoritative reference that 'blesses' any particular approach. For example, the Alice Book seems not to consider relations of degree zero.
To briefly survey the possible operators/primitives. (Any one of these is expressively equivalent to any other, in the sense that if we have one we can define the others in terms of it -- except for the last.)
Those returning true/false:
Relational comparison: subset ⊆, which can be used to define equality of relations ==. (These require the operands to be 'Union Compatible'.)
IS_EMPTY( ) (which appears in Tutorial D).
The difficulty with returning true/false is that there are no such primitives in RA. (RA operators are usually described as "closed over relations".) Alternatively these operators could return a degree-zero relation; but then why not go there directly?
Those returning a degree-zero relation:
A complement operation, valid only applied to a degree-zero relation. (This is the "flip" operation discussed in the q.)
Make Dee a primitive -- that is, the non-empty degree-zero relation. Then Dum =df Dee MINUS Dee; and in general the complement of r (which must be degree-zero) =df Dee MINUS r.
Provide primitive(s) to express a relation literal/constant value, just as most programming languages support expressing numeric or String literals, or more complex data structures. Then Dum/Dee are just two amongst the many relation constants.
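For concreteness, here is how the query from the question could then be written (a sketch in the Tutorial D variant used in the question, with the Dee primitive from the second bullet; TABLE_DEE is Tutorial D's spelling of Dee, and this is an extension, not Codd's RA):
WITH S1P := SP WHERE (S# = 'S1'){P#},
     S2P := SP WHERE (S# = 'S2'){P#},
     DIFF := ( S1P MINUS S2P ) UNION ( S2P MINUS S1P ) :
TABLE_DEE MINUS DIFF{}
This yields Dee (true) just in case DIFF is empty, i.e. the two Suppliers supply exactly the same set of Parts, and Dum (false) otherwise.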
For DFAs we can do the intersection of two automata by taking the cross product of their states and accepting exactly those states that are accepting in both of the original automata.
Union is performed similarly. However, although I can do union on NFAs easily using epsilon transitions, how do I do their intersection?
You can use the cross-product construction on NFAs just as you would DFAs. The only changes are how you'd handle ε-transitions. Specifically, for each state (qi, rj) in the cross-product automaton, you add an ε-transition from that state to each pair of states (qk, rj) where there's an ε-transition in the first machine from qi to qk and to each pair of states (qi, rk) where there's an ε-transition in the second machine from rj to rk.
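Here is a minimal Haskell sketch of that construction (the record type and names are my own, not a standard library): delta gives the moves on a symbol, and eps the ε-moves, handled exactly as described above.

-- An NFA over Char with explicit epsilon-moves.
data NFA q = NFA
  { states    :: [q]
  , start     :: q
  , accepting :: q -> Bool
  , delta     :: q -> Char -> [q]   -- transitions on a symbol
  , eps       :: q -> [q]           -- epsilon-transitions
  }

-- Cross product: step both machines together on a symbol; on an
-- epsilon-move, advance one coordinate and leave the other in place.
intersectNFA :: NFA q -> NFA r -> NFA (q, r)
intersectNFA a b = NFA
  { states    = [ (q, r) | q <- states a, r <- states b ]
  , start     = (start a, start b)
  , accepting = \(q, r) -> accepting a q && accepting b r
  , delta     = \(q, r) c -> [ (q', r') | q' <- delta a q c, r' <- delta b r c ]
  , eps       = \(q, r) -> [ (q', r) | q' <- eps a q ]
                        ++ [ (q, r') | r' <- eps b r ]
  }

Running the product then works like running any NFA: keep a set of current states, close it under eps, and step every state through delta on each input symbol.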
Alternatively, you can always convert the NFAs into DFAs and then compute the cross product of those DFAs.
Hope this helps!
We can also use De Morgan's laws: A ∩ B = (A' ∪ B')'
Taking the union of the complements of the two NFAs is comparatively simpler, especially if you are used to the epsilon method of union.
There is a huge mistake in templatetypedef's answer.
The product automaton of two NFAs L1 and L2:
New states: Q = the product of the state sets of L1 and L2.
Now the transition function:
a is a symbol in the union of both automata's alphabets
delta( (q1,q2) , a) = delta_L1(q1 , a) X delta_L2(q2 , a)
which means you take the Cartesian product of the set delta_L1(q1 , a) with the set delta_L2(q2 , a).
The problem with templatetypedef's answer is that the product state (qk , rk) is not mentioned.
Probably a late answer, but since I ran into a similar problem today I felt like sharing it. Realise the meaning of intersection first: given a string e, e should be accepted by both automata.
Consider the following automata:
m1 accepting the language {w | w contains '11' as a substring}
m2 accepting the language {w | w contains '00' as a substring}
Intuitively, m = m1 ∩ m2 is the automaton accepting the strings containing both '11' and '00' as substrings. The idea is to simulate both automata simultaneously.
Let's now formally define the intersection.
m = (Q, Σ, Δ, q0, F)
Let's start by defining the states for m; this is, as mentioned above, the Cartesian product of the states in m1 and m2. So, if we have a1, a2 as labels for the states in m1, and b1, b2 for the states in m2, Q will consist of the following states: a1b1, a2b1, a1b2, a2b2. The idea behind this product construction is to keep track of where we are in both m1 and m2.
Σ most likely remains the same; however, in some cases the alphabets differ, and we then just take the union of the alphabets of m1 and m2.
q0 is now the state in Q containing both the start state of m1 and the start state of m2. (a1b1, to give an example.)
F contains state s if and only if both states mentioned in s are accept states of m1 and m2 respectively.
Last but not least, Δ; we define delta again in terms of the Cartesian product, as follows: Δ(a1b1, E) = Δ(m1)(a1, E) x Δ(m2)(b1, E), as also mentioned in one of the answers above (if I am not mistaken). The intuitive idea behind this construction for Δ is just to tear a1b1 apart and consider the states a1 and b1 in their original automata. Now we 'iterate' over each possible symbol, say E, and see where it brings us in each original automaton. After that, we glue these results together using the Cartesian product. If Δ(a1, E) leads somewhere in m1 but Δ(b1, E) leads nowhere in m2, then the edge will not exist in m; otherwise we take every combination of the reachable states.
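To make this concrete in code, here is a small Haskell sketch of my own of exactly this m1 ∩ m2 example. Since both machines here happen to be deterministic, the product is just a pair of components stepped in lockstep, and acceptance requires both components to accept, per the definition of F above.

-- State components: how much of "11" (resp. "00") we have seen so far;
-- 2 is the absorbing "found it" state.
type PairState = (Int, Int)

step :: PairState -> Char -> PairState
step (p, q) c = (d1 p c, d2 q c)
  where
    d1 2 _   = 2          -- already saw "11": stay
    d1 s '1' = s + 1      -- progress toward "11"
    d1 _ _   = 0          -- reset
    d2 2 _   = 2          -- already saw "00": stay
    d2 s '0' = s + 1      -- progress toward "00"
    d2 _ _   = 0          -- reset

-- Accept iff BOTH components are in their accept state.
acceptsBoth :: String -> Bool
acceptsBoth = (== (2, 2)) . foldl step (0, 0)

-- acceptsBoth "1100" == True
-- acceptsBoth "10"   == False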
An alternative to constructing the product automaton is allowing more complicated acceptance criteria. Ordinarily, an NFA accepts an input string when it has reached any one of a set of accepting final states. That can be extended to boolean combinations of states. Specifically, you construct the automaton for the intersection like you do for the union, but consider the resulting automaton to accept an input string only when it is in (what corresponds to) accepting final states in both automata.
In the following generalized code:
nat = [1..xmax]
xmax = *insert arbitrary Integral value here*
setA = [2*x | x <- nat]
setB = [3*x | x <- nat]
setC = [4*x | x <- nat]
setD = [5*x | x <- nat]
setOne = setA `f` setB
setTwo = setC `f` setD
setAll = setOne ++ setTwo
setAllSorted = quicksort setAll
(please note that 'f' stands for a function of type
f :: Integral a => [a] -> [a] -> [a]
that is not simply ++)
How does Haskell handle attempting to print setAllSorted?
Does it get the values for setA and setB, compute setOne and then only keep the values for setOne in memory (before computing everything else)?
Or does Haskell keep everything in memory until having gotten the value for setAllSorted?
If the latter is the case then how would I specify (using main, do functions and all that other IO stuff) that it do the former instead?
Can I tell the program in which order to compute and garbage collect? If so, how would I do that?
The head of setAllSorted is necessarily less-than-or-equal to every element in the tail. Therefore, in order to determine the head, all of setOne and setTwo must be computed. Furthermore, since all of the sets are constant applicative forms (CAFs), I believe they will not be garbage collected after being computed. The numbers themselves will likely be shared between the sets, but the cons nodes that glue them together will likely not be (your luck with some of them will depend upon the definition of f).
Due to laziness, Haskell evaluates things on-demand. You can think of the printing done at the end as "pulling" elements from the list setAllSorted, and that might pull other things with it.
That is, running this code it goes something like this:
Printing first evaluates the first element of setAllSorted.
Since this comes from a sorting procedure, it will require all the elements of setAll to be evaluated. (Since the smallest element could be the last one).
Evaluating the first element of setAll requires evaluating the first element of setOne.
Evaluating the first element of setOne depends on how f is implemented. It might require all or none of setA and setB to be evaluated.
After we're done printing the first element of setAllSorted, setAll will have been fully evaluated. There are no more references to setOne, setTwo and the smaller sets, so all of these are by now eligible for garbage collection. The first element of setAllSorted can also be reclaimed.
So in theory, this code will keep setAll in memory most of the time, while setAllSorted, setOne and setTwo will likely only occupy a constant amount of space at any time. Depending on the implementation of f, the same may be true for the smaller sets.
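One way to watch this on-demand behaviour for yourself (a sketch of mine, not from the answers above, using tiny versions of setA and setB) is Debug.Trace.trace, which prints its message at the moment a value is actually forced:

import Debug.Trace (trace)
import Data.List (sort)

setA, setB :: [Integer]
setA = [ trace ("forcing A " ++ show x) (2 * x) | x <- [1 .. 3] ]
setB = [ trace ("forcing B " ++ show x) (3 * x) | x <- [1 .. 3] ]

main :: IO ()
main =
  -- Printing just the head of the sorted list still forces every
  -- element first (step 2 above): all six "forcing" lines appear,
  -- in an order chosen by the sort, before the result 2 is printed.
  print (head (sort (setA ++ setB)))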
I'm reading an algorithms book in my leisure time. Here is a question; I have my own answer but I'm not quite sure it's right. What's your opinion? Thanks!
Question:
There are 2 television network companies, call them A and B, each planning its TV program schedule across the same n time slots of a day. Each puts its n programs into those slots. Each program has a rating based on its popularity over the past year, and all the ratings are distinct. A network wins a time slot when its show has a higher rating than its opponent's. Is there a pair of schedules, S made by A and T made by B, such that (S, T) is stable, i.e. neither network can unilaterally change its own schedule and win more time slots?
There is no stable matching, unless one station has all its programs contiguous in the ratings (i.e. the other station has no program rated better than one of the first station's programs but worse than another).
Proof
Suppose a station can improve its score, and the result is a stable matching. But then the other station could improve its own score by reversing the rearrangement. So it wasn't a stable matching. Contradiction.
Therefore a stable matching can not be reached by a station improving its score.
Therefore a stable matching can't be made worse (for either station), because then that worse arrangement could be improved back into a stable matching (which I just showed is not allowed).
Therefore every program rearrangement of a stable matching must give equal scores to both stations.
The only program sets which can't have scores changed by rearrangement are the ones where one of the stations' programs are contiguous in the ratings. (Proof left to reader)
Solution in Haskell:
import Data.List (sort)

hasStable :: Ord a => [a] -> [a] -> Bool
hasStable x y =
  score x y + score y x == 0

-- score is the number of slots we win minus the number of slots they win
-- in our best possible response schedule
score :: Ord a => [a] -> [a] -> Integer
score others mine =
  scoreSorted (revSort others) (revSort mine)
  where
    -- revSort sorts from biggest to smallest
    revSort = reverse . sort

scoreSorted :: Ord a => [a] -> [a] -> Integer
scoreSorted (o:os) (m:ms)
  | m > o =
      -- our best show is better than theirs:
      -- we use it to beat theirs and move on
      1 + score os ms
  | otherwise =
      -- their best show is better than ours:
      -- we concede that slot to our worst show,
      -- saving our big guns for later
      -1 + score os (m : ms)
scoreSorted _ _ = 0 -- there are no shows left
> hasStable [5,3] [4,2]
False
> hasStable [5,2] [3,4]
True
My own answer is that there is no stable matching. Suppose there are only 2 time slots.
A has programs p1 (5.0) and p2 (3.0);
B has programs p3 (4.0) and p4 (2.0);
The possible schedules for A are:
S1: p1, p2
S2: p2, p1
The possible schedules for B are:
T1: p3, p4
T2: p4, p3
So the possible matchings are:
(S1, T1)(S1, T2)(S2, T1)(S2, T2)
while the results are
(S1, T1) - (p1, p3)(p2, p4) 2:0 - not stable, because B can change its schedule to T2, and the result is: (S1, T2) - (p1, p4)(p2, p3) 1:1
Vice versa, and the same goes for the other matchings.
Let each TV channel have 2 shows.
TV-1:
Show1 has a rating of 20 pts.
Show2 has a rating of 40 pts.
TV-2:
Show1 has a rating of 30 pts.
Show2 has a rating of 50 pts.
Then no matching is stable: whichever slot TV-2 plays its 50-point show in, it wins that slot, so TV-1's best response is to put its 40-point show against the 30-point show and win the other slot; but then TV-2 can swap its schedule so that 50 meets 40 and 30 meets 20, winning both slots; TV-1 swaps again in response, and so on.
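(As a cross-check with the Haskell answer above: hasStable [20,40] [30,50] evaluates to False, agreeing that these ratings admit no stable pair of schedules.)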