I am getting confused as I construct some state diagrams for DFAs. I am currently working through a problem with the constraint {w | w is any string not in a*b*}. I do not understand what a*b* means exactly; can someone summarize that for me? Thanks!
a*b* means zero or more a's followed by zero or more b's.
Strings in the language a*b* include:
L = { epsilon (zero a's and zero b's), a, b, ab, aab, abb, ... }
A DFA for the strings in the language a*b* is given by the transitions below (a DFA has no epsilon-moves, so only the moves on a and b are listed):
∆(q0, a) = q0
∆(q0, b) = q1
∆(q1, b) = q1
∆(q1, a) = q2
∆(q2, a) = q2
∆(q2, b) = q2
Here, ∆ represents the transition function.
q0 is the initial state.
q0, q1 are final states.
Similarly, a DFA for the strings not in the language a*b* is:
∆(q0, a) = q0
∆(q0, b) = q1
∆(q1, b) = q1
∆(q1, a) = q2
∆(q2, a) = q2
∆(q2, b) = q2
Here, ∆ represents the transition function.
q0 is the initial state.
q2 is the final state.
Here is an image of the two DFAs above: https://i.stack.imgur.com/SAt38.jpg
Hope that helps!
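If you want to experiment with it, here is a small Standard ML sketch (my own, not part of the answer above) that simulates the second DFA; the state encoding and function name are just illustrative choices:
(* states: 0 = q0, 1 = q1, 2 = q2; only state 2 accepts *)
fun step (0, #"a") = 0
  | step (0, #"b") = 1
  | step (1, #"b") = 1
  | step (1, #"a") = 2
  | step (2, _)    = 2
  | step (s, _)    = s          (* any other character is ignored *)

(* true exactly for strings NOT in a*b*, e.g. "ba" and "aba";
   "", "a", "ab" and "aabbb" all give false *)
fun notInAStarBStar w =
  List.foldl (fn (c, s) => step (s, c)) 0 (String.explode w) = 2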
I need to construct a pushdown automaton for the following language: L = {a^n b^m | 2n >= m}.
Can someone help me with it?
There are two approaches that come to mind:
try to write a PDA from scratch using the stack in a sensible way
write a CFG and then rely on the construction that shows PDAs accept the languages of all CFGs
To take approach 1, recognize that PDAs are like NFAs that can push symbols onto a stack and pop them off. If we want 2n >= m, that means we want the number of a's to be at least half the number of b's. That is, we want at least one a for every two b's. That means if we push all the a's on the stack, we need to read no more than two b's for every a on the stack. This suggests a PDA that works like this:
1. read all the a's, pushing into the stack
2. read b's, popping an a for every two b's you see
3. if you run out of stack but still have b's to process, reject
4. if you run out of input at the same time or sooner than you run out of stack, accept
In terms of states and transitions:
Q S E Q' S'
q0 Z a q0 aZ // these transitions read all the a's and
q0 a a q0 aa // push them onto the stack
q0 a b q1 a // these transitions read all the b's and
q1 a b q2 - // pop a's off the stack for every complete
q2 a b q1 a // pair of b's we see. if we run out of a's
// it will crash and reject
q0 Z - q3 Z // these transitions just let us guess at any
q0 a - q3 a // point that we have read all the input and
q1 Z - q3 Z // go to the accepting state. note that if
q1 a - q3 a // we guessed wrong the PDA will crash there
q2 Z - q3 Z // since we have no transitions defined.
q2 a - q3 a // crashing rejects the input.
Here, the accept condition is being in state q3 with no more input left to read (acceptance by final state); leftover a's on the stack are fine, since 2n >= m allows more a's than needed.
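To see the table in action, take the input aab (in the language, since 2·2 >= 1): the PDA reads the two a's in q0 leaving the stack aaZ, reads the b and moves to q1 with the stack unchanged, and then, with no input left, takes the guessing move q1 a - q3 a and accepts in q3. On abbb (not in the language, since 2·1 < 3) the single a is popped after the second b, and the third b finds no applicable transition (nothing is defined for q2 with Z on top), so the machine crashes and rejects.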
To do the second option, you'd write a CFG like this, where each a licenses zero, one or two b's (so the b's never outnumber twice the a's):
S -> aSbb | aSb | aS | e
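(For example, aabbb, where 2n = 4 >= 3 = m, has the derivation S => aSbb => aaSbbb => aabbb.)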
Then your PDA could do the following:
push S onto the stack
pop from the stack and push onto the stack one of the productions for S
if you pop a terminal off the stack, consume that terminal from the input (if it doesn't match the next input symbol, this branch dies)
if you pop a nonterminal off the stack, replace it with some production for that nonterminal
This nondeterministically generates every possible derivation according to the CFG. If the input is a string in the language, then one of these derivations will consume all the input and leave the stack empty, which is the halt/accept condition.
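For instance, on input abb the machine pushes S, guesses the production S -> aSbb (stack now a S b b), matches the a against the input, guesses S -> e for the S on top (stack now b b), and matches the two b's; the input and the stack run out together, so abb is accepted.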
We need to construct a pushdown automaton for the following language: L = {a^n b^m | 2n >= m}.
The given condition is 2n >= m, which means we want the number of a's to be at least half the number of b's; that is, at least one a for every two b's. So if we push all the a's onto the stack, we must read no more than two b's for every a on the stack. This suggests a PDA that works like this:
read all the a's, pushing into the stack
read b's, popping an a for every two b's you see
if you run out of stack but still have b's to process, reject
if you run out of input at the same time or sooner than you run out of stack, accept.
The PDA for the problem is as follows.
The transition functions are:
Step 1: δ(q0, a, Z) = (q0, aZ)
Step 2: δ(q0, a, a) = (q0, aa)
Step 3: δ(q0, b, a) = (q1, a)
Step 4: δ(q1, b, a) = (q2, ε)
Step 5: δ(q2, b, a) = (q1, a)
Step 6: δ(q2, b, Z) = dead state (no a left on the stack to pay for another b, so reject).
Step 7: δ(q2, ε, Z) = (qf, Z)
Step 7 covers the case m = 2n exactly. To also accept the strings with fewer b's (including none at all), the machine needs additional ε-moves into qf while an a, or just Z, is still on the stack, e.g. δ(q0, ε, Z) = (qf, Z), δ(q0, ε, a) = (qf, a), δ(q1, ε, a) = (qf, a) and δ(q2, ε, a) = (qf, a).
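If it helps to test this against concrete strings, here is a small Standard ML sketch (my own, not part of the answer) that follows Steps 1-7 plus the extra accepting ε-moves noted above; states q0, q1, q2 are encoded as 0, 1, 2 and the stack is a list of characters with Z at the bottom:
(* at the end of the input, the epsilon-moves into qf accept any surviving
   configuration, so reaching the end without crashing means accept *)
fun run (_, _, [])                   = true
  | run (0, stack, #"a" :: rest)     = run (0, #"a" :: stack, rest)  (* Steps 1-2 *)
  | run (0, #"a" :: s, #"b" :: rest) = run (1, #"a" :: s, rest)      (* Step 3 *)
  | run (1, #"a" :: s, #"b" :: rest) = run (2, s, rest)              (* Step 4 *)
  | run (2, #"a" :: s, #"b" :: rest) = run (1, #"a" :: s, rest)      (* Step 5 *)
  | run _                            = false                         (* crash = reject, e.g. Step 6 *)

(* accepts "", "a", "ab", "aab", "abb"; rejects "b", "abbb", "ba" *)
fun inLanguage w = run (0, [#"Z"], String.explode w)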
Can somebody please help me draw a NFA that accepts this language:
{ w | the length of w is 6k + 1 for some k ≥ 0 }
I have been stuck on this problem for several hours now. I do not understand where the k comes into play and how it is used in the diagram...
{ w | the length of w is 6k + 1 for some k ≥ 0 }
We can use the Myhill-Nerode theorem to constructively produce a provably minimal DFA for this language. This is a useful exercise. First, a definition:
Two strings w and x are indistinguishable with respect to a language L iff: (1) for every string y such that wy is in L, xy is in L; (2) for every string z such that xz is in L, wz is in L.
The insight in Myhill-Nerode is that if two strings are indistinguishable w.r.t. a regular language, then a minimal DFA for that language will see to it that the machine ends up in the same state for either string. Indistinguishability is reflexive, symmetric and transitive so we can define equivalence classes on it. Those equivalence classes correspond directly to the set of states in the minimal DFA. Now, to find the equivalence classes for our language. We consider strings of increasing length and see for each one whether it's indistinguishable from any of the strings before it:
e, the empty string, has no strings before it. We need a state q0 to correspond to the equivalence class this string belongs to. The set of strings that can come after e to reach a string in L is L itself; also written c(c^6)*
c, any string of length one, has only e before it. These are not, however, indistinguishable; we can add e to c to get ce = c, a string in L, but we cannot add e to e to get a string in L, since e is not in L. We therefore need a new state q1 for the equivalence class to which c belongs. The set of strings that can come after c to reach a string in L is (c^6)*.
It turns out we need a new state q2 here; the set of strings that take cc to a string in L is ccccc(c^6)*. Show this.
It turns out we need a new state q3 here; the set of strings that take ccc to a string in L is cccc(c^6)*. Show this.
It turns out we need a new state q4 here; the set of strings that take cccc to a string in L is ccc(c^6)*. Show this.
It turns out we need a new state q5 here; the set of strings that take ccccc to a string in L is cc(c^6)*. Show this.
Consider the string cccccc. What strings take us to a string in L? Well, c does. So does c followed by any string of length 6. Interestingly, this is the same as L itself. And we already have an equivalence class for that: e could also be followed by any string in L to get a string in L. cccccc and e are indistinguishable. What's more: since all strings of length 6 are indistinguishable from shorter strings, we no longer need to keep checking longer strings. Our DFA is guaranteed to have only the states q0 - q5 we have already identified. What's more, the work we've done above defines the transitions we need in our DFA, the initial state and the accepting states as well:
The DFA will have a transition on symbol c from state q to state q' if x is a string in the equivalence class corresponding to q and xc is a string in the equivalence class corresponding to q';
The initial state will be the state corresponding to the equivalence class to which e, the empty string, belongs;
A state q is accepting if any string (hence every string) in the equivalence class corresponding to q is in the language; equivalently, if the set of strings that take strings in that equivalence class to a string in L includes e, the empty string.
We may use the notes above to write the DFA in tabular form:
q x q'
-- -- --
q0 c q1 // e + c = c
q1 c q2 // c + c = cc
q2 c q3 // cc + c = ccc
q3 c q4 // ccc + c = cccc
q4 c q5 // cccc + c = ccccc
q5 c q0 // ccccc + c = cccccc ~ e
We have q0 as the initial state and the only accepting state is q1.
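As a quick executable cross-check (mine, not part of the answer above), the table just tracks the input length modulo 6 and accepts when the remainder is 1; in Standard ML:
(* state q_i corresponds to (number of characters read) mod 6; q1 accepts *)
fun acceptsLength6kPlus1 (w : string) = String.size w mod 6 = 1
(* e.g. true for "c" and for any string of length 7; false for "" and "cccccc" *)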
Here's an NFA (it happens to be deterministic, which is fine, since every DFA is also an NFA) with seven states: the start state S1 reads one character to reach the final state S2, and from S2 a loop of six transitions consumes six more characters at a time, returning to the final state after every full pass. The k in 6k + 1 simply counts how many times the loop is taken.
(Start) S1 --> S2 (Final) --> S3 --> S4 --> S5 --> S6 --> S7
               ^                                           |
               |___________________________________________|
Every accepted string therefore has length 1 + 6k for some k >= 0.
In order to implement a position counter from quadrature inputs, I used the following truth table:
[image: quadrature truth table]
My Boolean equations (with Ap and Bp being the previous values of A and B respectively) for Direction and Count/Movement are below:
Count <= (not Ap and not Bp and not A and B) or (not Ap and not Bp and A and not B) or
(not Ap and Bp and not A and not B) or (not Ap and Bp and A and B) or
(Ap and Bp and not A and B) or (Ap and Bp and A and not B) or
(Ap and not Bp and not A and not B) or (Ap and not Bp and A and B); -- complete boolean on conditions for a count value of 1
Direction <= (not Bp and A) or (Bp and not A); -- complete boolean on conditions for forward direction
One would expect the result to count up as long as B leads A, but instead all I get is this:
[image: simulation result]
Any suggestions on what could be wrong? If my Boolean equation differs from yours, please post it.
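Not an answer, but in case it helps to sanity-check the equations outside the VHDL simulator, here is a direct Standard ML transcription (function names are mine); tabulating it over the 16 combinations of (Ap, Bp, A, B) reproduces exactly the conditions written above:
(* Count is 1 on the eight single-step quadrature transitions listed above;
   Direction compares the previous B with the current A *)
fun countEq (ap, bp, a, b) =
  (not ap andalso not bp andalso not a andalso b) orelse
  (not ap andalso not bp andalso a andalso not b) orelse
  (not ap andalso bp andalso not a andalso not b) orelse
  (not ap andalso bp andalso a andalso b) orelse
  (ap andalso bp andalso not a andalso b) orelse
  (ap andalso bp andalso a andalso not b) orelse
  (ap andalso not bp andalso not a andalso not b) orelse
  (ap andalso not bp andalso a andalso b)

fun directionEq (bp, a) = (not bp andalso a) orelse (bp andalso not a)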
The question is to construct a Turing Machine which accepts the regular expression,
L = {a^n b^n | n>= 1}.
I am not sure if my answer is correct or wrong. Thank you in advance for your reply.
You cannot "accept the regular expression", only the language it describes. And what you provide is not a regular expression, but a set description. In fact, the language is not regular and therefore cannot be described by standard regular expressions.
The machine from your answer accepts the language described by the regular expression a^+ b^+.
A TM could mark the first a (e.g. by converting it to A) and then delete the first b, one loop for each n. If you end up with a string consisting only of A's, accept.
As stated before, the language L = {a^n b^n | n >= 1} cannot be described by regular expressions; it does not belong to the class of regular languages. It is, however, a context-free language, so it can be described by a context-free grammar and recognized by a pushdown automaton (an automaton with LIFO memory, a stack).
Grammar for this language would look something like this:
G = (V, S, R, P)
Where:
V is the finite set of non-terminal characters, V = { S }
S is the finite set of terminal characters, S = { a, b }
R is the relation describing rewrites of a non-terminal into a string of non-terminals and terminals, in this case R = { S -> aSb, S -> ab }
P is the starting non-terminal character, P = S
A pushdown automaton recognizing this language is more complex, as it is a 7-tuple M = (Q, S, G, D, q0, Z, F), where:
Q is set of states
S is input alphabet
G is stack alphabet
D is the transition relation
q0 is start state
Z is initial stack symbol
F is set of accepting states
For our case, it would be:
Q = { q0, q1, qF }
S = { a, b }
G = { z0, X }
D takes the form of a relation (current state, input character, top of stack) -> (new state, new top of stack), meaning the automaton can move to a different state and rewrite the top of the stack (erase it, replace it, or leave it as it is).
(q0, a, z0) -> (q0, Xz0) - reading the first a
(q0, a, X) -> (q0, XX) - reading consecutive a's
(q0, b, X) -> (q1, e) - reading first b
(q1, b, X) -> (q1, e) - reading consecutive b's
(q1, e, z0) -> (qF, e) - after the last b has been read, accepting on the bottom-of-stack marker
where e is empty word (sometimes called epsilon)
q0 = q0
Z = z0
F = { qF }
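For example, on input aabb the automaton moves through the configurations (q0, aabb, z0) -> (q0, abb, Xz0) -> (q0, bb, XXz0) -> (q1, b, Xz0) -> (q1, e, z0) -> (qF, e, e), so aabb is accepted. On abb it gets stuck: after (q0, abb, z0) -> (q0, bb, Xz0) -> (q1, b, z0) there is no transition for b with z0 on top of the stack, so the word is rejected.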
The language L = {a^n b^n | n >= 1} uses only two characters, a and b. A string in the language consists of some number of a's followed by an equal number of b's, and every string of this form is accepted. The beginning and end of the string are marked by the $ sign.
Step-1:
Replace a by X and move right, Go to state Q1.
Step-2:
Replace a by a and move right, Remain on same state
Replace Y by Y and move right, Remain on same state
Replace b by Y and move left, go to state Q2.
Step-3:
Replace b by b and move left, Remain on same state
Replace a by a and move left, Remain on same state
Replace Y by Y and move left, Remain on same state
Replace X by X and move right, go to state Q0.
Step-4:
If the symbol is Y, replace it by Y and move right and go to state Q3.
Else go to step 1.
Step-5:
Replace Y by Y and move right, remain in the same state.
If the symbol is $, replace it by $ and move left, STRING IS ACCEPTED, go to final state Q4.
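For example, with the input written as $aabb$ the tape evolves $aabb$ -> $Xabb$ -> $XaYb$ -> $XXYb$ -> $XXYY$; on the next pass the head sees Y in state Q0, skips the remaining Y's to the right, reaches the closing $, and the string is accepted. For $aab$ the machine eventually reaches $ in state Q1 while still looking for a b to pair with the last a, has no move, and rejects.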
I am self-studying Okasaki's Purely Functional Data Structures, now on exercise 3.4, which asks to reason about and implement a weight-biased leftist heap. This is my basic implementation:
(* 3.4 (b) *)
functor WeightBiasedLeftistHeap (Element : Ordered) : Heap =
struct
structure Elem = Element
datatype Heap = E | T of int * Elem.T * Heap * Heap
fun size E = 0
| size (T (s, _, _, _)) = s
fun makeT (x, a, b) =
let
val sizet = size a + size b + 1
in
if size a >= size b then T (sizet, x, a, b)
else T (sizet, x, b, a)
end
val empty = E
fun isEmpty E = true | isEmpty _ = false
fun merge (h, E) = h
| merge (E, h) = h
| merge (h1 as T (_, x, a1, b1), h2 as T (_, y, a2, b2)) =
if Elem.leq (x, y) then makeT (x, a1, merge (b1, h2))
else makeT (y, a2, merge (h1, b2))
fun insert (x, h) = merge (T (1, x, E, E), h)
fun findMin E = raise Empty
| findMin (T (_, x, a, b)) = x
fun deleteMin E = raise Empty
| deleteMin (T (_, x, a, b)) = merge (a, b)
end
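In case it is useful for testing, here is a small usage sketch; the IntElem structure is my own guess at an instance of the Ordered signature (adjust it to whatever the signature in your codebase actually requires):
structure IntElem : Ordered =
struct
  type T = int
  fun eq  (x, y) = x = y
  fun lt  (x, y) = x < y
  fun leq (x, y) = x <= y
end

structure H = WeightBiasedLeftistHeap (IntElem)

(* build a heap from a list and read back the minimum *)
val h   = List.foldl H.insert H.empty [5, 1, 4, 2, 3]
val one = H.findMin h   (* = 1 *)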
Now, in 3.4 (c) & (d), it asks:
Currently, merge operates in two passes: a top-down pass consisting of calls to merge, and a bottom-up pass consisting of calls to the helper function, makeT. Modify merge to operate in a single, top-down pass. What advantages would the top-down version of merge have in a lazy environment? In a concurrent environment?
I changed the merge function by simply inlining makeT, but I fail to see any advantages, so I think I haven't grasped the spirit of these parts of the exercise. What am I missing?
fun merge (h, E) = h
| merge (E, h) = h
| merge (h1 as T (s1, x, a1, b1), h2 as T (s2, y, a2, b2)) =
let
val st = s1 + s2
val (v, a, b) =
if Elem.leq (x, y) then (x, a1, merge (b1, h2))
else (y, a2, merge (h1, b2))
in
if size a >= size b then T (st, v, a, b)
else T (st, v, b, a)
end
I think I've figured out one point with regards to lazy evaluation. If I don't use the recursive merge to calculate the size, then the recursive call won't need to be evaluated until the child is needed:
fun merge (h, E) = h
| merge (E, h) = h
| merge (h1 as T (s1, x, a1, b1), h2 as T (s2, y, a2, b2)) =
let
val st = s1 + s2
val (v, ma, mb1, mb2) =
if Elem.leq (x, y) then (x, a1, b1, h2)
else (y, a2, h1, b2)
in
if size ma >= size mb1 + size mb2
then T (st, v, ma, merge (mb1, mb2))
else T (st, v, merge (mb1, mb2), ma)
end
Is that all? I am not sure about concurrency though.
I think you've essentially got it as far as the lazy evaluation goes -- it's not very helpful to use lazy evaluation if you are going to have to end up traversing the whole data structure to find out anything every time you do a merge...
As to the concurrency, I expect the issue is that if, while one thread is evaluating the merge, another comes along and wants to look something up, it will not be able to get anything useful done at least until the first thread completes the merge. (And it might even take longer than that.)
There is no benefit to the WMERGE-3-4C function in a lazy environment. It still does all the work that the original down-up merge did, and I'm pretty sure it would not be any easier for the language system to memoize.
There is no benefit to WMERGE-3-4C in a concurrent environment either. Each call to WMERGE-3-4C does all its work before passing the buck to another instance of WMERGE-3-4C. In fact, if we eliminated the recursion by hand, WMERGE-3-4C could be implemented as a single loop that does all the work while accumulating a stack, then a second loop that does the REDUCE work on the stack. The first loop would not be naturally parallelizable, though maybe the REDUCE could operate by calling the function on pairs, in parallel, until only one element remained in the list.