NFA that does not accept strings ending in "101"

What is an NFA that does not accept strings ending in "101"?

There are lots of NFAs (infinitely many, in fact) that work. Here is a simple one:
States: q0 (start), q1, q2, q3; accepting states: q0, q1, q2 (q3 is the only non-accepting state).
δ(q0,0) = q0    δ(q0,1) = q1
δ(q1,0) = q2    δ(q1,1) = q1
δ(q2,0) = q0    δ(q2,1) = q3
δ(q3,0) = q2    δ(q3,1) = q1
This NFA happens also to be a completely deterministic DFA, and that is fine. The DFA works by keeping track of the three most recently encountered input symbols. If the three most recent symbols were 000 or 100, the machine ends up in state q0. If they were 111, 011, or 001, it ends up in state q1. If they were 010 or 110, it ends up in state q2. If they were 101, it ends up in q3. Those are all the possibilities for the last three symbols seen, and each one leaves the machine in an accepting or non-accepting state exactly as the language definition requires.
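If you want to convince yourself, here is a small Python sketch (not part of the original answer; the state names follow the table above) that runs this DFA by table lookup and accepts exactly the strings that do not end in 101:

DELTA = {
    ('q0', '0'): 'q0', ('q0', '1'): 'q1',
    ('q1', '0'): 'q2', ('q1', '1'): 'q1',
    ('q2', '0'): 'q0', ('q2', '1'): 'q3',
    ('q3', '0'): 'q2', ('q3', '1'): 'q1',
}
ACCEPTING = {'q0', 'q1', 'q2'}   # q3 means "the last three symbols read were 101"

def accepts(word):
    state = 'q0'
    for symbol in word:
        state = DELTA[(state, symbol)]
    return state in ACCEPTING

assert accepts('') and accepts('1011') and not accepts('101') and not accepts('0101')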

There can be any number of NFAs for a given finite-automaton problem. In a non-deterministic finite automaton, a transition on an input symbol may lead to zero or more states. One NFA for the strings that do not end with 101 is:
Transitions are:
∆(q0,0)={q0}
∆(q0,1)={q0,q1}
Here, on input symbol 0 the initial state q0 remains in the same state, and on input symbol 1 it can either remain in q0 or go to the next state q1. q0 can be a final state.
∆(q1,0)={q2}
∆(q1,1)={q1}
Here, q1 goes to state q2 on input symbol 0 and remains in the same state on input symbol 1. q1 can also be a final state.
∆(q2,0)={q2}
∆(q2,1)={q3}
Here, q2 remains in the same state on input symbol 0 and goes to the next state q3 on input symbol 1. q2 can also be a final state.
∆(q3,0)={q2}
∆(q3,1)={q2,q1}
Here, q3 goes to state q2 on input symbol 0 and can go to q1 or q2 on input symbol 1. The state q3 cannot be a final state, as that would accept strings ending with 101.
Other NFAs can be constructed for the same question and can also be correct.
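A simple way to test any candidate NFA against the language definition is to simulate it by tracking the set of states reachable after each symbol and then compare its verdict with "does not end in 101" on all short strings. Here is a rough Python sketch (not part of the answer; the dictionary transcribes the transitions listed above, and you can swap in any other machine you want to check):

NFA_DELTA = {
    ('q0', '0'): {'q0'},  ('q0', '1'): {'q0', 'q1'},
    ('q1', '0'): {'q2'},  ('q1', '1'): {'q1'},
    ('q2', '0'): {'q2'},  ('q2', '1'): {'q3'},
    ('q3', '0'): {'q2'},  ('q3', '1'): {'q2', 'q1'},
}
FINAL = {'q0', 'q1', 'q2'}

def nfa_accepts(word, start='q0'):
    current = {start}
    for symbol in word:
        current = set().union(*(NFA_DELTA.get((q, symbol), set()) for q in current))
    return bool(current & FINAL)

def first_disagreement(max_len=8):
    from itertools import product
    for n in range(max_len + 1):
        for bits in product('01', repeat=n):
            w = ''.join(bits)
            if nfa_accepts(w) != (not w.endswith('101')):
                return w      # shortest string on which the NFA and the language disagree
    return None               # no disagreement found up to max_len

print(first_disagreement())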

We have to find an NFA for the strings that don't end with 101.
Consider four states q0, q1, q2, q3. The input symbols are 0 and 1.
The transitions are as follows:
δ(q0,0)=q0
δ(q0,1)=q1
δ(q1,0)=q2
δ(q1,1)=q1
δ(q2,0)=q0
δ(q2,1)=q3
δ(q3,0)=q2
δ(q3,1)=q1
Here, q0, q1, and q2 are final states and q3 is a non-final state.


formal description of TM that on input x computes the next word in the lexicographic order

For Σ = {0, 1}, give a formal description of a TM M that on input x computes the next word in Σ∗ according to the lexicographic order.
For example, on input 11, M halts with 000 on its tape
We will design a single-tape deterministic Turing machine to compute the next word in lexicographic order given a word w on the tape. Here is our strategy:
scan to the end of the input and look at the least significant bit. If it is 0, change it to 1 and halt.
otherwise, scan left, replacing each 1 you pass with a 0, until you see a 0; change that 0 to a 1 and halt. If you reach the front of the tape without seeing a 0, go to the next step.
if you got here, the input was of the form 11...1 and every 1 has just been replaced with a 0; the successor is one symbol longer, so scan right to the first blank, write one extra 0 there, and halt.
Here are the transitions and states:
state  tape   state'    tape'  direction  comment
q0     blank  halt-acc  0      same       assume 0 for the empty tape
q0     0,1    q1        0,1    right      start scanning right
q1     blank  q2        blank  left       reached the end; go to the LSB
q1     0,1    q1        0,1    right      keep scanning to the end of the input
q2     blank  q3        blank  right      input was 11...1 and is now all 0s
q2     0      halt-acc  1      same       found the least significant 0; flip it to 1
q2     1      q2        0      left       zero out trailing 1s while looking for a 0
q3     blank  halt-acc  0      same       ran out of input, append one extra 0
q3     0      q3        0      right      walk right over the zeros
q3     1      halt-rej  1      same       this transition cannot happen
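To sanity-check the table, here is a small Python sketch (not from the original answer; it assumes blanks are available on both sides of the input, written '_', and encodes moves as -1/0/+1) that simulates these transitions and compares the result with a direct computation of the successor:

BLANK = '_'

# (state, symbol) -> (new state, written symbol, head move)
DELTA = {
    ('q0', BLANK): ('halt-acc', '0', 0),
    ('q0', '0'): ('q1', '0', +1), ('q0', '1'): ('q1', '1', +1),
    ('q1', BLANK): ('q2', BLANK, -1),
    ('q1', '0'): ('q1', '0', +1), ('q1', '1'): ('q1', '1', +1),
    ('q2', BLANK): ('q3', BLANK, +1),
    ('q2', '0'): ('halt-acc', '1', 0),
    ('q2', '1'): ('q2', '0', -1),
    ('q3', BLANK): ('halt-acc', '0', 0),
    ('q3', '0'): ('q3', '0', +1),
    ('q3', '1'): ('halt-rej', '1', 0),
}

def run_tm(word):
    tape = {i: c for i, c in enumerate(word)}
    state, head = 'q0', 0
    while state not in ('halt-acc', 'halt-rej'):
        symbol = tape.get(head, BLANK)
        state, written, move = DELTA[(state, symbol)]
        tape[head] = written
        head += move
    return ''.join(tape[i] for i in sorted(tape) if tape[i] != BLANK)

def successor(word):
    # Direct computation for comparison: all 1s -> one more 0; otherwise binary increment.
    if set(word) <= {'1'}:
        return '0' * (len(word) + 1)
    i = word.rfind('0')
    return word[:i] + '1' + '0' * (len(word) - i - 1)

if __name__ == '__main__':
    from itertools import product
    words = [''] + [''.join(p) for n in range(1, 6) for p in product('01', repeat=n)]
    for w in words:
        assert run_tm(w) == successor(w), w
    print(run_tm('11'))   # prints 000, as in the question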

Is the implementation of the don't care condition (X) in a K-map right?

I have a little confusion regarding the don't care condition in the Karnaugh map. As we all know, the Karnaugh map is used to achieve the
complete
accurate/precise
optimal
output equation for a truth table of 16 or sometimes 32 rows. Up to there everything was alright, but the problem arises when we insert don't care conditions into it.
My question is this:
The don't care conditions are themselves generated from the 0 and 1 outputs of the truth table, and in the Karnaugh map we sometimes include don't cares in our groups and sometimes ignore them. So isn't it an ambiguity in the Karnaugh map that we ignore don't care conditions even though we don't know what is behind them, a 1 or a 0? How can we then confidently say that our solution is complete or accurate while we are ignoring them? Maybe a don't care we ignore should have been a 1 in the SOP form or a 0 in the POS form, and so the result might contain an error.
A "don't care" is just that. Something that we don't care about. It gives us an opportunity for additional optimization, because that value is not restricted. We are able to make it whatever we wish in order to achieve the most optimal solution.
Because we don't care about it, it doesn't matter what the value is. We will use whatever suits us best (lowest cost, fastest, etc... "optimal"). If it's better as a 1 in one implementation and a 0 in another, so be it, it doesn't matter.
Yes there is always another case with the don't care, but we can say it's complete/accurate because we don't care about the other one. We will treat it whichever way makes our implementation better.
Let's take a very simple example to understand what "don't care conditions" exactly mean.
Let F be a two-variable Boolean function defined by a user as follows:
A B F
0 0 1
0 1 0
This function is not defined when the value of A is 1.
This can mean one of two things :-
The user doesn't care about the value that F produces when A = 1.
The user guarantees that A = 1 will never be given as an input to F.
Both of these cases are collectively known as "don't care conditions".
Now, the person implementing this function can use this fact to their advantage by extending the definition of F to include the cases when A = 1.
Using a K-Map,
B' B
A' | 1 | 0 |
A | X | X |
Without these don't care conditions, the algebraic expression of F would be written as A'B', i.e. F = A'B'.
But, if we modify this map as follows,
B' B
A' | 1 | 0 |
A | 1 | 0 |
then F can be expressed as F = B'.
By modifying this map, we have basically extended the definition of F as follows:
A B F
0 0 1
0 1 0
1 0 1
1 1 0
This method works because the person implementing the function already knows that either the user will not care what happens when A = 1, or the user will never use A = 1 as an input to F.
Another example is a four-variable Boolean function in which each of those 4 variables denotes one distinct bit of a BCD value, and the function gives the output as 1 if the equivalent decimal number is even & 0 if it is odd. Here, since it is guaranteed that the input will never be one of 1010, 1011, 1100, 1101, 1110 & 1111, therefore the person implementing the function can extend the definition of the function to their advantage by including these cases as well.
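As a quick illustration of why this is safe, here is a tiny Python sketch (purely illustrative, not part of the answer) that checks the simplified expression F = B' against the two rows the user actually specified, while the A = 1 rows are left free to take whatever value B' happens to give them:

SPECIFIED = {(0, 0): 1, (0, 1): 0}    # (A, B) -> F, only the rows the user defined

def f_simplified(a, b):
    return 1 - b                      # F = B'

for (a, b), want in SPECIFIED.items():
    assert f_simplified(a, b) == want, (a, b)   # every specified row is matched

for a, b in [(1, 0), (1, 1)]:         # don't care rows: any value is acceptable
    print(f"A={a} B={b} -> F={f_simplified(a, b)} (don't care)")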

Probability of state representation

Just working my way through W. Feller's An Introduction to Probability Theory and Its Applications, Volume 1. An example in the chapter on combinatorial analysis asks the question:
"Each of the 50 states has 2 senators. If we choose 50 senators at random, what is the probability a given state is represented?"
I understand the answer given, which uses the complement of the event, but I was curious whether the method where you force the desired outcome to occur and then work out how many ways the remaining cells can be chosen would work here too.
AJ
Let s1 and s2 be the two senators of the given state.
P(state is represented) = P(s1 or s2 is chosen).
Let us count the number of ways each case can occur:
s1 and s2 both chosen: choose the other 48 from the 98 remaining senators
s1 chosen without s2: choose the other 49 from the 98 remaining senators
s2 chosen without s1: the same
neither of them chosen: choose all 50 from the 98 remaining senators
that is, P(state is represented) = [C(98,48) + 2*C(98,49)] / [C(98,48) + 2*C(98,49) + C(98,50)], where C(n,k) = n!/(k!(n-k)!). The denominator is just C(100,50), the total number of ways to choose 50 of the 100 senators, split across the four cases above.
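For what it's worth, a couple of lines of Python (using math.comb; not part of the original answer) confirm that this counting argument and the complement method give the same number, about 0.7525:

from math import comb

favorable = comb(98, 48) + 2 * comb(98, 49)      # both senators chosen, or exactly one
total = favorable + comb(98, 50)                 # ... plus the case where neither is chosen

p_cases = favorable / total
p_complement = 1 - comb(98, 50) / comb(100, 50)  # the complement-of-the-event method

assert total == comb(100, 50)                    # the four cases partition all possible choices
print(p_cases, p_complement)                     # both print 0.7525..., i.e. 149/198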

Sequential circuit design

A sequential circuit has two inputs, x1 and x2. Five-bit sequences representing decimal digits coded in the 2-out-of-5 code appear from time to time on line x1, synchronized with a clock pulse on a third line. Each five consecutive bits on line x1 that occur while x2 = 0, but immediately following one or more inputs with x2 = 1, may be considered a code word. The single output z is to be 0, except upon the fifth bit of an invalid code word. Determine a state table for this sequential circuit.
2 out of 5 code
0=00011
1=00101
2=00110
3=01001
4=01010
5=01100
6=10001
7=10010
8=10100
9=11000
I have solved it with 16 states. Could anyone tell me whether it is possible to reduce the number of states to fewer than 16?
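It may help to pin down the required behavior before minimizing states. Here is a rough behavioral sketch in Python (the function name and the generator framing are my own, not part of any standard solution) that produces z for a stream of (x1, x2) pairs by tracking only the bit position within the current word and the number of 1s seen so far:

def z_outputs(pairs):
    """pairs: iterable of (x1, x2) bits per clock pulse; yields z per pulse."""
    bit_pos = ones = 0
    counting = False
    for x1, x2 in pairs:
        if x2 == 1:
            counting = True            # the next bits with x2 = 0 start a new code word
            bit_pos = ones = 0
            yield 0
            continue
        if counting:
            bit_pos += 1
            ones += x1
            if bit_pos == 5:
                yield int(ones != 2)   # z = 1 on the fifth bit of an invalid word
                counting = False       # wait for the next x2 = 1 marker
                continue
        yield 0

# 9 = 11000 (valid), followed by 11100, which has three 1s (invalid):
stream = [(1, 1)] + [(b, 0) for b in (1, 1, 0, 0, 0)] + [(1, 1)] + [(b, 0) for b in (1, 1, 1, 0, 0)]
print(list(z_outputs(stream)))   # z is 1 only on the fifth bit of the invalid word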

negamax algorithm for a 3-in-a-row game on a 3x4 grid (rows x columns)

I'm struggling with the negamax algorithm for a 3-in-a-row game on a 3x4 (rows x columns) grid. It is played like the well-known 4-in-a-row, i.e. the pieces are dropped (NOT like TicTacToe). Let's call the players R and B. R had the first move; B's moves are controlled by negamax. Possible moves are 1, 2, 3, or 4. This is the position in question, which was reached after R: move 3 --> B: move 1 --> R: move 3:
1 2 3 4
| | | | |
| | | R | |
| B | | R | |
Now, in order to defend against move 3 by R, B has to play move 3 itself, but it refuses to do so. Instead it plays move 1 and the game is over after R's next move.
I spent the whole day looking for an error in my negamax implementation which works perfectly for a 3x3 grid, by the way, but I couldn't find any.
Then I started thinking: Another explanation for the behavior of the negamax-algorithm would be that B is lost in all variations no matter what, after R starts the game with move 3 on a 3x4-grid.
Can anybody refute this proposition or point me to a proof (which I would prefer ;-))?
Thanks, RSel
It is, in fact, a won game for R from the start, and it can be played through fairly easily by hand. I will assume that B avoids all of the one-move wins for R, and I will mark moves by color and by the column and row where the piece lands.
1. R3,1
... B1,1 2. R3,2 B3,3 3. R4,1 B2,1 4. R2,2 (and R1,2 or R4,2 wins next)
... B2,1 2. R3,2 B3,3 3. R2,2 B2,3 4. R1,1 (and R1,2 or R1,3 wins next)
... B3,2 2. R2,1 (and R1,1 or R4,1 wins next)
... B4,1 2. R2,1 B1,1 3. R3,2 B3,3 4. R2,2 (and R1,2 or R4,2 wins next)
As for your algorithm, I'm going to suggest that you modify it to prefer wins over losses, and prefer distant losses over near losses. If you do that, it will "try harder" to avoid the inevitable loss.
Proof that B3 loses as well:
B3: R(1,2,4)->R1; B(1,2,4)->B2 (loses), so B1; R(2,4)->R2 Loses, so R4; B(2,4)->B2 loses, so B4; R loses on either choice now
...so R1 will lose for R - so R won't choose it.
B3: R(1,2,4)->R2 loses since B2, so R won't choose it
B3: R(1,2,4)->R4; B2 (forced); R2 (forced); B loses on R's next move
...so, B3 loses for B as well as B1...so B has lost in this situation.
EDIT: Just in case anyone is wondering about the other B options (2,4) at the end of "B3: R(1,2,4)->R1; B(1,2,4)->B2 (loses), so B1"...they are irrelevant, since as soon as Red chooses R1, this scenario shows that B (computer) can choose B1 and win. It doesn't really matter what happens with B's other choices - this choice will win, so Red can't choose R1 or he will lose.
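To illustrate the scoring suggestion above, here is a minimal negamax sketch in Python (my own illustration, not the asker's code; the board representation, the ROWS/COLS constants and the helper names are assumptions). Terminal positions are scored by the remaining search depth, so wins are preferred over draws and losses, quicker wins over slower ones, and distant losses over near ones:

ROWS, COLS = 3, 4
WIN_LEN = 3

def legal_moves(board):
    # A move is a column whose top cell is still empty.
    return [c for c in range(COLS) if board[ROWS - 1][c] == '.']

def drop(board, col, player):
    # Return a new board with player's piece dropped into col (board[0] is the bottom row).
    new = [row[:] for row in board]
    for r in range(ROWS):
        if new[r][col] == '.':
            new[r][col] = player
            return new
    raise ValueError("column full")

def has_win(board, player):
    # Check every horizontal, vertical and diagonal line of length WIN_LEN.
    for r in range(ROWS):
        for c in range(COLS):
            for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
                cells = [(r + i * dr, c + i * dc) for i in range(WIN_LEN)]
                if all(0 <= rr < ROWS and 0 <= cc < COLS and board[rr][cc] == player
                       for rr, cc in cells):
                    return True
    return False

def negamax(board, player, opponent, depth_left):
    # The previous move was made by opponent; if it completed a line, this node is a loss.
    if has_win(board, opponent):
        return -(depth_left + 1), None     # losses nearer the root score worse
    moves = legal_moves(board)
    if not moves or depth_left == 0:
        return 0, None                     # draw or search horizon
    best_score, best_move = -10**9, None
    for col in moves:
        score, _ = negamax(drop(board, col, player), opponent, player, depth_left - 1)
        score = -score
        if score > best_score:
            best_score, best_move = score, col
    return best_score, best_move

if __name__ == "__main__":
    # Position from the question: R has dropped twice in column 3, B once in column 1
    # (columns are 0-based here, so the question's column 3 is index 2, column 1 is index 0).
    board = [list("B.R."), list("..R."), list("....")]
    # Per the analysis above the score should come out negative (B is lost), but with this
    # scoring B at least picks the move that postpones the loss as long as possible.
    print(negamax(board, 'B', 'R', ROWS * COLS))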
