I'm wondering how I could design a pushdown automaton for this specific language.
I can't solve this..
L2 = { u ∈ {a, b}∗ : 3 ∗ |u|a = 2 ∗ |u|b + 1 }
So the number of 'a's multiplied by 3 equals the number of 'b's multiplied by 2, plus 1.
The grammar corresponding to that language is something like:
S -> ab | ba | B
B -> abB1 | baB1 | aB1b | bB1a | B1ab | B1ba
B1 -> aabbbB1 | baabbB1 | [...] | aabbb | baabb | [...]
S generates the base case (basically strings with #a = 1 = #b) or B
B generates the base case + B1 (in every permutation)
B1 adds 2 'a's and 3 'b's to the base case (in fact, if you keep adding 'a's and 'b's in that ratio, the equation 3#a = 2#b + 1 stays true: 3(#a+2) = 3#a + 6 = (2#b+1) + 6 = 2(#b+3) + 1). I didn't finish writing B1; basically you need to add every permutation of 2 'a's and 3 'b's. I think you'll be able to do it on your own :)
When you're finished with the grammar, designing the PDA is simple. More info here.
3|u|a = 2|u|b + 1 <=> 3|u|a - 2|u|b = 1
The easiest way to design a PDA for this is to implement this equation directly.
For any string x, let f(x) = 3|x|a - 2|x|b. Then design a PDA such that, after processing any string x:
The stack depth is always equal to abs( floor( f(x)/3 ) );
The symbol on top of the stack (if any) reflects the sign of floor( f(x)/3 ); you only need 2 kinds of stack symbols.
The current state number = f(x) mod 3. Of course you only need 3 states.
From the state number and the symbol on top of the stack, you can detect when f(x) = 1, and when that holds the PDA accepts x as a string in the language.
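For illustration, here is a small Python sketch (my own, not part of the original answer) that simulates this bookkeeping; the function name accepts and the '+'/'-' stack symbols are placeholders I chose, and a real PDA would of course push and pop one symbol per transition rather than rebuilding the stack as done here.

    def accepts(u: str) -> bool:
        """Simulate the construction: state = f(x) mod 3, stack depth = |floor(f(x)/3)|,
        with '+' on the stack when floor(f(x)/3) is positive and '-' when it is negative."""
        state = 0      # f(empty string) = 0: state 0, empty stack
        stack = []

        def current_f() -> int:
            # recover f(x) from (state, stack); just a convenience for the simulation
            q = len(stack) if stack and stack[0] == '+' else -len(stack)
            return 3 * q + state

        for ch in u:
            if ch not in 'ab':
                return False
            f = current_f() + (3 if ch == 'a' else -2)   # 'a' adds 3, 'b' subtracts 2
            state = f % 3                                # Python's % and // use floor semantics
            q = f // 3
            stack = ['+'] * q if q > 0 else ['-'] * (-q)

        # f(u) = 1 exactly when the state is 1 and the stack is empty
        return state == 1 and not stack

For example, accepts("ab") and accepts("aabbbab") return True, while accepts("a") returns False.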
Consider the following example:
I have a list of 5 items, each with an occurrence of either 1 or 0:
{a, b, c, d, e}
The restricted combinations are as follows:
the occurrences of a, c, and e cannot all be 1 at the same time.
the occurrences of b, d, and e cannot all be 1 at the same time.
Basically, if the database already shows an occurrence of 1 for both a and c, then a new input of e (which would give e an occurrence of 1) is not allowed (clause 1), and vice versa.
Another example: if d and e each have an occurrence of 1 in the database, a new input of b will not be allowed (following clause 2).
An even more solid example:
LETTER | COUNT(OCCURRENCE)
-------|------------------
a      | 1
b      | 1
c      | 1
d      | 0
e      | 0
Therefore, a new input of e would be rejected because of the violation of clause 1.
What is the best algorithm/practice for this solution?
I thought of having many if-else statements, but that doesn't seem efficient. What if I had a dynamic list of elements instead? Or at least something that gives this piece of the program better extensibility.
As mentioned by BKassem (I think) in the comments (removed for whatever reason):
The algorithm for this scenario:
(count(a) * count(c) * count(e)) == 0 //proceed to further actions
Worked flawlessly!
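For what it's worth, here is one way the count-product idea might be generalised to a dynamic list of restricted groups, along the lines the question asks about; the names restricted_groups, counts and allowed are made up for this sketch.

    restricted_groups = [
        {'a', 'c', 'e'},   # clause 1: a, c and e must never all have occurrence 1
        {'b', 'd', 'e'},   # clause 2: b, d and e must never all have occurrence 1
    ]

    def allowed(new_item, counts):
        """Return False if giving new_item an occurrence of 1 would complete a restricted group."""
        for group in restricted_groups:
            if new_item in group:
                others = group - {new_item}
                # same idea as the product check: violated only if every other member is already 1
                if all(counts.get(item, 0) == 1 for item in others):
                    return False
        return True

    # the example from the question: a, b and c already have occurrence 1
    counts = {'a': 1, 'b': 1, 'c': 1, 'd': 0, 'e': 0}
    print(allowed('e', counts))   # False, rejected by clause 1
    print(allowed('d', counts))   # True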
Can somebody please help me draw an NFA that accepts this language:
{ w | the length of w is 6k + 1 for some k ≥ 0 }
I have been stuck on this problem for several hours now. I do not understand where the k comes into play and how it is used in the diagram...
{ w | the length of w is 6k + 1 for some k ≥ 0 }
We can use the Myhill-Nerode theorem to constructively produce a provably minimal DFA for this language. This is a useful exercise. First, a definition:
Two strings w and x are indistinguishable with respect to a language L iff: (1) for every string y such that wy is in L, xy is in L; (2) for every string z such that xz is in L, wz is in L.
The insight in Myhill-Nerode is that if two strings are indistinguishable w.r.t. a regular language, then a minimal DFA for that language will see to it that the machine ends up in the same state for either string. Indistinguishability is reflexive, symmetric and transitive, so we can define equivalence classes on it. Those equivalence classes correspond directly to the set of states in the minimal DFA.

Now, to find the equivalence classes for our language, we consider strings of increasing length and see for each one whether it's indistinguishable from any of the strings before it:
e, the empty string, has no strings before it. We need a state q0 to correspond to the equivalence class this string belongs to. The set of strings that can come after e to reach a string in L is L itself; writing c for any single input symbol (only lengths matter here), that can also be written c(c^6)*.
c, any string of length one, has only e before it. These are not, however, indistinguishable; we can add e to c to get ce = c, a string in L, but we cannot add e to e to get a string in L, since e is not in L. We therefore need a new state q1 for the equivalence class to which c belongs. The set of strings that can come after c to reach a string in L is (c^6)*.
It turns out we need a new state q2 here; the set of strings that take cc to a string in L is ccccc(c^6)*. Show this.
It turns out we need a new state q3 here; the set of strings that take ccc to a string in L is cccc(c^6)*. Show this.
It turns out we need a new state q4 here; the set of strings that take cccc to a string in L is ccc(c^6)*. Show this.
It turns out we need a new state q5 here; the set of strings that take ccccc to a string in L is cc(c^6)*. Show this.
Consider the string cccccc. What strings take us to a string in L? Well, c does. So does c followed by any string of length 6. Interestingly, this is the same as L itself. And we already have an equivalence class for that: e could also be followed by any string in L to get a string in L. So cccccc and e are indistinguishable. What's more, since all strings of length 6 are indistinguishable from shorter strings, we no longer need to keep checking longer strings. Our DFA is guaranteed to have only the states q0 - q5 we have already identified. What's more, the work we've done above defines the transitions we need in our DFA, the initial state, and the accepting states as well:
The DFA will have a transition on symbol c from state q to state q' if x is a string in the equivalence class corresponding to q and xc is a string in the equivalence class corresponding to q';
The initial state will be the state corresponding to the equivalence class to which e, the empty string, belongs;
A state q is accepting if any string (hence all strings) belonging to the equivalence class corresponding to q is in the language; alternatively, if the set of strings that take strings in that equivalence class to a string in L includes e, the empty string.
We may use the notes above to write the DFA in tabular form:
q x q'
-- -- --
q0 c q1 // e + c = c
q1 c q2 // c + c = cc
q2 c q3 // cc + c = ccc
q3 c q4 // ccc + c = cccc
q4 c q5 // cccc + c = ccccc
q5 c q0 // ccccc + c = cccccc ~ e
We have q0 as the initial state and the only accepting state is q1.
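As a quick sanity check, here is a small Python sketch (mine, not part of the answer) of the tabulated DFA; any input symbol plays the role of c, since only the length of w matters.

    def in_language(w: str) -> bool:
        state = 0                      # start in q0
        for _ in w:
            state = (state + 1) % 6    # each symbol moves q_i to q_(i+1 mod 6)
        return state == 1              # q1 is the only accepting state

    assert in_language("a")            # length 1 = 6*0 + 1
    assert in_language("abcdefg")      # length 7 = 6*1 + 1
    assert not in_language("")         # length 0
    assert not in_language("abcdef")   # length 6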
Here's an NFA based on the same idea: from the start state it non-deterministically either reads one more character and stops in the final state, or reads a block of six characters and loops back to the start.

(Start) S1 --any symbol--> S7 (Final) --any symbol--> S8 --any symbol--> S8 (dead state, loops forever)
(Start) S1 --any symbol--> S2 --> S3 --> S4 --> S5 --> S6 --any symbol--> S1 (a six-character loop back to the start, taken non-deterministically)

Each completed loop consumes six characters, so exactly the strings of length 6k + 1 can end in S7.
Briefly, I have an EBNF grammar and hence a parse tree, but I do not know whether there is a procedure to translate it into First Order Logic.
For example:
DR ::= E and P
P ::= B | (and P)* | (or P)*
B ::= L | P (and L P)
L ::= a
Yes, there is. The general pattern for translating a production of the form
A ::= B C ... D
is to paraphrase it declaratively as saying:
A sequence of terminals s is an A (or: A generates the sequence s, if you prefer that formulation) if:
s is the concatenation of s_1, s_2, ... s_n, and
s_1 is a B / B generates the sequence s_1, and
s_2 is a C / C generates the sequence s_2, and
...
s_n is a D / D generates the sequence s_n.
Assuming we write these in the obvious way using a generates predicate, and that we can write concatenation using a || operator, your first rule becomes (if I am right to guess that E and P are non-terminals and "and" is a terminal symbol) something like
generates(E,s1)
∧ generates(and,s2)
∧ generates(P,s3)
∧ s = s1 || s2 || s3
⊃ generates(DR,s)
To establish the consequent (i.e. prove that s is an A), prove the antecedents. As long as the grammar does actually generate some sentences, and as long as you have some premises defining the "generates" relation for terminal symbols, the proof will be straightforward.
Prolog definite-clause grammars are a beautiful instantiation of this pattern. It takes some of us a while to understand and appreciate the use of difference lists in DCGs, but they handle the partitioning of s into subsequences and the association of the subsequences with the different parts of the right hand side much more elegantly than the simple translation into logic given above.
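If it helps, here is a naive Python sketch (my own, not from the answer) of that declarative reading for the rule DR ::= E and P; to keep it self-contained, E and P are pretended to generate the single terminals "e" and "p", the grammar encoding is invented for this illustration, and the checker makes no attempt to handle left-recursive rules.

    grammar = {
        "DR": [["E", "and", "P"]],
        "E": [["e"]],    # stand-ins so the example runs; the real rules would go here
        "P": [["p"]],
    }

    def generates(symbol, s):
        if symbol not in grammar:                     # no rule: treat it as a terminal
            return s == symbol
        return any(splits_into(alt, s) for alt in grammar[symbol])

    def splits_into(symbols, s):
        """True if s is a concatenation s1 || ... || sn with symbols[i] generating si."""
        if not symbols:
            return s == ""
        head, rest = symbols[0], symbols[1:]
        # try every prefix of s as the piece generated by the first right-hand-side symbol
        return any(generates(head, s[:i]) and splits_into(rest, s[i:])
                   for i in range(len(s) + 1))

    print(generates("DR", "eandp"))   # True
    print(generates("DR", "ep"))      # False: the terminal "and" is missing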
In chapter 8 of Gödel, Escher, Bach by Douglas Hofstadter, the reader is challenged to translate these two statements into TNT:
"b is a power of 2"
and
"b is a power of 10"
Are the following answers correct?
(Assuming '∃' to mean 'there exists a number'):
∃x:(x.x = b)
i.e. "there exists a number 'x' such that x multiplied x equals b"
If that is correct, then the next one is equally trivial:
∃x:(x.x.x.x.x.x.x.x.x.x = b)
I'm confused because the author indicates that they are tricky and that the second one should take hours to solve; I must have missed something obvious here, but I can't see it!
In general, I would say "b is a power of 2" is equivalent to "every divisor of b except 1 is a multiple of 2". That is:
∀x((∃y(y*x=b & ¬(x=S0))) → ∃z(SS0*z=x))
EDIT: This doesn't work for 10 (thanks for the comments). But at least it works for all primes. Sorry. I think you have to use some sort of encoding of sequences after all. I suggest "Gödel's Incompleteness Theorems" by Raymond Smullyan if you want a detailed and more general approach to this.
Or you can encode sequences of numbers using the Chinese Remainder Theorem, and then encode recursive definitions, so that you can define exponentiation. In fact, that is basically how you can prove that Peano Arithmetic is Turing complete.
Try this:
D(x,y) = ∃a(a*x=y)
Prime(x) = ¬(x=1) & ∀y(D(y,x) → (y=x | y=1))
(a = b mod c) = ∃k(a = c*k + b)
Then
∃y ∃k(
∀x(D(x,y)&Prime(x)→¬D(x*x,y)) &
∀x(D(x,y)&Prime(x)&∀z(Prime(z)&z<x→¬D(z,y))→(k=1 mod x)) &
∀x∀z(D(x,y)&Prime(x)&D(z,y)&Prime(z)&z<x&∀t(z<t<x→¬(Prime(t)&D(t,y)))→
∀a<x ∀c<z ((k=a mod x)&(k=c mod z)-> a=c*10))&
∀x(D(x,y)&Prime(x)&∀z(Prime(z)&z>x→¬D(z,y))→(b<x & (k=b mod x))))
should state "b is Power of 10", actually saying "there is a number y and a number k such that y is product of distinct primes, and the sequence encoded by k throug these primes begins with 1, has the property that the following element c of a is 10*a, and ends with b"
Your expressions are equivalent to the statements "b is a square number" and "b is the 10th power of a number" respectively. Converting "power of" statements into TNT is considerably trickier.
There's a solution to the "b is a power of 10" problem behind the spoiler button in skeptical scientist's post here. It depends on the Chinese Remainder Theorem from number theory, and the existence of arbitrarily long arithmetic sequences of primes. As Hofstadter indicated, it's not easy to come up with, even if you know the appropriate theorems.
In expressing "b is a power of 10", you actually do not need the Chinese Remainder Theorem and/nor coding of finite sequences. You can alternatively work as follows (we use the usual symbols as |, >, c-d, as shortcuts for formulas/terms with obvious meaning):
For a prime number p, let us denote by EXP(p,a) some formula in TNT saying that "p is a prime and a is a power of p". We already know how to build one. (For technical reasons, we do not consider S0 to be a power of p, so ~EXP(p,S0).)
If p is a prime, we define EXPp(c,a) ≖ 〈EXP(p,a) ∧ (c-1)|(a-1)〉. Here, the symbol | is a shortcut for "divides", which can easily be defined in TNT using one existential quantifier and multiplication; the same holds for c-1 (a-1, resp.), which means "the d such that Sd=c" (Sd=a, resp.).
If EXP(p,c) holds (i.e. c is a power of p), the formula EXPp(c,a) says that "a is a power of c" since a ≡ 1 (mod c-1) then.
Having a property P of numbers (i.e. nonnegative integers), there is a way to refer, in TNT, to the smallest number with this property: 〈P(a) ∧ ∀c:〈a>c → ~P(c)〉〉.
We can state the formula expressing "b is a power of 10" (for better readability, we omit the symbols 〈 and 〉, and we write 2 and 5 instead of SS0 and SSSSS0):
∃a:∃c:∃d: (EXP(2,a) ∧ EXP(5,c) ∧ EXP(5,d) ∧ d > b ∧ a⋅c=b ∧ ∀e:(e>5 ∧ e|c ∧ EXP5(e,c) → ~EXP5(e,d)) ∧ ∀e:("e is the smallest such that EXP5(c,e) ∧ EXP5(d,e)" → (d-2)|(e-a))).
Explanation: We write b = a⋅c = 2^x⋅5^y (x,y>0) and choose d = 5^z > b in such a way that z and y are coprime (e.g. z may be a prime). Then "the smallest e..." is equal to (5^z)^y = d^y ≡ 2^y (mod d-2), and (d-2)|(e-a) implies a = 2^x = e mod (d-2) = 2^y (we have d-2 > 2^y and d-2 > a, too), and so x = y.
Remark: This approach can easily be adapted to define "b is a power of n" for any number n with a fixed decomposition a_1⋅a_2⋅...⋅a_k, where each a_i is a power of a prime p_i and p_i = p_j → i = j.
how about:
∀x: ∀y: (SSx∙y = b → ∃z: z∙SS0 = SSx)
(in English: any factor of b that is ≥ 2 must itself be divisible by 2; literally: for all natural numbers x and y, if (2+x) * y = b then this implies that there's a natural number z such that z * 2 = (2+x). )
I'm not 100% sure that this is allowed in the syntax of TNT and propositional calculus; it's been a while since I've perused GEB.
(edit: for the b = 2^n problem at least; I can see why 10^n would be more difficult, as 10 is not prime. But 11^n would be the same thing, except replacing the one term "SS0" with "SSSSSSSSSSS0".)
Here's what I came up with:
∀c:∃d:<(c*d=b)→<(c=SO)v∃e:(d=e*SSO)>>
Which translates to:
For all numbers c, there exists a number d, such that if c times d equals b then either c is 1 or there exists a number e such that d equals e times 2.
Or
For all numbers c, there exists a number d, such that if c and d are factors of b (their product is b) then either c is 1 or d is a multiple of 2
Or
If the product of two numbers is b then one of them is 1 or one of them is divisible by 2
Or
All divisors of b are either 1 or are divisible by 2
Or
b is a power of 2
For the open expression meaning that b is a power of 2, I have ∀a:~∃c:(S(Sa ∙ SS0) ∙ Sc) = b
This effectively says that for all a, S(Sa ∙ SS0) is not a factor of b. But in normal terms, S(Sa ∙ SS0) is 1 + ((a + 1) * 2) or 3 + 2a. We can now reword the statement as "no odd number that is at least 3 is a factor of b". This is true if and only if b is a power of 2.
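A quick brute-force check (my own, not from the book) that the two characterisations agree for small positive b, i.e. that "b has no odd factor of at least 3" coincides with "b is a power of 2":

    def no_odd_factor_geq_3(b):
        return not any(b % d == 0 for d in range(3, b + 1, 2))

    def is_power_of_2(b):
        while b > 1 and b % 2 == 0:
            b //= 2
        return b == 1

    for b in range(1, 1000):
        assert no_odd_factor_geq_3(b) == is_power_of_2(b)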
I'm still working on the b is a power of 10 problem.
I think that most of the above have only shown that b must be a multiple of 4. How about this: ∃b:∀c:<<∃e:(c∙e) = b & ~∃c':∃c'':(SSc'∙SSc'') = c> → c = SS0>
I don't think the formatting is perfect, but it reads:
There exists b, such that for all c, if c is a factor of b and c is prime, then c equals 2.
Here is what I came up with for the statement "b is a power of 2"
∃b: ∀a: ~∃c: ((a * SS0) + SSS0) * c = b
I think this says "There exists a number b, such that for all numbers a, there does not exist a number c such that (a * 2) + 3 (in other words, an odd number greater than 2) multiplied by c, gives you b." So, if b exists, and can't be zero, and it has no odd divisors greater than 2, then wouldn't b necessarily be 1, 2, or another power of 2?
My solution for "b is a power of two" is:
∀x: <(∃y:(x∙y = b) ∧ isprime(x)) → x = SS0>
isprime() should not be hard to write.