I have a paper that states:
(...) languages such as strings of x's followed by the same number of y's (for example xxxxyyyy) cannot be specified by a regular grammar or Finite State Automaton because these devices have no mechanism for remembering how many x's were generated when the time comes to derive the y's. This shortcoming is remedied by means of rules such as S → xSy, which always generate an x and a y at the same time. (...)
I don't understand this statement: as far as I know, such strings can be generated by a regular grammar with the production rules
S → xS
S → yS
S → y
where x, y are terminals and S is the unique starting nonterminal. This grammar produces the derivation
S→xS→xxS→xxxS→xxxxS→xxxxyS→xxxxyyS→xxxxyyyS→xxxxyyyy
A grammar must generate every string in the language, and no strings not in the language. Your grammar also generates the invalid string y via the third production, as well as xxy, xyxy, and so on; hence you can NOT say this is a grammar for your language.
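To see this concretely, here is a small brute-force sketch (the derive helper is my own, not from the post) that enumerates every string the proposed grammar S → xS | yS | y can derive up to length 4; it produces strings such as y and xyxy that are not of the form x^n y^n:

```python
def derive(max_len):
    """Expand the grammar S -> xS | yS | y, collecting every
    terminal string of length at most max_len it can derive."""
    results = set()
    frontier = ["S"]
    while frontier:
        form = frontier.pop()
        i = form.find("S")
        if i == -1:                      # no nonterminal left: a derived string
            results.add(form)
            continue
        if len(form) > max_len:          # any completion is already too long
            continue
        for rhs in ("xS", "yS", "y"):    # apply each production to S
            frontier.append(form[:i] + rhs + form[i + 1:])
    return results

intended = {"x" * n + "y" * n for n in range(1, 3)}  # xy, xxyy
generated = derive(4)

# the grammar also derives strings like 'y', 'xyxy', 'yyyy'
# that are not of the form x^n y^n
print(sorted(s for s in generated if s not in intended))
```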
The Prolog standard ISO/IEC 13211-1:1995/Cor.2:2012
features compare/3:
8.4.2 compare/3 – three-way comparison
8.4.2.1 Description
compare(Order, X, Y) is true iff Order unifies with R which is one of the following atoms: '=' iff X and Y are identical terms (3.87), '<' iff X term_precedes Y (7.2), and '>' iff Y term_precedes X. [...]
Recently, it dawned on me that using the atoms <, =, and > is somewhat weird:
The predicates (<)/2 and (>)/2 express arithmetic comparison.
The predicate (=)/2 on the other hand is syntactic term unification.
IMHO, a much more natural choice would have been #<, == and #>, as these are exactly the predicates whose fulfillment is determined by compare/3.
So: why were the atoms </=/> chosen—and not #</==/#>?
Recently, it dawned on me that using the atoms <, =, and > is somewhat weird:
The compare/3 predicate existed in several Prolog systems before finding its way into the ISO Prolog Core standard. The choice here (I was the WG17 Core editor at the time) was made to preserve backward compatibility.
compare/3 has existed as a built-in since 1982, in what is effectively the second edition of the DECsystem-10 manual. The first edition, of 1978 (called the User's guide), contained neither compare/3 nor (#<)/2 and related built-ins; only (==)/2 and (\==)/2. The 1982 manual refers in the definition of this built-in to a "standard order". Thus the three symbols (which in the standard constitute the domain of Order) make quite some sense in that context. The standard itself refers to 7.2 Term order via term_precedes.
Some systems had used == as the symbol for identity, but changed to =. However, I have never encountered #< in any system.
Note that identity of terms is well defined even when considering terms with variables and even infinite trees, whereas the general Term order is only partially defined in such cases.
L = {w | w ∈ {a,b}*, the number of a's in w is divisible by 2} is the language. Can someone help me with a regular grammar for this?
The language is the set of all strings of a and b with an even number of a. This is a regular language and the goal is to produce a regular grammar for it.
Unless the regular grammar you're going to need is trivial, I would recommend always writing down the finite automaton first, and then converting it into a grammar. Converting a finite automaton into a grammar is very easy, and solving this problem is easy with a finite automaton. We will have two states: one will correspond to having seen an even number of a, the other an odd number. The state corresponding to having seen an even number of a will be accepting, and seeing b will not cause us to change states. The DFA is therefore:
       b         b
      /-\       /-\
      | v       | v
----->(q0)--a-->(q1)
       ^         |
       |    a    |
       \---------/
A regular grammar for this can be formed by writing the transitions down as productions, using the states as nonterminal symbols, and including an empty production (e denotes the empty string) for the accepting state:
(q0) -> b(q0) | a(q1) | e
(q1) -> b(q1) | a(q0)
For the sake of completeness, you could run some other algorithms on the grammar or automaton and get a regular expression, maybe like this: b*(ab*ab*)* (just wrote that down, not sure if it's right or not, left as an exercise).
What is the exact difference between Well-formed formula and a proposition in propositional logic?
There's really not much given about Wff in my book.
My book says: "Propositions are also called sentences or statements. Another term formulae or well-formed formulae also refer to the same. That is, we may also call Well formed formula to refer to a proposition". Does that mean they both are the exact same thing?
Proposition: a statement which is true or false; easy for people to read, but hard to manipulate using logical equivalences.
WFF: a precise logical statement which is true or false; there should be an official rigorous definition in your textbook. There are 4 rules they must follow. Harder for humans to read, but much more precise and easier to manipulate.
Example:
Proposition : All men are mortal
WFF: Let P be the set of people, M(x) denote "x is a man", and S(x) denote "x is mortal". Then: for all x in P, M(x) -> S(x).
It is most likely that there is a typo in the book. In the quote "Propositions are also called sentences or statements. Another term formulae or well-formed formulae also refer to the same. That is, we may also call Well formed formula to refer to a preposition", the word "preposition" should be "proposition".
Proposition: a statement which is either true or false, but not both.
Propositional form (needed to understand Well Formed Formula): an assertion which contains at least one propositional variable.
Well Formed Formula (wff): a propositional form satisfying the following rules; any wff can be derived using them:
If P is a propositional variable, then P is a wff.
If P is a wff, then ~P is a wff.
If P and Q are two wffs, then (P and Q), (P or Q), (P implies Q), (P is equivalent to Q) are all wffs.
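The recursive rules above can be mirrored directly in a small checker (a sketch of mine; the tuple encoding of formulas is an assumption, not from the answer):

```python
# Formulas are nested tuples: a string is a propositional variable,
# ("not", P) is negation, and ("and"|"or"|"implies"|"iff", P, Q)
# are the binary connectives from rule 3.

BINARY = {"and", "or", "implies", "iff"}

def is_wff(f):
    if isinstance(f, str):                  # rule 1: a variable is a wff
        return True
    if isinstance(f, tuple):
        if len(f) == 2 and f[0] == "not":   # rule 2: ~P is a wff
            return is_wff(f[1])
        if len(f) == 3 and f[0] in BINARY:  # rule 3: (P op Q) is a wff
            return is_wff(f[1]) and is_wff(f[2])
    return False                            # nothing else is a wff

print(is_wff(("implies", ("and", "P", "Q"), "R")))  # True
print(is_wff(("and", "P")))                          # False: malformed
```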
I have to write a program that tests whether two algebraic expressions are equivalent. It should follow MDAS precedence and parenthesis grouping. To solve the precedence problem, I'm thinking I should implement an infix-to-postfix notation converter for these expressions. But by doing this alone, I cannot conclude their equivalence.
The program should look like this:
User Input: a*(a+b) = a*a + a*b
Output : Equivalent
For this problem I'm not allowed to use Computer Algebraic Systems or any external libraries. Please don't post the actual code if you have one, I just need an idea to work this problem out.
If you are not allowed to evaluate the expressions, you will have to parse them into expression trees.
After that, I would get rid of all parentheses by distributing multiplication/division over all members, so a(b - c) becomes a*b - a*c.
Then convert all expressions back to strings, making sure all members are sorted alphabetically (a*b, not b*a), remove all spaces, and compare the strings.
Here's an idea:
You need to build an expression tree first, because it's a very natural representation of an expression.
Then you may need to simplify it, e.g. by expanding brackets using the associative and distributive algebraic properties.
Then you'll have to compare the trees. This is not trivial, because you need to account for all branch permutations under commutative operations; for example, you can sort the branches and then compare for equality. You also need to keep in mind possible renaming of parameters, i.e. a + b should be considered equal to x + y.
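Since the asker requested ideas rather than finished code, here is only a minimal sketch of the expand-then-sort idea, restricted to + and * over variables (no subtraction, division, constants, or variable renaming); the to_terms name and the use of Python's ast module are my own choices, not from either answer:

```python
import ast

def to_terms(node):
    """Canonicalize an expression into a sorted list of sorted factor
    tuples, i.e. a sum of products with commutativity normalized away."""
    if isinstance(node, ast.Expression):
        return to_terms(node.body)
    if isinstance(node, ast.Name):                      # a single variable
        return [(node.id,)]
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add):
        return sorted(to_terms(node.left) + to_terms(node.right))
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Mult):
        # distribute * over +: (sum Li) * (sum Rj) = sum over i,j of Li*Rj
        return sorted(tuple(sorted(l + r))
                      for l in to_terms(node.left)
                      for r in to_terms(node.right))
    raise ValueError("only +, * and variables are supported in this sketch")

def equivalent(e1, e2):
    return to_terms(ast.parse(e1, mode="eval")) == to_terms(ast.parse(e2, mode="eval"))

print(equivalent("a*(a+b)", "a*a + a*b"))  # True
print(equivalent("a*b", "a+b"))            # False
```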
I'm writing an input file for OTTER that is very simple:
set(auto).
formula_list(usable).
all x y ([Nipah(x) & Encephalitis(y)] -> Causes(x,y)).
exists x y (Nipah(x) & Encephalitis(y)).
end_of_list.
I get this output for the search:
given clause #1: (wt=2) 2 [] Nipah($c2).
given clause #2: (wt=2) 2 [] Encephalitis($c1).
search stopped because sos empty
Why won't OTTER infer Causes($c2,$c1)?
EDIT:
I removed the square brackets from [Nipah(x) & Encephalitis(x)] and it worked. Why does this matter?
I'd answer with a question: Why did you use square brackets in the first place?
Look into the Otter manual, Section 4.3, List Notation. Square brackets are used for lists; they're syntactic sugar that is expanded into special terms. In your case, it expanded to something like
all x y ($cons(Nipah(x) & Encephalitis(y), $nil) -> Causes(x,y)).
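For reference, the working version with ordinary parentheses, which Otter parses as formula grouping rather than list syntax (this is what the asker's edit amounts to), would be:

```
set(auto).
formula_list(usable).
all x y ((Nipah(x) & Encephalitis(y)) -> Causes(x,y)).
exists x y (Nipah(x) & Encephalitis(y)).
end_of_list.
```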
Why won't OTTER infer Causes($c2,$c1)?
Note that the resolution calculus is not complete in the sense that every formula provable from a given theory can be inferred by the calculus; deriving every consequence would in fact be highly undesirable. Instead, resolution is only refutationally complete, meaning that if a given theory is contradictory, then resolution will find a proof of the empty clause. So even if a clause C is a logical consequence of a set of clauses T, that doesn't mean the resolution calculus can derive C from T. In your case, the fact that Causes($c2,$c1) follows from the input doesn't mean Otter has to derive it.