Is there a general algorithm that turns a given grammar into an LR(0) grammar?
I tried converting the grammar to CNF, but even when I succeeded, the result wasn't LR(0).
No.
There are many languages that are not LR(0), meaning that no LR(0) grammar exists for them.
CNF is irrelevant here: converting a grammar to CNF does not, in general, make it deterministic.
Related
I'm building a propositional logic library and am running into some conceptual (and computational) problems to do with arbitrary tautologies and contradictions.
Firstly, I assume the symbols \top and \bot are both well-formed formulas.
If \top represents an arbitrary tautology, and it is known that any formula can be converted to NNF, it seems that \top is both in NNF and not in NNF simultaneously.
This leads me to conclude that my assumption was wrong: they are not conventional well-formed formulas, but rather special forms of syntactic sugar that are undefined with respect to NNF. Thus, if my NNF converter is given a formula such as \alpha\lor\top, it should "treat \top as it would an atom" and just return the formula (no simplifying, as that's not the function's prerogative). This solution would extend to other normal forms too.
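Under that reading, a converter might look like the following minimal Python sketch (the AST classes and names are hypothetical, not from any particular library): ⊤ and ⊥ get no special case at all and flow through exactly like atoms.

```python
from dataclasses import dataclass

# Minimal AST sketch: atoms (including the constants "⊤" and "⊥",
# deliberately treated as ordinary atoms), negation, and two connectives.
@dataclass
class Atom: name: str
@dataclass
class Not:  arg: object
@dataclass
class And:  left: object; right: object
@dataclass
class Or:   left: object; right: object

def nnf(f):
    """Push negations inward; ⊤ and ⊥ are left untouched, like any atom."""
    if isinstance(f, Atom):
        return f
    if isinstance(f, (And, Or)):
        return type(f)(nnf(f.left), nnf(f.right))
    # f is a negation: dispatch on what is being negated
    g = f.arg
    if isinstance(g, Atom):
        return f                  # a literal is already NNF; no simplifying of ¬⊤
    if isinstance(g, Not):
        return nnf(g.arg)         # double negation
    if isinstance(g, And):        # De Morgan: ¬(p ∧ q) = ¬p ∨ ¬q
        return Or(nnf(Not(g.left)), nnf(Not(g.right)))
    return And(nnf(Not(g.left)), nnf(Not(g.right)))

# ¬(α ∧ ⊤) becomes ¬α ∨ ¬⊤, with ⊤ handled exactly like an atom
print(nnf(Not(And(Atom("α"), Atom("⊤")))))
```
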
Is this thinking correct? Any suggestions as to how I could better deal with them?
Thanks in advance to anyone who responds! I imagine this may be entry-level stuff.
Well, I need help. I'm working with languages and free context grammars, and I need to know if there is an algorithm or program that helps resolve the membership problem; that is, given a string "w" and an FCG G, decide whether the string is in the language or not.
I'm looking for a library or a program that can do this, so I can later convert the string into an automaton.
First of all, I've only seen such grammars called context-free grammars, not free context grammars. Also, automata is the plural of automaton, and your last statement about converting a string into an automaton makes no sense: there is a correspondence between context-free grammars and pushdown automata, but not between strings and automata.

Given a context-free grammar, the simplest (though not necessarily the most efficient) algorithm for deciding whether a string is part of the language of the grammar is to apply every possible production to every sentential form derivable from the start nonterminal, generating every derivable string of length less than or equal to the string in question. If the string is not among them, it is not a member of the language of the grammar.
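A minimal sketch of that brute-force procedure, assuming (as a simplification) that the grammar has no ε-productions, so sentential forms never shrink; the dict encoding and the single-character nonterminal convention are my own choices:

```python
from collections import deque

def cfg_member(w, productions, start="S"):
    """Naive CFG membership test: breadth-first search over all sentential
    forms derivable from the start symbol, pruning any form longer than the
    target string. Assumes no epsilon-productions (forms never shrink).
    Nonterminals are single uppercase characters; `productions` maps a
    nonterminal to a list of right-hand-side strings."""
    seen = set()
    queue = deque([start])
    while queue:
        form = queue.popleft()
        if form == w:
            return True
        if form in seen or len(form) > len(w):
            continue
        seen.add(form)
        for i, sym in enumerate(form):
            if sym in productions:          # nonterminal: expand it every way
                for rhs in productions[sym]:
                    queue.append(form[:i] + rhs + form[i + 1:])
    return False

# Toy grammar for a^n b^n (n >= 1):  S -> aSb | ab
G = {"S": ["aSb", "ab"]}
print(cfg_member("aabb", G))   # True
print(cfg_member("aab", G))    # False
```
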
In my opinion, defining DCGs (Definite Clause Grammars) as merely a compact way to describe lists in Prolog is a poor way to define them. As far as I know, DCGs are used not only in Prolog but also in other programming languages, such as Mercury.
In addition, they are called DCGs because they represent a grammar as a set of definite clauses (Horn clauses), the basis of logic programming.
So, if an entire Prolog program can be written using definite clauses, why are DCGs defined solely as a compact way to describe lists in Prolog?
Note: This doubt arises from the description of the dcg tag given by SO.
The extended info from the DCG tag wiki provides additional information, which I think is both correct and also in close agreement with your first point:
"DCGs are usually associated with Prolog, but similar languages such
as Mercury also include DCGs."
Regarding your second point: Emphasizing the close association with Prolog lists is in my opinion well justified, since a DCG indeed always describes a list, and typically also quite compactly.
Both are technologies expressed through languages full of macros, but in more technical terms: what kind of grammar does each use, and how can their properties be described?
I'm not interested in a graphical representation; by properties I mean a descriptive phrase about this subject, so please don't just give a BNF/EBNF-oriented response full of arcs and graphs.
I assume that both are context-free grammars, but this is a big family of grammars; is there a way to describe these two more precisely?
Thanks.
TeX can change the meaning of characters at run time, so it's not context free.
Is my language Context-Free?
I believe that every useful language ends up being Turing-complete, reflexive, etc.
Fortunately that is not the end of the story.
Most parser generation tools (yacc, ANTLR, etc.) process up to context-free grammars (CFGs).
So we divide the language-processing problem into three steps:
Build an over-generating CFG; this is the "syntactical" part that constitutes a solid base to which we add the other components,
Add "semantic" constraints (with some extra syntactic and semantic constraints),
Add the main semantics (static semantics, pragmatics, attributive semantics, etc.).
Writing a context-free grammar is a very standard way of speaking about all languages!
It is a very clear and didactic notation for languages! (And sometimes it is not telling the whole truth.)
When we say that a language "is not context-free, is Turing-complete, ...", you can translate that to "count on lots of extra semantic work" :)
How can I speak about it?
Many choices are available. I like to use a subset of the following:
Write a clear, semantics-oriented CFG;
for each symbol (terminal or nonterminal), add/define a set of semantic attributes;
for each production rule, add syntactic/semantic constraint predicates;
for each production rule, add a set of equations defining the values of the attributes;
for each production rule, add an English explanation, examples, etc.
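As a toy illustration of the "attributes plus equations per production" idea above, here is Knuth's classic attributed grammar for binary numerals, sketched in Python; the function name and string encoding are mine, and the attribute equations are written out as comments on the code that implements them:

```python
# Attribute-grammar sketch: each production carries an equation computing
# the synthesized attribute 'val' of its left-hand side from the attributes
# of its right-hand side (Knuth's binary-numeral example).
#   Grammar:  N -> N B | B      B -> 0 | 1

def eval_binary(s):
    """Evaluate a binary numeral by applying the attribute equations:
       N -> N B : N0.val = 2 * N1.val + B.val
       N -> B   : N.val  = B.val
       B -> 0|1 : B.val  = int(digit)"""
    if len(s) == 1:                              # production N -> B
        return int(s)                            # B.val = int(digit)
    return 2 * eval_binary(s[:-1]) + int(s[-1])  # N0.val = 2*N1.val + B.val

print(eval_binary("1011"))  # 11
```

The constraint predicates from the list above would slot in the same way, e.g. a check that each character is '0' or '1' before the equation fires.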
Can someone explain to me how grammars of this kind [context-free grammars and context-sensitive grammars] accept a string?
What I know is
A context-free grammar is a formal grammar in which every production (rewrite) rule has the form V → w,
where V is a single nonterminal symbol and w is a string of terminals and/or nonterminals. w can be empty.
A context-sensitive grammar is a formal grammar in which the left-hand and right-hand sides of any production (rewrite) rule may be surrounded by a context of terminal and nonterminal symbols.
But how can I explain why these grammars accept a string?
An important detail here is that grammars do not accept strings; they generate strings. Grammars are descriptions of languages that provide a means for generating all possible strings contained in the language. In order to tell if a particular string is contained in the language, you would use a recognizer, some sort of automaton that processes a given string and says "yes" or "no."
A context-free grammar (CFG) is a grammar where (as you noted) each production has the form A → w, where A is a nonterminal and w is a string of terminals and nonterminals. Informally, a CFG is a grammar where any nonterminal can be expanded out to any of its productions at any point. The language of a grammar is the set of strings of terminals that can be derived from the start symbol.
A context-sensitive grammar (CSG) is a grammar where each production has the form wAx → wyx, where w and x are strings of terminals and nonterminals and y is a nonempty string of terminals and nonterminals. In other words, the productions give rules saying "if you see A in a given context, you may replace A by the string y." It's unfortunate that these grammars are called "context-sensitive grammars," because it means that "context-free" and "context-sensitive" are not opposites, and it means that there are certain classes of grammars that arguably take a lot of contextual information into account but aren't formally considered to be context-sensitive.
To determine whether a string is contained in a CFG or a CSG, there are many approaches. First, you could build a recognizer for the given grammar. For CFGs, the pushdown automaton (PDA) is a type of automaton that accepts precisely the context-free languages, and there is a simple construction for turning any CFG into a PDA. For the context-sensitive grammars, the automaton you would use is called a linear bounded automaton (LBA).
However, the above approaches, if treated naively, are not very efficient. To determine whether a string is in the language of a CFG, there are far more efficient algorithms. For example, many grammars admit LL(k) or LR(k) parsers, which decide in linear time whether a string is in the language. All grammars can be parsed with the Earley parser, which in O(n³) time can determine whether a string of length n is in the language (interestingly, it can parse any unambiguous CFG in O(n²), and with lookaheads can parse any LR(k) grammar in O(n) time!). If you are purely interested in the question "is string x contained in the language generated by grammar G?", one of these approaches would be excellent. If you want to know how the string x was generated (by finding a parse tree), these approaches can be adapted to provide that information as well. However, parsing CSGs is, in general, PSPACE-complete, so no known parsing algorithm for them runs in worst-case polynomial time. There are some algorithms that tend to run quickly in practice, though. The authors of Parsing Techniques: A Practical Guide (see below) have put together a fantastic page containing all sorts of parsing algorithms, including one that parses context-sensitive languages.
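The Earley recognizer mentioned above can be sketched in a few lines of Python. This is a minimal, unoptimized version that iterates each chart column to a fixed point instead of using the usual worklist; the grammar encoding (a dict from nonterminal to a list of right-hand-side tuples) is my own assumption:

```python
def earley_recognize(words, grammar, start="S"):
    """Earley recognizer. chart[i] holds items (lhs, rhs, dot, origin):
    an attempt to match production lhs -> rhs that began at position
    `origin` and has matched rhs[:dot] so far."""
    chart = [set() for _ in range(len(words) + 1)]
    for rhs in grammar[start]:
        chart[0].add((start, rhs, 0, 0))
    for i in range(len(words) + 1):
        changed = True
        while changed:                       # iterate column i to a fixed point
            changed = False
            for (lhs, rhs, dot, origin) in list(chart[i]):
                if dot < len(rhs) and rhs[dot] in grammar:       # PREDICT
                    for r in grammar[rhs[dot]]:
                        item = (rhs[dot], r, 0, i)
                        if item not in chart[i]:
                            chart[i].add(item); changed = True
                elif dot < len(rhs):                             # SCAN
                    if i < len(words) and words[i] == rhs[dot]:
                        chart[i + 1].add((lhs, rhs, dot + 1, origin))
                else:                                            # COMPLETE
                    for (l2, r2, d2, o2) in list(chart[origin]):
                        if d2 < len(r2) and r2[d2] == lhs:
                            item = (l2, r2, d2 + 1, o2)
                            if item not in chart[i]:
                                chart[i].add(item); changed = True
    return any(lhs == start and dot == len(rhs) and origin == 0
               for (lhs, rhs, dot, origin) in chart[len(words)])

# Ambiguous toy grammar:  S -> S + S | a
G = {"S": [("S", "+", "S"), ("a",)]}
print(earley_recognize(["a", "+", "a"], G))  # True
print(earley_recognize(["a", "+"], G))       # False
```

Ambiguity is no obstacle here: the chart simply records every partial parse, which is exactly why Earley handles grammars that LL/LR tools reject.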
If you're interested in learning more about parsing, consider checking out the excellent book "Parsing Techniques: A Practical Guide, Second Edition" by Grune and Jacobs, which discusses all sorts of parsing algorithms for determining whether a string is contained in a grammar and, if so, how it is generated by the parsing algorithm.
As was said before, a grammar doesn't accept a string; it is simply a way to generate specific words of the language you are analyzing. In formal language theory the grammar is the generative device, while an automaton does what you're describing: recognizing specific strings.
In particular, you need a linear bounded automaton to recognize the Type 1 languages (the context-sensitive languages in Chomsky's hierarchy).
A grammar for a specific language only lets you specify the properties shared by all the strings that make up the language.
I hope that my explanation was clear.
One easy way to show that a grammar generates a string is to exhibit a derivation: the sequence of production rules that produces that string.