Both are technologies that are expressed via languages full of macros, but in more technical terms, what kind of grammar do they have, and how can their properties be described?
I'm not interested in a graphical representation; by properties I mean a descriptive phrase about the subject, so please don't just go for a BNF/EBNF-oriented response full of arcs and graphs.
I assume that both are context-free grammars, but that is a big family of grammars; is there a way to describe these two more precisely?
Thanks.
TeX can change the meaning of characters at run time (for example, \catcode can reassign a character's category mid-document), so it's not context-free.
Is my language Context-Free?
I believe that every useful language ends up being Turing-complete, reflexive, etc.
Fortunately that is not the end of the story.
Most parser generation tools (yacc, ANTLR, etc.) handle up to context-free grammars (CFGs).
So we divide the language processing problem into three steps:
Build an over-generating CFG; this is the "syntactical" part and constitutes a solid base to which we add the other components,
Add "semantic" constraints (extra syntactic and semantic restrictions),
Add the main semantics (static semantics, pragmatics, attributive semantics, etc.).
Writing a context-free grammar is a very standard way of speaking about languages!
It is a very clear and didactic notation for languages (even if it sometimes does not tell the whole truth).
When we say "it is not context-free, it is Turing-complete, ...", you can translate that to "you can count on lots of extra semantic work" :)
How can I speak about it?
Many choices are available. I like to use a subset of the following (a small sketch follows the list):
Write a clear, semantics-oriented CFG
for each symbol (terminal or nonterminal), define a set of semantic attributes
for each production rule, add syntactic/semantic constraint predicates
for each production rule, add a set of equations defining the values of the attributes
for each production rule, add an English explanation, examples, etc.
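As a small sketch of that recipe in Prolog DCG notation, here is the classic attribute-grammar example (binary numerals with a computed value, following Knuth). The nonterminal names are made up, and the { } goals play the role of the attribute equations:

bits(V)         --> bit(B), bits_rest(B, V).
bits_rest(A, V) --> bit(B), { A1 is 2*A + B }, bits_rest(A1, V).
bits_rest(V, V) --> [].
bit(0)          --> [0].
bit(1)          --> [1].

% ?- phrase(bits(V), [1,0,1]).
% V = 5.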
I am attempting to understand the best uses of backward and forward chaining in AI programming for a program I am writing. Would anyone be able to explain the most ideal uses of backward and forward chaining? Also, could you provide an example?
I have done some research on the current understanding of "forward chaining" and "backward chaining". This brings up a lot of material. Here is a résumé.
First, a diagram, partially based on The Sad State Concerning the Relationships between Logic, Rules and Logic Programming (Robert Kowalski):
LHS stands for "left-hand-side", RHS stands for "right-hand-side" of a rule throughout.
Let us separate "Rule-Based Systems" (i.e. systems which do local computation based on rules), into three groups as follows:
Production Rule Systems, which include the old-school Expert System Shells, which are not built on logical principles, i.e. "without a guiding model".
Logic Rule Systems, i.e. systems based on a logical formalism (generally a fragment of first-order logic, classical or intuitionistic). This includes Prolog.
Rewrite Rule Systems, i.e. systems which rewrite some working memory based on LHS => RHS rewrite rules.
There may be others. Features of one group can be found in another group. Systems of one group may be partially or wholly implemented by systems of another group. Overlap is not only possible but certain.
[Diagram of the three groups; in the original image, forward chaining is shown in green, backward chaining in orange, and Prolog in yellow.]
RuleML (an organization) tries to XML-ize the various rulesets which exist. Their classification of rules appears in The RuleML Perspective on Reaction Rule Standards by Adrian Paschke: it makes a differentiation between "deliberative rules" and "reactive rules", which fits.
First box: "Production Rule Systems"
The General Idea of the "Production Rule System" (PRS)
There are "LHS->RHS" rules & meta-rules, with the latter controlling application of the first. Rules can be "logical" (similar to Prolog Horn Clauses), but they need not be!
The PRS has a "working memory", which is changed destructively whenever a rule is applied: elements or facts can be removed, added or replaced in the working memory.
PRS have "operational semantics" only (they are defined by what they do).
PRS have no "declarative semantics", which means there is no proper way to reason about the ruleset itself: What it computes, what its fixpoint is (if there is one), what its invariants are, whether it terminates etc.
More features:
Ad-hoc handling of uncertainty using locally computable functions (i.e. not probability computations), as in MYCIN, Fuzzy rules, Dempster-Shafer theory, etc.
Strong Negation may be expressed in an ad-hoc fashion.
Generally, backtracking on impasse is not performed, one has to implement it explicitly.
PRS can connect to other systems rather directly: Call a neural network, call an optimizer or SAT Solver, call a sensor, call Prolog etc.
Special support for explanations & debugging may or may not exist.
Example Implementations
Ancient:
Old-school "expert systems shells", often written in LISP.
Planner of 1971, which is a language with rudimentary (?) forward and backward chaining. The implementations of that language were never complete.
The original OPSx series, in particular OPS5, on which R1/XCON - a VAX system configurator with 2500 rules - was running. This was actually a forward-chaining implementation.
Recent:
CLIPS (written in C): http://www.clipsrules.net/
Jess (written in Java): https://jess.sandia.gov/
Drools (written in "Enterprise" Java): https://www.drools.org/
Drools supports "backward chaining" (though I'm not sure how exactly), and I'm not sure whether any of the others do, or what it looks like if they do.
"Forward chaining" in PRS
Forward chaining is the original approach to the PRS "cycle", also called the "recognize-act" cycle or the "data-driven" cycle, which indicates what it is for. Event-Condition-Action architecture is another commonly used description.
The inner workings are straightforward:
The rule LHSs are matched against the working memory (which happens at every working memory update thanks to the RETE algorithm).
One of the matching rules is selected according to some criterion (e.g. priority) and its RHS is executed. This continues until no LHS matches anymore.
This cycle can be seen as a higher-level approach to imperative state-based languages.
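To make the cycle concrete, here is a minimal recognize-act loop sketched in Prolog. Everything in it (rule/4 and the example facts) is made up for illustration, and a real PRS would use RETE-style incremental matching instead of this naive scan:

:- use_module(library(lists)).

% Working memory is a list; rule(Name, Lhs, Adds, Dels) may fire when
% every Lhs pattern matches some working-memory element.
match([], _).
match([P|Ps], WM) :- member(P, WM), match(Ps, WM).       % recognize

step(WM, WMNext) :-
    rule(Name, Lhs, Adds, Dels),
    match(Lhs, WM),
    subtract(WM, Dels, WM1),                             % act: destructive update
    union(Adds, WM1, WMNext),
    WMNext \== WM,                                       % the firing must change something
    format("fired: ~w~n", [Name]).

run(WM, Final) :- step(WM, WM1), !, run(WM1, Final).
run(WM, WM).                                             % quiescence: no rule fires

% A hypothetical stimulus-response rule: if hungry and food is at hand, eat it.
rule(eat, [hungry, have(Food)], [ate(Food)], [hungry, have(Food)]).

% ?- run([hungry, have(apple)], Final).
% fired: eat
% Final = [ate(apple)].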
Robert Kowalski notes that the "forward chaining" rules are actually an amalgamation of two distinct uses:
Forward-chained logic rules
These rules apply Modus Ponens repeatedly to the working memory and add deduced facts.
Example:
"IF X is a man, THEN X is mortal"
Uses:
Deliberation, refinement of representations.
Exploration of state spaces.
Planning if you want more control or space is at a premium (R1/XCON was a forward chaining system, which I find astonishing. This was apparently due to the desire to keep resource usage within bounds).
In Making forward chaining relevant (1998), Fahiem Bacchus writes:
Forward chaining planners have two particularly useful properties. First, they maintain complete information about the intermediate states generated by a potential plan. This information can be utilized to provide highly effective search control, both domain independent heuristic control and even more effective domain dependent control ... The second advantage of forward chaining planners is they can support rich planning languages. The TLPlan system for example, supports the full ADL language, including functions and numeric calculations. Numbers and functions are essential for modeling many features of real planning domains, particularly resources and resource consumption.
How much of the above really applies is debatable. You can always write your backward-chaining planner to retain more information or to be open to configuration by a search strategy selecting module.
Forward-chaining "reactive rules" aka "stimulus-response rules"
Example:
"IF you are hungry THEN eat something"
The stimulus is "hunger" (which can be read off a sensor). The response is to "eat something" (which may mean controlling an effector). There is an unstated goal, hich is to be "less hungry", which is attained by eating, but there is no deliberative phase where that goal is made explicit.
Uses:
Immediate, non-deliberative agent control: LHS can be sensor input, RHS can be effector output.
"Backward chaining" in PRS
Backward chaining, also called "goal-directed search", applies "goal-reduction rules" and runs the "hypothesis-driven cycle", which indicates what it is for.
Examples:
BDI Agents
MYCIN
Use this when:
Your problem looks like a "goal" that may be broken up into "subgoals", which can be solved individually. Depending on the problem, this may not be possible: the subgoals may have too many interdependencies or too little structure.
You need to "pull in more data" on demand. For example, you ask the user Y/N question until you have classified an object properly, or, equivalently, until a diagnosis has been obtained.
When you need to plan, search, or build a proof of a goal.
One can encode backward-chaining rules also as forward-chaining rules as a programming exercise. However, one should choose the representation and the computational approach that is best adapted to one's problem. That's why backward chaining exists after all.
Second box: "Logic Rule Systems" (LRS)
These are systems based on some underlying logic. The system's behaviour can (at least generally) be studied independently from its implementation.
See this overview: Stanford Encyclopedia of Philosophy: Automated Reasoning.
I make a distinction between systems for "Modeling Problems in Logic" and systems for "Programming in Logic". The two are merged in textbooks on Prolog. Simple "Problems in Logic" can be directly modeled in Prolog (i.e. using Logic Programming) because the language is "good enough" and there is no mismatch. However, at some point you need dedicated systems for your task, and these may be quite different from Prolog. See Isabelle or Coq for examples.
Restricting ourselves to the Prolog family of systems for "Logic Programming":
"Forward chaining" in LRS
Forward-chaining is not supported by a Prolog system as such.
Forward-chained logic rules
If you want forward-chained logic rules, you can write your own interpreter "on top of Prolog". This is possible because Prolog is a general-purpose programming language.
Here is a very silly example of forward chaining of logic rules. It would certainly be preferable to define a domain-specific language and appropriate data structures instead:
% Add Fact to the knowledge base KB, failing if it is already present.
add_but_fail_if_exists(Fact,KB,[Fact|KB]) :- \+ member(Fact,KB).

% fwd_chain(+KB,-KBFinal,?RuleDescription): apply the first rule that can
% deduce something new, then recurse on the enlarged knowledge base.
fwd_chain(KB,KBFinal,"forall x: man(x) -> mortal(x)") :-
   member(man(X),KB),
   add_but_fail_if_exists(mortal(X),KB,KB2),
   !,
   fwd_chain(KB2,KBFinal,_).
fwd_chain(KB,KBFinal,"forall x: man(x),woman(y),(married(x,y);married(y,x)) -> needles(y,x)") :-
   member(man(X),KB),
   member(woman(Y),KB),
   (member(married(X,Y),KB);member(married(Y,X),KB)),
   add_but_fail_if_exists(needles(Y,X),KB,KB2),
   !,
   fwd_chain(KB2,KBFinal,_).
% Nothing new can be deduced: the knowledge base has reached its fixpoint.
fwd_chain(KB,KB,"nothing to deduce anymore").

rt(KBin,KBout) :- fwd_chain(KBin,KBout,_).
Try it:
?- rt([man(socrates),man(plato),woman(xanthippe),married(socrates,xanthippe)],KB).
KB = [needles(xanthippe, socrates), mortal(plato),
mortal(socrates), man(socrates), man(plato),
woman(xanthippe), married(socrates, xanthippe)].
Extensions to add efficient forward-chaining to Prolog have been studied but they seem to all have been abandoned. I found:
1989: Adding Forward Chaining and Truth Maintenance to Prolog (PDF) (Tim Finin, Rich Fritzson, Dave Matuszek)
There is an active implementation of this on GitHub: Pfc -- forward chaining in Prolog, and an SWI-Prolog pack, see also this discussion.
1997: Efficient Support for Reactive Rules in Prolog (PDF) (Mauro Gaspari) ... the author talks about "reactive rules" but apparently means "forward-chained deliberative rules".
1998: On Active Deductive Database: The Statelog Approach (Georg Lausen, Bertram Ludäscher, Wolfgang May).
Kowalski writes:
"Zaniolo (LDL++?) and Statelog use a situation calculus-like representation with frame axioms, and reduce Production Rules and Event-Condition-Action rules to Logic Programs. Both suffer from the frame problem."
Forward-chained reactive rules
Prolog is not really made for "reactive rules". There have been some attempts:
LUPS: A language for updating logic programs (1999) (Moniz Pereira, Halina Przymusinska, Teodor C. Przymusinski)
The "Logic-Based Production System" (LPS) is recent and rather interesting:
Integrating Logic Programming and Production Systems in Abductive Logic Programming Agents (Robert Kowalski, Fariba Sadri)
Presentation at RR2009: Integrating Logic Programming and Production Systems in Abductive Logic Programming Agents
LPS website
It defines a new language where Observations lead to Forward Chaining, and Backward Chaining leads to Acts. Both "silos" are linked by Integrity Constraints from Abductive Logic Programming.
So you can replace a reactive rule by something which has a logic interpretation.
Third box: "Rewrite Rule Systems" (forward-chaining)
See also: Rewriting.
Here I will just mention CHR. It is a forward-chaining system which successively rewrites elements in a working memory according to rules that match working-memory elements, check a logical guard condition, and remove/add working-memory elements if the guard succeeds.
CHR can be understood as an application of a fragment of linear logic (see "A Unified Analytical Foundation for Constraint Handling Rules" by Hariolf Betz).
A CHR implementation exists for SWI Prolog. It provides backtracking capability for CHR rules and a CHR goal can be called like any other Prolog goal.
Usage of CHR (a small example follows this list):
General model of computation (i.e. like Turing machines, etc.)
Bottom-up parsing.
Type checking.
Constraint propagation in constraint logic programming.
Anything that you would rather forward-chain (process bottom-up) than backward-chain (process top-down).
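For a taste of CHR, here is the classic prime sieve from the CHR literature, runnable in SWI-Prolog: candidate/1 generates the numbers 2..N, and the last rule rewrites the constraint store by removing every prime that is a multiple of another.

:- use_module(library(chr)).
:- chr_constraint candidate/1, prime/1.

candidate(1) <=> true.                                        % done generating
candidate(N) <=> N > 1 | prime(N), M is N - 1, candidate(M).
prime(I) \ prime(J) <=> J mod I =:= 0 | true.                 % sieve out multiples

% ?- candidate(10).
% The store is left holding prime(7), prime(5), prime(3), prime(2).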
I find it useful to start with your process and goals.
If your process can be easily expressed as trying to satisfy a goal by satisfying sub-goals, then you should consider a backward-chaining system such as Prolog. These systems work by processing rules for the various ways in which a goal can be satisfied and the constraints on applying those ways. Rule processing searches the network of goals, backtracking to try alternatives when one way of satisfying a goal fails.
If your process starts with a set of known information and applies rules to add information, then you should consider a forward-chaining system such as OPS5, CLIPS or JESS. These languages apply matching to the left-hand sides of the rules and invoke the right-hand sides of rules for which the matching succeeds.

The working memory is better thought of as "what is known" than as "true facts". Working memory can contain information known to be true, information known to be false, goals, sub-goals, and even domain rules. How this information is used is determined by the rules, not the language. To these languages there is no difference between rules that create values (deduce facts), rules that create goals, rules that create new domain knowledge, or rules that change state. It is all in how you write your rules, organize your data, and add base clauses to represent this knowledge.
It is fairly easy to implement either method using the other. If you have a body of knowledge and want to make deductions, but this needs to be directed by some goals, go ahead and use a forward-chaining language with rules to keep track of goals. In a backward-chaining language you can have goals that deduce knowledge.
I would suggest that you consider writing rules to handle the processing of domain knowledge rather than encoding your domain knowledge directly in the rules processed by the inference engine. Instead, the working memory or base clauses can contain your domain knowledge, and the language rules can apply it. By representing the domain knowledge in working memory, you can also write rules to check the domain knowledge (validate data, check for overlapping ranges, check for missing values, etc.), add a logic system on top of the rules (to calculate probabilities, confidence values, or truth values), or handle missing values by prompting for user input.
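Here is a minimal sketch of that suggestion in Prolog (all names are made up): the domain knowledge lives in working memory as kb/1 terms, and a small set of language rules applies and checks it, instead of the domain being hard-coded as program clauses.

kb(rule(mortal(X), [man(X)])).         % domain knowledge stored as data
kb(fact(man(socrates))).

prove(Goal) :- kb(fact(Goal)).         % language rules that apply the knowledge
prove(Goal) :- kb(rule(Goal, Body)), prove_all(Body).
prove_all([]).
prove_all([G|Gs]) :- prove(G), prove_all(Gs).

% Because the knowledge is data, validating it is just more rules:
undefined_premise(P) :-
    kb(rule(_, Body)), member(P, Body),
    \+ kb(fact(P)), \+ kb(rule(P, _)).

% ?- prove(mortal(socrates)).
% true.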
In my opinion, defining DCGs (Definite Clause Grammars) as merely a compact way to describe lists in Prolog is a poor way to define them. As far as I know, DCGs are not only used in Prolog, but also in other programming languages, such as Mercury.
In addition, they are called DCGs, because they represent a grammar in a set of definite clauses (Horn clauses), the basis of logic programming.
So if an entire Prolog program can be written using definite clauses, why are DCGs defined solely as a compact way to describe lists in Prolog?
Note: The doubt arises from the description for the tag dcg given by SO.
The extended info from the DCG tag wiki provides additional information, which I think is both correct and also in close agreement with your first point:
"DCGs are usually associated with Prolog, but similar languages such
as Mercury also include DCGs."
Regarding your second point: Emphasizing the close association with Prolog lists is in my opinion well justified, since a DCG indeed always describes a list, and typically also quite compactly.
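To make both points concrete, here is a minimal sketch: a DCG rule is syntactic sugar for a definite clause in which every nonterminal carries two extra list arguments, and it is exactly that threaded list which the DCG describes.

greeting --> [hello], name.      % a DCG rule ...
name     --> [world].

% ... is translated (roughly) into the definite clauses:
%
%   greeting(S0, S) :- S0 = [hello|S1], name(S1, S).
%   name(S0, S)     :- S0 = [world|S].

% ?- phrase(greeting, [hello, world]).
% true.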
I'm working on a slot-machine mini-game application. The rules for what constitutes a winning prize are rather complex (n of a kind, n of any kind, specific sequences), and to make matters even more complicated, this code should work for a slot-machine with (n >= 3) reels.
So, after some thought, I believe defining a context-free language is the most efficient and extensible way to go. This way I could define the grammar in an XML file.
So my question is: given a string of symbols S, how do I go about testing whether S is in a given context-free language? Would I simply exhaust rules until I'm out of valid rules/symbols, or is there a known algorithm that could help?
Also, a language like this seems non-regular, am I correct? I've never been good at proofs, so I've avoided trying.
Any comments on my approach would be appreciated as well.
Thanks.
"...given a string of symbols S, how do I go about testing if S is in
a given Context-Free Language?"
If a string w is in L(G), the process of finding a sequence of production rules of G by which w is derived is called parsing. So you have to build a parse tree, i.e. search for some derivation. To do this you can perform an exhaustive breadth-first search over derivations. A serious issue arises here: the search may never terminate. To prevent endless searches, you first have to transform the grammar into a normal form (such as Chomsky normal form).
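In practice you would rarely hand-roll that search. As one concrete sketch (the symbol and rule names are invented for the slot-machine setting), a Prolog DCG gives you the membership test directly, with phrase/2 searching for a derivation of the given string:

line(three_of_a_kind) --> [S, S, S].             % any three equal symbols
line(two_cherries)    --> [cherry, cherry, _].

% ?- phrase(line(Prize), [bar, bar, bar]).
% Prize = three_of_a_kind.
% ?- phrase(line(Prize), [cherry, cherry, bell]).
% Prize = two_cherries.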
"Also, a language like this seems non-regular, am I correct?"
Not necessarily. Every regular language is context-free (because it can be described by a CFG), but not every context-free language is regular.
General context-free grammars are hard to evaluate.
However, there are methods to parse grammars in subsets of the context-free grammars.
For example: SLR and LL grammars are often used by compilers to parse programming languages, which are also context-free languages. To use these, your grammar must be in one of these "families" (remember: there are infinitely many grammars for each context-free language).
Some practical tools you might want to use that are generally used for compilers are JavaCC in Java and Bison in C/C++.
(If I remember correctly, Bison is an LALR parser generator and JavaCC is an LL parser generator, but I could be wrong.)
P.S.
For a specific slot machine, with n slots and k symbols, the language is definitely regular, since there are at most k^n "words" in it, and every finite language is regular. Things obviously get complicated if you are looking for a grammar covering all slot machines.
Your best bet is to actually code this with a proper programming language. A CFG is overkill, because it can be extremely hard to encode some of the, as you say, "rather complex" rules. For example, grammars are poorly suited to talking about the number of things.
For example, how would you code "the number of cherries is > the number of any other object" in such a language? How would the person you're giving the program to do so? CFGs cannot easily express such concepts, and regular expressions cannot sanely do so by any stretch.
The answer is that grammars are not right for this task, unless the slot machine is trying to make English sentences.
You also have to consider what happens when TWO or more "prize sequences" match! Assuming you want to give out the highest prize, you need an ordered list of recognizers. This is not to say you can't code your recognizers with (for example) regular expressions in addition to arbitrary functions. I'm just saying that general CFG parsing is overkill, because what CFGs get you over regular languages (i.e. regular expressions) is the ability to consider parse trees of arbitrary depth (like nested parentheses of level N or more), which is probably not what you care about.
This is not to say that you don't, for example, want to allow regular expressions. You can make that job easy by using a parser generator (see http://en.wikipedia.org/wiki/Comparison_of_parser_generators) to recognize regexes involving cherries, bananas and pears, which you can then embed; though you might want to simply roll your own recursive descent parser (assuming again you don't care about CFGs, especially if your tokens are of bounded length).
For example, here is how I might implement it, sketched below as runnable Python (ideally you'd use a statically type-checked language with good list manipulation):
import re
from collections import Counter

rules = []

class Rule:
    """A named prize rule; registration order doubles as priority order."""
    def __init__(self, name, predicate):
        self.name = name
        self.predicate = predicate
        rules.append(self)  # adds them in order

# Reels are encoded as a string of one-character symbols,
# e.g. "ccb" = cherry, cherry, banana.

Rule("All the same",
     lambda reels, counts: re.fullmatch(r"(.)\1*", reels) is not None)

Rule("No two-in-a-row",
     lambda reels, counts: re.search(r"(.)\1", reels) is None)

Rule("More cherries than anything else",
     lambda reels, counts: all(counts["c"] > n
                               for sym, n in counts.items() if sym != "c"))

for token in "cbp":  # cherry, banana, pear
    Rule("At least 50% " + token,
         lambda reels, counts, t=token: counts[t] >= len(reels) / 2)

def first_prize(reels):
    """Return the name of the highest-priority matching rule, or None."""
    counts = Counter(reels)
    for rule in rules:  # the ordered list of recognizers
        if rule.predicate(reels, counts):
            return rule.name
    return None
Can someone explain to me why grammars [context-free grammar and context-sensitive grammar] of this kind accepts a String?
What I know is
A context-free grammar is a formal grammar in which every production (rewrite) rule is of the form V → w,
where V is a single nonterminal symbol and w is a string of terminals and/or nonterminals; w can be empty.
A context-sensitive grammar is a formal grammar in which the left-hand and right-hand sides of any production (rewrite) rule may be surrounded by a context of terminal and nonterminal symbols.
But how can I explain why these grammars accept a string?
An important detail here is that grammars do not accept strings; they generate strings. Grammars are descriptions of languages that provide a means for generating all possible strings contained in the language. In order to tell if a particular string is contained in the language, you would use a recognizer, some sort of automaton that processes a given string and says "yes" or "no."
A context-free grammar (CFG) is a grammar where (as you noted) each production has the form A → w, where A is a nonterminal and w is a string of terminals and nonterminals. Informally, a CFG is a grammar where any nonterminal can be expanded out to any of its productions at any point. The language of a grammar is the set of strings of terminals that can be derived from the start symbol.
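As a tiny example (written as a Prolog DCG for concreteness), the classic CFG S → aSb | ε generates exactly the strings aⁿbⁿ, and phrase/2 turns the same grammar into a recognizer:

s --> [].
s --> [a], s, [b].

% ?- phrase(s, [a,a,b,b]).    % true: a derivation exists
% ?- phrase(s, [a,b,b]).      % false: no derivation exists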
A context-sensitive grammar (CSG) is a grammar where each production has the form wAx → wyx, where w and x are strings of terminals and nonterminals and y is a nonempty string of terminals and nonterminals. In other words, the productions give rules saying "if you see A in a given context, you may replace A by the string y." It's unfortunate that these grammars are called "context-sensitive grammars" because it means that "context-free" and "context-sensitive" are not opposites, and it means that there are certain classes of grammars that arguably take a lot of contextual information into account but aren't formally considered to be context-sensitive.
To determine whether a string is contained in a CFG or a CSG, there are many approaches. First, you could build a recognizer for the given grammar. For CFGs, the pushdown automaton (PDA) is a type of automaton that accepts precisely the context-free languages, and there is a simple construction for turning any CFG into a PDA. For the context-sensitive grammars, the automaton you would use is called a linear bounded automaton (LBA).
However, these approaches, if treated naively, are not very efficient. To determine whether a string is contained in the language of a CFG, there are far more efficient algorithms. For example, many grammars can have LL(k) or LR(k) parsers built for them, which allow you to decide, in linear time, whether a string is contained in the grammar. All grammars can be parsed using the Earley parser, which in O(n³) can determine whether a string of length n is contained in the grammar (interestingly, it can parse any unambiguous CFG in O(n²), and with lookaheads can parse any LR(k) grammar in O(n) time!). If you were purely interested in the question "is string x contained in the language generated by grammar G?", then one of these approaches would be excellent. If you wanted to know how the string x was generated (by finding a parse tree), you can adapt these approaches to also provide this information. However, parsing CSGs is, in general, PSPACE-complete, so there are no known parsing algorithms for them that run in worst-case polynomial time. There are some algorithms that in practice tend to run quickly, though. The authors of Parsing Techniques: A Practical Guide (see below) have put together a fantastic page containing all sorts of parsing algorithms, including one that parses context-sensitive languages.
If you're interested in learning more about parsing, consider checking out the excellent book "Parsing Techniques: A Practical Guide, Second Edition" by Grune and Jacobs, which discusses all sorts of parsing algorithms for determining whether a string is contained in a grammar and, if so, how it is generated by the parsing algorithm.
As was said before, a grammar doesn't accept a string; it is simply a way to generate the specific words of the language you are analyzing. In formal language theory, the grammar is the generative device, while an automaton does what you are describing: the recognition of specific strings.
In particular, you need a linear bounded automaton in order to recognize the Type 1 languages (the context-sensitive languages in the Chomsky hierarchy).
A grammar for a specific language only lets you specify the properties of all the strings that belong to that language.
I hope that my explanation was clear.
One easy way to show that a grammar generates a string is to exhibit a derivation: the sequence of production rules that produces that string.
I'm writing some Extended Backus–Naur Form grammars for document parsing. There are lots of excellent guides for the syntax of these definitions, but very little online about how to design and structure them.
Can anyone suggest good articles (or general tips) about how you like to approach writing these, as there does seem to be an element of style even if the final parse trees can be equivalent?
e.g. things like:
Deciding if you should explicitly tag newlines, or just treat them as whitespace?
Naming schemes for your nonterminals
Handling optional whitespace in long definitions
When to add explicit rules for bad syntax vs. just letting those cases fail to match
Thanks,
You should work in the direction that you are most comfortable with - either bottom-up, top-down, or "sandwich" (do a little of both, meet somewhere in the middle).
Any "group" that can be derived and has a meaning of its own, should start from it's own non-terminal. So for example, I would use a non-terminal for all newline-related whitespaces, one for all the other whitespaces, and one for all whitespaces (which is basically the union of the former 2).
Naming conventions in grammars in general are that non-terminals are, or start with, a capital letter, and terminals start with non-capitals (but this of course depends on the language you're designing).
Regarding explicit rules for bad syntax, I'm not familiar with the concept. What I know of EBNF is that you just write everything your language accepts, and only that.
Generally, just look around at some EBNFs of different languages from different websites, get a feeling of how they look, and then do what feels right to you.