I want to understand how SLG resolution works. But I don't know what SLG is (what does it stand for?).
The resources I found are Query evaluation under the well-founded semantics and citations to Efficient implementation of general logical queries by W. Chen and D.S. Warren, and Implementations of the well-founded semantics by W. Chen, T. Swift and D.S. Warren (there should be a meta-interpreter from Quintus in it). But they are either a bit too abstract or I can't access them.
Prolog engines offer SLG resolution in the form of tabling, but I would like to implement a meta-interpreter that does SLG resolution.
Any hint towards a definition that would help me implement the meta-interpreter is welcome.
Related
I suspect that λProlog needs a type system to make its higher-order unification sound. Otherwise, through self-application, some Russell-style anomalies can appear.
Are there alternative higher order Prologs that don't need .sig files?
Maybe with a much simpler type system that doesn't need that many declarations but still has some form of higher-order unification?
Can this dilemma be solved?
Is there a higher order Prolog that wouldn't need a type system?
These are type-free:
HiLog
HiOrd
From the HiOrd paper:
The framework proposed gives rise to many questions the authors hope to address in future research. In particular, a rigorous treatment must be developed for comparison with other higher-order formal systems (Hilog, Lambda-Prolog). For example, it is reasonably straightforward to conservatively translate the Higher-order Horn fragment of λProlog into Hiord by erasing types, as the resolution rules are essentially the same (assuming a type-safe higher-order unification procedure).
Ciao (includes HiOrd)
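To give a flavour of what type-free higher-order code looks like, here is the classic generic transitive closure from the HiLog literature (HiLog syntax is supported by XSB, for example); a minimal sketch, assuming an edge/2 relation for the example query:

% generic transitive closure: the relation R is itself a parameter
closure(R)(X, Y) :- R(X, Y).
closure(R)(X, Y) :- R(X, Z), closure(R)(Z, Y).

% example query, assuming some edge/2 facts:
% ?- closure(edge)(a, B).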
I am looking to clarify some things about abductive logic programming (ALP) vs. answer set programming (ASP).
Some classmates and I are creating a game. In this game there are "heroes" (special NPCs). The heroes have goals and behaviors.
(All of this is story driven)
I would like the heroes to react to a player's or another hero's action and then decide what to do from there.
A teacher told us about a paper called "RoleModel: Towards a Formal Model of Dramatic Roles for Story Generation", which explains abductive logic programming. Through my research I found answer set programming.
Question:
Is there a difference between the ALP paradigm and the ASP paradigm?
Is one better than the other for my purposes?
Is there another option?
You're really asking three questions. I'm not qualified to answer any of them, but I'm going to take a crack at it anyway.
Is there a difference between the ALP paradigm and the ASP paradigm?
Yes. ASP is a paradigm in which your search problem is made into a model that can be handed over to different solvers. The paper you reference says in section 4.1 that they follow the ASP paradigm and use deductive and abductive reasoning concurrently. So you can see that abductive and deductive are acting as tactical solvers inside a larger ASP process.
Based on what I read on Wikipedia, this is a good approach because abductive reasoning is about providing explanations rather than logical consequences. I could see how you would like that in story generation; "Mary hates Sue, therefore Mary killed Sue" is a deduction but "Mary hates Sue, because Sue ran over her dog" seems more like an abduction, based on my cursory reading. You would want both to flesh out a story, or it's going to get pretty dull.
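To make the contrast concrete, here is a toy Prolog sketch (abduce/2, explains/2 and abducible_fact/1 are names invented here purely for illustration). Deduction runs the rule forwards from known facts; abduction searches backwards for an assumable fact that would explain an observation:

% Deduction: derive a consequence from a known fact and a (toy) rule.
hates(mary, sue).
killed(X, Y) :- hates(X, Y).

% Abduction: given the observation killed(mary, sue), search for an
% assumable fact that would explain it.
abducible_fact(ran_over_dog(sue, mary)).

explains(killed(X, Y), hates(X, Y)).
explains(hates(X, Y), ran_over_dog(Y, X)).

abduce(Observation, Observation) :-
    abducible_fact(Observation).
abduce(Observation, Explanation) :-
    explains(Observation, Cause),
    abduce(Cause, Explanation).

% ?- abduce(killed(mary, sue), E).
% E = ran_over_dog(sue, mary)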
Is one better than the other for my purposes?
All you've said about your purposes is that you're making a game. I am not a game developer, but I feel fairly confident in assuring you that nothing like this is used in a typical game. Game AI is its own whole field. I would be shocked if any of this stuff was used in a major game.
That said, RoleModel shows you can do it, and it uses both, with ASP controlling a combined ALP/DLP process. It seems likely to me that the two are pretty separable, and since one can use the other, I would guess they are not in strict opposition to each other. Since it worked for the RoleModel game, the real question isn't whether it can be done or whether it's a good idea, but whether it's a good fit for what you're trying to accomplish. If you're trying to build an action shooter, I would wager that other, simpler approaches will work out better; if you're trying to build a rich RPG, maybe it will be OK.
Is there another option?
Probably. I would investigate AI for games. The priorities are different enough that I would expect their literature starts in completely different places and goes in radically different directions, but I could be mistaken.
Any logic programming system that supports hypothetical reasoning can support ALP. Since ASP supports hypothetical reasoning, it can also support ALP. Hypothetical reasoning is a search where facts are temporarily assumed.
With standard ISO core Prolog we can simulate assuming a fact with the following code. The code leaves a choice point behind and doesn't work correctly if there is a cut involved; this is why specialized systems are nevertheless needed:
assumez(P) :- assertz(P).          % assume P by adding it to the database
assumez(P) :- retract(P), fail.    % on backtracking, withdraw P again and fail
We can now solve the following abductive problem:
abducible :- ( assumez(amount(glucose,low)) ; assumez(amount(glucose,medium)) ),
             ( assumez(amount(lactose,medium)) ; assumez(amount(lactose,hi)) ).
feed(lactose) :- amount(glucose,low), amount(lactose,hi).
feed(lactose) :- amount(glucose,medium), amount(lactose,medium).
A possible query runs as follows:
?- abducible, feed(lactose), listing(amount/2).
amount(glucose, low).
amount(lactose, hi).
Yes;
amount(glucose, medium).
amount(lactose, medium).
Yes ;
No
The above solution uses backward chaining. A forward chaining solution, which is closer to ASP's choice operators, can be provided as well. The choice operator in ASP would generate the hypothetical variants; here we simply use (;)/2 as a choice operator:
:- use_module(library(minimal/delta)).
:- multifile abducible/0.
:- dynamic abducible/0, amount/2, feed/1.
:- forward feed/2.
post(amount(glucose,low));post(amount(glucose,medium)) <= posted(abducible).
post(amount(lactose,medium));post(amount(lactose,hi)) <= posted(abducible).
post(feed(lactose)) <= posted(amount(glucose,low)), posted(amount(lactose,hi)).
post(feed(lactose)) <= posted(amount(glucose,medium)), posted(amount(lactose,medium)).
A possible query runs as follows:
?- post(abducible), feed(lactose), listing(amount/2).
amount(glucose, low).
amount(lactose, hi).
Yes ;
amount(glucose, medium).
amount(lactose, medium).
Yes ;
No
FYI: As has been mentioned, some systems for performing inductive and abductive logic programming use ASP systems. A free open source example is XHAIL https://github.com/stefano-bragaglia/XHAIL
There is also a paper describing this version:
Bragaglia S., Ray O. (2015) Nonmonotonic Learning in Large Biological Networks. In: Davis J., Ramon J. (eds) Inductive Logic Programming. Lecture Notes in Computer Science, vol 9046. Springer, Cham
It could be argued that Sherlock Holmes is actually famous for abductive reasoning not deductive reasoning... so I think there is some interesting scope for a detective game using ALP. :).
I am looking for an implementation of a Prolog extension which handles temporal logic operators. Is there any info about this?
As temporal logic has been a significant part of logic, I am sure there must have been discussions about this with respect to a prototype or an implementation.
I suggest taking a look at Etalis. If it turns out to be overkill (I'm sorry, I never really delved into it too much) and you're using SWI-Prolog, see if the pack Julian could be a better fit. It's nicely integrated with the CLP(FD) library and will leave you full freedom about the semantics of your operators. Of course, it's a 'lower level' approach...
I would start with Carlo's suggestions. But if you're looking only for basic temporal logic operators, the Logtalk library includes an implementation of the basic temporal interval relations:
https://logtalk.org/docs/interval_0.html
You can use Logtalk as an extension to most Prolog implementations.
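If all you need are a couple of such relations, they are also easy to roll yourself in plain Prolog. A minimal sketch of two of Allen's interval relations, using a made-up i(Start, End) term as the interval representation (not necessarily the representation the Logtalk library uses):

% two of Allen's interval relations over i(Start, End) terms
before(i(_, E1), i(S2, _)) :- E1 < S2.    % first interval ends before the second starts
meets(i(_, E1), i(S2, _))  :- E1 =:= S2.  % first interval ends exactly where the second starts

% ?- before(i(1,3), i(5,9)).   succeeds
% ?- meets(i(1,5), i(5,9)).    succeeds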
I want to learn more about the internals of Prolog and understand how this works.
I know how to use it. But not how it works internally. What are the names of the algorithms and concepts used in Prolog?
Probably it builds some kind of tree structure or directed object graph, and then upon queries it traverses that graph with a sophisticated algorithm. A depth-first search maybe. There might be some source code around, but it would be great to read about it from a high-level perspective first.
I'm really new to AI, and understanding Prolog seems to be a great way to start, imho. My idea is to try to rebuild something similar while skipping the parser part completely. I need to know the directions in which to focus my research efforts.
What are the names of the algorithms and concepts used in Prolog?
Logic programming
Depth-first, backtracking search
Unification
See Sterling & Shapiro, The Art of Prolog (MIT Press) for the theory behind Prolog.
Probably it builds some kind of tree structure or directed object graph, and then upon queries it traverses that graph with a sophisticated algorithm. A depth-first search maybe.
It doesn't build the graph explicitly, that wouldn't even be possible with infinite search spaces. Check out the first chapters of Russell & Norvig for the concept of state-space search. Yes, it does depth-first search with backtracking, but no, that isn't very sophisticated. It's just very convenient and programming alternative search strategies isn't terribly hard in Prolog.
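A good way to see that depth-first search plus unification really is the core mechanism is the classic three-line "vanilla" meta-interpreter; a minimal sketch that uses clause/2 to look up the clauses of the user-defined program being interpreted:

% vanilla meta-interpreter: depth-first, left-to-right, with backtracking
solve(true).
solve((A, B)) :- solve(A), solve(B).
solve(Goal)   :- clause(Goal, Body), solve(Body).

% works for ordinary user-defined predicates; built-ins would need extra clauses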
understanding Prolog seems to be a great way to start, imho.
Depends on what you want to do, but knowing Prolog certainly doesn't hurt. It's a very different way of looking at programming. Knowing Prolog helped me understand functional programming very quickly.
My idea is to try to rebuild something similar while skipping the parser part completely
You mean skipping the Prolog syntax? If you happen to be familiar with Scheme or Lisp, then check out section 4.4 of Abelson & Sussman where they explain how to implement a logic programming variant of Scheme, in Scheme.
AI is a wide field, Prolog only touches symbolic AI. As for Prolog, the inner workings are too complex to explain here, but googling will give you plenty of resources. E.g. http://www.amzi.com/articles/prolog_under_the_hood.htm .
Check also Wikipedia articles to learn about the other areas of AI.
You might also want to read about the Warren Abstract Machine
Typically, Prolog code is translated to WAM instructions and then executed more efficiently.
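To give a rough idea of what that translation looks like, here is a small clause together with a schematic WAM instruction sequence for it (simplified and shown as comments; exact instruction sets and register allocation vary between implementations):

p(X) :- q(X), r(X).

%   allocate              create an environment; X is a permanent variable
%   get_variable Y1, A1   X arrives in argument register A1, save it in Y1
%   put_value Y1, A1      pass X to q/1
%   call q/1
%   put_value Y1, A1      pass X to r/1
%   deallocate
%   execute r/1           last call optimisation: tail-call r/1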
I would add:
Programming Languages: An Interpreter-Based Approach by Samuel N. Kamin. The book is out of print, but you may find it in a university library. It contains a Prolog implementation in Pascal.
Tim Budd's "The Kamin Interpreters in C++" (in postscript)
The book by Sterling and Shapiro, mentioned by larsmans, actually contains an execution model of Prolog. It's quite nice and explains clearly "how Prolog works". And it's an excellent book!
There are also other sources you could try. Most notably, some Lisp books build pedagogically-oriented Prolog interpreters:
On Lisp by Paul Graham (in Common Lisp, using -- and perhaps abusing -- macros)
Paradigms of Artificial Intelligence Programming by Peter Norvig (in Common Lisp)
Structure and Interpretation of Computer Programs by Abelson and Sussman (in Scheme).
Of these, the last one is the clearest (in my humble opinion). However, you'd need to learn some Lisp (either Common Lisp or Scheme) to understand those.
The ISO core standard for Prolog also contains an execution model. The execution model is of interest since it gives a good account of control constructs such as cut !/0, if-then-else (->)/2, catch/3 and throw/1. It also explains how to conformantly deal with naked variables.
The presentation in the ISO core standard is not that bad. Each control construct is described in the form of a prose use case, with reference to an abstract Prolog machine consisting of a stack, etc. Then there are pictures that show the stack before and after execution of the control construct.
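As a small illustration of the kind of detail the execution model pins down, consider a naked variable used as a body goal; a sketch:

% A "naked variable" is a body goal that is just a variable. The ISO
% execution model treats it as if it were wrapped in call/1, so a cut
% passed in via G below is local to that call and does not cut p/1:
p(G) :- G, write(found), nl.      % behaves like p(G) :- call(G), ...

% ?- p((member(X, [a,b]), !)).    the cut only prunes member/2's choice points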
The cheapest source is ANSI:
http://webstore.ansi.org/RecordDetail.aspx?sku=INCITS%2FISO%2FIEC+13211-1-1995+%28R2007%29
In addition to the many good answers already posted, I add some historical facts on Prolog.
Wikipedia on Prolog: Prolog was created around 1972 by Alain Colmerauer with Philippe Roussel, based on Robert Kowalski's procedural interpretation of Horn clauses.
Alain was a French computer scientist and professor at Aix-Marseille University from 1970 to 1995. Retired in 2006, he remained active until he died in 2017. He was named Chevalier de la Legion d’Honneur by the French government in 1986.
The inner workings of Prolog are best explained by its inventor in the article Prolog in 10 figures, published in Communications of the ACM, vol. 28, no. 12, December 1985.
Prolog uses a subset of first order predicate logic, called Horn logic. The algorithm used to derive answers is called SLD resolution.
Could you please explain to me the basic connection between the fundamentals of logic programming and the phenomenon of syntactic similarity between type systems and conventional logic?
The Curry-Howard correspondence is not about logic programming, but functional programming. The fundamental mechanic of Prolog is justified in proof theory by John Robinson's resolution technique, which shows how it is possible to check whether logical formulae expressed as Horn clauses are satisfiable, that is, whether you can find terms to substitute for their logic variables that make them true.
Thus logic programming is about specifying programs as logical formulae, and the calculation of the program is some form of proof inference, in Prolog resolution, as I have said. By contrast, the Curry-Howard correspondence shows how proofs in a special formulation of logic, called natural deduction, correspond to programs in the lambda calculus, with the type of the program corresponding to the formula that the proof proves; computation in the lambda calculus corresponds to an important phenomenon in proof theory called normalisation, which transforms proofs into new, more direct proofs. So logic programming and functional programming correspond to different levels in these logics: logic programs match formulae of a logic, whilst functional programs match proofs of formulae.
There's another difference: the logics used are generally different. Logic programming generally uses simpler logics — as I said, Prolog is founded on Horn clauses, which are a highly restricted class of formulae where implications may not be nested, and there are no disjunctions, although Prolog recovers the full strength of classical logic using the cut rule. By contrast, functional programming languages such as Haskell make heavy use of programs whose types have nested implications, and are decorated by all kinds of forms of polymorphism. They are also based on intuitionistic logic, a class of logics that forbids use of the principle of the excluded middle, which Robinson's computational mechanism is based on.
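For example, a typical Prolog predicate is just a set of Horn clauses like the following, each with a single atomic head and a body that is a plain conjunction of goals, with no nested implication or disjunction in the head:

% Horn clauses: one atom in the head, a conjunction of atoms in the body
ancestor(X, Y) :- parent(X, Y).
ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).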
Some other points:
It is possible to base logic programming on more sophisticated logics than Horn clauses; for example, Lambda-prolog is based on intuitionistic logic, with a different computation mechanism than resolution.
Dale Miller has called the proof-theoretic paradigm behind logic programming the proof search as programming metaphor, to contrast with the proofs as programs metaphor that is another term used for the Curry-Howard correspondence.
Logic programming is fundamentally about goal directed searching for proofs. The structural relationship between typed languages and logic generally involves functional languages, although sometimes imperative and other languages - but not logic programming languages directly. This relationship relates proofs to programs.
So, logic programming proof search can be used to find proofs that are then interpreted as functional programs. This seems to be the most direct relationship between the two (as you asked for).
Building whole programs this way isn't practical, but it can be useful for filling in tedious details in programs, and there are some important examples of this in practice. A basic example is structural subtyping, which corresponds to filling in a few proof steps via a simple entailment proof. A much more sophisticated example is the type class system of Haskell, which involves a particular kind of goal directed search - in the extreme this involves a Turing-complete form of logic programming at compile time.
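To make that last point concrete, here is a sketch, written in Prolog rather than Haskell, of how instance declarations behave like Horn clauses, so that discharging a class constraint is exactly goal-directed proof search (the eq/1 encoding is made up for illustration):

% "instance Eq Int", "instance Eq a => Eq [a]", "instance (Eq a, Eq b) => Eq (a, b)"
eq(int).
eq(list(A))    :- eq(A).
eq(pair(A, B)) :- eq(A), eq(B).

% Discharging the constraint Eq [(Int, Int)] corresponds to the query:
% ?- eq(list(pair(int, int))).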