Looking for an algorithm/research area that determines the facts that would make a Prolog query true, given a Prolog program

I'm looking for research, algorithms, or even just the terminology for the area of research that takes a Prolog program and a query I want to be true, and attempts to find the facts that would need to be asserted to make it true. For example:
% Program
hasProperty(Object, Property) :-
    property(Object, hasProperty, Property).

property(apple, hasProperty, red).
property(car, hasProperty, drivable).

% Magic function that determines what facts would make the
% query 'hasProperty(lemon, sour)' true in the program above
?- whatFacts(hasProperty(lemon, sour), Facts).
Facts = [property(lemon, hasProperty, sour)]
I'm sure research has been done on this, and it certainly seems unsolvable in the general case, but I'm curious what has been done; I'm just having trouble finding the right terminology to locate the work.
Would love any pointers to actual algorithms or names for the area or problem I'm describing.

This is called "abduction".
For the view from philosophical logic, Stanford Encyclopedia of Philosophy offers this entry: Abduction.
For the view from logic programming, Wikipedia offers this entry: Abductive Logic Programming.
A paper that uses Prolog and CHR (Constraint Handling Rules) for Abductive reasoning:
Henning Christiansen: Abductive reasoning in Prolog and CHR (PDF): A short introduction for the KIIS course, Autumn 2005.
Christiansen refers to the book
Abduction and Induction: Essays on their Relation and Integration, edited by Peter A. Flach and Antonis C. Kakas (Kluwer Academic Publishers, April 2000), Amazon link, first chapter at ResearchGate,
and provides this introductory explainer:
Deduction, reasoning within the knowledge we have already, i.e., from those
facts we know and those rules and regularities of the world that we are
familiar with. E.g., reasoning from causes to effects: If you make a fire
here, you will burn down the house.
In Prolog, the language is structured so as to most naturally find the premise "make a fire here" if your goal happens to be "burn the house down".
Induction, finding general rules from the regularities that we have
experienced in the facts that we know; these rules can be used later for
prediction: Every time I made a fire in my living room, the house burnt
down ... Aha, the next time I make a fire in my living room, the house
will burn down too.
Abduction, reasoning from observed results to the basic facts from which
they follow, quite often it means from an observed effect to produce a
qualified guess for a possible cause: The house burnt down, perhaps my cousin
has made a fire in the living room again.
"Abductive Logic Programming" (ALP) is (used to be?) an active area of research.
Here is a Springer Link search result.
ALP is a common problem in commonsense reasoning and planning. Examples that come to mind:
The CIFF Proof Procedure for
Abductive Logic Programming with Constraints: Theory, Implementation and
Experiments
Robert Kowalski, Fariba Sadri et al. have worked on "LPS" (Logic Production
System), which uses ALP (though perhaps not by that name) in the
context of the event calculus
to decide what actions to take to make facts about the world true (I wish
there were more details here; I do hope they are writing a book on this).
Contrariwise, Raymond Reiter does not use Prolog but
Answer Set Programming
(which may be more adapted to ALP than the SLDNF approach of Prolog) for
(among others) abductive reasoning in the
Situation Calculus.
More on this in the book Knowledge in Action (MIT Press, July 2001).

As said in the comments, this is called abductive reasoning; a good guide is in Simply Logical: https://too.simply-logical.space/src/text/3_part_iii/8.0.html
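For a concrete flavour of what such a procedure can look like, here is a minimal abductive meta-interpreter sketch in plain Prolog (the predicate names whatFacts/2, prove/3 and abducible/1 are illustrative, not taken from any of the sources above, and it only handles conjunctions of positive goals):

% The example program from the question, declared dynamic so that
% clause/2 may inspect it portably.
:- dynamic hasProperty/2, property/3.

hasProperty(Object, Property) :-
    property(Object, hasProperty, Property).

property(apple, hasProperty, red).
property(car, hasProperty, drivable).

% Declare which predicates may be assumed (the abducibles).
abducible(property(_, _, _)).

% whatFacts(+Goal, -Facts): Facts are abducible facts that, if asserted,
% would allow Goal to succeed against the clauses already in the database.
whatFacts(Goal, Facts) :-
    prove(Goal, [], Facts).

prove(true, Facts, Facts) :- !.
prove((A, B), Acc0, Facts) :- !,
    prove(A, Acc0, Acc1),
    prove(B, Acc1, Facts).
prove(Goal, Acc, Facts) :-          % solve Goal with an existing rule or fact
    clause(Goal, Body),
    prove(Body, Acc, Facts).
prove(Goal, Acc, [Goal|Acc]) :-     % otherwise assume it, if it is abducible
    abducible(Goal),
    \+ clause(Goal, true).

With this loaded, ?- whatFacts(hasProperty(lemon, sour), Facts). yields Facts = [property(lemon, hasProperty, sour)]. Real ALP systems add integrity constraints, negation, and consistency checking on top of this basic idea.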

Related

Discussion about Abductive logic programming vs Answer Set Programming

I am looking to clarify some things about abductive logic programming vs. answer set programming.
Some classmates and I are creating a game. In this game there are "heroes" (special NPCs). The heroes have goals and behaviors.
(All of this is story driven.)
I would like the heroes to react to a player's or another hero's action and then decide what to do from there.
A teacher told us about a paper called "RoleModel: Towards a Formal Model of Dramatic Roles for Story Generation", which explains abductive logic programming. Through my research I found answer set programming.
Question:
Is there a difference between the ALP paradigm and the ASP paradigm?
Is one better than the other for my purposes?
Is there another option?
You're really asking three questions. I'm not qualified to answer any of them, but I'm going to take a crack at it anyway.
Is there a difference between the ALP paradigm and the ASP paradigm?
Yes. ASP is a paradigm in which your search problem is made into a model that can be handed over to different solvers. The paper you reference says in section 4.1 that they follow the ASP paradigm and use deductive and abductive reasoning concurrently. So you can see that abductive and deductive are acting as tactical solvers inside a larger ASP process.
Based on what I read on Wikipedia, this is a good approach because abductive reasoning is about providing explanations rather than logical consequences. I could see how you would like that in story generation; "Mary hates Sue, therefore Mary killed Sue" is a deduction but "Mary hates Sue, because Sue ran over her dog" seems more like an abduction, based on my cursory reading. You would want both to flesh out a story, or it's going to get pretty dull.
Is one better than the other for my purposes?
All you've said about your purposes is that you're making a game. I am not a game developer, but I feel fairly confident in assuring you that nothing like this is used in a typical game. Game AI is its own whole field. I would be shocked if any of this stuff was used in a major game.
That said, RoleModel shows you can do it, and it uses both, with ASP controlling a combined ALP/DLP process. It seems likely to me that the two are pretty separable, and since one can use the other, I would guess they are not in strict opposition to each other. Since it worked for RoleModel, the real question isn't whether it can be done, but whether it's a good idea and a good fit for what you're trying to accomplish. If you're trying to build an action shooter, I would wager that other, simpler approaches will work out better; if you're trying to build a rich RPG, maybe it will be OK.
Is there another option?
Probably. I would investigate AI for games. The priorities are different enough that I would expect their literature starts in completely different places and goes in radically different directions, but I could be mistaken.
Any logic programming system that supports hypothetical reasoning can support ALP. Since ASP supports hypothetical reasoning, it can also support ALP. Hypothetical reasoning is a search in which facts are temporarily assumed.
With standard ISO core Prolog we can simulate assuming a fact with the following code. The code leaves a choice point behind and doesn't work correctly if a cut is involved, which is why specialized systems are nevertheless needed:
assumez(P) :- assertz(P).         % assume P by adding it to the database
assumez(P) :- retract(P), fail.   % on backtracking, withdraw the assumption
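To see the limitation mentioned above, consider this small illustration (leaky/0 is a made-up example, not from the original answer). The cut discards the choice point left by assumez/1, so the retracting clause never runs and the assumed fact survives even though the overall goal fails:

leaky :- assumez(amount(glucose, low)), !, fail.

?- \+ leaky.
true.

?- amount(glucose, low).   % the assumption was never retracted
true.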
We can now solve the following abductive problem:
abducible :-
    ( assumez(amount(glucose,low)) ; assumez(amount(glucose,medium)) ),
    ( assumez(amount(lactose,medium)) ; assumez(amount(lactose,hi)) ).
feed(lactose) :- amount(glucose,low), amount(lactose,hi).
feed(lactose) :- amount(glucose,medium), amount(lactose,medium).
A possible query runs as follows:
?- abducible, feed(lactose), listing(amount/2).
amount(glucose, low).
amount(lactose, hi).
Yes;
amount(glucose, medium).
amount(lactose, medium).
Yes ;
No
The above solution uses backward chaining. A forward chaining solution, something closer to ASP's choice operators, can be provided as well. Where a choice operator in ASP would enumerate the hypothetical variants, here we simply use (;)/2 as the choice operator:
:- use_module(library(minimal/delta)).
:- multifile abducible/0.
:- dynamic abducible/0, amount/2, feed/1.
:- forward feed/1.
post(amount(glucose,low));post(amount(glucose,medium)) <= posted(abducible).
post(amount(lactose,medium));post(amount(lactose,hi)) <= posted(abducible).
post(feed(lactose)) <= posted(amount(glucose,low)), posted(amount(lactose,hi)).
post(feed(lactose)) <= posted(amount(glucose,medium)), posted(amount(lactose,medium)).
A possible query runs as follows:
?- post(abducible), feed(lactose), listing(amount/2).
amount(glucose, low).
amount(lactose, hi).
Yes ;
amount(glucose, medium).
amount(lactose, medium).
Yes ;
No
FYI: as has been mentioned, some systems for performing inductive and abductive logic programming use ASP solvers underneath. A free, open-source example is XHAIL: https://github.com/stefano-bragaglia/XHAIL
There is also a paper describing this version:
Bragaglia S., Ray O. (2015) Nonmonotonic Learning in Large Biological Networks. In: Davis J., Ramon J. (eds) Inductive Logic Programming. Lecture Notes in Computer Science, vol 9046. Springer, Cham
It could be argued that Sherlock Holmes is actually famous for abductive reasoning not deductive reasoning... so I think there is some interesting scope for a detective game using ALP. :).

What are the best Prolog programming practices and style guidelines? [closed]

OK, I know that this is a very general question and that some papers have been written on the subject, but I have a feeling that these publications cover very basic material and I'm looking for something more advanced which would improve style and efficiency. This is what I have on paper:
"Research Report AI-1989-08 Efficient Prolog: A Practical Guide" by Michael A. Covington, 1989
"Efficient Prolog Programming" by Timo Knuutila, 1992
"Coding guidelines for Prolog" by Covington, Bagnara, O'Keefe, Wielemaker, Price, 2011
Sample subjects covered in these: tail recursion and differential lists, proper use of indexing, proper use of cuts, avoiding asserts and retracts, avoiding CONSing, code formatting guidelines (indentation, if-then-elses etc.), naming conventions, code documenting, arguments order, testing.
What would you add here from your own personal experience with Prolog? Are there any special style guidelines applicable only to CLP programming? Do you know of some common efficiency problems and know how to deal with them?
UPDATE:
Some interesting (but still too basic and too general for me) points are made here: Prolog programming guidelines of Lifeware Team
Just to highlight the whole problem, I would like to quote "Coding guidelines for Prolog" (Covington et al.):
As far as we know, a coherent and reasonably complete set of coding guidelines for Prolog has never been published. Moreover, when we look at the corpus of published Prolog programs, we do not see a de facto standard emerging. The most important reason behind this apparent omission is that the small Prolog community, due to the lack of a comprehensive language standard, is further fragmented into sub-communities centered around individual Prolog systems, none of which has a dominant position.
For designing clean interfaces in Prolog, I recommend reading the Prolog standard, see iso-prolog.
In particular, note the specific format in which built-in predicates are codified, which includes a particular style of documentation but also the way errors are signaled. See 8.1 The format of built-in predicate definitions of ISO/IEC 13211-1:1995. You can find definitions in that style online in Cor.2 and the Prolog prologue.
A very good example of a library that follows the ISO error signaling conventions to the letter (and yet is not standardized) is the implementation of library(clpfd) in SICStus and SWI. While the two implementations are fundamentally different in their approach, they both use the error conventions to their best advantage.
Back to ISO. This is ISO's format for built-in predicates:
x.y.z Name/Arity
In the beginning, there may be a short optional informal remark.
x.y.z.1 Description
A declarative description is given, which very often starts with the most general goal and uses descriptive variable names so that they can be referred to later on. If the predicate's meaning is not declarative at all, either it is merely stated that the predicate "is true", or some otherwise unnecessary operational word like "unifies" or "assembles" is used. Let me give an example:
8.5.4 copy_term/2
8.5.4.1 Description
copy_term(Term_1, Term_2) is true iff Term_2 unifies with a term T which is a renamed copy (7.1.6.2) of Term_1.
So this unifies is a big red warning sign: Don't ever think this predicate is a relation, it can only be understood procedurally. And even more so it (implicitly) states that the definition is steadfast in the second argument.
Another example: sort/2. Is this now a relation or not?
8.4.3 sort/2
8.4.3.1 Description
sort(List, Sorted) is true iff Sorted unifies with the sorted list of List (7.1.6.5).
So, again, no relation. Surprised? Look at 8.4.3.4 Examples:
8.4.3.4 Examples
...
sort([X, 1], [1, 1]).
Succeeds, unifying X with 1.
sort([1, 1], [1, 1]).
Fails.
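To see why this wording matters in practice, compare the following two queries (a sketch; exact toplevel output varies by system). The first query reports the answer X = 1, yet re-running it with that very binding supplied up front fails; no true relation can behave like this:

?- sort([X, 1], [1, 1]).
X = 1.

?- X = 1, sort([X, 1], [1, 1]).
false.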
If necessary, a separate procedural description is added, starting with "Procedurally,". It again does not cover any errors at all. This is one of the big advantages of the standard descriptions: errors are all separated from the "doing", which helps a programmer (= user of the built-in) catch errors more systematically. To be fair, it slightly increases the burden on the implementor who wants to optimize by hand on a case-by-case basis. Such optimized code is often prone to subtle errors anyway.
x.y.z.2 Template and modes
Here, a comprehensive one- or two-line specification of the arguments' modes and types is given. The notation is very similar to other notations that find their origin in the 1978 DECsystem-10 mode declarations.
8.5.2.2 Template and modes
arg(+integer, +compound_term, ?term)
There is, however, a big difference between ISO's approach and Covington et al.'s guideline, which is informal in nature only and states how a programmer should use a predicate. ISO's approach describes how the built-in will behave - in particular, which errors should be expected. (There are 4 errors following from the above, plus one extra error that cannot be seen from the above spec; see below.)
x.y.z.3 Errors
All error conditions are given, each in its own subclause numbered alphabetically. The codex in 7.12 Errors:
When more than one error condition is satisfied, the error that is reported by the Prolog processor is implementation dependent.
That means that each error condition must state all the preconditions under which it applies. All of them. The error conditions are not read like an if-then-elsif-then chain.
It also means that the codifier has to put extra effort into finding good error conditions. This is all to the advantage of the actual user-programmer, but certainly a bit of a pain for the codifier and implementor.
Many error conditions directly follow from the spec given in x.y.z.2 according to the NOTES in 8.1.3 Errors and according to 7.12.2 Error classification (summary). For the built-in predicate arg/3, errors a, b, c, d follow from the spec. Only error e does not follow.
8.5.2.3 Errors
a) N is a variable — instantiation_error.
b) Term is a variable — instantiation_error.
c) N is neither a variable nor an integer — type_error(integer, N).
d) Term is neither a variable nor a compound term — type_error(compound, Term).
e) N is an integer less than zero — domain_error(not_less_than_zero, N).
x.y.z.4 Examples
(Optional).
x.y.z.5 Bootstrapped built-in predicates
(Optional).
Defines other predicates that are so similar, they can be "bootstrapped".
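As a rough illustration of how these conventions carry over to user code, here is a sketch (my_arg/3 is a made-up name, not part of any library) that signals the five error conditions listed above for arg/3 before delegating to the real built-in. The order of the checks is a free choice, since 7.12 leaves the reported error implementation dependent when several conditions hold at once:

my_arg(N, Term, Arg) :-
    (   var(N)            -> throw(error(instantiation_error, my_arg/3))
    ;   \+ integer(N)     -> throw(error(type_error(integer, N), my_arg/3))
    ;   var(Term)         -> throw(error(instantiation_error, my_arg/3))
    ;   \+ compound(Term) -> throw(error(type_error(compound, Term), my_arg/3))
    ;   N < 0             -> throw(error(domain_error(not_less_than_zero, N), my_arg/3))
    ;   arg(N, Term, Arg)
    ).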

Prolog basic questions

First, what do you recommend as a book for learning Prolog? Second, is there an easy way to load many .pl files at once? Currently I'm doing them one at a time with ['name.pl'], but it is annoying to do over and over again. I am also using this to learn.
Thanks
First, welcome to Prolog! I think you'll find it rewarding and enjoyable.
The books I routinely see recommended are The Art of Prolog, Programming in Prolog and Clause and Effect. I have Art and Programming and they're both fine books; Art is certainly more encyclopedic and Programming is more linear. I consult Art and Craft a lot lately, and some weirder ones (Logic Grammars, for example). I'm hoping to buy Prolog Programming in Depth next. I don't think there are a lot of bad Prolog books out there one should try to avoid. I would probably save Craft and Practice for later, though.
You can load multiple files at once by listing them:
:- [file1, file2, file3].
Also, you can usually drop both the quotes and the '.pl' extension and just write [name]; single quotes are really only necessary if Prolog wouldn't otherwise read the enclosed text as an atom.
Hope this helps and good luck on your journey. :)
If you are inclined toward a mathematical introduction, Logic, Programming and Prolog (2nd ed.) by Nilsson and Maluszynski is an interesting book.
Programming in Prolog, by Clocksin and Mellish, is the classic introductory textbook.
In SWI-Prolog, also check out:
?- make.
to automatically reload files that were modified since they were consulted.
You can check out this question. There are several nice books recommended back there.
This is a nice short little intro: http://www.soe.ucsc.edu/classes/cmps112/Spring03/languages/prolog/PrologIntro.pdf
I also want to say there's a nice SWI-oriented PDF out there, but I can't find it.
I won't repeat the classic choices already mentioned in other answers, but I will add a note about Prolog Programming in Depth by Michael Covington, Donald Nute, and Andrew Vellino. Two chapters I would like to highlight are the ones on hand tracing and defeasible rules. The former shows you how to trace out a Prolog computation on pencil and paper in an efficient and helpful manner. The latter shows you how to create Prolog code that supports defeasible rules. Unlike the rules you are accustomed to in Prolog, which either succeed or fail outright and are not affected by anything not stated in the rule itself, defeasible rules can succeed on the information stated in the rule yet be undercut by other rules in the knowledge base; this makes it easier to express, compactly and understandably, rules that are generally true but have exceptions. Said better by the book: "A defeasible rule, on the other hand, is a rule that cannot be applied to some cases even though those cases satisfy its conditions, because some knowledge elsewhere in the knowledge base blocks it from applying."
It's an intriguing concept that I have not found in other books.

How does Prolog technically work? What's under the hood?

I want to learn more about the internals of Prolog and understand how this works.
I know how to use it. But not how it works internally. What are the names of the algorithms and concepts used in Prolog?
Probably it builds some kind of tree structure or directed object graph, and then upon queries it traverses that graph with a sophisticated algorithm, a depth-first search maybe. There might be some source code around, but it would be great to read about it from a high-level perspective first.
I'm really new to AI, and understanding Prolog seems to be a great way to start, imho. My idea is to try to rebuild something similar, skipping the parser part completely. I need to know the directions in which to focus my research efforts.
What are the names of the algorithms and concepts used in Prolog?
Logic programming
Depth-first, backtracking search
Unification
See Sterling & Shapiro, The Art of Prolog (MIT Press) for the theory behind Prolog.
Probably it builds some kind of tree structure or directed object graph, and then upon queries it traverses that graph with a sophisticated algorithm, a depth-first search maybe.
It doesn't build the graph explicitly, that wouldn't even be possible with infinite search spaces. Check out the first chapters of Russell & Norvig for the concept of state-space search. Yes, it does depth-first search with backtracking, but no, that isn't very sophisticated. It's just very convenient and programming alternative search strategies isn't terribly hard in Prolog.
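As a rough illustration of how little machinery the core execution strategy needs, here is the textbook "vanilla" meta-interpreter; solve/1 below mirrors the depth-first, backtracking SLD resolution that Prolog itself performs (assuming the interpreted predicates are accessible to clause/2, e.g. declared dynamic):

% solve(+Goal): prove Goal against the clauses in the database,
% depth first and left to right, just as Prolog itself would.
solve(true).
solve((A, B)) :- solve(A), solve(B).
solve(Goal)   :- clause(Goal, Body), solve(Body).

Clause selection order and the left-to-right processing of conjunctions fall straight out of Prolog's own search; alternative strategies (breadth-first, iterative deepening) are usually demonstrated by modifying exactly this interpreter.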
understanding Prolog seems to be a great way to start, imho.
Depends on what you want to do, but knowing Prolog certainly doesn't hurt. It's a very different way of looking at programming. Knowing Prolog helped me understand functional programming very quickly.
My idea is to try to rebuild something similar, skipping the parser part completely
You mean skipping the Prolog syntax? If you happen to be familiar with Scheme or Lisp, then check out section 4.4 of Abelson & Sussman where they explain how to implement a logic programming variant of Scheme, in Scheme.
AI is a wide field; Prolog only touches symbolic AI. As for Prolog, the inner workings are too complex to explain here, but googling will give you plenty of resources, e.g. http://www.amzi.com/articles/prolog_under_the_hood.htm .
Check also Wikipedia articles to learn about the other areas of AI.
You might also want to read about the Warren Abstract Machine (WAM):
typically, Prolog code is compiled to WAM instructions, which are then executed more efficiently.
I would add:
Programming Languages: An Interpreter-Based Approach by Samuel N. Kamin. The book is out of print, but you may find it in a university library. It contains a Prolog implementation in Pascal.
Tim Budd's "The Kamin Interpreters in C++" (in postscript)
The book by Sterling and Shapiro, mentioned by larsmans, actually contains an execution model of Prolog. It's quite nice and explains clearly "how Prolog works". And it's an excellent book!
There are also other sources you could try. Most notably, some Lisp books build pedagogically-oriented Prolog interpreters:
On Lisp by Paul Graham (in Common Lisp, using -- and perhaps abusing -- macros)
Paradigms of Artificial Intelligence Programming by Peter Norvig (in Common Lisp)
Structure and Interpretation of Computer Programs by Abelson and Sussman (in Scheme).
Of these, the last one is the clearest (in my humble opinion). However, you'd need to learn some Lisp (either Common Lisp or Scheme) to understand those.
The ISO core standard for Prolog also contains an execution model. The execution model is of interest since it gives a good model of control constructs such as cut !/0, if-then-else (->)/2, catch/3 and throw/1. It also explains how to conformantly deal with naked variables.
The presentation in the ISO core standard is not that bad. Each control construct is described in the form of a prose use case, with reference to an abstract Prolog machine consisting of a stack, etc. Then there are pictures that show the stack before and after execution of the control construct.
The cheapest source is ANSI:
http://webstore.ansi.org/RecordDetail.aspx?sku=INCITS%2FISO%2FIEC+13211-1-1995+%28R2007%29
In addition to the many good answers already posted, I add some historical facts on Prolog.
Wikipedia on Prolog: Prolog was created around 1972 by Alain Colmerauer with Philippe Roussel, based on Robert Kowalski's procedural interpretation of Horn clauses.
Alain was a French computer scientist and professor at Aix-Marseille University from 1970 to 1995. Retired in 2006, he remained active until he died in 2017. He was named Chevalier de la Legion d’Honneur by the French government in 1986.
The inner workings of Prolog are perhaps best explained by its inventor in the article Prolog in 10 figures, published in Communications of the ACM, vol. 28, no. 12, December 1985.
Prolog uses a subset of first-order predicate logic called Horn clause logic. The algorithm used to derive answers is called SLD resolution.
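For a flavour of the correspondence (an illustrative example, not taken from the answer above): a Horn clause is a disjunction with at most one positive literal, which is exactly what a Prolog rule encodes, and SLD resolution answers a query by repeatedly resolving it against such clauses, depth first and left to right:

% Horn clause:  grandparent(X,Z) ∨ ¬parent(X,Y) ∨ ¬parent(Y,Z)
% as a Prolog rule:
grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

parent(ann, bob).
parent(bob, carol).

% ?- grandparent(ann, Who).
% Who = carol.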

Datalog vs CLIPS vs Prolog

Like many programmers, I studied Prolog at university, but only a little. I understand that Prolog and Datalog are closely related, but that Datalog is simpler? Also, I believe I read that Datalog does not depend on the ordering of the logic clauses, but I am not sure why this is advantageous. CLIPS is supposedly altogether different, but it is too subtle for me to understand. Can someone please provide the general highlights of each language over the others?
The difference between CLIPS and Prolog/Datalog is that CLIPS is a "production rule system" that operates by forward chaining: given a set of facts and rules, it will try to make every possible derivation of new facts and store those in memory. A query is then answered by checking whether it matches something in the fact store. So, in CLIPS, if you have (pseudo-syntax):
parent(X,Y) => child(Y,X)
parent(john,mary)
it will immediately derive child(mary,john) and remember that fact. This can be very fast, but puts restrictions on the possible ruleset and takes up memory.
Prolog and Datalog operate by backward chaining, meaning that a query (predicate call) is answered by trying to prove the query, i.e. running the Prolog/Datalog program. Prolog is a Turing complete programming language, so any algorithm can be implemented in it.
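For contrast, here is the same parent/child example as a Prolog program (a minimal sketch): nothing is derived up front, and the child/2 fact is only established, by backward chaining, at the moment it is queried:

child(Y, X) :- parent(X, Y).
parent(john, mary).

% ?- child(mary, john).
% true.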
Datalog is a non-Turing-complete subset of Prolog that disallows function symbols (compound terms as arguments) and, in its basic form, negation. Its main advantage is that every Datalog program terminates (no infinite loops). This makes it useful for so-called "deductive databases," i.e. databases with rules in addition to facts.
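A small illustration of the ordering point raised in the question (my own example, stated under the usual assumptions about bottom-up Datalog evaluation):

% Legal in both Datalog and Prolog:
parent(ann, bob).
parent(bob, carol).

ancestor(X, Y) :- parent(X, Y).
ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).

% A Datalog engine evaluates this bottom-up: it derives the finite set
% {ancestor(ann,bob), ancestor(bob,carol), ancestor(ann,carol)} and stops,
% regardless of how the clauses are ordered.
% Prolog proves the same answers top-down, but if the recursive clause is
% rewritten left-recursively (ancestor(X,Z) :- ancestor(X,Y), parent(Y,Z)),
% the query ?- ancestor(ann, Who). no longer terminates after exhausting
% its answers; clause and goal ordering can never cause that in Datalog.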
Datalog is a subset of Prolog. The subset was chosen with two things in mind:
adopt an API which would support rules and queries
make sure all queries terminate
Prolog is Turing complete; Datalog is not.
With Datalog out of the way, let's see how Prolog compares with CLIPS.
Prolog's expertise is "problem solving" while CLIPS is an "expert system". If I understand correctly, "problem solving" involves expertise using code and data, whereas "expert systems" mostly use data structures to express expertise. See http://en.wikipedia.org/wiki/Expert_system#Comparison_to_problem-solving_systems
Another way to look at it:
Expert systems operate on the premise that most (if not all) outcomes are known. All of these outcomes are compiled into data and then fed into the expert system. Given a scenario, the expert system computes the outcome from the compiled data, aka the knowledge base. It's always an "an even number plus an even number is always even" kind of thinking.
Problem-solving systems have an incomplete view of the problem. One starts out by modeling data and behavior, which together comprise the knowledge base (this does justice to the term "corner case"), and ends up with "if we add two to six, we end up with eight; is eight divisible by two? then it is even" kind of thinking.

Resources