I'm implementing a Prolog interpreter, and I'd like to include some built-in mathematical functions (sum, product, etc). For example, I would like to be able to make calculations using knowledge bases like this one:
NetForce(F) :- Mass(M), Acceleration(A), Product(M, A, F)
Mass(10) :- []
Acceleration(12) :- []
So then I should be able to make queries like ?NetForce(X). My question is: what is the right way to build functionality like this into my interpreter?
In particular, the problem I'm encountering is that, in order to evaluate Sum, Product, etc., all their arguments have to be evaluated (i.e. bound to numerical constants) first. For example, while the code above should evaluate properly, the permuted rule:
NetForce(F) :- Product(M, A, F), Mass(M), Acceleration(A)
wouldn't, because M and A aren't bound when the Product term is processed. My current approach is to simply reorder the terms so that mathematical expressions appear last. This works in simple cases, but it seems hacky, and I would expect problems to arise in situations with multiple mathematical terms, or with recursion. Is there a better solution?
The functionality you are describing already exists in current systems as constraint extensions. There is CLP(Q) over the rationals, CLP(R) over the reals (actually floats), and last but not least CLP(FD), which is often extended to CLP(Z). See for example
library(clpfd).
In any case, starting a Prolog implementation from scratch will be a non-trivial effort; you will have no time to investigate what you want to implement, because you will be inundated by much lower-level details. So you will have to use a more economical approach and clarify what you actually want to do.
You might study and implement constraint languages in existing systems. Or you might want to use a meta-interpreter based approach. Or maybe you want to implement a Prolog system from scratch. But don't expect that you succeed in all of it.
And to save you another effort: Reuse existing standard syntax. The syntax you use would require you to build an extra parser.
You could use coroutining to delay the evaluation of the product:
product(X, A, B) :- freeze(A, freeze(B, X is A*B)).
freeze/2 delays the execution of its goal argument (the second) until its first argument is bound to a non-variable term. Nested like this, it only evaluates X is A*B after both A and B are bound to actual terms.
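With this definition both goal orders from the question work. A quick sketch of the behaviour (queries against the product/3 above):

?- product(F, 10, 12).
F = 120.

?- product(F, M, A), M = 10, A = 12.
F = 120, M = 10, A = 12.

In the second query the arithmetic is only evaluated once M and A have been bound, exactly as intended.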
(Disclaimer: I'm not an expert on advanced Prolog topics, there might be an even simpler way to do this - e.g. I think SICStus Prolog has "block declarations" which do pretty much the same thing in a more concise way and generalized over all declarations of the predicate.)
Your predicates would not be independent of goal order, which is pretty important. You need to determine the usage modes of your predicates: what will the usage mode of NetForce() be? If I were designing a predicate like Force, I would do something like
force(Mass, Acceleration, Force) :- Force is Mass * Acceleration.
This has a usage mode of +,+,- meaning you give me Mass and Acceleration and I will give you the Force.
Otherwise, you are depending on the facts you have defined to bind your variables, and if you pass unbound variables to Product first, it will keep unifying against alternatives over and over and never stop.
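To connect this with the original question: rewritten in standard Prolog syntax around this +,+,- mode, the knowledge base becomes (a sketch; names are lowercased so they are atoms, not variables):

mass(10).
acceleration(12).

net_force(F) :-
    mass(M),
    acceleration(A),
    force(M, A, F).

?- net_force(F).
F = 120.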
Prolog is a nice language. I use it from time to time.
But each subsequent time I approach it, I feel less and less comfortable with it syntactically.
Modern programming languages are moving toward allowing the programmer to
repeat himself less, and to
omit unnecessary pieces if they can be deduced, or if their names are just placeholders.
The DCG is a step in the right direction allowing one to write
sentence --> noun_phrase, verb_phrase.
instead of
sentence(A,Z) :- noun_phrase(A,B), verb_phrase(B,Z).
but its entanglement with difference lists makes it less useful.
So what I am looking for are projects giving Prolog
a more compact syntactic representation, while preserving its semantic expressiveness.
Higher-order programming based on call/N is still pretty much unexplored terrain. Major implementations like SICStus Prolog added call/N as late as 2006, so there is still a lot to explore. Consider library(lambda), library(reif), and other definitions using the meta-predicate declaration.
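As a small illustration of call/N, here is a sketch of the classic maplist/3 (most systems ship it as a library predicate; the query uses succ/2 as available in SWI-Prolog):

maplist(_P_2, [], []).
maplist(P_2, [X|Xs], [Y|Ys]) :-
    call(P_2, X, Y),
    maplist(P_2, Xs, Ys).

?- maplist(succ, [1,2,3], Ys).
Ys = [2, 3, 4].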
One thing you might want to look into, in the case of SWI-Prolog, are the actual language extensions introduced specifically by SWI-Prolog 7:
http://www.swi-prolog.org/pldoc/man?section=extensions
Another thing is the quasi-quotation library, which allows you to embed pieces of code in your own language (defined using a DCG) inside "regular" Prolog code:
http://www.swi-prolog.org/pldoc/man?section=quasiquotations
The last thing I can recommend is the list of additional SWI-Prolog packages, some of which are specifically designed to extend the language, e.g. 'func', 'lambda', etc.:
http://www.swi-prolog.org/pack/list
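For instance, with the 'lambda' pack installed (a sketch assuming it provides Ulrich Neumerkel's library(lambda) with its \-syntax):

?- use_module(library(lambda)).
true.

?- maplist(\X^Y^(Y is X*X), [1,2,3], Squares).
Squares = [1, 4, 9].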
I am using Prolog to encode some fairly complicated rules in a project of mine. There is a lot of recursion, including mutual recursion. Part of the rules look something like this:
pred1(X) :- ...
pred1(X) :- someguard(X), pred2(X).
pred2(X) :- ...
pred2(X) :- othercondition(X), pred1(X).
There is a fairly obvious infinite loop between pred1 and pred2. Unfortunately, the interaction between these predicates is very complicated and difficult to isolate. I was able to eliminate the infinite loop in this instance by passing around a list of objects that have been passed to pred1, but this is extremely unwieldy! In fact, it largely defeats the purpose of using Prolog in this application.
How can I make Prolog avoid infinite loops? For example, if in the course of proving pred1(foo) it tries to prove pred1(foo) as a sub-goal, it should fail and backtrack.
Is it possible to do this with meta-interpreters?
Yes, you can use meta-interpreters for this purpose, as mat suggests. But for the normal use case, that goes far beyond the regular effort.
What you may consider instead is to separate the looping functionality from your actual logic using higher-order predicates. That is a very safe way to go; SWI even checks that all such uses have a corresponding definition. This check is invoked when you type make. or check.
As an example, consider closure0/3 and path/4 which both handle loop checks "once and forever".
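These are defined along the following lines (a condensed sketch of the linked definitions; the visited-list check uses dif/2 to stay pure):

closure0(R_2, X0, X) :-
    closure0(R_2, X0, X, [X0]).

closure0(_R_2, X, X, _Visited).
closure0(R_2, X0, X, Visited) :-
    call(R_2, X0, X1),
    non_member(X1, Visited),
    closure0(R_2, X1, X, [X1|Visited]).

non_member(_X, []).
non_member(X, [E|Es]) :-
    dif(X, E),
    non_member(X, Es).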
One feature that is available in some Prolog systems and that may help you to solve such issues is called tabling. See for example the related question and prolog-tabling.
If tabling is not available, then yes, meta-interpreters can definitely help a lot with this. For example, you can change the execution strategy etc. with a meta-interpreter.
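As a minimal sketch, here is a vanilla meta-interpreter extended with a depth bound (assumptions: SWI-Prolog, and the interpreted predicates being accessible via clause/2):

% interpret a goal, spending at most D resolution steps on any branch
mi_limited(true, _) :- !.
mi_limited((A, B), D) :-
    !,
    mi_limited(A, D),
    mi_limited(B, D).
mi_limited(G, _) :-
    predicate_property(G, built_in),
    !,
    call(G).
mi_limited(G, D) :-
    D > 0,
    D1 is D - 1,
    clause(G, Body),
    mi_limited(Body, D1).

A goal like mi_limited(pred1(foo), 50) then fails, instead of looping, once the depth bound is exhausted.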
In SWI-Prolog, also check out call_with_inference_limit/3 to robustly limit the execution, independent of CPU type and system load.
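A usage sketch (the outcome shown assumes the goal would otherwise loop):

?- call_with_inference_limit(pred1(foo), 10000, Result).
Result = inference_limit_exceeded.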
Related and also useful are termination analyzers like cTI: They allow you to statically derive termination conditions.
What is meant by "logical purity" (in the context of Prolog programming)? The logical-purity tag info says "programs using only Horn clauses", but then, how would predicates like if_/3 qualify, using as they do the cut and various meta-logical predicates (var/1 and such; what is the proper terminology?), i.e. the low-level stuff.
I get it that it achieves some "pure" effect, but what does this mean, precisely?
For a more concrete illustration, please explain how if_/3 qualifies as logically pure, as seen in use e.g. in this answer.
Let us first get used to a declarative reading of logic programs.
Declaratively, a Prolog program states what is true.
For example
natural_number(0).
natural_number(s(X)) :-
   natural_number(X).
The first clause states: 0 is a natural number.
The second clause states: If X is a natural number, then s(X) is a natural number.
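For example, the most general query then enumerates the natural numbers:

?- natural_number(X).
X = 0 ;
X = s(0) ;
X = s(s(0)) ;
...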
Let us now consider the effect of changes to this program. For example, what changes when we change the order of these two clauses?
natural_number(s(X)) :-
   natural_number(X).
natural_number(0).
Declaratively, exchanging the order of clauses does not change the intended meaning of the program in any way (disjunction is commutative).
Operationally, that is, taking into account the actual execution strategy of Prolog, different clause orders clearly often make a significant difference.
However, one extremely nice property of pure Prolog code is preserved regardless of chosen clause ordering:
If a query Q succeeds with respect to a clause ordering O1, then
Q does not fail with a different ordering O2.
Note that I am not saying that Q always also succeeds with a different ordering: This is because the query may also loop or yield an error with different orderings.
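To make this concrete with the program above:

?- natural_number(X).
% original clause order: enumerates X = 0, X = s(0), ... as shown earlier
% reversed clause order: loops without yielding a single answer,
% because the recursive clause is always tried first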
For two queries Q1 and Q2, we say that Q1 is more general than Q2 iff Q1 subsumes Q2 with respect to syntactic unification. For example, the query ?- parent_child(P, C). is more general than the query ?- parent_child(0, s(0)).
Now, with pure Prolog programs, another extremely nice property holds:
If a query Q1 succeeds, then every more general query Q2 does not
fail.
Note, again, that Q2 may loop instead of succeeding.
Consider now the case of var/1 which you mention, and think of the related predicate nonvar/1. Suppose we have:
my_pred(V) :-
   nonvar(V).
When does this hold? Clearly, it holds iff the argument is not a variable.
As expected, we get:
?- my_pred(a).
true.
However, for the more general query ?- my_pred(X)., we get:
?- my_pred(X).
false.
Such a predicate is called non-monotonic, and you cannot treat it as a true relation due to this property: This is because the answer false above logically means that there are no solutions whatsoever, yet in the immediately preceding example, we see that there is a solution. So, illogically, a more specific query, built by adding a constraint, makes the query succeed:
?- X = a, my_pred(X).
true.
Thus, reasoning about such predicates is extremely complicated, to the point that it is no fun at all to program with them. It makes declarative debugging impossible, and hard to state any properties that are preserved. For instance, just swapping the order of subgoals in the above conjunctive query will make it fail:
?- my_pred(X), X = a.
false.
Hence, I strongly suggest staying within the pure monotonic subset of Prolog, which allows declarative reasoning along the lines outlined above.
CLP(FD) constraints, dif/2 etc. are all pure in this sense: You cannot trick these predicates into giving logically invalid answers, no matter the modes, orders etc. in which you use them. if_/3 also satisfies this property. On the other hand, var/1, nonvar/1, integer/1, !/0, predicates with side-effects etc. are all extra-logically referencing something outside the declarative world that is being described, and can thus not be considered pure.
EDIT: To clarify: The nice properties I mention here are in no way exhaustive. Pure Prolog code exhibits many other extremely valuable properties through which you can perceive the glory of logic programming. For example, in pure Prolog code, adding a clause can at most extend, never narrow, the set of solutions; adding a goal can at most narrow, never extend, it etc.
Using a single extra-logical primitive may, and typically will, already destroy many of these properties. Therefore, for example, every time you use !/0, consider it a cut right into the heart of purity, and try to feel regret and shame for wounding these properties.
A good Prolog book will at least begin to introduce or contain many hints to encourage such a declarative view, guide you to think about more general queries, properties that are preserved etc. Bad Prolog books will not say much about this and typically end up using exactly those impure language elements that destroy the language's most valuable and beautiful properties.
An awesome Prolog teaching environment that makes extensive use of these properties to implement declarative debugging is called GUPU; I highly recommend checking out these ideas. Ulrich Neumerkel has generously made one core idea that is used in his environment partly available as library(diadem). See the source file for a good example of how to declaratively debug a goal that fails unexpectedly: the library systematically builds generalizations of the query that still fail. This reasoning of course works perfectly with pure code.
Do guard clauses exist in Prolog? How are they implemented?
The coroutining predicates of SWI-Prolog (freeze, when, dif etc.) have the functionality of guards. How do they fit in the preferred Prolog programming style?
I am very new to logic programming (with Prolog and altogether) and somewhat confused by the fact that it is not purely declarative, and requires procedural considerations even in very simple cases (see this question about using \== or dif). Am I missing something important?
First, a terminological issue: neither freeze/2 nor when/2 nor dif/2 is called a guard under any circumstance. Guards appear in extensions such as CHR, in related languages such as GHC, and in other concurrent logic programming languages; you even (under certain restrictions) might consider clauses of the form
Head :- Guard, !, ...
as clauses containing a guard, and the cut would in this case rather be called a commit. But none of this applies to the above primitives. Guards are rather inspired by Dijkstra's Guarded Command Language of 1975.
freeze(X, Goal) (originally called geler) is the same as when(nonvar(X), Goal), and both are declaratively equivalent to Goal. There is no direct relation to the functionality of guards. However, when used together with if-then-else, you might implement such a guard. But that is pretty different.
freeze/2 and similar constructs were for some time considered a general way to improve Prolog's execution mechanism. However, they turned out to be very brittle to use. Often they were too conservative, delaying goals unnecessarily; almost every interesting query produced a "floundering" answer like the one below. Also, the borderline between terminating and non-terminating programs becomes much more complex: for pure monotonic Prolog programs that terminate, adding a terminating goal to the program preserves termination of the entire program, but with freeze/2 this is no longer the case.

Then, from a conceptual viewpoint, freeze/2 was not very well supported by the toplevels of systems: only a few systems show the delayed goals in a comprehensive manner (e.g. SICStus), which is crucial for understanding the difference between success/answers and solutions. With delayed goals, Prolog may now produce an answer that has no solution, like this one:
?- freeze(X, X = 1), freeze(X, X = 2).
freeze(X, X=1), freeze(X, X=2).
Another difficulty with freeze/2 was that termination conditions are much more difficult to determine. So, while freeze was supposed to solve all the problems with termination, it often created new problems.
And there are also more technical difficulties related to freeze/2, in particular w.r.t. tabling and other techniques to prevent loops. Consider the goal freeze(X, Y = 1): clearly, Y can only ever become 1, but it is not yet bound; the goal still waits for X to be bound first. Now, an implementation might consider tabling for a goal g(Y): g(Y) will have either no solution or exactly one solution Y = 1. This result would then be stored as the only solution for g/1, since the freeze-goal was not directly visible to the tabling mechanism.
It is for such reasons that freeze/2 is considered the goto of constraint logic programming.
Another issue is dif/2, which today is considered a constraint. In contrast to freeze/2 and the other coroutining primitives, constraints are much better able to manage consistency and also maintain much better termination properties. This is primarily because constraints introduce a well-defined language where concrete properties can be proven and specific algorithms have been developed, and they do not permit general goals. However, even for them it is possible to obtain answers that are not solutions. More about answers and success in CLP.
freeze/2 and when/2 are like a "goto" of logic programming. They are not pure, commutative, etc.
dif/2, on the other hand, is completely pure and declarative, monotonic, commutative etc. dif/2 is a declarative predicate, it holds if its arguments are different. As to the "preferred Prolog programming style": State what holds. If you want to express the constraint that two general terms are different, dif/2 is exactly what states this.
Procedural considerations are typically most needed when you do not program in the pure declarative subset of Prolog, but use impure predicates that are not commutative etc. In modern Prolog systems, there is little reason to ever leave the pure and declarative subset.
There is a paper by Evan Tick explaining CC (committed choice):
Informally, a procedure invocation commits to a clause by matching the head arguments (passive unification) and satisfying the guard goals. When a goal can commit to more than one clause in a procedure, it commits to one of them non-deterministically (the other candidates are thrown away). Structures appearing in the head and guard of a clause cause suspension of execution if the corresponding argument of the goal is not sufficiently instantiated. A suspended invocation may be resumed later when the variable associated with the suspended invocation becomes sufficiently instantiated.
Evan Tick, "The Deevolution of Concurrent Logic Programming Languages", 1995.
https://core.ac.uk/download/pdf/82787481.pdf
So I guess that with some when/2 magic, committed-choice code can be rewritten into ordinary Prolog. The approach would be as follows, for a set of committed-choice rules of the same predicate:
H1 :- G1 | B1.
...
Hn :- Gn | Bn.
This can be rewritten into the following, where Hi' and Gi' need to implement passive unification, for example by using the ISO corrigendum subsumes_term/2:
H1' :- G1', !, B1.
...
Hn' :- Gn', !, Bn.
H :- term_variables(H, L), when_nonvar(L, H).
The above translation won't work for CHR, since CHR doesn't throw away candidates.
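As a concrete instance of this scheme, consider a hypothetical two-clause committed-choice max/3. Its heads contain no structures, so passive unification reduces to the when/2 suspension and subsumes_term/2 is not needed:

% committed-choice source (not Prolog):
%   max(X, Y, Z) :- X >= Y | Z = X.
%   max(X, Y, Z) :- Y >= X | Z = Y.

% translation sketch; max_/3 plays the role of Hi' above
max_(X, Y, Z) :- X >= Y, !, Z = X.
max_(X, Y, Z) :- Y >= X, !, Z = Y.

max(X, Y, Z) :-
    when((nonvar(X), nonvar(Y)), max_(X, Y, Z)).

?- max(X, 7, Z), X = 3.
X = 3,
Z = 7.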
How can I write theorem proofs using Prolog?
I have tried to write it like this:
parallel(X, Y) :-
    perpendicular(X, Z),
    perpendicular(Y, Z),
    X \== Y,
    !.

perpendicular(X, Y) :-
    perpendicular(X, Z),
    parallel(Z, Y),
    !.
Can you help me?
I was reluctant to post an Answer because this Question is poorly framed. Thanks to theJollySin for adding clean formatting! Something omitted in the rewrite, indicative of what Aman had in mind, was "I inter in Loop" (sic).
We don't know what query was entered that resulted in this looping, so speculation is required. The two rules suggest that Goal involved either the parallel/2 or the perpendicular/2 predicate.
With practice it's not hard to understand what the Prolog engine will do when a query is posed, especially a single goal query. Prolog uses a pretty simple "follow your nose" strategy in attempting to satisfy a goal. Look for the rules for whichever predicate is invoked. Then see if any of those rules, starting with the first and going down in the list of them, can be applied.
There are three topics that beginning Prolog programmers will typically struggle with. One is the recursive nature of the search the Prolog engine makes. Here the only rule for parallel/2 has a right-hand side that invokes two subgoals for perpendicular/2, while the only rule for perpendicular/2 invokes both a subgoal for itself and another subgoal for parallel/2. One should expect that trying to satisfy either kind of query inevitably leads to a Hydra-like struggle with bifurcating heads.
The second topic we see in this example is the use of free variables. If we are to gain knowledge about perpendicularity or parallelism of two specific lines (geometry), then somehow the query or the rules need to provide "binding" of variables to "ground" terms. Again without the actual Goal being queried, it's hard to guess how Aman expected that to work. Perhaps there should have been "facts" supplied about specific lines that are perpendicular or parallel. Lines could be represented merely as atoms (perhaps lowercase letters), but Prolog variables are names that begin with an uppercase letter (as in the two given rules) or with an underscore (_) character.
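To illustrate, ground facts about specific lines, placed before the recursive rules, would already let the parallel/2 rule derive something (a sketch; the line names a, b and c are made up):

perpendicular(a, b).
perpendicular(c, b).

?- parallel(a, c).
true.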
Finally, the third topic that can be quite confusing is how Prolog handles negation. There's only a touch of that in these rules, the place where X \== Y is invoked. But even that brief subgoal requires careful understanding. Prolog implements "negation as failure", so that X \== Y succeeds if and only if X == Y does not succeed. This latter goal is also subtle, because it asks whether X and Y are the same without trying to do any unification. Thus if these are different variables, both free, then X == Y fails (and X \== Y succeeds). On the other hand, the only way for X == Y to succeed (and thus for X \== Y to fail) would be if both variables were bound to the same ground term. As discussed above, the two rules as stated don't provide a way for that to be the case, though something might have taken care of this in the query Goal.
The homework assignment for Aman is to learn about these Prolog topics:
recursion
free and bound variables
negation
Perhaps more concrete suggestions can then be made about Prolog doing geometry proofs!
Added: PTTP (Prolog Technology Theorem Prover) was written by M.E. Stickel in the late 1980s, and this 2006 web page describes it and links to a download.
It also summarizes succinctly why Prolog alone is not "a full general-purpose theorem-proving system." Pointers to later, more capable theorem provers can be followed there as well.