Are there formal systems that allow you to make human-like higher-order statements? - prolog

For humans, it's very natural to make higher-order statements. For example, you could state the following (with pseudo-Prolog syntax):
Socrates is smart:
smart(socrates).
John is a man:
man(john).
Socrates believes all men are mortal:
believes(socrates, (mortal(X) :- man(X))).
If someone is smart and believes something, it must be true:
Y :- smart(X), believes(X, Y).
I checked out a couple of "higher-order" extensions to Prolog, but neither can accept the kinds of statements like the last two examples.
Are there formal systems that allow you to make human-like higher-order statements, such as these?
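For reference, the last two statements can at least be approximated in plain Prolog by reifying provability in a small meta-interpreter. This is only a sketch; the predicate names demo/1 and clause_fact/1 are made up for illustration:

```prolog
% Sketch: "anything a smart agent believes is true",
% approximated via a demo/1 meta-interpreter.
smart(socrates).
man(john).
believes(socrates, (mortal(X) :- man(X))).

demo((A, B)) :- demo(A), demo(B).
demo(Fact)   :- clause_fact(Fact).
demo(Goal)   :-                       % smart agents' beliefs hold
    smart(Agent),
    believes(Agent, (Goal :- Body)),
    demo(Body).

% Base facts the meta-interpreter can check directly.
clause_fact(man(X))   :- man(X).
clause_fact(smart(X)) :- smart(X).
```

With this, ?- demo(mortal(john)). succeeds, whereas a plain ?- mortal(john). would not, since mortal/1 is never asserted directly.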

This is called modal logic, and its semantics is usually formalized with Kripke semantics. Here are some libraries in Python.

Does HiLog add anything that can not be done with "call" in Prolog? [duplicate]

This question already has answers here:
Are HiLog terms still useful in modern Prolog?
(2 answers)
Closed 2 years ago.
The Wikipedia article for Prolog states:
Higher-order programming style in Prolog was pioneered in HiLog and λProlog.
The motivation for HiLog includes its ability to implement higher-order predicates like maplist:
maplist(F)([],[]).
maplist(F)([X|Xs],[Y|Ys]) <- F(X,Y), maplist(F)(Xs,Ys).
The paper that describes HiLog assumes that Prolog only has call/1, not call/3.
However, since Prolog (now) has call/3, maplist can be easily implemented in it:
maplist(_, [], []).
maplist(P, [X|Xs], [Y|Ys]) :- call(P, X, Y), maplist(P, Xs, Ys).
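For instance, using SWI-Prolog's built-in succ/2 as the passed predicate:

```prolog
?- maplist(succ, [1, 2, 3], Ys).
Ys = [2, 3, 4].
```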
Is HiLog mostly of historical interest, or is its "higher-order" logic more general than what's available in Prolog now?
From wiki
Although syntactically HiLog strictly extends first order logic, HiLog can be embedded into this logic.
Any HiLog term can be translated into a Prolog term (HiLog: A Foundation for Higher-Order Logic Programming - Weidong Chen, Michael Kifer, David S. Warren - 1993). So in this sense, no: it is not more general than Prolog.
Let me quote a few conclusions from the paper
Firstly, programming in HiLog makes more logic programs logical. We all admonish Prolog programmers to make their programs as pure as possible and to eschew the evils of Prolog's nonlogical constructs. In Prolog, the intermixing of predicate and function symbols, in particular in the predicate call/1, is nonlogical, whereas in HiLog, it is completely logical and is a first-class citizen. So in HiLog, programmers need not avoid using call/1, and thus have more flexibility in their task of writing pure logic programs.
Secondly, even though one might say that HiLog is simply a syntactic variant of Prolog, syntax is important when one is doing meta-programming.
Since in meta-programming the syntax determines the data structures to be manipulated, a simpler syntax means that meta-programs can be much simpler.
A bit vaguely:
HiLog is not in Prolog (Prolog stays Prolog), but it is used in Flora, which is basically a logic-based object-oriented database. It has its own syntax and runs on XSB Prolog.
If I understand correctly, the idea of HiLog is to have a defined practical syntax for "higher-order" predicates, by allowing variables in predicate name positions. This is the difference between the two maplist examples.
This looks as if it were second-order logic (which becomes undecidable/intractable, since there is no way to find out whether a predicate F is related to a predicate G in general, as you may be forced to compare their extensions: all the points where they succeed), but it is flattened down to (computable) first-order logic by restriction to syntactic equality (F and G are the same iff they name the same predicate, e.g. foo/2), at which point one can deploy call/N to generate Prolog code.
So, yes, currently you have to jump through hoops to express statements in Prolog that may be a one-liner in HiLog (I have no examples though, not having thought too deeply about this). It's the difference between C and ... uh ... Prolog!
Similar to a host of other ideas for extensions/modifications of Prolog into various X-logs, not all of which were implemented (I once tried to make an overview image here), "HiLog syntax" (or something similar to it) may be found in a specialized X-log of the future that breaks out of its niche.
Since I answered my own question in the comments, I'll post it here:
There are things you can do in HiLog, that can not be done with call in Prolog, for example:
Queries like:
?- X(dog, 55, Y).
Assertions like:
foo(X, Y) :- Z(X), Z(Y(X)).
As stated in the aforementioned HiLog paper and the HiLog Wikipedia page, Prolog can emulate HiLog. However, this requires converting the whole program and all queries into a single predicate.
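The encoding itself is straightforward to sketch (the predicate name apply used below is conventional in descriptions of the translation, not fixed): every application F(A1, ..., An) is flattened into apply(F, A1, ..., An), so that variables may stand where predicate symbols stood.

```prolog
% HiLog program:   likes(mary, dog).   weight(dog, 55).
apply(likes, mary, dog).
apply(weight, dog, 55).

% The HiLog query   ?- X(dog, 55).   becomes:
% ?- apply(X, dog, 55).
% X = weight.
```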

What is meant by "logical purity" in Prolog?

What is meant by "logical purity" (in the context of Prolog programming)? The logical-purity tag info says "programs using only Horn clauses", but then, how would predicates like if_/3 qualify, using as much as it does the cut, and the various meta-logical (what's the proper terminology? var/1 and such) predicates, i.e. the low-level stuff.
I get it that it achieves some "pure" effect, but what does this mean, precisely?
For a more concrete illustration, please explain how does if_/3 qualify as logically pure, seen in use e.g. in this answer?
Let us first get used to a declarative reading of logic programs.
Declaratively, a Prolog program states what is true.
For example
natural_number(0).
natural_number(s(X)) :-
    natural_number(X).
The first clause states: 0 is a natural number.
The second clause states: If X is a natural number, then s(X) is a natural number.
Let us now consider the effect of changes to this program. For example, what changes when we change the order of these two clauses?
natural_number(s(X)) :-
    natural_number(X).
natural_number(0).
Declaratively, exchanging the order of clauses does not change the intended meaning of the program in any way (disjunction is commutative).
Operationally, that is, taking into account the actual execution strategy of Prolog, different clause orders clearly often make a significant difference.
However, one extremely nice property of pure Prolog code is preserved regardless of chosen clause ordering:
If a query Q succeeds with respect to a clause ordering O1, then
Q does not fail with a different ordering O2.
Note that I am not saying that Q always also succeeds with a different ordering: This is because the query may also loop or yield an error with different orderings.
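Concretely, with the first clause ordering the most general query enumerates solutions, while with the second ordering the very same query loops without ever producing an answer; in neither case does it fail:

```prolog
% First ordering (fact before recursive clause):
?- natural_number(X).
X = 0 ;
X = s(0) ;
X = s(s(0)) ;
...   % and so on

% Second ordering (recursive clause first): the same query
% descends forever into the recursion and never answers.
```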
For two queries Q1 and Q2, we say that Q1 is more general than Q2 iff Q1 subsumes Q2 with respect to syntactic unification. For example, the query ?- parent_child(P, C). is more general than the query ?- parent_child(0, s(0)).
Now, with pure Prolog programs, another extremely nice property holds:
If a query Q1 succeeds, then every more general query Q2 does not
fail.
Note, again, that Q2 may loop instead of succeeding.
Consider now the case of var/1 which you mention, and think of the related predicate nonvar/1. Suppose we have:
my_pred(V) :-
    nonvar(V).
When does this hold? Clearly, it holds iff the argument is not a variable.
As expected, we get:
?- my_pred(a).
true.
However, for the more general query ?- my_pred(X)., we get:
?- my_pred(X).
false.
Such a predicate is called non-monotonic, and you cannot treat it as a true relation due to this property: This is because the answer false above logically means that there are no solutions whatsoever, yet in the immediately preceding example, we see that there is a solution. So, illogically, a more specific query, built by adding a constraint, makes the query succeed:
?- X = a, my_pred(X).
true.
Thus, reasoning about such predicates is extremely complicated, to the point that it is no fun at all to program with them. It makes declarative debugging impossible, and hard to state any properties that are preserved. For instance, just swapping the order of subgoals in the above conjunctive query will make it fail:
?- my_pred(X), X = a.
false.
Hence, I strongly suggest to stay within the pure monotonic subset of Prolog, which allows the declarative reasoning along the lines outlined above.
CLP(FD) constraints, dif/2 etc. are all pure in this sense: You cannot trick these predicates into giving logically invalid answers, no matter the modes, orders etc. in which you use them. if_/3 also satisfies this property. On the other hand, var/1, nonvar/1, integer/1, !/0, predicates with side-effects etc. are all extra-logically referencing something outside the declarative world that is being described, and can thus not be considered pure.
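As a concrete illustration, here is the classic memberd/2, written with if_/3 and the reified equality (=)/3 from library(reif) (available for example for SWI-Prolog and SICStus; the library must be installed separately in SWI):

```prolog
:- use_module(library(reif)).

% memberd(X, Ys): X occurs in list Ys. Pure and deterministic
% where possible, thanks to if_/3.
memberd(X, [Y|Ys]) :-
    if_(X = Y, true, memberd(X, Ys)).
```

?- memberd(a, [a, b, a]). succeeds deterministically (no spurious choice point), and the more general query ?- memberd(X, [a, b]). enumerates X = a and X = b. No mode or goal ordering can trick it into a logically invalid answer.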
EDIT: To clarify: The nice properties I mention here are in no way exhaustive. Pure Prolog code exhibits many other extremely valuable properties through which you can perceive the glory of logic programming. For example, in pure Prolog code, adding a clause can at most extend, never narrow, the set of solutions; adding a goal can at most narrow, never extend, it etc.
Using a single extra-logical primitive may, and typically will, already destroy many of these properties. Therefore, for example, every time you use !/0, consider it a cut right into the heart of purity, and try to feel regret and shame for wounding these properties.
A good Prolog book will at least begin to introduce or contain many hints to encourage such a declarative view, guide you to think about more general queries, properties that are preserved etc. Bad Prolog books will not say much about this and typically end up using exactly those impure language elements that destroy the language's most valuable and beautiful properties.
An awesome Prolog teaching environment that makes extensive use of these properties to implement declarative debugging is called GUPU; I highly recommend checking out these ideas. Ulrich Neumerkel has generously made one core idea that is used in his environment partly available as library(diadem). See the source file for a good example of how to declaratively debug a goal that fails unexpectedly: the library systematically builds generalizations of the query that still fail. This reasoning of course works perfectly with pure code.

Implementing arithmetic for Prolog

I'm implementing a Prolog interpreter, and I'd like to include some built-in mathematical functions (sum, product, etc). For example, I would like to be able to make calculations using knowledge bases like this one:
NetForce(F) :- Mass(M), Acceleration(A), Product(M, A, F)
Mass(10) :- []
Acceleration(12) :- []
So then I should be able to make queries like ?NetForce(X). My question is: what is the right way to build functionality like this into my interpreter?
In particular, the problem I'm encountering is that, in order to evaluate Sum, Product, etc., all their arguments have to be evaluated (i.e. bound to numerical constants) first. For example, while the code above should evaluate properly, the permuted rule:
NetForce(F) :- Product(M, A, F), Mass(M), Acceleration(A)
wouldn't, because M and A aren't bound when the Product term is processed. My current approach is to simply reorder the terms so that mathematical expressions appear last. This works in simple cases, but it seems hacky, and I would expect problems to arise in situations with multiple mathematical terms, or with recursion. Is there a better solution?
The functionality you are describing exists in existing systems as constraint extensions. There is CLP(Q) over the rationals, CLP(R) over the reals - actually floats, and last but not least CLP(FD) which is often extended to a CLP(Z). See for example
library(clpfd).
In any case, starting a Prolog implementation from scratch will be a non-trivial effort, you will have no time to investigate what you want to implement because you will be inundated by much lower level details. So you will have to use a more economical approach and clarify what you actually want to do.
You might study and implement constraint languages in existing systems. Or you might want to use a meta-interpreter based approach. Or maybe you want to implement a Prolog system from scratch. But don't expect that you succeed in all of it.
And to save you another effort: Reuse existing standard syntax. The syntax you use would require you to build an extra parser.
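As a sketch of the constraint-based approach in standard syntax, using library(clpfd) as shipped with SWI-Prolog (predicate names here are my own):

```prolog
:- use_module(library(clpfd)).

mass(10).
acceleration(12).

% The constraint may be posted before M and A are known;
% goal order no longer matters.
net_force(F) :-
    F #= M * A,
    mass(M),
    acceleration(A).
```

?- net_force(F). yields F = 120 even though the arithmetic goal appears first: the constraint is merely posted, and is solved once M and A become known.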
You could use coroutining to delay the evaluation of the product:
product(X, A, B) :- freeze(A, freeze(B, X is A*B)).
freeze/2 delays the evaluation of its second argument until its first argument is bound to a non-variable term. Used nested like this, it only evaluates X is A*B after both A and B are bound to actual terms.
(Disclaimer: I'm not an expert on advanced Prolog topics, there might be an even simpler way to do this - e.g. I think SICStus Prolog has "block declarations" which do pretty much the same thing in a more concise way and generalized over all declarations of the predicate.)
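With that definition, the product goal can safely appear before its inputs are known; the delayed goal fires once both are bound:

```prolog
?- product(F, M, A), M = 10, A = 12.
F = 120,
M = 10,
A = 12.
```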
Your predicates would not be clause order independent, which is pretty important. You need to determine usage modes of your predicates - what will the usage mode of NetForce() be? If I were designing a predicate like Force, I would do something like
force(Mass,Acceleration,Force):- Force is Mass * Acceleration.
This has a usage mode of +,+,- meaning you give me Mass and Acceleration and I will give you the Force.
Otherwise, you are depending on the facts you have defined to unify your variables, and if you pass them to Product first they will continue to unify and unify and you will never stop.

Guard clauses in prolog?

Do they exist? How are they implemented?
The coroutining predicates of SWI-Prolog (freeze, when, dif etc.) have the functionality of guards. How do they fit in the preferred Prolog programming style?
I am very new to logic programming (with Prolog and altogether) and somewhat confused by the fact that it is not purely declarative, and requires procedural considerations even in very simple cases (see this question about using \== or dif). Am I missing something important?
First a terminological issue: neither freeze/2 nor when/2 nor dif/2 is called a guard under any circumstance. Guards appear in such extensions as CHR, or related languages such as GHC (link in Japanese) or other concurrent logic programming languages; you might even (under certain restrictions) consider clauses of the form
Head :- Guard, !, ...
as clauses containing a guard and the cut would be in this case rather called a commit. But none applies to above primitives. Guards are rather inspired by Dijkstra's Guarded Command Language of 1975.
freeze(X, Goal) (originally called geler) is the same as when(nonvar(X), Goal) and they both are declaratively equivalent to Goal. There is no direct relation to the functionality of guards. However, when used together with if-then-else you might implement such a guard. But that is pretty different.
freeze/2 and similar constructs were for some time considered a general way to improve Prolog's execution mechanism. However, they turned out to be very brittle to use. Often they were too conservative, delaying goals unnecessarily: almost every interesting query produced a "floundering" answer like the one below. Also, the borderline between terminating and non-terminating programs becomes much more complex. For pure monotonic Prolog programs that terminate, adding some terminating goal to the program preserves termination of the entire program; with freeze/2 this is no longer the case.

Then, from a conceptual viewpoint, freeze/2 was not very well supported by the toplevels of systems: only a few systems show the delayed goals in a comprehensive manner (e.g. SICStus), which is crucial for understanding the difference between success/answers and solutions. With delayed goals, Prolog may now produce an answer that has no solution, as in:
?- freeze(X, X = 1), freeze(X, X = 2).
freeze(X, X=1), freeze(X, X=2).
Another difficulty with freeze/2 was that termination conditions are much more difficult to determine. So, while freeze was supposed to solve all the problems with termination, it often created new problems.
And there are also more technical difficulties related to freeze/2, in particular w.r.t. tabling and other techniques to prevent loops. Consider the goal freeze(X, Y = 1): clearly, Y will be 1, even though it is not yet bound; it still awaits X being bound first. Now, an implementation might consider tabling for a goal g(Y). g(Y) will then have either no solution or exactly one solution, Y = 1. This result would be stored as the only solution for g/1, since the freeze-goal was not directly visible to the goal.
It is for such reasons that freeze/2 is considered the goto of constraint logic programming.
Another issue is dif/2, which today is considered a constraint. In contrast to freeze/2 and the other coroutining primitives, constraints are much better able to manage consistency and also maintain much better termination properties. This is primarily due to the fact that constraints introduce a well-defined language where concrete properties can be proven and specific algorithms have been developed, and do not permit general goals. However, even for them it is possible to obtain answers that are not solutions. More about answers and success in CLP.
freeze/2 and when/2 are like a "goto" of logic programming. They are not pure, commutative, etc.
dif/2, on the other hand, is completely pure and declarative, monotonic, commutative etc. dif/2 is a declarative predicate, it holds if its arguments are different. As to the "preferred Prolog programming style": State what holds. If you want to express the constraint that two general terms are different, dif/2 is exactly what states this.
Procedural considerations are typically most needed when you do not program in the pure declarative subset of Prolog, but use impure predicates that are not commutative etc. In modern Prolog systems, there is little reason to ever leave the pure and declarative subset.
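For example, dif/2 gives the same answers regardless of when its arguments become known:

```prolog
?- X = a, Y = a, dif(X, Y).
false.

?- dif(X, Y), X = a, Y = a.
false.

?- dif(X, Y), X = a, Y = b.
X = a,
Y = b.
```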
There is a paper by Evan Tick explaining CC:
Informally, a procedure invocation commits to a clause by matching the head arguments (passive unification) and satisfying the guard goals. When a goal can commit to more than one clause in a procedure, it commits to one of them nondeterministically (the other candidates are thrown away). Structures appearing in the head and guard of a clause cause suspension of execution if the corresponding argument of the goal is not sufficiently instantiated. A suspended invocation may be resumed later when the variable associated with the suspended invocation becomes sufficiently instantiated.
The Deevolution of Concurrent Logic Programming Languages
Evan Tick - 1995
https://core.ac.uk/download/pdf/82787481.pdf
So I guess that with some when/2 magic, committed-choice code can be rewritten into ordinary Prolog. The approach would be as follows. For a set of committed-choice rules of the same predicate:
H1 :- G1 | B1.
...
Hn :- Gn | Bn.
This can be rewritten into the following, where Hi' and Gi' need to implement passive unification, for example by using the ISO corrigendum subsumes_term/2.
H1' :- G1', !, B1.
...
Hn' :- Gn', !, Bn.
H :- term_variables(H, L), when_nonvar(L, H).
The above translation won't work for CHR, since CHR doesn't throw away candidates.

Theorem Proof Using Prolog

How can I write theorem proofs using Prolog?
I have tried to write it like this:
parallel(X, Y) :-
    perpendicular(X, Z),
    perpendicular(Y, Z),
    X \== Y,
    !.

perpendicular(X, Y) :-
    perpendicular(X, Z),
    parallel(Z, Y),
    !.
Can you help me?
I was reluctant to post an Answer because this Question is poorly framed. Thanks to theJollySin for adding clean formatting! Something omitted in the rewrite, indicative of what Aman had in mind, was "I inter in Loop" (sic).
We don't know what query was entered that resulted in this looping, so speculation is required. The two rules suggest that Goal involved either the parallel/2 or the perpendicular/2 predicate.
With practice it's not hard to understand what the Prolog engine will do when a query is posed, especially a single goal query. Prolog uses a pretty simple "follow your nose" strategy in attempting to satisfy a goal. Look for the rules for whichever predicate is invoked. Then see if any of those rules, starting with the first and going down in the list of them, can be applied.
There are three topics that beginning Prolog programmers will typically struggle with. One is the recursive nature of the search the Prolog engine makes. Here the only rule for parallel/2 has a right-hand side that invokes two subgoals for perpendicular/2, while the only rule for perpendicular/2 invokes both a subgoal for itself and another subgoal for parallel/2. One should expect that trying to satisfy either kind of query inevitably leads to a Hydra-like struggle with bifurcating heads.
The second topic we see in this example is the use of free variables. If we are to gain knowledge about perpendicularity or parallelism of two specific lines (geometry), then somehow the query or the rules need to provide "binding" of variables to "ground" terms. Again without the actual Goal being queried, it's hard to guess how Aman expected that to work. Perhaps there should have been "facts" supplied about specific lines that are perpendicular or parallel. Lines could be represented merely as atoms (perhaps lowercase letters), but Prolog variables are names that begin with an uppercase letter (as in the two given rules) or with an underscore (_) character.
Finally, the third topic that can be quite confusing is how Prolog handles negation. There's only a touch of that in these rules, the place where X \== Y is invoked. But even that brief subgoal requires careful understanding. Prolog implements "negation as failure", so that X \== Y succeeds if and only if X == Y does not succeed. This latter goal is also subtle, because it asks whether X and Y are the same without trying to do any unification. Thus if these are different variables, both free, then X == Y fails (and X \== Y succeeds). On the other hand, the only way for X == Y to succeed (and thus for X \== Y to fail) would be if both variables were bound to the same ground term. As discussed above, the two rules as stated don't provide a way for that to be the case, though something might have taken care of this in the query Goal.
The homework assignment for Aman is to learn about these Prolog topics:
recursion
free and bound variables
negation
Perhaps more concrete suggestions can then be made about Prolog doing geometry proofs!
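As a sketch of the "facts" idea (the line names l1, l2, l3 are invented here), ground facts give the rules something to derive; keeping the base facts under a separate name also avoids the left recursion of the original perpendicular/2 rule:

```prolog
% Hypothetical ground facts about specific lines:
perpendicular_fact(l1, l2).
perpendicular_fact(l3, l2).

% Two lines are parallel if they are perpendicular
% to a common third line.
parallel(X, Y) :-
    perpendicular_fact(X, Z),
    perpendicular_fact(Y, Z),
    X \== Y.
```

?- parallel(l1, l3). succeeds with Z = l2, and since all arguments end up bound to ground terms, the X \== Y test behaves as intended.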
Added: PTTP (Prolog Technology Theorem Prover) was written by M.E. Stickel in the late 1980's, and this 2006 web page describes it and links to a download.
It also summarizes succinctly why Prolog alone is not "a full general-purpose theorem-proving system." Pointers to later, more capable theorem provers can be followed there as well.

Resources