I suspect that λProlog needs a type system to make its higher-order unification sound; otherwise, Russell-style anomalies can appear through self-application.
Are there alternative higher-order Prologs that don't need .sig files?
Perhaps ones with a much simpler type system that doesn't need as many declarations but still supports some form of higher-order unification?
Can this dilemma be solved?
Is there a higher-order Prolog that wouldn't need a type system at all?
These are type-free:
HiLog
HiOrd
From the HiOrd paper:
The framework proposed gives rise to many questions the authors hope to address in future research. In particular, a rigorous treatment must be developed for comparison with other higher-order formal systems (Hilog, Lambda-Prolog). For example, it is reasonably straightforward to conservatively translate the Higher-order Horn fragment of λProlog into Hiord by erasing types, as the resolution rules are essentially the same (assuming a type-safe higher-order unification procedure).
Ciao (includes HiOrd)
As far as I can tell, with sound unification, SLD resolution should not create cyclic data structures (is this correct?).
If so, could one, in theory, implement Prolog in such a way that it wouldn't need garbage collection (GC)?
Is this true for WAM-based Prolog implementations?
Is this true for SWI-Prolog? (I believe it's not WAM-based) Is it safe to disable GC in SWI-Prolog when the occurs check is globally enabled?
Specifically:
:- set_prolog_flag(occurs_check, true).
:- set_prolog_flag(gc, false). /* is this safe? */
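For illustration, here is a minimal transcript (behavior as in SWI-Prolog) showing what the occurs check changes: a unification that would otherwise build a cyclic term simply fails.
?- set_prolog_flag(occurs_check, true).
true.

?- X = f(X).    % would create a cyclic term; fails under the occurs check
false.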
Creating cyclic terms is far from the only operation that can create garbage (in the garbage-collection sense) in Prolog. (It is also worth noting that not all Prolog systems provide comprehensive support for cyclic terms, but most of them do support some form of garbage collection.)
As an example, assume that you have in your code the following sequence of calls to go from a number to an atom:
...,
number_codes(Number, Codes),
atom_codes(Atom, Codes),
...
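A transcript sketch of that sequence, showing the intermediate list (code lists printed as plain integer lists, as many systems do):
?- number_codes(42, Codes), atom_codes(Atom, Codes).
Codes = [52, 50],    % the temporary list; garbage once Atom is built
Atom = '42'.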
The Codes list here is temporary and should eventually be garbage collected. Another example: assume that you're calling setof/3 to get an ordered list of results when you're only interested in the first two:
...,
setof(X, x(X), [X1, X2| _]),
...
You just created another temporary list. Or assume that you forgot about sub_atom/5 and decided to use atom_concat/3 to check the prefix of an atom:
...,
atom_concat(Prefix, _, Atom),
...
That second argument, the atom suffix that you don't care about (hence the anonymous variable), is a temporary atom that you just created. But not all Prolog systems provide an atom garbage collector, which can lead to trouble in long-running applications.
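For comparison, here is a sketch of the sub_atom/5 alternative alluded to above; it checks the prefix without constructing a throwaway suffix atom (the predicate name has_prefix/2 is made up for illustration):
% True if Prefix is a prefix of Atom. sub_atom/5 with the last
% argument bound only checks; it does not build a suffix atom.
has_prefix(Atom, Prefix) :-
    sub_atom(Atom, 0, _, _, Prefix).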
But even when you think that you have carefully written your code to avoid the creation of temporary terms, the Prolog system may still be creating garbage when running your code. Prolog systems use different memory areas for different purposes, and operations may need to make temporary copies of segments of memory between different memory areas, depending on the implementation. The Prolog system may be written in a language, e.g. Java, that may eventually take care of that garbage. But most likely it's written in C or C++ and some sort of garbage collection is used internally. Not to mention that the Prolog system may be grabbing a big block of memory to be able to prove a query and then reclaiming that memory after the query terminates.
Well, something has to free memory that is referenced from multiple places, where any of those references may be dropped at any step in the computation. This is the case regardless of whether structures are cyclic or not.
Consider variables A and B naming the same structure in memory (they "name the same term"). The structure is referenced from 2 places. Suppose the predicate in which B is defined succeeds or fails. The Prolog Processor cannot just deallocate that structure: it is still referenced from A. This means you need at least reference counting to make sure you don't free memory too early. That's a reference counting garbage collector.
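A minimal sketch of that situation (the predicate names are made up for illustration):
outer :-
    A = f(long, lived, structure),   % the structure is allocated once
    inner(A),                        % B in inner/1 names the same structure
    write(A), nl.                    % A must still be valid here

inner(B) :-
    write(B), nl.   % when inner/1 succeeds, the structure cannot be
                    % reclaimed: it is still reachable through A in outer/1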
I don't know what kind of garbage collection is implemented in any specific Prolog implementation (there are many approaches, some better suited to Prolog than others; in a not completely unrelated context, 25 years of Java have created all of these), but you need to use one, and not necessarily a reference-counting one.
(Cyclic structures are only special to garbage collection because reference counting garbage collection algorithms are unable to free up cyclic structures, as all the cells in a loop have a reference count of at least 1.)
(Also, IMHO, never trust a programming language in which you have to call free yourself. There is probably a variation of Greenspun's tenth rule ("Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.") to the effect that any program written in a programming language in which you have to call free yourself contains an ad hoc, informally-specified, bug-ridden, slow implementation of a garbage collection algorithm.)
(OTOH, Rust seems to take kind of a middle way, offloading some effort onto the developer but having the advantage of being able to decide whether to free memory when a variable goes out of scope. But Rust is not Prolog.)
This here would be safe:
:- set_prolog_flag(gc, false).
But if your Prolog system has garbage collection, switching it off might not be a good idea, since even with the occurs check set to true there can still be garbage from temporary results. And having garbage continuously removed can improve locality, i.e. your memory gets less thrashed by cache misses:
p(X,Y) :- q(X,Z), r(Z,Y).
The variable Z might point to some Prolog term which is only temporarily needed. Most modern Prolog systems can remove such Prolog terms through so-called environment trimming.
But an occurs check opens up the path to a special kind of garbage collection: since no cyclic terms can appear any more, it is possible to use reference counting. An old Prolog system that had reference counting was this beast here:
xpProlog: High Performance Extended Pure Prolog - Lüdemann, 1988
https://open.library.ubc.ca/media/download/pdf/831/1.0051961/1
Jekejeke Prolog also still does reference counting. A problem for reference counting is attributed variables, which can nevertheless create a cyclic term; for example, a freeze/2 as follows creates a cycle through the frozen goal back to the variable:
?- freeze(X, (write(X), nl)).
Edit 04.09.2021:
What also might demand garbage collection is setarg/3. It can create cycles which cannot be so easily removed by reference counting.
?- X = f(0), setarg(1,X,X).
X = f(X).
Since setarg/3 is backtrackable, the cycle would go away on backtracking, or at least I guess so. But the cycle might still be a problem when we are deep in a computation and running out of memory before any backtracking happens. Or the cycle might not go away on backtracking at all, because we used the non-backtrackable version nb_setarg/3.
I am struggling with the precise semantics of throw/1 without a suitable catch/3 in ISO Prolog. Reading the ISO Prolog specification, it seems to me that execution will end up in an infinite recursion.
(Note that you must have access to the ISO Prolog standard in order to interpret this question).
Step 1. Let's say we call throw(unknown), and there is no catch/3 on the stack whatsoever.
Step 2. It will end up in 7.8.10.1 c), which says that it shall be a system error (7.12.2 j).
Step 3. This is the same kind of formulation used in other places for other errors, so I assume it should be interpreted in the same way. Therefore 7.12.1 applies, and the current goal (throw/1) will be replaced by throw(error(system_error, Imp_def)).
Step 4. Executing this goal will find no active "catch" for it on the stack. It should therefore attempt the same steps, continue at Step 2, and so forth => an infinite recursion.
You may say that the transformation of an uncaught "throw" to a system_error is "final" and not subject to further handling as other errors are, and it probably has to be so, in order to avoid the problem I described, but my question is, where in the standard is this covered?
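For what it's worth, existing systems simply report the uncaught ball at the top level instead of looping. For example, in SWI-Prolog (the message wording is implementation specific):
?- throw(unknown).
ERROR: Unhandled exception: unknown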
Some other notes, for completeness:
Note 4 in 7.12.2 also mentions a possibility of a System Error under these circumstances. I think that the formulation used there ("....there is no active goal catch/3") introduces some confusion in yet another respect, because it should qualify that with the condition that the Catcher must unify with the error term (B).
What is the idea behind transforming uncaught throws into a system error? It looks like it may exist to make life "easier" for the top level of the Prolog processor, so that it only ever receives one predictable kind of error? To me, that brings more problems than benefits, since the true cause of the error would disappear. Any opinion or comment?
The formal semantics (Annex A) seems to struggle with this somewhat as well, although I have not studied it in detail. In A.2.5, it mentions that "... However in the formal specification there is a catcher at the root...", and relates it, for example, to the execution of findall/3. So does the formal spec differ here from the body text?
(We are talking here about ISO/IEC 13211-1:1995)
The definition of the control construct throw/1 (7.8.10) states in two places that in this situation there shall be a system error. First, as you have observed, there is 7.8.10.1 c:
c) It shall be a system error (7.12.2 j) if S is now
empty,
and then, there is the error clause:
7.8.10.3 Errors
a) B is a variable
— instantiation_error.
b) B does not unify with the C argument of any call of catch/3
— system_error.
To see what a system error is, we need to look at subclause 7.12.2 j:
j) There may be a System Error at any stage of execution. The
conditions in which there shall be a system error, and the
action taken by a processor after a system error are
implementation dependent. It has the form system_error.
So, the action taken by a processor after a system error is implementation dependent. An infinite loop would be fine, too.
In a standard conforming program you cannot rely on any particular action in that situation.
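With a suitable catch/3 in place, by contrast, the behavior is fully specified:
?- catch(throw(unknown), Ball, true).
Ball = unknown.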
Ad Note 1: A Note is not normative. See 1.1 Notes. It is essentially a summary.
Ad Note 2: This is to avoid any overspecification. Parts like these are kept as vague as possible in the standard, because a system error might have corrupted the Prolog system. It is quite similar with resource errors: ideally a system would catch them and continue execution, but many implementations have difficulty guaranteeing this in the general case.
Ad Note 3: The formal semantics is quintessentially another implementation. And in some parts, an implementation has to make certain decisions, whereas a specification can leave open all possibilities. Apart from that, note that the formal semantics is not normative. It did help to debug the standard, though.
Edit: In the comments you say:
(1mo) So you are saying that this allows the processor to immediately do its implementation dependent action when a system error occurs - not only after it is uncaught? (2do) Thus relieving it from the necessity to apply the goal transformation in 7.12.1 ? (3tio) It would then also be OK if system errors are not catchable (by catch/3) at all?
1mo
The sentence in 7.12.2 j
... the action taken by a processor after a system error are implementation dependent.
effectively overrules 7.12.1. Similarly to 7.12.2 h which may occur "at any stage of execution", too.
Just to be sure that we are reading the codex correctly, assume the opposite for a moment. Imagine that a system error occurs and 7.12.1 would now produce such an error that is not caught anywhere; then we would again have a system error, and so on. Thus the sentence above would never apply. This alone suggests that we have read something incorrectly here.
On the other hand, imagine a situation where a system error occurs when the system is completely corrupted. How should 7.12.1 be executed then at all? The Prolog system would be unable to execute this loop. Does this mean that a Prolog processor can only be conforming if we can prove that there will never be a system error? That is practically impossible, in particular because
7.12.2 Error classification
NOTES
...
4 A System Error may happen for example (a) in interactions with the
operating system (for example, a disc crash or interrupt), ...
So effectively this would mean that there cannot be any conforming Prolog processor.
2do
7.12.1 describes the way an error is handled within Prolog. That is, if the error can be handled within Prolog, then a Prolog system should use this method. However, there might be situations where it is very difficult or even impossible (see the case above) to handle an error in Prolog at all. In such situations a system might bail out.
3tio
Short answer: Yes. It is a quite extreme, but valid, reading that system errors are not catchable and that execution would then be terminated. But maybe first take a step back to understand (a) what a technical standard is for and what it is not, and (b) the scope of a standard.
Scope
Or rather, start with (b). In 1 Scope, we have:
NOTE - This part of ISO/IEC 13211 does not specify:
...
f) the user environment (top level loop, debugger, library
system, editor, compiler etc.) of a Prolog processor.
(Strictly speaking this is only a note, but if you leaf through the standard, you will realize that these aspects are indeed not specified.) I somewhat suspect that what you actually want is to understand what a top-level loop shall do with an uncaught error. However, this question is out of the scope of 13211-1. It probably makes a lot of sense to report such an error and continue execution.
Purpose
The other point here is what a technical standard is actually for. Technical standards are frequently misunderstood as a complete guarantee that a system will work "properly". However, if a system conforms to a technical standard, this does not imply that it is fit for any purpose or use. To take a very extreme example, consider the shell command exit 1, which might be considered a processor conforming to 13211-1 (provided it comes accompanied by documentation that defines all implementation-defined features). Why? Well, when the system starts up, it might realize that the minimal requirements (1 Scope, Note b) are not met and thus produce a system error, which is handled by producing error code 1.
What you actually want is rather a system that conforms and is fit for certain purposes (which are out of the scope of 13211-1). So in addition to asking yourself whether or not a certain behavior is conforming, you will also ask how fit the system is for your purposes.
A good example to this end are resource errors. Many Prolog systems are able to handle certain resource errors. Consider:
p(-X) :- p(X).
and the query catch(p(X),error(E,_),true). A system now has several options: loop infinitely (which requires a very, very smart GC), succeed with E = resource_error(Resource), succeed with E = system_error, stop execution with some error code, or eat up all resources.
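For instance, on a system that opts to signal the resource error, the query might answer as follows (a hypothetical transcript; the Resource term is implementation defined):
?- catch(p(X), error(E, _), true).
E = resource_error(memory).   % hypothetical; E = system_error, an infinite
                              % loop, or terminated execution would also conform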
While there is no statement that a system has to catch such an error, all the machinery to do so is provided in the standard.
Similarly in the case of a system_error: if it makes sense, it might be a good idea to report system errors properly within Prolog; however, if things go too far, bailing out safely is still not the worst that can happen. In fact, the worst (yet still conforming) behavior would be to continue execution "as if" everything were fine when it is not.
OK, I know that this is a very general question and that some papers have been written on the subject, but I have a feeling that these publications cover very basic material, and I'm looking for something more advanced that would improve style and efficiency. This is what I have on paper:
"Research Report AI-1989-08 Efficient Prolog: A Practical Guide" by Michael A. Covington, 1989
"Efficient Prolog Programming" by Timo Knuutila, 1992
"Coding guidelines for Prolog" by Covington, Bagnara, O'Keefe, Wielemaker, Price, 2011
Sample subjects covered in these: tail recursion and difference lists, proper use of indexing, proper use of cuts, avoiding asserts and retracts, avoiding CONSing, code formatting guidelines (indentation, if-then-elses etc.), naming conventions, code documenting, argument order, testing.
What would you add here from your own personal experience with Prolog? Are there any special style guidelines applicable only to CLP programming? Do you know of some common efficiency problems and know how to deal with them?
UPDATE:
Some interesting (but still too basic and too general for me) points are made here: Prolog programming guidelines of Lifeware Team
Just to highlight the whole problem, I would like to quote "Coding guidelines for Prolog" (Covington et al.):
As far as we know, a coherent and reasonably complete set of coding guidelines for Prolog has never been published. Moreover, when we look at the corpus of published Prolog programs, we do not see a de facto standard emerging. The most important reason behind this apparent omission is that the small Prolog community, due to the lack of a comprehensive language standard, is further fragmented into sub-communities centered around individual Prolog systems, none of which has a dominant position.
For designing clean interfaces in Prolog, I recommend reading the Prolog standard, see iso-prolog.
In particular, look at the specific format in which built-in predicates are codified, which includes a particular style of documentation but also the way errors are signaled. See 8.1 The format of built-in predicate definitions of ISO/IEC 13211-1:1995. You can find definitions in that style online in Cor.2 and the Prolog prologue.
A very good example of a library that follows the ISO error signaling conventions to the letter (and yet is not standardized) is the implementation of library(clpfd) in SICStus and SWI. While the two implementations are fundamentally different in their approach, they both use the error conventions to their best advantage.
Back to ISO. This is ISO's format for built-in predicates:
x.y.z Name/Arity
In the beginning, there may be a short optional informal remark.
x.y.z.1 Description
A declarative description is given, which very often starts with the most general goal, using descriptive variable names so that they can be referred to later on. Should the predicate's meaning not be declarative at all, either it is merely stated to be "true" or some otherwise unnecessary operationalizing word like "unifies" or "assembles" is used. Let me give an example:
8.5.4 copy_term/2
8.5.4.1 Description
copy_term(Term_1, Term_2) is true iff Term_2 unifies with a term T which is a renamed copy (7.1.6.2) of Term_1.
So this unifies is a big red warning sign: don't ever think this predicate is a relation; it can only be understood procedurally. And even more, it (implicitly) states that the definition is steadfast in the second argument.
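A short illustration of that procedural reading of copy_term/2 (transcript sketch, variable naming as in SWI-Prolog):
?- copy_term(f(X, X, Y), Copy).
Copy = f(_A, _A, _B).      % fresh variables, sharing preserved

?- copy_term(f(X), f(a)).  % steadfast: a bound second argument is simply
true.                      % unified with the fresh copy; X stays unbound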
Another example: sort/2. Is this now a relation or not?
8.4.3 sort/2
8.4.3.1 Description
sort(List, Sorted) is true iff Sorted unifies with the sorted list of List (7.1.6.5).
So, again, no relation. Surprised? Look at 8.4.3.4 Examples:
8.4.3.4 Examples
...
sort([X, 1], [1, 1]).
Succeeds, unifying X with 1.
sort([1, 1], [1, 1]).
Fails.
(The second example fails because sort/2 removes duplicates: the sorted list of [1, 1] is [1], which does not unify with [1, 1]. The first succeeds because the sorted list of [X, 1] is [X, 1] itself, since an unbound variable precedes all numbers in the standard order of terms, and [X, 1] unifies with [1, 1], binding X to 1.)
If necessary, a separate procedural description is added, starting with "Procedurally,". It again does not cover any errors at all. This is one of the big advantages of the standard's descriptions: errors are all separated from the "doing", which helps a programmer (= user of the built-in) catch errors more systematically. To be fair, it slightly increases the burden on an implementor who wants to optimize by hand on a case-by-case basis. Such optimized code is often prone to subtle errors anyway.
x.y.z.2 Template and modes
Here, a comprehensive one- or two-line specification of the arguments' modes and types is given. The notation is very similar to other notations that find their origin in the 1978 DECsystem-10 mode declarations.
8.5.2.2 Template and modes
arg(+integer, +compound_term, ?term)
There is, however, a big difference between ISO's approach and Covington et al.'s guideline, which is of an informal nature only and states how a programmer should use a predicate. ISO's approach describes how the built-in will behave, in particular which errors should be expected. (Four errors follow from the spec above, plus one extra error that cannot be seen from it; see below.)
x.y.z.3 Errors
All error conditions are given, each in its own subclause, numbered alphabetically. The codex is in 7.12 Errors:
When more than one error condition is satisfied, the error that is reported by the Prolog processor is implementation dependent.
That means that each error condition must state all preconditions under which it applies. All of them. The error conditions are not read like an if-then-elsif-then chain...
It also means that the codifier has to put in extra effort to find good error conditions. This is all to the advantage of the actual user-programmer, but certainly a bit of a pain for the codifier and implementor.
Many error conditions directly follow from the spec given in x.y.z.2 according to the NOTES in 8.1.3 Errors and according to 7.12.2 Error classification (summary). For the built-in predicate arg/3, errors a, b, c, d follow from the spec. Only error e does not follow.
8.5.2.3 Errors
a) N is a variable — instantiation_error.
b) Term is a variable — instantiation_error.
c) N is neither a variable nor an integer—type_error(integer, N).
d) Term is neither a variable nor a compound term— type_error(compound, Term).
e) N is an integer less than zero— domain_error(not_less_than_zero, N).
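As a concrete illustration, on a strictly conforming processor these conditions surface as follows (transcript sketch; some systems deviate, e.g. by failing silently on a negative N):
?- catch(arg(a, f(1), X), error(E, _), true).
E = type_error(integer, a).                 % error condition c

?- catch(arg(-1, f(1), X), error(E, _), true).
E = domain_error(not_less_than_zero, -1).   % error condition e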
x.y.z.4 Examples
(Optional).
x.y.z.5 Bootstrapped built-in predicates
(Optional).
Defines other predicates that are so similar that they can be "bootstrapped".
I can't find any info on this online... I am also new to Prolog...
It seems to me that Prolog could be highly concurrent, perhaps trying many possibilities at once when trying to match a rule. Are modern Prolog compilers/interpreters inherently* concurrent? Which ones? Is concurrency on by default? Do I need to enable it somehow?
* I am not interested in multi-threading, just inherent concurrency.
Are modern Prolog compilers/interpreters inherently* concurrent? Which ones? Is concurrency on by default?
No. Concurrent logic programming was the main aim of the 5th Generation Computer program in Japan in the 1980s; it was expected that Prolog variants would be "easily" parallelized on massively parallel hardware. The effort largely failed, because automatic concurrency just isn't easy. Today, Prolog compilers tend to offer threading libraries instead, where the program must control the amount of concurrency by hand.
To see why parallelizing Prolog is as hard as any other language, consider the two main control structures the language offers: conjunction (AND, serial execution) and disjunction (OR, choice with backtracking). Let's say you have an AND construct such as
p(X) :- q(X), r(X).
and you'd want to run q(X) and r(X) in parallel. Then, what happens if q partially instantiates X, say by binding it to f(Y)? r must have knowledge of this binding, so either you've got to communicate it, or you have to wait for both conjuncts to complete; but then you may have wasted time if one of them fails, unless you, again, have them communicate in order to synchronize. That gives overhead and is hard to get right. Now for OR:
p(X) :- q(X).
p(X) :- r(X).
There's a finite number of choices here (Prolog, of course, admits infinitely many choices), so you'd want to run both of them in parallel. But then, what if one succeeds? The other branch of the computation must be suspended and its state saved. How many of these states are you going to save at once? As many as there are processors seems reasonable, but then you have to take care that computations don't create states that don't fit in memory. That means you have to guess how large the state of a computation is, something that Prolog hides from you, since it abstracts over such implementation details as processors and memory; it's not C.
In other words, automatic parallelization is hard. The 5th Gen. Computer project got around some of the issues by designing committed-choice languages, i.e. Prolog dialects without backtracking. In doing so, they drastically changed the language. It must be noted that the concurrent language Erlang is an offshoot of Prolog, and it too has traded in backtracking for something that is closer to functional programming. It still requires user guidance to know what parts of a program can safely be run concurrently.
In theory that seems attractive, but there are various problems that make such an implementation seem unwise.
for better or worse, people are used to thinking of their programs as executing left-to-right and top-down, even when programming in Prolog. Both the order of clauses for a predicate and the order of goals within a clause are semantically meaningful in standard Prolog. Parallelizing them would change the behaviour of far too much existing code to become popular.
non-relational language elements such as the cut operator can only be meaningfully used when you can rely on such execution orders, i.e. they would become unusable in a parallel interpreter unless very complicated dependency tracking were invented.
all existing parallelization solutions incur at least some performance overhead for inter-thread communication.
Prolog is typically used for high-level, deeply recursive problems such as graph traversal, theorem proving etc. Parallelization on modern machines can (ideally) achieve a speedup of n for some constant n, but it cannot turn an unviable recursive solution method into a viable one, because that would require an exponential speedup. In contrast, the numerical problems that Fortran and C programmers usually solve typically have a high but quite finite cost of computation; it is well worth the effort of parallelization to turn a 10-hour job into a 1-hour job. By comparison, turning a program that can look about 6 moves ahead into one that can (on average) look 6.5 moves ahead just isn't as compelling.
There are two notions of concurrency in Prolog. One is tied to multithreading, the other to suspended goals. I am not sure which one you want to know about, so I will expand a little on multithreading first.
Widely available Prolog systems today can be differentiated by whether or not they are multithreaded. In a multithreaded Prolog system you can spawn multiple threads that run concurrently over the same knowledge base. This poses some problems for consult and for dynamic predicates, which these Prolog systems solve.
You can find a list of the Prolog systems that are multithreaded here:
Operating system and Web-related features
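As a taste of this explicit, library-based style, here is a sketch using SWI-Prolog's (non-ISO) thread API; other systems offer similar but differently named constructs:
% spawn a goal in its own thread, then wait for its completion
?- thread_create(writeln(hello), Id, []), thread_join(Id, Status).
hello
Id = <thread>(...),   % thread identifier display varies between versions
Status = true.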
Multithreading is a prerequisite for various parallelization paradigms. Correspondingly, the individual Prolog systems provide constructs that serve certain paradigms. Typical paradigms are thread pooling, for example used in web servers, or spawning a thread for long-running GUI tasks.
Currently there is no ISO standard for a thread library, although there has been a proposal, and each Prolog system typically has rich libraries that provide thread synchronization, thread communication, thread debugging and foreign-code threads. A certain amount of progress in garbage collection in Prolog systems was necessary to allow threaded applications with potentially infinitely long-running threads.
Some existing layers even allow high-level parallelization paradigms in a Prolog-system-independent fashion. For example, Logtalk has some constructs that map to various target Prolog systems.
Now let's turn to suspended goals. From older Prolog systems (since Prolog II, 1982, in fact) we know the freeze/2 command and blocking directives. These constructs force a goal not to be expanded by existing clauses but instead to be put on a sleeping list; the goal can then later be woken up. Since the execution of the goal is not immediate, but happens only when it is woken up, suspended goals are sometimes seen as concurrent goals, but the better notion for this form of parallelism would be coroutines.
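A small example of such a coroutine, using freeze/2 (available, for example, in SICStus and SWI-Prolog): the goal sleeps until X gets bound and only then executes.
?- freeze(X, (write(woken(X)), nl)), X = 42.
woken(42)
X = 42.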
Suspended goals are useful for implementing constraint solving systems. In the simplest case the sleeping list is some variable attribute. But a newer approach for constraint solving systems is constraint handling rules, where the wake-up conditions can be suspended-goal pair patterns. The availability of constraint solving, either via suspended goals or via constraint handling rules, can be seen here:
Overview of Prolog Systems
From a quick Google search it appears that the concurrent logic programming paradigm has only been the basis for a few research languages and is no longer actively developed. I have seen claims that concurrent logic programming is easy to do in the Mozart/Oz system.
There was great hope in the 80s/90s to bake parallelism into the language (thus making it "inherently" parallel), in particular in the context of the Fifth Generation Project. Even special hardware constructs were studied to implement the "Parallel Inference Machine" (PIM), similar to the special hardware for Lisp machines in the "functional programming" camp. The hardware efforts were abandoned due to the continual improvement of off-the-shelf CPUs, and the software efforts were abandoned due to excessive compiler complexity, lack of demand for hard-to-implement high-level features, and a likely lack of payoff: parallelism that looks transparent and elegantly exploitable at the language level generally means costly inter-process communication and transactional locking "under the hood".
A good read about this is
"The Deevolution of Concurrent Logic Programming Languages"
by Evan Tick, March 1994. Appeared in "Journal of Logic Programming, Tenth Anniversary Special Issue, 1995". The Postscript file linked to is complete, unlike the PDF you get at Elsevier.
The author says:
There are two main views of concurrent logic programming and its
development over the past several years [i.e. 1990-94]. Most logic programming
literature views concurrent logic programming languages as a
derivative or variant of logic programs, i.e., the main difference
being the extensive use of "don't care" nondeterminism rather than
"don't know" (backtracking) nondeterminism. Hence the name committed
choice or CC languages. A second view is that concurrent logic
programs are concurrent, reactive programs, not unlike other
"traditional" concurrent languages such as 'C' with explicit message
passing, in the sense that procedures are processes that communicate
over data streams to incrementally produce answers. A cynic might say
that the former view has more academic richness, whereas the latter
view has more practical public relations value.
This article is a survey of implementation techniques of concurrent
logic programming languages, and thus full disclosure of both of these
views is not particularly relevant. Instead, a quick overview of basic
language semantics, and how they relate to fundamental programming
paradigms in a variety of languages within the family, will suffice.
No attempt will be made to cover the many feasible programming
paradigms; nor semantical nuances, nor the family history. (...).
The main point I wish to make in this article is that concurrent logic
programming languages have been deevolving since their inception,
about ten years ago, because of the following tatonnement:
1. Systems designers and compiler writers could supply only certain limited features in robust, efficient implementations. This drove the market to accept these restricted languages as, in some informal sense, de facto standards.
2. Programmers became aware that certain, more expressive language features were not critically important to getting applications written, and did not demand their inclusion.
Thus my stance in this article will be a third view: how the initially
rich languages gradually lost their "teeth," and became weaker, but
more practically implementable, and achieved faster performance.
The deevolutionary history begins with Concurrent Prolog (deep guards,
atomic unification; read-only annotated variables for
synchronization), and after a series of reductions (for example: GHC
(input-matching synchronization), Parlog (safe), FCP (flat), Fleng (no
guards), Janus (restricted communication), Strand (assignment rather
than output unification)), and ends for now with PCN (flat guards,
non-atomic assignments, input-matching synchronization, and
explicitly-defined mutable variables). This and other terminology will
be defined as the article proceeds.
This view may displease some
readers because it presupposes that performance is the main driving
force of the language market; and furthermore that the main "added
value" of concurrent logic programs over logic programs is the ability
to naturally exploit parallelism to gain speed. Certainly the reactive
nature of the languages also adds value; e.g., in building complex
object-oriented applications. Thus one can argue that the deevolution
witnessed is a bad thing when reactive capabilities are being traded
for speed.
ECLiPSe-CLP, a language "largely backward-compatible with Prolog", supports OR-parallelism, even though "this functionality is currently not actively maintained because of other priorities".
[1,2] document OR- (and AND-)parallelism in ECLiPSe-CLP.
However, some time ago I tried to get it working using the code from ECLiPSe-CLP's repository, but I didn't succeed.
[1] http://eclipseclp.org/reports/book.ps.gz
[2] http://eclipseclp.org/doc/bips/kernel/compiler/parallel-1.html