Examples of Prolog meta-interpreter uses?

I'm reading several texts and online guides to understand the possibilities of prolog meta-interpreters.
The following seem like solid use cases:
proof explainers / tracers
changing proof search strategy, e.g., breadth-first vs. depth-first
domain specific languages
Question - what other compelling use-cases are there?

Quoting from "A Couple of Meta-interpreters in Prolog", which is part of the book "The Power of Prolog":
Further extensions
Other possible extensions are module systems, delayed goals, checking for various kinds of infinite loops, profiling, debugging, type systems, constraint solving etc. The overhead incurred by implementing these things using MIs can be compiled away using partial evaluation techniques. [...]
This considerably extends your proposed uses, e.g., by
changing the search of p(X) :- p(s(X)). to detect loops (including "obvious" ones like this one),
hinting at where most compute time is spent ("profiling"),
or by reducing a program to a simpler fragment that is easier to analyse—but still has the property of interest: unexpected non-termination (explained via failure-slice), unexpected failure, or unexpected success.
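To make this concrete, here is a minimal sketch of the vanilla meta-interpreter plus a depth-limited variant of the kind that catches the p(X) :- p(s(X)) loop mentioned above. The names mi/1 and mi_limit/2 are illustrative, and the interpreted predicates are assumed to be declared dynamic so that clause/2 may inspect them:

% Vanilla meta-interpreter: mi(Goal) mimics plain Prolog execution
% of pure programs (no built-ins, no control constructs beyond ',').
mi(true).
mi((A, B)) :-
    mi(A),
    mi(B).
mi(Head) :-
    clause(Head, Body),
    mi(Body).

% Depth-limited variant: fails, rather than loops, once the bound is
% exhausted, so ?- mi_limit(p(_), 10). terminates finitely even with
% the clause p(X) :- p(s(X)).
mi_limit(true, _).
mi_limit((A, B), N) :-
    mi_limit(A, N),
    mi_limit(B, N).
mi_limit(Head, N) :-
    N > 0,
    N1 is N - 1,
    clause(Head, Body),
    mi_limit(Body, N1).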

Related

Looking for a more compact syntax for Prolog

Prolog is a nice language. I use it from time to time.
But approaching it every subsequent time makes me feel less and less comfortable syntactically.
Modern programming languages are moving to allow the programmer to
repeat himself less, and to
omit unnecessary pieces if they can be deduced, or if their names are just placeholders.
The DCG is a step in the right direction allowing one to write
sentence --> noun_phrase, verb_phrase.
instead of
sentence(A,Z) :- noun_phrase(A,B), verb_phrase(B,Z).
but its entanglement with difference lists makes it less useful.
So what I am looking for are projects giving Prolog
a more compact syntactic representation, while preserving its semantic expressiveness.
Higher-order programming based on call/N is still pretty much unexplored terrain. Major implementations like SICStus Prolog added call/N as late as 2006, so there is still a lot to explore. Consider library(lambda), library(reif), and other definitions using the meta-predicate declaration.
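As a tiny example of this call/N style, using the lambda syntax of library(lambda) (a sketch; SWI-Prolog's library(yall) would write the same lambda as [X,Y]>>(Y is 2*X)):

:- use_module(library(lambda)).

% maplist/3 applies the lambda to each pair of elements via call/3.
?- maplist(\X^Y^(Y is 2*X), [1,2,3], Ys).
Ys = [2, 4, 6].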
One thing you might want to look into in the case of SWI-Prolog are the actual language extensions introduced specifically by SWI-Prolog 7:
http://www.swi-prolog.org/pldoc/man?section=extensions
Another thing is the quasi-quotation library, which allows you to insert pieces of code in your own language (defined using DCGs) inside "regular" Prolog code:
http://www.swi-prolog.org/pldoc/man?section=quasiquotations
The last thing I can recommend is the list of additional SWI-Prolog packages, some of which are specifically designed to extend the language, e.g. 'func', 'lambda', etc.:
http://www.swi-prolog.org/pack/list

Attributed variables: library interfaces / implementations / portability

When I was skimming some Prolog-related questions recently, I stumbled upon this answer by #mat to the question How to represent directed cyclic graph in Prolog with direct access to neighbour verticies.
So far, my personal experience with attributed variables in Prolog has been very limited. But the use-case given by #mat sparked my interest. So I tried using it for answering another question, ordering lists with constraint logic programming.
First, the good news: My first use of attributed variables worked out like I wanted it to.
Then, the not-so-good news: when I had posted my answer, I realized there were several APIs and implementations for attributed variables in Prolog.
I feel I'm over my head here... In particular I want to know the following:
Which APIs are in widespread use? Up to now, I have found two: SICStus and SWI.
Which features do the different attributed variable implementations offer? The same ones? Or does one subsume the other?
Are there differences in semantics?
What about the actual implementation? Are some more efficient than others?
Can using attributed variables be (or is it already) a portability issue?
Lots of question marks here... Please share your experience / stance.
Thank you in advance!
Edit 2015-04-22
Here's a code snippet of the answer mentioned above:
init_att_var(X, Z) :-
    put_attr(Z, value, X).

get_att_value(Var, Value) :-
    get_attr(Var, value, Value).
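For example, in SWI-Prolog the two definitions above already work as expected at the toplevel:

?- init_att_var(42, Z), get_att_value(Z, V).
% binds V to 42, while Z remains an unbound, attributed variable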
So far I "only" use put_attr/3 and get_attr/3, but---according to the SICStus Prolog documentation on attributed variables---SICStus offers put_attr/2 and get_attr/2.
So even this very shallow use-case requires some emulation layer (one way or the other).
I would like to focus on one important general point I noticed when working with different interfaces for attributed variables: when designing an interface for attributed variables, an implementor should also keep the following in mind:
Is it possible to take attributes into account when reasoning about simultaneous unifications, as in [X,Y] = [0,1]?
This is possible for example in SICStus Prolog, because such bindings are undone before verify_attributes/3 is called. In the interface provided by hProlog (attr_unify_hook/2, called after the unification and with all bindings already in place) it is hard to take into account the (previous) attributes of Y when reasoning about the unification of X in attr_unify_hook/2, because Y is no longer a variable at this point! This may be sufficient for solvers that can make decisions based on ground values alone, but it is a serious limitation for solvers that need additional data, typically stored in attributes, to see whether a unification should succeed, and which are then no longer easily available. One obvious example: Boolean unification with decision diagrams.
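To make the contrast concrete, here is a minimal sketch of both hook styles for a toy attribute dom holding a list of admissible values. The module name and checks are illustrative only, SWI-style attribute access predicates are used for brevity, and a real module would define one style or the other, not both:

:- module(dom, []).
:- use_module(library(lists)).

% SICStus-style hook (also in SWI's verify-attributes branch): called
% BEFORE Var is bound, so when Value is itself an attributed variable,
% its attributes are still accessible here.
verify_attributes(Var, Value, []) :-
    (   get_attr(Var, dom, Dom1)
    ->  (   var(Value),
            get_attr(Value, dom, Dom2)
        ->  intersection(Dom1, Dom2, Dom),
            Dom \== [],
            put_attr(Value, dom, Dom)
        ;   memberchk(Value, Dom1)
        )
    ;   true
    ).

% hProlog-style hook: called AFTER the unification, with all bindings
% already in place; a Value that was bound in the same unification has
% already lost its attributes by the time we get here.
attr_unify_hook(Dom, Value) :-
    memberchk(Value, Dom).   % only sound for ground Value, as discussed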
As of 2016, the verify-attributes branch of SWI-Prolog also supports verify_attributes/3, thanks to great implementation work by Douglas Miles. The branch is ready for testing and intended to be merged into master as soon as it works correctly and efficiently. For compatibility with hProlog, the branch also supports attr_unify_hook/2: It does so by rewriting such definitions to the more general verify_attributes/3 at compilation time.
Performance-wise, it is clear that there may be a downside to verify_attributes/3, because making several variables ground at the same time may let you see sooner (in attr_unify_hook/2) that a unification cannot succeed. However, I will gladly, and at any time, exchange this typically negligible advantage for the improved reliability, ease of use, and increased functionality that the more general interface gives you, and which is in any case already the standard behaviour in SICStus Prolog, which, on top of its generality, is also one of the faster Prolog systems around.
SICStus Prolog also features an important predicate called project_attributes/2: It is used by the toplevel to project constraints to query variables. SWI-Prolog also supports this in recent versions.
There is also one huge advantage of the SWI interface: The residual goals that attribute_goals//1 and hence copy_term/3 give you are always a list. This helps users to avoid defaultyness in their code, and encourages a more declarative interface, because a list of pure constraint goals cannot contain control structures.
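For example, for the toy dom attribute sketched above, the SWI convention looks like this, where in_dom/2 is an illustrative name for the solver's public constraint:

% Describe the attribute as a LIST of pure goals; copy_term/3 and the
% toplevel use this to report residual constraints.
attribute_goals(Var) -->
    { get_attr(Var, dom, Dom) },
    [in_dom(Var, Dom)].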
Interestingly, neither interface lets you interpret unifications other than syntactically. Personally, I think there are cases where you may want to interpret unifications differently than syntactically, however, there may also be good arguments against that.
The other interface predicates for attributed variables are mostly easy to interchange via simple wrapper predicates for different systems.
Jekejeke Minlog has state-less, or thin, attribute variables. Well, not exactly: an attribute variable can have zero, one or many hooks, which are allowed to be closures and hence can carry a little state.
But typically an implementation manages the state elsewhere. For this purpose Jekejeke Minlog allows creating reference types from variables, so that they can be used as indexes into tables.
The full potential is unleashed if this is combined with trailing and/or forward chaining. As an example we have implemented CLP(FD). There is also a little solver tutorial.
The primitive ingredients in our case are:
1) State-less Attribute Variables
2) Trailing and Variable Keys
3) Continuation Queue
The attribute variable hooks might have binding effects, up to extending the continuation queue, but they are only executed once. Goals from the continuation queue can be non-deterministic.
There are some additional layers before realizing applications; they are mostly aggregations of the primitives to make changes temporarily.
The main applications so far are open source:
a) Finite Domain Constraint Solver
b) Herbrand Constraints
c) Goal Suspension
Bye
An additional perspective on attributed variable libraries is how many attributes can be defined per module. In the case of SWI-Prolog/YAP, and citing the SWI documentation:
Each attribute is associated to a module, and the hook
(attr_unify_hook/2) is executed in this module.
This is a severe limitation for implementers of libraries such as CLP(FD), as it forces them to use additional modules for the sole purpose of having multiple attributes, instead of being able to define as many attributes as required in the module implementing their library. This limitation doesn't exist in the SICStus Prolog interface, which provides a directive attribute/1 that allows the declaration of an arbitrary number of attributes per module.
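With SICStus's library(atts), a single module can thus declare and use several attributes directly; a minimal sketch with illustrative names:

:- module(my_solver, []).
:- use_module(library(atts)).

% Several attributes in ONE module, accessed with get_atts/2 and
% put_atts/2 instead of per-module put_attr/3.
:- attribute lower/1, upper/1.

set_bounds(Var, L, U) :-
    put_atts(Var, [lower(L), upper(U)]).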
You can find one of the oldest and most elaborate implementations of attributed variables in ECLiPSe, where it forms part of the wider infrastructure for implementing constraint solvers.
The main characteristics of this design are:
attributes must be declared, and in return the compiler supports efficient access
a syntax for attributed variables, so that they can be read and written
a more complete set of handlers for attribute operations, so that attributes are not only taken into account for unification, but also for other generic operations such as term copying and subsumption tests
a clear separation between the concepts of variable attribute and suspended goals
used in over a dozen of ECLiPSe's libraries
This paper (section 4) and the ECLiPSe documentation have more details.
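For a taste of the declaration side of this design, an ECLiPSe attribute is introduced roughly like this (a sketch; the handler list also accepts handlers for other generic operations, such as term copying, and check_compatible/2 is a hypothetical solver-specific test):

% Declare an attribute together with a handler used on unification.
:- meta_attribute(my_attr, [unify:unify_my_attr/2]).

% Called when a variable carrying a my_attr attribute is unified.
unify_my_attr(_Term, Attr) :-
    var(Attr).                      % other variable had no my_attr
unify_my_attr(Term, my_attr(Data)) :-
    check_compatible(Term, Data).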

What is the modern popular standard Prolog that hobby Prolog interpreters should aim for?

I'm writing a Prolog interpreter as an exercise and wondering what I should be aiming for. Unfortunately there are many versions of Prolog to choose from, and they are documented to various degrees. I quickly found this question from someone who was apparently expecting far too much from the internet by wanting a detailed HTML specification of Prolog. The answer to that was that you can get the ISO standard for $30, but that's rather impractical. Users are never going to pay $30 just to read about Prolog when they can get a Prolog interpreter for even less money, so if you pay the money and conform to the standard, few people will ever recognize your effort. Therefore it doesn't surprise me at all that the ISO standard isn't universally respected.
Starting from the assumption that the ISO standard is a joke, what is the real version of Prolog that an interpreter should be aiming for? I don't mean that every little Prolog interpreter should fully implement every feature, but when constructing a Prolog interpreter there's no end to the little decisions that must be made. How should someone discover the consensus of the Prolog community about what Prolog should be?
If you are writing a Prolog system as an exercise, don't expect to get too much done. After all, it is quite an effort.
To start with, aim for the core of the ISO standard, that is, 13211-1:1995 including Cor.1:2007 and Cor.2:2012. That core is pretty well supported by many systems, like IF, SWI, YAP, B, GNU, SICStus, Jekejeke, and Minerva. So while this core covers just the very basics, it will still be a lot of work for you.
Then, you can consider what further direction you want to go. From a standard's viewpoint these are implementation specific extensions. Systems pretty much differ in the way they offer extensions, so there is no clear way to choose. The most popular systems are SICStus (commercial) and SWI (open source). An open source system with better conformance than SWI is GNU.
You are putting a lot of quite debatable implications into your question, so let me try to sort some out:
Price of standards. ISO standards do cost something, since these documents have a certain legal status, depending on your country and legislation; freely available web documents can serve as evidence only. See, for example, the C standard, which you can get at two prices: the official high price (USD 285) and a reduced one from INCITS (USD 30). The difference is only the cover sheet. At least you can get the Prolog standard for a significantly reduced price.
Relevance. There is just one standard, and systems conform to it quite closely. Where they differ, they differ rather randomly. As an example, look at this detailed comparison of syntax, which covers both reading and writing terms. Typically, such differences are reported by users who get hit by one or another of them. These differences are nowhere formally defined.
I don't agree with your assumption about ISO Prolog; indeed, I would suggest trying to implement a small subset of ISO Prolog (being 'sure' to properly implement findall/3 and setof/3).
A principal problem with the ISO standard is the module directive. Either choose an implementation whose module system you model, or skip modules altogether.
Even some 'undiscussed' built-ins will be difficult to implement, depending on the language you use (C, Haskell, Lisp, SQL, Javascript, C++...) and the choices you make about the degree of translation. Most implementations out there are not interpreters but bytecode compilers with various degrees of runtime support. The most common choice for the bytecode level is Warren's Abstract Machine (WAM, as you surely know).
When I wrote my Prolog interpreter, many years ago, I designed and implemented an object-oriented database model, using algorithm ABC instead of the WAM, and I designed and implemented the variable handling with ingenuity... but I left out setof/bagof, for instance...
I think SWI-Prolog pretty much drives the Prolog standard nowadays, so take a look at their documentation... Other than that, what you are asking has been debated over and over again during the past years in the Prolog standardization meetings. Some argue that tabling should be made part of the standard, others claim the same for automatic indexing, and so on. So, in my humble opinion, the best you can do is "mimic" what SWI does for most stuff, and you'll almost certainly be close to the standard.

How does a system like Wolfram Alpha or Mathematica solve equations?

I'm building a web-based programming language partially inspired by Prolog and Haskell (don't laugh).
It already has quite a bit of functionality, you can check out the prototype at http://www.lastcalc.com/. You can see the source here and read about the architecture here. Remember it's a prototype.
Currently LastCalc cannot simplify expressions or solve equations. Rather than hard-coding this in Java, I would like to enhance the fundamental language such that it can be extended to do these things using nothing but the language itself (as with Prolog). Unlike Prolog, LastCalc has a more powerful search algorithm: Prolog uses depth-first search with backtracking, while LastCalc currently uses a heuristic best-first search.
Before delving into this I want to understand more about how other systems solve this problem, particularly Mathematica / Wolfram Alpha.
I assume the idea, at least in the general case, is that you give the system a bunch of rules for the manipulation of equations (like a*(b+c) = a*b + a*c), specify the goal (e.g. isolate variable x), and then let it loose.
So, my questions are:
Is my assumption correct?
What is the search strategy for applying rules? E.g. depth-first, breadth-first, depth-first with iterative deepening, some kind of best-first?
If it is "best first", what heuristics are used to determine whether it is likely that a particular rule application has got us closer to our goal?
I'd also appreciate any other advice (except for "give up" - I regularly ignore that piece of advice and doing so has served me well ;).
I dealt with such questions myself some time ago. I then found this document about the simplification of expressions. It is titled Rule-based Simplification of Expressions and shows some details about simplification in MuPAD, which later became part of MATLAB.
According to this document, your assumption is correct: there is a set of rules for the manipulation of expressions, and a heuristic quality metric is used as a target function for simplification.
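The core of such a rule-based simplifier can be sketched in a few lines of Prolog; rule/2, step/2 and simplify/2 below are illustrative names, not MuPAD's actual machinery, and expressions are assumed to be ground terms with symbolic variables represented as atoms:

% A few rewrite rules, written as rule(LeftHandSide, RightHandSide).
rule(X + 0, X).
rule(0 + X, X).
rule(X * 1, X).
rule(X * 0, 0).
rule(A * (B + C), A*B + A*C).      % distributivity

% One rewriting step, applied at the root or inside any argument.
step(T0, T) :- rule(T0, T).
step(T0, T) :-
    T0 =.. [F|Args0],
    append(Pre, [A0|Post], Args0),
    step(A0, A),
    append(Pre, [A|Post], Args),
    T =.. [F|Args].

% Exhaustive simplification: apply steps until none applies.
simplify(T0, T) :-
    (   step(T0, T1)
    ->  simplify(T1, T)
    ;   T = T0
    ).

A best-first simplifier in the spirit of the document would instead generate all single-step rewrites of the current expression and expand the candidate with the best value of the quality metric (e.g. smallest term size) first.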
Wolfram Alpha is developed by Wolfram Research, the makers of Mathematica.
Mathematica is Stephen Wolfram's brainchild; Mathematica 1.0 was released in 1988. Mathematica is much like Maple, and both rely heavily on older software libraries like LAPACK.
The libraries that these programs are based on are often simply legacy software. They've been around, and been modified, for a very long time.
If you would like to know about the background programs running, SageMath is a free open-source alternative; you could possibly reverse-engineer the solutions to your questions:
SageMath.org

General approach to constraint solving w/optimization over large finite domains?

I have a constraint problem I've been working on, which has a couple "fun" properties:
The domain is massive; basic constraints bring it down to around 2^40 to 2^30, but it's hard to bring it down further without...
Optimization for the solution. There is no single constrained solution; I'm looking for the best fit in the domain based on some complex predicates.
In searching for a way to handle this problem, I've brushed up on my Erlang, Haskell, and Prolog, but these languages don't already have the advanced predicates I'm looking for. I know that some of my optimizations could bring down the search space, and humans can peruse the domain fairly quickly and make really good guesses about optimal answers. (The domain is parameterized on a dozen variables; it's really easy to pick outliers as probable candidates for being close to the best in the domain.)
What I'm looking for in this question isn't a magical algorithm to handle this search, but an answer to the question: Since Prolog and Haskell aren't the right tools for this, which language or library might be a better answer? I have written this up in Haskell, but on a trivial restricted search of 6 million items, it couldn't even reach ten thousand comparisons per second, and perhaps that is because Haskell is not a good fit for expressing these kinds of problems.
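For concreteness, this kind of constrained optimization is typically phrased in Prolog with library(clpfd) (e.g. in SWI-Prolog) and its branch-and-bound labeling; the model below is an illustrative placeholder, not the poster's actual problem:

:- use_module(library(clpfd)).

% Toy model: three variables over a finite domain, a couple of
% constraints, and labeling/2 maximizing an objective function.
best(Vs, Obj) :-
    Vs = [X, Y, Z],
    Vs ins 0..10000,
    X + Y #=< Z,
    Obj #= 3*X + 2*Y - Z,
    labeling([max(Obj)], Vs).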
If I remember correctly, Coq has nice support for computations with constraints. At least, if your domain can be described as a formal system, Coq will help you write it down as code and perform basic computations.

Resources